Oracle High Availability on Azure – What & Why


Many organisations come to the cloud from an approach of ‘not my computer’. This occurs for a number of reasons, driven by considerations such as:

  • OPEX (operational spend) over CAPEX (capital spend) – converting significant upfront expenditure into a more regular outlay. Some years ago this might have been approached through lease agreements once you got into the server space
  • Flexibility in sizing (although many forget that this flexibility does come at a premium)
  • Ability to host the kit – many organisations won’t have the appropriate physical infrastructure necessary to house servers to a standard that offers the desirable levels of security and assurance for always-on capabilities.

But cloud – by which I mean IaaS (Infrastructure as a Service) – does not really equate to someone housing my computer, or even to something as simple as virtualising my computer. This comes from several factors:

  • Really big cloud providers such as Amazon with AWS, Microsoft with Azure, Google and Dropbox are not using run-of-the-mill servers, but build their own servers so they can optimise the design to allow the best VM-to-server densities
  • Ability to make hardware very cost effective – for example, Google is well known for buying commodity storage and using data distribution techniques to provide performance and failure resilience

So how does this relate to Oracle and high availability? When you want to make the data tier of an Oracle solution both highly available and able to grow through scale-out, you end up using Real Application Clusters (RAC) at the database level. Simply providing VM resilience will not give sufficient availability for continuously-on conditions: you need the software tier to continuously pick up demand, with the availability of servers handled by the virtualisation tier, so that if you have a node failure you still have at least one node remaining whilst the virtualisation layer launches another instance.
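To make that node-level picture a little more concrete, here is a hedged sketch of how you might check instance and clusterware status on a RAC cluster. It assumes Oracle Grid Infrastructure is installed; the database name `prod` is purely illustrative and not from this post:

```shell
# Sketch only – assumes Oracle Grid Infrastructure and a RAC database
# named 'prod' (an illustrative name).

# Show which database instances are running on which cluster nodes
srvctl status database -d prod

# Show the full clusterware resource status across all nodes
crsctl stat res -t
```

If one node fails, the remaining instances keep serving sessions, which is exactly the behaviour a single resilient VM cannot give you.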

The problems start because RAC has some platform requirements (disk sharing, either virtual or physical) that can’t be offered by every cloud (IaaS) provider, but that can typically be established with on-premise hardware such as a SAN. Microsoft Azure has exactly this issue, meaning it presently can’t run RAC (see here). Amazon doesn’t have this issue (details here), and it is obviously not a problem for Oracle Cloud (see here).

The second consideration that tends to get overlooked is data-centre-level DR. It is very easy to forget that, regardless of how good the data centre’s precautions and redundancy are, there are some events that can bring a centre down. Even the most sophisticated monitoring and live VM movement can’t avoid data-centre-level problems. There are well-publicised illustrations of such issues; the best known are those Amazon has had (probably because they hit so many customers – Amazon’s own analysis of one event is here). So if you want something truly resilient and always-on, you need Data Guard replicating to another data centre if possible. You can of course use Data Guard within a data centre as well, to offset the possibility of not having RAC, but it does mean scaling is limited to what you can do vertically (i.e. more CPU cores, more memory, or disk). It will also place different demands on the design of your application tiers.
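To make the Data Guard option concrete, here is a minimal sketch of placing a primary and a cross-data-centre physical standby under Data Guard broker control. The database names and connect identifiers (`proddc1`, `proddc2`) and the credentials are assumptions for illustration only; it also assumes the standby has already been instantiated:

```shell
# Sketch only – assumes an existing primary (proddc1) in one data centre
# and a prepared physical standby (proddc2) in another; names are illustrative.
dgmgrl sys/YourPassword@proddc1 <<'EOF'
CREATE CONFIGURATION dr_config AS
  PRIMARY DATABASE IS proddc1 CONNECT IDENTIFIER IS proddc1;
ADD DATABASE proddc2 AS
  CONNECT IDENTIFIER IS proddc2 MAINTAINED AS PHYSICAL;
ENABLE CONFIGURATION;
SHOW CONFIGURATION;
EOF
```

Once enabled, the broker manages redo transport between the sites, which is what gives you the data-centre-level resilience discussed above.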

Turin Brakes

I was fortunate enough to catch Turin Brakes touring their latest album Lost Property and got a few photos of their performance – you can see the full set here.
Turin Brakes at The Brook

Review of Oracle API Management 12c


My review of Oracle API Management 12c Implementation has been published by the UKOUG at http://www.ukoug.org/what-we-offer/news/review-of-oracle-api-management-12c-implementation/ – rather than repeat the review here, I’d recommend people go and read the page. But I will say here that it is an excellent book. The book can be found at:

Along with a range of other book sellers.

Integration Cloud Service – In the Eyes of the A-Team


So the A-Team – not the TV show, which managed to have lots of things blow up and no one ever get hurt, but the technology gurus at Oracle – have started to write blog posts about Integration Cloud Service (ICS). This will be a reflection of the increasing uptake of the cloud service. A fellow Oracle Ace Associate (Robert van Mölken – blog here) and I are about to get a book on the subject underway.

As an aside, as part of making a case to the publishers for the potential value of a book on the subject, I picked up a number of market assessments which are pretty interesting:

  • “By 2016, at least 35% of all large and midsized organizations worldwide will be using one or more iPaaS offerings in some form” (Gartner RAS Core Research Note G00210747 – March 11)
  • In 2013, Gartner said iPaaS was going to really take off (G00258046)
  • Last summer, Gartner forecast that by 2018 iPaaS will be the second-largest PaaS offering (G00277176)
  • “By 2018, in most organizations, at least 50% of new integration flows will be implemented by citizen integrators” – January 2016
  • Various tweets from Gartner researchers have indicated that Oracle’s entry into this form of iPaaS is going to be a market disruptor.

All that before you look at what other analysts, such as Forrester and Ovum, are saying.

Oracle ITSO Mindmap Update


I have been chipping away at my mind map of the foundation reference architecture from Oracle (part of the IT Strategies from Oracle – ITSO – material), and have recently updated it. You can see it via WiseMapping here, or navigate an image of it below (very large now).

ITSO

TOGAF is document centric & has no place in the Agile world?

There is an almost constant drip of articles about why Enterprise Architecture, and TOGAF in particular, is or is not appropriate/valid – especially in an agile environment – because it typically results in lots of documentation, which often dates rather quickly when it comes to describing the landscape rather than being sustained.

So why am I adding to this mass of MBs and GBs of text on the subject? Well, when I do look at these articles (which can be frustrating at times) there seem to me to be several points that are often overlooked, which is what I want to address. These points are:

  • A document might not generate vast amounts of value (unless you’re Gartner or a government think tank), but the process or journey (and hopefully the act of engaging the right parts of an organisation) should shake out influencing points.
  • On the agile documentation perspective that people often argue as a reason not to document, we should remember the agile manifesto says ‘Individuals and interactions over processes and tools’ and ‘Working software over comprehensive documentation’.

Let me expand on my first point a bit more. When looking at a solution space it is easy to define the requirements (or be the SME stakeholder) that deliver capabilities for current and near-future operational needs and ways of working. These challenges will always gain precedence over the desired direction of travel because you’re working in shorter cycles. Okay, the answer to this is bound to be: well, that is what is needed and delivers value. But this always puts you in a position of delivering against the now. If you’re focused on the now in a competitive landscape, the first organisation to build for a likely future stands a good chance of winning ground. Worse, a sudden change of direction or focus can set you onto the back foot. This is where EA can help.

A well-engaged EA effort will bring the right people together because it seeks to draw out not just the end users but the influencers – the people defining the business capabilities and value propositions. Typically those stakeholders will differ from those involved directly, or one step removed, in the delivery engagement. As EA techniques and modelling challenge people to look at fundamentals, they should draw people away from short- and near-term focuses and towards the bigger game. We could reduce this to extracting business capabilities rather than defining function points. As a result you have a business roadmap on which an IT roadmap can be hung. With this you can focus on what delivers value and when; it may even validate the original position held by the delivery team. This validation could easily be considered a waste of time. Except it isn’t, because you have confirmation – and, more critically, those a bit more removed from the day to day will appreciate the value being delivered, as they have helped define and confirm it, and hopefully be more bought into the delivery goal. It may even present the opportunity to show how technology innovation can inform the business of opportunities.

Let me illustrate with an example. Many IT systems deal with users and customers, and as a result a level of data security can be defined (at its most simplistic: how would I want my details to be handled?). A development programme can be running smoothly and delivering, but the bigger picture – the full business capability needed – has not been recognised. That can be expected, as those involved in delivery are more likely to be thinking about how to make now and next week easier. But suppose the organisation then starts pitching and winning government contracts, perceiving the business to be essentially the same service. Those closer to the details of government contracts will know that they often set a higher bar regarding data residency. If you’ve been building against a single deployment-location model (perfectly fine for Joe Public) then the change can throw a seriously big spanner in the works if the contract doesn’t happen to be in the same place as your data centre. Yes, you can refactor the solution, but what would have been easier is if this direction had been determined at the outset; then you’d have designed ready to build the features handling residency questions when the work came in. Ideally the process of getting the engagement and working towards EA views should have drawn out the view of a capability being wider than just Joe Public.

The naysayers will probably argue back that you can’t know everything in advance. I agree to an extent, but life is not black and white; there are varying probabilities, and you can choose to work only the certainties, or to work and engage with the probabilities. If you work with a bigger picture and the probabilities, it will be easier to handle the now and potentially be ahead of the competition. Oh, and that is where Gartner and the government think tankers I mentioned make their money: understanding trends and likely needs.

As to the second point, as you have seen, I am emphasising the tool (EA and TOGAF) as a means to achieve the bigger-picture aspects of individuals and interactions. Bringing some of these people together to interact may not be the easiest task, as they will be the furthest from the day-to-day development. Remember that TOGAF does not need to be swallowed verbatim – in fact, like delivery methodologies such as OUM and RUP, you’re encouraged to tailor the framework. The benefit of something being codified is that it creates a context in which people will invest greater effort to follow the process. Consider this: stand-ups in an agile operation happen consistently and reliably because the timing and obligation have been ‘codified’ (not necessarily formally), in the same way that how stand-ups actually function has been.

So there are places for EA, but you’ve got to remember it is not the process that is the key, nor the documents you will produce, but what it is you’re trying to achieve with their use.

Thought Provoking Video


So, I’m not a great fan of things like the World Economic Forum (WEF) – unlike TED, it appears to be overtly political (and an opportunity for big business to lobby governments) rather than purely about presenting ideas and innovation. That said, last week at Oracle’s Digital Transformation conference I did see a couple of videos produced by the WEF that really got some messages across.

All Tech & No Business?


As an Enterprise Integration Architect it is very easy to get wrapped up in the technology and what it can do. But in an end-user business you really can’t maximise your use of technology if you don’t have a good handle on the business. I do have a solid handle on most aspects of our business, with one possible exception – the mechanics of accounting in any depth. I get invoices, credit notes, how advanced shipping notes and goods receipts impact the accounting, what SEPA is and so on, but chart of accounts, or when to use a general ledger or subledger? So I have pulled together a few resources to help build that insight, and thought I’d share them (and give myself a point to check back to if I need to revisit this in the future).

Impact of XaaS on the Technical Publishing Business


Over the last couple of months I have had a number of interesting conversations with a couple of publishers about publishing books. To be fully transparent, I am working with a fellow Oracle Ace Associate to get a book off the ground. But through these conversations an interesting challenge has become distilled in my mind.

Book publishers want to develop a title that will have a reasonable period of validity – let’s say a couple of years, as this is the kind of timeframe required for a book to make an acceptable level of profit in the technical market. Whilst supporting an IT industry where software is deployed and installed by customers, this timeframe is reasonable. A large part of any business’s customer base won’t keep upgrading unless there is a distinct need. To keep down the number of versions that need to be supported, the release cycle is kept relatively slow, other than issuing patches (i.e. fixing bugs, but not changing the essential product, what it does and how it does it). A slow release cycle means books don’t date too quickly.

But we’re quickly moving into the world of XaaS. When all your customers are running on your cloud, it becomes a lot easier to push out upgrades and keep everyone on as few as two product versions (new and previous), with only a few product versions needing to be supported as a result. That means a vendor can release updates far faster, with updates including new features. That acceleration increasingly becomes an arms race, where to compete you also need to release updates just as fast to match or differentiate from another vendor.

For example, MuleSoft releases updates to its cloud solution every quarter. Oracle will release in the PaaS space every 8–12 weeks, if not quicker.

This all adds up to the possibility that a print book can date (or at least be perceived to date) more quickly. So how does a publisher, who needs a longer cycle to make an acceptable return, cope with this?

One answer is to sell books covering just the areas where you know you’re going to see significant sales, and can therefore accept a shorter book lifecycle – this holds true for things like AWS and CloudShift. But for middleware platforms like Boomi, MuleSoft and Oracle ICS, which will have a smaller readership, there is a real challenge.

O’Reilly offers free updates (details here) for the edition of the ebook you have (note: no mention of the print edition). There is the further challenge of how the relationship with the author works, and the ongoing cost of proofing the author’s work. Maybe the answer is that rather than selling whole books, purchasing moves to a chapter model. So if a book needs to be extended to reflect new capabilities, as we will see in the XaaS world, readers buy the new chapters.

Monster has some fun

A while after we got the moniker of mp3monster, we took some daft photos of the ‘monster’ with music and business cards etc. Well, that was about 10 years ago. So I thought for a little light-hearted fun I’d do it again …