Exploring Helidon – Part 1



So I recently blogged (here) about the announcement of Helidon – the open source project from Oracle to provide a microservices app server that includes optional support for MicroProfile. This is the first of what will probably become a series of blogs about Helidon, particularly in its SE form (i.e. without MicroProfile), as MicroProfile and the wider Java EE model in general are already more widely documented.

Hello World

Helidon comes with a quick-start example app implemented in both SE and MP forms. It is worth following the very simple instructions provided on the Helidon site to instantiate both versions of the Hello World app, as it provides a good way to start to understand the differences in the ways Helidon can be used.

The thing that really jumps out when you compare the code (for me at least) is that the SE code, being driven by values loaded from configuration, is more dynamic. The configuration can be sourced in a number of different ways, from YAML files to etcd. So for our first experiment we took the Hello World app and made the path /greet dynamic by loading the path from some additional configuration, enhancing the main class with:
private static Routing createMultiRouting() {
    Map<String, String> greetingConfig = Config.create().get("greeting").asMap();
    Routing.Builder routing = Routing.builder();
    if (!greetingConfig.isEmpty()) {
        greetingConfig.forEach((k, v) -> {
            System.out.println("Read config value>" + k + "=" + v + "<");
            // as the key is the fully qualified name, I just want the last piece,
            // so let's strip it to be a substring
            String key = k.substring(k.lastIndexOf(".") + 1);
            // each different URI should have its own instance of the GreetService
            // with its tailored key, which means it responds with the key e.g. France
            routing.register("/" + v, new GreetService(key));
        });
    } else {
        System.out.println("No config\n");
    }
    return routing.build();
}
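For reference, a configuration fragment that could drive createMultiRouting() might look like the following. The keys and paths here are invented purely for illustration; in the real app the equivalent values could just as easily come from etcd or system properties.

```yaml
# hypothetical application.yaml: each entry under greeting registers a
# GreetService at /<value>, which then greets using the key (e.g. France)
greeting:
  France: france
  England: england
```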
We can create URLs for greetings in different languages, and see the different instances of the service object responding to the web calls. Whilst many may associate this approach with Node.js, for me it felt more like a web server multiplexer (mux) such as Gorilla, used with Go (you can see what I mean here).

Helidon on the Road

Two of the project’s leading figures, Tomas Langer and Dmitry Kornilov, will be presenting at a number of meet-ups in Europe, including the London meet-up (go here).

Part 1?

Yes, I will be blogging more about Helidon as soon as I can, but I am presently wrapping up a white paper and will be running an API design training session soon.

Packt Christmas $5 Promotion



Head over to Packt.com to take advantage

Yes, it’s that time of year and Packt have launched their Christmas promotion, where the books and videos are all $5 (£4.76), including the two I’ve co-written and others I’ve tech reviewed.

Based on past trends, this is the best time to get any ebooks you want from Packt; there are other promotions during the year, but none as good as this!

Experian data breach report and analysis



A good friend of mine (Howard Durdle) is a security expert and CSO; he pointed out this really good Twitter thread breaking down the newly published report on the massive Experian data breach.


You don’t need to be a geek or a security expert to understand what is being said here and, more importantly, to read between the lines, as they say, to see the likely root causes. For me, this all points to cultural challenges, where organisational pressures, or a lack of appreciation by mid-level decision makers, mean they struggle to see the need to invest in non-functional factors such as security, patching and maintenance.

Sadly, Experian aren’t the first to face this challenge, and won’t be the last. With DevSecOps etc., the people building the software will understand the issue. But I think we need to work on educating business stakeholders about the need to deal with NFRs (non-functional requirements), and the need to prioritise certain types of issues.

UKOUG Conference 2018



With the start of December comes the UK Oracle User Group conference, or to be more precise the Independent UKOUG.  This year the conference is back in Blackpool, a slightly smaller venue than the ICC in Birmingham, but in many respects that made the event feel more vibrant and busy.

The user group also announced some of the changes it is making going forwards, reflecting the changing needs of its members – SIGs being largely superseded by multi-stream single-day events (Summits), with the Call for Papers for the first of these here.  A wider list of Oracle-related Calls for Papers is available here.

Of course being a UKOUG Volunteer, I have been presenting and co-presenting.  The slides from my presentation sessions can be found at:

This was an abridged and updated version of my presentation here 

My second presentation was a review of Oracle Integration Cloud, in which I presented some customer use cases of OIC as part of a wider presentation on OIC by Sid Joshi

This was followed on the second day with two API based sessions, the first being a deep dive into custom API Policies on the Oracle API Platform.

The final session was another short one, looking at Apiary, which was primarily a demo of what the solution can do.

On top of trying to keep up with my usual workload – a very hectic couple of days.

London Oracle Developer Meet-up – November 18



Another Oracle Developer meet-up took place in London yesterday. This meet-up focused on Terraform and microservices. A summary of the evening, with the slides:

Chris Hollies’ slides can be found here.  As demos aren’t included in the deck, the following videos are alternatives:

Our second session, which I presented, looked at how we can establish paths of transition that make it easy to adopt microservices. The presentation material for this is available here:

My next Packt Project has been announced!



My next Packt project (via O’Reilly) is not a book, but a short online training course about good API design, API 1st and some of the tools that can support an API 1st methodology. Register for the session here.

It includes a closer look at cloud tools such as Oracle’s excellent Apiary (sorry if this sounds like a sales pitch, but it is a good tool, and the words of the founder of Restlet confirm this) along with SwaggerHub and a few other options.

A good API goes beyond just the payload definition and I’ll walk through the other considerations and explain why these other areas are important.

Microservices Patterns Book



Earlier this year, I wrote a short post on Chris Richardson’s book Microservice Patterns (Praise for Microservice Patterns). When I read the book I mind-mapped my notes, which can be seen at the Mindmap Index or accessed directly here.  The mind map is no substitute for the book, but it should act as a reasonable aide-memoire.

I would highly recommend getting and reading the book.

UK Oracle User Group Annual Conference



As we rapidly approach the end of the year, we’re still pretty busy. The UK Oracle User Group annual conference covers Oracle technology (Tech18), ranging from on-premises database to polyglot development in Oracle and other clouds, passing through hybrid integration, SOA and so on. Alongside this is Apps18, which covers Oracle applications from E-Business Suite and Siebel to Fusion cloud solutions such as HCM, Financials, Taleo and so on. Finally, the third part covers all things JD Edwards.


Being on the committee for the conference means that I have been heavily involved in developing the conference agenda and choosing sessions.  So I can say with great confidence that there is a very diverse range of sessions from highly respected SMEs and presenters along with new blood presenting on subjects from Oracle JET to Robotic Process Automation (RPA) for example.

I hope we’ll see you there.

Value of Technical Capability Models



Technical capability models are not something I have seen used a lot, which is a little unfortunate, as they can provide tremendous insight into an organization’s IT needs.

Typically you want to use the technical capability model in conjunction with a business capability model, and this is where things can get tricky, as developing the business views can take time.  I came across this short video, which focuses on the business aspect but helps explain the ideas behind the models:

Note how the model is largely groups of capabilities that happen in the business. Underlying this kind of diagram you would have a brief explanation of each capability.  If you want to go all out on EA modelling then you can link the capabilities to the documented associated processes etc.

Independently, the ideal is to then identify the technical capabilities that are likely to be needed. This will produce a similar-looking model. The technical capabilities are probably best drawn from industry best practices and specific business needs, and the model should be completely product agnostic. The real value then comes from mapping which business capabilities use which technical capabilities.

This will now help inform a number of decisions and identify areas of focus.  The capabilities should have mappings, or fall into one of the following states, with the associated possible reasons:

  • Maps To Business Capabilities
    • This is healthy
  • Technology is Being Used but no Business Mapping
    • Gap in the business capability model?
    • Nuance of the business model not understood by IT?
    • Redundant processes being performed?
  • Business Process with no Technology
    • Opportunity for business improvement?
    • Genuinely no value in applying technology, e.g. the business value is in something being handmade?
    • Capability delivered by Shadow IT?
  • Doesn’t Map to Any Business Capabilities
    • Capability isn’t needed and can therefore be jettisoned, OR
    • Potential capability that the business is unaware of, or hasn’t understood what could be offered
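As a sketch of how the cross-checks behind these states could be mechanised, the following is a minimal, entirely hypothetical example (all capability names are invented). It flags technical capabilities with no business mapping, and business capabilities with no supporting technology:

```java
import java.util.*;

public class CapabilityGapCheck {

    // technical capabilities that are in use but have no business mapping
    static Set<String> unmappedTechnical(Map<String, Set<String>> techToBusiness) {
        Set<String> result = new TreeSet<>();
        for (Map.Entry<String, Set<String>> entry : techToBusiness.entrySet()) {
            if (entry.getValue().isEmpty()) {
                result.add(entry.getKey());
            }
        }
        return result;
    }

    // business capabilities with no supporting technical capability
    static Set<String> unsupportedBusiness(Set<String> business,
                                           Map<String, Set<String>> techToBusiness) {
        Set<String> supported = new HashSet<>();
        techToBusiness.values().forEach(supported::addAll);
        Set<String> result = new TreeSet<>(business);
        result.removeAll(supported);
        return result;
    }

    public static void main(String[] args) {
        // invented example data: which technical capabilities support which business ones
        Map<String, Set<String>> techToBusiness = new HashMap<>();
        techToBusiness.put("API Management", Set.of("Order Capture"));
        techToBusiness.put("Legacy FTP Transfer", Set.of());
        Set<String> business = Set.of("Order Capture", "Hand Finishing");

        System.out.println("No business mapping: " + unmappedTechnical(techToBusiness));
        System.out.println("No technology: " + unsupportedBusiness(business, techToBusiness));
    }
}
```

In practice the mapping data would come from an EA repository rather than being hard-coded, but the two checks are the same.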

With the exception of the first condition, the other scenarios should be examined more closely and the models adjusted accordingly.

With the capability models linked and the mismatches addressed, the technical capability model can really deliver value by linking the capabilities to the actual technologies being used.  Very quickly it is possible to see details such as:

  • Technology weaknesses (i.e. a key business area is not well supported by IT, e.g. the products mapped are end of life, or don’t have the right level of support). Whilst some of these will be ‘no brainers’, it is more than likely a few surprises will show up
  • Technology duplication – sometimes we’ll see multiple products in one area; can the product list be rationalized to maximize license investment? Would it be more cost effective to invest in one high-end product and eliminate lots of smaller niche pieces?
  • Where IT investment will likely improve key capabilities vs investment on niche capabilities
  • How technology change can impact the business; for example, replacing a Content Management System may impact an organization’s online presence, but it may also impact how we deliver support services to customers.
  • If the business prioritize a specific area, how does that map onto IT systems and processes?

Whilst a lot of this will seem pretty obvious, it will uncover unexpected details and most importantly provide a relatively simple set of visualizations as cross references that help understand the business and explain the impact of IT related decisions to the business in their terms.

The following deck provides a presentation on the value of Technical Capability Models:

Defining Boundaries for Logical Gateways on the API Platform in a multi-cloud / multi-region context



The Oracle API Platform takes a different licensing model to many platforms; rather than being based on CPUs, it works by the use of logical gateways and blocks of 25 million successful API calls per month. This means you can have as many actual gateway nodes as you like within a logical group to ensure resilience; essentially, how widely you deploy the gateways is more of a maintenance consideration (i.e. more nodes means more gateways to take through a maintenance process, from the OS through to the gateway itself).

In our book (here) we described the use of logical gateways (groups of gateway nodes operating together) based on the classic development model, which provides a solid foundation and can leverage the gateway based routing policy very effectively.

logical partitions

But things get a little trickier if you move into the cloud and elect to distribute the back-end services geographically, rather than having a single global instance for the back-end implementation and leveraging technologies such as Content Delivery Networks to cache data at the cloud edge, with their rapid routing capabilities offsetting performance factors.


Classic Global split of geographies

Some of the typical reasons for geographically distributing solutions are …

  • Low hit rate on data, meaning caching solutions like CDNs are unlikely to yield the performance benefits wanted, and considerable additional work is needed to ‘warm’ the cache,
  • Different regions require different back-end implementations – ordering of products in one part of the world may be fulfilled using a partner, whilst in another it is satisfied directly,
  • Data is subject to residency/sovereignty rules – consider China for example, but Germany and India also have special considerations.

So our Global splits start to look like:


Global Split now adding extra divisions for India, China, Russia etc

The challenge is that, on the Internet side of things, regional routing may be resolved through geo-routing facilities such as those provided by AWS Route 53 and Oracle’s Dyn DNS, finding the nearest local gateway. However, geo DNS may not be achievable internally (certainly not for AWS), so routing to the nearest local back end needs to be handled by the gateway. Gateway-based routing can solve the problem using logical gateways – so if we group gateways logically by region, that works. But this then conflicts with the use of gateway-based routing for the separation of development, test, etc.

Routing Options

So, what are the options? Here are a few …

  • Make your logical divisions both by environment and by region – this is fine if you’re processing very high volumes, i.e. hundreds of millions of calls or more, so the cost of the additional logical gateways is relatively small in the total budget.


Taking the geo split and applying the traditional layers as well has increased the number of Logical gateways

This problem can be further exacerbated if you consider that many larger organisations are likely to end up with different cloud vendors in the same part of the world, for example AWS and Azure, or Oracle and Google. So continuing the segmentation can become an expensive challenge, as the following view helps show:


It is possible to contract things slightly by only having development and test cloud services wherever your core development centre is based. Note that in the previous and next diagrams we’ve removed the region/country-specific gateway drivers.


  • Don’t segment based on environment, but only on region – but then how do you control changes in the API configuration so they don’t propagate immediately into production?
  • Keep the existing model but clone APIs for each region – certainly the tooling we’ve shared (Managing API Policy Versioning in Oracle API Platform) makes this possible, but it’s pretty inelegant and error prone, as it would be easy to forget to clone a change, and the cloning logic needs to be extended to take into account the parts that must be region specific.
  • Assuming you have a DNS address for the target, you could effectively rewrite the resolution of the address by changing its meaning in each gateway node’s hosts file. Inelegant, but effective if you have automated deployment and configuration of your gateway servers.
  • Header-based routing, with the region and environment as header attributes. This requires either the client to set the values (not good, as you’re revealing traits of the implementation to your API consumer), or applying custom policies before the header-based routing that insert those attributes based on the gateway’s location etc.
  • Build a new type of gateway-based routing which allows both the environment (dev, test etc.) and location (region) to inform the routing.
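For the hosts-file option above, the change on each gateway node is as small as a single line. The hostname and address here are invented purely for illustration:

```
# /etc/hosts fragment on a (hypothetical) gateway node in the EU region:
# the shared back-end DNS name is forced to resolve to the EU back end
10.0.12.34    backend.api.example.com
```

The same file on a node in another region would carry that region’s address, which is why this only really works with automated configuration management.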

Or, and the point of this blog, use gateway-based routing and leverage some intelligent DNS naming and the way the API Platform works, with a little bit of Groovy or a custom Java policy.
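To illustrate the kind of naming scheme that could make this work (everything here is invented for the sketch, not the platform’s actual API), a gateway-side helper could derive the back-end hostname from properties describing the node’s environment and region:

```java
public class RegionalEndpoint {

    // build a back-end hostname such as greet-service.test.eu.example.com from
    // the logical service name plus the gateway node's environment and region;
    // the example.com domain and naming convention are assumptions for this sketch
    static String backendHost(String service, String environment, String region) {
        return service + "." + environment + "." + region + ".example.com";
    }

    public static void main(String[] args) {
        // environment and region would normally come from the gateway node's configuration
        System.out.println(backendHost("greet-service", "test", "eu"));
    }
}
```

With DNS records laid out to match that convention, each gateway resolves to its own environment’s and region’s back end without any per-node file edits.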
