So I recently blogged (here) about the announcement of Helidon – the open source project from Oracle to provide a microservice framework that includes optional support for the Eclipse MicroProfile APIs.
This is the first of what will probably become a series of posts about Helidon, particularly in its SE form (i.e. without the MicroProfile APIs), as MicroProfile and the wider Java EE model in general are already more widely documented.
Helidon comes with a quick start example app implemented in both SE and MP forms. It is worth following the very simple instructions on the Helidon site to build both versions of the Hello World app, as doing so is a good way to start understanding the differences in the ways Helidon can be used.
The thing that really jumps out when you compare the code (for me at least) is that the SE version, being driven from values loaded from configuration, is more dynamic. The configuration can be sourced in a number of different ways, from YAML files to etcd. So for our first experiment we took the Hello World app and made the path /greet dynamic by loading the path from some additional configuration, enhancing the main method with:
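The exact snippet from the original experiment isn't reproduced here, but the core idea – resolving the route path from configuration rather than hard-coding it – can be sketched in plain Java. In the real app Helidon's Config API would supply the value (from application.yaml, etcd, etc.) and hand it to the routing builder; the key name app.greeting-path below is an illustrative assumption, not a Helidon convention.

```java
import java.io.StringReader;
import java.util.Properties;

// Sketch: load the service path from configuration instead of a literal.
// With Helidon SE the resolved value would be passed to the web server's
// routing, e.g. Routing.builder().register(path, greetService).
public class DynamicPathDemo {

    // Resolve the route path from configuration, falling back to the default /greet.
    static String greetingPath(Properties config) {
        return config.getProperty("app.greeting-path", "/greet");
    }

    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        // Stand-in for a richer configuration source such as a YAML file or etcd.
        config.load(new StringReader("app.greeting-path=/bonjour\n"));

        System.out.println(greetingPath(new Properties())); // default: /greet
        System.out.println(greetingPath(config));           // configured: /bonjour
    }
}
```

Because the path is just data, adding another configured value gives you another endpoint without touching the code.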
We can create URLs for greetings in different languages, and see the different instances of the Service object responding to the web calls. Whilst many may associate this approach with Node.js, for me it felt more like a web server multiplexer (mux) such as Gorilla, as used with Go (you can see what I mean here).
Helidon and On the Road
Yes, I will be blogging more about Helidon as soon as I can, but I am presently wrapping up a white paper and will be running an API design training session soon.
Yes, it’s that time of year, and Packt have launched their Christmas promotion where all books and videos are $5 (£4.76), including the two I’ve co-written and others I’ve tech reviewed.
Based on past trends, this is the best time to get any eBooks you want from Packt; there are other promotions during the year, but none as good as this!
You don’t need to be a geek or a security expert to understand what is being said here and, more importantly, reading between the lines as they say, the likely root causes. For me, this all points to cultural challenges, where organisational pressures mean mid-level decision makers struggle to appreciate the need to invest in non-functional factors such as security, patching and maintenance.
Sadly, Experian aren’t the first with this challenge, and won’t be the last. With DevSecOps and the like, the people building the software will understand the issue. But I think we also need to work on educating business stakeholders about the need to deal with NFRs, and to prioritise certain types of issues.
With the start of December comes the UK Oracle User Group conference, or to be more precise the Independent UKOUG. This year the conference is back in Blackpool, a slightly smaller venue than the ICC in Birmingham, but in many respects that made the event feel more vibrant and busy.
The user group also announced some of the changes it is making going forward, reflecting the changing needs of its members – SIGs being largely superseded by multi-stream single-day events (Summits), with the Call for Papers for the first of these here. A wider list of Oracle-related Calls for Papers is available here.
Of course being a UKOUG Volunteer, I have been presenting and co-presenting. The slides from my presentation sessions can be found at:
This was an abridged and updated version of my presentation here
My second presentation was a review of Oracle Integration Cloud (OIC), in which I presented some customer use cases as part of a wider presentation on OIC by Sid Joshi.
This was followed on the second day by two API-based sessions, the first being a deep dive into custom API policies on the Oracle API Platform.
The final session was another short one looking at Apiary, which was primarily a demo of what the solution can do.
On top of trying to keep up with my usual workload – a very hectic couple of days.
Chris Hollies’ slides can be found here. As demos aren’t included in the deck, the following videos are alternatives:
Our second session, which I presented, covered how we can establish transition paths that make it easy to adopt microservices. The presentation material for this is available here:
My next Packt project (via O’Reilly) is not a book, but a short online training course about good API design, API-first and some tools that can support an API-first methodology. Register for the session here.
It includes a closer look at cloud tools such as Oracle’s excellent Apiary (sorry if this sounds like a sales pitch, but it is a good tool, and the words of the founder of Restlet confirm this) along with SwaggerHub and a few other options.
A good API goes beyond just the payload definition and I’ll walk through the other considerations and explain why these other areas are important.
Earlier this year, I wrote a short post on Chris Richardson’s book Microservice Patterns (Praise for Microservice Patterns). When I read the book I mind-mapped my notes, which can be seen at the Mindmap Index or accessed directly here. The mind map is no substitute for the book, but should act as a reasonable aide-memoire.
We would highly recommend getting and reading the book.
As we rapidly approach the end of the year, we’re still pretty busy. The UK Oracle User Group annual conference is nearly upon us, with Tech18 covering Oracle technology ranging from on-premises databases to polyglot development in Oracle and other clouds, passing through hybrid integration, SOA and so on. Alongside this is Apps18, which covers Oracle applications from E-Business Suite and Siebel to Fusion cloud solutions such as HCM, Financials, Taleo and so on. Then finally the third part covers all things JD Edwards.
Being on the committee for the conference means that I have been heavily involved in developing the conference agenda and choosing sessions. So I can say with great confidence that there is a very diverse range of sessions from highly respected SMEs and presenters along with new blood presenting on subjects from Oracle JET to Robotic Process Automation (RPA) for example.
I hope we’ll see you there.
The use of Technical Capability Models is not something I have seen a lot of, which is a little unfortunate as they can provide tremendous insight into an organization’s IT needs.
Typically you want to use the technical capability model in conjunction with a business capability model, and this is where things can get tricky, as developing the business views can take time. I came across this short video, which focuses on the business aspect but helps explain the ideas behind the models:
Note how the model largely consists of groups of capabilities performed by the business. Underlying this kind of diagram you would have a brief explanation of each capability. If you want to go all out on EA modelling, you can then link the capabilities to the documented associated processes and so on.
Independently, the ideal is to then identify the technical capabilities that are likely to be needed. This will produce a similar-looking model. The technical capabilities are probably best drawn from industry best practices and specific business needs, and the model should be completely product agnostic. The real value comes from then mapping which business capabilities use which technical capabilities.
This mapping will now help inform a number of decisions and identify areas of focus. Each technical capability should fall into one of the following states, with the associated reasons:
- Maps to business capabilities
  - This is healthy
- Technology is being used but has no business mapping
  - Gap in the business capability model?
  - Nuance of the business model not understood by IT?
  - Redundant processes being performed?
- Business process with no technology
  - Opportunity for business improvement?
  - Genuinely no value in applying technology, e.g. the business value is that something is handmade?
  - Capability delivered by shadow IT?
- Doesn’t map to any business capabilities
  - Capability isn’t needed and can therefore be jettisoned, OR
  - Potential capability that the business is unaware of, or hasn’t understood what can be offered
With the exception of the first state, the other scenarios should be examined more closely and the models adjusted accordingly.
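The classification above can even be automated once the mappings are captured as data. The following is a minimal sketch, assuming the mappings live in a simple map structure; the capability names and state labels are illustrative assumptions, not part of any standard model (and a sub-state such as "shadow IT" would need human judgement).

```java
import java.util.*;

// Sketch: classifying capabilities by their mappings.
// Each technical capability maps to the business capabilities it supports;
// anything unmapped on either side is flagged for closer examination.
public class CapabilityAudit {

    enum State { MAPPED, TECH_WITHOUT_BUSINESS, BUSINESS_WITHOUT_TECH }

    static Map<String, State> classify(Map<String, Set<String>> techToBusiness,
                                       Set<String> businessCapabilities) {
        Map<String, State> states = new TreeMap<>();
        Set<String> covered = new HashSet<>();
        techToBusiness.forEach((tech, business) -> {
            // Technology with no business mapping is one of the suspect states.
            states.put(tech, business.isEmpty() ? State.TECH_WITHOUT_BUSINESS : State.MAPPED);
            covered.addAll(business);
        });
        for (String biz : businessCapabilities) {
            if (!covered.contains(biz)) {
                states.put(biz, State.BUSINESS_WITHOUT_TECH); // business process with no technology
            }
        }
        return states;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> techToBusiness = new HashMap<>();
        techToBusiness.put("Document Management", Set.of("Customer Support"));
        techToBusiness.put("Message Queuing", Set.of()); // no business mapping
        Set<String> business = Set.of("Customer Support", "Order Fulfilment");

        classify(techToBusiness, business)
                .forEach((name, state) -> System.out.println(name + " -> " + state));
    }
}
```

Even at this toy scale, the output surfaces the two suspect states immediately, which is exactly the conversation starter the models are meant to provide.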
With the capability models linked and the mismatches addressed, the technical capability model can really deliver value by linking the capabilities to the actual technologies being used. Very quickly it is possible to see details such as:
- Technology weaknesses (i.e. a key business area is not well supported by IT, e.g. the products mapped are end of life, or don’t have the required level of support). Whilst some of these will be ‘no-brainers’, more than likely a few surprises will show up
- Technology duplication – sometimes we’ll see multiple products in one area; can the product list be rationalized to maximize license investment? Would it be more cost effective to invest in one high-end product and eliminate lots of smaller niche pieces?
- Where IT investment will likely improve key capabilities versus investment in niche capabilities
- How technology change can impact the business; for example, replacing a Content Management System may impact an organization’s online presence, but it may also impact how we deliver support services to customers.
- If the business prioritises a specific area, how does that map onto IT systems and processes?
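The duplication point in particular falls out almost for free once capabilities are linked to products. As a minimal sketch, assuming the capability-to-product links are held as a simple map (the capability and product names here are made up for illustration):

```java
import java.util.*;
import java.util.stream.Collectors;

// Sketch: spotting technology duplication once capabilities map to products.
public class DuplicationCheck {

    // Return the capabilities served by more than one product -
    // the candidates for rationalizing the product list.
    static Map<String, List<String>> duplicated(Map<String, List<String>> capabilityToProducts) {
        return capabilityToProducts.entrySet().stream()
                .filter(e -> e.getValue().size() > 1)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, List<String>> mapping = Map.of(
                "Content Management", List.of("Product A", "Product B"),
                "Identity Management", List.of("Product C"));
        System.out.println(duplicated(mapping)); // only Content Management is flagged
    }
}
```

Whether the duplicates actually warrant consolidation is, of course, a licensing and business conversation rather than a technical one.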
Whilst a lot of this will seem pretty obvious, the exercise will uncover unexpected details and, most importantly, provide a relatively simple set of visualizations and cross-references that help us understand the business and explain the impact of IT-related decisions to the business in its own terms.
The following deck provides a presentation on the value of Technical Capability Models: