Microservices & APIs – Exploiting HTTP Response Codes


When it comes to the use of microservices and APIs, it appears pretty common for a few key response codes to be used. However, if you look at the IANA Status Code Registry of defined codes, there are a number of other very useful codes that can help convey issues clearly, without compromising security.

The IANA list references the relevant IETF RFCs, but I’ve taken this a step further and obtained the deep hyperlinks to the individual code explanations. In addition, I’ve also highlighted some response codes that perhaps benefit from a closer look, or should be treated with caution.
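To make that concrete, here is a minimal sketch of the idea in a hypothetical JAX-RS resource (the resource, helper methods and codes chosen are purely illustrative, not taken from the registry write-up): a more specific status such as 410 or 429 tells the consumer what happened without leaking anything about the back end.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.core.Response;

    // Hypothetical resource showing more precise status codes than a blanket 400/500.
    @Path("/orders")
    public class OrdersResource {

        @GET
        @Path("/{id}")
        public Response getOrder(@PathParam("id") String id) {
            if (rateLimited()) {
                // 429 Too Many Requests (RFC 6585) asks the caller to back off,
                // revealing nothing about the back-end systems.
                return Response.status(429).header("Retry-After", "30").build();
            }
            String order = findOrder(id);
            if (order == null) {
                return Response.status(Response.Status.NOT_FOUND).build(); // 404 - never existed
            }
            if (isArchived(id)) {
                // 410 Gone - the resource existed but has been deliberately removed,
                // which is more informative to the consumer than a plain 404.
                return Response.status(Response.Status.GONE).build();
            }
            return Response.ok(order).build(); // 200 with the representation
        }

        // Stubs standing in for real rate limiting and persistence logic.
        private boolean rateLimited() { return false; }
        private boolean isArchived(String id) { return false; }
        private String findOrder(String id) { return "{\"id\":\"" + id + "\"}"; }
    }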

Continue reading

EMEA PaaS Forum 2019 in review


Image by @motivcx

Another spring means another excellent Oracle EMEA PaaS Forum for Oracle partners. Every year Juergen Kress organizes the event, finding really nice venues to host several hundred people over four and a half days.

The event is split into several parts. Monday afternoon normally involves Oracle Aces presenting on best practices, insights on applying the various technologies and so on. For me this meant presenting on the London Developer Meetup, looking at how it worked, what has been successful, and what hasn’t. Those who have read my blogs on the subject (here) will know about our drone initiative.

Picture by @AmyGrangeX

Then Tuesday is a single-stream day where Juergen has managed to pull in SVPs and Senior Product Managers from around the globe to provide high-level views of what has been going on with their products. For anyone consulting in the Oracle domain this is incredibly useful. For example, there is a clear strategy coalescing around AI and Machine Learning, both as a service proposition to users and in how these technologies are being made available and used within other products. Other areas such as OIC and SOA CS have stability and maturity, and the roadmap is about maximising connectivity with the newer products.

But before the sessions start, Juergen opens with some remarks and demos something engaging. In previous years this has been things like Digital Assistants/Chatbots and so on. This year we were fortunate to be an active contributor, demoing the drone through the use of APIs and talking about the ideas. The dry runs of the demo on Monday went without problem, but when it came to the main show the drone was a little uncooperative – we think because the air-con had really kicked in. But importantly, even without achieving the desired result, the message of engagement made it home.

Wednesday is split into streams with in-depth sessions from the different Product Managers. The amount of insight gained from these sessions is tremendous, some of it very much protected by safe harbour statements or not for public disclosure, such is the honesty and openness of the discussions. The day closes with an Ace Director initiative, which Luis Weir (Capgemini Oracle CTO) is part of, demonstrating the application of Oracle Cloud products to a plausible use case. This session has become something of a tradition now.

The day’s business concludes with awards, and for a second year the UK Capgemini team have taken home two awards, for APIs and PaaS Contribution.

Luis Weir with his API award

The final two days are then a choice of a Hackathon or half-day training sessions on different products with the relevant Product Managers – an excellent opportunity to pick the brains of the presenters as well as get hands-on experience with the different products.

The week isn’t without its social and networking activities of course …

Building Evolutionary Architectures


I have been working my way through Building Evolutionary Architectures by Neal Ford, Rebecca Parsons and Patrick Kua, three senior and respected members of ThoughtWorks (also the home of Martin Fowler). Having read and listened to Neal and Rebecca’s presentations and writing, I had expected a deeply thought-provoking read, but have to admit to being disappointed. There are some good points without a doubt, but the book pretty much focuses on one idea, the application of fitness functions. I’m not convinced it warrants several hundred pages of a book, and as a result the point does at times feel laboured.
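To make the fitness function idea concrete, here is a minimal sketch of an architectural fitness function written as a JUnit test using the ArchUnit library (my own choice of tooling and package names for illustration, not an example taken from the book): the build fails if domain code starts depending on the web layer, so the architectural constraint is verified continuously.

    import com.tngtech.archunit.core.domain.JavaClasses;
    import com.tngtech.archunit.core.importer.ClassFileImporter;
    import com.tngtech.archunit.lang.ArchRule;
    import org.junit.Test;

    import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

    // A structural fitness function: an objective, automated check of one
    // architectural characteristic that runs with every build.
    public class LayeringFitnessFunctionTest {

        @Test
        public void domainMustNotDependOnWebLayer() {
            // Hypothetical application packages, used purely for illustration.
            JavaClasses classes = new ClassFileImporter().importPackages("com.example.shop");

            ArchRule rule = noClasses()
                    .that().resideInAPackage("..domain..")
                    .should().dependOnClassesThat().resideInAPackage("..web..");

            rule.check(classes);
        }
    }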

Some of the arguments made leave me thinking there is a view that the only answer is microservices in the conventional model of Kubernetes, Docker and so on. I agree this is a powerful paradigm for allowing solutions to evolve, but it isn’t a silver bullet and it isn’t right in every case: if you have a team lacking the underlying appreciation of the goals, or the approach is put in place in an ad-hoc manner (see Chris Richardson‘s work), it isn’t going to help.

Alongside this, there is little said about the interface definitions for microservices (typically APIs of one form or another). Whilst mention of leaky abstractions is made, material illustrations such as code-led API definitions are omitted (the risk being that when the code changes, the API changes and the impact cascades).

What surprised me the most is that on more than one occasion the book points to ERPs not being sufficiently customisable. Yet anyone working with ERPs will tell you that ERPs are at their best when you use them to leverage industry best practices rather than crowbar them into fitting unconventional ways of operating. If you’re a manufacturer, is fiscal reporting part of your differentiator? Probably not, so why not take best practice out of the box.

As usual I have mind mapped things as I read through the book. The dynamic/interactive version is here; the image (but not in full detail) is below.

Mind map – Building Evolutionary Architectures

 

Online Training – API Driven Architecture


In January we presented our first online training, looking at API design and the use of Apiary and Swagger. Things went well until near the end, when voice and video dropped for no apparent reason. Our coordinator Lindsay kept the recording going, and as soon as we reconnected I continued the session and went through the Q&A.

So if you missed the end of the training, please do check back with the recording.

If you missed the training – we’ll be rerunning in March – go here.

Those on the training will have seen links to my social media profile at the start of the session, so I’m happy to try to respond to any further questions.

We are also scheduled to run the session again in a month or so.

One of the questions received during the session that I thought worth mentioning was when Apiary would support Open API 3.0. Well, according to their blog, very soon – I’m looking forward to it, as OAS 3 does look a little cleaner.

Making Scripts Work with IDCS Deployed PaaS


A while back I shared some utilities I had developed to help with managing the API Platform. At the time we didn’t have access to an IDCS based environment, so the credentials worked using basic auth (i.e. username and password). But with environments managed by IDCS, tokens are used.

As a developer with a Java background I have to admit to preferring Groovy over Python for scripting, not to mention that for the API Platform, Groovy is part of the gateway deployment and SDK, meaning it is readily available in its 2.x form (3.x is relatively recent and aligns with the latest Java idioms). We haven’t tested against Groovy 3.0.

Thank you to Andy Knight for sharing with us some Java code, which I adapted to be pure Groovy (removing external dependencies for processing JSON). The result is a script that can be taken and worked into other scripts (which is what we have done for our previously provided scripts – Understanding API Deployment State on API Platform, Managing API Policy Versioning in Oracle API Platform, Documenting APIs on the Oracle API Platform). But this script can also be used on its own to get a token and display it on the command line or write it to file. Writing tokens to files is generally not good practice, but as a temporary measure when developing scripts it is arguably a managed risk.

The script can be found at https://github.com/mp3monster/API-Platform-Utils/tree/master/getToken and all the details on using the script can be obtained by passing -h as a parameter. The important thing is to understand how to obtain the Client ID and Client Secret, the details of which are described at https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/find-your-client-id-and-client-secret.html
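For anyone curious about the shape of the exchange the script performs, the following is a plain-Java sketch of requesting a token from IDCS using the client ID and secret as HTTP Basic credentials (the utility itself is Groovy; the host, credentials, grant type and scope below are placeholders – use the values your environment and the documentation above dictate):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.Scanner;

    // Illustrative only - every value below is a placeholder, not a real credential.
    public class GetTokenSketch {

        public static void main(String[] args) throws Exception {
            String idcsHost = "https://idcs-tenant.identity.oraclecloud.com"; // placeholder tenant host
            String clientId = "myClientId";         // from the IDCS confidential application
            String clientSecret = "myClientSecret"; // from the IDCS confidential application
            String body = "grant_type=client_credentials&scope=myScope";     // placeholder grant/scope

            HttpURLConnection conn =
                    (HttpURLConnection) new URL(idcsHost + "/oauth2/v1/token").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            String basic = Base64.getEncoder().encodeToString(
                    (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + basic);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            conn.getOutputStream().write(body.getBytes(StandardCharsets.UTF_8));

            // The JSON response carries the access_token to use as a Bearer token.
            try (InputStream in = conn.getInputStream();
                 Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name())) {
                scanner.useDelimiter("\\A");
                System.out.println(scanner.hasNext() ? scanner.next() : "");
            }
        }
    }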

Oracle Developer Meetup – London Feb 19


Last night was the first Oracle Developer Meetup in London for 2019.  We were very fortunate to have Tomas Langer fly over to talk about the new micro container/framework being developed as an open source solution by Oracle.

Oracle Developer Meet-up - Tomas Langer presenting on Helidon

Tomas opened by explaining the evolution of the micro-profile being championed by the Eclipse Foundation, who are now the guardians of J2EE (also known as Jakarta), and how the J2EE and MicroProfile standards compare (in simplistic terms, MicroProfile is J2EE stripped back to be simple and to support what is typically needed in a microservices world).

The presentation then went on to compare Helidon SE and Helidon MP (MicroProfile). What was really pleasing is that, with a couple of exceptions, everything Helidon MP can do can be done in the SE edition; the difference is that for SE you have to implement more code rather than relying on the auto-magic of annotations, but in return you have a reactive Java platform with a development paradigm that relates to JavaScript’s Express.
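As a rough illustration of the SE side (a minimal sketch along the lines of the Helidon 1.x SE API, not code from the talk), everything is wired explicitly in code:

    import io.helidon.webserver.Routing;
    import io.helidon.webserver.WebServer;

    // Helidon SE: routing and the server are built programmatically - more code,
    // but full control and a reactive, Express-like feel.
    public final class HelloSeMain {

        public static void main(String[] args) {
            Routing routing = Routing.builder()
                    .get("/hello", (req, res) -> res.send("Hello from Helidon SE"))
                    .build();

            WebServer.create(routing)
                     .start()
                     .thenAccept(server ->
                             System.out.println("Listening on http://localhost:" + server.port()));
        }
    }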

In addition to talking about what can be done, Tomas described the kinds of features being developed. These include:

  • Bringing MicroProfile support up to the very latest specification
  • More support for reactive persistence technologies

With the scene set, Tomas then worked through a series of live code scenarios, starting with a clean slate and building Hello World in both the SE and MP models to illustrate the differences in approach, before building on this to add further capabilities.
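By contrast, the MP flavour of the same Hello World is essentially a standard JAX-RS resource, with the server and wiring supplied by the runtime’s annotation-driven auto-magic (again a sketch of the general shape, not Tomas’ code):

    import javax.enterprise.context.RequestScoped;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    // Helidon MP: annotations declare the endpoint; the MicroProfile runtime
    // discovers the resource and starts the server for you.
    @Path("/hello")
    @RequestScoped
    public class HelloMpResource {

        @GET
        public String hello() {
            return "Hello from Helidon MP";
        }
    }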

You can get the complete example, which uses Helidon in both configurations, from Tomas’ GitHub.

In addition to Helidon itself on GitHub, the resources provided include rich documentation and examples of each key feature, plus a Slack community – contact any of the Helidon team and they will get you invited, allowing you to discuss how to do things with the development team and with other developers using Helidon.

Tomas can be contacted via @Langer_Tomas. The Helidon project also has its own Twitter account – Helidon Project.

Helidon itself can be found at:

I have previously blogged on Helidon at Exploring Helidon – Part 1

 

API Caching with the Oracle API Platform


We have been developing some advanced custom API policies for a client, and in the process picked up on a few insights that didn’t even make it into the API book. One of these policies provides an optimisation around the caching of API calls. The rest of this blog will talk about the tricks we have applied to link an API Gateway to a caching mechanism, and why.

Before I go into the details, I’d like to thank the Oracle product management team, and particularly Glenn Mi at Oracle, for their support in getting through the deeper undocumented elements of the capabilities of the API Platform SDK.

Caching Options

Caching comes in many forms and is motivated by varying reasons, not always wanting the same behaviours. When getting into the subject of caching, it is surprising how polarised people’s viewpoints can be about which cache strategies are correct. The following diagram illustrates the diversity of caches that could appear in an end-to-end solution.

Bringing together a caching technology in the reverse proxy model and an API Gateway makes a lot of sense. Data being provided to API consumers needs to be protected whether it comes from a cache or an active back-end system. At the same time you also want to exploit an API Gateway to provide analytics on API traffic, so any caching needs to be behind the gateway. But if the cache sits in front of the application layer, we can reduce the application workload.

When it comes to a caching technology to partner with the gateway, there are a number of options available, from Coherence to Ehcache, Memcached and Redis. We have avoided Coherence: whilst the gateway currently runs on a WebLogic server, we don’t want to unduly distort the performance profile and configuration of the gateway by forcing a cache onto that server. In addition, as Coherence is a licensed addition to WebLogic, it raises difficult questions about licensing when deploying gateways (gateways are licensed based on logical groupings and API volumes, but Coherence is licensed by OCPU). We also know that Oracle is moving towards having a micro-gateway, which may mean we see the gateway engine moved onto something like Helidon (but this last point is my speculation).

We have elected to use Redis for several reasons –

  • Available as a PaaS service with several cloud providers (AWS & Azure), so there is no setup or management effort, but it can also be deployed on-premises,
  • Has an out-of-the-box deployment in which cached entities can have a time to live (TTL), rather than needing to implement separate processes to expire cached values,
  • The ability to scale through clustering,
  • Cost

This caching model also allows us, optionally, to let application development teams push results directly into the cache. So rather than waiting on the TTL, the cache can be refreshed or even primed directly, without having to create fake requests to prime it.
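As a sketch of the resulting pattern (not the custom policy code itself, and with an arbitrary key scheme and TTL chosen purely for illustration), a read-through cache over Redis using the Jedis client looks something like this:

    import redis.clients.jedis.Jedis;

    // Read-through cache: serve from Redis when a value is present, otherwise
    // call the back end and store the response with a TTL. An application can
    // also write to the same key directly to prime or refresh the cache.
    public class ResponseCache {

        private static final int TTL_SECONDS = 60; // arbitrary TTL for the sketch

        private final Jedis jedis;

        public ResponseCache(String redisHost, int redisPort) {
            this.jedis = new Jedis(redisHost, redisPort);
        }

        public String get(String apiPath, BackendCall backend) {
            String key = "apicache:" + apiPath;       // hypothetical key scheme
            String cached = jedis.get(key);
            if (cached != null) {
                return cached;                        // cache hit - back end untouched
            }
            String fresh = backend.invoke(apiPath);   // cache miss - go to the back end
            jedis.setex(key, TTL_SECONDS, fresh);     // Redis expires the entry for us
            return fresh;
        }

        // Stand-in for however the gateway forwards the request onwards.
        public interface BackendCall {
            String invoke(String apiPath);
        }
    }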

Custom Policy

Continue reading