Successful PaaS Team



Last week we attended the Oracle EMEA PaaS Forum in Budapest. I have to say we’re proud to be part of a team that picked up two conference awards: firstly our CTO Luis Weir, for Contributions to API Solutions, and then the wider team, who collected an award for overall Outstanding PaaS Contributions.


L-R, me, Amy Grange, Soham Dasgupta (Capgemini Netherlands), Luis Weir (Oracle Delivery Unit CTO), Jurijs (Yury) Fjodorovs

Some photos from the trip can be found here.

London Oracle Developer Meetup No. 2



Last night was the 2nd London Oracle Developer meetup, with 100 people registered. CTO Luis Weir presented on 3rd generation APIs, including their relationship to microservices. The outcome was not only an excellently delivered presentation with an engaged audience, but also some really thought-provoking discussions, particularly on the application of microservices in financial contexts. For example, is it possible for financial institutions to get near the poster child Netflix in terms of building and delivering solutions to production, given the certification and compliance requirements? There were some interesting insights on PSD2 (Payments Service Directive 2) as well, along with questions like: is a monolith wrong?

In addition to the high-level architectural debate, the evening included a shortened walkthrough of the possibilities of deploying the API Gateway and its relative ease compared to other products.

Finally we concluded with an update on the project that we’re running as an underlying theme for some of the meetups – that of using various tech to command the drone(s) via a set of published APIs. This included a call to arms to contribute, either by building your own apps to use the API, or by contributing to the back end as the solution gets built using an API-first approach.

The slides from the event.



Managing API Policy Versioning in Oracle API Platform



Oracle’s API Platform (API-P) product avoids the use of external configuration management. If you want to better understand why, then check out our forthcoming book, which goes into detail about why this is the case (a pre-release version of the book can be obtained here). In a previous blog I wrote about and illustrated the use of API-P’s own APIs, making it possible to see which API iterations had been deployed to which API Gateways.

In this blog I want to explore the issues of version management a bit further. API-P provides internal version management through the idea of iterations, as previously explored (Understanding API Deployment State on API Platform). In addition to this there are API policy attributes such as version and status. This information, whilst having some impact on behaviour, reflects the version of the ‘contract’ that the API represents between the consumer and provider, and requires a manual change.

The API policies themselves are version tracked through the iteration identifiers. Each time a policy is saved the iteration number is incremented. What API-P doesn’t support is the concept of branching. For relatively simple API policies, branching is unlikely to ever be an issue.

Why is a reversion capability needed?

Let’s take a more complex scenario. In our book on the API Platform we introduced some APIs that allow the retrieval of metadata about artists in the record companies’ catalog. It has, however, come to light that the API has been targeted with malicious calls: firstly through injection attacks, and secondly by trying to overload the back end with data requests that make it work hard to retrieve data.

To defend against this, the API policy has been enhanced to include some custom Groovy policies to inspect the values provided. Strictly speaking, following the principles of Semantic Versioning, the API version should go from 1.0.0 to 1.0.1. However, seeing that the ‘contract’ as presented to the consumer hasn’t really changed – the data models are the same and the URI goes unaltered – the implementation team do not change the version.

During development, it is not unusual to be working on existing logic and to decide that the approach being taken isn’t right, or isn’t going to perform well. So you abandon your changes and revert back to the last approved version. However, this isn’t possible in API-P, as any save will result in a new iteration. API-P will be getting some enhanced version management features, but today, to be able to undo the changes, we need a means to ‘revert forward‘ (hence the tool name) – that is, to take an older iteration and make it the latest, as illustrated in the next diagram.

Today API-P doesn’t provide a means to perform this process within the user interface. However, when looking at a gateway you can review the API policy deployed, and as we established previously, you may have different gateways deployed with different iterations. Given this, and as API-P has been built true to the principle of separating the UI from the back end through the use of APIs, we can deduce there should be a means to get the details of an API at a specific iteration.

To this end we have built on the pattern previously illustrated to provide the means to ‘Revert Forward’, by creating a Groovy script that uses the APIs provided by API-P to retrieve an iteration and push it back as the latest one. When the policy is pushed back, the script also modifies the description to show which iteration has been pushed.
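
In outline, the idea looks something like the following. This is a minimal sketch only – the endpoint paths, payload field names and server address are assumptions made for illustration, and the script in the repository remains the definitive version:

import groovy.json.JsonSlurper
import groovy.json.JsonOutput

def server    = "https://mymgmtcloud.example.com"   // placeholder management cloud address
def apiId     = 101                                 // the policy/API of interest
def iteration = 7                                   // the iteration to bring forward
def auth = "Basic " + "weblogic:Welcome1".bytes.encodeBase64().toString()

// fetch the older iteration of the API definition (endpoint shape assumed)
def getConn = new URL("${server}/apiplatform/management/v1/apis/${apiId}/iterations/${iteration}").openConnection()
getConn.setRequestProperty("Authorization", auth)
def apiDef = new JsonSlurper().parse(getConn.inputStream)

// annotate the description so the origin iteration stays visible (field name assumed),
// then push the definition back, which creates a new latest iteration
apiDef.description = "Reverted from iteration ${iteration}. " + (apiDef.description ?: "")

def putConn = new URL("${server}/apiplatform/management/v1/apis/${apiId}").openConnection()
putConn.requestMethod = "PUT"
putConn.doOutput = true
putConn.setRequestProperty("Authorization", auth)
putConn.setRequestProperty("Content-Type", "application/json")
putConn.outputStream.withWriter { it << JsonOutput.toJson(apiDef) }
println "Revert forward response: ${putConn.responseCode}"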

You may ask, why not use the conventional developer approach of branching, as suggested in the following diagram? Unfortunately API-P’s iteration framework doesn’t extend to support this.

The next question is pretty predictable – how do I know which iteration to revert to? You have two options here. Firstly, revert in order so you can see each prior version in turn – which, whilst visually good, is not necessarily the most practical option. Alternatively, the tool has a parameter that will display on the console the configuration of each iteration. This does mean you are going to see the policies in a JSON representation. To make life easier we would recommend, as good practice, recording in the policy description information that helps determine the policy’s characteristics – this can then be used to better determine iteration behaviour.

If we are able to take an earlier iteration and make it the latest by pushing it back, then it is a short step to actually target a different management cloud, in effect migrating the policies. Whilst possible, it comes with some serious cautions …

  • You risk undermining your version management – which management cloud holds the master? The iteration numbers will NOT migrate, so this information cannot be used to distinguish the latest version
  • The logic included doesn’t accommodate handling differences in policy versions – so if trying to move between instances of the API Platform they need to be the same version, otherwise your configuration could make a mess of things
  • This issue is further compounded if you are deploying custom Java policies.
  • Environment-specific policies simply won’t work, for example gateway-based routing.

Oracle does not recommend that the policies be stored anywhere outside of the platform. Whilst this utility makes that a possibility, it deliberately avoids writing any of the information to file; the policies only reside outside of the platform for the duration of the script’s execution.

The Tool’s Commands

All the parameters assume the values will not contain any space characters. Each parameter is preceded by a dash, e.g. revertForward.groovy -inPass mypass -inSvr … (full example invocations are shown after the parameter list).

  • -h or -help – provides this information
  • -inName – user name to access the source management cloud
  • -inPass – password for the source management cloud
  • -inSvr – the server address without any attributes
  • -policy – numeric identifier for the policy of interest
  • -iter – iteration number of interest for the policy – optional
  • -outName – optional, the target management cloud username, only needed for migrations
  • -outPass – optional, the target management cloud password, only needed for migrations
  • -outSvr – optional, the target management cloud server address – same formatting as inSvr, only needed for migrations
  • -override – optional, if migrating to another management cloud, tells the script to replace the existing policy of the same name if found
  • -view – optional, separate command to allow viewing of the policy – requires one of the following values:
    • display – displays all the details of the policy; if no iteration is provided this will be the latest iteration
    • summary – provides the headline information of the policy including name, change date etc.
    • summary-all – summarises all the iterations from the current one back to the 1st
  • -debug – optional, will get the script to report more information about what is happening
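
For example, a hypothetical migration run – taking iteration 3 of policy 101 from one management cloud and pushing it to another – would look something like the line below (the server addresses are placeholders, and the cautions above still apply):

revertForward.groovy -inName weblogic -inPass Welcome1 -inSvr https://src-mgmt.example.com -policy 101 -iter 3 -outName weblogic -outPass Welcome1 -outSvr https://target-mgmt.example.com -override

Similarly, to simply inspect the history of a policy without changing anything:

revertForward.groovy -inName weblogic -inPass Welcome1 -inSvr https://src-mgmt.example.com -policy 101 -view summary-all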

The code can be obtained from my GitHub repository here.

Tracing Executions in an API Environment



As APIs become more pervasive within our solutions we see the arrival of not just design and cataloguing tools such as Apiary, Apigee and others, but also the arrival of gateways. The gateways provide execution of operations including validation, accounting (monetization), routing, and other controls such as throttling checks that would often not occur until the first contact with a service bus. For example, initial routing based on the API call, plus fine-grained authentication and authorisation (differing from your firewalls, which will perhaps just authorise).

In the more traditional integration middleware of Oracle Service Bus and SOA (regardless of cloud or on-premises) you can trace the execution through the middleware end to end. This tracing can be achieved because the platform creates and assigns a UUID (aka eCID) and ensures that it is carried through the middleware. It is this very behaviour that allows Oracle to provide Business Activity Monitoring without any need to be invasive. But not only that, in a highly distributed environment you can track the processing of a transaction from end to end.

The challenge now is that the first point of middleware-style behaviour can be at the gateway. So we actually need to move the UUID, or equivalent, forward to the first point of contact. Not only that, we need to think about the fact that we will see non-Oracle integration middleware involved. Within Spring, Kibana and other established frameworks and tools, which are getting significant traction with the rise of microservices, the IDs being used are not the same as the UUIDs used by Oracle. For example, Spring Cloud Sleuth uses the same HTTP header IDs that Zipkin and Kibana support:

  • X-B3-TraceId
  • X-B3-SpanId
  • X-B3-ParentSpanId

More information can be found here and here.

For the new Oracle API Platform Cloud Service we can check for the existence of the header attributes with a policy and apply actions such as:

  • Apply a header with a TraceId or SpanId,
  • If a SpanId exists then we may wish to nest our call as a child span by moving the SpanId to the ParentSpanId and creating a new SpanId (a sketch of this follows below).
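
The following is a minimal sketch of that second case, written in the same style as the Groovy policy shown later in this post. The treatment of getHeaders() as a simple map, and the 64-bit hex span id, are assumptions for illustration:

def headers = context.getApiRequest().getHeaders()
if (headers.containsKey("X-B3-SpanId")) {
  // the caller's span becomes our parent span
  context.getServiceRequest().setHeader("X-B3-ParentSpanId", headers.get("X-B3-SpanId").toString())

  // generate a new 64-bit span id for this hop, hex encoded
  byte[] spanBytes = new byte[8]
  new java.util.Random().nextBytes(spanBytes)
  context.getServiceRequest().setHeader("X-B3-SpanId", spanBytes.encodeHex().toString())
}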

Ultimately it would be more attractive to apply the logic using the API Platform’s SDK, but to get things rolling, applying the IDs with an API Groovy policy is sufficient (more here).

The next question begging is where to put the ID and what to call it? Put the value in the body, and you’re invading the business aspect of an API with execution-specific details, not to mention potentially changing the API definition. On the other hand, stuffing HTTP(S) headers with custom attributes is often discouraged, as the values aren’t immediately visible. In my opinion the answer has largely been set by precedent for us: if you were using JMS rather than HTTPS you would use the header, and a number of frameworks already exist that make use of this strategy, such as those mentioned above.

Using the headers recognised by Kibana etc. means that a number of support tools will, out of the box, understand and be able to help visualise your logs with no additional effort.

The following is a Groovy script that could be used to apply an ID appropriately into the HTTP header in the API Platform:

if (!context.getApiRequest().getHeaders().containsKey("baggage-UUID")) {
  // use current time to seed the random generator
  def now = java.time.Instant.now()
  def random = new java.util.Random(now.getEpochSecond())

  // create an array of 16 bytes to hold the random value and fill it
  byte[] uid = new byte[16]
  random.nextBytes(uid)

  // convert the random bytes into a hex string
  Writable uidInHex = uid.encodeHex()
  String uidStr = uidInHex.toString()

  // set the outbound header
  context.getServiceRequest().setHeader("baggage-UUID", uidStr)
}

API Design



When it comes to ensuring I keep up good practices, I try to look at books in areas I think I have a good handle on, such as APIs. Why? Well, it confirms and validates that I’m up to date; sometimes another viewpoint can spark ideas on how to make something better, improve an approach or simply understand another way of explaining an idea. The latter is important, as the key benefit of knowing something is the opportunity to help someone else, and not everyone communicates or understands ideas in the same way, so this is always helpful.

So recently I ran through James Higginbotham’s Designing Great Web APIs book(let). Often when going through a book I mindmap it so that I can share it, and refer to it as a list of prompts and reminders if necessary. Whilst James’ book doesn’t reveal anything new or revelatory for anyone working with APIs, it does provide a good, succinct explanation of basic practices. So here is the mindmap:

great APIs

You can also access my MapWise view here. James’ book can be obtained freely from O’Reilly here.

The book doesn’t go into the depth of detail for practices that Apiary (Pro Edition) offers with its style guidelines, which describe more detailed recommended practices (more here).

Validating API Platform Policies & Gateway Deployments



When configuring API policies in the Oracle API Platform it helps if there is a simple back end that can take the received payload and record the sent values (headers & body), as well as reflect the call details back as the response, or possibly respond with a test payload (so that response policies, particularly policies that require payload navigation, can be exercised correctly). By having this facility it becomes a lot easier to determine whether the policies are executing correctly in terms of routing, transforming, filtering etc. without needing to worry about whether the API implementation is correct. You could say that this is a kind of mock for testing the API Platform.

The added benefit of having a mock back end is that it becomes easy to ‘smoke test’ a gateway deployment, particularly if the mock is happy to receive any form of call.

Implementing such a capability can be done in pretty much any language and platform you like. We have in the past, for example, built a Spring Boot Java application whose dependencies can be configured so that it deploys into WebLogic. We have come to refer to these test apps/mocks as PlatformTests, as that’s exactly what they help do. A Node.js implementation of a PlatformTest, such as the following, is particularly appealing as the Node.js footprint is small and simple to deploy and undeploy. A basic Node.js implementation can also consume any URL and operation you choose to use. The nature of JavaScript makes it very quick to adapt the mock if need be, although in an ideal world we write the solution once and then use simple configuration to tune behaviour.

The following code looks for a local file called testResponse.json; if found, it returns the content of the file (assumed to be JSON), otherwise it reflects back in the response body the received headers and body. This reflection makes it extremely easy to see how the policies have changed the inbound call. The content is also logged to the console – making it easy to see what came through to the back end.
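
For example, a testResponse.json along the following lines (the payload content here is purely illustrative) would be returned to every caller instead of the reflected request:

{
  "artist": "A Band",
  "albums": 3,
  "note": "canned PlatformTest response"
}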

The implementation also assumes port 8080, but changing the port is exceptionally easy.

There is one enhancement planned: to allow the response test payload to be handled as XML. This will need a little tweaking of the code, as presently the JSON object is stringified.

The code is also available in my GitHub repository, along with an example test response file.

const http = require('http');
const fs = require('fs');

// create a simple HTTP server that will handle the requests
http.createServer((request, response) => {
  const { headers, method, url } = request;
  console.log("Called at " + new Date().toLocaleString());
  let body = [];
  request.on('error', (err) => {
    console.log("Svr Error Handler :" + err.toString());
  }).on('data', (chunk) => {
    // accumulate the body as the chunks arrive
    body.push(chunk);
  }).on('end', () => {
    body = Buffer.concat(body).toString();
    // At this point, we have the headers, method, url and body, and can now
    // do whatever we need to in order to respond to this request.

    // record in the console what details have been received
    console.log("Received:\nMethod:" + method.toString() +
      "\n URL:" + url.toString() + "\nheaders:\n" + JSON.stringify(headers) +
      "\nBody:\n" + body);

    // now build the response
    response.setHeader('Content-Type', 'application/json');
    response.setHeader('PlatformTestTime', new Date().toLocaleString());

    // initialise our response object so that if we don't load a response
    // file then we reflect the content
    var responseBody = { headers, method, url, body };

    // helper to send whatever we have decided the response body should be
    var sendResponse = function () {
      var output = JSON.stringify(responseBody);
      console.log("Returning:" + output);
      response.statusCode = 200;
      response.end(output);
    };

    try {
      // try reading a response file
      fs.readFile('testResponse.json', function (err, data) {
        console.log("handling file");
        if (err != null) {
          if (err.code === 'ENOENT') {
            console.log("no return file - will reflect");
          } else {
            console.log("Read error:" + err.toString());
          }
        } else {
          // a file exists - but is it empty?
          if ((data != null) && (data.length > 0)) {
            // we have a file with content - process it into a JSON object
            if (Buffer.isBuffer(data)) {
              // convert the buffer into a string
              body = data.toString('utf8');
              console.log("test response:" + body);
              responseBody = JSON.parse(body);
            }
          }
        }
        // send either the test payload or the reflected request details
        sendResponse();
      });
    } catch (err) {
      // if anything went wrong reading the file, fall back to reflection
      console.log("Error:" + err.toString());
      sendResponse();
    }
  });
}).listen(8080); // Activates this server, listening on port 8080.
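
To try it out, save the code to a file (platformTest.js is used below purely as an example name), start it with Node and then throw any request at it, for example:

node platformTest.js
curl -X POST http://localhost:8080/any/path -H "Content-Type: application/json" -d '{"hello":"world"}'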

Understanding API Deployment State on API Platform



The new Oracle API Platform makes it possible to deploy different versions of your APIs to different gateway instances. When you’re managing the development of API policies through all the different stages of the lifecycle (design to production) from a single management tier, such a capability is essential. This is further complicated by the fact that each save of your API definition creates a new iteration (the term used to identify each saved ‘version’ of the API).

However it does lead to the challenge, from a management perspective, of knowing which iterations are running on each gateway. You can get the information from the current UI, but it requires multiple steps. The UI also lends itself more to the design processes today than to the denser information views that an operational report might warrant.

I’m sure that over time these views will come, but today we can solve the problem by taking advantage of the fact that the product lives by its own ‘mission’ of offering a very rich set of APIs. As a result it becomes possible to build your own views. To that end I have written a Groovy script which will go through each API that can be seen and retrieve the iteration deployed to each logical gateway.
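
In outline, the script does something along the following lines. This is a sketch only – the endpoint paths and response field names are assumptions for illustration, and the script in the repository is the reference:

import groovy.json.JsonSlurper

def server  = "https://mymgmtcloud.example.com"   // placeholder management cloud address
def auth    = "Basic " + "weblogic:Welcome1".bytes.encodeBase64().toString()
def slurper = new JsonSlurper()

// helper to GET a management API resource and parse the JSON body
def get = { path ->
  def conn = new URL(server + path).openConnection()
  conn.setRequestProperty("Authorization", auth)
  slurper.parse(conn.inputStream)
}

// list every API visible to this user, then report the iteration on each logical gateway
get("/apiplatform/management/v1/apis").items.each { api ->
  println "${api.name} (latest iteration ${api.iterationId})"
  get("/apiplatform/management/v1/apis/${api.id}/deployments").items.each { dep ->
    println "   gateway: ${dep.gatewayName}  deployed iteration: ${dep.apiIterationId}"
  }
}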

In terms of running the script you obviously need Groovy installed. It expects 3 parameters which are:

  • Server address e.g.
  • Username e.g. weblogic
  • Password e.g. Welcome1

You can hardwire into the script default values which will then be used if no parameters are provided.
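
So a typical run looks something like this (the script file name and server address below are placeholders – use the actual file name from the repository and your own management cloud address):

groovy listDeployedIterations.groovy https://mymgmtcloud.example.com weblogic Welcome1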

Here is a screenshot of some output.  I have masked out some information for reasons of security. But there should be enough here to give a sense of what is happening:


The script includes suppressing certificate validation – necessary if you haven’t yet deployed your own specific certificate and are still working with the default Oracle certificate.

Feel free to take the script and play with it. I make no claims about its elegance, but I have tried to comment it so you can see what is going on, and I have tried to keep the code fairly simple so you can see how it works and processes the JSON responses. The script is available at:

For more about the APIs involved in the script, check out

2017 into 2018 as a Geek



It seems that it is becoming common for people to write a personal review of the year. If you’re the old-school Christmas card sort then it gets printed and put in the card. If you’re a bit more hip then it’s a Facebook post. For those trendier than that, who knows?

Anyway, I thought I’d use my blog to reflect on what has happened and what we hope to be up to in 2018.

So the big headlines for us …

  • 1st book published as a co-author, about ICS, and started another book project which should be finished in 2018.
  • Packt have been talking to me about another book project (even though my contribution to book 2 is not yet finished!). I have to admit what is being suggested is intriguing and a bit different.
  • Then there was the UKOUG Journey to the Cloud event. Having been postponed because of venue flooding, it was good to see this happen. Not to mention it being one of a number of events I have presented at this year.
  • We attended the Oracle EMEA Partner Conference for the 1st time and presented with my co-author on ICS.
  • Contributed to supporting the UKOUG as a SIG committee member and reviewed for Oracle Scene. Being involved in a SIG committee also meant helping plan the conference.
  • Writing hasn’t just been about the books; we continue to write our own blog posts, plus content for several journals including Oracle Technology Network.
  • Presenting at Oracle Open World for the 1st time, and signing copies of our book on ICS
  • Promoted from an Oracle Ace Associate to a full Oracle Ace.

So where will 2018 take us? Well, some things we’re confident of …

What do we hope to pull off …

  • Another year presenting at Open World,
  • UKOUG Tech 18 presentations
  • Articles for Oracle Scene
  • Submissions accepted at Oracle Code London
  • Presenting at Oracle EMEA Partner Conference

Message Push Listener – Article Update



When I first wrote about Oracle Messaging Cloud we used a service called WebScript to make it easy to demonstrate the Message Push Listener. WebScript was essentially what we better know as a serverless or functions-oriented offering (that is, we wrote pieces of code and deployed them without any consideration of servers etc.). Well, as I prepared my demos for Messaging Cloud for the UK Oracle User Group Tech 17 Conference, I discovered that WebScript is being shut down in December 2017.

In the light of this news, I needed to provide an alternate implementation for my Message Push Listener demo, using Google’s Cloud Functions. Before I go into the Google implementation I thought it worth sharing how I landed on Google’s offering.

Google Cloud Functions is a new service that has been launched with an interesting future. I had hoped to try using Project Fn (Oracle’s open source serverless offering) but the cloud offering is not yet publicly available – although you can run Fn on any platform today if you’re prepared to invest in setting up the environment (defeating the point of serverless). I know some of Oracle’s Developer Champions have had a preview, so it can’t be too far away now. I’m sure when we get a chance to access the newly announced Cloud Native Service, which will include Fn, we will revisit it. Before settling on Google we looked at several other offerings in the serverless space. Whilst this is not an exhaustive analysis, it should help give a sense of the challenges and ease of adoption. If you search today on serverless you’ll most commonly come across Auth0’s WebTask, AWS Lambda and IBM OpenWhisk (based on Apache OpenWhisk).

I started with WebTask, and it was very nearly a done deal, with a nice, easy-to-work-with cloud development platform and integrated testing. There is extensive support for Node.js, and a number of standard frameworks to use with it, such as Express, are available without doing anything.

Other languages are supported as well by WebTask, but as I’m trying to create a demo that warrants very little explanation of the serverless platform, we didn’t dig into this area. Everything went swimmingly until I tried to set up external calls to my function. This became a headache: whilst the security model is not overly complex (there are several ways to provide the REST call with authentication, e.g. adding a key in the URI), the process of generating and associating the credentials was far from clear in the documentation.

AWS Lambda

I moved on to look at AWS Lambda, which I just found horribly confusing to get started with. I have heard others say that getting going isn’t straightforward, and I found myself giving up pretty quickly as the setup wasn’t that clear. Whilst I have used AWS with its IaaS capabilities, which are powerful, flexible and pretty easy to get to grips with if you understand basic ideas like virtual machines, this didn’t hold true for Lambdas.


As for OpenWhisk, we started to look at it, but getting a 404 error when trying to access the editor by following the IBM documentation didn’t inspire confidence. There was plenty of supporting documentation explaining how OpenWhisk works.


The Execution framework for OpenWhisk

  1. Nginx is used for SSL termination and forwarding appropriate HTTP calls to the next component
  2. The Controller first disambiguates what the user is trying to do, based on the HTTP method used in the HTTP request. This is a Scala solution built using Akka & Spray. This includes:
  3. Verifying who you are (authentication) against a CouchDB-based identity store.
  4. Once approved, the details of the Action to be executed are retrieved from the whisks database in CouchDB.
  5. With information on what to do, service discovery is performed using Consul, which tracks the available executors in the system. Those executors are called Invokers.
  6. Kafka is then used to protect the demand pipeline from failure by recording the request and the consumer (Invoker) identified by Consul.
  7. The Invoker is built using Scala and uses a Docker instance to run the Action, which could be anything, e.g. Node.js. The Action is injected into the container to be processed.
  8. As the result is obtained by the Invoker, it is stored into the whisks database as an activation under the ActivationId. The whisks database lives in CouchDB.

In addition to the 404, as you can see we have a two-step process to execute an action and return a response. However, the Message Push Listener’s initial challenge needs a call and response in a single step, so trying to massage this into a call and response is going to be challenging and a distraction from what we want to be conveying.

Using Google Functions

This brings us back to Google. Whilst the cloud IDE is not as elegant or mature as WebTask’s, it was sufficient, and the security model wasn’t imposed on us. I liked the documentation when I needed to refer back to it, but to be honest the service is pretty intuitive. You can’t fault the docs, to the point that Google gave time over to explaining how to manage or avoid incurring costs.

Setting up was very simple, and once you’ve chosen your cloud services you get a dashboard like this:

Google CLoud mgmt

Google provides the idea of projects, which allows you to group pieces together – such as related functions. Each project is namespace-separated. If we then navigate into a Functions project we get a view as follows:

As you can see in the preceding diagram, I created two functions within a project called OMCS. From here you can create more functions in your project or drill into an individual function, as the following view shows:

Google Functiuons performance

An individual function provides you with several tabbed views covering the general information (as shown above), plus Trigger, Source and Testing. We can see the other views in the following screenshots. The next screenshot shows the function editor; as you can see it is fairly simple – but sufficient to do the job.


Once saved, if valid, the code will automatically get deployed; alternatively you can work offline and then upload the code if you want to use a nice editor like Sublime.

With your code edited and saved, the next step is to invoke it. This can be done with the next tab, or the details such as the URI can be copied so you can test from your preferred test tool, such as SoapUI, Postman or API Fortress.

Google functions Trigger

The testing view allows you to define input and output values, along with the outcomes. Personally, I worked with SoapUI.

Google Functiuons Test

The important thing when running tests or diagnosing issues is to be able to examine execution logs. In this area Google Functions is pretty feature-rich, with a solution that works in a style somewhat like searching in Splunk (and I’m sure other log analytics tools), where you can drill into the logs and build log filters on the fly. The log view is shown in the next screenshot.

Gogole Functions Logging 2

As you can see, the tool looks pretty straightforward and uncomplicated to use, with freedom to adapt how you work to your preferred style. Based on my experience of using Project Fn on my desktop, it is this simplicity I think we’ll see with the Cloud Native Platform from Oracle when it becomes available.

Finally, as to the function code produced for this example: whilst the Google Functions UI is a bit basic, it is easy to use and get started with, certainly for use as a demo platform or perhaps for creating stubs, test and mock endpoints. Having been critical of the other offerings for security getting in the way of a simple illustration, it is possible that Google Functions may need some work in this area – I didn’t see anything that obviously integrated security features easily.

Back to my Original Articles…

Just to tie back the impacted articles …