Tracing Executions in an API Environment


As APIs become more pervasive within our solutions we see the arrival not just of design and cataloguing tools such as Apiary, Apigee and others, but also of gateways. The gateways provide execution of operations including validation, accounting (monetization), routing, and other controls such as throttling checks that would often not occur until the first contact with a service bus. For example, initial routing based on the API call, or fine-grained authentication and authorisation (differing from your firewalls, which will perhaps just authorise).

In the more traditional integration middleware of Oracle Service Bus and SOA (regardless of cloud or on-premises deployment) you can trace an execution through the middleware end to end. This tracing is possible because the platform creates and assigns a UUID (aka the eCID) and ensures that it is carried through the middleware. It is this very behaviour that allows Oracle to provide Business Activity Monitoring without any need to be invasive. Not only that, in a highly distributed environment it allows you to track the processing of a transaction from end to end.

The challenge now is that the first point of middleware-style behaviour can be at the gateway, so we actually need to move the UUID, or equivalent, forward to that first point of contact. Not only that, we need to account for the fact that we will see non-Oracle integration middleware involved. Within Spring, Kibana and other established frameworks and tools, which are gaining significant traction with the rise of microservices, the IDs being used are not the same as the UUIDs used by Oracle. For example, Spring Cloud Sleuth uses the same HTTP header IDs that Zipkin and Kibana support:

  • X-B3-TraceId
  • X-B3-SpanId
  • X-B3-ParentSpanId

More information can be found here and here.

For the new Oracle API Platform Cloud Service we can check for the existence of the header attributes in a policy and apply actions such as:

  • Apply a header with a TraceId or SpanId,
  • If a SpanId exists, nest our call as a child span by moving the SpanId to the ParentSpanId and creating a new SpanId (a sketch of this follows the script below).

Ultimately it would be more attractive to apply the logic using the API Platform’s SDK, but to get things rolling, applying the IDs with the API Groovy policy is sufficient (more here).

The next question that begs asking is where to put the ID, and what to call it. Put the value in the body and you’re invading the business aspect of an API with execution-specific details, not to mention potentially changing the API definition. Meanwhile, stuffing HTTP(S) headers with custom attributes is often discouraged as the values aren’t immediately visible. In my opinion the answer has largely been set by precedent for us: if you were using JMS rather than HTTPS you would use the header, and a number of frameworks already exist that make use of this strategy, such as those mentioned above.

Using the headers defined by Zipkin, Kibana etc. means that a number of support tools will, out of the box, understand and be able to help visualise your logs with no additional effort.

The following is a Groovy script that could be used to apply an ID appropriately into the HTTP header in the API Platform:

if (!context.getApiRequest().getHeaders().containsKey("baggage-UUID"))
{
  // use current time to seed the random generator
  def now = java.time.Instant.now()
  def random = new java.util.Random(now.getEpochSecond())

  // create an array of 16 bytes to hold the random value
  byte[] uid = new byte[16]
  random.nextBytes(uid)

  // encode the random bytes as a hex string
  Writable uidInHex = uid.encodeHex()
  String uidStr = uidInHex.toString()

  // set the outbound header
  context.getServiceRequest().setHeader("baggage-UUID", uidStr)
}
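
For the child-span scenario in the second bullet above, the same approach extends naturally. The following is only a sketch: it assumes the B3 header names listed earlier and that the policy context exposes the headers as a map, as the script above suggests, so the exact calls should be verified against the SDK documentation:

def headers = context.getApiRequest().getHeaders()

if (headers.containsKey("X-B3-SpanId"))
{
  // a span is already in flight - the inbound SpanId becomes our ParentSpanId
  context.getServiceRequest().setHeader("X-B3-ParentSpanId", headers.get("X-B3-SpanId"))

  // mint a fresh 8-byte SpanId (16 hex characters) for this hop
  byte[] spanId = new byte[8]
  new java.util.Random().nextBytes(spanId)
  context.getServiceRequest().setHeader("X-B3-SpanId", spanId.encodeHex().toString())
}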

API Design


When it comes to ensuring I keep up good practices, I try to look at books in areas I think I have a good handle on, such as APIs. Why? Well, it confirms and validates that I’m up to date; sometimes another viewpoint can spark ideas on how to make something better, improve an approach or simply understand another way of explaining an idea. The latter is important, as the key benefit of knowing something is the opportunity to help someone else, and not everyone communicates or understands ideas in the same way, so this is always helpful.

So recently I ran through James Higginbotham’s Designing Great Web APIs book(let). Often when going through a book I mindmap it so that I can share it, and refer to it as a list of prompts and reminders if necessary. Whilst James’ book doesn’t reveal anything new or revelatory for anyone working with APIs, it does provide a good, succinct explanation of basic practices. So here is the mindmap:

great APIs

You can also access my MapWise view here. James’ book can be obtained freely from O’Reilly here.

The book doesn’t go into the depth of detail for practices that Apiary (Pro Edition) offers with style guidelines, which describe more detailed recommended practices (more here).

Validating API Platform Policies & Gateway Deployments


When configuring API policies in the Oracle API Platform it helps if there is a simple back end that can take the received payload and record the sent values (header & body), as well as reflect the call details back as the response, or possibly respond with a test payload (so that response policies, particularly policies that require payload navigation, can be exercised correctly). With this facility it becomes a lot easier to determine whether the policies are executing correctly in terms of routing, transforming, filtering etc. without needing to worry about whether the API implementation is correct. You could say that this is a kind of mock for testing the API Platform.

The added benefit of having a mock back end is that it makes it easy to ‘smoke test’ a gateway deployment, particularly if the mock is happy to receive any form of call.

Implementing such a capability can be done in pretty much any language and on any platform you like; we have in the past, for example, built a Spring Boot Java application whose dependencies can be configured for deployment into WebLogic. We have come to refer to these test apps/mocks as PlatformTests, as that’s exactly what they help do. A Node.js implementation of a PlatformTest, such as the following, is particularly appealing as the Node.js footprint is small and simple to deploy and undeploy. A basic Node.js implementation can also consume any URL and operation you choose to use, and the nature of JavaScript makes it very quick to adapt the mock if need be, although in the ideal world we write the solution once and then use simple configuration to tune behaviour.

The following code looks for a local file called testResponse.json; if found, the content of the file (assumed to be JSON) is returned, otherwise the received headers and body are reflected back in the response body. This reflection makes it extremely easy to see how the policies have changed the inbound call. The content is also logged to the console, making it easy to see what came through to the back end.

The implementation also assumes port 8080, but changing the port is exceptionally easy.

There is one enhancement planned: allowing the response test payload to be handled as XML. This will need a little tweaking of the code, as presently the JSON object is simply stringified.

The code is also available in my GitHub repository – https://github.com/mp3monster/Utils/blob/master/PlatformTest.js and an example test response file is at https://github.com/mp3monster/Utils/blob/master/testResponse.json

const http = require('http');
const fs = require('fs');

// create a simple HTTP server that will handle the requests
http.createServer((request, response) => {
  const { headers, method, url } = request;
  console.log("Called at " + new Date().toISOString());

  let body = [];
  request.on('error', (err) => {
    console.log("Svr Error Handler :" + err.toString());
    response.statusCode = 400;
    response.end();
  }).on('data', (chunk) => {
    body.push(chunk);
  }).on('end', () => {
    // at this point we have the headers, method, url and body, and can now
    // do whatever we need to in order to respond to this request
    body = Buffer.concat(body).toString();

    // record in the console what details have been received
    console.log("Received:\nMethod:" + method + "\n URL:" + url +
      "\nheaders:\n" + JSON.stringify(headers) + "\nBody:\n" + body);

    // now build the response
    response.setHeader('Content-Type', 'application/json');
    response.setHeader('PlatformTestTime', new Date().toISOString());

    // initialise our response object so that if we don't load a response
    // file then we reflect the content back
    var responseBody = { headers, method, url, body };

    // try reading a response file; if none is found we reflect instead
    fs.readFile('testResponse.json', function (err, data) {
      console.log("handling file");
      if (err != null) {
        if (err.code === 'ENOENT') {
          console.log("no return file - will reflect");
        } else {
          console.log("Read error:" + err.toString());
        }
      } else if ((data != null) && (data.length > 0)) {
        // a file exists with content - process it into a JSON object,
        // decoding the buffer as UTF-8 text first
        try {
          body = data.toString('utf8');
          console.log("test response:" + body);
          responseBody = JSON.parse(body);
        } catch (parseErr) {
          console.log("Parse error - will reflect instead:" + parseErr.toString());
        }
      }

      // serialise our response object to JSON with stringify and return it
      var output = JSON.stringify(responseBody);
      response.statusCode = 200;
      response.write(output);
      console.log("Returning:" + output);
      response.end();
    });
  });
}).listen(8080); // Activates this server, listening on port 8080.
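
Once running, the mock can be exercised with any HTTP client. For example, curl -d '{"test":true}' http://localhost:8080/any/path should return (and log) the reflected method, URL, headers and body, which also makes for a very quick smoke test of a gateway deployment routed at it.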

Understanding API Deployment State on API Platform


The new Oracle API Platform makes it possible to deploy different versions of your APIs to different gateway instances. When you’re managing the development of API policies through all the different stages of the lifecycle (design to production) from a single management tier, such a capability is essential. This is further challenged by the fact that each save of your API definition creates a new iteration (the term used to identify each saved ‘version’ of the API).

However, it does create the challenge, from a management perspective, of knowing which iterations are running on each gateway. You can get the information from the current UI, but it requires multiple steps. The UI also lends itself more to the design processes today than to the denser information views that an operational report might warrant.

I’m sure that over time these views will come, but today we can solve the problem by taking advantage of the fact that the product lives by its own ‘mission’ of offering a very rich set of APIs. As a result it becomes possible to build your own views. To that end I have written a Groovy script which will go through each API that can be seen and retrieve the iteration deployed to each logical gateway.

In terms of running the script you obviously need Groovy installed. It expects three parameters:

  • Server address e.g. https://1.2.3.4
  • Username e.g. weblogic
  • Password e.g. Welcome1

You can hardwire default values into the script, which will then be used if no parameters are provided. An example invocation follows.
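
For example, using the illustrative values above:

groovy getDeployedIterations.groovy https://1.2.3.4 weblogic Welcome1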

Here is a screenshot of some output. I have masked out some information for reasons of security, but there should be enough here to give a sense of what is happening:

APIPlatformScript

The script includes suppressing certificate validation – necessary if you haven’t yet deployed your own specific certificate and are still working with the default Oracle certificate.
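
For the curious, the usual Java/Groovy technique for that suppression is to install a trust-all TrustManager and HostnameVerifier. The following is a general sketch of the approach rather than an extract from the script, and is only ever appropriate in development environments:

import javax.net.ssl.*
import java.security.cert.X509Certificate

// build an SSLContext that trusts every certificate (development only!)
def trustAll = [
  getAcceptedIssuers: { null },
  checkClientTrusted: { X509Certificate[] chain, String authType -> },
  checkServerTrusted: { X509Certificate[] chain, String authType -> }
] as X509TrustManager

def sslContext = SSLContext.getInstance("TLS")
sslContext.init(null, [trustAll] as TrustManager[], new java.security.SecureRandom())
HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.socketFactory)

// and skip hostname verification as well
HttpsURLConnection.setDefaultHostnameVerifier({ hostname, session -> true } as HostnameVerifier)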

Feel free to take the script and play with it. I make no claims to its elegance, but I have tried to comment it so you can see what is going on, and I have kept the code fairly simple so you can see how it works and processes the JSON responses. The script is available at: https://github.com/mp3monster/Utils/blob/master/getDeployedIterations.groovy
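
At its core the script is just making authenticated REST calls and parsing the results with JsonSlurper. A minimal sketch of that pattern is below; the endpoint path and response field names here are illustrative, so check the Management API documentation for the real ones (server, username and password correspond to the parameters above):

import groovy.json.JsonSlurper

// open a connection to a management endpoint (path is illustrative)
def conn = new URL(server + "/apiplatform/management/v1/apis").openConnection()
conn.setRequestProperty("Authorization",
    "Basic " + (username + ":" + password).bytes.encodeBase64().toString())

// parse the JSON response and walk the returned items (field names assumed)
def apis = new JsonSlurper().parse(conn.inputStream)
apis.items.each { api ->
  println "${api.name} (id: ${api.id})"
}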

For more about the APIs involved in the script, check out the Oracle API Platform documentation.

2017 into 2018 as a Geek


It seems it is becoming common for people to write a personal review of the year. If you’re the old-school Christmas card sort then it gets printed and put in the card. If you’re a bit more hip then it’s a Facebook post. For those trendier than that, who knows?

Anyway, I thought I’d use my blog to reflect on what has happened and what we hope to be up to in 2018.

So the big headlines for us …

  • 1st book published as a co-author, about ICS; started another book project which should be finished in 2018.
  • Packt have been talking to me about another book project (even though my contribution to book 2 is not yet finished!). Have to admit what is being suggested is intriguing and a bit different.
  • Then there was the UKOUG Journey to the Cloud event. Having been postponed because of venue flooding, it was good to see this happen. Not to mention it being one of a number of events I have presented at this year.
  • We attended the Oracle EMEA Partner Conference for the 1st time, and presented with my co-author on ICS.
  • Contributions to supporting the UKOUG as a SIG committee member and reviewing for Oracle Scene. Being involved in a SIG committee also meant helping plan the conference.
  • Writing hasn’t just been about the books; we continue to write our own blog posts, content for Oracle-integration.cloud, plus several journals including Oracle Technology Network.
  • Presenting at Oracle Open World for the 1st time, and signing copies of our book on ICS
  • Promoted from an Oracle Ace Associate to a full Oracle Ace.

So where will 2018 take us? Well, some things we’re confident of …

What do we hope to pull off …

  • Another year presenting at Open World,
  • UKOUG Tech 18 presentations
  • Articles for Oracle Scene
  • Submissions accepted at Oracle Code London
  • Presenting at Oracle EMEA Partner Conference

Message Push Listener – Article Update


When I first wrote about Oracle Messaging Cloud we used a service called WebScript.io to make it easy to demonstrate the Message Push Listener. WebScript was essentially what we better know as a serverless or functions-oriented offering (that is, we wrote pieces of code and deployed them without any consideration of servers etc.). Well, as I prepared my Messaging Cloud demos for the UK Oracle User Group Tech 17 Conference, I discovered that WebScript is being shut down in December 2017.

In the light of this news, I need to provide an alternate implementation for my Message Push Listener demo using Google’s Cloud Functions. Before I go into the Google implementation I thought it worth sharing how I landed on Google’s offering.

Google Cloud Functions is a newly launched service with an interesting future. I had hoped to try using Project Fn (Oracle’s open source serverless offering), but the cloud offering is not yet publicly available, although you can run Fn on any platform today if you’re prepared to invest in setting up the environment (defeating the point of serverless). I know some of Oracle’s Developer Champions have had a preview, so it can’t be too far away now. I’m sure when we get a chance to access the newly announced Cloud Native Service, which will include Fn, we will revisit it. Before settling on Google we looked at several other offerings in the serverless space. Whilst this is not an exhaustive analysis, it should help give a sense of the challenges and ease of adoption. If you search today on serverless you’ll most commonly come across Auth0’s WebTask.io, AWS Lambda and IBM OpenWhisk (based on Apache OpenWhisk).

WebTask.io

I started with WebTask.io and it was very nearly a done deal, with a nice, easy-to-work-with cloud development platform and integrated testing. There is extensive support for Node.js, with a number of standard frameworks, such as Express, available without doing anything.

Other languages are supported by WebTask.io as well, but as I’m trying to create a demo that warrants very little explanation of the serverless platform, we didn’t dig into this area. Everything went swimmingly until I tried to set up external calls to my function. This became a headache: whilst the security model is not overly complex (there are several ways to provide the REST call with authentication, e.g. adding a key in the URI), the process of generating and associating the credentials was far from clear in the documentation.

AWS Lambda

I moved on to look at AWS Lambda, which I just found horribly confusing to get started with. I have heard others say that getting going isn’t straightforward, and I found myself giving up pretty quickly as the setup wasn’t that clear. Whilst I have used AWS with its IaaS capabilities, which are powerful, flexible and pretty easy to get to grips with if you understand basic ideas like virtual machines, this didn’t hold true for Lambdas.

OpenWhisk

As for OpenWhisk, we started to look at it, but getting a 404 error when trying to access the editor following the IBM documentation didn’t inspire confidence. There was, though, plenty of supporting documentation explaining how OpenWhisk works.

openwhisk_flow_of_processing

The Execution framework for OpenWhisk

  1. Nginx is used for SSL termination and forwarding appropriate HTTP calls to the next component.
  2. The Controller first disambiguates what the user is trying to do, based on the HTTP method used in the HTTP request. This is a Scala solution built using Akka & Spray. This includes …
  3. Verifying who you are (authentication) against a CouchDB-based identity store.
  4. Once approved, details of the Action to be executed are retrieved from the whisks database in CouchDB.
  5. With information on what to do, service discovery is performed using Consul, which tracks the available executors in the system. Those executors are called Invokers.
  6. Kafka is then used to protect the demand pipeline from failure by recording the request and the consumer (Invoker) identified by Consul.
  7. The Invoker is built using Scala and uses a Docker instance to run the Action, which could be anything, e.g. Node.js. The Action is injected into the container to be processed.
  8. As the result is obtained by the Invoker, it is stored in the whisks database as an activation under the ActivationId. The whisks database lives in CouchDB.

In addition to the 404, as you can see we have a two-step process to execute an action and return a response. However, the initial Message Push Listener challenge needs a call and response in a single step, so trying to massage this into a call and response was going to be challenging and a distraction from what we want to be conveying.

Using Google Functions

This brings us back to Google. Whilst the cloud IDE is not as elegant or mature as WebTask’s, it was sufficient, and the security model wasn’t imposed on us. I liked the documentation when I needed to refer back to it, though to be honest the service is pretty intuitive. You can’t fault the docs, to the point that Google even gave time over to explaining how to manage or avoid incurring costs.

Setting up was very simple, and once you’ve chosen your cloud services you get a dashboard like this:

Google Cloud mgmt

Google provides the idea of projects, which allows you to group pieces together, such as related functions. Each project is namespace-separated. If we then navigate into a Functions project we get a view as follows:

As you can see in the preceding screenshot, I created two functions within a project called OMCS. From here you can create more functions in your project or drill into an individual function, as the following view shows:

Google Functions performance

An individual function provides you with several tabbed views covering the general information (as shown above), plus Trigger, Source and Testing. We can see the other views in the following screenshots. The first shows the function editor; as you can see it is fairly simple, but sufficient to do the job.

GoogleCloud-OMCS

Once saved, if valid, the code will automatically get deployed; alternatively you can work offline and then upload the code if you want to use a nice editor like Sublime.

With your code edited and saved, the next step is to invoke it. This can be done with the next tab, or the details such as the URI can be copied so you can test from your preferred test tool, such as SoapUI, Postman or API Fortress.

Google functions Trigger

The testing view allows you to define input and output values, along with the expected outcomes. Personally I worked with SoapUI.

Google Functions Test

The important thing when running tests or diagnosing issues is being able to examine execution logs. In this area Google Functions is pretty feature-rich, with a solution that works in a style somewhat like searching in Splunk (and, I’m sure, other log analytics tools), where you can drill into the logs and build log filters on the fly. The log view is shown in the next screenshot.

Google Functions Logging 2

As you can see, the tool looks pretty straightforward and uncomplicated to use, with freedom to adapt how you work to your preferred style. Based on my experience of using Project Fn on my desktop, it is this simplicity I think we’ll see with the Cloud Native Platform from Oracle when it becomes available.

Finally, let’s take a look at the code produced in Google Functions for this example:

code

Conclusion

Google Cloud Functions, whilst its UI is a bit basic, is easy to use and get started with, certainly for use as a demo platform or perhaps for creating stubs, tests and mock endpoints. Having been critical of the other offerings for security getting in the way of a simple illustration, it is possible that Google Functions may need some work in this area too; I didn’t see anything that obviously made it easy to integrate security features.

Back to my Original Articles…

Just to tie back to the impacted articles …

Passion of Music


dust_and_grooves_9805-1-400x300

We dream of a collection like this

With Christmas we get some time off with the family and slow down a bit, even indulging in things less technical. It’s been a while since I’ve blogged on the subject of music, and I’ve been meaning to share the following TEDx presentation. I wish I could say that it reflects my personal manifesto…

Sadly very few people manage to devote the time, and make an adequate living to keep a family, to pursue this level of commitment. But we can wish, and take the suggestion to exploit the recommendations of the ‘diggers’. Want to know more? Check out ….

1st London Oracle Developer Meetup


On Monday night (18th December) I co-hosted with Luis Weir the first London Oracle Developer Meetup. Despite being a Monday evening in the run-up to Christmas, when a lot of people will be attending Christmas events, needing to finish present shopping, or even starting their holiday, we still had a tremendous turnout, with nearly 50 people out of almost 100 registrations coming to the Oracle London office.

The evening kicked off just after 6pm with beer, pizza and time for people to network. At 7pm we started with what had been scheduled to be two short 25-minute presentations sharing insights into API design best practices and an overview of Apiary. Such was the interest, interaction and conversation in the subject and content that the session overran. But herein lies one of the benefits of a Meetup over things like conferences: in a Meetup there is space and time for the presenters to adjust to what the attendees wish to cover, rather than being beholden to the venue scheduling.

With the presentations and discussions finding a suitable pause, it was an opportunity for a call to arms to be made, and for people to try developing APIs, with a mission defined which we hope people will continue with, as it will contribute to the next Meetup. You don’t need to have attended last night’s event to participate in the next one; if you want to see what we’re going to try to achieve, take a look at the end of the slide deck. We think it will be very entertaining and the source of a lot of laughter and amusement.

Some people did take up the challenge; others took it as an opportunity to talk further about the technology or just network.

We have now set up a GitHub repository so that people can contribute to the development of the API ready for the next event (https://github.com/oracledeveloperslondon/droneAPI).

If you would like to see what is being tweeted about the event, check out #OracleDeveloperMeetup on Twitter.

Photos can be seen here.

We hope you will join our Meetup and register for the event when we announce the final details. In the meantime, give Apiary a try and share with us the API you have designed.

The slides are here:

 

A busy 25 hours at UKOUG Conference


I’ve just come to the end of a very busy 25 hours at the UK Oracle User Group (UKOUG) Conference in Birmingham: four presentations, with, interestingly, the same subject area, that of Oracle Integration Cloud (OIC) / Integration Cloud Service (ICS), starting and ending the day. In between we also covered some approaches to start working towards Microservices in a Monolith World, and Oracle Messaging Cloud.

Below are the presentations on Microservices and ICS/OIC. The piece on Oracle Messaging Cloud was largely demo-based, so sharing the presentation slides wouldn’t tell you too much; the best way to find out about this capability is to read the two articles about it in OraWorld magazine (issues 6 & 7), with issue 7 perfectly timed, having become available in the last couple of days.

With the Oracle Messaging Cloud article there is one word of caution. When the article was written and submitted, I used a free cloud service called WebScript.io (which, using contemporary terminology, we’d describe as serverless). The WebScript piece served to make it easy to consume the web service calls illustrating the Push Listener feature. This service, however, is being closed down; a shame, as it was an elegantly simple solution. Given this, I am currently working on a blog post which will show how another service can take the place of WebScript.io; whilst not finalised, this may be Google Cloud Functions.

If this wasn’t enough, we also squeezed in the keynote presentations, a meeting with several other contributors to OMESA (Open Modern Enterprise Software Architecture), a lunch conversation with our publisher (Packt) and several other Oracle book authors, the Oracle Ace dinner (great food with a lot of brilliant & friendly people), some very valuable incidental conversations, and some work for a customer.

Microservices in a Monolith World

A look at Oracle Integration Cloud, its relationship to ICS, customer use cases, and an insight into why ICS.