OKit is a tremendous tool for the visual design and development of your Oracle Cloud environment. Visualizing your networks, the positioning of service gateways and so on is a lot easier than filling in web forms or writing Terraform files, as you can see the relationships between the different parts far more clearly; much the same reason a lot of people use Visio and similar tools for this work. The real beauty is that OKit can generate the Terraform and Ansible scripting that can then be used to deliver the implementation.
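To give a feel for what that generated scripting boils down to, here is a minimal hand-written sketch of the kind of Terraform an OCI network design translates into, using the OCI provider's oci_core_vcn resource; the names and CIDR range are my own assumptions for illustration, not actual OKit output.

```hcl
# Illustrative fragment only - an approximation of the sort of resource a
# visual VCN design maps to (names and CIDR range are assumptions).
variable "compartment_ocid" {}            # OCID of the target compartment, supplied at apply time

resource "oci_core_vcn" "demo_vcn" {
  compartment_id = var.compartment_ocid   # compartment the VCN is created in
  cidr_block     = "10.0.0.0/16"          # address space drawn in the designer
  display_name   = "demo-vcn"
  dns_label      = "demovcn"
}
```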
The tool isn’t currently an official Oracle product, but something built by the Oracle A-Team (a small team of gurus whose role blends developer advocacy, architecture support for customers with special edge cases, and thought leadership). We can hope that someone brings it into the fold and perhaps even incorporates it securely into the cloud dashboard. In the meantime, the code in its entirety is available on GitHub.
Oracle’s data integration product landscape outside of GoldenGate has, since the arrival of Oracle Cloud, been confusing at times. This has meant finding the right product documentation can be challenging, and knowing which product belongs in your own technology road-map harder to work out. I believe the landscape is starting to settle now, but to understand the position, let’s look at the causes of disturbance and the changes that have occurred.
Why the complexity?
I think this has come from a couple of key factors. The first is the organizational change triggered by Thomas Kurian’s departure, which saw the product organization go from essentially three parts, aligning roughly to Infrastructure, Platform and Applications, to two: Infrastructure and Apps. Add to this the fact that Oracle’s cloud has gone through two revolutions. Generation 1, now called Classic, was essentially a recognition that Oracle needed an answer to Microsoft, Google and AWS quickly (Oracle are now migrating customers off Classic). Then came Generation 2, a more considered strategy which leverages not just the lowest level of the stack (network and compute virtualization), but drives changes all the way through the internals of applications by having them use common technologies such as microservices, along with a raft of software services such as monitoring, logging, metering, events, notifications, FaaS and so on. Essentially, all the services Oracle offer are also integral to their own offerings. The nice thing about Gen2 is you can see a strong alignment to the CNCF (Cloud Native Computing Foundation) along with other open public standards, formal or de-facto, such as MicroProfile with Helidon and Apache projects. As a result, despite the perceptions of Oracle, modern apps stand a better chance of portability.
Impact on ODI
Oracle’s Data Integration capabilities, cloud or otherwise, have been best known as Oracle Data Integrator, or ODI. The original ODI was the data equivalent of SOA Suite, implementing Extract Load Transform (ELT) rather than ETL, as this meant the Oracle DB was fully leveraged. It was built on WebLogic Server.
Along Came Cloud
When Oracle Cloud came along, there was a natural need for ODI capabilities. Like SOA Suite, the first evolution was to provide ODI Cloud Service, just as SOA Suite had SOA Cloud Service. Both are essentially the same on-premises product with UIs to manage deployment and configuration.
ODI’s cloud transformation led to ODI CS evolving into DIPC (Data Integration Platform Cloud). Very much an evolution, but with a more web-centred experience for designing the integrations. However, DIPC is no longer available (except possibly to customers already using it).
Whilst DIPC had been evolving, the need for on-premises ODI capabilities continued. We don’t know for sure, but we can speculate that divergent development was happening, creating overhead, as ODI remained as an on-prem solution. We then saw the arrival of ODI Marketplace, which provides an easier transition, including taking into account licensing considerations. DIPC has been superseded by ODI Marketplace.
Oracle has developed a Marketplace just like the other major cloud players, so that 3rd-party vendors can offer their technologies on the Oracle cloud, just as you can with Azure and AWS. But Oracle have also exploited it to offer their traditional products, normally associated with on-premises deployments, in the cloud. As a result we saw ODI Marketplace; a smart move, as it offers the possibility of carrying on-prem licensing into the cloud along with portability.
So far, ODI in its different forms has continued to leverage its WebLogic foundations. But by this time the Gen2 Oracle Cloud and the organizational structures behind it had been well established and were working up the value stack. The products in the middleware space have been impacted by both the technology strategies and the organization. As a result, the API capabilities, for example, have been aligned to the OCI-native space, while Integration Cloud has been moved towards the Apps space. In many respects this reflects a low-code vs code-native model.
The new Gen2-native data integration offering appears to be very much a ground-up build, as it exploits Apache Spark and Function as a Service (FaaS) as foundational elements. As a ground-up build, it doesn’t inherit all the adapters the original ODI can offer. This means that, as a solution today, it is very focused on specific needs around supporting data movement between the various Oracle Cloud storage and Database as a Service solutions, rather than general ingestion and extraction processes.
Products are evolving, but the direction of travel appears to be resolving. We are, though, still in a period where there are capability gaps between the Gen2-native solution and the traditional ODI via Marketplace solution. As a result, the question becomes less about which product, and more about when, and if I have to invest in ODI Marketplace now, how I migrate once the native product catches up.
Earlier this month, as part of the Virtual Oracle Developer Meetups, we were very fortunate to have Oracle Ace Martien van den Akker present on the subject of the magic of correlations in SOA, BPM, and Oracle Integration Cloud. Martien not only presents to the Oracle community but is also very active on the Oracle community sites (community.oracle.com and Cloud Customer Connect), sharing his wealth of knowledge. When it comes to the tough questions about Oracle middleware tech on these platforms, you stand a good chance that Martien will be the one answering your question.
This insightful presentation not only addressed the traditional Oracle Integration approach using SOA and BPM but also contrasted it with the capabilities provided by Oracle Cloud. Martien was generous enough to allow us to record the presentation and share it (below), along with the demo resources:
Today was one of my sessions; whilst I only co-hosted, we got to hear a great presentation with a heart-warming story which, in these challenging times, seems all the more appropriate. Christian McCabe (Steltix) and Filip Huysmans (Contribute) presented on how a multinational hackathon spanning South Africa and Belgium was put together to help the children of Christel House (a charity who work to provide education to those who would not normally get access to it). Not only was the hackathon engineered to give the students a chance to learn and experience software development in a pretty realistic context, it also provided the school with some software to help their everyday activities, in this case managing books in their library.
The hackathon yielded a lot of successful outcomes (Steltix wrote about it here), but what was really interesting is that, whilst working with the school, the children and interns (from both Steltix and Contribute) took a lot of lessons away as well.
We’ve been told that, because of current events in the US, this event is going to be rescheduled.
I am pleased to say that I will be talking about Fluentd at the Cloud Native eParty virtual conference on 2nd June 2020. I’ll be presenting at 4pm, and then hanging out on the conference slack channel to answer any more questions people might have.
The news about Oracle offering some free cloud services ‘for life’ is making an impact. But, the free services don’t end there. The pricing of some other native cloud services includes some free bands. So it’s worth keeping an eye on the fine print. I wouldn’t be surprised if we see limited capacity access in other areas.
Oracle Functions – whilst the core of this service is built on the open-source Fn Project (also largely driven by Oracle), the managed service has a free tier allowing up to 2 million invocations that can consume up to 400,000 gigabyte-seconds of memory (details can be seen here). Plenty to experiment with the concepts behind Serverless, aka FaaS, capabilities.
Oracle Notifications, whilst focused on the technical side of gathering key event data from OCI and its services, is, as the documentation states, for “sending notifications to numerous interested parties, or even synchronizing the moving parts of a distributed application” – this obviously means a service with characteristics a bit like AWS’ SNS. Like SNS, it can be hooked up to email and other HTTPS services using Oracle Events, which also has free use. Events is particularly interesting as it bases the event structure on the CNCF CloudEvents spec. There is an excellent illustration of such a use case in the Oracle blogs here.
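For anyone who hasn’t met CloudEvents before, the value of that alignment is a common envelope for event data regardless of producer. The sketch below shows the general shape of a CloudEvents-style payload (v1.0 attribute names); the event type, source and data fields are purely illustrative assumptions, not an actual OCI Events payload.

```json
{
  "specversion": "1.0",
  "id": "example-event-id-0001",
  "source": "/demo/objectstorage",
  "type": "com.example.objectstorage.createobject",
  "time": "2020-06-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "bucketName": "demo-bucket",
    "objectName": "report.csv"
  }
}
```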
It will be interesting to see if we see a similar trend with other Oracle cloud-native services. A new take on the now-defunct Application Container Cloud Service (ACCS) would be an ideal vehicle, though whether there is sufficient demand for such a capability is not clear (it would in effect be an always-live service like a Kubernetes solution, but with a simpler, smaller footprint more like Functions in a multi-tenant environment, and without the potential latency of a Function being activated).
Whilst the weather may have put some off venturing out, not so for our intrepid duo of presenters – Joost Volker (Oracle PM for Blockchain) and Robert van Mölken (Oracle Groundbreaker Ambassador and author of Blockchain across Oracle) – who between them had to negotiate protesting farmers, traffic jams, flight delays (the wrong kind of rain to land in London) and London’s rush hour traffic.
Getting to grips with Fluentd configuration, which describes how the logging events it receives should be handled, can be a little odd (at least in my opinion) until you appreciate a couple of foundation points, at which point things start to click, and then you’ll find it pretty easy to understand.
It would be hugely helpful if the online documentation provided some of the points I’ll highlight upfront, rather than throwing you into a simple example which tells you about the configuration but doesn’t elaborate as deeply as might be worthwhile. Of course, that viewpoint may be borne of the fact I have reviewed so many books that I’ve come to expect things a certain way.
But before I highlight what I think are the key points of understanding, let me make the case for getting to grips with Fluentd.
Why master Fluentd?
Fluentd’s purpose is to allow you to take log events from many sources and filter, transform and route them to the necessary endpoints. Whilst it forms part of a standard Kubernetes deployment (such as that provided by Oracle and Azure, for example), it can support monolithic environments just as easily, with plugins for common log formats and frameworks. You could view it as effectively a lightweight middleware for logging (particularly if you use the Fluent Bit variant, which is effectively a pared-back implementation).
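As a taste of what that filter/transform/route model looks like in practice, below is a minimal sketch of a Fluentd configuration; the file paths and tag are assumptions for illustration, but the source, filter and match blocks are the genuine building blocks of any Fluentd config.

```
# Minimal illustrative Fluentd configuration (paths and tag are assumptions)
<source>
  @type tail                      # follow a log file as new lines arrive
  path /var/log/demo/app.log      # hypothetical application log
  pos_file /var/log/demo/app.log.pos
  tag demo.app                    # tag used to route these events below
  <parse>
    @type none                    # treat each line as an unstructured message
  </parse>
</source>

<filter demo.app>
  @type record_transformer        # enrich each matching event
  <record>
    hostname "#{Socket.gethostname}"
  </record>
</filter>

<match demo.app>
  @type stdout                    # route the matching events to the console
</match>
```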
If this isn’t sufficient to convince you, and if Google searches are a reflection of adoption, then my previous post reflecting on the Observability – London Oracle Developer Meetup shows a plot of the steady growth. This is before taking into account that a number of cloud vendors have wrapped Fluentd/Fluent Bit into their wider capabilities, such as Google (see here).
Not only can you see it as middleware for logging, it can also be extended with custom processors and adapters built as Ruby Gems, making it very extensible.
When I first wrote about Oracle Messaging Cloud we used a service called WebScript.io to make it easy to demonstrate the Message Push Listener. WebScript was essentially what we better know as a Serverless or Functions-oriented offering (that is, we wrote pieces of code and deployed them without any consideration of servers etc.). As I prepared my Messaging Cloud demos for the UK Oracle User Group Tech 17 Conference, I discovered that WebScript is being shut down in December 2017.
In light of this news, I needed to provide an alternative implementation for my Message Push Listener demo, using Google’s Cloud Functions. Before I go into the Google implementation, I thought it worth sharing how I landed on Google’s offering.
Google Cloud Functions is a new service with an interesting future. I had hoped to use Project Fn (Oracle’s open-source serverless offering), but the cloud offering is not yet publicly available – although you can run Fn on any platform today if you’re prepared to invest in setting up the environment (defeating the point of serverless). I know some of Oracle’s Developer Champions have had a preview, so it can’t be too far away now. I’m sure when we get a chance to access the newly announced Cloud Native Service, which will include Fn, we will revisit it. Before settling on Google we looked at several other offerings in the serverless space. Whilst this is not an exhaustive analysis, it should help give a sense of the challenges and ease of adoption. If you search today on Serverless you’ll most commonly come across Auth0’s WebTask.io, AWS Lambda and IBM OpenWhisk (based on Apache OpenWhisk).
I started with WebTask.io, and it was very nearly a done deal, with a nice, easy-to-use cloud development platform and integrated testing. There is extensive support for Node.js, and a number of standard frameworks such as Express are available without doing anything.
Other languages are supported by WebTask.io as well, but as I’m trying to create a demo that warrants very little explanation of the serverless platform, we didn’t dig into this area. Everything went swimmingly until I tried to set up external calls to my function. This became a headache: whilst the security model is not overly complex (there are several ways to provide the REST call with authentication, e.g. adding a key in the URI), the process of generating and associating the credentials was far from clear in the documentation.
I then moved on to AWS Lambda, which I just found horribly confusing to get started with; I have heard others say that getting going isn’t straightforward. I found myself giving up pretty quickly as the setup wasn’t that clear. Whilst I have used AWS for its IaaS capabilities, which are powerful, flexible and pretty easy to get to grips with if you understand basic ideas like virtual machines, this didn’t hold true for Lambda.
As for OpenWhisk, we started to look at it, but getting a 404 error when trying to access the editor following the IBM documentation didn’t inspire confidence. There was, however, plenty of supporting documentation explaining how OpenWhisk works.
The execution framework for OpenWhisk:
Nginx is used for SSL termination and forwarding appropriate HTTP calls to the next component.
The Controller first disambiguates what the user is trying to do, based on the HTTP method used in the request. This is a Scala solution built using Akka and Spray. This includes:
Verification of who you are (authentication) against a CouchDB-based identity store.
Once approved, details of the Action to be executed are retrieved from the whisks database in CouchDB.
With information on what to do, service discovery is performed using Consul, which tracks the available executors in the system. These executors are called Invokers.
Kafka is then used to protect the demand pipeline from failure by recording the request and the consumer (Invoker) identified by Consul.
The Invoker is built using Scala and uses a Docker container to run the Action, which could be written in anything, e.g. Node.js. The Action is injected into the container to be processed.
Once the result is obtained by the Invoker, it is stored in the whisks database (in CouchDB) as an activation, under the ActivationId.
In addition to the 404, as you can see we have a two-step process to execute an action and return a response. However, the Message Push Listener challenge needs a call and response in a single step, so trying to massage this into a call and response is going to be challenging and a distraction from what we want to be conveying.
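That two-step flow is easy to see from the command line. The sketch below shows roughly how it plays out with the standard wsk CLI; the action name, parameter and activation id are hypothetical, and the output is paraphrased in comments rather than captured verbatim.

```
# Non-blocking invocation: you get back an activation id, not the result
$ wsk action invoke demoAction --param name world
# -> ok: invoked demoAction with id <activation-id>

# Step two: fetch the result of that activation separately
$ wsk activation result <activation-id>
# -> { "greeting": "Hello world" }

# A blocking invocation collapses this into one call, at the cost of waiting
$ wsk action invoke demoAction --param name world --blocking --result
# -> { "greeting": "Hello world" }
```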
Using Google Functions
This brings us back to Google. Whilst the cloud IDE is not as elegant or mature as WebTask, it was sufficient, and the security model wasn’t imposed. I liked the documentation when I needed to refer back to it, but to be honest the tool is pretty intuitive. You can’t fault the docs, to the point that Google gives time over to explaining how to manage or avoid incurring costs.
Setting up was very simple, and once you’ve chosen your cloud services you get a dashboard like this:
Google provides the idea of projects, which allows you to group pieces together – such as related functions. Each project is namespace-separated. If we then navigate into a Functions project we get a view as follows:
As you can see in the preceding diagram, I created two functions within a project called OMCS. From here you can create more functions in your project or drill into an individual function, as the following view shows:
An individual function provides you with several tabbed views covering the General information (as shown above), Trigger, Source and Testing. We can see the other views in the following screenshots. The next screenshot shows the function editor; as you can see it is fairly simple, but sufficient to do the job.
Once saved, if valid, the code will automatically be deployed; alternatively, you can work offline and then upload the code if you want to use a nicer editor such as Sublime.
With your code edited and saved, the next step is to invoke it. This can be done from the next tab, or the details such as the URI can be copied so you can test from your preferred test tool, such as SoapUI, Postman or API Fortress.
The testing view allows you to define input and output values, along with the outcomes. Personally, I worked with SoapUI.
The important thing when running tests or diagnosing issues is being able to examine execution logs. In this area Google Functions is pretty feature-rich, with a solution that works in a style somewhat like searching in Splunk (and, I’m sure, other log analytics tools), where you can drill into the logs and build log filters on the fly. The log view is shown in the next screenshot.
As you can see, the tool looks pretty straightforward and uncomplicated to use, with the freedom to adapt how you work to your preferred style. Based on my experience of using Project Fn on my desktop, it is this simplicity I think we’ll see with the Cloud Native Platform from Oracle when it becomes available.
Finally, let’s take a look at the Google Functions code produced for this example:
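The original listing isn’t reproduced here, but a minimal HTTP-triggered Google Cloud Function in Node.js of the sort used for the Message Push Listener demo would look something like the sketch below. The function name, the challenge header name and the response handling are my own assumptions for illustration, not the exact demo code or the documented OMCS contract.

```javascript
// Illustrative sketch of an HTTP-triggered Google Cloud Function (Node.js).
// The header name below is a placeholder, not the documented OMCS header.
exports.messagePushListener = (req, res) => {
  // The Message Push Listener registration sends a challenge that must be
  // echoed back in the same call/response - the single-step exchange
  // discussed above.
  const challenge = req.get('X-Example-Challenge'); // hypothetical header name

  if (challenge) {
    res.status(200).send(challenge);
    return;
  }

  // For normal message pushes, log the payload so it appears in the
  // Functions log viewer, then acknowledge receipt.
  console.log('Received message:', JSON.stringify(req.body));
  res.status(200).send('OK');
};
```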
Whilst the Google Functions UI is a bit basic, it is easy to use and get started with, certainly for use as a demo platform or perhaps for creating stubs, tests and mock endpoints. Having been critical of the other offerings for security getting in the way of a simple illustration, it is possible that Google Functions may need some work in this area; I didn’t see anything that made it obvious how to integrate security features easily.
So it has been a busy week in terms of seeing articles published that I’ve at least contributed to. It’s funny how the gap between writing and publishing can be several weeks, so whilst we’re thinking about new things, we see the Twitter pickup etc. for work that is several weeks old.
Anyway, first up was a contribution to Leon Smiers‘ blog on integrating chatbots, the latest in a series of excellent blog posts looking at the capabilities a chatbot solution needs. This latest post is about integration, hence my contribution. My involvement in the blog series goes back to the conversations Leon and I had whilst at the Oracle Partner event earlier this year. Since then, I have helped Leon by providing a critical eye and offering suggestions.
The big event has been having an article published on the Oracle Technology Network (OTN). This is a bit of an honour, as we were invited to write. My piece can be found here. It is actually part of a pair of articles written for OTN; the other was written by Luis Weir and is the parent article, about API management.
My article came about as a result of several discussions with Luis whilst travelling to and from a client, about the relationship between microservice registries, load balancers and API gateways – particularly as API gateways have a natural relationship with microservices. I’ll say no more; go read the article.