
Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Tag Archives: Cloud

Costs in Multi-Cloud

28 Wednesday Aug 2019

Posted by mp3monster in General, Oracle, Technology

≈ Leave a comment

Tags

AWS, Azure, Cloud, costs, data, ExpressRoute, FastConnect, Oracle, OracleCloud

Over the last couple of years, we have seen growing references to multi-cloud. That is to say, people are recognizing that organisations, particularly larger ones, are ending up with cloud services from many different vendors. This has come about, at least in part, because departments within an organization can buy meaningful cloud resources within their local budgets.

There is a competitive benefit to the recent partnership agreement between Microsoft and Oracle, given the market share AWS has in comparison to everyone else. Irrespective of the positioning with AWS, this agreement has arisen because of the adoption of multi-cloud. It also provides a solution to the problem of running highly resilient Oracle database setups using RAC, DataGuard etc., which can now be made available to Azure without risk to security or the all-important network performance that is essential to DB operation. Likewise, Oracle’s SaaS offerings are sector leaders if not best in class, something Microsoft can’t compete with. But at the other end, regardless of Oracle’s offerings, organisations will often prefer the Microsoft development ecosystem because of its alignment to office tooling and the ease of building solutions quickly.

Multi-cloud, even with agreements like the Microsoft and Oracle one (see here), doesn’t mean there won’t be higher costs in crossing clouds. Let’s see where the costs reside …

  • Data egress (and in some cases ingress as well) from clouds costs money. Ingress costs have largely been eliminated because they can be seen as a barrier to selling services, particularly big data services. Data egress can, however, be an issue. Oracle has made this cost low enough to be almost negligible, but not necessarily others, as the following comparison shows …

  • Establishing the high-performance connections between Azure and Oracle Cloud that the agreement supports (the same tech used for cloud-to-ground connectivity) does incur a cost. In Oracle’s case, there is a fee for the connection (not a large cost, but one that exists nonetheless) plus any traffic fees from the provider of the network connection spanning the data centre locations. This is because you’re leasing capacity on someone’s dedicated fibre or MPLS services. Now, this should prove to be small, as part of the enabler of this offering is that Oracle and Microsoft cloud DCs are often physically provided by the same provider, or at least the centres are physically pretty close, as a result of both companies gravitating to locations with optimal highly available infrastructure (power, telecommunications), favourable legal and commercial factors, and the specialist skills needed.

If data egress is the key challenge to costs, what drives the data egress beyond the obvious content for user interfaces? …

  • Obviously, you have the business data flows; some of these flows will be understood by the business community, but not all – this comes down to the way data from one cloud can be exposed to another. For example, inefficient services with APIs that require frequent polling, rather than expressing the request efficiently using HTTP header attributes or utilizing frameworks such as webhooks so data can be pushed only when it changes (see the sketch after this list).
  • High-speed data access often drives data replication, with databases in multiple clouds holding mirror-image data in each location even if the majority of the data is not necessarily needed. This can happen with technologies such as Kafka, where non-compacted topics mean every event can be replicated even if that event has a short lifetime.
  • One of the hidden costs is the operational task of gathering logs into a combined view so end-to-end insights can be obtained. Detailed logs can actually yield more ‘data’ by volume than the business flows themselves, because they are semi-structured, intended to be very readable, and captured at the most granular level to help debug and test.
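To make the polling point concrete, here is a minimal Groovy sketch. The endpoint and data are invented purely for illustration and are not tied to any particular product; the point is that a conditional GET with an ETag/If-None-Match header means an unchanged payload is never re-sent across the cloud boundary, and a webhook-style push removes the repeated poll entirely.

  // Illustrative only: poll a hypothetical API with a conditional GET so that
  // unchanged data is not re-transferred (an HTTP 304 carries no body, so the
  // payload itself never crosses the cloud boundary).
  import java.net.HttpURLConnection

  String endpoint = 'https://api.example.com/orders'   // hypothetical endpoint
  String lastEtag = null                                // ETag remembered from the previous poll

  def conn = (HttpURLConnection) new URL(endpoint).openConnection()
  conn.requestMethod = 'GET'
  if (lastEtag) {
      // Ask the server to send the body only if it has changed since last time
      conn.setRequestProperty('If-None-Match', lastEtag)
  }

  int status = conn.responseCode
  if (status == 304) {
      println 'Nothing changed - no payload crossed the cloud boundary'
  } else if (status == 200) {
      lastEtag = conn.getHeaderField('ETag')            // remember for the next poll
      String body = conn.inputStream.getText('UTF-8')
      println "Received ${body.length()} characters of fresh data"
  }
  conn.disconnect()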

In addition to the data flows, you need to consider how other linkages, beyond the Oracle-Azure connection, are involved. As the detailed documentation makes clear, it is not possible to get your on-premises location connected to one of the clouds (e.g. via Oracle FastConnect) and then assume your traffic can hop to Azure over the bridge between FastConnect and Azure’s ExpressRoute. To add performance to your solution parts in both Azure and Oracle Cloud, you still need both FastConnect and ExpressRoute configured to your on-premises location. This, of course, may impact how bulk data for lift-and-shift app use cases such as EBS is handled. For example, if you choose to regularly bulk-transfer data between on-premises and EBS via the app/middleware tier rather than directly via the DB, and that mid-tier is running in Azure, you will need both routes established.

Conclusion

There is no doubt that the Oracle-Azure cloud-to-cloud linkage is a step forward, but ‘the devil is in the details‘ as the saying goes. To optimize the benefits and savings, we’d suggest the following:

  • think through your use cases – understand data flow and volume (is someone bulk-syncing application data with a data warehouse?),
  • define a cloud data strategy – to lay out principles and approaches and identify compliance needs. This is particularly helpful for custom solution development, so that the right level of log data is consolidated with the important details, and data retention addresses compliance requirements without ratcheting up unnecessary costs (there is a tendency to hoard data just in case – if this is really wanted, think about how it’s stored),
  • based on common business usage models, define a simple forecasting formula – being able to quantify data costs will always make it easier to challenge data-hoarding tendencies (see the sketch after this list),
  • confirm the inter-cloud network vendor charges when working with multi-cloud.
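By way of illustration only, a simple forecasting formula might look like the following Groovy snippet. Every number in it is an invented assumption (call volume, payload size, overhead factor and the per-GB rate), not any vendor’s actual pricing; the point is simply that a few agreed inputs make egress costs quantifiable and therefore easy to challenge.

  // Hypothetical worked example of a simple egress forecasting formula.
  // All values are illustrative assumptions, not vendor pricing.
  def callsPerDay       = 2_000_000   // API calls crossing the cloud boundary per day
  def avgPayloadKb      = 4           // average response payload in KB
  def logOverheadFactor = 1.5         // extra volume from logs/telemetry shipped out
  def egressRatePerGb   = 0.05        // assumed cost per GB once past any free allowance

  def gbPerMonth   = callsPerDay * avgPayloadKb * logOverheadFactor * 30 / (1024 * 1024)
  def costPerMonth = gbPerMonth * egressRatePerGb

  printf('Estimated egress: %.1f GB per month, costing roughly %.2f per month%n', gbPerMonth, costPerMonth)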


EMEA PaaS Forum 2019 in review

15 Monday Apr 2019

Posted by mp3monster in General

≈ 1 Comment

Tags

Ace, Capgemini, Cloud, conference, development, drone, Luis Weir, meetup, Oracle, PaaS, Presentation, Technology

Image by @motivcx

Another spring means another excellent Oracle EMEA PaaS Forum for Oracle partners. Every year Juergen Kress organizes the event, finding really nice venues to host several hundred people over four and a half days.

The event is split into several parts. Monday afternoon normally involves Oracle Aces presenting on best practices, insights on applying the various technologies, and so on. For me this meant presenting on the London Developer Meetup, looking at how it worked, what has been successful, and what hasn’t. Those who have read my blogs on the subject (here) will know about our drone initiative.

Meetups – The Oracle Ace Way from Phil Wilkins

Picture by @AmyGrangeX

Then Tuesday is a single-stream day where Juergen has managed to pull in SVPs and Senior Product Managers from around the globe to provide a high-level view of what has been going on with their products. For anyone consulting in the Oracle domain, this is incredibly useful. For example, there is a clear strategy coalescing around AI and Machine Learning, both as a service proposition to users and in how these technologies are being made available and used within other products. Other areas such as OIC and SOA CS have stability and maturity, and the road map is about maximising connectivity with the newer products.

But before the sessions start, Juergen opens with some remarks and demos something engaging. In previous years this has been things like Digital Assistants/Chatbots and so on. This year, we were fortunate to be an active contributor by demoing the drone through the use of APIs and talking about the ideas. The dry runs of the demo on Monday went without a problem, but when it came to the main show, the drone was a little uncooperative – we think because the air-con had really kicked in. But importantly, even without achieving the desired result, the message of engagement made it home.

Wednesday is split into streams with in-depth sessions from the different Product Managers. The amount of insight gained from these sessions is tremendous, some of which is very much protected by safe harbour statements or not for public disclosure, such are the honest and open discussions. The day closes with an Ace Director initiative, which Luis Weir (Capgemini Oracle CTO) is part of, demonstrating the application of Oracle Cloud products to a plausible use case. This session has become something of a tradition now.

The day’s business concludes with awards, and for a second year the UK Capgemini team has taken home two awards, for APIs and PaaS Contribution.

Luis Weir with his API award

The final two days are then a choice of hackathon or half-day training sessions on different products with the relevant Product Managers – an excellent opportunity to pick the brains of the presenters as well as get hands-on experience with the different products.

The week isn’t without its social and networking activities, of course …



Defining Boundaries for Logical Gateways on the API Platform in a multi-cloud / multi-region context

31 Wednesday Oct 2018

Posted by mp3monster in API Platform CS, General, Oracle, Technology

≈ 3 Comments

Tags

API, API Platform, Cloud, Gateways, Oracle

The Oracle API Platform takes a different licensing model to many platforms; rather than being CPU-based, it works by the use of Logical Gateways and blocks of 25 million successful API calls per month. This means you can have as many actual gateway nodes as you like within a logical group to ensure resilience; essentially, how widely you deploy the gateways is more of a maintenance consideration (i.e. more nodes means more gateways to take through a maintenance process, from the OS through to the gateway itself).

In our book (here) we described the use of logical gateways (groups of gateway nodes operating together) based on the classic development model, which provides a solid foundation and can leverage the gateway based routing policy very effectively.

logical partitions

But things get a little trickier if you move into the cloud and elect to distribute the back-end services geographically, rather than have a single global instance for the back-end implementation and leverage technologies such as Content Delivery Networks to cache data at the cloud edge, with their rapid routing capabilities to offset performance factors.

Classic Global split of geographies

Some of the typical reasons for geographically distributing solutions are …

  • The low hit rate on data means caching solutions like CDNs are unlikely to yield the performance benefits wanted, and considerable additional work is needed to ‘warm’ the cache,
  • Different regions require different back-end implementations – ordering of products in one part of the world may be fulfilled using a partner, but in another it is directly satisfied,
  • Data is subject to residency/sovereignty rules – consider China for example. But Germany and India also have special considerations as well.

So our Global splits start to look like:

Global split now adding extra divisions for India, China, Russia etc

The challenge that comes is that regional routing may be resolved on the Internet side of things through geo-routing, such as the facilities provided by AWS Route53 and Oracle’s Dyn DNS, to find the nearest local gateway. However, geo DNS may not be achievable internally (certainly not for AWS), so routing to the nearest local back end needs to be handled by the gateway. Gateway-based routing can solve the problem based on logical gateways – so if we logically group gateways regionally, then that works. But this then conflicts with the use of gateway-based routing for the separation of Development, Test etc.

Routing Options

So, what are the options? Here are a few …

  • Make your logical divisions both by environment and by region – this is fine if you’re processing very high volumes, i.e. hundreds of millions of calls or more, so the cost of additional logical gateways is relatively small in the total budget.
Taking the geo split and applying the traditional layers as well has increased the number of logical gateways

This problem can be further exacerbated if you consider that many larger organisations are likely to end up with different cloud vendors in the same part of the world, for example AWS and Azure, or Oracle and Google. So continuing the segmentation can become an expensive challenge, as the following view helps show:


It is possible to contract things slightly by only having development and test cloud services wherever your core development centre is based. Note that in the previous and next diagrams we’ve removed the region/country-specific gateway drivers.


  • Don’t segment based on environment, but only on the region – but then how do you control changes in the API configuration so they don’t propagate immediately into production?
  • Keep the existing model but clone APIs for each region – certainly the tooling we’ve shared (Managing API Policy Versioning in Oracle API Platform) makes this possible, but it’s pretty inelegant and error-prone, as it would be easy to forget to clone a change, and the cloning logic needs to be extended to take into account the bits that must be region-specific.
  • Assuming you have a DNS address for the target, you could effectively rewrite the resolution of the address by changing its meaning in each gateway node’s hosts file. Inelegant, but effective if you have automated the deployment and configuration of your gateway servers.
  • Header-based routing with the region and environment as header attributes. This requires either the client to set the values (not good, as you’re revealing traits of the implementation to your API consumer), or applying custom policies before the header-based routing that insert those attributes based on the gateway’s location etc.
  • Build a new type of gateway based routing which allows both the environment (dev, test etc) and location (region) to inform the routing,

Or – and this is the point of this blog – use gateway-based routing and leverage some intelligent DNS naming and the way the API Platform works, with a little bit of Groovy or a custom Java policy.
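The detail sits behind the ‘Continue reading’ link below; purely as an illustration of the DNS-naming idea (every name here is invented, and this is not the API Platform’s actual policy API), the gateway-side logic might look something like this in Groovy:

  // Illustrative sketch only - the property names and DNS scheme are invented.
  // Idea: each gateway node knows its own region and environment, and the
  // back-end hostname is derived from both, so a single logical routing rule
  // works everywhere: api.<region>.<environment>.example.internal
  def region      = System.getProperty('gateway.region', 'emea')   // e.g. emea, apac, us
  def environment = System.getProperty('gateway.env', 'dev')       // e.g. dev, test, prod

  def backendHost = "api.${region}.${environment}.example.internal"
  def targetUrl   = "https://${backendHost}/orders/v1"

  println "Routing this gateway's traffic to ${targetUrl}"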

Continue reading →


Analytics and Stats for APIs

05 Friday Oct 2018

Posted by mp3monster in API Platform CS, General, Oracle, Technology, tools

≈ 3 Comments

Tags

API, CLI, Cloud, Groovy, monetization, Oracle, reporting, stats, util

NOTE: This utility needs revamping to support IDCS – for more, see Making Scripts Work with IDCS Deployed PaaS

The Oracle API Platform provides the means to examine statistics and slice and dice the numbers by application, gateway, duration and so on, resulting in visually appealing graphical representations. The way the analytics works means you can bookmark specific views, so you can return to the same report view with the relevant features as often as you like. However, presently there is no data export option.

The question of why you would want to export the information comes down to several possible use cases, all of which relate to cost management. The API Platform will eventually have all the desired data views, but for now this is something to help address the following:

  • Monetization – we can see which consumer has been using which services and by how much, and then send the data to a company’s accounting systems to invoice the users,
  • The ability to examine demand and workload over time to create a projection of the likely infrastructure needs – to achieve this, the API statistics need to be overlaid with infrastructure and performance details so we can extrapolate API growth against server workload.

To address these kinds of requirements, we have taken advantage of the fact that the API Platform has drunk its own Champagne, as they say, and made many of the analytics querying APIs publicly available. As with the other API Platform tools, the logic has been written in Groovy and is freely available for use – we’ve covered the code with a Creative Commons license.

The tool includes a range of parameters that allow the retrieved data to be filtered in a number of different ways before being written to a CSV file – which logical gateways to examine, which API or Application(s) to report on. Finally, just to help, some basic stats are produced: a count of logical gateways, API calls, APIs defined and Application definitions. The first three factors inform your cloud costs; together, the stats can help Oracle understand your use case. Note that the parameters which impact the CSV generation can also materially impact the reported numbers. An example invocation is shown after the parameter descriptions below.

Parameters:

The first three values must always be provided, and in the order shown here

  1. user name to access the source management cloud
  2. password for the source management cloud
  3. The server address without any attributes e.g. https://1.2.3.4

All the following values are optional

  • -h or -help – provides this information
  • -g – the logical gateway to retrieve numbers from, e.g. production or development. Using ALL with this parameter will result in ALL gateways being examined
  • -f – the file to target the CSV data should be written to. If not set then the default of
  • -t – indicates whether the data provided should be taken from an APPS perspective or from an API view by passing either APPS | API
  • -d – will get the script to report more information about what is happening
  • -p – reporting period which is defined by a number as follows:
    • 0 – Last 365 days – data is given as per month
    • 1 – Last 30 days – this is the default if no information is provided – data is given as per day
    • 2 – Last 7 days – data is given as per day
    • 3 – Last day – data is given as per hour
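To make the parameters concrete, a typical invocation might look like the following. The script name, credentials and server address are placeholders for illustration; the flags map directly onto the descriptions above (all gateways, API view, last 7 days, output to stats.csv).

  groovy apiStats.groovy myUser myPassword https://1.2.3.4 -g ALL -t API -p 2 -f stats.csv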

NB – still testing the utility at this moment – will remove this comment once happy


API Developer Podcast

23 Monday Jul 2018

Posted by mp3monster in API Platform CS, Books, General, Oracle, Podcasts, Technology

≈ Leave a comment

Tags

API, Cloud, developer, Oracle, podcast

The authors of the API Platform book got to record an Oracle Developer Podcast together in support of the book – the recording can be found here or here.

As ever, thanks to Bob Rhubart for giving us this opportunity.

 


Implementing API Platform Book extract

19 Tuesday Jun 2018

Posted by mp3monster in API Platform CS, Books, Packt, Technology

≈ Leave a comment

Tags

API, book, Cloud, extract, Packt

An extract of our new book Implementing API Platform has been made available by the publisher, Packt, here. Of course, you could enjoy all the content by buying the book directly from Packt (go here) or from book retailers such as Amazon (here).


Lessons in Oracle Cloud Password Management

07 Monday May 2018

Posted by mp3monster in General, Oracle, Technology

≈ 2 Comments

Tags

APIP CS, Cloud, DaaS, Oracle, password, Security

Oracle Cloud is growing and maturing at a tremendous rate, if the breadth of PaaS capabilities is any indication. However, there are a few gotchas out there that can cause some headaches if they catch you out. These typically relate to processes that cut across different functional areas. A common middleware stack (API CS, SOA CS, OIC etc.) will look something like the following:


As the diagram shows, when you build the cloud services, the layers get configured with credentials for the lower layers they need (although Oracle has in the pipeline Oracle-managed versions of many services, where this is probably going to be hidden from us). Continue reading →


Message Push Listener – Article Update

30 Saturday Dec 2017

Posted by mp3monster in Cloud, development, General, NodeJS Cloud, Technology

≈ Leave a comment

Tags

"Message Push Listener", article, AWS, Cloud, fn, Functions, google, IBM, JMS, Lambda, messaging, OpenWhisk, Oracle, OraWorld, serverless

When I first wrote about Oracle Messaging Cloud, we used a service called WebScript.io to make it easy to demonstrate the Message Push Listener. WebScript was essentially what we better know as a serverless or Functions-oriented offering (that is, we wrote pieces of code and deployed them without any consideration of servers etc.). Well, as I prepared my Messaging Cloud demos for the UK Oracle User Group Tech 17 Conference, I discovered that WebScript is being shut down in December 2017.

In the light of this news, I needed to provide an alternate implementation for my Message Push Listener demo using Google’s Cloud Functions. Before I go into the Google implementation, I thought it worth sharing how I landed on Google’s offering.

Google Cloud Functions is a new service that has been launched with an interesting future. I had hoped to try using Project Fn (Oracle’s open source serverless offering), but the cloud offering is not yet publicly available – although you can run Fn on any platform today if you’re prepared to invest in setting up the environment (defeating the point of serverless). I know some of Oracle’s Developer Champions have had a preview, so it can’t be too far away now. I’m sure when we get a chance to access the newly announced Cloud Native Service, which will include Fn, we will revisit it. Before settling on Google, we looked at several other offerings in the serverless space. Whilst this is not an exhaustive analysis, it should help give a sense of the challenges and ease of adoption. If you search today on serverless, you’ll most commonly come across Auth0’s WebTask.io, AWS Lambda and IBM OpenWhisk (based on Apache OpenWhisk).

WebTask.io

I started with WebTask.io, and it was very nearly a done deal, with a nice, easy-to-work-with cloud development platform and integrated testing. There is extensive support for Node.js, and a number of standard frameworks to use with it, such as Express, are available without doing anything.

Other languages are supported by WebTask.io as well, but as I’m trying to create a demo that warrants very little explanation of the serverless platform, we didn’t dig into this area. Everything went swimmingly until I tried to set up external calls to my function. This became a headache: whilst the security model is not overly complex (there are several ways to provide the REST call with authentication, e.g. adding a key in the URI), the process of generating and associating the credentials was far from clear in the documentation.

AWS Lambda

I moved on to look at AWS Lambda, which I just found horribly confusing to get started with. I have heard others saying that getting going isn’t straightforward, so I found myself giving up pretty quickly as the setup wasn’t that clear. Whilst I have used AWS and its IaaS capabilities, which are powerful, flexible and pretty easy to get to grips with if you understand basic ideas like virtual machines, this didn’t hold true for Lambda.

OpenWhisk

As for OpenWhisk, we started to look at it, but getting a 404 error when trying to access the editor while following the IBM documentation didn’t inspire confidence. There was plenty of supporting documentation which explains how OpenWhisk works.

The Execution framework for OpenWhisk

  1. Nginx is used for SSL termination and forwarding appropriate HTTP calls to the next component.
  2. The Controller first disambiguates what the user is trying to do, based on the HTTP method used in the HTTP request. This is a Scala solution built using Akka & Spray. This includes …
  3. Verification of who you are (authentication) against a CouchDB-based identity store.
  4. Once approved, details of the Action to be executed are retrieved from the whisks database in CouchDB.
  5. With information on what to do, service discovery is performed using Consul, which tracks the available executors in the system. Those executors are called Invokers.
  6. Kafka is then used to protect the demand pipeline from failure by recording the request and the consumer (Invoker) identified by Consul.
  7. The Invoker is built using Scala and uses a Docker instance to run the Action, which could be anything, e.g. Node.js. The action is injected into the container to be processed.
  8. As the result is obtained by the Invoker, it is stored in the whisks database as an activation, under the ActivationId. The whisks database lives in CouchDB.

In addition to the 404, as you can see, we have a two-step process to execute an action and return a response. However, the Message Push Listener’s initial challenge needs a call and response in a single step, so trying to massage this into a call and response is going to be challenging and a distraction from what we want to be conveying.

Using Google Functions

This brings us back to Google. Whilst the cloud IDE is not as elegant or mature as WebTask, it was sufficient, and the security model wasn’t imposed. I liked the documentation when I needed to refer back to it, but to be honest it is pretty intuitive. You can’t fault the docs, to the point that Google gave time over to explaining how to manage or avoid incurring costs.

Setting up was very simple, and then once you’ve chosen your cloud services you get a dashboard like this:

(Screenshot: Google Cloud management dashboard)

Google provides the idea of projects, which allows you to group pieces together – such as related functions. Each project is namespace-separated. If we then navigate into a Functions project we get a view as follows:

(Screenshot: the list of Cloud Functions in a project)
As you can see in the preceding screenshot, I created two functions within a project called OMCS. From here you can create more functions in your project, or drill into an individual function, as the following view shows:

(Screenshot: Google Functions performance view)

An individual function provides you with several tabbed views covering the General information (as shown above), plus Trigger, Source and Testing. We can see the other views in the following screenshots. The next screenshot shows the function editor; as you can see, it is fairly simple – but sufficient to do the job.

(Screenshot: the function editor)

Once saved, if valid, the code will automatically get deployed; or you can work offline and then upload the code if you want to use a nice editor like Sublime.

With your code edited and saved, the next step is to invoke it. This can be done with the next tab, or the details such as the URI can be copied so you can test from your preferred test tool, such as SOAPUI, Postman or APIFortress.

(Screenshot: the function Trigger view)

The testing view allows you to define input and output values, along with the outcomes. Personally, I worked with SOAPUI.

(Screenshot: the function Testing view)

The important thing with running tests or diagnosing issues is being able to examine the execution logs. In this area Google Functions is pretty feature-rich, with a solution that works in a style somewhat like searching in Splunk (and, I’m sure, other log analytics tools), where you can drill into the logs and build log filters on the fly. The log view is shown in the next screenshot.

(Screenshot: Google Functions logging view)

As you can see, the tool looks pretty straightforward and uncomplicated to use, with the freedom to adapt how you work to your preferred style. Based on my experience of using Project Fn on my desktop, it is this simplicity I think we’ll see with the Cloud Native Platform from Oracle when it becomes available.

Finally, let’s take a look at the code produced in Google Functions for this example:

(Screenshot: the function code)

Conclusion

Google Cloud Functions, whilst its UI is a bit basic, is easy to use and get started with, certainly for use as a demo platform or perhaps for creating stubs, test and mock endpoints. Having been critical of the other offerings for security getting in the way of a simple illustration, it is possible that Google Functions may need some work in this area – I didn’t see anything that obviously integrated security features easily.

Back to my Original Articles…

Just to tie back the impacted articles …

  • http://www.oraworld.org/home/ – Issues 6 & 7
  • http://www.oracle.com/technetwork/articles/cloudcomp/wilkins-ocms-3855268.html
  • https://www.youtube.com/watch?v=9Elf2DBisEU


A busy 25 hours at UKOUG Conference

05 Tuesday Dec 2017

Posted by mp3monster in General, OIC - ICS, Oracle, Technology

≈ Leave a comment

Tags

Cloud, conference, integration, messaging, OIC, OIC - ICS, OMESA, OraWorld, OUG, PushListener, WebScript

I’ve just come to the end of a very busy 25 hours at the UK Oracle User Group (UKOUG) Conference in Birmingham. Four presentations – interestingly, the same subject area, that of Oracle Integration Cloud (OIC) / Integration Cloud Service (ICS), started and ended the day. In between, we also covered some approaches to start working towards Microservices in a Monolith World, and Oracle Messaging Cloud.

Below are the presentations on Microservices and ICS/OIC. The piece on Oracle Messaging Cloud was largely demo-based, so rather than sharing the presentation slides, which won’t tell you too much, the best way to find out about this is to read the two articles about the capability in the OraWorld magazine (issues 6 & 7), with issue 7 perfectly timed by becoming available in the last couple of days.

With the Oracle Messaging Cloud article, there is one word of caution. When the article was written and submitted, I used a free cloud service (which, using contemporary terminology, we’d describe as serverless) called WebScript.io. The WebScript piece served to make it easy to consume the web service calls illustrating the Push Listener feature. This service, however, is being closed down – a shame, as it was an elegantly simple solution. Given this, I am currently working on a blog post which will show how another service can take the place of WebScript.io; whilst not finalised, this may be Google Cloud Functions.

If this wasn’t enough, we also squeezed in the keynote presentations, a meeting with several other contributors to OMESA (Open Modern Enterprise Software Architecture), a lunch conversation with our publisher (Packt) and several other Oracle book authors, the Oracle Ace dinner (great food with a lot of brilliant & friendly people), some very valuable incidental conversations and some work for a customer.

Microservices in a Monolith World

A look at Oracle Integration Cloud – its relationship to ICS, customer use cases and an insight into why ICS


UKOUG Tech16 Presentation Slides, Apex & Oracle Cloud

15 Thursday Dec 2016

Posted by mp3monster in General, Oracle, Technology

≈ Leave a comment

Tags

APEX, Cloud, Oracle, OUG, tech16

My presentation for UKOUG Tech 16 can be seen by following the link – Introduction to SOA CS – or see below. It was a tremendous 4 days (if you include the Tech stream’s Super Sunday). If you are a UKOUG member and didn’t make it to the conference, I’d look out for the material to become available.

Whilst I’m not a big Apex fan (stitching business logic into the persistence layer feels wrong to a middleware person), I did attend the keynote session which covered Apex’s history and future direction, and there are some very exciting things coming; if everything materialises as I understand it, then there are some big steps towards getting developers engaged with Oracle cloud offerings.

Oracle has done a lot of work on the middleware layer with the apps container (using common Docker configurations without needing to worry about Docker), Kafka, Node.js and others to engage developers and provide the means to offer a polyglot microservices platform that is not just attractive to the traditional Oracle customer base, but also to those wanting the middle ground of supported open source. What Oracle is missing is the means to get developers trying the technology and being creative with it. Amazon and Red Hat have got this by offering limited footprints for a long time. Oracle offers 30-day trials, which is fine for a project-sponsored PoC. But to hook grass-roots users you need a lengthy period where people can build some cool/geeky solutions in their spare time.

Now this may be down to the fact that Oracle cloud is built on their Exa machines with clever on-silicon security features, and Oracle can’t manufacture them quickly enough, whereas other cloud providers work with largely commodity components. But if they want to challenge Amazon, as Ellison says, they need to change this.

 
