Notifications from Oracle API Platform Cloud Service

There are circumstances in which notifications from Oracle API Platform Cloud Service are desirable. For example, if you want to ensure that developers are defining good APIs and not accidentally implementing APIs that fall foul of the OWASP Top 10 for APIs, then you will probably configure things so that developer users can design APIs and configure policies, but only request that an API be deployed.

However, notifications through mechanisms such as email, or via collaboration platforms such as Slack, aren’t presently available. But implementing a solution isn’t difficult. In the rest of this blog we’ll explore how this might be implemented, complete with a Slack implementation.
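
To give a flavour of how simple the notification end of such a solution can be, here is a minimal sketch of posting a message to a Slack incoming webhook. The webhook URL and the triggering event are placeholders; in a real solution this would be invoked when a deployment request is detected via the platform’s management APIs:

    import json
    import urllib.request

    # Hypothetical incoming-webhook URL from Slack's app configuration
    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

    def notify_slack(message: str) -> None:
        """Post a simple text notification to a Slack channel via an incoming webhook."""
        payload = json.dumps({"text": message}).encode("utf-8")
        request = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            response.read()  # Slack replies with "ok" on success

    # Example: fired when a developer requests deployment of an API iteration
    notify_slack("API 'orders-v2' (iteration 3) has a pending deployment request.")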

More Free Oracle Cloud than you might know

The news about Oracle offering some free cloud services ‘for life’ is making an impact. But the free services don’t end there; the pricing of some other native cloud services includes free bands, so it’s worth keeping an eye on the fine print. I wouldn’t be surprised if we see limited-capacity free access in other areas.

Oracle Functions – whilst the core of this service is built on the open-source Fn Project (also largely driven by Oracle), the managed service has a free tier allowing up to 2 million invocations per month, consuming up to 400,000 GB-seconds of memory (details can be seen here). Plenty to experiment with the concepts behind serverless, aka FaaS, capabilities.

Oracle Notifications – whilst focussed on the technical side of gathering key event data from OCI and its services, as the documentation states, it supports “sending notifications to numerous interested parties, or even synchronizing the moving parts of a distributed application” – this obviously means a service with characteristics a bit like AWS’ SNS. Like SNS, it can be hooked up to email and other HTTPS services, and can be driven by Oracle Events, which also has free use. Events is particularly interesting as it bases its event structure on the CNCF CloudEvents spec. There is an excellent illustration of such a use case in the Oracle blogs here.
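
For reference, a CloudEvents event is just a small, standardized JSON envelope. The following is an illustrative example only – the attribute values are invented, and the exact serialization OCI Events emits may differ slightly from this CloudEvents 1.0 shape:

    {
      "specversion": "1.0",
      "id": "8d4e3f2a-0001",
      "source": "/oci/objectstorage/my-bucket",
      "type": "com.oraclecloud.objectstorage.createobject",
      "time": "2019-10-01T12:00:00Z",
      "datacontenttype": "application/json",
      "data": {
        "objectName": "uploads/report.csv"
      }
    }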

It will be interesting to see if we see a similar trend with other Oracle cloud-native services. A new take on the now-defunct Application Container Cloud Service (ACCS) would be an ideal vehicle, although whether there is sufficient demand for such a capability is not clear (it would in effect be an always-live service like a Kubernetes solution, but with the simpler, smaller footprint of something more like Functions in a multi-tenant environment; at the same time, it wouldn’t have the potential latency of a Function being activated).

OGB Appreciation Day : Support of Hybrid

This blog post, part of the Oracle Ground Breakers Appreciation Day (more about this at oracle-base), isn’t about a specific product or feature, but an approach – or possibly two approaches – that exists across many of the PaaS services available from Oracle.

One of the key things about many of Oracle’s products, such as Integration Cloud, API Platform and the foundations of Functions (Fn) and containers, is the recognition that many organisations are not so fortunate as to be cloud-born, or even working with a cloud-native model for IT. For those organisations that would rather have a unifying approach across locations, Oracle Cloud is not a closed capability like AWS; whilst products like Integration Cloud are at their best on Oracle Cloud Infrastructure, they can be run in your data centre, or even on another cloud.

Whilst the teams I work with experiment and build our service offerings ‘on Oracle’, when we engage with customers to help them with their specific problem spaces, we are more often than not operating in a multi-cloud or on-premises hybrid model.

This hybrid story is helped by a renewed vigour for open source – both contributing to and leading the development of open-source projects – in addition to providing free tiers for some of the stack, such as Functions, IaaS and Database (here). Many forget that the Oracle JVM is free as long as you keep it up to date, that there is a small-footprint Oracle database available for free (XE), and that MySQL is part of the Oracle family. Many of the modern development technologies, such as Blockchain and Container Engine, stay true to their open-source cores, meaning that solutions built on these layers are portable and can be run on-premises. Yes, Oracle adds value by wrapping these cores with tooling and features that make life easier, rather than diverging with, for example, proprietary Ingress controllers.

The irony is that organisations that tend to be associated with low cost or with being faithful to open-source goals can actually end up locking you in, and appear to be moving away from the original open-source ideals. Consider Red Hat, the champion of a lot of open source-based enablement, which has removed Kubernetes from the official Red Hat downloads for its Linux in favour of a single-node licence of OpenShift; to get Kubernetes on RHEL you have to go outside the normal binary source channels (other challenges are documented here).

London Oracle Dev Meet-up gets Blockchained

Whilst the weather may have put some off venturing out, it didn’t stop our intrepid duo of presenters – Joost Volker (Oracle PM for Blockchain) and Robert van Mölken (Oracle Groundbreaker Ambassador and author of Blockchain across Oracle) – who between them had to negotiate protesting farmers, traffic jams, flight delays (the wrong kind of rain to land in London) and London’s rush-hour traffic.

So, what was covered in the meet-up…

Millennials in the Workforce – PTK

Those who know me will be aware that I try to support the UK Oracle User Group’s journal (#PTK) in a number of ways, from submitting articles through to being part of the review panel. I’ve mentioned in the past some of the changes that the journal has undergone (here, for example). Another change is that the editorial team are including more diverse content. For example, the latest issue, just out, includes an article about millennials in the workforce and how things are changing – a theme confronting businesses not only as employers, but because the new generation of influencers and decision-makers will be making our enterprise buying decisions and, dare I say it, be the members of a user group.

As part of the team that also informs the User Group’s event planning, I happened to throw in some thoughts about supporting and engaging the newer generation. That led to an invitation to participate in an interview, which has contributed to an interesting article on millennials in the workforce.

Putting the company man hat on for a moment, it was good to highlight the efforts that Capgemini make to support new talent into the organisation.

The article is here, and links to the Tech and App parts of #PTK journal are here.

Mastering FluentD configuration syntax

Getting to grips with FluentD configuration, which describes how to handle the logging events it has to process, can be a little odd (at least in my opinion) until you appreciate a couple of foundation points. At that point things start to click, and then you’ll find it pretty easy to understand.

It would be hugely helpful if the online documentation presented some of the points I’ll highlight upfront, rather than throwing you into a simple example which tells you about the configuration but doesn’t elaborate as deeply as might be worthwhile. Of course, that viewpoint may be born of the fact that I have reviewed so many books I’ve come to expect things a certain way.

But before I highlight what I think are the key points of understanding, let me make the case for getting to grips with FluentD.

Why master FluentD?

FluentD’s purpose is to allow you to take log events from many resources and filter, transform and route those events to the necessary endpoints. Whilst it forms part of a standard Kubernetes deployment (such as those provided by Oracle and Azure, for example), it can just as easily support monolithic environments, with connectors for common log formats and frameworks. You could view it as, effectively, lightweight middleware for logging (particularly if you use the Fluent Bit variant, which is effectively a pared-back implementation).
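
To make this concrete, here is a minimal, illustrative configuration – a sketch rather than anything production-ready (the file paths and tag names are invented) – showing the three directive types at the heart of the syntax: a source that tails an application log, a filter that enriches each event, and a match that routes events to an output:

    # Ingest events by tailing a log file; each event is tagged 'app.log'
    <source>
      @type tail
      path /var/log/myapp/app.log
      pos_file /var/log/fluentd/app.log.pos
      tag app.log
      <parse>
        @type none
      </parse>
    </source>

    # Enrich every event whose tag starts with 'app.' with the host name
    <filter app.**>
      @type record_transformer
      <record>
        hostname "#{Socket.gethostname}"
      </record>
    </filter>

    # Route all 'app.' events to stdout (swap for elasticsearch, s3, etc.)
    <match app.**>
      @type stdout
    </match>

Events flow from source to filter to match, steered by their tags – keep that flow in mind and the rest of the syntax starts to click.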

If this isn’t sufficient to convince you: if Google searches are a reflection of adoption, then my previous post, Observability – London Oracle Developer Meetup, shows a plot reflecting FluentD’s steady growth. This is before taking into account that a number of cloud vendors have wrapped Fluentd/Fluent Bit into their wider capabilities, such as Google (see here).

Not only can you see it as middleware for logging, it can also be extended with custom processors and adapters built as Ruby Gems, making it very extensible.

Remember these points

and mastering the config should be a lot easier…

Observability – London Oracle Developer Meetup

Last night was the London Oracle Developer Meetup’s sessions around observability. Andrei Cioaca focused on the use of OpenTracing as provided by Jaeger in a standard Kubernetes deployment with Istio, realized with Oracle Kubernetes Engine (OKE). This was followed by my session on another pillar, logging, using FluentD – also incorporated into standard Kubernetes, but equally able to support traditional monolithic use cases.

Andrei provided a great overview of the three pillars of observability and the strengths and weaknesses of each. With the basics covered, Andrei then dove into the configuration and execution of Istio combined with Jaeger, and the corresponding insights available, including a look at the kinds of visual insight that Jaeger and Kiali provide. Some probing conversations followed about the relationship to Spring Cloud Sleuth, OpenZipkin and OpenTracing as a concept more generally.

Andrei’s presentation material can be found in his GitHub repository here.

Google Analytics on Search Terms

My session followed a pizza break (there was a delay in the pizza’s arrival). With everybody having chatted over pizza about OpenTracing, we picked up on FluentD and the logging aspect of observability. FluentD, as an open-source project, has been growing steadily, and is actually baked into several log-analytics products and services – as the above analytics from Google shows.

The presentation looked at the growing challenges of modern software in terms of making sense of logging.  We explored the capabilities of FluentD before drilling into real-world use cases and potential deployment models.

As you’ll see from the slides we ran a couple of demos. The configuration for the demos can be found at https://github.com/mp3monster/fluentd-demos along with an example payload.

The next meetup we have organized is around Blockchain, all the details can be found at https://www.meetup.com/Oracle-Developer-Meetup-London/events/264661742/.

Other related info …

Article direct to LinkedIn – OpenTracing and API Gateways

Capgemini’s Oracle Expert Community – which includes myself – has been asked to publish articles directly to LinkedIn as part of the supporting activities for Oracle Open World. So here is my offering: https://www.linkedin.com/pulse/connection-between-api-gateways-opentracing-phil-wilkins/.

This is a short look at why API gateways at the boundary of your environment can offer more value when supporting OpenTracing.

Handling Socket connectivity with API Gateway

At the time of writing, the Oracle API Platform doesn’t support the use of socket connections for handling API data flows. Whilst the API Platform does provide an SDK, as we’ve described in other blogs and our book, it doesn’t allow the extension of how connectivity is managed.

The use of API gateways with socket-based connectivity is something that can engender a fair bit of debate. On the one hand, when a client is handling a large volume of data, or expects data updates, but doesn’t want to poll or utilize webhooks, then a socket strategy makes sense – think of an app wanting to listen to a Kafka topic. Conversely, API gateways are meant to be relatively lightweight components, not intended for a single call to result in massive latency while the back end produces, or waits to forward on, data, as this is very resource-intensive and inefficient. However, socket-based data transmission should be subject to the same kinds of security controls, and home-brewing security solutions from scratch is generally not the best idea, as you become responsible for continually re-verifying that the code is secure, handling dependency patching and mitigating vulnerabilities in other areas.

So how can we solve this?

As a general rule of thumb, web sockets are our least preferred way of driving connectivity. Aside from the resource demand, it is a fairly fragile approach, as connections are subject to the vagaries of networks, which can drop, etc., and it can be difficult to manage state (i.e. knowing what data has or hasn’t reached the socket consumer). But sometimes it is just the right answer. We have therefore developed the pattern the following diagram illustrates.

API Protected Sockets

How it works …

The client initiates things by contacting the gateway to request a socket, with the details of the data it wants to flow through the socket. The gateway can then validate both that the request is legitimate (API tokens, OAuth, etc.) and, by analyzing the request metadata, that the requester is allowed the data wanted.

The gateway works in conjunction with a service component and, if the request is approved, acquires a URI from the socket-manager component. This component provides a URL for the client to use for the socket request, incorporating a randomly generated string; this means port scans of the exposed web service are going to be difficult. These URLs are held in a cache, ideally with a TTL (time to live) – using something like Redis, with its native TTL capabilities, means we can expire the URL if it is not used.

With the provided URL, we can further harden the security by associating a second token with it.

Having received the response, the client can then establish the socket-based connection, which gets routed around the API gateway to the socket component. This takes the randomly generated part of the URL and looks the value up in the cache; if it exists in the cache and the secondary token matches, then the request for the socket is legitimate. With the socket connection accepted, the logic that feeds the socket can commence execution.
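
A minimal sketch of the socket-manager logic might look something like this; the names, URL format and TTL are illustrative assumptions rather than a definitive implementation:

    import secrets
    import redis  # redis-py client

    cache = redis.Redis(host="localhost", port=6379)
    URL_TTL_SECONDS = 60  # URL expires if the client doesn't connect in time

    def issue_socket_url(request_metadata: str) -> tuple[str, str]:
        """Called after the gateway has authenticated the handshake request.
        Returns a one-time socket URL and a secondary token."""
        url_key = secrets.token_urlsafe(32)        # random, unguessable URL part
        secondary_token = secrets.token_urlsafe(16)
        # Store the token against the URL key; Redis expires the entry for us.
        cache.setex(url_key, URL_TTL_SECONDS, secondary_token)
        return f"wss://sockets.example.com/{url_key}", secondary_token

    def validate_connection(url_key: str, presented_token: str) -> bool:
        """Called when the client opens the socket against the issued URL."""
        expected = cache.get(url_key)
        if expected is None:
            return False  # never issued, or TTL expired
        return secrets.compare_digest(expected.decode(), presented_token)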

If the request has some form of malicious intent, such as a scan, probe or brute-force attempt to call the URL, then the attempt should fail because …

  • If the socket URL has never existed in, or has expired from, the cache, the request is rejected.
  • If a genuine URL is obtained, the secondary key must still verify correctly; if it is incorrect, again the request is rejected.
  • Ironically, any malicious attack seeking to overload components is most likely to affect the cache, and if the cache fails, a brute-force access attempt gets harder, as the persistence of all keys will be lost, i.e. there is nothing left to brute-force locate.

You could of course craft in more security checks, such as IP whitelisting, but every time this is done the socket service gets ever more complex, and we take on more of the capabilities expected of the API gateway. Aside from deploying a cache, we’ve not built much more than a simple service that creates some random strings and caches them, combined with a cache query and a comparison. All the hard security work is delegated to the gateway during the handshake request.

Thanks to James Neate and Adrian Lowe for kicking around the requirement and arriving at this approach with us.

Costs in Multi-Cloud

Over the last couple of years we have seen growing references to multi-cloud; that is to say, people are recognizing that organisations, particularly larger ones, are ending up with cloud services from many different vendors. This has at least in part come from departments within an organization being able to buy meaningful resources within their local budgets.

There is a competitive benefit to the recent partnership agreement between Microsoft and Oracle, given the market margin AWS has in comparison to everyone else. Irrespective of the positioning against AWS, though, this agreement has arisen because of the adoption of multi-cloud. It also provides a solution to the problem of making highly resilient Oracle database setups, using RAC, Data Guard, etc., available to Azure without risk to security or the all-important network performance that is essential to DB operation. Likewise, Oracle’s SaaS offerings are sector leaders, if not best in class – something Microsoft can’t compete with. At the other end, regardless of Oracle’s offerings, organisations will often prefer the Microsoft development ecosystem because of its alignment to office tooling and the ease of building solutions quickly.

Multi-cloud, even with agreements like the Microsoft-Oracle one (see here), doesn’t mean there won’t be higher costs in crossing clouds. Let’s see where the costs reside …

  • Data egress (and in some cases ingress as well) from clouds costs money. Ingress costs have largely been eliminated, because they can be seen as a barrier to selling services, particularly big data ones; data egress can, however, be an issue. Oracle has made this cost low enough to be almost negligible, but not necessarily others, as the following comparison shows …
  • Establishing the high-performance connections between Azure and Oracle Cloud that the agreement supports (the same tech as for cloud-to-ground) does incur a cost. In Oracle’s case there is a fee for the connection (not a large cost, but one that exists nonetheless), plus any traffic fees from the provider of the network connection spanning the data-centre locations – you’re leasing capacity on someone’s dedicated fibre or MPLS services. This should prove small, as part of the enabler of this offering is that Oracle and Microsoft cloud DCs are often physically provided by the same provider, or are at least physically close together, a result of both companies gravitating to locations with optimal, highly available infrastructure (power, telecommunications) and favourable legal and commercial factors, along with the specialist skills needed.

If data egress is the key challenge to costs, what drives the data egress beyond the obvious content for user interfaces? …

  • Obviously you have the business data flows; some of these flows will be understood by the business community, but not all – this is down to the way data from one cloud can be exposed to another. For example, inefficient services with APIs that require frequent polling, rather than expressing the request efficiently using HTTP header attributes and other efficiencies, or utilizing frameworks such as webhooks so data can be pushed (see the sketch after this list).
  • The demand for high-speed data access often drives data replication – having databases in multiple clouds with mirror-image data in each location, even if the majority of the data is not needed there. This can happen with technologies such as Kafka, where non-compacted topics mean every event may be replicated even if that event has a short lifetime.
  • One of the hidden costs is the operational task of gathering logs into a combined view so end-to-end insights can be obtained. Detailed logs can actually yield more ‘data’ by volume than the business flows themselves, because they are semi-structured, intended to be very readable, and at the most granular level exist to help debug and test.
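
To illustrate the polling point: a conditional request lets the server answer with an empty 304 Not Modified when nothing has changed, so repeated polls don’t keep egressing the full payload. A minimal sketch, assuming the API supports ETags (the endpoint URL is a placeholder):

    import requests  # third-party HTTP client (pip install requests)

    URL = "https://api.example.com/orders"  # hypothetical endpoint
    etag = None  # ETag remembered from the previous poll

    def poll() -> None:
        """Poll the API, but only download the payload when it has changed."""
        global etag
        headers = {"If-None-Match": etag} if etag else {}
        response = requests.get(URL, headers=headers)
        if response.status_code == 304:
            print("no change - nothing egressed beyond headers")
            return
        etag = response.headers.get("ETag")  # remember for the next poll
        print("changed - processing", len(response.content), "bytes")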

In addition to the data flows, you need to consider which other linkages, beyond the Oracle-Azure connection, are involved. From the detailed documentation: it is not possible to get your on-premises location connected to one of the clouds (e.g. via Oracle FastConnect) and then assume your traffic can hop to Azure via the bridge. To add performance to solution parts in both Azure and Oracle Cloud, you still need both FastConnect and ExpressRoute configured to your on-premises location. This, of course, may affect how bulk data for lift-and-shift app use cases such as EBS is handled. For example, if you choose to regularly bulk-transfer data between on-premises and EBS via the app/middleware tier rather than directly via the DB, and that mid-tier is running in Azure, you will need both routes established.

Conclusion

There is no doubt that the Oracle-Azure cloud-to-cloud linkage is a step forward, but ‘the devil is in the details’, as the saying goes. To optimize the benefits and savings, we’d suggest that:

  • you think through your use cases – understand data flow and volume (is someone bulk-syncing application data with a data warehouse?),
  • you define a cloud data strategy – to lay out principles and approaches and identify compliance needs. This is particularly helpful for custom solution development, so that the right level of log data is consolidated with the important details, and data retention addresses compliance requirements without ratcheting up unnecessary costs (there is a tendency to hoard data just in case – if that is really wanted, think about how it’s stored),
  • based on common business usage models, you define a simple forecasting formula – being able to quantify data costs will always make it easier to challenge data-hoarding tendencies (see the sketch after this list),
  • you confirm the inter-cloud network vendor charges when working with multi-cloud.
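
As a starting point for such a forecasting formula, here is a deliberately simple sketch – the rate and flow volumes are invented placeholders; real numbers come from your vendor’s price list and your data-flow analysis:

    # Hypothetical monthly egress-cost forecast: sum of (volume x rate) per flow.
    EGRESS_RATE_PER_GB = 0.05  # placeholder $/GB - substitute your vendor's rate

    monthly_flows_gb = {
        "api-responses": 150,   # business data flows
        "db-replication": 900,  # mirror-image replication between clouds
        "log-shipping": 400,    # logs consolidated for end-to-end insight
    }

    monthly_cost = sum(v * EGRESS_RATE_PER_GB for v in monthly_flows_gb.values())
    print(f"Forecast egress cost: ${monthly_cost:.2f}/month")  # -> $72.50/month

Even something this crude makes the conversation concrete: each proposed new flow gets a volume estimate and a visible price tag before it is built.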