OraWorld Magazine – Latest Edition

The latest edition of OraWorld became available today, with its blend of insight into the Oracle community and Oracle technologies from database to modern apps. I have to own up and say I mention the magazine not only because of the beautifully crafted independent insights, but also because it includes an article from myself, taking a look at GraphQL – what it is and how recent new Oracle product features could make a big difference to GraphQL adoption opportunities.

The next edition should include a follow-up article to this, focussing on API security considerations.

Extracting Dependencies and Versions for a Node Solution

We have had a requirement from a customer to be able to define every package, including dependencies, within a Node solution (as it happens, Apollo GraphQL) – not only the complete download path but the version numbering as well. There are many ways to solve this problem, but here is an elegant(?) and portable answer. To ensure that we don’t get pollution from a global node space, we created a project package in an empty folder using:

npm init --yes

This defaults all the package.json settings, which for our requirements is fine. Then in the same location it’s npm install <product from the npm registry to pull>, e.g. for Apollo GraphQL:

npm install apollo-server graphql

This will bring all the dependencies down into your npm project, putting them in the node_modules child folder. We’re now in a position to retrieve all the details of the packages, their dependencies and version information. This can be done by using the command:

npm list --json

If you don’t provide the --json parameter then you’ll get a pretty text tree representation rather than the more usable JSON structure.

JSON Output
Without the JSON Flag

As you can see, the JSON representation is a lot easier to work with if we want to extract more meaning, so it’s worth piping the output to a file. The next step is to extract the two key values for any package, nested or not – the JSON attributes resolved and version – so that we can incorporate the values into a report, spreadsheet etc. There are plenty of ways to process the JSON, but I wanted a means that is platform agnostic and simple. So we elected to make use of jq, a JSON processor that is cross-platform and supports an expression syntax with a set of functions that allowed us to recurse through the JSON tree of dependencies and tease out the attributes. You can download the jq binary and put it in your PATH, but for getting the expression debugged there is a brilliant web tool (jqplay.org) where you can supply the JSON, edit your expression online and see the result.

JQ play in action

Not all parts of the structure have the resolved attribute, so we needed to introduce some conditionality into the jq filter expression, which is as follows:

..| if (has("resolved")? == true) then .resolved+", "+.version else "" end

The initial .. forces the process to recurse through the structure. Each time we recurse, the substructure gets passed on to the next expression, like a pipeline in a bash script. Within this piped operation we have an if condition predicated on whether the resolved attribute exists. The question mark at the end of the expression has("resolved")? tells jq that if it can’t evaluate the function, rather than terminating with an error it should quietly produce nothing, just to be safe. The condition then says to build a string from the attributes resolved and version with a comma separator character. Lastly, if there is no resolved attribute we just output an empty string; jq complains if the else is omitted.
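
If it helps with reading, the same filter can be laid out over several lines with comments (jq treats # as a line comment) – this is just the expression above, annotated:

..                                   # recurse through every value in the dependency tree
| if (has("resolved")? == true)      # only objects carrying a resolved attribute
  then .resolved + ", " + .version   # emit "download path, version"
  else ""                            # everything else becomes an empty string
  end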

As you can see in the screenshot, this yields comma-separated lines with the full path and version number. The only challenge is that there are empty lines with just quotes. The last step is to search and replace the quotes with an empty string, and we can then pull the content in as CSV.
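
Putting it all together, here is a minimal end-to-end sketch (the output file name is just an illustration). As an alternative to searching and replacing the quotes afterwards, jq’s -r flag emits raw strings without the quotes, so the empty entries become blank lines that grep can drop:

# -r emits raw strings (no surrounding quotes); grep removes the blank lines
npm list --json \
  | jq -r '.. | if (has("resolved")? == true) then .resolved + ", " + .version else "" end' \
  | grep -v '^$' > dependencies.csv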

Unified Logging with Fluentd becomes Logging in Action using Fluentd, Kubernetes and more

The book has had a title change, as Manning found that links to the book were clashing with other solutions using the term ‘Unified Logging’. The name change helps bring the book in line with Manning’s naming for their In Action series. This means the book website is now https://www.manning.com/books/logging-in-action.

With the name change we’ve agreed that there should be an additional chapter added. I’d written the book with a view that everything we cover applies both to modern solutions such as microservices coming from the CNCF camp and to more traditional IT landscapes. Within the book we have explained how things are positioned and can be used in Kubernetes, but it was agreed with our editorial team that not tackling the configuration of Fluentd with Kubernetes and Docker was, to an extent, ignoring a key community that will be using Fluentd. So the new chapter will be introduced to address this aspect.

In terms of progress, we’re into the 1’s – 1 chapter to start (the new one), 1 chapter back from the technical editor (Logging Best Practises) with some edits to be done, 1 chapter now with the editor (How To Create Custom Plugins), 1 chapter being finished (Logging Frameworks) and finally 1 peer review cycle to go.

Given the lovely review comments that have been quoted on the book’s page, I can only recommend that if you have an interest in logging and monitoring you check it out through the Manning Early Access Programme (MEAP).

Adventures in DevOps – Fluentd

I was fortunate enough to record a podcast with the team at Adventures in DevOps just before Christmas. The recording has been fine-tuned and is now available on their website here. From my perspective, the discussion was really interesting and explored a wide range of areas around the challenges of monitoring.

As the podcast is linked to the book we’re writing for Manning (Unified Logging With Fluentd), there is a discount code currently running – poddevopsadv20.

Thanks to Charles Wood and Jeffrey Groman for having me on as a guest.

Other news …

I will be presenting at the online conference Blueprint LDN; check out the subjects being covered – it looks very interesting.

Oracle Developer Meetups – Gone Virtual

I’ve not posted about the developer meetups for a little while, perhaps because with everything being virtual these days things blur together too much. But it’s time to put that right (at least a little). Over the last couple of months we’ve been fortunate enough to have a couple of Oracle’s gurus from the A-Team covering some pretty interesting topics.

November saw Chris Peytier exploring the process management side of Integration Cloud, and how process management and more traditional integration can come together to offer a very effective solution, with example use cases such as handling the situation when conditions are not valid for an integration to be executed. Chris’ slides are here.

Then this week we had Angelo Santagata, complete with Santa hat, talking about Serverless as a means to enable SaaS extensions and integrations through the use of Oracle Functions (the cloud-deployed version of Project Fn). You can get the presentation here.

If the slides aren’t enough, then you can catch the presentations as videos – Angelo’s is here, and I’m sure we’ll see Chris’ made available as well.

2021

I’m excited to say that we have a couple of presentations lined up for 2021 already, so keep an eye on the London Oracle Developer Meetup and watch out for the updates in the new year.

Latest on book and APIs

My blogging rate is way down, with only a post about OKit – OCI Design (on Windows). It largely comes down to lots of work on our Fluentd book. Chapter 6 is now available in the MEAP. As the promo info says …

What’s new?

Chapter 6, “Filtering and Extrapolation”

Gain control and insight!

Last chapter, we touched on the use of the Filter directive. But that was just the tip of the iceberg! In Chapter 6, we’ll plunge below the surface, exploring the when, why, and how of applying filters to give us more insight and precise control over events.

Promo Email from Manning

Earlier chapters have been tweaked, with some additional improvements which will make the live reading experience better.

Another chapter and an appendix should be finding their way to the MEAP very soon, as they have been handed over by our project editor. That will make seven chapters available, along with all the appendices.

Whilst the peer review is taking place, the chapter covering plugin development is progressing. The development work has the basics of the output plugin working, with log events being stored in Redis, and the input plugin is being worked on as well. If you want a peek, keep an eye on my GitHub repository (here).

But it isn’t all writing…

I presented on Twitch – you can catch that at https://m.twitch.tv/videos/809295979. I’ve been offered the opportunity to present again, so keep an eye out for something next year.

We recorded a podcast with the excellent guys over at Adventures in DevOps. We don’t have the exact date for the podcast to be released, but I imagine it will be sometime during Jan 2021. I’d recommend checking out the podcasts – I’ve been dipping into their back catalogue of recordings, and the team ask some really thought-provoking questions.

If that wasn’t enough, we’ve been fortunate enough to have some time to talk with leading members of the Fluentd and Fluent Bit projects, which was a real pleasure. Hopefully, as we leave this horrendous year behind, we’ll get to talk and possibly collaborate some more.

OKit – OCI Design (on Windows)

OKit is a tremendous tool for the visual design and development of your Oracle Cloud environment. Visualizing your networks, the positioning of service gateways and so on makes things a lot easier than filling in web forms or writing Terraform files, as you can see the relationships between the different parts far more easily – for much the same reason that a lot of people use Visio and other tools for this work. The real beauty is that OKit can generate the Terraform and Ansible scripting that can then be used to deliver the implementation.

Okit for visual design of Oracle Cloud

The tool isn’t currently an official Oracle product, but something built by the Oracle A-Team (a small team of gurus whose role blends developer advocacy, supporting customers as architects for the special edge cases, and providing thought leadership). But we can hope that someone brings it into the fold and perhaps even incorporates it securely into the cloud dashboard. In the meantime, the code in its entirety is available on GitHub.

Fluentd Book Update

Things have been very hectic, so much so that we’ve not really had much time to write a blog. Most of that has been related to my Fluentd book, so what has been happening (and keep an eye on my Twitter account for a promo code 🙂 ) …

  • Several virtual conferences (Open Source Summit, SPOUG and Oracle APAC Groundbreakers to name just a few)
  • Perhaps the biggest bit has been the book:
    • The first 4 chapters went through a rigorous peer review, and as a result a number of improvements have been made,
    • Chapter 5 has been reviewed by our technical editor, and with a little bit of refinement applied it should be reaching the MEAP very soon, along with updated appendices,
    • Chapter 6 has been reviewed by our development editor, so there are some revisions to apply and then it goes onto the technical editor,
    • Chapter 7 writing is in progress, with about 1/3rd complete, including examples of applying scaling configuration that can be run on a desktop

So what is to follow:

  • We’re on Manning’s Twitch channel to do a session covering Fluentd, some examples, the book and what it will include,
  • Once Chapter 7 is done, then we go through a comprehensive review with external input. Depending on the feedback from this, we make another sweep through the existing chapters to make further improvements,
  • Chapter 8, I suspect, will be the hardest to write, as we actually get into creating our own plugin, so it may be a little while before this gets completed. The subsequent chapters will come more easily, as we’ve already got them part-written in rough draft,
  • We have another round of external peer reviewing to come which will cover everything, so I’m sure we’ll be doing some refinements
  • A podcast recording is scheduled in December.

Talking of Manning on Twitch, this looks like a bit of a hidden gem – worth looking at not only for the live streams, but because all the previous recordings with other Manning authors are available to watch.

If that isn’t enough with a day job, we have had some major work done on our house. Now we’ve moved back in, there are lots of DIY jobs to do and all the furniture to get back from storage. Every room apart from the kitchen needs to be painted. But that is my sob story.

I’m hoping to find time to experiment with Oracle’s new cloud native Log Management, as it is built on Fluentd foundations, and a little bit with API tech – but that is likely to be over the Christmas break.

Unified Logging with Fluentd – Sample and more

Manning have made a section of my book freely available. The excerpt from the first draft of my book Unified Logging with Fluentd illustrates the Fluentd take on Hello World – the extract can be found at http://mng.bz/nzm8. This is from the 1st chapter, to help set the scene of how Fluentd can be configured. The following quote comes from one of the peer reviewers:

The extract includes the use of the log simulator tool – https://github.com/mp3monster/LogGenerator – which takes some configuration and can either play synthetic data or replay real logs as current log events, in whatever format you want to simulate – for example, standard Log4J through to Apache Server logs, with the relevant time separation between events.

Book Discount …

If this isn’t enough temptation, then perhaps it will help to say that on September 22 the Deal of the Day book is my book Unified Logging with Fluentd. Use code dotd092220au at https://bit.ly/3mBRLK2

Data Integration with Oracle

Oracle’s data integration product landscape outside of GoldenGate has, with the arrival of Oracle Cloud, been confusing at times. This has meant that finding the right product documentation can be challenging, and knowing which product to use in your own technology road-map harder to formulate. I believe the landscape is starting to settle now, but to understand the position, let’s look at the causes of disturbance and the changes that have occurred.

Why the complexity?

This has come, I think, from a couple of key factors. First, the organizational challenges triggered by Thomas Kurian’s departure, which resulted in the product organization going from essentially three parts, aligning roughly to Infrastructure, Platform and Applications, to two: Infrastructure and Apps. Add to this that Oracle’s cloud has gone through two revolutions. Generation 1, now called Classic, was essentially a recognition that they needed an answer to Microsoft, Google and AWS quickly (Oracle are now migrating customers off Classic). Then came Generation 2, which is a more considered strategy, leveraging not just the lowest-level stack of virtualization (network and compute), but driving changes all the way through the internals of applications by having them leverage common technologies such as microservices, along with a raft of software services such as monitoring, logging, metering, events, notifications, FaaS and so on. Essentially all the services they offer are also integral to their own offerings. The nice thing about Gen2 is you can see a strong alignment to the CNCF (Cloud Native Computing Foundation) along with other open public standards, formal or de facto, such as MicroProfile with Helidon and Apache. As a result, despite the perceptions of Oracle, modern apps stand a better chance of portability.

Impact on ODI

Oracle’s data integration capabilities, cloud or otherwise, have been best known as Oracle Data Integrator, or ODI. The original ODI was the data equivalent of SOA Suite, implementing Extract Load Transform (ELT) rather than ETL, as it meant the Oracle DB was fully leveraged. This was built on WebLogic Server.

Along Came Cloud

Oracle Cloud came along, and with it a natural need for ODI capabilities. Like SOA Suite, the first evolution was to provide ODI Cloud Service, just as SOA Suite had SOA Cloud Service. Both are essentially the same on-premises product with UIs to manage deployment and configuration.

ODI’s transformation for the cloud led to ODI CS evolving into DIPC (Data Integration Platform Cloud). Very much an evolution, but with a more web-centered experience for designing the integrations. However, DIPC is no longer available (except possibly to customers already using it).

Whilst DIPC had been evolving, the requirement to continue with on-premises ODI capabilities remained. Whilst we don’t know for sure, we can speculate that maintaining ODI as an on-prem solution alongside meant divergent development and its associated overhead. We then saw the arrival of ODI Marketplace, which provides an easier transition, including taking into account licensing considerations. DIPC has been superseded by ODI Marketplace.

Marketplace

Oracle has developed a Marketplace, just like the other major players, so that 3rd-party vendors can offer their technologies on the Oracle cloud, just as you can with Azure and AWS. But Oracle have also exploited it to offer their traditional products, normally associated with on-premises deployments, in the cloud. As a result we saw ODI Marketplace – a smart move, as it offers the possibility of carrying on-prem licensing into the cloud along with portability.

So far the ODI capabilities in their different forms have continued to leverage the WebLogic foundations. But by this time the Gen2 Oracle Cloud and the organizational structures behind it had been well established and were working up the value stack. Those products in the middleware space have been impacted by both the technology strategies and the organization. As a result the API products, for example, have been aligned to the OCI native space, but Integration Cloud has been moved towards the Apps space. In many respects this reflects a low-code vs code-native model.

OCI ODI

Earlier this year (2020) Oracle launched a brand new ODI product – to use its full name, Oracle Cloud Infrastructure Data Integration. This is an OCI-native (i.e. Gen2) solution leveraging microservices technologies.

This new product appears to be very much a ground-up build, as it exploits Apache Spark and Function as a Service (FaaS) as foundational elements. As a ground-up build, it doesn’t inherit all the adapters the original ODI can offer. This does mean that, as a solution today, it is very focused on some specific needs around supporting data movement between the various Oracle Cloud storage and Database as a Service solutions, rather than general ingestion and extraction processes.

Conclusion

Products are evolving, but the direction of travel appears to be resolving. We are still in that period where there are capability gaps between the Gen2 native solution and the traditional ODI via Marketplace solution. As a result the question becomes less which product to use, and more: when to adopt, and if I have to invest in ODI Marketplace now, how do I migrate when the native product catches up?