This isn’t the first time I’ve written about the Oracle Cloud SDK (check here), but it seems rather fitting, as some of the utilities I’ve been working on are open to the community, and #JoelKallmanDay is all about community. If you’d like to know more about #JoelKallmanDay, then check out Tim Hall’s blog here.
Oracle have provided a very rich API and then overlaid it with a number of SDKs in Python, Java, and other languages. The SDKs immediately remove the work of creating connections and correct payloads. Taking the Python SDK as an example, all I need to do is create a standard configuration file with all the necessary connection properties for my OCI instance. Then it’s simply a case of creating the correct Python object for the group of services wanted, and populating that object’s attributes. This is an illustration of exactly what a good SDK does: I can lean on my IDE to use the correct getters and setters, and the code for establishing a connection is done for me.
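To give a feel for just how little code is involved, here’s a minimal sketch using the Python SDK; the service and call chosen are purely illustrative. It assumes a standard config file at ~/.oci/config with a DEFAULT profile, and simply lists the compartments visible in the tenancy.

```python
# A minimal sketch, assuming a standard config file at ~/.oci/config
# with a DEFAULT profile (pip install oci).
import oci

# Reading the config file gives us all the connection properties.
config = oci.config.from_file()

# Create the client for the group of services we want - here, Identity.
identity = oci.identity.IdentityClient(config)

# Connection handling and request signing are done for us.
response = identity.list_compartments(compartment_id=config["tenancy"])
for compartment in response.data:
    print(compartment.name, compartment.lifecycle_state)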
What I’ve found most striking is the level of consistency in the methods provided by the SDK, regardless of the service. This makes it very easy to develop functionality without needing to check every API before I can write any code. It would be easy to say “so what?”, but when you look at the breadth of the OCI services, it becomes more impressive.
The convenience doesn’t end there. Rather than having to run your utilities from a local command line (Python means we’re pretty much OS agnostic), the Oracle Cloud Shell comes preconfigured with Python, the OCI SDK, Git, FTP support, and basic Linux text editors. This all amounts to the fact that you can use your scripts/tools from within the web UI of OCI: edit your credentials file locally, and push and pull any changes to the scripts between the shell and any Git repo such as GitHub.
With this insight, we just need to build up a catalogue of accelerator tools to make those repetitive processes a little easier. For example, ensuring that when you tear down your manually created services, all interlinked entities are deleted first (which can be troublesome with policies, groups, compartments, and so on).
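As a hedged sketch of that teardown idea (the helper name is mine, and a real tool would need paging, error handling, and the other dependent entity types), deleting the policies in a compartment before removing the compartment itself might look like this:

```python
# Hypothetical teardown helper: OCI won't delete a compartment while
# dependent resources such as policies still exist, so clear those first.
import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

def teardown_compartment(compartment_id):
    # Remove the policies attached to the compartment first...
    for policy in identity.list_policies(compartment_id=compartment_id).data:
        identity.delete_policy(policy.id)
    # ...then the compartment itself can be deleted.
    identity.delete_compartment(compartment_id)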
While there is a deserved amount of publicity around the introduction of ARM compute on OCI with the Ampere CPU offering, and the amazing level of Always Free compute being provided (24GB of memory and 4 cores, which can be used in any combination of servers), there have been some interesting announcements that perhaps haven’t drawn as much attention as they deserve. These include OCI support for GitHub Actions, plus several new DevOps services and an Artifact Registry. We’ll come back to the new services in another post. Today, let’s look at GitHub Actions.
Last night saw the final chapter of Logging in Action with Fluentd go back to my editor. The next step is that the chapter (and others, I hope) will go to MEAP, so early readers not only get the final chapter but also the raft of improvements we’ve made. Along with that, the manuscript goes for a full peer review. Once that’s back, it’s time for a round of edits as I address the feedback, then into copy editing and Manning’s sign-off review.
As you might have guessed, we’ve kept busy, with an article in the 25th edition of OraWorld. This follows Part 1, which talked about GraphQL, with a look at considerations for API security.
In addition to that, we’re working on a piece around automating OCI management activities, such as setting up developers so they have a level of freedom to experiment without accidentally burning through all your credits by spinning up Exadata servers or 500-node Kubernetes clusters.
We might even have some time to write more about APIs and integration.
The OCI Always Free compute node has a restriction that isn’t clearly documented or obvious when you go to instantiate such compute resources: the absence of OCI custom logging. This is a little surprising given that this capability is based on Fluentd, and the compute footprint needed by Fluentd is so small. As you can see in the following screenshot, when configuring the compute there is no reason to believe you can’t use OCI Logging for custom logs.
Configuration for a logging agent on the Always Free VM
But when you go to configure custom logging on your running compute, you can see that the feature is disabled, with a message about the restriction. It would have been nice to have the warning at the creation phase; if I’d manually set up the VM and then gone to switch on OCI Logging, knowing where I’d deployed my applications, I’d have wasted time on the setup.
Custom Logging Limitation
The solution: use one of the AMD Flex or Previous Generation shapes, and minimize the footprint to your needs.
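By way of illustration, here’s a sketch of launching a small AMD Flex instance with the Python SDK; every OCID, the availability domain, and the shape sizing below are placeholders you’d swap for values from your own tenancy.

```python
# A sketch only: all OCIDs and the availability domain are placeholders.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder
    availability_domain="XXXX:EU-FRANKFURT-1-AD-1",    # placeholder
    shape="VM.Standard.E3.Flex",                       # an AMD Flex shape
    # Flex shapes let us size the footprint to our needs.
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=1, memory_in_gbs=8),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"),          # placeholder
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"),        # placeholder
)

instance = compute.launch_instance(launch_details).data
print(instance.id, instance.lifecycle_state)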
UPDATE 9th June 2021
We’ve been told that this constraint has been addressed. In addition, Oracle have also introduced the new Ampere offering, which allows for nodes with a form factor of up to 4 OCPUs and 24GB of RAM using the new ARM chips. You can also use variations on this, such as 4 x 1 OCPU with 6GB of RAM each.
Opening of the blog post on blogs.oracle.com
My Author Profile on blogs.oracle.com
World Festival Conference
We’ve also scored another success: we’ve been invited to speak at WorldFestival in August, an online conference organized by the same team behind DeveloperWeek. This is the first time outside of an Oracle-linked event where I’ve been amongst the first few named speakers, so I’m proud of that. The conference looks really interesting, as it goes beyond just core developer themes, with conference tracks on Space & Transportation, Smart Cities, Robotics, and Digital Health, to name a few of the 12 streams. Worth checking out.
OKit is a tremendous tool for the visual design and development of your Oracle Cloud environment. Visualizing your networks, the positioning of service gateways, and so on makes things a lot easier than filling in web forms or writing Terraform files, as you can see the relationships between the different parts far more easily; it’s the same reason a lot of people use Visio and other tools for this work. The real beauty is that OKit can generate the Terraform and Ansible scripting that can then be used to deliver the implementation.
Okit for visual design of Oracle Cloud
The tool isn’t currently an official Oracle product, but something built by the Oracle A-Team (a small team of gurus whose role blends developer advocacy, architectural support for customers with special edge cases, and thought leadership). But we can hope that someone brings it into the fold, and perhaps even incorporates it securely into the cloud dashboard. In the meantime, the code in its entirety is available on GitHub.
Oracle’s data integration product landscape outside of GoldenGate has, since the arrival of Oracle Cloud, been confusing at times. This has meant finding the right product documentation can be challenging, and knowing which product belongs in your own technology road-map can be harder to determine. I believe the landscape is starting to settle now. But to understand the position, let’s look at the causes of the disturbance and the changes that have occurred.
Why the complexity?
I think this has come from a couple of key factors. First, the organizational challenges triggered by Thomas Kurian’s departure, which resulted in the product organization going from essentially three parts (aligning roughly to Infrastructure, Platform, and Applications) to two (Infrastructure and Apps). Add to this the fact that Oracle’s cloud has gone through two revolutions. Generation 1, now called Classic, was essentially a recognition that they needed an answer to Microsoft, Google, and AWS quickly (Oracle are now migrating customers off Classic).

Then came Generation 2, a more considered strategy which leverages not just the lowest level of the virtualization stack (network and compute), but drives changes all the way through the internals of applications by having them leverage common technologies such as microservices, along with a raft of software services such as monitoring, logging, metering, events, notifications, FaaS, and so on. Essentially, all the services they offer are also integral to their own offerings. The nice thing about Gen2 is that you can see a strong alignment to the CNCF (Cloud Native Computing Foundation) along with other open public standards (formal or de facto, such as MicroProfile with Helidon, and Apache). As a result, despite the perceptions of Oracle, modern apps stand a better chance of portability.
Impact on ODI
Oracle’s data integration capabilities, cloud or otherwise, have been best known through Oracle Data Integrator, or ODI. The original ODI was the data equivalent of SOA Suite, implementing Extract Load Transform (ELT) rather than ETL, as this meant the Oracle DB was fully leveraged. This was built on WebLogic Server.
Along Came Cloud
When Oracle Cloud came along, there was a natural need for ODI capabilities. Like SOA Suite, the first evolution was to provide ODI Cloud Service, just as SOA Suite had SOA Cloud Service. Both are essentially the same on-premises product with UIs to manage deployment and configuration.
ODI’s transformation for the cloud led to ODI CS evolving into DIPC (Data Integration Platform Cloud). Very much an evolution, but with a more web-centered experience for designing the integrations. However, DIPC is no longer available (except possibly to customers already using it).
Whilst DIPC had been evolving, the requirement for on-premises ODI capabilities remained. Whilst we don’t know for sure, we can speculate that divergent development was happening, creating overhead, as ODI continued as an on-prem solution. We then saw the arrival of ODI Marketplace, which provides an easier transition, including taking licensing considerations into account. DIPC has been superseded by ODI Marketplace.
Marketplace
Oracle has developed a Marketplace, just like the other major players, so that 3rd-party vendors can offer their technologies on the Oracle cloud, just as you can with Azure and AWS. But Oracle have also exploited it to offer their traditional products, normally associated with on-premises deployments, in the cloud. As a result, we saw ODI Marketplace. A smart move, as it offers the possibility of carrying on-prem licensing into the cloud, along with portability.
So far, the ODI capabilities in their different forms have continued to leverage their WebLogic foundations. But by this time, the Gen2 Oracle Cloud and the organizational structures behind it had become well established and were working up the value stack. Those products in the middleware space have been impacted by both the technology strategies and the organization. As a result, the API products, for example, have been aligned to the OCI-native space, but Integration Cloud has been moved towards the Apps space. In many respects this reflects a low-code vs code-native model.
OCI ODI
Earlier this year (2020) Oracle launched a brand new ODI product, to use its full name, Oracle Cloud Infrastructure Data Integration. This is an OCI-native (i.e. Gen2) solution leveraging microservices technologies.
This new product appears to be very much a ground-up build, as it exploits Apache Spark and Functions as a Service (FaaS) as foundational elements. As a ground-up build, it doesn’t inherit all the adapters the original ODI can offer. This does mean that, as a solution today, it is very focused on some specific needs around supporting data movement between the various Oracle Cloud storage and Database as a Service solutions, rather than general ingestion and extraction processes.
Conclusion
Products are evolving, but the direction of travel appears to be resolving. We are still in that period where there are capability gaps between the Gen2 native solution and the traditional ODI via the Marketplace. As a result, the question becomes less about which product, and more about timing: if I have to invest in using ODI Marketplace now, how do I migrate once the native product catches up?
A couple of years ago I got to discuss some of the design ideas behind API Platform Cloud Service. One of the points we discussed was how API Platform CS kept the configuration of APIs entirely within the platform, which meant some version management tasks couldn’t be applied like any other code. We’ve since solved that problem (and you can see the various tools for this here: API Platform CS tools). The argument made was that your API policies are pretty important; if they get into the public domain, then people can better understand how to go about attacking your APIs, and possibly infer more.
Move on a couple of years: Oracle’s 2nd-generation cloud (OCI) is established and maturing rapidly, and the organisational changes within Oracle mean PaaS was aligned either to SaaS (Oracle Integration Cloud and Visual Builder CS, as examples) or to more cloud-native IaaS. The gateway, which had a strong foot in both camps, eventually became aligned to IaaS (note that this doesn’t mean the latest evolution of the API platform (Oracle Infrastructure API) will lose its cloud-agnostic capabilities, as this is one of the unique values of the solution, but over time the underpinnings can be expected to evolve).
Any service that has elements of infrastructure associated with it has been mandated to use Terraform as the foundation for definition and configuration. The Terraform mandate is good; we get consistency across products with something that is becoming a de facto standard. However, adopting the Terraform approach does mean all of our API configurations are held outside the product. This raises a security risk, as the policy configuration is no longer hidden away; conversely, configuration management becomes a lot easier.
This has had me wondering for a long time: with the use of Terraform, how do we mitigate the risks that API CS’s approach was trying to address? Ultimately, it comes down to the fundamental question of security vs standardisation.
Mitigations
Any security expert will tell you the best security is layered; if one layer is found to be vulnerable, then as long as the next layer is different, you’re not immediately compromised.
What this tells us is that we should look for ways to mitigate, or create additional layers of security to protect, the API configuration. These principles probably need to extend to all Terraform files; after all, they capture the security configuration of not just the OCI API, but also the WAF, which networks are public, and how they connect to private subnets (this isn’t an issue unique to Oracle; it’s equally true for AWS and Azure). Some mitigation actions worth considering:
Consider using a repository that can’t be accidentally exposed to the net; misconfiguration is in the OWASP Top 10, so let’s avoid the mistake if possible. If this isn’t an option, then consider how to mitigate, for example …
Strong restrictions on who can set or change visibility/access to the repo
Configure a simple regular check that looks to see if your repos have been accidentally made publicly visible; the more frequent the check, the smaller the potential exposure window (a minimal sketch of such a check follows this list).
Make sure the Terraform configurations don’t contain any hard-coded credentials; tools exist that can scan for exactly this kind of error, so use them.
Think about access control to the repository. It is well known that a lot of security breaches start within an organisation.
Terraform supports the ability to segment up and inject configuration elements. Using this will allow you to reuse configuration pieces, but it can also be used to minimize the impact of a breach.
Of course, the odds are you’re going to integrate the Terraform into a CI/CD pipeline at some stage, so make sure the credentials into the Terraform repo are also secure; otherwise you’ve undone your previous security steps.
Minimize breach windows by rotating credentials, tokens, and certificates. If you use Let’s Encrypt (the automated certificate-issuing solution supported by the Linux Foundation), then 90-day certificates aren’t anything new.
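For the repo-visibility check mentioned above, here’s a minimal sketch, assuming GitHub-hosted repositories, the requests library, and a personal access token in a GITHUB_TOKEN environment variable; the repo name is hypothetical.

```python
# Sketch of a scheduled visibility check; repo names are illustrative.
import os
import requests

REPOS = ["my-org/oci-terraform-configs"]  # hypothetical repo to watch

def check_visibility():
    headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
    for repo in REPOS:
        resp = requests.get(f"https://api.github.com/repos/{repo}",
                            headers=headers, timeout=10)
        resp.raise_for_status()
        # The GitHub API reports visibility via the 'private' flag.
        if not resp.json().get("private", False):
            print(f"WARNING: {repo} is publicly visible!")

if __name__ == "__main__":
    check_visibility()
```

Run it from cron or a scheduled pipeline; the shorter the interval, the smaller the exposure window if a repo flips to public.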
Paranoid?
This may sound a touch paranoid, but as the joke goes…
Just because I’m paranoid, it doesn’t mean they’re not out to get me
Fundamental Security vs Standardisation?
As it goes, standardisation is actually a dimension of security (this article illustrates the point, and you can find many more). The premise is: which can be assured as the more secure environment, one that is consistent and uses standards (de facto or formal), or one that is non-standard and hard to understand?
A few weeks ago, Oracle announced a new tool for all Oracle Cloud users, including those on the Always Free tier. Cloud Shell provides a Linux (Oracle Linux 7.7) environment to use freely (within your tenancy’s monthly limits); there’s no paying for a VM, no using up your limited set of VMs (for free-tier users), or anything like that.
As you can see, the Shell can be started using a new icon at the top right (highlighted). When you open the shell for the 1st time, it takes a few moments to instantiate, and you’ll see the message at the top of the console window (also highlighted). The window provides a number of controls which allow you to expand to full screen and back again, etc.
The shell comes preconfigured with a number of tools, such as Terraform with the Oracle extensions, the OCI CLI, Java, and Git, so linking to Developer Cloud or GitHub, for example, to manage your scripts is easy (as long as you know your Git CLI – cheat sheet here). The info for these can be seen in the following screenshots.
In addition to the capabilities illustrated, the Shell is set up with:
Python (2 and 3)
SQL*Plus
kubectl
helm
Maven
Gradle
The benefit of all of this is that you can work from pretty much any device you like. It removes the need to manage and refresh security tokens locally to run scripts.
A few things to keep in mind whilst trying to use the Shell:
Access is controlled through IAM, so you can of course grant or block the use of the tool. Even with access to the shell, users will obviously still need access to the other services to use the shell effectively.
The capacity of the home folder is limited to 5GB – more than enough for executing scripts and a few CLI-based tools and plugins, but that will be all.
If the shell goes unused for 6 months, the tenancy admin will be warned; if it remains unused, the storage will be released. You can, of course, re-activate the Shell features at a future date, but it will be a blank canvas again.
For reasons of security, access to the shell using SSH is blocked.
The shell makes for a great environment to manage and perform infrastructure development from, and will be a dream for hard-core Linux users. For those who prefer the comfort of a visual IDE, there are ways around it (e.g. edit in GitHub and sync). But power users will be more than happy with vim or vi.