Outside of my Oracle Cloud-related content, we’ve just published an article on DZone. Those who follow this blog will be familiar with the article’s theme, as it relates to the Log Simulator work. We’ve also written for Devmio, although we don’t yet know when the article will be published and whether the content will be publicly available or behind their paywall.
The last week or so has been the DeveloperWeek 23 Conference, in hybrid form, with the physical event last week and online this week. Circumstances prevented me from attending physically, but yesterday I was honored with the opportunity to present virtually. My session covered the adoption of API streaming as an alternative to polling APIs for the latest data state and updates.
Just before the Christmas break, I got to record an excellent podcast with Anatolii of UNmiss. It was a great conversation about Cloud Integration, APIs, and approaches to Cloud-based integration. While I am not in a consulting role in the conventional sense, a lot of an Evangelist’s task is still to listen, understand, and, when necessary, challenge assumptions and help people understand how technologies can help address problems. This might include sketching out a journey of evolution and improvement. During the podcast, we discussed some of these ideas.
In addition to some of the practices we’ve used, the conversation touched upon books. My books are on the sidebar, including links to Manning, who, as a publisher, I’d recommend. I’ve previously blogged some reading recommendations and written some book reviews, which may be of interest to anyone following up.
We haven’t blogged too much recently, as we have been busy producing content for my employer, Oracle, working with Software Engineering Daily, and developing a collaborative book. So, I thought I’d pull together some links to these new resources.
We’ve been busy putting together a number of Oracle Architecture Center assets over the last week. This has included building LogSimulator extensions that can be run in a very simple manner using just a single file, although that limits the payloads that can be sent to OCI (if you take the appropriate custom file from the LogSimulator, you do need to make one minor tweak). The code has also been added to the Oracle GitHub repository here in a manner that doesn’t require the full tool. There is, of course, a price to pay for the simplified implementation: the notifications being sent and received are hardwired into the code rather than driven through the simulator’s configuration options.
The decoupling has been done by implementing the interface’s custom methods in a class without the implements declaration; we then extend that base class and apply the implements declaration at that level.
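As a sketch of the pattern (class and method names here are illustrative rather than the actual LogSimulator code):

// The interface the simulator expects for custom output methods.
public interface CustomSender {
    void sendEvent(String payload);
}

// Holds the method implementations but deliberately omits the
// 'implements' declaration, so this file can be used standalone.
public class NotificationSenderBase {
    public void sendEvent(String payload) {
        // ... call OCI Notifications with the payload here ...
    }
}

// A thin subclass binds the implementation to the interface;
// the inherited public method satisfies the contract.
public class NotificationSender extends NotificationSenderBase implements CustomSender {
}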
While Notifications can accept log events, it is better suited to JSON payloads. As the simulator can tailor the content being sent using some formatting, it does not care whether the events provided are pre-formatted as JSON objects, making it an easy tool for testing the configuration of OCI Notifications.
Unit testing as well
In addition to the new channel, as previously mentioned, we have been making some code improvements. To support that, we have started adding unit tests and double-checking that the code will compile under Java. To keep the dependencies down, we’re making use of Java assert statements rather than pulling in a framework like JUnit, but the implementation ideas are very similar. As the tests use Java asserts, assertions do need to be enabled on the command line; for example:
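A representative invocation might look like this, where -ea (short for -enableassertions) switches the asserts on (the test class name here is illustrative):

java -ea LogGeneratorTest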
My blogging activities have expanded a little bit further. Oracle staff (particularly those involved with Product Management) and the Ace community have been contributing to a Medium publication called Oracle Devs (go here). The content here is first-rate, technically comprehensive, and from some exceptional technical people. So I’d recommend tracking it if you aren’t already, and I feel very honored to have several posts already included, which can be found at:
For those that follow me here, not to worry; this is the center of the universe for my blogging activity, and I will tie back anything that is unique to Medium to posts here.
In addition to that, I also have a profile on the official Oracle Blog (here), and a range of posts have been picked up via the PaaS Community which haven’t been directly linked to my Oracle blog profile but can be found with this link.
One of the areas I present publicly is the use of Fluentd, including the use of distributed and multiple nodes. As many events have been virtual, it has been easy to demo everything from my desktop – everything is set up so I can demo things very easily. While doing this all on one machine does point to how compact and efficient Fluentd is, as I can run multiple instances concurrently, it does undermine the distributed capabilities somewhat.
Add to that the fact that I now work for Oracle, and it makes sense to use OCI resources. With that, I have been developing scripts to configure Ubuntu VMs for the demo environments, installing Ruby, Fluentd, and the various gems needed, and pulling in the relevant configurations. All the assets can be found in the GitHub repository https://github.com/mp3monster/logging-demos. The repository readme includes plenty of information as well.
While I’ve been putting this together using OCI, the fact that everything is based on Ubuntu should mean it can be run locally on VMs or WSL2, and adapted for macOS as well. The way the environment has been configured means you can still run on Ubuntu with a single node if desired.
Additional Log Destinations
As the demo will typically be run on OCI, we can not only run the demo with a multinode setup, but we have also extended the setup with several inclusion files so we can utilize the OCI services OpenSearch and OCI Log Analytics. If you don’t want to use these services, simply replace the contents of the relevant inclusion files with the contents of the dummy_inclusion.conf file provided.
Representation of the Demo setup
The configuration works by each destination having one or two inclusion files. The files with the postfix label-inclusion.conf contain the configuration to direct traffic to the respective service, with a configuration that will push log events at a very high frequency to the destination. The second inclusion file injects the duplication of log events to each service. The inclusion declarations in the main node’s Fluentd config file reference an environment variable that should provide the path to the inclusion file to use. As a result, by changing the environment variable to point to a dummy file, it becomes possible to configure out the use of one of the services. The two inclusions mean we can keep the store declarations compact and show multiple labels being used. With the OpenSearch setup, we have a variant of the inclusion file model where the route inclusion can reference the logic that we would use in the label directly within the store declaration.
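To illustrate the idea, here is a much-simplified sketch of the main node’s duplication section (the environment variable names are illustrative; the real files are in the repository):

<match **>
  @type copy
  # each store's content comes from whichever file the env var points at;
  # point the variable at dummy_inclusion.conf to configure a service out
  <store>
    @include "#{ENV['ANALYTICS_INCLUSION']}"
  </store>
  <store>
    @include "#{ENV['OPENSEARCH_INCLUSION']}"
  </store>
</match>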
The best way to see the use of the inclusions is to experiment with setting the different environment variables to reference the different files and then using the Fluentd dry-run feature (more on this in the book).
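For example (the config file name here is assumed):

export OPENSEARCH_INCLUSION=./dummy_inclusion.conf
fluentd --dry-run -c node1.conf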
Setup script
The setup script performs a number of tasks (a sketch follows the list), including:
Pulling from Git all the resources needed in terms of configuration files and folders
Retrieving the plugins that may be needed.
Setting up the various environment variables for:
Slack token
environment variables to reference inclusion files
shortcut environment variables and aliases
network (IP) address for external services such as OpenSearch
Setting up a folder for OCI tokens needed.
Setting up temp folders to be used by OCI Plugins as a file-based cache.
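As a rough sketch of the shape of that script (the real, complete version lives in the logging-demos repository; values and plugin names here are placeholders):

#!/bin/bash
# pull the configuration files and folders from Git
git clone https://github.com/mp3monster/logging-demos.git
# retrieve the plugins that may be needed
sudo gem install fluentd
fluent-gem install fluent-plugin-opensearch
# environment variables for tokens, inclusion files, shortcuts, and addresses
export SLACK_TOKEN="<your-token>"
export ANALYTICS_INCLUSION=./dummy_inclusion.conf
export OPENSEARCH_IP=<opensearch-address>
alias fd='fluentd -c'
# folders for OCI tokens and the OCI plugins' file-based cache
mkdir -p ~/.oci
mkdir -p /tmp/fluentd-cache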
Feeding the Log Analytics service is a more complex process to set up, as the feeds need to have metadata about the events being ingested. The downside is that the configuration effort is greater, but the payback is that it becomes easier to extract meaningful information quickly, because the service has a greater understanding of the content. For example, attributing the logs to a type of source means the predefined or default log formats are immediately understood, and maximum meaning can be retrieved from the log event.
Going to OCI Log Analytics does cut out the need for the Connector Hub, which allows rules and routing to be defined for different OCI services; this can be functionally helpful, for example, in directing log events to PagerDuty.
I’ve been a fan of Railroad syntax diagrams for a long time. I’ve always found them an easy way to understand the syntactical options and the reserved/keywords in an efficient manner.
Example of Railroad Syntax diagram
I have been digging around in the documentation to find a keyword in the OCI Policies syntax that the common use cases don’t need. After a bit of rooting around, I found what I needed, but a Railroad representation would have helped me get the expression correct with far less effort.
Once I solved my problem, I decided to see if I could find something that could easily create railroad diagrams and encountered a fantastic bit of code on GitHub from Tab Atkins Jr. It’s a neat bit of JavaScript, which can even be run from its GitHub Pages site – go here. Tab has taken the time to document the tool well, so working out the syntax to define a diagram is straightforward (not that you need to read it much, as the tool is well written).
The following diagrams show the syntax for writing OCI Policies, first in a single image, and then with the full syntax broken into two images to make it a little easier to read on screen and to reflect the fact that you often don’t need the where clause.
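To give a feel for the tool’s DSL, a simplified subset of the Allow statement might be expressed like this (a sketch only; the full definitions are in my repository):

Diagram(
  Terminal('Allow'),
  Choice(0, Terminal('group'), Terminal('dynamic-group')),
  NonTerminal('subject-name'),
  Terminal('to'),
  Choice(0, Terminal('inspect'), Terminal('read'), Terminal('use'), Terminal('manage')),
  NonTerminal('resource-type'),
  Terminal('in'),
  Choice(0, Terminal('tenancy'), Sequence(Terminal('compartment'), NonTerminal('compartment-name'))),
  Optional(Sequence(Terminal('where'), NonTerminal('conditions')), 'skip')
)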
If the diagrams need to be updated, the source to use with the tool is in my GitHub repository. A really cool feature of the utility is that the information to populate the editor view is included in the URL (it does make for a long URL), meaning this link will take you directly to the view and editor if you want to tinker with the definition. So the links are:
A container registry is as essential as a Kubernetes service, as you need somewhere to manage the deployable resources. That registry could be the public Docker repository or something else. In most cases, the registry needs to be private, as you don’t want to expose your product assets to potential external tampering. As a result, we need a service such as Oracle’s container registry, OCIR.
The rest of this blog will walk through how to push a container you’ve built into OCIR, and a gotcha that can trip up users if you make assumptions about how the registry works.
Build container
Let’s assume you’re building your microservices locally or retrieving and vetting 3rd-party services for use. In both cases, you want to push your assets into OCIR manually rather than have an automated build pipeline do it for you.
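For illustration, assuming a Dockerfile in the current folder and the image name used later in this post:

docker build -t graph-svr:latest .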
This creates a container locally, and we can see the container listed using the command:
docker images
Setup of OCIR
We need an OCIR to target, so the easiest thing is to manually create an OCIR instance in one of the regions; for the sake of this illustration, we’ll use Ashburn (short code IAD). To help with visibility, we can put the registry in a separate compartment as a child of the root. Let’s assume we’re going to call the registry GraphQL. So, before creating your OCIR, set up the compartment as necessary.
fragment of the compartment hierarchy
In the screenshot, you can see I’ve created a registry, which is very quick and easy in the UI (in the menu it’s in the Developer Services section).
The Oracle menu to navigate to the OCIR service
The UI to create an OCIR
Finally, we click on the button to create the specific OCIR.
Deployment…
Having created the image, and with a repo ready, we can start the steps of pushing the container to OCIR.
The next step is to tag the created image. This has to be done carefully, as the tag needs to reflect where the image is going, using the formula <registry address>/<tenancy name>/<registry name>:<version>. All the registries are addressed by <region short code>.ocir.io; in our case, that is iad.ocir.io.
docker tag graph-svr:latest iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev
As you may have realized, the tag being applied effectively tells OCI which instance of OCIR to place the container in. Getting this wrong is the core of the gotcha previously mentioned, and we’ll elaborate upon it shortly.
To sign in, you’ll need an auth token, as that is passed as the password. For simplicity, I’ve passed the token in the docker command, which Docker will warn you is insecure, suggesting it be passed in as part of a prompt. Note that my token will have been changed by the time this is published. The username is built on the structure <cloud tenancy name>/identitycloudservice/<username>. The identitycloudservice piece only needs to be included if your authentication is managed through IDCS, as is the case here. The final bit is the URI for the appropriate regional OCIR address, as we’ve used previously.
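Putting that together, the login command looks something like this (with the placeholders replaced by your own values):

docker login -u '<tenancy name>/identitycloudservice/<username>' -p '<auth token>' iad.ocir.io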
With, hopefully, a successful authentication response, we can push the container. It is worth noting that the Docker authenticated connection will time out, which is why we’ve put everything in place before connecting. The push command is very simple; it is just the tag name assigned to the artifact, including the version number.
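Using the tag we applied earlier:

docker push iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev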
When we deal with repositories, from Git to SVN or Apache Archiva to Nexus, we work with a repository that holds multiple different assets, each with multiple versions. As a result, when we identify an asset uniquely, we would expect to name things based on server/location, repository, asset name, and version. However, here each repository is designed for one type of asset but multiple versions. In reality, a Docker repository works in the same manner (but the extended path impact is different).
This means it becomes easy to accidentally define a tag with an extra element. Depending upon your OCI tenancy privileges, if you get the path wrong, OCI creates a new root-compartment container repository with a name that is a composite of the name elements after the tenancy, and puts your artifact in that repository, not the one you expected.
We can address this in several ways. First, and probably the best option, is to automate the process of loading assets into OCIR; once the process is correct, it will remain correct. Another is to adopt a principle of never holding repositories at the root of a tenancy, which means you can then explicitly remove the permissions to create repositories in that compartment (you’ll need to explicitly grant the permissions elsewhere in the compartment hierarchy because of policy inheritance). This will result in the process of pushing a container failing because of privileges if the tag is wrong.
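As an illustration, a policy along these lines (group and compartment names are hypothetical) grants repository management only within a designated compartment:

Allow group RegistryAdmins to manage repos in compartment registries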
Visual representation of structure differences
Repository Structure
Registry Structure
Condensed to a simple script
These steps can be condensed into a simple, platform-neutral script as follows:
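Assembled from the steps above, it would look roughly like this (credentials as placeholders):

docker build -t graph-svr:latest .
docker tag graph-svr:latest iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev
docker login -u '<tenancy name>/identitycloudservice/<username>' -p '<auth token>' iad.ocir.io
docker push iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev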
This script would need modifying for each container being built, but you could easily parameterize it or make it configuration-driven.
A Note on Registry Standards
Oracle’s Container Registry has adopted the Open Registries standard for OCIR. Open Registries comes under the Linux Foundation’s governance. This standard has been adopted by all the major hyperscalers (Google, AWS, Azure, etc.). All the technical spec information for the standard is published through GitHub rather than the main website.