In a previous blog (here) I wrote about the structure and naming of assets to be applied to OCIR. What I didn't address is the interesting challenge of what happens if my development machine has a different architecture from my target environment. For example, as a developer, I have a nice shiny MacBook Pro with the M1 chipset, which uses an ARM architecture. However, my target cloud environment has been built with, and runs on, an AMD64 chipset. As we're creating binary images, this raises some interesting questions.
As we're creating our containers with Docker, this post addresses how to solve the problem with Docker. Other OCI-compliant container tools will address the problem differently.
Buildx
Buildx is a development feature in Docker which provides a cross-platform build capability. When using buildx we can specify one or more build platform types, using the --platform parameter. In the code below we use it to define the Linux AMD64 architecture mentioned (linux/amd64). But we can make the parameter a comma-separated list targeting different platform types; when that is done, multiple images will be built. By default, the builds happen in sequence, but it is possible to enable additional worker threads in the Docker build process so the builds run concurrently.
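For illustration, a multi-platform invocation might look like the following sketch (the image name here is just a placeholder, and this assumes a builder instance that supports multi-platform builds, such as one created with docker buildx create --use):

docker buildx build --platform linux/amd64,linux/arm64 -t my-svc:latest .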
Unlike the following example (which is only intended for one platform), if you are building for multiple platforms it is recommended that the name includes the platform type the image is built for. For production builds we would promote that idea regardless, just as we see with installer and package-manager-related artifacts.
If you compare this version of the code to the previous blog (here) there are some additional differences. I've now switched to setting the target tag as part of the build. As we're not interested in hanging onto any images built, we've included the target repository in the build statement and push the image straight to OCIR; after all, the images won't run on our machine.
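Pulling that together, a sketch of the kind of build command being described, using the image path that appears later in this post:

docker buildx build --platform linux/amd64 -t iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev --push .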
A container registry is as essential as a Kubernetes service, as you need somewhere to manage your deployable resources. That registry could be the public Docker repository or something else. In most cases, the registry needs to be private, as you don't want to expose your product assets to potential external tampering. As a result, we need a service such as Oracle's container registry, OCIR.
The rest of this blog is going to walk through how to push a container you've built into OCIR, and a gotcha that can trip up users if you make assumptions about how the registry works.
Build container
Let's assume you're building your microservices locally, or retrieving vetted 3rd-party services for use. In both cases, you want to push your assets into OCIR manually rather than have an automated build pipeline do it for you.
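As a hypothetical starting point, assuming a Dockerfile in the current directory and the image name used in the rest of this post, the local build would be:

docker build -t graph-svr:latest .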
This creates a container image locally, and we can see it listed using the command:
docker images
Setup of OCIR
We need an OCIR to target, so the easiest thing is to manually create an OCIR instance in one of the regions; for the sake of this illustration we'll use Ashburn (short code IAD). To help with visibility we can put the registry in a separate compartment as a child of the root. Let's assume we're going to call the registry GraphQL. So before creating your OCIR, set up the compartment as necessary.
A fragment of the compartment hierarchy
In the screenshot, you can see I’ve created a registry, which is very quick and easy in the UI (in the menu it’s in the Developer Services section).
The Oracle menu to navigate to the OCIR service; the UI to create an OCIR
Finally, we click on the button to create the specific OCIR.
Deployment…
Having created the image, and with a repo ready, we can start the steps of pushing the container to OCIR.
The next step is to tag the created image. This has to be done carefully, as the tag needs to reflect where the image is going, using the formula <region short code>.ocir.io/<tenancy name>/<registry name>:<version>. All the registries are addressed via <region short code>.ocir.io; in our case, it would be iad.ocir.io.
docker tag graph-svr:latest iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev
As you may have realized, the tag being applied effectively tells OCI which instance of OCIR to place the container in. Getting this wrong is the core of the gotcha previously mentioned, and we'll elaborate upon it shortly.
To sign in you'll need an auth token, as that is passed as the password. For simplicity, I've passed the token in the docker command, which Docker will warn you is insecure, suggesting it is passed in via a prompt instead. Note my token will have been changed by the time this is published. The username is built on the structure <cloud tenancy name>/identitycloudservice/<username>. The identitycloudservice piece only needs to be included if your authentication is managed through IDCS, as is the case here. The final bit is the URI for the appropriate regional OCIR address, as we've used previously.
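For illustration, the login command follows this pattern (the username and token values are placeholders):

docker login iad.ocir.io -u ociobenablement/identitycloudservice/<username> -p <auth-token>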
With a hopefully successful authentication response, we can push the container. It is worth noting that the Docker authenticated connection will time out, which is why we've put everything in place before connecting. The push command is very simple: it is just the tag name assigned to the artifact, including the version number.
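Using the tag we applied earlier, the push looks like this:

docker push iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev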
When we deal with repositories, from Git to SVN or Apache Archiva to Nexus, we work with a repository that holds multiple different assets with multiple versions of those assets. As a result, when we identify an asset uniquely we would expect to name things based on server/location, repository, asset name, and version. However, here each repository is designed for one type of asset but multiple versions. In reality, a Docker repository works in the same manner (but the extended path impact is different).
This means it becomes easy to accidentally define a tag with an extra element. Depending upon your OCI tenancy privileges, if you get the path wrong OCI creates a new root compartment container repository with a name that is a composite of the name elements after the tenancy, and puts your artifact in that repository, not the one you expected.
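As a hypothetical example of the slip, tagging with an extra path element like this:

docker tag graph-svr:latest iad.ocir.io/ociobenablement/graphql/graphql-svr:v0.1-dev

would, on push, create a root-level repository named graphql/graphql-svr rather than placing the image in the graphql-svr repository.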
We can address this in several ways. The first and probably best option is to automate the process of loading assets into OCIR; once the process is correct, it will remain correct. Another is to adopt a principle of never holding repositories at the root of a tenancy, which means you can then explicitly remove the permissions to create repositories in that compartment (you'll need to explicitly grant the permissions elsewhere in the compartment hierarchy because of policy inheritance). This will cause the push of a container to fail on privileges if the tag is wrong.
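For illustration, the kind of policy statement involved would look like this (the group and compartment names are illustrative):

Allow group GraphQLDevelopers to manage repos in compartment GraphQL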
Visual representation of the structure differences: repository structure vs. registry structure
Condensed to a simple script
These steps can be condensed into a simple, platform-neutral script as follows:
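A minimal sketch of such a script, reusing the names from earlier in this post (the username and auth token are placeholders):

docker build -t graph-svr:latest .
docker tag graph-svr:latest iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev
docker login iad.ocir.io -u ociobenablement/identitycloudservice/<username> -p <auth-token>
docker push iad.ocir.io/ociobenablement/graphql-svr:v0.1-dev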
This script would need modifying for each container being built, but you could easily make it parameterized or configuration driven.
A Note on Registry Standards
Oracle's Container Registry has adopted the Open Registries standard for OCIR. Open Registries comes under the Linux Foundation's governance. This standard has been adopted by all the major hyperscalers (Google, AWS, Azure, etc.). All the technical spec information for the standard is published through GitHub rather than the main website.
We've got the peer review comments back on the completed 1st draft of the book. So I'd like to take this opportunity to thank those who have been involved as peer reviewers, particularly those involved in the previous review cycles. I hope the reviewers found it satisfying to see between iterations that their suggestions and feedback have been taken on board where we can.
The feedback is really exciting to read. There are some tweaks and refinements to do to address the suggestions made.
The work on the Kubernetes and Docker elements, and the chapter which has become available on MEAP, has helped round that aspect off. But importantly, the final chapters help address the wider challenges of logging, and some of the feedback positively reflects this.
To paraphrase the comments, we've addressed the issues of logging which don't get the attention they deserve, which for me is a success.
OKit is a tremendous tool for the visual design and development of your Oracle Cloud environment. Visualizing your networks, the positioning of service gateways and so on makes things a lot easier than filling in web forms or writing Terraform files, as you can see the relationships between the different parts far more easily. This is really the same reason a lot of people use Visio and other tools for this work. The real beauty is that OKit can generate the Terraform and Ansible scripting that can then be used to deliver the implementation.
Okit for visual design of Oracle Cloud
The tool isn't currently an official Oracle product, but something built by the Oracle A-Team (a small team of gurus whose role blends developer advocacy, architecture support for customers with special edge cases, and thought leadership). But we can hope that someone brings it into the fold and perhaps even incorporates it securely into the cloud dashboard. In the meantime, the code in its entirety is available on GitHub.
As a middleware (to use a fading term) or technical architect, I have preferred not to get too involved in the detailed OS-layer considerations when it can be helped (my Infrastructure Architect colleagues will always know more about NICs, port bonding, kernel versions, etc. than I ever will), which is why I prefer to work with PaaS over IaaS.
But there is an undeniable trend where having a greater understanding of the OS is necessary. This is because we're seeing PaaS expand from code-abstracted solutions such as Oracle's Integration Cloud, Mulesoft, and Dell's Boomi, down to everything-as-code in the form of Terraform, Kubernetes, Docker and, of course, microservices.
So what does this have to do with OpenShift? Well, applying those heady aspirations we've had with middleware of "I can build my solution and run it on my platform anywhere" means in the world of microservices I need to find a common denominator on which I can be portable. This comes in the form of Kubernetes and Docker, and we'll probably see service meshes in due course (Istio, Linkerd, etc.). Docker obviously brings the need to understand the OS, albeit not at the level of bonding network connections, but a good level of OS knowledge is still needed to do things properly. Over the last couple of years there has been a fair bit of work to achieve this, with the momentum of the Cloud Native Computing Foundation (CNCF) and the Open Container Initiative (OCI).
This is the final part of the detailed look at the Packt book, Learning Ansible. As the book says in the opening to chapter 6, we're into the back straight, into the final mile. The first of the two final chapters looks at provisioning platforms on Amazon AWS, DigitalOcean, and the use of the very hip and cool Docker, plus updating your inventory of systems given that we have dynamically introduced new ones. The approach is illustrated by not only instantiating servers but delivering a configured Hadoop cluster. As with everything else we've seen in Ansible, there isn't a standardised approach to all IaaS platforms, as that restricts you to the lowest common denominator, which is contrary to the Ansible goals described early on. But deploying the Hadoop elements on the two cloud IaaS providers is common. Although the chapter is pretty short, I did have to read through it more carefully, as the book leverages a lot of features demonstrated in previous chapters (configuration arrays, etc.), which made it harder to see the key element of the interaction with AWS. It does mean that diving into this chapter straight away, although not impossible, requires a bit more investment from the reader to see all the value points. That said, it is great to see through the use of the various features how easy it is to set up provisioning in the cloud, and the inventory update. Perhaps the win would have been to first show the simple provisioning and then the clever approach.
Chapter 7 focuses on Deployment. When I read this, I was a little nonplussed; hadn't we been reading about this in the previous 6 chapters? But when you look at the definition provided:
“To position (troops) in readiness for combat, as along a front or line.” Excerpt From: “Learning Ansible.” Packt Publishing.
You can start to see the true target of what we're really thinking about, which is the process of going from software build to production readiness. So having gone through the software packaging activities, you need to orchestrate the deployment across potentially multiple servers in a server farm. This orchestration piece really just pulls together everything that has been explained before, but also shares some Ansible best practice. Then finally there is an examination of the Ansible approach for nodes to pull deployments and updates.
The final piece of the book is an Appendix which looks at the work to bring Ansible to the Windows platform, Ansible Galaxy, and Ansible Tower. Ansible Galaxy is a repository of roles built by the Ansible community. Ansible Tower provides a web front end to the Ansible server. The Tower product is the commercial side of the Ansible company, and effectively sales here fund the full-time Ansible development effort.
So to summarise …
The Learning Ansible book explains everything from first principles up to the very rich capabilities of building and packaging software, instantiating cloud servers or containers, through to configuring systems and deploying applications into new environments, and then capturing instantiated system details into the Ansible inventory. How Ansible compares with the more established solutions in this space, in the form of Puppet and Chef, is discussed, along with the pros and cons of the different tools. All the way through, the book has been written in an easy, engaging manner; you might even say wonderfully written. The examples are very good, with the possible exception of 2 cases (merely good in my opinion), and are supported with very clear explanations that demonstrate the power of the Ansible product. Even if you choose not to use Ansible, this book does an excellent job of showing the value of not resorting to the 'black art' of system build and configuration, and of suggesting good ways to realise automation of this kind of activity; in many places it is undoubtedly thought-provoking.