Last week I was fortunate enough to have the opportunity to present at the Software Architecture Summit as part of the Bucharest Tech Week conference. My presentation, Monoliths in a Microservice World, was all new content that, by chance, worked well, bringing together a number of points made by other speakers. The presentation addressed the challenges of adopting microservices, whether monoliths still have a place in modern IT, and, for those of us not fortunate enough to be working for one of the poster children of microservices such as Netflix or Amazon, how we can get our existing monoliths playing nicely with microservices.
The conference may not have the size of Devoxx (yet), but it certainly had quality, with presenters from globally recognized organizations such as Google (Abdelkfettah Sghiouar), Thoughtworks (Arne Lapõnin), Vodafone's IT services business unit _VOIS (Stefan Ciobanu), and Bosch, as well as subsidiaries of companies like DXC (Luxoft) and the rapid-growth SaaS vendor LucaNet.
As a presenter, you always want to walk the tightrope of being at the biggest conferences to maximize the reach of your message, while at the same time wanting the experience to be friendly and personable, which often means slightly smaller conferences. The Software Architecture Summit balanced that really well; rather than lots of smaller breakout sessions, the conference focussed on a single auditorium for a large number of attendees, with presentation slots varying in length depending upon the subject matter. If a session didn't interest you, then there were plenty of exhibitors to talk with – although, from what I saw, the auditorium was full during the sessions, reflecting the interest in the content.
“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” – John F. Woods
The conference organizers (Universum) certainly put in the effort to ensure the presenters were looked after. It is the little touches that really make the difference: taking care of logistics, which can be as simple as organizing airport transfers; a letter of thanks waiting for you at the hotel after the event; organizing a meal for the presenters at a local restaurant; and so on.
Permissions on ssh key files on Windows can be rather annoying. If you try to use ssh, it will protest about the permissions and refuse to make the secure connection. On Linux, it is easy to correct the permissions with a chmod command (chmod 700 *.key).
Update
Since originally writing this blog post, we came across a cmd (.bat) script that can alter the file permissions for Windows 10 and later (the basis of the script can be found here). With the script's directory in the PATH variable, we can call it from anywhere with the command protect-key.bat my-key-file.key, and it will correct the permissions accordingly.
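I haven't reproduced the linked script here, but the underlying fix generally comes down to two icacls calls: strip the inherited permissions, then grant the current user read-only access. As a rough, hedged sketch (not the script referenced above), a Python equivalent might look like this:

```python
import getpass
import subprocess
import sys

def protect_key(path: str) -> None:
    """Lock a key file down so only the current user can read it (Windows)."""
    user = getpass.getuser()
    # Remove inherited permissions so only explicit grants remain
    subprocess.run(["icacls", path, "/inheritance:r"], check=True)
    # Replace any existing grant for the current user with read-only access
    subprocess.run(["icacls", path, "/grant:r", f"{user}:R"], check=True)

if __name__ == "__main__":
    protect_key(sys.argv[1])
```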
A while back, I was invited to contribute to Devmio (the knowledge portal driven by the publishers involved with JAX London and other events). After a little bit of delay from my end, I offered an article that they decided to incorporate into DevOps magazine.
While Oracle's IDCS and IAM provide identity management for authentication and authorization across OCI and SaaS such as HCM, SCM, and so on, most software ecosystems need more than that. You may have custom applications, COTS, or non-Oracle SaaS that need more than just authentication and need some of your people's data to be replicated to them.
We could solve this with custom integrations, or we can exploit an IETF standard called SCIM (System for Cross-domain Identity Management). The beauty of SCIM is that it brings a level of standardization to the mechanics of sharing personal identity information, addressing the fact that this data goes through a lifecycle.
The lifecycle would include:
Creation of users.
Users move in and out of groups as their roles and responsibilities change.
User details change, reflecting life events such as changing names.
Users leave as they're no longer employees, have deleted their account for the service, or have exercised their right to be forgotten.
It means any SCIM-compliant application can be connected to IDCS or IAM, and it will receive the relevant changes. Not only does this standardize the process of integration, it also helps handle compliance needs, such as ensuring data is correct in other applications and that data is not retained any longer than needed (removal in IDCS can trigger the removal elsewhere through the SCIM interface). In effect, we have the opportunity to achieve master data management around PII.
SCIM works through the use of standardized RESTful APIs. The payloads have a standardized set of definitions, which also allows for customized extensions – the customization is a lot like how LDAP can accommodate additional data.
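As an illustration of what those standardized payloads look like, here is a hedged sketch of creating a user through a SCIM 2.0 /Users endpoint; the base URL and token are placeholders rather than IDCS-specific values:

```python
import requests

BASE_URL = "https://idcs-example.identity.oraclecloud.com/admin/v1"  # placeholder tenancy URL
TOKEN = "<access-token>"  # OAuth access token, acquisition omitted

# Minimal SCIM 2.0 user payload using the core user schema;
# additional attributes can be carried via extension schemas.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jsmith@example.com",
    "name": {"givenName": "Jane", "familyName": "Smith"},
    "emails": [{"value": "jsmith@example.com", "type": "work", "primary": True}],
    "active": True,
}

response = requests.post(
    f"{BASE_URL}/Users",
    json=new_user,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["id"])  # id assigned by the identity service
```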
The value of SCIM is such that there are independent service providers who support and aid the configuration and management of SCIM connections to other applications.
Securing such data flows
As this is flowing data that is, by its nature, very sensitive, we need to maximize security. Risks that we should consider include:
Malicious intent that results in the introduction of a fake SCIM client to egress data
Use of the SCIM interface to ingress poisoned data (use of SCIM means that poisoned data could then propagate to all the identity-connected systems).
Identity hijacking – manipulating an identity to gain further access.
There are several things that can be done to help secure the SCIM interfaces. This can include the use of an API Gateway to validate details such as the identity of the client and where the request originated from. We can look at the payload and validate it against the SCIM schema using an OCI Function.
We can block operations by preventing the use of certain HTTP verbs and/or URLs for particular origins, or for all origins.
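As a hedged sketch of the sort of check such a function could perform (illustrative logic only, not the OCI Functions handler signature), we can reject disallowed verbs and payloads that don't carry the expected SCIM schema before anything reaches the identity store:

```python
import json

ALLOWED_METHODS = {"GET", "POST", "PATCH", "DELETE"}  # e.g. block PUT entirely
CORE_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def validate_scim_request(method: str, body: str) -> tuple[bool, str]:
    """Return (ok, reason) for a request aimed at a SCIM /Users endpoint."""
    verb = method.upper()
    if verb not in ALLOWED_METHODS:
        return False, f"HTTP verb {verb} is not permitted"
    if verb == "POST":
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            return False, "payload is not valid JSON"
        if CORE_USER_SCHEMA not in payload.get("schemas", []):
            return False, "payload does not declare the SCIM core user schema"
        if "userName" not in payload:
            return False, "userName is required when creating a user"
    return True, "ok"

# A PUT (or a malformed POST) gets rejected at the gateway/function layer
print(validate_scim_request("PUT", "{}"))
print(validate_scim_request("POST", '{"schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"], "userName": "jsmith"}'))
```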
We’ve just had a new article published for Software Engineering Daily which looks at monitoring in multi-cloud and hybrid use cases and highlights some strategies that can help support the single pane of glass by exploiting features in tools such as Fluentd and Fluentbit that perhaps aren’t fully appreciated. Check it out …
I previously wrote about how much I like the Lens app's K8s dashboard capabilities, which you get without needing to deploy the Kubernetes Dashboard. Sadly, K8sLens has recently diverged from being purely open source to a licensed tool, with an upstream open-source version called OpenLens (article here). It has fallen to individual contributors to maintain the OpenLens binary (here) and make it available via Chocolatey and Brew. The downside is that one of the nice features of K8sLens has been removed – the ability to look at container logs. If you read the Git repo issue on this matter, you'll see that a lot of people are not very happy about this.
If you read through all the commentary on the ticket, you’ll eventually find the following part of the post that describes how the feature can be reintroduced.
In short, if you use the extensions feature and provide the extension reference @alebcay/openlens-node-pod-menu, then the option will be reintroduced. The extension itself can be found here:
I'm not sure why, but I did find the installation a little unstable and needed to reinstall the plugin, restart OpenLens, and reenable the plugin. But once we got past that, as you can see below, the plugin delivered on its promise.
The problem with the licensing is that it doesn't distinguish between me as an individual using Lens for my own personal use and using Lens for commercial activities. The condition sets out:
ELIGIBILITY: You or your company have less than $10M in annual revenue or funding.
Given this wording, I can't use the licensed version, even if I were working on an open-source project in a personal capacity, as the company I'm employed by has more than $10 million in revenue. For me, the issue is that $200 per year is a lot for something I only need to use intermittently. I get that K8sLens includes additional features such as Lens Security (which performs vulnerability management) and Lens Teamwork, along with support – features and services oriented to commercial use – but these are features I don't actually want or need. Lens Kubernetes sounds like an interesting proposition (a built-in distribution of K8s), but when many others already provide this freely – from Docker Desktop to Kind – it seems rather limited in value.
We did try installing Komodor, given its claim of an always-free edition. But on my Windows 11 Pro (developer early access) installation, it failed to install, as you can see:
The API blog is part of a larger piece that explores the use of APIs across some of the industries Oracle works in and how APIs can enable future innovations. The piece on OCI Queue is almost the opposite of the APIs material: very detailed and implementation-specific, covering the technical details of Terraform and the STOMP messaging protocol.
DZone
We also have an article over on DZone now. For regular readers of my blog, this is familiar stuff, as it looks at the LogGenerator utility I've built and the work on custom extensions so it can be used with OCI to generate Notifications, messages on OCI Queue, and more.
Presenting…
I'll be presenting at Bucharest Tech Week, with a presentation exploring monoliths in the context of microservice solution delivery.
Outside of my Oracle cloud-related content, we've just published an article on DZone. Those who follow this blog will be familiar with the article's theme, as it relates to the Log Simulator work. We've also written for Devmio – although we don't yet know when the article will be published, or whether the content will be publicly available or behind their paywall.
The last week or so has been the DeveloperWeek 23 conference – in hybrid form, with the physical event last week and the online event this week. Circumstances prevented me from attending physically, but yesterday I was honored with the opportunity to present virtually. My session covered the adoption of API streaming as an alternative to having to poll APIs for the latest data state/updates.
One of the benefits of using cloud providers is the potential to scale solutions to meet demand by scaling out with additional compute nodes, without concern for physical capacity limits (cost considerations are smaller, but still apply). Dynamic scaling is typically driven by monitoring defined groups of nodes for CPU use; when demand hits a threshold, an additional node is spun up. Kubernetes can support some more nuanced scaling configurations, but this tends to be used for microservice-style solutions.
This is all well and fine, but things get a lot more challenging if we need to finesse the configuration to ensure multiple container instances (such as microservices) can be allowed on a single node without compromising the spread of containers across the nodes of a cluster that provides resilience. We also want to manage the number of active containers – we don't want unnecessary volumes of containers effectively idling. So how do we manage this?
In addition to this, what if we're using services that demand tens of seconds or even minutes to be spun up to a point of readiness – such as instantiating database servers, or adding new nodes to data caches such as Coherence, which need time to clone data and adjust their demand-balancing algorithms? Likewise for more traditional application servers. Some service calls will require upstream configuration changes, such as altering load-balancing configurations. Cloud-native load balancers typically can understand node groups, but what if your configuration is more nuanced? If you're running services such as ActiveMQ, you need to update all the impacted nodes in a cluster to be aware of the new node. The bottom line is that some solutions, or parts of solutions, need lead time to handle increased demand.
This points to needing more advanced, nuanced metrics than simply current CPU load, and possibly a scaling algorithm that is aware of the lead times of the different types of functionality involved. So how can we achieve more nuanced, informed scaling logic? You could look at the inbound traffic on firewalls or load balancers, but this will require a fair bit of additional effort to apply application context. For example, is the traffic just being directed to static page content? We do need to derive some context so that the right rules are triggered. Even in simple K8s-only hosted solutions, a cluster may host services that aren't related to the changes in demand but would end up being scaled as a result.
A use case perspective
Let's look at this from a hypothetical use case. We're a music streaming service. Artists can control when their music is made available, but the industry norm is 12:01am on a Friday morning. There is always a demand spike at this time, but the size of the spike can vary, with some artists triggering larger spikes (this doesn't correlate directly with an artist's popularity; some smaller groups have very enthusiastic fans) – so dynamic scaling is needed, and it has to be reactive to demand. Reporting, analytics, and payment services see cyclical bumps around the monthly financial periods. These monthly cyclic activities can also be done during quieter operating hours. So it is clear we may wish to apply logical partitioning, but for maximum cost efficiency keep everything in the same cluster. There are correlations between different service demands – certain data services see increased demand during both the release spikes and the reporting period. The bottom line is we need to not only address dynamic scaling but also tailor the scaling to different services at different times.
Back to our question – how do we manage the scaling? We could monitor the firewall and load balancer logs and analyze the kinds of requests being received, but that would need additional processing logic to determine which services are receiving the traffic. Our API gateway, however, is likely to have that intelligence implicitly in place, as we have different API policies for the different types of endpoints. So we can monitor specific endpoints or groups of endpoints very easily – meaning we can infer the types of traffic demand and how to respond. Not only that, we may have API plans in place so we can control priority and prevent the free versions of our service from using APIs to initiate high-quality media streams. So we already have business- and process-specific meaning attached to the traffic. Linking scaling controls such as KEDA (Kubernetes Event-driven Autoscaling) directly to measurements such as plans and specific APIs therefore creates a relatively easy way to control scaling. Further, we may also use the gateways to provide a rate throttle so it's not possible to crash our backend with an instantaneous spike. This strategy isn't that different from an approach we've demonstrated for scaling message processing backends (see here and here).
Representation of how we can use the metrics from a gateway to not only support the scaling of a K8s cluster but also other cloud services
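In practice, KEDA would express this declaratively – for example, a ScaledObject with a Prometheus or metrics-API trigger fed from gateway statistics. Purely to illustrate the principle, here is a hedged Python sketch of the same control loop; the metrics URL, capacity figure, and deployment name are all hypothetical:

```python
import math

import requests
from kubernetes import client, config

GATEWAY_METRICS_URL = "https://gateway.example.com/metrics/stream-api/requests-per-minute"  # hypothetical
REQUESTS_PER_REPLICA = 500            # assumed capacity of one backend instance
DEPLOYMENT = "media-stream-backend"   # hypothetical deployment name
NAMESPACE = "streaming"

def desired_replicas() -> int:
    # Ask the gateway how much traffic a specific API (or plan) is currently seeing
    rpm = float(requests.get(GATEWAY_METRICS_URL, timeout=10).json()["value"])
    return max(1, math.ceil(rpm / REQUESTS_PER_REPLICA))

def scale() -> None:
    config.load_kube_config()  # use load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        DEPLOYMENT, NAMESPACE, {"spec": {"replicas": desired_replicas()}}
    )

if __name__ == "__main__":
    scale()
```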
We can also use the data KEDA sees in terms of node readiness to adjust the API gateway rate limiting dynamically, either as a direct trigger from the number of active container instances or by triggering a more advanced check, because we still have to address the services that need more lead time before relaxing the rate limiting.
Handling slow scaling services
So we've got a way of targeting the scaling of services. But what do we do to address the slower-scaling services we described? With the data dimensioned, we can do a couple of things. Firstly, we can use the rate of change to determine how many nodes to add. With a very sudden increase in demand, adding one node at a time will make the service performance stutter as it scales, with each piece of additional capacity suddenly consumed before the service has to scale again. Factoring in the rate of change of the workload, along with the context of which services could generate the increased workload, can be used not only to determine which databases may need scaling but also to adjust the workload threshold that actually triggers the introduction of a node. So a sudden increase in demand on services that are known to create a lot of DB activity is met by dropping the threshold that triggers new nodes from, say, 75% to 25% – we effectively start the nodes on a lower threshold, which means the process starts sooner.
With different rates of utilization growth, we can see that we need to alter the trigger threshold for launching new resources
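As a simple, hedged illustration of that idea (the numbers are arbitrary rather than a recommendation), the trigger threshold can be derived from the observed rate of growth in demand:

```python
def scaling_threshold(growth_rate_pct_per_min: float,
                      base_threshold: float = 75.0,
                      floor: float = 25.0) -> float:
    """Return the utilization (%) at which a new node should be requested.

    The faster demand is growing, the lower the threshold, so slow-starting
    services (databases, caches, etc.) begin provisioning earlier.
    """
    # Drop the threshold by 5 points for every 1%/min of growth, never below the floor
    return max(floor, base_threshold - growth_rate_pct_per_min * 5.0)

# A gentle 1%/min climb keeps the trigger near 70%, while a sharp 10%/min
# spike drops it to the 25% floor so new nodes are requested much sooner.
print(scaling_threshold(1.0), scaling_threshold(10.0))
```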
For this to be fully effective, we do need the gateway tracking traffic both internally (East-West) and inbound (North-South). Using the gateway with East-West traffic also means that we can establish an anti-corruption layer.
Conclusion
Not all parts of a solution will be instantaneous in scaling – regardless of how fast the code startup cycle might be, some services have to address the need to move large chunks of data before becoming ready, e.g., in-memory databases. Some services may not have been built for the latest business demands and the ability to exploit cloud scale and dynamic scaling. Some services need time to adjust configurations.
We may also need to adjust our protection mechanisms if we’re protecting against service overloading.
Scaling capabilities that have latency in the scaling process can be addressed by achieving earlier, more intelligent recognition of need than simply waiting for CPU load to hit a threshold. The intelligence that can be derived from implicit service context simplifies the effort of creating such recognition and makes it easier to spot the change. API gateways, message queues, message streams, and shared data storage are all means that, by their nature, carry implicit context.
Pushing the recognition towards the front of the process creates milliseconds or possibly seconds more warning of the demand than waiting for the impacted nodes to see compute spikes.