DeveloperWeek 23 – Stop Polling, Let’s Go Streaming


The last week or so has been the DeveloperWeek 23 Conference – in hybrid form, with the physical event last week and online this week. Circumstances prevented me from attending physically, but yesterday I was honored with the opportunity to present virtually. My session covered the adoption of API streaming as an alternative to polling APIs for the latest data state and updates.

Developer Week logo

My presentation is available below.

API Gateways to manage dynamic scaling


One of the benefits of using cloud providers is the potential to scale solutions to meet demand by scaling out with additional compute nodes, without concern for physical capacity limits (cost considerations are smaller, but still apply). Dynamic scaling is typically driven by monitoring defined groups of nodes for CPU use; when demand hits a threshold, an additional node is spun up. Kubernetes can support some more nuanced scaling configurations, but this tends to be used for microservice-style solutions.

This is all well and fine, but things get a lot more challenging if we need to finesse the configuration so that multiple container instances (such as microservices) can be allowed on a single node without compromising the distribution of containers across the nodes of a cluster for resilience. We also want to manage the number of active containers – we don’t want unnecessary volumes of containers effectively idling. So how do we manage this?

In addition to this, what if we’re using services that need tens of seconds or even minutes to be spun up to a point of readiness – such as instantiating database servers, or adding new nodes to data caches such as Coherence, which need time to clone data and adjust the demand-balancing algorithms? Likewise for more traditional application servers. Some service calls will trigger upstream configuration changes, such as altering load-balancing configurations. Cloud-native load balancers typically understand node groups, but what if your configuration is more nuanced? If you’re running services such as ActiveMQ, you need to update all the impacted nodes in a cluster to be aware of the new node. The bottom line is that some solutions, or parts of solutions, need lead time to handle increased demand.

This points to the need for more advanced, nuanced metrics than simply current CPU load, and possibly for the scaling algorithm to be aware of the lead times of the different types of functionality involved. So how can we build more nuanced, informed scaling logic? You could look at the inbound traffic on firewalls or load balancers, but that requires a fair bit of additional effort to apply application context. For example, is the traffic just being directed to static page content? We do need to derive some context so that the right rules are triggered. Even in simple K8s-only hosted solutions, a cluster may host services that aren’t related to the changes in demand yet would end up being scaled as a result.

A use case perspective

Let’s look at this from a hypothetical use case. We’re a music streaming service. Artists can control when their music is made available, but the industry norm is 12:01am on a Friday morning. There is always a demand spike at this time, but its size can vary, with some artists triggering larger spikes (this doesn’t directly correlate to an artist’s popularity – some smaller groups have very enthusiastic fans), so dynamic scaling is needed and has to be reactive to demand. Reporting, analytics, and payment services see cyclical bumps around the monthly financial periods, and these monthly cyclic activities can be done during quieter operating hours. So it is clear we may wish to apply logical partitioning, but for maximum cost efficiency keep everything in the same cluster. There are correlations between different service demands: certain data services see increased demand during both the demand spikes and the reporting period. The bottom line is we need to not only address dynamic scaling, but tailor the scaling to different services at different times.

Back to our question – how do we manage the scaling? We could monitor the firewall and load balancer logs and analyze the kinds of requests being received, but that would need additional processing logic to determine which services are receiving the traffic. Our API gateway, however, is likely to have that intelligence implicitly in place, as we have different API policies for the different types of endpoints. So we can monitor specific endpoints or groups of endpoints very easily – meaning we can infer the types of traffic demand and how to respond. Not only that, we may have API plans in place, so we can control priority and prevent the free tier of our service from using APIs to initiate high-quality media streams. In other words, the measurements already carry business and process meaning. So linking scaling controls such as KEDA (Kubernetes Event-driven Autoscaling) directly to measurements such as plans and specific APIs creates a relatively easy way to control scaling. Further, we may also use the gateway to provide a rate throttle so it’s not possible to crash our backend with an instantaneous spike. This strategy isn’t that different from an approach we’ve demonstrated for scaling message processing backends (see here and here).

Representation of how we can use the metrics from a gateway to not only support the scaling of a K8s cluster but also other cloud services
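
To make the idea a little more concrete, here is a minimal sketch in Python of the kind of mapping that could sit behind a KEDA external or metrics-based scaler. Everything in it – the endpoint names, the per-replica capacities, the limits – is an illustrative assumption rather than anything product-specific; the point is simply that gateway-level, per-endpoint demand can be translated directly into a per-service replica target.

```python
# Sketch only: map per-endpoint request rates reported by an API gateway into
# desired replica counts per backend service. Endpoint names, capacities, and
# limits are illustrative assumptions, not product configuration.
import math
from dataclasses import dataclass

@dataclass
class EndpointDemand:
    endpoint: str                # gateway route, e.g. "/v1/stream/start"
    requests_per_sec: float      # rate observed at the gateway
    backend_service: str         # the deployment serving this route
    capacity_per_replica: float  # requests/sec one replica can comfortably absorb

def desired_replicas(demands: list[EndpointDemand],
                     min_replicas: int = 1,
                     max_replicas: int = 20) -> dict[str, int]:
    """Aggregate gateway-observed demand per backend and size each backend."""
    load: dict[str, float] = {}
    capacity: dict[str, float] = {}
    for d in demands:
        load[d.backend_service] = load.get(d.backend_service, 0.0) + d.requests_per_sec
        capacity[d.backend_service] = d.capacity_per_replica
    return {svc: max(min_replicas, min(max_replicas, math.ceil(total / capacity[svc])))
            for svc, total in load.items()}

if __name__ == "__main__":
    demands = [
        EndpointDemand("/v1/stream/start", 900, "media-stream", 150),
        EndpointDemand("/v1/catalogue/search", 120, "catalogue", 200),
    ]
    print(desired_replicas(demands))  # {'media-stream': 6, 'catalogue': 1}
```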

We can also use the data KEDA sees on node readiness to adjust the API gateway’s rate limiting dynamically – either as a direct trigger from the number of active container instances, or by triggering a more advanced check, because we still have to account for the services that need more lead time before relaxing the rate limiting.
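
As a hedged illustration of that feedback (the function name, per-replica figures, and safety margin below are invented assumptions), the rate limit pushed to the gateway could be derived from the replicas that are actually ready rather than merely requested, so the limit only relaxes once the slower services have caught up:

```python
# Sketch only: derive a gateway rate limit from replica readiness so the limit
# relaxes only as capacity genuinely comes online. All values are illustrative.
def gateway_rate_limit(ready_replicas: int,
                       desired_replicas: int,
                       per_replica_limit: int = 150,
                       floor: int = 100) -> int:
    """Requests/sec the gateway should allow for this service right now."""
    # Base the limit on capacity that is actually ready, not merely requested.
    limit = ready_replicas * per_replica_limit
    # If we're still scaling up (e.g. a cache node is cloning data), hold back a
    # safety margin so the backend isn't swamped the moment a pod reports ready.
    if ready_replicas < desired_replicas:
        limit = int(limit * 0.8)
    return max(floor, limit)

print(gateway_rate_limit(ready_replicas=3, desired_replicas=6))  # 360
```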

Handling slow scaling services

So we’ve got a way of targeting the scaling of services. But what do we do about the slower-scaling services we described? With the data dimensioned, we can do a couple of things. Firstly, we can use the rate of change to determine how many nodes to add. With a very sudden increase in demand, adding one node at a time will make the service performance stutter as it scales: all the additional capacity is suddenly consumed, and then it scales again. Factoring in the rate of change of the workload, along with the context of which services could be generating the increased workload, can be used not only to determine which databases may need scaling but also to adjust the workload threshold that actually triggers the node introduction. So a sudden increase in demand on services known to create a lot of DB activity is met by dropping the threshold that triggers new nodes from, say, 75% to 25% – we effectively start the nodes on a lower threshold, meaning the process starts sooner.

With different rates of utilization growth, we can see that we need to alter the trigger threshold for launching new resources
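
A hedged sketch of that threshold adjustment follows; the growth bands and threshold values are arbitrary assumptions rather than recommendations. The faster utilization is climbing, the lower the threshold at which the next node is requested, which buys slow-starting services their lead time.

```python
# Sketch only: lower the scale-up trigger threshold as the rate of growth in
# utilization increases, so slow-to-start services get more lead time.
# The thresholds and growth bands are illustrative, not recommendations.
def scale_trigger_threshold(utilization_samples: list[float],
                            interval_secs: float = 60.0) -> float:
    """Return the utilization (0-1) at which a new node should be requested."""
    if len(utilization_samples) < 2:
        return 0.75
    growth_per_min = (utilization_samples[-1] - utilization_samples[0]) / \
                     ((len(utilization_samples) - 1) * interval_secs / 60.0)
    if growth_per_min >= 0.20:   # very sharp ramp, e.g. a Friday 00:01 release spike
        return 0.25
    if growth_per_min >= 0.10:
        return 0.50
    return 0.75                  # steady state: wait for the usual threshold

# Utilization climbing 15 points a minute -> start new nodes early, at 50%.
print(scale_trigger_threshold([0.30, 0.45, 0.60]))  # 0.5
```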

For this to be fully effective, we do need the gateway tracking traffic both internally (East-West) and inbound (North-South). Using the gateway for East-West traffic also means we can establish an anti-corruption layer.

Conclusion

Not all parts of a solution scale instantaneously – regardless of how fast the code startup cycle might be, some services have to move large chunks of data before becoming ready (e.g., in-memory databases). Some services may not have been built with cloud scale and dynamic scaling in mind. Some services need time to adjust configurations.

We may also need to adjust our protection mechanisms if we’re protecting against service overloading.

Services with latency in their scaling process can be addressed by achieving earlier, more intelligent recognition of need than simply waiting for CPU load to hit a threshold. Deriving that intelligence from implicit service context simplifies the effort of creating it and makes changes in demand easier to recognize. API gateways, message queues, message streams, and shared data storage are all mechanisms that, by their nature, carry implicit context.

Pushing the recognition towards the front of the process creates milliseconds or possibly seconds more warning of the demand than waiting for the impacted nodes to see compute spikes.

Phoenix project


The Phoenix Project by Gene Kim has been recommended reading for the IT industry for a long time now. It’s been on my to-read list for a good while, but to be candid, it never made the top of my reading list, as I had some reservations. Who wants to read a novel about IT if you already work and live it every day?

Recently IT Revolution, to celebrate its 10th anniversary, offered the book through Amazon for free for a limited period (even now, as a Kindle ebook, it isn’t that expensive anymore). Given I had determined I should read it at some point, I took the opportunity to get a copy. As it happens, through the Christmas break, I got into a run of reading books, and with it being at the top of my ebook list, I bit the bullet.

First of all, the story was engaging and very readable; the characters are likable, human, and relatable. It isn’t a huge book either (perhaps I’ve been looking at too many 500+ page novels), making it a fairly quick read. As a pure novel, some of the devices used to keep the narrative moving were perhaps a little obvious, but given the goal of the story, that isn’t really an issue. It didn’t break the reading flow, which kept the pages turning with a plausible story – plausible enough to wonder how much of it was based on the real-life experiences of Gene or one of his writing collaborators.

What struck me the most is that most industry writing I have read doesn’t address all the points the book makes. DevOps, as typically presented by a lot of (most?) content, is illustrated with some variation of the infinite-cycle diagram, as shown here:

Image courtesy of Amis.nl

Not only that, it is common to view DevOps as focusing on either:

  • Automation, particularly around CI/CD and IaC (Infrastructure as Code)
  • The development team also owning the operations tasks

But the book portrayed DevOps as both and neither of these. I say this as these approaches can help with the goal, but they should be subservient to the larger objective. Unfortunately, we do get caught up with the mechanics and tools and not the wider goal. The story is about how to deliver business value and needs in a streamlined manner, so we aren’t tying up the investment (time or spend) any longer than necessary. Yes, that does mean IaC and CI/CD style automation but only in service of the goal, which is the business need, not IaC.

The book also highlights the point of continuously working on improvement and paying down debt, as removing debt is part of how we remove the blockages to that streamlining (in the story, we actually see the release pipeline being temporarily stopped so that time can be invested in paying down the debt). Yet this aspect is rarely discussed in much of the industry content on the subject. Maybe we are, in part, our own enemy here, as debt work is not greenfield: it is going back over old ground and making it better, and we all love breaking new ground and leaving the past behind. Not to mention that many organizations measure progress on the number of features rather than how well a feature serves the business goals. I have to admit to making that mistake myself, which in our world of microservices is particularly ironic. After all, aren’t microservices about doing one thing and doing it well?

Another interesting view that the book put a lot of emphasis on was a variant of what is sometimes called the Eisenhower matrix. Anyone who has done any leadership training will most likely recognize it (see below).

However, the quadrants of work as the book describes them are:

  • Quadrant I – Project work (i.e., planned activities central to the business)
  • Quadrant II – IT work (work that is planned and needed but doesn’t originate from the business, such as building new infrastructure)
  • Quadrant III – Updates and changes (e.g., system patching)
  • Quadrant IV – Unplanned work (e.g., outage recovery, demands on the team that bypass scheduling or divert individuals from the agreed goals, etc.)

The key difference between this representation and the book’s definition of work is the wording on the columns and rows. For example, Quadrant IV isn’t ‘Not Important’ and ‘Not Urgent’ – it would be fairer to say ‘Not Wanted’ and ‘Not Productive’. Unplanned work is the killer, and it roughly aligns with Quadrant IV. It comes from the issues of not dealing with technical debt, fragile solutions, and so on.

My last observation is that in the last couple of years we have seen the rise of DevSecOps, which recognizes that security should be as much a part of the delivery process as Dev and Ops. The book (written 10 years ago) already showed that security should be part of the DevOps process. As with other areas, the story makes the point that security focused only on the development and operational processes, while necessary for things like catching the OWASP Top 10, also needs to see the bigger picture; otherwise, you could easily add requirements that are already handled by controls elsewhere in the end-to-end business processes. That doesn’t mean security controls can’t be in software, but are they part of improvement and pushing actions left, rather than just getting out of the starting blocks?

More reading

The book provides references, and for my own benefit I’ve noted a number of the particularly interesting and useful ones here, which may also interest others:

API Gateway for data egress


Most larger organizations route their outbound web traffic through a web proxy. The primary motivations for this are to monitor where traffic is going, to log traffic for analysis so that attempts to egress data that should remain within the organization can be detected, and to prevent access to websites considered harmful in one form or another.

So why consider an API gateway as part of an outbound traffic flow? After all, isn’t a gateway there to protect us? There are several very good reasons. Let’s look at them:

  • Managing the use of an external paid service. You may have multiple solutions using a third-party service – for example, an SMS service. Rather than each of these callers holding its own copy of the 3rd-party credentials to manage, we can use the gateway as a single point at which to attach the credentials.
  • When it comes to being charged for a service, being able to identify the requests at the API level makes it very easy to track your own consumption and forecast forward before being billed. This is really helpful if you have an agreement that provides a good price for pre-booked capacity and a higher charge for overage/capacity not pre-booked.
  • Economies of scale for using 3rd-party services can be very powerful. But this can also present two problems:
    • Switching providers quickly can be difficult, as there are multiple points of possible change.
    • How to apportion the cost of the external service across different departments when everyone is using a common account.

The first of these issues can be easily overcome using the anti-corruption layer pattern, with the gateway acting as the single route to the provider so that requests can be reformatted in one place to work with a different provider.
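
As a rough sketch of what that looks like (the provider names, payload fields, and credential handling below are entirely hypothetical, not real vendor APIs), internal callers always send the same neutral request shape, and switching SMS providers becomes a change in one place:

```python
# Sketch only: an anti-corruption layer for an outbound SMS call. Internal
# callers use one neutral shape; the translation to a given provider's format
# (and the shared credential) lives in a single place. Provider formats here
# are hypothetical, not real vendor APIs.
import os

def to_provider_payload(provider: str, internal: dict) -> dict:
    """Translate the internal request shape into the active provider's shape."""
    if provider == "provider-a":
        return {
            "apiKey": os.environ.get("SMS_PROVIDER_KEY", "<injected-by-gateway>"),
            "to": internal["recipient"],
            "body": internal["message"],
        }
    if provider == "provider-b":
        return {
            "auth_token": os.environ.get("SMS_PROVIDER_KEY", "<injected-by-gateway>"),
            "destination": internal["recipient"],
            "text": internal["message"],
            "sender_id": internal.get("department", "default"),
        }
    raise ValueError(f"Unknown SMS provider: {provider}")

# Internal callers never see the provider-specific shape or the credential.
request = {"recipient": "+441234567890", "message": "Your code is 1234", "department": "billing"}
print(to_provider_payload("provider-b", request))
```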

At the same time, we can make more intelligent use of the gateway’s metering mechanisms rather than having to implement functionality to mine the proxy’s logs.

Of course, you can achieve the same effect without a gateway, but you don’t get the benefits that a gateway offers out of the box. In addition, the chances are that you already have an API gateway running for your current North-South traffic.

Podcast with Anatolii Ulitovskyi of UNmiss


Just before the Christmas break, I got to record an excellent podcast with Anatolii of UNmiss. It was a great conversation about Cloud Integration, APIs, and approaches to Cloud-based integration. While I am not in a consulting role in the conventional sense, a lot of an Evangelist’s task is still to listen, understand, and, when necessary, challenge assumptions and help people understand how technologies can help address problems. This might include sketching out a journey of evolution and improvement. During the podcast, we discussed some of these ideas.

You can listen to the podcast audio or the live video stream of the conversation here.

In addition to some of the practices we’ve used, the conversation touched upon books. My books are on the sidebar, including links to Manning, who, as a publisher, I’d recommend. I’ve previously blogged some reading recommendations and written some book reviews, which may be of interest to anyone following up.

Published content


We haven’t blogged too much recently, as we have been busy producing content for my employer, Oracle, working with Software Engineering Daily, and developing a collaborative book. So, I thought I’d pull together some links to these new resources.


API payload design getting the semantics right


One area of API design that doesn’t get discussed much is the semantics of the payload. That is, the names we give our attributes and elements for the values being communicated. When developing single-use APIs (usually for client applications), this is unlikely to be an issue as the team(s) involved are likely to know each other and are able to interact and resolve clarity issues easily enough (although getting the semantics right makes this easier particularly in the long term). But when it comes to providing reusable endpoints, we may know the early adopters but are unlikely to interact with consumers beyond that unless there is a problem.

This makes getting the semantics right somewhat harder. How do we know if our early adopters represent the wider customer base (internally and externally)? Conversely, if we simply use our own company terminology, how do we know it is representative of the wider user base? It isn’t unusual for organizations to develop their own variations of a term or to apply an assumed meaning. Even simple things can differ: for the ‘post code’ element of an address, other parts of the world use ‘zip codes’ or PINs – are they the same? Perhaps if we said ‘postal code’ we would break the country-specific associations that come with ‘post code.’ We can overcome these issues by providing a dictionary of meanings and lengthy explanations. Using the right term goes beyond simply understanding the data value; it can imply specific formatting and potential application behaviors. Take our postcode/zip code example: in the UK, postcode data is published, which means it is possible to easily validate a postcode against the address line and vice versa. In fact, in the UK, to get something delivered you only need the property number and the postcode. A US 5-digit zip code can’t do that; for that precision, the ZIP+4 needs to be used.

If we can address these issues, then life becomes easier for us in maintaining the information and for consumers in not needing to look up the details. The question is: how can we be sure of using semantics that are consistent across our APIs, widely understood, and, where necessary, already documented, so we don’t have to document the information again?
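
As a small illustration of the point (the field names are just one possible country-neutral convention, loosely in the spirit of public address models rather than copied from any specific standard), a single, documented address shape reused across APIs avoids every team reinventing ‘post code’ versus ‘zip’:

```python
# Sketch only: one address representation reused across APIs. The names are
# illustrative of a country-neutral convention, not taken from a specific standard.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class PostalAddress:
    addressLine1: str
    locality: str                 # town/city: avoids UK 'post town' vs US 'city'
    postalCode: str               # neutral: covers UK postcode, US ZIP/ZIP+4, Indian PIN
    countryCode: str              # ISO 3166-1 alpha-2, so formatting rules can be inferred
    addressLine2: Optional[str] = None

payload = asdict(PostalAddress(
    addressLine1="10 High Street",
    locality="Reading",
    postalCode="RG1 1AA",
    countryCode="GB",
))
print(payload)
```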


Public Data Models

There is a shortcut to some of these problems. Many industries have agreed on shared data models, through bodies such as OASIS, OMG, and others, developed and maintained by multiple organizations. As a result, there is a commonality in the meaning achieved. So if you align with that meaning, use those semantics. Not only does the naming of attributes become easier, but any documentation can be simplified to reference the published definitions. In most cases, these standards are publicly available, as that promotes the widest adoption – one of the goals of developing such models. But there are some pitfalls to be mindful of with this approach:

  • Sometimes, rather than arriving at a universal definition, the models accommodate structural variations or aliased names – as a result, they may not necessarily be helpful to you.
  • The better-known models are internationalized. If you have no intention of supporting international needs and don’t expect international consumers, the naming may not align with local conventions.
  • If you use the semantics provided, ensure your data abides by the meaning. For example, don’t use ‘shipping address’ if you’re not shipping anything.
  • Don’t slavishly copy the data models provided – the model may not be intended for API use cases. At the same time, it doesn’t stop you from asking why the data in the model is there and whether your users may want such data (and whether it makes sense for you to provide that information).

Predefined APIs

Some organizations, such as TMForum, have taken public data models to the next step and provided predefined API specifications. This is ideal where you’re following industry standards and providing standardized/common services that aren’t a differentiator but need to be offered as part of doing business.

Data Catalogs

Larger, data-mature organizations will keep some form of Data Catalog. These catalogs are often held to help understand compliance needs, such as where personal data is held, how data issues can impact data accuracy and integrity, etc. It is possible that metadata may also be kept to address the semantic meaning of data or reference the definitions. Such information is used to help inform any data cleansing that may be needed. This offers a potentially good source of information for internal API use cases.

Vendor Led

If your business is delivery/service focused, so that your unique value isn’t in IT processes but perhaps in something the company manufactures or a specialist service such as consulting in a specific industry, then it is possible that the majority of your systems are SaaS or COTS based. If your business has opted to focus on a particular vendor, e.g., Oracle or SAP, for most services, then vendor-led data models are a possibility. These vendors are often involved with public data model development, so their models won’t be too divergent in most situations. Awareness of the differences is still necessary, but as both models should be internally consistent, the differences will also be consistent. This approach will give you better alignment and reduce the chances of needing to address divergence. The downside is that a change of direction on strategic vendors can create additional work going forward, as the alignment is disrupted. More work will be needed to map from your naming and semantics to the new core, and attempting to move away from the selected model to realign semantics with a new core will potentially create breaking changes for API consumers.

Don’t Forget

Regardless of the approach taken, there are some very simple but critical rules that will keep you in a good place:

  • Don’t use your underlying storage data models – this is a well-documented API anti-pattern.
  • Consistency of language across your APIs, regardless of whether they are internal or external, is important.

Information Sources

Regardless of the approach taken, be careful not to lock your API semantics and data model to those of the storage layer – these can change and even create breaking changes that you shouldn’t expose to your users. Some sources to consider:

  • OAGIS – covers a broad variety of business data domains. Some ERP suppliers have used this as a foundation for their application data models.
  • OASIS – covers many industries
  • TMForum APIs
  • ARTS (formerly hosted by the NRF, now with the OMG). The full OMG standards catalog.
  • GS1 – lots here on shipping, supply chain, and product tracking

Some more reading on the subject:

API more than a payload – Cloud Lunch Learn


Today I was fortunate enough to present at one of the Cloud Lunch and Learn events (you can register for any of the events here and see previous sessions here). One of the questions asked at the end of the session was about recommended reading on APIs. So I’ve gathered up some links to books I’d suggest as worthwhile reading:

I should also mention an API book I’ve co-authored. While it focuses on an Oracle product, there is a lot of content that is relevant to any API development using an API Gateway (Amazon.co.uk). I’ve not looked at all the books at API-University, but from what I have seen, the content is worth examining.

The slides for my presentation can be found on slideshare, and here:

The Presentation recording can be found here:


OCI Notifications through the LogSimulator


We’ve been busy putting together a number of Oracle Architecture Center assets over the last week. This has included building LogSimulator extensions that can either be run with the full tool or in a very simple manner using just a single file (if you take the appropriate custom file from the LogSimulator, you do need to make one minor tweak), albeit limited in the payloads that can be sent to OCI. The code has also been added to the Oracle GitHub repository here in a form that doesn’t require the full tool. There is, of course, a price to pay for the simplified implementation. It comes in the form of the notifications being sent and received being hardwired into the code rather than driven through the simulator’s configuration options.

The decoupling has been done by implementing the custom methods of the interface in a class without the implements declaration, and then extending that base class and applying the implements declaration at that level.
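
The extensions themselves are Groovy, but as a rough Python analogue of that decoupling idea (the class and method names here are invented purely for illustration), the behaviour lives in a plain class that never declares the interface, and a thin subclass is the only place that binds the two together:

```python
# Sketch only: a Python analogue of the decoupling described above. The plain
# class carries the behaviour without declaring the interface; a thin subclass
# binds the two together. Names are illustrative, not from the LogSimulator.
from abc import ABC, abstractmethod

class NotificationChannel(ABC):
    """The interface the simulator expects a custom channel to provide."""
    @abstractmethod
    def send(self, event: str) -> None: ...

class SimpleNotificationSender:
    """Standalone implementation - usable on its own, no interface declared."""
    def send(self, event: str) -> None:
        print(f"sending notification payload: {event}")

class ChannelAdapter(SimpleNotificationSender, NotificationChannel):
    """Only this thin class ties the implementation to the interface."""
    pass

channel: NotificationChannel = ChannelAdapter()
channel.send('{"eventType": "test", "detail": "hello"}')
```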

While OCI Notifications could take log events, it is better suited to JSON payloads. As the simulator can tailor the content being sent using some formatting, it doesn’t care whether the events provided are pre-formatted as JSON objects, making it an easy tool for testing the configuration of OCI Notifications.

Unit testing as well

In addition to the new channel, as previously mentioned, we have been making some code improvements. To support that, we have started to add unit tests and to double-check that the code will compile under Java. To keep the dependencies down, we’re making use of Java assert statements rather than JUnit, but the implementation ideas are very similar. As the tests use Java asserts, assertion checking does need to be enabled on the command line; for example:

Groovy test.groovy -enableasserts

JMESPath is represented using Railroad diagrams


JMESPath is a mature syntax for traversing and manipulating JSON objects. The syntax is supported by multiple language implementations available through GitHub (and other implementations exist). As a result, it has been very widely adopted; just a few examples include:

  • Azure CLI
  • AWS CLI and Lambda
  • Oracle Cloud WAF
  • Splunk

As the syntax is very flexible and recursive in its use, following the documented notation can be a little tricky to start with. The complete definition runs to 97 lines, of which 32 focus on the syntactical structure; the rest describe the base types such as numbers, characters, accepted escape characters, and so on. There is nothing wrong with this, as the exhaustive definition is necessary to build parsers, but for the majority of the time it is those 32 lines that we need to understand.

As the expression goes, ‘a picture says a thousand words’; there might not be a thousand words here, but there are enough to suggest a visual representation will help, even if the visual only helps us navigate the detailed syntax. So we’ve used our favoured visual representation – the railroad diagram – and the tool produced by Tab Atkins to create the representation. We’ve put the code and the generated images for the syntax in my GitHub repository here, continuing the pattern previously adopted.

Here is the resulting diagram …

To make it easy to trace back to the original syntax document, we’ve included groupings on the diagram that take their names from the original specification.

Parts of the diagram make the expressions look rather simple, but you’ll note that sections can be iterative, which allows an expression to traverse a JSON object of undefined depth. What can be really challenging is that in many areas it is possible to nest expressions within expressions. Visually, there is no simple way to represent the possibilities this creates in a linear manner, other than being clear about where the nesting can take place.
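
For anyone wanting to see how those pieces compose in practice, here is a small example using the Python implementation of JMESPath (the data is invented; the expression follows the style of the jmespath.org tutorial examples), combining a filter projection, pipes, a function call, and a multi-select hash:

```python
# Requires the reference Python implementation: pip install jmespath
import jmespath

data = {
    "locations": [
        {"name": "Seattle", "state": "WA"},
        {"name": "New York", "state": "NY"},
        {"name": "Olympia", "state": "WA"},
    ]
}

# A filter projection feeding a pipe, a function call, and a multi-select hash -
# several of the syntax elements from the railroad diagram nested together.
expression = "locations[?state == 'WA'].name | sort(@) | {WashingtonCities: join(', ', @)}"
print(jmespath.search(expression, data))
# {'WashingtonCities': 'Olympia, Seattle'}
```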