Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Tag Archives: development

AI to Agriculture

17 Friday Oct 2025

Posted by mp3monster in General, Oracle, Technology

Tags

AI, artificial-intelligence, Cloud, development, Oracle, Technology

Now that details of the product I’ve been involved with for the last 18 months or so are starting to reach the public domain (such as the recent announcement at the UN General Assembly on September 25), I can talk a bit about what we’ve been doing. Oracle’s Digital Government Global Industry Unit has been working on a solution that can help governments address the question of food security.

So what is food security?  The World Food Programme describes it as:

Food security exists when people have access to enough safe and nutritious food for normal growth and development, and an active and healthy life. By contrast, food insecurity refers to when the aforementioned conditions don’t exist. Chronic food insecurity is when a person is unable to consume enough food over an extended period to maintain a normal, active and healthy life. Acute food insecurity is any type that threatens people’s lives or livelihoods.

World Food Programme

By referencing the World Food Programme, it would be easy to interpret this as a third-world problem. But in reality, it applies to just about every nation. We can see this in the effect the war in Ukraine has had on crops like wheat, as reported by organizations such as CGIAR, the European Council, and the World Development journal. But global commodities aren’t the only reason for every nation to consider food security. There are other factors, such as food miles (an issue that has perhaps received less attention over the last few years) and national farming economics (a subject that comes up everywhere, from Clarkson’s Farm, if you want it through a humour filter, to dry UK government reports and US Department of Agriculture publications).

Looking at it from another perspective, some countries derive a notable segment of their export revenue from the production of certain crops. We know this from simple anecdotes like ‘for all the tea in China’, and from the way coffee variants are often referred to by their country of origin (Kenyan, Colombian, etc.). For example, Palm Oil is the fourth-largest economic contributor in Malaysia (here).

So, how is Oracle helping countries?

One of the key means of managing food security is understanding food production and measuring the factors that can impact it (both positively and negatively), which range from the obvious – like weather (and its relationship to soil, water management, etc.) – to what crop is being planted and when. All of this can then be overlaid with government policies for land management and farming subsidies (paying farmers to help them diversify crops, periodically allowing fields to go fallow, or subsidizing the cost of fertilizer).

Oracle is a technology company capable of delivering systems that operate at scale. Technology, and the recent progress in using AI to help solve problems, is not new to agriculture; in fact, several trailblazing organizations in this space run on Oracle’s Cloud (OCI), such as Agriscout. Before people start assuming this is another story of a large cloud provider eating its customers’ lunch, far from it: many of these companies operate at the farm or farm-cooperative level, often collecting data through aerial imagery from drones and aircraft, along with ground-based sensors. Some also leverage satellite imagery for localized areas to complement these other sources. This is where Oracle starts to differentiate itself – by taking high-resolution imagery at national scale (think about the resolution needed to differentiate wheat from maize, spot rice or carrots, or distinguish an orchard from a natural copse of trees). To get an idea, look at Google Earth and try to identify which crops are growing.

We take the satellite multi-spectral images from each ‘satellite overflight’ and break them down, working out what the land is being used for (ruling out roads, tracks, buildings, and other land usage). To put the effort into context, the UK covers 24,437,600,000 square meters and is only 78th in the list of countries by area (here); at a 10-metre imaging resolution, for example, that would be roughly 244 million cells to classify per pass. It’s this level of scale that makes it impractical to use more localized data sources (imagine how many people and drones would be needed to fly over every possible field in a country, even at a monthly frequency).

This only solves the first step of the problem, which is telling us the total crop-growing area. It doesn’t tell us whether the crop will actually grow well and produce a good yield. For this, you need to know about weather (current, forecast, and historic trends), soil chemical composition and structure, and information such as elevation, angle, etc., combined with an understanding of optimal crop-growing needs (water levels, sunlight duration, atmospheric moisture, soil types and health) – good crops can be ruined simply by it being too wet to harvest them or store them dry. All these factors need to be taken into account for each ‘cell’ we’re detecting, so we can calculate with any degree of confidence what can be produced.

If this isn’t hard enough, we need to account for the fact that some crops may have several growing seasons per year, or that succession planting is used, where carrots may be grown between March and June, followed by cucumbers through to August, and so on.

Using technology

Hopefully, you can see there are tens of millions of data points being processed every day, and Oracle’s data products can handle that. As a cloud vendor, we’re able to provide the computing scale and, importantly, the elasticity, so we can crunch the numbers quickly enough that users benefit from the revised figures and can work out mitigation actions to communicate to farmers. As mentioned, this could be planning where to best use fertilizer, or publishing advice on when to plant which crops for optimal growing conditions. In the worst cases, it means recognizing that there is going to be a national shortage of a staple crop, starting to purchase crops from elsewhere, and ensuring that when they arrive in port they get moved out to the markets (as with all large operations – as we saw with the Covid crisis – if you need to react quickly, more mistakes get made and costs grow massively, driven by demand).

I mentioned AI. If you have more than the most superficial awareness of AI, you will probably be wondering how we use it, given the problem of AI hallucination – the last thing you want is a model being asked to evaluate something and hallucinating (injecting data or facts not based on the data you have collected) to create a projection. At worst, this would mean indicating that everything is going well when things are about to go really wrong. So, first, most of the AI discussed today is generative, and that is where we see issues like hallucinations. We have adopted, and are adopting, this aspect of AI where it fits best, such as explainability and informing visualization. But Oracle is making heavy use of the more traditional ideas of AI, in the form of Machine Learning and Deep Learning, which are best suited to heavy numerical computation – which is not to say there aren’t challenges to be addressed in training the AI.

Conclusion

When it comes to the specialized domains of agriculture and government, Oracle has a strong record of working with governments and government agencies going back to its inception. We’ve also worked closely with the Tony Blair Institute for Global Change, which works with many national government agencies, including in the agriculture sector.

My role in this has been as an architect, focused primarily on applying integration techniques (enabling scaling and operational resilience, data ingestion, and how our architecture can evolve as we take on more and more data sources) and on applying AI (in the generative domain). We’re fortunate to be working alongside two other architects who cover other aspects of the product, such as infrastructure needs and the presentation tier. In addition, there is a specialist data science team with more PhDs and related awards than I can count.

Oracle’s Digital Government business is more than just this agriculture use case; we’ve identified other use cases that can benefit from the data, and its volume, being handled here. This is in addition to bringing in versions of Oracle’s better-known products, such as ERP, Healthcare (digital health records management, vaccine programmes, etc.), and national Energy and Water (metering, infrastructure management, etc.).

For more on the agricultural product:

  • Government Data Intelligence for Agriculture
  • Agriculture on the Digital Government page
  • TBI on Food Security
  • iGrow News
  • Oracle Launches AI Platform to Strengthen Government Led Agricultural Resilience – AgroTechSpace


Design of Web APIs – 2nd Edition

06 Monday Oct 2025

Posted by mp3monster in APIs & microservices, Books, General, Technology

Tags

API, API Evangelist, APIHandyman, AsyncAPI, book, development, OAS, Open API, practises, review, Swagger

When it comes to REST-based web APIs, I’ve long been an advocate of the work of Arnaud Lauret (better known as the API Handyman) and his book The Design of Web APIs. I have, with Arnaud’s blessing, used some of his web resources to help illustrate key points when presenting at conferences and to customers on effective API design. I’m not the only one who thinks highly of Arnaud’s content; other leading authorities, such as Kin Lane (API Evangelist), have expressed the same sentiment. The news that a 2nd Edition of the book has recently been published is excellent. Given that the 1st edition was translated into multiple languages, it is fair to presume this edition will see the same treatment (as well as getting the audio treatment).

Why 2nd Edition?

So, why a second edition, and what makes it good news? While the foundational ideas of REST remain the same, the standard used to describe and bootstrap development has evolved to address current practices and offer a more comprehensive view of REST APIs. Understanding the OpenAPI Specification in its latest form also helps when working with the AsyncAPI specifications, as there is a significant amount of harmony between the two standards.

The new edition also tackles a raft of new considerations as the industry has matured. These range from the use of tooling to lint our APIs and help maintain consistency as our catalogue grows (to use linting tools well, we need guidelines on how to use the specification, what we want to make uniform, and how divergence gets addressed), to questions about how an API’s supporting material fits into an enriched set of documents and resources, such as those often offered by a developer portal.
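
To make that concrete, here is a minimal sketch of the kind of rule a linting tool can enforce. Spectral is my example tool here (the book isn’t prescribing it), and the rule name is illustrative:

# .spectral.yaml – a minimal ruleset sketch
extends: ["spectral:oas"]
rules:
  operations-need-descriptions:
    description: Every operation should explain what it does.
    given: "$.paths[*][get,put,post,delete,options,head,patch,trace]"
    severity: error
    then:
      field: description
      function: truthy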

However, the book isn’t simply a guide to OpenAPI; the chapters delve into the process of API design itself, including what to expose and how to expose it, and how to make APIs consistent, so that a developer who understands one endpoint can apply that understanding to others. For me, the book offers some great visual tools for linking use cases, resources, endpoint definitions, and operations. Then there is an area that is often overlooked: the considerations under the Non-Functional Requirements heading, such as those that ensure an API is performant and responsive, secure, supports compatibility (avoiding or managing breaking changes), and is clear about how it will respond in ‘unhappy paths’. Not to mention that, as we expand our API offerings, the specification content can become substantial, so helping to plot a way through this is excellent.

Think You Know API Design?

There will be some who think, ‘Hey, I understand the OpenAPI Specification; I don’t need a book to teach me how to design my APIs.’ To those, I challenge you to reconsider and take a look at the book. The spec shows you how to convey your API; it won’t guarantee a good API. The importance of good API design grows when viewed from an external perspective – it’s a way to differentiate your service from others. Where there is competition, if your API is complex to work with, developers will fight to avoid using it. Not only that, but in a world where AI utilizes protocols like MCP, having a well-designed, well-documented API increases the likelihood of an LLM being able to reason about it and make calls to it.

Conclusion

If there is anything to find fault with – and I’m trying hard – it would be that it would have been nice to see the coverage expanded a little further into Asynchronous APIs (there is a lot of Kafka and related tech out there that could benefit from good AsyncAPI material), and perhaps to venture further into how we can make natural language to API (NL2API) easier for use cases like working with MCP (and potentially A2A).

  • Amazon UK
  • Amazon US
  • Manning Direct
  • Barnes & Noble


More Posts at The New Stack

12 Friday Sep 2025

Posted by mp3monster in Books, Fluentbit, Fluentd, manning, Technology

Tags

blog, book, development, ebook, logging, TheNewStack

With the publication of Logging Best Practices (for background to this, go here), more articles have been published through The New Stack, extending the original list we blogged about here.

The latest articles:

  • What To Know Before Building Fluent Bit Plugins With Go (21st November 25) NEW
  • How Are OpenTelemetry and Fluent Bit Related? (29th October 25)
  • A Guide To Fluent Bit’s Health Check API Endpoints (17th September 25)
  • Understanding Log Events: Why Context Is Key (11th September 25)
  • How to Evaluate Logging Frameworks: 10 Questions (21st August 25)
  • Using Logging Frameworks for Application Development (7th August 25)
  • Logging Best Practices: Defining Error Codes (18th July 25)

The previous list:

  • What’s Driving Fluent Bit Adoption? (26th June 25)
  • What Is Fluent Bit? (10th June 25)
  • What Are the Differences Between OTel, Fluent Bit and Fluentd? (8th July 25)
  • Fluent Bit, a Specialized Event Capture and Distribution Tool (30th May 25)


Fluent Bit v4 the big news

17 Thursday Apr 2025

Posted by mp3monster in Books, Fluentbit, General, Technology

Tags

book, CNCF, development, eBPF, Fluent Bit, logs, OpenTelemetry, processors, sampling, Security, Trace, zig

With the announcement of Fluent Bit v4 at Kubecon Europe, we thought it worthwhile to take a look at what it means, aside from celebrating 10 years of Fluent Bit.

Firstly, under Semantic Versioning, a major version change would normally suggest breaking changes (or incompatible changes, to use the SemVer wording). The good news is that, as with all previous version changes for Fluent Bit, the numbering change only reflects the arrival of major new features.

This is good news for me as the author of Logs and Telemetry with Fluent Bit, as it means the book remains entirely relevant. The book obviously won’t address the latest features, but we’ll try to cover those here as supplemental content.

Let’s reflect upon the new features, their benefits, and their implications.

  • New features in Processors allowing:
    • Conditionality to be included.
    • Trace sampling.
  • More flexible support for TLS (v1.3, choosing ciphers to enable)
  • New language for custom plugins in the form of Zig

Security Improvements

While security is not something most developers get excited about, there are things here that will make a CSO (Chief Security Officer) smile. Any developer who implements security behaviors because it is the right thing to do, rather than because they have been told to, makes a CSO happy – and puts themselves in a good place to be given some leniency when there is a need to do something that would otherwise get the CSO hot under the collar. We can now win those points with CSOs by using new Fluent Bit configuration options that control the TLS versions (1.1 – 1.3) and ciphers to support.
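
As a rough sketch of what this can look like in YAML – the destination is hypothetical, and the tls.* option names reflect my reading of the release notes, so check them against the current documentation:

pipeline:
  outputs:
    - name: http
      match: '*'
      host: collector.example.com     # hypothetical destination
      port: 443
      tls: on
      tls.verify: on
      tls.min_version: TLSv1.2        # assumed option name – verify in the docs
      tls.max_version: TLSv1.3        # assumed option name – verify in the docs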

But even more fundamental than that are the improvements around basic credentials management. Historically, credentials and tokens had to be explicit in a configuration file or referenced via an environment variable. Now, such values can come from a file, so nothing sensitive needs to appear explicitly in the configuration. File-system security can manage access to and visibility of such details. This will also make credential rotation a lot easier to implement.

Processor Improvements

The processor improvements are probably the most exciting changes. Processors allow us to introduce additional activities within the pipeline as part of a stage such as an input, rather than requiring the additional buffer fetch and return that we see in standard plugin operations.

Of course, the downside is that if the processor adds a lot of work, we can create unexpected problems, such as back pressure, as a result of a processor working hard on an input.

The other factor the extended processors bring is that they are not supported in the classic format, meaning that to exploit these features, you need to define your configuration using YAML. The only thing I’m not a fan of is that the configuration for these features makes me feel like I’m reading algorithms expressed in Backus–Naur form (BNF).

Trace Sampling

Firstly, the processors supporting OpenTelemetry tracing can now sample. This has probably been Fluent Bit’s only weakness in the OpenTelemetry domain until now. Sampling is essential here, as traces can become significant as you track application executions through many spans. Combined with each new transaction creating a new trace, traces can become voluminous. To control this explosion of telemetry data, we want to sample traces, collecting a percentage of typical traces (good performance, low latency, no errors, etc.) and the outliers, where tracing will show us where a process is suffering, e.g., an end-to-end process slowing because of a bottleneck. We can dictate how the sampling is applied based on the values of existing attributes, the trace status, status codes, latencies, the number of spans, and so on.
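
As an illustrative sketch only – the shape follows the announced feature, but the exact keys may differ, so treat this as a starting point rather than a definitive configuration:

pipeline:
  inputs:
    - name: opentelemetry
      port: 4318
      processors:
        traces:
          - name: sampling
            type: probabilistic            # keep a fixed percentage of traces
            sampling_settings:
              sampling_percentage: 20      # assumed key names – verify in the docs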

Conditionality in Processors

Conditionality makes it easier to respond to aspects of logs – applying a processor only when the payload has several attributes with specific values. For example, if an application reports that it is starting up and its logs are classified as representing an error, we may want to add a tag to the event so it can be easily filtered and routed to an escalation process.
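
A rough sketch of that startup-error example – the record fields and values are hypothetical, and the condition syntax follows the v4 announcement, so verify the details against the documentation:

pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log            # hypothetical log source
      processors:
        logs:
          - name: content_modifier
            action: insert
            key: escalate                 # tag for downstream routing
            value: 'true'
            condition:
              op: and
              rules:
                - field: '$level'         # hypothetical record field
                  op: eq
                  value: error
                - field: '$phase'         # hypothetical record field
                  op: eq
                  value: startup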

Plugins with Zig

The enablement of Zig for plugin development (input, output, and filter) is strictly an experimental feature. The contributors are confident they have covered all the typical use cases, but the innate flexibility of supporting a language always brings potential edge cases that were never considered and may require some additional work to address.

Let’s be honest: Zig isn’t a well-known language. So, let’s start by looking briefly at it and why the community has adopted it for custom plugin development as an alternative to the existing options with Lua and WASM.

So Zig has a number of characteristics that align with the Fluent Bit ethos better than Lua and WASM, specifically:

  • It is a compiled rather than interpreted language, meaning we avoid the runtime overheads of an interpreter or JIT compiler (as with Lua) and the proxy layer of WASM. This aligns with Fluent Bit’s aim of being very fast with minimal compute overhead – ideal for IoT and for minimising the cost of side-car container deployments.
  • The footprint for the Zig executable is very, very small—smaller than even a C-generated binary! As with the previous point, this lends itself to common Fluent Bit deployments.
  • The language definition is formally defined, compact, and freely available. This means you should be able to take a toolchain from anyone, and it is easy for specialist chip vendors to provide compilers.
  • Based on the reports of those who have tried, cross-compiling is far easier to deal with than working with GCC, MSVC, etc., making it a lot easier to get the development benefits we want from Go. And unlike Go, connecting to the C binary of Fluent Bit doesn’t require a translation layer.

One of Zig’s characteristics that differs from C is its stronger typing and its approach of working to prevent you from entering edge-case conditions (e.g., null pointers) rather than prescribing how they are handled.

Zig has been around for a few years (the first pre-release was in 2017, and the first non-pre-release was in August 2023). This is long enough for the supporting tooling to be pretty well fleshed out with package management, important building blocks such as the HTTP server, etc.

Asking a large enterprise with a more conservative approach to development (particularly where IT is seen as an overhead and a source of risk rather than a differentiator or revenue generator) to consider adopting Zig could be challenging compared to adopting, say, Go. Still, the characteristics described here make for some interesting potential.

Not Only, but Also

Beyond these significant advancements, each Fluent Bit release brings a variety of improvements to its plugins. For example, improved support for working with eBPF, the HTTP output supporting more compression techniques such as Snappy and ZSTD, and the Exit plugin gaining a configurable delay.

Plus, the versions of library dependencies are being updated to exploit new capabilities and to ensure Fluent Bit isn’t using libraries with vulnerabilities.

Additional resources

  • Chronosphere announcement
  • Announcement YouTube video
  • Book
  • My links to technical resources – extended to include Zig-related resources


Fluent Bit and AI: Unlocking Machine Learning Potential

30 Monday Dec 2024

Posted by mp3monster in Fluentbit, General, Technology

Tags

AI, artificial-intelligence, Cloud, Data Drift, development, Fluent Bit, GenAI, Machine Learning, ML, observability, Security, Technology, Tensor Lite, TensorFlow

These days, everywhere you look there are references to Generative AI – to the point that you might ask: what have Fluent Bit and GenAI got to do with each other? GenAI has the potential to help with observability, but it also needs observing, to measure its performance, whether it is being abused, and so on. You may recall that a few years back, Microsoft was trialling new AI features for Bing, and after only a couple of days in use, it had been recorded generating abusive comments and the like (Microsoft’s Tay is such an example).

But this isn’t the aspect of GenAI (or the foundations of AI with Machine Learning (ML)) I was thinking about. Fluent Bit can be linked to GenAI through its TensorFlow plugin. Is this genuinely of value or just a bit of ‘me too’?

There are plenty of backend use cases once the telemetry has been incorporated into an analytics platform, for example:

  • Making it easy to query and mine the observability data, such as natural language searching – to simplify expressing what is being looked for.
  • Outlier / Anomaly detection – when signals, particularly metrics, diverge from the normal patterns of behavior, we have the first signs of a problem. This is more Machine Learning than generative AI.
  • Using AI agents to tune monitoring thresholds and alerting scenarios

But these are all backend, big data style use cases and do not center on Fluent Bit’s core value of getting data sources to appropriate destination systems for such analysis or visualization.

To incorporate AI into Fluent Bit pipelines, we need to overcome a key issue: AI tends to be computationally heavy, making it potentially too slow for the streams of signals being generated by our applications, and too expensive given that most logs reflecting ‘business as usual’ are, in effect, low value.

There are some genuine use cases where lightweight AI can deliver value. First, we should be a little more precise: the TensorFlow plugin uses the TensorFlow Lite version, also known as LiteRT. The name comes from the fact that it is a lightweight solution intended to be deployable on small devices (by AI standards). This fits the Fluent Bit model of having a small footprint.
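
For orientation, a minimal sketch of wiring the plugin into a pipeline – the model file is hypothetical, and the parameter names are as I recall them from the filter’s documentation, so verify them before use:

pipeline:
  filters:
    - name: tensorflow
      match: '*'
      input_field: message                               # record field fed to the model
      model_file: /models/stacktrace_classifier.tflite   # hypothetical TFLite model
      include_input_fields: false                        # emit only the model output
      normalization_value: 255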

So, where might such use cases fit? For example:

  • Translating stack traces into actionable information can be challenging. A trained ML or AI model can help classify and characterize the cause of a stack trace. As a result, we can move from the log to triggering appropriate actions.
  • Targeted use cases where we’ve filtered out most signal data to help analyze specific events – for example, wanting to prevent the propagation of PII data downstream. Some PII data can be easily isolated through patterns using REGEX: credit card numbers follow a pattern of 4 digits in 4 groups, and phone numbers and email addresses can also be readily identified. However, postal addresses aren’t easy, particularly when handling multinational addresses, where the postal code/zip code can’t be used as an indicative pattern. Using AI to help with such checks means we must first filter the signals so that we only examine messages that could accidentally carry such information (see the sketch after this list).
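
To illustrate the pre-filtering step, here is a minimal sketch using the grep filter so that only records whose log field contains a crude card-number-like pattern get passed on to the heavier ML/AI inspection stage. The field name and pattern are illustrative; real card detection needs far more care:

pipeline:
  filters:
    - name: grep
      match: '*'
      # keep only records whose 'log' field may hold a card-like number
      regex: log \b(?:\d[ -]?){13,16}\b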

When adopting AI in such scenarios, we have to be aware of the problems that can impact the use of ML and AI. These issues are less high profile than hallucinations, but just as important. We’re observing software, which will change over time; as a result, payloads shift (technically referred to as data drift) and the detection rate can drop. So we need to measure the efficacy of the model, taking data drift into account, as the scenario being detected may change in volume, reflecting changes in software usage and/or changes in how the solution works.

There are ways to help address such considerations, such as tracking false-positive outcomes and, if the model can provide confidence scoring, watching for a trend in the score.

Conclusion

There are good use cases for using Machine Learning (and, to an extent, Artificial Intelligence) within an observability pipeline – but we have to be selective in its application as:

  • The cost of the computation can outweigh the benefits
  • The execution time for such computation can be notably slower than our pipeline, leading to risks of back pressure if applied to every event in the pipeline.
  • The effectiveness can degrade as data drift occurs (we might initially see very good results, but then things can fall off).

Possibly, the most useful application is when the AI/ML engine has been trained to recognize patterns of events that preceded a serious operational issue (strictly, this is the use of ML).

Forward-looking

The true potential for Gen AI is when we move beyond isolating potential faults based on pattern recognition to using AI to help recommend or even trigger remediation processes.


Fluent Bit 3.2: YAML Configuration Support Explained

23 Monday Dec 2024

Posted by mp3monster in Fluentbit, General, Technology

Tags

book, Cloud, config, configuration, development, Fluent Bit, parsers, streams, stream_task, YAML

Among the exciting announcements for Fluent Bit 3.2 is that support for YAML configuration is now complete. Until now, there were some outliers, such as parser and stream processor configurations, which hadn’t been made YAML compliant.

As a result, the definitions for parsers and streams had to remain in separate files. That is no longer the case, and it is possible to incorporate parser definitions within the same configuration file. While separate configuration files for parsers make for easier re-use, they are more troublesome when incorporating the configuration into a Kubernetes deployment configuration, particularly when using a side-car deployment.

Parsers

With this advancement, we can define parsers like this:

Classic Fluent Bit

[PARSER]
    name myNginxOctet1
    format regex
    regex (?<octet1>\d{1,3})

YAML Configuration

parsers:
  - name: myNginxOctet1
    format: regex
    regex: '/(?<octet1>\d{1,3})/'

As the examples show, we swap [PARSER] for a parsers object. Each parser then becomes an array entry, with its attributes starting with the parser name. The attribute names follow a one-to-one mapping in most cases. This does break down for parsers where we can define a series of values, which in the classic format would just be read in order.

Multiline Parsers

When using multiline parsers, we must provide different regular expressions for different lines. In this situation, we see each set of attributes become a list entry, as we can see here:

Classic Fluent Bit

[MULTILINE_PARSER]
  name multiline_Demo
  type regex
  key_content log
  flush_timeout 1000
  #
  # rule|<state name>|<regex>|<next state>
  rule "start_state" "^[{].*" "cont"
  rule "cont" "^[-].*" "cont"

YAML Configuration

multiline_parsers:
  - name: multiline_Demo
    type: regex
    rules:
    - state: start_state
      regex: '^[{].*'
      next_state: cont
    - state: cont
      regex: "^[-].*"
      next_state: cont

In addition to how the rules are nested, we have moved from several parameters packed into a single attribute (rule) to each rule having discrete elements (state, regex, next_state). We have also changed the use of single and double quote marks.

If you want to keep the configurations for parsers and streams separate, you can continue to do so, referencing the file and name from the main configuration file. While converting the existing conf to YAML format is the bulk of the work, in all likelihood you’ll also change the file extension to .yaml, which means you must modify the parsers_file reference in the service section of the main configuration file (see the sketch below).
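
A minimal sketch of that reference, assuming the renamed file sits alongside the main configuration file:

service:
  flush: 1
  parsers_file: parsers.yaml    # was parsers.conf before the conversion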

Streams

Streams follow very much the same path as parsers. However, we do have to be a lot more aware of the query syntax to remain within the YAML syntax rules.

Classic Fluent Bit

[STREAM_TASK]
  name selectTaskWithTag
  exec SELECT record_tag(), rand_value FROM STREAM:random.0;

[STREAM_TASK]
  name selectSumTask
  exec SELECT now(), sum(rand_value)   FROM STREAM:random.0;

[STREAM_TASK]
  name selectWhereTask
  exec SELECT unix_timestamp(), count(rand_value) FROM STREAM:random.0 where rand_value > 0;

YAML Configuration

stream_processor:
  - name: selectTaskWithTag
    exec: "SELECT record_tag(), rand_value FROM STREAM:random.0;"
  - name: selectSumTask
    exec: "SELECT now(), sum(rand_value) FROM STREAM:random.0;"
  - name: selectWhereTask
    exec: "SELECT unix_timestamp(), count(rand_value) FROM STREAM:random.0 where rand_value > 0;"

Note that it is pretty common for Fluent Bit YAML to use the plural form for each of the main blocks, although the stream definition is an exception. Additionally, both stream_processor and stream_task are accepted (although stream_task is not recognized in the main configuration file).

Incorporating Configuration directly into the core configuration file

To support incorporating these definitions directly into a single file, we can lift the YAML file contents and apply them as root elements (i.e., at the same level as pipeline and service).
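
A minimal single-file sketch, reusing the parser defined earlier (the pipeline is deliberately trivial, just to show the placement):

service:
  flush: 1
parsers:                            # lifted to the root, alongside service and pipeline
  - name: myNginxOctet1
    format: regex
    regex: '/(?<octet1>\d{1,3})/'
pipeline:
  inputs:
    - name: dummy
      dummy: '{"log": "10.0.0.1"}'
  outputs:
    - name: stdout
      match: '*'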

Fluent Bit book examples

Our Fluent Bit book (Manning, Amazon UK, Amazon US, and everywhere else) has several examples of using parsers and streams in its GitHub repo. We’ve added the YAML versions of the configurations illustrating parsers and stream processing to its repository in the Extras folder.


Books Books Books

22 Tuesday Oct 2024

Posted by mp3monster in Books, General, Technology

Tags

book, books, development, ebook, FluentBit, manning, pbook, print, published, reading

Today we got the official notification that our book has been published …

Logs and Telemetry book - order option

As you can see, the eBook is now available. The print edition can be purchased from Thursday (24th Oct). If you’ve been a MEAP subscriber, you should be able to download the complete book. The book will start showing up on other platforms in the coming weeks (Amazon UK has set an availability date, and on Amazon.com you can preorder).

There are some lovely review quotes as well:

A detailed dive into building observability and monitoring.

Jamie Riedesel, author of Software Telemetry

Extensive real-life examples and comprehensive coverage! It’s a great resource for architects, developers, and SREs.

Sambasiva Andaluri, IBM

A must-read for anyone managing a critical IT system. You will truly understand what’s going on in your applications and infrastructure.

Hassan Ajan, Gain Momentum

And there is more …

I hadn’t noticed until today, but the partner book Logging in Action, which covers Fluentd, is available in ebook and print as well as audio and video editions. As you can see, these are available on Manning and platforms like O’Reilly/Safari…

In Logging in Action you will learn how to:

  • Deploy Fluentd and Fluent Bit into traditional on-premises, IoT, hybrid, cloud, and multi-cloud environments, both small and hyperscaled
  • Configure Fluentd and Fluent Bit to solve common log management problems
  • Use Fluentd within Kubernetes and Docker services
  • Connect a custom log source or destination with Fluentd’s extensible plugin framework
  • Logging best practices and common pitfalls

Logging in Action is a guide to optimizing and organizing logging using the CNCF Fluentd and Fluent Bit projects. You’ll use the powerful log management tool Fluentd to solve common log management problems, and learn how proper log management can improve performance and make managing software and infrastructure solutions easier. Through useful examples like sending log-driven events to Slack, you’ll get hands-on experience applying structure to your unstructured data.

I have to say that my digital twin, who narrated the book, sounds pretty intelligent.

Update

Amazon UK is now correct and has an availability date.


Fluent Bit config from classic to YAML

02 Tuesday Jul 2024

Posted by mp3monster in Fluentbit, General, java, Technology

Tags

configuration, development, FluentBit, format, tool, YAML

Fluent Bit supports both a classic configuration file format and a YAML format. The support for YAML reflects industry direction. But if you’ve come from Fluentd to Fluent Bit or have been using Fluent Bit from the early days, you’re likely to be using the classic format. The differences can be seen here:

#
# Classic Format
#
[SERVICE]
    flush 5
    log_level debug
[INPUT]
    name dummy
    dummy {"key" : "value"}
    tag blah
[OUTPUT]
    name stdout
    match *

#
# YAML Format
#
service:
    flush: 5
    log_level: debug
pipeline:
    inputs:
        - name: dummy
          dummy: '{"key" : "value"}'
          tag: blah
    outputs:
        - name: stdout
          match: "*"

Why migrate to YAML?

Beyond having a consistent file format, the driver is that some new features are not supported by the classic format. Currently, this is predominantly for Processors; it is fair to assume that any other new major features will likely follow suit.

Migrating from classic to YAML

The process for migrating from classic to YAML has two dimensions:

  • Change of formatting
    • YAML indentation and plugins as array elements
    • addressing any quirks such as wildcard (*) being quoted, etc
  • Addressing constraints such as:
    • Using include is more restrictive
    • Ordering of inputs and outputs is more restrictive – therefore match attributes need to be refined.

None of this is too difficult, but doing it by hand can be laborious and error-prone. So, we’ve built a utility that can help with the process. At the moment, this solution is in an MVP state, but we hope to beef it up over the coming weeks. What we plan to do and how to use the utility are all covered in the GitHub readme.

The repository link (fluent-bit-classic-to-yaml-converter)

Update 4th July 24

A quick update to say that we now have a container configuration in the repository to make the tool very easy to use. All the details will be included in the readme, along with some additional features.

Update 7th July

We’ve progressed past the MVP state now. The detected include statements get incorporated into a proper include block but commented out.

We’ve added an option to convert the attributes to use Kubernetes idiomatic form, i.e., aValue rather than a_value.

The command line has a help option that outputs details such as the control flags.

Update 12th July

In the last couple of days, we pushed a little too quickly to GitHub and discovered we’d broken some cases. We’ve been testing the development much more rigorously since, and it helps that we have the regression container image working nicely. The Javadoc is also generating properly.

We have identified some edge cases that need to be sorted, but most scenarios have been correctly handled. Hopefully, we’ll have those edge scenarios fixed tomorrow, so we’ll tag a release version then.


Useful Quick Reference Links when Writing API Specs

13 Monday May 2024

Posted by mp3monster in APIs & microservices, General, Technology

Tags

API, Async, development, OAS

Whether you’re writing AsyncAPI or OpenAPI specs, unless you’re doing it pretty much constantly, it is useful to have links to the specific details – to quickly check the less commonly used keywords, to make sure you’re not accidentally mixing OpenAPI with AsyncAPI, or to check the differences between version 2 and version 3 of the specs. So here are the references I keep handy:

  • API Handyman’s excellent OpenAPI Map and his spec navigator for Async and OpenAPI
  • Async API Schema (v3) and v2.6
  • Open API Specification (3.1)
  • JSON Schema
  • YAML Schema
  • Common Mark (as allowed in AsyncAPI for formatting text)

There are some useful ISO specs for common data types like dates. Ideally, if you’re working in a specific industry domain, it is worth evaluating the industry-standard definitions (even if you don’t adopt entire standardized objects). But when you’re not in such a position, it is at least worth using standard ways of representing data – it saves on documentation effort.

  • ISO Message Catalog covers a wide range from Swift to Telecos (emphasis is on financial-related standards)
  • ISO 3166 – Country Codes
  • ISO 639 – Language Codes
  • ISO 8601 Date & Time
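
As a quick illustration of leaning on these standards, here is a fragment of an OpenAPI 3.1 schema (the object and property names are mine):

components:
  schemas:
    Booking:
      type: object
      properties:
        country:
          type: string
          description: ISO 3166-1 alpha-2 country code
          example: GB
        createdAt:
          type: string
          format: date-time             # the RFC 3339 profile of ISO 8601
          example: '2024-05-13T09:30:00Z'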

Not a standard, but still an initiative to promote consistency, backed by the likes of Microsoft, so it can provide some insights, ideas, and templates for common data structures – https://schema.org/

There are, of course, a lot of technology-centered standards such as media streaming, use of HTTP, etc.

These and many more resources are in my Tech resources.


Observing the Observer (Fluent Bit monitoring)

25 Thursday Apr 2024

Posted by mp3monster in Fluentbit, General, Technology

Tags

development, FluentBit, Grafana, Jaeger, logs, metrics, OpenSearch, Prometheus, Traces

In the Fluent Bit book, I touch on the point that we should be observing the observer. After all, if we don’t monitor our observability stack, we’ll be operating blind and may never know until things go catastrophically wrong and we’re getting complaints that production business solutions are down. One of the peer review comments was that it would be really good to have a visual representation in the book. While I’d love to incorporate such diagrams, for them to be readable they use up a lot of space on the printed page, and very long chapters can put some readers off. So, given the point wasn’t a key theme, we simply couldn’t incorporate the diagram.

But the suggestion is a good one. So we’ve created the visual representation here.

Annotated diagram showing how Fluent Bit could be used to monitor an open-source observation stack.

The diagram shows how we can monitor our observation tech stack so we don’t become ‘blind’ without knowing about it. The outflow on the right of the diagram would send a signal to a notification service, which could be as simple as email or as user-friendly as Slack. We’ve indicated which kinds of metrics, logs, and traces could offer the most value from our observation tech stack.

If we’re running everything within a Kubernetes cluster, it would be easy to say we don’t need such a sophisticated setup, as we can use Kubernetes liveness probes if the containers are well configured. While it is true that if one of our services starts to fail, a liveness check should pick it up and recycle the container, such probes only worry about the HTTP response code, not the cause. If we don’t monitor and capture more information, we’ll never understand the problem. At worst, we could end up watching Kubernetes start and then kill our containers in a vicious cycle while we struggle to resolve the cause. So collecting the logs and metrics remains just as important.

How to Publish Fluent Bit Metrics and Logs

To publish Fluent Bit’s own metrics to Prometheus, we need to configure the fluentbit_metrics input plugin (it does sound odd as an input, but there are reasons, which become clearer in the book). We then route the metrics to an output that either lets Fluent Bit act as a Prometheus exporter to be scraped, or makes use of the remote write API.
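
A minimal sketch of the scrape-based approach (the tag and port are arbitrary choices):

pipeline:
  inputs:
    - name: fluentbit_metrics
      tag: internal_metrics
      scrape_interval: 2              # collect Fluent Bit's own metrics every 2 seconds
  outputs:
    - name: prometheus_exporter
      match: internal_metrics
      host: 0.0.0.0
      port: 2021                      # arbitrary port for Prometheus to scrape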

The log output for Fluent Bit can be configured via the command line or in the SERVICE block (using the log_file and log_level attributes in the configuration file); together, these set the log threshold and identify the log file. We can then, of course, configure a tail input plugin against that file if we want to send the logs to OpenSearch. We can also set plugin-specific logging thresholds as overrides to the Fluent Bit-wide setting in the SERVICE block.

Configuring the other monitoring tools

  • Grafana’s configuration allows it to publish Prometheus-scrapable metrics and OTLP-compliant traces, as documented here.
  • Prometheus provides metrics on itself (details here), offers logging controls as part of its command line, and generates logfmt or JSON logs (details here).
  • OpenSearch’s logs can be accessed as documented here. The logs are created with Log4j2, which means that, out of the box, they are easy to parse. Outputting slow query reports does need to be switched on. OpenSearch also illustrates a pre-OpenTelemetry/OpenMetrics approach to sharing internal metrics by writing them as logs; however, there are ways to convert such log events to OTLP metrics with Fluent Bit.
  • Jaeger provides Prometheus-compatible metrics endpoints, along with JSON-based logs, as documented here. There is also some support for tracing.

