Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Category Archives: Technology

Fluent Bit 4.1 – what to be excited about in this major release?

10 Friday Oct 2025

Posted by mp3monster in Fluentbit, General, Technology

≈ Leave a comment

Tags

4.1, Fluent Bit, Fluentbit, release

Fluent Bit version 4’s first major release dropped a couple of weeks ago (September 24, 2025). The release’s feature list isn’t as large as 4.0.4’s (see my blog here), so it would be easy to interpret the change as less profound. But that isn’t the case. There are some new features, which we’ll come to shortly, and some significant improvements under the hood.

Under the hood

Fluent Bit v4.1 has introduced significant enhancements to its processing capabilities, particularly in the handling of JSON. This really matters, as handling JSON is central to Fluent Bit. Because so many systems handle JSON, there is continuous work at the lowest levels, across different libraries, to make it as fast as possible. Without going into the deep technical details, Fluent Bit has amended its processing to use a mechanism based on yyjson. If you examine the benchmarking of yyjson against comparable libraries, you’ll see that its throughput is astonishing, so accelerating processing by even a couple of percentage points can deliver a profound performance enhancement.
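To get a feel for why parser speed matters, here is a rough, hypothetical illustration (not a benchmark of Fluent Bit or yyjson themselves): measure the cost of parsing one JSON log record, then see how even a 5% per-record saving compounds at telemetry-scale event rates. The record shape and event rates are my own made-up numbers.

```python
import json
import time

# A hypothetical log record, roughly the shape Fluent Bit might carry.
record = json.dumps({"ts": 1727131337, "level": "info",
                     "msg": "request handled", "latency_ms": 12})

# Time the per-record parse cost with the stdlib parser.
n = 50_000
start = time.perf_counter()
for _ in range(n):
    json.loads(record)
per_record = (time.perf_counter() - start) / n

# Suppose a faster parser shaved 5% off per-record cost; at a sustained
# 100,000 events/sec, the CPU time saved per day would be roughly:
events_per_day = 100_000 * 86_400
saved_seconds = events_per_day * per_record * 0.05
print(f"per-record parse: {per_record * 1e6:.2f} us, "
      f"hypothetical daily saving: {saved_seconds:.0f}s of CPU time")
```

Even fractions of a microsecond per record turn into whole seconds of CPU time per day at these rates, which is why library-level JSON work pays off.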

The next improvement in this area is the use of SIMD (Single Instruction, Multiple Data), which is all about exploiting microprocessor architecture to achieve faster processing. Logs often need characters like newlines, quotes, and other special characters encoding, and logs frequently carry these characters, something we tend to overlook. Consider a stack trace, where each step of the stack chain appears on a new line, or a stringified XML or JSON block embedded in the log, which will have multiple uses of quotes. As a result, any optimization of string encoding will quickly yield performance benefits. Because SIMD takes advantage of CPU characteristics, not every build will have the feature. You can check whether SIMD is being used by examining the output during startup, like this:

This build for Windows x86-64, as shown in the 3rd info statement, doesn’t have SIMD enabled.
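To see why escaping work adds up, here is a small illustration (my own example, not Fluent Bit code): serializing a stack-trace-like string as JSON forces the encoder to rewrite every newline and quote as an escape sequence, and typical log payloads are full of them.

```python
import json

# A hypothetical stack trace: newlines and quotes all need escaping
# when the record is serialized as JSON.
stack_trace = ('Traceback (most recent call last):\n'
               '  File "app.py", line 42, in handler\n'
               '  File "svc.py", line 7, in lookup\n'
               "ValueError: bad input: '42a'")

encoded = json.dumps(stack_trace)
escapes = encoded.count('\\')  # every backslash marks an escaped character
print(f"{len(stack_trace)} chars in, {len(encoded)} chars out, "
      f"{escapes} escape sequences")
```

Each of those escape sequences is byte-shuffling work a SIMD-enabled encoder can do several characters at a time.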

Other back-end improvements have been made to TLS handling, along with the ability to collect GPU metrics from AMD hardware (significant, as GPUs are becoming ever more critical not only in large AI arrays, but also for AI at the edge).

So, how do we as app developers benefit?

If Fluent Bit’s performance improves, we benefit from a reduced risk of backpressure-related issues. We can either complete more work with the same compute (assuming we’re at maximum demand all the time), or we can reduce our footprint, saving money either as capital expenditure (capex) or operational expenditure (opex) (not something most developers typically seek until systems are operating at hyperscale). Alternatively, we can further utilize Fluent Bit to make our operational life easier, for example, by reducing sampling slightly, implementing additional filtering/payload logic, or streaming to detect scenarios as they occur rather than bulk processing on a downstream platform.

Functional Features

In terms of functionality, which as developers we’re very interested in because it can make our day job easier, there are a few new features.

AWS

In terms of features, there are a number of enhancements to support AWS services. This isn’t unusual, as the AWS team appears to have a very active team of contributors for their plugins. Here the improvement is support for Parquet files in S3.

Supervisory Process

As our systems go ever faster and become more complex, particularly when distributed, it becomes harder to observe and intervene if a process appears to fail, or worse, becomes unresponsive. As a result, we need tools to have some ‘self-awareness’. Fluent Bit introduces the idea of an optional supervisor process. This is a small, relatively simple process that spawns the core of Fluent Bit and has the means to check the process and act as necessary. To enable the supervisor, we add --supervisor to the command line. This feature is not available on all platforms, and the logic should report back to you during startup if you can’t use the feature. Unfortunately, the build I’m trying doesn’t appear to have the supervisor correctly wired in (it returns an error message saying it doesn’t recognize the command-line parameter).

If you want to see in detail what the supervisor is doing, you can find its core in /src/flb_supervisor.c, specifically the supervisor_supervise_loop function.
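As a conceptual sketch of what such a loop does (this is NOT Fluent Bit’s implementation; the real one is C, in the source file mentioned above): spawn the worker, wait on it, and restart it if it exits abnormally. The function and worker here are my own placeholders.

```python
import subprocess
import sys
import time

def supervise(cmd, max_restarts=3):
    """Spawn cmd, restart on abnormal exit, give up after max_restarts."""
    restarts = 0
    while True:
        proc = subprocess.Popen(cmd)
        code = proc.wait()
        if code == 0:
            return 0                 # clean shutdown: our job is done
        if restarts >= max_restarts:
            return 1                 # persistent failure: give up
        restarts += 1
        time.sleep(0.1)              # brief backoff before restarting

if __name__ == "__main__":
    # Stand-in worker; a real supervisor would launch the fluent-bit core.
    sys.exit(supervise([sys.executable, "-c", "print('worker ran')"]))
```

The real supervisor has more to do (health checks, signal handling), but the spawn-wait-restart skeleton is the heart of the idea.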

Conclusion

With the number of differently built and configured systems out there, we’ll no doubt see 4.1.x releases as these edge-case gremlins are found and resolved.


Design of Web APIs – 2nd Edition

06 Monday Oct 2025

Posted by mp3monster in APIs & microservices, Books, General, Technology

≈ Leave a comment

Tags

API, API Evangelist, APIHandyman, AsyncAPI, book, development, OAS, Open API, practises, review, Swagger

When it comes to REST-based web APIs, I’ve long been an advocate of the work of Arnaud Lauret (better known as the API Handyman) and his book The Design of Web APIs. I have, with Arnaud’s blessing, utilized some of his web resources to help illustrate key points when presenting at conferences and to customers on effective API design. I’m not the only one who thinks highly of Arnaud’s content; other leading authorities, such as Kin Lane (API Evangelist), have also expressed the same sentiment. The news that a 2nd Edition of the book has recently been published is excellent. Given that the 1st edition was translated into multiple languages, it is fair to presume this edition will see the same treatment (as well as having the audio treatment).

Why 2nd Edition?

So, why a second edition, and what makes it good news? While the foundational ideas of REST remain the same, the standard used to describe and bootstrap development has evolved to address practices and offer a more comprehensive view of REST APIs. Understanding the Open API specification in its latest form also helps with working with the Asynchronous API specifications, as there is a significant amount of harmony between these standards in many respects.

The new edition also tackles a raft of new considerations as the industry has matured, from using tooling to lint for consistency as our catalogue of APIs grows (to use linting tools, we need guidelines on how to use the specification, what we want to make uniform, and how divergence is addressed), to questions about how an API’s supporting material fits into an enriched set of documents and resources, such as those often offered by a developer portal.

However, the book isn’t simply a guide to Open API; the chapters delve into the process of API design itself, including what to expose and how to expose it. How to make the APIs consistent, so that a developer understanding one endpoint can apply that understanding to others. For me, the book shows some great visual tools for linking use cases, resources, endpoint definitions, and operations. Then, an area that is often overlooked is the considerations under the Non-Functional Requirements heading, such as those that ensure an API is performant/responsive, secure, supports compatibility (avoiding or managing breaking changes), and clear about how it will respond in ‘unhappy paths’. Not to mention, as we expand our API offerings, the specification content can become substantial, so helping to plot a way through this is excellent.
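As a tiny taste of what “consistent and clear about unhappy paths” means in practice, here is a minimal, hypothetical OpenAPI 3.1 fragment (my own example, not from the book): a predictable resource path, and the error response documented rather than left implicit.

```yaml
openapi: 3.1.0
info:
  title: Orders API (illustrative example)
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The order matching orderId
        "404":
          description: No order exists with this id (the unhappy path, spelled out)
```

A developer who has seen this endpoint can guess what /customers/{customerId} will look like, which is exactly the consistency the book argues for.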

Think You Know API Design?

There will be some who will think, ‘Hey, I understand the OpenAPI Specification; I don’t need a book to teach me how to design my APIs.’ To those, I challenge you to reconsider and take a look at the book. The spec shows you how to convey your API. The spec won’t guarantee a good API. The importance of good APIs grows from an external perspective – it’s a way to differentiate your service from others. When there is competition, and if your API is complex to work with, developers will fight to avoid using it. Not only that, in a world where AI utilizes protocols like MCP, having a well-designed, well-documented API increases the likelihood of an LLM being able to reason and make calls to it.

Conclusion

If there is anything to find fault with (and I’m trying hard), it would be that it would have been nice if the coverage had expanded a little further into Asynchronous APIs (there is a lot of Kafka and related tech out there that could benefit from good AsyncAPI material), and perhaps ventured further into how we can make it easier to achieve natural language to API (NL2API) for use cases like working with MCP (and potentially with A2A).

  • Amazon UK
  • Amazon US
  • Manning Direct
  • Barnes & Noble


Challenges of UX for AI

21 Sunday Sep 2025

Posted by mp3monster in General, Technology

≈ 1 Comment

Tags

AI, UI, UX

AI, specifically Generative AI, has been dominating IT for a couple of years now. If you’re a software vendor with services that interact with users, you’ll probably have been considering how to ensure you’re not left behind, or perhaps even how to use AI to differentiate yourself. The answer can be light-touch AI that makes the existing application a little easier to use (smarter help documentation, auto-formatting, and spelling for large text fields). At the other end of the spectrum is the question of how to make AI central to our application, which can be pretty radical. Both ends of the spectrum carry risks. Light-touch use can be seen as ‘AI whitewashing’, adding something cosmetic so you can add AI enablement to the product marketing. At the other extreme, ripping out chunks of traditional menu and form-based UI that let users access or create content in a couple of quick clicks or keystrokes can increase the product cost (AI consumes more compute cycles, incurring a cost along the way) for, at best, a limited gain.

While AI whitewashing is harmful and can impact a brand image, at least the features can be ignored by the user. The latter approach, however, requires significant investment and can easily lead to the perception that the product isn’t as capable as it could or should be.

At the heart of this are a couple of basic considerations that UX design has identified for a long time:

  • For a user to get the most out of a solution, they need a mental model of the capabilities your product can provide and the data it has. These mental models come from visual hints – those hints come from menus, right-click operations, and other visual clues. UI specialists don’t do eye tracking studies just for the research grant money.
  • UI best practices provide simple guidance stating that there should be at least three ways to use an application, supporting novice users, the average user, and the power user. We can see this in straightforward things, such as multiple locations for everyday tasks (right-click menus, main menu, ribbon with buttons), not to mention keyboard shortcuts. Think I’m over-stating things? I see very knowledgeable, technically adept users still type and then navigate to the menu ribbon to embolden text (rather than simply use the near-universal Ctrl+B). Next time you’re on a Zoom/Teams call, working with someone on a document, just watch how people are using the tools. On the other end of the spectrum, some tools allow us to configure accelerator key combinations to specific tasks, so power users can complete actions very quickly.
  • Users are impatient – the technology industry has prided itself on making things quicker, faster, more responsive, from Moore’s law with computer chips to mobile networks (Edge, 3G … 5G, and 6G in development). So if things drop outside those norms, there is an exponential chance of the user abandoning an action (or worse, trying to make it happen again, multiplying the workload). AI is computationally expensive, so by its nature, it is slower.
  • Switching input devices incurs a time cost, such as when moving between keyboard and mouse. Over the years, numerous studies have identified ways to reduce or eliminate such costs. Therefore, we should minimize such switching. Simple measures, such as being able to tab through UI widgets, can help achieve this.
  • User tolerance to latency has been an ongoing challenge – we’re impatient creatures. There are well-researched guidelines on this topic, and if you take a moment to examine some of the techniques available in UI, particularly web UIs, you will see that they reflect this. For example, prefetching content, particularly images, rendering content as it is received, and infinite scrolling.

All of this could be interpreted as being anti-AI, or even as someone wanting to protect jobs by advocating that we continue the old way. Far from it: AI can really help, and I have long been an advocate of the idea that AI could significantly simplify tasks such as report generation in products that rely heavily on structured data capture. Why? Because structured form capture processes help build a mental model of the data held, the relationships, and the terminology in the system, enabling us to formulate queries more effectively.

The point is, we should empower users to use different modes to achieve their goals. In the early days of web search, the search engines supported the paradigm of navigating using cataloguing of websites. Only as the understanding of search truly became a social norm did we see those means to search disappear from Yahoo and Google because the mental models of using search engines established themselves. But even now, if you look, those older models of searching/navigating still exist. Look at Amazon, particularly for books, which still offers navigation to find books by classification. This isn’t because Amazon’s site is aging, anything but. It is a recognition that to maximize sales, you need to support as many ways of achieving a goal as are practical.

Navigation categories for historical books, demonstrating various time periods and regions – Amazon.

If there is a call to arms here, it is this – we should complement traditional UX with AI, not try to replace it. When we look at an AI-driven interaction, we use it to enable users to solve problems faster, solve problems that can’t be easily expressed with existing interactions and paradigms. For example, replacing traditional reporting tools that require an understanding of relational databases or reducing/removing the need to understand how data is distributed across systems.

Some of the better uses of AI as part of UX are subtle – for example, Grammarly, or Google’s AI introduction to search, which looks a lot like an oversized search result. But we can, and should, consider using AI not just as a different way to drive change into traditional UX, but to open up other interaction models, enabling their use in new ways. For example, rather than watching or reading how to do something, we can use AI to translate instructions to audio and talk us through a task as we complete it – think of a mechanical engineering task that requires both hands for the tools. There is also the use of different interaction models to help overcome disabilities.

Don’t take my word for it; here are some useful resources:

  • Nielsen Norman Group – article about the adverse impact AI can have
  • AI is reshaping UI
  • Designing with AI
  • AI for disabilities – UN Report
  • AI won’t kill UX – we will
  • NeuroNav blog


More Posts at The New Stack

12 Friday Sep 2025

Posted by mp3monster in Books, Fluentbit, Fluentd, manning, Technology

≈ Leave a comment

Tags

blog, book, development, ebook, FLB, Fluent Bit, logging, TheNewStack, TNS

As The New Stack regularly posts new extracts, we’re updating this page; accordingly, the date below reflects the last update.

December 22, 2025

With the publication of Logging Best Practices (for background to this, go here), more articles have been published through The New Stack, extending the original list we blogged about here.

The latest articles:

  • Kubernetes Auditing and Events: Monitoring Cluster Activity (19th December 25) NEW
  • What To Know Before Building Fluent Bit Plugins With Go (21st November 25)
  • How Are OpenTelemetry and Fluent Bit Related? (29th October 25)
  • A Guide To Fluent Bit’s Health Check API Endpoints (17th September 25)
  • Understanding Log Events: Why Context Is Key (11th September 25)
  • How to Evaluate Logging Frameworks: 10 Questions (21st August 25)
  • Using Logging Frameworks for Application Development (7th August 25)
  • Logging Best Practices: Defining Error Codes (18th July 25)

The previous list:

  • What’s Driving Fluent Bit Adoption? (26th June 25)
  • What Is Fluent Bit? (10th June 25)
  • What Are the Differences Between OTel, Fluent Bit and Fluentd? (8th July 25)
  • Fluent Bit, a Specialized Event Capture and Distribution Tool (30th May 25)


Microservices Patterns 2nd edition in the works

24 Sunday Aug 2025

Posted by mp3monster in Book Reviews, Books, General, manning, Technology

≈ Leave a comment

Tags

book, ebook, Microservices, Patterns, review

Back in 2018, Manning published Chris Richardson‘s Microservices Patterns book. In many respects, this book is the microservices version of the famous Gang of Four patterns book. The exciting news is that Chris is working on a second edition.

One key difference between the GoF book and this one is that engaging with patterns like Inversion of Control and Factories isn’t impacted by considerations of architecture, organization, and culture, whereas microservices patterns very much are.

While the foundational ideas of microservices are established, the techniques for designing and deploying have continued to evolve and mature. If you follow Chris through social media, you’ll know he has, in the years since the book’s first edition, worked with numerous organisations, training and helping them engage effectively with microservices. As a result, a lot of processes and techniques that Chris has identified and developed with customers are grounded in real practical experience.

As the book is in its early access phase (MEAP), not all chapters are available yet, so plenty to look forward to.

So even if you have the 1st edition and work with microservice patterns, the updates will, I think, offer insights that could pay dividends.

If you’re starting your software career or considering the adoption of microservices (and Chris will tell you it isn’t always the right answer), I highly recommend getting a copy, as with the 1st edition, the 2nd will become a must-read book.


Logging Best Practices – a sort of new book

21 Thursday Aug 2025

Posted by mp3monster in Books, development, Fluentbit, Fluentd, General, Technology

≈ 1 Comment

Tags

book, books, ebook, reading, writing

So, there is a new book title out with me as the author (Logging Best Practices), published by Manning, and yes, the core content was written by me. But was I involved with the book? Sadly not. So what happened?

Background

To introduce the book, I need to share some background. A tech author’s relationship with their publisher can be a little odd and potentially challenging (the editors are looking at the commerciality – what will ensure people consider your book – as well as readability; as an author, you’re looking at what you think is important as a technical practitioner).

It is becoming increasingly common for software vendors to sponsor books. Book sponsorship involves the sponsor’s name on the cover and the option to give away ebook copies of the book for a period of time, typically during the development phase, and for 6-12 months afterwards.

This, of course, comes with a price tag for the sponsor and guarantees the publisher an immediate return. Of course, there is a gamble for the publisher as you’re risking possible sales revenue against an upfront guaranteed fee. However, for a title that isn’t guaranteed to be a best seller, as it focuses on a more specialized area, a sponsor is effectively taking the majority investment risk from the publisher (yes, the publisher still has some risk, but it is a lot smaller).

When I started on the Fluent Bit book (Logs and Telemetry), I introduced friends at Calyptia to Manning, and they struck a deal. Subsequently, Calyptia was acquired by Chronosphere (Chronosphere acquires Calyptia), so they inherited the sponsorship. This was an arrangement I had no issue with; as I’ve written before, I write as a means to share what I know with the broader community. It also meant my advance would be immediately settled (the advance, which comes pretty late in the process, is a payment that the publisher recovers by keeping the author’s share of book sales).

The new book…

How does this relate to the new book? Well, the sponsorship of Logs and Telemetry is coming to an end, and it appears that Chronosphere and Manning reached a new commercial marketing agreement. Unfortunately, in this case, the agreement over publishing content wasn’t shared with me as the author, nor with the commissioning editor at Manning I have worked with. So we had no input on the content, or on who would contribute a foreword (usually someone the author knows).

Manning is allowed to do this; it is the most extreme application of the agreement with me as an author. But that isn’t the issue. The disappointing aspect is the lack of communication – discovering a new title while looking at the Chronosphere website (and then on Manning’s own website) and having to contact the commissioning editor to clarify the situation isn’t ideal.

Reading between the lines (and possibly coming to 2 + 2 = 5), Chronosphere has launched a new log management product and was presumably interested in sponsoring content that ties in with it. My first book with Manning (Logging in Action), which focused on Fluentd, includes chapters on logging best practices and using logging frameworks. As a result, a decision was made to combine chapters from both books to create the new title.

Had we been in the loop during the discussion, we could have looked at tweaking the content to make it more cohesive and perhaps incorporated some new content – a missed opportunity.

If you already have the Logging in Action and Logs and Telemetry titles, then you already have all the material in Logging Best Practices. While the book is listed on the Manning site, if you follow the link or search for it, you’ll see it isn’t available there. Today, the only way to get a copy is to go to Chronosphere and give them your details. If you only have one of the books, I’d recommend considering buying the other (yes, I’ll get a small single-digit percentage of the money you spend), but more importantly, you’ll have material covering the entire Fluent ecosystem, and plenty of insights that will help even if you’re currently focused on only one of the Fluent tools.

Going forward

While I’m disappointed by how this played out, it doesn’t mean I won’t work with Manning again. But we’ll probably approach things a little differently. At the end of the day, the relationship with Manning extends beyond commercial marketing.

  • Manning has a tremendous group of authors, and aside from writing, the relationship allows me to see new titles in development.
  • Working with the development team is an enriching experience.
  • It is a brand with a recognized quality.
  • The social/online marketing team(s) are great to interact with – not just to help with my book, but with opportunities to help other authors.

As to another book, if there was an ask or need for an update on the original books, we’d certainly consider it. If we identify an area that warrants a book and I possess the necessary knowledge to write it, then maybe. However, I tend to focus on more specialized domains, so the books won’t be best-selling titles. It is this sort of content that is most at risk of being disrupted by AI, and things like vibe coding will have the most significant impact, making it the riskiest area for publishers. Oh, and this has to be worked around the day job and family.


Fluent Bit 4.0.4

21 Monday Jul 2025

Posted by mp3monster in development, Fluentbit, General, Technology

≈ 1 Comment

Tags

characters, encoding, filters, Fluent Bit, fstat, GBK, Lua, Open Telemetry, ShiftJIS, tail, Technology, WhatWG

The latest release of Fluent Bit is only considered a patch release (based on SemVer naming), but given the enhancements included, it would have been reasonable to call it a minor release. There are some really good enhancements here.

Character Encoding

As all mainstream programming languages have syntaxes that lend themselves to English or Western languages, it is easy to forget that much of the global population uses languages without this heritage, whose text is often not encoded as UTF-8. For example, according to the World Factbook, 13.8% of the world speaks Mandarin Chinese. While this doesn’t immediately translate into written communication or language use with computers, it is a clear indicator that, when logging, we need to support log files encoded for idiomatic languages such as Simplified Chinese, using encodings such as GBK and Big5. Internally, however, Fluent Bit transmits the payload as JSON, so the encoding needs to be handled; this means log file ingestion with the Tail plugin ideally needs to support such encodings. To achieve this, the plugin features a native character encoding engine that is directed using a new attribute called generic.encoding, which specifies the encoding the file is using.

service:
  flush: 1

pipeline:
  inputs:
    - name: tail
      path: ./basic-file.gsk
      read_from_head: true
      exit_on_eof: true
      tag: basic-file
      generic.encoding: gbk

  outputs:
    - name: stdout
      match: "*"

The encoders supported out of the box, with their recognized names (shown in italics), are:

  • GB18030 (the Chinese government standard for Simplified Chinese, Information Technology – Chinese coded character set, which extends GBK)
  • GBK (earlier Simplified Chinese standard, extended by GB18030)
  • UHC (Unified Hangul Code, also known as Extended Wansung – for Korean)
  • ShiftJIS (Japanese characters)
  • Big5 (for Chinese as used in Taiwan, Hong Kong, Macau)
  • Win866 (Cyrillic Russian)
  • Win874 (Thai)
  • Win1250 (Latin 2 & Central European languages)
  • Win1251 (Cyrillic)
  • Win1252 (Latin 1 & Western Europe)
  • Win1254 (Turkish)
  • Win1255 (also known as cp1255 and supports Hebrew)
  • Win1256 (Arabic)
  • Win2513 (suspect this should be Win1253, which covers the Greek language)

These encodings are catalogued by the WHATWG (Web Hypertext Application Technology Group) Encoding Standard; not a well-known name, but the group has an agreement with the well-known W3C for various HTML and related standards.

The Win**** encodings are Windows code pages that predate Microsoft’s adoption of UTF-8.
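To see why declaring the encoding matters, here is a quick illustration (plain Python, nothing Fluent Bit specific): the same stored bytes read back as entirely different text depending on which code page is assumed.

```python
# One byte sequence, two interpretations: the declared code page matters.
raw = "Привет".encode("cp1251")      # Cyrillic "hello" stored as Win1251 bytes

print(raw.decode("cp1251"))  # -> Привет  (correct: Win1251 is Cyrillic)
print(raw.decode("cp1252"))  # -> Ïðèâåò  (mojibake: bytes read as Latin 1)
```

Declaring the file’s real encoding lets the Tail plugin convert the content correctly before it enters the pipeline, rather than propagating mojibake downstream.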

Log Rotation handling

The Tail plugin has also seen another improvement. Working with remote file mounts has been challenging, as it is necessary to ensure that file rotation is properly recognized. To improve rotation detection, Fluent Bit has been modified to take full advantage of fstat. From a configuration perspective, we won’t see any changes, but in terms of handling edge cases, the plugin is far more robust.
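Fluent Bit’s implementation is in C, but the idea behind fstat-based rotation detection can be sketched in a few lines of Python (the function here is my own illustration, not Fluent Bit code): compare the device/inode pair of the descriptor you hold open with the pair currently behind the path name; if they differ, the file has been rotated out from under you.

```python
import os

def has_rotated(open_file, path):
    """Return True if `path` now refers to a different file than the one
    we hold open, i.e. the log has been rotated (POSIX-style sketch)."""
    try:
        current = os.stat(path)          # stat() the path name
    except FileNotFoundError:
        return True                      # renamed/removed, no replacement yet
    held = os.fstat(open_file.fileno())  # fstat() our open descriptor
    # A different (device, inode) pair means a new file replaced the old one.
    return (held.st_dev, held.st_ino) != (current.st_dev, current.st_ino)
```

A tailer using this check would keep draining the old descriptor until it reaches EOF, then reopen the path to pick up the new file.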

Lua scripting for OpenTelemetry

In my opinion, the Lua plugin has been an underappreciated filter. It provides the means to create customized filters and transformers with minimal overhead and effort. Until now, though, Lua has been limited in its ability to interact with OpenTelemetry payloads. This has been rectified by introducing a new callback signature with an additional parameter, which gives access to the OTLP attributes so they can be examined and, if necessary, a modified set returned. The new signature does not invalidate existing Lua scripts using the older three- or four-parameter forms, so backward compatibility is retained.

The most challenging aspect of using Lua scripts with OpenTelemetry is understanding the attribute values. Given this, let’s just see an example of the updated Lua callback. We’ll explore this feature further in future blogs.

-- New five-parameter callback: 'group' exposes the OTLP group/resource
-- information, and 'metadata' carries per-record OTLP metadata.
function cb(tag, ts, group, metadata, record)
  -- Copy the OTLP resource attribute service.name into the record itself
  if group['resource']['attributes']['service.name'] then
    record['service_name'] = group['resource']['attributes']['service.name']
  end

  -- Promote records logged at INFO (severity number 9) to WARN (13)
  if metadata['otlp']['severity_number'] == 9 then
    metadata['otlp']['severity_number'] = 13
    metadata['otlp']['severity_text'] = 'WARN'
  end

  -- Return code 1 (record modified), the timestamp, and the updated
  -- metadata and record
  return 1, ts, metadata, record
end
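For completeness, a callback like this is attached to the pipeline via the Lua filter’s script and call properties; the configuration looks something along these lines (the script file name is illustrative):

```yaml
pipeline:
  filters:
    - name: lua
      match: "*"
      script: otel-severity.lua   # file containing the cb function above
      call: cb                    # name of the Lua function to invoke
```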

Other enhancements

With nearly every release of Fluent Bit, you can find plugin enhancements to improve performance (e.g., OpenTelemetry) or leverage the latest platform enhancements, such as AWS services.

Links

  • Fluent Bit
  • Fluent Bit on GitHub
  • Logs and Telemetry (Using Fluent Bit)


Cookie Legislation

28 Saturday Jun 2025

Posted by mp3monster in General, Technology

≈ Leave a comment

Tags

cookies, law, legislation, security, Technology

Just about any web-based application will have cookies, even if they are being used as part of session management. Then, if you’re in the business-to-consumer space, you’ll likely use tracking cookies to help understand your users.

Understanding what is required depends on which part of the world your application is used in. For the European Union (EU) and the broader European Economic Area (EEA), this is easy, as all the countries have adopted the GDPR and several related laws, such as the ePrivacy Directive.

For North America (USA and Canada), the issue is a bit more complex, as it is a patchwork of federal and state/province laws. But the strictest state legislation, such as California’s, aligns closely with European demands; so, as a rule of thumb, meet EU legislation and you should be in pretty good shape in North America (from a non-lawyer’s perspective).

The problem is that the EEA accounts for 30 countries (see here); add the USA and Canada, and we have 32 of the 195 UN-recognized states (note there is a difference between UN membership and UN recognition). So, how do we understand what the rules are for the remaining 163 countries?

I’m fortunate to work for a large multinational company with a legal team that provides guidelines for us to follow. However, I obviously can’t share that information or use it personally. Not to mention, I was a little curious to see how hard it is to get a picture of the global landscape and its needs.

It turns out that getting a picture of things is a lot harder than I’d expected. I’d assumed that finding aggregated guidance would be easy (after all, there are great sites like DLA Piper’s and the UN Trade & Development that cover the more general data protection law). But, far from it. I can only attribute this to the fact that there is a strong business in managing cookie consents.

The resources I did find that look reasonably comprehensive on the subject are:

  • Securiti’s Q2 2024 report
  • Bird & Bird’s Global Cookie Review
  • Termly has a good resource on this area.
  • International Association of Privacy Professionals (IAPP) – has lots of interesting resources on Cookies, but doesn’t provide a consolidated global view.


Fluent Bit Articles on TheNewStack – a Byproduct of a book sponsor?

12 Thursday Jun 2025

Posted by mp3monster in Books, Fluentbit, Fluentd, General, manning, Technology

≈ 1 Comment

Tags

author, book, books, Fluent Bit, manning, reading, The New Stack, TNS, writing

Like many developers and architects, I track the news feeds from websites such as The New Stack and InfoQ. I’ve even submitted articles to some of these sites and seen them published. However, in the last week, something rather odd occurred: articles appeared in The New Stack (TNS) attributed to me, although I had no involvement in the publication process with TNS; yet the content is definitely mine. So what appears to be happening?

To help answer this, let me provide a little backstory. Back in October and November last year, we completed the publication of my book about Fluent Bit (called Logs and Telemetry, with a working title of Fluent Bit with Kubernetes), a follow-up to Logging in Action (which covered Fluent Bit’s older sibling, Fluentd). During the process of writing these books, I have had the opportunity to get to know members of the team behind these CNCF projects and consider them engineering friends. Through several changes, the core team has primarily come to work for Chronosphere. To cut a long story short, I connected the Fluent Bit team to Manning, and they sponsored my book (giving them the privilege of giving away a certain number of copies of the book, cover branding, and so on).

It appears that, as part of working with Manning’s marketing team, authors are invited to submit articles and agree to have them published on Manning’s website. Upon closer examination, the articles appear to have been sponsored by Chronosphere, with an apparent reference to Manning publications. So somewhere among the marketing and sales teams, an agreement has been made, and content has been reused. Sadly, no one thought to tell the author.

I don’t, in principle, have an issue with this; after all, I wrote the book and blog on these subjects because I believe enabling an understanding of technologies like Fluent Bit is valuable, and it is my way of contributing to the IT community. (Yes, I do see a little bit of money from sales, but the money-to-time-and-effort ratio works out to less than minimum wage.)

The most frustrating bit of all of this is that one of the articles links to a book I’ve not been involved with, and the authors of Effective Platform Engineering aren’t being properly credited. It turns out that Chronosphere is sponsoring Effective Platform Engineering (Manning’s page for this is here).

26th June Update – another article

The New Stack published another sponsored article – What’s Driving Fluent Bit Adoption? This one does feel like it has been ‘tweaked’.

2nd July Update – and more …

Another bit of the book on New Stack – What Is Fluent Bit?

8th July Update – and more …

Another bit of the book on New Stack – What Are the Differences Between OTel, Fluent Bit and Fluentd?

Other Posts …

If you’d like to see the articles and my bio now on The New Stack:

  • Fluent Bit Core Concepts
  • Specialized Event Capture and Distribution Tool
  • The New Stack Bio


Working with interns

14 Wednesday May 2025

Posted by mp3monster in General, Technology

≈ 1 Comment

Tags

expectations, intern, learning, mentee, mentor, training

Oracle has an intern programme. While the programmes differ around the world because of the way relationships with educational establishments work and the number of interns that can be supported within a particular part of the organization, there is a common goal—transitioning (under)graduates from a world of theory into productive junior staff (in the cases I work with, that’s developers).

This blog summarizes the steps I have taken with my mentees and elaborates on how I personally approach the mentor role. It serves as a checklist for myself so I don’t have to recreate it as we embark on a new journey.

Interns typically have several lines of reporting – the intern programme leadership, an engineering manager, and a technical mentor. The technical mentor is typically a senior engineer or architect with battle-hardened experience who can explain the broader picture and why things are done in particular ways. The mentor and engineering manager roles can often overlap, but both points of contact exist, as that is how we run our product teams.

Each of the following headings covers the different phases of an intern engagement.

Introduction conversation

When starting the intern programme, as with any mid-sized organisation, there is a standard onboarding process, which covers details such as corporate policy and possibly mandatory development skills. While this is happening, I’ll reach out to briefly introduce myself and ask the intern to let me know when they think they’ll have completed that initial work. We use that as the point at which we have an initial, wide-ranging conversation covering …

  • expectations, goals, and rules of engagement
    • I have a couple of simple rules, which I ask my team and interns to work by:
      • Don’t say you understand when you don’t – as a mentor, your learning is as much about my ability to communicate as it is about your attention
      • No question is stupid
        • Your questions help me understand you better, pitch my communication better, appreciate what needs to be explained, and point out the best resources to help.
        • The more you ask, the more I share, which will help you find your path.
      • Mistakes are fine (and often, you can learn a lot from them), but we should own them (no deflection, or hiding – if there is a mistake or problem, let’s address it as soon as possible) and never repeat the same mistake.
  • We discuss the product’s purpose, value proposition, and definition of success. How do the architecture and technologies being used contribute to the solution? This helps, particularly when some of the technologies may not be seen as cool. It also provides context for the stories the intern will pick up.

Ongoing dialogue

During the internship, we have a weekly one-to-one call. Initially, the focus is on discussing progress, but as things develop, I encourage the intern to use the session to discuss anything they wish: the technologies, what they enjoy, how they’re progressing, what is good and what could be better, resources to learn from, and things to try.

Importantly, I emphasize that the interns are part of the team and never need to wait for these weekly calls if they have concerns, questions, requests, or need help. That way, we get a grip on any issues early, before things start to go very wrong.

Tasks & backlog

While the interns may not (at least to start with) be working on a product, or at least not be focused on its immediate tasks, we adopt normal working processes and practices. So, we manage tasks through JIRA, and the development processes are the same.

  • The major goals during the internship need to support a narrative for the intern’s degree or post-grad defence. At the same time, they need to get a taste of the technologies being used across the product, such as the front-end presentation tier and the persistence and integration tech stack. The work needs to ultimately contribute to the product development programme.
  • In the stories early in the internship, we keep well off the critical path, which means we can take the time to learn and understand why things are the way they are without any pressure. As the internship progresses, we start to bring in stories that are linked to specific deliverable milestones.

Being part of the team

A mentor is only one part of the intern’s education journey. Ideally, learning can come from every interaction, so we need to facilitate that:

  • It’s important that the interns feel part of the team, so they’re included in all the stand-ups and sprint planning, and the intern tasks are managed as stories, just like everyone else’s. Being part of a team helps ease the tension that can arise from working closely with someone who, as the intern knows, also has to evaluate their progress.
  • This gives the interns the chance to build relationships with others they can talk to and learn from – people who are closer to where they are in their career journey.

Helping their Learning

As a mentor supporting technical development, when questions are asked, we take the time not only to answer the immediate question but also to talk about the context and rationale. We point to where sources of information can be found – we don’t want to spoon-feed people, otherwise they’ll never stand up and figure things out for themselves. It is better that people get some direction, figure things out, and then come back and present what they believe the right answer is. This embeds initiative, allows different perspectives to be seen, and makes life easier, as it is always better to be told of a problem together with a proposed solution.

Feedback

Feedback is important. If interns are doing well, it reassures them to know it, and you’ll see more of their best; if there are problems, it is best to have quiet, informal one-to-one conversations – things aren’t bad, but we can all be better. This positioning is constructive, and as a mentor, I’m there to help the intern find ways to overcome any weaknesses, or to recognize that their strengths may be better suited to other roles. The outcome of an internship should never be a surprise, simply a formalized ceremony.


    I work for Oracle, all opinions here are my own & do not necessarily reflect the views of Oracle
