
Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Category Archives: General

All things general

Securing Fluent Bit operations

18 Monday Nov 2024

Posted by mp3monster in Fluentbit, General, Technology


Tags

Fluent Bit

I’ve been peer-reviewing a book in development for Manning called ML Workloads with Kubernetes. The book is in its first review cycle, so it is not yet available in the MEAP programme. I mention this because the book’s first few chapters cover running Apache Airflow and Jupyter Notebooks on a Kubernetes platform. It highlights some very flexible capabilities that, while pretty cool, could be seen by some organizations as potential attack vectors (I should say, the authors have engaged with security considerations from the outset). My point is that while the book talks about various non-functional considerations, including security, there isn’t a section dedicated to security. So, we’re going to talk directly about some security considerations here.

It would be very easy to consider security as not being important when it comes to observability – but that would be a mistake, for a few reasons:

Logging Payloads

It is easy to incorporate all of an application’s data payloads into observability signals such as traces and logs. It’s an easy mistake to make during initial development – you just want to see that everything is being handled as intended, so you include the payload. While we can go back and clean this up, or even remove such output as we tidy up the code, these things can slip through the net. Just about any application today will want login credentials. Credentials are about identifying who we are and determining what we are allowed to see. The fact that they can uniquely identify us is where we usually run into data protection law.

It isn’t unusual for systems to be expected to record who does what and when – all part of common auditing activities. That means our identity is often going to be attached to data flowing through our application.

This makes anywhere that records this data a potential gold mine, and a lack of diligence will mean that our operational support tools and processes become soft targets.

Code Paths

Our applications will carry details of execution paths – from trace-related activities to exception stacks. We need this information to diagnose issues. It is even possible that the code will handle the issue itself, but it is typical to record the stack trace so we can see that the application has had to perform remediation (even if that is simply because we decided to catch an exception rather than write defensive code). So what? Well, that information tells us as developers what the application is doing – but in the wrong hands, it tells the reader how to induce errors and which third-party libraries we’re using, which means they can deduce what vulnerabilities we have (see what OWASP says on the matter here).

Sometimes, our answer to a vulnerability might not be to fix it but to introduce mitigation strategies, e.g., blocking direct access to a system. The issue with such mitigations is that people forget why they’re there, or subvert them for the best of reasons, leaving us accidentally vulnerable again. So, minimizing exposure should be the second line of defense.

How does this relate to Fluent Bit?

Well, the first thing is to assume that Fluent Bit is handling sensitive data, remind ourselves of this from time to time, and even test it. This alone immediately puts us in a healthier place, and we at least know what risks are being taken.

Fluent Bit supports SSL/TLS for network traffic

SSL/TLS traffic involves certificates, and setting up and maintaining such things can be a real pain, particularly if the processes around managing certificates haven’t been carefully thought through and automated. Imposing the management of certificates with manual processes is the fastest way to kill off their adoption and use. Within an organization, certificates don’t have to be the expensive ones that come with big compensation payouts if compromised, such as those provided by companies like Thawte and Symantec. The Linux Foundation’s Let’s Encrypt and protocols like ACME (Automated Certificate Management Environment) make certificates cost-free and provide automation for regular rotation.
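
As a sketch of how little configuration this needs (the host and certificate path below are placeholders, not from the post), enabling TLS on a Fluent Bit output in the classic format is largely a matter of a few keys:

```
[OUTPUT]
    name        http
    match       *
    host        logs.example.com
    port        443
    tls         on
    tls.verify  on
    tls.ca_file /etc/ssl/certs/ca.pem
```

With tls.verify on, the output also checks the server certificate against the supplied CA rather than blindly trusting the endpoint.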

Don’t get suckered by the idea that SSL stripping at the perimeter is acceptable today. It used to be acceptable because, among other reasons, processing certificates carried a measurable overhead. Moore’s law has seen to it that such computational overhead is now tolerable, if not a fraction of a percent. If you’re not convinced, consider that there is sufficient drive for Kubernetes to support mutual TLS between containers that are more than likely running on the same physical server.

Start by considering file system permissions on logs

If you’re working with applications or frameworks that direct logs to local files, you can do a couple of things. First, control the permissions on the files.

Many frameworks that support logging configuration don’t do anything behavioral with the log location (although some do, like Airflow). For those cases where log location doesn’t have a behavioral impact, we can look to control where the logs are written. Structuring logs into a common part of the file system can make things easier to manage, certainly from a file-system-permissions viewpoint.
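
As an illustrative sketch (the path and permission bits are assumptions, not from the post), a dedicated, restricted directory for application logs can be as simple as:

```shell
# Create a common, restricted location for application logs
# (path and bits are illustrative only)
mkdir -p ./logs/myapp
chmod 750 ./logs/myapp   # owner: rwx, group: r-x, others: no access
ls -ld ./logs/myapp
```

In practice you would also set the owning user and group so only the application and the log collector can read the files.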

Watching for sensitive data bleed

If you’re using Fluent Bit to consolidate telemetry into systems like Loki, then we should run regular scans to ensure that no unplanned sensitive data is being captured. We can use tools like telemetrygen to inject known values into the event stream and check that the scanning process detects them.

If or when such a situation occurs, the ideal solution is to fix the root cause. But this isn’t always possible – the issue may come from a 3rd-party library, an organization may be reluctant to make changes, or production changes may be slow. In these scenarios, as discussed in the book, we can use Fluent Bit configurations to mitigate the propagation of such data. But as we said earlier, if you use mitigations, it warrants verifying they aren’t accidentally undone, which takes us back to the start of this point.
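
As a hedged sketch of such a mitigation (the field names are assumptions, not from the post), Fluent Bit’s modify filter can strip sensitive keys before events leave the pipeline:

```
[FILTER]
    name    modify
    match   *
    remove  email
    remove  credit_card
```

This doesn’t fix the root cause, but it stops the fields propagating downstream while the proper fix is pursued.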

Classifying and Tagging data

Telemetry, particularly traces and logs, can be classified and tagged to reflect information about the origin and nature of the event. This is best done nearest the source, as understanding the origin helps the classification process. This is something Fluent Bit can easily do, routing events accordingly, as we show in the book.
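
For instance (a sketch – the tag pattern and classification value are assumptions), the modify filter can attach a classification key near the source, which downstream processing can then act upon:

```
[FILTER]
    name   modify
    match  app.*
    add    classification restricted
```

Events arriving with an app.* tag carry the classification from that point on, so later outputs and filters don’t need to rediscover it.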

Don’t run Fluent Bit as root

Not running Fluent Bit with root credentials is security 101. But it is tempting when you want Fluent Bit to tap in and listen to the OS and platform logs and metrics, particularly if you aren’t a Linux specialist. It is worth investing in an OS base configuration that is secure while not preventing your observability. This doesn’t automatically mean you must use containers. Bare metal, etc., can be secured by installing not from a vendor base image but from an image you’ve built, or, even simpler, by taking the base image and using tools like Chef or Ansible to impose a configuration over the top.

Bottom Line

The bottom line is this: our observability processes and data should be subject to the same care and consideration as our business data, and security should never be an afterthought bolted on just before go-live. It needs to be pervasive rather than applied only at the boundary.

When I learnt to drive (in the dark ages), one of the things I was told was: if you assume that everyone on the road is a clueless idiot, you’ll be ok. We should treat systems development and the adoption of security the same way – if you assume someone is likely to make a mistake and take defensive steps, then we’ll be ok. This will give us security in depth.

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Books Books Books

22 Tuesday Oct 2024

Posted by mp3monster in Books, General, Technology


Tags

book, books, development, ebook, FluentBit, manning, pbook, print, published, reading

Today we got the official notification that our book has been published …

Logs and Telemetry book - order option

As you can see, the eBook is now available. The print edition can be purchased from Thursday (24th Oct). If you’ve been a MEAP subscriber, you should be able to download the complete book. The book will start showing up on other platforms in the coming weeks (Amazon UK has set an availability date, and on Amazon.com you can preorder).

There are some lovely review quotes as well:

A detailed dive into building observability and monitoring.

Jamie Riedesel, author of Software Telemetry
Extensive real-life examples and comprehensive coverage! It’s a great resource for architects, developers, and SREs.

Sambasiva Andaluri, IBM
A must read for anyone managing a critical IT-system. You will truly understand what’s going on in your applications and infrastructure.

Hassan Ajan, Gain Momentum

And there is more …

I hadn’t noticed until today, but the partner book Logging in Action, which covers Fluentd, is available in ebook and print as well as audio and video editions. As you can see, these are available on Manning and platforms like O’Reilly/Safari…

In Logging in Action you will learn how to:

  • Deploy Fluentd and Fluent Bit into traditional on-premises, IoT, hybrid, cloud, and multi-cloud environments, both small and hyperscaled
  • Configure Fluentd and Fluent Bit to solve common log management problems
  • Use Fluentd within Kubernetes and Docker services
  • Connect a custom log source or destination with Fluentd’s extensible plugin framework
  • Apply logging best practices and avoid common pitfalls

Logging in Action is a guide to optimizing and organizing logging using the CNCF Fluentd and Fluent Bit projects. You’ll use the powerful log management tool Fluentd to solve common log management problems, and learn how proper log management can improve performance and make managing software and infrastructure solutions easier. Through useful examples like sending log-driven events to Slack, you’ll get hands-on experience applying structure to your unstructured data.

I have to say that my digital twin, who narrated the book, sounds pretty intelligent.

Update

Amazon UK is correct now and has an availability date.

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

shhh – Fluent Bit book has gone to the printers, and …

13 Sunday Oct 2024

Posted by mp3monster in Books, Fluentbit, General, manning, Technology


Tags

book, ebook, FluentBit, manning, webinar

I thought you might like to know that last week, the production process on the book (Logs and Telemetry with Fluent Bit, written with the working title of Fluent Bit with Kubernetes) was completed, and the book should be on its way to the printers. In the coming weeks, you’ll see the MEAP branding disappear, and the book will appear in the usual places.

If you’ve been brilliant and already purchased the book – the finished version will be available to download soon, and for those who have ordered the ‘tree’ media version – a few more weeks and ink and paper will be on their way.

As part of the promotion, we will be doing a webinar with the book’s sponsor, Chronosphere. To register for the webinar, go to https://go.chronosphere.io/fluent-bit-with-kubernetes-meet-the-author.html

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Migrating from Fluentd to Fluent Bit

08 Tuesday Oct 2024

Posted by mp3monster in Fluentbit, Fluentd, General, Technology


Tags

devops, FluentBit, Fluentd, Kubernetes, mapping, migration, tooling, utility

Earlier in the year, I made a utility available that supported the migration from Fluent Bit classic configuration format to YAML. I also mentioned I would explore the migration of Fluentd to Fluent Bit. I say explore because while both tools have a common conceptual foundation, there are many differences in the structure of the configuration.

We discussed the bigger ones in the Logs and Telemetry book. But as we’ve been experimenting with creating a Fluentd migration tool, it is worth exploring the fine details and discussing how we’ve approached it as part of a utility to help the transformation.

Routing

Many of the challenges come from a key difference in the routing and consumption of events from the buffer. Fluentd assumes that an event is consumed by a single output; if you want to direct the event to more than one output, you need to copy it. Fluent Bit looks at things very differently, with every output plugin having the potential to receive every event – the determination of output is controlled by the match attribute. These two approaches put a different emphasis on the ordering of declarations. Fluent Bit focuses on routing, using tags and match declarations to control the routing of output.

  <match *>
    @type copy
    <store>
      @type file
      path ./Chapter5/label-pipeline-file-output
      <buffer>
        delayed_commit_timeout 10
        flush_at_shutdown true
        chunk_limit_records 50
        flush_interval 15
        flush_mode interval
      </buffer>
      <format>
        @type out_file
        delimiter comma
        output_tag true
      </format> 
    </store>
    <store>
      @type relabel
      @label common
    </store>
  </match>
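
For comparison, a hedged sketch of how the same fan-out might look in Fluent Bit’s classic format (the path is illustrative): because every output receives every event whose tag it matches, no explicit copy is needed – we simply declare two outputs with the same match:

```
[OUTPUT]
    name   file
    match  *
    path   ./Chapter5

[OUTPUT]
    name   stdout
    match  *
```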

Hierarchical

We can also see that Fluentd’s directives are more hierarchical (e.g., buffer and format sit within the store) than the structures used by Fluent Bit, so we need to be able to ‘flatten’ the hierarchy. As a result, it makes sense that where a copy occurs, we define both outputs in the copy declaration as their own output plugins.

Buffering

There is a notable difference between the outputs’ buffer configurations. As you can see in the preceding Fluentd example, we can set the flushing frequency and control the number of chunks involved (regardless of storage type). In Fluent Bit, by contrast, the output can only control how much storage in the filesystem can be used.
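
A sketch of the Fluent Bit side (the values and path are illustrative): with filesystem storage enabled in the service section, an output’s buffering control essentially comes down to a storage limit:

```
[SERVICE]
    storage.path /var/fluent-bit/buffer

[OUTPUT]
    name   forward
    match  *
    storage.total_limit_size 50M
```

Anything like Fluentd’s per-output flush intervals and chunk counts has no direct equivalent to translate to.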

Pipelines

Fluentd allows us to implicitly define multiple pipelines of sources and destinations, as the ordering of declarations and event consumption is key. In addition, we can group plugin behavior with the Fluentd label attribute. But the YAML representation of a Fluent Bit configuration doesn’t support this idea.

<source>
  @type dummy
  tag dummy
  auto_increment_key counter
  dummy {"hello":"me"}
  rate 1
</source>
<filter dummy>
 @type stdout
 </filter>
<match dummy>
  @id redisTarget
  @type redislist
  port 6379
</match>
<source>
  @id redisSource
  @type redislist
  tag redisSource
  run_interval 1
</source>
<match *>
  @type stdout
</match>

Secondary outputs

Fluentd also supports the idea of a secondary output, as the following fragment illustrates: if the primary output fails, the event can be written to an alternate location. Fluent Bit doesn’t have an equivalent mechanism, so to create a mapping tool, we’ve taken the view that we should create a separate output.

<match *>
    @type roundrobin
    <store> 
      @type forward
      buffer_type memory
      flush_interval 1s  
      weight 50
      <server>
        host 127.0.0.1
        port 28080
      </server>  
    </store>
    <store>
      @type forward
      buffer_type memory
      flush_interval 1s        
        weight 50
      <server>
        host 127.0.0.1
        port 38080
      </server> 
    </store>
  <secondary>
    @type stdout
  </secondary>
</match>

The reworked structure requires consideration of the matching configuration, which isn’t so easily automated and can require manual intervention. To help with this, we’ve included an option to add comments linking the new output back to the original configuration.

Configuration differences

While the plugins have a degree of consistency, a closer look shows that there are attributes – and, as a result, features of plugins – that don’t translate. To address this, we comment out such attributes so they remain visible in the new configuration, allowing manual modification.

Conclusion

While the tool we’re slowly piecing together will do a lot of the work in converting Fluentd to Fluent Bit, there aren’t exact correlations for all attributes and plugins. So the utility will only be able to perform the simplest of mappings without developer involvement. But we can at least help show where the input is needed.

Resources

  • Fluent Bit from Classic to YAML
  • https://github.com/mp3monster/fluent-bit-classic-to-yaml-converter
  • Fluent Bit
  • Fluentd
  • https://github.com/mp3monster/fluent-bit-classic-to-yaml-converter/tree/fluentd-experimental
  • Logs and Telemetry book

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Fluent Bit – using Lua script to split up events into multiple records

20 Friday Sep 2024

Posted by mp3monster in Fluentbit, General, Technology


Tags

FluentBit, Lua, plugins

One of the really advanced features of Fluent Bit’s use of Lua scripts is the ability to split a single log event so that downstream processing sees multiple log events. In the Logs and Telemetry book, we didn’t have the space to explore this possibility. Here, we’ll build upon our understanding of how to use Lua in a filter. Before we look at how it can be done, let’s consider why it might be done.

Why Split Fluent Bit events

This case primarily focuses on the handling of log events. There are several reasons that could drive us to perform the split, for example:

  • Log events contain metrics data (particularly application or business metrics). Older systems can emit some metrics through logging such as the time to complete a particular process within the code. When data like this is generated, ideally, we expose it to tools most suited to measuring and reporting on metrics, such as Prometheus and Grafana. But doing this has several factors to consider:
    • A log record with metrics data is unlikely to generate the data in a format that can be directed straight to Prometheus.
    • We could simply transform the log to use a metrics structure, but it is a good principle to retain a copy of the logs as they’re generated so we don’t lose any additional meaning, which points to creating a second event with a metrics structure. We may wish to monitor for the absence of such metrics being generated, for example.
  • When transactional errors occur, the logs can sometimes contain sensitive details such as PII (Personally Identifiable Information). We really don’t want PII data being unnecessarily propagated as it creates additional security risks – so we mask the PII data for the event to go downstream. But, at the same time, we want to know the PII ID to make it easier to identify records that may need to be checked for accuracy and integrity. We can solve this by:
    • Copying the event and performing the masking with a one-way hash
    • Creating a second event with the PII data, which is limited in its propagation and is written to a data store that is sufficiently secured for PII data, such as a dedicated database

In both scenarios provided, the underlying theme is creating a version of the event to make things downstream easier to handle.

Implementing the solution

The key to this is understanding how the record construct is processed as it gets passed back and forth. When the Lua script receives an event, it arrives in our script as a table construct (Java developers, this approximates a HashMap), with the root elements of the record representing the event payload.

Typically, we’d manipulate the record and return it with a flag saying the structure has changed, but it is still a single table. However, we can instead return an array of tables, and each element (array entry) will then be processed as its own log event.
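
As a sketch of the idea (the callback name, the record_type field, and the inline deep-copy helper are illustrative, not from the book), a filter callback that turns one event into a log record plus a metrics record might look like this:

```lua
-- Minimal recursive deep copy so the sketch is self-contained
local function copy(obj)
  if type(obj) ~= 'table' then return obj end
  local res = {}
  for k, v in pairs(obj) do res[copy(k)] = copy(v) end
  return res
end

-- Hypothetical filter callback. Returning 2 tells Fluent Bit the record
-- was modified; returning an array of tables makes each entry its own event.
function cb_split(tag, timestamp, record)
  local metric = copy(record)          -- independent clone of the payload
  metric["record_type"] = "metric"
  record["record_type"] = "log"
  return 2, timestamp, { record, metric }
end

-- Simulating Fluent Bit invoking the callback:
local code, ts, records = cb_split("dummy", 0, { duration_ms = 42 })
print(#records)  -- 2
```

Each table in the returned array then flows through the rest of the pipeline as a separate event.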

A Note on how Lua executes copying

When splitting up the record, we need to understand how Lua handles its data. If we tried to create the array with the code:

record1 = record
record2 = record
newRecord = {record1, record2}

Then manipulating newRecord[1] would still impact both records. This is because Lua, like its C underpinnings, assigns tables by reference rather than making deep copies. So we need to ensure we perform a deep copy before manipulating the records. You can see this in our example configuration (here on GitHub), or look at the following Lua code fragment:

function copy(obj)
  if type(obj) ~= 'table' then return obj end
  local res = {}
  for k, v in pairs(obj) do res[copy(k)] = copy(v) end
  return res
end
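
To see the difference concretely, here is a standalone sketch (repeating copy() so it runs on its own; the table contents are illustrative):

```lua
-- Same recursive deep copy as above, repeated so this runs standalone
local function copy(obj)
  if type(obj) ~= 'table' then return obj end
  local res = {}
  for k, v in pairs(obj) do res[copy(k)] = copy(v) end
  return res
end

local record  = { hello = "me" }
local shallow = record        -- just another name for the same table
local deep    = copy(record)  -- an independent clone

shallow.hello = "changed"
print(record.hello)  -- "changed": modifying the alias changed the original
print(deep.hello)    -- "me": the deep copy is unaffected
```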

The proof

To illustrate the behavior, we have created a configuration with a single dummy input plugin that emits just one event. That event is then picked up by a filter with our Lua script. After the filter, we have a simple output plugin. As a result of creating two records, we should see two output entries. To make it easy to compare, the Lua script has a flag called deepCopy; when set to true, we clone the records, modify the payload values, and then perform the split.

[SERVICE]
  flush 1

[INPUT]
    name dummy
    dummy {   "time": "12/May/2023:08:05:52 +0000",   "remote_ip": "10.4.72.163",   "remoteuser": "-",   "request": {     "verb": "GET",     "path": " /downloads/product_2",     "protocol": "HTTP",     "version": "1.1"   },   "response": 304}
    samples 1
    tag dummy1

[FILTER]
    name lua
    match *
    script ./advanced.lua
    call cb_advanced
    protected_mode true

[OUTPUT]
    name stdout
    match *

Limitations and solutions

While we can easily split events and return multiple records, we can’t give them different tags or timestamps. Using the same timestamp is pretty sensible, but different tags would be helpful if we wanted to route the different records in other ways.

As long as the record contains the value we want to use as a tag, we can add a tag-rewriting filter (rewrite_tag) to the pipeline and point it at the attribute to parse with a regex. To keep things efficient, if we create an element that is just the tag when building the new record, the regex becomes a very simple expression to match the value.
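
A sketch of that idea (the key name new_tag and the match pattern are assumptions): if the Lua script writes the desired tag into its own element, the rewrite_tag rule’s regex just has to capture it:

```
[FILTER]
    name   rewrite_tag
    match  dummy*
    rule   $new_tag ^(.+)$ $0 false
```

Here $0 is the full regex match, so the event is re-emitted under the captured value, and the trailing false drops the original record rather than keeping both.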

Conclusion

We’ve seen a couple of practical examples of why we might want to spin out new observability events based on what we get from our system. An important aspect of the process is how Lua handles memory.

Resources

  • Logging and Telemetry with Fluent Bit book
  • GitHub example
  • Tech resources

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Sharing a monitor

02 Monday Sep 2024

Posted by mp3monster in General


Like many techies, I have a personal desktop machine that packs a bit of grunt and runs multiple monitors. I also have a company laptop—for me, it’s a Mac (which, to be honest, I’m not a fan of, despite loving my iPhone and iPad).

Like many, my challenge is that when you’re used to two large screens, a laptop monitor just doesn’t cut it. So, I want to share one of the screens between two machines. The question I’ve wrestled with is how to do that with a simple button or key press. No messing with cables or monitor settings, etc.

I initially tried to solve this with a KVM—easy, right? Wrong. It turns out Macs don’t play nice with KVM switches. I went through buying several switches and sending them back before discovering it was a MacBook Pro quirk.

For a while, I used a travel monitor I had acquired to solve the same issue when I traveled extensively for my previous employer. It’s an improvement, but the travel screen is still pretty small, not to mention it takes up more desk space (my main monitors and laptop are all on arms, so I can push them back if I want more desk space).

As most decent monitors allow multiple inputs, we’ve resorted to connecting both machines. The only problem is that the controls to switch between inputs aren’t easy to use – and most people don’t need to toggle back and forth several times a day (VPN-related tasks are done on the Mac; everything else on the desktop).

But this week, we had a breakthrough. The core of it is discovering ControlMyMonitor and the VCP features newer monitors have. ControlMyMonitor provides a simple UI that lets us identify an ID for each monitor, plus a command-line capability that generates and sends instructions to connected monitors, allowing us to do things like switching input, changing contrast, etc. With the tool, we can issue commands such as: ControlMyMonitor.exe /SetValue "\\.\DISPLAY2\Monitor0" 60 15. This tells the app to send display monitor 2 (as known to my desktop) the VCP (Virtual Control Panel) code 60 (change input source) with the value 15 (the input number). I can switch the monitor back to the desktop by supplying the input number for the desktop’s connection.

So now I can toggle between screens without feeling around the back of the monitor to navigate the menu for switching inputs. Using a command is pretty cool, but still not good enough. I could put links on my desktop to two scripts that run the relevant command. But I’d also come across AutoHotKey (aka AHK). This allows us to create scripts using AHK’s syntax, which can be bound to specific key combinations. So, creating a config file with a key combo to run the command shown made it really convenient. Windows+Shift+Left arrow and the monitor switches to the desktop; Windows+Shift+Right arrow and it displays the laptop. The script looks something like this:

#Requires AutoHotkey v2.0
#+Right::Run "monitor-left.bat"
#+Left::Run "monitor-right.bat"

We could embed the command directly into the AHK script, but the syntax is unusual and would require escaping quotes. By referencing a batch script, we can easily extend the process without needing to master the AHK syntax.
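
The referenced batch files can then be one-line wrappers around the ControlMyMonitor command (the monitor ID and input number below are from my setup and will differ for yours):

```bat
REM monitor-left.bat -- switch the shared monitor to this input
ControlMyMonitor.exe /SetValue "\\.\DISPLAY2\Monitor0" 60 15
```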

The only remaining problem is getting the AHK app to start and load the configuration/script when the desktop boots. We can do this by creating a shortcut to our script file (which the AHK runtime will recognize and run) and putting that shortcut in the startup folder. The startup folder is likely to be somewhere such as C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup, but we can get a File Explorer window to open in the right place by pressing Windows+R and entering the command “shell:startup“.

Further reading:

  • Monitor Control Command Set (which covers VCP)
  • AutoHotKey
  • ControlMyMonitor
  • A Mac app that looks like it will do the same sort of things as ControlMyMonitor – https://github.com/waydabber/BetterDisplay

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Think Distributed Systems

26 Monday Aug 2024

Posted by mp3monster in Book Reviews, development, General, manning, Technology


Tags

book, distributed, threading, locks, locking, parallelism

One of the benefits of being an author with a publisher like Manning is being given early access to books in development and being invited to share my thoughts. Recently, I was asked if I’d have a look at Think Distributed Systems by Dominik Tornow.

Systems have become increasingly distributed for years, and the growth has been accelerating, enabled by technologies like CORBA, SOAP, REST frameworks, and microservices. However, some distribution challenges manifest themselves even in multithreaded applications. So, I was very interested in seeing what new perspectives could be offered that may help people, and Dominik has given us a valuable one.

I’ve been fortunate that my career started with work on large multi-server, multithreaded, mission-critical systems, using Ada and with a mentor who challenged me to work through such issues. How does this relate to the book? That early experience meant I built good mental models of distributed development from the start. Dominik calls out that having good mental models for understanding distributed systems and the challenges they bring is key to success. It is this understanding that equips you to handle challenges such as resource locking, mutual deadlock, transaction ordering, the pros and cons of optimistic locking, and so on.

As highlighted early on in this book, most technical books come from the perspective of explaining tools, languages, or patterns, and to keep things easy to follow, their examples tend to be fairly simplistic. This is completely understandable; those books aim to teach the features of the language, not how to bring them to bear in complex real-world use cases. As a result, we don’t necessarily get the fullest insight into the problems that can come with, say, optimistic locking.
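To make the optimistic-locking trade-off concrete, here is a minimal Python sketch (my own illustration, not an example from the book): each writer reads a value with its version, computes an update, and commits only if nobody else committed in the meantime, retrying on conflict.

```python
import threading

class VersionedValue:
    """A value guarded by optimistic concurrency control.

    Readers take a snapshot of (value, version); a writer commits only if
    the version is unchanged since its read, otherwise it must retry.
    """

    def __init__(self, value):
        self._lock = threading.Lock()  # guards only the tiny commit step
        self._value = value
        self._version = 0

    def read(self):
        with self._lock:
            return self._value, self._version

    def compare_and_set(self, expected_version, new_value):
        """Commit new_value only if no other writer has won the race."""
        with self._lock:
            if self._version != expected_version:
                return False  # conflict: another commit happened first
            self._value = new_value
            self._version += 1
            return True

def increment(shared):
    """Optimistic update loop: read, compute, try to commit, retry on conflict."""
    while True:
        value, version = shared.read()
        if shared.compare_and_set(version, value + 1):
            return

counter = VersionedValue(0)
threads = [threading.Thread(target=increment, args=(counter,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.read())  # (8, 8): eight successful commits, whatever the interleaving
```

Under low contention this avoids holding a lock while the new value is computed; under high contention the retries become the cost, which is the pros-and-cons balance mentioned above.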

Given the constraints of explaining through programming features, the book takes a language-agnostic approach to explaining the ideas and complexities of distributed solutions. Instead, it favors examples, analogies, and mathematics to illustrate its points. The mathematics is great at showing the implications of different aspects of distributed systems, but for readers like me who are more visual and less comfortable with numeric abstraction, some parts of the book require more effort. It is worth it, though: hard numeric proofs can really land a message, and once you know which variables can change a result, you are well on your way.

For anyone starting to design and implement distributed and multi-threaded applications for the first time, I’d recommend looking at this book. From what I’ve seen so far, the lessons you’ll take away will help keep you from walking into some situations that can be very difficult to overcome later or, worse, only manifest themselves when your system starts to experience a lot of load.


Fluent Bit with Chat Ops

12 Monday Aug 2024

Posted by mp3monster in Fluentbit, General, Technology

≈ 1 Comment

Tags

chatops, conference, Fluent, FluentBit, Open Source Monitoring Conference, osmc, osmc.de, Patrick Stephens, slack, tools

My friend Patrick Stephens, a Fluent Bit committer, will present at the Open Source Monitoring Conference in Germany later this year. Unfortunately, I won’t be able to make it, as my day job is closing in on its MVP product release.

The idea behind the presentation is to improve the ability to detect and respond to Observability events, as the time between detection and action is the period during which your application is experiencing harm, such as lost revenue, data corruption, and so on.

The stable configuration and code base is in the Fluent GitHub repository; my upstream version is here. We first discussed the idea back in February and March, when we applied simpler rules to determine whether a log event was critical.

Advancing the idea

Now that my book is principally in the hands of the publishers (copy editing, print preparation, etc.), we can revisit this and exploit features in more recent releases to make it slicker and more effective. For example:

  • The stream processor means a high frequency of smaller issues could itself trigger a notification.
  • The stream processor can also provide a more elegant way to avoid notification storms.
  • The new processors make it easier to interact with metrics, so we can handle applications that natively produce metrics.
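As a sketch of how the stream processor could help (the SQL, tags, and threshold here are illustrative assumptions, not our final implementation), a classic-format streams file can aggregate error events over a tumbling window and emit a new, tagged record summarizing the burst:

```
[STREAM_TASK]
    name error_burst
    exec CREATE STREAM alerts WITH (tag='notify.slack') AS SELECT COUNT(*) AS error_count FROM TAG:'app.*' WHERE level = 'error' WINDOW TUMBLING (60 SECOND);
```

The streams file is referenced from the main configuration’s service section, and an output matching notify.* can then forward the aggregated record to the chat platform rather than relaying every individual event.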

Other tooling

With the book’s copy editing done, we have a bit more time to turn to our other Fluent Bit projects: the Fluent Bit configuration converter (classic to YAML) and a first-stage Fluentd to Fluent Bit converter. You can see these in GitHub here and here.


Two weeks of Fluent Bit

30 Tuesday Jul 2024

Posted by mp3monster in Fluentbit, General

≈ Leave a comment

Tags

book, configuration, FluentBit, logging, telemetry, tool, YAML

The last couple of weeks have been pretty exciting. Firstly, Fluent Bit 3.1 has been released, bringing further feature development and making Fluent Bit even more capable in its handling of OpenTelemetry (OTel).

The full details of the release are available at https://fluentbit.io/announcements/v3.1.0/

Fluent Bit classic to YAML

We’ve been progressing the utility, testing and stabilizing it, and making several releases accordingly. The utility is packaged as a Docker image, and the regression test tool also runs as a Docker image.

Moving forward, we’ll start branching to develop significant changes so the trunk stays stable, including experimenting with extending the tool to help port Fluentd configurations to Fluent Bit YAML. The tools won’t be able to do everything, but I hope they will address the core structural challenges and flag differences needing manual intervention.
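To illustrate the kind of structural mapping a Fluentd-to-Fluent Bit converter has to perform, here is a trivial hand-converted example (the plugin names and paths are illustrative; real configurations bring many more differences):

```
# Fluentd
<source>
  @type tail
  path /var/log/app.log
  tag app.log
</source>
<match app.**>
  @type stdout
</match>
```

```yaml
# Fluent Bit YAML
pipeline:
    inputs:
        - name: tail
          path: /var/log/app.log
          tag: app.log
    outputs:
        - name: stdout
          match: "app.*"
```

Even in this small case, directives become named plugins and the wildcard semantics differ (Fluentd’s app.** versus Fluent Bit’s app.*), which is exactly the sort of difference a converter needs to flag for manual review.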

Book

The Fluent Bit book has moved into its last phase with the start of copy editing. The title has also shifted to Logs and Telemetry using Fluent Bit, Kubernetes, streaming, and more, or just Logs and Telemetry using Fluent Bit. The book fundamentally hasn’t changed; there is still a lot of Kubernetes-related content, but the new title focuses on what Fluent Bit is all about rather than making this look like just another Kubernetes book.

Logs and Telemetry using Fluent Bit, Kubernetes, streaming and more (book cover)


Fluent Bit config from classic to YAML

02 Tuesday Jul 2024

Posted by mp3monster in Fluentbit, General, java, Technology

≈ 2 Comments

Tags

configuration, development, FluentBit, format, tool, YAML

Fluent Bit supports both a classic configuration file format and a YAML format. The support for YAML reflects industry direction. But if you’ve come from Fluentd to Fluent Bit or have been using Fluent Bit from the early days, you’re likely to be using the classic format. The differences can be seen here:

#
# Classic format
#
[SERVICE]
    flush 5
    log_level debug

[INPUT]
    name dummy
    dummy {"key" : "value"}
    tag blah

[OUTPUT]
    name stdout
    match *

#
# YAML format
#
service:
    flush: 5
    log_level: debug
pipeline:
    inputs:
        - name: dummy
          dummy: '{"key" : "value"}'
          tag: blah
    outputs:
        - name: stdout
          match: "*"

Why migrate to YAML?

Beyond having a consistent file format, the driver is that some new features are not supported by the classic format. Currently, this predominantly affects processors, but it is fair to assume that other major new features will follow suit.

Migrating from classic to YAML

The process for migrating from classic to YAML has two dimensions:

  • Change of formatting:
    • YAML indentation and plugins as array elements
    • Addressing quirks such as the wildcard (*) needing to be quoted, etc.
  • Addressing constraints such as:
    • The use of include is more restrictive
    • The ordering of inputs and outputs is more restrictive, so match attributes may need refining

None of this is too difficult, but doing it by hand is laborious and error-prone. So we’ve built a utility that can help with the process. At the moment, the solution is in an MVP state, but we hope to beef it up over the coming weeks. What we plan to do and how to use the utility are covered in the GitHub readme.
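To give a feel for what the utility automates, here is a deliberately tiny Python sketch of the core transformation (the real converter is the Java tool in the repository and handles far more: includes, comments, processors, and so on):

```python
import re

def classic_to_yaml(text):
    """Very small sketch of a classic-to-YAML conversion.

    Handles only flat [SECTION] blocks with 'key value' attribute lines;
    real configurations need far more care.
    """
    sections = []
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        header = re.fullmatch(r"\[(\w+)\]", line)
        if header:
            current = {"type": header.group(1).lower(), "attrs": []}
            sections.append(current)
        elif current is not None:
            key, _, value = line.partition(" ")
            current["attrs"].append((key.lower(), value.strip()))

    service = [s for s in sections if s["type"] == "service"]
    inputs = [s for s in sections if s["type"] == "input"]
    outputs = [s for s in sections if s["type"] == "output"]

    out = []
    if service:
        out.append("service:")
        for key, value in service[0]["attrs"]:
            out.append(f"    {key}: {value}")
    out.append("pipeline:")
    for label, group in (("inputs", inputs), ("outputs", outputs)):
        if group:
            out.append(f"    {label}:")
            for plugin in group:
                prefix = "        - "  # first attribute carries the list dash
                for key, value in plugin["attrs"]:
                    if value == "*":
                        value = '"*"'  # YAML quirk: quote bare wildcards
                    out.append(f"{prefix}{key}: {value}")
                    prefix = "          "
    return "\n".join(out)

classic = """
[SERVICE]
    flush 5
[INPUT]
    name dummy
    tag blah
[OUTPUT]
    name stdout
    match *
"""
print(classic_to_yaml(classic))
```

Even this toy version has to deal with the wildcard-quoting quirk and the move to plugins as array elements, which hints at why a tested tool beats hand editing for anything non-trivial.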

The repository link (fluent-bit-classic-to-yaml-converter)

Update 4th July 24

A quick update to say that we now have a container configuration in the repository to make the tool very easy to use. All the details will be included in the readme, along with some additional features.

Update 7th July

We’ve progressed past the MVP state now. The detected include statements get incorporated into a proper include block but commented out.

We’ve added an option to convert the attributes to use Kubernetes idiomatic form, i.e., aValue rather than a_value.
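The mapping behind that option is simple; as a hypothetical Python sketch (the actual converter is written in Java):

```python
def snake_to_camel(name):
    """Convert a snake_case attribute name to Kubernetes-idiomatic camelCase."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(snake_to_camel("a_value"))    # aValue
print(snake_to_camel("log_level"))  # logLevel
```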

The command line has a help option that outputs details such as the control flags.

Update 12th July

In the last couple of days, we pushed a little too quickly to GitHub and discovered we’d broken some cases. We’re now testing the development much more rigorously, helped by the regression container image working nicely. The Javadoc is also generating properly.

We have identified some edge cases that need to be sorted, but most scenarios have been correctly handled. Hopefully, we’ll have those edge scenarios fixed tomorrow, so we’ll tag a release version then.
