Sharing a monitor

Like many techies, I have a personal desktop machine that packs a bit of grunt and runs multiple monitors. I also have a company laptop—for me, it’s a Mac (which, to be honest, I’m not a fan of, despite loving my iPhone and iPad).

Like many, my challenge is that when you’re used to two large screens, a laptop monitor just doesn’t cut it. So, I want to share one of the screens between two machines. The question I’ve wrestled with is how to do that with a simple button or key press – no messing with cables or monitor settings, etc.

I initially tried to solve this with a KVM—easy, right? Wrong. It turns out Macs don’t play nice with KVM switches. I went through buying several switches and sending them back before discovering it was a MacBook Pro quirk.

For a while, I used a travel monitor I’d acquired to solve the same issue when I traveled extensively for my previous employer. It’s an improvement, but the travel screen is still pretty small, not to mention it takes up more desk space (my main monitors and laptop are all on arms, so I can push them back if I want more desk space).

As most decent monitors allow multiple inputs, I’ve resorted to connecting both machines. The only problem is that the controls for switching between inputs aren’t easy to use – but then, most people don’t need to toggle back and forth several times a day (VPN-related tasks are done on the Mac; everything else is on the desktop).

But this week, we had a breakthrough. At the core of it is discovering ControlMyMonitor and the VCP features newer monitors have. ControlMyMonitor provides a simple UI that lets us identify an ID for each monitor, along with a command-line capability that sends instructions to a connected monitor, allowing us to do things like switching input, changing contrast, etc. With the tool, we can issue commands such as:

ControlMyMonitor.exe /SetValue "\\.\DISPLAY2\Monitor0" 60 15

This tells the app to send display monitor 2 (as known to my desktop) the VCP (Virtual Control Panel) code 60 (change input source) with the value 15 (the number of the input the laptop is connected to). I can switch the monitor back to the desktop by supplying the input number for the desktop’s connection.

So now I can toggle between screens without feeling around the back of the monitor to navigate the input-switching menu. Using a command is pretty cool, but still not good enough. I could put links to two scripts on my desktop to run the relevant command, but I’d also come across AutoHotkey (aka AHK). This allows us to create scripts, using AHK’s syntax, that can be bound to specific key combinations. So, creating a config file with a key combo to run the command shown made it really convenient: Windows+Shift+Left arrow and the monitor switches to the desktop; Windows+Shift+Right arrow and it displays the laptop. The script looks something like this:

#Requires AutoHotkey v2.0
#+Left::Run "monitor-left.bat"
#+Right::Run "monitor-right.bat"

We could embed the command directly into the AHK script, but the syntax is unusual and would require escaping quotes. By referencing a batch script instead, we can easily extend the process without needing to master AHK syntax.
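For completeness, here’s a sketch of the two one-line wrapper scripts (written here via a shell heredoc so they can be generated anywhere; on the Windows box you’d just create the files directly). The laptop’s input value 15 comes from the example earlier; the desktop’s input value of 17 is an assumption – use ControlMyMonitor’s UI to find your monitor’s actual input values.

```shell
# Generate the two wrapper scripts the hotkeys run.
# 60 is the VCP "input source" code; 15 = laptop input (from the article),
# 17 = desktop input (assumed -- check your own monitor's values).
cat > monitor-left.bat <<'EOF'
ControlMyMonitor.exe /SetValue "\\.\DISPLAY2\Monitor0" 60 17
EOF
cat > monitor-right.bat <<'EOF'
ControlMyMonitor.exe /SetValue "\\.\DISPLAY2\Monitor0" 60 15
EOF
```

With these in place, the AHK hotkeys simply run whichever script matches the direction you pressed.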

The only remaining problem is getting the AHK app to start and load the configuration/script when the desktop boots up. We can do this by creating a shortcut to our script file (which the AHK runtime is registered to run) and putting that shortcut in the Startup folder. The Startup folder is likely to be somewhere such as C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup, but we can get File Explorer to open in the right place by pressing Windows+R (the Run option from the Start Menu) and entering the command “shell:startup“.

Think Distributed Systems


One of the benefits of being an author with a publisher like Manning is being given early access to books in development and being invited to share my thoughts. Recently, I was asked if I’d have a look at Think Distributed Systems by Dominik Tornow.

Systems have been becoming increasingly distributed for years, and the growth has been accelerating fast, enabled by technologies like CORBA, SOAP, REST frameworks, and microservices. However, some distribution challenges manifest themselves even in multithreaded applications. So, I was very interested in seeing what new perspectives could be offered that might help people, and Dominik has given us a valuable one.

I’ve been fortunate enough that my career started with working on large multi-server, multithreaded, mission-critical systems, using Ada and working with a mentor who challenged me to work through such issues. How does this relate to the book? That work and mentoring meant I built good mental models of distributed development early in my career. Dominik calls out that having good mental models to understand distributed systems and the challenges they can bring is key to success. It’s this understanding that equips you to tackle challenges such as resource locking, contending with mutual deadlock, transaction ordering, the pros and cons of optimistic locking, and so on.

As highlighted early on in this book, most technical books come from the perspective of explaining tools, languages, or patterns, and to make the examples easy to follow, the examples tend to be fairly simplistic. This is completely understandable; those books aim to teach the features of the language, not how to bring them to bear in complex real-world use cases. As a result, we don’t necessarily get the fullest insight into and understanding of the problems that can come with, say, optimistic locking.

Given the constraints of explaining through the use of programming features, the book takes a language-agnostic approach to explaining the ideas and complexities of distributed solutions. Instead, the book favors examples, analogies, and mathematics to illustrate its points. The mathematics is great at showing the implications of different aspects of distributed systems. But for readers like me, who are more visual and less comfortable with numeric abstraction, this does mean some parts of the book require more effort – but it is worth it. You can’t deny hard numeric proofs can really land a message, and if you know which variables can change a result, you’re well on your way.

For anyone starting to design and implement distributed and multi-threaded applications for the first time, I’d recommend looking at this book. From what I’ve seen so far, the lessons you’ll take away will help keep you from walking into some situations that can be very difficult to overcome later or, worse, only manifest themselves when your system starts to experience a lot of load.

Fluent Bit with Chat Ops


My friend Patrick Stephens, a Fluent Bit committer, will present at the Open Source Monitoring Conference in Germany later this year. Unfortunately, I won’t be able to make it, as my day job is closing in on its MVP product release.

The idea behind the presentation is to improve the ability to detect and respond to Observability events, as the time between detection and action is the period during which your application is experiencing harm, such as lost revenue, data corruption, and so on.

The stable configuration and code base version is in the Fluent GitHub repository; my upstream version is here. We first discussed the idea back in February and March, when we applied simpler rules to determine whether a log event was critical.

Advancing the idea

Now that my book is principally in the hands of the publishers (copy editing and print preparation, etc.), we can revisit this and exploit features in more recent releases to make it slicker and more effective. For example:

  • Use the stream processor, so a high frequency of smaller issues can trigger a notification.
  • The stream processor can also provide a more elegant option for avoiding notification storms.
  • The new processors will make it easier to interact with metrics, so applications natively producing metrics can be handled as well.
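As a sketch of the first two ideas, a stream processor task could aggregate error events over a time window, so a burst of errors triggers a single notification per window rather than a storm. This is illustrative only – the stream names and tag are hypothetical, and the exact clause order should be checked against the stream processor documentation for your Fluent Bit version:

```sql
-- Count error-level events per one-minute tumbling window; an output
-- matching the 'alert.burst' tag can then notify once per window.
CREATE STREAM error_burst
  WITH (tag='alert.burst')
  AS SELECT COUNT(*) AS error_count
  FROM TAG:'app.*'
  WINDOW TUMBLING (60 SECOND)
  WHERE level = 'error';
```

The windowed count also gives the notification a useful severity signal (one error versus hundreds) for free.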

Other tooling

With the book’s copy editing done, we have a bit more time to turn to our other Fluent Bit projects … a Fluent Bit configuration converter, covering both classic to YAML and a first-stage Fluentd to Fluent Bit converter. You can see these in GitHub here and here.

Two weeks of Fluent Bit


The last couple of weeks have been pretty exciting. Firstly, we have Fluent Bit 3.1 released, which brings further feature development, making Fluent Bit even more capable in its handling of OpenTelemetry (OTel).

The full details of the release are available at https://fluentbit.io/announcements/v3.1.0/

Fluent Bit classic to YAML

We’ve been progressing the utility, testing and stabilizing it, and making several releases accordingly. The utility is packaged as a Docker image, and the regression test tool also runs as a Docker image.

Moving forward, we’ll start branching to develop significant changes to keep the trunk stable, including experimenting with the possibility of extending the tool to help port Fluentd to Fluent Bit YAML configurations. The tools won’t be able to do everything, but I hope they will help address the core structural challenges and flag differences needing manual intervention.

Book

The Fluent Bit book has moved into its last phase with the start of copy editing. We have also had a shift in the name to Logs and Telemetry using Fluent Bit, Kubernetes, streaming, and more, or just Logs and Telemetry using Fluent Bit. The book fundamentally hasn’t changed. There is still a lot of Kubernetes-related content, but it helps focus on what Fluent Bit is all about rather than being just another Kubernetes book.


Fluent Bit config from classic to YAML


Fluent Bit supports both a classic configuration file format and a YAML format. The support for YAML reflects industry direction. But if you’ve come from Fluentd to Fluent Bit or have been using Fluent Bit from the early days, you’re likely to be using the classic format. The differences can be seen here:

[SERVICE]
    flush 5
    log_level debug
[INPUT]
    name dummy
    dummy {"key" : "value"}
    tag blah
[OUTPUT]
    name stdout
    match *
#
# Classic Format
#
service:
    flush: 5
    log_level: debug
pipeline:
    inputs:
        - name: dummy
          dummy: '{"key" : "value"}'
          tag: blah
    outputs:
        - name: stdout
          match: "*"
#
# YAML Format
#

Why migrate to YAML?

Beyond having a consistent file format, the driver is that some new features are not supported by the classic format. Currently, this predominantly means processors; it is fair to assume that other major new features will likely follow suit.
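For example, processors attach to an individual input or output inside the YAML pipeline, something the classic format has no equivalent for. A sketch, based on the dummy pipeline shown earlier (the content_modifier processor comes from recent Fluent Bit releases – check the options against your version’s documentation):

```yaml
pipeline:
    inputs:
        - name: dummy
          dummy: '{"key" : "value"}'
          processors:
              logs:
                  - name: content_modifier
                    action: insert
                    key: source
                    value: dummy-demo
    outputs:
        - name: stdout
          match: "*"
```

There is simply nowhere in the classic [INPUT] section syntax to hang the nested processors block, which is why these features are YAML-only.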

Migrating from classic to YAML

The process for migrating from classic to YAML has two dimensions:

  • Change of formatting
    • YAML indentation and plugins as array elements
    • Addressing any quirks, such as the wildcard (*) needing to be quoted, etc.
  • Addressing constraints such as:
    • Using include is more restrictive
    • Ordering of inputs and outputs is more restrictive – therefore, match attributes may need to be refined

None of this is too difficult, but doing it by hand can be laborious and error-prone. So, we’ve just built a utility that can help with the process. At the moment, this solution is in an MVP state, but we hope to beef it up over the coming few weeks. What we plan to do and how to use the utility are all covered in the GitHub readme.

The repository link (fluent-bit-classic-to-yaml-converter)

Update 4th July 24

A quick update to say that we now have a container configuration in the repository to make the tool very easy to use. All the details will be included in the readme, along with some additional features.

Update 7th July

We’ve progressed past the MVP state now. The detected include statements get incorporated into a proper include block but commented out.

We’ve added an option to convert the attributes to use Kubernetes idiomatic form, i.e., aValue rather than a_value.
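The rewrite that option performs is simple enough to illustrate with a one-liner (illustrative only – the converter itself is Java, but GNU sed’s \U, which uppercases the captured letter, shows the idea):

```shell
# Convert snake_case attribute names to camelCase, as the converter's
# Kubernetes-idiomatic option does (GNU sed: \U uppercases the capture).
echo "a_value another_key" | sed -E 's/_([a-z])/\U\1/g'
# -> aValue anotherKey
```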

The command line has a help option that outputs details such as the control flags.

Update 12th July

In the last couple of days, we pushed a little too quickly to GitHub and discovered we’d broken some cases. We’ve been testing the development much more rigorously now, and it helps that we have the regression container image working nicely. The Javadoc is also generating properly.

We have identified some edge cases that need to be sorted, but most scenarios have been correctly handled. Hopefully, we’ll have those edge scenarios fixed tomorrow, so we’ll tag a release version then.

Logging Frameworks that can communicate directly with Fluent Bit


While the typical norm is for applications to write their logs to file or to stdout (the console), this isn’t the most efficient way to handle logs (particularly given the I/O performance of storage devices). Many logging frameworks have addressed this by providing more direct outputs to commonly used services such as Elasticsearch and OpenSearch. This is fine, but the downside is that there is no intermediary layer to preprocess, filter, and route (potentially to multiple services). These constraints can be overcome by using an intermediary service such as Fluent Bit or Fluentd.

Many logging frameworks can work with Fluentd by supporting the HTTP or Forward protocols Fluentd supports out of the box. But as Fluent Bit and Fluentd are interchangeable with these protocols, any logging framework that supports Fluentd, by implication, also supports Fluent Bit – not to mention that Fluent Bit supports OpenTelemetry.
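As an illustration of the HTTP route, anything that can POST JSON can feed Fluent Bit’s http input (9880 is that plugin’s default port; the tag comes from the URI path). The echo makes this a dry run that just prints the command – remove it to actually send, assuming a local Fluent Bit is listening:

```shell
# Dry-run sketch of posting a log record to a local Fluent Bit http input.
payload='{"level":"error","message":"payment service timeout"}'
echo curl -s -X POST "http://localhost:9880/app.log" \
  -H "Content-Type: application/json" -d "$payload"
```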

The following table identifies a range of frameworks that can support communicating directly with Fluent Bit. It is not exhaustive but does provide broad coverage. We’ll update the table as we discover new frameworks that can communicate directly.


| Language | Framework / Library | Protocol(s) | Commentary |
| --- | --- | --- | --- |
| Java | Log4J2 | HTTP | HTTP Appender sends JSON payloads over HTTP (use the HTTP input plugin) |
| Java | fluent-logger-java | Forward | |
| Python | core language | HTTP | The HTTP Handler provides the means to send logs over HTTP, which a Fluent Bit HTTP input can manage |
| Python | fluent-logger-python (Fluent Logger) | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| Node.js | fluent-logger-node | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| Node.js | Winston | HTTP, Forward | Winston is designed as a simple and universal logging library supporting multiple transports. HTTP transport is included in its core; there is also a transport implementation for native Fluent: https://github.com/sakamoto-san/winston-fluent |
| Node.js | Pino (Pino-fluent extension) | | Logger integrated into the Pino logging framework |
| Go (Golang) | fluent-logger-golang | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| .Net (C#, VB.Net, etc.) | NLog (NLog.Targets.Fluentd) | | An NLog target – works with .Net |
| .Net (C#, VB.Net, etc.) | Log4Net | | Log4Net Appender |
| .Net | Serilog (Fluent Sink) | Forward and HTTP | Supports both HTTP and native Fluentd/Fluent Bit |
| Ruby | fluent-logger-ruby | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| PHP | fluent-logger-php | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| Perl | fluent-logger-perl | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| Scala | fluent-logger-scala | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| Erlang | fluent-logger-erlang | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| OCAML | fluent-logger-ocaml | Forward | Uses the Forward protocol, meaning it can gain efficiencies from msgpack. Maintained by the Fluent community |
| Rust | Rust logging framework extension for Fluent Bit | | Rust crate for logging to Fluent Bit |
| Delphi | Quicklogger | HTTP | |

A Fast (and Dirty) Way to Publish API specs


API specs created using the Open API Specification (OAS) and the AsyncAPI specification aren’t just for public API consumption. In today’s world of modular component services making up a business solution, we’re more than likely to have APIs of one sort or another. These need documenting – perhaps not as robustly as the public-facing ones, but the material needs to be easily accessible.

Spotify’s contribution to the CNCF, Backstage, is a great tool for sharing development content, particularly when your document and code repository is at least git-based, if not GitHub (if you move away from this, or don’t have the permissions to configure application authentication, you can still work with Backstage, but your workload will grow a lot). There is no doubt that Backstage is a very powerful, information-rich product. But that comes at the cost of needing lots of configuration, the generation of metadata descriptors additional to the APIs to be cataloged, etc. All of this can be a little heavy if you’re using Backstage as a low-cost API documentation portal to fill gaps that your corporate wiki/doc management (Confluence/SharePoint) solution can’t support (it is one of the very, very few open-source options with reader-friendly rendering for both OAS and AsyncAPI).

We could, of course, adopt the approach of using the free VS Code plugins that can render friendly views of APIs, and just perform a git pull (or copy the API specs from a central location) to get the nice visualization. This is fine, but the obligation is now on the developer to ensure they have the latest version of the API spec and that they are using VS Code – which, while very dominant as an IDE, isn’t used by everyone, particularly if you’re working with low-code tooling.

There is a fast and inelegant solution to this if you don’t need features such as attribute-based search, sorting, etc. Both the Open API Specification and AsyncAPI communities have built command-line renderers that will read your API specification (even if the schema is spread across multiple files) and generate the HTML (an index.html file), CSS, and JavaScript renderings you see in many tools (hyperlinked, folding, with code and payload examples of the API).

So, we need to grab the YAML/JSON specifications and run them through the tool to get the presentation formatting. You do need to fetch the specs, but we can easily script that with a bit of shell script that retrieves/finds the relevant files in a repository and then runs the CLI utility on them.

We want to bring the static content to life across the network for developers. So, on a little server, we can host this logic plus an instance of Apache, IIS, or Nginx, if you’re comfortable with one of the industrial-strength web servers. Or use a spin-off from the Web Server for Chrome project called Simple Web Server. This tool is incredibly simple and provides a UI that lets you configure things quickly and easily, then start a web server that can dish up static content. I would hesitate to suggest such an approach for production use cases, but it’s not to be sniffed at for internal solutions, safely behind firewalls, network security, etc.

Steps in summary:

  • Install NPM
  • Install a web server – Apache, Nginx, or even Simple Web Server
  • Install the CLI tools for OpenAPI and AsyncAPI
  • Script the identification of the API documents and run the CLIs against them

Steps …

As all the functionality is dependent on Node, we need both Node.js and NPM (Node Package Manager). Installing the Node Version Manager (NVM) is the easiest way to do that for Linux and macOS, with the command:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

Windows has a separately produced binary called NVM for Windows (which will eventually be superseded by Runtime), which has an installer that can be downloaded from the GitHub releases part of the repo.

Once nvm is installed (and ideally added to the OS’ PATH environment variable), we can complete the process with the command:

nvm install lts

This installs the latest Long Term Support (LTS) version.

Open API

To install the Open API generator CLI using NPM:

npm install @openapitools/openapi-generator-cli -g

The command that we will need to wrap in a script is:

npx @openapitools/openapi-generator-cli generate -i <your-open-api-spec.yaml> -g html2 -o <your-output-folder-for-this-api>

As the output generated is an index.html with subfolders for the stylesheet and JavaScript needed, we recommend using the name of the API spec file (without the extension, e.g., .yaml) as the folder name.

AsyncAPI

Just like the Open API command line, we need to install the AsyncAPI CLI using the command line:

npm install -g @asyncapi/cli

The equivalent command to generate the HTML is pretty similar, but note that, over time, the template version referenced will evolve (i.e., @2.3.5 will be superseded by a newer version):

asyncapi generate fromTemplate <your-async-api-spec.yaml> @asyncapi/html-template@2.3.5 -o ./<your-output-folder-for-this-api> --force-write

Scripting the Process

As you can see, we need to tease out the API files from the source folder, which may contain other resources – even if those resources are schemas that get included in the API (as our APIs grow in scope, we’ll want to break the definitions up to keep things manageable, but also to re-use common schema definitions).

The easiest way to do this is to have a text file providing the path and name of each API definition. Each type of API has its own list file, removing the need to first work out which type of API tool needs to be run.

This also means we can read all the API list files to determine then if any API spec pages need to be removed.
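The wrapper script can then be a simple loop over those list files. A sketch, where the list-file names (openapi-list.txt, asyncapi-list.txt) and the site/ output root are assumptions to adjust for your repo layout; the echo previews each generator command rather than running it – drop the echo to execute for real:

```shell
#!/usr/bin/env bash
# Read each list file (one spec path per line) and run the matching generator.
generate() {  # $1 = spec path, $2 = api type
  out="site/$(basename "${1%.*}")"   # output folder named after the spec file
  case "$2" in
    openapi)  echo npx @openapitools/openapi-generator-cli generate -i "$1" -g html2 -o "$out" ;;
    asyncapi) echo asyncapi generate fromTemplate "$1" "@asyncapi/html-template@2.3.5" -o "$out" --force-write ;;
  esac
}
if [ -f openapi-list.txt ]; then
  while read -r spec; do generate "$spec" openapi; done < openapi-list.txt
fi
if [ -f asyncapi-list.txt ]; then
  while read -r spec; do generate "$spec" asyncapi; done < asyncapi-list.txt
fi
```

Naming the output folder after the spec file (orders.yaml becomes site/orders/) keeps each rendered index.html separate, matching the folder-naming recommendation above.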

Final Thoughts

One of the things we saw when adopting this approach is that the generation process highlighted an issue in the API YAML that the VS Code plugin for Open API didn’t flag: the accidental duplication of an operationId when defining an API (an error introduced when creating related API definitions with a bit of cut, paste, and edit).

A static documentation generator is also available for GraphQL (https://2fd.github.io/graphdoc/). Although we have not tested it, the available examples, while making the schema navigable, aren’t as elegant in presenting the details as our AsyncAPI and Open API renderings.

Cloud Native Architecture book


It’s a busy time with books at the moment. I am excited and pleased to hear that Fernando Harris‘ first book project has been published. It can be found on amazon.com and amazon.co.uk, among other sites.

Having been fortunate enough to be a reviewer of the book, I can say that what makes this book different from others that examine cloud-native architecture is its holistic approach to the challenge. Successful adoption of cloud-native approaches isn’t just technical (although this is an important element that the book addresses); it also considers the organizational, process, and people dimensions. Without these dimensions, the best technology in the world will only succeed by chance rather than by intent.

As a result, this book bridges the Kubernetes technical content (the technical how-to books we typically see from publishers like Manning and O’Reilly) and the more organizational leadership books you might expect from IT Revolution (Gene Kim et al.).

A read I’d recommend to any architect or technical lead who wants to understand the different aspects of achieving cloud-native adoption rather than just the mastery of an individual technology.

Secure APIs (MEAP) book – Initial Impressions


My day job as a technical architect means I spend a lot of time working on and around technical non-functional needs, from observability to APIs. And APIs are everywhere (sometimes we don’t talk about things like the OpenTelemetry Protocol (OTLP) as an API, but that is what it is), and I’ve written and blogged on the subject many times in the past.

One of the things I tend to do is read books on the subject – always on the lookout for new strategies, ideas, and techniques for handling an API’s number one challenge: security. So a new book on Secure APIs from José Haro Peralta being published by Manning caught my attention (as a Manning author, I have the perk of looking at books both published and in the Early Access Program).

The Early Access Program means that chapters are made available once they have been written and have gone through the initial review processes. The book is still in development and has not gone through a full copy edit, but the core ideas and messages are there.

The book so far looks really good. It comes across as very practical, illustrating the points it needs to make from the outset, with some nicely presented insights about why API security is such an important consideration: 54% of web traffic is API-driven, organizations see as many as 10 million attacks per day, and a breach typically costs $6.1 million. If you’re trying to make a case for investing in API security, there are some great references here.

The book doesn’t just look at implementing the code that powers the API contract, but also the tools, from firewalls to gateways. It engages in the process of figuring out what risks an API needs to mitigate and the consequences of failing to do so. While the first couple of chapters look at the broader landscape and ideas, we can expect a closer look at things like the OWASP Top 10 (a resource that should be mandatory learning for anyone implementing APIs, or doing web app development more generally) as the book progresses.

The first couple of chapters read well and are easy to absorb, and we’re looking forward to reading the coming chapters, which will discuss the nuts and bolts of securing APIs.

The only observation to be aware of at this point is that, while not explicitly stated, the illustrations suggest a strong bias to RESTful web services, with only the Open API Initiative logo appearing. While REST is the most common API approach, gRPC and GraphQL are continuing to make big inroads, as are the event-driven APIs covered by the AsyncAPI spec. I suspect this will be addressed, given José’s background and expertise. I’m looking forward to the coming chapters.

Useful Quick Reference Links when Writing API Specs


Whether you’re writing AsyncAPI or Open API specs, unless you’re doing it pretty much constantly, it is useful to have links to the specific details – to quickly check the less commonly used keywords, to make sure you’re not accidentally mixing OpenAPI with AsyncAPI, or to check the differences between version 2 and version 3 of the specs. So here are the references I keep handy:

There are some useful ISO specs for common data types, like dates. Ideally, if you’re working in a specific industry domain, it is worth evaluating the industry-standard definitions (even if you don’t elect to use the entire standardized objects). But when you’re not in such a position, it is at least worth using standard ways of representing data – it saves on documentation effort.
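For example, in an OAS schema the built-in string formats give you ISO 8601 / RFC 3339 dates with no extra documentation needed (property names here are illustrative):

```yaml
# Standard OAS string formats for ISO 8601 / RFC 3339 dates.
properties:
    createdAt:
        type: string
        format: date-time   # e.g. 2024-07-04T12:00:00Z
    dueDate:
        type: string
        format: date        # e.g. 2024-07-04
```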

Not a standard, but still an initiative to promote consistency, backed by the likes of Microsoft, etc., so it could provide some insights/ideas/templates for common data structures – https://schema.org/

There are, of course, a lot of technology-centered standards such as media streaming, use of HTTP, etc.

These and many more resources are in my Tech resources.