It’s all about … Fluent Bit


We can reveal why things have been quieter than usual on the blogging front. Logging in Action with Fluentd has a partner title … Fluent Bit with Kubernetes.

The new book focuses on Fluent Bit, given its significant advances, reflected by the fact that it is now at version 2 and deserving of its own title. The new book is free-standing but complementary to Logging in Action. Logging in Action focuses on Fluentd; the new book complements it by addressing deployment strategies for Fluent Bit and Fluentd in more depth. The new book also engages a lot more with OpenTelemetry, now that it has matured, along with technologies such as Prometheus.

The increasing focus in the cloud-native space on startup speed and footprint efficiency has helped drive Fluent Bit, as it runs as a native binary rather than relying on a Just-In-Time-compiled runtime like Ruby (which Fluentd uses). The other significant development is the support for OpenTelemetry.

The book has entered the MEAP (Manning Early Access Program). The first three chapters have been peer-reviewed and the changes applied; another three are with the development editor. If you’ve not read a MEAP title before, you’ll find the critical content is in the chapters, but in my experience the chapters improve as we work through the process and feedback is received. In addition, as an author, when we have had time away from a chapter and then revisit it, it is easier to spot things that aren’t as clear as they could be. So it is always worth returning as a reader and looking at chapters again. Then, as we move to the production phases, any linguistic or readability issues that still exist are addressed as a copy editor goes through the manuscript.

I’d like to thank those involved with the peer review; their suggestions and insights have been really helpful. Plus, the team at Calyptia is sponsoring the book (and happens to employ a number of the Fluent Bit contributors).

We also have a discount code on the book, valid until 20th November – mlwilkins2

Peter Gabriel I/O


It’s been about twenty years since we’ve had any new original songs from Peter Gabriel. Now, for the last year, he has been teasing us by releasing a new track every month in two mixes, called the Bright Side and the Dark Side, which sort of makes sense, given you could see I/O as a rather abstract representation of Yin and Yang.

With a track each month, it has been an interesting experience, as it has given us time to absorb each track, rather than a big audio feast of an album where the singles leap out at you and you then start to appreciate the other tracks. If there is a downside, it is probably that it is no longer easy to say which tracks are the singles. But to be honest, I don’t think that matters to Peter Gabriel. There may be fan favorites, but that’s about as far as it has gone since Us.

However, even knowing which tracks are becoming fan favorites has been tough. As Peter has toured the album, depending upon where you are in the world, you’ll have only heard some of the new songs, even though the core of the live show has been I/O.

The musical core of the band remains largely unchanged, with David Rhodes and Tony Levin, and Manu Katche back on drums for most of the tracks. John Metcalfe is back, having also contributed so wonderfully to New Blood and the tours over the last ten years where Gabriel has used orchestral arrangements.

With this team, we have a real mix of styles and sounds: from the very reflective Playing For Time, which opens with a muted horn reminiscent of tracks like Father, Son on Ovo, to rhythm-heavy tracks like The Court that would have fit in on the Up album.

As with all the two-letter-titled albums, there is a loose theme to the album. For I/O, that is input and output: whether it is input from observation, as suggested by Panopticom, or the title track’s take on how we absorb from and contribute to our environment.

What the album shows and the tour demonstrated is that, unlike some of his peers, while Peter’s voice has changed, the songs fit what sounds like a more weathered voice. The older songs, which may have been pitched higher, still have the energy and dynamics, but are perhaps pitched a little differently. So none of the challenges faced by the likes of Jim Kerr, who leans more on backing vocalists live, or Sting and Bono, who you can hear have to really work to hit some of the notes.

Peter has continued the idea that each song gets its own artwork associated with it, which came to prominence on the Us album (you can see more with Art From US). Some videos of this work can be seen here.

Along with the artwork, there have been some amazing videos, notable for their use of technology – particularly the application of some generative AI. Check out these:

Some images from the videos …

Speeding Ruby


Development trends have shown a shift towards precompiled languages like Go and Rust, away from interpreted or Just-In-Time (JIT) compiled languages like Java and Ruby, as precompilation removes the startup time of the language virtual machine and the JIT compiler, and yields a smaller memory footprint. These are all desirable features when you’re scaling containerized solutions and percentage-point savings can really add up.

Oracle has been leading the way with its work on GraalVM for some years now, and as a result, not only can GraalVM be used to produce native binary images from Java code, it also supports TruffleRuby and GraalPy, among others. As TruffleRuby is an open-source project, Oracle isn’t the only vendor contributing to it; work has also come from Shopify.

Helping Ruby move forward isn’t new for the Shopify engineering team, and part of that investment is that they have just announced the open-sourcing of a toolchain called Ruvy. Ruvy takes Ruby code and creates a WebAssembly (WASM) module from it, building on the existing ruby.wasm project. In doing so, they’ve addressed the Ruby startup overhead of the language VM we mentioned. They have also simplified the deployment process by eliminating the need for WebAssembly System Interface (WASI) arguments, and overcome the constraints of loading classes by reading files: the code is bundled within the assembly and its content accessed using WASI-VFS, a simple virtual file system.

The published benchmarks show a massive performance boost over the approach where the packaged Ruby VM still has to initialize and interpret the code at execution time. For me, this is interesting, as one of the related cloud-native trends is the shift from Fluentd to Fluent Bit. Fluentd was built with Ruby and has a huge portfolio of third-party extensions, but Fluent Bit is built using C to get the performance gains previously described. It does, however, support plugins through WASM. This raises an interesting question: can we take existing Ruby plugins and wrap them so the required interfacing works? That wrapping should be minimal and is more likely to be impacted by the fact that Fluent Bit v2 has refined the internal data structure that was common to both Fluentd and Fluent Bit, so that Fluent Bit can more easily engage with OpenTelemetry.
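
To make the integration point concrete, here is a minimal sketch of a classic-format Fluent Bit configuration that routes records from a test input through a WASM filter. The module path and exported function name are placeholders of my own, and the exact parameter names should be checked against the Fluent Bit documentation for the version you are running.

    [INPUT]
        # generate simple test records so there is something to filter
        Name    dummy
        Tag     demo

    [FILTER]
        # hand each matching record to a function exported by the WASM module
        Name            wasm
        Match           demo
        WASM_Path       /plugins/my_ruby_filter.wasm
        Function_Name   filter_records

    [OUTPUT]
        # print the filtered records so the result is visible
        Name    stdout
        Match   demo

If a Ruby plugin compiled via Ruvy could expose a function in this form, the rest of the pipeline configuration would not need to change.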

If the extra bit of wrapping code isn’t complex, then applying Ruvy should mean the core plugin can then work with Fluent Bit. If this can be templated, then Fluent Bit is going to make a big leap forward with the number of available plugins.

Clickbait headlines on open-source project maintenance


Infoworld published a rather clickbaity, incendiary news item the other week: ‘few open source projects actively maintained’. Personally, I find these statements a little frustrating, as it would be easy for the less informed to assume that adopting open-source software is dangerous. There are several missed points here:

  • How well and how frequently are closed-source solutions being maintained, and does that stop businesses from using end-of-life products? There is big business to be had in offering support for end-of-life solutions. Just look at companies like Rimini Street. Such organizations aren’t going to change software unless there is a major issue.
  • Not all open-source software is intended to undergo continuous maintenance. Shocking? Not once you consider that open-source projects remain open and available even when they have been declared end-of-life. Why? One of the things about open source is that you don’t know who is using the code, and suddenly pulling the code because the originator has decided they can no longer maintain their investment could put others in a difficult position. So, the right thing is to leave the source available and allow people to fork it so they can continue maintaining their own version, or keep using it until they’ve migrated away. That way, those depending on the code are not left stranded by the originator’s decision.
  • Next up, not all open-source projects need continued maintenance; many repositories exist to provide demo and sample solutions – so that developers can see how to use a product or service. These repositories shouldn’t need to change often. Frequent change could easily be a sign of an unstable product or service. These solutions may not be the most secure, as you don’t want to complicate the illustration with all the checks and balances that should be considered. Look at it this way: when we start learning a new language or tool, we start with the classic Hello World – which today means pointing your browser at a URL and seeing the words appear on the page. Do we insist that the initial implementation be secure? No, because it distracts from the basic message. For example, on GitHub I have multiple public repositories with Apache 2 licenses attached to them – i.e., open source. A number of them support the books I’ve written – they aren’t going to change – in fact, change would be a bad thing unless the associated book is corrected (this repo, for example).
  • When it comes to security vulnerabilities, these need to be viewed with some intelligence, for several reasons:
    • As mentioned, our demo examples are unlikely to be patched with the latest versions of dependencies all the time; the point is to see how the code works – unless the demo relates directly to something that has to be patched and that changes the demo itself. I don’t think it is unreasonable to expect developers to apply some intelligence and check dependencies (and therefore the risk of known vulnerabilities) rather than blindly cutting and pasting. The majority of the time, such content will be published with a minimum version number, not a maximum.
    • Sometimes, a security vulnerability isn’t an issue. For example, I rarely run vulnerability checks on my LogSimulator. Not because I have a cavalier attitude to security, but because I don’t expect it to ever be near a production environment, and the data flowing through the tool will be known and controlled by the user in advance of any activity. Secondly, it shouldn’t be using sensitive data, and thirdly, if there were any malicious intent, I’d be more concerned about how secure its data source and configuration are. The tool is a command-line solution. That said, I still apply development practices that minimize potential exploitation.

Don’t get me wrong, there are risks with all software – closed and open source, whether it is maintained or has security vulnerabilities. A software development team has a responsibility to make informed, risk-aware selections of software (open or closed source). If you have the means to check for risks, then they are best used. It is worth not only scanning our own code but also considering whether the dependencies we use have been scanned where appropriate (e.g., when used in production). Utilizing innovations like SBOMs and exercising routine checks and reviews can also help.

While I can’t prove it, I suspect more risk is being carried by organizations that adopted a library considered sufficiently secure when it was downloaded, but in which vulnerabilities have subsequently been found, or where the mitigations originally chosen have eroded over time.

Java 21 & GraalVM — lots to be excited about


Today, Java 21 has reached General Availability (GA) with some important new features in the language mainstream (i.e., not requiring preview flags to be enabled), and Oracle will be supporting Java 21 as a Long-Term Support (LTS) release (guaranteed at least 3 years of free support – 2 years to the next LTS plus 1 year of overlap – and then at least an additional 5 years under a support subscription). Everyone is talking about virtual threads. Interestingly, virtual threads mean that, in the majority of cases, we no longer need to handle the complexities of reactive programming – not my point of view, but a view expressed earlier today by Tomas Langer, the architect for Helidon. For old hands like myself, this is a blessing, as the old-style threading comes more naturally. There are a lot of other smaller features coming through in the language with this release, such as record patterns, the generational Z Garbage Collector, and a new Key Encapsulation Mechanism API. All the fine details can be found on the Oracle Java blog.
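
To show why the return to the old blocking style is so appealing, here is a minimal, illustrative sketch (the workload is made up) using the Java 21 virtual thread API; each blocking task gets its own cheap virtual thread rather than tying up an OS thread:

    import java.time.Duration;
    import java.util.concurrent.Executors;

    public class VirtualThreadsDemo {
        public static void main(String[] args) {
            // Each submitted task runs on its own virtual thread; blocking parks the
            // virtual thread instead of holding an OS thread for the duration.
            try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 10_000; i++) {
                    int taskId = i;
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // simulate blocking I/O
                        return taskId;
                    });
                }
            } // closing the executor waits for all submitted tasks to complete
            System.out.println("All tasks completed");
        }
    }

The same code written with platform threads would struggle at this scale; with virtual threads, the straightforward blocking version is usually enough.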

dev.java has a new Playground, which allows you to write some Java code in the browser and run it – no local JDK or IDE needed. Great for trying out code, like pattern matching for switch statements.
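
If you want something to paste into the Playground, here is a small, illustrative example of my own combining pattern matching for switch with record patterns (both finalized in Java 21):

    public class ShapeDemo {
        // A sealed hierarchy lets the switch be exhaustive without a default branch.
        sealed interface Shape permits Circle, Rectangle {}
        record Circle(double radius) implements Shape {}
        record Rectangle(double width, double height) implements Shape {}

        static double area(Shape shape) {
            // Record patterns deconstruct the matched type in place.
            return switch (shape) {
                case Circle(double r) -> Math.PI * r * r;
                case Rectangle(double w, double h) -> w * h;
            };
        }

        public static void main(String[] args) {
            System.out.println(area(new Circle(2.0)));          // ~12.566
            System.out.println(area(new Rectangle(3.0, 4.0)));  // 12.0
        }
    }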

GraalVM gets a new release alongside Java 21, along with some other cool features, including being able to deploy Graal’s polyglot features with just the support for the languages you want, meaning the GraalVM footprint in containers is kept as small as you need. This decoupling is supported with Maven and Gradle configurations. With this come enhancements to Just-in-Time (JIT) and Ahead-of-Time (AOT) performance – read more about this in Alina Yurenko‘s blog.

Who is Claude Shannon?


Anyone in IT will have heard of Alan Turing and Tim Berners-Lee. The majority of developers will know about Ada Lovelace. But what about Claude Shannon? Well, I have to admit that I didn’t until I had time to watch the documentary film The Bit Player. I am shocked I’d never come across Shannon’s name before, given the importance of his work.

So what did he do? Well, Claude was responsible for Information Theory, which some people will have heard of. His MIT thesis showed how Boolean algebra and switching circuits could be used to manage data. He published a couple of really important papers in the 1940s. The most important of these, A Mathematical Theory of Communication, put forward a number of ideas:

  • All means of communication can be reduced to a logical representation:
  • Representation of information using bits
  • To optimize communication, we should compress data – and compression allows us to reduce the data to just enough before it becomes unintelligible. This is best illustrated by the fact we can write messages and omit characters and sometimes whole words and still be understood.
  • We can use mathematical formulas to determine and correct data corruption due to noise – using techniques such as checksums and error correction.
  • There is an upper limit to how much information can be communicated – now referred to as Shannon’s limit (illustrated in the sketch below).
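
As a small illustration (mine, not Shannon’s) of the bits and compression ideas above, the sketch below computes the Shannon entropy of a message – the average number of bits per symbol needed to encode it, and the quantity that underpins both the compression bound and the channel limit:

    import java.util.HashMap;
    import java.util.Map;

    public class EntropyDemo {
        // Shannon entropy: H = -sum(p * log2(p)) over the symbol probabilities.
        // It gives the theoretical lower bound, in bits per symbol, for lossless coding.
        static double entropy(String message) {
            Map<Character, Integer> counts = new HashMap<>();
            for (char c : message.toCharArray()) {
                counts.merge(c, 1, Integer::sum);
            }
            double h = 0.0;
            for (int count : counts.values()) {
                double p = (double) count / message.length();
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            // Repetitive text carries less information per character than varied text.
            System.out.printf("aaaaaaa -> %.3f bits/char%n", entropy("aaaaaaa"));
            System.out.printf("hello world -> %.3f bits/char%n", entropy("hello world"));
        }
    }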

While this may seem obvious today, in the 1940s computers were still electromechanical – making it groundbreaking. Claude’s later work may not have been as seismic as these initial papers, but in the 50s he demonstrated, with basic telephone switches and magnets, the underlying ideas of machine learning using a robotic mouse called Theseus that had to navigate a maze (read more here). He also illustrated ideas of how to computationally beat chess masters, which is what eventually happened with IBM’s Deep Blue against Garry Kasparov.

It’s a lovely documentary film, which includes reconstructed interviews with Claude that happened in the 80s. Sadly, Shannon died in 2001 from Alzheimer’s, possibly the cruelest of illnesses for such an insightful and intelligent person.

If you’d like to know more about Shannon – then have a read of this paper. The film The Bit Player can be found on several streaming services and YouTube, and the film’s website is here.

Visualizing A Career Path


I recently wrote a piece for DZone about visualizing career paths. To help people make use of the diagrams for their own visualizations, we’ve made the original PowerPoint diagrams available here:

Update

We’re excited to hear we’ve had another DZone article selected to be used on the homepage …

Architectural governance – decision matrices as a way to reduce friction


Governance processes such as architectural governance boards can often be perceived as a hold-up to software delivery, so when a project is slipping against its forecast timelines, such processes can become the easy thing to blame (along with any other process that engages beyond the project team). Sometimes the slip is happening for very good and legitimate reasons; in these situations, it is just very hard to defend the slip.

There are a number of things we can do to simplify and streamline the process. One of these is the use of decision matrices – something I’ve written about in the past (Decision Matrix aka ‘Stress Test’ as a vehicle to make decisions easier). The value of the decision matrix when it comes to governance is that it can be used as a catalog of pre-approved solution approaches. Let’s give an example: we could provide a decision matrix to select the best type of application server, which perhaps covers whether a MicroProfile framework is used and which ones (e.g., Helidon but not Payara, because of the support agreements in place) vs. J2EE (again reflecting decisions relating to implementations, such as WebLogic but not WebSphere). Then, when a team decides on an implementation or develops a roadmap, if they are working within the matrix’s guidance, the decision could be approved on the spot by any member of the governance team, with the approval given by simply checking that the approach being adopted is sensible.

TOGAF – governance perspective

If the solution falls outside of the decision matrices’ recommendations, this comes down to one of the following reasons:

  • The approach is a good one that could and should be applied within the domain but has not yet been captured in the matrices – therefore, the matrix needs updating.
  • The solution makes sense and follows common industry strategies and/or tools but is addressing an outlier/anomalous situation for this organization – therefore, it should go to governance seeking a dispensation on that basis. In this situation, it would be beneficial for the designer(s) to make the case for dispensation by highlighting how the existing decision options do not fit – in effect, sharing the assessment of the relevant matrix (or matrices) against the problem.
  • The approach reflects the development team’s preferences rather than aligning with the organization’s need to be able to maintain the technologies it adopts. For example, keeping the development languages in use to the top 5 commonly used languages according to TIOBE, rather than adopting a niche language such as Haskell, or avoiding languages that have a reputation for being difficult to maintain, such as Perl. In these situations, a careful examination of the case is needed by any governance process.

What we are effectively doing is making the decision matrix not only a tool to help developers select the most effective options (given that the ability to standardize approaches raises the chance of code reuse or refactoring to reuse) but also a way to lighten governance, or the perception of governance. Whatever mechanism is used to record decisions by a team just has to reference the decision matrices.

Road to Kubernetes – MEAP book review


One of the benefits of being a Manning author is that we get access to the Manning book catalog, including titles currently in the Manning Early Access Program (MEAP). The Road to Kubernetes title was brought to my attention. The book has just become available as a MEAP title; this means it has just completed its first major review milestone, and about a third of the book has been written. It does mean our review only covers the first 3 chapters at the moment.

What got my attention with this book is that, unlike other titles about Kubernetes (of which there are a number of great ones in the Manning portfolio already), it has adopted a different approach.

Most books focus on one technology, deep dive into it, and dig into the more advanced features of that specific area. For an experienced IT person, that is great. But when it comes to Kubernetes, if your skills are largely focused on just coding with languages like Java, Python, and JavaScript – not unusual for a graduate or junior developer – the amount of reading and the learning curve to get to grips with developing and deploying containers to Kubernetes is considerable. Here, Justin has taken the approach of assuming basic development skills and then taking you on a journey focusing on the basics of containers, deployment automation, and then Kubernetes, with just enough to be able to deploy a simple solution using good practices. This makes the learning path to gaining the skills that allow you to work within a team building containerized solutions a lot easier.

I imagine once the book is complete and you’ve followed it through, you’ll be in a position to focus on learning new, more advanced aspects of containers and Kubernetes in a focused manner to meet the needs of a day-to-day job.

Having coached and mentored junior developers and graduates, this is a book I’d recommend to help them along, and if my experience with the Manning book development process is anything to go by, as Justin goes through the major milestones, this book will go from good to great.

My only word of caution is that this book will take the reader on a journey of building and deploying microservices to Kubernetes. Don’t be fooled into thinking Kubernetes and microservices are easy – there are a lot of technologies that I don’t think the book will go into (but then not all developers need to understand details such as the differences between network fabrics (Calico, Flannel), container engines (cri-o, Docker Engine), or deploying support tooling for things like observability). Without good design and, depending upon your solution, a handle on a variety of more specialized areas, it is still possible to get yourself into a mess, even for the most experienced of teams.

DZone article – IDE Changing as Fast as Cloud Native


While this might be my home for sharing thoughts and knowledge, my domain name can work against me when it comes to new potential readers (once people have found me – it’s a fairly easy domain name to remember and get back to). That does mean I occasionally write and publish content elsewhere (Software Engineering Daily, and Medium, for example). I’ve recently written a couple of posts on DZone, the latest of which looks at how IDEs have evolved.

Today we’ve just heard that the article is on the DZone homepage (top left in the image below). If you’re a bit old school, it feels like we’ve made the front page of the national press (for the really old, it would be fair to say as a More Articles piece, it is ‘below the fold’). Go check it out here.