Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Monthly Archives: February 2026

Fluent Bit and OTel Collectors at scale

26 Thursday Feb 2026

Posted by mp3monster in Fluentbit, General, Technology

Tags

AI, artificial-intelligence, Cloud, LLM, OpAMP, Open Telemetry, OTel, Protobuf, Spec, Technology

Fluent Bit and OpenTelemetry’s Collector (as well as many other observability tools) are designed to use a distributed/agent model for deployment. This model can pose challenges, including ensuring that all agents are operating healthily and are correctly configured. This is particularly true outside of a Kubernetes ecosystem. But even within a Kubernetes ecosystem, more than basic insights are required (for example, is it running flat out, or is it over-resourced?). Fluent Bit exposes its own metrics and logs, so you can either configure Fluent Bit to forward them to an endpoint or allow Prometheus to scrape the metrics.
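
As a sketch of the self-observability side, here is a YAML-format configuration using Fluent Bit's documented fluentbit_metrics input and prometheus_exporter output (ports and the scrape interval are illustrative; check the details against your version's documentation):

```yaml
# Expose Fluent Bit's own health/metrics via its built-in HTTP server,
# and also route the internal metrics through a normal pipeline so they
# can be scraped from a dedicated endpoint.
service:
  http_server: on          # enables the built-in HTTP endpoints
  http_listen: 0.0.0.0
  http_port: 2020

pipeline:
  inputs:
    - name: fluentbit_metrics   # Fluent Bit's internal metrics
      tag: internal_metrics
      scrape_interval: 2
  outputs:
    - name: prometheus_exporter # serve them for Prometheus to scrape
      match: internal_metrics
      port: 2021
```

The same internal_metrics stream could instead be matched by an output such as opentelemetry or prometheus_remote_write to push the data to a central backend rather than being scraped.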

Since we’re usually using Fluent Bit to collect data and route it to tools like Prometheus and Grafana, or perhaps a more commercial product, it makes sense for Fluent Bit to also share its own status and health through the same channels.

When it comes to managing the configuration of Fluent Bit, we have lots of options for Kubernetes deployments (from forcing pod replacement, to sharing configurations via persistent volume claims, to Fluent Bit reloading its own configs). But given that observability needs to operate in simple virtualized and bare-metal scenarios, where not everything can be treated as dynamically replaceable, more general strategies are available, such as GitOps and potentially Istio (yes, there really are good use cases for Istio outside of Kubernetes). There is also more advanced tooling, such as Puppet, Chef, and Ansible.

The challenge is that none of these tools provides out-of-the-box capabilities to fully exploit the control surface that OTel Collectors and Fluent Bit offer. So the OpenTelemetry community has elected to develop a new standard called OpAMP (Open Agent Management Protocol), which fits snugly into the OTel ecosystem.

OpAMP defines an agent/client and server model in which the central server provides control, measurement, etc., to all Collectors. The agent/client side can be deployed in two ways: wired directly into a collector, or via a separate supervisor tool. Integrating the client side directly into the collector is great, as it avoids introducing a new local process. The heart of OpAMP is the message exchange, which we’ll look at more in a moment.

Today, Fluent Bit would need to use the Supervisor model without modification (although GitHub shows a feature request to support the protocol, we haven’t heard whether or when it will be implemented). But it is early days, and some aspects of the protocol are still classified as ‘development’. That said, it is worth looking more closely at OpAMP as it offers some interesting opportunities, particularly around how we could easily evolve ideas such as chatOps.

While I’m not a fan of having a peer process for Fluent Bit, on the basis that we then have two distinct processes to support observability, we could experiment with the supervisor spawning Fluent Bit as a child process, which would let the supervisor easily spot Fluent Bit failing. At the same time, the supervisor can communicate with Fluent Bit using the localhost loopback adapter and the usual APIs.

Understanding the OpAMP Protocol and what it offers

Let’s start with what the OpAMP protocol offers. Firstly, very little of it is mandatory. This is both good and bad: it means compatibility can be built incrementally (if certain behaviours are provided elsewhere, the agent doesn’t have to offer that capability), and the server is tolerant of what an agent or collector can and can’t do:

  • Heartbeat and announce the agent’s existence to the server, along with what the agent is capable of/allowed to do regarding OpAMP features.
  • Status information, including:
    • environmental information
    • configuration being run
    • modules being used
  • Perform updates to resources, including:
    • Agent configuration
    • TLS certificate rotation
    • Credentials management
    • module or even an entire agent installation
  • Issuing of custom commands.
  • Directing the agent’s own telemetry to specific services/endpoints.

The heart of this protocol is the contract between the client (agent/collector) and the server, defined using Protobuf (proto3). This means you can easily generate the code skeleton to handle the payload objects, which are transmitted in binary form (giving network traffic efficiency at the price of not being human-readable or dynamically processable, insofar as you need to know the Protobuf definition to extract any meaning).
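
To give a flavour of the contract, here is a much-simplified proto3 sketch. The field names and numbers here are illustrative only; the authoritative definitions live in the OpAMP spec’s own .proto files and carry many more fields:

```protobuf
// Simplified sketch of the two top-level OpAMP message shapes.
// Consult the published spec for the authoritative definitions.
syntax = "proto3";

message AgentToServer {
  bytes  instance_uid = 1;  // unique identity of this agent
  uint64 sequence_num = 2;  // lets the server detect lost messages
  uint64 capabilities = 3;  // bit-field: what this agent can do
  // ... plus status reports, effective config, health, etc.
}

message ServerToAgent {
  bytes instance_uid = 1;   // which agent this reply addresses
  // ... plus remote config offers, connection settings, commands, etc.
}
```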

In addition to the Protobuf definitions, there are rules for handling messages, specifying when a message or response is required, message sequence numbering, and the default heartbeat frequency. But there aren’t any complex exchanges involved.

The binary payloads are exchanged over WebSockets (allowing full-duplex exchanges, where the server can send requests at any time) or over HTTP (providing half-duplex, aka polling/client check-in, at which point the server can respond with an instruction) – a strategy that is becoming increasingly common today.
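
The half-duplex pattern can be sketched with a toy in-memory simulation (plain Python with hypothetical names, standing in for the real binary Protobuf exchange): the agent only learns of server instructions when it checks in.

```python
# Toy simulation of OpAMP's HTTP (polling) transport: server
# instructions piggyback on the response to the agent's check-in.

class Server:
    def __init__(self):
        self.pending = {}          # instance_uid -> queued instruction

    def queue_config(self, uid, config):
        self.pending[uid] = {"remote_config": config}

    def handle_checkin(self, msg):
        # Respond to the poll, including any queued instruction.
        return self.pending.pop(msg["instance_uid"], {})

class Agent:
    def __init__(self, uid, server):
        self.uid, self.server, self.config = uid, server, {}

    def check_in(self):
        # Periodic heartbeat/status report (the poll).
        reply = self.server.handle_checkin({"instance_uid": self.uid})
        if "remote_config" in reply:
            self.config = reply["remote_config"]   # apply the new config
```

With the WebSocket transport, queue_config could instead push straight to the agent rather than waiting for the next poll.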

The benefits we see

Regardless of whether you’re in a Kubernetes environment, the ability to ask agents/collectors to quickly tweak their configuration is attractive. If you start suspecting a service or application is not behaving as expected, and Fluent Bit is filtering your logs or sampling traces, you can quickly push out a config change to allow more data through, providing further insight into what is going on – you don’t need to wait for a Kubernetes scheduler to roll through replacing pods.

With the rise of AI, having a central point of contact makes it easier to wrap the central server as an MCP tool and provide a natural language command interface, along with the possibility of sending custom commands to the agent (or supervisor) to initiate a task. This was part of the functionality we effectively implemented in our original ChatOps showcase. The problem was that in the original solution, we deployed a small Slack bot that directed HTTP calls to Fluent Bit. With the OpAMP framework, we can direct the request to the server, which will route the command to the correct Fluent Bit node through a more trustworthy channel.

Implementation

The OpAMP protocol is likely to be widely adopted by commercial service providers (Bindplane and OneUpTime, for example), as it allows them to start working with additional agents/collectors that are already deployed (for example, a supervisor could be used to manage a Fluentd fleet where there isn’t the appetite to refactor all the configuration to Fluent Bit or an OTel Collector). Furthermore, it has the potential to simplify things by standardizing functionality.

In terms of the richness of resources in the OpenTelemetry GitHub repository, I suspect (and hope) there is more to come (a UI and richer details on how a base server can be extended, for example). At the time of writing, we can see:

  • The published spec
  • Go implementation of the protocol: the server accepts and responds to messages, and the client includes some test functionality to populate messages.
  • A supervisor implementation that, through configuration, can be pointed at an agent to observe, and the configuration so that the supervisor is uniquely identifiable to the server. This is also implemented in Go.
  • An OpenTelemetry Collector extension

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Selling a music collection

19 Thursday Feb 2026

Posted by mp3monster in General, Music

Tags

album, cd, Music, records, vinyl, collecting, vinyl-records, rock

This may come across as maudlin or possibly depressing, but as the popular financial advisor Martin Lewis says, the grim reaper gets us all, and leaving your partner and family in the dark about how to pick up and manage the finances, etc., makes things pretty hard for them. And any half-committed music collector will want their collection treated with the consideration that personal effects like jewellery would have.

When I first thought about this, I assumed it just needed to be a document alongside my will. Then it occurred to me that while some of the guidance would be specific to me, much of it would apply to many collectors, which is why this became a blog post.

My son is starting to make his way in the world of music and might find the courage to take on the collection. Failing that, let friends and family have a rummage before disposal is started.

My better half is not a die-hard music collector; she enjoys music, but it is more a transient pleasure. In record stores and at record fairs, only a couple of things really catch her interest, so the temptation to sell a collection wholesale will be there. There are people out there who will buy up entire collections, or you can simply let a house clearance company take it away – but, as the expression goes, you will ‘only get pennies in the pound’ of the value. I don’t begrudge these people that act; after all, a living needs to be made, the fun of crate digging comes from these people selling on, and sometimes they don’t recognize the value of the music they have acquired. If you want to really make someone turn in their grave, well then, the collection goes to the skip – but think of the environmental harm you’re inflicting.

Discogs is your friend

Discogs is a website that tracks releases in great detail, distinguishing them down to a specific pressing from a particular record plant. As Discogs pays for itself by also operating as an online marketplace, it tracks the highest and lowest prices people have paid for a release – this is the first clue as to the true value of any item in the collection. That said, rare items, which don’t change hands very often, will have prices that might not be representative.

If the collection isn’t on Discogs, it might be worth adding. The process is easy enough with any device that has a camera and a browser. You start by simply scanning the barcode (best to use an app integrated with Discogs). Most of the time, the app will find one or more matching results. You might get more than one, as the barcode can sometimes represent multiple different pressings (typically because a standard printed sleeve may be used, but the vinyl may come from different plants or issuing cycles). When this happens, you’ll need to choose the correct version. This is best addressed by looking at any information that relates to the run-out groove details on vinyl, or its equivalent for CDs. The Discogs guides will help you better understand this.

My possible Discogs errors

For my collection, there are a few details worth keeping in mind. Firstly, when I first cataloged the collection with Discogs, we already had a lot, so when there were lots of versions I selected the one that had the ‘headline details’ that matched – sleeve type, colour, release date. During this phase there is a chance I chose the wrong one, so it’s worth checking before selling. Why check? Well, like books, first pressings usually fetch more value. In some cases, certain pressing plants have been noted to produce higher-quality pressings.

Everything in my collection released from the late 80s onwards will likely be first issues/pressings. This can be verified since the addition date will be within days of the release date.

Grading

All media, when sold, is sold with a condition score, from mint and near mint downwards; this is applied to the media itself (vinyl, CD, etc.), with a separate assessment for the sleeve. Here, Discogs can help, as the scoring system is well described in their guide. For my personal collection, very nearly everything will score highly; there are a couple of exceptions where I accepted lower quality when sourcing through crate digging (virtual or real), which is a fairly small part of the collection.

So, how can I claim this? Well …

  • Media is stored properly and never left out when not being played. Vinyl is never stacked (a cause of warping) or even left leaning.
  • We’ve stored vinyl with antistatic sleeves, very nearly exclusively using Nagaoka Discfile 102s. These are considered by many as the Rolls-Royce of anti-static sleeves.
  • Media has been well stored – record cases, replaced with custom flight cases, and now professional-grade outer sleeves in an IKEA Kallax setup (considered good for vinyl as it can handle the weight).
  • With the advent of the Digipak (folding card sleeves) for CDs, we’ve protected them with outer sleeves so they don’t scuff, etc.
  • Vinyl was never played to death. I used to copy everything to cassette for freedom and casual listening. CDs never got played in cars (a classic source of scratching and tarnishing) – they were copied to CDR or minidisc and later hard disks, to copy onto USB sticks for portability.

My music collection has been cared for in part as I’ve had to work to pay for nearly everything, from paper rounds to Saturday jobs and so on.

Understanding Valuation

Valuation isn’t just driven purely by the quality of the media or rarity (which can be from deliberately limiting numbers produced, to production errors).

The value of any album or single often doesn’t make sense; this is because some artists seem to attract collectors. Depeche Mode, for example, is a mainstream artist with this kind of community. Others may be pretty obscure but still do well; these are often what can be described as an ‘artist’s artist’ – in other words, an artist who has been admired by or influential on other artists, and as a result you get a ‘cognoscenti’ culture.

There are also factors such as the record label involved; an original Chess Records release will be highly prized because of the importance of the label.

Trying to identify what influences value without getting into the head of the collector community isn’t easy. But I’ve tried to distill some easy-to-spot influences. These are certainly true for my collection.

The bottom line is that the closer you want to get to the best possible price for any artefact, the more you’re going to need to understand the collector community. If you’re honouring wishes not to let a collection go for rock-bottom prices, then we’d recommend checking prices on several websites, such as Discogs, eBay, and others.

Of course, if you’re dealing with a large collection, you need to filter down what is run of the mill vs. potentially valuable. The following are general quick clues:

  • Singles (they rarely get repressed and have tracks that don’t show up on other releases).
  • Numbering on the sleeve – the smaller the batch, the greater the possible value.
  • Signed by the artist.
  • Die-cut or lenticular covers.
  • Box sets, which often have extra content not available elsewhere and are produced in smaller numbers.
  • Anything produced before the mid-1950s.
  • Coloured vinyl (picture discs can fall into this category).
  • In North America and Europe, there is value in Japanese releases (usually with an obi strip – a paper strip wrapping the recording).
  • Bootleg recordings – usually live recordings.
  • Vinyl releases with gatefold sleeves for albums with only one piece of vinyl, or with booklets (these cost money for no real gain other than possibly driving up early sales).
  • Anything labelled as a Record Store Day release (indicates a limited issue).

CD valuation

The pricing of CDs has generally cratered. This is a combination of vinyl regaining popularity, CDs being easier to fake, the sheer volumes produced, and the fact that, unlike vinyl, CDs generally don’t go out of print, as they can be produced pretty much on demand now.

But this isn’t true for everything. CD singles have definitely retained and even gained value; this can be attributed to:

  • They have versions of tracks or even extra songs that haven’t made it to the streaming platforms.
  • The singles have definitely gone out of print.
  • Unusual sleeves, die cut, lenticular, different artwork.
  • By the 90s, singles saw smaller production numbers.

Some record labels were prepared to do special things, particularly with singles, to drive up sales, which improved chart positions, which in turn helped propel album sales.

For my part, we collected a lot of CD singles because of all the extra tracks that didn’t make it to albums. It became common to release multiple versions of singles, and yes, with my favourite artists, or those with a reputation for investing in remixes or B-sides, I’d buy up all the versions, which together could boost the value.

Some CDs had limited runs, with sleeves sometimes having numbering printed on them, or being hand-signed by the artist – a benefit of buying directly from artist websites as soon as the album was announced.

Provenance

Provenance, particularly for signed albums, can be tricky at times. Sometimes the delivery note will record that the release is a special edition, or signed, etc.; but usually the unique characteristic would only be mentioned on the website and the web order, which I usually saved as a PDF among the record of purchases kept electronically.

Pricing guides

Aside from Discogs, there are other places to try and ascertain value. There is the Rare Record Price Guide book, which picks up on the better-known collectables, but I’ve found its prices are often below potential. Then there is Record Collector magazine, which provides a way to list sales, and you can also see what people are selling for.

Selling

While Discogs is one option for selling, eBay is another, and there appears to be greater tolerance for price ranges (and lower fees than on Discogs). When it comes to editions with distinct uniqueness/value, another option to maximise value is to sell via fan websites (and Facebook pages), where people are more likely to recognise the release’s value, such as a complete set of CD singles, a first pressing, etc.

The watchword here is patience. Don’t skimp on the postal packaging; we’ve hung onto some packaging, but selling online will need a lot more. Sourcing this from mainstream channels will make it expensive; go to a specialist like Covers33 and buy in quantity.

Useful resources

  • Discogs
  • Record Collector magazine
  • Covers33
  • Popslike website – we’ve not used it, but it has been collecting auction price data.
  • Rare Record Price Guide

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

Intersection of API documentation and Search optimisation

18 Wednesday Feb 2026

Posted by mp3monster in APIs & microservices, General, Technology

Tags

API, API Commons, APIs.io, Async API, IETF, OAS, RFC, specifications, Technology, WELL-KNOWN

I have long said that APIs are more than just payload definitions. For public APIs, or those being made available within large organisations, discoverability and making it easy for APIs to be adopted are just as important as the definition of the API itself. API specification standards such as the Open API Specification and the Asynchronous API Specification have addressed some of the challenges, but the specifications don’t address everything.

Diagram I was using in 2021, to convey the point that an API involves more than simply a specification – https://www.slideshare.net/slideshow/api-more-than-payload/247135283

I’ve recently been working on the public API and integration strategy for a product, and revisiting this very point to ensure there is time for the wider needs.

There is good news in this area. The IANA-governed WELL-KNOWN URI registry has gained a new entry as a result of IETF RFC 9727, which defines a way of publishing a structured list of an organisation’s APIs at the api-catalog path (e.g., https://www.example.com/.well-known/api-catalog).

The URI then returns a JSON-based payload using the relatively new linkset media type (application/linkset+json), which consists of an anchor to the actual API, plus additional attributes (per IETF recommendations) that can be supplied to reference metadata such as the API specification; each additional attribute is made up of an href and a type. The RFC includes a set of suggested additional metadata references, which could cover details such as:

  • service-doc – Link to the API specification document, such as the Open API Specification.
  • status – Link to the endpoint providing status information for the API, for example whether the service is currently available.
  • service-meta – Additional machine-readable metadata that may be needed.
  • availability – Details about the service availability, e.g. any SLAs/SLOs, when maintenance windows may occur, etc.
  • performance – Details of any rate limits imposed upon the API.
  • usage – Lets the provider of the api-catalog correlate requests to the /.well-known/api-catalog URI with subsequent requests to the API URIs listed in the catalog.
  • current – Information about which services may be deprecated or no longer available.

Suggested additional attributes.

Given this, the response data structure could look something like this:

{"linkset": [
  {
    "anchor": "https://developer.example.com/apis/foo_api",
    "service-desc": [
      {
        "href": "https://developer.example.com/apis/foo_api/spec",
        "type": "application/yaml"
      }
    ],
    "status": [
      {
        "href": "https://developer.example.com/apis/foo_api/status",
        "type": "application/json"
      }
    ],
    "service-doc": [
      {
        "href": "https://developer.example.com/apis/foo_api/doc",
        "type": "text/html"
      }
    ]
  }
  ... next API service ...
]}
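
As an illustration, a payload of this shape could be assembled with a small helper. The function and its input structure are hypothetical; only the linkset/anchor/href/type structure comes from the linkset media type and RFC 9727:

```python
import json

def build_api_catalog(apis):
    """Assemble an application/linkset+json payload for
    /.well-known/api-catalog. `apis` maps an anchor URL to a dict of
    link relations, each an (href, media_type) pair."""
    linkset = []
    for anchor, relations in apis.items():
        entry = {"anchor": anchor}
        for rel, (href, media_type) in relations.items():
            # Each relation is a list of link objects per the linkset format
            entry[rel] = [{"href": href, "type": media_type}]
        linkset.append(entry)
    return json.dumps({"linkset": linkset}, indent=2)

catalog = build_api_catalog({
    "https://developer.example.com/apis/foo_api": {
        "service-desc": ("https://developer.example.com/apis/foo_api/spec",
                         "application/yaml"),
        "service-doc": ("https://developer.example.com/apis/foo_api/doc",
                        "text/html"),
    }
})
```

A web server would then return this string with the Content-Type header set to application/linkset+json.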

This opens the door to referencing additional content that may not be available in the current API specifications (and to wrapping structural context around GraphQL APIs, which don’t have as good documentation semantics as OAS, for example). But, as we’ve mentioned, even OAS doesn’t provide an easy way to publish references to SDKs, for example, which the catalog would now let us reference easily. The question is: how do we identify these different building blocks? Here, apicommons.org can come to our rescue. Rather than providing a highly prescriptive specification like the OpenAPI Specification and others, it has reviewed the different standards and practices and identified the commonly needed resources, ranging from authentication details to versioning strategies. Each entry provides recommended tags or attributes to use, which map perfectly into the additional entries for the RFC 9727 linkset.

If you review all the aspects described in apicommons.org, you’ll note that the potential list of resources you may wish to offer is extensive, and may not be best suited to all being in each catalog entry. This isn’t an issue. Alongside the API Commons site, there is a partner site, APIs.json. This offers a machine-readable API definition. It does not seek to compete with, disrupt, or displace standards such as OAS; doing so would be a real uphill struggle, as OAS is too well adopted (to the point that we’ve seen good alternatives such as API Blueprint and RAML fall away to minimal or no advancement). Instead, APIs.json is a wrapper definition around the likes of OAS, providing a standardization of the elements identified by API Commons. So we could simplify our catalog to the mandatory API endpoint and a supporting reference to the APIs.json file, which brings all the resources mentioned together.
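
For a flavour of what that wrapper looks like, here is a rough sketch of an APIs.json file (field names are from my reading of the apisjson.org spec and should be checked there; the URLs are placeholders):

```json
{
  "name": "Example Org APIs",
  "description": "Machine-readable index of our public APIs",
  "url": "https://developer.example.com/apis.json",
  "apis": [
    {
      "name": "Foo API",
      "humanURL": "https://developer.example.com/apis/foo_api/doc",
      "baseURL": "https://api.example.com/foo",
      "properties": [
        {
          "type": "OpenAPI",
          "url": "https://developer.example.com/apis/foo_api/spec"
        }
      ]
    }
  ]
}
```

The properties array is where the API Commons resource types (SDKs, terms of service, status pages, and so on) can be attached to each API entry.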

While these sites place a clear emphasis on JSON-centric APIs, nothing prevents much of this from being applied to non-JSON APIs (in fact, if you look at the APIs.json spec, you’ll see references to technologies such as WADL and others). The only JSON restriction is that the files themselves must be defined in JSON or YAML.

Human-searchable API catalog

Until a few years ago, the Programmable Web website managed a curated catalog of APIs, and it was a fantastic resource, both for discovering potential third-party services and for ideas on how best to model your own API specifications. As the API Evangelist blogged, that site has since closed its doors. With all the above efforts, there is clearly an opportunity to automate the curation of public API services, which is now much more viable. This is the direction APIs.io is taking. It is not yet crawling the web to collect API documents; however, given the support of a search engine and the WELL-KNOWN specification, that is certainly achievable, and more a question of funding and, probably, investment in functionality to filter out poor or incomplete API specifications. Currently, to have an API included in the catalog, you must manually submit your APIs.json specification.

When using APIs.io, APIs.json, and API Commons, we should treat these resources with respect; they’re beautifully clean, knowledge-rich sites with no advertising to fund them, paid for by the likes of Kin Lane (API Evangelist), Nicolas Grenier (Typeform Inc.), and Steven Willmott (Timewarp Labs).

Useful links

  • Nordic APIs on Linkset
  • API Commons
  • APIs.json
  • Open API Specification
  • Asynchronous API Specification
  • GraphQL Specification
  • API – more than a payload presentation
  • APIs.io searchable catalog
  • API Evangelist

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X
  • Share on Reddit (Opens in new window) Reddit
  • Email a link to a friend (Opens in new window) Email
  • Share on WhatsApp (Opens in new window) WhatsApp
  • Print (Opens in new window) Print
  • Share on Tumblr (Opens in new window) Tumblr
  • Share on Mastodon (Opens in new window) Mastodon
  • Share on Pinterest (Opens in new window) Pinterest
  • More
  • Share on Bluesky (Opens in new window) Bluesky
  • Share on LinkedIn (Opens in new window) LinkedIn
Like Loading...

    I work for Oracle, all opinions here are my own & do not necessarily reflect the views of Oracle
