Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Tag Archives: Technology

MCP Security

30 Thursday Oct 2025

Posted by mp3monster in AI, development, General, Technology


Tags

AI, artificial-intelligence, attack, attacks, cybersecurity, MCP, model context protocol, Paper, Security, Technology, vectors

MCP (Model Context Protocol) has really taken off as a way to amplify the power of AI, providing tools for utilising data to supplement what a foundation model has already been trained on, and so on.

With the rapid uptake of a standard and technology that has been development/implementation led, aspects of governance and security can take time to catch up. While the use of credentials with tools and how they propagate is well covered, there are other attack vectors to consider. On the surface, some may seem superficial until you start looking more closely. A recent paper, Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions, highlights this well, and I thought I'd explain some of the vectors (even if only for my own benefit).

I've also created a visual representation of the vectors described, based on the paper.

The inner ring represents each threat, with its color denoting the likely origin of the threat. The outer ring groups threats into four categories, reflecting where in the lifecycle of an MCP solution the threat could originate.

I won’t go through all the vectors in detail, though I’ve summarized them below (the paper provides much more detail on each vector). But let’s take a look at one or two to highlight the unusual nature of some of the issues, where the threat in some respects is a hybrid of potential attack vectors we’ve seen elsewhere. It will be easy to view some of the vectors as fairly superficial until you start walking through the consequences of the attack, at which point things look a lot more insidious.

Several of the vectors can be characterised as forms of spoofing, such as namespace typosquatting, where a malicious tool is registered on a portal of MCP services appearing to be a genuine service (for example, banking.com versus bankin.com); a detection sketch follows below. Part of the problem here is that there are a number of MCP registries/markets, but the governance they use to mitigate abuse varies, and as the paper points out, those with stronger governance tend to have smaller numbers of services registered. This isn't a new problem; we have seen it before with other types of repositories (PyPI, npm, etc.). The difference here is that the attacker could not only install malicious logic but also commit identity theft, where a spoofed service mimics the real service's need for credentials. As the UI is likely to be primarily textual, it is far easier to deceive (compared to, say, a website, where an adrift layout or the URIs of graphics can give clues that something is wrong). A similar vector is Tool Name Conflict, where the tool metadata provided makes it difficult for the LLM to distinguish the correct tool from a spoofed one, leading the LLM to trust the spoofed tool.
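
As a purely illustrative sketch (mine, not something from the paper), a registry or client could flag likely typosquats by comparing a newly registered tool name against a list of established names using edit distance; the tool names below are hypothetical:

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of well-known tool names.
KNOWN_TOOLS = {"banking.com", "weather-service", "corp-file-store"}

def flag_typosquats(candidate: str, max_distance: int = 2) -> list[str]:
    """Return known names the candidate is suspiciously similar, but not identical, to."""
    return [known for known in KNOWN_TOOLS
            if 0 < edit_distance(candidate.lower(), known.lower()) <= max_distance]

print(flag_typosquats("bankin.com"))  # ['banking.com'] -> worth closer scrutiny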

Another vector, which looks a little like search engine gaming (additional text is hidden in web pages to help websites improve their search rankings), is Preference Manipulation Attacks, where the tool description can include additional details to prompt the LLM to select one solution over another.

The last aspect of MCP attacks I wanted to touch upon is that, as an MCP tool can provide prompts or LLM workflows, it is possible for the tool to co-opt other utilities or tools to action the malicious operations. For example, an MCP-provided prompt or tool could ask the LLM to use an approved FTP tool to transfer a file, such as a secure token, to a legitimate service, such as Microsoft OneDrive, but using a different account from the approved one for that task. While the MCP spec says that such external connectivity actions should require the tool to ask for approval, if we see a request coming from something we trust, it is very typical to just say okay without looking too closely.
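
As another illustrative sketch (my own idea, not something mandated by the MCP spec), a host could treat the destination identity as part of the approval decision, auto-approving only known service-and-account pairs, however trusted the requesting tool is; the pairs here are hypothetical:

# Destinations we are happy to auto-approve transfers to.
APPROVED_DESTINATIONS = {
    ("onedrive", "corp-account"),
    ("sftp", "partner-drop"),
}

def requires_human_approval(service: str, account: str) -> bool:
    """Auto-approve only known service/account pairs; anything else is
    surfaced to the user regardless of which tool made the request."""
    return (service.lower(), account.lower()) not in APPROVED_DESTINATIONS

print(requires_human_approval("onedrive", "corp-account"))   # False: auto-approved
print(requires_human_approval("onedrive", "attacker-acct"))  # True: ask the user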

Even with these few illustrations, tooling interaction with an LLM comes with deceptive risks, partially because we are asking the LLM to work on our behalf, but we have not yet trained LLMs to reason about whether an action’s intent is in the user’s best interests. Furthermore, we need to educate users on the risks and telltale signs of malicious use.

Attack Vector Summary

The following list provides a brief summary of the attack vectors. The original paper examines each in greater depth, illustrating many of the vectors and describing possible mitigation strategies. While many technical mitigations are possible, one of the most valuable things is to help potential users understand the risks, use that understanding to guide which MCP solutions are used, and watch for signs that things aren't as they should be.



AI to Agriculture

17 Friday Oct 2025

Posted by mp3monster in General, Oracle, Technology


Tags

AI, artificial-intelligence, Cloud, development, Oracle, Technology

Now that details of the product I've been involved with for the last 18 months or so are starting to reach the public domain (such as the recent announcement at the UN General Assembly on September 25), I can talk a bit about what we've been doing. Oracle's Digital Government Global Industry Unit has been working on a solution that can help governments address the questions of food security.

So what is food security? The World Food Programme describes it as:

Food security exists when people have access to enough safe and nutritious food for normal growth and development, and an active and healthy life. By contrast, food insecurity refers to when the aforementioned conditions don’t exist. Chronic food insecurity is when a person is unable to consume enough food over an extended period to maintain a normal, active and healthy life. Acute food insecurity is any type that threatens people’s lives or livelihoods.

World Food Programme

By referencing the World Food Programme, it would be easy to interpret this as a third-world problem. But in reality, it applies to just about every nation. We can see this in the effect the war in Ukraine has had on crops like wheat, as reported by organizations such as CGIAR, the European Council, and the World Development journal. But global commodities aren't the only reason for every nation to consider food security. There are other factors, such as food miles (an issue that has perhaps received less attention over the last few years) and national farming economics (a subject that comes up everywhere from Clarkson's Farm, if you want it through a humour filter, to dry UK government reports and US Department of Agriculture publications).

Looking at it from another perspective, some countries have a notable segment of their export revenue coming from the production of certain crops. We know this from simple anecdotes such as 'for all the tea in China', and from the way coffee variants are often referred to by their country of origin (Kenyan, Colombian, etc.). For example, palm oil is the fourth-largest economic contributor in Malaysia (here).

So, how is Oracle helping countries?

One of the key means of managing food security is understanding food production and measuring the factors that can impact it (both positively and negatively), which range from the obvious, like weather (and its relationship to soil, water management, etc.), to what crop is being planted and when. All of this can then be overlaid with government policies for land management and farming subsidies (paying farmers to help them diversify crops, periodically allowing fields to go fallow, or subsidizing the cost of fertilizer).

Oracle is a technology company capable of delivering systems that operate at scale. Technology, and the recent progress in using AI to help solve problems, is not new to agriculture; in fact, several trailblazing organizations in this space run on Oracle's Cloud (OCI), such as Agriscout. Before people start assuming that this is another story of a large cloud provider eating its customers' lunch: far from it. Many of these companies operate at the farm or farm-cooperative level, often collecting data through aerial imagery from drones and aircraft, along with ground-based sensors. Some will also leverage satellite imagery for localized areas to complement these other sources. This is where Oracle starts to differentiate itself: by taking high-resolution satellite imagery at a national scale (think about the resolution needed to differentiate wheat from maize, spot rice or carrots, or distinguish an orchard from a natural copse of trees). To get an idea, look at Google Earth and try to identify which crops are growing.

We take the satellite multi-spectral images from each 'satellite overflight' and break them down, working out what the land is being used for (ruling out roads, tracks, buildings, and other land usage). To put the effort involved into context, the UK is 24,437,600,000 square meters, and it is only 78th in the list of countries by area (here). It's this level of scale that makes it impractical to use more localized data sources (imagine how many people and drones would be needed to fly over every possible field in a country, even at a monthly frequency).
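
To give a flavor of the kind of per-pixel processing involved (an illustrative sketch, not the product's actual implementation), a common first step with multi-spectral imagery is deriving a vegetation index such as NDVI from the red and near-infrared bands, which helps separate vegetation from roads, buildings, and water before any crop classification:

import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense vegetation; bare soil, roads, and water
    sit close to zero or below."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard against divide-by-zero

# Toy 2x2 'scene': left column vegetation-like, right column road-like.
nir_band = np.array([[0.60, 0.20], [0.55, 0.18]])
red_band = np.array([[0.10, 0.18], [0.08, 0.17]])
vegetation_mask = ndvi(nir_band, red_band) > 0.3  # threshold is illustrative
print(vegetation_mask)  # [[ True False] [ True False]]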

This only solves the first step of the problem, which is to tell us the total crop-growing area. It doesn't tell us whether the crop will actually grow well and produce a good yield. For this, you're going to need to know about the weather (current, forecast, and historic trends), soil chemical composition and structure, and information such as elevation, slope angle, etc., combined with an understanding of optimal crop-growing needs (water levels, sunlight duration, atmospheric moisture, soil types and health). Good crops can be ruined by it simply being too wet to harvest them or to store them dry. All these factors need to be taken into account for each 'cell' we're detecting, so we can calculate with any degree of confidence what can be produced.

If this isn't hard enough, we need to account for the fact that some crops may have several growing seasons per year, or that succession planting is used, where carrots may be grown between March and June, followed by cucumbers through to August, and so on.

Using technology

Hopefully, you can see that there are tens of millions of data points being processed every day, and Oracle's data products can handle that. As a cloud vendor, we're able to provide the computing scale and, importantly, the elasticity to crunch the numbers quickly enough that users benefit from the revised figures and can work out mitigation actions to communicate to farmers. As mentioned, this could be planning where best to use fertilizer, or publishing advice on when to plant which crops for optimal growing conditions. In the worst cases, it means recognizing that there is going to be a national shortage of a staple crop, starting to purchase crops from elsewhere, and ensuring that when those crops arrive in port they get moved out to the markets (as with all large operations, and as we saw with the Covid crisis, if you need to react quickly, more mistakes get made and costs grow massively, driven by demand).

I mentioned AI. If you have more than the most superficial awareness of AI, you will probably be wondering how we use it, given the problem of AI hallucination; the last thing you want is a model that, when asked to evaluate something, hallucinates (injecting data or facts that are not based on the data you have collected) to create a projection. At worst, this would mean indicating that everything is going well when things are about to go badly wrong. So, first, most of the AI discussed today is generative, and that is where we see issues like hallucinations. We have adopted, and are adopting, this aspect of AI where it fits best, such as explainability and informing visualization. But Oracle is also making heavy use of the more traditional ideas of AI, in the form of Machine Learning and Deep Learning, which are best suited to heavy numerical computation. That is not to say there aren't challenges to be addressed in training the AI.

Conclusion

When it comes to expertise in the specialized domains of agriculture and government, Oracle has a strong record of working with governments and government agencies going back to its inception. But we've also worked closely with the Tony Blair Institute for Global Change, which works with many national government agencies, including in the agriculture sector.

My role in this has been as an architect, focused primarily on applying integration techniques (enabling scaling and operational resilience, data ingestion, and how our architecture can evolve as we take on more and more data sources) and on applying AI (in the generative domain). We're fortunate to be working alongside two other architects who cover other aspects of the product, such as infrastructure needs and the presentation tier. In addition, there is a specialist data science team with more PhDs and related awards than I can count.

Oracle's Digital Government business is more than just this agriculture use case; we've identified other use cases that can benefit from the data, and the data volumes, being handled here. This is in addition to bringing in versions of Oracle's better-known products, such as ERP and Healthcare (digital health records management, vaccine programmes, etc.), and national Energy and Water offerings (metering, infrastructure management, etc.).

For more on the agricultural product:

  • Government Data Intelligence for Agriculture
  • Agriculture on the Digital Government page
  • TBI on Food Security
  • iGrow News
  • Oracle Launches AI Platform to Strengthen Government Led Agricultural Resilience – AgroTechSpace


Fluent Bit 4.0.4

21 Monday Jul 2025

Posted by mp3monster in development, Fluentbit, General, Technology


Tags

characters, encoding, filters, Fluent Bit, fstat, GBK, Lua, Open Telemetry, ShiftJIS, tail, Technology, WhatWG

The latest release of Fluent Bit is only considered a patch release (based on SemVer naming), but given the enhancements included, it would have been reasonable to call it a minor release. There are some really good enhancements here.

Character Encoding

As all mainstream programming languages have syntaxes that lend themselves to English or Western languages, it is easy to forget that a lot of the global population use languages that don't share this heritage, and whose text is often stored in encodings other than UTF-8. For example, according to the World Factbook, 13.8% of the world speaks Mandarin Chinese. While this doesn't immediately translate into written communication or language use with computers, it is a clear indicator that, when logging, we need to support log files encoded for such languages, for example Simplified Chinese with recognized encodings such as GBK and Big5. However, internally Fluent Bit transmits the payload as JSON, so the encoding needs to be handled, which means log file ingestion with the Tail plugin ideally needs to support these encodings. To achieve this, the plugin features a native character-encoding engine that is directed using a new attribute called generic.encoding, which specifies the encoding the file is using.

service:
  flush: 1

pipeline:
  inputs:
    - name: tail
      path: ./basic-file.gbk
      read_from_head: true
      exit_on_eof: true
      tag: basic-file
      generic.encoding: gbk

  outputs:
    - name: stdout
      match: "*"

The encoders supported out of the box, and their recognized names, are:

  • GB18030 (the Chinese government's Simplified Chinese standard, called Information Technology – Chinese coded character set)
  • GBK (an earlier Simplified Chinese standard, which GB18030 extends)
  • UHC (Unified Hangul Code, also known as Extended Wansung, for Korean)
  • ShiftJIS (Japanese characters)
  • Big5 (for Chinese as used in Taiwan, Hong Kong, Macau)
  • Win866 (Cyrillic Russian)
  • Win874 (Thai)
  • Win1250 (Latin 2 & Central European languages)
  • Win1251 (Cyrillic)
  • Win1252 (Latin 1 & Western Europe)
  • Win1254 (Turkish)
  • Win1255 (also known as cp1255 and supports Hebrew)
  • Win1256 (Arabic)
  • Win2513 (suspect this should be Win1253, which covers the Greek language)

These encodings are governed by the WHATWG (Web Hypertext Application Technology Working Group) Encoding specification. WHATWG is not a well-known name, but the group has an agreement with the better-known W3C covering various HTML and related standards.

The Win**** encodings are Windows-based formats that predate the adoption of UTF-8 by Microsoft.

Log Rotation handling

The Tail plugin has also seen another improvement. Working with remote file mounts has been challenging, as it is necessary to ensure that file rotation is properly recognized. To improve file-rotation recognition, Fluent Bit has been modified to take full advantage of fstat. From a configuration perspective, we won't see any changes, but in terms of handling edge cases the plugin is far more robust.

Lua scripting for OpenTelemetry

In my opinion, the Lua plugin has been an underappreciated filter. It provides the means to create customized filters and transformations with minimal overhead and effort. Until now, Lua has been limited in its ability to interact with OpenTelemetry payloads. This has been rectified by introducing a new callback signature with an additional parameter, which allows access to the OTLP attributes, enabling them to be examined and, if necessary, a modified set to be returned. The new signature does not invalidate existing Lua scripts with the older three or four parameters, so backward compatibility is retained.

The most challenging aspect of using Lua scripts with OpenTelemetry is understanding the attribute values. Given this, let’s just see an example of the updated Lua callback. We’ll explore this feature further in future blogs.

-- New five-parameter callback: 'group' exposes the OTLP resource/scope
-- attributes, and 'metadata' the per-record OTLP metadata.
function cb(tag, ts, group, metadata, record)
  -- Copy the OTel service.name resource attribute into the record itself
  if group['resource']['attributes']['service.name'] then
    record['service_name'] = group['resource']['attributes']['service.name']
  end

  -- Raise INFO records (severity number 9) to WARN (13)
  if metadata['otlp']['severity_number'] == 9 then
    metadata['otlp']['severity_number'] = 13
    metadata['otlp']['severity_text'] = 'WARN'
  end

  -- 1 = record modified; return the (possibly updated) metadata and record
  return 1, ts, metadata, record
end

Other enhancements

With nearly every release of Fluent Bit, you can find plugin enhancements to improve performance (e.g., OpenTelemetry) or leverage the latest platform enhancements, such as AWS services.

Links

  • Fluent Bit
  • Fluent Bit on GitHub
  • Logs and Telemetry (Using Fluent Bit)


Cookie Legislation

28 Saturday Jun 2025

Posted by mp3monster in General, Technology


Tags

cookies, law, legislation, security, Technology

Just about any web-based application will have cookies, even if they are only being used as part of session management. And if you're in the business-to-consumer space, you'll likely use tracking cookies to help understand your users.

Understanding what is required depends on which part of the world your application is being used in. For the European Union (EU) and the broader European Economic Area (EEA), this is easy as all the countries have ratified the GDPR and several related laws like the ePrivacy Directive.

For North America (USA and Canada), the issue is a bit more complex as it is a network of federal and state/province law. But the strictest state legislation, such as California, aligns closely with European demands, so as a rule of thumb, meet EU legislation, and you should be in pretty good shape in North America (from a non-lawyer’s perspective).

The problem is that the EEA accounts for 30 countries (see here); add the USA and Canada, and we have 32 of the UN's 195 recognized states (note there is a difference between UN membership and UN recognition). So, how do we understand what the rules are for the remaining 163 countries?

I’m fortunate to work for a large multinational company with a legal team that provides guidelines for us to follow. However, I obviously can’t share that information or use it personally. Not to mention, I was a little curious to see how hard it is to get a picture of the global landscape and its needs.

It turns out that getting a picture of things is a lot harder than I'd expected. I'd assumed that finding aggregated guidance would be easy (after all, there are great sites like DLA Piper's and UN Trade & Development's that cover the more general data protection laws), but far from it. I can only attribute this to the fact that there is a strong business in managing cookie consents.

The resources I did find that looked comprehensive on the subject are:

  • Securiti’s Q2 2024 report
  • Bird & Bird’s Global Cookie Review
  • Termly has a good resource on this area.
  • International Association of Privacy Professionals (IAPP) – has lots of interesting resources on Cookies, but doesn’t provide a consolidated global view.


Fluent Bit and AI: Unlocking Machine Learning Potential

30 Monday Dec 2024

Posted by mp3monster in Fluentbit, General, Technology


Tags

AI, artificial-intelligence, Cloud, Data Drift, development, Fluent Bit, GenAI, Machine Learning, ML, observability, Security, Technology, Tensor Lite, TensorFlow

These days, everywhere you look there are references to Generative AI, to the point that you might ask: what have Fluent Bit and GenAI got to do with each other? GenAI has the potential to help with observability, but it also needs observation to measure its performance, detect whether it is being abused, and so on. You may recall that a few years back Microsoft was trialling new AI features for Bing, and after only a couple of days of use it had been recorded generating abusive comments and so on (Microsoft's Tay is such an example).

But this isn’t the aspect of GenAI (or the foundations of AI with Machine Learning (ML)) I was thinking about. Fluent Bit can be linked to GenAI through its TensorFlow plugin. Is this genuinely of value or just a bit of ‘me too’?

There are plenty of backend use cases once the telemetry has been incorporated into an analytics platform, for example:

  • Making it easy to query and mine the observability data, for example via natural-language search, to simplify expressing what is being looked for.
  • Outlier / Anomaly detection – when signals, particularly metrics, diverge from the normal patterns of behavior, we have the first signs of a problem. This is more Machine Learning than generative AI.
  • Using AI agents to tune monitoring thresholds and alerting scenarios

But these are all backend, big data style use cases and do not center on Fluent Bit’s core value of getting data sources to appropriate destination systems for such analysis or visualization.

To incorporate AI into Fluent Bit pipelines, we need to overcome a key issue: AI tends to be computationally heavy, making it potentially too slow for the streams of signals generated by our applications, and too expensive, given that most logs reflecting 'business as usual' are, in effect, low value.

There are some genuine use cases where lightweight AI can deliver value. First, we should be a little more precise: the TensorFlow plugin actually uses the TensorFlow Lite version, also known as LiteRT. The name comes from the fact that it is a lightweight solution intended to be deployable on small devices (by AI standards), which fits the Fluent Bit model of having a small footprint.

So, where can we apply such use cases?

  • Translating stack traces into actionable information can be challenging. A trained ML or AI model can help classify and characterize the cause of a stack trace. As a result, we can move from the log to triggering appropriate actions.
  • Targeted use cases where we've filtered out most signal data to help analyze specific events – for example, we want to prevent the propagation of PII data downstream. Some PII data can be easily isolated through patterns using REGEX; for example, credit card numbers follow a pattern of four groups of four digits, and phone numbers and email addresses can also be easily identified. However, postal addresses aren't easy, particularly when handling multinational addresses, where the postal code/zip code can't be used as an indicative pattern. Using AI to help with such checks means we must filter the signals so that only messages that could accidentally carry such information are examined (see the sketch after this list).
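
As a purely illustrative sketch (not an out-of-the-box Fluent Bit feature), the cheap pattern-based pre-filter stage could look like the following; only records that trip one of the obvious PII patterns would be routed on to the heavier ML/AI check:

import re

# Cheap, pattern-based PII pre-filters; the patterns are deliberately simple.
PII_PATTERNS = {
    "card":  re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),   # four groups of four digits
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d ()-]{8,}\d"),
}

def needs_deeper_check(log_message: str) -> bool:
    """True if the message trips an obvious PII pattern and therefore
    warrants the more expensive AI/ML examination."""
    return any(p.search(log_message) for p in PII_PATTERNS.values())

for msg in ["user logged in",
            "payment with 1234 5678 9012 3456 accepted",
            "contact jane@example.com for details"]:
    print(needs_deeper_check(msg), "->", msg)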

When adopting AI in such scenarios, we have to be aware of the problems that can impact the use of ML and AI. These issues are less high profile than hallucinations, but just as important. The software we're observing will change over time; as a result, payloads shift (technically referred to as data drift) and the detection rate can drop. So we need to measure the efficacy of the model, bearing in mind that the scenario being detected may also change in volume, reflecting changes in software usage and/or changes in how the solution works.

There are ways to help address such considerations, such as tracking false-positive outcomes and, if the model can provide confidence scoring, watching for trends in the score.

Conclusion

There are good use cases for using Machine Learning (and, to an extent, Artificial Intelligence) within an observability pipeline – but we have to be selective in its application as:

  • The cost of the computation can outweigh the benefits
  • The execution time for such computation can be notably slower than our pipeline, leading to risks of back pressure if applied to every event in the pipeline.
  • Effectiveness can degrade as data drift occurs (we might initially see very good results, but then things fall off).

Possibly the most useful application is when the AI/ML engine has been trained to recognize patterns of events that precede a serious operational issue (strictly, this is a use of ML).

Forward-looking

The true potential of GenAI comes when we move beyond isolating potential faults based on pattern recognition to using AI to help recommend or even trigger remediation processes.


Speaker Upgrade – how I decided what was good

08 Tuesday Oct 2024

Posted by mp3monster in Music


Tags

Acoustic Energy, AE500, audio, Beth Orton, Bowers & Wilkins, Chord, Cosmotron, Elbow, GoGo Penguin, Music, new-music, Peter Gabriel, reviews, Rush, Technology, Tori Amos

With some recent good news from work, I decided to treat myself to a speaker upgrade: Acoustic Energy AE500s sat on some IsoAcoustics Aperta stands. While these would be considered audiophile, they're still at the lower end; we're not talking audio exotica like the B&W Nautilus at nearly a hundred thousand pounds or the Cosmotron 130 at around the million-pound mark.

Bowers & Wilkins – Nautilus Speaker – a snip at £90,000

So how can I decide on and justify the expenditure, even if it's a fraction of the loose change from the back of the sofa compared to buying these monsters? As friends have said to me in the past, the Samsung speakers on my stereo are just as good. Well, there are a raft of things that will prevent speakers from performing well, from positioning to the quality of their source.

Cosmotron speaker – priced at £1M

The source material is often one of the biggest issues, particularly for rock and pop pushing the envelope with CDs. We saw what became known as the loudness wars, where the dynamic range of the music was reduced. But music with a wide dynamic range on good speakers is great. One characteristic of good speakers is the containment of distortion: if you have a song that is often quiet with occasional moments of loudness, the drivers (cones) can react properly when a sudden spike in the signal occurs; the sudden movement of the magnet driving the cone is handled, rather than the speaker surface straining against its mounts.

Better speakers give better control of the cone (the visible bit of the speaker), making the cone's movements more precise and revealing more detail in the music. You'll go from hearing a cymbal to being able to tell how the cymbal was struck; a drum is no longer just a thump, but something you can hear resonate.

The cone moves backwards and forwards to move the air, which affects the air inside the speaker, not just outside. We don't want the speaker casing to behave as a suction cup, preventing air movement and inhibiting the cone's movement.

Improvements in speaker performance can help you recognize little details. For example, with a vocal performance, you’ll start to hear fine details, such as air drawn over the microphone as the singer inhales. You can also hear changes as a singer moves close to or away from the microphone, even if they alter their vocal volume.

I was experimenting with a loaned hi-fi kit once, listening to a Jamie Cullum live performance, and a detail that leapt out as I swapped in and out a piece of equipment was what sounded like background ambient noise, such as air conditioning. But suddenly, it became clear I wasn’t picking up ambient noise but the fan that was positioned behind Jamie.

It is always useful to have some good go-to pieces of music for trying out hi-fi. Being familiar with the music and knowing the production values applied means that if there are improvements, you’ll pick them up. So, what are my go-to pieces at the moment?

  • Tori Amos – Me and a Gun – although any part of Little Earthquakes is good. This song is an a cappella performance recounting a rape. With just a voice, the miking of the vocal is very close, and you can hear the inhalation and the rawness of the performance.
  • Beth Orton – Weather Alive – probably Beth's best album to date. Here is another incredible voice, but one more delicate than Tori Amos, so the better the HiFi, the purer the performance will sound.
  • GoGo Penguin – Branches Break from Man Made Object – although just about any of their work will be good. This is a trio of piano, bass, and drums in a jazz/minimalist classical/chill-beat crossover. This is a recording that should feel like it's being performed in a big, live-sounding room, but you'll hear each instrument clearly, right down to recognizing the loudness, varying attack, and decay of each note played.
  • Rush – Red Sector A from Grace Under Pressure – perhaps not the best-produced album in the world, but from before the loudness wars really took hold. Rush were a real bunch of prog-rock musos, with the late Neil Peart considered by many to be one of the best drummers ever. This track will test the HiFi in terms of control – the drumming has a huge range of very fine cymbal work, some really deep bass drums, and tom-tom runs that make Phil Collins' In The Air Tonight sound like child's play.
  • Elbow – One Day Like This – The Seldom Seen Kid (Live At Abbey Road Studios) – with a high-quality recording (Abbey Road's special Half Speed Mastered edition), you'll get a sense of staging and, as the song grows, scale with the choir. The strings will be natural and nuanced, and in the early parts of the performance you'll hear how dry Guy's voice is – not a hint of vibrato or sibilance.
  • Peter Gabriel – The Book of Love – from Scratch My Back – another performance that should give a sense of staging and breadth, with great dynamics as the strings swell and subside, fronted by Peter's voice, which should sound weathered and world-worn.

The list of music could go on. But, ultimately, it’s a very individual choice.

Final anecdote

Buying hi-fi is subject to the law of diminishing returns. As components get better and better, the parts needed become more expensive and are produced in smaller numbers, making the R&D more expensive, with costs to be covered by a small number of sales. But still, these esoteric, bank-crushing systems are amazing.

Some years back, I went to a hi-fi show; if you've never been to such a show, then picture this. A corridor of hotel rooms is stripped of the beds and furnishings, other than some chairs. Each company has a room and typically sets up its demo kit where the head of the bed would usually be, with everything positioned and mounted on professional hi-fi tables, etc., for the absolute best performance. The classic layout of a hotel room means that as you walk in, you won't immediately see what is set up, so the seconds it takes to walk past what is normally the bathroom are almost a blind test: you can't see the hi-fi, but you can hear it.

So here we are, starting to walk into a room that was pretty busy, so we didn't see the main space for a minute or so, and we hear a performance of a beautifully played unaccompanied double bass. I could have sworn there was a musician in the room performing; the performance had that warmth, depth, and volume you'd expect, with no hint of any recording artifacts. When we got to the main part of the room, we were stunned to see two speakers, big and rather boxy, with none of the audio-exotica beauty of the Nautilus or Cosmotron: definitely all function and little thought to form. With them sat three large pieces of silver hi-fi on big chunky slabs of marble on the floor, what I assume to have been a pre-amp and a power amp for each speaker, plus a source, which might have been a turntable, but honestly, I can't remember. Whatever it was, the sound was breathtakingly natural.

Chord Ultima Monoblock Power Amplifier – £35,000 per unit; you'd need two, plus a pre-amp, for a basic arrangement.

I do remember the price tags; at the time, prices were around £50k a component, so little change out of a quarter of a million. It left me wishing I'd won the national lottery.


Cloud Native Architecture book

31 Friday May 2024

Posted by mp3monster in Books, General


Tags

BPB, Cloud Native, CNCF, Fernando Harris, Kubernetes, organization, people, process, Technology

It's a busy time for books at the moment. I am excited and pleased to hear that Fernando Harris' first book project has been published. It can be found on amazon.com and amazon.co.uk, among other sites.

Having been fortunate enough to be a reviewer of the book, I can say that what makes it different from others that examine cloud-native architecture is its holistic approach to the challenge. Successful adoption of cloud-native approaches isn't just technical (although this is an important element that the book addresses); it also takes in the organizational, process, and people dimensions. Without these dimensions, the best technology in the world will only succeed by chance rather than by intent.

As a result, this book bridges and connects the Kubernetes technical content (the technical how-to books we typically see from publishers like Manning and O'Reilly) and the more organizational leadership books that you might expect from IT Revolution (Gene Kim et al.).

It's a read I'd recommend to any architect or technical lead who wants to understand the different aspects of achieving cloud-native adoption rather than just mastery of an individual technology.


Simplifying the escaping of JSON strings

08 Thursday Jun 2023

Posted by mp3monster in development, General, Technology


Tags

CLI, development, JQ, JSON, software, Technology, utility

When you're testing apps, it is pretty common to want to send JSON via curl to a local endpoint. The problem is that this usually means the string you provide to curl needs to have characters escaped, such as quote marks. By hand, this can be irritating to sort out, particularly if you're using an IDE to make sure the JSON is correct. I concluded this is hardly a new problem; someone must have produced a nice little multi-platform command-line utility that can do it for you. The result was a bit more surprising.

There are plenty of online utilities that solve it, but not if you're working with data you don't want to share publicly (or want to avoid the fiddling around with copy-pasting into your browser). There's nothing wrong with these tools, but you can't script them without resorting to RPA (Robotic Process Automation) either. Here are a couple of services I found that are straightforward and, when I've tried them, not plagued by annoying ads:

  • https://wtools.io/json-escape-unescape
  • https://www.jsonescaper.com/
  • https://toolslick.com/text/escaper/json
  • https://codebeautify.org/json-escape-unescape
  • https://appdevtools.com/json-escape-unescape

But finding command-line tools, well, finding an answer, has proven a bit more challenging. For removing escaped characters, you could use jq, but we actually want to go the other way, so that we can use curl with JSON that has been escaped. I have come across conversations covering the use of bash (making use of awk and sed), plus details of how the manipulation could be done in various languages, so you could code your own solution if so inclined (the coding is unlikely to take much effort, but testing the permutations will).
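
As an illustration of how little code the roll-your-own route needs (a minimal sketch, assuming Python 3 is available), json.dumps applied to a string value produces a quoted, escaped version of it:

import json
import sys

# Escape a JSON document so it can be pasted as a single quoted string
# (e.g. into a curl command). Reads stdin, writes the escaped form to stdout.
raw = sys.stdin.read()
json.loads(raw)          # fail fast if the input isn't actually valid JSON
print(json.dumps(raw))   # dumps() on a str yields a quoted, escaped string

Going the other way is just json.loads on the escaped string, which is the case jq already handles.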

The one solution I found that meant I could escape (or reverse) JSON locally is a plugin for VS Code called, appropriately, JSON-escaper, which does what is needed in a nice, clean manner. All credit to Joshua Poehls for the tool.

The JSON-escaper solution is built on top of a more generic JavaScript utility that addresses the escaping of special characters, which can be found here.


Node (npm) package licensing

05 Tuesday Jul 2022

Posted by mp3monster in development, General, node.js, Technology


Tags

code, developer, development, Licensing, node.js, package, Technology

When building Node solutions, even if you're not going to publish the code to a public repository, you're likely to be using package.json to declare the dependencies for your app. Doing this makes it easier to build and deploy a utility. But if you're conversant with several languages, there is a tendency to just adapt your existing skills to work with the others. The downside of this is that small tooling nuances can catch you off guard and consume time while you figure them out. The way licensing works with npm packages (as shown below) is one possible case.

{
  "name": "graph-svr",
  "version": "1.0.0",
  "description": "packages needed for this service",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "@graphql-tools/graphql-file-loader": "^7.3.11",
    "@graphql-tools/load-files": "^6.5.4",
    "@graphql-tools/schema": "^8.3.10",
    "@graphql-yoga/node": "^2.4.1",
    "apollo-datasource-rest": "^3.5.2",
    "apollo-server": "^3.6.7",
    "graphql": "^16.4.0",
    "graphql-tools": "^8.2.8"
  },
  "author": "Phil Wilkins",
  "license": "MIT"
}

If you create the package.json using npm init, it is fairly common to leave values at their defaults. In the case of the license, the default is an ISC license. This is easily forgotten. The problem here is twofold:

  • Does the license set reflect the constraints of the dependencies and their licenses?
  • Does the default license reflect the position you want?

Looking at the latter point first: this is important, as organizations have matured (and tooling has greatly improved) when it comes to understanding how open-source licensing can impact them. This is particularly important for any organization leveraging open source as part of its revenue-generating activities, whether 'as a service' or by selling software solutions. If you put the wrong license here, the license-checking tools that often protect code repositories may reject your code, even in internal-only use cases (yes, this tripped me up).

To help overcome this issue, you can install a tool that will analyze the dependencies (and optionally their dependencies) and report back on your license exposure. This tool is called license-report. Once installed (npm install -g license-report), we just need to point the tool at the package.json file, e.g. license-report package.json. We can make the results a lot more consumable by outputting the content in one of a number of formats, such as a simple text table.

From this, you can set your license declaration in package.json or validate that your preferred license won't conflict.


Apollo GraphQL – some pointers

16 Thursday Jun 2022

Posted by mp3monster in development, General, languages, node.js, Technology


Tags

API, code, development, GraphQL, javascript, node.js, Technology

I've designed a variety of GraphQL schemas and developed microservice backends, but until recently I hadn't done much with configuring the Apollo implementation of a GraphQL server. This may reflect the fact that my understanding of JavaScript doesn't extend into the world of Node.js as much as I'd like (the problem with being a multi-language developer is that you're likely to find your way around many languages but never be a master of one). Anyway, the following content is about the implementation of the GraphQL server part of a solution. These pointers may be just for my own benefit, but you might find them helpful as well.


To make it easy to reference the code, I've added markers (n) into the code, where n is a number. These are not part of the code; they are there to make the different lines referenceable. Where code should go but is not relevant to the point being made, I've added an ellipsis (…).

Dynamic loading and server configuration

import { ApolloServer } from 'apollo-server';
import { loadFilesSync } from '@graphql-tools/load-files';
import { resolvers } from './resolvers.js';   (1)
import ProviderInternalAPI from './ProviderInternalAPI.js'; (1)
import EventsInternalAPI from './EventsInternalAPI.js';  (1)
const server = new ApolloServer({
  debug : true,    (2)
  typeDefs: loadFilesSync('./schema.graphql'),   (3)
  resolvers,
  dataSources: () => {
    return {
      eventsInternalAPI: new EventsInternalAPI(),    (4)
      providerInternalAPI: new ProviderInternalAPI() (4)
    };
  }});

There is the potential to dynamically load the resolvers rather than importing each JavaScript file, as we see on the lines marked (1). The mechanics to do this are documented here; it would be cool if an opinionated implementation was provided. As shown by (3), we can load an independent schema file. The Apollo example approach for this didn't seem to work for us, although both approaches make use of graphql-tools in a synchronous manner.

We can switch on debugging (2) for the GraphQL server, although the level of information published doesn’t appear to be significant. Ideally this setting is changed for production.

Defining the resolvers

The name of each resolver (1) must correlate to the name in the schema of the mutation or query (not the type, as you might expect coming from Java). Often we don't need all the parameters for the resolver. The documentation describes replacing each unused parameter with one or more underscores (i.e. _, __), the underscore denoting a field not in use. However, we can satisfy the indication of not being used, yet keep the meaning of each position, by using an underscore followed by a name (i.e. _parent, _args), as shown in (2).

By taking the response into a variable (3), we can optionally log it. Trying to return directly from the invocation line would result in the handler object rather than the payload itself. By taking the result into a variable, we can log the content if desired and then return it.

The use of the backtick is a JavaScript (ES6 template literal) feature. It allows us to incorporate variables into a string by referencing them within ${} (4).

We need to supply the GraphQL server with instances of the layer of code that will interact with the resolvers (the data sources). We can instantiate the instances in the declaration. The naming of each object is important (4), as it is how the resolvers in resolvers.js reference them.

import { useLogger } from "@graphql-yoga/node";
...
latestEvent (1): async (_parent, _args, { dataSources }, _info) (2)   => {
      if (log) { console.log("resolvers - get latest event"); }
      let responseValue = await dataSources.eventsInternalAPI.getLatestEvent(); (3)
      if (log) { console.log(`Resolver response for latest event:\n ${responseValue}`); }   (4)
      return responseValue;
    },

Resolver declarations

 Query: {  ...
 },
  
Mutation: {...
},
  Event: {  (1)
    providers: async (event, args, { dataSources }, info) => {
      if (log) { console.log(`going to locate ${event.sources}`) }
      let responseValue = await (2) dataSources.providerInternalAPI.getProviders(event.sources);
      return responseValue;
    }

To handle the use of resolvers within a larger resolver, we need to declare the resolution outside of the Query and Mutation blocks (but inside the whole declaration block) (1). The name provided needs to match the parent entity that the query resolver contributes to.

To provide values from the outer resolution to the chained resolver, we use the naming as represented in the GraphQL schema, as shown by (2). The GraphQL engine will resolve the mapping values.

Web resolver URL

  // GET
  async getProvider(code) {
    console.log("getProvider (%s) directing to %s",code,this.baseURL);
    return this.get(`provider?code=${code}`);   (1)
  }

The URL parameters need to be appended to the base URL path for the parent class to use in the invocation, as shown by (1). The Apollo examples showed a setter option, but with it we didn't see the URI being addressed properly; this approach produces the required result.
