npx dalelane

May 13th, 2025

If you’re a Node.js person, try running: npx dalelane

I recently read Ashley Willis’ blog post about her “terminal business card” – a lovely project she shared that prints out a virtual CLI business card if you run npx ashleywillis.

Check out her blog post for the history of where this all started, and an explanation of how it works.

I love this!

npx dalelane

screenshot of running npx dalelane

Blast from the past

It reminds me (and I’m showing my age here) of the finger UNIX command we had in my University days.

Other than IRC, finger was our social media: we maintained .plan and .project files in our home directories, and anyone else at Uni could run finger <username> to see info about you and what you were up to.

We created endlessly creative ASCII-art plan files, and came up with all sorts of unnecessarily elaborate ways to automate updates to them.

I haven’t thought about that for years, but Ashley’s project reminded me of it so strongly that I had to give it a try.


A dynamic business card needs live data

Her blog post explains how to get it working. I mostly just shamelessly copied it. But where her project is elegant and concise, I naturally crammed in noise. 🙂

I wanted live data, so I updated my “business card” to include what I’m currently reading (from my Goodreads profile), the most recent video game I’ve played (from my Backloggd profile), the most recent song I’ve listened to (from my Last.fm profile), and my most recent post from Bluesky.

(It is a little bit hacky and scrape-y, but realistically it’ll be run so infrequently I don’t feel like it’ll cause any harm!)
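To give a flavour of the sort of thing involved, here is a small sketch of one of the “live data” lookups: pulling the most recent Bluesky post. It works on the JSON shape returned by Bluesky’s public getAuthorFeed endpoint, but the sample data and field access here are illustrative only – check the current API docs before relying on them. This is not the code from my fork, just a minimal sketch of the idea.

```javascript
// Illustrative sketch only: extract the newest post's text from the JSON
// returned by Bluesky's public app.bsky.feed.getAuthorFeed endpoint.
// In the real card you would fetch something like:
//   https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed?actor=<handle>&limit=1
// Here a canned response is used so the sketch runs offline.
const sampleFeed = {
  feed: [
    {
      post: {
        record: {
          text: "Hello from Bluesky!",
          createdAt: "2025-05-13T09:00:00Z",
        },
      },
    },
  ],
};

function latestPostText(feedJson) {
  const first = feedJson?.feed?.[0];        // newest post comes first
  return first?.post?.record?.text ?? null; // null if the feed is empty
}

console.log(latestPostText(sampleFeed)); // → "Hello from Bluesky!"
```

Each of the other data sources follows the same pattern: fetch a public page or feed, pick out the one field the business card needs, and fall back gracefully if it isn’t there.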

Try it for yourself!

You can see my fork of the project at
github.com/dalelane/dalelane.dev-card.

Visualising Apache Kafka events in Grafana

May 5th, 2025

In this post, I want to share some ideas for how Grafana could be used to create visualisations of the contents of events on Apache Kafka topics.

By using Kafka as a data source in Grafana, we can create dashboards to query, visualise, and explore live streams of Kafka events. I’ve recorded a video where I play around with this idea, creating a variety of different types of visualisation to show the sorts of things that are possible.


youtu.be/EX5clcmHRsU

To make it easy to skim through the examples I created during this run-through, I’ll also share screenshots of each one below, with a time-stamped link to the part of the video where I created that example.

Finally, at the end of this post, I’ll talk about the mechanics and practicalities of how I did this, and what I think is needed next.

Read the rest of this entry »

A break in Devon

April 21st, 2025

I’m back from an extended Easter break week in Devon.

Read the rest of this entry »

Exploring Language Models in Scratch with Machine Learning for Kids

March 2nd, 2025

In this post, I want to share the most recent section I’ve added to Machine Learning for Kids: support for generating text and an explanation of some of the ideas behind large language models.


youtu.be/Duw83OYcBik

After launching the feature, I recorded a video using it. It turned into a 45-minute end-to-end walkthrough… longer than I planned! A lot of people won’t have time to watch that, so I’ve typed up what I said to share a version that’s easier to skim. It’s not a transcript – I’ve written a shortened version of what I was trying to say in the demo! I’ll include timestamped links as I go if you want to see the full explanation for any particular bit.

The goal was to be able to use language models (the sort of technology behind tools like ChatGPT) in Scratch.

youtu.be/Duw83OYcBik – jump to 00:19

For example, this means I can ask the Scratch cat:

Who were the Tudor Kings of England?

Or I can ask:

Should white chocolate really be called chocolate?

Although that is fun, I think the more interesting bit is the journey for how you get there.

Read the rest of this entry »

Using MirrorMaker 2 for simple stream processing

February 13th, 2025

Kafka Connect Single Message Transformations (SMTs) and MirrorMaker can be a simple way of doing stateless transformations on a stream of events.

There are many options available for processing a stream of events – the two I work with most frequently are Flink and Kafka Streams, both of which offer a range of ways to do powerful stateful processing on an event stream. In this post, I’ll share a third, perhaps overlooked, option: Kafka Connect.
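To make the idea concrete, here is a minimal sketch of what a MirrorMaker 2 source connector with a Single Message Transformation might look like in a Kafka Connect worker. The topic names and cluster aliases are made up for illustration; the SMT itself (MaskField) is one of Kafka’s built-in transformations. Treat this as a shape to start from, not a drop-in config.

```properties
# Sketch: copy a topic between clusters with MirrorMaker 2,
# applying a stateless transformation to each event on the way through.
# (Topic, alias, and field names here are invented for illustration.)
name=mm2-mask-example
connector.class=org.apache.kafka.connect.mirror.MirrorSourceConnector
source.cluster.alias=source
target.cluster.alias=target
source.cluster.bootstrap.servers=source-kafka:9092
target.cluster.bootstrap.servers=target-kafka:9092
topics=customer-events

# the stateless transformation: blank out a sensitive field in each event
transforms=mask
transforms.mask.type=org.apache.kafka.connect.transforms.MaskField$Value
transforms.mask.fields=creditCardNumber
```

Because the transformation runs inside the connector, there is no separate stream processing application to deploy or operate – which is the appeal when all you need is a simple stateless change.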

Read the rest of this entry »

Running OpenMessaging benchmarks on your Kafka cluster

February 3rd, 2025

The OpenMessaging Benchmark Framework is typically used to benchmark messaging systems in the cloud, but in this post I want to show how useful it can also be for Kafka clusters that you run yourself in Kubernetes (whether that is using the open source Strimzi operator, or IBM’s Event Streams).

From openmessaging.cloud:

The OpenMessaging Benchmark Framework is a suite of tools that make it easy to benchmark distributed messaging systems.

As I’ve written about before (when illustrating the impact of setting quotas at the Kafka cluster level, and when adding quotas at the event gateway level), Apache Kafka comes with a good performance test tool. That is still my go-to option if I just want an easy way to push data through a Kafka cluster in bulk.

But – OpenMessaging’s benchmark has some interesting features that make it a useful complement and worth considering.

The benefit that OpenMessaging talks about the most is that it can be used with a variety of messaging systems, such as RocketMQ, Pulsar, RabbitMQ, NATS, Redis, and more – although in this post I’m only interested in using it to benchmark an Apache Kafka cluster.

More interesting for me was their focus on realistic workloads rather than static data.

Quoting again from openmessaging.cloud:

Benchmarks should be largely oriented toward standard use cases rather than bizarre edge cases

Read the rest of this entry »

Understanding event processing behaviour with OpenTelemetry

January 31st, 2025

When using Apache Kafka, timely processing of events is an important consideration.

Understanding the throughput of your event processing solution is typically straightforward: count how many events you can process per second.

Understanding latency (how long it takes from when an event is first emitted to when it has finished being processed and an action has been taken in response) requires more coordination to measure.

OpenTelemetry helps with this, by collecting and correlating information from the different components that are producing and consuming events.
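The measurement itself is conceptually simple once the records are correlated. Here is a plain-JavaScript sketch of that step: matching the “event emitted” record with the “action taken” record by a shared trace id, and computing the end-to-end latency. The data shapes are invented for illustration – real OpenTelemetry spans carry much more than an id and a timestamp, and the collection and correlation is what OpenTelemetry does for you.

```javascript
// Sketch of the measurement that OpenTelemetry automates: correlate the
// producer-side and consumer-side records via a shared trace id, then
// compute end-to-end latency. (Span shapes invented for illustration.)
function endToEndLatencies(emitted, completed) {
  const emitTimes = new Map(emitted.map((s) => [s.traceId, s.timestampMs]));
  return completed
    .filter((s) => emitTimes.has(s.traceId))
    .map((s) => ({
      traceId: s.traceId,
      latencyMs: s.timestampMs - emitTimes.get(s.traceId), // end-to-end time
    }));
}

// producer-side records, and the matching "processing finished" records
const emitted = [
  { traceId: "t1", timestampMs: 1000 },
  { traceId: "t2", timestampMs: 1500 },
];
const completed = [
  { traceId: "t1", timestampMs: 1250 },
  { traceId: "t2", timestampMs: 2100 },
];
console.log(endToEndLatencies(emitted, completed)); // latencies: 250ms and 600ms
```

The hard part in a real distributed solution is propagating that trace id through every producer, topic, and consumer – which is exactly the plumbing OpenTelemetry provides.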

From opentelemetry.io:

OpenTelemetry is a collection of APIs, SDKs, and tools. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior.

A distributed event processing solution

To understand what is possible, I’ll use a simple (if contrived!) example of an event processing architecture.

Read the rest of this entry »

Turning noise into actionable alerts using Flink

January 28th, 2025

In this post, I want to share two examples of how you can use pattern recognition in Apache Flink to turn a noisy stream of output into something useful and actionable.

Imagine that you have a Kafka topic with a stream of events that you sometimes need to respond to.

You can think of this conceptually as being a stream of values that could be plotted against time.

To make this less abstract, we can use the events from the sensor readings topic that the “Loosehanger” data generator produces to. Those events are a stream of temperature and humidity readings.

Imagine that these events represent something that you might need to respond to when a sensor’s reading exceeds some given threshold.
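To make the “noise into alerts” idea concrete before getting to Flink, here is a plain-JavaScript sketch of the simplest version of the pattern: alert once when a sensor first crosses the threshold, rather than on every reading above it. This is only an illustration of the concept – the post itself does this with Flink’s pattern recognition, not with code like this.

```javascript
// Sketch of the idea only: emit one alert when a sensor's reading first
// crosses the threshold, not one alert per reading above it.
// (Threshold and event shape invented for illustration.)
const THRESHOLD = 80;

function makeAlerter(threshold = THRESHOLD) {
  const above = new Map(); // sensor id -> currently above threshold?
  return function onReading({ sensor, temperature }) {
    const wasAbove = above.get(sensor) ?? false;
    const isAbove = temperature > threshold;
    above.set(sensor, isAbove);
    // alert only on the rising edge: below -> above
    return isAbove && !wasAbove ? { sensor, temperature } : null;
  };
}

const alerter = makeAlerter();
const readings = [
  { sensor: "A", temperature: 75 },
  { sensor: "A", temperature: 82 }, // crosses threshold -> alert
  { sensor: "A", temperature: 85 }, // still above -> no new alert
  { sensor: "A", temperature: 70 }, // drops back below
  { sensor: "A", temperature: 81 }, // crosses again -> alert
];
const alerts = readings.map(alerter).filter(Boolean);
console.log(alerts.length); // → 2
```

Flink lets you express richer versions of this – patterns across multiple events, time windows, and so on – while keeping the required state managed for you.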

You can think of it visually like this:

Read the rest of this entry »