We’re half-way through 2025, which means it’s a good time to check how I’m doing against some of the goals I set for the year.
A mid-year (non-work) checkpoint
July 1st, 2025

How to use kafka-console-consumer.sh to view the contents of Apache Avro-encoded events
June 12th, 2025

kafka-console-consumer.sh is one of the most useful tools in the Kafka user’s toolkit. But if your topic has Avro-encoded events, the output can be a bit hard to read.
You don’t have to put up with that, as the tool has a formatter plugin framework. With the right plugin, you can get nicely formatted output from your Avro-encoded events.
With this in mind, I’ve written a new Avro formatter for a few common Avro situations. You can find it at:
github.com/IBM/kafka-avro-formatters
The README includes instructions for adding it to your Kafka console command, and for configuring it so it can find your schema.
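As a flavour of what that looks like (the formatter class and property names below are illustrative placeholders – the project README documents the real ones), plugging a formatter into the console consumer is just a couple of extra arguments:

```shell
# Illustrative sketch only: the formatter class and schema property names
# are placeholders - see the kafka-avro-formatters README for the real values.
./bin/kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic orders.avro \
    --formatter com.example.AvroMessageFormatter \
    --property schema.path=/path/to/schema.avsc
```

The `--formatter` and `--property` options are the standard kafka-console-consumer.sh mechanism for swapping in a custom MessageFormatter and passing configuration to it.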
Using annotations to store info about Kafka topics in Strimzi
June 1st, 2025

In this post, I highlight the benefits of using Kubernetes annotations to store information about Kafka topics, and share a simplified example of how this can even be automated.
Managing Kafka topics as Kubernetes resources brings many benefits. For example, it enables automated creation and management of topics as part of broader CI/CD workflows, gives you a way to track the history of changes to topics and avoid configuration drift as part of GitOps processes, and provides a point of control for enforcing policies and standards.
The value of annotations
Another benefit that I’ve been seeing increasing interest in recently is that they provide a cheap and simple place to store small amounts of metadata about topics.
For example, you could add annotations to topics that identify the owning application or team.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/topic-owner: 'Joe Bloggs'
    acme.com/topic-team: 'Finance'
Annotations are simple key/value pairs, so you can add anything that might be useful to a Kafka administrator.
You can add links to team documentation.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/documentation: 'https://acme-intranet.com/finance-apps/some-kafka-app'
You can add a link to the best Slack channel to use to ask questions about the topic.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/slack: 'https://acme.enterprise.slack.com/archives/C2QSX23GH'
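Because annotations are ordinary Kubernetes metadata, reading them back needs nothing more than kubectl. For instance (the topic and annotation names here are the hypothetical ones from my examples):

```shell
# Read a single annotation from one topic
# (dots inside the annotation key are escaped for jsonpath)
kubectl get kafkatopic some-kafka-topic \
    -o jsonpath='{.metadata.annotations.acme\.com/topic-owner}'

# List every topic alongside its owning team
kubectl get kafkatopics \
    -o custom-columns='TOPIC:.metadata.name,TEAM:.metadata.annotations.acme\.com/topic-team'
```

This is part of why annotations are such a cheap place for this kind of metadata: every tool that can talk to the Kubernetes API can already query them.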
npx dalelane
May 13th, 2025

If you’re a Node.js person, try running: npx dalelane
I recently read Ashley Willis’ blog post about her “terminal business card” – a lovely project she shared that prints out a virtual CLI business card if you run npx ashleywillis.
Check out her blog post for the history of where this all started, and an explanation of how it works.
I love this!
Blast from the past
It reminds me (and I’m showing my age here) of the finger UNIX command we had in my University days.
Other than IRC, finger was our social media: we maintained .plan and .project files in our profile directory, and anyone else at Uni could run finger <username> to see info about you and what you’re up to.
We created endlessly creative ASCII-art plan files, and came up with all sorts of unnecessarily elaborate ways to automate updates to them.
I haven’t thought about that for years, but Ashley’s project reminded me of it so strongly that I had to give it a try.
A dynamic business card needs live data
Her blog post explains how to get it working. I mostly just shamelessly copied it. But where her project is elegant and concise, I naturally crammed in noise. 🙂
I wanted live data, so I updated my “business card” to include what I’m currently reading (from my Goodreads profile), the most recent video game I’ve played (from my Backloggd profile), the most recent song I’ve listened to (from my Last.fm profile) and my most recent post from Bluesky.
(It is a little bit hacky and scrape-y, but realistically it’ll be run so infrequently I don’t feel like it’ll cause any harm!)
Try it for yourself!
You can see my fork of the project at
github.com/dalelane/dalelane.dev-card.
Visualising Apache Kafka events in Grafana
May 5th, 2025

In this post, I want to share some ideas for how Grafana could be used to create visualisations of the contents of events on Apache Kafka topics.
By using Kafka as a data source in Grafana, we can create dashboards to query, visualise, and explore live streams of Kafka events. I’ve recorded a video where I play around with this idea, creating a variety of different types of visualisation to show the sorts of things that are possible.
To make it easy to skim through the examples I created during this run-through, I’ll also share screenshots of each one below, with a time-stamped link to the part of the video where I created that example.
Finally, at the end of this post, I’ll talk about the mechanics and practicalities of how I did this, and what I think is needed next.
A break in Devon
April 21st, 2025

Exploring Language Models in Scratch with Machine Learning for Kids
March 2nd, 2025

In this post, I want to share the most recent section I’ve added to Machine Learning for Kids: support for generating text and an explanation of some of the ideas behind large language models.
After launching the feature, I recorded a video using it. It turned into a 45-minute end-to-end walkthrough… longer than I planned! A lot of people won’t have time to watch that, so I’ve typed up a version that’s easier to skim. It’s not a transcript – it’s a shortened version of what I was trying to say in the demo. I’ll include timestamped links as I go if you want to see the full explanation for any particular bit.
The goal was to be able to use language models (the sort of technology behind tools like ChatGPT) in Scratch.
youtu.be/Duw83OYcBik – jump to 00:19
For example, this means I can ask the Scratch cat:
Who were the Tudor Kings of England?
Or I can ask:
Should white chocolate really be called chocolate?
Although that is fun, I think the more interesting bit is the journey for how you get there.
Using MirrorMaker 2 for simple stream processing
February 13th, 2025

Kafka Connect Single Message Transformations (SMTs) and MirrorMaker can be a simple way of doing stateless transformations on a stream of events.
There are many options available for processing a stream of events – the two I work with most frequently are Flink and Kafka Streams, both of which offer a range of ways to do powerful stateful processing on an event stream. In this post, I’ll share a third, perhaps overlooked, option: Kafka Connect.
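As a flavour of the idea (a hypothetical sketch, not taken from the post itself): wiring a stateless transformation into a MirrorMaker 2 / Kafka Connect properties file is just a matter of configuring one of the stock SMTs, such as MaskField:

```properties
# Hypothetical example: mask an 'ssn' field in every event as it is mirrored.
# MaskField is one of the SMTs that ships with Kafka Connect.
transforms=mask
transforms.mask.type=org.apache.kafka.connect.transforms.MaskField$Value
transforms.mask.fields=ssn
transforms.mask.replacement=xxx-xx-xxxx
```

No separate stream-processing cluster, no application code – the Connect worker applies the transformation to each record as it flows through.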