Posts Tagged ‘kafka’

How to use kafka-console-consumer.sh to view the contents of Apache Avro-encoded events

Thursday, June 12th, 2025

kafka-console-consumer.sh is one of the most useful tools in the Kafka user’s toolkit. But if your topic has Avro-encoded events, the output can be a bit hard to read.

You don’t have to put up with that, as the tool has a formatter plugin framework. With the right plugin, you can get nicely formatted output from your Avro-encoded events.
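Formatters are Java classes that implement Kafka’s org.apache.kafka.common.MessageFormatter interface. As a minimal sketch of the shape of a formatter (the class here is illustrative, not the real implementation):

import java.io.PrintStream;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.MessageFormatter;

public class ExampleFormatter implements MessageFormatter {

    @Override
    public void configure(Map<String, ?> configs) {
        // receives any --property key=value arguments from the
        // command line, such as where to find your schemas
    }

    @Override
    public void writeTo(ConsumerRecord<byte[], byte[]> record, PrintStream output) {
        // decode the raw bytes and print something human-readable
        byte[] value = record.value();
        output.println(value == null ? "<null>" : "event of " + value.length + " bytes");
    }
}

You package a class like this in a jar, add it to the tool’s classpath, and pass the class name with the --formatter option.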

With this in mind, I’ve written a new formatter that handles a few common Avro situations. You can find it at:

github.com/IBM/kafka-avro-formatters

The README includes instructions on how to add it to your Kafka console command, and how to configure it to find your schema.

(more…)

Using annotations to store info about Kafka topics in Strimzi

Sunday, June 1st, 2025

In this post, I highlight the benefits of using Kubernetes annotations to store information about Kafka topics, and share a simplified example of how this can even be automated.

Managing Kafka topics as Kubernetes resources brings many benefits. For example, it enables automated creation and management of topics as part of broader CI/CD workflows, it provides a way to track the history of changes to topics and avoid configuration drift as part of GitOps processes, and it gives a point of control for enforcing policies and standards.
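For context, here is what a complete KafkaTopic resource looks like (the cluster label and config values here are illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  labels:
    # identifies the Kafka cluster that the topic belongs to
    strimzi.io/cluster: my-kafka-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    # one week of retention
    retention.ms: 604800000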

The value of annotations

Another benefit that I’ve been seeing increasing interest in recently is that these resources provide a cheap and simple place to store small amounts of metadata about topics.

For example, you could add annotations to topics that identify the owning application or team.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/topic-owner: 'Joe Bloggs'
    acme.com/topic-team: 'Finance'

Annotations are simple key/value pairs, so you can add anything that might be useful to a Kafka administrator.

You can add links to team documentation.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/documentation: 'https://acme-intranet.com/finance-apps/some-kafka-app'

You can add a link to the best Slack channel for asking questions about the topic.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/slack: 'https://acme.enterprise.slack.com/archives/C2QSX23GH'

(more…)

Visualising Apache Kafka events in Grafana

Monday, May 5th, 2025

In this post, I want to share some ideas for how Grafana could be used to create visualisations of the contents of events on Apache Kafka topics.

By using Kafka as a data source in Grafana, we can create dashboards to query, visualise, and explore live streams of Kafka events. I’ve recorded a video where I play around with this idea, creating a variety of different types of visualisation to show the sorts of things that are possible.


youtu.be/EX5clcmHRsU

To make it easy to skim through the examples I created during this run-through, I’ll also share screenshots of each one below, with a time-stamped link to the part of the video where I created that example.

Finally, at the end of this post, I’ll talk about the mechanics and practicalities of how I did this, and what I think is needed next.
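As a hint of those mechanics: one way to connect the two is a Kafka data source plugin, which can be configured through Grafana’s provisioning files. A minimal sketch (the plugin id placeholder and the jsonData fields are plugin-specific assumptions):

apiVersion: 1
datasources:
  - name: Kafka
    # the id of whichever Kafka data source plugin you have installed
    type: <kafka-datasource-plugin-id>
    jsonData:
      # most Kafka plugins need at least a bootstrap address
      bootstrapServers: my-kafka-bootstrap:9092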

(more…)

Using MirrorMaker 2 for simple stream processing

Thursday, February 13th, 2025

Kafka Connect Single Message Transformations (SMTs) and MirrorMaker can be a simple way of doing stateless transformations on a stream of events.

There are many options available for processing a stream of events – the two I work with most frequently are Flink and Kafka Streams, both of which offer a range of ways to do powerful stateful processing on an event stream. In this post, I’ll share a third, perhaps overlooked, option: Kafka Connect.
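To make the idea concrete, here is a sketch of what this could look like with Strimzi’s KafkaMirrorMaker2 resource. Everything here is illustrative: the cluster names and addresses, the topic pattern, the converter settings (which assume JSON-encoded events), and the choice of SMT:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: simple-stream-processor
spec:
  connectCluster: "target"
  clusters:
    - alias: "source"
      bootstrapServers: source-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      topicsPattern: "sensor.readings"
      sourceConnector:
        config:
          # interpret event payloads as JSON so the SMT can see fields
          value.converter: org.apache.kafka.connect.json.JsonConverter
          value.converter.schemas.enable: "false"
          # stateless transformation applied to every mirrored event
          transforms: "mask"
          transforms.mask.type: org.apache.kafka.connect.transforms.MaskField$Value
          transforms.mask.fields: "humidity"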

(more…)

Running OpenMessaging benchmarks on your Kafka cluster

Monday, February 3rd, 2025

The OpenMessaging Benchmark Framework is typically used to benchmark messaging systems in the cloud, but in this post I want to show how useful it can also be for Kafka clusters that you run yourself in Kubernetes (whether that is using the open source Strimzi operator, or IBM’s Event Streams).

From openmessaging.cloud:

The OpenMessaging Benchmark Framework is a suite of tools that make it easy to benchmark distributed messaging systems.

As I’ve written about before (when illustrating the impact of setting quotas at the Kafka cluster level, and when adding quotas at the event gateway level), Apache Kafka comes with a good performance test tool. That is still my go-to option if I just want an easy way to push data through a Kafka cluster in bulk.

But OpenMessaging’s benchmark has some interesting features that make it a useful complement, and worth considering.

The benefit that OpenMessaging talk about the most is that it can be used with a variety of messaging systems, such as RocketMQ, Pulsar, RabbitMQ, NATS, Redis and more – although in this post I’m only interested in using it to benchmark an Apache Kafka cluster.

More interesting for me was their focus on realistic workloads, rather than reliance on static data.

Quoting again from openmessaging.cloud:

Benchmarks should be largely oriented toward standard use cases rather than bizarre edge cases
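Benchmark runs are driven by workload definition files, which is where that focus shows up. A sketch of a workload (the values are illustrative, and the exact set of options may vary between versions):

name: example-kafka-workload
topics: 1
partitionsPerTopic: 16
messageSize: 1024
subscriptionsPerTopic: 1
consumerPerSubscription: 1
producersPerTopic: 1
producerRate: 50000
consumerBacklogSizeGB: 0
# vary the payload bytes rather than sending identical events
useRandomizedPayloads: true
randomBytesRatio: 0.5
randomizedPayloadPoolSize: 1000
testDurationMinutes: 15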

(more…)

Understanding event processing behaviour with OpenTelemetry

Friday, January 31st, 2025

When using Apache Kafka, timely processing of events is an important consideration.

Understanding the throughput of your event processing solution is typically straightforward: count how many events you can process each second.

Understanding latency (how long it takes from when an event is first emitted to when it has finished being processed and an action has been taken in response) requires more coordination to measure.

OpenTelemetry helps with this by collecting and correlating information from the different components that are producing and consuming events.

From opentelemetry.io:

OpenTelemetry is a collection of APIs, SDKs, and tools. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior.
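To give a sense of what instrumentation looks like, here is a minimal Java sketch that manually creates a span around the handling of one event (in practice, OpenTelemetry’s Kafka client instrumentation can create and correlate producer and consumer spans for you):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class EventProcessor {

    private static final Tracer TRACER =
        GlobalOpenTelemetry.getTracer("event-processor");

    void process(String sensorId, String payload) {
        // a span covering the processing of a single event
        Span span = TRACER.spanBuilder("process-sensor-reading")
                          .setSpanKind(SpanKind.CONSUMER)
                          .startSpan();
        try (Scope scope = span.makeCurrent()) {
            span.setAttribute("sensor.id", sensorId);
            // ... application logic for responding to the event ...
        } finally {
            span.end();
        }
    }
}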

A distributed event processing solution

To understand what is possible, I’ll use a simple (if contrived!) example of an event processing architecture.

(more…)

Turning noise into actionable alerts using Flink

Tuesday, January 28th, 2025

In this post, I want to share two examples of how you can use pattern recognition in Apache Flink to turn a noisy stream of output into something useful and actionable.

Imagine that you have a Kafka topic with a stream of events that you sometimes need to respond to.

You can think of this conceptually as being a stream of values that could be plotted against time.

To make this less abstract, we can use the events from the sensor readings topic that the “Loosehanger” data generator produces to. Those events are a stream of temperature and humidity readings.

Imagine that these events represent something that you might need to respond to when a sensor’s reading exceeds some given threshold.
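One way to express this kind of threshold check is with Flink’s CEP pattern API. A minimal Java sketch, assuming a simple SensorReading class with public sensorId and temperature fields:

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;

public class ThresholdAlerts {

    // SensorReading is an assumed POJO: public String sensorId; public double temperature;
    public static PatternStream<SensorReading> alerts(DataStream<SensorReading> readings) {
        // match any reading where the temperature goes over the threshold
        Pattern<SensorReading, ?> overThreshold =
            Pattern.<SensorReading>begin("over-threshold")
                .where(new SimpleCondition<SensorReading>() {
                    @Override
                    public boolean filter(SensorReading reading) {
                        return reading.temperature > 25.0;
                    }
                });

        // evaluate the pattern independently for each sensor
        return CEP.pattern(readings.keyBy(r -> r.sensorId), overThreshold);
    }
}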

You can think of it visually like this:

(more…)

Using Kafka Streams for a Kafka Event Projection

Monday, December 2nd, 2024

In this post, I’ll walk through a sample implementation of Kafka Streams to maintain an Event Projection. I’ll use this to illustrate when it is a suitable approach.

I’ve written similar Event Projection posts about sample implementations that use an in-memory lookup table, and a PostgreSQL database.

The objective for this demo

I introduced the pattern of Event Projections in Comparing approaches to maintaining an Event Projection from Kafka topics.

I also explained the scenario that I’ve been using for each of my Event Projections demos. If you haven’t seen my other posts, it may help to go back and see the scenario detail and motivation first.

In short, I’m showing how to maintain a projection of two Kafka topics (one based on the event key, the other based on an attribute in the event payload). And I’m showing how an application could make an HTTP/REST call to retrieve the data from the most recent event that matches some query.

At a high level, the goal for this demo is to:

  • use Kafka Streams to maintain a projection of the Kafka topics (sketched below)
  • provide an HTTP/REST API for querying the projection
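
As a taste of the first of those steps, here is a bare-bones Kafka Streams sketch. The topic name, store name, and string serdes are all illustrative, and the real demo adds an HTTP layer on top:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class ProjectionDemo {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // materialise the topic into a local state store that keeps
        // the most recent event for each key
        builder.table("some-topic", Materialized.as("projection-store"));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "projection-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // once the application reaches the RUNNING state, an HTTP handler
        // can serve queries from the store: store.get(key) returns the
        // most recent value seen for that key
        ReadOnlyKeyValueStore<String, String> store = streams.store(
            StoreQueryParameters.fromNameAndType(
                "projection-store", QueryableStoreTypes.keyValueStore()));
    }
}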

(more…)