Posts Tagged ‘ibmeventstreams’

Using time series models with IBM Event Automation

Tuesday, July 22nd, 2025

Intro

[graphic of an e-bike hire park]

Imagine you run a city e-bike hire scheme.

Let’s say that you’ve instrumented your bikes so you can track their location and battery level.

When a bike is on the move, it emits periodic updates to a Kafka topic, and you use these events for a range of maintenance, logistics, and operations purposes.

You also have other Kafka topics, such as a stream of events with weather sensor readings covering the area of your bike scheme.
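
For illustration, a single status update event might look something like this. It’s a made-up payload: the field names here are mine, not from any real scheme.

{
  "bikeId": "bike-0412",
  "timestamp": "2025-07-22T09:15:00Z",
  "location": {
    "latitude": 51.5074,
    "longitude": -0.1278
  },
  "batteryPercent": 67
}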

Do you know how to use predictive models to forecast the likely demand for bikes in the next few hours?

Could you compare these forecasts with the actual usage that follows, and use this to identify unusual demand?

Time series models

A time series is how a machine learning engineer or data scientist would describe a dataset that consists of data values, ordered sequentially over time, and labelled with timestamps.

A time series model is a specific type of machine learning model that can analyse this type of sequential data. These models are used to predict future values and to identify anomalies.

For those of us used to working with Kafka topics, the machine learning definition of a “time series” sounds exactly like a Kafka topic: a sequentially ordered set of data values, each labelled with a timestamp.

(more…)

How to use kafka-console-consumer.sh to view the contents of Apache Avro-encoded events

Thursday, June 12th, 2025

kafka-console-consumer.sh is one of the most useful tools in the Kafka user’s toolkit. But if your topic has Avro-encoded events, the output can be a bit hard to read.

You don’t have to put up with that, as the tool has a formatter plugin framework. With the right plugin, you can get nicely formatted output from your Avro-encoded events.

With this in mind, I’ve written a new Avro formatter for a few common Avro situations. You can find it at:

github.com/IBM/kafka-avro-formatters

The README includes instructions on how to add it to your Kafka console command, and how to configure it to find your schema.
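
As a sketch of what this looks like in practice (the jar path, formatter class name, and schema property below are placeholders; the README has the actual values):

# put the formatter jar on the classpath used by the Kafka shell scripts
export CLASSPATH=/path/to/kafka-avro-formatters.jar

# --formatter and --property are standard kafka-console-consumer.sh options;
# the class and property names below are illustrative only
./bin/kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic MY.AVRO.TOPIC \
    --from-beginning \
    --formatter com.example.AvroMessageFormatter \
    --property schema.file=/path/to/schema.avsc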

(more…)

Using annotations to store info about Kafka topics in Strimzi

Sunday, June 1st, 2025

In this post, I highlight the benefits of using Kubernetes annotations to store information about Kafka topics, and share a simplified example of how this can even be automated.

Managing Kafka topics as Kubernetes resources brings many benefits. For example, it enables automated creation and management of topics as part of broader CI/CD workflows, it provides a way to track the history of changes to topics and avoid configuration drift in GitOps processes, and it gives a point of control for enforcing policies and standards.

The value of annotations

Another benefit that I’ve been seeing increasing interest in recently is that these resources provide a cheap and simple place to store small amounts of metadata about topics.

For example, you could add annotations to topics that identify the owning application or team.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/topic-owner: 'Joe Bloggs'
    acme.com/topic-team: 'Finance'

Annotations are simple key/value pairs, so you can add anything that might be useful to a Kafka administrator.

You can add links to team documentation.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/documentation: 'https://acme-intranet.com/finance-apps/some-kafka-app'

You can add a link to the best Slack channel to use to ask questions about the topic.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: some-kafka-topic
  annotations:
    acme.com/slack: 'https://acme.enterprise.slack.com/archives/C2QSX23GH'
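
Because these are ordinary Kubernetes annotations, standard tooling can retrieve them. For example, a Kafka administrator could look up the owner of a topic with kubectl:

# dots in the annotation key are escaped in the jsonpath expression
kubectl get kafkatopic some-kafka-topic \
    -o jsonpath='{.metadata.annotations.acme\.com/topic-owner}'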

(more…)

Running OpenMessaging benchmarks on your Kafka cluster

Monday, February 3rd, 2025

The OpenMessaging Benchmark Framework is typically used to benchmark messaging systems in the cloud, but in this post I want to show how useful it can also be for Kafka clusters that you run yourself in Kubernetes (whether that is using the open source Strimzi operator, or IBM’s Event Streams).

From openmessaging.cloud:

The OpenMessaging Benchmark Framework is a suite of tools that make it easy to benchmark distributed messaging systems.

As I’ve written about before (when illustrating the impact of setting quotas at the Kafka cluster level, and when adding quotas at the event gateway level), Apache Kafka comes with a good performance test tool. That is still my go-to option if I just want an easy way to push data through a Kafka cluster in bulk.
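
That tool is the kafka-producer-perf-test.sh script that ships with Apache Kafka. Pushing data through a cluster in bulk looks something like this:

# produce one million 1KB records, unthrottled (-1 means no rate limit)
./bin/kafka-producer-perf-test.sh \
    --topic PERF.TEST \
    --num-records 1000000 \
    --record-size 1024 \
    --throughput -1 \
    --producer-props bootstrap.servers=localhost:9092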

But OpenMessaging’s benchmark has some interesting features that make it a useful complement, and worth considering.

The benefit that the OpenMessaging project talks about the most is that it can be used with a variety of messaging systems, such as RocketMQ, Pulsar, RabbitMQ, NATS, Redis and more – although in this post I’m only interested in using it to benchmark an Apache Kafka cluster.

More interesting for me was their focus on realistic workloads rather than relying on static data.

Quoting again from openmessaging.cloud:

Benchmarks should be largely oriented toward standard use cases rather than bizarre edge cases
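
You can see that philosophy in how workloads are defined: each benchmark run is driven by a YAML workload file that describes the shape of the traffic. A minimal sketch, with field names taken from the project’s sample workloads and values of my own:

name: example Kafka workload
topics: 1
partitionsPerTopic: 16
# message payloads of 1KB
messageSize: 1024
subscriptionsPerTopic: 1
consumerPerSubscription: 1
producersPerTopic: 1
# target publish rate, in messages per second
producerRate: 50000
testDurationMinutes: 15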

(more…)

Social media updates with Kafka Connect

Tuesday, November 19th, 2024

In this post, I’ll show how to bring posts from open social media networks (Bluesky and Mastodon) into Kafka using Kafka Connect source connectors.

My goal is to be able to populate a Kafka topic with status updates posted to social media.

Rather than trying to do this with the full firehose of all status updates, I do it with status updates that match a search term or hashtag.

For example, the screenshot in the post shows a Kafka topic with posts from Bluesky that mention the term “xbox”.
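
The connector configuration follows the usual Kafka Connect source pattern. Here’s a sketch; the connector class and search property below are hypothetical placeholders, and the post has the real values:

# illustrative config only - the class and property names are made up
name=bluesky-source
connector.class=example.BlueskySourceConnector
tasks.max=1
topic=BLUESKY.POSTS
search.term=xbox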

(more…)

Creating custom record builders for the Kafka Connect MQ Source Connector

Monday, October 28th, 2024

In this post, I want to share an example of handling bespoke structured messages with the Kafka Connect MQ Source Connector.

The MQ Source Connector gets data from MQ messages and produces it as events on Kafka topics. The default record builder makes a copy of the data as-is. For example, this can mean taking a JMS TextMessage from MQ and producing a string to Kafka. Or it can mean taking a JMS BytesMessage from MQ and producing a byte array to Kafka.

In my last post, I showed an example of using the XML record builder, to read XML documents from MQ and turn them into structured Kafka Connect records. From this point, I could choose the format I want the data to be produced to Kafka in (e.g. JSON or Avro) by choosing an appropriate value converter (e.g. org.apache.kafka.connect.json.JsonConverter or io.apicurio.registry.utils.converter.AvroConverter).

But what if your MQ messages have a custom structure, and you still want Kafka Connect to be able to parse your messages and output them to Kafka in any format of your choice?

In that case, you need to use a record builder that can correctly parse your MQ messages. In this post, I’ll explain what that means, show you how to create one, and share a sample you can use to get started.
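
To give a flavour of where this plugs in: the connector’s mq.record.builder property names the class that turns each MQ message into a Kafka Connect record. A sketch of a connector configuration with a custom builder might look like this (the builder class, queue, and topic names are placeholders):

connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
mq.queue.manager=QM1
mq.connection.name.list=localhost(1414)
mq.channel.name=DEV.APP.SVRCONN
mq.queue=DEV.QUEUE.1
topic=MQ.EVENTS
# receive the message body as a JMS message type
mq.message.body.jms=true
# a custom RecordBuilder implementation (placeholder class name)
mq.record.builder=com.example.MyCustomRecordBuilder
# the output format is chosen independently, via the value converter
value.converter=org.apache.kafka.connect.json.JsonConverter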

(more…)

Analysing IBM MQ messages in IBM Event Processing

Sunday, October 27th, 2024

In this post, I’ll walk through a demo of using IBM Event Processing to create an Apache Flink job that calculates summaries of messages from IBM MQ queues.

This is a high-level overview of the demo:

  • A JMS/Jakarta application puts XML messages onto an MQ queue
  • A JSON version of these messages is copied onto a Kafka topic (there is an illustrative example of this step after this list)
  • The messages are processed by a Flink job, which outputs JSON results onto a Kafka topic
  • An XML version of the results is copied onto an MQ queue
  • The results are received by a JMS/Jakarta application
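
As an illustration of that first copy step, here is a made-up XML message and its JSON equivalent (the message structure is mine, not the demo’s):

<!-- illustrative message, not the demo's actual structure -->
<order>
  <id>A1234</id>
  <quantity>3</quantity>
  <unitPrice>4.99</unitPrice>
</order>

{
  "id": "A1234",
  "quantity": 3,
  "unitPrice": 4.99
}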

I’ve added instructions to my demos repo on GitHub for how you can create a demo like this for yourself.

The rest of this post is a walkthrough and explanation of how it all works.

(more…)

Event-driven tech at IBM TechXchange

Saturday, October 19th, 2024

This week, I’m at IBM TechXchange: our annual technical learning conference.

Our other big annual event, Think, has a business focus, but TechXchange is for technologists to advance their skills and expertise.

There are thousands of presentations, demos, workshops and hands-on labs to choose from, but naturally the most interesting ones will be about event-driven architectures and event stream processing technologies. 😉

In this post, I’ll share a few of our sessions from each day – if you’re at TechXchange this week, I hope to see you at some of these!

(more…)