Posts Tagged ‘apachekafka’

A Kafka Developer’s Guide to AsyncAPI

Tuesday, March 30th, 2021

How Kafka developers can use the AsyncAPI specification to describe how their applications are using Kafka topics.

In my post “Why should you document your Kafka topics?” last week, I wrote about the benefits of documenting your Kafka event sources, and mentioned a few of the problems that this can help with.

In this post, I want to show you how you can document the API for your Kafka event sources by creating AsyncAPI documents.

You don’t necessarily have to learn the AsyncAPI specification – tools such as the new Event Endpoint Management capability that I work on in Cloud Pak for Integration make it easy to document APIs with user-friendly forms that generate AsyncAPI documents for you. However, some developers will want to know more about what is happening under the covers, so here is an introduction.


youtu.be/Ni5tCY9r0TY
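
To give a flavour of what's under the covers before you watch: a minimal AsyncAPI 2.0 document for a single Kafka topic might look something like this (the topic name, server address, and payload fields are all invented for illustration):

```yaml
asyncapi: '2.0.0'
info:
  title: Orders events
  version: '1.0.0'
servers:
  production:
    url: my-kafka-bootstrap:9092
    protocol: kafka
channels:
  orders.created:        # the Kafka topic
    subscribe:           # applications can consume these events
      message:
        contentType: application/json
        payload:
          type: object
          properties:
            orderId:
              type: string
            quantity:
              type: integer
```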


Migrating your Apache Kafka cluster using MirrorMaker 2

Wednesday, March 24th, 2021

You have a Kafka cluster that you have been using for a while. Your cluster has many topics, and the topics have many messages.

Now you’ve decided to move and start using a new, different Kafka cluster somewhere else.

How can you take your topics with you?

Huge thanks to Andrew Borley for co-writing this with me. Any useful insights in here probably came from him; the mistakes are mine.
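
As a taste of what's involved: MirrorMaker 2 is driven by a properties file along these lines (the cluster aliases and bootstrap addresses here are placeholders, not from the post):

```
clusters = old, new
old.bootstrap.servers = old-cluster-bootstrap:9092
new.bootstrap.servers = new-cluster-bootstrap:9092

# mirror every topic from the old cluster to the new one
old->new.enabled = true
old->new.topics = .*

# nothing needs to flow back the other way
new->old.enabled = false
```

You run it with the connect-mirror-maker.sh script that ships in Kafka's bin directory, pointing it at that properties file. (From Kafka 2.7, you can also set sync.group.offsets.enabled = true to migrate consumer group offsets, which matters for a migration like this.)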


Describing Kafka with AsyncAPI

Friday, November 27th, 2020

In this post, I want to describe how to use AsyncAPI to document how you’re using Apache Kafka. There are already great AsyncAPI “Getting Started” guides, but AsyncAPI supports a variety of protocols, and I haven’t found an introduction written specifically from the perspective of a Kafka user.

I’ll start with a description of what AsyncAPI is.

“an open source initiative … goal is to make working with Event-Driven Architectures as easy as it is to work with REST APIs … from documentation to code generation, from discovery to event management”

asyncapi.com/docs

At first glance, it is simply a way to document how you’re using Kafka topics, but the impact is broader than that: a consistent approach to documentation enables an ecosystem that includes things like automated code generation and discovery.
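
For example, once a Kafka topic is described in an AsyncAPI document, the AsyncAPI generator can turn that document into other artefacts, such as HTML documentation. A sketch of one way to invoke it (check the generator's own docs for current usage):

```
npm install -g @asyncapi/generator
ag my-kafka-api.yaml @asyncapi/html-template -o ./docs
```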


Using TensorFlow to make predictions from Kafka events

Sunday, September 6th, 2020

This post is a simple example of how to use a machine learning model to make predictions on a stream of events on a Kafka topic.

It’s more a quick hack than a polished project, with most of the code thrown together from samples and starter code in a single evening. But it’s a fun demo, and could be a jumping-off point for starting a more serious project.

For the purposes of a demo, I wanted to make a simple example of how to implement this pattern, using:

  • sensors that are easily and readily available, and
  • predictions that are easy to understand (and easy to generate labelled training data for)

With that goal in mind, I went with:

  • for the sensors providing the source of events, I used the accelerometer and gyroscope on my iPhone
  • to set up the Kafka broker, I used the Strimzi Kafka Operator
  • for the machine learning model, I used TensorFlow to make a simple bidirectional LSTM
  • the predictions I’m making are a description of what I’m doing with the phone (e.g. is it in my hand, is it in my pocket, etc.)

I’ve got my phone publishing a live stream of raw sensor readings, and passing that stream through an ML model to give me a live stream of events like “phone has been put on a table”, “phone has been picked up and is in my hand”, or “phone has been put in a pocket while I’m sat down”, etc.
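
The real code is in the repo linked below, but as a rough sketch of the pattern (this is not the project's actual code - the topic name, field names, and model path are all invented):

```python
import json

import numpy as np
import tensorflow as tf
from kafka import KafkaConsumer   # kafka-python client

WINDOW_SIZE = 50   # sensor readings per prediction (assumption)

# a previously-trained classifier, e.g. the bidirectional LSTM
model = tf.keras.models.load_model("activity-model")

consumer = KafkaConsumer(
    "iphone.sensor.readings",                  # invented topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

window = []
for message in consumer:
    r = message.value
    window.append([r["accel_x"], r["accel_y"], r["accel_z"],
                   r["gyro_x"], r["gyro_y"], r["gyro_z"]])
    if len(window) == WINDOW_SIZE:
        # classify the window of readings and emit the prediction
        scores = model.predict(np.array([window]), verbose=0)
        print("predicted activity:", scores.argmax())
        window = []
```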

Here it is in action. It’s a bit fiddly to demo, and a little awkward to film putting something in your pocket without filming your lap, so bear with me!

The source code is all at
github.com/dalelane/machine-learning-kafka-events.


Supporting CI/CD with Kubernetes Operators

Thursday, August 20th, 2020

Operators bring a lot of benefits as a way of managing complex software systems in a Kubernetes cluster. In this post, I want to illustrate one in particular: the way that custom resources (and declarative approaches to managing systems in general) enable easy integration with source control and a CI/CD pipeline.

I’ll be using IBM Event Streams as my example here, but the same principles will be true for many Kubernetes Operators – in particular, the open-source Strimzi Kafka Operator that Event Streams is based on.
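
For example, the entire definition of a Kafka cluster can live in a YAML file in your git repo. A minimal sketch of a Strimzi Kafka custom resource, with invented values:

```yaml
# a minimal sketch - check the Strimzi documentation for the API
# version and options that match your operator release
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
```

Because the desired state is all in one declarative file, a pipeline stage can be as simple as kubectl apply -f kafka.yaml, and the operator reconciles the running system to match whatever is in source control.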


Using MirrorMaker 2

Wednesday, July 15th, 2020

I’ve been talking about MirrorMaker 2 this week – the Apache Kafka tool for replicating data between two Kafka clusters. You can use it to copy the messages from your Kafka cluster to a remote Kafka cluster running in a different data centre, and keep that copy up to date in the background.

For the discussion we had, I needed to give examples of how you might use MirrorMaker 2, which essentially meant I spent an afternoon drawing pictures. As some of them turned out quite pretty, I thought I’d tidy them up and share them here.

We went through several different use cases, but I’ll just describe two examples here.
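
As one illustration of the kind of topology MirrorMaker 2 supports (not necessarily one of the examples from the discussion): two clusters mirroring to each other, with invented aliases and addresses:

```
clusters = us-east, us-west
us-east.bootstrap.servers = kafka.us-east.example.com:9092
us-west.bootstrap.servers = kafka.us-west.example.com:9092

# mirror in both directions
us-east->us-west.enabled = true
us-west->us-east.enabled = true
```

With the default replication policy, mirrored topics are renamed with the source cluster’s alias as a prefix (orders from us-east appears as us-east.orders on us-west), which is how MirrorMaker 2 avoids mirroring the same messages round in a loop.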

IBM Event Streams v10

Tuesday, June 30th, 2020

On Friday, we released the latest version of IBM Event Streams. This means I’ve been doing a variety of demo sessions to show people what we’ve made and how it works.

Here’s a recording of one of them:

In this session, I did a run-through of the new Event Streams Operator on Red Hat OpenShift, with a very quick intro to some of the features:

00m30s – installing the Operator
02m10s – creating custom Kafka clusters in the OpenShift console
05m10s – creating custom Kafka clusters in IBM Cloud Pak for Integration
08m00s – running the sample Kafka application
08m50s – creating topics (see the example at the end of this list)
10m20s – creating credentials for client applications
11m45s – automating deployment of event-streaming infrastructure
12m30s – using schemas with the schema registry
13m10s – sending messages with HTTP POST requests
13m45s – viewing messages in the message browser
14m00s – command line administration
14m30s – running Kafka Connect
15m10s – geo-replication for disaster recovery
15m50s – monitoring Kafka clusters in the Event Streams UI
17m10s – monitoring with custom Grafana dashboards
17m30s – alerting using Prometheus
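
On the “creating topics” step at 08m50s: because Event Streams is built on Strimzi, topics can also be declared as Kubernetes custom resources rather than created through the UI. A minimal sketch, shown in the open-source Strimzi form with invented names (Event Streams provides its own equivalent):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
```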

Why are Kafka messages still on the topic after the retention time has expired?

Sunday, February 9th, 2020

We had an interesting Kafka question from an Event Streams user. The answer isn’t immediately obvious unless you know a bit about Kafka internals, and after a little searching I couldn’t find an explanation online, so I thought I’d share the answer here (obviously anonymised and heavily simplified).

What is retention?

Retention is a Kafka feature to help you manage the amount of disk space your topics use.

It lets you specify how long you want Kafka to keep messages on a topic for. You can specify this by time (e.g. “I want messages on this topic to be preserved for at least X days”) or by disk usage (e.g. “I want at least the last X GB of messages on this topic to be preserved”).

Once the retention time or disk threshold is exceeded, messages become eligible to be deleted automatically by Kafka.
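
The topic configuration settings behind this are retention.ms (time) and retention.bytes (size). For example, a sketch of setting a seven-day retention time on an existing topic, with an invented topic name and address (older Kafka versions may need --zookeeper instead of --bootstrap-server):

```
# 604,800,000 ms == 7 days
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=604800000
```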

What was wrong in this case?


They had created a topic with a retention time of 7 days.

They had assumed that this meant messages older than 7 days would be deleted.

When they looked at the messages on their topic, they could see some messages older than 7 days were there, and were surprised.

They thought this might mean retention wasn’t working.
