I gave a talk at Current last week about how artificial intelligence and machine learning are used with Kafka topics. I had a lot of examples to share, so I wrote up my slides across several posts.
I’ll use this post to recap the talk and link to my write-up of each part:

I started by talking about the different building blocks that are needed, and the sorts of choices that teams make.
Next, I talked about how projects to introduce AI into event-driven architectures typically fall into one or more of these common patterns:

The most common, and the simplest: using AI to improve and augment the processing we can do with events. This can be as simple as applying off-the-shelf pre-trained models to enrich a stream of events, then using the results to filter or route each event as part of processing.
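To make that first pattern concrete, here is a minimal sketch using kafka-python. The topic names and the `classify_sentiment()` helper are hypothetical stand-ins for your own topics and whatever off-the-shelf model you choose.

```python
# A minimal sketch of the enrich-and-route pattern, using kafka-python.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "product.reviews",                     # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def classify_sentiment(text: str) -> str:
    # placeholder for an off-the-shelf pre-trained model
    return "negative" if "refund" in text.lower() else "positive"

for message in consumer:
    event = message.value
    # enrich the event with the model's output...
    event["sentiment"] = classify_sentiment(event["text"])
    # ...and use the result to route it as part of processing
    topic = "reviews." + event["sentiment"]
    producer.send(topic, event)
```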
Perhaps the newest (and the pattern currently attracting the most interest and attention) is to use streams of events to trigger agents, so that they can autonomously take action in response.
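A sketch of this trigger pattern might look like the following. The topic name is hypothetical, and `run_agent()` stands in for handing the event to whatever agent framework you are using.

```python
# A minimal sketch of the event-triggered-agent pattern, using kafka-python.
import json
from kafka import KafkaConsumer

def run_agent(event: dict) -> None:
    # placeholder: give the event to an agent that decides for itself
    # what action to take (call a tool, open a ticket, notify someone, ...)
    print(f"agent triggered by event: {event}")

consumer = KafkaConsumer(
    "orders.flagged",                      # hypothetical trigger topic
    bootstrap_servers="localhost:9092",
    group_id="agent-trigger",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# every new event that arrives on the topic triggers the agent
for message in consumer:
    run_agent(message.value)
```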
Perhaps the least obvious approach is to collect and store a projection of recent events, and use it to enhance an agentic AI by making it available as a queryable or searchable form of real-time context.
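One simple way to sketch this, assuming an in-memory projection is enough: keep recent events in a time-bounded buffer, and expose a lookup function that an agent could call as a tool. The topic name, the `customer_id` key, and the one-hour window are all hypothetical choices.

```python
# A minimal sketch of a projection of recent events as queryable context.
import json
import time
from collections import deque
from kafka import KafkaConsumer

WINDOW_SECONDS = 3600
recent_events = deque()   # in-memory projection of recent events

def lookup_recent(customer_id: str) -> list:
    # queryable view an agent could call as a tool: recent events for
    # one customer, with anything older than the window dropped first
    cutoff = time.time() - WINDOW_SECONDS
    while recent_events and recent_events[0]["ts"] < cutoff:
        recent_events.popleft()
    return [e for e in recent_events if e.get("customer_id") == customer_id]

consumer = KafkaConsumer(
    "customer.activity",                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    event = message.value
    event["ts"] = message.timestamp / 1000  # Kafka timestamps are in ms
    recent_events.append(event)
```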
And finally, the longest-established pattern is simply to use the retained history of Kafka topics as a source of historical training data for new custom models.
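In its simplest form, this just means replaying a topic from the earliest retained offset and collecting the records into a dataset. A minimal sketch, with the topic and field names as hypothetical examples:

```python
# A minimal sketch of replaying a topic's retained history as training data.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                        # hypothetical topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",          # start from the oldest retained event
    consumer_timeout_ms=10000,             # stop once we run out of history
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

features, labels = [], []
for message in consumer:
    record = message.value
    features.append([record["amount"], record["merchant_category"]])
    labels.append(record["was_fraudulent"])

# features/labels can now be fed into whatever training framework you use
print(f"collected {len(features)} training examples")
```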
Tags: apachekafka, kafka, machine learning