Archive for October, 2025

Triggering agentic AI from event streams

Friday, October 31st, 2025

In this post, I describe how agentic AI can respond autonomously to event streams.

I spoke at Current on Wednesday about the most common patterns for how AI and ML are used with Kafka topics. I had a lot of content I wanted to cover in the session, so it’s taking me a while to write it all down.

The premise of the talk was to describe the four main patterns for using AI/ML with events. This pattern, where agents respond autonomously to events, is where I started.

(more…)

Using AI to augment event stream processing

Thursday, October 30th, 2025

In this post, I describe how artificial intelligence and machine learning are used to augment event stream processing.

I gave a talk at a Kafka / Flink conference yesterday about the four main patterns for using AI/ML with events. I had a lot to say, so it is taking me a few days to write up my slides.

The most common pattern for introducing AI into an event-driven architecture is to use it to enhance event processing.

As part of event processing, you can have events, collections of events, or changes in events – and any of these can be sent to an AI service. The results can inform the processing or downstream workflows.
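As a minimal sketch of that pattern, the snippet below enriches an event with the result of an AI call and uses that result to pick a downstream route. The AI call is stubbed out, and the function names, field names, and routing rule are illustrative assumptions, not anything from the talk itself.

```python
# Sketch of AI-augmented event processing. The AI service call is a
# stub; a real implementation would send the event to a model endpoint.

def classify_event(event: dict) -> str:
    """Stand-in for a call to an external AI service.

    As a toy rule, flag large payment amounts; a real service would
    return a model's classification of the event.
    """
    return "suspicious" if event.get("amount", 0) > 1000 else "routine"

def process(event: dict) -> dict:
    """Enrich an event with the AI result and choose a downstream route."""
    label = classify_event(event)
    route = "alerts" if label == "suspicious" else "processed"
    return {**event, "label": label, "route": route}

if __name__ == "__main__":
    print(process({"id": "evt-1", "amount": 50}))
    print(process({"id": "evt-2", "amount": 5000}))
```

In a streaming deployment, `process` would sit inside a consumer loop, with the enriched event published to the topic named in its `route` field.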

(more…)

From Event Streams to Smart Streams: Powering AI/ML with your Kafka topics

Wednesday, October 29th, 2025

In this series of posts, I will outline the most common patterns for how artificial intelligence and machine learning are used in event-driven architectures.

I’m at a Kafka / Flink conference this week.

This morning, I gave a talk about how AI and ML are used with Kafka topics. I had a lot to say, so I’ll write it up over the next few days.

In this first post, I’ll outline the building blocks available when bringing AI into the event-driven world, and discuss some of the choices that are available for each block.

(more…)

Introducing LLM benchmarks using Scratch

Saturday, October 18th, 2025

In this post, I want to share a recent worksheet I wrote for Machine Learning for Kids. It is perhaps a little on the technical side, but I think there is an interesting idea in here.

The lesson behind this project

The idea for this project was to get students thinking about the differences between language models.

There isn’t a single “best” model that is best at every task. Each model can be good at some tasks, and less good at others.

The best model for a specific task isn’t necessarily the largest or most complex one. Smaller, simpler models can beat larger models at some tasks.

And we can identify how good each model is at a specific task by testing it at that task.
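The benchmarking idea can be sketched in a few lines: run every model over the same labelled test cases and count correct answers. The “models” below are canned answer tables standing in for real LLM calls, and all the names and test cases are made up for illustration.

```python
# Toy benchmark harness: score each "model" against shared test cases.

def accuracy(model_answers: dict, test_cases: list) -> float:
    """Fraction of test cases the model answers correctly."""
    correct = sum(
        1 for question, expected in test_cases
        if model_answers.get(question) == expected
    )
    return correct / len(test_cases)

# Stand-in models: lookup tables of canned answers, not real LLMs.
model_a = {"2+2": "4", "capital of France": "Paris", "3*3": "6"}
model_b = {"2+2": "4", "capital of France": "Paris", "3*3": "9"}

test_cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("3*3", "9"),
]

if __name__ == "__main__":
    for name, model in [("model_a", model_a), ("model_b", model_b)]:
        print(name, accuracy(model, test_cases))
```

Scoring each model on the same cases makes the comparison concrete: here `model_b` outscores `model_a` on this task, which says nothing about how they would rank on a different task.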

(more…)