Archive for the ‘tech’ Category

How to explain Generative AI in the classroom

Wednesday, January 28th, 2026

Generative AI is fast becoming an everyday tool across almost every field and industry. Teaching children about it should include an understanding of how it works, how to think critically about its risks and limitations, and how to use it effectively. In this post, I’ll share how I’ve been doing this.

My approach is to take students through six projects that give a practical and hands-on introduction to generative AI through Scratch. The projects illustrate how generative AI is used in the real world, build intuitions about how models generate text, show how settings and prompts shape the output, highlight the limits (and risks) of “confidently wrong” answers, and introduce some practical techniques to make AI systems more reliable.

As with the rest of what I’ve done with Machine Learning for Kids, my overall aim is AI literacy through making. Students aren’t just told what language models are – they go through a series of exercises to build, test, break, and improve generative AI systems in Scratch. That hands-on approach helps to make abstract ideas (like “context” or “hallucination”) more visible and memorable.

I’ve long described Scratch as a safe sandbox, and that makes it an ideal place for students to experiment with the sorts of generative AI concepts they will encounter in everyday tools such as chatbots, writing assistants, translation apps, and search experiences.

Core themes

Across the six projects, students repeatedly encounter three core questions:

1. How does a model decide what to say next?
Students learn that language models generate text one word at a time, guided by patterns in data and the recent conversation (“context”) – there’s a toy sketch of this after the list.

2. Why do outputs vary, and how can we steer them?
Students discover how settings and prompting techniques can balance creativity vs reliability, and how “good prompting” is about being clear on the job you want done.

3. When should we not trust a model, and what do we do then?
Students experiment with hallucinations, outdated knowledge, semantic drift, and bias. They practise mitigations to these problems, such as retrieval (adding trusted information) and benchmarking (testing systematically).

All of this is intended to be a (much simplified!) mirror of real-world practice. Professional uses of generative AI combine generation (writing), grounding (bringing in trusted sources), instruction (prompting), and evaluation (testing and comparing). Children can be introduced to all of these aspects through hands-on experiences.
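
As a rough illustration of how those four ingredients fit together (again my own sketch, not something from the worksheets – ask_model() is a stand-in for whichever language model API you have to hand):

    TRUSTED_FACTS = "School sports day is on Friday 12 June, starting at 9:30am."      # grounding

    def ask_model(prompt):
        # Stand-in for a real language model call, so the sketch runs on its own.      # generation
        return "Sports day is on Friday 12 June at 9:30am."

    def build_prompt(question):
        instruction = "Answer in one short sentence, using only the facts provided."   # instruction
        return f"{instruction}\nFacts: {TRUSTED_FACTS}\nQuestion: {question}"

    def looks_grounded(answer):
        # A crude automatic check that the answer sticks to the trusted facts.         # evaluation
        return "12 June" in answer

    answer = ask_model(build_prompt("When is sports day?"))
    print(answer, "->", "grounded" if looks_grounded(answer) else "needs checking")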

(more…)

Explaining few-shot prompting in Scratch

Tuesday, January 20th, 2026

In this post, I want to share a recent worksheet I wrote for Machine Learning for Kids. It is a hands-on project to give students an insight into an aspect of prompt engineering with language models.

Students create a Scratch project with four sprites.

They start things off by writing an English sentence which goes to their first sprite.

The first sprite waits to be given an English sentence, and uses a language model to translate it into French.

The second sprite waits to be given a French sentence, and uses a language model to translate it into German.

The third sprite waits to be given a German sentence, and uses a language model to translate it into Chinese.

The fourth sprite waits to be given a Chinese sentence, and uses a language model to translate it into English.

This English sentence is then received by the first sprite, and the cycle starts again.


screen recording of the Scratch project on YouTube

Because the translations aren’t 100% perfect, the text passed between the sprites drifts further and further from the student’s starting sentence – just like the famous children’s game of Chinese whispers.
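
The few-shot prompting idea in the title can also be sketched outside Scratch: show the model a handful of example translations, add the new sentence, and let it continue the pattern. A minimal Python sketch (my own illustration, with made-up example pairs):

    def few_shot_translation_prompt(sentence, source="English", target="French"):
        # Few-shot prompting: a handful of example translations shows the model the
        # job, and the new sentence goes on the end for it to continue the pattern.
        examples = [
            ("Good morning", "Bonjour"),
            ("Where is the library?", "Où est la bibliothèque ?"),
            ("I like cheese", "J'aime le fromage"),
        ]
        lines = [f"{source}: {src}\n{target}: {tgt}" for src, tgt in examples]
        lines.append(f"{source}: {sentence}\n{target}:")
        return "\n\n".join(lines)

    print(few_shot_translation_prompt("The cat sat on the mat"))

The same shape of prompt works for any of the language pairs – only the examples change.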

I’ve been kicking around this idea for a few months, but early versions of the project didn’t work well with the groups I tried them with. I think it’s in a better state now, so I’ve added the worksheet to the site.

The project has given me a chance to introduce a range of different ideas…

(more…)

Explaining role prompting in Scratch

Sunday, January 11th, 2026

In this post, I want to share a recent worksheet I wrote for Machine Learning for Kids. It is a hands-on project to give students an insight into an aspect of prompt engineering with language models.

Students create a Scratch project that lets them have a conversation with a small language model. They try to have the same conversation multiple times, and they set up the Scratch project so that it adds a short role instruction to the context at the start (e.g. “Answer like a pirate”).

The instruction changes how the model answers, and students have to work out, from the responses they get, which persona has been selected.
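
The mechanics are simple enough to sketch outside Scratch too – the role instruction just sits at the start of the context, ahead of the whole conversation. A minimal Python sketch (my own illustration; the personas and wording are made up):

    import random

    PERSONAS = {
        "pirate": "Answer like a pirate.",
        "robot":  "Answer like a very formal robot.",
        "poet":   "Answer like a poet, in rhyme where you can.",
    }

    def build_context(role_instruction, conversation):
        # Role prompting: the short instruction goes at the start of the context,
        # ahead of the whole conversation, so it colours every answer.
        return role_instruction + "\n" + "\n".join(conversation)

    secret = random.choice(list(PERSONAS))              # the hidden persona
    context = build_context(PERSONAS[secret],
                            ["Student: What is the weather like today?"])
    print(context)   # this is what the model sees; students only see its answers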


screen recording of the Scratch project on YouTube

By repeating the activity several times, they should notice something important: the same language model gives very different answers to the same questions, just because of a small change in the instructions. This observation is the key lesson here.

(more…)

Introducing Generative AI into Code Clubs

Friday, November 14th, 2025

I recently spoke at Clubs Conference about how Generative AI can be introduced into Code Clubs.

The recording is a little fuzzy, but still watchable, I think.


youtu.be/yzdSJ6p0BP8

(The video includes all of the sessions from the first day of the Conference – my bit starts at 5:10:50).

My slides are here:

(I didn’t have a lot in my slides, as most of the talk was based around live demos.)

AI patterns in event driven architectures

Monday, November 3rd, 2025

I gave a talk at Current last week about how artificial intelligence and machine learning are used with Kafka topics. I had a lot of examples to share, so I wrote up my slides across several posts.

I’ll use this post to recap and link to my write-ups of each bit of the talk:

I started by talking about the different building blocks that are needed, and the sorts of choices that teams make.

Next, I talked about how projects to introduce AI into event-driven architectures typically fall into one or more of these common patterns:

The most common, and the simplest: using AI to improve and augment the sorts of processing we can do with events. This can be as simple as using off-the-shelf pre-trained models to enrich a stream of events, and using that enrichment to filter or route each event as part of processing (there’s a minimal sketch of this pattern after the list).

Perhaps the newest (and the pattern that is currently getting the most interest and attention) is to use streams of events to trigger agents, so that they can take actions autonomously in response.

Maybe the least obvious approach is to collect and store a projection of recent events, and use this to enhance agentic AI by making it available as a queryable or searchable form of real-time context.

And finally, the longest-established pattern is simply to use the retained history of Kafka topics as a source of historical training data for training new custom models.
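
As a minimal sketch of that first enrich-and-route pattern (the topic names, event fields and classify() stand-in are all hypothetical, and kafka-python is just one possible client):

    import json
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer(
        "customer.reviews",                                    # hypothetical input topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda event: json.dumps(event).encode("utf-8"),
    )

    def classify(text):
        # Stand-in for an off-the-shelf pre-trained model (e.g. a sentiment classifier).
        return "negative" if "refund" in text.lower() else "positive"

    for message in consumer:
        event = message.value
        event["sentiment"] = classify(event["text"])           # enrich the event
        target = "reviews." + event["sentiment"]               # route based on the enrichment
        producer.send(target, event)

Swapping a real pre-trained model in for classify() is all it takes, which is part of why this pattern is usually the easiest place to start.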

Using streams of events to train machine learning models

Sunday, November 2nd, 2025

In this post, I describe how event streams can be used as a source of training data for machine learning models.

I spoke at Current last week. I gave a talk about how artificial intelligence and machine learning are most commonly used with Kafka topics. I had a lot to say, so I didn’t manage to finish writing up my slides – but this post covers the last section of the talk.

It follows on from my write-ups of the earlier sections. The talk covered the four main patterns for using AI/ML with events.

This pattern was where I talked about using events as a source of training data for models. This is perhaps the simplest and longest-established approach – I’ve been writing about it for years, long predating the current wave of generative-AI-inspired interest.
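
A minimal sketch of the shape of this pattern (the topic name and event fields are hypothetical, and kafka-python is just one possible client) – replay the retained history of a topic from the start and turn it into a labelled training set:

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "orders.history",                      # hypothetical topic with long retention
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",          # start from the oldest retained event
        consumer_timeout_ms=5000,              # stop iterating once we are caught up
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    features, labels = [], []
    for message in consumer:
        order = message.value
        features.append([order["basket_size"], order["total_value"]])
        labels.append(order["was_returned"])

    # features and labels can now feed whatever training library the team prefers
    print(f"built a training set of {len(labels)} examples from the topic history")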

(more…)

Using event streams to provide real-time context for agentic AI

Saturday, November 1st, 2025

In this post, I describe how event stream projections can be used to make agentic AI more effective.

I spoke at a Kafka / Flink conference on Wednesday. I gave a talk about how AI and ML are used with Kafka topics. I had a lot to say, so this is the fourth post I’ve written to cover my slides (and I’ve still got more to go!).

The talk was a whistlestop tour through the four main patterns for using artificial intelligence and machine learning with event streams.

This pattern was where I talked about using events as a source of context data for agents.
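
A minimal sketch of the shape of this pattern (the topic name, event fields and lookup function are hypothetical, and kafka-python is just one possible client) – keep a projection of the latest state per key as events arrive, and let the agent query it as a tool whenever it needs fresh context:

    import json
    from kafka import KafkaConsumer

    latest_status = {}                         # projection: customer id -> latest order status

    def lookup_order_status(customer_id):
        # A tool an agent could call to fetch fresh, real-time context.
        return latest_status.get(customer_id, "no recent orders")

    consumer = KafkaConsumer(
        "order.status.changes",                # hypothetical topic of status-change events
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    # In a real system this loop would run in the background (a thread or a separate
    # service) while the agent calls lookup_order_status() whenever it needs to.
    for message in consumer:
        event = message.value
        latest_status[event["customer_id"]] = event["status"]   # newer events overwrite older ones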

(more…)

Triggering agentic AI from event streams

Friday, October 31st, 2025

In this post, I describe how agentic AI can respond autonomously to event streams.

I spoke at Current on Wednesday, about the most common patterns for how AI and ML are used with Kafka topics. I had a lot of content I wanted to cover in the session, so it’s taking me a while to write it all down.

The premise of the talk was to describe the four main patterns for using AI/ML with events. This pattern was where I started focusing on agents.
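
A minimal sketch of the shape of this pattern (the topic name and run_agent() stand-in are hypothetical, and kafka-python is just one possible client) – every new event becomes a task handed to an agent, so it acts as things happen rather than waiting to be asked:

    import json
    from kafka import KafkaConsumer

    def run_agent(task):
        # Stand-in for handing the task to an agentic AI that can plan, use tools
        # and take actions; any agent framework could sit behind this function.
        print(f"agent asked to handle: {task}")

    consumer = KafkaConsumer(
        "support.tickets.opened",              # hypothetical topic of new support tickets
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        ticket = message.value
        run_agent(f"Investigate and reply to ticket {ticket['id']}: {ticket['summary']}")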

(more…)