Archive for the ‘tech’ Category

“Local projects” in Machine Learning for Kids

Friday, January 19th, 2024

I added support for “local projects” (storing projects on your own computer) to Machine Learning for Kids this week. In this post, I want to give a little background.

(more…)

MQTT extension for Scratch

Saturday, August 5th, 2023

screenshot

Extensions in Scratch let you add additional blocks to the palette. I’ve written about how to create extensions before, but in this post I want to share my latest extension which adds MQTT support.

I don’t have a particular Scratch project in mind for this yet, but publishing and subscribing to an MQTT broker from a Scratch project would allow multiple web browsers each running Scratch to communicate with each other. I’m sure there are some fun things this could be used for.
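The publish/subscribe pattern that MQTT would give Scratch projects can be sketched with a tiny in-memory broker. This is only an illustration of the pattern, not the MQTT protocol or any real broker library; the `Broker` class and topic name are made up for the example.

```python
# A minimal in-memory sketch of the publish/subscribe pattern an MQTT
# broker provides. The Broker class and topic names are illustrative
# stand-ins, not the real MQTT protocol or any specific library.
from collections import defaultdict

class Broker:
    def __init__(self):
        # topic -> list of subscriber callback functions
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # deliver the message to every subscriber of that topic
        for callback in self.subscribers[topic]:
            callback(message)

# Two "Scratch projects" sharing state through the broker:
broker = Broker()
received = []
broker.subscribe("scratch/game", received.append)
broker.publish("scratch/game", "player 1 scored!")
print(received)  # → ['player 1 scored!']
```

With a real MQTT broker the subscribers would be separate browsers rather than callbacks in one process, but the topic-based routing works the same way.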

(more…)

An introduction to Kafka Connect and Kafka Streams using Xbox

Sunday, June 18th, 2023

This is a talk I gave at Kafka Summit last month. It was an introduction to the Java APIs for Kafka Connect and Kafka Streams, using data from Xbox to bring the examples to life.


Confluent require personal details to watch recordings from Kafka Summit – sorry

(more…)

What children can learn about artificial intelligence

Sunday, May 21st, 2023

One of the conference presentations I gave last year was a talk at Heapcon, sharing some stories of AI/ML lessons I’ve run in schools. The focus of the talk was how I’ve seen children understand and react to machine learning technologies.

I’ve since expanded the ideas in this talk into a mini-book at MachineLearningForKids.co.uk/stories but here is a recording of where some of these stories started.

Teaching students that machine learning doesn’t always learn what we intend it to

Tuesday, January 3rd, 2023

This post was written for MachineLearningForKids.co.uk/stories: a series of stories I wrote to describe student experiences of artificial intelligence and machine learning, drawn from the time I spend volunteering in schools and code clubs.

Some of the best lessons I’ve run have been where a machine learning model did the “wrong” thing. Students learn a lot from seeing an example of machine learning not doing what we want.

Perhaps my favourite example of when something went wrong was a lesson that I did on Rock, Paper, Scissors.

Students make a Scratch project to play Rock, Paper, Scissors. They use their webcam to collect example photos of their hands making the shapes of rock (fist), paper (flat hand), and scissors (two fingers) – and use those photos to train a machine learning model to recognise their hand shapes.

In this particular lesson, the project worked really nicely for nearly all the students. There was one student for whom things went a little bit wrong.

(more…)

What I’ll be playing in 2023

Monday, January 2nd, 2023

(To start with, anyway…)

These are my current favourite toys…

(more…)

Teaching students how to use confidence scores

Sunday, December 18th, 2022

This post was written for MachineLearningForKids.co.uk/stories: a series of stories I wrote to describe student experiences of artificial intelligence and machine learning, drawn from the time I spend volunteering in schools and code clubs.

Machine learning models don’t just give an answer: they also typically return a score showing how confident the system is that it has correctly recognized the input.

Knowing how to use this confidence score is an important part of using machine learning.

An example of how a student used this in their project is shown in this video. Their Scratch script says that if the machine learning model has less than 50% confidence that it has correctly recognized a command, it replies “I’m sorry I don’t understand” (instead of taking an action).

The project was trained to understand commands to turn on a lamp or a fan. When they asked it to “Make me a cheese sandwich”, their assistant didn’t try to turn the lamp or fan on – it said “I’m sorry I don’t understand”.

This command was unlike any of the example commands that had been used to train the model, so the machine learning model returned a very low confidence score.

The challenge for the students making this project was knowing what confidence score threshold to use. Instead of telling them a good value to use, I let them try out different values and decide for themselves. By playing and experimenting with it, they get a feel for the impact that this threshold has on their project.
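The student’s script boils down to a simple threshold check. Here is a sketch of that logic in Python; the `respond` function, the labels, and the 0.5 default are illustrative stand-ins for the actual Scratch blocks and model.

```python
# Sketch of the confidence-threshold logic from the student's Scratch
# script. The label and confidence would come from the trained model;
# here they are passed in directly for illustration.
def respond(label, confidence, threshold=0.5):
    # below the threshold, don't act on the prediction
    if confidence < threshold:
        return "I'm sorry I don't understand"
    if label == "lamp":
        return "turning on the lamp"
    if label == "fan":
        return "turning on the fan"
    return "I'm sorry I don't understand"

print(respond("lamp", 0.92))  # confident → take the action
print(respond("fan", 0.12))   # unsure → fall back to the safe reply
```

Experimenting here means varying `threshold`: too low and nonsense commands trigger actions, too high and valid commands get rejected.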

(more…)

Collecting poorly handled inputs is a common practice for training ML projects

Wednesday, December 14th, 2022

This post was written for MachineLearningForKids.co.uk/stories: a series of stories I wrote to describe student experiences of artificial intelligence and machine learning, drawn from the time I spend volunteering in schools and code clubs.

Digital assistants, such as Amazon’s Alexa or Google Home, are a great basis for student projects, because they are a use case that the students are familiar with.

A project I’ve run many times is to help students create their own virtual assistant in Scratch, by training a machine learning model to recognise commands like “turn on a lamp”. They do this by collecting examples of how they would phrase those commands.

This is an example of what this can look like:

By the time I do this project, my classes will normally have learned that they need to test their machine learning model with examples they didn’t use for training.

Students like trying to break things – they enjoy looking for edge cases that will trip up the machine learning model. In this case, it can be unusual ways of phrasing commands that their model won’t recognise.

I remember one student came up with ‘activate the spinny thing!’ as a way of asking to turn on a fan, which I thought was inspired.

But when the model gets something wrong, what should they do about that?

Students will normally suggest by themselves that a good thing to do is to collect examples of what their machine learning model gets wrong, and add those to one of their training buckets.

That means every time it makes a mistake, they can add that to their training, and train a new model – and their model will get better at recognizing commands like that in future.

They typically think of this for themselves, because with a little understanding about how machine learning technology behaves, this is a natural and obvious thing to do.
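The loop the students arrive at, collect the mistake, add it to a training bucket, retrain, can be sketched with a toy classifier. The `ToyClassifier` here is a deliberately crude word-matching stand-in for the real model the students use, invented for this example.

```python
# Toy sketch of the collect-and-retrain loop. The classifier just looks
# for any word it has seen in a training phrase; the real model the
# students use is far more capable.
class ToyClassifier:
    def __init__(self):
        self.examples = {"fan": [], "lamp": []}
        self.words = {}

    def add_example(self, label, text):
        self.examples[label].append(text.lower())

    def train(self):
        # rebuild a word -> label lookup from the current examples
        self.words = {}
        for label, texts in self.examples.items():
            for text in texts:
                for word in text.split():
                    self.words[word] = label

    def classify(self, text):
        for word in text.lower().split():
            if word in self.words:
                return self.words[word]
        return None  # not recognised

model = ToyClassifier()
model.add_example("fan", "switch on my fan")
model.train()

# the model gets an unusual phrasing wrong...
print(model.classify("activate the spinny thing!"))  # → None

# ...so the student adds it to the training bucket and retrains
model.add_example("fan", "activate the spinny thing!")
model.train()
print(model.classify("activate the spinny thing!"))  # → fan
```

The important idea is the workflow, not the model: every misrecognised command becomes a new training example, so the next model handles that phrasing.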

(more…)