This post was written for MachineLearningForKids.co.uk/stories: a series of stories I wrote to describe student experiences of artificial intelligence and machine learning that I’ve seen in the time I spend volunteering in schools and code clubs.
Some of the best lessons I’ve run have been where a machine learning model did the “wrong” thing. Students learn a lot from seeing an example of machine learning not doing what we want.
Perhaps my favourite example of when something went wrong was a lesson that I did on Rock, Paper, Scissors.
Students make a Scratch project to play Rock, Paper, Scissors. They use their webcam to collect example photos of their hands making the shapes of rock (fist), paper (flat hand), and scissors (two fingers) – and use those photos to train a machine learning model to recognise their hand shapes.
In this particular lesson, the project worked really nicely for nearly all of the students. But for one student, things went a little bit wrong.
Machine learning models don’t just give an answer: they also typically return a score showing how confident the system is that it has correctly recognised the input.
Knowing how to use this confidence score is an important part of using machine learning.
An example of how a student used this in their project is shown in this video. Their Scratch script says that if the machine learning model has less than 50% confidence that it has correctly recognised a command, it replies “I’m sorry I don’t understand” (instead of taking an action).
The project was trained to understand commands to turn on a lamp or a fan. When they asked it to “Make me a cheese sandwich”, their assistant didn’t try to turn the lamp or fan on; it said “I don’t understand”.
This command was unlike any of the example commands that had been used to train the model, so the model had very little confidence that it had recognised the command, and it returned a very low confidence score.
The challenge for the students making this project was knowing what confidence score threshold to use. Instead of telling them a good value to use, I let them try out different values and decide for themselves. By playing and experimenting with it, they get a feel for the impact that this threshold has on their project.
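To make that threshold logic concrete, here is a minimal sketch of the same idea in Python rather than Scratch. The classify_command function is a made-up stand-in for the student’s trained model (here just a keyword lookup); a real model would return its best-matching label and a confidence score.

```python
# A toy stand-in for the student's trained model: a real model would return
# its best-matching label and a confidence score for any command it is given.
def classify_command(command):
    command = command.lower()
    if "lamp" in command or "light" in command:
        return "turn_on_lamp", 92
    if "fan" in command:
        return "turn_on_fan", 88
    return "unknown", 12   # unfamiliar commands get a low score

CONFIDENCE_THRESHOLD = 50  # the value the students experimented with

def handle_command(command):
    label, confidence = classify_command(command)
    if confidence < CONFIDENCE_THRESHOLD:
        return "I'm sorry I don't understand"
    if label == "turn_on_lamp":
        return "Turning on the lamp"
    if label == "turn_on_fan":
        return "Turning on the fan"
    return "I'm sorry I don't understand"

print(handle_command("Please turn the lamp on"))    # acts on the command
print(handle_command("Make me a cheese sandwich"))  # falls below the threshold
```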
Digital assistants, such as Amazon’s Alexa or Google’s Home, are a great basis for student projects, because they are a use case that students are familiar with.
A project I’ve run many times is to help students create their own virtual assistant in Scratch, by training a machine learning model to recognise commands like “turn on a lamp”. They do this by collecting examples of how they would phrase those commands.
This is an example of what this can look like:
By the time I do this project, my classes will normally have learned that they need to test their machine learning model with examples they didn’t use for training.
Students like trying to break things – they enjoy looking for edge cases that will trip up the machine learning model. In this case, it can be unusual ways of phrasing commands that their model won’t recognise.
I remember one student came up with ‘activate the spinny thing!’ as a way of asking to turn on a fan, which I thought was inspired.
But when the model gets something wrong, what should they do about that?
Students will normally suggest by themselves that a good thing to do is to collect examples of what their machine learning model gets wrong, and add those to one of their training buckets.
That means every time it makes a mistake, they can add that example to their training and train a new model – and their model will get better at recognising commands like that in future.
They typically think of this for themselves, because with a little understanding about how machine learning technology behaves, this is a natural and obvious thing to do.
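As a rough illustration of that retrain-on-mistakes loop, here is a sketch using scikit-learn in place of the Machine Learning for Kids trainer; the example commands and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: (command, training bucket)
training_examples = [
    ("turn on the lamp", "lamp_on"),
    ("switch the light on", "lamp_on"),
    ("please turn the lamp on", "lamp_on"),
    ("turn on the fan", "fan_on"),
    ("switch the fan on", "fan_on"),
    ("please turn the fan on", "fan_on"),
]

def train(examples):
    texts, labels = zip(*examples)
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

model = train(training_examples)

# An unusual phrasing that the model may well get wrong...
print(model.predict(["activate the spinny thing"]))

# ...so add it to the right training bucket and train a new model.
training_examples.append(("activate the spinny thing", "fan_on"))
model = train(training_examples)
print(model.predict(["activate the spinny thing"]))
```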
As described earlier, students can make a Scratch project to play Rock, Paper, Scissors, using their webcam to collect example photos of their hands making the shapes of ‘rock’ (fist), ‘paper’ (flat hand), and ‘scissors’ (two fingers), and then using those photos to train a machine learning model to recognise their hand shapes.
I often have at least one enthusiastic (or impatient!) student keen to create their machine learning model as quickly as possible. They’ll hold their hand fairly still in front of the webcam and keep hitting the camera button, so they end up with a large number of very similar photos. A model trained on near-identical examples like that tends to cope badly as soon as anything changes – the angle of the hand, the lighting, or the background.
I like running projects like Pac-Man (where students collect training examples by playing a game) with a class after they’ve done a project like chatbots (where students collect training examples by typing them in).
This video starts with one student’s training data from their Pac-Man project. They played a simplified version of Pac-Man in Scratch.
They set up the game in Scratch so that every time they pressed an arrow key (‘left’, ‘right’, ‘up’, or ‘down’), as well as moving their Pac-Man character, the game put the x,y coordinates for Pac-Man and the Ghost into the training bucket for that direction.
For example, when Pac-Man was at x=3,y=4 and the Ghost was at x=5,y=5, they went right – so that became a training example for when it’s good to go right, and so on.
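A rough sketch of what that training data looks like, and of a simple model trained on it (here scikit-learn’s decision tree standing in for the model the students create on the site, with mostly invented coordinates):

```python
from sklearn.tree import DecisionTreeClassifier

# Each training example is [pacman_x, pacman_y, ghost_x, ghost_y], collected
# whenever the player pressed an arrow key; the label is the direction chosen.
examples = [
    [3, 4, 5, 5],   # the example from the story: the player went right
    [0, 0, 4, 0],   # invented example
    [6, 2, 6, 5],   # invented example
]
directions = ["right", "left", "down"]

model = DecisionTreeClassifier().fit(examples, directions)

# Ask the model which way to move from a new position.
print(model.predict([[2, 3, 5, 3]]))
```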
If students are given the time and freedom to create their own machine learning models, rather than being given an existing model to use, they can learn even more.
A major part of the Machine Learning for Kids site is a child-friendly training tool that can be used to create a wide range of machine learning models.
For example, students can make their own simple chatbots, by training a text classifier to recognise frequently asked questions. They can choose their own subject for what the chatbot can answer questions about. In the video shown here, the student chose to make a project about the Moon.
They have to guess what questions someone might ask about their subject. In the video shown, you can see the student thought someone might ask where the Moon is, how big it is, how cold it is on the Moon, or what it’s made of.
For each of those questions, they came up with a few examples of how someone might ask that question.
They used those examples to train their own custom machine learning model, unique to their project.
Then they scripted the responses that their chatbot should give when it gets a question that it has learned to recognise.
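A minimal sketch of that structure, with scikit-learn standing in for the Machine Learning for Kids trainer (the training questions and the scripted answers here are only illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training examples: (a way of phrasing the question, its label)
training_questions = [
    ("how far away is the moon", "distance"),
    ("where is the moon", "distance"),
    ("how big is the moon", "size"),
    ("what size is the moon", "size"),
    ("how cold is it on the moon", "temperature"),
    ("what temperature is the moon", "temperature"),
    ("what is the moon made of", "composition"),
    ("what is the moon made from", "composition"),
]

# The scripted responses for each question the chatbot learns to recognise.
responses = {
    "distance": "The Moon is about 384,000 km from the Earth.",
    "size": "The Moon is about 3,470 km across.",
    "temperature": "At night it can get colder than -170 degrees C on the Moon.",
    "composition": "The Moon is made mostly of rock.",
}

texts, labels = zip(*training_questions)
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

question = "how far away is the moon from the earth"
print(responses[model.predict([question])[0]])
```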
I’ve run this project with school classes dozens of times, and it is different every time, with each class bringing their own creativity and imagination to the chatbot.
I’ve helped history classes make chatbot Vikings, chatbot Romans, and chatbot Ancient Greeks – trained to answer questions about what it was like to live in their times, what they ate, what they wore, and so on.
I’ve helped English classes create chatbot Shakespeares, trained to answer questions about his life and some of his most famous plays.
I’ve helped school clubs create local chatbot guides about their own school or their own town, trained to answer questions about their local area.
By going through the process for themselves, they learn the workflow of a machine learning project – a workflow that is similar to real-world projects: predict what users might do; collect examples of how the user would do that; use those examples to train a machine learning model to recognise that; and script what the system should do in response when it recognises something.
Going through the process of creating a machine learning project for themselves gives students an insight into how these systems are created in the real world.
I like to introduce students to building with machine learning by letting them play with pretrained models: a set of new blocks that can be added to the Scratch palette, each representing a powerful, ready-made machine learning model.
For example, imagenet: a model that can recognise the object in a photo that you give it. It can recognise over a thousand different things.
With just a few Scratch blocks, students can start building projects that do remarkably powerful and impressive things.
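For anyone curious what sits behind blocks like the imagenet one, here is a rough Python equivalent using a pretrained ImageNet classifier from TensorFlow’s Keras (not necessarily the same model that powers the Scratch blocks); “photo.jpg” is a placeholder for any image file you want to classify.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

# Load a model that has already been trained on the ImageNet dataset.
model = MobileNetV2(weights="imagenet")

# "photo.jpg" is a placeholder: any image file will do.
img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the model's top three guesses with their confidence scores.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.0%}")
```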