One of the conference presentations I gave last year was a talk at Heapcon, sharing some stories of AI/ML lessons I’ve run in schools. The focus of the talk was how I’ve seen children understand and react to machine learning technologies.
I’ve since expanded the ideas in this talk into a mini-book at MachineLearningForKids.co.uk/stories, but here is a recording of where some of these stories started.
This post was written for MachineLearningForKids.co.uk/stories: a series of stories I wrote to describe student experiences of artificial intelligence and machine learning that I’ve seen in the time I spend volunteering in schools and code clubs.
Some of the best lessons I’ve run have been where a machine learning model did the “wrong” thing. Students learn a lot from seeing an example of machine learning not doing what we want.
Perhaps my favourite example of when something went wrong was a lesson that I did on Rock, Paper, Scissors.
Students make a Scratch project to play Rock, Paper, Scissors. They use their webcam to collect example photos of their hands making the shapes of rock (fist), paper (flat hand), and scissors (two fingers) – and use those photos to train a machine learning model to recognise their hand shapes.
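The students do all of this with Scratch blocks and the Machine Learning for Kids site, but if it helps to see the idea outside Scratch, here is a minimal Python sketch of the same “train a classifier on labelled example photos” step. The folder layout, the file names, and the use of scikit-learn are illustrative assumptions, not how the lesson actually stores photos or trains models:

```python
# Minimal sketch: train a classifier on example hand photos.
# Assumes photos are saved in folders named rock/, paper/, scissors/
# (a hypothetical layout, not what the Scratch lesson uses).
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

LABELS = ["rock", "paper", "scissors"]

def load_examples(root="photos"):
    images, labels = [], []
    for label in LABELS:
        for photo in Path(root, label).glob("*.png"):
            # Shrink each photo and flatten it into a row of pixel values
            pixels = Image.open(photo).convert("L").resize((32, 32))
            images.append(np.asarray(pixels).flatten() / 255.0)
            labels.append(label)
    return np.array(images), np.array(labels)

X, y = load_examples()
model = LogisticRegression(max_iter=1000).fit(X, y)

# Classify a new photo of a hand shape (file name is hypothetical)
new_photo = np.asarray(Image.open("test.png").convert("L").resize((32, 32))).flatten() / 255.0
print(model.predict([new_photo])[0])
```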
In this particular lesson, the project worked really nicely for nearly all of the students, but for one student things went a little bit wrong.
Machine learning models don’t just give an answer; they also typically return a score showing how confident the system is that it has correctly recognised the input.
Knowing how to use this confidence score is an important part of using machine learning.
An example of how a student used this in their project is shown in this video. Their Scratch script says that if the machine learning model has less than 50% confidence that it has correctly recognised a command, it replies “I’m sorry I don’t understand” (instead of taking an action).
The project was trained to understand commands to turn on a lamp or a fan. When they asked it to “Make me a cheese sandwich”, their assistant didn’t try to turn the lamp or fan on; it said “I don’t understand”.
This command was unlike any of the example commands that had been used to train the model, so the model returned a very low confidence score that it had recognised the command.
The challenge for the students making this project was knowing what confidence score threshold to use. Instead of telling them a good value to use, I let them try out different values and decide for themselves. By playing and experimenting with it, they get a feel for the impact that this threshold has on their project.
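If it helps to see that logic written out, here is a minimal sketch of the “only act when the model is confident” idea in Python. The classify() helper and the 0.5 starting threshold are stand-ins for illustration, not the student’s actual project:

```python
CONFIDENCE_THRESHOLD = 0.5  # students experiment with different values here

def respond(command, classify):
    # classify() is a stand-in for the trained model: it returns the
    # best-matching label and a confidence score between 0 and 1
    label, confidence = classify(command)
    if confidence < CONFIDENCE_THRESHOLD:
        return "I'm sorry I don't understand"
    if label == "lamp":
        return "Turning on the lamp"
    if label == "fan":
        return "Turning on the fan"
    return "I'm sorry I don't understand"

# A command unlike anything in the training examples should come back
# with a very low confidence score, so the assistant refuses to act
print(respond("Make me a cheese sandwich", lambda command: ("lamp", 0.04)))
```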
Digital assistants, such as Amazon’s Alexa or Google Home, are a great basis for student projects, because they are a use case that students are familiar with.
A project I’ve run many times is to help students create their own virtual assistant in Scratch, by training a machine learning model to recognise commands like “turn on a lamp”. They do this by collecting examples of how they would phrase those commands.
This is an example of what this can look like:
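The students build this entirely with Scratch blocks, but a rough Python equivalent of the idea – training a small text classifier on a handful of example phrasings per command – might look something like the sketch below. The example phrasings and the use of scikit-learn are my own assumptions for illustration:

```python
# Minimal sketch: train a text classifier on example command phrasings.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A few example phrasings collected for each command (made up for illustration)
training_examples = [
    ("turn on the lamp", "lamp"),
    ("switch the light on", "lamp"),
    ("it's too dark in here", "lamp"),
    ("turn on the fan", "fan"),
    ("it's too hot in here", "fan"),
    ("I need some fresh air", "fan"),
]

texts = [text for text, label in training_examples]
labels = [label for text, label in training_examples]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Test with a phrasing that wasn't used for training
command = "please switch on the fan"
prediction = model.predict([command])[0]
confidence = model.predict_proba([command]).max()
print(prediction, round(confidence, 2))
```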
By the time I do this project, my classes will normally have learned that they need to test their machine learning model with examples they didn’t use for training.
Students like trying to break things – they enjoy looking for edge cases that will trip up the machine learning model. In this case, it can be unusual ways of phrasing commands that their model won’t recognise.
I remember one student came up with ‘activate the spinny thing!’ as a way of asking to turn on a fan, which I thought was inspired.
But when the model gets something wrong, what should they do about that?
Students will normally suggest by themselves that a good thing to do is to collect examples of what their machine learning model gets wrong, and add those to one of their training buckets.
That means every time it makes a mistake, they can add that to their training, and train a new model – and their model will get better at recognising commands like that in future.
They typically think of this for themselves, because with a little understanding about how machine learning technology behaves, this is a natural and obvious thing to do.
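Continuing the hypothetical Python sketch from the assistant example above, the “add the mistake to a training bucket and train again” step is just a case of appending the misrecognised phrasing with the correct label and fitting the model again:

```python
# (continues the sketch above: training_examples and model already exist)
# The model didn't recognise this phrasing, so add it to the right
# training bucket and train a new model.
training_examples.append(("activate the spinny thing", "fan"))

texts = [text for text, label in training_examples]
labels = [label for text, label in training_examples]
model.fit(texts, labels)  # retrain with the extra example included
```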
Back to the Rock, Paper, Scissors project, where students use their webcam to collect example photos of their hands making the shapes of ‘rock’ (fist), ‘paper’ (flat hand), and ‘scissors’ (two fingers), then use those photos to train a machine learning model to recognise their hand shapes.
I often have at least one enthusiastic (or impatient!) student keen to create their machine learning model as quickly as possible. They’ll hold their hand fairly still in front of the webcam, and keep hitting the camera button. The result is that they’ll take a large number of very similar photos.
I like running projects like Pac-Man (where students collect training examples by playing a game) with a class after they’ve done a project like chatbots (where students collect training examples by typing them in).
This video starts with one student’s training data from their Pac-Man project. They played a simplified version of Pac-Man in Scratch.
They set up the game in Scratch so that every time they pressed an arrow key (‘left’, ‘right’, ‘up’, or ‘down’), as well as moving their Pac-Man character, the game put the x,y coordinates for Pac-Man and the Ghost into the training bucket for that direction.
For example, when Pac-Man was at x=3, y=4 and the Ghost was at x=5, y=5, they went right – so that became a training example for when it’s good to go right. And so on.
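The lesson does all of this with Scratch blocks and the Machine Learning for Kids training buckets, but as a hypothetical Python sketch, training a model on those collected coordinates might look something like this (apart from the example above, the coordinates and directions are made up):

```python
# Minimal sketch: learn which direction to move from the positions of
# Pac-Man and the Ghost. scikit-learn is used here purely as an assumed
# stand-in for the Machine Learning for Kids training buckets.
from sklearn.tree import DecisionTreeClassifier

# Each example: (pacman_x, pacman_y, ghost_x, ghost_y) and the key pressed
X = [
    (3, 4, 5, 5),
    (7, 2, 1, 2),
    (4, 9, 4, 1),
    (2, 3, 9, 3),
]
y = ["right", "right", "up", "left"]

model = DecisionTreeClassifier().fit(X, y)

# Ask the trained model which way to go from a new position
print(model.predict([(6, 5, 2, 5)])[0])
```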