{"id":5837,"date":"2026-01-20T01:18:38","date_gmt":"2026-01-20T01:18:38","guid":{"rendered":"https:\/\/dalelane.co.uk\/blog\/?p=5837"},"modified":"2026-03-14T21:04:29","modified_gmt":"2026-03-14T21:04:29","slug":"explaining-few-shot-prompting-in-scratch","status":"publish","type":"post","link":"https:\/\/dalelane.co.uk\/blog\/?p=5837","title":{"rendered":"Explaining few-shot prompting in Scratch"},"content":{"rendered":"<p><strong>In this post, I want to share a <a href=\"https:\/\/machinelearningforkids.co.uk\/#!\/worksheets?worksheet=Translation+Telephone\">recent worksheet<\/a> I wrote for <a href=\"https:\/\/machinelearningforkids.co.uk\/\">Machine Learning for Kids<\/a>. It is a hands-on project to give students an insight into an aspect of prompt engineering with language models.<\/strong><\/p>\n<p>Students create a Scratch project with four sprites.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/01-screenshot.jpg?raw=true\" style=\"width: 100%; max-width: 600px;\"\/><\/p>\n<p>They start things off by writing an English sentence which goes to their first sprite.<\/p>\n<p>The first sprite waits to be given an English sentence, and uses a language model to translate it into French.<\/p>\n<p>The second sprite waits to be given a French sentence, and uses a language model to translate it into German.<\/p>\n<p>The third sprite waits to be given a German sentence, and uses a language model to translate it into Chinese.<\/p>\n<p>The fourth sprite waits to be given a Chinese sentence, and uses a language model to translate it into English.<\/p>\n<p>This is then received by the first sprite, and the process continues again.<\/p>\n<p><iframe loading=\"lazy\" width=\"450\" height=\"325\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/MBI-e8_7heE?si=bpSu8voEsNO2gCd8&#038;origin=https:\/\/dalelane.co.uk\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; 
gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><br \/>\n<small><a href=\"https:\/\/youtu.be\/MBI-e8_7heE\">screen recording of the Scratch project on YouTube<\/a><\/small><\/p>\n<p>Because the translations aren\u2019t 100% perfect, like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Telephone_game\">the famous children\u2019s game<\/a>, the text passed between the sprites gets further and further from the student\u2019s starting sentence.<\/p>\n<p>I\u2019ve been <a href=\"https:\/\/bsky.app\/profile\/dalelane.co.uk\/post\/3lzjsipg5i225\">kicking around this idea for a few months<\/a>, but it didn\u2019t work well with the groups that I tried the early project incarnations with. I think it&#8217;s in a better state now, so I&#8217;ve <a href=\"https:\/\/machinelearningforkids.co.uk\/worksheets\">added the worksheet to the site<\/a>.<\/p>\n<p>The project has given me a chance to introduce a range of different ideas&#8230;<\/p>\n<p><!--more--><\/p>\n<hr \/>\n<h3>Semantic Drift<\/h3>\n<p>Semantic drift in generative AI\u00a0refers to the phenomenon where AI-generated text gradually diverges from the intended subject, context, or factual accuracy as generation progresses. 
This is highlighted in the way that the project behaves.<\/p>\n<p>The more details you include in the input sentence, the more hallucinated details the language models introduce.<\/p>\n<p>The longer it runs, the more opportunities the models have to introduce problems &#8211; with the sentence getting further and further from the student\u2019s original sentence.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/02-screenshot.jpg?raw=true\" style=\"width: 100%; max-width: 600px;\"\/><\/p>\n<p>This is a useful aspect to highlight: knowing when to stop is critical for reliable AI output.<\/p>\n<blockquote><p><a href=\"https:\/\/arxiv.org\/abs\/2404.05411v1\">Know When To Stop: A Study of Semantic Drift in Text Generation<\/a><\/p><\/blockquote>\n<p>Semantic drift needs to be actively monitored and managed in real-world AI applications. Being able to evaluate the degree of drift is an interesting area.<\/p>\n<blockquote><p><a href=\"https:\/\/arxiv.org\/abs\/2509.04438\">The Telephone Game: Evaluating Semantic Drift in Unified Models<\/a><\/p><\/blockquote>\n<hr \/>\n<h3>Temperature and Top-P<\/h3>\n<p>This helps to reinforce a lesson from <a href=\"https:\/\/dalelane.co.uk\/blog\/?p=5538\">one of my earlier worksheets<\/a> about the use of temperature and Top-P in controlling the output from language models.<\/p>\n<p>Experimenting with temperature and Top-P in the Scratch project illustrates this well. The higher the temperature and Top-P values, the faster the translation diverges from the original input sentence, because of the increased creativity in the model\u2019s outputs.<\/p>\n<hr \/>\n<h3>Bias<\/h3>\n<p>Some sentences diverge during translation in particularly illuminating ways.<\/p>\n<p>Sentences that contradict common stereotypes are often translated in ways that conform with the stereotype. 
Gender stereotypes are an easy example of this &#8211; sentences about female engineers, male nurses, or female doctors are all often inverted after a few translations.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/02-screenshot.jpg?raw=true\" style=\"width: 100%; max-width: 600px;\"\/><\/p>\n<p>The nature of the changes is a useful reminder of the way that language models predict the most likely next word based on the knowledge they\u2019ve derived from the documents in their training data.<\/p>\n<p>The most likely next word will be influenced by what was most common in those documents. That means that biases are hard to avoid.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/03-screenshot.jpg?raw=true\" style=\"width: 100%; max-width: 600px;\"\/><\/p>\n<hr \/>\n<h3>Zero-shot \/ One-shot \/ Few-shot prompting<\/h3>\n<p>It took a while to get the Scratch project to work correctly.<\/p>\n<p>If you just ask the language models for a translation, they often return the translation together with some accompanying commentary or explanation.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/05-screenshot.jpg?raw=true\" style=\"border: thin black solid; width: 100%; max-width: 600px;\"\/><\/p>\n<p>In the Scratch project, that means the commentary gets passed to the next sprite, which then does the same\u2026<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/04-screenshot.jpg?raw=true\" style=\"width: 100%; max-width: 600px;\"\/><\/p>\n<p>I experimented with a variety of prompts, all of which repeatedly fell prone to this mistake.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/09-screenshot.jpg?raw=true\" style=\"width: 100%; max-width: 600px;\"\/><\/p>\n<p>I tried adding additional instructions, specifying that I wanted the model to return only the 
translation without any additional comments.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/06-screenshot.jpg?raw=true\" style=\"border: thin black solid; width: 100%; max-width: 600px;\"\/><\/p>\n<p>That works with some model types, but with most of them it isn\u2019t enough.<\/p>\n<p>I tried adding an example showing the sort of translation output I wanted.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/07-screenshot.jpg?raw=true\" style=\"border: thin black solid; width: 100%; max-width: 600px;\"\/><\/p>\n<p>This is a nice example of \u201c<strong><a href=\"https:\/\/www.ibm.com\/think\/topics\/one-shot-prompting\">one-shot prompting<\/a><\/strong>\u201d &#8211; a technique where a single example is included alongside an instruction to show a language model how to perform a task.<\/p>\n<p>I tried adding more examples, showing specifically how I wanted only translations as output.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/08-screenshot.jpg?raw=true\" style=\"border: thin black solid; width: 100%; max-width: 600px;\"\/><\/p>\n<p>This is a nice example of \u201c<strong><a href=\"https:\/\/www.ibm.com\/think\/topics\/few-shot-prompting\">few-shot prompting<\/a><\/strong>\u201d &#8211; a technique where a few examples are included alongside an instruction to help the model understand what is wanted.<\/p>\n<p>Experimenting with these, and contrasting the results with those produced without any examples (\u201c<strong><a href=\"https:\/\/www.ibm.com\/think\/topics\/zero-shot-prompting\">zero-shot prompting<\/a><\/strong>\u201d), is a nice introduction to a key technique in prompt engineering.<\/p>\n<p>For almost all of the small language models on the site, the examples are enough to get the project to work correctly most of the time.<\/p>\n<p>I ended up making this the main message of the project. 
The project is a hands-on introduction to one-shot and few-shot prompting. By experimenting with these different approaches to creating prompts, students can see for themselves how each approach changes how the model behaves.<\/p>\n<hr \/>\n<p><img decoding=\"async\" src=\"https:\/\/images.dalelane.co.uk\/2026-01-20-mlforkids\/11-worksheet.jpg?raw=true\" style=\"border: thin black solid; width: 100%; max-width: 600px;\"\/><\/p>\n<p>This is a continuation of what I was talking about last year in <a href=\"https:\/\/dalelane.co.uk\/blog\/?p=5719\">bringing generative AI into code clubs<\/a> and the classroom.<\/p>\n<p><strong>Please give it a try and let me know how you get on, and what you think.<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this post, I want to share a recent worksheet I wrote for Machine Learning for Kids. It is a hands-on project to give students an insight into an aspect of prompt engineering with language models. Students create a Scratch project with four sprites. 
They start things off by writing an English sentence which goes [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5838,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[614],"class_list":["post-5837","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech","tag-mlforkids"],"_links":{"self":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/5837","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5837"}],"version-history":[{"count":6,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/5837\/revisions"}],"predecessor-version":[{"id":5866,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/5837\/revisions\/5866"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/media\/5838"}],"wp:attachment":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5837"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5837"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5837"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}