{"id":4447,"date":"2021-07-10T20:30:28","date_gmt":"2021-07-10T20:30:28","guid":{"rendered":"https:\/\/dalelane.co.uk\/blog\/?p=4447"},"modified":"2021-07-10T20:30:28","modified_gmt":"2021-07-10T20:30:28","slug":"visualizing-tensorflow-image-classifier-behaviour","status":"publish","type":"post","link":"https:\/\/dalelane.co.uk\/blog\/?p=4447","title":{"rendered":"Visualizing TensorFlow image classifier behaviour"},"content":{"rendered":"<p><strong>How to use Scratch to create a visualization that explains what parts of an image a TensorFlow image classifier finds the most significant.<\/strong><\/p>\n<p>An image classifier recognizes this image as an image of The Doctor.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/vots5qog3am6095iclontwdjiljlix55.png\" style=\"border: thin black solid\"\/><br \/>\n<a href=\"https:\/\/ibm.box.com\/shared\/static\/vots5qog3am6095iclontwdjiljlix55.png\" target=\"_blank\" rel=\"noopener\"><small>prediction: The Doctor<br \/>\nconfidence: 99.97%<\/small><\/a><\/p>\n<p>Why? 
What parts of the image did the classifier recognize as indicating that this is the Doctor?<\/p>\n<p>How could we tell?<\/p>\n<p><!--more-->One idea could be to cover part of the image and see what difference it makes.<\/p>\n<p>For example, if I cover part of the background, the image classifier still recognizes this as The Doctor, and the confidence has hardly changed.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/e6yubw74pbtf6dg11g9xc1cxxragcvkr.png\" style=\"border: thin black solid\"\/><br \/>\n<a href=\"https:\/\/ibm.box.com\/shared\/static\/e6yubw74pbtf6dg11g9xc1cxxragcvkr.png\" target=\"_blank\" rel=\"noopener\"><small>prediction: The Doctor<br \/>\nconfidence: 99.98%<\/small><\/a><\/p>\n<p>The image classifier probably didn&#8217;t find the content of that part of the image very significant.<\/p>\n<p>If I cover the face, the image classifier still recognizes it as the Doctor, but it is a bit less confident.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/xmsht3oov17wojjbd3kl3w2uvuwm5y9q.png\" style=\"border: thin black solid\"\/><br \/>\n<a href=\"https:\/\/ibm.box.com\/shared\/static\/xmsht3oov17wojjbd3kl3w2uvuwm5y9q.png\" target=\"_blank\" rel=\"noopener\"><small>prediction: The Doctor<br \/>\nconfidence: 95.44%<\/small><\/a><\/p>\n<p>The face isn&#8217;t essential. The rest of the body is still enough to identify The Doctor. 
But the contents of that part of the image must have some significance, because the image classifier was less confident without it.<\/p>\n<p>Another example &#8211; what if I cover the chest?<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/hbmpr3zu53r2bik2gmn6uk0488z7yxft.png\" style=\"border: thin black solid\"\/><br \/>\n<a href=\"https:\/\/ibm.box.com\/shared\/static\/hbmpr3zu53r2bik2gmn6uk0488z7yxft.png\" target=\"_blank\" rel=\"noopener\"><small>prediction: The Doctor<br \/>\nconfidence: 78.41%<\/small><\/a><\/p>\n<p>Still recognized as the Doctor, but much less confident. The contents of that section of the image must be very significant.<\/p>\n<p>The right leg?<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/uf14rxszzdrjm3z0av84xlfxs8v4xwux.png\" style=\"border: thin black solid\"\/><br \/>\n<a href=\"https:\/\/ibm.box.com\/shared\/static\/uf14rxszzdrjm3z0av84xlfxs8v4xwux.png\" target=\"_blank\" rel=\"noopener\"><small>prediction: The Doctor<br \/>\nconfidence: 94.71%<\/small><\/a><\/p>\n<p>A little relevant &#8211; loses 5% confidence.<\/p>\n<p>The left leg?<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/d3s9za1lc34qdf8qqxnof8vqv9u38fdg.png\" style=\"border: thin black solid\"\/><br \/>\n<a href=\"https:\/\/ibm.box.com\/shared\/static\/d3s9za1lc34qdf8qqxnof8vqv9u38fdg.png\" target=\"_blank\" rel=\"noopener\"><small>prediction: The Doctor<br \/>\nconfidence: 56.60%<\/small><\/a><\/p>\n<p>Very significant! This has had the biggest impact on the confidence so far.<\/p>\n<p>Is it the leg? The shiny shoe?<\/p>\n<p><strong>I like this approach as a simple, intuitive way of testing an image classifier.<\/strong><\/p>\n<p>Displaying confidence numbers for one section of the image at a time is a bit slow. 
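<\/p>\n<p>(<em>An aside for anyone who wants to try the same cover-and-compare idea in code: this approach is often called &#8220;occlusion sensitivity&#8221;, and the loop can be sketched in a few lines of Python. This is only an illustration &#8211; <code>toy_classify<\/code> is a hypothetical stand-in for a real model&#8217;s confidence score, not the classifier used here.<\/em>)<\/p>\n<pre><code>import numpy as np\n\ndef occlusion_map(image, classify, square=32, step=32):\n    # Slide a black square across the image and record how much\n    # the classifier's confidence drops at each position\n    baseline = classify(image)\n    h, w = image.shape[:2]\n    drops = {}\n    for y in range(0, h, step):\n        for x in range(0, w, step):\n            occluded = image.copy()\n            occluded[y:y+square, x:x+square] = 0  # cover with black\n            drops[(y, x)] = baseline - classify(occluded)\n    return baseline, drops\n\n# Stand-in 'classifier' for illustration only: its confidence is just\n# mean brightness, so covering bright areas causes big drops\ndef toy_classify(img):\n    return float(img.mean()) \/ 255.0\n\nimg = np.full((64, 64), 200, dtype=np.uint8)\nimg[32:64, 0:32] = 255  # a 'significant' bright region\nbaseline, drops = occlusion_map(img, toy_classify)\n\n# Map each confidence drop to a Scratch-style ghost value\n# (0 = opaque black square, 100 = fully transparent),\n# scaled so the biggest drop gets ghost 100\nbiggest = max(drops.values())\nghost = {pos: 100.0 * d \/ biggest for pos, d in drops.items()}<\/code><\/pre>\n<p>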
Better to have a way to visualize the change in confidence numbers for all the areas of the image.<\/p>\n<p>I can vary the transparency of the black square as a way of visualizing the impact on the classifier&#8217;s confidence.<\/p>\n<p>Setting the <strong>ghost<\/strong> effect to <code>0<\/code> means no transparency &#8211; the square appears black.<\/p>\n<p>Setting the <strong>ghost<\/strong> effect to <code>100<\/code> means full transparency &#8211; the square cannot be seen any longer.<\/p>\n<p>Using the ghost effect to visualize the difference:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/653kn4svdozafkqkofbbvrbx2uajg8lo.png\" style=\"border: thin black solid\"\/><\/p>\n<p>Small difference to the image classifier&#8217;s confidence = small ghost value = the square is displayed as black.<\/p>\n<p>Large difference to the image classifier&#8217;s confidence = high ghost value = the square appears more see-through.<\/p>\n<p>(<em>I set the square&#8217;s ghost effect to 0 while measuring the image classifier&#8217;s confidence, and only applied the ghost effect afterwards to visualize the results, so the transparency effects didn&#8217;t affect the measurements<\/em>).<\/p>\n<p>If I do this for every possible location of my black square, I get this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/wko9tl8p968ky52yw6n9ftdfj0gd4z41.png\" style=\"border: thin black solid\"\/><\/p>\n<p>The left and right sides of the photo aren&#8217;t very relevant, the face and hat are quite significant, and his shiny left shoe is very significant.<\/p>\n<p>I really like this.<\/p>\n<p>Let&#8217;s try it with another image.<\/p>\n<p>The image classifier recognizes this image as Gandalf.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/oyftmrokf9e4zrqvoeeturfy5co1cx89.png\" style=\"border: thin black solid\"\/><\/p>\n<p>If I cover up sections of the image in sequence, the change this makes to the confidence can be 
visualized like this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/df4aqpnvf8zdbyoh2nx6ydpntqe1rtj6.png\" style=\"border: thin black solid\"\/><\/p>\n<p>I love that the top of his cane is significant.<\/p>\n<p>One more.<\/p>\n<p>The image classifier recognizes this as Peter Venkman.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/9gh246byymczk7oembib6bklfbw5yzyn.png\" style=\"border: thin black solid\"\/><\/p>\n<p>Showing the impact that different sections of the image have on the image classifier&#8217;s confidence results in:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ibm.box.com\/shared\/static\/4zzxzzkjaxxdenbfllaukkcl2u2chihg.png\" style=\"border: thin black solid\"\/><\/p>\n<p>A bit surprising.<\/p>\n<p>Is it the black &#8220;gloves&#8221; that the model is picking up on?<\/p>\n<p><strong>What now?<\/strong><\/p>\n<p>It&#8217;s a fun idea, but I&#8217;m not really sure what to do with it.<\/p>\n<p>Obviously I started playing with this thinking it could form the basis of a new <a href=\"http:\/\/machinelearningforkids.co.uk\/worksheets\">worksheet for <strong>Machine Learning for Kids<\/strong><\/a>. Teachers are always asking for ways to explain more about the results that machine learning models give, so I tried this as a simple, visual way to do that using <a href=\"https:\/\/scratch.mit.edu\">Scratch<\/a>.<\/p>\n<p>And while I, as an unashamed geek, find the output interesting, I&#8217;m not sure how to make an activity out of this that children would find interesting.<\/p>\n<p>I&#8217;m open to suggestions. 
If you can think of a way of making this into a fun project, please <a href=\"https:\/\/groups.google.com\/g\/mlforkids\">let me know<\/a>!<\/p>\n<p>In the meantime, if you&#8217;d like to try the Scratch project I used for these screenshots, download <a href=\"http:\/\/ibm.box.com\/v\/image-classifier-visualisation\">image-classifier-relevance-demo.sb3<\/a> and open it using my <a href=\"https:\/\/machinelearningforkids.co.uk\/scratch3\/\">modified version of Scratch<\/a>. (Press the Green Flag then press the X key to run it)<\/p>\n<p>If you&#8217;d like to create your own version of this with your own machine learning model and images, you can <a href=\"https:\/\/github.com\/IBM\/taxinomitis-docs\/raw\/master\/project-worksheets\/pdf\/worksheet-relevance.pdf\">download the instructions<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>How to use Scratch to create a visualization that explains what parts of an image a TensorFlow image classifier finds the most significant. An image classifier recognizes this image as an image of The Doctor. prediction: The Doctor confidence: 99.97% Why? 
What parts of the image did the classifier recognize as indicating that this is [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":4456,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[580,587,536],"class_list":["post-4447","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-code","tag-machine-learning","tag-mlforkids-tech","tag-scratch"],"_links":{"self":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/4447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4447"}],"version-history":[{"count":0,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/4447\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/media\/4456"}],"wp:attachment":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}