{"id":2092,"date":"2012-04-03T22:29:32","date_gmt":"2012-04-03T22:29:32","guid":{"rendered":"http:\/\/dalelane.co.uk\/blog\/?p=2092"},"modified":"2012-04-03T23:09:21","modified_gmt":"2012-04-03T23:09:21","slug":"smile","status":"publish","type":"post","link":"https:\/\/dalelane.co.uk\/blog\/?p=2092","title":{"rendered":"Smile!"},"content":{"rendered":"<p><script type=\"text\/javascript\" src=\"http:\/\/www.google.com\/jsapi\"><\/script><script type=\"text\/javascript\" src=\"http:\/\/dalelane.co.uk\/blog\/post-images\/120403-timeseriescharts.min.js\"><\/script><script type=\"text\/javascript\">\n    google.load('visualization', '1', {packages: ['annotatedtimeline']});    \n    google.setOnLoadCallback(drawPreviewVisualizations);<\/script><em>The visualisations on this page need Flash and Javascript. Apologies if that means most of this page doesn&#8217;t work for you!<\/em><\/p>\n<p>This is my mood (as identified from my facial expressions) over time while watching <a href=\"http:\/\/www.bbc.co.uk\/programmes\/b006v0dz\">Never Mind the Buzzcocks<\/a>. <\/p>\n<div id=\"120403visualisationone\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>The green areas are times where I looked happy.<\/p>\n<p>This shows my mood while playing XBox Live. Badly. <\/p>\n<div id=\"120403visualisationtwo\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>The red areas are times where I looked cross. <\/p>\n<p>I smile more while watching comedies than when getting shot in the head. Shocker, eh? <\/p>\n<p><!--more-->A couple of years ago, I played with the idea of <a href=\"http:\/\/dalelane.co.uk\/blog\/?p=1176\">capturing my TV viewing habits and making some visualisations<\/a> from them. This is a sort of return to that idea in a way. <\/p>\n<p><script type=\"text\/javascript\">\n    google.setOnLoadCallback(drawBodyVisualizations);\n<\/script><\/p>\n<p>A webcam lives on the top of our TV, mainly for skype calls. 
I was thinking that when watching TV, we&#8217;re often more or less looking at the webcam. What could it capture? <\/p>\n<p>What about keeping track of how much I smile while watching a comedy, as a way of measuring which comedies I find funnier? <\/p>\n<p><img decoding=\"async\" src=\"http:\/\/dalelane.co.uk\/blog\/post-images\/120403-smiling1.jpg\"\/><\/p>\n<p>This suggests that, overall, I might&#8217;ve found Mock the Week funnier. But, this shows my facial expressions while watching <a href=\"http:\/\/www.bbc.co.uk\/programmes\/b006t6vf\">Mock the Week<\/a>. <\/p>\n<div id=\"120403visualisationthree\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>It seems that, unlike with Buzzcocks, I really enjoyed the beginning bit, then perhaps got a bit less enthusiastic after a bit. <\/p>\n<p>What about <a href=\"http:\/\/www.thedailyshow.com\/\">The Daily Show with Jon Stewart<\/a>?<\/p>\n<div id=\"120403visualisationsix\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>I think the two neutral bits are breaks for adverts. <\/p>\n<p>Or classifying facial expressions by mood and looking for the dominant mood while watching something more serious on TV? <\/p>\n<p>This shows my facial expressions while catching a bit of <a href=\"http:\/\/www.bbc.co.uk\/programmes\/b006mk25\">Newsnight<\/a>.<\/p>\n<div id=\"120403visualisationfour\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>On the whole, my expression remained reasonably neutral whilst watching the news, but you can see where I visibly reacted to a few of the news items. <\/p>\n<p>Or looking to see how I react to playing different games on the XBox?<\/p>\n<p>This shows my facial expressions while playing Modern Warfare 3 last night.<\/p>\n<div id=\"120403visualisationfive\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>Mostly &#8220;sad&#8221;, as I kept getting shot in the head. With occasional moments where something made me smile or laugh, presumably when something went well. 
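The smiling-percentage comparisons above boil down to one number per programme: the fraction of sampled frames in which the classifier said I was smiling. A minimal sketch of that aggregation, assuming a log table loosely shaped like the script's `facelog` table — the `programme` column and the sample rows are invented for illustration:

```python
import sqlite3

# throwaway log: one row per webcam sample
# (schema loosely follows the script's facelog table; the
#  "programme" column and all values are made up here)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facelog "
             "(ts TEXT, programme TEXT, isSmiling BOOLEAN, smilingConfidence INT)")
samples = [
    ("20120402213001", "Buzzcocks",     True,  80),
    ("20120402213004", "Buzzcocks",     True,  65),
    ("20120402213007", "Buzzcocks",     False, 90),
    ("20120402214501", "Mock the Week", True,  75),
    ("20120402214504", "Mock the Week", False, 85),
]
conn.executemany("INSERT INTO facelog VALUES (?, ?, ?, ?)", samples)

# percentage of samples spent smiling, per programme
for programme, pct in conn.execute(
        "SELECT programme, 100.0 * SUM(isSmiling) / COUNT(*) "
        "FROM facelog GROUP BY programme ORDER BY programme"):
    print("%s: %.0f%% of samples smiling" % (programme, pct))
```

SQLite stores the booleans as 0/1, so `SUM(isSmiling)` counts the smiling samples directly.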
<\/p>\n<p>Compare that with what I looked like while playing Blur (a car racing game).<\/p>\n<div id=\"120403visualisationtwob\" style=\"width: 450px; height: 310px;\"><\/div>\n<p>It seems that I looked a little more aggressive while driving than running around getting shot. For last night, at any rate. <\/p>\n<p><strong>Not just about watching TV<\/strong><\/p>\n<p>I&#8217;m using face recognition to tell my expressions apart from other people in the room. This means there is also a bunch of stuff I could look into around how my expressions change based on who else is in the room, and their expressions? <\/p>\n<p>For example, looking at how much of the time I spend smiling when I&#8217;m the only one in the room, compared with when one or both of <a href=\"https:\/\/picasaweb.google.com\/dale.lane\">my kids<\/a> are in the room. <\/p>\n<p><img decoding=\"async\" src=\"http:\/\/dalelane.co.uk\/blog\/post-images\/120403-smiling2.jpg\"\/><\/p>\n<p>To be fair, this isn&#8217;t a scientific comparison. There are lots of factors here &#8211; for example, when the girls are in the room, I&#8217;ll probably be doing a different activity (such as playing a game with them or reading a story) to what I would be doing when by myself (typically doing some work on my laptop, or reading). This could be showing how much I smile based on which activity I&#8217;m doing. But I thought it was a cute result, anyway. <\/p>\n<p><strong>Limitations<\/strong><\/p>\n<p>This isn&#8217;t sophisticated stuff. <\/p>\n<p>The webcam is an old, cheap one that only has a maximum resolution of 640&#215;480, and I&#8217;m sat at the other end of the room to it. I can&#8217;t capture fine facial detail here. <\/p>\n<p>I&#8217;m not doing anything complicated with video feeds. I&#8217;m just sampling by taking photos at regular intervals. 
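Sampling like this means anything shorter than the polling interval can fall between two photos entirely. A toy illustration of the effect, with smiles modelled as made-up (start, end) intervals in seconds and the script's 3-second poll period:

```python
# Toy model of the sampling limitation: short expressions can
# land entirely between polls. Intervals here are invented.
POLL_FREQUENCY_SECONDS = 3

smiles = [(4.0, 5.0),    # a one-second smirk
          (10.0, 25.0)]  # a sustained laugh

def smiling_at(t):
    return any(start <= t <= end for start, end in smiles)

polls = range(0, 30, POLL_FREQUENCY_SECONDS)  # sample times: 0, 3, 6, ...
caught = [t for t in polls if smiling_at(t)]

# the long laugh shows up in several samples; the one-second
# smirk (4.0-5.0) falls between the polls at t=3 and t=6 and
# is never seen at all
print(caught)
```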
You could reasonably argue that the funniest joke in the world isn&#8217;t going to get me to sustain a broad smile for over a minute, so there is a lot being missed here. <\/p>\n<p>And my y-axis is a little suspect. I&#8217;m using the percentage level of confidence that the classifier had in identifying the mood. I&#8217;m doing this on the assumption that the more confident the classifier was, the stronger or more pronounced my facial expression probably was. <\/p>\n<p>Regardless of all of this, I think the idea is kind of interesting. <\/p>\n<p><strong>How does it work?<\/strong><\/p>\n<p>The <a href=\"http:\/\/dalelane.co.uk\/blog\/?p=1228\">media server under the TV<\/a> runs Ubuntu, so I had a lot of options. My language-of-choice for quick hacks is Python, so I used <a href=\"http:\/\/www.pygame.org\/\">pygame<\/a> to capture stills from the webcam.<\/p>\n<p>For the complicated facial stuff, I&#8217;m using <a href=\"http:\/\/developers.face.com\/\">web services from face.com<\/a>. <\/p>\n<p>They have a REST API for uploading a photo to, getting back a blob of JSON with information about faces detected in the photo. This includes a guess at the gender, a description of mood from the facial expression, whether the face is smiling, and even an estimated age (often not complimentary!). <\/p>\n<p>I used a <a href=\"https:\/\/github.com\/chris-piekarski\/python-face-client\">Python client library from github<\/a> to build the requests, so getting this working took no time at all. <\/p>\n<p>There is a face recognition REST API. You can train the system to recognise certain faces. I didn&#8217;t write any code to do this, as I don&#8217;t need to do it again, so I did this using the <a href=\"http:\/\/developers.face.com\/tools\/#faces\/detect\">API sandbox on the face.com website<\/a>. I gave it a dozen or so photos with my face in, which seemed to be more than enough for the system to be able to tell me apart from someone else in the room. 
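The JSON blob that comes back nests each detected face's attributes under `photos[0].tags`, with a value and a confidence for every attribute — which is what the helper functions in the script below pick apart. A minimal sketch against a mocked-up response; the shape is inferred from the script and the field values are invented:

```python
# cut-down, invented example of the kind of JSON blob the
# face.com recognize call returns (shape inferred from the
# script below; all values made up)
response = {
    "photos": [{
        "tags": [{
            "uids": [{"uid": "dalelane@dale.lane", "confidence": 97}],
            "attributes": {
                "smiling": {"value": "true",  "confidence": 88},
                "mood":    {"value": "happy", "confidence": 71},
            },
        }],
    }],
}

def attribute(face, name):
    # value/confidence pair, or (None, None) if not reported
    info = face["attributes"].get(name)
    return (info["value"], info["confidence"]) if info else (None, None)

for face in response["photos"][0]["tags"]:
    mood, confidence = attribute(face, "mood")
    print(mood, confidence)   # prints: happy 71
```

Attributes the service didn't report (here, for example, `age_est`) simply come back as `(None, None)`, mirroring how the script's helpers default to `None`.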
<\/p>\n<p>My monitoring code puts what it measures about me in one log, and what it measures about anyone else in a second &#8220;guest log&#8221;. <\/p>\n<p>This is the result of one evening&#8217;s playing, so I&#8217;ve not really finished with this. I think there is more to do with it, but for what it&#8217;s worth, this is what I&#8217;ve come up with so far. <\/p>\n<p><strong>The script<\/strong><\/p>\n<pre style=\"border: thin solid silver; background-color: #eeeeee; padding: 0.7em; font-size: 1em; overflow: auto;\">####################################################\r\n#  IMPORTS\r\n####################################################\r\n\r\n# imports for capturing a frame from the webcam\r\nimport pygame.camera\r\nimport pygame.image\r\n\r\n# import for detecting faces in the photo\r\nimport face_client\r\n\r\n# import for storing data\r\nfrom pysqlite2 import dbapi2 as sqlite\r\n\r\n# miscellaneous imports\r\nfrom time import strftime, localtime, sleep\r\nimport os\r\nimport sys\r\n\r\n\r\n####################################################\r\n# CONSTANTS \r\n####################################################\r\n\r\nDB_FILE_PATH=\"\/home\/dale\/dev\/audiencemonitor\/data\/log.db\"\r\nFACE_COM_APIKEY=\"MY_API_KEY_HERE\"\r\nFACE_COM_APISECRET=\"MY_API_SECRET_HERE\"\r\nDALELANE_FACETAG=\"dalelane@dale.lane\"\r\nPOLL_FREQUENCY_SECONDS=3\r\n\r\n\r\nclass AudienceMonitor():\r\n\r\n    # \r\n    # prepare the database where we store the results\r\n    #\r\n    def initialiseDB(self):\r\n        self.connection = sqlite.connect(DB_FILE_PATH, detect_types=sqlite.PARSE_DECLTYPES|sqlite.PARSE_COLNAMES)\r\n        cursor = self.connection.cursor()\r\n\r\n        cursor.execute('SELECT name FROM sqlite_master WHERE type=\"table\" AND NAME=\"facelog\" ORDER BY name')\r\n        if not cursor.fetchone():\r\n            cursor.execute('CREATE TABLE facelog(ts timestamp unique default current_timestamp, isSmiling boolean, smilingConfidence int, mood text, 
moodConfidence int)')\r\n        \r\n        cursor.execute('SELECT name FROM sqlite_master WHERE type=\"table\" AND NAME=\"guestlog\" ORDER BY name')\r\n        if not cursor.fetchone():\r\n            cursor.execute('CREATE TABLE guestlog(ts timestamp unique default current_timestamp, isSmiling boolean, smilingConfidence int, mood text, moodConfidence int, agemin int, ageminConfidence int, agemax int, agemaxConfidence int, ageest int, ageestConfidence int, gender text, genderConfidence int)')\r\n\r\n        self.connection.commit()\r\n\r\n\r\n    #\r\n    # initialise the camera\r\n    #\r\n    def prepareCamera(self):\r\n        # prepare the webcam\r\n        pygame.camera.init()\r\n        self.camera = pygame.camera.Camera(pygame.camera.list_cameras()[0], (900, 675))\r\n        self.camera.start()\r\n\r\n    #\r\n    # take a single frame and store in the path provided\r\n    #\r\n    def captureFrame(self, filepath):\r\n        # save the picture\r\n        image = self.camera.get_image()\r\n        pygame.image.save(image, filepath)\r\n     \r\n\r\n    #\r\n    # gets a string representing the current time to the nearest second\r\n    #\r\n    def getTimestampString(self):\r\n        return strftime(\"%Y%m%d%H%M%S\", localtime())\r\n\r\n\r\n    #\r\n    # get attribute from face detection response\r\n    #\r\n    def getFaceDetectionAttributeValue(self, face, attribute):\r\n        value = None\r\n        if attribute in face['attributes']:\r\n            value = face['attributes'][attribute]['value']\r\n        return value\r\n\r\n    #\r\n    # get confidence from face detection response\r\n    #\r\n    def getFaceDetectionAttributeConfidence(self, face, attribute):\r\n        confidence = None\r\n        if attribute in face['attributes']:\r\n            confidence = face['attributes'][attribute]['confidence']\r\n        return confidence\r\n\r\n\r\n\r\n    #\r\n    # detects faces in the photo at the specified path, and returns info\r\n    #\r\n    def 
faceDetection(self, photopath):\r\n        client = face_client.FaceClient(FACE_COM_APIKEY, FACE_COM_APISECRET)\r\n        response = client.faces_recognize(DALELANE_FACETAG, file_name=photopath)\r\n        faces = response['photos'][0]['tags']\r\n        for face in faces:\r\n            userid = \"\"\r\n            faceuseridinfo = face['uids']\r\n            if len(faceuseridinfo) &gt; 0:\r\n                userid = faceuseridinfo[0]['uid']\r\n            if userid == DALELANE_FACETAG:\r\n                smiling = self.getFaceDetectionAttributeValue(face, \"smiling\")\r\n                smilingConfidence = self.getFaceDetectionAttributeConfidence(face, \"smiling\")\r\n                mood = self.getFaceDetectionAttributeValue(face, \"mood\")\r\n                moodConfidence = self.getFaceDetectionAttributeConfidence(face, \"mood\")\r\n                self.storeResults(smiling, smilingConfidence, mood, moodConfidence)\r\n            else:\r\n                smiling = self.getFaceDetectionAttributeValue(face, \"smiling\")\r\n                smilingConfidence = self.getFaceDetectionAttributeConfidence(face, \"smiling\")\r\n                mood = self.getFaceDetectionAttributeValue(face, \"mood\")\r\n                moodConfidence = self.getFaceDetectionAttributeConfidence(face, \"mood\")\r\n                agemin = self.getFaceDetectionAttributeValue(face, \"age_min\")\r\n                ageminConfidence = self.getFaceDetectionAttributeConfidence(face, \"age_min\")\r\n                agemax = self.getFaceDetectionAttributeValue(face, \"age_max\")\r\n                agemaxConfidence = self.getFaceDetectionAttributeConfidence(face, \"age_max\")\r\n                ageest = self.getFaceDetectionAttributeValue(face, \"age_est\")\r\n                ageestConfidence = self.getFaceDetectionAttributeConfidence(face, \"age_est\")\r\n                gender = self.getFaceDetectionAttributeValue(face, \"gender\")\r\n                genderConfidence = 
self.getFaceDetectionAttributeConfidence(face, \"gender\")\r\n                # if the face wasnt recognisable, it might've been me after all, so ignore\r\n                if \"tid\" in face and face['recognizable'] == True:\r\n                    self.storeGuestResults(smiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence)\r\n                    print face['tid']\r\n\r\n\r\n    #\r\n    # stores face results in the DB\r\n    #\r\n    def storeGuestResults(self, smiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence):\r\n        cursor = self.connection.cursor()\r\n        cursor.execute('INSERT INTO guestlog(isSmiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence) values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',\r\n                        (smiling, smilingConfidence, mood, moodConfidence, agemin, ageminConfidence, agemax, agemaxConfidence, ageest, ageestConfidence, gender, genderConfidence))\r\n        self.connection.commit()\r\n\r\n\r\n    #\r\n    # stores face results in the DB\r\n    #\r\n    def storeResults(self, smiling, smilingConfidence, mood, moodConfidence):\r\n        cursor = self.connection.cursor()\r\n        cursor.execute('INSERT INTO facelog(isSmiling, smilingConfidence, mood, moodConfidence) values(?, ?, ?, ?)',\r\n                        (smiling, smilingConfidence, mood, moodConfidence))\r\n        self.connection.commit()\r\n\r\n\r\nmonitor = AudienceMonitor()\r\nmonitor.initialiseDB()\r\nmonitor.prepareCamera()\r\nwhile True:\r\n    photopath = \"data\/photo\" + monitor.getTimestampString() + \".bmp\"\r\n    monitor.captureFrame(photopath)\r\n    try:\r\n        faceresults = monitor.faceDetection(photopath)\r\n    except:\r\n        print \"Unexpected 
error:\", sys.exc_info()[0]\r\n    os.remove(photopath)\r\n    sleep(POLL_FREQUENCY_SECONDS)<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>The visualisations on this page need Flash and Javascript. Apologies if that means most of this page doesn&#8217;t work for you! This is my mood (as identified from my facial expressions) over time while watching Never Mind the Buzzcocks. The green areas are times where I looked happy. This shows my mood while playing XBox [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[512,522,212,523],"class_list":["post-2092","post","type-post","status-publish","format-standard","hentry","category-code","tag-eightbar","tag-face","tag-python","tag-webcam"],"_links":{"self":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/2092","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2092"}],"version-history":[{"count":0,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/2092\/revisions"}],"wp:attachment":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2092"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2092"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2092"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{
rel}","templated":true}]}}