This post is a simple example of how to use a machine learning model to make predictions on a stream of events on a Kafka topic.
It’s more of a quick hack than a polished project: most of the code was cobbled together from samples and starter code in a single evening. But it’s a fun demo, and could be a jumping-off point for a more serious project.
For the purposes of a demo, I wanted to make a simple example of how to implement this pattern, using:

- sensors that are readily available, and
- predictions that are easy to understand (and easy to generate labelled training data for)
With that goal in mind, I went with:

- for the sensors providing the source of events, I used the accelerometer and gyroscope on my iPhone
- for the machine learning model, I used TensorFlow to make a simple bidirectional LSTM (there’s a minimal sketch after this list)
- for the predictions, I’m making a description of what I’m doing with the phone (e.g. is it in my hand, is it in my pocket, etc.)
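To give a rough idea of the model, here’s a minimal Keras sketch of the kind of bidirectional LSTM I mean. The window size, feature layout, and class names below are illustrative placeholders, not the exact values from my project:

```python
# A minimal sketch of the kind of model described above. The window
# size, feature count, and class names are illustrative placeholders.
import tensorflow as tf

WINDOW_SIZE = 50   # consecutive sensor readings per prediction (hypothetical)
NUM_FEATURES = 6   # accelerometer x/y/z + gyroscope x/y/z
CLASS_NAMES = ["on table", "in hand", "in pocket (sitting)", "in pocket (walking)"]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW_SIZE, NUM_FEATURES)),
    # A bidirectional LSTM reads each window of sensor readings in
    # both directions before the dense layers classify it.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(CLASS_NAMES), activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer class labels
    metrics=["accuracy"],
)
```

Training is then just a matter of collecting labelled windows of readings and calling `model.fit` on them.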
I’ve got my phone publishing a live stream of raw sensor readings, and I pass that stream through the ML model to get a live stream of events like “phone has been put on a table”, “phone has been picked up and is in my hand”, or “phone has been put in a pocket while I’m sat down”.
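As a rough sketch of what that consume-predict-publish loop can look like (using kafka-python, with hypothetical topic names and message fields, and reusing the model sketch above):

```python
# Sketch of the consume -> predict -> publish loop. Topic names and
# message fields are illustrative; `model`, `WINDOW_SIZE` and
# `CLASS_NAMES` come from the model sketch above.
import json
from collections import deque

import numpy as np
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "phone.sensor.readings",                  # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Sliding window of the most recent sensor readings.
window = deque(maxlen=WINDOW_SIZE)

for message in consumer:
    reading = message.value
    # Assumes each message carries six values: accelerometer and
    # gyroscope readings on the x, y and z axes.
    window.append([reading["accel_x"], reading["accel_y"], reading["accel_z"],
                   reading["gyro_x"], reading["gyro_y"], reading["gyro_z"]])
    if len(window) < WINDOW_SIZE:
        continue  # not enough readings yet for a full window

    batch = np.array(window, dtype=np.float32)[np.newaxis, ...]
    probs = model.predict(batch, verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    producer.send("phone.activity.events", {"activity": label})  # hypothetical output topic
```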
Here it is in action. It’s a bit fiddly to demo, and a little awkward to film putting something in your pocket without filming your lap, so bear with me!