Launching TensorFlow Lite for Microcontrollers

SparkFun Edge Development Board - Apollo3 Blue

I’ve been spending a lot of my time over the last year working on getting machine learning running on microcontrollers, and so it was great to finally start talking about it in public for the first time today at the TensorFlow Developer Summit. Even better, I was able to demonstrate TensorFlow Lite running on a Cortex M4 developer board, handling simple speech keyword recognition. I was nervous, especially with the noise of the auditorium to contend with, but I managed to get the little yellow LED to blink in response to my command! If you’re interested in trying it for yourself, the board is available for $15 from SparkFun with the sample code preloaded. For anyone who didn’t catch it, here are the notes from my talk.

Hi, I’m Pete Warden on the TensorFlow Lite team, and I’m here to talk about a new project we’re pretty excited about. When I first joined Google back in 2014, I learned about a lot of exciting internal work that wasn’t yet public, but one of the most impressive moments was when I was introduced to Raziel, who was on the speech team at that point, and he told me that they used network models that were only thirteen kilobytes in size! I only had experience with image models, and in those days even the smallest models, like Inception, still took up megabytes.

I was even more amazed when he told me why these models had to be so small. They needed to run them on DSPs and other embedded chips in smartphones so Android could listen out for wake words like “Hey Google” while the main CPU was powered off to save the battery. These microcontrollers often only had tens of kilobytes of RAM and Flash memory, so they simply couldn’t fit anything larger. They also couldn’t rely on cloud connectivity because keeping any radio connection alive continuously would drain the battery in no time at all.

What struck me was that the speech team had a massive amount of experience, and had spent a lot of time experimenting, and even within the tough constraints of these devices, neural networks produced better results than any of the more traditional methods they tried. This left me wondering if they would be useful for other embedded sensor applications, and I wanted to see if we could build support for these platforms into TensorFlow. At the time few people knew about the ground-breaking work that was being done in the speech community, so I was excited to help share it more widely.

Today I’m pleased to announce that we are releasing the first, experimental support for embedded platforms in TensorFlow Lite. To show you what I mean, here’s a demonstration I have in my pocket!

This is a prototype of a development board built by SparkFun, and it has a Cortex M4 processor with 384KB of RAM and 1MB of Flash storage. The processor was built by Ambiq to be extremely low power, drawing less than one milliwatt in many cases, so it’s able to run for many days on a small coin battery.

I’m going to take my life in my hands now by trying a live demo, so wish me luck! The goal is that I’m going to say the word “Yes”, and the little yellow LED here will light up. Hopefully we can use this camera contraption to show this to everyone on the screen and in the livestream.

“Yes”. “Yes”. “Yes”.

As you can see, it’s still far from perfect, but it’s managing to do a decent job of recognizing when I say the word, and not lighting up when there’s unrelated conversation.

So why is this useful? First, this is running entirely locally on the embedded chip, with no need for any internet connectivity, so it’s good to have as part of a voice interface system. The model itself takes up less than 20KB of Flash storage space, the footprint of the TensorFlow Lite code is only another 25KB of Flash, and it only needs 30KB of RAM to operate.

Secondly, the software for this demo is entirely open source, so you can grab the code and build it yourself. It has also already been ported to a lot of different embedded chips, and we hope to see it appear on many more over the next few months. You can check out the code at

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro

There’s more documentation here:

https://www.tensorflow.org/lite/guide/microcontroller

If you want to customize the example, you can try this code lab:

https://g.co/codelabs/sparkfunTF

Third, you can train your own model using this tutorial that we provide. It comes with an open dataset of over 100,000 utterances submitted by volunteers, which we’d love your help expanding through the link here:

https://aiyprojects.withgoogle.com/open_speech_recording

The helpful thing about this is that if you have your own words or noises you want to recognize, you should be able to adapt this training approach to your own problem just by supplying new training data.

Fourth, because the code is part of TensorFlow Lite, it uses the same APIs, file formats, and conversion tools, so it’s well integrated into the whole TensorFlow ecosystem, which makes it easier to use. The sketch below gives a feel for what the microcontroller side of that API looks like.
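To make that concrete, here is a rough sketch of the embedded side, loosely modeled on the setup-and-loop structure of the micro_speech example. It isn’t the exact demo code: the header paths and constructor signatures have moved around since the original experimental release (which lived under lite/experimental/micro), and the names g_model, kTensorArenaSize, Setup, and Classify, the operator list, and the int8 quantization are illustrative assumptions rather than verbatim from the sample.

// A sketch of keyword-spotting inference with TensorFlow Lite for Microcontrollers.
// g_model is the trained model compiled into flash as a C array -- the same
// .tflite FlatBuffer format used everywhere else in TensorFlow Lite.
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];

namespace {
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;

// All working memory comes from one statically allocated arena, so there is no
// dynamic allocation and the RAM budget is fixed at compile time.
constexpr int kTensorArenaSize = 30 * 1024;  // illustrative size
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// One-time setup: map the model, register the operators it needs, and carve the
// tensors out of the arena.
void Setup() {
  const tflite::Model* model = tflite::GetModel(g_model);

  // Registering only the ops the model uses keeps the code footprint small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
  input = interpreter->input(0);
  output = interpreter->output(0);
}

// Called for each new window of audio features: run the model and return the
// index of the highest-scoring class (for example silence, unknown, "yes", "no").
int Classify(const int8_t* features, int feature_length) {
  for (int i = 0; i < feature_length; ++i) {
    input->data.int8[i] = features[i];
  }
  if (interpreter->Invoke() != kTfLiteOk) {
    return -1;
  }
  int best = 0;
  for (int i = 1; i < output->dims->data[1]; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) {
      best = i;
    }
  }
  return best;
}

The point that matters here is the same one as the memory numbers above: everything the interpreter needs lives in one fixed-size arena chosen at compile time, so the RAM footprint is known before the board is ever flashed.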

So, how can you try this out yourself? If you’re in the audience, I’m pleased to say that when you pick up your box this afternoon you’ll find your very own prototype SparkFun Edge board! Just remove the tab to switch the battery on, and you should find it preloaded with the TensorFlow “yes” example. Try saying “Yes” to it, and you should hopefully get a yellow light! We also include all the cables you need to program it with your own code through the serial port. These are the first 700 boards ever built, so there is a wiring issue that drains the battery more quickly than on the final devices, but you should be able to develop with them in exactly the same way as the production boards.

If you’re watching at home, you can order one of these for $15 from SparkFun. You’ll also find instructions for many other platforms in the documentation, so we’re happy to work with whatever devices you want to build your projects on. We welcome collaboration with developers across the community to unlock all the creativity that I know is out there, and I’m hoping to be spending a lot of my time in the future reviewing pull requests!

Finally, a big thanks to everyone who helped bring this prototype together, including the TensorFlow Lite team, especially Raziel, Rocky, Dan, Tim, and Andy; Alasdair, Nathan, Owen and Jim at SparkFun; Scott, Steve, Arpit, and Andre at Ambiq, and many people at Arm including Rod, Neil and Zach! This is still a very early experiment but I can’t wait to see what people build with this.

15 responses

  1. Great stuff Pete! On a related note: do you foresee that voice recognition support will end up in Android? Actually, we’d be more interested in a certain type of hand gesture recognition supported natively by Android using accelerometer and gyro sensor data. What do you think?

  2. Love what this means. We are working to bring Voice UI to extreme environments such as space, mining, and fire. This could be a real game changer. Will be ordering as soon as they’re available again. Also, for fun, we teach kids computing and robotics with others, and we really want to get our hands on this to work with our little robots so we can introduce ML to them too. Keep it up. Love what you are doing.

  3. Pete – very interesting project. I did one project on the first version of Apollo. Looks like the SparkFun Edge Development Board – Apollo3 Blue is NOT in stock anymore 😦
    Can anyone sell me this device, please?

