
IBM Developer Blog


Explore some problems with deep learning applications, then see how deep learning on a Raspberry Pi can solve them

This blog is part of the 2020 Call for Code Global Challenge. Thanks to Maureen McElaney for her help in reviewing and editing this blog.

During the past decade, deep learning models have transformed many aspects of daily life. Performance on traditionally difficult tasks, such as object detection and machine translation, has improved drastically. Despite these laudable achievements, the vast majority of deep learning applications still depend heavily on powerful central servers – that is, they must remain connected to a server to function properly. While this dependence on an external server is fine in most situations, there are some instances where you might want to consider an alternative.

In this blog, I discuss some problems with deep learning applications, then explain how deep learning on a Raspberry Pi can solve them. I’ll also demonstrate how to do it using an open source deep learning model from the Model Asset eXchange (MAX).

Privacy and data security

The first problem with using deep learning models is their common dependence on an external server: everything you send to that server is exposed to it. While it's possible to secure a server and protect your data's privacy, doing so is usually expensive and takes sustained effort, and many people maintaining their own projects don't have the time or resources to do it well. Even worse, some server operators act with malicious intent. Depending on the application, this can pose serious privacy and data security issues, and recovering from a data breach is difficult.

Variable (or no) connectivity

The internet is not always available and even when it is available, you can rarely guarantee constant connectivity. If you are a field geographer or a sailor using a deep learning model that does real-time image analysis, you need your application to be able to work in an offline environment. In limited circumstances, this might be circumvented by using TensorFlow.js in a browser. See Use your arms to make music for a fun example of this.

Even in situations where a connection to the internet is readily available, some systems are so critical that they cannot count on that connection remaining stable and fast at all times. For example, an ambulance could employ a deep learning model that translates for non-English-speaking patients. If that model depends on a strong, low-latency internet connection, any disconnection delays care and can potentially cost lives.

Why try deep learning on a Raspberry Pi?

These problems can be solved by running a deep learning model somewhere that an internet connection is not required, for example, on a single-board computer like a Raspberry Pi. In fact, even in situations where a slow or unstable connection is available and acceptable, using a Raspberry Pi on the client side instead of a more expensive device such as a mobile phone can save you a lot of money – for example, in a massive deployment such as a fleet of security cameras.

Model Asset eXchange

The Model Asset eXchange (MAX) provides free and open source deep learning models. With MAX, you don’t need to be a data scientist or mathematician to take advantage of free open source AI. There are over 30 models that you can choose from, ranging from audio classification and image segmentation to natural language processing. Thanks to the Docker-based architecture, some of the MAX models are particularly suitable for deploying on the Raspberry Pi.

Running the MAX Object Detector on a Raspberry Pi

Now, let’s walk through how to run a MAX model on a Raspberry Pi. As an example, I’ll run the MAX Object Detector on a Raspberry Pi 4 with Raspbian Buster.

  1. Open the terminal on the Pi.
  2. Install Docker.

     curl -sSL https://get.docker.com | sh
  3. Add the user “pi” to the Docker group so that you can use Docker as a regular user.

     sudo usermod -aG docker pi
  4. Log out, then back in (or simply reboot).

  5. Following the README for the MAX Object Detector model, run the following command.

     docker run -it -p 5000:5000 codait/max-object-detector:arm-arm32v7-latest
  6. At the end, you should see the following messages.

     * Serving Flask app "MAX Object Detector" (lazy loading)
     * Environment: production
       WARNING: This is a development server. Do not use it in a production deployment.
       Use a production WSGI server instead.
     * Debug mode: off
     * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
  7. Download the test image.

     wget https://raw.githubusercontent.com/IBM/MAX-Object-Detector/master/samples/baby-bear.jpg
  8. Open a browser and visit http://localhost:5000/app. Choose the image that you just downloaded, then click Submit. You should see the object detector working.

    [Screenshot: the MAX Object Detector web app showing detected objects]

    It also works with a camera.

    [Animation: the MAX Object Detector running on a live camera feed]

And that’s it!
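You don’t have to use the web app, either: the container also exposes a REST API that your own scripts can call. The Python sketch below parses a sample prediction response and filters detections by confidence. The endpoint path, form-field name, and response fields follow the MAX Object Detector README and should be treated as assumptions to verify against your model version.

```python
import json

# Hypothetical JSON response from the model's REST endpoint
# (POST http://localhost:5000/model/predict with an "image" form field,
# per the MAX Object Detector README -- verify against your version).
response_text = """
{
  "status": "ok",
  "predictions": [
    {"label_id": "1", "label": "person",
     "probability": 0.944,
     "detection_box": [0.12, 0.30, 0.64, 0.48]},
    {"label_id": "18", "label": "dog",
     "probability": 0.891,
     "detection_box": [0.10, 0.21, 0.86, 0.41]}
  ]
}
"""

result = json.loads(response_text)

# Keep only detections above a confidence threshold.
confident = [p["label"] for p in result["predictions"]
             if p["probability"] > 0.9]
print(confident)
```

To get a real response from your running container, you could POST the test image with something like `curl -F "image=@baby-bear.jpg" -XPOST http://localhost:5000/model/predict` (again, check the README for the exact invocation).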

Keep playing with the Raspberry Pi

Some interesting use cases might be to mount the Pi on a robot for navigation, or simply to use it as a home security camera.

Check out the other MAX models for more inspiration on what to do with deep learning in an offline environment. Some of them already have support for the Raspberry Pi, such as the MAX Audio Classifier. If you would like to see more models on Raspberry Pi, don’t hesitate to send us your requests and ideas.