The global Call for Code is well underway, and we want to share some visual recognition models that could help you. These AI models can run on the edge, which could be particularly useful for this year's theme: disaster preparedness. How could visual recognition help in relief work? From satellite and drone imagery analysis to damage classification, object detection, and counting, the possibilities are endless. Join us as we explore edge computing, why it is useful and important, and discover how you can implement your own models on the edge.

What is edge computing?

Edge computing sits between the cloud and the user; it is also sometimes known as fog computing. Your services can run on the very ‘edge’ of your network: on sensors, network switches, or other edge devices. Let’s dig a little deeper.

How does edge computing work?

Edge devices are those that can process data on their own, rather than relying on instructions from a centralised location. This means data can be processed at its source, away from the cloud or data warehouses. Because these devices are geographically distributed, the data stays close to the user, which cuts reaction and decision-making time: the enormous volume of data no longer needs to travel to the cloud and back.

Why is edge computing important?

The advantages of edge computing include decentralised processing, reduced latency, and optimised performance. Speed is one of the biggest draws. Edge computing also reduces device data traffic, improving bandwidth consumption. Other benefits include privacy and security: with edge computing you can anonymise data at the edge, protecting users and their data, and keep encrypted data closer to the network core, behind firewalls and other safety nets.

Who uses edge computing?

Some believe it’s far more efficient and practical to process data where it is collected. Because edge computing empowers end devices and sensors, it is especially important for the Internet of Things (IoT) and mobile, where it can solve many issues. Edge analytics rules can parse the data on the device and, depending on those rules, send only critical data and alerts to your IoT platform (see the sketch below). As the Internet of Things evolves, the rise of edge computing becomes inevitable. Consider some more specific uses such as voice assistants or autonomous driving, for example.
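Here is a minimal, hypothetical sketch of such an edge rule in Python, assuming a generic MQTT broker as the IoT platform endpoint; the broker address, topic, and threshold are illustrative placeholders rather than part of any specific IBM service.

# Hypothetical edge analytics rule: process sensor readings on the device and
# forward only critical alerts upstream, instead of streaming all raw data.
# Broker host, topic, and threshold are placeholders, not a specific IBM service.
import json
import random  # stands in for a real sensor driver in this sketch

import paho.mqtt.publish as publish

BROKER_HOST = "broker.example.com"    # placeholder MQTT broker / IoT platform
ALERT_TOPIC = "site/sensor-1/alerts"  # placeholder topic
TEMP_THRESHOLD_C = 60.0               # illustrative rule: alert above 60 degrees C

def read_temperature():
    # Replace with the actual sensor read on your edge device.
    return random.uniform(20.0, 80.0)

reading = read_temperature()
if reading > TEMP_THRESHOLD_C:  # the edge rule: only critical data leaves the device
    payload = json.dumps({"sensor": "sensor-1", "temperature_c": round(reading, 1)})
    publish.single(ALERT_TOPIC, payload, hostname=BROKER_HOST, port=1883, qos=1)

The point of the design is that raw readings never leave the device; only the alert crosses the network, which is what saves bandwidth and reaction time.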

Or consider the potential of edge computing when you augment edge devices with the power of Artificial Intelligence. You can bring AI to edge computing! Watson Deep Learning is part of IBM Watson Studio and is used for building and training AI models, which you can train in the cloud or even locally. In IBM Watson Studio you can use Open Source tools such as Jupyter Notebooks, RStudio, and Scala. Get started with AI and analysis tools.

Discover more about the importance of Open Source to IBM.

How can I implement AI into edge computing?

One example is edge computing with TensorFlow, an open-source Machine Learning library originating from the Google Brain team. Their experimental TensorFlow Lite for mobile lets you build AI scenarios on edge devices. Important considerations for your edge computing models include model size, memory usage, and battery usage.
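As a rough sketch of that workflow, assuming the current TensorFlow 2.x converter API (the original tooling was still experimental), the snippet below converts a small placeholder Keras model to the TensorFlow Lite format and reports the resulting file size; in practice you would convert the model you actually trained, for example in Watson Studio.

# Sketch: convert a trained Keras model to TensorFlow Lite for edge deployment.
# The tiny model below is a placeholder; substitute your own trained model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 damage categories
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink the model for edge devices
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"TensorFlow Lite model size: {len(tflite_model) / 1024:.1f} KB")

Printing the size is a quick way to check the model-size consideration mentioned above before you put anything on a device.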

Training TensorFlow with IBM Watson Studio

One of our Developer Advocates, Niklas Heidloff, has created some models combining TensorFlow with IBM Watson Studio. Let’s take a look at two different ways of deploying TensorFlow models on edge devices: first in browsers, then in iOS and Android apps. You can optimize the models for different operating systems.

For browsers: Training TensorFlow.js Models with IBM Watson

This sample uses the Watson Deep Learning service to train TensorFlow models and runs them in browsers for real-time predictions. Read the blog, fork the code on GitHub, and watch the demo on YouTube.
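The general pattern, sketched below with placeholder paths, is to train a Keras/TensorFlow model and then convert it to the TensorFlow.js web format so the browser can load it for client-side predictions; this uses the tensorflowjs Python converter and is not necessarily the exact pipeline from the sample.

# Sketch: convert a saved Keras model into the TensorFlow.js web format,
# which a browser can then load for client-side, real-time predictions.
# Requires: pip install tensorflowjs
import tensorflow as tf
import tensorflowjs as tfjs

# Placeholder path: load whatever model you trained (for example with Watson Deep Learning).
model = tf.keras.models.load_model("my_trained_model.h5")

# Writes model.json plus weight shards that tf.loadLayersModel() can fetch in the browser.
tfjs.converters.save_keras_model(model, "web_model")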

For iOS and Android: Deploying TensorFlow Models on Edge Devices

Explore how you can train TensorFlow models with the Watson Deep Learning service and run them on edge devices as native apps. Read the blog, fork the code on GitHub, and watch the demo on YouTube.
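On the device, a converted model is loaded by a TensorFlow Lite interpreter. The Python sketch below shows the equivalent inference flow with a placeholder model and input; the native iOS and Android apps use the corresponding platform APIs, and this is not the exact code from the sample.

# Sketch: run inference with a converted TensorFlow Lite model.
# Native iOS/Android apps follow the same flow with the platform TFLite APIs.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input: a random tensor shaped like the model expects.
input_data = np.array(np.random.random_sample(input_details[0]["shape"]), dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)

interpreter.invoke()

predictions = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", int(np.argmax(predictions)))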


We’ve also previously shared visual recognition code for an Anki Cozmo robot, with training and classification done via TensorFlow and a MobileNet model, running on Kubernetes and as an OpenWhisk function. Fork the complete source code on GitHub.
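For a rough idea of the classification step, the snippet below loads a pretrained MobileNet from Keras and classifies a single image against the generic ImageNet classes; it is not the custom-trained Cozmo model, and the image path is a placeholder.

# Sketch: classify an image with a pretrained MobileNet (ImageNet weights).
# The Cozmo sample trains its own classes; this just shows the inference pattern.
import numpy as np
from tensorflow.keras.applications.mobilenet import (
    MobileNet, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNet(weights="imagenet")

img = image.load_img("photo.jpg", target_size=(224, 224))  # placeholder image path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(x)
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.2f}")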

What is the Call for Code?

The past decade has been one of the worst periods for natural disasters, and while weather events may be inevitable, their consequences don’t have to be so catastrophic. The Call for Code is a multi-year global initiative: a rallying cry for developers to use their skills and mastery of the latest technologies, and to create new ones, to drive positive and long-lasting change across the world with their code. This competition is the first of its kind at this scale, encouraging developers who want to pay their skills forward on a specific mission to alleviate human suffering. Change the World. Join the Call for Code.


Developers worldwide, united to help disaster victims. The initiative has support from a cross-section of experts and humanitarian and international organizations, including the United Nations Human Rights Office and the American Red Cross’ International team. Hear from our partner organisations and find out where code can solve real problems. Discover Call for Code events in your area now.

Visual recognition and the Call for Code

Visual recognition technology can be used to assess an area for risk before building or otherwise modifying the environment. Make use of our open source roadmaps for solving complex programming challenges to kickstart your submission. You can also use these Blogs, Tech Talks, and How-tos to build your idea on top of other services provided by the IBM Cloud.
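As one hedged example of a starting point, the sketch below sends a drone photo to the Watson Visual Recognition service and prints the returned classes; the API key, service URL, and image path are placeholders for your own IBM Cloud instance, and you could equally call a custom-trained classifier here.

# Sketch: classify an aerial/drone photo with the Watson Visual Recognition service.
# Credentials, URL, and image path are placeholders for your own IBM Cloud instance.
# Requires: pip install ibm-watson
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),  # placeholder API key
)
visual_recognition.set_service_url(
    "https://api.us-south.visual-recognition.watson.cloud.ibm.com")  # placeholder URL

with open("drone_photo.jpg", "rb") as image_file:  # placeholder image path
    result = visual_recognition.classify(images_file=image_file).get_result()

for c in result["images"][0]["classifiers"][0]["classes"]:
    print(f'{c["class"]}: {c["score"]:.2f}')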

More Call for Code

Call for Code Technical Overview
Get started with the Call for Code!

More Edge Computing

Ready for the disruption from edge computing? via Scott Amyx
A guide to Edge Analytics

Tech Talks:

Get started with IBM technology: weather, IoT, bots, AI, and data science!

How to solve the world’s largest natural disaster challenges with code
Collect and analyze device sensor data to take corrective or preventative action automatically
Improve logistics based on traffic and weather activity to reduce the number of people affected
