Archived | Analyze an image and send a status alert
Build an IoT app that uses serverless and visual recognition to analyze images and send alert notifications
Industrial and high-tech maintenance companies often photograph their sites for potential hazards or emergencies and then inform the appropriate person who can take action to resolve the issue. In this code pattern, you will build an app that loads images into an IBM Cloudant® database, analyzes them, and, based on the results, triggers an alert showing whether there is a danger that requires action.
Industrial and maintenance companies need to know what’s happening at their sites. A leak, fire, or malfunction can spell disaster for a company, resulting in dangerous situations for employees, downtime, public relations setbacks, and financial losses.
These companies have been leaders in using remote devices (phones, mounted cameras, drones) to send images of various sites and equipment to be monitored for malfunctions. But what if you could automatically analyze those images and send an alert about the location or potential emergency situation?
If you’re a developer working for a company that relies on site images, you can now build an application that analyzes an image and sends an alert automatically. In this code pattern, you’ll use IBM Cloud Functions to analyze an image and send the results to the Watson™ IoT Platform. The image will be assigned a score, and that score will be evaluated to trigger any necessary alerts to reach authorities through the best available communication channel (for example, email, text, or push notifications).
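The score-to-alert step can be sketched in a few lines of Node.js. Note that the 0.5 threshold, the `evaluateScore` function name, and the alert labels below are illustrative assumptions for this sketch, not values prescribed by the pattern:

```javascript
// Map a Visual Recognition class and confidence score to an alert
// decision. The threshold (0.5) and label text are assumptions chosen
// for illustration; tune them to your own risk tolerance.
function evaluateScore(className, score) {
  if (className === 'fire' && score >= 0.5) {
    return { alert: 'EMERGENCY ALERT!', notify: true };
  }
  return { alert: 'No action needed', notify: false };
}

console.log(evaluateScore('fire', 0.679)); // high-confidence fire detection
console.log(evaluateScore('fire', 0.12));  // low confidence, no alert
```

In the full pattern this decision runs inside the Node-RED flow, but keeping it as a small pure function makes the threshold easy to test and adjust.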
You have the option to develop a standalone application that can be easily updated or modified to work from within a smart device, or run it on a browser on your laptop or phone.
In the pattern use case, you’ll learn how to send an image for processing that detects a fire. (You can also use this same app for maintenance alerts or other emergency detections.) The fire is identified by the Watson Visual Recognition service, and a Node-RED app subsequently notifies the appropriate resources.
There are multiple ways to design this process, and you can modify the pattern to extend it to other real-world use cases, sending alerts to other designated recipients and creating additional designated channels for alert notifications.
You’ll create an app based on the following flow:
- The application takes an image from a device or uploads it from a local image folder to an IBM Cloudant NoSQL database.
- The Cloudant database, in turn, receives the binary data and triggers an action on IBM Cloud Functions.
- IBM Cloud Functions Composer performs the Visual Recognition analysis and receives a response in JSON format.
- The response is sent to the Watson IoT Platform, where it is registered as a device event carrying the analyzed image data.
- A Node-RED flow reads these events from the device on the IoT Platform and triggers alerts based on the image’s features. For example:
```
image: fire score: 0.679 alert: EMERGENCY ALERT! time: Tue Oct 24 2017 01:20:49 GMT+0000 (UTC)
```
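Inside the Node-RED flow, an alert line like the one above can be produced by a function node along these lines. The payload field names (`className`, `score`, `timestamp`) are assumptions about how the IoT event arrives; match them to the actual event format your device publishes:

```javascript
// Sketch of a Node-RED function-node body that turns an IoT device
// event into the alert line shown above. Field names on the payload
// are assumptions; adjust them to your actual event schema.
function buildAlert(payload) {
  const alert = payload.score >= 0.5 ? 'EMERGENCY ALERT!' : 'no alert';
  return 'image: ' + payload.className +
         ' score: ' + payload.score +
         ' alert: ' + alert +
         ' time: ' + new Date(payload.timestamp).toUTCString();
}

// In an actual Node-RED function node this would end with:
//   msg.payload = buildAlert(msg.payload); return msg;
console.log(buildAlert({
  className: 'fire',
  score: 0.679,
  timestamp: '2017-10-24T01:20:49Z'
}));
```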
- You can run the viz-send-image-app app locally or push it to the cloud. The folder contains the app UI that enables you to upload an image to the Cloudant database.
- Create a Node-RED package that includes the Cloudant service.
- Create IBM Cloud Functions from the IBM Cloud Catalog. Paste your credentials from Cloudant, IoT Platform, and Visual Recognition into the credentials.cfg file (in viz-openwhisk-functions) and the credentials.json file (in viz-send-image-app).
- Create the Watson Visual Recognition service from the IBM Cloud Catalog.
- Create the Watson IoT Platform and bind it to the Node-RED package.
- Paste the .json flow into your Node-RED editor. Ensure that the ibmiot node in Node-RED has the correct information from the Watson IoT Platform.
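The credentials files referenced in the steps above are simple key-value files. The sketch below shows the general shape a credentials.json might take; the key names here are placeholders, not the actual field names, so copy the real keys from the file shipped in the viz-send-image-app folder and fill in the values from your own service instances:

```json
{
  "cloudant": {
    "username": "<your-cloudant-username>",
    "password": "<your-cloudant-password>",
    "database": "<your-image-database>"
  }
}
```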
Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README.