Overview

Skill Level: Intermediate

This recipe assumes working knowledge of the Watson Visual Recognition Service and the Watson IoT Platform.

Elevators are fitted with an IoT device and a camera that periodically capture pictures of the elevator floor and feed them to the Watson Visual Recognition Service, which returns a score indicating how dirty the floor is.

Ingredients

Hardware

  • Raspberry Pi with Camera

Software

  • Bluemix Account
  • Visual Recognition Service on Bluemix
  • Cloudant Service on Bluemix
  • Watson IoT Service on Bluemix
  • Node-Red on Raspberry Pi

Service fee may apply - Estimated Monthly Costs: < $10

Information

More than 256MB RAM might be required to deploy this application. If the memory usage exceeds 256MB in the free trial accounts (Bluemix Trial Account and Standard Account), the application might not work as expected.

We suggest that you upgrade to a Pay-as-You-Go or Subscription account to enjoy the full range of Bluemix services.

Step-by-step

  1. Introduction

This recipe shows how to train custom classes for the Watson Visual Recognition (VR) Service, which then analyzes a given image and produces a score based on that analysis. We also use the Watson IoT Platform to send an email alert based on the image analysis score from the VR Service.

    Quickly Deploy relevant services to Bluemix

The IoT recipe discussed here uses the Create Toolchain button to deploy the necessary services to Bluemix. Click the Create Toolchain button below, give your application a custom name, and click Create to deploy the Watson IoT Platform, Node-RED, and the Cloudant NoSQL DB as part of a Bluemix starter app. After a successful deployment, all three services should be up and running in your Bluemix environment.

    Toolchain-8

Note: If you use the United Kingdom region in your Bluemix environment, follow the steps in the IoT recipe Deploy Internet of Things Platform Starter service on Bluemix to deploy the setup. Alternatively, you can use the Deploy to Bluemix button to deploy the setup in the United Kingdom region, provided your Jazzhub account is validated. Users of the US South region can ignore this step.

    deploy

However, neither the button(s) nor the IoT recipe mentioned above deploys the Watson Visual Recognition service. You will have to add that service to the application manually by choosing the Add New option under the Connections section of the deployed application.

This recipe is a continuation of the earlier recipe on Image Analysis. Refer to that recipe to get familiar with the prerequisites listed in the Ingredients section above.

Watch the VR video to learn how Watson VR works, and see the VR Service documentation for reference.

In this recipe, we mainly concentrate on training VR custom classes and using the VR service to produce a score. We define a cloud rule on the IoT Platform to send an email alert whenever a device event arrives with a score close to 1.

  2. Elevator Floor Maintenance flow

This section describes the steps involved in the cognitive solution for elevator floor maintenance. The image below shows the sequence of steps in the flow:

    flow

     

1. An elevator fitted with an IoT device (Raspberry Pi) and a camera captures a picture of the elevator floor periodically, say every minute.
2. The captured image is saved to disk on the IoT device.
3. The device feeds the captured image to the VR service for analysis and also stores the image in the Cloudant DB for future reference.
4. The VR Service custom class has already been trained with a set of dirty and clean elevator floor images. The VR Service analyzes the given image using the trained custom class and produces a score between 0 and 1; a score close to 1 indicates that the elevator floor is dirty. The VR score is sent to the IoT Platform.
5. A cloud rule defined on the IoT Platform consists of a condition to check and an action. The VR score received as part of the device event is checked by the rule condition to see whether it is close to 1.
6. If the VR score is close to 1, an email is sent to the maintenance team as part of the alert action, with the elevator details, notifying them that the elevator floor is dirty and needs cleaning.
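The device-side half of this flow can be sketched in a few lines. In the sketch below, classify_fn and publish_fn are injected stand-ins for the Watson VR classify call and the MQTT publish to the Watson IoT Platform (the recipe itself wires these up in Node-RED), and the {"d": {"value": ...}} event shape is an assumed WIoTP JSON payload:

```python
# Sketch of the device-side loop. classify_fn and publish_fn are
# injected stand-ins for the Watson VR classify call and the MQTT
# publish to the Watson IoT Platform; the recipe does this in Node-RED.

DIRTY_THRESHOLD = 0.85  # mirrors the cloud rule condition value > 0.85

def process_capture(image_bytes, classify_fn, publish_fn):
    """Classify one captured frame and publish the score as a device event."""
    score = classify_fn(image_bytes)   # 0.0 (clean) .. 1.0 (dirty)
    event = {"d": {"value": score}}    # assumed WIoTP JSON event shape
    publish_fn(event)
    return score > DIRTY_THRESHOLD     # True -> the cloud rule would fire

# Example with stubs: a score of 0.91 publishes the event and flags dirty.
published = []
dirty = process_capture(b"fake-jpeg-bytes",
                        classify_fn=lambda img: 0.91,
                        publish_fn=published.append)
```

The injected callables keep the sketch runnable without a camera or network; swapping in real VR and MQTT calls would not change the control flow.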
  3. How to train and create a custom classifier for VR Service?

In this section, we describe the steps involved in training and creating a custom classifier with the Watson VR Service. To do so, we need two categories of elevator floor images – a set of dirty images and a set of clean images – at least 10 of each; 200+ images of each category give a better score. Refer to the Guidelines for good training.

Let’s save all the dirty elevator floor images into an archive called floor_dirty.zip and all the clean images into another archive called floor_clean.zip. In this use case, we train the custom classifier with the dirty floor images as positive examples and the clean floor images as negative examples.
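Packing the two image sets into those archives can be scripted. In the sketch below, the folder names floor_dirty/ and floor_clean/ are illustrative assumptions, not names the recipe prescribes:

```python
import zipfile
from pathlib import Path

def zip_images(folder, archive):
    """Zip every .jpg in `folder` into `archive` (flat, filenames only)."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(Path(folder).glob("*.jpg")):
            zf.write(img, arcname=img.name)
    return archive

# zip_images("floor_dirty", "floor_dirty.zip")   # positive examples
# zip_images("floor_clean", "floor_clean.zip")   # negative examples
```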

The Watson VR Service provides REST APIs to carry out the required operations. To use any of the VR Service APIs, we need the Bluemix Visual Recognition Service credentials (API key).

Here is the POST call to create the custom classifier using our two archives, floor_dirty.zip and floor_clean.zip:

curl -X POST -F "dirty_positive_examples=@floor_dirty.zip" -F "negative_examples=@floor_clean.zip" -F "name=dirty" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key={api-key}&version=2016-05-20"

In the above curl command, replace the archive paths with the correct paths and {api-key} with the key from your Bluemix Watson Visual Recognition service.

    The response includes a new classifier ID and status:

{
  "classifier_id": "dirty_235093379",
  "name": "dirty",
  "owner": "76578209-c938-4f8c-83f8-0fbd5fc680e0",
  "status": "training",
  "created": "2016-10-13T07:49:26.225Z",
  "classes": [{"class": "dirty"}]
}

Make a note of the classifier_id, as we need this ID to check the training status and to feed images for analysis in the sections below.

Check the training status periodically until you see a status of "ready", using GET:

curl -X GET "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/dirty_235093379?api_key={api-key}&version=2016-05-20"

In the above curl command, replace {api-key} with the key from your Bluemix Watson Visual Recognition service. Note that the URL uses the classifier_id obtained when the custom classifier was created.
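The "check periodically until ready" step can be scripted as a small polling loop. In the sketch below, fetch_status is an injected stand-in for the GET call above, so the loop stays runnable without live HTTP calls:

```python
import time

def wait_until_ready(fetch_status, interval=10, max_attempts=30):
    """Poll until the classifier status becomes 'ready'.

    fetch_status() stands in for the GET /v3/classifiers/{id} call and
    returns the current status string: 'training', 'ready', or 'failed'.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError("classifier training failed")
        time.sleep(interval)  # wait before the next status check
    return False  # gave up: still training after max_attempts polls

# Example with a canned status sequence instead of live HTTP calls:
states = iter(["training", "training", "ready"])
ready = wait_until_ready(lambda: next(states), interval=0)  # ready is True
```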

    The response for GET should contain status as ready before we proceed further to analyze the images:

{
  "classifier_id": "dirty_235093379",
  "name": "dirty",
  "owner": "76578209-c938-4f8c-83f8-0fbd5fc680e0",
  "status": "ready",
  "created": "2016-10-13T07:49:26.225Z",
  "classes": [{"class": "dirty"}]
}

When the new classifier completes training, i.e. its status is ready, we can call it to see how it performs.

Create a JSON file called myparams.json that includes the parameters for our call, such as the classifier_id of our new classifier and the default classifier. A simple JSON file might look like the following:

{
  "classifier_ids": ["dirty_235093379", "me"]
}

We have a dirty image of the elevator floor saved with the name img-10.jpg, and we run the analysis on it using a POST call as shown below:

img-10

curl -X POST -F "images_file=@img-10.jpg" -F "parameters=@myparams.json" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key={api-key}&version=2016-05-20"

In the above curl command, replace {api-key} with the key from your Bluemix Watson Visual Recognition service. The response from this POST call is JSON containing the score for the custom class "dirty":

    vr_result
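Extracting the "dirty" score from that response takes a little JSON navigation. The response shape used below (images → classifiers → classes) follows the v3 classify API; since the recipe only shows a screenshot of the result, treat the sample values as illustrative:

```python
def dirty_score(response, class_name="dirty"):
    """Extract the score for class_name from a /v3/classify response.

    Returns None when the class is absent (e.g. its score fell below
    the API's reporting threshold).
    """
    for image in response.get("images", []):
        for clf in image.get("classifiers", []):
            for cls in clf.get("classes", []):
                if cls.get("class") == class_name:
                    return cls.get("score")
    return None

# A response shaped like the screenshot above (values illustrative):
sample = {"images": [{"image": "img-10.jpg", "classifiers": [
    {"classifier_id": "dirty_235093379", "name": "dirty",
     "classes": [{"class": "dirty", "score": 0.92}]}]}]}
# dirty_score(sample) -> 0.92
```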

     

That is how we train and create a custom classifier with the Watson VR Service. Once the classifier status is ready, we can use the classifier_id to analyze custom images as shown in this section. Here is the link to a detailed tutorial on custom classifiers.

     

  4. Registering IoT Device and Defining Cloud Rule with Watson IoT Platform

To send the score received from the Watson VR Service to the Watson IoT Platform, we need to register our IoT device (Raspberry Pi) on the Watson IoT Platform and then define a cloud rule to send an alert whenever the score from the VR service is close to 1.

    To register the device with Watson IoT Platform, refer to the recipe – Registering Devices with WIoTP.

To get familiar with Cloud Analytics in the Watson IoT Platform and to define and activate a cloud rule, refer to the Rules and Actions with WIoTP Cloud Analytics recipe.

Following the recipes recommended above, we should by now have registered our IoT device, say as piCam-3, as shown in the snippet below:

    iot-device

     

We have also added a schema for elevator devices, with value as the property that receives the score from connected devices:

    schema_details

     

Next, we define a cloud rule with the condition value > 0.85 and an alert action that sends an email to the maintenance team:

    cloud_rule

     

Finally, and most importantly, the defined cloud rule must be set to active on the Watson IoT Platform; we should see its status as activated, as shown in the snippet below:

    cloud_rule_status

     

So, per the cloud rule defined above, whenever WIoTP receives value > 0.85 from piCam-3, an email is sent to the maintenance team as the action associated with the cloud rule's alert.
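For illustration, the effect of the alert action can be mimicked locally. The helper below is hypothetical (it is not part of WIoTP, which composes the email itself); it only composes a notification when the score exceeds the rule's 0.85 threshold:

```python
def alert_email(device_id, value, threshold=0.85):
    """Compose the alert only when the rule condition (value > threshold) holds."""
    if value <= threshold:
        return None  # rule does not fire; no email
    subject = "Elevator floor dirty"
    body = (f"Device {device_id} reported a dirtiness score of {value:.2f} "
            f"(> {threshold}); the floor needs cleaning.")
    return subject, body

# alert_email("piCam-3", 0.91) composes the email; a score of 0.50 returns None.
```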

  5. Node-Red Flow for Elevator Floor Maintenance

In this section, we build the Node-RED flow for elevator floor maintenance. Make sure Node-RED is installed on the IoT device (Raspberry Pi), along with the following additional nodes on top of the default Node-RED nodes:

    • Node for Watson Visual Recognition
      • npm install node-red-node-watson
    • Node for Cloudant Database
      • npm install node-red-node-cf-cloudant
    • Node for Node-Red dashboard creation
      • npm install node-red-dashboard
    • Node for Watson IoT Platform
      • npm install node-red-contrib-ibm-watson-iot
    • Node for Base64 conversion
      • npm install node-red-node-base64

The Node-RED flow uses nodes from Visual Recognition, Cloudant, Dashboard, and Watson IoT. Fill in the required credentials obtained from the Bluemix services in the nodes wherever needed.

In the Node-RED flow, we build a dashboard to showcase the image analysis for the elevator floor. The dashboard has 4 buttons, each triggering one of the sub-flows described below:

    Analyze Image taken from IoT Device (Raspberry Pi Camera)  
    analyze_picam_img 

In this sub-flow, the camera attached to the IoT device (Raspberry Pi) is triggered to capture a real-time image of the elevator floor, which is saved to disk under /home/pi. After a few seconds' delay, the captured image is read into a buffer and given to the custom classifier in the VR Service for analysis, and the resulting VR score is sent to the Watson IoT Platform.

The captured image is saved to disk as /home/pi/picam_img.jpg. To customize the image name and path, update the path in the Click Floor Photo and Read Image nodes and redeploy the flow.

    picam_img_path

     

    Analyze Elevator Floor Dirty Image stored on disk at the path /home/pi
     analyze_dirty_img

In this sub-flow, we read the dirty elevator floor image stored on disk under /home/pi into a buffer, feed the buffer contents to the custom classifier in the VR Service for analysis, and send the resulting VR score to the Watson IoT Platform.

To provide a custom image, double-click the Read Dirty Image node, update the file path, and redeploy the flow.

    analyze_dirty_img

     

    Analyze Elevator Floor Semi Dirty Image stored on disk at the path /home/pi  
     analyze_picam_img

In this sub-flow, we read the semi-dirty elevator floor image stored on disk under /home/pi into a buffer, feed the buffer contents to the custom classifier in the VR Service for analysis, and send the resulting VR score to the Watson IoT Platform.

To provide a custom image, double-click the Read Semi Dirty Image node, update the file path, and redeploy the flow.

     

    analyze_dirty_img

     

    Analyze Elevator Floor Clean Image stored on disk at the path /home/pi
     analyze_picam_img

In this sub-flow, we read the clean elevator floor image stored on disk under /home/pi into a buffer, feed the buffer contents to the custom classifier in the VR Service for analysis, and send the resulting VR score to the Watson IoT Platform.

To provide a custom image, double-click the Read Clean Image node, update the file path, and redeploy the flow.

     

    clean_img_node

     

With all these sub-flows, the complete Node-RED dashboard looks as shown in the snippet below. It has an image template to show the image being analyzed and a gauge to show the score from the VR service:

    dash_board_view

     

The snippet below contains the complete Node-RED flow. The node marked in red is the Visual Recognition Service node; it takes the image data, analyzes it, and produces the score, which is then sent to the Watson IoT Platform (WIoTP):

    node_red_flow

     

Here is the JSON data for the above Node-RED flow; you can simply import it into Node-RED to build the flow:

To view the Node-RED dashboard, go to Menu -> View -> Dashboard; you should then see a Dashboard tab in the sidebar, as shown in the snippet below. Select the Dashboard tab and click the marked arrow in the image below to open the Node-RED dashboard:

    dash_board_select  dash_board_click

     

In this section, we covered installing the additional required nodes, the different sub-flows within the Node-RED flow, the complete Node-RED flow for image analysis, and how to open the Node-RED dashboard to carry out the analysis.

  6. Sample Elevator Floor Image Analysis Snippets

In this section, we provide sample snippets of elevator floor image analysis in the Node-RED dashboard.

    • Analyzing Elevator Floor Image captured using IoT Device Camera:
    anz_picam_img

     

    • Analyzing Elevator Floor Dirty Image stored on disk at path /home/pi:
    analyze_dirty_img

     

    • Analyzing Elevator Floor Semi Dirty Image stored on disk at path /home/pi:
    analyze_sdirty_img

     

    • Analyzing Elevator Floor Clean Image stored on disk at path /home/pi:
    analyze_clean_img
  7. Conclusion

In this recipe, we showcased how to use the Watson Visual Recognition Service and the Watson IoT Platform to provide a cognitive solution for elevator floor maintenance. As part of this recipe, we learned:

• Training and creating custom classifiers for the Watson Visual Recognition Service
• Analyzing custom images using the REST APIs provided by the Watson Visual Recognition Service
• Defining and activating cloud rules on the Watson IoT Platform
• Building a Node-RED flow to showcase the solution using Node-RED dashboard widgets
  8. Links to some of related Cognitive Recipes

Here are links to some of our other related cognitive recipes:

5 comments on"Automating Elevator Floor Maintenance with Cognitive Visual Recognition and Watson IOT"

  1. I’d like to deploy this application to my Bluemix environment. Where can I get these codes ? I couldn’t find it on GitHub…

  2. Recipes@WatsonIoT December 07, 2016

    Hi,
    You should get the Node-Red flow at this link – https://raw.githubusercontent.com/ibm-messaging/iot-device-samples/master/node-red/Elevator-samples/Elevator-floor-clean-classification-flow.json
    The link is also available in the recipe above.

  3. Nice article, What does a score of .52 mean ? Is it that the image is 50% close to matching the ones in the zip file ?

  4. Step 3 needs some additional documentation explaining the scoring if there is a good clean image as well,
    + Some description on why the scoring numbers show the way they are.

  5. Recipes@WatsonIoT December 12, 2016

    This recipe focuses on showing how to use Watson VR service to get score for custom trained images. For more details on the Watson VR Service, we have given link in Section – 3: http://www.ibm.com/watson/developercloud/doc/visual-recognition/index.shtml
