Survey flooded neighborhoods to identify survivors on rooftops and detect rescue boats

This tutorial shows how you can use drone aerial images, Watson Studio, and Watson Visual Recognition to survey flood-damaged neighborhoods, identify homes with survivors on rooftops, and detect rescue boats. Using GPS and visual recognition, you can direct the rescue boats to the flood victims.

Watson Studio screen capture

Learning objectives

After you complete this tutorial, you’ll be able to:

  • Create a Visual Recognition model in Watson Studio running in IBM Cloud
  • Capture images from a drone, create .zip files, and add them into a class (these .zip files are provided for you)
  • Train a model to identify objects in the images
  • Score the identified objects

Prerequisites

This tutorial can be completed by using an IBM Cloud Lite account.

Estimated time

You can complete this tutorial in about 45 minutes.

Steps

This tutorial involves these main steps:

  1. Learn about drones
  2. Capture images
  3. Create and train a visual recognition model
  4. Test your model
  5. Implement your model in your app

Step 1 – Learn about drones

Many types of drones are available, ranging from toys to industrial-grade devices. Many drones now include a camera that can store or stream aerial video to the ground. From the livestream, you can sample video frames and send the images to Watson Visual Recognition for classification.
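Because classifying every frame of a 30 fps stream would be wasteful, a common approach is to sample one frame every few seconds and send only those to the classifier. The helper below is a minimal, illustrative sketch of that sampling logic (the function name and parameters are my own); in a real application, you would decode the frames themselves with a video library such as OpenCV.

```python
def sample_frame_indices(total_frames, fps, seconds_between_samples):
    """Return the indices of the frames to pull from a video stream,
    taking one frame every `seconds_between_samples` seconds."""
    # Never skip fewer than one frame, even for very small intervals.
    step = max(1, round(fps * seconds_between_samples))
    return list(range(0, int(total_frames), step))
```

For example, a 10-second clip at 30 fps (300 frames), sampled every 2 seconds, yields frames 0, 60, 120, 180, and 240 to send for classification.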

Step 2 – Capture images

One of the fun experiences of flying a drone is capturing video or pictures from a unique aerial perspective. You can use your drone to capture images of interesting objects that you want to train a visual recognition model to autonomously identify.

For this tutorial, I created four .zip files of pictures that were recorded by drones. We will use these images to identify neighborhoods that were affected by the devastating flooding during Hurricane Harvey and Hurricane Katrina, flooding in the upper Midwest of the United States, and floods from around the world. These images will be used as our training set.

Source attribution: These images were collected from various internet sources.
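If you capture your own drone footage instead of using the provided archives, each class's images must be bundled into its own .zip file before uploading. This is a hedged sketch (the helper name is illustrative) that uses Python's standard zipfile module to do that:

```python
import zipfile
from pathlib import Path

def zip_class_images(image_dir, zip_path):
    """Bundle all .jpg/.jpeg/.png images in image_dir into a .zip file
    suitable for a Visual Recognition training class. Returns the count
    of images that were added."""
    extensions = {".jpg", ".jpeg", ".png"}
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(image_dir).iterdir()):
            if path.suffix.lower() in extensions:
                # Store by file name only, with no directory structure.
                zf.write(path, arcname=path.name)
                count += 1
    return count
```

For example, `zip_class_images("rooftop-survivors/", "rooftop-survivors.zip")` would bundle a folder of captured stills into one training archive.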

Step 3 – Use Watson Studio to create and train a visual recognition model

In this section, we create a Watson Studio service instance, a project, and a Watson Visual Recognition model that identifies images in several classes.

Create a Watson Studio service instance

  1. Search for Studio in the IBM Cloud Catalog.

    Watson Studio Catalog screen capture

  2. Click the Watson Studio service tile.

    Watson Studio Service screen capture

  3. Click Create.

  4. After the Watson Studio service is created, click Get Started or go to Watson Studio: https://dataplatform.cloud.ibm.com/.

    Watson Studio Launch screen capture

  5. Log in with your IBM Cloud account.

  6. Walk through the introductory tutorial to learn about Watson Studio.

    Watson Studio Welcome screen capture

Create a Watson Studio project

A project is your workspace for organizing your resources: assets such as data, collaborators, and analytic tools such as notebooks and models.

Create a new project

  1. Click Create a Project.

  2. Select the Visual Recognition tile, and then click Create Project.

    Watson Studio New project screen capture

  3. Select a region for visual recognition. New instances of Watson Visual Recognition and Cloud Object Storage services are created.

  4. Give your project a name. The new service instances will be prefilled.

  5. Click Create.

    Watson Studio New project screen capture

A visual recognition model is created for you.

Rename the Visual Recognition model

The Default Custom Model name is not descriptive, so let’s rename it.

  1. Click the pencil icon to edit the name.

    Watson Studio screen capture

  2. Rename the model to Flooding.

    Watson Studio screen capture

Add custom classes to the Watson Visual Recognition model

  1. Click the + symbol to create a class.

    Watson Studio screen capture

  2. Name this class Flooded Neighborhood.

  3. Click Create.

  4. Add a second custom class by clicking the + symbol again.

    Watson Studio screen capture

  5. Name this class Rooftop Survivors.

  6. Click Create.

  7. Add a third custom class by clicking the + symbol again.

  8. Name this class Rescue Boat.

  9. Click Create.

Upload the .zip files to your Watson Studio project

The prepared .zip files contain the aerial drone training images: flooded-neighborhood.zip, rooftop-survivors.zip, rescueboats.zip, and suburban-Neighborhood.zip.

  1. Click Browse. The file dialog for your operating system opens.

  2. Select all the .zip files.

  3. Upload the .zip files to your Watson Studio project.

    Watson Studio screen capture

    Watson Studio screen capture

Drag the .zip files to custom classes

  1. Grab the rooftop-survivors.zip file from the right navigation and drag it to the Rooftop Survivors class.

    Watson Studio screen capture

    The images in the .zip file are added to the Rooftop Survivors class.

    Watson Studio screen capture

  2. Grab the flooded-neighborhood.zip file from the right navigation and drag it to the Flooded Neighborhood class.

  3. Grab the rescueboats.zip file from the right navigation and drag it to the Rescue Boat class.

  4. Grab the suburban-Neighborhood.zip file from the right navigation and drag it to the Negative class.

    Watson Studio screen capture
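The same classes and training archives can also be supplied programmatically: the Visual Recognition v3 "create classifier" API expects one multipart form field per class, named {classname}_positive_examples, plus an optional negative_examples field. The helper below is an illustrative sketch of that mapping (the function name is my own, and this is not part of the tutorial's GUI flow):

```python
def training_form_fields(class_zips, negative_zip=None):
    """Map each class name to the multipart form-field name that the
    Visual Recognition v3 'create classifier' call expects, pointing
    at that class's training .zip file."""
    fields = {name + "_positive_examples": path
              for name, path in class_zips.items()}
    if negative_zip:
        # The negative class trains the model on what to reject.
        fields["negative_examples"] = negative_zip
    return fields
```

For example, passing the suburban-Neighborhood.zip archive as `negative_zip` plays the same role as dragging it to the Negative class in the GUI.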

Train your Watson Visual Recognition custom classifier

  1. Click Train Model.

  2. Wait a few minutes (typically 5 to 10) for the model to train on the images.

    Watson Studio screen capture

  3. After the model has been trained, click the Click here link to view and test your model.

    Watson Studio screen capture

Step 4 – Test your model

  1. After the model has been trained, click the Click here link or the Trained link to view and test your model.

    Watson Studio screen capture

  2. Review the Classes and Model details. Click the Test tab.

    Watson Studio screen capture

  3. Visit the Test Data directory of the drones-iot-visual-recognition GitHub repo, and download the testdata.zip file.

  4. Unlike the training .zip files, the images in this test data .zip file must be extracted to your local hard drive.

  5. Inspect a few of the drone images of flood zones. These images were not part of the training set and will be used to validate the visual recognition model.

  6. Upload the images into the Test page by browsing to select the files or dragging the image files into the Test page.

    Watson Studio screen capture

  7. Inspect the confidence scores returned by the Watson Visual Recognition Custom Classifier.

    The confidence score for each image is in the range of 0 to 1. A higher score indicates a greater likelihood that the class is depicted in the image. Don’t treat a confidence score as a measure of absolute truth; instead, think of it as a threshold for action. Confidence scores are subjective: they vary with the training images, the evaluation images, and the criteria that you want to classify. It is up to the developers of each solution to determine which confidence score values are appropriate thresholds for action.

    Watson Studio screen capture
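In practice, acting on a chosen threshold means filtering the JSON that the /v3/classify endpoint returns. The following sketch keeps only the classes that clear your threshold for action (the function name is my own, and the sample response shape is an assumption based on the v3 response format of images, classifiers, and classes):

```python
def classes_above_threshold(response, threshold=0.6):
    """Pull (class, score) pairs out of a /v3/classify JSON response,
    keeping only classes scored at or above the threshold, best first."""
    hits = []
    for image in response.get("images", []):
        for classifier in image.get("classifiers", []):
            for c in classifier.get("classes", []):
                if c["score"] >= threshold:
                    hits.append((c["class"], c["score"]))
    # Highest-confidence classes first.
    return sorted(hits, key=lambda hit: hit[1], reverse=True)
```

With a threshold of 0.6, a response scoring Flooded Neighborhood at 0.94 and Rescue Boat at 0.31 would surface only the flooded neighborhood for action.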

Step 5 – Implement this model in your application

You can incorporate this Watson Visual Recognition custom classifier into your applications by using a variety of languages and frameworks, such as Java, Node.js, Python, Ruby, or Core ML.

  1. Click the Implementation tab to review the code snippets.

    Watson Studio screen capture

  2. Use the following code snippets to classify images against your model. For reference, the full API specification is available.

In the IBM Cloud Dashboard, search for and open your instance of Watson Visual Recognition. Then, navigate into the Service Credentials section, and copy your API key (apikey) to use in the curl example commands below.

  • API endpoint

    https://gateway.watsonplatform.net/visual-recognition/api
    
  • Authentication

    curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/visual-recognition/api/{method}"
    
  • Classify an image (GET)

    curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?url=https://watson-developer-cloud.github.io/doc-tutorial-downloads/visual-recognition/fruitbowl.jpg&version=2018-03-19&classifier_ids=Flooding_418020421"
    
  • Classify an image (POST)

    curl -X POST -u "apikey:{apikey}" -F "images_file=@fruitbowl.jpg" -F "threshold=0.6" -F "classifier_ids=Flooding_418020421" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?version=2018-03-19"
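The curl examples above can be reproduced in any language with an HTTP client. As a small illustration (the base URL and classifier ID are the ones from the examples above; the helper name is my own), here is how the classify request URL is composed:

```python
from urllib.parse import urlencode

API_BASE = "https://gateway.watsonplatform.net/visual-recognition/api"

def classify_url(image_url=None, version="2018-03-19",
                 classifier_ids="Flooding_418020421"):
    """Compose the GET /v3/classify URL used in the curl examples.
    Substitute your own model's classifier ID from the Implementation tab."""
    params = {"version": version, "classifier_ids": classifier_ids}
    if image_url:
        params["url"] = image_url  # URL-encoded by urlencode below
    return API_BASE + "/v3/classify?" + urlencode(params)
```

Pair the resulting URL with your API key as HTTP basic auth (user `apikey`), exactly as the curl `-u` flag does.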
    

Summary

This tutorial explained how you can use drone aerial images, Watson Studio, and Watson Visual Recognition to survey flood-damaged neighborhoods, identify homes with survivors on rooftops, and detect rescue boats.

You learned how to create a Visual Recognition model in Watson Studio running in IBM Cloud, capture images from a drone and add .zip files of those images into a class, and train a model to identify objects in the images.

John Walicki