This tutorial shows how you can use drone aerial images, Watson Studio, and Watson Visual Recognition to survey wildfire-damaged neighborhoods and identify burned and intact homes.
Learning objectives
After completing this tutorial, you will be able to:
- Create a Visual Recognition model in Watson Studio running in IBM Cloud
- Capture images from a drone and zip them into a class
- Train a model to identify objects in the images
- Score and count the identified objects
Prerequisites
You can complete this tutorial using an IBM Cloud Lite account.
- Create an IBM Cloud account.
- Log in to IBM Cloud.
Estimated time
This tutorial should take approximately 15 minutes to complete.
Step 1: Learn about drones
There are many types of drones available, ranging from toys to industrial platforms. Many drones now include a camera that can store or stream aerial video to the ground. From the livestream, you can sample video frames and send the images to Watson Visual Recognition for classification (a frame-sampling sketch follows the list below).
- Pocket toy drones
  - Tello – Control a Tello Drone using Node-RED
- Hobbyist drones
- Commercial drones
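If your drone exposes a video stream, frame sampling can be done with a few lines of OpenCV. The following is a minimal sketch rather than part of the tutorial steps: the stream address, sampling interval, and output folder are placeholder assumptions you would replace with your own values.

```python
# Minimal frame-sampling sketch (requires OpenCV: pip install opencv-python).
# STREAM_URL is a placeholder; many drones expose an RTSP/UDP video feed.
import os
import cv2

STREAM_URL = "udp://0.0.0.0:11111"   # hypothetical Tello-style stream address
SAMPLE_EVERY_N_FRAMES = 30           # roughly one frame per second at 30 fps

os.makedirs("frames", exist_ok=True)
capture = cv2.VideoCapture(STREAM_URL)
frame_index = saved = 0

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    if frame_index % SAMPLE_EVERY_N_FRAMES == 0:
        # Save the sampled frame as a JPEG to send to Watson Visual Recognition later
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    frame_index += 1

capture.release()
```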
Step 2: Capturing images
One of the fun experiences of flying a drone is capturing video or pictures from a unique aerial perspective. You can use your drone to capture images of interesting objects that you want to train a visual recognition model to autonomously identify.
In this tutorial, I have created three ZIP files of pictures recorded by drones. I use these images of neighborhoods affected by the devastating 2018 West Coast wildfires as the training set (a packaging sketch for your own images follows at the end of this step).
- Aerial drone images of burned homes – BurnedHomes.zip
- Aerial drone images of intact homes – AerialHomes.zip
- Aerial drone images of forests, roads, and rivers, to be used for the negative class – NotHomes.zip
Source attribution: USA Today article, various internet sources
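If you capture your own footage instead of using the prepared files, each class needs its images bundled into its own ZIP archive. Here is a minimal packaging sketch; the per-class folder names (burned_homes, aerial_homes, not_homes) are hypothetical and should match however you sorted your sampled frames.

```python
# Package per-class image folders into the ZIP archives used for training.
# The folder names below are assumptions; adjust them to your own layout.
import zipfile
from pathlib import Path

CLASS_FOLDERS = {
    "BurnedHomes.zip": "burned_homes",   # positive examples: burned homes
    "AerialHomes.zip": "aerial_homes",   # positive examples: intact homes
    "NotHomes.zip": "not_homes",         # negative examples: forests, roads, rivers
}

for archive_name, folder in CLASS_FOLDERS.items():
    with zipfile.ZipFile(archive_name, "w", zipfile.ZIP_DEFLATED) as archive:
        for image_path in sorted(Path(folder).glob("*.jpg")):
            archive.write(image_path, arcname=image_path.name)
    print(f"Wrote {archive_name}")
```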
Step 3: Set up Watson Studio
In this section, we set up Watson Studio, create a project, and build a Watson Visual Recognition model to classify images into several classes.
Create Cloud Object Storage
Create a Cloud Object Storage instance from the IBM Cloud Catalog:
- Search for Object in the IBM Cloud Catalog.
- Click the Object Storage service tile.
- Click Create.
Create a Watson Studio service instance
Create a Watson Studio service instance from the IBM Cloud Catalog:
- Search for Studio in the IBM Cloud Catalog.
- Click the Watson Studio service tile.
- Click Create.
- After the Watson Studio service is created, click Get Started or visit Watson Studio.
- Log in with your IBM Cloud account.
- Walk through the introductory tutorial to learn about Watson Studio.
Watson Studio Projects
Projects are your workspace for organizing your resources: assets such as data, collaborators, and analytic tools like notebooks and models.
Create a new project
- Click Create a Project.
- Select the Standard tile and click Create Project.
- Name your project Wildfire Burned Homes. The Cloud Object Storage instance created in an earlier step should be prefilled.
- Click Create.
You are ready to set up your project with Watson Visual Recognition.
Add Visual Recognition to your Watson Studio project
To add Visual Recognition, click the Settings tab.
- Under Associated Services, click Add Service and choose Watson.
Provision a new Watson Visual Recognition service instance
- Choose the tile for Visual Recognition.
- Select the Lite plan and note the features.
- Scroll to the bottom and click Create.
Create a new Visual Recognition model
To create a new Visual Recognition model, click + Add to project and choose Image classification model.
Rename the Visual Recognition model
The Default Custom Model name is not descriptive, so let's rename it.
- Click the pencil icon to edit the name.
- Rename the model to Count Burned Homes.
Add custom classes to the Watson Visual Recognition model
- Click the + symbol to create a class.
- Name this class Burned Home.
- Click Create.
- Add a second custom class by clicking the + symbol again.
- Name this class Intact Home.
- Click Create.
Upload ZIP files to Watson Studio project
Three ZIP files have been prepared that contain aerial drone images. These files are:
- BurnedHomes.zip
- AerialHomes.zip
- NotHomes.zip

Upload these ZIP files to your Watson Studio project:
- Click Browse. Your operating system's native file dialog opens.
- Multi-select the three ZIP files: BurnedHomes.zip, AerialHomes.zip, and NotHomes.zip.
Drag the ZIP files to custom classes
- Grab the BurnedHomes.zip file from the right navigation and drag it to the Burned Home class. The images in the ZIP file are added to the Burned Home class.
- Grab the AerialHomes.zip file from the right navigation and drag it to the Intact Home class.
- Grab the NotHomes.zip file from the right navigation and drag it to the Negative class.
Train your Watson Visual Recognition custom classifier
- Click Train Model.
- Wait a few minutes for the model to train on the images.
- After the model has been trained, click the Click here link to view and test your model.
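As an aside, the same training data can also be submitted directly to the Visual Recognition v3 classifiers endpoint instead of clicking Train Model in the UI. This is a minimal sketch with the requests library, not the tutorial's Watson Studio flow: the API key is a placeholder, the class names are written without spaces, and a classifier created this way is managed outside your Watson Studio project.

```python
# Sketch: train a custom classifier directly against the Visual Recognition
# v3 REST API, using the same three ZIP files (alternative to the Watson Studio UI).
import requests

APIKEY = "your-apikey-here"   # placeholder: use your service credentials
URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers"

with open("BurnedHomes.zip", "rb") as burned, \
     open("AerialHomes.zip", "rb") as intact, \
     open("NotHomes.zip", "rb") as negatives:
    response = requests.post(
        URL,
        params={"version": "2018-03-19"},
        auth=("apikey", APIKEY),
        files={
            "BurnedHome_positive_examples": burned,   # positive class: burned homes
            "IntactHome_positive_examples": intact,   # positive class: intact homes
            "negative_examples": negatives,           # negative examples
        },
        data={"name": "CountBurnedHomes"},
    )

response.raise_for_status()
print(response.json())   # returns the new classifier_id and its training status
```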
Step 4: Test your model
- Review the Classes and Model details.
- Click the Test tab.
Test Watson Visual Recognition Custom Classifier with sample images
- Visit this UK Daily Mail article and download a few of the drone images of devastated California neighborhoods.
- Load the images into the Test page by browsing for them or dragging them onto the Test page.
- Inspect the scores returned by the Watson Visual Recognition custom classifier (a scripted version of this test is sketched below).
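You can run the same test from code by posting a downloaded image to the classify endpoint and printing the per-class scores. A minimal sketch: the API key and file name are placeholders, and the classifier ID is the one shown on the Implementation tab (used in the snippets later in this tutorial).

```python
# Sketch: score one downloaded test image against the custom classifier.
import requests

APIKEY = "your-apikey-here"                     # placeholder
CLASSIFIER_ID = "CountBurnedHomes_1382538940"   # shown on the Implementation tab
URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify"

with open("test_neighborhood.jpg", "rb") as image:   # hypothetical downloaded test image
    response = requests.post(
        URL,
        params={"version": "2018-03-19"},
        auth=("apikey", APIKEY),
        files={"images_file": image},
        data={"classifier_ids": CLASSIFIER_ID, "threshold": "0.0"},
    )

response.raise_for_status()
for image_result in response.json()["images"]:
    for classifier in image_result["classifiers"]:
        for detected in classifier["classes"]:
            print(f'{detected["class"]}: {detected["score"]:.2f}')
```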
Implement Watson Visual Recognition custom model in your applications
You can incorporate this Watson Visual Recognition Custom Classifier model into your applications using a variety of programming languages.
Click the Implementation tab to review the code snippets.
Use the following code snippets to classify images against your model. For reference, the full API specification is available.
API endpoint
https://gateway.watsonplatform.net/visual-recognition/api
Authentication
curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/visual-recognition/api/{method}"
Classify an image (GET)
curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?url=https://watson-developer-cloud.github.io/doc-tutorial-downloads/visual-recognition/fruitbowl.jpg&version=2018-03-19&classifier_ids=CountBurnedHomes_1382538940"
Classify an image (POST)
curl -X POST -u "apikey:{apikey}" -F "images_file=@fruitbowl.jpg" -F "threshold=0.6" -F "classifier_ids=CountBurnedHomes_1382538940" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?version=2018-03-19"
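To meet the final objective of counting the identified objects, you can loop the POST call above over a folder of survey images and tally each image by its highest-scoring class. This is a minimal sketch, not a definitive implementation: the folder name, threshold, and per-image counting logic are assumptions.

```python
# Sketch: classify a folder of aerial survey images and count burned vs. intact homes.
from collections import Counter
from pathlib import Path
import requests

APIKEY = "your-apikey-here"                     # placeholder
CLASSIFIER_ID = "CountBurnedHomes_1382538940"   # from the Implementation tab
URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify"

counts = Counter()
for image_path in sorted(Path("survey_images").glob("*.jpg")):   # hypothetical folder
    with open(image_path, "rb") as image:
        response = requests.post(
            URL,
            params={"version": "2018-03-19"},
            auth=("apikey", APIKEY),
            files={"images_file": image},
            data={"classifier_ids": CLASSIFIER_ID, "threshold": "0.6"},
        )
    response.raise_for_status()
    classifiers = response.json()["images"][0].get("classifiers", [])
    if classifiers and classifiers[0]["classes"]:
        # Count the image under its highest-scoring class
        top = max(classifiers[0]["classes"], key=lambda c: c["score"])
        counts[top["class"]] += 1

print(dict(counts))   # for example: {'Burned Home': 12, 'Intact Home': 8}
```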
Summary
This tutorial explained how you can use drone aerial images, Watson Studio, and Watson Visual Recognition to survey wildfire-damaged neighborhoods and identify burned homes and intact homes. You should now know how to create a Visual Recognition model in Watson Studio running in IBM Cloud, capture images from a drone and zip them into a class, train a model to identify objects in the images, and score and count the identified objects.