This code pattern is part of the 2020 Call for Code Global Challenge.
Drones have become essential tools for first responders in search-and-rescue missions. In this code pattern, you learn how to use visual recognition to detect and tag S.O.S. messages from aerial images.
2017 was a year of record-breaking natural disasters, from Hurricanes Maria, Irma, and Harvey to the devastating forest fires in California. People all over the world suffer from tsunamis, tornadoes, floods, landslides, earthquakes, and volcanic eruptions, not to mention man-made disasters.
Aerial images have become crucial for search-and-rescue missions and disaster relief operations. However, not everyone has access to a helicopter or satellite imagery, so drones have become an essential tool for capturing aerial photos quickly and cheaply.
This code pattern shows you how to complete the following tasks:
- Use Cloud Annotations to train a visual recognition model that identifies universal aid symbols (like “S.O.S.”) using object detection.
- Stream and capture the video feed from a Tello drone.
- Configure a web app to run prediction against the video feed and view a dashboard of the results.
The flow of the code pattern is as follows:

1. The user generates sample images using Lens Studio.
2. The user uploads the images to Cloud Annotations, which trains a model and then exports a TensorFlow.js model.
3. The user adds the TensorFlow.js model to the web application.
4. The user connects the Tello drone to the computer and starts the web application.
5. The drone video feed is captured by the web application.
6. The video frames are analyzed by the TensorFlow.js model.
7. The web app UI displays the visual recognition analysis.
To complete the code pattern, you work through three main steps:

1. Use augmented reality (Lens Studio) to generate the imageset.
2. Train the model.
3. Deploy the dashboard.
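Once the model returns detections, the dashboard needs to draw them over the live video. The helper below is a hypothetical sketch of that last step: the function names and the normalized `[xMin, yMin, xMax, yMax]` box layout are assumptions for illustration (common in object-detection output), not the pattern's actual API.

```javascript
// Hypothetical helper: convert a normalized [xMin, yMin, xMax, yMax] box,
// as many object-detection models emit, into pixel coordinates for drawing
// on a <canvas> overlaid on the drone video.
function toPixelBox(box, frameWidth, frameHeight) {
  const [xMin, yMin, xMax, yMax] = box;
  return {
    left: Math.round(xMin * frameWidth),
    top: Math.round(yMin * frameHeight),
    width: Math.round((xMax - xMin) * frameWidth),
    height: Math.round((yMax - yMin) * frameHeight),
  };
}

// Keep only confident detections before rendering them on the dashboard.
function filterDetections(detections, minScore = 0.5) {
  return detections.filter((d) => d.score >= minScore);
}
```

For example, with a feed decoded at 960×720 (the Tello's typical video resolution), a detection at `[0.25, 0.5, 0.75, 1.0]` maps to a 480×360 box anchored at (240, 360) on the overlay canvas.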