Classify vehicle damage images


Summary

The IBM Watson Visual Recognition service uses deep learning algorithms to analyze images for content such as objects, scenes, and faces. This code pattern presents an insurance industry use case: a custom classifier for analyzing vehicle damage. You will create a mobile application that takes a picture of vehicle damage and sends it to the insurance company to identify and classify the problem, for example, a flat tire, a broken window, or a dent.

Description

Image classification is a growing requirement for all kinds of organizations, including insurance companies. Classifying images gets easier with the IBM Watson Visual Recognition service.

The Visual Recognition service provides the ability to create custom classifiers by uploading sample images. In this code pattern, you'll explore a use case where an insurance company needs a custom classifier for analyzing vehicle damage.

You will create a mobile application using Apache Cordova, Node.js, and Watson Visual Recognition. The mobile application sends images of auto and motorcycle accidents and other vehicle issues to a server application for analysis by Watson Visual Recognition. The server application uses sample images to train a Watson Visual Recognition custom classifier that identifies various classes of issues, for example, vandalism, a broken windshield, a vehicle accident, or a flat tire. You can leverage this approach to create your own custom Watson Visual Recognition classifiers for your use cases.
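
For example, the training step might look like the following. This is a minimal sketch assuming the watson-developer-cloud Node.js SDK and ZIP archives of labeled example images; the class names, file paths, and environment variable are placeholders rather than the code pattern's actual values.

```javascript
// Minimal sketch: train a custom classifier from ZIP files of example images.
// Assumes the watson-developer-cloud Node.js SDK; class names and file paths are hypothetical.
const fs = require('fs');
const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  iam_apikey: process.env.VISUAL_RECOGNITION_IAM_APIKEY // read from the .env file
});

visualRecognition.createClassifier(
  {
    name: 'vehicle_damage',
    // Each "<class>_positive_examples" ZIP contains example images of that class.
    flattire_positive_examples: fs.createReadStream('./training/flattire.zip'),
    brokenwindshield_positive_examples: fs.createReadStream('./training/brokenwindshield.zip'),
    vandalism_positive_examples: fs.createReadStream('./training/vandalism.zip'),
    negative_examples: fs.createReadStream('./training/negatives.zip')
  },
  (err, classifier) => {
    if (err) {
      console.error(err);
    } else {
      // The returned classifier ID is what the server later classifies images against.
      console.log(JSON.stringify(classifier, null, 2));
    }
  }
);
```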

When you have completed this code pattern, you should know how to:

  • Create a Node.js server that can utilize the Watson Visual Recognition service for classifying images (see the sketch after this list)
  • Have a server initialize a Watson Visual Recognition custom classifier at startup
  • Create a Watson Visual Recognition custom classifier in an application
  • Create an Android mobile application that can send pictures to a server application for classification using Watson Visual Recognition
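
As a rough sketch of the first two points, a small Express server could accept an uploaded photo and pass it to the service. This assumes the express, multer, and watson-developer-cloud packages; the route, form field, port, and classifier ID are placeholders.

```javascript
// Minimal sketch: an Express endpoint that classifies an uploaded image.
// Assumes express, multer, and the watson-developer-cloud SDK; names and IDs are placeholders.
const express = require('express');
const multer = require('multer');
const fs = require('fs');
const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  iam_apikey: process.env.VISUAL_RECOGNITION_IAM_APIKEY
});

const app = express();
const upload = multer({ dest: 'uploads/' }); // writes the uploaded file to disk

app.post('/classify', upload.single('image'), (req, res) => {
  visualRecognition.classify(
    {
      images_file: fs.createReadStream(req.file.path),
      classifier_ids: ['vehicle_damage_123456789'], // placeholder custom classifier ID
      threshold: 0.5
    },
    (err, result) => {
      if (err) {
        res.status(500).json({ error: err.message });
      } else {
        res.json(result); // classes and confidence scores found in the image
      }
    }
  );
});

app.listen(3000, () => console.log('Classifier server listening on port 3000'));
```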

Flow


  1. The user captures an image with the mobile application.
  2. The user sends the image from the mobile phone to the server application running in the cloud (a sketch of this capture-and-upload step follows this list).
  3. The server sends the image to the Watson Visual Recognition service for analysis.
  4. The Watson Visual Recognition service classifies the image and returns the information to the server.
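
On the mobile side, steps 1 and 2 could be sketched as follows. This assumes the cordova-plugin-camera and cordova-plugin-file-transfer plug-ins; the server URL and field names are placeholders.

```javascript
// Minimal sketch of flow steps 1-2: capture a photo and upload it to the server.
// Assumes cordova-plugin-camera and cordova-plugin-file-transfer; the URL is a placeholder.
var SERVER_URL = 'https://my-vehicle-damage-server.example.com'; // replace with your server

function captureAndSend() {
  navigator.camera.getPicture(uploadImage, onError, {
    quality: 50,
    destinationType: Camera.DestinationType.FILE_URI,
    sourceType: Camera.PictureSourceType.CAMERA
  });
}

function uploadImage(fileUri) {
  var transfer = new FileTransfer();
  transfer.upload(
    fileUri,
    encodeURI(SERVER_URL + '/classify'),
    function (response) {
      // The server replies with the Watson Visual Recognition classification result.
      console.log('Classification: ' + response.response);
    },
    onError,
    { fileKey: 'image', fileName: 'damage.jpg', mimeType: 'image/jpeg' }
  );
}

function onError(err) {
  console.error('Capture or upload failed', err);
}
```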

Instructions

The basic steps to deploy this code pattern are listed below. Details are included in the README.md.

  1. Deploy the server application to IBM Cloud, or run it locally.
  2. Clone the repo.
  3. Create the Watson Visual Recognition service and name it.
  4. Add the Visual Recognition API key to the .env file (see the configuration sketch after this list).
  5. Install dependencies and run the server.
  6. Update config values for the mobile app.
  7. Install dependencies to build the mobile application.
  8. Run the mobile application build in the Docker container.
  9. Add the Android platform and plug-ins.
  10. Set up your Android device.
  11. Build and run the mobile application.
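
For steps 4 and 5, the server typically reads its credentials from the .env file at startup. The following is a minimal sketch assuming the dotenv package; the variable name is a placeholder, so check the README.md for the exact key the code pattern expects.

```javascript
// Minimal sketch of steps 4-5: load the API key from .env before starting the server.
// Assumes the dotenv package; the variable name is a placeholder (see README.md for the real one).
//
// .env (keep this file out of source control):
//   VISUAL_RECOGNITION_IAM_APIKEY=your-api-key-here

require('dotenv').config();

if (!process.env.VISUAL_RECOGNITION_IAM_APIKEY) {
  throw new Error('Missing VISUAL_RECOGNITION_IAM_APIKEY; add it to the .env file');
}

// The key is now available to the Visual Recognition client shown earlier.
console.log('Visual Recognition credentials loaded');
```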