Classify vehicle damage images  

Create a custom visual recognition classifier with Apache Cordova, Node.js, and Watson Visual Recognition

By Scott D’Angelo

Description

The IBM Watson™ Visual Recognition service uses deep learning algorithms to analyze images for content such as objects, scenes, and faces. This code pattern presents an insurance industry use case: a custom classifier for analyzing vehicle damage. You will create a mobile application that takes a picture of vehicle damage and sends it to the insurance company to identify and classify the problem, for example, a flat tire, a broken window, or a dent.

Overview

Image classification is a growing requirement for all kinds of organizations, including insurance companies. Classifying images gets easier with the IBM Watson Visual Recognition service.

The Visual Recognition service provides the ability to create custom classifiers by uploading sample images. In this code pattern, you’ll explore an interesting use case, where an insurance company requires a custom classifier for analyzing vehicle damage.
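For example, creating a custom classifier amounts to uploading .zip files of positive (and optionally negative) example images. Here is a minimal sketch using the Watson Node.js SDK (watson-developer-cloud); the class names, file paths, and environment variable are illustrative, not taken from this pattern's repository:

    // Sketch: train a custom classifier from zipped sample images.
    // Class names, paths, and the env variable name are illustrative.
    const fs = require('fs');
    const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

    const visualRecognition = new VisualRecognitionV3({
      version: '2018-03-19',
      iam_apikey: process.env.VISUAL_RECOGNITION_API_KEY // older instances use api_key
    });

    visualRecognition.createClassifier({
      name: 'vehicle_damage',
      // One "<class>_positive_examples" zip per class to be learned.
      flattire_positive_examples: fs.createReadStream('./data/flattire.zip'),
      brokenwindshield_positive_examples: fs.createReadStream('./data/brokenwindshield.zip'),
      // Optional zip of images that match none of the classes.
      negative_examples: fs.createReadStream('./data/negatives.zip')
    }, (err, classifier) => {
      if (err) return console.error(err);
      // Training is asynchronous; poll getClassifier() until status is 'ready'.
      console.log('Training started:', classifier.classifier_id);
    });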

You will create a mobile application using Apache Cordova, Node.js, and Watson Visual Recognition. The mobile application sends the images of auto and motorcycle accidents and other vehicle issues to be analyzed by a server application using Watson Visual Recognition. The server application uses the images to train Watson Visual Recognition to identify various classes of issues, for example, vandalism, a broken windshield, a vehicle accident, or a flat tire. You can leverage this to create your own custom Watson Visual Recognition classifiers for your use cases.
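As a rough sketch of the server side of that exchange (the route path, upload handling, and classifier ID below are assumptions, not this pattern's actual code), an Express endpoint can forward an uploaded photo to the service's classify call:

    // Sketch: an endpoint that classifies an uploaded image.
    // The route path, multer usage, and classifier ID are illustrative.
    const express = require('express');
    const multer = require('multer');
    const fs = require('fs');
    const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

    const app = express();
    const upload = multer({ dest: 'uploads/' });
    const visualRecognition = new VisualRecognitionV3({
      version: '2018-03-19',
      iam_apikey: process.env.VISUAL_RECOGNITION_API_KEY
    });

    app.post('/api/classify', upload.single('image'), (req, res) => {
      visualRecognition.classify({
        images_file: fs.createReadStream(req.file.path),
        classifier_ids: ['vehicle_damage_123456789'], // your custom classifier's ID
        threshold: 0.6
      }, (err, result) => {
        if (err) return res.status(500).json({ error: err.message });
        // result.images[0].classifiers[0].classes holds entries such as
        // { class: 'flattire', score: 0.87 }.
        res.json(result);
      });
    });

    app.listen(3000);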

When you have completed this code pattern, you should know how to:

  • Create a Node.js server that can utilize the Watson Visual Recognition service for classifying images
  • Have a server initialize a Watson Visual Recognition custom classifier at startup (see the sketch after this list)
  • Create a Watson Visual Recognition custom classifier in an application
  • Create an Android mobile application that can send pictures to a server application for classification using Watson Visual Recognition
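A minimal sketch of that startup check, assuming the Watson Node.js SDK and a 'vehicle_damage' classifier name; the trainClassifier() helper is hypothetical and stands in for the createClassifier() call shown earlier:

    // Sketch: reuse the custom classifier if it exists, otherwise train it.
    // The classifier name and trainClassifier() helper are hypothetical.
    function ensureClassifier(visualRecognition) {
      visualRecognition.listClassifiers({ verbose: true }, (err, response) => {
        if (err) return console.error(err);
        const existing = (response.classifiers || [])
          .find(c => c.name === 'vehicle_damage');
        if (existing) {
          console.log('Found classifier:', existing.classifier_id, existing.status);
        } else {
          trainClassifier(); // kick off createClassifier() as sketched above
        }
      });
    }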

Flow

  1. The user captures an image with the mobile application.
  2. The user sends the image from the mobile phone to the server application running in the cloud (a device-side sketch of these two steps follows this list).
  3. The server sends the image to the Watson Visual Recognition service for analysis.
  4. The Watson Visual Recognition service classifies the image and returns the information to the server.
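Steps 1 and 2 can be sketched on the device with the cordova-plugin-camera and cordova-plugin-file-transfer plug-ins; the server URL and form field name below are assumptions:

    // Sketch of steps 1-2: capture a photo and upload it for classification.
    // SERVER_URL and the 'image' field name are illustrative assumptions.
    var SERVER_URL = 'https://my-server.example.com/api/classify';

    function captureAndSend() {
      navigator.camera.getPicture(function (fileUri) {
        var options = new FileUploadOptions();
        options.fileKey = 'image';       // form field the server expects
        options.fileName = 'damage.jpg';
        options.mimeType = 'image/jpeg';

        new FileTransfer().upload(
          fileUri,
          encodeURI(SERVER_URL),
          function (result) { console.log('Classification:', result.response); },
          function (error) { console.error('Upload failed:', error.code); },
          options
        );
      }, function (err) {
        console.error('Camera failed:', err);
      }, {
        quality: 50,
        destinationType: Camera.DestinationType.FILE_URI
      });
    }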

Instructions

The basic steps to deploy this code pattern are listed below, followed by a command-line sketch. Details are included in the README file.

  1. Deploy the server application to IBM Cloud or locally.
  2. Clone the repo.
  3. Create the Watson Visual Recognition service and name it.
  4. Add the Visual Recognition API key to the .env file.
  5. Install dependencies and run the server.
  6. Update config values for the mobile app.
  7. Install dependencies to build the mobile application.
  8. Run the mobile application build in the Docker container.
  9. Add the Android platform and plug-ins.
  10. Set up your Android device.
  11. Build and run the mobile application.
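For orientation only, the command-line flow typically resembles the following; the environment variable name and directory layout are assumptions, and the README is authoritative:

    # Sketch of the deployment steps; exact names come from the README.
    git clone <repo-url> && cd <repo-dir>                 # step 2
    echo "VISUAL_RECOGNITION_API_KEY=<your-key>" > .env   # step 4 (name may differ)
    npm install && npm start                              # step 5
    cordova platform add android                          # step 9
    cordova plugin add cordova-plugin-camera cordova-plugin-file-transfer
    cordova run android --device                          # step 11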

Related Blogs

Live analytics with an event store fed from Java and analyzed in Jupyter Notebook

Event-driven analytics requires a data management system that can scale to allow a high rate of incoming events while optimizing to allow immediate analytics. IBM Db2 Event Store extends Apache Spark to provide accelerated queries and lightning fast inserts. This code pattern is a simple introduction to get you started with event-driven analytics. You can...


Creating an augmented reality résumé using Core ML and Watson Visual Recognition

Overview In June 2017, at the Apple Worldwide Developers Conference (WWDC), Apple announced that ARKit would be available in iOS 11. To highlight how IBM’s Watson services can be used with Apple’s ARKit, I created a code pattern that matches a person’s face using Watson Visual Recognition and Core ML. The app then retrieves information...


Related Links

Architecture center

Learn how this code pattern fits into the Cognitive Discovery reference architecture

https://www.ibm.com/devops/method/content/architecture/cognitiveDiscoveryDomain2/0_1

Watson Node.js SDK

Access the Node.js client library to use the Watson Developer Cloud services, a collection of APIs that use cognitive computing to solve complex problems.