Create a web app to visually interact with objects detected using machine learning
Use an open source object detector deep learning model to display and filter objects recognized in an image in a web application
The IBM Model Asset eXchange (MAX) gives application developers without data science experience easy access to prebuilt machine learning models. This code pattern shows how to create a simple web application to visualize the output of a MAX model. The web app uses the Object Detector from MAX to display bounding boxes around objects detected in an image and lets you filter the objects by their label and the prediction probability reported by the model.
This code pattern uses one of the models from the Model Asset eXchange (MAX), an exchange where you can find and experiment with open source deep learning models. Specifically, it uses the Object Detector to create a web application that recognizes objects in an image and lets you filter those objects based on their detected label and prediction probability. The web application provides an interactive user interface backed by a lightweight Node.js server using Express. The server hosts the client-side web UI and relays API calls from the web UI to the model's REST endpoint, which is set up using the Docker image provided on MAX. The web UI takes in an image, sends it to the model's REST endpoint through the server, and displays each detected object with a bounding box and label. A toolbar lets you filter the detected objects by label or by a threshold on the prediction probability.
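The JSON that the relay passes back to the browser can be parsed with a small helper. This is a sketch based on the response shape documented for the MAX Object Detector (a `status` field plus a `predictions` array with `label`, `probability`, and a normalized `detection_box`); verify the field names against the Swagger docs for your model version:

```javascript
// Parse the JSON returned by the Object Detector's prediction endpoint.
// Assumed response shape (confirm against your model's API docs):
// {
//   "status": "ok",
//   "predictions": [
//     { "label": "person", "probability": 0.94,
//       "detection_box": [ymin, xmin, ymax, xmax] }  // values in 0..1
//   ]
// }
function parsePredictions(response) {
  if (response.status !== 'ok') {
    throw new Error('Model prediction failed: ' + response.status);
  }
  return response.predictions.map((p) => ({
    label: p.label,
    probability: p.probability,
    box: p.detection_box,
  }));
}
```

The web UI can then render each entry's `label` and `box` without caring how the server reached the model.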
When you have completed this code pattern, you should understand how to:
- Build a Docker image of the Object Detector MAX model
- Deploy a deep learning model with a REST endpoint
- Recognize objects in an image using the MAX model’s REST API
- Run a web application that uses the model’s REST API
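To draw the bounding boxes described above, the web UI has to scale the model's normalized coordinates to image pixels. A minimal sketch, assuming `detection_box` is `[ymin, xmin, ymax, xmax]` with values between 0 and 1 (the format documented for the MAX Object Detector; confirm for your model version):

```javascript
// Convert a normalized detection box to pixel coordinates for canvas drawing.
// box: [ymin, xmin, ymax, xmax], each value in the range 0..1.
function boxToPixels(box, imageWidth, imageHeight) {
  const [ymin, xmin, ymax, xmax] = box;
  return {
    x: Math.round(xmin * imageWidth),
    y: Math.round(ymin * imageHeight),
    width: Math.round((xmax - xmin) * imageWidth),
    height: Math.round((ymax - ymin) * imageHeight),
  };
}
```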
- The user submits an image through the web UI, which sends it to the Model API.
- The Model API returns the object data and the web UI displays the detected objects.
- The user interacts with the web UI to view and filter the detected objects.
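The filtering step in the flow above can be sketched as a pure function over the parsed detections. This is illustrative, assuming each detection is an object with `label` and `probability` fields:

```javascript
// Keep only detections whose label is among the selected labels and whose
// probability meets the slider threshold, mirroring the web UI's toolbar.
// An empty selectedLabels array means "show all labels".
function filterDetections(detections, selectedLabels, threshold) {
  return detections.filter(
    (d) =>
      d.probability >= threshold &&
      (selectedLabels.length === 0 || selectedLabels.includes(d.label))
  );
}
```

Keeping the filter client-side means adjusting the toolbar never re-runs the model; the UI just re-renders the boxes from the last response.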
Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README.