Learning objective

This introductory tutorial explains how you can process image, video, audio, or text data by using deep learning models from the Model Asset Exchange in Node-RED flows.

Prerequisites

If you are not familiar with the Model Asset Exchange, this introductory article provides a good overview.

You can complete this tutorial using a pre-configured demo Docker image or a local installation of Node.js and Node-RED.

Use pre-configured Docker image

Follow the instructions in the Getting Started section of the max-node-red-demo GitHub repository to download and run the pre-configured Docker image.

Use local installation of Node.js and Node-RED

Make sure you have current versions of Node.js and Node-RED installed. You can download the latest versions from the Node.js and Node-RED project websites.

This tutorial was tested using Node.js version 10.16, Node-RED version 0.20.8, and a current version of the Chrome browser.

Estimated time

It should take you approximately 30 minutes to complete this tutorial.

Steps

This tutorial consists of the following steps:

  • Tutorial setup
  • Import the Model Asset Exchange nodes
  • Explore the image caption generator node
  • Generate an image caption
  • Explore other model nodes
  • Consume multiple nodes in a flow

In the max-node-red-demo Docker image, the Tutorial setup and Import the Model Asset Exchange nodes steps have already been completed for you. Review (but don’t complete) the instructions in those sections if you are following this tutorial using that image.

Tutorial setup

  1. Open a terminal window, start Node-RED by running the node-red command, and open the Node-RED editor by pointing your browser to the displayed URL, such as http://127.0.0.1:1880.

  2. From the menu, select Manage palette.

    Access the palette

  3. In the Palette tab of the User Settings, select Install.

  4. Search for the node-red-contrib-browser-utils module and install it if you don’t already have it installed.
  5. Close the settings window.

    Import prerequisite modules

  6. Verify that the camera and file inject nodes are listed in the input category. You’ll use these nodes later in this tutorial.

    Lights - camera - action

Import the Model Asset Exchange nodes

The Model Asset Exchange nodes are published on npm.

  1. From the menu, select Manage palette.
  2. In the Palette tab of the User Settings, select Install.
  3. Search for the node-red-contrib-model-asset-exchange module and install it.
  4. Close the settings window.

    Several nodes should be displayed in the Model Asset eXchange category.

    Palette with MAX nodes

    Each Model Asset Exchange node uses the endpoints of a deep learning microservice that you can run in a local environment or in the cloud.

    A node exposes the functions of a deep learning microservice

To keep this tutorial simple, you’ll associate your nodes with hosted demo microservice instances.

The Get started with the Model Asset Exchange tutorial outlines how to run your own microservice instance in a local environment. The Deploy deep learning models on Red Hat OpenShift tutorial explains how to deploy your own microservice instance on Kubernetes in the cloud.
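
Under the hood, each node simply wraps the REST API of its microservice. If you're curious, you can query an instance directly from Node.js. The following is a minimal sketch, not part of the tutorial's flows; it assumes an instance is listening on 127.0.0.1:5000 (substitute the hosted demo URL or your own deployment):

    // Minimal sketch: query a MAX microservice's metadata endpoint directly.
    // Assumes an instance is running locally on port 5000.
    const http = require('http');

    http.get('http://127.0.0.1:5000/model/metadata', (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        // The same JSON that a node's metadata method returns in a flow
        console.log(JSON.parse(body));
      });
    }).on('error', (err) => console.error(err.message));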

Explore the image caption generator node

For illustrative purposes, you’ll be using the image caption generator node, which analyzes an image and generates a caption. If you prefer, you can follow along using a different node. The steps and concepts covered in this tutorial apply to all nodes.

  1. Drag the image caption generator node onto the workspace and review the displayed information. The node has one input and one output (the microservice response).

  2. Double-click the node to edit it. Two node properties are listed: service and method.

    Image caption generator node default configuration

  3. Edit the service node property to associate the node with an instance of the image caption generator microservice.

  4. By default, host is pre-populated with the URL of a hosted demo instance of this microservice, which you can use to explore its capabilities. Click Add to associate this demo instance with your node.

    Image caption generator node service configuration

    Never use hosted demo microservice instances for production workloads. To use your own (local or cloud) microservice instance, enter its URL (for example, http://127.0.0.1:5000 if you are running the service on your local machine using the pre-built Docker container from Docker Hub).

  5. The Model Asset Exchange microservices expose multiple endpoints, which are listed under the method property. By default, the predict method is selected, which analyzes the node’s input.

    Image caption generator node options

    Some methods accept optional input parameters, such as a threshold value. (There are none for the image caption generator.) Some method outputs can be customized. Consult the node’s help for detailed information.

  6. From the drop-down list, select metadata to retrieve information about the microservice.

    Image caption generator node configuration

  7. Close the node’s properties window.

  8. Drag an inject input node onto the workspace and connect its output to the input of the image caption generator node.

    The node name automatically changes to timestamp. (We are only using this node to trigger the flow; the metadata method of the microservice does not require any input.)

  9. Drag a debug output node onto the workspace and connect its input to the output of the image caption generator node.

    Your completed flow should look like the following image.

    This flow retrieves microservice metadata

  10. Deploy the flow.

  11. From the menu, select View > Debug messages to open the debug output view.

  12. Click the inject node’s button to inject a message into the flow and inspect the debug output. If your setup is correct, the output should look like:

    id: "max-image-caption-generator"
    name: "MAX Image Caption Generator"
    description: "im2txt TensorFlow model trained on MSCOCO"
    type: "Image-to-Text Translation"
    source: "https://developer.ibm.com/exchanges/models/all/max-image-caption-generator/"
    license: "Apache 2.0"
    

You’ve just confirmed that the node can connect to the specified service. Next, you’ll modify the flow to generate an image caption.

Generate an image caption

The image caption generator’s predict method analyzes the provided input image and generates a brief description of the image’s content.
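
Inside the flow, the node handles this call for you, but it can help to see what the predict method does on the wire. The following sketch posts a local image to the /model/predict endpoint using only Node.js built-ins; the image form-field name follows the MAX microservice APIs, and the host, port, and file name are assumptions you'd adjust:

    // Sketch: call the predict endpoint outside Node-RED by posting an
    // image as multipart/form-data. Some models also accept optional
    // query parameters (for example, a detection threshold); append them
    // to the path if the model's API supports them.
    const fs = require('fs');
    const http = require('http');

    const boundary = '----maxdemo' + Date.now();
    const image = fs.readFileSync('dog.jpg');   // any local test image
    const head = Buffer.from(
      `--${boundary}\r\n` +
      'Content-Disposition: form-data; name="image"; filename="dog.jpg"\r\n' +
      'Content-Type: image/jpeg\r\n\r\n');
    const tail = Buffer.from(`\r\n--${boundary}--\r\n`);
    const body = Buffer.concat([head, image, tail]);

    const req = http.request({
      host: '127.0.0.1', port: 5000, path: '/model/predict', method: 'POST',
      headers: {
        'Content-Type': `multipart/form-data; boundary=${boundary}`,
        'Content-Length': body.length
      }
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => { data += chunk; });
      res.on('end', () => console.log(data));   // JSON with predictions
    });
    req.on('error', (err) => console.error(err.message));
    req.end(body);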

  1. Double-click the image caption generator node to edit its properties.
  2. Change the selected method from metadata to predict.
  3. Replace the inject node with the file inject node (or the camera node, which is only supported in Chrome and Firefox) and connect its output to the image caption generator node.

    Configure file node

  4. Deploy the flow.

  5. Click the file inject (or camera) node’s button to select an image (or take a picture) and inspect the debug output, which should contain a caption describing the image.

     a man sitting on a bench with a dog .
    

    You can access the entire microservice JSON response through the message object.

  6. Open the debug node’s properties and change the output to complete msg object.

  7. Redeploy the flow and rerun it.
  8. Inspect the output. The message details include a predictions array that holds the generated captions.

     {
      "payload": "a man sitting on a bench with a dog .",
      "_msgid": "e8f93edc.7eddd",
      "statusCode": 200,
      ...
      "details": {
         "status": "ok",
         "predictions": [
             {
                 "index": "0",
                 "caption": "a man sitting on a bench with a dog .",
                 "probability": 0.00033009440665662435
             },
             {
                 "index": "1",
                 "caption": "a teddy bear sitting on a bench in a park",
                 "probability": 0.00009953154282018676
             }
         ]
      },
      "topic": "max-image-caption-generator"
     }
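
Inside a flow, a function node can reduce this response to just the parts you care about. A minimal sketch, using the property names from the sample output above:

    // Function node sketch: keep only the most probable caption.
    // Property names follow the sample response shown above.
    const predictions = msg.details && msg.details.predictions;
    if (Array.isArray(predictions) && predictions.length > 0) {
        // In the sample output, predictions are ordered by probability
        msg.payload = predictions[0].caption;
        msg.probability = predictions[0].probability;
        return msg;
    }
    return null;  // drop messages that contain no predictions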
    

Explore other model nodes

The node-red-contrib-model-asset-exchange module includes a couple of getting-started flows that you can import into your workspace.

  1. From the menu, select Import > Examples > model asset exchange > getting started.

    Import Model Asset Exchange example flows

  2. Select one of the flows to import it into your workspace.

  3. Review the pre-configured flow, then deploy and run it.

Consume multiple nodes in a flow

The basic flows you’ve explored so far use a single deep learning node. Let’s take a look at a more advanced usage scenario.

  1. From the menu, select Import > Examples > model asset exchange > beyond the basics > using-multiple-models.

    Advanced Model Asset Exchange flow

    In this scenario, the input (an image file or a picture taken with the camera) is processed by the Object Detection node. This node is configured to pass the input through to the Image Caption Generator node, which generates a caption that the debug node displays for illustrative purposes. The Object Detection node is also configured to generate an annotated version of the input image that contains a bounding box and label for each identified object.

    The two function nodes (Extract Input Image Data and Extract Bounding Box Image Data) map the Object Detection node’s outputs to inputs for the Image Caption Generator node and the image preview node. A rough sketch of such a mapping function appears after these steps.

  2. Open the object-detector node and review the node’s output configuration. The node’s help describes the generated output and how other nodes can access it.

    Object Detector output configuration settings

  3. Deploy and run the flow. An image caption should be displayed in the debug window, and a preview of the annotated image should appear on the canvas.

    Advanced Model Asset Exchange flow output

    If no annotated image is displayed, no objects were detected in the input.
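
For reference, a mapping function node of the kind used in this flow typically just copies data from one message property to another. The following is a rough, hypothetical sketch; the actual property names are defined by the Object Detection node's output configuration, so check the node's help for the real ones:

    // Hypothetical mapping sketch for a function node. The source property
    // name (msg.annotatedImage) is an assumption for illustration; the
    // image preview node expects the image data in msg.payload.
    if (msg.annotatedImage) {
        msg.payload = msg.annotatedImage;  // hand the annotated image downstream
        return msg;
    }
    return null;  // nothing to preview if no objects were detected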

Summary

In this tutorial, you learned how to generate an image caption in a Node-RED flow using the Image Caption Generator microservice from the Model Asset Exchange. The tutorial showed you how to:

  • Import the node-red-contrib-model-asset-exchange module into your palette
  • Create a flow using a node from the Model Asset eXchange category
  • Deploy and run a flow and inspect the generated outputs
  • Import sample flows
  • Customize node output options