Learning objective

This introductory tutorial explains how you can process image, video, audio, or text data by using deep learning models from the Model Asset Exchange in Node-RED flows.

Prerequisites

If you are not familiar with the Model Asset Exchange, this introductory article provides a good overview.

This tutorial uses Node.js and Node-RED, so make sure that you have a current version installed.

The tutorial was tested using Node.js version 10.13 and Node-RED version 0.19.5.

Estimated time

It should take you approximately 30 minutes to complete this tutorial.

Steps

This tutorial consists of the following steps:

  • Tutorial setup
  • Import the Model Asset Exchange nodes
  • Explore the image caption generator node
  • Generate an image caption
  • Explore other model nodes

Tutorial setup

  1. Open a terminal window, start Node-RED by running the node-red command, and open the Node-RED editor by pointing your browser to the displayed URL, such as http://127.0.0.1:1880.

  2. From the menu, select Manage palette.

    Access the palette

  3. In the Palette tab of the User Settings, select Install.

  4. Search for the node-red-contrib-browser-utils module and install it if you don’t already have it installed.
  5. Close the settings window.

    Import prerequisite modules

  6. Verify that the camera and file inject nodes are listed in the input category. You’ll use these nodes later in this tutorial.

    Lights - camera - action

Import the Model Asset Exchange nodes

The Model Asset Exchange nodes are published on npm.

  1. From the menu, select Manage palette.
  2. In the Palette tab of the User Settings, select Install.
  3. Search for the node-red-contrib-model-asset-exchange module and install it.
  4. Close the settings window.

    Several nodes should be displayed in the Model Asset eXchange category.

    Palette with MAX nodes

    Each Model Asset Exchange node uses the endpoints of a deep learning microservice that you can run in a local environment or in the cloud.

    A node exposes the functions of a deep learning microservice

To keep this tutorial simple, you’ll associate your nodes with hosted demo microservice instances. To learn how to run your own microservice instance in a local environment, refer to this tutorial.

Explore the image caption generator node

For illustrative purposes, you’ll be using the image caption generator node, which analyzes an image and generates a caption. If you prefer, you can follow along using a different node. The steps and concepts covered in this tutorial apply to all nodes.

  1. Drag the image caption generator node onto the workspace and review the displayed information. The node has one input and one output (the microservice response).

  2. Double-click the node to edit it. Two node properties are listed: service and method.

    Image caption generator node default configuration

  3. Edit the service node property to associate the node with an instance of the image caption generator microservice.

  4. By default, host is pre-populated with the URL of a hosted demo instance of this microservice, which you can use to explore its capabilities. Click Add to associate this demo instance with your node.

    Image caption generator node service configuration

    Never use hosted demo microservice instances for production workloads. To use your own (local or cloud) microservice instance, enter its URL (for example, http://127.0.0.1:5000 if you are running the service on your local machine using the pre-built Docker container from Docker Hub).

  5. The Model Asset Exchange microservices expose multiple endpoints, which are listed under the method property. By default, the predict method is selected, which analyzes the node’s input. From the drop-down list, select metadata to retrieve information about the microservice.

    Image caption generator node configuration

    Some methods accept optional parameters, such as a threshold value. (There are none for the image caption generator.)

  6. Close the node’s properties window.

  7. Drag an inject input node onto the workspace and connect its output to the input of the image caption generator node.

    The node name automatically changes to timestamp. (We are only using this node to run the flow. The metadata method of the microservices does not require any input.)

  8. Drag a debug output node onto the workspace and connect its input to the output of the image caption generator node.

    Your completed flow should look like the following image.

    This flow retrieves microservice metadata

  9. Deploy the flow.

  10. From the menu, select View > Debug messages to open the debug output view.

  11. Click the inject node’s button to inject a message into the flow and inspect the debug output. If your setup is correct, the output should look like:

     {
      "id":"im2txt-tensorflow",
      "name":"im2txt TensorFlow Model",
      "description":"im2txt TensorFlow model trained on MSCOCO",
      "license":"APACHE V2"
     }
    

You’ve just confirmed that the node can connect to the specified service. Next, you’ll modify the flow to generate an image caption.
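If you later replace the debug node with a function node, the metadata response can be unpacked in a few lines. The sketch below inlines the sample payload shown above (in a real function node, msg would be supplied by the Node-RED runtime):

```javascript
// Sketch: summarizing the metadata payload in a Node-RED function node.
// The sample response from this tutorial stands in for the live msg.
const msg = {
  payload: {
    id: 'im2txt-tensorflow',
    name: 'im2txt TensorFlow Model',
    description: 'im2txt TensorFlow model trained on MSCOCO',
    license: 'APACHE V2'
  }
};

// Turn the structured metadata into a one-line summary string.
function summarize(msg) {
  const { id, name, license } = msg.payload;
  return { payload: `${name} (${id}), licensed under ${license}` };
}

console.log(summarize(msg).payload);
// im2txt TensorFlow Model (im2txt-tensorflow), licensed under APACHE V2
```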

Generate an image caption

The image caption generator’s predict method analyzes the provided input image and generates a brief description of the image’s content.
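Under the hood, the node sends the selected image to the microservice as a multipart/form-data POST. The sketch below shows roughly how such a request body is assembled; the image form-field name and the JPEG content type are assumptions based on the typical MAX REST API, so check your service's Swagger UI for the exact contract.

```javascript
// Sketch: building a multipart/form-data body for a predict request.
// Assumptions (verify against your service's Swagger UI): the endpoint
// accepts an "image" form field containing the raw image bytes.
const CRLF = '\r\n';

function buildPredictBody(imageBuffer, boundary, fieldName = 'image') {
  const head = Buffer.from(
    `--${boundary}${CRLF}` +
    `Content-Disposition: form-data; name="${fieldName}"; filename="input.jpg"${CRLF}` +
    `Content-Type: image/jpeg${CRLF}${CRLF}`
  );
  const tail = Buffer.from(`${CRLF}--${boundary}--${CRLF}`);
  return Buffer.concat([head, imageBuffer, tail]);
}

// Illustration only: real image bytes would come from the file inject
// or camera node as a Buffer.
const body = buildPredictBody(Buffer.from('fake-image-bytes'), 'maxBoundary');
console.log(body.toString().includes('name="image"')); // true
```

In the flow itself you never write this code; the file inject (or camera) node supplies the image Buffer and the image caption generator node handles the request for you.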

  1. Double-click the image caption generator node to edit its properties.
  2. Change the selected method from metadata to predict.
  3. Replace the inject node with the file inject node (or the camera node – only supported in Chrome or Firefox) and connect its output to the image caption generator node.

    Configure file node

  4. Deploy the flow.

  5. Click the file inject (or camera) node’s button to select an image (or take a picture) and inspect the debug output, which should contain a caption describing the image.

     a man sitting on a bench with a dog .
    

    You can access the entire microservice JSON response through the message object.

  6. Open the debug node’s properties and change the output to complete msg object.

  7. Redeploy the flow and rerun it.
  8. Inspect the output. The message details contain a predictions array that contains generated captions.

     {
      "payload": "a man sitting on a bench with a dog .",
      "_msgid": "e8f93edc.7eddd",
      "statusCode": 200,
      ...
      "details": {
         "status": "ok",
         "predictions": [
             {
                 "index": "0",
                 "caption": "a man sitting on a bench with a dog .",
                 "probability": 0.00033009440665662435
             },
             {
                 "index": "1",
                 "caption": "a teddy bear sitting on a bench in a park",
                 "probability": 0.00009953154282018676
             }
         ]
      }
     }
    
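To work with just the most likely caption, you could wire a function node after the image caption generator node. This sketch uses the predictions structure shown in the sample response above; in the editor, only the topCaption logic would go in the function node, since msg arrives from the preceding node.

```javascript
// Sketch: picking the most likely caption in a Node-RED function node.
// `msg.details.predictions` mirrors the sample response in this tutorial.
const msg = {
  details: {
    status: 'ok',
    predictions: [
      { index: '0', caption: 'a man sitting on a bench with a dog .', probability: 0.00033009440665662435 },
      { index: '1', caption: 'a teddy bear sitting on a bench in a park', probability: 0.00009953154282018676 }
    ]
  }
};

function topCaption(msg) {
  // Sort defensively in case the service does not pre-sort predictions.
  const best = [...msg.details.predictions]
    .sort((a, b) => b.probability - a.probability)[0];
  return best ? best.caption : null;
}

console.log(topCaption(msg)); // a man sitting on a bench with a dog .
```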

Explore other model nodes

The node-red-contrib-model-asset-exchange module includes a couple of getting-started flows that you can import into your workspace.

  1. From the menu, select Import > Examples > model asset-exchange > getting started.

    Import Model Asset Exchange example flows

  2. Select one of the flows to import it into your workspace.

  3. Review the pre-configured flow, deploy, and run it.

Now that you have a basic understanding of how to use the Model Asset Exchange nodes, feel free to try out some of the more complex Raspberry Pi flows that we’ve published in the Node-RED library.

Summary

In this tutorial, you learned how to generate an image caption in a Node-RED flow using the Image Caption Generator microservice from the Model Asset Exchange. The tutorial showed you how to:

  • Import the node-red-contrib-model-asset-exchange module into your palette
  • Create a flow using a node from the Model Asset eXchange category
  • Inspect the output of the metadata and predict methods of that node