Build a machine learning node for Node-RED using TensorFlow.js

Lowering the barrier to entry for artificial intelligence (AI) is a long-standing goal. Making AI more widely accessible not only increases the number of people who actually use it, but also helps spread its adoption across many different fields. The world of machine learning can be daunting at first, but there are several approaches that simplify the AI app development process. One of them is combining TensorFlow.js with Node-RED. This tutorial shows you how to use this combination to create AI-enabled Node-RED applications in various environments.

What is Node-RED?

Node-RED is an open source visual programming tool that offers a browser-based flow editor for wiring together devices, APIs, and services. It helps users visualize and design their event-driven applications. By providing a low-code style of application development, Node-RED can speed up development time and make app development more accessible to coders and non-coders alike. Because Node-RED is built on Node.js, you can extend its features by creating your own nodes or by taking advantage of the JavaScript and npm ecosystems.

While Node-RED really shines for IoT workloads with its ability to run on devices like the Raspberry Pi, it can also run on laptops and in cloud environments for any event-driven application scenario. One of the core components of Node-RED is the node, an essential building block for constructing flows, many of which are provided by the community. Each node has a well-defined purpose, usually taking some input and producing some output for use by other nodes. With enough of these nodes strung together, you can produce full-fledged applications. Learn more about using Node-RED.

Adding TensorFlow.js

While Node-RED provides the development environment, incorporating machine learning into your apps is another key component. TensorFlow.js fills this gap. Because both are built on the Node.js ecosystem, integrating the two technologies is seamless: a TensorFlow.js Node-RED node can easily be created, packaged, and uploaded to npm for sharing.

TensorFlow.js also provides the benefit of having models run directly on the device with no interaction with an external server or cloud. This alleviates most data security or Internet connectivity concerns. Also, with the growing availability of TensorFlow.js Node-RED nodes provided by the community, several different AI apps can be realized without writing a single line of code.

In this tutorial, we highlight:

  • Using publicly available TensorFlow.js Node-RED packages
  • Building your own TensorFlow.js Node-RED packages
  • Enabling TensorFlow.js on IoT devices
  • Deploying Node-RED to cloud environments

Prerequisites

To follow this tutorial, you must have:

  • Node.js (with npm) installed
  • A basic familiarity with Node-RED and TensorFlow.js

Steps

Use the following steps to complete this tutorial.

  1. Use existing TensorFlow.js Node-RED nodes
  2. Build a custom TensorFlow.js Node-RED node
  3. Run Node-RED with TensorFlow.js on IoT devices
  4. Deploy Node-RED with TensorFlow.js in the cloud

Use existing TensorFlow.js Node-RED nodes

For convenience, we have implemented several frequently used functions as Node-RED custom nodes. We briefly explain their usage with example flows to help you jump-start your TensorFlow.js Node-RED experience. Note that most of these custom nodes require @tensorflow/tfjs-node as a peer dependency, so be sure to have it installed in the Node-RED node environment before installing these custom node packages.
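
For a default installation, this typically means running the following from the Node-RED user directory (usually ~/.node-red):

cd ~/.node-red
npm install @tensorflow/tfjs-node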

Custom nodes

The example flows below rely on the following custom nodes:

  • tf-function: runs TensorFlow.js code in a function node, with the tf variable predefined
  • tf-model: loads a TensorFlow.js model from a URL and runs inference on it
  • post-object-detection: post-processes an object detection model’s output into an array of objects with bbox, className, and score properties
  • bert-tokenizer: converts text into the input tensors expected by a BERT model

Example flows

Object detection

The Object detection flow recognizes objects in an image and annotates objects with bounding boxes. An image can be loaded from a built-in camera, the file system, or by injecting the default image. Make sure that you have the node-red-contrib-browser-utils package installed for all of these input nodes to work. This flow uses three of the custom nodes mentioned above (tf-function, tf-model, and post-object-detection).

The loaded image is passed into the pre-processing node as msg.payload. The msg object is a JavaScript object that is used to carry messages between nodes. By convention, it has a payload property containing the output of the previous node. The pre-processing function node is an example of tf-function that directly calls the tf.node.decodeImage method with the predefined tf variable. The node produces a Tensor4D image representation as the payload and then passes it to the COCO SSD lite node, which is an instance of the tf-model custom node. This loads the COCO-SSD lite model.json from an external URL and runs inference on the model.
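
To make this concrete, the body of such a tf-function node might look like the following minimal sketch (assuming, as described above, that the node exposes the predefined tf variable and the standard msg object):

// Pre-processing inside a tf-function node: decode the image buffer
// and add a batch dimension to produce a Tensor4D payload
msg.payload = tf.node.decodeImage(msg.payload).expandDims();
return msg;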

The result of the model goes through the post-process node, which returns an array of objects with bbox, className, and score properties. The objects node adds an additional property, complete (set to true), to the msg along with the image object. Then, the bounding-box node draws bounding boxes on the input image and displays it in the browser.

Object detection flow

BERT sentiment analysis

The BERT sentiment analysis example flow uses a BERT sentiment model to classify the comments of a YouTube video and chart the results. In addition to the tf-function and tf-model nodes, we use another custom node, bert-tokenizer, to convert text into input tensors. Other packages needed for this flow are node-red-dashboard and youtube-comments-stream. Note that the BERT sentiment model loaded here is converted from the MAX-Text-Sentiment-Classifier SavedModel, which takes a named tensor map as input, as shown in the following code.

{
    input_ids_1 : tensor([1,128], "int32"),
    segment_ids_1 : tensor([1,128], "int32"),
    input_mask_1 : tensor([1,128],"int32")
}

The model returns a softmax output in a tensor array, representing the likelihoods of the input being positive and negative. The flow then counts the number of positive and negative comments and outputs the counts to a bar chart node. This bar chart node is from the node-red-dashboard package, which lets users create live data dashboards and widgets. The dashboard can be accessed from the /ui/ endpoint (for example, http://localhost:1880/ui/).
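
For example, the tallying logic in a function node might look like the following sketch (the exact payload format depends on the upstream nodes, so treat this as illustrative):

// Illustrative tally: payload holds the [negative, positive]
// probabilities for one comment from the softmax output
const [negative, positive] = msg.payload;
msg.topic = positive > negative ? 'positive' : 'negative';
return msg;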

comments-sentiment flow bar-chart

Note: For the Read Comments function node to work, ensure you have the following code in your Node-RED settings.js file.

functionGlobalContext: {
  commentsStream:require('youtube-comments-stream')
},
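
Inside the Read Comments function node, the stream can then be retrieved through Node-RED’s global context API. A minimal sketch, assuming the package exports a function that takes a video ID:

// Retrieve the comments stream exposed through functionGlobalContext
const commentsStream = global.get('commentsStream');
const stream = commentsStream(msg.payload); // msg.payload: the video ID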

Build a custom TensorFlow.js Node-RED node

In the previous section of this tutorial, you ran prepackaged TensorFlow.js Node-RED nodes. These nodes let you quickly get started so that you can perform many machine learning tasks with only basic knowledge of the model, TensorFlow.js, and Node-RED.

As useful as these nodes are, they address only some use cases. If your use case is not covered by existing nodes, you can create your own custom node. The full documentation for creating a custom Node-RED node is provided elsewhere and is not repeated here, but the following steps show sufficient details to highlight how to integrate the TensorFlow.js API. Your node imports the TensorFlow.js library for Node.js, loads a TensorFlow.js web model, and runs inference on the model.

For consistency, we use and expand on the COCO-SSD model that you learned about in the first tutorial in this series, “An introduction to AI in Node.js.” We’ll create a custom Node-RED node to perform object detection using the COCO-SSD TensorFlow.js model.

Components of a Node-RED node

A Node-RED node is a Node.js module/npm package that consists of three main files.

  • A JavaScript file that defines the node’s behavior
  • An HTML file laying out the node’s properties, edit dialog, and help text
  • A package.json file that describes the Node.js module’s metadata

The JavaScript file is where you would wrap your TensorFlow.js code. It would load the TensorFlow.js model and run the prediction. After all of the files are bundled and installed, the custom node is displayed in the editor, ready to be wired into a flow and deployed.

Initialize the custom node module

First, you must set up a Node-RED development environment. You develop the custom node and run Node-RED from this environment, which avoids potential conflicts with other custom nodes and saves you from installing the custom node into your global Node-RED environment during development.

From a terminal window:

  1. Create a new Node-RED project directory and go into this new directory.

     mkdir nodered-dev
     cd nodered-dev
    
  2. Initialize an npm package for the project and answer the questions as prompted (for example, package name nodered-dev).

     npm init
    
  3. Install the node-red and @tensorflow/tfjs-node dependencies.

     npm install node-red @tensorflow/tfjs-node
    
  4. Edit the newly created package.json file using VS Code or your favorite IDE.

  5. Add the start script that is used to launch Node-RED.

     {
       "name": "nodered-dev",
       ...
       "scripts": {
         "start": "node-red"
       }
       ...
     }
    

With the Node-RED project folder ready, you can begin the custom node.

  1. Create a directory inside the new Node-RED project directory for the custom node and go into this directory.

     mkdir node-red-contrib-tfjs-tutorial
     cd node-red-contrib-tfjs-tutorial
    

    Note: If you choose to use node-red in the name of your node, it is recommended to prefix it with node-red-contrib- to distinguish it from nodes maintained by the Node-RED project.

  2. Initialize an npm package for the custom node and answer the questions as prompted (for example, package name node-red-contrib-tfjs-tutorial).

     npm init
    
  3. Edit the newly created package.json file using VS Code or your favorite IDE.

  4. Add a node-red section that tells the runtime which node files the module contains, and also add a peerDependencies section for the @tensorflow/tfjs-node module.

     {
       "name": "node-red-contrib-tfjs-tutorial",
       ...
       "peerDependencies": {
         "@tensorflow/tfjs-node": "^1.7.2"
       },
       "node-red": {
         "nodes": {
           "tfjs-tutorial-node": "index.js"
         }
       }
     }
    

The package.json file is a standard Node.js module package file, with the only difference being the addition of the node-red section. With the package.json file initialized and configured, the next step is to define the node’s behavior.

Describe the custom node’s appearance

A node’s appearance is defined in an HTML file with three script tags. The file registers the node with the Node-RED editor and provides the template for the node’s Edit dialog and help text.

In your node’s directory:

  1. Create a new HTML file (that is, index.html), and open it in your IDE.

  2. Add the code to define the template for the node’s Edit dialog. Icon classes from Font Awesome 4.7 (for example, fa-tag) are available to use in the template. A <div class="form-row"> should be used to lay out each row of the dialog. Each property to be passed to the node should have an id in the format node-input-<propertyname>.

     <script type="text/html" data-template-name="tfjs-tutorial-node">
       <div class="form-row">
         <label for="node-input-name"><i class="fa fa-tag"></i> Name</label>
         <input type="text" id="node-input-name" placeholder="Name">
       </div>
       <div class="form-row">
         <label for="node-input-name"><i class="fa fa-globe"></i> Model Url</label>
         <input type="text" id="node-input-modelUrl" placeholder="https://modelurl/model.json">
       </div>
       <div class="form-row">
         <label for="node-input-name">Is a TFHub model url?</label>
         <input type="checkbox" id="node-input-fromHub" checked>
       </div>
     </script>
    
  3. Add the code to register the node with the editor. The category defines which group the node appears under in the editor’s palette. The inputs and outputs properties describe how many inputs and outputs the node has. The defaults object sets the default values for parameters that are used by the node and defined in the template.

     <script type="text/javascript">
       RED.nodes.registerType('tfjs-tutorial-node', {
         category: 'machine learning',
         defaults: {
           name: { value: 'tfjs tutorial node' },
           modelUrl: { value: 'https://tfhub.dev/tensorflow/tfjs-model/ssdlite_mobilenet_v2/1/default/1' },
           fromHub: { value: 'checked' }
         },
         inputs: 1,
         outputs: 1,
         paletteLabel: 'tfjs tutorial node',
         color: '#ff9100',
         label: function() {
           return this.name || 'tfjs tutorial node';
         }
       });
     </script>
    
  4. Add the help text for the node. This text appears when the user views the node’s information pane.

     <script type="text/html" data-help-name="tfjs-tutorial-node">
       <p>A TensorFlow.js node to run prediction using the Coco SSD model for object detection.</p>
       <p>Provide a <strong>Model URL</strong> and indicate whether the URL points to a TFHub hosted model.</p>
       <p>The node accepts an image buffer and outputs a JSON prediction object with detected objects and their bounding box and confidence score.</p>
     </script>
    

Define the custom node behavior

Next, define the behavior of the node (in a function that needs to be registered with the runtime). The function is called whenever a new instance of the node is created and is passed the properties set in the flow editor and Edit dialog. The function should call RED.nodes.createNode to initialize the features shared by all nodes.

This node function is wrapped in a Node.js module. The module exports a function that is called when the runtime loads the node on start up. The exported function is passed a single argument that provides the module access to the Node-RED runtime API.

In your node’s directory:

  1. Create the index.js file (referenced earlier in package.json) and open it in your IDE.

  2. Add the following code into the newly created JavaScript file to create the node function, register the node function, and export the node.

     // export the node module
     module.exports = function(RED) {
       // define the node's behavior
       function TfjsTutorialNode(config) {
         // initialize the features
         RED.nodes.createNode(this, config);
         // keep a reference to this node instance
         const node = this;

         // register a listener to get called whenever a message arrives at the node
         node.on('input', function (msg) {
           // handle incoming message
         });
       }

       // register the node with the runtime
       RED.nodes.registerType('tfjs-tutorial-node', TfjsTutorialNode);
     }
    
  3. Create a new JavaScript file (that is, tfjs-tutorial-util.js) and open it in your IDE. This file uses most of the code written in a previous tutorial (to load the model, preprocess the input, and process the output). To learn more about the code, visit the first tutorial in this series.

  4. Add code to this new file to load the tfjs-node library and preprocess the input. The code also references a labels.js file, which contains a mapping of the object labels to the index value/id returned by the model. You can find the labels.js file here; an abridged sketch follows the code below.

     const tf = require('@tensorflow/tfjs-node');
     const labels = require('./labels.js');
    
     // load COCO-SSD graph model from TensorFlow Hub
     const loadModel = async function (modelUrl, fromTFHub) {
       console.log(`loading model from ${modelUrl}`);
    
        let model;
        if (fromTFHub) {
          model = await tf.loadGraphModel(modelUrl, {fromTFHub: true});
        } else {
          model = await tf.loadGraphModel(modelUrl);
        }
    
       return model;
     }
    
     // convert image to Tensor
     const processInput = function (imageBuffer) {
       console.log(`preprocessing image`);
    
       const uint8array = new Uint8Array(imageBuffer);
    
       return tf.node.decodeImage(uint8array, 3).expandDims();
     }
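
     The labels.js file is simply a map from the model’s class indexes to human-readable names. The following is an abridged, illustrative sketch (the real file covers all of the COCO classes, and the exact indexes follow the model’s output).

      // labels.js (abridged, illustrative): maps class ids to label names
      module.exports = {
        1: 'person',
        2: 'bicycle',
        3: 'car'
        // ... remaining COCO classes
      };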
    
  5. Continue to edit the file to add code to process the model prediction output and export the functions.

     const maxNumBoxes = 5;
    
     // process the model output into a friendly JSON format
     const processOutput = function (prediction, height, width) {
       console.log('processOutput');
    
       const [maxScores, classes] = extractClassesAndMaxScores(prediction[0]);
       const indexes = calculateNMS(prediction[1], maxScores);
    
       return createJSONresponse(prediction[1].dataSync(), maxScores, indexes, classes, height, width);
     }
    
     // determine the classes and max scores from the prediction
     const extractClassesAndMaxScores = function (predictionScores) {
       console.log('calculating classes & max scores');
    
       const scores = predictionScores.dataSync();
       const numBoxesFound = predictionScores.shape[1];
       const numClassesFound = predictionScores.shape[2];
    
       const maxScores = [];
       const classes = [];
    
       // for each bounding box returned
       for (let i = 0; i < numBoxesFound; i++) {
         let maxScore = -1;
         let classIndex = -1;
    
         // find the class with the highest score
         for (let j = 0; j < numClassesFound; j++) {
           if (scores[i * numClassesFound + j] > maxScore) {
             maxScore = scores[i * numClassesFound + j];
             classIndex = j;
           }
         }
    
         maxScores[i] = maxScore;
         classes[i] = classIndex;
       }
    
       return [maxScores, classes];
     }
    
     // perform non maximum suppression of bounding boxes
     const calculateNMS = function (outputBoxes, maxScores) {
       console.log('calculating box indexes');
    
       const boxes = tf.tensor2d(outputBoxes.dataSync(), [outputBoxes.shape[1], outputBoxes.shape[3]]);
       const indexTensor = tf.image.nonMaxSuppression(boxes, maxScores, maxNumBoxes, 0.5, 0.5);
    
       return indexTensor.dataSync();
     }
    
     // create JSON object with bounding boxes and label
     const createJSONresponse = function (boxes, scores, indexes, classes, height, width) {
       console.log('create JSON output');
    
       const count = indexes.length;
       const objects = [];
    
       for (let i = 0; i < count; i++) {
         const bbox = [];
    
         for (let j = 0; j < 4; j++) {
           bbox[j] = boxes[indexes[i] * 4 + j];
         }
    
         const minY = bbox[0] * height;
         const minX = bbox[1] * width;
         const maxY = bbox[2] * height;
         const maxX = bbox[3] * width;
    
         objects.push({
           bbox: [minX, minY, maxX, maxY],
           label: labels[classes[indexes[i]]],
           score: scores[indexes[i]]
         });
       }
    
       return objects;
     }
    
     module.exports = {
       loadModel: loadModel,
       processInput: processInput,
       processOutput: processOutput
     }
    
  6. Update the index.js file, and import the tfjs-tutorial-util.js file, load the model, and run the prediction when an input message is received.

     // export the node module
     module.exports = function(RED) {
       // import the helper module (note the relative path)
       const tfmodel = require('./tfjs-tutorial-util.js');

       // load the model
       async function loadModel (config, node) {
         node.model = await tfmodel.loadModel(config.modelUrl, config.fromHub);
       }

       // define the node's behavior
       function TfjsTutorialNode(config) {
         // initialize the features
         RED.nodes.createNode(this, config);
         const node = this;

         loadModel(config, node);

         // register a listener to get called whenever a message arrives at the node
         node.on('input', function (msg) {
           // preprocess the incoming image
           const inputTensor = tfmodel.processInput(msg.payload);
           // get image/input shape
           const height = inputTensor.shape[1];
           const width = inputTensor.shape[2];

           // get the prediction
           node.model
             .executeAsync(inputTensor)
             .then(prediction => {
               msg.payload = tfmodel.processOutput(prediction, height, width);
               // free the input tensor to avoid leaking memory across messages
               inputTensor.dispose();
               // send the prediction out
               node.send(msg);
             });
         });
       }

       // register the node with the runtime
       RED.nodes.registerType('tfjs-tutorial-node', TfjsTutorialNode);
     }
    

Test your custom node

With your custom node’s behavior and appearance defined, the node is ready to be installed, added to a flow, and tested. You can test your node while developing it by linking it to your local Node-RED environment. This lets you continue development of your node and have changes picked up just by restarting Node-RED.

From a terminal window:

  1. Go to your Node-RED project directory (that is, nodered-dev).

  2. Install your custom node-red-contrib-tfjs-tutorial node from its local directory.

     npm install ./node-red-contrib-tfjs-tutorial
    
  3. Launch Node-RED.

     npm start
    
  4. In the palette, you should find your custom node (that is, tfjs tutorial node). Use it to create a flow that passes an image buffer to the custom node. For example, you can connect a File in node configured with a local image file (and output set to a Buffer object).

  5. Deploy and run the flow to see the output from the custom node.

    custom flow
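
     If everything is wired correctly, the custom node outputs a msg.payload containing an array of detected objects in the format produced by processOutput, for example (values are illustrative):

     [
       { bbox: [16.6, 1.9, 520.7, 416.2], label: 'person', score: 0.93 }
     ]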

Note: If you have an existing Node-RED environment and want to install the local Node-RED custom node into that environment, then use the yalc package. Yalc is the recommended way of working with local packages without publishing anything to a remote registry.

# Install.
npm i -g yalc

# In the custom node directory (i.e. node-red-contrib-tfjs-tutorial/), run the following.
yalc publish

# In Node-RED environment directory, add the custom node as a dependency.
yalc add node-red-contrib-tfjs-tutorial

You can find the complete custom node (tfjs tutorial node) in this repository. Learn more about creating custom Node-RED nodes in the Node-RED documentation.

Run Node-RED with TensorFlow.js on IoT devices

Many edge devices now have enough computational power to run Node-RED, and some come with powerful graphics processing units (GPUs), making them suitable for machine learning applications. The Node.js back end of TensorFlow.js lets you integrate the native TensorFlow shared library and use the full power of these devices.

There are two available options depending on your device’s hardware specification: CPU or GPU acceleration. Both options rely on the same fundamental component, TensorFlow shared libraries. However, on edge devices, there are currently some caveats for TensorFlow.js, which we discuss below.

Obtaining the TensorFlow shared library

When you install the @tensorflow/tfjs-node package, it installs the appropriate TensorFlow shared library for the CPU architecture of your device. Currently, TensorFlow.js supports these major server configurations: Ubuntu on x86_64, macOS on x86_64, and Windows 7 or higher on x86_64. For more details about supported platforms, look at this document. If you try to install the npm package on an unsupported platform, you get the following error message.

UnhandledPromiseRejectionWarning: Error: Unsupported system: cpu-linux-arm64

In this case, the npm package was not installed completely. The JavaScript libraries are installed, but the native shared libraries and the Node.js binding are missing. In the following sections, we provide instructions to enable the native TensorFlow shared library support for these architectures:

  • ARM32 – armv7l
  • ARM64 with Nvidia GPU – aarch64

CPU approach – on Raspberry Pi

The ARM 32-bit architecture is commonly used in edge devices, the most popular being the Raspberry Pi. Although the Raspberry Pi 4 uses the 64-bit Cortex-A72 (ARM v8), the Raspbian Buster OS is 32-bit. @tensorflow/tfjs-node supported ARM 32-bit before v1.4.0, but this support was dropped in later versions. To work around this problem with versions after v1.4.0, you can install the community-supported ARM 32-bit binary build of the TensorFlow shared library.

  1. Install the prerequisites.

    sudo apt update && sudo apt install python2 build-essential
    

    You need tools and compilers from these packages to build the Node.js binding.

  2. Locate the @tensorflow/tfjs-node package. Usually, it’s under node_modules/@tensorflow/tfjs-node within the directory where you ran the npm install command. In this case, it’s the directory where you created the Node-RED project.

  3. Switch to @tensorflow/tfjs-node package’s directory.

    cd node_modules/@tensorflow/tfjs-node
    
  4. You must provide a file named custom-binary.json under the scripts directory with the following contents.

    {
      "tf-lib": "https://s3.us.cloud-object-storage.appdomain.cloud/tfjs-cos/libtensorflow-cpu-linux-arm-1.15.0.tar.gz"
    }
    

    The URL in the previous code points to the precompiled TensorFlow shared libraries: v1.15.0 builds for the ARM 32-bit architecture.

  5. Run the following command to fetch the prebuilt shared libraries and build the Node.js binding.

    npm install
    

Now, the @tensorflow/tfjs-node package is using native TensorFlow shared libraries.

Note: At the time of writing this tutorial, the latest version of @tensorflow/tfjs-node is v1.7.1 and still depends on TensorFlow v1.15.0. You can check the LIBTENSORFLOW_VERSION variable in the scripts/deps-constants.js file to see which TensorFlow version is needed. @tensorflow/tfjs-node packages from v1.4.0 to v1.7.1 depend on TensorFlow v1.15.0.

GPU approach – on Jetson Nano

The Jetson Nano is a small, powerful computer for embedded applications and AI IoT. Provided by NVIDIA, it is designed to run multiple neural networks in parallel using a quad-core ARM A57 CPU and a 128-core Maxwell GPU. By using the NVIDIA JetPack SDK, you can boot the device into Ubuntu 18.04 with the proper GPU driver, CUDA, and cuDNN libraries. To fully utilize its GPU for model computation, use the following instructions, which link TensorFlow.js to the native TensorFlow shared libraries.

  1. Locate the @tensorflow/tfjs-node package. Usually, it’s under node_modules/@tensorflow/tfjs-node within the directory where you ran the npm install command. In this case, it’s the directory where you created the Node-RED project.

  2. Switch to the @tensorflow/tfjs-node package’s directory.

    cd node_modules/@tensorflow/tfjs-node
    
  3. You must provide a file named custom-binary.json under the scripts directory with the following content.

    {
      "tf-lib": "https://s3.us.cloud-object-storage.appdomain.cloud/tfjs-cos/libtensorflow-gpu-linux-arm64-1.15.0.tar.gz"
    }
    

    The URL in the previous code points to the precompiled TensorFlow shared libraries: v1.15.0 builds for the ARM 64-bit architecture, linked against the CUDA libraries.

  4. Run the following command to fetch the prebuilt shared libraries and build the Node.js binding.

    npm install
    

    You can find the Node.js binding, tfjs_binding.node, under the lib directory.

    find lib -name tfjs_binding.node
    

Now, the @tensorflow/tfjs-node package uses the Node.js binding to run model computation on the CPU and GPU.

Outline for building TensorFlow shared libraries

If your devices are not supported by TensorFlow.js and you can’t find any precompiled TensorFlow shared libraries provided in the open source community, you have only one option: build the shared library yourself. The following instructions are the general outline for doing this.

  1. Determine which TensorFlow version is needed by the TensorFlow.js version you use. You need to check the scripts/deps-constants.js file inside the @tensorflow/tfjs-node npm package. In that file, you would see the following lines of code.

    /** Version of the libtensorflow shared library to depend on. */
    const LIBTENSORFLOW_VERSION = '1.15.0';
    

    Here, this means you need TensorFlow v1.15.0.

  2. Build the TensorFlow shared library from source.

    Before building the shared libraries, you must install the build tool, Bazel. You can check the documentation to install Bazel. However, you might not find a prebuilt package for your architecture and platform. If that is the case, you must build Bazel from source, that is, bootstrap Bazel.

    TensorFlow provides a detailed document on building the TensorFlow pip package from source. The procedure for building the shared library package is the same except for one difference: change the build target to //tensorflow/tools/lib_package:libtensorflow. After getting the source code from the GitHub repository, be sure to check out the tag with the specific version that is needed by your version of TensorFlow.js.

  3. After you finish the build, it produces a bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz file that needs to be manually unpacked to the deps directory of the @tensorflow/tfjs-node package.

    tar xf bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz -C <path-to-my-project>/node_modules/@tensorflow/tfjs-node/deps
    
  4. Build the Node.js binding to link to the shared libraries. Change to the directory of the @tensorflow/tfjs-node package, and run the following command to build the Node.js binding.

    cd <path-to-my-project>/node_modules/@tensorflow/tfjs-node
    npm run build-addon-from-source
    

    You can find the Node.js binding, tfjs_binding.node, under the lib directory.

    find lib -name tfjs_binding.node
    

Step 2 is usually the most challenging and time-consuming. For example, building Bazel and the TensorFlow shared library takes approximately 24 hours on the Jetson Nano. You might need to tweak some toolchain settings to successfully build the shared library package.

TensorFlow.js with native TensorFlow integration gives you increased performance through hardware acceleration, but it can take quite a bit of work to enable. Try to find prebuilt binaries first, and save building from scratch as a last resort.

Example AI-IoT flow

The example flows mentioned earlier can also run on IoT devices. However, many devices like the Raspberry Pi enable additional functions through attached sensors. An example flow can be found here.

Raspberry Pi Object Detection Flow

This flow is similar to the object detection flow that used the TensorFlow.js custom nodes above. However, some additional nodes were added to support attached peripherals and sensors. This flow expects that the Raspberry Pi has a USB camera, a 3.5 mm jack speaker, and a GPIO motion sensor (for example, an HC-SR501 PIR motion sensor) attached.

Here, if the sensor detects motion, the output will be 1, and this will trigger the usbcamera node to take a snapshot and send the image to the tf-function, tf-model, and post-object-detection nodes for object detection. A function node uses JavaScript to check whether any of the detected classes is a class of interest. If so, a specific audio clip is played through the connected speaker. Learn more about running this type of flow through the Developing a Machine Learning IoT App with Node-RED and TensorFlow.js code pattern.
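
A minimal sketch of such a function node, using the bbox, className, and score objects produced by the post-object-detection node (the classes of interest here are illustrative):

// Function node: pass the message on only when an object of
// interest is detected; otherwise, drop the message
const interesting = ['person', 'dog'];
const found = msg.payload.some(obj => interesting.includes(obj.className));
return found ? msg : null;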

Deploy Node-RED with TensorFlow.js in the cloud

You have learned how to use the hardware capabilities of IoT devices for TensorFlow.js with Node-RED. Next, you’ll look at how to deploy these Node-RED flows to the cloud through a container. This is especially useful in an enterprise environment where the flow operates as part of a larger system of microservices. Cloud deployment lets you use all of the available cloud automation capabilities, such as scaling, high availability, and rolling updates. While cloud deployment of a Node-RED flow is independent of deep learning, the ability to embed deep learning can enable many new cloud applications.

Because a flow is self-contained, containerizing the flow is mostly an exercise in packaging to meet the requirements of the particular cloud environment. The following sections describe several methods for creating container images and running them in various environments.

Deploy a Node-RED container

You can directly deploy a Node-RED Docker image from Docker Hub. This runs a Node-RED session with the editor enabled so that you can start creating and running a flow. Note that running on a cloud would require attaching a persistent storage volume so that the flow can persist if the container restarts. If you are running the container on Docker locally on your workstation, simply mount a local directory to the container to save the flow. To create your own custom image, refer to the GitHub repository for more details.
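
For example, a typical local invocation of the official image looks like the following (mounting a local directory as the container’s /data directory persists the flow between restarts):

docker run -it -p 1880:1880 -v ~/.node-red:/data --name mynodered nodered/node-red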

Build a custom image containing a flow

This tutorial gives detailed instructions for packaging your flow in a container image for deployment. The Dockerfile provided there creates a lightweight image that excludes the toolchain for building the image. After your image is ready, you can deploy it on any container orchestration system such as Kubernetes.

Deploy a flow on OpenShift

OpenShift is built on top of Kubernetes and provides additional functions for managing your containers. The OpenShift catalog does include support for Node.js, which allows easy deployment of your Node-RED flow, as described below.

When you select Node.js in the catalog, the dialog window asks for a Git repository URL that should contain the typical Node.js artifacts (choosing Advanced Options allows for more settings). When you deploy the app, OpenShift builds a new image that contains the source from your Git repository in the directory /opt/app-root/src/. Then, it launches the image in a new pod and invokes npm run start in the pod.

Therefore, to package the Git repository per OpenShift’s expectations, you need to gather four files.

  1. flow.json: This is created by Node-RED to save the graph describing your flow. It is saved when you deploy the flow in the editor. The file is usually named after the host name of the system, and its location is indicated in the userDir attribute, typically, ~/.node-red.

  2. flow_cred.json: This is created by Node-RED to separately save the encrypted credentials from your flow. It is saved when you deploy the flow in the editor. The file location is indicated in the userDir attribute, typically, ~/.node-red.

  3. settings.js: This contains the various options for the Node-RED runtime. You can copy the default file that Node-RED creates at ~/.node-red/settings.js. Edit this file to change the default settings (uncomment attributes as needed). For deployment in a container, consider modifying the following attributes:

    • userDir: Set to the working directory in the pod. You might want to set it to /opt/app-root/src/, where your Git sources will reside in the OpenShift container.
    • credentialSecret: Set to any string to encrypt and decrypt the credentials in the flow_cred.json file. Make sure that the same encryption string is used in the container as when you edit the flow in the Node-RED editor. Otherwise, the container will fail to decrypt the credentials.
    • httpRoot: Set to false to run in headless mode. That is, the editor is disabled so that the flow cannot be modified.
  4. package.json: Create this file and specify the dependencies, which include the Node-RED package along with any packages for the deep learning models and preprocessing. For the start target in scripts, specify the following command to start Node-RED using your customized settings.js file:

     node-red --settings settings.js flow.json
    

See an example here.
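
As a rough sketch, such a package.json might look like the following (the package name and versions are illustrative):

{
  "name": "my-nodered-flow",
  "scripts": {
    "start": "node-red --settings settings.js flow.json"
  },
  "dependencies": {
    "node-red": "^1.0.0",
    "@tensorflow/tfjs-node": "^1.7.2"
  }
}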

To deploy using the OpenShift web console, click Catalog, and then click Node.js.

Node.js catalog

In the dialog box for Git Repository, enter the URL for your Git repository, and then click Create.

Deploy Node.js from catalog

OpenShift builds a custom image containing the Git source, then creates the pod to run the Node-RED flow along with the service and route to access the pod. If Node-RED is configured to run in headless mode, the service and route can be ignored. You can scale the number of container instances by clicking the up/down icon. If you need to make a change to the flow, you can automate the update to the container by setting a webhook between the OpenShift app and the Git repository. When a new pull request is merged in the Git repository, OpenShift receives a notification, rebuilds the image, and relaunches the pods.

Scale containers

Because a cloud deployment automates many operations, you can improve performance by giving consideration to how the models are loaded in the flow. Nodes in the flow that embed a model download the model files each time the pod is launched. This includes the first time the flow is deployed, when the pod crashes and is redeployed automatically, and when the pods are scaled up manually or by autoscaling. Many models are small (approximately 50 MB), but some can be very large (the large BERT model is approximately 450 MB), so repeated downloading can impact performance. In this case, it can be advantageous to embed the model in the container image. You can build a custom container image and copy the model into the image by following the instructions in this tutorial. The reference to the model in the Node-RED node can then point to the local copy in the image.

The Node-RED package tf-model provides additional flexibility by caching a model and checking whether a new version is available. When used on a local workstation, tf-model typically keeps the cache in the ~/.node-red/tf-model directory. The cache directory uses the hash of the model URL as the directory name.

ls -l tf-model
total 8
drwxr-xr-x  108 user  staff  3456 Mar 24 10:24 684586768
-rw-r--r--    1 user  staff   246 Mar 23 16:49 models.json

You can replicate the cache directory in the container image and use the normal URL for the model. When tf-model starts, it checks the local cache and the model URL. If a new version is available, tf-model downloads the new version. Otherwise, it automatically loads from the cache and avoids re-downloading the model.

Summary

The first two tutorials in this series showed how to develop deep learning models in JavaScript and how to easily embed them in your code. In this tutorial, we took it a step further and looked at Node-RED and how it can be used as a graphical tool to interactively wire together a complex AI app in Node.js. The large collection of community-based packages now includes support for deep learning models, opening up new possibilities for all users. You learned how to build your own TensorFlow.js Node-RED node, how to use TensorFlow.js on IoT devices, and how to deploy Node-RED flows in containers on a cloud. Your Node.js AI application can now run at the edge to process large volumes of data or in an enterprise environment together with other microservices.

Look for the next tutorial, where we will go into techniques to monitor and optimize the performance of your JavaScript AI application.
