Introduction

Node-RED is a visual programming tool that offers a browser-based flow editor for wiring together devices, APIs, and services in new and interesting ways. Because it’s built on Node.js, you can extend its features with new nodes written by the community in JavaScript. The Node-RED Node Generator tool assists in this process by generating the boilerplate code needed for a new node from a service’s OpenAPI specification, the same JSON documentation used by tools like Swagger.

In this tutorial, I show you how to create a customized node for a RESTful deep learning microservice, providing access to that service’s API in your Node-RED flows. In this case, the node identifies sounds in short clips of recorded audio.

Additionally, I link to example flows at the end of this tutorial that demonstrate usage of the node.

New Node Example

Prerequisites

To follow the steps in this tutorial, you need to install the following software before you begin:

  * Node.js and npm
  * Node-RED
  * The Node-RED Node Generator tool (node-red-nodegen)
  * curl

Estimated time

It should take you approximately 30 minutes to complete this tutorial.

Steps

Choose an API / Microservice to work with

The first step in this process is to identify an API you’d like to generate a node from. For this tutorial, we’ve chosen to use a deep learning model from the Model Asset eXchange (MAX) – the MAX Audio Classifier. If you’re unfamiliar with MAX or would like more information, we recommend the ‘Getting Started with the Model Asset Exchange’ tutorial.

Try the API

  1. To obtain the OpenAPI specification for your service, you first need to find its location or URL. In the case of MAX deep learning models, this URL is found by copying the link attached to the Try the API button from the model’s page on the Model Asset eXchange.

  2. Once you’ve found the URL for your service, navigate to a clean directory for this project and download the specification document JSON using curl. For our example, use the following command:

$ curl https://max-audio-classifier.max.us-south.containers.appdomain.cloud/swagger.json > max-audio-classifier.json

NOTE: The URL above points to an experimental deployed instance of the MAX Audio Classifier. In most cases, you would have your service deployed somewhere in the cloud or running locally on your machine using Docker. While we can’t guarantee the performance of this instance, it should work well for this task.

OpenAPI Specification

This excerpt from the OpenAPI spec for the MAX Audio Classifier shows the predict method, its parameters, and a summary of what the operation does. Notice that the model expects a ‘signed 16-bit PCM WAV audio file’ parameter named audio, and that the Content-Type it consumes is multipart/form-data.
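Node Generator reads operations like predict directly from this document. A simplified, hand-trimmed sketch of the relevant portion of the spec (the field values here are illustrative, not copied verbatim from the live document) might look like:

```json
{
  "paths": {
    "/model/predict": {
      "post": {
        "operationId": "predict",
        "summary": "Make a prediction given input data",
        "consumes": ["multipart/form-data"],
        "parameters": [
          {
            "name": "audio",
            "in": "formData",
            "type": "file",
            "required": true,
            "description": "signed 16-bit PCM WAV audio file"
          }
        ]
      }
    }
  }
}
```

The operationId becomes the method name offered by the generated node, and the parameter list drives the input fields you see later in node.html.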

Generate the boilerplate code for your node

  1. Use the Node-RED Node Generator tool to generate the boilerplate code for your node with the command:

     $ node-red-nodegen max-audio-classifier.json --name 'max-audio-classifier'
    

    This creates a directory named node-red-contrib- followed by the name specified in this step (in this example, node-red-contrib-max-audio-classifier).

    NOTE: For nodes based on models from the Model Asset eXchange like the one in this example, the --category flag can also be used when executing this command to group all similar nodes together in the node palette.

  2. Open this new folder in your code editor, which should have a structure similar to the following:

     icons/
     locales/
     test/
     lib.js
     LICENSE
     node.html
     node.js
     package.json
     README.md
    

    If what you’re seeing in your directory looks like the files above, you should be ready to move on to the next step and start making changes to the generated code.

Get to know your new node

There are three files to be aware of that contain most of the code for our new node: lib.js, node.js, and node.html. Each file serves a different purpose and represents a different part of the node. While we won’t need to make any changes to get our node up and running, I’ll briefly describe what can be found in each.

For a closer look at the code from this example, visit this GitHub repo containing all of the generated files and modified code for you to follow along with the changes.

lib.js

This file defines the methods available to your node and the inputs expected by the underlying microservice. The code in this file is generated directly from the OpenAPI specification provided to Node Generator.

node.js

This file defines the interface between your node in Node-RED and the generated code in lib.js. This is where the behavior for receiving input is defined for your node and also the format for producing output.

node.html

This file defines the UI for the node’s side panel, but also has an effect on the default values and settings for requests made to the underlying service.

The next steps in this section show two example modifications that you can make to enhance your new node. In the first, you see how to manually set a default method for new users. After that, I show you how to modify or remove the input fields for non text-based input parameters.

Node Sidebar
The Method dropdown menu allows you to select from the different methods defined in the API used to generate the node.

  1. First, open the node.html file contained in your project directory with your preferred text editor.

  2. Manually set predict as the default method for the node by replacing the method entry on line 7 with:

     // the following line replaces line 7 in the un-modified `node.html`
    
     method: { value: 'predict', required: true },
    
  3. Remove (or comment out) both instances of this call on lines 53 and 62:

     // remove the following from lines 53 and 62 in the un-modified `node.html`
    
     $('#predict_audio').show();
    

    This change hides a text input field that doesn’t support the type of binary audio input required by this service’s predict method.

    Text Input Box in Sidebar

At this point, you have covered the basics of creating a new node and getting it working with this particular API, the MAX Audio Classifier model from the Model Asset eXchange.

Import and use your new node

These steps illustrate the easiest way to import your new, custom-fitted node in your own flows and put it to use.

  1. To install the node locally into your instance of Node-RED, head to your terminal and navigate to the working directory for your node if you’re no longer there.

  2. From within your node’s directory, run the command:

     $ sudo npm link
    

    This creates a symlink in your system’s global node_modules folder.

  3. Navigate to the home directory for your Node-RED installation (default is ~/.node-red) and run:

     $ npm link node-red-contrib-max-audio-classifier
    

    Make sure to use the correct name for the package you’ve created. Once completed, this step adds the node to your Node-RED flows.

  4. Now, all that’s left is to load your Node-RED editor (localhost:1880 by default) and give your node a try! You should see it sitting in the sidebar on your left, ready to be used.

    Node Palette Sidebar

    NOTE: If you think others might benefit from the new node you’ve created, the files created during this process also contain everything needed to publish it on npm.

Summary

By now you should have a working node in your Node-RED flow that interacts with whichever API you chose to follow along with. There’s plenty more work to be done, though, if you’d like to keep working. For example, you can further process the response from the underlying microservice in node.js to clean your output data or build an even more polished UI for users of your node in node.html.
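As a sketch of the first idea: MAX models generally respond with a JSON body containing a status field and a predictions array of label/probability pairs, so a small helper in node.js could reduce that response to just the confident labels. The response shape and default threshold below are assumptions to adapt to your own service:

```javascript
// Reduce a MAX-style prediction response to labels above a confidence threshold.
// Assumed response shape: { status: 'ok', predictions: [{ label, probability }] }
function topLabels(response, threshold) {
    threshold = threshold || 0.5;
    if (!response || response.status !== 'ok') {
        return [];
    }
    return (response.predictions || [])
        .filter(function (p) { return p.probability >= threshold; })
        .map(function (p) { return p.label; });
}

// topLabels({ status: 'ok', predictions: [
//   { label: 'Speech', probability: 0.92 },
//   { label: 'Silence', probability: 0.12 }
// ] })  →  ['Speech']
```

Calling a helper like this before node.send() would leave downstream nodes with a simple array of labels instead of the full response object.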

For a look at a Node-RED flow that uses this MAX Audio Classifier node to identify sounds through recorded audio or short sound clips, visit this link. This flow can be run on standard computers and laptops.

Audio Classifier Demo Flow

If you’d prefer to use this type of node in an IoT scenario, you may view similar examples designed to run on the Raspberry Pi for the MAX Image Caption Generator and the MAX Facial Recognizer. To run these on your computer, you need to switch out the Raspberry Pi camera nodes for something supported by your machine.

You can import all of these flows into your Node-RED instance to be used as-is or to be further customized as you see fit.

If you’d like to share what you’ve made, or just want to see what others have created, visit the Node-RED Library.

Browse the selection of over 20 free-to-use, open source deep learning models at the Model Asset eXchange.