Skill Level: Beginner




Raspberry Pi running Raspbian and NodeRED

PianoHAT from Pimoroni (you can buy one here: https://www.adafruit.com/products/2695)


  1. Introduction

    The goal of this recipe is to create a bi-directional command and control demo using the Raspberry Pi and PianoHAT and explore various APIs available to the device once it is connected via IBM IoT Platform into Bluemix.

    We will create a multi-tiered system using the PianoHAT, Python, NodeRED on the Pi, IBM IoT Platform, NodeRED on Bluemix, the Watson voice APIs, IBM’s Weather APIs, Twitter APIs and 3rd party APIs to access SMS messaging. Lots of fun!

    This diagram shows the overall architecture:

    The basics of connecting a Pi are here: https://developer.ibm.com/recipes/tutorials/deploy-watson-iot-node-on-raspberry-pi/

  2. Connecting the Pi to the IBM IoT Platform Using NodeRED

    This is a NodeRED flow that creates a websocket for the Python script and connects to the IBM IoT Platform. We simply pass keypresses on up to the Platform and listen for two different kinds of messages back: Commands (cmd) and Speech (speech). When we receive a Command, we just pass it on down to the Python script to perform – it should only be a Blink or a Flash. If it’s a speech command, we strip off the voice data (in WAV format), write it to a file, and invoke the ‘aplay’ command to play the sound through the Pi’s audio system.

    The actual NodeRED flow is pasted below; copy and paste it into your NodeRED instance to use it.

    [{"id":"YourDeviceKey","type":"wiotp-credentials","z":"85c5838.a7ff2","name":"","org":"YourOrg","devType":"pi","devId":"YourDeviceID"},{"id":"efe010ad.5b9578","type":"websocket-listener","z":"85c5838.a7ff2","path":"/piano","wholemsg":"false"},{"id":"4bb593a7.b538b4","type":"websocket in","z":"85c5838.a7ff2","name":"Piano","server":"efe010ad.5b9578","client":"","x":87,"y":240.5,"wires":[["a7d382a0.57c42"]]},{"id":"a7d382a0.57c42","type":"wiotp out","z":"85c5838.a7ff2","authType":"d","qs":"false","qsDeviceId":"","deviceKey":"YourDeviceKey","deviceType":"GGPi3GW","deviceId":"GGPi3GW","event":"event","format":"json","name":"","x":259,"y":242,"wires":[]},{"id":"a951f1a8.7f827","type":"websocket out","z":"85c5838.a7ff2","name":"","server":"efe010ad.5b9578","client":"","x":266,"y":488,"wires":[]},{"id":"483283fb.ccac04","type":"wiotp in","z":"85c5838.a7ff2","authType":"d","deviceKey":"YourDeviceKey","deviceType":"","deviceId":"","command":"speech","commandType":"g","name":"","x":90,"y":400,"wires":[["3ea8df92.1dae4"]]},{"id":"1ed0b510.02779b","type":"file","z":"85c5838.a7ff2","name":"Playme2","filename":"/tmp/playme2.wav","appendNewline":false,"createDir":false,"overwriteFile":"true","x":530,"y":365.5,"wires":[]},{"id":"3ea8df92.1dae4","type":"function","z":"85c5838.a7ff2","name":"Decode Speech to Binary","func":"var buf = new Buffer(msg.payload.speech, 'base64');\n//buf = Buffer.from(msg.speech, 'base64');\nmsg.payload = buf;\nreturn msg;","outputs":1,"noerr":0,"x":302,"y":399.5,"wires":[["1ed0b510.02779b","99e082bb.3dd738"]]},{"id":"99e082bb.3dd738","type":"exec","z":"85c5838.a7ff2","command":"aplay","addpay":false,"append":"/tmp/playme2.wav","useSpawn":"","name":"aplay","x":521,"y":419,"wires":[[],[],[]]},{"id":"63d59160.2d3b28","type":"wiotp in","z":"85c5838.a7ff2","authType":"d","deviceKey":"YourDeviceKey","deviceType":"","deviceId":"","command":"blink","commandType":"g","name":"","x":90,"y":486.5,"wires":[["a951f1a8.7f827"]]}]

    You will need to modify the Watson IoT nodes to use your own credentials from your instance of the IoT Platform.
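    For reference, here is the body of the “Decode Speech to Binary” function node from the flow above, unescaped and wrapped in a plain function so it can be read (and tested) outside Node-RED. The exported flow uses the older `new Buffer()` constructor; `Buffer.from()` is the modern equivalent and is already hinted at in the node’s commented-out line.

```javascript
// Body of the "Decode Speech to Binary" function node, unescaped for
// readability. The speech command payload carries the Watson audio as a
// base64 string in msg.payload.speech; we decode it back to a binary
// buffer so the file node can write a valid WAV for aplay.
function decodeSpeech(msg) {
  var buf = Buffer.from(msg.payload.speech, 'base64');
  msg.payload = buf;   // downstream nodes write this to /tmp/playme2.wav
  return msg;
}

// Quick sanity check outside Node-RED:
var sample = { payload: { speech: Buffer.from('RIFF').toString('base64') } };
console.log(decodeSpeech(sample).payload.toString()); // RIFF
```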

  3. The Keyboard Driver

    NodeRED communicates via Websockets – so we will create a small Python script that listens for keypresses and sends them out over the websocket. It also listens for commands coming in from the websocket and performs two of them: Blink and Flash.

    Blink chases the LEDs across the PianoHAT from 0 to 15 and back from 15 to 0. The PianoHAT library also provides a command to manage the LED on/off ramp time, so we slow the ramps down to give the lights a ‘chaser’ effect and then reset them at the end.

    Flash turns all the LEDs on at once and then off, 4 times. It uses the default LED ramps so the flashes look very crisp.

    Key presses are simply translated to a number and sent up the line.

    We implement this using 2 threads so that commands and keypresses can be processed independently.

    Simply run this script in a terminal window.

    #!/usr/bin/env python
    # Keyboard and command server to communicate with Watson IoT Platform via websockets and NodeRED
    # Author: Greg Gorman
    # May 2016
    """Server to connect to IBM IoT Platform and send PianoHAT key events up.
    Responds to commands coming down to blink or flash lights.
    Requires speaker or audio output on the Pi to hear Watson voice replies.
    """
    import pianohat
    import time
    import signal
    import websocket
    import threading
    import json

    # Flash all the lights on at once, then off, 4 times
    def flash():
        for i in range(4):
            for x in range(16):
                pianohat.set_led(x, True)
            time.sleep(0.2)
            for x in range(16):
                pianohat.set_led(x, False)
            time.sleep(0.2)

    # Blink all the lights in sequence (chase across and back)
    def blink():
        # Slow down the on/off ramps so it looks cool
        for x in range(16):
            pianohat.set_led(x, True)
            time.sleep(0.05)
            pianohat.set_led(x, False)
        for x in range(15, -1, -1):
            pianohat.set_led(x, True)
            time.sleep(0.05)
            pianohat.set_led(x, False)

    # Key event handler: translate the key to a number and send it up the line
    def handle_touch(ch, evt):
        if evt:
            ws.send(json.dumps({"key": ch}))
            # debounce a little
            time.sleep(0.1)

    # Thread to manage keys
    def keys():
        # Key events arrive via the handle_touch callback; just keep this thread alive
        while True:
            time.sleep(1)

    # Thread to listen for commands
    def cmds():
        while True:
            cmd = ws.recv()
            jc = json.loads(cmd)
            cmd2 = jc["cmd"]
            if cmd2 == "Blink":
                blink()
            if cmd2 == "Flash":
                flash()

    # main
    ws = websocket.create_connection("ws://localhost:1880/piano")

    # Set event handlers
    pianohat.on_note(handle_touch)
    pianohat.on_octave_up(handle_touch)
    pianohat.on_octave_down(handle_touch)
    pianohat.on_instrument(handle_touch)

    # wild guess on the ramps for on/off of LEDs. Looks good so leaving it.

    # Create threads for sending keys and listening for commands
    # Use Daemon mode so we can kill the script easier.
    # ref: https://pymotw.com/2/threading
    k = threading.Thread(target=keys)
    c = threading.Thread(target=cmds)
    k.daemon = True
    c.daemon = True
    k.start()
    c.start()

    raw_input("press Enter to exit program...")


  4. NodeRED on Bluemix using IBM Watson IoT Platform

    Create a Bluemix application from the NodeRED Starter boilerplate. Add the IoT Platform, Weather and Text to Speech services as shown in this screenshot from my Dashboard. Note: the Cloudant service is automatically added and is required by the NodeRED application.

  5. IBM Cloud NodeRED Flows

    The NodeRED instance on the IBM Bluemix cloud is where all the logic happens for this device. Once I got simple key presses and one command sent back to the Pi, I just kept adding new features to continue exploring different APIs available to me on Bluemix.

    Examine the flow diagram below. You should see 3 inputs that affect what happens: “IBM IoT App In”, which listens for keypresses from the Pi; “Tweets to Me”, which is connected to my Twitter account and listens for Direct Messages; and “Say weather conditions every hour”, a timed trigger that fires every hour.

    The large node “handle button” simply looks at the integer button number coming from the Pi and directs the command flow out one of the attached bubbles. As you can see, many of the buttons don’t do anything (yet), but new functions can easily be added by drawing more nodes and connecting them up.
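    A “handle button” style router can be written as a function node with multiple outputs: a function node returns an array, and the message is placed in the slot of the output that should fire. A minimal sketch, assuming four outputs and a direct button-number-to-output mapping (the actual output count and ordering in my flow differ):

```javascript
// Sketch of a multi-output router like "handle button". A Node-RED
// function node with N outputs returns an N-element array; putting the
// message at index i sends it out output i, and null suppresses an output.
// Four outputs and the button-to-output mapping are assumptions here.
function handleButton(msg) {
  var outputs = [null, null, null, null];   // e.g. Flash, Blink, Say, Weather
  var button = parseInt(msg.payload, 10);
  if (button >= 0 && button < outputs.length) {
    outputs[button] = msg;                  // route to the matching output
  }
  return outputs;                           // unmapped buttons do nothing (yet)
}
```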

    The simplest commands are at the bottom: the Flash and Blink commands simply send a message back down to the Pi’s NodeRED, which passes it on down to the Python script. The bottom two buttons format a simple string and send it to Twitter, or use the Twilio interface to send an SMS to my phone.
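    The formatting side of those bottom two buttons is just another small function node ahead of the Twitter or Twilio out node. A hypothetical sketch (the wording and the `formatTweet` name are mine, not from my actual flow):

```javascript
// Hypothetical formatter placed before a Twitter or Twilio out node:
// builds the status/SMS text from the incoming button number.
// The message wording here is an assumption for illustration.
function formatTweet(msg) {
  return {
    payload: "Button " + msg.payload + " pressed on Greg's Pi at " +
             new Date().toISOString()
  };
}
```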

    The more sophisticated commands use the Watson text-to-speech API to turn a few strings into audio files that are then sent to the Pi. The main issue I found here is that the IBM IoT output node (and hence the underlying IoT Platform) seems to have a message length limit, so the phrases have to be kept short. I set up 3 canned phrases: “Hello, my name is Watson.”, “This is Greg’s Raspberry Pi”, and “Twitter direct message received”. These are sent back to the Pi when the appropriate button is pushed or when a direct message is received by my Twitter account.

    Just for fun, note that I connected the voice output to the Blink command as well so that the Pi looks more like a “talking computer” because the chasing lights run at the same time as the voice commands are spoken. Pretty cool!

    Finally, the most difficult one to do was the Weather API call (“latest weather at my office”). In order to request the current conditions for a location, the API requires the latitude/longitude of the location – so a quick Google search turned up many sites that will return a lat/lon given an address or postal code. The returned data is a very large JSON packet from which you can then pick off various observations. I chose current temperature and sky observation.
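    The observations request itself is just an HTTP request node pointed at a URL that embeds the lat/lon. A sketch of building such a URL is below; treat the host and path as assumptions based on the Bluemix Weather service of the time, and copy the exact URL (with your credentials) from your own service dashboard as described later.

```javascript
// Hedged sketch: build a current-observations URL for a fixed lat/lon.
// The host and path are assumptions modeled on the Weather Company Data
// for IBM Bluemix service; units=e requests imperial observations, which
// matches the msg.payload.observation.imperial.temp field used below.
function buildWeatherUrl(lat, lon) {
  return "https://twcservice.mybluemix.net/api/weather/v1/geocode/" +
         lat + "/" + lon + "/observations.json?units=e";
}
```

In the flow, a function node like this would set `msg.url` (or you can paste the finished URL straight into the HTTP request node’s URL field).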

    I then split the weather report into 3 phrases, again to avoid the message size limit, and send them down to the Pi for processing.
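    One way to do that split in a single function node is to return several messages in one output slot – Node-RED sends them on in sequence. A minimal sketch, assuming the observation fields used elsewhere in this recipe (my actual flow uses separate nodes):

```javascript
// Sketch: split the weather report into three short messages so each
// stays under the observed IoT message size limit. Returning an array of
// messages inside one output slot sends them in sequence from that output.
function splitReport(msg) {
  var obs = msg.payload.observation;   // field names as used in this recipe
  var phrases = [
    { payload: "Here is the latest weather." },
    { payload: "Temp " + obs.imperial.temp + " F." },
    { payload: "The sky is " + obs.phrase_32char + "." }
  ];
  return [phrases];   // one output, three sequential messages
}
```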

    It’s quite difficult to remove my personal details from the source code of the flow, so I’ll discuss some of the more ‘interesting’ function nodes here:

    The CMD nodes: Flash and Blink.

    var newMsg = { payload: {cmd: "Flash"} };
    return newMsg;

    These just format a command object and return it for the JSON and IBM IoT Out nodes to send back to the Pi.

    The “Say” nodes simply format a string as well and return it for the text-to-speech node.

    var newMsg = { payload: "This is Greg's Raspberry Pi." };
    return newMsg;

    The “Extract Audio” node pulls the returned audio buffer from the message returned from Watson and formats it into a string for transmission to the Pi.

    var buf = msg.speech;
    msg.payload = {speech: buf.toString('base64')};
    return msg;

    The “Current Temperature” node extracts the temperature from the returned weather observation and formats some text around it. The “Sky Observation” is similar.

    var anotherMsg = { payload: "Temp " + msg.payload.observation.imperial.temp + " F." };
    return anotherMsg;

    var anotherMsg = {payload: "The sky is " + msg.payload.observation.phrase_32char};
    return anotherMsg;

    The only other thing that took a few tries was the request to the Weather API (“Latest weather at my office”), but it isn’t too complicated. The hardest part was finding the appropriate lat/lon for the location I wanted the observation for!


    For the Weather API calls, it’s pretty easy to get the authentication tokens. Once you add the service to your dashboard and bind it to the Node.js application, just open the Node.js application dashboard. It should look something like this. Press the down arrow to show the credentials.

    The small window there shows a bit of code; don’t be intimidated. You just need the URL line to copy and paste into the HTTP request node for weather (on my diagram it’s named “Latest weather at my office”). Then add the request you want (as shown above) and it should work!
