
Archived | Implement voice controls for a serverless home automation hub

Archived content

Archive date: 2019-05-21

This content is no longer being updated or maintained. The content is provided “as is.” Given the rapid evolution of technology, some content, steps, or illustrations may have changed.


Home automation has gone from science fiction to reality in a few short years. This code pattern shows you how easy it is to build a home automation hub using natural-language services and IBM Cloud Functions (formerly OpenWhisk) serverless technology.


Over the past few years, we’ve seen a significant rise in the popularity of intelligent personal assistants – think of Apple Siri, Amazon Alexa, and Google Assistant. At first these apps seemed like little more than a novelty, but they’ve now evolved to become convenient, useful, and for a growing number of enthusiastic users, essential.

These apps provide users with an easy natural-language interface that enables them to interact with service APIs and IoT-connected devices. Now that natural-language interaction is taking the next step, developers are keen to provide voice interaction for a fully automated home.

This code pattern guides you into the world of interactive home automation. Homes are truly becoming “smart,” with more and more devices available to connect and control with voice commands. You learn how to set up your own starter home automation hub by using a Raspberry Pi to turn power outlets off and on. After the circuit and software dependencies are installed and configured properly, you can use IBM Watson’s language services to control the power outlets with voice or text commands.

You also dive into the world of serverless development. This code pattern shows you how to use the serverless functions of IBM Cloud Functions to trigger those same outlets based on a timed schedule, changes in the weather, motion sensor activation, and other inputs. You learn how to use Watson services to interpret user input, and how IBM Cloud services can make a system more accessible over HTTP, SMS, MQTT, and other protocols. You can also expand the Watson IoT Platform to run analytics that determine how long specific devices stay on, and adjust the IBM Cloud Functions sequence to control devices based on a schedule or triggered sensors.
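To make the serverless side more concrete, here is a minimal sketch of what one IBM Cloud Functions Python action in the sequence might look like: it takes the entity and intent extracted by the Assistant service and builds the message to publish to the MQTT broker. The parameter names, topic, and payload format are assumptions for illustration; the actual action code lives in the pattern's repository.

```python
# Hypothetical sketch of a Cloud Functions Python action. An action's
# entry point is a main(params) function that receives a dict and
# returns a dict, which the next action in the sequence receives.

def main(params):
    """Turn an Assistant entity/intent result into an MQTT message."""
    entity = params.get("entity", "")
    intent = params.get("intent", "")
    if not entity or not intent:
        # Returning an error dict lets the sequence surface the failure.
        return {"error": "missing entity or intent"}
    # The real action would publish this to the Watson IoT MQTT broker;
    # here we simply return the topic and payload we would send.
    return {"topic": "iot/home/outlets", "payload": f"{entity}/{intent}"}
```

The `main(params)` in/out contract is what lets actions be chained into a sequence: each action's return value becomes the next action's input.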

So forget about the novelty factor – you need to stay current with your development skills to ensure that the apps you produce are in demand. People want smart homes, connected devices, and voice-activated appliances; this code pattern shows you how to build them.



  1. The user speaks a command into the microphone, or sends a text to the Twilio SMS number.
  2. The input is captured and embedded in an HTTP POST request to trigger an IBM Cloud Functions sequence.
  3. IBM Cloud Functions action 1 forwards the audio to the IBM Cloud Speech to Text service and waits for the response.
  4. The transcription is forwarded to IBM Cloud Functions action 2.
  5. IBM Cloud Functions action 2 calls the Assistant service to analyze the user’s text input and then waits for the response.
  6. The Assistant service result is forwarded to the final IBM Cloud Functions action.
  7. The IBM Cloud Functions action publishes an entity/intent pair (“fan/turnon,” for example) to the IoT MQTT broker.
  8. The Raspberry Pi, which is subscribed to the MQTT broker, receives the result.
  9. The Raspberry Pi transmits an RF signal to turn the outlet on or off.


Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README file.