Home automation has gone from science fiction to reality in a few short years. This developer journey shows you how easy it is to build a home automation hub using natural-language services and OpenWhisk serverless technology.
Over the past few years, we’ve seen a significant rise in the popularity of intelligent personal assistants — think of Apple Siri, Amazon Alexa, and Google Assistant. At first these apps seemed like little more than a novelty, but they’ve now evolved to become convenient, useful, and for a growing number of enthusiastic users, essential.
These apps provide users with an easy natural-language interface that enables them to interact with service APIs and IoT-connected devices. Now that natural-language interaction is taking the next step, developers are keen to provide voice interaction for a fully automated home.
This developer journey guides you into the world of interactive home automation. Homes are truly becoming “smart,” with more and more devices available to connect and control with voice commands. You’ll learn how to set up your own starter home automation hub by using a Raspberry Pi to turn power outlets off and on. Once the circuit and software dependencies are installed and configured properly, you’ll be able to use IBM Watson’s language services to control the power outlets using voice or text commands.
You’ll also dive into the world of serverless. This journey shows you how to use OpenWhisk serverless functions to trigger those same outlets based on a timed schedule, changes in the weather, motion sensor activation, and other inputs. Find out how simple it can be to use Watson services to interpret user input and how IBM Cloud services can make a system more accessible using HTTP, SMS, MQTT, and other protocols. You can also extend the system with the Watson IoT Platform’s analytics to determine how long specific devices stay on, and adjust the OpenWhisk sequence to control devices based on a schedule or triggered sensors.
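To make the glue between those services concrete, here is a minimal sketch of an OpenWhisk Python action that takes a Watson Assistant result and produces the entity/intent command string the next action can publish. The response shape shown here is a simplified assumption for illustration; check the payload your Assistant service version actually returns:

```python
# Sketch of an OpenWhisk action: turn a Watson Assistant result into an
# "entity/intent" command string (e.g. "fan/turnon"). The params shape
# below is an assumption for illustration only.

def main(params):
    intents = params.get("intents", [])
    entities = params.get("entities", [])
    if not intents or not entities:
        return {"error": "no intent or entity recognized"}
    intent = intents[0]["intent"]    # e.g. "turnon"
    entity = entities[0]["value"]    # e.g. "fan"
    return {"command": "{}/{}".format(entity, intent)}
```

In a real OpenWhisk sequence, the previous action would pass the Assistant service’s JSON response in `params`, and the following action would publish the returned `command` value to the IoT MQTT broker.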
So forget about the novelty factor: you need to stay current with your development skills to ensure that the apps you produce are in demand. People want smart homes, connected devices, and voice-activated appliances; this developer journey shows you how to build them.
- The user speaks a command into the microphone, or sends a text to the Twilio SMS number.
- The input is captured and embedded in an HTTP POST request to trigger an OpenWhisk sequence.
- OpenWhisk action 1 forwards the audio to the IBM Cloud Speech to Text service and waits for the response.
- The transcription is forwarded to OpenWhisk action 2.
- OpenWhisk action 2 calls the Assistant service to analyze the user’s text input and then waits for the response.
- The Assistant service result is forwarded to the final OpenWhisk action.
- The OpenWhisk action publishes an entity/intent pair (“fan/turnon,” for example) to the IoT MQTT broker.
- The Raspberry Pi, which is subscribed to the MQTT broker, receives the result.
- The Raspberry Pi transmits an RF signal to turn the outlet on or off.
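On the Pi side, the subscriber’s job in the last two steps is mostly decoding: split the entity/intent payload, look up the matching RF code, and transmit it. Here is a minimal sketch of that decoding step; the RF code table and the transmit call are placeholders (a real setup might use paho-mqtt for the subscription and an rpi-rf style library for transmission):

```python
# Sketch of the Raspberry Pi side: decode an "entity/intent" payload
# (e.g. "fan/turnon") into a device name and on/off state, then look up
# the RF code to transmit. Codes below are hypothetical placeholders.

RF_CODES = {
    ("fan", True): 5264691,    # hypothetical on-code for outlet 1
    ("fan", False): 5264700,   # hypothetical off-code for outlet 1
}

def decode_command(payload):
    """Split 'fan/turnon' into ('fan', True); reject unknown intents."""
    entity, intent = payload.split("/", 1)
    if intent not in ("turnon", "turnoff"):
        raise ValueError("unknown intent: " + intent)
    return entity, intent == "turnon"

def handle_message(payload):
    """Look up the RF code for a decoded command; None if unknown device."""
    device, state = decode_command(payload)
    code = RF_CODES.get((device, state))
    if code is None:
        return None
    # A real deployment would transmit the code here, for example with
    # an rpi-rf style sender: rfdevice.tx_code(code)
    return code
```

The same table-driven lookup extends naturally as you add outlets: each new device contributes one on-code and one off-code, and the MQTT payload format stays unchanged.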