Create a virtual reality speech sandbox  

Add Watson-powered natural language voice interaction from the Unity environment

Last updated | By Scott D’Angelo

Description

This developer journey will show you how to build advanced interactive speech systems for virtual reality with just two Watson services: Watson Speech-to-Text for transcription and Watson Assistant for interpreting the meaning of the transcribed words. Discover how to use the Watson Unity SDK to integrate both services directly from the Unity development environment.
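
To give a feel for the first of those service calls outside Unity (the journey itself uses the Watson Unity SDK's C# equivalents), here is a minimal Python sketch of the Speech-to-Text step using the ibm-watson SDK. The API key, service URL, and audio file name are placeholders, not values from this journey.

    from ibm_watson import SpeechToTextV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder API key and service URL -- substitute your own credentials.
    authenticator = IAMAuthenticator("YOUR_SPEECH_TO_TEXT_APIKEY")
    speech_to_text = SpeechToTextV1(authenticator=authenticator)
    speech_to_text.set_service_url(
        "https://api.us-south.speech-to-text.watson.cloud.ibm.com"
    )

    # Transcribe a short recorded voice command (hypothetical file name).
    with open("command.wav", "rb") as audio_file:
        result = speech_to_text.recognize(
            audio=audio_file,
            content_type="audio/wav",
            model="en-US_BroadbandModel",
        ).get_result()

    transcript = result["results"][0]["alternatives"][0]["transcript"]
    print(transcript)  # e.g. "create a large black box"

In the Unity application, the same request is made through the Watson Unity SDK so the transcript can be handled inside the running scene rather than in a standalone script.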

Overview

Virtual reality (VR) enables users to feel like they truly inhabit a different space. In a VR environment, speech is a more natural interface than controllers or keyboards for many interactions. You don’t want to pause an experience to stare at a control or, heaven forbid, type a command; you want to be in the moment. The ability to simply speak instructions keeps you in that moment and adds an entirely new dimension of immersion for users.

By learning how to add speech controls to VR environments, you can build more richly interactive, immersive experiences — and position your own skills for the next big technology revolution. When you complete this developer journey, you will understand how to add IBM Watson Speech-to-Text and Assistant services to a virtual reality environment built in Unity, the popular 3D development platform.

There are several popular VR head-mounted devices that offer users powerful immersive experiences. Their popularity and versatility make them ideal candidates for speech interaction. This developer journey shows you how to implement speech controls for Google Cardboard and HTC Vive, two of the most popular head-mounted VR devices.

Flow

  1. User interacts in virtual reality and gives voice commands such as “Create a large black box”.
  2. The VR hardware’s microphone picks up the voice command, and the running application sends the audio to Watson Speech-to-Text.
  3. Watson Speech-to-Text converts the audio to text and returns the transcript to the running application that powers the VR hardware.
  4. The application sends the text to Watson Assistant, which returns the recognized intent “Create” and the entities “large”, “black”, and “box” (see the sketch after this list). The virtual reality application then displays the large black box (which falls from the sky).
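
The Assistant step in this flow can be sketched the same way. The following minimal Python example, again with a placeholder API key, service URL, and workspace ID, sends the transcript from step 3 to Watson Assistant and reads back the intent and entities described in step 4; in the running VR application the equivalent call is made through the Watson Unity SDK.

    from ibm_watson import AssistantV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder API key, service URL, and workspace ID -- substitute your own.
    authenticator = IAMAuthenticator("YOUR_ASSISTANT_APIKEY")
    assistant = AssistantV1(version="2021-06-14", authenticator=authenticator)
    assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

    # Send the transcript produced by Speech-to-Text to the dialog workspace.
    response = assistant.message(
        workspace_id="YOUR_WORKSPACE_ID",
        input={"text": "Create a large black box"},
    ).get_result()

    intent = response["intents"][0]["intent"]              # e.g. "Create"
    entities = [e["value"] for e in response["entities"]]  # e.g. ["large", "black", "box"]
    print(intent, entities)

The intent tells the application what action to take, and the entities supply its parameters (size, color, and object type), which is all the scene needs to spawn the requested object.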

Related Blogs

Leveraging the power of AI at Unite Berlin

Last week, from June 19 – 21, we were at Unity’s premier event in Berlin: Unite 2018. This conference brought together Unity’s video game and development community. Unity reaches 770 million gamers all over the world, is the market leader for consumer AR and VR use cases, and is also rapidly emerging as the market...

Continue reading Leveraging the power of AI at Unite Berlin

Are You Developers? WeAreDevelopers, 2018

Earlier this year, we attended the WeAreDevelopers World Congress conference in Vienna. Billed as Europe’s largest playground for developers and founded as recently as 2015, the Congress has truly grown from strength to strength. From May 16th to 18th, 8,000 participants, 250+ speakers, and 100+ sponsors convened in Austria’s capital for 3 days of...

Continue reading Are You Developers? WeAreDevelopers, 2018
