Get started with the IBM Watson Unity SDK
Learn how to use the IBM Watson SDK for Unity in virtual reality and game development, with the Speech Sandbox as an example.
The world of 3D games has been at the cutting edge of technology for many years. With the advent of virtual reality and the rise of artificial intelligence, 3D gaming engines have opened an entirely new world for game developers to create. With the IBM Watson SDK for Unity, we will demonstrate how to implement voice commands in VR, allowing a more natural interaction with these newly created worlds.
With the IBM Watson Unity SDK, a Unity developer can easily use Watson services like Speech-to-Text, Assistant, Visual Recognition, and more. In this tutorial we will use the VR Watson Speech Sandbox asset from the Unity Asset Store to implement voice commands in a virtual reality environment using Watson Speech-to-Text and Assistant. You can adapt this example to add your own voice commands to any application.
- Create an IBM Cloud account (formerly called IBM Bluemix).
- Create the Speech to Text service
- Create the Assistant service
- Install Blender to render Speech Sandbox Models
Setting up the Speech Sandbox with Watson Unity SDK should take around 30 minutes.
In a new Unity project, go to the Unity Asset Store and search for IBM Watson Unity SDK.
Select the Watson Unity SDK package.
You will find plenty of help in the Examples directory of the Watson Unity SDK. The ExampleStreaming.cs script was used for the Speech-to-Text part of the Speech Sandbox, and it was modified by adding parts of the ExampleAssistant.cs script for the Watson Assistant part.
To see the SDK in action, go to the Unity Asset Store, search for VR Watson Speech Sandbox, and download it.
Select the VR Watson Speech Sandbox package.
Load each scene in sequence and add it to the Build Settings with Add Open Scenes.
- MainMenu – starts the VR app
- Tutorial – teaches how to interact in the Speech Sandbox
- SpeechSandbox – the place to “play” with voice commands
*Note*: The scenes must be added in the sequence above because they are referenced by build index (0, 1, 2).
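To see why the ordering matters, here is a minimal sketch of how scenes are loaded by build index in Unity. The `SceneFlow` class name and its methods are illustrative, not part of the Speech Sandbox code; only the `SceneManager.LoadScene` calls reflect the actual Unity API.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative sketch: scenes are loaded by their Build Settings index,
// so if the scenes are added out of order, these calls load the wrong scene.
public class SceneFlow : MonoBehaviour
{
    public void StartApp()      { SceneManager.LoadScene(0); } // MainMenu
    public void StartTutorial() { SceneManager.LoadScene(1); } // Tutorial
    public void StartSandbox()  { SceneManager.LoadScene(2); } // SpeechSandbox
}
```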
We must now upload a Watson Assistant workspace JSON to our Watson Assistant service. For this we can use this sample workspace. Once it is uploaded, we can grab the workspace ID.
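For orientation, a Watson Assistant workspace JSON has roughly the following shape. This is a minimal illustrative sketch, not the contents of the actual sample workspace; the intent and entity names here are made up.

```json
{
  "name": "speech-sandbox",
  "language": "en",
  "intents": [
    { "intent": "create", "examples": [ { "text": "create a hammer" } ] }
  ],
  "entities": [
    { "entity": "object", "values": [ { "value": "hammer" } ] }
  ],
  "dialog_nodes": []
}
```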
Lastly, we need to copy our service credentials for both Speech-to-Text and Watson Assistant. Go to the Watson Assistant service credentials tab to find the credentials for Watson Assistant. Do the same for Speech-to-Text.
Open SpeechSandboxStreaming.cs and add the Speech-to-Text credentials and the Watson Assistant credentials along with the workspace ID.
```csharp
private string stt_username = "";
private string stt_password = "";
private string convo_username = "";
private string convo_password = "";
// Change _conversationVersionDate if different from below
private string _conversationVersionDate = "2017-05-26";
private string convo_workspaceId = "";
```
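These fields are typically passed into the SDK's service objects, along the lines of the pattern used in ExampleStreaming.cs. The snippet below is a sketch under that assumption; the namespaces, `Credentials` constructor, and service URL may differ in your SDK version, so check the Examples directory for the exact signatures.

```csharp
using IBM.Watson.DeveloperCloud.Services.SpeechToText.v1;
using IBM.Watson.DeveloperCloud.Utilities;

// Sketch: wrap the username/password fields in a Credentials object
// and hand it to the Speech-to-Text service instance.
Credentials sttCredentials = new Credentials(stt_username, stt_password,
    "https://stream.watsonplatform.net/speech-to-text/api");
SpeechToText speechToText = new SpeechToText(sttCredentials);
```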
You can now press Play and run the Speech Sandbox on your HTC Vive or Oculus Rift. The output should look like the following:
If you want to see how your words are translated into voice commands, take a look at SpeechSandboxStreaming.cs.
Try modifying the conversation by logging into your Watson Assistant service and adding intents and entities.
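Once a new intent is recognized, the app needs to map it to a game action. The sketch below shows the general idea; the method names `SpawnObject` and `DestroyObject` are hypothetical placeholders, not the actual Speech Sandbox API.

```csharp
using UnityEngine;

// Illustrative only: dispatch a recognized intent (and its entity)
// to a corresponding action in the sandbox.
void HandleIntent(string intent, string entity)
{
    switch (intent)
    {
        case "create":
            SpawnObject(entity);   // e.g. "create a hammer"
            break;
        case "destroy":
            DestroyObject(entity); // e.g. "destroy the box"
            break;
        default:
            Debug.Log("Unrecognized intent: " + intent);
            break;
    }
}
```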
We naturally use our voice to communicate, and adding this ability to our virtual reality worlds gives users more power and realism. By using the Watson Unity SDK to bring your 3D and VR applications to life, you give users a far more immersive and enjoyable experience. Get started with the Watson Unity SDK on the Unity Asset Store today!