By Amara Graham | Published November 16, 2018 - Updated November 16, 2018
Tags: Artificial Intelligence, Conversation, Speech and Empathy, Virtual Reality, Gaming
This code pattern shows how to combine the Watson Assistant, Watson Speech to Text, and Watson Text to Speech services with ARKit on an iPhone to drive a voice-powered animated avatar in Unity. Augmented reality offers a low barrier to entry for both developers and users, thanks to framework support built into phones and digital eyewear.
Unity is a great game engine that has grown into a platform for immersive experiences beyond gaming. Unity developers looking to expand their skills and bring technologies like artificial intelligence into their projects will find a voice-enabled chatbot experience a great entry point.
Chatbots and virtual agents bring a more human-like conversational experience to interactions that were previously highly scripted. While this pattern has the animated character walking only forward and backward, it would take only a small amount of work to add voice control for walking left or right as well. And instead of limiting the user to a handful of fixed phrases, the chatbot can be configured to handle many phrasings that mean the same thing, and can even learn over time.
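To make the "many phrasings, one action" idea concrete, here is a minimal sketch of mapping a recognized intent to an avatar movement command. The intent names (`move_forward`, `move_left`, and so on) are illustrative assumptions, not the names used in the pattern's actual Watson Assistant skill:

```python
# Hypothetical sketch: translate a recognized intent into a 2D
# movement direction for the avatar. The intent names here are
# assumptions for illustration, not the pattern's real skill.

MOVEMENT_FOR_INTENT = {
    "move_forward":  (0.0, 1.0),
    "move_backward": (0.0, -1.0),
    "move_left":     (-1.0, 0.0),  # small addition beyond forward/backward
    "move_right":    (1.0, 0.0),
}

def movement_command(intent: str) -> tuple:
    """Return an (x, z) direction for the avatar; stand still if unknown."""
    return MOVEMENT_FOR_INTENT.get(intent, (0.0, 0.0))

# Watson Assistant resolves many phrasings to a single intent, so
# "walk ahead", "go forward", and "move up" can all arrive here as
# "move_forward" without the game code caring which words were spoken.
print(movement_command("move_forward"))  # (0.0, 1.0)
```

Because the game logic only sees intents, extending the vocabulary the user can speak requires no code changes, only new training examples in the Assistant skill.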
This pattern brings three Watson services together so you can work with AI without building or training models yourself. Watson Assistant, Watson Speech to Text, and Watson Text to Speech combine to deliver an immersive experience without placing the burden of complex natural language processing on the developer.
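The round trip through the three services can be sketched as a simple pipeline. The functions below are stand-ins, not real Watson SDK calls; each stub represents one managed service so the control flow of a single user turn is visible:

```python
# Hedged sketch of one conversational turn. None of these functions
# are actual Watson SDK calls; each stub stands in for one service.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for Watson Speech to Text: audio in, transcript out."""
    return "walk forward"  # pretend transcription of the user's speech

def assistant(transcript: str) -> str:
    """Stand-in for Watson Assistant: transcript in, reply text out."""
    return "Okay, I heard: " + transcript

def text_to_speech(reply: str) -> bytes:
    """Stand-in for Watson Text to Speech: reply text in, audio out."""
    return reply.encode("utf-8")  # pretend synthesized speech

def voice_round_trip(audio: bytes) -> bytes:
    """One user turn: STT -> Assistant -> TTS, with no model training."""
    transcript = speech_to_text(audio)
    reply = assistant(transcript)
    return text_to_speech(reply)

print(voice_round_trip(b"...").decode("utf-8"))
```

In the real pattern, the Unity app on the iPhone captures microphone audio, each stub is replaced by a call to the corresponding hosted Watson service, and the synthesized audio is played back through the animated avatar.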
Find the detailed steps for this pattern in the README file, which walks you through setting up the Watson services and running the Unity project.
Get the Code »