by Amara Graham | Published November 16, 2018
Tags: Artificial intelligence, Conversation, Speech and empathy, Virtual reality, Gaming
This pattern was previously written specifically for ARKit, but now with AR Foundation, ARCore can be used as well within the same project.
This code pattern shows how to use the Watson Assistant, Speech to Text, and Text to Speech services, deployed to an iPhone or Android phone with ARKit or ARCore, to power a voice-driven animated avatar in Unity using AR Foundation. Augmented reality lowers the barrier to entry for both developers and users, thanks to framework support in phones and digital eyewear, and Unity's AR Foundation continues to lower that barrier for developers targeting both Android phones and iPhones.
Unity is a great game engine that has grown into a platform for immersive experiences well beyond gaming. Unity developers looking to expand their skills and bring technologies like artificial intelligence into their projects will find that a voice-enabled chatbot experience is a great entry point.
Chatbots and virtual agents bring a more human-like conversational experience to interactions that were previously highly scripted. While this pattern only shows the animated character walking forward and backward, it would take little additional work to voice-control walking left or right as well. And instead of limiting the user to a small set of fixed phrases, the chatbot can be configured to handle many phrasings that mean the same thing, and can even learn over time.
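To make the idea concrete, here is a minimal sketch of mapping a recognized intent to an avatar movement command. The intent names, command names, and response shape are illustrative assumptions, not taken from the pattern's actual Watson Assistant workspace; in the pattern itself this logic lives in the Unity (C#) project.

```python
# Hypothetical sketch: picking an avatar movement command from a
# Watson-Assistant-style intent list. All names here are illustrative.

# Movement directions as (x, z) offsets in the ground plane.
MOVEMENT_COMMANDS = {
    "move_forward": (0, 1),
    "move_backward": (0, -1),
    "move_left": (-1, 0),
    "move_right": (1, 0),
}

def command_for_intents(intents):
    """Return the movement command for the highest-confidence intent.

    `intents` mimics the shape of an Assistant response: a list of
    {"intent": name, "confidence": float} dicts. Returns None when no
    intent was detected or the top intent is not a movement intent.
    """
    if not intents:
        return None
    best = max(intents, key=lambda i: i["confidence"])
    return MOVEMENT_COMMANDS.get(best["intent"])
```

Because many user phrasings ("go left", "step to the left", "move left a bit") resolve to one intent, this mapping stays small even as the set of recognized utterances grows.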
This pattern brings three Watson services together to give you a chance to work with AI without having to build or train models yourself. Watson Assistant, Speech to Text, and Text to Speech combine to deliver an immersive experience without placing the natural language processing burden on the developer.
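The way the three services chain together can be sketched as one conversational round trip. The service functions below are stand-ins, not real Watson SDK calls; in the pattern the equivalent requests are made from Unity through the Watson SDK, and the stub return values are purely illustrative.

```python
# Hedged sketch of the Speech to Text -> Assistant -> Text to Speech
# round trip. Each service is stubbed so the flow is self-contained.

def recognize_speech(audio_bytes):
    # Stand-in for Watson Speech to Text: audio in, transcript out.
    return "walk forward"

def get_assistant_reply(transcript):
    # Stand-in for Watson Assistant: transcript in, response text out.
    return f"Okay, I heard: {transcript}"

def synthesize_speech(text):
    # Stand-in for Watson Text to Speech: text in, audio bytes out.
    return text.encode("utf-8")

def voice_round_trip(audio_bytes):
    """One conversational turn: user speech in, synthesized reply out."""
    transcript = recognize_speech(audio_bytes)
    reply = get_assistant_reply(transcript)
    return synthesize_speech(reply)
```

The developer only wires the three calls together; transcription, intent detection, and voice synthesis all happen inside the managed services.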
Find the detailed steps for this pattern in the README file, which walks you through setting up and running the project.
Get the Code »