Archived | Build an AR avatar for iPhone or Android

Archived content

Archived date: 2019-08-12

This content is no longer being updated or maintained. The content is provided “as is.” Given the rapid evolution of technology, some content, steps, or illustrations may have changed.

Summary

This pattern was originally written specifically for ARKit, but with AR Foundation, ARCore can now be used as well within the same project.

This code pattern shows how to combine the Watson Assistant, Speech to Text, and Text to Speech services with ARKit or ARCore to drive a voice-powered animated avatar in Unity using AR Foundation, deployed to an iPhone or an Android phone. Augmented reality now has a lower barrier to entry for both developers and users, thanks to framework support built into phones and digital eyewear, and Unity’s AR Foundation continues to lower that barrier for developers targeting both Android phones and iPhones.

Description

Unity is a game engine that has grown into a platform for immersive experiences well beyond gaming. Unity developers are looking to expand their skills and use technologies like artificial intelligence in their projects, and building a voice-enabled chatbot experience is a great entry point.

Chatbots and virtual agents bring a more human-like conversational experience to interactions that were previously highly scripted. While this pattern shows the animated character walking only forward and backward, it would take only a small amount of work to add voice-controlled walking left or right, for example. And instead of limiting the user to a small set of fixed phrases, the chatbot can be configured to handle a large number of phrases that mean the same thing, and even to learn over time.
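For instance, each movement command can map to its own Watson Assistant intent. The following is a minimal sketch of how a recognized intent might be routed to an avatar animation; the intent names (move_forward, move_back) and Animator trigger names are hypothetical, not taken from the pattern:

```csharp
using UnityEngine;

// Hypothetical sketch: route the top intent returned by Watson
// Assistant to an Animator trigger on the avatar. Intent and
// trigger names are illustrative, not from the pattern.
public class AvatarCommandRouter : MonoBehaviour
{
    [SerializeField] private Animator avatarAnimator;

    // Call this with the name of the top-ranked intent.
    public void OnIntent(string intentName)
    {
        switch (intentName)
        {
            case "move_forward":
                avatarAnimator.SetTrigger("WalkForward");
                break;
            case "move_back":
                avatarAnimator.SetTrigger("WalkBackward");
                break;
            default:
                Debug.Log($"Unhandled intent: {intentName}");
                break;
        }
    }
}
```

Because each direction is its own intent, adding left or right movement is mostly a matter of defining new intents with their example phrases and adding matching cases here.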

This pattern brings three Watson services together to give you a chance to work with AI without having to build or train models directly. Watson Assistant, Speech to Text, and Text to Speech services work together to offer an immersive experience without the complex natural language processing burden on the developer.

Flow


  1. Run the app on a phone and speak a command like “walk forward”.
  2. The character is rendered on a nearby horizontal plane.
  3. The Speech to Text service is invoked and converts the audio to text.
  4. The text is received and sent to Watson Assistant.
  5. Watson Assistant prepares a response and sends the response to Text to Speech.
  6. The character responds verbally, and the animation to walk forward is triggered (a conceptual sketch of this flow follows the list).
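The following conceptual skeleton shows this flow in Unity C#. The three Watson services are abstracted behind hypothetical delegate hooks to keep the ordering clear; in the real project these would be wired to the asynchronous, callback-based Speech to Text, Assistant, and Text to Speech services in the Watson SDK for Unity:

```csharp
using System;
using UnityEngine;

// Conceptual skeleton of the flow above. The Watson services are
// stand-in delegates here so the step ordering stays visible.
public class VoicePipeline : MonoBehaviour
{
    // Hypothetical hooks standing in for the three Watson services.
    public Func<byte[], string> SpeechToText;                     // audio -> transcript (step 3)
    public Func<string, (string intent, string reply)> Assistant; // transcript -> intent + reply (steps 4-5)
    public Action<string> TextToSpeech;                           // reply -> spoken audio (step 6)

    [SerializeField] private Animator avatarAnimator;

    // Called once the phone's microphone has captured an utterance.
    public void OnAudioCaptured(byte[] audio)
    {
        string transcript = SpeechToText(audio);
        var (intent, reply) = Assistant(transcript);
        TextToSpeech(reply);
        if (intent == "move_forward")
            avatarAnimator.SetTrigger("WalkForward"); // step 6: walk animation
    }
}
```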

Instructions

Find the detailed steps for this pattern in the README file. The steps show you how to:

  1. Download the code from GitHub.
  2. Download and install the Watson SDK for Unity.
  3. Download and install the IBM Cloud Unity SDK Core.
  4. Create service instances for Watson Assistant, Speech to Text, and Text to Speech, and add the credentials to the Unity editor (a minimal setup sketch follows this list).
  5. Upload the .json file to Watson Assistant, and add the workspace information to the Unity editor.
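As a rough idea of what step 4 looks like in code, here is a minimal sketch of authenticating and creating a Watson Assistant V1 instance with the 2019-era Watson SDK for Unity and the IBM Cloud Unity SDK Core. The {apikey} and {serviceUrl} placeholders stand in for your own service credentials, and the exact namespaces and version string depend on the SDK release you install:

```csharp
using System.Collections;
using IBM.Cloud.SDK.Authentication.Iam;
using IBM.Watson.Assistant.V1;
using UnityEngine;

// Minimal sketch of step 4: authenticate with an IAM API key and
// create a Watson Assistant service instance. Replace the {apikey}
// and {serviceUrl} placeholders with your own service credentials.
public class AssistantSetup : MonoBehaviour
{
    private AssistantService assistant;

    private IEnumerator Start()
    {
        var authenticator = new IamAuthenticator(apikey: "{apikey}");

        // Wait for the IAM token exchange to complete before making calls.
        while (!authenticator.CanAuthenticate())
            yield return null;

        assistant = new AssistantService("2019-02-28", authenticator);
        assistant.SetServiceUrl("{serviceUrl}");
    }
}
```

The Speech to Text and Text to Speech services are created the same way, each with its own credentials; the workspace ID from step 5 is then passed along with each Assistant message call.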
Amara Graham