In February, IBM and Unity partnered to bring the power of AI to the Unity community. We launched the IBM Watson™ SDK for Unity on the Unity Asset Store. This SDK is the first asset of its kind to bring scalable AI services to Unity, enabling developers to capitalize on the power of AI in their Unity applications. Unity developers can integrate Watson services without generating or acquiring large amounts of data, and without machine learning or deep learning expertise. Working with AI is now as simple as calling a REST API, and the IBM Watson SDK for Unity goes a step further by wrapping those REST calls for you. Because immersive AR/VR experiences limit traditional input methods like keyboards, interest has grown in speech to text, text to speech, and chatbot or conversational interfaces like Watson Assistant.
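To give a sense of how simple that raw REST layer is, here is a minimal sketch of calling the Watson Language Translator v3 API directly from a Unity coroutine with UnityWebRequest. The endpoint, version date, and "apikey" basic-auth convention match the service as it worked at the time of writing; the placeholder credentials are assumptions you would fill in from your own IBM Cloud instance, and the SDK wrapper spares you this boilerplate entirely.

    using System.Collections;
    using System.Text;
    using UnityEngine;
    using UnityEngine.Networking;

    public class WatsonTranslateRest : MonoBehaviour
    {
        // Placeholders: copy the API key and URL from your IBM Cloud service instance.
        private const string ApiKey = "<your-api-key>";
        private const string Endpoint =
            "https://gateway.watsonplatform.net/language-translator/api/v3/translate?version=2018-05-01";

        private IEnumerator Start()
        {
            // Request body: the text to translate plus a source-target model id.
            string body = "{\"text\": [\"Hola mundo\"], \"model_id\": \"es-en\"}";

            using (UnityWebRequest request = new UnityWebRequest(Endpoint, "POST"))
            {
                request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
                request.downloadHandler = new DownloadHandlerBuffer();
                request.SetRequestHeader("Content-Type", "application/json");

                // Watson services accept basic auth with the literal username "apikey".
                string auth = System.Convert.ToBase64String(
                    Encoding.UTF8.GetBytes("apikey:" + ApiKey));
                request.SetRequestHeader("Authorization", "Basic " + auth);

                yield return request.SendWebRequest();

                if (request.isNetworkError || request.isHttpError)
                    Debug.LogError(request.error);
                else
                    Debug.Log(request.downloadHandler.text); // JSON containing the translations
            }
        }
    }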

Today, IBM is proud to announce the newest addition to the suite of available assets on Unity’s Asset Store: the Watson Translator Asset. The new asset lets you use Watson’s translation services within the VR Watson Speech Sandbox. When we launched the VR Watson Speech Sandbox, we knew that humans naturally use their voices to communicate, and that adding this ability to our VR worlds gives users more power and realism. Since launching the Speech Sandbox, we have actively explored AR and VR applications with clients around the world, and we found that users and developers were seeking translation capabilities. Users wanted to converse in their natural language, and developers believed translation held the key to connecting with a global audience and would ultimately expand the reach of ideas, games, and information.

Check out a video explaining how to get started with the VR Watson Speech Sandbox: Create Voice Commands with Unity and Watson.

By changing a single line of code, you can translate any supported language (the example code demonstrates Spanish) to English when building a virtual environment that requires voice commands or conversation. To get started, visit this GitHub repo.
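For illustration with the REST sketch above (the Speech Sandbox repo exposes the same choice through its own configuration), the one line in question is the model id, which Watson Language Translator names as source-target pairs:

    // One-line change: the model id picks the language pair.
    // "es-en" is Spanish to English; "fr-en" would be French to English.
    string body = "{\"text\": [\"Hola mundo\"], \"model_id\": \"es-en\"}";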

At IBM, we’ve seen several clients utilize Watson translation services when building AR software. One notable client, Dragon Creative Enterprise Solution Ltd. (DCES), develops AR solutions that let its customers interact in natural language, with near real-time translation that accounts for dialects and local nuances. By incorporating translated text in AR, DCES reported responding to user feedback 50 percent faster and enhancing the quality of conversations. Check out the case study to learn how DCES used IBM Cloud to accelerate time to market.

What’s next for AI in AR/VR?

The team at IBM spent the year getting to know the Unity developer community and the AR/VR ecosystem, and you can learn all about our inspirational experiences at Unite LA and Unite Berlin.

In this process, we observed two themes: the rise of voice and the rise of AR.

Going into 2019, we’re excited to share the newest code pattern, Build an AR avatar for the iPhone, which combines voice and AR. The code pattern shows how to use the Watson Assistant, Watson Speech to Text, and Watson Text to Speech services, deployed to an iPhone with ARKit, to drive a voice-powered animated avatar in Unity. This lets anyone get up and running quickly in AR and share an immersive experience right from their iPhone. The pattern can serve as a starting point or inspiration for other Watson-based experiences.
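At a high level, the pattern wires the three services into a loop. The sketch below shows that flow with hypothetical stub methods standing in for the asynchronous Watson SDK callbacks; the method names are illustrative, and the code pattern's repo has the real wiring.

    using UnityEngine;

    // Conceptual loop of the AR avatar pattern: microphone audio goes to
    // Speech to Text, the transcript goes to Watson Assistant, and the
    // Assistant's reply is voiced by Text to Speech while ARKit animates
    // the avatar. Method names here are illustrative stubs.
    public class AvatarVoiceLoop : MonoBehaviour
    {
        // 1. Speech to Text delivers a transcript of what the user said.
        public void OnSpeechRecognized(string transcript)
        {
            SendToAssistant(transcript);
        }

        // 2. Watson Assistant interprets the transcript and returns a reply.
        //    (In the SDK this is an asynchronous call with a success callback.)
        private void SendToAssistant(string userText)
        {
            string reply = "<Assistant's response>"; // stubbed response
            Speak(reply);
        }

        // 3. Text to Speech synthesizes the reply; the resulting audio
        //    drives the avatar's animation while it plays.
        private void Speak(string reply)
        {
            Debug.Log("Avatar says: " + reply);
        }
    }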

We encourage all Unity developers to get ahead of the trend and build their first AI-powered application in Unity!

Note: You must register for an IBM Cloud account to instantiate Watson services and access your service credentials.

Resources

We’ve rounded up some key resources to help you get started with the IBM Watson Unity SDK and learn more about the partnership between IBM and Unity: