Skill Level: Intermediate

A basic knowledge of Unity3D and C# is assumed

Using the tools described, you can build a Unity3D front-end that listens to your spoken questions and speaks its responses back to you, in exactly the same manner as a text-based chat bot, except this is way cooler!

Disclaimer: This code is provided as-is


IBM Cloud account - sign up for one, if you don't already have one.

Unity3D development environment - Downloads (determine the correct version for you, as per the EULA guidelines)

Unity developer account - sign up for one (so you can access the Unity Asset Store)

(free) UMA2 plug-in - available from the Unity Asset Store

(not-free) SALSA plug-in (RandomEyes and LipSync)* - available from the Unity Asset Store

(free) IBM Watson SDK - available from the Unity Asset Store (also available directly from here)

IBM Watson APIs - Watson Assistant / Speech to Text / Text to Speech - available from IBM Cloud


*Other products are available; I just happened to choose this one due to its simplicity, ease of use and very helpful website






I apologise up-front that this article is very screenshot heavy, but I do feel the screenshots add value as a walk-through guide - and you can always see the setting values in my (working) environment and compare them with yours if something is not working.

I'm afraid that, due to having to purchase the licence/software for SALSA, I will not be supplying a GitHub repo for this project - but, as you will see below, all of the components are available for you to install, configure and set up.  The only code is the .cs file, and I've screenshot its contents enough for you to recreate it yourself.

UPDATE: Okay, for ease of reading, I've uploaded the .cs file to a GitHub repo as-is.


UPDATE/UPDATE:  As mentioned by a few people, during November 2018 the Watson services and SDK were migrated to the new IAM token authentication mechanism, which broke the previous code that connected to the Watson services.  I have now created a NEW Watson_newServices.cs file that can be downloaded from the GitHub repo.  This still allows you to use the username/password if you have old services, but if you create new ones you will need to use the IAM API key value.  I've also tested this in a newer version of Unity (2018.2.16f1).


Here's a sneak peek of the end result: 



This ARTICLE (from SOULMACHINES) does a really good job of explaining why using a human interface is going to change the way we interact with computers.


UPDATE: Okay, following on from a few comments being posted about errors with the latest Watson SDK (2.4.0) that was breaking the code, I tested this myself to replicate the issue.  I downloaded the latest version of Watson SDK 2.4.0 .zip file, extracted it and overwrote the Assets/Watson folder with the contents.

When starting up Unity and opening the project, I see the following message (and this matches the error raised in the comments below):


(oddly, even though I downloaded v2.4.0 the changelog file still only showed 2.3.0 - but I assure you it was the 2.4.0 release I tested with)


Modify the following 2 lines of code in the WatsonTTS.cs file and everything will then work fine for you - I tested it after the code change and all works as expected.


(Thanks to Crazy Minnow for the info. in the comments)
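Based on the information in the comments at the end of this article, the two-line fix is likely of this shape: in SDK 2.4.0 the StartListening callbacks gained a second customData parameter, so the two handler signatures need updating. This is a hedged sketch only - the method names come from the repo's WatsonTTS.cs, and the bodies are placeholders:

```csharp
// Hedged sketch of the 2.4.0 signature fix. Method names are from WatsonTTS.cs;
// the bodies (elided here) stay exactly as they were.
using System.Collections.Generic;
using IBM.Watson.DeveloperCloud.Services.SpeechToText.v1;

public partial class WatsonTTS
{
    // Before: private void OnSTTRecognize(SpeechRecognitionEvent result)
    private void OnSTTRecognize(SpeechRecognitionEvent result, Dictionary<string, object> customData)
    {
        // ...existing recognition handling, unchanged...
    }

    // Before: private void OnSTTRecognizeSpeaker(SpeakerRecognitionEvent result)
    private void OnSTTRecognizeSpeaker(SpeakerRecognitionEvent result, Dictionary<string, object> customData)
    {
        // ...existing speaker-label handling, unchanged...
    }
}
```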


Due to a change with the Watson Assistant service in November 2018, this article has had to be updated to use the latest Watson SDK (v2.11.0) and the creation of a new Watson Assistant service.  (Thanks to Adam Chapman, for raising the issue that the IAM authentication required a code migration to the latest SDK and to change the WatsonTTS.cs code to connect to the Watson Assistant service).

Get the latest .cs file from the Github repo here.


  1. Create and setup your IBM Watson API services

    As defined in the ingredients section, it is assumed that you have an IBM Cloud account. Sign in to your account and select [Catalog] [Watson]. The services that we are interested in are highlighted below:


    (As you can see, there are many more Watson API services that you can investigate and integrate for version 2.0)

    Select the [Speech to Text] service and select [Create]:


    Once created, we need to take a copy of the [Service Credentials] as we’ll need them within the Unity3D app:



    Now, repeat the same thing for the [Text to Speech] service:




    (The following was valid in April/June 2018 for the creation of a Watson Assistant service – if you create a NEW service today, it will not provide you with a username/password, but an API Key and will require you to setup the service a little differently.  I shall leave the original screenshots/documentation below for historic reasons, but if you are creating a new service post-November 2018, then you should see the following section for how to setup a sample Watson Assistant service to be used by your Virtual Assistant)

    Finally, we need to create a [Watson Assistant] service:



    Now, we have all the [Service Credentials] that we shall need to include from the Unity3D Watson SDK.


    For the sake of this article, we shall create some quick and simple conversation Intent/Entity and Dialog flows within the [Watson Assistant] service.

    To perform this, we need to click on [Manage] and then click on [Open tool]:


    Then select the [Workspaces] tab from the Introduction screen.

    This will show the tiles of Workspaces.  Create a new Workspace (or re-use an existing one).  I am using a pre-existing Workspace:


    We need to click on [View Details] in order to get the Workspace Id (which we need in order to connect to this Workspace):


    Once we have that value, we can click on the Workspace to see the Intents/Entities/Dialogs:


    As you can see, I have some pre-existing Intents (some copied from the [Content Catalog]), but for this recipe, I just set up the [#General_Jokes] Intent.


    I set up 17 examples, which is a reasonable number of examples for an Intent.

    I also set up some Entities; I'll include them here just to show the [@startConversation], as you'll see that in the [Dialog] tab shortly:


    Switch to the [Dialog] tab and by default you should have a [Welcome] node.  This is the initial node that is executed when you first call the [Watson Assistant] API.  As you can see below, this is the first node that gets executed on an initial connection [conversation_start]; it will respond with the text “HAL 9000 initiated and awaiting your command” and shall wait at this point for the user's input:


    We shall create a new top-level node and link it to the [#General_Jokes] intent; therefore, if the #General_Jokes intent is identified and triggered, it shall follow this node and into its child nodes, but first it shall return the response “Seriously, you want me to tell you a joke?” to the user and wait for a response:


    If the user responds with “Yes” (or something positive) then we shall respond with a “random” response, which happens to be a joke (I didn’t say they were quality jokes… but you can change that).  (Take note of the <waitX> tags within the responses; we’ll come back to those later on)


    We create a child node response for a non-Yes response; here we’re just taking any non-Yes response and catering for it, rather than an exact “no” response (but you can modify that if you like).  As you can see, if you respond with a non-Yes response, we just respond with “Okay, not a problem, no funny time for you then”


    That’s pretty much all we really need to set up within the [Watson Assistant] service for now – you can extend it as much as you see fit.


    POST-NOVEMBER 2018 Watson Assistant service creation/setup example:

    You will notice now that if you create a new Watson Assistant service there are new ways to connect and configure the service.

    Tutorials are available HERE to show you the new features and how to use them; for this article, we shall just do the minimum that is required.





























    You do not need to, but if you want to add further integrations you can:



    You can still test the conversation with the [Try it out] button, this should be the same conversation you will have with your Virtual Assistant:




     Remember, for this to work, you will need to upgrade to the v2.11.0 SDK release, as this supports the new connection code:



    The changes you need to make to the WatsonTTS.cs file can be lifted directly from the sample code provided with the SDK; we’re going to pretty much copy the code as-is and use it ourselves to connect.  If a username/password is present, we’ll use those values; if not, we’ll use the API key value to connect to the Watson Assistant service.
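    The credential-selection logic just described might look something like the following sketch, which follows the pattern in the v2.11.0 SDK example code. The field names (assistantUsername, assistantIamApikey, etc.) are illustrative, not necessarily those used in Watson_newServices.cs:

```csharp
// Hedged sketch of choosing username/password vs. IAM API key authentication,
// following the v2.11.0 SDK examples. Field names are illustrative.
using UnityEngine;
using IBM.Watson.DeveloperCloud.Utilities;

public class AssistantCredentialsSketch : MonoBehaviour
{
    [SerializeField] private string assistantUsername;   // old-style services
    [SerializeField] private string assistantPassword;
    [SerializeField] private string assistantIamApikey;  // post-Nov-2018 services
    [SerializeField] private string assistantUrl;

    private Credentials _credentials;

    void Start()
    {
        if (!string.IsNullOrEmpty(assistantUsername) && !string.IsNullOrEmpty(assistantPassword))
        {
            // Old services still supply a username/password pair.
            _credentials = new Credentials(assistantUsername, assistantPassword, assistantUrl);
        }
        else
        {
            // New services authenticate with an IAM API key via TokenOptions.
            TokenOptions tokenOptions = new TokenOptions()
            {
                IamApiKey = assistantIamApikey
            };
            _credentials = new Credentials(tokenOptions, assistantUrl);
            // Note: the SDK examples wait in a coroutine until
            // _credentials.HasIamTokenData() before using the service.
        }
    }
}
```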




    I must confess that I am on a very limited wi-fi signal in a hotel at this point in time and have not tested this fully; it should work, based on the code within ExampleConversation.cs (from the v2.11.0 SDK code).  When I get the opportunity, I shall confirm that it is correct.  It should give you enough to work with though.

    UPDATE: I have tested this code and it works fine, everything connects as it should, it behaves as it did previously and all is good again.

  2. Setup Unity3D

    After you’ve downloaded and installed Unity, when you start you’ll be given the option to create [New] or [Open] an existing project.  Select [New].


    Select [Add Asset Package], you need to add [IBM Watson Unity SDK], [SALSA With RandomEyes (assuming you have purchased it via the unity asset store)] and [UMA 2 – Unity Multipurpose Avatar]:

    If the [IBM Watson Unity SDK] is not available, please follow the instructions HERE to obtaining the SDK and adding it manually to your Unity project.


    You will then be presented with an “empty” Unity3D project, like so:


    Follow the instructions as defined on the SALSA website here


    You need to go to the SALSA website and download the UMA_DCS asset, select [Assets], [Import Package], [Custom Package] and select the .unitypackage file that you downloaded:


    UPDATE: For some reason I have to import this twice for it to import the [x]Examples folder – that we need for the Scene.

    This will then give you access to the Example/Scenes [SalsaUmaSync 1-Click Setup] – this has the GameObjects pre-selected and setup for us to use out-of-the-box:


    Double-click this scene to open it in the Unity IDE.


    Click on the [Main Camera] and make sure that the [Audio Listener] is selected (this is the Microphone input):


    Just make sure that this has a tick so that it is active.


    In relation to the [Audio Source], ensure that you have “Play On Awake” UNCHECKED (by default it might be set to play – we do NOT want this to be active.  You’ll know if it is, because when you press [>]Run you will hear “Ethan” talking – so just uncheck it here)



    All that we shall add extra is a Canvas/Text GameObject, like so:


    This is purely so that we can output to the screen what has been detected by the [Audio Listener] and then converted via the [Speech to Text] service.

    UPDATE: Make sure that you add the “Watson” Script component to the SALSA_UMA2_DCS GameObject – if you do not you will not hear any audio as the code is setup to link to the AudioSrc of this GameObject.  Basically, if everything runs okay, but you hear nothing, you’ve attached the “Watson” Script to the “Main Camera” or some other GameObject.  You can easily copy over the information and remove the component from wherever you had it previously.

    Ensure that you have all of the Components added to your GameObject like so:



    Make sure that you have “Play” UNCHECKED, if you do not then you will see an error output like so:



    As we’ll be enhancing the default setup, we will add a new folder called [Scripts] and we shall add a new file called [WatsonTTS.cs] (or Watson_newServices.cs for the latest update)


    As you can see, in the Inspector view we can add the [Service Credentials] from the IBM Watson API services that we captured earlier.


    You see a [Preview] of the file if you single-click it; if you double-click, it will open in the editor you have defined.  I have chosen to use the MonoDevelop IDE, as we shall see in the next step.

    The one modification that I have made is to add extra Wardrobe items to the UMA character; to do this, do the following:


    I changed the [Active Race] to Female and added the [Wardrobe Recipes]:


    One last modification is to change where the UMA avatar is viewed from the Camera perspective, so that we can zoom into just the head of the avatar:


    By default, the UMA character will have the “Locomotion” animation assigned to it, which makes it look around randomly – a little distracting.  If I had more time, I would customise this to a smaller range; we’ll do that for version 2.0.  For now, we’ll just remove the animation:


    We have not covered the content of the [WatsonTTS.cs] file yet, but once you’ve created the content and you press the [>] run button you will see your 3d Digital Human, like so:


    Due to using the SALSA plug-in, the Avatar will automatically move its eyes and blink, and when it speaks it will lip-sync to the words that are spoken.

    Not bad for spending less than 30 minutes getting this far!  Imagine what you could do in “hours” or “days” of refining and getting to know more about the UMA modelling etc… As I say, I’ve used the most basic out-of-the-box example here so I could focus on the integration with the IBM Watson API services, but I will return to refining and enhancing the UMA and SALSA setup and configuration.


    Yes!  It was commented that the above female avatar looked a bit scary, so I switched her for the male avatar – very simple to do.  I repeated the same exercise of adding pants / t-shirt / haircut and eyebrows, and in minutes we now have this avatar:


    Okay, still not 100% perfect, but pretty good for something this quick and easy – we can iron out the finer details once we get it all working together.

  3. Explanation of the WatsonTTS.cs C# file used to control everything


    The code that was used as a baseline is already included within the Watson/Examples/ServiceExamples/Scripts folder .cs scripts.


    As mentioned in the previous step, we shall create a new C# script file with the following contents.

    To start with, we need to include all the libraries that we shall be using.  Then you’ll notice that we have field declarations for the Watson APIs that we recorded earlier; we set them this way so we don’t have to hard-code them into the .cs file.

    You’ll also notice that we have private variables declared that we’ll use within the script.

    UPDATE: this code is slightly different (but not by much) in the latest .cs file
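    Sketched out, the top of the file might look broadly like this. The exact field names in WatsonTTS.cs may differ - treat this as a hedged outline of the structure just described:

```csharp
// Hedged sketch: library includes, Inspector-exposed credential fields,
// and the private state variables the rest of the script relies on.
// Exact names in WatsonTTS.cs may differ.
using System.Collections.Generic;
using UnityEngine;
using IBM.Watson.DeveloperCloud.Services.Conversation.v1;
using IBM.Watson.DeveloperCloud.Services.SpeechToText.v1;
using IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1;
using IBM.Watson.DeveloperCloud.Utilities;

public class WatsonTTS : MonoBehaviour
{
    // Service credentials - set in the Unity Inspector, not hard-coded.
    [SerializeField] private string sttUsername, sttPassword, sttUrl;
    [SerializeField] private string ttsUsername, ttsPassword, ttsUrl;
    [SerializeField] private string convUsername, convPassword, convUrl;
    [SerializeField] private string convWorkspaceId, convVersionDate;

    // Service objects created in Start().
    private SpeechToText _speechToText;
    private TextToSpeech _textToSpeech;
    private Conversation _conversation;

    // Private state used across the callbacks and Update().
    private Dictionary<string, object> _context;  // conversation context, grown each turn
    private string TTS_content;                   // text waiting to be spoken
    private bool play = false;                    // set when a TTS call should be made
    private bool check = false;                   // true while a clip is playing
    private float wait;                           // remaining clip length in seconds
}
```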


    As we do not hardcode the Watson API values in the .cs script, you have to insert the values within the Unity IDE itself, like so:


    Now, back to the C# coding.  The structure of a .cs file for Unity is to have a Start() method that is executed as an initialiser and an Update() method that is executed every frame (if you’ve ever coded for an Arduino, then it’s a very similar setup).


    The Start() method uses the credentials defined in the IDE and the Watson SDK to prepare the objects for later usage.

    In the second part, we execute the code to make an initial connection to the Watson Assistant service, just passing the text “first hello” and the results will be returned to the OnCONVMessage callback method.

    As you can see the object “response” is passed to this method and this will contain the JSON response from the Watson Assistant service.
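    A hedged sketch of that initial call, following the Conversation v1 pattern in the SDK's example code (the OnFail handler name is illustrative):

```csharp
// Hedged sketch of Start() making the first call to Watson Assistant and the
// callback skeletons it wires up. Follows the SDK's Conversation v1 examples.
private void Start()
{
    Credentials convCredentials = new Credentials(convUsername, convPassword, convUrl);
    _conversation = new Conversation(convCredentials);
    _conversation.VersionDate = convVersionDate;

    // Kick off the dialog; the [Welcome] node's greeting comes back to OnCONVMessage.
    _conversation.Message(OnCONVMessage, OnFail, convWorkspaceId, "first hello");
}

private void OnCONVMessage(object response, Dictionary<string, object> customData)
{
    // response holds the JSON returned by the Watson Assistant service
    // (handling of it is described in the article text).
}

// Requires: using IBM.Watson.DeveloperCloud.Connection;
private void OnFail(RESTConnector.Error error, Dictionary<string, object> customData)
{
    Debug.LogError("Watson Assistant call failed: " + error.ToString());
}
```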


    In the response, we are passed the “context” variable; we copy this to the local _context variable so that we can pass it as an input each time we make a call to the Watson Assistant service, keeping track of the “context” values of the conversation.

    You can also see above, that we extract the output:text JSON value as this contains the text that is returned by the Watson Assistant Dialog node.

    Just as an example, I have left in some custom action tags that are contained within the Dialog node response.  As you can see above, we can detect these action tags within the conversation text itself and replace these with the values that the Text to Speech API service requires.  The reason for these break pauses will become clearer later on.  We store the text to be converted into the global variable TTS_content.

    As you can then see, we set the play variable to true.  This will then get picked up on the next cycle of the Update() method.
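    Pulled together, the callback body just described might look like this hedged sketch (the response is treated as a deserialised dictionary, as in the SDK's ExampleConversation.cs; the exact SSML replacement string is illustrative):

```csharp
// Hedged sketch of OnCONVMessage(): keep the context for the next turn,
// extract output.text, swap the custom action tags for TTS pauses, then
// flag the Update() loop to speak the result.
// Requires: using System.Collections.Generic;
private void OnCONVMessage(object response, Dictionary<string, object> customData)
{
    Dictionary<string, object> respDict = response as Dictionary<string, object>;

    // Preserve the conversation context so it can be sent with the next request.
    object tempContext = null;
    respDict.TryGetValue("context", out tempContext);
    if (tempContext != null)
        _context = tempContext as Dictionary<string, object>;

    // Extract output:text - the reply produced by the Dialog node.
    object outputObj = null;
    respDict.TryGetValue("output", out outputObj);
    List<object> texts = (outputObj as Dictionary<string, object>)["text"] as List<object>;
    string text = (texts != null && texts.Count > 0) ? texts[0] as string : "";

    // Replace the custom <waitX> action tags with the break pauses that the
    // Text to Speech service understands (illustrative replacement).
    text = text.Replace("<wait3>", "<break time='3s'/>");

    TTS_content = text;  // stored globally for GetTTS()
    play = true;         // picked up on the next cycle of Update()
}
```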



    As you can see, the first check we make in the Update() method is on the value of the play variable.  Why do we do this?  Well… if we are going to call the Text to Speech service and play the speech to the user, we need to stop the microphone from listening, otherwise we’ll end up with a self-talking avatar that is speaking and listening to itself.  Not what we want.  We want to play the message and, when it has finished, start listening for the user’s input via the microphone.

    There’s probably a better way to do it from within Unity, but I found that the above code worked for me.  We perform a check (we set the variable value in another method as you’ll see shortly) and we countdown the length of time of the clip that is being played.  This way, we can then determine when the Avatar has finished speaking / playing the clip and then start listening via the microphone again.

    Going back to the check on the play variable – if we look previously, at the end of the onCONVMessage() callback method we set play to true, so this will call the GetTTS() method.
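    The gating just described might be sketched like this. Active, StartRecording() and StopRecording() follow the SDK's ExampleStreaming.cs (where Active toggles the Speech to Text listening state); this is a hedged outline, not the exact WatsonTTS.cs body:

```csharp
// Hedged sketch of Update(): trigger TTS when play is set, and count down
// the clip length while speaking before re-enabling the microphone.
void Update()
{
    if (play)
    {
        play = false;
        Active = false;      // stop the Speech to Text service listening
        StopRecording();     // release the microphone while the avatar talks
        GetTTS();            // convert TTS_content into audio and play it
    }

    if (check)
    {
        wait -= Time.deltaTime;  // count down the length of the playing clip
        if (wait <= 0f)
        {
            check = false;
            Active = true;   // speech finished - start listening again
            StartRecording();
        }
    }
}
```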


    The GetTTS() method calls the Watson Text to Speech API; the only thing we’re setting here is the voice to use, and we pass the TTS_content variable that contains the text to convert.  The callback will go to the HandleToSpeechCallback() method.

    As you can see the clip object is returned and we assign this to the Audio Source and Play() the clip.  Here, we set the wait variable to the length of the clip and set the check variable to true – again we use these values within the Update() method.
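    As a hedged sketch, those two methods might look like this. The ToSpeech call matches the line quoted later in this article; the voice choice is illustrative, and the AudioSource is assumed to be the SALSA-driven one on the same GameObject:

```csharp
// Hedged sketch of GetTTS() and its callback: convert TTS_content to audio,
// play it, and record the clip length for the Update() countdown.
private void GetTTS()
{
    _textToSpeech.Voice = VoiceType.en_US_Michael;  // illustrative voice choice
    _textToSpeech.ToSpeech(HandleToSpeechCallback, OnTTSFail, TTS_content, true);
}

private void HandleToSpeechCallback(AudioClip clip, Dictionary<string, object> customData)
{
    if (clip != null)
    {
        // Assumed: the AudioSource on this (SALSA) GameObject drives the lip-sync.
        AudioSource source = GetComponent<AudioSource>();
        source.clip = clip;
        source.Play();

        wait = clip.length;  // used by Update() to know when speech has finished
        check = true;
    }
}
```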


    Going back up the file, we have the out-of-the-box content from the sample files for the Speech to Text service.


    As you can see above, when the StartRecording() method is executed, it will call the RecordHandling() method as shown below:


    This starts the microphone listening, takes the captured speech and streams the content to the Speech to Text service.
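    The recording loop closely follows the SDK's ExampleStreaming.cs (where it is named RecordingHandler): record into a looping buffer and stream each filled half to the service. This hedged sketch omits the sample's extra level/timeout handling:

```csharp
// Hedged sketch of the microphone coroutine, after ExampleStreaming.cs:
// fill a looping buffer and send each completed half to Speech to Text.
// Requires: using System.Collections; using IBM.Watson.DeveloperCloud.DataTypes;
private IEnumerator RecordingHandler()
{
    _recording = Microphone.Start(_microphoneID, true, _recordingBufferSize, _recordingHZ);
    yield return null;

    bool bFirstBlock = true;
    int midPoint = _recording.samples / 2;

    while (_recordingRoutine != 0 && _recording != null)
    {
        int writePos = Microphone.GetPosition(_microphoneID);
        if ((bFirstBlock && writePos >= midPoint) || (!bFirstBlock && writePos < midPoint))
        {
            // One half of the buffer is full - package it and send it.
            float[] samples = new float[midPoint];
            _recording.GetData(samples, bFirstBlock ? 0 : midPoint);

            AudioData record = new AudioData();
            record.MaxLevel = Mathf.Max(Mathf.Abs(Mathf.Min(samples)), Mathf.Max(samples));
            record.Clip = AudioClip.Create("Recording", midPoint, _recording.channels, _recordingHZ, false);
            record.Clip.SetData(samples, 0);

            _speechToText.OnListen(record);  // stream the chunk to the service
            bFirstBlock = !bFirstBlock;
        }
        else
        {
            yield return new WaitForSeconds(0.05f);
        }
    }
    yield break;
}
```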


    As you are speaking, the Speech to Text service will attempt to convert the text “live” and show the output to the Canvas text variable on the screen.

    Once the speech has finished (the result is determined to be .final rather than .interim), we take that text and call the Watson Assistant API via the Watson SDK, passing the Input text and the Context variable (as this is the 2nd+ conversation call, we need to keep passing the growing Context variable value)
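    That final/interim handling might be sketched as follows. ResultsField (the Canvas Text) is an assumed field name, and the MessageRequest field names follow the v1 Conversation data models - treat the whole block as a hedged outline:

```csharp
// Hedged sketch of the recognition callback: show interim results on screen,
// and on a final result send the text to Watson Assistant with the saved context.
private void OnSTTRecognize(SpeechRecognitionEvent result, Dictionary<string, object> customData)
{
    if (result == null || result.results.Length == 0)
        return;

    foreach (var res in result.results)
    {
        foreach (var alt in res.alternatives)
        {
            ResultsField.text = alt.transcript;  // live output to the Canvas Text

            if (res.final)
            {
                Debug.Log("[DEBUG] " + alt.transcript);

                // 2nd+ conversation turn: pass the growing context back in.
                MessageRequest messageRequest = new MessageRequest()
                {
                    input = new Dictionary<string, object>() { { "text", alt.transcript } },
                    context = _context
                };
                _conversation.Message(OnCONVMessage, OnFail, convWorkspaceId, messageRequest);
            }
        }
    }
}
```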


    That does seem like quite a lot, but it is actually pretty simple and does exactly what it is required to do.  Next we’ll see what it actually does.

  4. Preview and Debug within Unity

    This is what your Unity IDE screen should now look like if you are viewing the “Scene” tab and have the “SALSA_UMA2_DCS” GameObject selected:


     As you can see, I have the Active Race now set to [HumanMaleDCS] and I have added some Wardrobe Recipes from the examples folder.


    When you press the [>] Run button, the Avatar will be displayed in the “scene” window within the IDE and you will see the Debug.Log() output values displayed underneath.  This is where you can keep track of what is going on within the .cs code:


    As you can see, I log output when the “play” variable is set to true; this triggers the action in the Update() method.  This is actually where the speech for the welcome/greeting message is happening.  The “Server state is listening” output is where the speech has finished and the microphone is now active and listening.  The “[DEBUG] tell me a joke” output is showing me what the Speech to Text service recognised and will then be passing to the Watson Assistant service.  As I say, this is a good way to see the output of each step and to analyse the information in more detail.  If you select a line in the DEBUG output, you will see a small window at the bottom of the panel that shows you more in-depth information – this is really useful for reviewing the contents of the JSON messages passed back and forth.


    If you wish to “see” your avatar outside of the Unity IDE environment, then from the File menu, select Build Settings:


    Here you will need to press [Add Open Scenes] if your scene is not in the list initially.  Then select [PC, Mac & Linux Standalone] and choose the Target Platform you wish to output for.  You can then press [Build] and it will output – in this case, for Mac – a .app application that you can run by double-clicking on it; the Unity player will start up, your avatar will initiate, and you can start talking and communicating as much or as little as you like!

    If you select [Player Settings…] you will see in the IDE Inspector on the right, there are more specific details that you can set about the output of the .app itself, you can change the Splash Image, the amount of RAM allocated to the app, your specific details etc…etc…


  5. Running the app from a Mac

    I made a few minor settings changes that I want to raise here – as I’m sure if you are following through this, you would have got this far and thought, “But, when I view my UMA human avatar, I don’t have it zoomed in on the head? how do I do that?”

    First of all, select the “Main Camera” GameObject and look in the Inspector to see where I’ve set the main camera to be [X, Y, Z] values:


    Now, click on the “SALSA_UMA2_DCS” GameObject – this is the actual human avatar:


    You can see that I have modified the “Position” values.  You might ask, “how did I know to set it to these values?”.  Well, good question!

    If you press the [>]Run button in the Unity IDE and then see the UMA human on the screen, you can directly modify the values in the Inspector and the changes happen in real-time.  This way, you can play around with the values of the “Main Camera” and the “SALSA_UMA2_DCS” GameObjects and get the view that you want.  Be aware, though: write down the values you changed, because once you press the [>]Run button to stop, they will revert to the previous values and you will have to modify them manually again.

    One last change I made was to replace a default animation value that is set – you may not want to do this, but I found it a bit distracting and I will attempt to write my own animation model in the future.  If you do not change this value, then your UMA human avatar will be moving about, rocking its head and body, swinging around a bit like it’s been in the bar for a few hours.  I didn’t want this, so I set the animation to ‘none’ – that is why my UMA human avatar is fixed and focused looking forward, and only its eyes and mouth move:


    As you can see, there are some default UMA animations that you can use.


    This is all great, but the ultimate goal is to see it actually running and working!

    For that I’ve captured a couple of videos that you can view below:

    (if you’re really interested, yes that is my custom car: https://www.lilmerc.co.uk/ )



    As you can hear/see, it did not always behave as I expected.  I need to work on adding more content to my Watson Assistant Intents / Entities and change my Dialog flow to include a reference to the Intents[0].confidence % level, so that when I am misheard saying “no” and it thinks I said “note”, it handles it more gracefully.  Now that I have the baseline working, though, I can spend more time refining these areas.

    I’m going to give this tutorial a little look too, as I think I might be needing to do this: https://developer.ibm.com/recipes/tutorials/improve-your-speech-to-text-accuracy-by-using-ibm-watson-speech-to-text-acoustic-model-customization-service/


    As you can see above, I’ve spent more time writing this up than it actually took me to make.  My goal now will be to enhance things further (when I get some time), such as looking more into what the SALSA components can do for me; making the LipSync more realistic; perhaps adding more visual feature movements to the UMA human avatar; having key trigger words that perform certain reactions, such as having the head tilt to one side when listening or having the UMA digital avatar squint and wrinkle it’s forehead slightly when responding to questions…

    ….and then there is the other-side, I can look into tapping into the IBM Watson Tone Analyzer service to detect the tone of the user and change the UMA digital avatar responses…. oh, and then there is the ability to Build&Deploy to WebGL….and to iOS and Android phones…..oooooo and then there is the Virtual Reality output from Unity too……

    Anyway, there is always scope for doing more; this is genuinely just the start… I hope you find it a useful starting point for your own projects.  Good Luck!






80 comments on "Create a 3D Digital Human with IBM Watson Assistant and Unity3D"

  1. Hello Tony,
    Really appreciated your work, awesome tutorial… just stuck at one point: I am getting no intents and contexts in the OnCONVMessage callback, hence it's throwing a NullPointerException there. What am I doing wrong?
    Thank you in advance.

  2. Tony_Pigram May 18, 2018

    I just noticed in my line 279 for onConvMessage() the red-box highlighter is actually covering up the first ‘(‘ – apologies for that.

    So, the response should come back in the response object. Can you Log.Debug() that object to see what you are receiving? Or even put a Debug stop on line 283 and use the Unity3D MonoDevelop to “see” what that object contains.

    One last point – you have setup the Watson Assistant service with a couple of Intents, Entities and some Dialog content and have tested that you have correct connectivity values – ie. it is connecting to the API okay?

  3. I haven’t created a new workspace of my own; instead I added the existing car dashboard workspace into mine. Connectivity seems to be fine – I am just facing an issue with STT. Assistant and TTS are working well.
    I am able to hear the welcome message from UMA, but my commands are not working with STT.

  4. Tony_Pigram May 18, 2018

    The STT code comes directly from the Watson SDK example code. Double-check your code against the code I used here: https://github.com/YnotZer0/unity3d_watsonSdk/blob/master/WatsonTTS.cs

    If you are hearing the “welcome message” then it means that the code has executed the code at #102 and has triggered the onConvMessage() at #279. If STT is not working, then I’d double-check that you’ve configured the STT config details within the IDE config values, make sure that you have added the Audio Listener and activated it:

    “Click on the [Main Camera] and make sure that the [Audio Listener] is selected (this is the Microphone input)”

    Then when you are running, you should see the DEBUG output like the 2nd screenshot in the section above: https://developer.ibm.com/recipes/tutorials/create-a-3d-digital-human-with-ibm-watson-assistant-and-unity3d/#r_step4

    You should see that the SpeechToTextOnListeningMessage DEBUG output says that the ‘Server state is listening’ – if that is the case it is initialised and ready to listen and stream the audio to the STT service. Do you see this initialising for you?

    p.s. apparently there was an issue with the STT service returning 500-errors yesterday, maybe that was related?

  5. Surprisingly it’s working on windows and not on the mac, same code untouched. This is so strange I did not find any evidence of this issue. I am aware you also have developed this tutorial on the mac machine. Let’s see😕

  6. I even did microphone testing on mac, they seem to be working fine.

  7. [05/27/2018 18:41:32][Unity][CRITICAL] Unity Exception IndexOutOfRangeException: Array index is out of range. : WatsonTTS.HandleToSpeechCallback (UnityEngine.AudioClip clip, System.Collections.Generic.Dictionary`2 customData) (at Assets/WatsonTTS.cs:382)
    IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.TextToSpeech.ToSpeechResponse (IBM.Watson.DeveloperCloud.Connection.Request req, IBM.Watson.DeveloperCloud.Connection.Response resp) (at Assets/Watson/Scripts/Services/TextToSpeech/v1/TextToSpeech.cs:470)
    IBM.Watson.DeveloperCloud.Connection.RESTConnector+c__Iterator0.MoveNext () (at Assets/Watson/Scripts/Connection/RESTConnector.cs:548)
    IBM.Watson.DeveloperCloud.Utilities.Runnable+Routine.MoveNext () (at Assets/Watson/Scripts/Utilities/Runnable.cs:131)
    UnityEngine.SetupCoroutine.InvokeMoveNext (IEnumerator enumerator, IntPtr returnValueAddress) (at C:/buildslave/unity/build/Runtime/Export/Coroutines.cs:17)

    IBM.Watson.DeveloperCloud.Debug.DebugReactor:ProcessLog(LogRecord) (at Assets/Watson/Scripts/Debug/DebugReactor.cs:60)
    IBM.Watson.DeveloperCloud.Logging.LogSystem:ProcessLog(LogRecord) (at Assets/Watson/Scripts/Logging/Logger.cs:206)
    IBM.Watson.DeveloperCloud.Logging.Log:Critical(String, String, Object[]) (at Assets/Watson/Scripts/Logging/Logger.cs:294)
    IBM.Watson.DeveloperCloud.Logging.LogSystem:UnityLogCallback(String, String, LogType) (at Assets/Watson/Scripts/Logging/Logger.cs:167)
    UnityEngine.Application:CallLogCallback(String, String, LogType, Boolean)

    Hey Tony,

    I am facing index out of ranger error. Please help.

    • Tony_Pigram May 28, 2018

      Hi rajax,

      Before the following line:
      _textToSpeech.ToSpeech(HandleToSpeechCallback, OnTTSFail, TTS_content, true);

      are you able to output the content of “TTS_content” so I can see what you are passing to the TTS service? (I’m also assuming you have the correct values in the Inspector for the service variables)

      I note the following: https://answers.unity.com/questions/1473488/why-am-i-getting-a-list-index-out-of-range-error-f.html
      It might be related to the version of the SDK that you are using? Does the default sample work for you?

      • digital acid May 29, 2018

        Hi Tony, thanks for the awesome tutorial. I am also facing the same error as rajax. the only change is I am using iclone characters. If you have a remedy I’d be happy to try it. I’ve tried the same version of unity and different as well as sdk’s. all the watson examples work just not when combined all together in one script with salsa. if you can contact me privately that would be great if. again thanks for the awesome tutorial!

  8. Jean-LucD June 13, 2018

    Hi, have this errors.
    Assets/Scripts/WatsonTTS.cs(129,31): error CS0123: A method or delegate `WatsonTTS.OnSTTRecognize(IBM.Watson.DeveloperCloud.Services.SpeechToText.v1.SpeechRecognitionEvent)’ parameters do not match delegate `IBM.Watson.DeveloperCloud.Services.SpeechToText.v1.SpeechToText.OnRecognize(IBM.Watson.DeveloperCloud.Services.SpeechToText.v1.SpeechRecognitionEvent, System.Collections.Generic.Dictionary)’ parameters

    Assets/Scripts/WatsonTTS.cs(129,31): error CS0123: A method or delegate `WatsonTTS.OnSTTRecognizeSpeaker(IBM.Watson.DeveloperCloud.Services.SpeechToText.v1.SpeakerRecognitionEvent)’ parameters do not match delegate `IBM.Watson.DeveloperCloud.Services.SpeechToText.v1.SpeechToText.OnRecognizeSpeaker(IBM.Watson.DeveloperCloud.Services.SpeechToText.v1.SpeakerRecognitionEvent, System.Collections.Generic.Dictionary)’ parameters

    can you have a solution ?

    • Tony_Pigram June 13, 2018

      Hi Jean-LucD,

      It looks like that line in your post that is throwing the error is the following one:
      //around line 129
      _speechToText.StartListening(OnSTTRecognize, OnSTTRecognizeSpeaker);

      I am aware that the Watson SDK has changed a couple of versions since I posted the article, so I went to check the Watson SDK here: https://github.com/watson-developer-cloud/unity-sdk

      I then took a look at the SDK source-code for the .StartListening() method: https://github.com/watson-developer-cloud/unity-sdk/blob/develop/Scripts/Services/SpeechToText/v1/SpeechToText.cs

      //around line 500
      /// This starts the service listening and it will invoke the callback for any recognized speech.
      /// OnListen() must be called by the user to queue audio data to send to the service.
      /// StopListening() should be called when you want to stop listening.
      /// All recognize results are passed to this callback.
      /// Speaker label goes through this callback if it arrives separately from recognize result.
      /// Returns true on success, false on failure.
      public bool StartListening(OnRecognize callback, OnRecognizeSpeaker speakerLabelCallback = null, Dictionary<string, object> customData = null)

      Then I took a look at the example code for the SDK (that I notice was changed 5 days ago): https://github.com/watson-developer-cloud/unity-sdk/blob/develop/Examples/ServiceExamples/Scripts/ExampleStreaming.cs

      //around line 122
      _service.StartListening(OnRecognize, OnRecognizeSpeaker);

      That matches my original code. The SDK code shows that if the call is not passed all 3 parameters, the optional ones simply default to null.

      The error you are receiving “parameters do not match delegate” implies that the call is not being passed the parameters it is expecting, which is interesting. I wonder – have you tried the Example code and proven it is working with your setup/Unity Watson SDK version?

      • Hi Tony_Pigram,

        With the latest Watson API, the two SpeechToText.StartListening call back methods [OnSTTRecognize] and [OnSTTRecognizeSpeaker] now require a two parameter signature. If you add [Dictionary customData] as a second parameter to each method in your github code, the code will continue to work with the latest API updates.

        Also, in the image [catalog_watson_apis_WA_12.png], the [#General_Jokes] child node [true] should be [false].


      • The comment section ate the Dictionary example, it should be as below but replace the brackets with angle brackets:
        Dictionary[string, object] customData
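        A minimal plain-C# sketch (all names here are hypothetical stand-ins, no Watson SDK types) of why the CS0123 error appears and why CrazyM’s fix works: a method group can only be assigned to a delegate if its parameter list matches exactly, so the handler must gain the second customData parameter even if it never uses it.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the SDK's recognition result type.
public class SpeechRecognitionEvent { public string Text; }

public class CallbackSignatureSketch
{
    // Delegate shape after the SDK update: result plus customData.
    public delegate void OnRecognize(SpeechRecognitionEvent result,
                                     Dictionary<string, object> customData);

    public static string LastHeard;

    // Before the fix this handler had only one parameter and no longer
    // matched OnRecognize; declaring customData restores the match.
    public static void OnSTTRecognize(SpeechRecognitionEvent result,
                                      Dictionary<string, object> customData)
    {
        LastHeard = result.Text;
    }

    public static string Demo()
    {
        OnRecognize callback = OnSTTRecognize; // compiles: signatures match exactly
        callback(new SpeechRecognitionEvent { Text = "hello" }, null);
        return LastHeard;
    }
}
```

        Dropping the second parameter from OnSTTRecognize again would reproduce CS0123 at the assignment line.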

  9. Jean-LucD June 14, 2018

    Hi Tony and CrazyM, problem solved and ready for new challenges. Thanks so much for your time and comments. As I say: each line of code makes a better world.

  10. LDMarkley July 03, 2018

    Thank you so much for the wonderful tutorial. I followed along with it in the hopes of learning how to integrate the SDK in my project. I’ve followed your instructions completely and I’m running into four errors when I attempt to run my project. I’ve checked that I have the latest version of the SDK and the latest version of Unity. If you have any suggestions I’d appreciate the feedback as I’m not sure what is causing these issues. The chatbot DOES say the opening welcome line, so at least that part is working. I’ve also verified that all of my credentials have been entered correctly and my microphone does work with the TTS Example scene.

    First error – [Unity][CRITICAL] Unity Exception ArgumentNullException: Argument cannot be null.
    Parameter name: text : IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.TextToSpeech.ToSpeech (IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.SuccessCallback`1 successCallback, IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.FailCallback failCallback, System.String text, Boolean usePost, System.Collections.Generic.Dictionary`2 customData) (at Assets/Watson/Scripts/Services/TextToSpeech/v1/TextToSpeech.cs:392)

    Second error – ArgumentNullException: Argument cannot be null.
    Parameter name: text
    IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.TextToSpeech.ToSpeech (IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.SuccessCallback`1 successCallback, IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.FailCallback failCallback, System.String text, Boolean usePost, System.Collections.Generic.Dictionary`2 customData) (at Assets/Watson/Scripts/Services/TextToSpeech/v1/TextToSpeech.cs:392)

    Third error – [Unity][CRITICAL] Unity Exception IndexOutOfRangeException: Array index is out of range. : WatsonTTS.HandleToSpeechCallback (UnityEngine.AudioClip clip, System.Collections.Generic.Dictionary`2 customData) (at Assets/Scripts/WatsonTTS.cs:379)

    Fourth error – IndexOutOfRangeException: Array index is out of range.
    WatsonTTS.HandleToSpeechCallback (UnityEngine.AudioClip clip, System.Collections.Generic.Dictionary`2 customData) (at Assets/Scripts/WatsonTTS.cs:379)
    IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.TextToSpeech.ToSpeechResponse (IBM.Watson.DeveloperCloud.Connection.Request req, IBM.Watson.DeveloperCloud.Connection.Response resp) (at Assets/Watson/Scripts/Services/TextToSpeech/v1/TextToSpeech.cs:470)

    • Tony_Pigram July 09, 2018

      Hi LDMarkley,

      Hmmm….. that looks like a repeat of the issue rajax was having (I wonder if he resolved it?).

      That code originated from the sample source here:

      Lines 117, 188, 256 and 261-270. As you can see, they are like-for-like. The ONLY difference is that I reference the usage of audioSrc for playing the clip that I have assigned to the SALSA 3D (script) component - you should see it shown in the ‘Audio Source’ property.

      It is most odd, because that code is re-used: if you’ve heard the Welcome message, then it has already received the response back from the WA service, called the TTS service passing the contentText, and received a result that could be played. Could you debug-output the TTS_content field inside GetTTS() to confirm that it is actually passing content to the TTS service?

      It’s annoying that I cannot replicate the issue or make it fail myself.
      I manually set TTS_content = " ";
      To see if I could emulate passing an empty space value, but that returns a very different error message:
      [Unity][CRITICAL] Unity Exception ArgumentException: Length of created clip must be larger than 0 : UnityEngine.AudioClip.Create (System.String name, Int32 lengthSamples, Int32 channels, Int32 frequency, Boolean stream, UnityEngine.PCMReaderCallback pcmreadercallback, UnityEngine.PCMSetPositionCallback pcmsetpositioncallback)

      I manually set TTS_content = "";
      That has no value at all and again I get a very different error:
      [Unity][CRITICAL] Unity Exception ArgumentNullException: Argument cannot be null. Parameter name: text : IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.TextToSpeech.ToSpeech

      I’m at a loss, as I cannot replicate the issue by trying to break the code; it just won’t fail for me…. Are you able to provide any further in-depth debugging analysis?

      • LDMarkley July 13, 2018

        Sorry for the delay in getting back to you, it’s been a hectic week. So keep in mind I’m new to all this when you read my response so if I sound like I have no idea what I’m talking about I probably don’t. 🙂
        I was fiddling with different things trying to debug the errors I mentioned and I discovered completely by accident that unchecking the “Play” checkmark under where I put in my IBM credentials cleared errors 1 and 2. I’m genuinely not sure why. I did do the debug print like you said and there was indeed data being passed to the TTS service. I have no idea why unchecking the play checkmark worked but it did.
        For errors 3 and 4 I found out, again by basically just poking at the code to see what I could do, that if I comment out the coroutine calls at lines 379 and 380, errors 3 and 4 clear and the whole thing runs perfectly. Again, I have no idea why it didn’t like the coroutine, since it was only taking care of the eye motion, but that seems to have been where the problem lay.

        • Tony_Pigram July 16, 2018

          Ah! Okay, I’m not sure why switching the play variable from true to false solved anything, but I’m glad it did. Line 72 should have set it to false by default.

          Then line 342 will set it to true, so that the text from the Conversation/Assistant service can now be played/sent to the TTS service:

          //trigger the Update to PLAY the TTS message
          play = true;

          There is a continuous check being made by the Update() function (further down the code):

          // Update is called once per frame
          void Update () {
              if (play) {
                  Debug.Log ("play=true");
                  play = false;
                  Active = false;
                  GetTTS();
              }
          }

          So, that will pickup the fact that play is now true and then set itself to false and then call GetTTS():

          //called by Update() when play=true;
          private void GetTTS()
          {
              // Synthesize
              // Log.Debug("WatsonTTS", "Attempting synthesize.");
              _textToSpeech.Voice = VoiceType.en_US_Michael; // .en_US_Allison; //.en_GB_Kate;
              _textToSpeech.ToSpeech(HandleToSpeechCallback, OnTTSFail, TTS_content, true);
          }

          Which is then passed the Text returned from the Conversation/Assistant service:

          void HandleToSpeechCallback(AudioClip clip, Dictionary<string, object> customData = null)
          {
              if (Application.isPlaying && clip != null && audioSrc != null)
              {
                  audioSrc.spatialBlend = 0.0f;
                  audioSrc.clip = clip;
                  audioSrc.Play();

                  //set flag values that can be picked up in the Update() loop
                  wait = clip.length;
                  check = true;
              }
          }

          And it was in the center of that code that we had the StartCoroutine() code that was throwing your error. Good debugging!

          Ah, the StartCoroutine() code for lookleft/lookright was an experiment that I was fiddling around with that didn’t work out – I had those 2 extra elements in my project that’s why it worked for me and not for you – thank you for helping debug that, I’ve removed reference to them as it didn’t do what I was hoping for anyway.

          Thanks for figuring that out and apologies for leaving in those 2 lines of code – am glad you got to grips with what is happening in the code now though!
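          The flag handshake walked through above can be sketched in plain C# without any Unity or Watson types. The class and most member names below are hypothetical; only play, TTS_content and GetTTS() mirror names from the article’s code, and GetTTS() is stubbed to a counter so the once-only behaviour is visible.

```csharp
// Plain-C# sketch of the play-flag handshake: the Assistant callback sets
// play = true; the per-frame Update() notices it, resets it, and triggers
// the TTS request exactly once, no matter how many frames pass.
public class PlayFlagSketch
{
    bool play = false;
    public int TtsCalls = 0;       // counts how many times GetTTS fired
    public string TTS_content;

    // Stand-in for the Watson Assistant response callback.
    public void OnAssistantReply(string text)
    {
        TTS_content = text;
        play = true;   // trigger the Update() to PLAY the TTS message
    }

    // Stand-in for Unity's per-frame Update().
    public void Update()
    {
        if (play)
        {
            play = false;   // consume the flag so GetTTS fires only once
            GetTTS();
        }
    }

    // Stub for the real TTS call.
    void GetTTS() { TtsCalls++; }
}
```

          Running OnAssistantReply once and then Update() any number of times leaves TtsCalls at 1, which is the property the real code relies on.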

  11. Tony, thanks for the recipe. I have the example working well in Unity, but I’m also a bit confused why I can’t have a general conversation with the virtual character. If I recall correctly, I was able to use both Watson and Google Assistant (in the command line) and have a simple conversation with the program, much as you would with an Alexa or other home device. Is there a way to get Watson to have a similarly conversational quality in this example? Thanks again…

    • Tony_Pigram July 27, 2018

      Well, the virtual character is just the user interface to the intelligence (or lack of) behind the scenes. You might be referencing what we refer to as chit-chat or small-talk: https://dialogflow.com/docs/small-talk
      My Watson Assistant workspace had about 3 specific Intents created and 1 dialog flow for the structured conversation response – purely as a tester to prove it works.
      There is nothing to stop you extending your WA workspace to have more Intents (I believe WA even provides a load of sample utterances/Intents for you now via “Content Catalog”: https://www.ibm.com/blogs/watson/2018/03/the-future-of-watson-conversation-watson-assistant/ , you can add those and build out your dialog flow responses as needed)

      I once connected up my 3D-printed robotic head to use IBM Visual Recognition, STT and TTS service and as a tester passed the conversation text to https://www.cleverbot.com/ – this has an API that allows you to pass text to it and you receive responses back. It is nonsense chit-chat conversation and you do not have to build out a Dialog flow, like you would do in Watson Assistant.

      Its usage is very limited, but it is a fun way to have some bizarre and random conversations. You should be able to swap out the calls to the Watson Assistant service for calls to Cleverbot in your Unity project code fairly easily.
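      One way to make that kind of swap easy is to hide the chat service behind a tiny interface, so the rest of the Unity code never knows whether Watson Assistant or Cleverbot is answering. Everything below is a hypothetical sketch with stubbed replies; neither the interface nor the class names appear in the article’s code, and the real SDK/HTTP calls are only indicated in comments.

```csharp
using System;

// Hypothetical seam: the avatar only needs "text in, text out".
public interface IChatBackend
{
    // Send the user's utterance; receive the bot's reply via callback.
    void Send(string utterance, Action<string> onReply);
}

// Stand-in for the Watson Assistant call used in the article.
public class WatsonAssistantBackend : IChatBackend
{
    public void Send(string utterance, Action<string> onReply)
    {
        // The real _ASSservice.Message(...) call would go here; stubbed.
        onReply("Watson says: " + utterance);
    }
}

// Stand-in for an HTTP call to the Cleverbot API.
public class CleverbotBackend : IChatBackend
{
    public void Send(string utterance, Action<string> onReply)
    {
        // A web request to the Cleverbot endpoint would go here; stubbed.
        onReply("Cleverbot says: " + utterance);
    }
}
```

      With this shape, switching services is one line where the backend is constructed; the STT and TTS plumbing stays untouched.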

  12. tiancaipipi110 August 10, 2018

    did you add another “yes” intent to create the child node of “General_Jokes”?

  13. tiancaipipi110 August 10, 2018

    Also, where did the “General_positive_response” come from? Did you make that beforehand too? If so, why not add it to the tutorial? Also, in the video you said “no”, but in the response it was recognising “true” – what’s going on there?

    • Tony_Pigram August 12, 2018

      “General_positive_response” comes from the [Content Catalog] tab of the Workspace – click on that tab and you can see there are some pre-made Intents with utterances, to save you having to come up with them yourself. They are a good starting point. It wasn’t relevant for the tutorial, hence not mentioning it, just showing it in the images.
      If you scroll up the comments, CrazyM points out that the “true” node should be a “false” node, as it will always go to the true node no matter what was uttered. This tutorial wasn’t meant to be about Conversation design, which you can work out yourself; I just created a very simple Intent to show that interactions can happen.

  14. tiancaipipi110 August 10, 2018

    How did the “TextStreaming” get in there? How did you “add” the [Wardrobe Recipes]? Where did you add it to? Isn’t it always there? What do I put in the “CONV_version date”? I’ve tried the workspace created/modified date; neither works.

    • Tony_Pigram August 12, 2018

      If you scroll up and find the text “All that we shall add extra is a Canvas/Text GameObject, like so:” you will see where the TextStreaming is added.
      [Wardrobe Recipes] is a folder that is present when you add the UMA. If you search above for “I changed the [Active Race] to Female and added the [Wardrobe Recipes]:” you can see at the bottom of the screen that the files are present; if you drag them into the section at the top (both are shown in red boxes for you), you can then “add” wardrobe items to your character.

      Okay, CONV_version_date is the date used by the Watson Assistant APIs: see here https://www.ibm.com/watson/developercloud/assistant/api/v1/curl.html?curl#versioning
      If you search the article for “Ensure that you have all of the Components added to your GameObject like so” you can click on the image and see the value I was using.

  15. BlueNucleus August 27, 2018

    Do you think this code can run in mobile environment if I export it as an Android App?

    • Tony_Pigram October 16, 2018

      Never tried it myself, in theory there shouldn’t be a problem with it functionally. I once did export it as a Samsung Gear VR output just to see what would happen and the scaling needs tweaking (the avatar was a tiny speck in the distance) and the audio didn’t seem to work, but as a default Android output or even output as a standalone Mac export it should work. Let me know if you try it and what issues you have and how you overcame them. I quite like the idea of having a digital human running on my phone, I wonder how resource intensive it would be….hmmm…..

      • Tony_Pigram October 16, 2018

        Okay – I just did an export to a Samsung Galaxy S6 Android phone (.apk was about 112mb) directly from the Unity environment and it works fine running on the phone. Quite cool actually. I think I’m going to have to define a bit more conversation in the Watson Assistant dialog flow now…. awesome idea BlueNucleus, I should have done this much sooner! thanks

  16. After playing with this for a while I’m back with a follow-up question, which I suppose is mostly hypothetical. Is there anything you’d suggest to improve response time? Running my own example, it felt like the delay between saying something and the bot responding was just long enough to be a bit awkward. I know there are a lot of factors at play here; I was just wondering about any ways the conversation could feel a bit more “natural”, so to speak.

  17. Hi Tony, thanks so much for this tutorial! Could you help me understand which parts of the script are crucial to the SALSA component? I had SALSA working properly when just following your tutorial using your script to practice. I’m now working on my own project using only Watson Text to Speech. Currently my program pulls from an RSS News Feed, pulls the top 3 news headlines from that feed, converts them to text, and then Watson reads them. That’s all working properly, but for some reason SALSA will not work with it. From what I can tell, Watson is creating a WAV file and then creating an AudioSource, but it is creating its own AudioSource not connected to my avatar rather than using the existing SALSA AudioSource. My script is connected to my Avatar, so I’m not sure why this is. Any ideas? My assumption is that I’m missing somewhere in my script something that tells Watson to throw the created WAV file to SALSA. I’m a little bit new to all of this, so please bear with me if I’m sounding like an idiot!

    • Tony_Pigram October 16, 2018

      Hi, I have a vague memory of having some initial head scratching with AudioSource when I first set up the project. As mentioned, the TTS code is from the Watson SDK sample code, so you can always revert back to that code as a working baseline.
      In this project, you’ll notice if you click on the SALSA object, it has an [x]Audio Source added to it already. I believe the tricky part is how do you link that Audio Source to the “clip” of audio returned by the TTS service?
      If I walk through what I did, it might help you spot what is different in your project.

      In the WatsonTTS.cs file, you’ll notice there is a declaration of an AudioClip and an AudioSource:

      private AudioClip audioClip; // Link an AudioClip for Salsa3D to play
      private Salsa3D salsa3D;
      private AudioSource audioSrc;

      Then within the start() function, we get the SALSA assigned Audio Source object and stash it in our audioSrc object:

      // Use this for initialization
      void Start () {
          audioSrc = GetComponent<AudioSource>(); // Get the SALSA AudioSource from this GameObject
      }

      You’ll note we don’t actually do anything with it… until the response is returned from the TTS service. In HandleToSpeechCallback() the TTS service returns the audio as an AudioClip; we then assign that clip to the audioSrc and call .Play(), which plays the audio clip through the SALSA Audio Source… and you should then hear the audio and see the SALSA model moving their mouth.

      //called by Update() when play=true;
      private void GetTTS()
      {
          // Synthesize
          // Log.Debug("WatsonTTS", "Attempting synthesize.");
          _textToSpeech.Voice = VoiceType.en_US_Michael; // .en_US_Allison; //.en_GB_Kate;
          _textToSpeech.ToSpeech(HandleToSpeechCallback, OnTTSFail, TTS_content, true);
      }

      void HandleToSpeechCallback(AudioClip clip, Dictionary<string, object> customData = null)
      {
          if (Application.isPlaying && clip != null && audioSrc != null)
          {
              audioSrc.spatialBlend = 0.0f;
              audioSrc.clip = clip;
              audioSrc.Play();

              //set flag values that can be picked up in the Update() loop
              wait = clip.length;
              check = true;
          }
      }

  18. Hi,
    I am trying to follow this tutorial and I have some problems. Since the Watson SDK for Unity was changed, WatsonTTS.cs is not in the unitypackage. Could you help me fix this? Thanks for your help!!!

    • The Watson SDK is available here: https://assetstore.unity.com/packages/tools/ai/ibm-watson-sdk-for-unity-108831

      The file WatsonTTS.cs is a file specific to this project (NOT the Watson SDK unity asset) and is available at this github: https://github.com/YnotZer0/unity3d_watsonSdk/blob/master/WatsonTTS.cs

      I have not used the latest version of the Watson SDK unity asset, so I am not aware of any issues with the latest version, the version that I used for this article worked fine v2.4.0 (as proven by the above article content).

      • Hi again,
        First, thanks for answering me.
        Sorry for my ignorance and my limited English, but I had guessed the file WatsonTTS.cs was in the Watson SDK for Unity, not a file specific to this project.
        On the other hand, if it is not too much trouble, could you let me have the JSON file of the workspace example used in Step 1 of this tutorial? Would that be possible? (I’m not trying to avoid creating it, but to be able to compare your workspace example with mine….) Thanks again

        • “The code is no longer relevant due to changes in the cloud services. I will explain more once you confirm that you are able to help.”

          I’m getting mixed messages from yourself & am a little confused. You say that the Watson SDK no longer works for you, but do not state why. Now you ask for a copy of the WA workspace. I’ll ask again: if you can clearly define the problem you are encountering, then people might be able to help you resolve it.

          If you search in the text/images above for [#General_Jokes] you can see the workspace that was used. It is very simple: I created an Intent [#General_Jokes] and added about a dozen variations of the utterance “Tell me a joke”, and the rest is documented above in the screenshots. I do not believe that WA workspace exists anymore, as I performed a clean-up last month of non-used services.

          If you are experiencing issues with connecting to the latest Watson SDK (even though I would recommend using v2.4.0 as that was the version proven to work for the above article) then perhaps you could create a sample Unity project and just test usage of the following example code: https://github.com/watson-developer-cloud/unity-sdk/blob/develop/Examples/ServiceExamples/Scripts/ExampleSpeechToText.cs

          This article has and still works fine for all of the versions of software that were used at the time of creation – if you have a newer version of Unity / Watson SDK then I cannot guarantee it will work. As stated in the Watson SDK github repo, you may need to make some changes if you differ from the article: https://github.com/watson-developer-cloud/unity-sdk

          If you have a newer version of Unity, you will need to change some settings; also, some of the services have changed authentication methods, and sample code is provided on how to switch over.
          One last thing: note the comment at the bottom of that repo about “Streaming outside of the US South” region and the change to TLS, and make sure you comply with those recommendations.

          If you can clearly articulate what the problem is that you are encountering, what you have tried, and what has failed (as you would for a StackOverflow issue), I’m sure I, or anyone you contact within IBM, can help; if you cannot, then we won’t be able to assist you.

        • Hi, after understanding that you were referring to creating a new Watson Assistant service and using an API key instead of username/password, I’ve updated the guidance article to show how to use the Watson SDK v2.11.0 with the API key instead of username/password. I hope that helps you to use the new Watson Assistant service. No other code change should be required at this point in time. thanks tony

  19. Hi, I’m using the API key and the related URL to obtain the token for the CONVcredentials, but the URL gives me “not found”. So I ask you: should I install IBM Cloud to obtain the token?

    • I no longer have a working Unity3D environment on my Mac laptop, so I am unable to run this sample anymore myself.

      There is no “install IBM Cloud”; you just need to install the latest Watson SDK that allows passing of the API key. I’ve put the code and screenshot in the article above (but, as I say, I am no longer able to test the code myself).

  20. Okay, I’ve performed a fresh installation of Unity - 2018.2.16f1, and I’ve added the Watson SDK and the UMA asset package. I’ve imported the SALSA package (for some reason it needed importing twice to pick up the 1-click scene). Then, double-clicking the “SalsaUmaSync-1-Click Setup” Scene and adding the additional information as per the article above, I’m back up and running. I have a basic UMA moving about with random eyes and some red text floating in the air in front.

    I have now created a Watson_newServices.cs file – using the information extracted from the Watson v2.11 SDK to connect to the TTS, STT and Watson Assistant Services. I created the TTS and STT services in the US South/Dallas region and copied the apiKey credentials and urls. I created a WA service in Germany/Frankfurt region and copied the apiKey credentials and urls. I decided to keep this code to a minimum to determine if connectivity can be just made to the services using the new basic/simple SDK code.

    Now when I press [>]Run, I get the UMA character moving about BUT I also get an error message:

    WatsonException: Please provide a username and password or authorization token to use the Text to Speech service. For more information, see https://github.com/watson-developer-cloud/unity-sdk/#configuring-your-service-credentials
    IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1.TextToSpeech..ctor (IBM.Watson.DeveloperCloud.Utilities.Credentials credentials) (at Assets/Watson/Scripts/Services/TextToSpeech/v1/TextToSpeech.cs:141)
    Watson_newServices.Start () (at Assets/Scripts/Watson_newServices.cs:110)

    So, I’m pretty sure I am now in the same boat as the previous people posting about this issue above. Taking a look at line 110, I see that all it contains is the following: _TTSservice = new TextToSpeech(TTScredentials);

    Here’s the snippet of the code prior to that line - this is a direct “lift” from the SDK sample code (with one change, which might be the issue):

    // Create credential and instantiate service
    Credentials TTScredentials = null;
    if (!string.IsNullOrEmpty(_TTSusername) && !string.IsNullOrEmpty(_TTSpassword))
    {
        // Authenticate using username and password
        TTScredentials = new Credentials(_TTSusername, _TTSpassword, _TTSserviceUrl);
    }
    else if (!string.IsNullOrEmpty(_TTSiamApikey))
    {
        // Authenticate using iamApikey
        TokenOptions TTStokenOptions = new TokenOptions()
        {
            IamApiKey = _TTSiamApikey,
            IamUrl = _TTSiamUrl
        };
        TTScredentials = new Credentials(TTStokenOptions, _TTSserviceUrl);

        // Wait for tokendata
        // while (!TTScredentials.HasIamTokenData()) //what if - this puts it into an infinite loop?!
        //     Debug.Log("waiting for tokendata");
    }
    else
    {
        throw new WatsonException("Please provide either username and password or IAM apikey to authenticate the TTS service.");
    }

    _TTSservice = new TextToSpeech(TTScredentials);

    That line of code I commented out: while (!TTScredentials.HasIamTokenData())
    I am wondering if this needs a valid response assigned to TTScredentials before _TTSservice = new TextToSpeech(TTScredentials); can be successfully called.

    As you can see above, I originally had a Debug.Log() output whilst waiting for a response, but that just appeared to send the code into an infinite loop waiting forever and eventually crashing the Mac. I suspect that the call to this to get a response will need to be figured out and then duplicated for TTS, STT and WA and this will resolve the connectivity issues people are reporting.

    I’ll see if I can get some time to investigate this further shortly.

    • Okay, I had a little bit of a brainwave…. how about modifying the code to be closer to the Watson SDK samples for the IAM authentication. Yep – that worked great! Here’s the code that you need to modify for the “Start()” and then the new functions as shown below:

      private TextToSpeech _TTSservice;
      private SpeechToText _STTservice;
      private Assistant _ASSservice;

      // Use this for initialization
      void Start () {
          // kick off each credential coroutine via the SDK's Runnable helper
          Runnable.Run(CreateTTSService());
          Debug.Log("TTS Service connection made successfully");
          Runnable.Run(CreateSTTService());
          Debug.Log("STT Service connection made successfully");
          Runnable.Run(CreateASSService());
          Debug.Log("ASS Service connection made successfully");
      }


      private IEnumerator CreateTTSService()
      {
          // Create credential and instantiate service
          Credentials TTScredentials = null;
          if (!string.IsNullOrEmpty(_TTSusername) && !string.IsNullOrEmpty(_TTSpassword))
          {
              // Authenticate using username and password
              TTScredentials = new Credentials(_TTSusername, _TTSpassword, _TTSserviceUrl);
          }
          else if (!string.IsNullOrEmpty(_TTSiamApikey))
          {
              // Authenticate using iamApikey
              TokenOptions TTStokenOptions = new TokenOptions()
              {
                  IamApiKey = _TTSiamApikey,
                  IamUrl = _TTSiamUrl
              };
              TTScredentials = new Credentials(TTStokenOptions, _TTSserviceUrl);

              // Wait for tokendata
              while (!TTScredentials.HasIamTokenData())
                  yield return null;
          }
          else
          {
              throw new WatsonException("Please provide either username and password or IAM apikey to authenticate the TTS service.");
          }

          _TTSservice = new TextToSpeech(TTScredentials);
      }

      private IEnumerator CreateSTTService()
      {
          // Create credential and instantiate service
          Credentials STTcredentials = null;
          if (!string.IsNullOrEmpty(_STTusername) && !string.IsNullOrEmpty(_STTpassword))
          {
              // Authenticate using username and password
              STTcredentials = new Credentials(_STTusername, _STTpassword, _STTserviceUrl);
          }
          else if (!string.IsNullOrEmpty(_STTiamApikey))
          {
              // Authenticate using iamApikey
              TokenOptions STTtokenOptions = new TokenOptions()
              {
                  IamApiKey = _STTiamApikey,
                  IamUrl = _STTiamUrl
              };
              STTcredentials = new Credentials(STTtokenOptions, _STTserviceUrl);

              // Wait for tokendata
              while (!STTcredentials.HasIamTokenData())
                  yield return null;
          }
          else
          {
              throw new WatsonException("Please provide either username and password or IAM apikey to authenticate the STT service.");
          }

          _STTservice = new SpeechToText(STTcredentials);
          _STTservice.StreamMultipart = true;
      }

      private IEnumerator CreateASSService()
      {
          // Create credential and instantiate service for ASSISTANT
          Credentials ASScredentials = null;
          if (!string.IsNullOrEmpty(_ASSusername) && !string.IsNullOrEmpty(_ASSpassword))
          {
              // Authenticate using username and password
              ASScredentials = new Credentials(_ASSusername, _ASSpassword, _ASSserviceUrl);
          }
          else if (!string.IsNullOrEmpty(_ASSiamApikey))
          {
              // Authenticate using ASSiamApikey
              TokenOptions ASStokenOptions = new TokenOptions()
              {
                  IamApiKey = _ASSiamApikey,
                  IamUrl = _ASSiamUrl
              };
              ASScredentials = new Credentials(ASStokenOptions, _ASSserviceUrl);

              // Wait for tokendata
              while (!ASScredentials.HasIamTokenData())
                  yield return null;
          }
          else
          {
              throw new WatsonException("Please provide either username and password or IAM apikey to authenticate the ASSISTANT service.");
          }

          _ASSservice = new Assistant(ASScredentials);
          _ASSservice.VersionDate = _ASSversionDate;
      }

      // Update is called once per frame
      void Update () {
      }


      I just ran this and the UMA Avatar appeared moving about and the Debug info showed the following (no errors):

      TTS Service connection made successfully
      Watson_newServices:Start() (at Assets/Scripts/Watson_newServices.cs:83)

      STT Service connection made successfully
      Watson_newServices:Start() (at Assets/Scripts/Watson_newServices.cs:85)

      ASS Service connection made successfully
      Watson_newServices:Start() (at Assets/Scripts/Watson_newServices.cs:87)

      Right, now that is all working as expected, I’ll merge the previous code into this file and then look to republish it to the github as a new file and update the article above on how to use the new Authentication for STT/TTS/WA and the new SDK.

      Hope that helps anyone who was stumbling over this little evolution of security coding that is being put into place for all new IBM Cloud services. It does make sense to have OAuth2 token expiry for connectivity to services; it was just slightly frustrating that it was a breaking SDK change. Hopefully it’ll go smoother in the future.
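      Why the yield-based wait succeeds where the Debug.Log loop hung can be shown without Unity at all: an IEnumerator only advances when something pumps MoveNext(), so `yield return null` hands control back each frame instead of spinning. The sketch below uses hypothetical names, with a toy frame loop standing in for the SDK’s Runnable.Run, and simulates the token arriving a few frames in.

```csharp
using System;
using System.Collections;

public class TokenWaitSketch
{
    static bool hasTokenData;

    // Same shape as the CreateTTSService coroutine: yield until the
    // (simulated) IAM token arrives, then do the post-token work.
    static IEnumerator WaitForToken(Action onReady)
    {
        while (!hasTokenData)
            yield return null;   // hand control back to the runner each frame
        onReady();               // e.g. _TTSservice = new TextToSpeech(...)
    }

    // Toy stand-in for the coroutine pump: each "frame", advance the
    // coroutine one step. A plain while-loop with Debug.Log never returns
    // control to the pump, which is why the original code appeared to hang.
    public static bool RunFrames(int frames)
    {
        hasTokenData = false;
        bool ready = false;
        IEnumerator co = WaitForToken(() => ready = true);
        for (int frame = 0; frame < frames; frame++)
        {
            if (frame == 3) hasTokenData = true;  // token "arrives" on frame 3
            if (!co.MoveNext()) break;            // coroutine finished
        }
        return ready;
    }
}
```

      Pumped for ten frames, the coroutine completes shortly after the token arrives; remove the yield and the pump never regains control.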

  21. Complete NEW and TESTED Watson_newServices.cs file (to replace the WatsonTTS.cs file). This .cs file now works with the new Watson authentication.


    You just have to set the IAM Key values, the URLs if they are not the default, the Watson Assistant Version value (2018-03-19) and the WorkspaceID value – then it will all work as it should.

  22. Hi. Just going to be more basic. I hope that is OK. You have update/update all over it and it is causing me confusion. What is the WatsonTTS.cs file in regards to the “sample” code. Where can I find any of this in my Unity. Do I put the WatsonTTS.cs file in my Unity and if so where? Hope you can help and apologies if I am missing something basic.

  23. Ignore my previous comment, apologies. Got a bit confused with the updates. So, whenever I play the scene the avatar simply “reads out” a sample audio clip even though the Audio clip in the Audio source is empty. Additionally, the errors I get when playing the scene say that there is no response from my URLs and that it’s failing to get IAM tokens, even though they have been inserted into the inspector straight from IBM Cloud.

    I really hope you can help as I am using this in my University course as an example of the incredible power this could have in the games industry in the next years. Your walkthrough is the first instance I have seen of this even close to working.

    Thank you

    • okay, so this article was originally written back in April, when all was good and stable. Then, as with all things to do with IT and the internet, things moved on… some updates were made to the article to accommodate that. Then in early November, a breaking change occurred. Initially, I thought, pfhhh… okay, I’ll just delete the article – it was available for long enough for people to get the idea. Then I got a lot of emails requesting help. I found some time in between my very hectic work schedule (I work in Delivery services, so spend 8+ hours per day working out, leading and delivering solutions to customers) to sit down and work out what had changed with the services and what needed to change in the C# code to accommodate it. It only took a few hours, but job done – I uploaded it last week. In theory, you could delete the whole article and start again from scratch with just the latest parts, but I’ve no time for that – hence just adding the Update/Updates where relevant.

      So, your problem should now be my problem, right?….why does your avatar read out a sample audio clip? I don’t believe it should be doing that – btw, I setup a fresh new Unity environment last week (as explained in comments above) and actually followed my own screenshots in the article and it worked out fine. Are you using Salsa, if so what version of “uma-dcs.unitypackage” did you download? I only have v1.6 .dll, so I’ve only used that version when importing.

      If you are getting “no response from my URLs” / “failing to get IAM token”, paste the error messages you get in the console window; that should help to determine what is not configured correctly. If you’re not even getting that far, then the STT service won’t get invoked and you shouldn’t get any audio at all. Post more info and let’s take a look.

  24. Hello Tony, thank you very much for this great tutorial, for devoting much of your time to this.

    Unity 2018.2.12f1
    IBM Watson SDK for Unity current ver. 2.11.0

    I have followed your tutorial completely, but this error appears. If you were so helpful, I would be very grateful.


    • are you able to show the actual error message? it’s cut off on the right hand side of your image.

      • your error is:

        ErrorCode: 401, Error: 401 Unauthorized, Response: {"code":401, "error": "Unauthorized"}

        seems pretty clear cut to me. I see you have a different name for your .onFail function than what I have. You have ExampleConversation.onFail(); in my latest .cs file I have OnCONVFail() and OnCONVMessage() set up to be called on line 231 (last line of this section of code):
        // Message
        Dictionary<string, object> input = new Dictionary<string, object>();
        // Say HELLO – start the conversation
        input.Add("text", "first hello");
        MessageRequest messageRequest = new MessageRequest()
        {
            Input = input
        };
        _ASSservice.Message(OnCONVMessage, OnCONVFail, _ASSworkspaceId, messageRequest);

        It sounds to me like the values you are passing to .Message() are not correct, and the fact that in the Start() function you are outputting the debug value (in your output log) implies that it has executed the code above okay – which it couldn’t have done. You must understand why I wrote that initial message: it is to start the conversation with the Assistant as the Welcome message; this needs to happen to create a session between your code and the Assistant:
        Log.Debug("Start()", "ASS Service connection made successfully");

        The fact that the debug log output above appears implies that the code at line 231 is not the issue. Are you really sure you have laid the code out the same way, or have you made coding design changes? I’m not quite psychic enough to determine what you have changed and why it is failing if you do not stick to the same code base. Are you able to share your code for the .cs file? It might be something quirky about the modified code.

        Are you really sure you have the correct credentials entered and being used? As I say, if you share, I can swap it over in my environment and I’ll put my connection details in place and see if it is the code itself or the configuration.

        • (For some reason I cannot edit the reply above, so I’ll put an update here instead)

          I made the statement you are not using the same .cs code as I am, I retract that, you are. I can see that the output log matches the code below, so ignore my statement about that:

          private void OnCONVFail(RESTConnector.Error error, Dictionary<string, object> customData)
          {
              Log.Error("ExampleConversation.OnFail()", "Error received: {0}", error.ToString());
              _messageTested = false;
          }

          In which case it must be to do with the connectivity to your specific services and for some reason it does not believe that the values you are using are correct, hence the “401 Unauthorized”.

          One test you can do – create the services as “Lite” versions in the locations that I specified (ie. not in Sydney) and see if they connect and work. If they do, then it must be an issue with your local Sydney services (raise an IBM Cloud support case via the link at the top of the screen in IBM Cloud console). If it does not work for other regions, do you have direct access to the internet? are you going through a firewall / proxy? is that restricting you? (just thinking out loud).

          • Hello Tony, thank you very much for your response and for your time. It’s been my mistake, I was confused with the keys.

            I just added “Services Url” “iam Apikeys” “WorkspaceID” “Version date” leaving “username” and “password” empty.
            Now it works great.
            I’m sorry for my confusion
            Thank you

  25. Hello Tony,
    I am getting this System.NullReferenceException: Object reference not set to an instance of an object
    at Watson_Service.set_Active (Boolean value) [0x000f6] exception at line 258. All 3 connections are successful. Please advise.

    [12/14/2018 07:05:55][Unity][CRITICAL] Unity Exception NullReferenceException: Object reference not set to an instance of an object : Watson_Service.set_Active (Boolean value) (at Assets/Scripts/Watson_Service.cs:258)
    Watson_Service.Update () (at Assets/Scripts/Watson_Service.cs:551)

    IBM.Watson.DeveloperCloud.Debug.DebugReactor:ProcessLog(LogRecord) (at Assets/Watson/Scripts/Debug/DebugReactor.cs:60)
    IBM.Watson.DeveloperCloud.Logging.LogSystem:ProcessLog(LogRecord) (at Assets/Watson/Scripts/Logging/Logger.cs:206)
    IBM.Watson.DeveloperCloud.Logging.Log:Critical(String, String, Object[]) (at Assets/Watson/Scripts/Logging/Logger.cs:294)
    IBM.Watson.DeveloperCloud.Logging.LogSystem:UnityLogCallback(String, String, LogType) (at Assets/Watson/Scripts/Logging/Logger.cs:167)
    UnityEngine.Application:CallLogCallback(String, String, LogType, Boolean)

    Thank You!

    • I see your followup comment (but for some reason it is not showing above):

      Hello Tony,
      I was able to figure out the issue, referring to my previous question. However, even though all the connections respond as Success, I am getting this error: [12/14/2018 10:24:38][Credentials.OnGetTokenFail();][DEBUG] Failed to get IAM Token: URL: https://gateway.watsonplatform.net/assistant/api/v1/, ErrorCode: 401, Error: Generic/unknown HTTP error, Response: {"code":401, "error": "Unauthorized"}
      IBM.Watson.DeveloperCloud.Debug.DebugReactor:ProcessLog(LogRecord) (at Assets/Watson/Scripts/Debug/DebugReactor.cs:68)
      IBM.Watson.DeveloperCloud.Logging.LogSystem:ProcessLog(LogRecord) (at Assets/Watson/Scripts/Logging/Logger.cs:206)
      IBM.Watson.DeveloperCloud.Logging.Log:Debug(String, String, Object[]) (at Assets/Watson/Scripts/Logging/Logger.cs:228)
      IBM.Watson.DeveloperCloud.Utilities.Credentials:OnGetTokenFail(Error, Dictionary`2) (at Assets/Watson/Scripts/Utilities/Credentials.cs:244)

      IBM.Watson.DeveloperCloud.Utilities.Credentials:OnRequestIamTokenResponse(Request, Response) (at Assets/Watson/Scripts/Utilities/Credentials.cs:336)
      IBM.Watson.DeveloperCloud.Connection.c__Iterator0:MoveNext() (at Assets/Watson/Scripts/Connection/RESTConnector.cs:651)
      IBM.Watson.DeveloperCloud.Utilities.Routine:MoveNext() (at Assets/Watson/Scripts/Utilities/Runnable.cs:131)
      UnityEngine.SetupCoroutine:InvokeMoveNext(IEnumerator, IntPtr)

      …and the new iAm API key. Any mistake that I need to correct?

      Thank You!

      • So, the error does not look to be related to the services (WA, STT, TTS), but purely to the first step of getting the IAM Token.

        Which leads me to the question: if you do a search in the article above for the text “if you do not then you will see an error output like so”, there is a snippet on the right-hand side that shows the values that are being set for connecting to Watson Assistant. Do you have the same sort of details entered?

        for instance, you should only have the workspaceId, version-date and api key entered.

        the api key value is from the WA screen that is shown when you look at the service credentials.

  26. Hello Tony,

    I have updated the script to accommodate the latest IAM API as per the latest Unity Watson SDK, and the script connections now work correctly. I applied the following adjustments:
    Namespace updates:
    using IBM.Watson.DeveloperCloud.Services.Assistant.v2;
    using IBM.WatsonDeveloperCloud.Assistant.v2;

    private string _sessionId; – for Assistant session id creation

    and in all the CreateService methods the TokenOptions have been updated:

    // Authenticate using iamApikey
    TokenOptions tokenOptions = new TokenOptions()
    {
        IamApiKey = _STTiamApikey
    };
    and the assistant service id has been added as a property.
    With the above changes I have been able to set up the connections. However, I am getting this error and have been unable to solve it yet. I would appreciate it if you could assist with the following error:

    NullReferenceException: Object reference not set to an instance of an object
    Watson_Service.set_Active (Boolean value) (at Assets/Scripts/Watson_Service.cs:231)
    Watson_Service.Update () (at Assets/Scripts/Watson_Service.cs:533)

    NullReferenceException: Object reference not set to an instance of an object
    Watson_Service.set_Active (Boolean value) (at Assets/Scripts/Watson_Service.cs:231)
    Watson_Service.Update () (at Assets/Scripts/Watson_Service.cs:533)

    Thank You!

    • I would say that you have checked the Play [x] boolean in the properties section – UNCHECK it. (There is a comment and screenshot advising you to do that above.) As I do not have your Watson_Service.cs file I do not know what is at line 231, but the error about .set_Active (Boolean value) indicates to me that that is your problem. So, uncheck it and try again. Thanks.

  27. Hello Tony,
    I have a few questions: is using SALSA a must?
    Since I don't have the assets.

    • Hi, SALSA is just used for the movement of the UMA – I just used it for the eye and head movement. You can do without it if you want; it just felt a bit unnatural with no movement. Or, if you’re a Unity expert, you can build it yourself or use a different library. If you do find an alternative (I did test out 3-4 before I decided on SALSA) then let me know. Thanks, Tony

  28. Hi all, I cannot figure out why setting “Active” to “true” and then invoking “StartRecording()” would not resume my Speech-to-Text service. Has anyone encountered such a problem before?

  29. Hi Tony, Where in the code are you dealing with entities?

    • Tony_Pigram January 28, 2019

      Hi Josh, inside the .cs code we’re not dealing with entities, it is purely extracting the Output.text from the returned JSON. You want to look at the code around line 444 for that reference.

      If you look at this function:
      private void OnCONVMessage(object response, Dictionary<string, object> customData)
      {
          Log.Debug("Assistant.OnMessage()", "Response: {0}", customData["json"].ToString());
          // ...
      }

      The JSON output will be something like this:
      [01/25/2019 17:09:08][Assistant.OnMessage()][DEBUG] Response: {"intents":[{"intent":"General_Greetings","confidence":0.9858184337615967}],"entities":[],"input":{"text":"first hello"},"output":{"text":["Hello. Good evening"],"nodes_visited":["node_13_1502484041694","node_15_1488295465298"],"log_messages":[]},"context":{"conversation_id":"17d483b5-d426-4490-bd59-1633c05fc3b2","system":{"initialized":true,"dialog_stack":[{"dialog_node":"root"}],"dialog_turn_counter":1,"dialog_request_counter":1,"_node_output_map":{"node_15_1488295465298":[0]},"branch_exited":true,"branch_exited_reason":"completed"}}}

      You can then feel free to parse that JSON, extracting the content from messageResponse.Entities. In the above example you can see that that array is empty, but if you are populating it, that’s where you need to extract the values from, and then in your code you can do whatever you need.
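      To make that concrete, here is a minimal, hedged sketch of walking the entities array inside the message callback. It assumes, as the SDK examples of that era did, that the response object arrives as nested Dictionary<string, object> / List<object> values – treat it as a starting point, not the article's exact code:

```csharp
// Sketch: assumes response deserializes to nested Dictionary/List objects
private void OnCONVMessage(object response, Dictionary<string, object> customData)
{
    var result = response as Dictionary<string, object>;
    object entitiesObj = null;
    if (result != null && result.TryGetValue("entities", out entitiesObj))
    {
        foreach (object item in (List<object>)entitiesObj)
        {
            var entity = item as Dictionary<string, object>;
            // Each entry carries at least "entity" and "value" fields
            Log.Debug("Assistant.OnMessage()", "Entity: {0} = {1}",
                entity["entity"], entity["value"]);
        }
    }
}
```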

      hope that helps, thanks tony

      • @Tony_Pigram Thanks Tony, I appreciate your reply. I used the “OnListEntities” method, as an example, to get a list of my entities, which is awesome! The following is an example of me being able to receive them, but my question is: are we supposed to process this ourselves? For instance, when you say something and the intent is recognized, one of the responses you have set up will fire and you will get a response to your utterance. Is it not supposed to be the same – that if your response matches an entity, that entity should handle the utterance and similarly send back a response based on that? Not sure if I am explaining my question well though…

        Response: {"entities":[{"entity":"My_1st_Entity","fuzzy_match":true},{"entity":"My_2nd_Entity","fuzzy_match":true}],"pagination":{"refresh_url":"/v1/workspaces/0ffd748d-32df-4e2a-9681-190799b882c7/entities?version=2019-01-24"}}

        • My answer would be no. I believe that you should pass the utterance text to Watson Assistant and let the Watson Assistant configuration determine what the Intent identified is and what entities have been identified and extracted. This way you are pushing Intent and Entity identification and actions performed on that analysis into the correct tooling – Watson Assistant. Using the Dialog design tool, you would then have a tree-node structure where the Intent is identified, you could then get more granular to detect the entities mentioned and reply with a specific response back to the user.

          If you really wanted to perform some action within the Unity C# code, as you’ve explained you can get the values from the JSON and then in C# code you can perform some other action; such as, change an expression or change a GameObject.
          For instance, if your user decided to swear at your Digital Human, you can then use your Watson Assistant configuration to identify the swear word(s) as entities and, when they are returned in the JSON, detect this and make your Digital Human “frown” and maybe wave a finger whilst it reads out the WA Dialog response text of “tut!tut! we don’t like that type of language, can you ask me that question again, but this time without swearing at me, thank you”
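          A hedged sketch of that swear-word reaction – the entity name "Profanity", the ReactToEntities() helper and the "Frown" Animator trigger are all illustrative names I have made up, not part of the article's code:

```csharp
// Hypothetical hook inside the MonoBehaviour: entity/trigger names are made up
private void ReactToEntities(List<object> entities)
{
    foreach (object item in entities)
    {
        var entity = item as Dictionary<string, object>;
        if (entity != null && "Profanity".Equals(entity["entity"] as string))
        {
            // Drive whatever expression system you use (Animator, UMA expressions...)
            GetComponent<Animator>().SetTrigger("Frown");
        }
    }
}
```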

          • @Tony_Pigram Thank you Tony. I think I understand what you mean. I will try to summarize it, so you can see if I correctly understood your explanation => Using the Watson Assistant tool, I can set up a tree-node structure (which I have), then create intents and entities (which, again, I have), and then keep typing in my input (that is, treating the assistant as a textual chat bot); when an intent is identified (by the Assistant) I receive a response (which I do). I can also add entities to get more granularity, and then, following a particular response, I can input something else immediately and the entity will match and fire the specific responses that I have included in my entities’ reactions (which I have done and tested many times). However, you are saying that in Unity I cannot achieve this chain of automatic “moving from intent-recognition to entity-recognition” and instead can only get responses back from the service based on pure “intent-recognition”; so if I would like to simulate behaviour similar to the textual chat bot, I should manually have my entities listed, get them back into Unity (as I have done) and then perform further processing on them to refine the avatar’s response. Have I understood you fully right? (Many thanks in advance, Tony!)

          • (For some odd reason I cannot reply under your following post – we might need to make a new thread 🙂 )

            Correct – all the Digital Human is, is a chatbot front-end… that listens, speaks and moves, but ultimately it turns what you’ve said into text. That text is then sent to WA, in the same way that you would have done if you’d built a chatbot text entry form.

            It is subtle, but in the source-code you’ll notice that the “context” object is populated each time a response is received from WA, this is what allows WA to know “where it is” in the tree-node structure and what Intents/Entities have been identified by WA. Remember, it is WA that is doing all the real work. Unity is just receiving the output from WA and presenting it back to the user, we just happen to be doing that via a Digital Human and speech, rather than displaying the text output in a chatbot box.

            If you want to do “more” in the Unity side of things, then sure, you can analyse the WA “context” object to see what data is available to you and you can then do whatever you want to refine the Digital Humans response, such as raising eyebrows, making it smile, making it wink, look shocked etc.. I believe you’ve understood correctly.
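            As a sketch of that context round-trip (it assumes the v1 MessageRequest exposes Input and Context properties, as the SDK of that era did – verify against your SDK version):

```csharp
private Dictionary<string, object> _context = null;   // latest WA context

private void OnCONVMessage(object response, Dictionary<string, object> customData)
{
    // Remember the returned context so WA knows "where it is" on the next turn
    object ctx = null;
    (response as Dictionary<string, object>).TryGetValue("context", out ctx);
    _context = ctx as Dictionary<string, object>;
}

private void SendNextTurn(string userText)
{
    MessageRequest request = new MessageRequest()
    {
        Input = new Dictionary<string, object> { { "text", userText } },
        Context = _context   // null on the very first turn is fine
    };
    _ASSservice.Message(OnCONVMessage, OnCONVFail, _ASSworkspaceId, request);
}
```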

          • @Tony_Pigram Dear Tony, With some help from a Unity Developer at IBM Watson in the US, I managed to get automatic “entity recognition” working in Unity; not through manual-processing, but via the Assistant service itself, just like what you would get in the “Try it” panel. As a token of my appreciation for your continued help and support on this page, I would be more than happy to discuss it with you if you are interested. Let me know, Cheers

  30. Praveenanasurya February 03, 2019

    Hi Tony,

    In Dynamic character Avatar we are receiving error CS0246. Similarly we are receiving errors. Is it possible to send the screenshot over email?

    • are you able to post your issue here: https://github.com/YnotZer0/unity3d_watsonSdk/issues

      I will just point out that you’d get totally abused if you pasted a comment like that into StackOverflow – so I’ll be nicer; can you provide a LOT more information if you want some help. What platform are you running on? What version of Unity are you using? Did you follow the install instructions for the Digital Avatar, did that install/work okay on its own? did you install SALSA? did that install/work okay? When you installed IBM SDK, did that install/work okay?
      Did you follow the instructions above? If so, where did you get to, before you got errors? Which C# file did you use, was it the new one? What line of the C# code did you fail on?

      Can you paste the error message you are getting?

      Quite a lot of information to request – but, if you want someone else to help you, this amount of information should be provided, else I/we’ll just be fumbling around in the dark trying to figure out what you’ve done wrong (if indeed we can).

  31. Hi,
    I’m trying to complete this fantastic tutorial, but… Unity tells me that all the namespaces starting with IBM. are wrong!

    using IBM.Watson.DeveloperCloud.Connection;
    using IBM.Watson.DeveloperCloud.Logging;
    using IBM.Watson.DeveloperCloud.Utilities;
    using IBM.Watson.DeveloperCloud.DataTypes;
    using IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1;
    using IBM.Watson.DeveloperCloud.Services.SpeechToText.v1;
    using IBM.Watson.DeveloperCloud.Services.Assistant.v1;

    please upgrade the Watson_newServices file with the right namespaces 🙂


    • Tony_Pigram May 31, 2019

      Hi, this article was last updated in January 2019 when all was working fine. Am curious, what version of Watson SDK are you using?

    • Tony_Pigram June 02, 2019

      Hi Massimo, can you share the changes you made to make it work? I’ll then update the .cs file and add a comment that you shared this info. thanks tony

  32. Holofrenia June 09, 2019

    I have tested SALSA_UMA and the lip sync is working in Unity 2018.3.14f1.
    In parallel I have followed the IBM Cloud tutorials and all is working fine too.
    The problem comes when I import to Unity the Watson_newServices.cs file, then I got 16 errors.
    Any help is highly appreciated.

    • Tony_Pigram June 21, 2019

      Hi, I have no time to check this right away and provide an update – BUT some simple investigation suggests that the IBM SDK has changed the namespace references. This is a breaking change and I am a little disappointed (but not surprised). All you have to do is go and look at the Watson SDK examples and see what they now use to reference the API calls.
      Take a look here: https://github.com/watson-developer-cloud/unity-sdk
      Then here for examples: https://github.com/watson-developer-cloud/unity-sdk/tree/master/Examples

      Then for example take a look at the code here: https://github.com/watson-developer-cloud/unity-sdk/blob/master/Examples/ExampleStreaming.cs

      At the top, you can see that the namespace references have changed:
      using IBM.Watson.SpeechToText.V1;
      using IBM.Cloud.SDK;
      using IBM.Cloud.SDK.Utilities;
      using IBM.Cloud.SDK.DataTypes;

      so, change the namespaces in the original Watson_newServices.cs file to match the new namespaces that are used by the SDK. Then that should be job done.
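      As a rough old-to-new mapping for the usings in Watson_newServices.cs (the right-hand names are taken or inferred from the current SDK examples – verify each one against the SDK version you actually import, as I have not compiled this list myself):

```csharp
// Old (Watson SDK 2.x)                          // New (SDK built on the IBM.Cloud.SDK core)
// IBM.Watson.DeveloperCloud.Logging             IBM.Cloud.SDK
// IBM.Watson.DeveloperCloud.Utilities           IBM.Cloud.SDK.Utilities
// IBM.Watson.DeveloperCloud.DataTypes           IBM.Cloud.SDK.DataTypes
// IBM.Watson.DeveloperCloud.Connection          IBM.Cloud.SDK.Connection
// ...Services.SpeechToText.v1                   IBM.Watson.SpeechToText.V1
// ...Services.TextToSpeech.v1                   IBM.Watson.TextToSpeech.V1
// ...Services.Assistant.v1                      IBM.Watson.Assistant.V1 (or .V2)
```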

      thanks tony

      • Holofrenia June 23, 2019

        Thanks for your answer Tony. I have already changed the namespace references.
        After that, the immediate 4 errors I got are the following (they were the same errors I was getting with the previous WatsonTTS.cs file too):

        WatsonTTS.cs(486,43): The type name 'Error' does not exist in the type 'RESTConnector' -> private void OnCONVFail(RESTConnector:Error error, Dictionary customData)
        WatsonTTS.cs(519,42): The type name 'Error' does not exist in the type 'RESTConnector' -> private void OnTTSFail(RESTConnector.Error error, Dictionary customData)
        WatsonTTS.cs(90,13): The type or namespace name 'SpeechToText' could not be found (are you missing…) -> private SpeechToText _STTservice;
        WatsonTTS.cs(91,13): error CS0246: The type or namespace name 'Assistant' could not be found -> private Assistant _ASSservice;

        Probably I’m missing a library. In case you have a hint. [I’m using the latest Watson_newServices.cs].
        Thanks for all your attention and work.

        • Tony_Pigram June 24, 2019

          Hi, okay, it looks like it’s a little more than just namespace name changes: they’ve changed the parameters that are passed to the OnxxxFail() events – a quick look at ExampleStreaming.cs shows that around line 143 they have an onError() event that just takes a string as input now.
          A quick look at this example: https://github.com/watson-developer-cloud/unity-sdk/blob/master/Examples/ExampleAssistantV2.cs
          does show that the namespace references have changed to .V2.

          okay, this week I’ll remake the tutorial on my laptop with a fresh Unity install and the latest Watson SDK, hit the same issues as yourself, and provide an updated Watson_newServices.cs file – it’s about this point where the article has reached its update limit, I reckon; if I were to make changes to the .cs file that involved significant updates to the article, it might be simpler to do a complete re-write. For now though, I’ll just include an UPDATE: section and a new .cs file to work with the latest Watson SDK release. (I really do wish they would not make breaking changes that are not backward compatible; there must be a reason for it, but it is a little annoying trying to keep aligned with a rolling rock.)

          thanks tony

  33. Hi Tony, I have a business proposal to you. Where can I find you?

Join The Discussion