Skill Level: Intermediate

You’ll build a chatbot that gives definitions not only of a fixed set of entities, but of the whole world of Wikipedia. We will make external API callouts from Watson Assistant via Cloud Functions, using the Wikipedia API to get definitions for contextual entities.


IBM Cloud Account

Watson Assistant

Cloud Functions


  1. Set Up an IBM Cloud Function

    1. Go to IBM Cloud Catalog https://cloud.ibm.com/catalog
    2. Search “Functions”
    3. Click on “Functions” service
    4. Click on “Start Creating”
    5. Click on “Create Action”



  2. Define the Cloud Functions Action

    1. Type your Action name [action_name], e.g. Wikipedia-Connection
    2. Create a package with any [package_name], e.g. Assistant-Functions
    3. Select your language. In this recipe, we will use Node.js
    4. Create your function
    5. We can now start editing the function!
  3. Wikipedia Connection

    We will send an API request to https://en.wikipedia.org/w/api.php and will hand over the parameter object_of_interest which is going to be defined in our Watson Assistant Workspace later. As a result, we receive a JSON file with the extract of our desired object.
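    To see what we will be working with, here is a sketch of the opensearch response shape (the sample text and title are illustrative; the exact content returned by Wikipedia will differ, but the array layout is what matters):

    ```javascript
    // opensearch returns a four-element array:
    // [search term, [matching titles], [short extracts], [article URLs]]
    const sample = [
      "Oxygen",
      ["Oxygen"],
      ["Oxygen is a chemical element with symbol O and atomic number 8."],
      ["https://en.wikipedia.org/wiki/Oxygen"]
    ];

    // The definition our bot will read out is the first entry of the third array.
    function firstExtract(res) {
      return (res[2] && res[2][0]) ? res[2][0] : null;
    }
    ```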

    In your Cloud Functions Actions, replace the existing code with the following:

    let rp = require('request-promise')

    function main(params) {

        const options = {
            // opensearch returns [term, [titles], [extracts], [urls]]; limit=1 keeps the best match
            uri: "https://en.wikipedia.org/w/api.php?action=opensearch&format=json&namespace=0&limit=1&search=" + encodeURIComponent(params.object_of_interest),
            json: true
        }

        return rp(options)
            .then(res => {
                return { extract: res }
            })
    }



    Don’t forget to save your action!

    By clicking on “Invoke”, you can test your function and see the output in the console. However, if you invoke this action, you will get a result with “undefined” extract. That’s correct, as we haven’t handed over anything yet!
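    To see a real result before wiring up Watson Assistant, you can change the invocation input to a parameter object like this (the value is just an example):

    ```json
    {
      "object_of_interest": "oxygen"
    }
    ```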

  4. Noting down Credentials

    We need to save our credentials so that we can use them in Watson Assistant later.


    1. Go to “Endpoints”. Under REST API, copy and save the URL for later.

    2. Click on API KEY. Copy and save the API Key for later.

  5. Setting up Watson Assistant

    Let’s switch over to Watson Assistant. It’s going to be a very basic dialog tree like this.


    Let’s do this step by step.

    1.    Go to IBM Cloud Catalog https://cloud.ibm.com/catalog and search for “Watson Assistant”

    2.    Make sure your Assistant is in the same Region as your Cloud Function (e.g. Dallas)

    3.    Click on “Create”

    4.    Start by clicking on “Launch tool”

    5.    Set up a new Skill by clicking on “Skills” –> “Create new”

    Please note: Currently, contextual entities are only supported in English, so leave the language settings as “English”.

  6. Understanding the Intent

    Now that we have set up the chatbot, we want to teach it to understand our intention.

    1.    Click on “Create intent” and define a name for it (e.g. #tell_me_about). Click on “Create intent”

    2.    Add multiple different user examples to this intent, e.g.:

    what is oxygen

    what means legislation

    what’s the definition of gravity

    Can you explain something about the International Space Station

    Tell me something about Star Wars

    what is love

  7. Extracting Entities

    Now, we want to teach the chatbot to extract the correct object of interest. Therefore, we’ll use contextual entities.

    With contextual entities, you can define entities based on their context in the utterance. You do this by annotating the entities in your intent examples. Watson will use these annotations to learn how to identify the entity values and can match new values that have not been explicitly annotated.

    You simply highlight the part of the utterance, enter an entity name (let’s take @object_of_interest) and save.


  8. Extending Examples

    Make sure your examples also include entities consisting of several words. Otherwise, the model will learn that @object_of_interest values are one-word objects only.


    If you look at the entities page, you can see the annotations, and the intents they were annotated in:


  9. Creating the Dialog

    Now, let’s create the conversational part.

    1. Navigate to the Dialog section and click on “Create Dialog”. You will see how a basic dialog tree builds up.

    2. Create a new dialog node by clicking on “Add Node”.

    3. Fill in the condition (below “If assistant recognizes”), in our case #tell_me_about


  10. Changing Context

    Below in the same node, open the JSON Editor by clicking on the three little dots.

    Replace the code with the following:


    {
      "output": {
        "text": {
          "values": [],
          "selection_policy": "sequential"
        }
      },
      "actions": [
        {
          "name": "/SophiesShowcase_coolstuff/wikipedia",
          "type": "cloud_function",
          "parameters": {
            "object_of_interest": "<? @object_of_interest ?>"
          },
          "credentials": "$credentials",
          "result_variable": "$response"
        }
      ],
      "context": {
        "credentials": {
          "api_key": "<your-cloud-functions-api-key-here>"
        },
        "object_of_interest": "@object_of_interest"
      }
    }

    Two parts need to be replaced: the action name and the api_key value. Let’s do this in the next steps.

  11. Updating URL

    For the action name, you will need part of your cloud function URL, which should look similar to this:
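
    For example (a hypothetical URL; your region, namespace, and action name will differ):

    ```text
    https://us-south.functions.cloud.ibm.com/api/v1/namespaces/SophiesShowcase_coolstuff/actions/wikipedia
    ```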



    Now the two path segments following “namespaces” and following “actions” need to be combined into the action name in the node JSON editor, like this:

    "name": "/SophiesShowcase_coolstuff/wikipedia",
  12. Updating Credentials

    To hand over the credentials in the context, replace the api_key value with your cloud function API key.

    "api_key": "1234567890-example"
  13. Adding a Response

    Next, we will define the response.


    1. Add a child node to the node we edited before by clicking on “Add node”

    2. Activate Multiple Conditioned Responses (via “Customize” button in the upper right corner of the node)


  14. Defining Answers

    Below “Then respond with”, we need two answers.


    The first one handles the case where the desired object of interest cannot be found.


    $response.extract.get(0)=="null" || $response.extract.get(2).get(0)==""

    Answer: I’m sorry, I could not find any definition on Wikipedia.


    The second condition will be our successful response, giving the short abstract from Wikipedia.



    Answer: I am defining <? $response.extract.get(0) ?>: <? $response.extract.get(2).get(0) ?>
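
    As a sketch of how these two conditions map onto the opensearch array (the response shapes below are assumed for illustration; Watson Assistant actually evaluates the SpEL expressions shown above):

    ```javascript
    // Assumed shapes for $response.extract; the real text will differ.
    const found = {
      extract: ["Gravity", ["Gravity"], ["Gravity is a fundamental interaction."], ["https://en.wikipedia.org/wiki/Gravity"]]
    };
    const notFound = {
      extract: ["qwzxv", [], [""], []]
    };

    // Mirrors the first condition: no search term, or an empty first extract.
    function isMiss(response) {
      const e = response.extract;
      return e[0] == null || !e[2] || e[2][0] === "" || e[2][0] == null;
    }

    // Mirrors the success answer template.
    function answer(response) {
      const e = response.extract;
      return "I am defining " + e[0] + ": " + e[2][0];
    }
    ```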


    It should look like this:


  15. Connecting the nodes via "Jump To"

    Now, go back to the parent node with the #tell_me_about condition.

    1. Scroll down to “and finally”

    2. Choose “Jump to”

    3. Select the child node which we just edited and choose “Respond”

    This is how it should look:


  16. We're done, let’s check our result!

    You can easily try out your assistant by clicking on “Try it” on the right side. Pose different questions and notice the result!


    Note: You might need to add more training data to your intent and contextual entities in order to improve understanding.


  17. What's next?

    Awesome! You can now extend this basic functionality or integrate the capability into your existing chatbots and voice bots. Check out further tutorials to learn how to create a voice-enabled bot and let your personal digital assistant give definitions whenever you need them :-)

    I’d be interested in more code patterns for Wikipedia API integrations, so feel free to share your developments in the comments!

5 comments on "Connect Watson Assistant with Wikipedia API via Cloud Functions"

  1. sreddy4node June 20, 2019

    Thank you sophierm, for the wonderful article.

    I followed your instructions step by step. While testing the IBM Assistant I am getting a 401 error.

    To be Specific, the error is : {“cloud_functions_call_error”:”The supplied authentication is invalid”}

    I have given my valid credentials only, i.e. the API key of my CF-based namespace.

    My CF based space is London.

    Please guide me in fixing this error.

  2. Hi, thanks for your comment! In which space is your Watson Assistant Service? Both WA and CF need to be in the same region + space. Maybe that’s causing the error?

  3. sreddy4node June 25, 2019

    Thank you so much sophierm. My WA was in the Washington region and my CF was in London. Now after seeing your message I changed my WA to London and it’s working fine.

    Really it killed all my time with this small mistake.

    I have one more doubt.

    Can i call more than one cloud function in one dialog node?

    My scenario is user text his address and i will send that to my api and after i get the result i will ask him confirmation. When he say yes i need to call another api and no means another api.

    I have already agree intent. I will use this agree intent almost 6 times in my complete chat bot flow. How can I call all separate apis for each time the same intent or node is called. How can i call a CF with different condition in same Node?

  4. Hi sreddy,
    good question! To be honest, I would suggest creating separate nodes for the different API calls – you can define the condition for each child node, letting WA check the conditions of all sub-nodes (in your case, there should be a yes/no condition). It will be easier to maintain if you design your dialog in a modular way. You can still use the same intent, but different nodes for different scenarios. Does that help?

  5. sreddy4node June 25, 2019

    Thank you sophierm. I will work on the model you suggested and check. If I have any doubt again I will post here; if not, can you provide a mail to contact you? As I am new to IBM WA, please guide me in fixing all my errors.
