Skill Level: Intermediate




Basic knowledge of

  1. IBM Watson IoT Platform
  2. Apache Spark service
  3. IBM IoT Real-Time Insights
  4. SPSS Modeler
  5. Watson Machine Learning service

** Service fee may apply. Estimated Monthly Costs: < $20


  1. Recipes to enhance Analytics in IBM Watson IoT Platform

    Before you proceed, evaluate the analytical and cognitive recipes from the list that suit your needs. Click on the respective image below.

    [Image: List of analytical recipes]            [Image: List of cognitive recipes]

  2. Introduction

    The total amount of data produced by IoT devices and systems is enormous and arrives at very high velocity. However, more than 90% of this data is lost unless it is analyzed. One way of performing this analysis is to set a threshold that triggers an action once it is breached. This can be seen in the danger-zone readings in the time-series data shown below.

    However, this approach is at best reactive and at worst simply futile (as the event has already occurred).
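In code, the reactive threshold check amounts to a single comparison. The threshold value and class name below are illustrative only, not part of the recipe's code:

```java
// Illustrative sketch of the reactive threshold approach described above.
// The 30.0 degree threshold is an assumed value, not from the recipe.
class ThresholdCheck {
    static final double DANGER_THRESHOLD = 30.0; // assumed danger-zone boundary

    // Returns true only once a reading has already entered the danger zone --
    // by which point the event has occurred and we can only react.
    static boolean isInDangerZone(double temperature) {
        return temperature >= DANGER_THRESHOLD;
    }
}
```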

    The real benefit of this massive amount of data produced by IoT lies in performing real-time analysis on it, so as to discover trends and patterns and to use these patterns to predict failures in a timely manner (as can be seen by the unexpected temperature dip above). One mechanism for performing this analysis is predictive analytics.

    Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about the future. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.

  3. Design and Architecture

    This recipe explains how one can integrate the IBM Watson Machine Learning service with IBM Watson IoT Platform to detect a temperature change before it hits the danger zone. A similar approach can be taken to apply other types of analytics.

    The following diagram shows various components involved in the integration. 


    A device with a temperature sensor continuously publishes events to the IBM Watson IoT Platform. In the absence of an actual device, we have provided a simulator that keeps pumping in events.

    Multiple receivers, running in Apache Spark, subscribe to these events and make REST calls to the SPSS model deployed in the Watson Machine Learning service.

    The SPSS stream is built on top of the SPSS streaming time series expert model. Based on the input data, it finds the most suitable time series forecast model and trains it automatically at scoring time. The first 50 data points are used for training, and the model adjusts itself over time. The stream is deployed in the Watson Machine Learning service. Through an API call, the service returns the next few forecasts based on the input.

    When a real-time data reading is received, the Spark streaming job gets the next few forecasts from the Watson Machine Learning service. It also calculates the z-score (a standard score, which indicates how many standard deviations an element is from the mean) to indicate how much the actual reading differs from the forecast readings.

    A z-score can be calculated with the following formula:

    z = (X - µ) / σ

    where z is the z-score, X is the value of the element, µ is the population mean, and σ is the standard deviation.

    Since the forecast is a trend indicator, a difference bigger than the normal range indicates a sudden change of value. So, in a way, the z-score is used as an indicator that an event outside the acceptable threshold is about to happen. The z-score can thus be used in an RTI rule to determine when an alert needs to be raised. A larger threshold value filters out smaller spikes and dips.
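As a concrete sketch, the z-score formula above translates directly to code. The class and helper names are illustrative; the recipe's actual implementation lives in the IoTSparkAsServiceSample project on GitHub:

```java
// Minimal sketch of the z-score computation described above.
// Names are illustrative, not taken from the recipe's source code.
class ZScore {
    // z = (X - mu) / sigma
    static double zscore(double x, double mean, double stdDev) {
        return (x - mean) / stdDev;
    }

    // Mean of a window of recent readings.
    static double mean(double[] values) {
        double sum = 0;
        for (double v : values) sum += v;
        return sum / values.length;
    }

    // Population standard deviation of the window, given its mean.
    static double stdDev(double[] values, double mean) {
        double sumSquares = 0;
        for (double v : values) sumSquares += (v - mean) * (v - mean);
        return Math.sqrt(sumSquares / values.length);
    }
}
```

A reading far from the forecast window's mean, relative to its spread, yields a large |z| and can be flagged by a rule.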

    This section briefly explained the architecture of the components and introduced the theory behind the statistical model used in the recipe. The next sections provide a hands-on walkthrough of carrying out the recipe in your Bluemix environment.

  4. Deploy the Model in IBM Watson Machine Learning Service

    In this step, we will deploy the model in the Watson Machine Learning Service running in IBM Bluemix. 

    IBM Watson Machine Learning is a service in Bluemix that makes it easy for developers and data scientists to work together to integrate predictive capabilities with their applications. Built on IBM’s proven SPSS analytics platform, Watson Machine Learning allows one to develop applications that make smarter decisions and improve user outcomes. It exposes a set of REST APIs that can be called from any programming language to obtain scores.

    What is a Predictive Model?

    A predictive model is a mathematical function that learns the mapping between a set of input data variables (usually bundled into a record) and the target variable (response). We can use IBM SPSS Modeler to create predictive models. IBM SPSS Modeler is a data mining and text analytics software application built by IBM, used to build predictive models and conduct other analytic tasks. It has a visual interface that allows users to leverage statistical and data mining algorithms without programming.

    In this Recipe, we use a time series expert model that predicts the forecast temperatures. Based on the input data, it finds the most suitable time series forecast model and trains the model automatically during the scoring time. The first 50 data points are used for training and the model will adjust itself over time.
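The SPSS expert model itself cannot be reproduced here, but the idea of a forecaster that keeps adjusting as readings arrive can be illustrated with a toy sliding-window moving average. This is a deliberate simplification for intuition only, not the model the recipe uses:

```java
// A toy stand-in for the forecasting step: NOT the SPSS time series expert
// model, just a sliding-window moving average showing how a forecast
// adjusts itself as new readings arrive. The window size is an assumption.
class MovingAverageForecast {
    private final double[] window;
    private int count = 0; // readings seen so far, capped at window length
    private int next = 0;  // ring-buffer write position

    MovingAverageForecast(int size) {
        window = new double[size];
    }

    // Feed one observed reading; the oldest is overwritten once full.
    void observe(double reading) {
        window[next] = reading;
        next = (next + 1) % window.length;
        if (count < window.length) count++;
    }

    // Forecast the next value as the mean of the recent window.
    double forecastNext() {
        double sum = 0;
        for (int i = 0; i < count; i++) sum += window[i];
        return sum / count;
    }
}
```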

    To keep things simple, a ready-made model is available on GitHub.

     Deploy the model

    1. Open your browser and go to Bluemix. If you are an existing Bluemix user, log in as usual. If you are new to Bluemix, you can sign up for a free 30-day trial.
    2. In Bluemix, click Catalog and then the Watson Machine Learning service (under the Data and Analytics category) as shown below.

    3. Type a name for the service and, for now, keep the service unbound; we can bind it later. Click the Create button.
    4. Click the Dashboard tab to load the SPSS model that is already built.
    5. Download the time series expert model from this link and drop it onto the Watson Machine Learning service as shown below.
    6. Specify the context id and deploy the model.

    Retrieve access key

    In order to invoke the model, one needs to obtain an access key of the Watson Machine Learning service.

    • Go to the Bluemix dashboard and open the Watson Machine Learning service that you just created.
    • Click the View Credentials button to get the access_key and url of the ML service. You might observe a screen as shown below.
    • Note down the access_key and url for later use.

    In this step, we have successfully deployed the SPSS model to the Watson Machine Learning service in Bluemix. In the next step, we will start the data simulator that generates the temperature data at specified intervals.

  5. Register your Device(s) In Watson IoT Platform

    In order to send the temperature data (IoT sensor data), we first need to register the device(s) in the IBM Watson IoT Platform. This section guides you through that process.

    Carry out the steps present in this recipe to register your device(s) in IBM Watson IoT Platform. When the device is registered, you will be provided with the registration details shown below. Make a note of them; we need these details to connect the device to Watson IoT Platform later.

    Generate API Key and Token of Watson IoT Platform

    In order to connect the Apache Spark service and the IoT Real-Time Insights (RTI) service to IBM Watson IoT Platform to receive device events and results, we need to generate the API key and token first. This can be achieved by carrying out the steps present in this section – Generate API Key in Watson IoT Platform.

    Note down the Key and Token; we need these later to connect the Spark application and RTI to Watson IoT Platform.

    At this point, we have successfully created the IBM Watson IoT Platform service, registered the device(s) in it, and generated the API key.

  6. Publish Temperature Data

    In this step, we will publish the temperature events to IBM Watson IoT Platform so that change in temperature values can be predicted beforehand.

    1. Download and install Maven and Git if not installed already.

    2. Clone the iot-predictive-analytics repository as follows:
      git clone https://github.com/ibm-messaging/iot-predictive-analytics-samples.git 
    3. Navigate to the DeviceDataGenerator project (“cd iot-predictive-analytics-samples/DeviceDataGenerator”) and build the project using Maven:
      mvn clean package
    4. This downloads all required dependencies and starts the build. Once built, the sample can be located in the target directory, with the filename IoTDataGenerator-1.0.0-SNAPSHOT.jar.
    5. Modify the device.prop file present in the target/classes directory by entering the following device registration details that you noted in the previous step:
      Organization-ID = <Your Organization ID>
      Device-Type = <Your Device Type>
      Device-ID = <Your Device ID>
      Authentication-Method = token
      Authentication-Token = <Your Device Token>

      (Note: options must be modified based on your device registration)

    6. Run the data generator sample using the following command:

      mvn exec:java -Dexec.mainClass="com.ibm.iot.iotdatagenerator.IoTDataGenerator"
    7. Observe that the device connects to IBM Watson IoT Platform and publishes the simulated temperature data (from the testDataSet file).
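As an aside, the device.prop values above map directly onto the MQTT client id the device uses to connect; the "d:&lt;org&gt;:&lt;type&gt;:&lt;id&gt;" format is the Watson IoT Platform convention for device clients. The sketch below is illustrative, not the sample's actual code:

```java
// Sketch of composing the Watson IoT Platform MQTT client id for a device
// from the device.prop values shown above. The helper name is hypothetical;
// the "d:<org>:<type>:<id>" format is the platform's documented convention.
class DeviceConfig {
    static String clientId(String orgId, String deviceType, String deviceId) {
        return "d:" + orgId + ":" + deviceType + ":" + deviceId;
    }
}
```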

    Viewing your device and events in Watson IoT Platform

    1. Open the Watson IoT Platform service that you created in the above step “Register your Device(s) In Watson IoT Platform” and click Launch Dashboard.
    2. Select Devices tab and observe that your device is connected to Watson IoT Platform.
    3. Click on the device to view the sensor events published by the simulator. To view an individual event, click on it.

    In this section, we have successfully started a device sample. Let's consume and process these events by creating the Apache Spark application in the next section.

  7. Create Spark Streaming service

    In this step, we will create the Scala notebook application to onboard the device events to Apache Spark and invoke the Watson Machine Learning service.

    Create a notebook in IBM Data Science Experience

    The IBM Data Science Experience (DSX) is an environment that has everything a data scientist needs to be successful. It provides an interactive, collaborative, cloud-based environment where data scientists can use multiple tools to activate their insights. Data scientists can use the best of open source tools such as R and Python, tap into IBM's unique features, grow their capabilities, and share their successes.

    1. Use a supported browser to log in to DSX at http://datascience.ibm.com/.
      If you have a Bluemix id, you can log in with it.
    2. Set up a new project. Projects create a space for you to collect and share notebooks, connect to data sources, create pipelines, and add data sets, all in one place. As shown below, click “+” and then select Create Project to create a new project.
    3. Specify the name and create the project. Note: in case no Spark service and Object Storage instance exist yet, create them before creating the project.
    4. Go to the project and click the “add notebooks” link to create a new Jupyter notebook as shown below. The Jupyter Notebook is a web application that allows one to create and share documents that contain executable code, mathematical formulae, graphics/visualization (matplotlib), and explanatory text.
    5. Specify a descriptive name for the notebook, select Scala as the language, select 1.6 as the version, and click the Create Notebook button as shown below.

    Create the notebook application to receive the device events in the Spark,

    1. Go to the notebook. In the first cell (next to In [ ]), enter the following %AddJar commands to upload the streaming application jar and all the dependent jars to the Spark environment.
      %AddJar https://github.com/ibm-watson-iot/predictive-analytics-samples/releases/download/0.0.3/IoTSparkAsServiceSample-3.0.0.jar -f
      %AddJar https://github.com/sathipal/spark-streaming-mqtt-with-security_2.10-1.3.0/releases/download/0.0.1/spark-streaming-mqtt-security_2.10-1.3.0-0.0.1.jar -f
      %AddJar http://central.maven.org/maven2/org/apache/wink/wink-json4j/1.4/wink-json4j-1.4.jar
      %AddJar https://repo.eclipse.org/content/repositories/paho-releases/org/eclipse/paho/org.eclipse.paho.client.mqttv3/1.0.2/org.eclipse.paho.client.mqttv3-1.0.2.jar
      %AddJar http://central.maven.org/maven2/org/apache/commons/commons-math/2.2/commons-math-2.2.jar
      %AddJar http://repo1.maven.org/maven2/args4j/args4j/2.0.12/args4j-2.0.12.jar
    2. Add another cell by clicking the + button below the Edit menu option. Enter the following configuration parameters so that the Spark streaming application can talk to Watson IoT Platform and the Watson Machine Learning service. Note that the credentials, placed between angle brackets, need to be modified. Also, the appid must be less than 20 characters.
      import com.ibm.iot.iotspark.IoTSparkAsServiceSample

      //Watson IoT Platform related parameters
      IoTSparkAsServiceSample.setConfig("appid","a:<orgid>:<unique appid>")

      // Predictive Service related parameters
      IoTSparkAsServiceSample.setConfig("predictive-service-url","<APPENDED URL>")

      (The APPENDED URL must be in the following format – https://{Watson ML service URL}/pm/v1/score/{contextId}?accesskey={access_key for this bound application}.
      For example, if the URL obtained in Predictive Analysis is “https://ibm-watson-ml.mybluemix.net”, the context id is “predict”, and the access key is “xxxxxxxxxxxxx”, then the composed URL is “https://ibm-watson-ml.mybluemix.net/pm/v1/score/predict?accesskey=xxxxxxxxxxxxx”)

    3. Trigger the streaming job by adding the following in the next cell,
      IoTSparkAsServiceSample.startStreaming(sc, 4)

      (Note that the streaming batch interval is set to 4 seconds; you can increase or decrease it by changing the value.)

    4. Then run the code all at once by going to Cell->Run All as shown below,
    5. Observe that when the application starts, it verifies connectivity to Watson IoT Platform and the Watson Machine Learning service with the given configuration parameters. If there are any issues, the application stops. If connectivity succeeds, it reads the sensor events from Watson IoT Platform in real time, invokes the Watson Machine Learning service, calculates the zscore and wzscore values based on the predicted values, and publishes the result back to IBM Watson IoT Platform.

    6. Observe in the notebook that the temperature (originating from the device), the forecast temperature (returned by the Watson Machine Learning service), and the zscore and wzscore values (calculated by this Spark application) are printed every 4 seconds as shown below.

    7. You can observe the results in the Watson IoT Platform dashboard as well, by clicking on the device row in the Watson IoT Platform dashboard.

    In case you want to stop the program, there is an Interrupt Kernel button under the Kernel menu.

    Building and running your own code

    One can modify the existing Spark streaming application to suit a particular use case and run it. Follow the steps below to do so:

    1. Clone the iot-predictive-analytics repository as follows:

      git clone https://github.com/ibm-messaging/iot-predictive-analytics-samples.git 
    2. Import the SparkComponent project into the Eclipse environment and make the necessary changes.
    3. Build the project using maven (either via Eclipse or command line)
      mvn clean package 
    4. This downloads all required dependencies and starts the build. Once built, the sample can be located in the target directory. Post the jar IoTSparkAsServiceSample-2.0.0-jar-with-dependencies.jar on a publicly available URL, for example Box or Dropbox.
    5. Go to the notebook and modify the first cell to upload the streaming application jar that you built instead of the one available on GitHub:

      %AddJar https://github.com/sathipal/spark-streaming-mqtt-with-security_2.10-1.3.0/releases/download/0.0.1/spark-streaming-mqtt-security_2.10-1.3.0-0.0.1.jar -f
      %AddJar <URL of IoTSparkAsServiceSample-2.0.0-jar-with-dependencies.jar> -f

      Note: Replace the URL of the IoTSparkAsServiceSample jar with the URL where you placed the built application (say Box, Dropbox, etc.). In the case of Dropbox, you may have to change the last part of the URL (from ‘?dl=0’ to ‘?dl=1’).

    6. As the IoTSparkAsServiceSample-2.0.0-jar-with-dependencies.jar is built with all the dependencies, you don’t need to specify the dependencies except for the spark-streaming-mqtt-security_2.10-1.3.0-0.0.1.jar.
    7. Keep the contents of the remaining two notebook cells the same and start the streaming application by carrying out the steps mentioned in the sub-section “Create the notebook application to receive the device events in the Spark”.

    In this section, we started a Scala Spark application that reads the device events and calls the Watson Machine Learning service, which returns the forecast temperatures. The Spark application then calculates the zscore and wzscore values and publishes them back to IBM Watson IoT Platform, where they are used for charting and to create alerts in the IBM Real-Time Insights (RTI) service. We will demonstrate this in the next step.

  8. Create Rules in Watson IoT

    In this step, we will show how to create a rule on the wzscore value that alerts when it crosses the threshold.

    Create a Message Schema

    Make sure the Apache Spark Streaming application is running, otherwise, you may not get the right data points.

    1. In the Devices tab, select the Manage Schemas tab as shown below,
    2. Click Add Schema to add a new schema,
    3. Select the DeviceType for which the schema is created and click Next,
    4. Click Add a property to add the datapoints from the connected device.
    5. Select the “From Connected” tab and then select the required datapoints as shown below.
    6. Click Finish to finish the schema creation.

     Add a Rule

    • In the Rules tab, select the Browse tab and click “Create Cloud Rule”.
    • Provide a name for the rule, select the schema name in the “Applies to” column, and click Next.
    • Define the rule as shown below; the rule will trigger an alert when the wzscore value is either above 3 (temperature spike) or below -3 (temperature dip).
    • You can also associate different actions with the rule. Refer to this recipe for more information about the list of available actions in RTI and how to associate them with a rule.
    • Click Save to save the rule and then Activate to activate it.
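The cloud rule's condition can be expressed as a simple predicate. The class and method names below are illustrative; the ±3 threshold matches the rule defined in this step:

```java
// Sketch of the cloud rule's condition: raise an alert when wzscore
// breaches +/-3. Names are illustrative; RTI evaluates this server-side.
class WzscoreRule {
    static final double THRESHOLD = 3.0;

    static boolean shouldAlert(double wzscore) {
        return wzscore > THRESHOLD || wzscore < -THRESHOLD;
    }
}
```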

     In this step, we have successfully set up RTI and configured rules such that alerts will be generated when the wzscore crosses the threshold.

  9. Results

    Realtime Alerts

    Now, when the predicted results are sent to Watson IoT Platform, the rules analyze the wzscore data in real time and take action when a threshold is breached. Go to the Dashboard tab to view the alerts and notifications.

    1. Click on Boards tab to view the analytical cards as shown below,
    2. Click on the Device-Centric-Analytics card and select the device to view the list of alerts generated for this device.

    Realtime charting

    With the new cards in the Watson Internet of Things Platform, one can build a custom dashboard with visualization charts for the real-time data coming in from the devices. Refer to this recipe for detailed information about creating visualization charts.

    Carry out the following steps to visualize the results in charts.

    1. In the dashboard, select the Boards tab.
    2. Click + Create New Board to create a board for visualization.
    3. Specify a name and create the dashboard.
    4. Open the new board and click the + Add New Card button.
    5. In the Devices section, select Realtime Chart.
    6. Select a device.
    7. Now, define the data set for the visualization. Click Connect new data set.
      • Enter the name for your data set
      • Select the event
      • Select the property of the event as temperature
      • Optionally, you can select the unit of the data set as well.
      • Repeat these steps to add the wzscore property.
    8. Click Next
    9. Preview the card. You can select the size of the card now. By default Small is selected.
    10. Enter the title for the card and click Submit.
    11. Observe that the values are plotted in the chart.
  10. Conclusion and the Road Ahead

    This recipe showed how to integrate IBM Watson IoT Platform, DSX, and the Watson Machine Learning service so as to take timely action before an (unacceptable) event occurs. Developers can take a look at the model and code available in the GitHub repository to understand what's happening under the hood. Developers can consider this recipe a template for integrating the Watson Machine Learning service with IBM Watson IoT Platform, modifying the existing Spark application, as well as the model, depending upon the use case.

    Go through the next part of this recipe “Timeseries Data Analysis of IoT events by using Jupyter Notebook” to analyze the resultant events produced by this recipe, in a Jupyter Notebook using Spark SQL and Pandas DataFrames.

41 comments on "Engage Machine Learning for detecting anomalous behaviors of things"

  1. Can IBM QEWS be also used on WIoT platform?

  2. YMDH_sathish_Palaniappan August 29, 2016

    Yes, one can easily integrate IBM's Predictive Maintenance and Quality (PMQ) system with Watson IoT Platform by creating a custom flow in the IBM Integration Bus (IIB) node that receives the events from the Watson IoT Platform. We are in the process of drafting a recipe that shows how one can achieve this. It might take a couple of weeks to complete the recipe and publish it. We will post a comment once it's published. Thank you!

  3. RomeoKienzler September 01, 2016

    > Specify the context id and deploy the model.
    What is the “context id” ?

    • Context Id is a unique id referring to the deployed model. This context id is later used when invoking the model for scoring.

  4. Hi – where is the sub-section “create the notebook application to receive the device events in the Spark service”?

  5. Hi, when I run the code in Step 7 (Create Spark Streaming Service) I get the following error: Name: Compile Error

    Message: :21: error: not found: value IoTSparkAsServiceSample
    IoTSparkAsServiceSample.startStreaming(sc, 4)

    Any ideas where I am going wrong?

    • Recipes@IoTF November 02, 2016

      Looks like the first 2 cells were not run. Can you make sure that the first 2 cells are run before the third cell? The first cell downloads all the necessary jars required to run this sample, which appear to be missing.

  6. Thanks for the excellent tutorial, I was able to reach and complete step 7. Step 8 and the rest should be easier to do once we get it right until this step.

    My thanks!, I have been learning a lot in the last few days. Need to integrate this to real IoT devices such as Raspi with Node-RED for example instead of simulated one, as well as modifying the sample predictive model.

    my output at the completion of step 7:

    Time: 1478358216000 ms
    (Device01,State [prediction={“wzscore”:0.6136529541488246,”name”:”datacenter”,”temperature”:17.69,”forecast”:17.63,”zscore”:1.0805949411139768,”timestamp”:”2016-Nov-05 15:03:33″}])

    Time: 1478358220000 ms
    (Device01,State [prediction={“wzscore”:-2.3332071988718557,”name”:”datacenter”,”temperature”:17.583,”forecast”:17.66,”zscore”:-0.5894687217371645,”timestamp”:”2016-Nov-05 15:03:38″}])

    Time: 1478358224000 ms
    (Device01,State [prediction={“wzscore”:-2.1799095025532944,”name”:”datacenter”,”temperature”:17.56,”forecast”:17.66,”zscore”:-0.847580531770821,”timestamp”:”2016-Nov-05 15:03:40″}])

    • Hi Andi! How did you modify the sample predictive model? I attempted to use real IoT devices to send temperature similar to the simulated one, but there is nothing in my output at the completion of step 7: no Time, no wzscore, … and I think the reason is I didn't modify the sample predictive model. Can you show me how to do that?!
      Thanks so much, I hope to receive a response from you as soon as possible!

      • YMDH_sathish_Palaniappan January 02, 2017

        Hi Hieu Le,

        Thank you for contacting us. If I understand correctly, you are able to perform all the steps with the simulator available in the recipe, but face an error when you try to use the real IoT device.

        As long as the device (real device or simulator) sends the temperature events in the following format, then there is no change required in the Spark application,

        Event name: temperature <— this can be anything, but make sure that the same event name is used in Spark configuration

        {"name":"datacenter","temperature":17.47, "timestamp": "xxxxxx"}

        Let me know the event format that is sent to the Watson IoT Platform. Also the Spark configuration (Step 7 values).

        Thanks & regards,

        • Sathish, how are you?

          This is my output:

          MQTT subscribe topics:iot-2/type/+/id/esp8266-A0-20-A6-01-06-C7/evt/d.Temperatura/fmt/+
          MQTT uri:ssl://9nej1b.messaging.internetofthings.ibmcloud.com:8883
          MQTT appid:a:9nej1b:DHT22TS
          MQTT apikey:a-9nej1b-qhfximoqgf
          MQTT authtoken:6(X*S-!c-++HE*p(x+
          Testing the connectivity to Watson IoT Platform …
          MQTT Client is connected to the server
          Connected to MQTT server and Kafka
          Able to connect to Watson IoT Platform successfully !!
          Testing the connectivity to Predictive Analytics service …
          Creating new instance of IoTPredictionNonPeriodic
          window size is 10
          404 Not Found404 Not Foundnginx/1.11.5
          Connection to Predictive Analytics service is proper and able to invoke the service successfully
          +++ print out the received data now:
          Time: 1493912496000 ms

          Time: 1493912500000 ms
          (esp8266-A0-20-A6-01-06-C7,State [prediction=null])

          Time: 1493912504000 ms
          (esp8266-A0-20-A6-01-06-C7,State [prediction=null])

          Time: 1493912508000 ms
          (esp8266-A0-20-A6-01-06-C7,State [prediction=null])

          I can’t make it work.

          This is my json: {“d”:{“Humedad”:69.1,”Temperatura”:25.8}}

          I’m using a real device.

          Thanks in advance, Max.

          • YMDH_sathish_Palaniappan May 23, 2017

            As per our e-mail conversation, I think the issue is resolved.

  7. In step 8, in an attempt to get the generated values on Watson IoT Platform, I've just realized that I am not getting the forecast, zscore and wzscore values.

    I am getting all the scores in Spark…
    Somehow Spark is not writing the value back to Watson IoT Platform, or I was missing something during steps 1-7?
    Any idea what I was doing wrong?


  8. AllefA.Silva February 10, 2017

    I would like to know what to fill in for the

  9. AllefA.Silva February 10, 2017

    I would like to know what to fill in for the unique appid

  10. GuillemGB May 09, 2017

    Hello, on step 7, when I run cells I get the next error:

    Testing the connectivity to Watson IoT Platform …
    Able to connect to Watson IoT Platform successfully !!
    Testing the connectivity to Predictive Analytics service …
    Creating new instance of IoTPredictionNonPeriodic
    window size is 10
    javax.net.ssl.SSLException: Received fatal alert: protocol_version
    at com.ibm.jsse2.j.a(j.java:13)
    at com.ibm.jsse2.j.a(j.java:43)

    Some help, please?


    • GuillemGB May 10, 2017

      I've solved it. The problem was that some services were in the UK and others in the USA. It seems that doesn't work well.

      • YMDH_sathish_Palaniappan May 11, 2017

        Thank you! You are correct; we have faced similar issues and resolved them by moving the services to the same location.

  11. humayunkm May 16, 2017

    Hello, I am having the following issue. Could anyone help? Very much appreciated.

    MQTT subscribe topics:iot-2/type/+/id/+/evt/temperature/fmt/+
    MQTT uri:ssl://2jypjq.messaging.internetofthings.ibmcloud.com:8883
    MQTT appid:a:2jypjq:bi-iot-hkm-iotf-service
    MQTT apikey:a-2jypjq-nhyokpapvt
    MQTT authtoken:fYaK-zDxKqS_SvnkZK
    Testing the connectivity to Watson IoT Platform …
    Able to connect to Watson IoT Platform successfully !!
    Testing the connectivity to Predictive Analytics service …
    Creating new instance of IoTPredictionNonPeriodic
    window size is 10
    [{“header”:[“TEMPRATURE”,”SEQ”,”$TI_TimeIndex”,”$TI_TimeLabel”,”$TI_Period”,”$TI_Future”,”$TS-TEMPRATURE”,”$TSLCI-TEMPRATURE”,”$TSUCI-TEMPRATURE”],”data”:[[null,null,51,”Period 51″,51,1,26.97508525045445,26.848380612101813,27.101789888807083],[null,null,52,”Period 52″,52,1,26.97405474913744,26.824288582022685,27.123820916252196],[null,null,53,”Period 53″,53,1,26.97405474913744,26.794488177877156,27.153621320397725],[null,null,54,”Period 54″,54,1,26.97405474913744,26.76897329649946,27.17913620177542],[null,null,55,”Period 55″,55,1,26.97405474913744,26.746299066622086,27.201810431652795],[null,null,56,”Period 56″,56,1,26.97405474913744,26.72568627364193,27.22242322463295],[null,null,57,”Period 57″,57,1,26.97405474913744,26.706657762763445,27.241451735511436],[null,null,58,”Period 58″,58,1,26.97405474913744,26.688896201967932,27.25921329630695],[null,null,59,”Period 59″,59,1,26.97405474913744,26.672177877340754,27.275931620934127],[null,null,60,”Period 60″,60,1,26.97405474913744,26.656338060202653,27.291771438072228],[null,null,61,”Period 61″,61,1,26.97405474913744,26.64125128892464,27.306858209350242],[null,null,62,”Period 62″,62,1,26.97405474913744,26.626819394037266,27.321290104237615],[null,null,63,”Period 63″,63,1,26.97405474913744,26.612963845716333,27.33514565255855],[null,null,64,”Period 64″,64,1,26.97405474913744,26.599620657159424,27.348488841115458],[null,null,65,”Period 65″,65,1,26.97405474913744,26.58673687201668,27.361372626258202],[null,null,66,”Period 66″,66,1,26.97405474913744,26.574268072722052,27.37384142555283],[null,null,67,”Period 67″,67,1,26.97405474913744,26.562176568917618,27.385932929357264],[null,null,68,”Period 68″,68,1,26.97405474913744,26.55043005197836,27.39767944629652],[null,null,69,”Period 69″,69,1,26.97405474913744,26.53900057694155,27.40910892133333],[null,null,70,”Period 70″,70,1,26.97405474913744,26.527863779431662,27.42024571884322]]}]
    Connection to Predictive Analytics service is proper and able to invoke the service successfully
    +++ print out the received data now:
    java.lang.IllegalStateException: Only one StreamingContext may be started in this JVM. Currently running StreamingContext was started at org.apache.spark.streaming.api.java.JavaStreamingContext.start(JavaStreamingContext.scala:624)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at org.apache.spark.streaming.StreamingContext$.org$apache$spark$streaming$StreamingContext$$assertNoOtherContextIsActive(StreamingContext.scala:762)
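Editor's note: the successful scoring response pasted above is a JSON payload with a `header` array of column names and a `data` array of rows, where `$TS-TEMPRATURE` is the forecast and `$TSLCI-`/`$TSUCI-` are its lower and upper confidence bounds. The sketch below shows one way to locate those columns and read a row; the column names and the sample row are taken from the log, while the `ForecastColumns` class itself is hypothetical, not part of the recipe's jar.

```java
public class ForecastColumns {

    // Returns the index of a column name within the response's "header" array,
    // or -1 if the model did not emit that column.
    static int columnIndex(String[] header, String name) {
        for (int i = 0; i < header.length; i++) {
            if (header[i].equals(name)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Header exactly as printed in the log above.
        String[] header = {"TEMPRATURE", "SEQ", "$TI_TimeIndex", "$TI_TimeLabel",
                "$TI_Period", "$TI_Future", "$TS-TEMPRATURE",
                "$TSLCI-TEMPRATURE", "$TSUCI-TEMPRATURE"};
        // First data row (Period 51), copied from the log above.
        Object[] row = {null, null, 51, "Period 51", 51, 1,
                26.97508525045445, 26.848380612101813, 27.101789888807083};

        double forecast = (Double) row[columnIndex(header, "$TS-TEMPRATURE")];
        double lower    = (Double) row[columnIndex(header, "$TSLCI-TEMPRATURE")];
        double upper    = (Double) row[columnIndex(header, "$TSUCI-TEMPRATURE")];
        System.out.printf("forecast=%.4f CI=[%.4f, %.4f]%n", forecast, lower, upper);
    }
}
```

Reading the bounds this way is what lets an application flag a predicted excursion (e.g. the forecast drifting toward a danger zone) before it happens, rather than reacting after a threshold breach.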

  12. humayunkm May 18, 2017

    Hello, could anyone help? Which plan do I need to upgrade to in order to resolve the following message?

    {"flag":false,"message":"Failed to score predict, msg=Prediction exceeded plan limitation 5000,

    • YMDH_sathish_Palaniappan May 23, 2017

      You need to upgrade the Watson Machine Learning service to a standard or paid plan to score more than 5000 times.

  13. StefanyMazon May 29, 2017

    Hi, can someone help me? I'm trying to do Step 7, but my WML access key is not working. Has anyone else had this problem?

    "flag":false,"message":"Failed to score predict, msg=Can not find model:us-south$a08fd02c-0d9b-4a25-9bee-5f62f488880a/predict, details:java.lang.RuntimeException: Can not find model:us-south$a08fd02c-0d9b-4a25-9bee-5f62f488880a/predict\n\tat com.ibm.spss.internal.scorer.pool.ScorerPoolImpl.get(ScorerPoolImpl.java:96)\n\tat com.ibm.spss.blackbox.score.resource.ScoreUtil.score(ScoreUtil.java:191)\n\tat com.ibm.spss.blackbox.score.resource.ScoreResource.scoreWithInputdata(ScoreResource.java:69)\n\tat sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)\n\tat java.lang.reflect.Method.invoke(Method.java:508)\n\tat org.apache.wink.server.internal.handlers.InvokeMethodHandler.handleRequest(InvokeMethodHandler.java:63)\n\tat org.apache.wink.server.handlers.AbstractHandler.handleRequest(AbstractHandler.java:33)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.handlers.CreateInvocationParametersHandler.handleRequest(CreateInvocationParametersHandler.java:54)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.handlers.FindResourceMethodHandler.handleSubResourceMethod(FindResourceMethodHandler.java:188)\n\tat org.apache.wink.server.internal.handlers.FindResourceMethodHandler.handleRequest(FindResourceMethodHandler.java:110)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat 
org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.handlers.FindRootResourceHandler.handleRequest(FindRootResourceHandler.java:95)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.handlers.HeadMethodHandler.handleRequest(HeadMethodHandler.java:53)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.handlers.OptionsMethodHandler.handleRequest(OptionsMethodHandler.java:46)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.handlers.SearchResultHandler.handleRequest(SearchResultHandler.java:33)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.log.ResourceInvocation.handleRequest(ResourceInvocation.java:92)\n\tat 
org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.internal.log.Requests.handleRequest(Requests.java:76)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)\n\tat org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:75)\n\tat org.apache.wink.server.handlers.AbstractHandlersChain.run(AbstractHandlersChain.java:60)\n\tat org.apache.wink.server.internal.RequestProcessor.handleRequestWithoutFaultBarrier(RequestProcessor.java:207)\n\tat org.apache.wink.server.internal.RequestProcessor.handleRequest(RequestProcessor.java:154)\n\tat org.apache.wink.server.internal.servlet.RestServlet.service(RestServlet.java:124)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:668)\n\tat com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1285)\n\tat com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:776)\n\tat com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:473)\n\tat com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1143)\n\tat com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:81)\n\tat com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:927)\n\tat com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:262)\n\tat com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:955)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:785)\n"}
    Looks like an issue with the URL. Please check the AccessKey and context id!

  14. StefanyMazon June 01, 2017

    Are you an IBMer? Could you please pass me your IBM id?

  15. AuliahNuraini July 24, 2017

    Hi Sathish, thank you for the great explanation in this recipe.
    As you mentioned in one of the comments:
    "As long as the device (real device or simulator) sends the temperature events in the following format, then there is no change required in the Spark application:
    Event name: temperature <— this can be anything, but make sure that the same event name is used in the Spark configuration
    {"name":"datacenter","temperature":17.47, "timestamp": "xxxxxx"}" <— yes, it works

    But what if I want to change the msg.payload from the IoT devices to the following format: {"name":"datacenter","vibration":17.47, "timestamp": "xxxxxx"}?
    I have changed every "temperature" attribute in your predictive model to "vibration" in SPSS Modeler, but when I run the Spark code, I get this error:

    Connected to MQTT server and Kafka
    Able to connect to Watson IoT Platform successfully !!
    Testing the connectivity to Predictive Analytics service …
    Creating new instance of IoTPredictionNonPeriodic
    window size is 10
    {"flag":false,"message":"Failed to score vibration-ContextId, msg=ERROR: Missing required input field [VIBRATION], details:java.lang.RuntimeException: ERROR: Missing required input field [VIBRATION]\n\tat

    So, which Java files in your IoTSparkAsServiceSample-3.0.0 jar do I need to modify as well?
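Editor's note: the error in the comment above (`Missing required input field [VIBRATION]`) arises whenever the JSON key sent by the device does not match the input field the deployed model expects; renaming the field must be done consistently in the device payload, the Spark application, and the SPSS model. The thread does not identify the exact Java classes to change, so the sketch below only illustrates the matching check itself; the `PayloadFieldCheck` class and its method are hypothetical.

```java
public class PayloadFieldCheck {

    // Rough check that a JSON event payload carries the field name the
    // deployed model requires. The comparison is case-insensitive because
    // the scoring error reports the field in upper case (e.g. VIBRATION)
    // even when the payload key is lower case.
    static boolean hasField(String jsonPayload, String modelField) {
        return jsonPayload.toLowerCase()
                .contains("\"" + modelField.toLowerCase() + "\"");
    }

    public static void main(String[] args) {
        // Payload shape taken from the comment above.
        String payload = "{\"name\":\"datacenter\",\"vibration\":17.47,\"timestamp\":\"xxxxxx\"}";
        System.out.println(hasField(payload, "VIBRATION"));   // expected to pass
        System.out.println(hasField(payload, "TEMPRATURE"));  // would trigger the error above
    }
}
```

A real implementation would parse the JSON properly rather than substring-match, but even this check makes the failure mode visible before a scoring call is attempted.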

  16. Ph.Gregoire October 18, 2017

    I cannot get this tutorial to work. I'm not getting any of the IoT events into Spark; I'm just getting the timestamp output but no device data at all:
    IotSparkSample: MQTT subscribe topics:iot-2/type/+/id/+/evt/+/fmt/+
    IotSparkSample: MQTT uri:ssl://g3iwub.messaging.internetofthings.ibmcloud.com:8883
    IotSparkSample: MQTT appid:a-g3iwub-sparkstream37
    IotSparkSample: MQTT apikey:a-g3iwub-xjettnmxk6
    IotSparkSample: MQTT authtoken:AP1gYytjisviKs91Ko
    IotSparkSample: Testing the connectivity to Watson IoT Platform …
    IotSparkSample: Able to connect to Watson IoT Platform successfully !!
    IotSparkSample: Testing the connectivity to Predictive Analytics service …
    Creating new instance of IoTPredictionNonPeriodic
    window size is 10
    [{"header":["TEMPRATURE","SEQ","$TI_TimeIndex","$TI_TimeLabel","$TI_Period","$TI_Future","$TS-TEMPRATURE","$TSLCI-TEMPRATURE","$TSUCI-TEMPRATURE"],"data":[[null,null,51,"Period 51",51,1,26.97000990709166,26.905590020728084,27.03442979345524],[null,null,52,"Period 52",52,1,26.970017143418854,26.87891366643695,27.061120620400757],[null,null,53,"Period 53",53,1,26.97002242896915,26.85844391276962,27.081600945168677],[null,null,54,"Period 54",54,1,26.970026289635086,26.841186516907932,27.09886606236224],[null,null,55,"Period 55",55,1,26.970029109538533,26.825981864526764,27.114076354550303],[null,null,56,"Period 56",56,1,26.970031169249364,26.812235318370522,27.127827020128205],[null,null,57,"Period 57",57,1,26.970032673701123,26.79959367489606,27.140471672506187],[null,null,58,"Period 58",58,1,26.970033772581154,26.787826818617344,27.152240726544964],[null,null,59,"Period 59",59,1,26.97003457522392,26.77677491613319,27.16329423431465],[null,null,60,"Period 60",60,1,26.97003516148938,26.766321593971256,27.173748729007507],[null,null,61,"Period 61",61,1,26.97003558970877,26.756378997603452,27.183692181814084],[null,null,62,"Period 62",62,1,26.97003590248831,26.74687887008925,27.193192934887367],[null,null,63,"Period 63",63,1,26.970036130948404,26.737766927504964,27.202305334391845],[null,null,64,"Period 64",64,1,26.97003629781998,26.728999154152564,27.211073441487397],[null,null,65,"Period 65",65,1,26.970036419706137,26.72053927265543,27.219533566756844],[null,null,66,"Period 66",66,1,26.97003650873409,26.71235696327978,27.2277160541884],[null,null,67,"Period 67",67,1,26.970036573761792,26.704426577894477,27.235646569629107],[null,null,68,"Period 68",68,1,26.970036621259258,26.696726190313544,27.24334705220497],[null,null,69,"Period 69",69,1,26.970036655952303,26.689236881339095,27.25083643056551],[null,null,70,"Period 70",70,1,26.97003668129276,26.681942191269222,27.2581311713163]]}]
    Connection to Predictive Analytics service is proper and able to invoke the service successfully

    print out the received data now:
    -------------------------------------------
    Time: 1508343308000 ms
    -------------------------------------------

    -------------------------------------------
    Time: 1508343308000 ms
    -------------------------------------------
    With the exact same credentials, I can get the other tutorial (https://developer.ibm.com/recipes/tutorials/spark-streaming-ibm-watson-iot-platform-integration) to work and receive the events from my devices.
    I have rebuilt the Java code too; the MQTT client did not work, so I replaced it with the WIoT client, but Spark uses its own code to connect and I don't know whether that has worked.

    It looks like the libraries in this code are quite old (MQTT 2.0), whereas the other recipe uses MQTT 3.1, for example.

    Has anyone managed to make this work recently?
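Editor's note: when no device data reaches Spark despite a successful connection (as in the comment above), a common first check is whether the published event topics actually match the subscribed wildcard filter `iot-2/type/+/id/+/evt/+/fmt/+` printed in the log. This small sketch, with made-up device type, id, and event names, shows how an MQTT `+` single-level wildcard filter can be tested against a concrete topic; the `TopicFilter` class is illustrative, not part of the recipe's code.

```java
public class TopicFilter {

    // Turns an MQTT filter with '+' single-level wildcards into a regex
    // ('+' matches exactly one topic level, i.e. no '/' characters) and
    // tests a concrete topic against it. This does not handle the '#'
    // multi-level wildcard, which the recipe's filter does not use.
    static boolean matches(String filter, String topic) {
        String regex = filter.replace("+", "[^/]+");
        return topic.matches(regex);
    }

    public static void main(String[] args) {
        String filter = "iot-2/type/+/id/+/evt/+/fmt/+";
        // Hypothetical event topic a device might publish to.
        System.out.println(matches(filter, "iot-2/type/datacenter/id/dev01/evt/temperature/fmt/json"));
        // A command topic would not match the event filter.
        System.out.println(matches(filter, "iot-2/type/datacenter/id/dev01/cmd/reboot/fmt/json"));
    }
}
```

If the topic matches but events still do not arrive, the mismatch is more likely in the credentials, the event name configured in the Spark application, or the MQTT client library version noted above.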


  18. Hi,
    In Step 7: IoTSparkAsServiceSample.setConfig("appid","a:<org-id>:<unique appid>")
    I wonder where we get the "unique appid"?

    • Prasanna_Alur_Mathada December 07, 2017

      Hello Keith,
      The parameter "unique appid" referred to in Step 7 is a unique application ID. A common programmatic approach is to take a custom term and append a time stamp to it, which guarantees uniqueness.
      In your case, you can just put any custom ID, say ‘app123’ or ‘1234’ or ‘abc’, etc.

  19. Hi, when I run the code in Step 7 (Create Spark Streaming Service) I get the following error;

    Name: Compile Error
    Message: <console>:29: error: type mismatch;
    found : org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext
    required: org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext
    IoTSparkAsServiceSample.startStreaming(sc, 4)

    Any ideas where I am going wrong?
