Attention: This article is obsolete. The general concepts still apply but the specific sample and code are no longer being maintained.



IBM Streams (“Streams”) enables continuous and fast analysis of massive volumes of moving data to help improve the speed of business insight and decision-making. Streams provides an execution platform and services for user-developed applications that ingest, filter, analyze, and correlate the information in data streams.

With IBM® Streaming Analytics for Bluemix™, you can perform real-time analysis on data in motion as part of your Bluemix application. The Streaming Analytics service is powered by IBM Streams.

This article describes a demo application built around a Streams application that reads airport weather and delay information from the FAA website and retrieves tweets from the IBM Insights for Twitter Bluemix service. It uses Streams text analytics capabilities to categorize the area each tweet relates to, such as “baggage” or “maintenance”. The Streams SPL application uses the HTTPTupleView and WebContext operators, which create an embedded Jetty server to provide data. A Bluemix Liberty application uses a proxy servlet to provide the public web interface and interact with the Streams Jetty server.

Information about the demo app, along with instructions for deploying and running it, can be found in the Application Source section below.

What the Demo Shows

The demo application reads from two different data sources. The first is FAA airport status data, which includes the status of any major airport, known delays, and weather data from NOAA. The second feed uses the IBM Insights for Twitter Bluemix service to retrieve tweets related to the major US airlines; this service provides annotated data for tweets that mention any of the airlines. Each tweet is processed using text analytics operators to produce an Airlines view, a Cities view, and a Complaints view. The Complaints view, for example, looks for words such as luggage, baggage, suitcase, lost, wait, delay, delays, cancel, cancellation, service, attendant, food, rude, maintenance, repair, fault, and faulty, and groups them into four categories: baggage, delay, service, and maintenance. The complaints are aggregated for each airline and airport, and the results are displayed in a browser.
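As a rough illustration of the keyword-to-category grouping (the demo itself uses text analytics operators rather than simple string matching, and the stream and attribute names here are assumptions), a sketch in SPL might look like:

```spl
// Hypothetical sketch: map complaint keywords in a tweet's text to one
// of the four complaint categories. "Tweets" and its schema are assumed.
stream<rstring text, rstring category> Categorized = Functor(Tweets) {
    output Categorized :
        category = (findFirst(lower(text), "baggage", 0) >= 0 ||
                    findFirst(lower(text), "luggage", 0) >= 0 ||
                    findFirst(lower(text), "suitcase", 0) >= 0) ? "baggage" :
                   (findFirst(lower(text), "delay", 0) >= 0 ||
                    findFirst(lower(text), "cancel", 0) >= 0) ? "delay" :
                   (findFirst(lower(text), "rude", 0) >= 0 ||
                    findFirst(lower(text), "attendant", 0) >= 0) ? "service" :
                   "maintenance";
}
```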


The main page shows a map of the United States with airport locations indicated. A green circle indicates a location that has FAA weather data, is not experiencing a delay, and has no sentiment. A black circle indicates a delay but no sentiment. An arrow over an airport indicates that sentiment is available. The arrow points in the direction of the sentiment: up indicates more positive sentiment, down more negative sentiment, and right neutral sentiment.
You can zoom in on areas of the map for a closer look:
You can click on an airport for its details. For example, clicking on the San Jose airport shows the weather, no delay indicated, and positive sentiment:
Zooming in over New York shows one airport that is black (indicating a delay) with an arrow pointing down, indicating overall negative sentiment in the tweets related to that airport:
Clicking on this airport shows the details:
The table under the map shows a summary social reputation trend for each Airline:

The interface provides a set of buttons that let you drill into the underlying data in tabular form:

Clicking the FAA button shows the weather and delay details for each airport:

Clicking the Airport button shows the sentiment details for each airport:

Clicking the Airline button shows the sentiment details for each airline:

Clicking the Tweets Sample button shows several of the recent tweets:

Details of the components

The Streams Application


The Streams application consists of a MasterController main composite that is made up of three parts:

  1. The WebContext operator, which instantiates a Jetty web server running on the Streams application resource and serves the HTML and JavaScript necessary for the browser user interface.
  2. The FAA composite, which is responsible for retrieving and processing the data from the FAA website.
  3. The Tweet composite, which is responsible for interacting with the Bluemix Insights for Twitter service to retrieve and process tweet data.
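The wiring of those three parts can be sketched roughly as follows. This is an assumption-laden outline, not the demo's actual source: the parameter values and the exact invocation of the FAA and Tweet composites are guesses.

```spl
use com.ibm.streamsx.inet.rest::WebContext;

composite MasterController {
    graph
        // serve the HTML/JavaScript UI from the embedded Jetty server
        () as UI = WebContext() {
            param
                contextResourceBase : "opt/html"; // assumed location of the UI files
                context             : "app";      // assumed URL context path
        }

        // FAA airport status and weather feed
        () as FAAFeed = FAA() {}

        // IBM Insights for Twitter feed
        () as TweetFeed = Tweet() {}
}
```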

The WebContext Operator

The WebContext operator instantiates the Jetty server and serves the HTML and JavaScript files. These JavaScript files use Dojo widgets to manage and display the information produced by the FAA and Tweet composites.

The FAA Composite

The FAA composite is responsible for retrieving and processing the data from the FAA website. It uses the InetSource operator to read the data and format it, and it uses the HTTPTupleView operator, backed by the same Jetty web server, to provide a REST interface to the formatted FAA data that the user interface's Dojo objects depend on.
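A minimal sketch of that flow follows, under stated assumptions: the stream names, output schema, polling interval, and context path are placeholders, the FAA URI is omitted, and the parsing logic is elided.

```spl
use com.ibm.streamsx.inet::InetSource;
use com.ibm.streamsx.inet.rest::HTTPTupleView;

composite FAA {
    graph
        // poll the FAA status feed; the URI value is a placeholder
        stream<rstring line> RawStatus = InetSource() {
            param
                URIList       : [""];
                fetchInterval : 60.0; // assumed polling interval in seconds
        }

        // parse the raw lines into a structured tuple (schema assumed)
        stream<rstring airport, rstring weather, boolean delay> Formatted
            = Custom(RawStatus) {
            logic
                onTuple RawStatus : {
                    // parsing of the raw text is omitted in this sketch
                    submit({airport = "", weather = "", delay = false}, Formatted);
                }
        }

        // expose the formatted data over REST from the shared Jetty server
        () as REST = HTTPTupleView(Formatted) {
            window Formatted : sliding, count(100);
            param
                context             : "faa";      // assumed context path
                contextResourceBase : "opt/html";
        }
}
```

HTTPTupleView requires a windowed input port; the browser-side Dojo widgets can then poll its REST endpoint for the current window contents.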

The Tweet Composite

The Tweet composite is responsible for interacting with the Bluemix Insights for Twitter service to retrieve tweet data. It uses text analytics to characterize that data in terms of airports, airlines, and complaint categories. It then aggregates that data and uses the HTTPTupleView operator, backed by the same Jetty web server, to provide a REST interface to the raw tweets and the aggregated Airport and Airline data.
The portion of the application that interacts with the Bluemix Insights for Twitter service is encapsulated in a composite that uses custom-written Java operators, which internally use the Java libraries provided by that service to retrieve the data, along with additional operators to control the retrieval and prepare the returned results for later processing.

The Bluemix Liberty for Java Application

The StreamsProxyApp Bluemix application uses a Liberty runtime to host a proxy servlet from the HTTP Proxy Servlet project to provide a public browser interface to the internal Jetty server.
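If the proxy in question is the widely used Smiley HTTP Proxy Servlet, the Liberty application's web.xml registration would look roughly like the fragment below. The servlet name, URL mapping, and target URI here are assumptions for illustration, not the demo's actual configuration.

```xml
<!-- Hypothetical web.xml fragment: forward /streams/* to the internal Jetty server -->
<servlet>
    <servlet-name>streamsProxy</servlet-name>
    <servlet-class>org.mitre.dsmiley.httpproxy.ProxyServlet</servlet-class>
    <init-param>
        <param-name>targetUri</param-name>
        <!-- host/port of the Streams application's embedded Jetty server (assumed) -->
        <param-value>http://streams-host:8080/app</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>streamsProxy</servlet-name>
    <url-pattern>/streams/*</url-pattern>
</servlet-mapping>
```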

Application Source

The demo code is available at:

Instructions for deploying and running the Bluemix application and a pre-built streams application bundle file can be found at: Streams Airport Sentiment Demo README


This demo application shows some of the power of IBM Streams applications along with the ease of deploying those applications to the cloud in the Bluemix Streaming Analytics service and leveraging other Bluemix services for a complete solution.

25 comments on "Streaming Analytics Airport Sentiment Demo"

  1. Hi, I am trying to access the Demo code but it redirects me to a page that says “Jazzhub has retired”. It leads me to an IBM Cloud’s Toolchain page but there’s no provision to search this demo code. I also don’t know whom and where to request its Toolchain. Can anyone please help?

  2. MikeBranson January 29, 2018

    Sorry for the confusion. With the retirement of Jazzhub, this demo was also retired.

    If you are looking for another demo that uses Twitter, see the app that is used in our development guide:

    Other demos are linked off of this page:

    If none of these demos meet your needs, please reply with the characteristics of the kind of app you are looking for, and I’ll try to point you at the best example. Thanks.

    • Hi, Mike. I was actually trying to figure out how this project fetched data from the aviation authority (FAA) website so that I can check whether I can replicate it as is for the public website of Civil Aviation Authority in Pakistan or request CAA’s IT team for relevant APIs to fetch their data. I think there’s a relevant project in the “Other demos” link you gave me. It is about fetching traffic data from NYC DOT. However, I’d be thankful to you if you can let me know of a project that fetches airport’s data like this project does. Good day!

      • We’ll work on digging up that part of the code and get it to you. At a high level, the demo uses the InetSource operator to bring in that data. And as you’ve found, the NYCTraffic sample also does something similar with InetSource.

        Another example of using InetSource is the EventDetection sample (which should be under that other demos link you have). I've posted a code snippet from that one below, getting weather data. (I've deleted the code comments to get it to fit; if you download the full sample, you will be able to see the comments.) There are a lot of variations possible with InetSource depending on the parameters that you set.

        stream<rstring line> RawObservations = InetSource() {
            param
                URIList             : [""];
                incrementalFetch    : true;
                fetchInterval       : 240.0l;
                inputLinesPerRecord : 3u;
                initDelay           : 10.0l;
                punctPerFetch       : true;
        }


        Hope this helps. Let me know if you have any questions.

        • Hi again, Mike. Need a bit of help from you. I wanted to ask if this code snippet depicts .TXT static files. If yes, how is it being fetched here exactly? Is this link in accordance with the IBM Cloud Foundry app on IBM Cloud?

          Also, here is the website I need to fetch flight’s status data from:
          Are there any API’s that you can recommend me for fetching this website’s data on real-time basis? If not, can you please correct me if I am wrong in my perception that the data can be fetched after short intervals, let’s say 10 mins and then add some snippets from EventDetection so that each updated status is regarded as an ‘event’?

          • MikeBranson April 13, 2018

            The .TXT files in my example can change in that they might have more data appended to them during the hour that they represent. That is why there is a fetchInterval parameter on the InetSource operator. It will poll those files for changes on that interval. They are fetched over HTTP. You can put any one of those URIs in a browser and you will see the file contents. The Streams app’s InetSource operator fetches the data. The CF app doesn’t participate in the fetching of the data.

            For the website you listed, are there URIs where they publish the (raw) data you want to analyze? Or are you trying to scrape it off of web pages? I guess the key question is: where is the data you want to fetch/analyze? If it's published somewhere like the weather data in EventDetection, or if they provide a REST API or another API that allows you to programmatically retrieve it, then it's fairly straightforward. But right now I don't understand where the data is.

  3. Hello, Mike. Thanks for the guidance.

    About the website, I consulted them but it’s a semi-govt. body so I can’t get their URIs where they publish raw data so I am left with no option but to scrape it from here upon certain intervals. Can you guide me how to proceed with that?

    Also if this kind of demo is available anywhere because Jazzhub has retired…? I could use some guidance. Once again thanks a lot for your cooperation!

    • Natasha DSilva May 08, 2018

      Hi @Aamna
      If there is no API and that is the only data source, then scraping and parsing might be the only way.

      Bear in mind that this is not a robust solution at all since your application will break if they change the website structure.

      I used the HTTPRequest operator in the inet toolkit.
      This code allowed me to pull the raw html from that URL. It is based on the HTTPRequestDemo sample included in the toolkit.
      Latest release:

      use com.ibm.streamsx.inet.http::HTTPRequest;

      public composite Main {
          graph
              stream<uint64 scrapeCount> Signal as O = Beacon() {
                  param
                      iterations : 2;    // number of attempts
                      period     : 60.0; // period to wait between attempts
                  output O : scrapeCount = IterationCount();
              }

              stream<uint64 scrapeCount, rstring status, int32 stat,
                     rstring contentEncoding, rstring contentType,
                     list<rstring> responseHeader, rstring respData> Response
                  = HTTPRequest(Signal as I) {
                  param
                      fixedUrl              : "";
                      fixedMethod           : GET;
                      outputBody            : "respData";
                      outputStatus          : "status";
                      outputStatusCode      : "stat";
                      outputContentEncoding : "contentEncoding";
                      outputContentType     : "contentType";
                      outputHeader          : "responseHeader";
                      fixedContentType      : "text/html";
                      requestAttributesAsUrlArguments : true;
              }

              () as Printer = Custom(Response as I) {
                  logic
                      onTuple I : {
                          printStringLn("Number of attempts =" + (rstring) scrapeCount);
                          printStringLn("status=" + status + " code=" + (rstring) stat);
                          printStringLn("contentEncoding=" + contentEncoding + " contentType=" + contentType);
                      }
                      onPunct I : println(currentPunct());
              }
      }

      It periodically pulls that data every 60 seconds.

      • Hi @Natasha, I’m trying to achieve the same thing as aamna. Do I run this code as it is? Because I tried doing that and it throws following errors
        1. A token is missing in the operator invocation head of the SPL program. The token is identifier. The expected token is ”
        6.missing ‘<' at 'Signal'

        • Natasha DSilva May 10, 2018

          Hi, its supposed to be
          public composite Main {

          stream <uint64 scrapeCount> Signal as O = Beacon() {…

      • Hi @natasha, this codes gives following error:
        CDISP0127E ERROR: The following toolkit file is out of date: ./toolkit.xml. This file is newer: Main.spl.

        what to do?

        • Natasha DSilva May 28, 2018

          This code gave that error when you tried to compile it?
          Are you compiling from Streams Studio or from the command line? If you are compiling from Streams Studio, the toolkit.xml file should automatically be updated when you change a source file. Select the project, right click, click "Build toolkit index", and then try compiling again.

          If you are compiling from the command line using "sc", make sure that you are not using the --no-toolkit-indexing flag. This flag prevents the toolkit.xml file from being generated when you compile.
          If that still doesn't work,
          run spl-make-toolkit -i [directory]
          where [directory] is the directory containing the toolkit.xml, then try compiling again.

          Please let me know what works.

          • Thanks. I tried the Streams Studio solution you gave. But I am still getting errors. Here are some:

            CDISP0053E An unknown identifier was referenced in the SPL program:

            CDISP0053E An unknown identifier was referenced in the SPL program:

            CDISP0053E An unknown identifier was referenced in the SPL program:

            Multiple markers at this line
            – CDISP0053E An unknown identifier was referenced in the SPL
            program: stat.
            – CDISP0053E An unknown identifier was referenced in the SPL
            program: status.

            Multiple markers at this line
            – CDISP0053E An unknown identifier was referenced in the SPL
            program: contentEncoding.
            – CDISP0053E An unknown identifier was referenced in the SPL
            program: contentType.

            If you can please guide me on these too. I think they are all same in nature. So maybe one remedy would suffice. What would you say?

          • Natasha DSilva June 04, 2018

            It seems you do not have import statements for the needed operators. That is what “unknown identifier” means.
            Add a `use` statement for the HTTPRequest operator (from the streamsx.inet toolkit) to the main composite. Sorry, I should have pasted the whole application.
            You also need to have the streamsx.inet toolkit added as a dependency to your application:
            Add the toolkit to Streams Studio by following the “Procedure” section of Adding toolkit locations.

          • Also, upon launching, it gives error:
            Error: Cannot run program “/tmp/3699269222452820477/BuildConfig/bin/standalone”: error=2, No such file or directory

            any idea what could be wrong?

  4. Javeria Nadeem May 05, 2018

    Hi, I want to try out this demo code for a project exhibition but it says that it has been obsolete. I want to create an exact version of this demo using IBM streams. Can you provide me with the working demo that has all the activated tools and products of IBM.

    My project is basically an exact replication of this demo, including retrieving twitter tweets and using weather data. I have seen the example of NYC Traffic but it does not have twitter application and text analytics involved. Guide me with a good example to fetch tweets using IBM streams and its analytics.

    • Natasha DSilva May 08, 2018

      Hi, to see how to access Twitter from Streams, you can look at this application:
      Download the source here:

      This page shows how to get the credentials for twitter:
      See steps under “Creating Twitter Application Credentials”

      • Javeria Nadeem June 07, 2018

        Hey Natasha, I have been working with twitter streaming API to fetch tweets that matches my extractor. the issue is that the streaming API is not fetching the tweets relevant to the country i reside in. I want tweets from Pakistani Airlines and the results are not there. Can you help me with the API?

        • Natasha DSilva June 07, 2018

          From what I understand about Twitter you can use keywords when you connect to their API such that it will only return tweets with that keyword. This is more efficient since you don’t have to retrieve data that doesn’t match and then search it.
          Try using the track parameter if you are connecting to the real time API:

          Or the q parameter if using the search API:

          If you are using the real time API there might not be any tweets when you connect, so try using the search API to go back as far as 7 days.
          Bear in mind that Twitter data made available using their free API is limited; you might have to upgrade if your application requires it.
          See their documentation:

  5. Hi, Natasha. I tried adding that statement and the toolkit, still the errors are unresolved. Console says:
    Adding Toolkit Location: file:///opt/ibm/InfoSphere_Streams/ encountered problems.
    Toolkit location file file:/opt/ibm/InfoSphere_Streams/ is not a valid Toolkit Location list file.

    Any idea what could I be doing wrong?

    • Natasha DSilva June 18, 2018


      Does that file /opt/ibm/InfoSphere_Streams/ exist?
      If not, it seems something has gone wrong with your Streams installation.
      I would download the latest copy of the inet toolkit:
      If you are using the Quick Start edition, get the `el6` release.
      Unpack it, and follow the instructions under the “Procedure” section of Adding toolkit locations.
      Then on your project, right click, click "Edit dependencies", and add the newly downloaded inet toolkit (v2.9.6) to the dependencies, making sure no other versions of the toolkit are listed.
      Then try compiling again.

      Lastly, can we please move this discussion to the forums:
      So others can see the solution as well?
      Please post your results there as a new question.
      Thank you.

      • Hi, Natasha. I really appreciate your help but I don’t understand that even after following your suggestions/instructions as is, I am still unable to get this working :/

        • Natasha DSilva June 21, 2018

          Hi, I am sure you are frustrated and sorry you are not making progress.
          I do not know exactly what you are trying to do, so is it possible for you to open an issue on the forums describing what you are trying to do exactly?

          – If possible, please attach a screenshot of what you are seeing/doing, and attach your project (in Studio, right click your project, click Export > Archive File).

          – If you cannot attach the project, paste the SPL you are trying to compile and attach the info.xml of the toolkit so we know what the dependencies are.
          – Also paste the output of
          `ls $STREAMS_INSTALL/toolkits`

          – An important comment: When reporting a problem, please be as detailed as possible and describe what you did, for example, "I right clicked my project and then chose "run", I compiled using "sc -m abc def", and then I got this error". "It doesn't work" is too vague.
          – Also include the operating system for Streams and Streams version, or QSE version if using QSE.

          Thank you and I hope to sort out your problems ASAP!

  6. Hi Natasha, I'm trying to achieve the polarity count of sentiments in this project. I tried to do it on a dummy CSV file with repetitive records using the Aggregate operator. In the Aggregate operator I used the tumbling window but it displays the number of records same as the size of window I've kept, e.g., if I have records of marks in 10 different subjects of a student and I keep the tumbling window of size say 2, partitioned by name of the student, it displays the name of the student 5 times with the count 2. This actually makes sense but is not what I'm trying to achieve. I just want to display the name of the student, say "Anna", with the number which is equal to as many times the name "Anna" appears in the data file. Can you please help me out?

Comments are closed.