Skill Level: Beginner

This recipe - co-authored with Julia Wiegel and Rene Meyer - shows how to ingest and process weather data using the Weather Company Data Service (API), IBM Cloud Functions (based on Apache OpenWhisk) and IBM Event Streams (based on Apache Kafka).


Software Requirements

To obtain an IBM Cloud Account and get access to the IBM Cloud and to IBM Watson Studio, please follow the instructions outlined here:


  1. Introduction

    This recipe shows how to ingest and process weather data coming from the Weather Company using IBM Cloud services. It covers two key steps: ingestion of weather data into the IBM Cloud using the Weather Company API and IBM Cloud Functions, and delegation of the captured weather data to IBM Event Streams for further processing, with IBM Event Streams acting as a generic message broker. Further processing and analysis of the weather data – either as data at rest or as data in motion (real time) – will be covered by other recipes that use IBM Event Streams as the base vehicle for reading events.

    The recipe will cover the following steps:

    • Introduction to the architecture.
    • Getting started by instantiating the required services and installing the needed command line tools.
    • Overview of the Weather Data API.
    • Setting up IBM Function packages for IBM Event Streams.
    • Ingestion of weather data using IBM Cloud Functions and the Weather Company Data API.
    • Sending of weather data to IBM Event Streams using IBM Functions.
    • Scheduling of IBM Functions using time based triggers.
    • Monitoring of IBM Functions.
    • Receiving weather data from IBM Event Streams using IBM Functions.
    • On-demand ingestion of weather data using the Weather Company API and IBM Functions.


    In this recipe we shall use the (public) IBM Cloud for the exercise. However, since IBM Cloud Functions is based on Apache OpenWhisk and IBM Event Streams on Apache Kafka, nothing prevents you from deploying the same configuration on premise or in a private cloud such as IBM Cloud Private.

  2. Architecture Overview

    Before you start, it is worth looking at some details of the architecture of the solution. The architecture diagram shown below is a typical IoT architecture with services for event data ingestion, event handling (message bus), a landing zone (object storage) and event data processing to transform, aggregate, load and visualize event data in a form that makes it business relevant. In other words, it is a data refining pipeline turning “raw oil” into “business relevant information”.

    In this recipe we will focus on the first three services, whereas the other services (rendered slightly transparent) will be covered by upcoming recipes:


    Previously – in an initial version of this recipe – weather data could be retrieved either using the Weather Company Data service on IBM Cloud or directly from the Weather Company using the API. Meanwhile, the service on IBM Cloud has been deprecated, so we will use the Weather Company Data API directly. To retrieve the various kinds of weather data you will, however, need an API key. Once it has been obtained (see section 3), the API can be accessed using REST calls to retrieve historic weather data, current weather data and weather forecast data. Special kinds of data, such as lightning data, may require a special key though.

    IBM Cloud Functions (based on Apache OpenWhisk) is a Function-as-a-Service (FaaS) platform which executes functions in response to incoming events. The actions for reading weather data events from the Weather Company Data API will be triggered by timers. The raw weather data will be ingested and lightly processed, after which it is sent on to IBM Event Streams. Seen from the Event Streams component, the functions thus take the role of producers of event data. Various kinds of consumers can then read the weather data and process it further, either as data in motion in real time, e.g. using Apache Spark Streaming, or as data at rest, e.g. using the IBM Cloud Object Storage Bridge to store the data in IBM Cloud Object Storage. From there, Apache Spark or IBM SQL Query can be used for Extracting, Transforming and Loading (ETL) the data, e.g. into a file or a data warehouse. These aspects of processing and analyzing weather data will be covered by other recipes focusing on services of the IBM Cloud data platform and IBM Watson Studio.

    There are several alternatives to the current architecture. One alternative would be to use Node.js applications (or applications written in any other suitable programming language) to define the ingestion and delegation logic. The advantage of using functions is threefold. First, functions are minimal and allow the developer to focus solely on the logic, since there is no overhead for setting up, compiling and deploying an application. Second, functions are more cost effective: costs are only incurred while the function is executing, determined on the basis of the RAM and time used, whereas applications incur costs even when they sit idle waiting for events to occur. Third, IBM Cloud Functions is distributed and highly available out of the box, whereas for applications this would have to be achieved e.g. by deploying the application in a Kubernetes cluster spanning several data centers (availability zones) within a region.


    Weather Company Data and API

    The Weather Company offers accurate, personalized and actionable weather data and insights via the Weather Company APIs, which provide information covering historical, current and forecast weather conditions. This information may in turn help companies optimize operations, reduce cost, improve safety, and uncover new revenue opportunities.

    The Weather Company retrieves its data from a variety of sources, including 250,000 personal weather stations (PWS) as well as e.g. 20 million smartphones (pressure data) and 50,000 flights per day (atmospheric data). A comprehensive set of APIs is available covering current weather conditions, weather forecasts, severe weather conditions, lightning detection and historical weather data, as well as APIs for e.g. agriculture, traffic and lifestyle. Beyond various forms of historic, current and forecast data, the API also provides current condition and forecast images that can be used to visualize weather conditions:


    The Weather Company API is highly available and globally available with high resolution. Currents on Demand weather information (Enhanced Current Conditions) is for example available down to a resolution of 500 meters. You can retrieve weather data for an area specified by a geolocation and time zone:

    • Hourly forecast: An hourly weather forecast for the next 48 hours starting from the current time, for a specified geolocation.
    • Daily forecast: A daily forecast for each of the next 3, 5, 7, or 10 days starting from the current day, including forecasts for the daytime and nighttime segments.
    • Intraday forecast: A daily forecast for each of the next 3, 5, 7, or 10 days starting from the current day, which breaks each daily forecast into morning, afternoon, evening, and overnight segments.
    • Current conditions: Observed weather data (temperature, wind direction and speed, humidity, pressure, dew point, visibility, and UV Index) plus a weather phrase and a matching weather icon.
    • Historical data: Observed weather data from site-based observation stations for a specified geolocation that includes current observations and up to 24 hours of past observations.
    • Weather alerts: Government-issued weather alerts, including weather watches, warnings, statements, and advisories issued by the National Weather Service (US), Environment Canada, and MeteoAlarm (Europe).
    • Location services: The ability to look up a location name or geocode (latitude and longitude) to retrieve a set of locations that match the request.
    • Almanac services: Historical daily or monthly weather data sourced from National Weather Service observation stations from a time period spanning 10 to 30 years or more.


    In this recipe we shall focus on retrieving current weather conditions for a geolocation of your choice.


    IBM Cloud Functions

    IBM Cloud Functions executes functions in response to incoming events and costs nothing when not in use. Functions provide a platform that lets users develop, run, and manage application functionality without the complexity of building and maintaining the infrastructure typically associated with developing an application. Using functions makes it possible to execute logic in response to database changes, perform analytics on sensor input messages, provide cognitive computing via chat bots, schedule tasks performed for a short time, and invoke auto-scaled APIs and mobile back ends. The current architecture for processing weather data will execute functions periodically by defining schedules following a cron-like syntax to specify when actions are supposed to be executed.


    The Apache OpenWhisk platform supports a programming model in which developers write functional logic (called actions), in any supported programming language, that can be dynamically scheduled and run in response to associated events (via triggers) from external sources (feeds) or from HTTP requests. The project includes a REST API-based Command Line Interface (CLI) along with other tooling to support packaging, catalog services and many popular container deployment options.

    Triggers are classes of events that may occur and can come from a variety of sources. Invocations of a trigger can be blocking, non-blocking, or periodic. Actions are code that runs in response to an event (i.e. they are event handlers) and can be written in a variety of languages, such as JavaScript, Python, Java, PHP, and Swift, or in any other language by packaging the code with Docker. Actions can be composed into sequences that increase flexibility and foster reuse. By composing actions it is also possible to define conditionals, loops and error handling. Rules in turn provide an association of a trigger to an action in a many-to-many mapping. The concept of rules fosters reuse and modularity by allowing a single action to be associated with several triggers.
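    To make the programming model concrete, here is a minimal action in Node.js (a sketch of our own, not part of the recipe's pipeline): an action is just a function named main that maps an input dictionary of parameters to an output dictionary.

    ```javascript
    // Minimal OpenWhisk-style action: main receives a dictionary of
    // parameters and returns a dictionary as its result. The greeting
    // logic is purely illustrative.
    function main(params) {
        const name = params.name || 'stranger';
        return { greeting: 'Hello, ' + name + '!' };
    }
    ```

    Saved as hello.js, such an action could be created with ibmcloud fn action create hello hello.js and invoked with ibmcloud fn action invoke hello --result -p name World.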

    Apache OpenWhisk makes it simple for developers to integrate their actions with many popular services using packages that are provided either as independently developed projects under the Apache OpenWhisk family or as part of the IBM default Catalog. Packages offer integrations with general services such as Kafka message queues, databases including Cloudant, Push Notifications from mobile applications, Slack messaging, and RSS feeds. Development pipelines can take advantage of integrations with GitHub and JIRA, or easily connect with custom data services from IBM Watson for Translation or Speech-to-Text, as well as the Weather Company. In this recipe we shall use the Alarms package to schedule times or recurring intervals at which to run a sequence of actions for reading weather data and delegating the data to Event Streams.

    The advantages of using IBM Cloud Functions are numerous:

    • Cost Effective: cost is only incurred during the short time that the function runs and is dependent on the RAM used.
    • Automatically scalable: action instances scale to meet demand exactly, then disappear.
    • Ease of integration: actions can be triggered from events in other services, or directly via REST API.
    • Polyglot: functions can be developed in one of the natively supported languages, or alternatively in any other language that can be deployed in a Docker container.
    • Highly Available: the current implementation of IBM Functions is transparently distributed over several Availability Zones in a single region which significantly increases the availability of the service, and thus also the availability of applications that rely on the use of this service.
    • Open Source: based on Apache OpenWhisk which avoids vendor lock in.
    • Flexible: Apache OpenWhisk can be deployed on premise, in a private cloud or in a public cloud.


    IBM Event Streams

    The IBM Event Streams service (previously named IBM Message Hub) is based on Apache Kafka. Using Kafka we can decouple message producers from message consumers, which gives greater flexibility compared to a direct one-to-one communication scheme. At the same time Kafka offers essential features such as

    1. Persistence of messages so that messages are not lost,
    2. Distributed and horizontal scaling allowing for multiple servers and consumers, and last but not least
    3. Automatic recovery from broker failure.


    Kafka runs as a cluster of one or more (distributed) servers called brokers that store streams of records in categories called topics, where a topic is a category or feed name to which records are published by producers and read by consumers. The Kafka cluster is a collection of brokers distributed on one or more machines so that the load can be processed simultaneously. Zookeeper monitors the health and maintains the metadata information about the brokers. It is the central service for maintaining configuration information, monitoring overall health of the cluster and balancing broker assignments and re-assignments. For example, should a broker go down, Zookeeper will need to distribute its load to the remaining brokers.


    Each record consists of a key, a value, and a timestamp. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it. Topics are broken up into ordered commit logs called partitions. Each partition is an ordered, immutable sequence of records that is continually appended to. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.


    The producers are in principle responsible for choosing the partition within the topic when sending a message. Messages sent by a producer to a particular topic partition will be appended in the order they are sent, i.e. messages from the producers are appended to the end (right) of the partition. Consumers can decide whether they want to read from the beginning of the topic, or whether they want to read from the next record. Consumers can furthermore be grouped into consumer groups that together read all messages from one and the same topic, to achieve load balancing and higher throughput. Each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Basically this is achieved by letting all consumers in a group share the notion of “next record to read” for each partition. The notions of topics and consumer groups allow Kafka applications to adopt a queue as well as a broadcast protocol for communicating messages between producers and consumers. In the current architecture this gives us the flexibility to add consumers such as the IBM Cloud Object Storage Bridge for storing the weather data in IBM Cloud Object Storage, as well as consumers for processing the data in real time using e.g. Apache Spark Streaming.
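    The offset and consumer-group semantics can be illustrated with a small in-memory model in Node.js. This is a conceptual sketch only, not a client for the real service:

    ```javascript
    // Conceptual model of a single Kafka partition: an append-only log in
    // which each record gets a sequential offset.
    class Partition {
        constructor() { this.records = []; }
        append(key, value) {
            const offset = this.records.length;        // offsets are sequential
            this.records.push({ offset, key, value, timestamp: Date.now() });
            return offset;
        }
        read(offset) { return this.records[offset]; }
    }

    // A consumer group shares one "next record to read" position per
    // partition, so each record is delivered only once within the group.
    class ConsumerGroup {
        constructor(partition) {
            this.partition = partition;
            this.nextOffset = 0;
        }
        poll() {
            const record = this.partition.read(this.nextOffset);
            if (record !== undefined) this.nextOffset += 1;
            return record;
        }
    }
    ```

    Two independent groups attached to the same partition each see all records (broadcast), while consumers sharing a group split the records between them (queue).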

    Kafka provides fault tolerance and high throughput by distributing the partitions over the servers in the Kafka cluster and by replicating the partitions across a number of servers. Each partition has one server which acts as the “leader” and zero or more servers which act as “followers”. The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others to balance load across the cluster. Moreover, Kafka messages are kept in persistent store for a period of time. The Kafka cluster retains all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka’s performance is effectively constant with respect to data size so storing data for a long time is not a problem.

  3. Getting Started

    To get started you will essentially need to go through four steps:

    • Select region, organization and space for deployment of the services.
    • Install the IBM Cloud and IBM Functions CLI and log in to update the components.
    • Obtain API key for the Weather Company Data API.
    • Provision the IBM Event Streams service and create a topic for the service.


    Selection of deployment location

    IBM Cloud Functions is currently available in US South, US East, Germany and the United Kingdom. The Weather Data Service is available in Dallas, London and Sydney. IBM Event Streams is available in the same regions as well as in Washington DC, Tokyo and Frankfurt. In this recipe we shall use London as the location for the Weather Company Data API and IBM Functions, and Germany (with its three availability zones in Frankfurt) as the location for IBM Event Streams.

    Beyond the region you will also need to decide which Cloud Foundry organization and space to use. Your default organization and the default dev space should do for a start, but feel free to make other choices.


    Installation of CLIs

    To prepare the development environment on your local machine, you will need to download and install the IBM Cloud Command Line Interface (CLI for short) and the IBM Functions CLI:


    When using the CLI you will need to log in to a specific region. An important piece of information is the set of potential endpoints for IBM Cloud (see ‘Regions‘ for more information):


    We installed the tools on an Ubuntu Linux 64 bit virtual machine. To update the components (at any time), log in using single sign-on and follow the procedure below:

    1. Invoke the following commands to set the endpoint and log in:
      ibmcloud api <ENDPOINT>
      ibmcloud login -sso
    2. This will display a link for obtaining a one time code for authentication.
    3. Enter Y and press Return.
    4. In the browser, position the cursor over the code and left-click the mouse to copy the one time code. The browser should respond with a popup message saying “Copied”.
    5. Paste the one time code into the command shell and press Return. You may not see the code even if it is correctly pasted into the shell.
    6. If the login procedure informs you that there is a new version of the CLI available, enter Y and press Return to accept the update.
    7. Check that the endpoint, organization and space are correctly set. Otherwise change the target:
      ibmcloud target -o <ORG> -s <SPACE>
      ibmcloud target --cf-api <ENDPOINT> -o <ORG> -s <SPACE>
    8. Use the following command to check the currently installed version of IBM Cloud Functions:
      ibmcloud fn help
    9. Install or update the cloud-functions plug-in by executing one of the following commands:
      ibmcloud plugin install cloud-functions
      ibmcloud plugin update cloud-functions


    It is a good idea to update the plugin at regular intervals to ensure that you work with the most recent version of IBM Cloud Functions CLI.


    Obtaining Weather Company Data API Key

    To obtain an API key for the Weather Company Data API you will need to contact the local Sales office for The Weather Company:

    1. Go to the web page https://business.weather.com/contact.
    2. Fill in your contact data.
    3. In the comments field provide a short description and ask to obtain an API key.
    4. Submit the request by hitting the Contact Me button at the end of the page.


    Provisioning of the IBM Event Streams service

    Finally, create an instance of the IBM Event Streams service. While doing so, use the values for region, organization, space and tags that suit your setup:

    1. Navigate to the IBM Cloud Catalog in the IBM Cloud console.
    2. Enter ‘Streams’ as search string.
    3. Select the ‘IBM Event Streams’ service.
    4. On the Event Streams page:
      1. Provide a name for the service.
      2. Select the region (e.g. Frankfurt).
      3. Choose an organization and a space (or accept the defaults).
      4. Add a tag (e.g. ‘weather-data-demo’) to the service.
      5. Scroll down and select the Standard pricing plan.
    5. Click Create to create the service.
    6. On the next page select the tab Service Credentials to the left.
    7. Click the New Credential button in the right part of the screen to create a new credential.
    8. Provide a name for the credential (e.g. ‘weather data demo credentials’).
    9. Click Add.
    10. For the new credential, select the View Credentials button in the Actions column.
    11. Copy all the information for the new credential to a file on your desktop. You will need it later on.


    To check the result select IBM Cloud > Resource List in the upper right corner of the IBM Cloud UI. Filter the elements using the ‘weather-data-demo’ tag and expand the section for Cloud Foundry Services. You should now see the Event Streams service in the list.

    As a last step you will need to create a topic for the event stream that will be used to exchange weather data between producers and consumers:

    1. In the dashboard of the Event Stream service, select the Manage tab in the left toolbar.
    2. Click the + button to add a new topic.
    3. Enter ‘weather-data’ for the topic name.
    4. Keep the defaults for partition and retention period.
    5. Click Create Topic.
  4. Weather Company Data Service and API

    The Weather Company provides a comprehensive set of APIs for retrieving past, current and forecast weather data as well as weather data related to severe weather conditions, road conditions, agriculture, lightning and aviation. An overview of the offerings, together with a summary, usage guide and code examples, is available on the webpage titled ‘IBM Passport Advantage Offerings | Weather Company Data‘. This page contains a link to the page ‘Weather Company Data | API | Common Usage Guide‘ which provides detailed information on how the URLs are composed as well as information regarding the status codes returned when a REST request is made.

    To give you an impression of what the URLs may look like, here are a few examples (with a geocode from the city of Munich):


    You can test the Weather Company Data service URLs e.g. by using curl or Postman. For this purpose you will need the API key that you obtained in the previous section. The curl command for a GET request to retrieve a weather data observation is basically of the form:

    curl --request GET 'https://api.weather.com/v1/geocode/48.1545703/11.2616557/observations.json?language=en-US&units=e&apiKey=<your api key>' | json_pp

    The URL consists of the base URL for the Weather Company API, the geocode and the designator for the kind of data being requested (‘observations.json’). The output delivered by the GET request is a flat string. By piping the output into json_pp we get a more readable output:


    The returned data defines typical properties of the weather such as pressure, temperature, wind, snow and visibility as well as the time of the observation and the location in terms of name.
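    The URL composition described above can also be expressed in a few lines of Node.js. The helper name and default unit below are our own; only the URL layout comes from the API:

    ```javascript
    // Compose the observations URL from its parts: base URL, geocode and the
    // designator for the requested data ('observations.json').
    function observationsUrl(lat, lon, apiKey, units) {
        units = units || 'e';                      // 'e' selects imperial units
        return 'https://api.weather.com/v1/geocode/' + lat + '/' + lon +
               '/observations.json?language=en-US&units=' + units +
               '&apiKey=' + apiKey;
    }
    ```

    Calling observationsUrl('48.1545703', '11.2616557', '<your api key>') reproduces the Munich URL used in the curl example.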

    Using Postman, simply start the Postman application, enter the URL, user name and password as shown below and click Send:


    With Postman we can choose to view the output delivered by the service in its natural form as pretty printed JSON. In case you want to install and use Postman for retrieving Weather Data using REST calls:


    In the forthcoming sections of the recipe, please feel free to use the geocode for your own location. For more details visit the online documentation for the Location Services of the Weather Company Data API, or use any geolocation finder of your choice, e.g. https://www.latlong.net/convert-address-to-lat-long.html.

  5. Setting up the Packages for IBM Functions

    In the forthcoming sections of the recipe you will create and configure IBM Cloud Functions so that they send incoming weather data to IBM Event Streams. In IBM Cloud Functions, packages are used to bundle together a set of related actions and share them with others. A package can include actions and feeds. An action is a piece of code that runs on Cloud Functions. For example, the IBM Event Streams package includes actions to write records to a topic of IBM Event Streams. A feed is used to configure an external event source to fire trigger events. For example, the Alarm package includes a feed that can fire a trigger at a specified frequency. For more information we refer to the section of the online documentation titled ‘Organizing actions in packages‘.

    The solution for reading weather data and sending that data to IBM Event Streams requires the use of two packages:

    1. A package for the IBM Event Streams service (available but needs to be bound to the credentials for the service).
    2. A package for the application specific logic.


    IBM Functions has pre-installed packages that are automatically registered within Cloud Functions in the /whisk.system namespace. Although it is possible to use the entities in a package directly, there may be situations where you would need to pass the same parameters to an action every time. You can simplify the process by binding to a package and specifying default parameters, which are inherited by the actions in the package. Such default parameters could for example define the credentials of the service. To store credentials or other parameters in a pre-installed package, you must consequently create package bindings.
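    The effect of a package binding can be pictured as a simple dictionary merge. The sketch below is our own illustration and assumes that parameters supplied at invocation time override the bound defaults:

    ```javascript
    // Illustration only: default parameters from a package binding
    // (e.g. service credentials) are merged with the parameters supplied
    // when the action is invoked; invoke-time values win on conflict.
    function effectiveParams(boundDefaults, invokeParams) {
        return Object.assign({}, boundDefaults, invokeParams);
    }
    ```

    For example, effectiveParams({ user: 'alice', topic: 'weather-data' }, { topic: 'test' }) keeps the bound user but overrides the topic.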

    The application specific package will need one trigger, one sequence and four actions as shown below:


    A sequence is a chain of actions where the output of one action acts as input to the following action, and so on. This allows you to combine existing actions for quick and easy reuse. The input to and output from an action is a dictionary of key-value pairs. The key is a string, and the value a valid JSON value.

    A sequence can then be invoked just like an action: through a REST API, manually using the user interface, or automatically in response to events. Such an event may be a trigger configured to fire at specific time intervals so that weather data is ingested periodically – be it every 5 minutes (during development and test) or every hour (for production purposes).

    The sequence itself has four actions:

    • prepareParams that defines the location (latitude, longitude) and observation time (current) for the weather data to be ingested.
    • forecast that actually retrieves the weather data from the Weather Company Data API. Forecast may not be the best name for a function that retrieves the current weather, but it has been inherited from the previous version of this recipe which used the service on IBM Cloud to retrieve weather data.
    • prepareMessage that post-processes the JSON object retrieved from the Weather Company Data API before it is passed to IBM Event Streams.
    • messageHubProduce that writes the JSON object to a specific topic of the IBM Event Streams service.
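    The chaining of these actions can be sketched as follows. The bodies of the two actions below are simplified stand-ins of our own for the real implementations developed later in the recipe:

    ```javascript
    // Simplified stand-in for prepareParams: fill in the geolocation for the
    // observation to retrieve (the values are the Munich geocode used earlier).
    function prepareParams(params) {
        return Object.assign({ latitude: '48.1545703', longitude: '11.2616557' }, params);
    }

    // Simplified stand-in for prepareMessage: wrap the observation into a
    // key-value shape suitable for handing over to messageHubProduce.
    function prepareMessage(observation) {
        return { topic: 'weather-data', value: JSON.stringify(observation) };
    }

    // A sequence pipes the output dictionary of one action into the next.
    // Real sequences also accept promises; this synchronous version is
    // enough to show the idea.
    function sequence(actions, input) {
        return actions.reduce((params, action) => action(params), input);
    }
    ```

    Invoking sequence([prepareParams, prepareMessage], {}) yields a message ready for the 'weather-data' topic.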


    You provide your action to Cloud Functions either as source code or as a Docker image. In this recipe we shall keep it simple and develop minimal fragments of Node.js code. When creating actions it is important to note that action implementations should be stateless, or idempotent. While the system does not enforce this property, there is no guarantee that any state maintained by an action is available across invocations. Moreover, Cloud Functions has a few system limits, including how much memory an action can use and how many action invocations are allowed per minute. For more detail on these limits please visit ‘System Limits‘.

    Beyond creating the packages, the sequence, the trigger and relevant actions, there will be additional activities involved related to checking and monitoring the results produced by IBM Cloud Functions, which will get you to use the command line interface, the code editor and the monitoring dashboard.


    Setting up the Kafka (IBM Event Streams) Package

    In order to send weather data to the IBM Event Streams service we will need to bind the Kafka package. To do this, you will need the command line interface as well as the credentials for the Kafka service:

    1. Obtain the kafka_brokers_sasl, user name, password and kafka admin URL from the IBM Event Streams service as shown in section 3.
    2. Change the kafka_brokers_sasl property so that it becomes a single-line string with quotation marks properly escaped using backslashes, as shown below.
    3. In the CLI, submit the following command using your values for kafka_brokers_sasl, user, password and kafka_admin_url:

    ibmcloud fn package bind /whisk.system/messaging weather-data-event-streams -p kafka_brokers_sasl "[\"kafka04-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka02-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka01-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka03-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka05-prod01.messagehub.services.eu-de.bluemix.net:9093\"]" -p user "<username>" -p password "<password>" -p kafka_admin_url "https://kafka-admin-prod01.messagehub.services.eu-de.bluemix.net:443"
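    One way to produce the single-line value for kafka_brokers_sasl is to let Node.js serialize the broker list for you (a convenience sketch; the broker host names below are just examples):

    ```javascript
    // The kafka_brokers_sasl parameter is a JSON array serialized to a single
    // line. JSON.stringify produces exactly that; the backslashes seen in the
    // CLI command are only shell escaping of the embedded quotation marks.
    const brokers = [
        'kafka01-prod01.messagehub.services.eu-de.bluemix.net:9093',
        'kafka02-prod01.messagehub.services.eu-de.bluemix.net:9093'
    ];
    const singleLine = JSON.stringify(brokers);
    console.log(singleLine);
    ```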


    You may find the following CLI commands useful when working with IBM Functions:

    • To get help: ibmcloud fn help
    • Show the installed packages: ibmcloud fn package list
    • Update the installed packages: ibmcloud fn package refresh
    • Retrieve list of actions: ibmcloud fn action list


    If at some point you would like to change the target IBM Event Streams service so that the weather observations are sent to another instance of the service, this is easily done using the package update command. Be aware, however, that you will need to define bindings for every single parameter (just updating e.g. the user name and the password will not do):

    ibmcloud fn package update weather-data-event-streams -p kafka_brokers_sasl "[\"kafka04-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka02-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka01-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka03-prod01.messagehub.services.eu-de.bluemix.net:9093\",\"kafka05-prod01.messagehub.services.eu-de.bluemix.net:9093\"]" -p user "<username>" -p password "<password>" -p kafka_admin_url "https://kafka-admin-prod01.messagehub.services.eu-de.bluemix.net:443"

    Notice that the command prefix ibmcloud fn is equivalent to the original OpenWhisk command wsk.


    Viewing Packages in IBM Cloud console

    Having created the package bindings, we can check whether the packages have been created correctly using the IBM Cloud dashboard. To achieve this, do the following:

    1. In the IBM Cloud portal select the Catalog.
    2. Enter ‘Function’ as filter and locate the icon for IBM Functions.
    3. Click the Functions service to open it.
    4. Click Actions in the toolbar to the left of the page.
    5. Select the correct region, Cloud Foundry organization and space in the toolbar at the top of the page.
    6. Scroll down and expand the package binding named ‘weather-data-event-streams’:
    7. Click the messageHubProduce action and wait until the code is loaded:

    Notice that the action uses parameters such as the one defining the topic to be used for the messages, as well as the credentials for the service. These credentials come from the package binding, which has the advantage that secrets aren’t hardcoded into the code. However, there are more parameters than just the secrets. To view these parameters simply click Parameters in the left part of the screen. Then select Code to get back to the code and find where these parameters are used to connect to the service.
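    How such bound parameters typically surface inside an action can be sketched as follows. This is our own illustration; the real messageHubProduce code is more elaborate:

    ```javascript
    // Illustrative check that the parameters bound to the package
    // (credentials and topic) actually arrived in the action's params
    // dictionary before attempting to connect to the service.
    function extractConfig(params) {
        const required = ['kafka_brokers_sasl', 'user', 'password', 'topic'];
        const missing = required.filter(name => !(name in params));
        if (missing.length > 0) {
            throw new Error('missing parameters: ' + missing.join(', '));
        }
        return {
            brokers: params.kafka_brokers_sasl,
            user: params.user,
            password: params.password,
            topic: params.topic
        };
    }
    ```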


  6. Ingestion of Weather Data using IBM Functions

    With the predefined package correctly configured in IBM Functions, we can continue with ingesting weather data using IBM Functions. This will require several steps:

    1. Create a package for the current application.
    2. Create an action to read weather data from the Weather Company Data API.
    3. Create an action that defines the location of the weather data to be retrieved.
    4. Create a sequence that chains together the various actions needed.
    5. Test the sequence.


    Create initial package and action

    We will create a package for the application as well as an initial action that contains the code for reading Weather Company observation data. This corresponds to the steps 1-2 mentioned in the beginning of this section.

    The code for the action will be as follows:

    const request = require('request-promise');

    function main(params) {

        console.log('input params:', params);

        var lat = params.latitude || '53.5453';

        var lon = params.longitude || '9.9953';

        var key = params.apiKey || '0';

        var URL = "https://api.weather.com/v1/geocode/" + lat + "/" + lon +
                  "/observations.json?language=en-US&units=e&apiKey=" + key;

        return request(URL).then(response => {
            return JSON.parse(response);
        });
    }


    Notice that the code for calling the REST service is rudimentary since it only deals with the happy-day scenario. We will show more elaborate code for issuing REST calls using the curl action in section 11. For now, this simple piece of code should do.
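    For reference, a slightly hardened variant could look as follows. This is a sketch of our own (using Node’s built-in https module instead of request-promise, to avoid the extra dependency) that validates the API key and reports request failures instead of assuming the happy-day scenario:

    ```javascript
    // Sketch only: same URL construction as the forecast action, but with
    // basic input validation and error handling.
    const https = require('https');

    function buildObservationsUrl(lat, lon, key) {
        return 'https://api.weather.com/v1/geocode/' + lat + '/' + lon +
               '/observations.json?language=en-US&units=e&apiKey=' + key;
    }

    function main(params) {
        if (!params.apiKey) {
            return Promise.reject(new Error('apiKey parameter is required'));
        }
        const url = buildObservationsUrl(params.latitude || '53.5453',
                                         params.longitude || '9.9953',
                                         params.apiKey);
        return new Promise((resolve, reject) => {
            https.get(url, res => {
                let body = '';
                res.on('data', chunk => { body += chunk; });
                res.on('end', () => {
                    try {
                        resolve(JSON.parse(body));
                    } catch (err) {
                        reject(err); // non-JSON response, e.g. an HTML error page
                    }
                });
            }).on('error', reject);
        });
    }
    ```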

    To create the action do the following:

    1. In the IBM Cloud portal, select Getting Started > Overview in the toolbar to the left:
    2. Click the Start Creating button.
    3. In the upper part of the screen select the appropriate Region, Cloud Foundry Org and Cloud Foundry Space:
    4. Select Create Action.
    5. On the next screen enter ‘forecast’ as the name of the action.
    6. To the right of the field Enclosing Package, click Create Package to create a new package.
    7. Enter ‘weather-data-demo’ as the package name and click Create.
    8. Select the Node.js 10 runtime environment:
    9. Click Create.

    This will create a new action for you but with a ‘hello world’ dummy body. You will now enter (copy/paste) the real code and then provide some preliminary values for the parameters so that you can test the code right away:

    1. Copy the code in the beginning of the section and replace the definition of the action with that code.
    2. Substitute the default values of ‘0’ for the parameters with your own values for the latitude, longitude and API key:
    3. Click Save in the upper right corner to save the changes.
    4. Click Invoke in the upper right corner to run the function.
    5. Observe the results.

    Now that you know that the action works as intended, time has come to improve it a bit. Storing API keys in code is certainly a bad habit. We can improve the code by defining the API key as a parameter instead:

    1. Select the Parameters section in the left part of the screen (the entry below Code).
    2. On the next page, click Add Parameter.
    3. Type ‘apiKey’ for the Parameter Name.
    4. Copy and paste your API key into the Parameter Value field.
    5. Click Save.
    6. Select the Code section.
    7. Back in the code replace the value of your apiKey with the original default value of ‘0’.
    8. Save the changes.
    9. Invoke the action.


    The invocation should work, but if you scroll down to the end of the execution results you will see that the log contains a printout of the API key. This is due to the initial statement in the function body that prints out the parameters. This statement has simply been included to enable debugging in case something should go wrong. It should of course be deleted before the code is used in production.
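    If you would rather keep the debugging statement, one option (a small sketch of ours, not part of the recipe’s code) is to mask the secret before logging:

    ```javascript
    // Hypothetical helper: returns a copy of the parameters with the API key
    // masked, so the activation log never contains the secret itself.
    function redactParams(params) {
        const copy = Object.assign({}, params);
        if (copy.apiKey) {
            copy.apiKey = '***';
        }
        return copy;
    }

    // Usage inside main():
    //   console.log('input params:', redactParams(params));
    ```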



    Create Sequence

    The simple action that reads the weather observation data works. However, the results of this action need to be passed onwards to IBM Event Streams. Moreover, we would like to show you how to define the variables latitude and longitude externally to the function itself rather than having them hardwired directly into the code. This calls for the use of a sequence, so that several actions can be chained together to perform more complex logic.

    To create a sequence do the following:

    1. In the IBM Cloud portal, select Getting Started > Overview in the toolbar to the left.
    2. Click the Start Creating button.
    3. Select Create Sequence.
    4. On the next screen enter ‘observeWeather’ as the name of the sequence.
    5. Select ‘weather-data-demo’ as the package.
    6. Click Select Existing for the initial action, then pick the forecast function that you just created as the initial action of the sequence.
    7. Click Create.
    8. On the resulting page you will see that the sequence has been created with an initial action.
    9. Click Invoke to test the sequence.


    Create action defining the latitude and longitude

    The sequence will work as it is defined because you have hardwired the values for latitude and longitude directly into the forecast action. Do the following:

    1. Select the forecast function and change the definition of latitude and longitude default values back to ‘0’.
    2. Select the sequence again.
    3. Run it and observe that it fails.


    To be able to test the sequence with the current definition of forecast it will be necessary to create a function that will pass latitude and longitude values to the forecast function. This is achieved in Node.js by the following function definition:

    function main(params) {
        return {
            latitude: 53.5453,
            longitude: 9.9953
        };
    }

    To create and test the action (as well as the sequence) in IBM Cloud do the following:

    1. Navigate to the page showing the ‘observeWeather’ sequence.
    2. Click the Add button in the list of Sequence Actions.
    3. On the next page name the action ‘prepareParams’.
    4. Select ‘weather-data-demo’ as the package for the action.
    5. Select ‘Node.js 10’ for the runtime.
    6. Click Create & Add.
    7. This will create the action, and the sequence will now have two actions, but ‘prepareParams’ will be the second.
    8. Click the Up-Arrow for the ‘prepareParams’ action to move it one step up in the list, in front of the forecast action.
    9. Click the link named ‘prepareParams’ to open the code editor for the action.
    10. Change the code body to the following:
    11. Click Save to save the changes.
    12. Click Invoke to test the function.
    13. Click the Collapse button in the top part of the screen to collapse the view.
    14. Click the browser’s Back Arrow to go to the previous page (which shows the sequence).
    15. Click Save to save the changes to the sequence.


    Test the sequence

    To test that reading data from the Weather Data Company service works, do the following:

    1. Click Invoke.
    2. Wait for the result to appear:
    3. IBM Functions will return the result as well as the time used for the computation.


    You have now ingested weather data into IBM Cloud using IBM Functions. Notice that the result is of type observation and that the data returned covers a variety of properties relevant for describing the current weather conditions in a given location (in this case Hamburg Finkenwerder). Also observe that the data is a JSON value that contains metadata as well as the observation data.

  7. Sending Weather Data to IBM Event Streams using IBM Functions

    Having read the weather data, only a few steps are needed in order to write the data to IBM Event Streams:

    1. Create the action ‘prepareMessage’ that prepares the weather data for delegation to IBM Event Streams.
    2. Create the action that sends the data to IBM Event Streams.


    Create action that prepares the weather data for IBM Event Streams

    The weather data received from the Weather Company Data API can’t be forwarded directly to IBM Event Streams since it takes the form of a JSON object and IBM Event Streams expects a flat string value. The JSON object must therefore be converted to a string value. Moreover, if we look at the data received in the screenshot above, we can observe that the location parameters are part of the metadata and not the message itself. We would consequently like to add the latitude and longitude properties to the message before it is passed onwards to Kafka. Basically this means that we will need to define an action with the following function definition:

    function main(params) {
        params.observation.latitude = params.metadata.latitude;
        params.observation.longitude = params.metadata.longitude;
        return {
            topic: "weather-data",
            value: JSON.stringify(params.observation)
        };
    }
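    To convince yourself of what this step produces, the function can be exercised locally; the observation values below are made up purely for illustration:

    ```javascript
    // The prepareMessage logic, exercised with a made-up observation.
    function prepareMessage(params) {
        params.observation.latitude = params.metadata.latitude;
        params.observation.longitude = params.metadata.longitude;
        return {
            topic: "weather-data",
            value: JSON.stringify(params.observation)
        };
    }

    const sample = {
        metadata: { latitude: 53.5453, longitude: 9.9953 },
        observation: { temp: 12, wx_phrase: 'Cloudy' } // invented readings
    };

    const message = prepareMessage(sample);
    // message.topic is 'weather-data'; message.value is a flat JSON string
    // that now also carries the coordinates, ready for messageHubProduce.
    ```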

    Define a new action following the same steps as you just did to define the prepareParams action:

    1. Go to the page that defines the ‘observeWeather’ sequence.
    2. Click Create New to create a new action.
    3. Name the new action ‘prepareMessage’ and use the following settings to define the action.
    4. Select the new action in the sequence (it should be the last in the list).
    5. Define the semantics of the action as shown below.
    6. Save the new sequence.
    7. Test the sequence by clicking the Invoke button.


    Create action that sends the message to IBM Event Streams

    The final action needed in the sequence is to send the message to IBM Event Streams. To achieve this do the following:

    1. Create a new action for the sequence.
    2. In the Add Action dialog, select Use Public.
    3. Select the MessageHub package.
    4. Select the ‘messageHubProduce’ action.
    5. Select the binding My Bindings.
    6. Select the binding named ‘weather-data-event-streams’.
    7. Click Add to create the new action.
    8. The new action should now be added at the end.
    9. Save the sequence.
    10. Invoke the sequence.


    You should get a message in the log that the weather data has been sent successfully to the Event Streams service.

  8. Scheduling of IBM Functions using Triggers

    The defined sequence has been executed manually so far. A better approach would be to schedule it so that data are fetched from the Weather Company Data API at regular intervals. This can be achieved by defining an associated trigger for the sequence.

    During testing it would make sense to trigger the sequence every minute (or every fifth minute depending on your preferences and patience). However, for a production system every hour would probably be a more relevant choice.

    To create the trigger do the following:

    1. On the page that defines the sequence, select Connected Triggers in the toolbar to the left.
    2. Click the Add Trigger command in the top right part of the page.
    3. On the next page select Periodic as trigger type.
    4. Name the trigger ‘observeWeatherTrigger5Min’.
    5. Scroll down to the control that is named UTC Minutes and set the pattern to ‘Every 5 Minutes’.
    6. Click Create and Connect.
    7. Check that the trigger has now been created and that it is enabled.


    Do not forget to change the trigger so that it only triggers the sequence every hour (or never, depending on your preferences) before you finish work on this recipe. Remember: running systems in the Cloud may incur costs. Alternatively, disable the trigger by deselecting the checkbox for the trigger.

  9. Monitoring of IBM Functions

    All logs and activation records are automatically available via the IBM Cloud Log Analysis Service, which provides a rich Kibana-based experience for searching logs in any dimension. This feature includes visualization capabilities, which are critical for quickly detecting trends and anomalies and for problem determination at scale.

    To view the log do the following:

    1. Select the sequence that you defined.
    2. Click Log in the toolbar to the left.
    3. Once on the window for IBM Cloud Logging, double click Discover in the toolbar to the left.

    You can get insight into the performance of your actions that are deployed with Cloud Functions by using IBM Cloud Monitoring. You can also monitor the health and performance of your actions by using the dashboard to see a graphical summary of your activity.

    To get an overview of the activity summary, the activity log entries and the timeline, do the following:

    1. Select Monitor in the toolbar to the left.
    2. View the summary of the activities for the sequence that you defined:
    3. Notice that you can filter the view to display specific actions.
    4. Using the activity log you can easily get a detailed view of the log messages.
  10. Receiving Weather Data from IBM Event Streams using IBM Functions

    Apache Kafka offers several kinds of APIs that can be used by Apache Kafka clients, be it Java, Scala, Node.js or Python clients (to mention a few):

    • A Producer API that allows an application to publish records to one or more Kafka topics.
    • A Consumer API that allows an application to subscribe to one or more topics and process the records of the topics.
    • A Streams API that allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams into output streams.

    Using IBM Functions you can consume IBM Event Streams messages from a specific topic. This simply requires the definition of a trigger that will fire whenever a message becomes available in the topic. With the trigger as a starting point, it is then possible to further process the message using actions and to chain actions using sequences.
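    As an illustration (a sketch of our own; the exact payload shape should be checked against the messageHubFeed documentation), an action attached to such a trigger might receive the newly arrived records in a messages array and process them like this:

    ```javascript
    // Assumed input shape: the feed delivers params.messages, an array in
    // which each entry carries the record payload in its `value` field
    // (already parsed as JSON because 'Is JSON Data' was selected).
    function main(params) {
        const messages = params.messages || [];
        const observations = messages.map(m => m.value);

        // Here one could filter, aggregate or forward the observations;
        // this sketch merely counts them and reports the last location seen.
        const last = observations[observations.length - 1];
        return {
            count: observations.length,
            lastLatitude: last ? last.latitude : null
        };
    }
    ```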

    We shall therefore just show the creation of the trigger, which can be done as follows:

    1. In the IBM Cloud portal for Functions, select Triggers in the toolbar to the left.
    2. On the next page titled ‘Triggers’, click the Create button.
    3. On the next page titled ‘Create’, select Create Trigger.
    4. On the next page titled ‘Connect Trigger’ select MessageHub.
    5. On the next page:
      1. Provide a name for the trigger (e.g. ‘consumeWeatherData’).
      2. Select the MessageHub instance.
      3. Select the topic to subscribe to (‘weather-data’).
      4. Select Is JSON Data.
    6. Click Create to create the trigger.


    Once the trigger is created you can combine it with actions to process the data read, as has already been demonstrated in the previous sections. You can achieve the same using the CLI. To create a trigger for an IBM Event Streams topic, submit a command of the following form, replacing the placeholders with your service and topic name accordingly:

    ibmcloud fn trigger create MyMessageHubTrigger -f <myMessageHub>/messageHubFeed -p topic <mytopic> -p isJSONData true

  11. On Demand Ingestion of Weather Data

    So far we have worked with current weather data represented in the form of fairly small JSON files that were ingested into the IBM Cloud and delegated onwards to IBM Event Streams on a timely basis using triggers. However, we can also use the Weather Data API to get information about, for example, lightning strikes worldwide, and in that case we may be dealing with information of a size of several MBs. Of course we could ingest that information at regular intervals as well (e.g. every minute), but that would only make sense if there was a business need for doing so. There may be situations where we would rather invoke the ingestion from the outside, on demand, using a REST call, rather than doing it automatically and periodically. In this section we shall show you how to achieve this using web actions.

    In this section we shall use a URL of the form above since it is more similar to the Weather Company Data API, where the API key is given directly in the URL. We shall furthermore use a URL that retrieves daily weather forecast information:


    The IBM Functions sequence will be invoked by an external REST call (a web action). The first step in the sequence, named ‘prepareURL’, will return as parameter the URL to be used to fetch the weather data, be it from the Weather Company API or from the Weather Company Data service on IBM Cloud. This URL will then be passed onwards to the second step, which will use the curl utility action of IBM Functions to submit a REST call that retrieves the weather data from the corresponding service:


    The last two steps are the same as in the sequence defined previously. We will therefore focus on creating the sequence, its trigger and the first two actions. All in all it will require the following steps:

    1. Create sequence.
    2. Add the initial curl function of the samples package to the sequence.
    3. Add the ‘prepareURL’ action as first action to the sequence with code that returns the URL to be used by the curl call.
    4. Test the sequence by invoking it.
    5. Create a web action for the sequence so that the sequence can be invoked by a URL.
    6. Test the URL.


    Go to IBM Functions and make sure that you are in the correct region. To create the sequence do the following:

    1. Select Actions and invoke the Create Action command.
    2. Select Sequence.
    3. Name the sequence ‘getForecast’, select ‘weatherdatademo’ as package and ‘Use Public’ for the action to be inserted. 
    4. Select Samples as the public package to use.
    5. Select curl as the action of the sequence.
    6. Click Create to create the sequence with the first action contained.


    Once the sequence is created, open the curl action and observe the code:


    The action requires as parameter the URL to be used for the REST request. This means that for a quick solution you can simply insert an action ‘prepareURL’ that will return a message with a payload parameter that contains the Weather Data URL.

    The sequence should look as follows:


    The code for the ‘prepareURL’ action should be as follows, using your own API key:

    function main(params) {
        return {
            payload : 'https://api.weather.com/v3/wx/forecast/daily/3day?geocode=33.74,-84.39&format=json&units=e&language=en-US&apiKey=<yourAPIKey>'
        };
    }

    Having completed the sequence, you can test the work by clicking the Invoke button for the sequence. This should return a JSON object with the relevant information for the URL. What is missing is to define a web action for the sequence so that the invocation can be triggered by a REST request. This requires the following steps:

    1. Select the sequence named ‘getForecast’.
    2. Select the Web Action button to the right of the sequence name:
    3. Select Enable as Web Action.
    4. Click the Copy icon to the right of the URL of the web action to get the URL for the new web action.


    To test the URL (in our case ‘https://eu-de.functions.cloud.ibm.com/api/v1/namespaces/einar.karlsen%40de.ibm.com_dev/actions/weatherdemo/getForecast’) simply paste it into a web browser and wait for the result to appear.

    It should be noted that the current solution using web actions would allow anyone to initiate the request. This can be avoided, e.g. by using API keys or by securing the web action as described in the online documentation ‘Creating Web Actions’. If you click the API-KEY link above you can obtain the API key for your namespace:
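    As a minimal illustration of the idea (our own sketch; the platform-supported mechanism described in ‘Creating Web Actions’ is the more robust option), a web action can reject calls that do not carry an expected secret header:

    ```javascript
    // Sketch: a manual guard inside a web action. Web actions receive the
    // HTTP headers in params.__ow_headers; the token value is a placeholder
    // that would in practice be bound as a (secret) default parameter.
    const EXPECTED_TOKEN = 'replace-with-a-secret';

    function main(params) {
        const headers = params.__ow_headers || {};
        if (headers['x-require-whisk-auth'] !== EXPECTED_TOKEN) {
            return { statusCode: 401, body: { error: 'unauthorized' } };
        }
        // ... proceed with the actual work of the action ...
        return { statusCode: 200, body: { ok: true } };
    }
    ```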


    We shall leave the discussion regarding protection of the web actions at that for now, since it goes beyond the scope of the current recipe.

  12. Conclusion

    In this recipe we have shown how to ingest weather data coming from the Weather Company using IBM Cloud services, covering 2 key steps: ingestion of weather data into the IBM Cloud using the Weather Company API and IBM Cloud Functions, and delegation of the captured weather data for further processing to IBM Event Streams. For this purpose we have used key features of IBM Functions (and thus Apache OpenWhisk) such as packages, sequences, actions, triggers and web actions. With IBM Functions we have shown how to produce messages for IBM Event Streams as well as how to consume messages, and how to test and monitor that the operations work correctly.

  13. Acknowledgement

    This recipe represents joint work between Julia Wiegel, Rene Meyer and Einar Karlsen. We would like to thank Paul Smith from the Weather Company for providing input regarding the Weather Company API and use of Node.js clients to ingest weather data using the API.
