Overview

Skill Level: Beginner

This recipe shows how to extract event data from Cloudant NoSQL DB, transform it into an aggregated form and finally load the aggregated data into a data warehouse using Apache Spark, Jupyter and Python under the hood of IBM Data Science Experience.

Step-by-step

  1. Introduction

    The recipe titled ‘Store Elevator device data and alerts in Cloudant NoSQL DB’ showed you how to store IoT device data in Cloudant NoSQL DB using e.g. the Historical Data Storage Extension. Another recipe titled ‘Create Data Warehouse for reporting on Elevator device data using IBM DB2 Warehouse on Cloud’ showed how you could create a data warehouse for reporting over IoT event data using IBM DB2 Warehouse on Cloud. A third recipe titled ‘Monitor Elevator events using IBM Cognos Analytics’ finalized the scenario by showing basic steps for building dashboards and visualizations over that data warehouse using IBM Cognos Analytics.

    The approach used to create the data warehouse did however represent a fast path for getting the data into the data warehouse, insofar as the IoT time series data stored in Cloudant NoSQL DB was simply synchronized one-to-one to the data warehouse using a built-in integration called Cloudant Analytics (Warehouse). The consequence was that the information in the data warehouse was kept at a very detailed level, namely as time series data generated every 5 seconds. Reporting, however, only required aggregated information such as the minimum, average and maximum motor temperature on a daily basis.

    In this recipe you will get to define an Extract-Transform-Load (ETL) job that will read information from the IBM Cloudant NoSQL database once a day and then transform the data calculating the minimum, average and maximum motor temperature for that day for each elevator and subsequently store it in a data warehouse table. The job will be defined using IBM Data Science Experience and its underlying technology such as Apache Spark, Python, Jupyter Notebooks and SQL.

    This recipe will take you through the following basic steps needed to create, test and schedule the ETL job:

    • Sign in to IBM Data Science Experience.
    • Get the credentials needed to connect to Cloudant NoSQL DB and IBM DB2 Warehouse on Cloud.
    • Create the target database table (and view) using IBM DB2 Warehouse on Cloud.
    • Create and test the Jupyter notebook that defines the ETL job using IBM Data Science Experience.
    • Schedule the ETL job using IBM Data Science Experience.

    Before you start, here is a short introduction to the architecture and rationale behind the solution.

  2. Architecture

    The main goal of this recipe is to define an ETL job that can transform time series data stored as hierarchically structured JSON documents in a Cloudant database to data in aggregated form stored in a relational data warehouse table in IBM DB2 Warehouse on Cloud as shown in the figure below:

    02.1-ETL-Job-Input-Output

    The aggregated form is more suitable for reporting on a daily basis and requires fewer compute and storage resources for the implementation than using the time series information directly.

    The first question that arises in such a context is: what kind of technology should we use to implement and schedule the ETL job? One option would be to use an enterprise ETL solution available on the Cloud, such as IBM DataStage on Cloud. Another option would be to use Apache Spark and its inherent support for clusters, in-memory parallel processing and map/reduce. In fact, there are several publications and presentations that advocate the use of Spark for that purpose (see e.g. Slideshare). The support in Spark for languages such as Python and Scala provides a solid technological foundation for data manipulation using functional programming paradigms such as higher-order functions, filters and maps.

    The next step is to select the toolset. We could have used a Spark service on the Cloud as the runtime environment, a local Integrated Development Environment for developing the solution, and a scheduler of free choice in order to run the job once a day (e.g. shortly after midnight). All three capabilities are, however, offered by IBM Data Science Experience with its underlying Spark cluster as runtime environment, its web-based editor for Jupyter notebooks, and a scheduler available out of the box. Since IBM Data Science Experience has already been used in a previous recipe, it comes as a natural choice for this recipe as well.

    IBM Data Science Experience supports the programming languages Python, Scala and R. Python and Scala could both do the job, with Scala having the advantage that it is strongly typed and efficiently compiled into Java bytecode. Nevertheless I have chosen to use Python for the simple reason that I am more familiar with that programming language. For Python, several frameworks are supported that could be used to extract, transform and load the data: Resilient Distributed Datasets (RDD), Spark Data Frames and Spark SQL. The decision has been taken to use Spark Data Frames since they provide the operations needed for the task at hand, such as adding new columns to hold computed information as well as operations for aggregating data and computing minimum, average, maximum and count figures.

    The resulting architecture behind the solution is outlined in the figure below:

    02.2-Architecture 

    The IBM Watson IoT Platform is key in ingesting and reacting to IoT event data, e.g. by storing the data in Cloudant NoSQL DB using the Historical Data Storage Extension of the IBM Watson IoT Platform. A Jupyter notebook, scheduled to run once a day using IBM Data Science Experience, will then extract the information from Cloudant, transform it and finally load it into the data warehouse in a table containing aggregated data. The aggregated information can then be used by reporting tools such as IBM Cognos Analytics. From Cloudant NoSQL DB the event data can also be synchronized one-to-one to the IBM DB2 data warehouse using the built-in integration called Cloudant Analytics (orange arrow). This reflects the fast path that was chosen previously. However, the goal is eventually to disable this integration in order to avoid having time series information stored in the data warehouse. This recipe is a first step in that direction.

    In the first implementation of the data warehouse a database view was defined that aggregated the time series information using SQL statements with GROUP BY, MAX, AVG and MIN.

    02.3-DB-Architecture

    Although it solved the task, the solution basically meant bloating the data warehouse with information not relevant for the current reporting tasks. Once the ETL job has been implemented, the view can simply be changed to take data directly from the new table containing the aggregated data, with no further aggregation needed at the SQL level. Such a solution will work very well with the Historical Data Storage Extension of the IBM Watson IoT Platform, in particular if the bucket interval is set to 1 day. Having run the ETL job, the bucket can be archived unless it is needed for other purposes. Beyond creating the Jupyter notebook, you will therefore also have to re-configure the Historical Data Storage Extension to use a bucket interval of 1 day as well as redefine the database view (see the recipe ‘Store Elevator device data and alerts in Cloudant NoSQL DB’ for instructions on how to configure the extension). Neither the Cognos data model nor the dashboards and visualizations should be impacted by this underlying change in the database.

  3. Getting Started with IBM Data Science Experience

    In this section you will register with IBM Data Science Experience, log into the tool and create a project. You can skip this section and log into IBM Data Science Experience directly if you have already gained access, e.g. as defined in the recipe ‘Analyze Elevator device data using IBM Data Science Experience’. If so, just reuse the project from that recipe.

    IBM Data Science Experience helps the data scientist community to learn, create, and collaborate. It allows the user community to organize analytic and data assets into projects so that you can share work with collaborators. It allows data scientists to analyze data using RStudio, Jupyter, and Python in a configured, collaborative environment that includes IBM value-adds, such as managed Spark.

    You can register with the IBM Data Science Experience by going through the following steps:

    1. Click the link Data Science Experience. This will open up a browser window for IBM Data Science Experience.
      03.1-Welcome
    2. Click Sign Up (or alternatively Sign Up for a Free Trial).
      03.2-Sign-Up
    3. Click the link Sign in with your IBM ID.
    4. Enter your IBM ID and click Continue.
      03.3-Login
    5. Enter your IBM Password and click Sign In.
      03.4-Passw0rd
    6. Once in IBM Data Science Experience, you will see the welcome page.
      03.5-Data-Science-Experience

    Projects allow you to orient a team of collaborators around a set of notebooks, data sets, articles, and analysis work streams. This makes it easier for organizations to re-use assets, and for teams to establish and enforce shared best practices while collaborating with each other and learning from the community.

    To create a project do the following:

    1. Click Create New Project (shown in the previous screenshot). This will open up the New Project dialog for defining the properties of the project.
      03.6-Create-Project
      Enter the properties of the Project as follows:
      Name as ‘iot-elevators’.
      Description as you like.
      Spark Service as DSX-Spark.
      Storage Type as Object Storage.
    2. Click Create
    3. You should now see the project in IBM Data Science Experience and will be ready to create notebooks.
      03.7-Project
  4. Retrieve Source and Target Database Credentials

    Before you can start working with the Jupyter notebook for the ETL job you will need to go through a couple of preparation steps related to the source and target database for the ETL job to ensure that you have the required information at hand such as the credentials and the database names. These steps are:

    1. Retrieve the credentials for the two services. This information will be used in the notebook to connect to the services.
    2. Retrieve the name of the IBM DB2 Warehouse on Cloud target schema.
    3. Retrieve the names of the Cloudant NoSQL DB source database(s).
    4. If required, change the configuration of the Historical Data Storage extension to generate database buckets for IoT event data once a day.

    To retrieve the credentials for the two database services involved, do the following first:

    1. In IBM Cloud, select the data lake application named e.g. elevator-datalake-<your name> that you created in one of the previous recipes.
      04.1-CF-Applications
    2. Click the Connections tab, then Click the View credentials button for the IBM DB2 Warehouse on Cloud service.
      04.2-Connections
    3. Copy the credentials for IBM DB2 Warehouse on Cloud and store them in a file on your local computer for later use. 
      04.3-Credentials
    4. Then close the window by clicking the X button.
    5. Repeat the steps to get the credentials for the Cloudant NoSQL DB.

     

    Next, open the IBM DB2 Warehouse on Cloud console to get hold of the schema name:

    1. Back in the Connections tab, select the IBM DB2 Warehouse on Cloud service to open up the dashboard for the service.
      04.4-dashDB-Service
    2. Click the OPEN button to open the IBM DB2 Warehouse on Cloud console.
    3. In the IBM DB2 Warehouse on Cloud console, click Explore to view the existing tables.
      04.5-dashDB-Databases
    4. Select the schema named after your dashDB user account, i.e. DASH<nnnn> (DASH6769 in the picture above).

     

    Note the relevant schema name for later use. Then do the following to get hold of the database names for Cloudant NoSQL DB:

    1. Back in the Connections tab, select the IBM Cloudant NoSQL DB service to open up the dashboard for the service.
    2. Click the Launch button to open the console.
    3. In the console, click Databases to view the existing databases.
      04.6-Cloudant-databases

    Note the names and patterns of the historic databases to be used. The important part is the prefix, in this case ‘iotp_ep8hc_elevator_history’. You will be able to test the ETL job with any of the databases created by the Historical Data Storage Extension, so select one now that has a sufficient number of documents (e.g. ‘iotp_ep8hc_elevator_history_2017-05’ in my case). The list above contains buckets that have been generated on a daily basis (marked in red) as well as buckets that have been generated for an entire month (e.g. ‘iotp_ep8hc_elevator_history_2017-07’). However, running the job automatically every night requires that you have configured the extension in the Watson IoT Platform to create buckets on a day-by-day basis as shown below:

           04.7-Historical-Data-Storage-Extension

    If this has not been done, please follow the instructions in the recipe ‘Store Elevator device data and alerts in Cloudant NoSQL DB’ to configure the extension appropriately.

  5. Create the Target Database and View

    Reporting will be done on the basis of filtered and aggregated views of the device data. In this section you will get hold of the SQL definitions from a GitHub repository and then run the SQL statements using the IBM DB2 Warehouse on Cloud console to create the target database table and the corresponding view.

    First, download the required SQL files from GitHub:

    1. Click the link “https://github.com/EinarKarlsen/iot-dsx-etl-job”.
      05.1-Github
    2. Download the 2 files in the GitHub repository to your local computer and unzip the zip file.
    3. Open the text file containing SQL statements with Wordpad (or another editor of your choice) and replace the schema name in the files (DASH6769) with the current schema name of your IBM DB2 Warehouse on Cloud instance.
    4. Save the modified file.

     

    The file “Create SQL Views and Tables.txt” contains one table definition and one view definition. The table DASH6769.ELEVATOR_EVENTS_AGGREGATED_BY_DAY holds the aggregated temperatures for each day:

    CREATE TABLE DASH6769.ELEVATOR_EVENTS_AGGREGATED_BY_DAY (
       DATE DATE NOT NULL,
       DEVICEID VARCHAR(64) NOT NULL,
       DEVICETYPE VARCHAR(64),
       MINMOTORTEMP DOUBLE,
       AVGMOTORTEMP DOUBLE,
       MAXMOTORTEMP DOUBLE
    );

    Next, create the table DASH6769.ELEVATOR_EVENTS_AGGREGATED_BY_DAY using the IBM DB2 Warehouse on Cloud console:

    1. Select the IBM DB2 Warehouse on Cloud console in your browser.
    2. Select the Run SQL button at the top of the console.
      05.2-Create-DashDB-Table
    3. Copy the definition of the table from the text file into the SQL Editor as shown above.
    4. Click Run All.
    5. Check that the SQL statement has succeeded as shown above.
    6. Copy the view definition into the SQL editor and click Run All again.
    7. If you click the Results tab you should observe that neither the table nor the view returns any data at the current point in time.

     

    If you need to delete either the table or the view use one of the following SQL statements for that purpose:

    DROP TABLE DASH6769.ELEVATOR_EVENTS_AGGREGATED_BY_DAY;
    DROP VIEW DASH6769.VW_ELEVATOR_EVENTS_BY_DAY;

     

  6. Create the Jupyter Notebook

    Jupyter notebooks provide a web-based environment for interactive computing. You can run small pieces of code that process your data and immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data: data, code, computations, visualizations, text and rich media to enhance understanding.

    In this section of the recipe you shall:

    1. Create a new notebook for Python programming using an existing notebook on GitHub as starting point.
    2. Update the notebook to use the database credentials and database names that you collected in section 4.
    3. Run the notebook one step at a time – interactively.
    4. Test the result of the notebook in the IBM DB2 Warehouse on Cloud console.

     

    The notebook will be created using Python as the programming language and the Spark Python API (PySpark) as the key API. The Spark Python API exposes the Spark programming model to Python. More information on the PySpark package can be found in the PySpark package documentation. In this recipe we shall use DataFrames and Spark SQL, which allows you to query structured data inside Spark programs.
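    To get a feel for how the two fit together, here is a minimal, self-contained sketch (not part of the imported notebook) that registers a Data Frame as a temporary view and queries it with Spark SQL; the data and column names are purely illustrative:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Illustrative Data Frame with a device identifier and a motor temperature
    events = spark.createDataFrame(
        [("elevator1", 102.5), ("elevator2", 98.3)],
        ["deviceId", "motorTemp"])

    # Register the Data Frame so that it can be queried with SQL inside the program
    events.createOrReplaceTempView("events")
    spark.sql("SELECT deviceId, motorTemp FROM events WHERE motorTemp > 100").show()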

    To create a notebook, do the following:

    1. Click add notebooks.

      06.1-Create-Notebook

    2. This will open the Create Notebook dialog for defining the properties of the notebook.
    3. Set the properties of the notebook as follows:
      06.2-Create-Notebook
      Name: ‘elevator-data-aggregation-etl’.
      Description: whatever you like.
      Language: Python 2.
    4. Select the tab named From URL. You will need to import a notebook from GitHub to be used as the starting point.
      Click the following link for GitHub project iot-dsx-etl-job, copy the URL from the browser and insert it into the Notebook URL field as shown above.
    5. Click Create Notebook. This will create a notebook for you looking like this:

      06.3-Notebook-Heading

    The toolbar at the top provides commands for sharing (e.g. on Github), scheduling, versioning, collaborating and finding resources. The Kernel menu provides commands for e.g. restarting the underlying kernel runtime system. When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language you select. This kernel executes code that you send and returns the computational results. The kernel remains active even if the web browser window is closed. Reopening the same notebook from Data Science Experience will connect the web application to the same kernel.

    There are menus as well for handling interaction with the file system (saving, printing, downloading) and for editing the cells of the notebook. Each cell has a specific format: it can be a header, markdown text or a piece of code that can be executed. The format can be controlled in the Format selection box. To run the code you use the Run Cell command – that is, the triangle surrounded by a circle in the screenshot above.

    The notebook itself is divided into several sections as documented in the screenshot above. We shall look into the common declarations first:

    1. Scroll down to the section:

      06.4-Common-Declarations

    2. Select the cell containing Python code.
    3. Invoke Run Cell by clicking 06.5-Run-Cell in the toolbar.

     

    The cell containing Python code initializes the Spark session and declares a function for extracting information from a Cloudant NoSQL DB. It furthermore initializes a debug variable that controls whether intermediate results of the notebook are printed (debug=1) or not (debug=0).
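    The exact code is part of the imported notebook; as a rough sketch (assuming the Apache Bahir sql-cloudant connector is available to the Spark service, and with an illustrative function name), the cell could look roughly as follows:

    from pyspark.sql import SparkSession

    # Re-use (or create) the Spark session provided by the notebook kernel
    spark = SparkSession.builder.getOrCreate()

    # Controls whether intermediate results are printed (1) or not (0)
    debug = 1

    def readDataFrameFromCloudant(host, username, password, database):
        """Load one Cloudant database (one bucket) into a Spark Data Frame."""
        return spark.read.format("org.apache.bahir.cloudant") \
            .option("cloudant.host", host) \
            .option("cloudant.username", username) \
            .option("cloudant.password", password) \
            .load(database)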

    Next we will have a look at the section defining the database credentials:

    1. Scroll down to the section:

      06.6-Database-Credentials

    2. Enter the database credentials that you noted in section 4 as shown above.
    3. Also set the variable bucket_prefix to the prefix for your buckets.
    4. Select the cell containing the Python code.
    5. Invoke Run Cell.
    6. Wait for the output to appear.
    7. Save the notebook by invoking File > Save.

     

    Beyond declaring the credentials needed for obtaining access to the databases, this section also defines a variable named yesterdays_bucket_name. This variable holds the name of the Cloudant database containing yesterday’s event data, which is the database name to use for a scheduled ETL job. For now you can just enter a hardcoded database name as shown above.
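    As a sketch, the cell might contain assignments along the following lines; apart from bucket_prefix and yesterdays_bucket_name, which are named in the notebook, all variable names and values below are illustrative and must be replaced with the credentials and prefix you noted in section 4:

    # Illustrative values only - use the credentials you noted down in section 4
    cloudant_host     = "<account>.cloudant.com"
    cloudant_username = "<username>"
    cloudant_password = "<password>"

    dashdb_jdbc_url   = "jdbc:db2://<host>:50000/BLUDB"
    dashdb_username   = "dash<nnnn>"
    dashdb_password   = "<password>"

    # Prefix of the buckets created by the Historical Data Storage Extension
    bucket_prefix = "iotp_ep8hc_elevator_history"

    # For interactive testing simply hardcode an existing bucket; section 7 shows
    # how this name can be derived automatically from the current date
    yesterdays_bucket_name = bucket_prefix + "_2017-05"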

    Next we will have a look at the section extracting data from Cloudant NoSQL DB:

    1. Scroll down to the section:

      06.7-Read-Event-Data-From-Cloudant-1

    2. Select the cell containing the Python code.
    3. Invoke Run Cell.
    4. Wait for the output to appear. It should be similar to the output above.

     

    What is returned from this cell is a Data Frame containing a sequence of raw event data reflecting the hierarchical structure of the JSON documents in Cloudant. However, of all the event data stored in the column named data, we are only interested in motor temperatures so it makes sense to compute a new Data Frame containing only a selection of the columns plus a few derived ones such as the date. The date in turn can be computed by taking the first 10 characters of the timestamp:

    1. Scroll down to the next cell containing Python code:
      06.8-Column-Selection
    2. Select the cell containing the Python code.
    3. Invoke Run Cell.
    4. Wait for the output to appear.

     

    For a complete list of the operations that are available on Data Frames, please refer to the documentation of the pyspark.sql module.
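    As a hedged sketch of this transformation step (with raw_events standing for the Data Frame returned by the extraction cell, and column names assumed from the documents written by the extension), the selection and date derivation could be expressed as:

    from pyspark.sql.functions import col

    # Keep only the columns of interest and derive the date from the timestamp
    selected_events = raw_events.select(
            col("deviceId"),
            col("deviceType"),
            col("timestamp"),
            col("data.motorTemp").alias("motorTemp")) \
        .withColumn("date", col("timestamp").substr(1, 10))

    if debug == 1:
        selected_events.show(5)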

    The data is now in a form where it is ready for aggregation:

    1. Scroll down to the section:

      06.9-Aggregate-Event-Data

    2. Select the cell containing the Python code.
    3. Invoke Run Cell.
    4. Wait for the output to appear.
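    A minimal sketch of the aggregation step (again using the illustrative Data Frame names from above) could look as follows; note that Spark’s default aggregate column names such as "min(motorTemp)" are the ones referenced later by the MERGE statement:

    from pyspark.sql import functions as F

    # Compute the minimum, average and maximum motor temperature per day and device
    aggregated_events = selected_events.groupBy("date", "deviceId", "deviceType") \
        .agg(F.min("motorTemp"), F.avg("motorTemp"), F.max("motorTemp"))

    if debug == 1:
        aggregated_events.show(5)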

     

    The data has now been extracted and transformed and is ready to be loaded into the data warehouse. We will do this in two steps: first the data is loaded into a staging area, and then in a second step it is merged from the staging area into the data warehouse table. This represents a typical solution pattern in the design of analytics applications.

    To save the data to the staging area execute the following piece of code of the notebook:

    1. Scroll down to the section:

      06.10-Save-Data-to-Data-Warehouse

    2. Select the cell containing the Python code and invoke Run Cell.
    3. Wait for the spark job to finish.
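    As an illustration of what such a save operation can look like, the following sketch writes the aggregated Data Frame to the staging table using Spark’s generic JDBC data source; the imported notebook may use a different, IBM-specific connector to achieve the same result, and the dashdb_* variables are the illustrative names introduced earlier:

    # Overwrite the staging table with the aggregated data
    aggregated_events.write \
        .format("jdbc") \
        .option("url", dashdb_jdbc_url) \
        .option("dbtable", "TEMP_DATAFRAME_AGGR_EVENT_DATA") \
        .option("user", dashdb_username) \
        .option("password", dashdb_password) \
        .option("driver", "com.ibm.db2.jcc.DB2Driver") \
        .mode("overwrite") \
        .save()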

     

    The aggregated data has now been stored in a temporary table named TEMP_DATAFRAME_AGGR_EVENT_DATA. From there it needs to be copied to the table named ELEVATOR_EVENTS_AGGREGATED_BY_DAY, which holds all aggregated data from previous runs of the ETL job. This task is achieved by executing an SQL statement that merges the records in the staging area into the data warehouse table:

    MERGE INTO ELEVATOR_EVENTS_AGGREGATED_BY_DAY t
    USING (
        SELECT
           s."deviceId" AS DEVICEID,
           s."deviceType" AS DEVICETYPE,
           DATE(s."date") AS DATE,
           ROUND("min(motorTemp)",0) AS MINMOTORTEMP,
           ROUND("avg(motorTemp)",0) AS AVGMOTORTEMP,
           ROUND("max(motorTemp)",0) AS MAXMOTORTEMP
        FROM TEMP_DATAFRAME_AGGR_EVENT_DATA s
    ) e
    ON (e.DATE = t.DATE AND e.DEVICEID = t.DEVICEID)
    WHEN MATCHED THEN
        UPDATE SET t.MINMOTORTEMP = e.MINMOTORTEMP, t.AVGMOTORTEMP = e.AVGMOTORTEMP, t.MAXMOTORTEMP = e.MAXMOTORTEMP
    WHEN NOT MATCHED THEN
        INSERT (DATE, DEVICEID, DEVICETYPE, MINMOTORTEMP, AVGMOTORTEMP, MAXMOTORTEMP)
        VALUES (e.DATE, e.DEVICEID, e.DEVICETYPE, e.MINMOTORTEMP, e.AVGMOTORTEMP, e.MAXMOTORTEMP);

    The solution has the advantage in this context that it makes the ETL job idempotent: even if run several times it will produce the same result by updating existing records in the target database.

    In this notebook we have used the ibmdbpy library to create a connection to IBM DB2 Warehouse on Cloud, as described in the notebook titled ‘Sample Python Notebook – Running SQL against IBM DB2 Warehouse on Cloud‘. Having established the connection to IBM DB2 Warehouse on Cloud, the SQL statement is submitted, which merges the content of the table TEMP_DATAFRAME_AGGR_EVENT_DATA into the table ELEVATOR_EVENTS_AGGREGATED_BY_DAY as defined above.
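    As an alternative sketch of this step (using the ibm_db driver rather than the ibmdbpy approach of the imported notebook, and with placeholder connection values), submitting the MERGE statement from Python could look as follows:

    import ibm_db

    # Placeholder connection string - fill in the credentials from section 4
    dsn = ("DATABASE=BLUDB;HOSTNAME=<host>;PORT=50000;PROTOCOL=TCPIP;"
           "UID=" + dashdb_username + ";PWD=" + dashdb_password + ";")
    conn = ibm_db.connect(dsn, "", "")

    # The MERGE statement listed above, without the trailing semicolon
    merge_statement = """MERGE INTO ELEVATOR_EVENTS_AGGREGATED_BY_DAY t
    USING (
        SELECT
           s."deviceId" AS DEVICEID,
           s."deviceType" AS DEVICETYPE,
           DATE(s."date") AS DATE,
           ROUND("min(motorTemp)",0) AS MINMOTORTEMP,
           ROUND("avg(motorTemp)",0) AS AVGMOTORTEMP,
           ROUND("max(motorTemp)",0) AS MAXMOTORTEMP
        FROM TEMP_DATAFRAME_AGGR_EVENT_DATA s
    ) e
    ON (e.DATE = t.DATE AND e.DEVICEID = t.DEVICEID)
    WHEN MATCHED THEN
        UPDATE SET t.MINMOTORTEMP = e.MINMOTORTEMP,
                   t.AVGMOTORTEMP = e.AVGMOTORTEMP,
                   t.MAXMOTORTEMP = e.MAXMOTORTEMP
    WHEN NOT MATCHED THEN
        INSERT (DATE, DEVICEID, DEVICETYPE, MINMOTORTEMP, AVGMOTORTEMP, MAXMOTORTEMP)
        VALUES (e.DATE, e.DEVICEID, e.DEVICETYPE, e.MINMOTORTEMP, e.AVGMOTORTEMP, e.MAXMOTORTEMP)"""

    ibm_db.exec_immediate(conn, merge_statement)
    ibm_db.close(conn)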

    To view and execute the code do the following:

    1. Scroll down to the section:

      06.11-Move-Data-to-Target-Table

    2. Select the first cell containing the Python code and invoke Run Cell.
    3. Select the second cell containing the Python code and invoke Run Cell again.

     

    Notice the following: unlike what is described in the notebook ‘Sample Python Notebook – Running SQL against IBM DB2 Warehouse on Cloud‘, it is no longer necessary to import pixiedust and declare a custom JDBC driver using Scala code. The code is therefore significantly simplified.

    To test the complete execution of the notebook invoke Kernel > Restart and Run All from the menu.

    To view the content in the resulting databases, go back to the IBM DB2 Warehouse on Cloud console and do the following:

    1. Select the Explore tab.
      06.12-Explorer-Data-Warehouse
    2. Select the Schema and the database Table as shown above.
    3. Click View Data.
      06.13-ViewData-Warehouse-Data
    4. Once you are finished viewing the data, click Back in the upper left corner.

     

  7. Schedule the Notebook

    ETL jobs are usually scheduled to run on a regular basis. How often is dictated by the reporting requirements and by how strong the need is to have current data available. In order to schedule the notebook to run once a night you will need to go through the following steps:

    1. Copy and rename the notebook just defined.
    2. Change the notebook to automatically compute the name of the Cloudant database so that the bucket from the last day is always used.
    3. Schedule the execution of the notebook.

     

    To create a copy of the notebook from the previous section, simply do the following:

    1. In IBM Data Science Experience, navigate back to the project view by clicking the name of the project (‘iot-elevators’) in the upper left corner of the screen.
      07.1-Duplicate-notebook
    2. Duplicate the notebook by invoking the Duplicate command from the popup menu as shown above.
    3. Open the editor for the new notebook in editing mode by clicking the Pencil button.
    4. In the toolbar click the button named Properties.
      07.2-Rename-notebook
    5. Provide a new name of the notebook as shown above.
    6. Select the “Trust this Notebook” checkbox at the bottom of the dialog.
    7. Close the dialog by clicking the X in the upper left corner of the dialog.
    8. Save the notebook by invoking File > Save.

     

    To change the notebook to use the Cloudant database from yesterday do the following:

    • Add the following 3 lines of code to the cell that defines the credentials, as shown below (a sketch of these lines is given after this list):
      07.3-Cloudant-Database-1
    • The first statement disables the printout of intermediate results; the others change the name of the Cloudant database to yesterday’s bucket.
    • Save the notebook by invoking File > Save.
    • Test the change by invoking Run Cell (you may temporarily set debug to 1 to do that)!
    • Check that the name of the Cloudant database is in fact yesterday’s bucket name.
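    A possible sketch of these lines, assuming daily bucket names of the form <prefix>_YYYY-MM-DD as created by the Historical Data Storage Extension with a bucket interval of 1 day, is:

    from datetime import date, timedelta

    # Disable printout of intermediate results for the scheduled run
    debug = 0

    # Derive yesterday's bucket name from the prefix and the current date
    yesterday = date.today() - timedelta(days=1)
    yesterdays_bucket_name = bucket_prefix + "_" + yesterday.strftime("%Y-%m-%d")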

     

    You can now schedule the notebook:

    1. Invoke Schedule from the toolbar.
      07.4-Toolbar
    2. Fill in the details as shown below:
      07.5-Schedule-Job
    3. Save the changes by clicking the Schedule button in the lower right corner of the dialog.

    The job is now scheduled to run, although in the current example only for a short period of time.

  8. Conclusion

    In this recipe we have taken on the task of creating an ETL job that extracts information from a Cloudant NoSQL database, transforms the information by aggregating motor temperatures on a day-by-day basis, and then stores the information for subsequent reporting in an IBM DB2 Warehouse on Cloud database. IBM Data Science Experience and its underlying technology consisting of Apache Spark, Jupyter notebooks, Python and SQL was used to achieve this task. IBM Data Science Experience was also used to schedule the ETL job to run once a night.

    In a real production environment one would be more likely to use another scheduler that allows chaining a larger set of ETL jobs to be executed in the right order, e.g. using diagrams such as flowcharts or activity diagrams. Moreover, no effort has been made in the presented approach to optimize execution in the small, through more efficient programming (e.g. using Scala), or in the large, by running the notebook on a Spark cluster with large data sets as input.

    This recipe was one of the first steps towards optimizing the implementation of the data warehouse used for monitoring elevators as defined in the recipe ‘Monitor Elevator events using IBM Cognos Analytics‘. In the recipe ‘Processing Elevator device data using the Watson IoT Platform and Node.js Consumers‘ we will then show how to perform near real-time analytics on the current state of the elevators by creating a Node.js consumer that reads elevator event data from the Watson IoT Platform and saves the current status to a dedicated database table in the data warehouse. When this is done, we can disable the synchronization between Cloudant NoSQL DB and IBM DB2 Warehouse on Cloud and avoid cluttering up the data warehouse with time series information.

  9. Acknowledgement

    I would like to thank Torsten Steinbach for providing valuable input, links, and tips and tricks relevant for storing information in IBM DB2 Warehouse on Cloud.
