Overview

Skill Level: Intermediate

This article describes how to implement the Shadower to send IZDS or IZPCA data to IZCDP. The Shadower reads the data directly from the Db2 database and sends it to the IZCDP Data Streamer, which, according to the policy, sends it off the platform (for example, to ELK or Splunk).

Ingredients

There are different ways to send data from IBM Z Decision Support or IBM Z Performance and Capacity Analytics off the z/OS platform, but I will describe how to send the data from their Db2 database (near real-time or historical) through IBM Z Common Data Provider to the Elasticsearch platform.

The first part of the article covers the work that needs to be done on z/OS, and the second part covers the work that needs to be done on the Elasticsearch platform.

I will use the following acronyms throughout these articles:

  • IZPCA – IBM Z Performance and Capacity Analytics (or IBM Z Decision Support)
  • IZCDP – IBM Z Common Data Provider
  • IZDS – IBM Z Decision Support
  • ELK – Elasticsearch Platform (Elasticsearch, Logstash and Kibana)

Overview

The Shadower reads the data from the Db2 database using a JDBC driver. That data can be historical or near real-time. The Shadower then sends the data to the Data Streamer, which receives the data according to the policy and sends it on to the designated subscriber, in this case ELK.

You need a supported z/OS operating system and:

  • IBM Z Performance and Capacity Analytics 3.1 or IBM Z Decision Support 1.9
  • IBM Z Common Data Provider 2.1 or IBM Z Operations Analytics 4.1

You also need the Elastic Stack 7.8.1 components installed and started on any supported platform:

  • Logstash
  • Elasticsearch
  • Kibana

Always apply the newest maintenance to IZPCA or IZDS.

You can check Part 2, where I describe how to implement the reporting in ELK.

Step-by-step

  1. Generate stream definitions and table mappings from Db2 database

    The sample job DRLJCDPS in the SDRLDEFS library generates the stream definitions and the JSON table maps. The job uses several tools:

    • DRLELSTT – table list generation utility 
    • DRLEMCDS – IZCDP stream definition generation
    • DRLEMTJS – JSON table map generation


    When you customize and submit the job, the results are written to the folder that you provided, for example /u/izpca/mapping-util, which contains the following files:

    • table_name.json – for each installed table
    • izds.streams.json – IZCDP stream definition

    After applying maintenance or any customization to the IZDS/IZPCA tables, you must rerun this step.

  2. Configuring IZCDP for data streaming

    The izds.streams.json file is required by IBM Z Common Data Provider. It must be copied into the IZCDP configuration folder, for example /u/zcdpConfig/cdpConfig, so that the IBM Z Common Data Provider user interface can use it.

    Create a symbolic link from the file that was generated in the previous step to the cdpConfig folder:

    # ln -sf /u/izpca/mapping-util/izds.streams.json /u/zcdpConfig/cdpConfig/izpca.streams.json

    Instead of creating a symbolic link, you can copy the file, but then you must repeat this step each time the file is updated:

    # cp /u/izpca/mapping-util/izds.streams.json /u/zcdpConfig/cdpConfig
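    The difference between linking and copying can be shown with a small self-contained sketch (the /tmp paths and JSON contents below are purely illustrative, not real IZPCA output):

```shell
#!/bin/sh
# Stand-ins for the mapping-util and cdpConfig directories (illustrative paths)
mkdir -p /tmp/mapping-util /tmp/cdpConfig
printf '{"version":1}' > /tmp/mapping-util/izds.streams.json

# Link once; later regenerations of the file are picked up automatically
ln -sf /tmp/mapping-util/izds.streams.json /tmp/cdpConfig/izpca.streams.json

# Simulate rerunning the generation job from step 1
printf '{"version":2}' > /tmp/mapping-util/izds.streams.json

# The linked file reflects the regenerated content without any further copying
cat /tmp/cdpConfig/izpca.streams.json
```

    With a plain cp instead of ln -sf, the final cat would still show the version 1 content until you repeated the copy.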

    After that, close the IZCDP policy and open it again.

  3. Update IZCDP policy to stream the data

    After opening the policy, click the Data Stream button:

    [Image: DataStreamAdd]

    Then select all the streams from the IZDS tables collection that you want to stream.

    [Image: IZDStables]

    Select only the tables/streams that are installed with the components in IZPCA/IZDS.

    Create ELK subscriber:

    [Image: SubscriberELK]

    As the Protocol, choose IZOA on Elasticsearch via Logstash, and enter the hostname and port where Logstash is installed and listening for IZPCA data.

    Subscribe to all the streams that you want:

    [Image: SubscriberStreams]

    After clicking “Update subscriptions”, you will see a result similar to this one:

    [Image: ZCDP]

    Now you can save the policy and, when you are ready to receive the data, restart the IBM Z Common Data Provider address spaces.

  4. Setting up Db2 Shadowing

    In this step, the Forecaster runs a Shadower that actively polls Db2 for updates to tables and views using JDBC.

    You will create a new Forecaster that polls the Db2 data, transforms it into JSON, and sends it to the receiver.

    Unpack the Forecaster module

    If you did not do this previously, unpack a new Forecaster:

    # tar -xvof /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJFC -C /u/izpca/


    If you are already using the Forecaster in production and this is a newer version, unpack it into a temporary directory and then copy the appropriate files to the production directory:

    # tar -xvof /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJFC -C /u/izpca/temp

    After you apply maintenance, copy the changed files to your production working directory, /u/izpca/Forecaster/.


    Edit Shadower.properties

    Edit the extracted Shadower.properties or ShadowerRange.properties file in the /u/izpca/Forecaster/config directory.


    input.1.type = SHADOW
    input.1.interval = 15
    input.1.protocol = db2
    input.1.connection = localhost:5050
    input.1.location = DBCG
    input.1.access = secure/db2access.properties
    input.1.schema = DRL
    #
    ...
    # Each table mentioned must be installed
    input.1.tables = 132
    #
    ...
    #
    input.1.table.132.name = KPM_EXCEPTION_T
    input.1.table.132.date = DATE
    input.1.table.132.time = TIME
    input.1.table.132.frequency = TIMESTAMP
    #input.1.table.132.initial = RANGE
    #input.1.table.132.range.from = 2020-06-01 00.00.00
    #input.1.table.132.range.to = 2020-11-01 00.00.00
    input.1.table.132.source = MVS_SYSTEM_ID
    #
    input.1.sources = 9
    input.1.source.1 = S0W1
    input.1.source.2 = S0W2
    input.1.source.3 = S0W3
    ...
    input.1.source.9 = S0W9
    #
    ...
    input.1.check_dir = shadow
    #input.1.check_dir = shadowRange
    ...
    # Send to CDPz
    #
    outputs.1 = 1
    #
    # Over TCPIP
    #
    output.1.1.type = TCPIP
    output.1.1.host = localhost
    output.1.1.port = 51401
    output.1.1.directory = bufferR2
    #

    Run the table mapping utility

    You already ran the table mapping utility in step 1, so just copy the files from that run:

    # cp /u/izpca/mapping-util/*.json /u/izpca/Forecaster/mappings/


    Create a Started Task for Shadowing

    User ID configuration

    Create a user ID that can only access the Db2 database that is used by IZPCA. Ensure that the ID has read/write access.

    Edit the izpca/Forecaster/secure/db2access.properties sample to specify the user ID and password that you created.

    Restrict access to the izpca/Forecaster/secure subdirectory to only the following users:

    • The administrator who manages the user ID and password for the Db2 access must have read/write access.
    • The batch user ID that runs the forecasting must have read-only access.

    When the password is changed, only the izpca/Forecaster/secure/db2access.properties file needs to be updated.
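    As a sketch of that permission setup using plain UNIX file modes (the path is a stand-in and the owner/group mapping is an assumption; your site may use RACF profiles or ACLs instead):

```shell
#!/bin/sh
# Stand-in for the Forecaster secure subdirectory (illustrative path)
mkdir -p /tmp/Forecaster/secure
touch /tmp/Forecaster/secure/db2access.properties

# Owner (the administrator) gets read/write/search on the directory,
# the group containing the batch user ID gets read/search, others get nothing
chmod 750 /tmp/Forecaster/secure

# Administrator read/write, batch group read-only, no access for others
chmod 640 /tmp/Forecaster/secure/db2access.properties
```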


    Forecaster.sh script

    Edit the Forecaster.sh shell script and update the values for the following variables to match your site:

    • rundir – The directory that contains the Forecaster.sh script.
      • rundir="/u/izpca/Forecaster"
    • logfile – The log file that sets the message output properties.
    • config – This maps the input parameter into a config file.
    • db2jdbc – The directory that contains the JDBC .jar files for the Db2 that it is running against.
      • db2jdbc="/usr/lpp/db2c10/jdbc/classes"
    • JAVA_HOME – The directory where you have an instance of the IBM Java SDK 8 installed.
      • JAVA_HOME=/usr/lpp/java/J8.0_64
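    Taken together, the edited variables in Forecaster.sh might look like this sketch (the logfile and config values are assumptions based on the descriptions above; adjust all paths to your site):

```shell
#!/bin/sh
# Site-specific settings in Forecaster.sh (illustrative values)
rundir="/u/izpca/Forecaster"                          # directory containing Forecaster.sh
logfile="${rundir}/logging.properties"                # assumed name; sets the message output properties
config="${rundir}/config/${1:-Shadower}.properties"   # maps the input parameter to a config file
db2jdbc="/usr/lpp/db2c10/jdbc/classes"                # JDBC .jar files for the target Db2
JAVA_HOME="/usr/lpp/java/J8.0_64"                     # IBM Java SDK 8 instance
export JAVA_HOME
```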


    Create a new started task for Shadower

    Use the DRLJFCP sample to produce a JCL procedure that runs the Forecaster module as a job.


    //DRLJFCP PROC CONFIG='Shadower'
    //*
    //* Set to the directory you installed the Forecaster into
    //*
    // SET WDIR='/u/izpca/Forecaster'
    //*
    //RUNFC EXEC PGM=BPXBATSL,REGION=0M,TIME=NOLIMIT,
    // PARM=('PGM /bin/sh &WDIR./Forecaster.sh &CONFIG')
    //* Output data sets
    //STDOUT DD SYSOUT=*
    //STDERR DD SYSOUT=*
    //*
    // PEND

    Once started, the Shadower Forecaster reads data from Db2, converts it into JSON, and sends it to IBM Z Common Data Provider for distribution.

    Depending on the configuration in the Shadower.properties file, you can send near real-time or recent data (if the continuous collector is implemented), or historical data. Instructions are contained inside the Shadower.properties and ShadowerRange.properties files.

  5. Next steps

    Now you can check Part 2, where I describe how to implement the reporting in ELK.
