Overview

Skill Level: Intermediate

Many SMF records that are not supported by IZOA with dashboards and insights on the supported analytics platforms are still supported by CDPz. But some records are not covered by either product. This recipe shows what you can do about them on the Elastic Stack side.

Ingredients

IBM Z Operations Analytics (IZOA) supports quite a number of log records produced on the z/OS platform, with dashboards and problem insights on the IBM Operations Analytics Log Analysis, Splunk, and Elastic Stack analytics platforms. IZOA uses the Common Data Provider for z Systems (CDPz) capability to send log data to the target analytics platform. CDPz can be a standalone product or part of IZOA.

Many SMF records that are not supported by IZOA with dashboards and insights are nevertheless supported by CDPz. If you want to send those records to your target platform, it is your responsibility to create meaningful dashboards, searches, and insights there. But at least IZOA and CDPz offer the capability to send all those records to your analytics platform.

Even though more and more SMF records are supported with definitions by CDPz, there are still some that are not. If you don't want to wait for those records to become part of CDPz or IZOA, you can add them yourself. Here is the recipe for doing it on the Elastic platform.

You need a supported z/OS operating system and one of the following two products:

  • IBM Z Operations Analytics
  • IBM Common Data Provider

You also need Elastic Stack components installed and started on any supported platform:

  • Logstash
  • Elasticsearch
  • Kibana

 

The minimum Elastic Stack version supported by the IZOA Insights Pack is 6.1.

We want to be able to receive SMF record SMF_042_9, which is supported by neither IZOA nor CDPz.

Let's assume you created the SMF RECORD, UPDATE, and custom data stream definitions as explained in Part 1.

Step-by-step

  1. Add the definitions to Logstash

    You must add the SMF definition on the Elastic Stack side. The best way to do that is to take the closest existing definition and make the required modifications.

    On the z/OS system where IZOA or CDPz is installed, you can find all SMF definitions supported by CDPz. The default location is:

    /usr/lpp/IBM/cdpz/v1r1m0/DS/LIB/ibm_cdpz_ELK.tar.gz

    Download ibm_cdpz_ELK.tar.gz from your z/OS system, then copy and extract the file on the server where your Logstash is running. When you extract it, you get the folder cdp-config with all the definitions needed for the Elastic Stack to process every CDPz-supported record. If you are using IZOA dashboards and insights, you simply take what you need from that folder.

     

    cdp-config folder

     

    So, copy the pair of files (H_* and N_*) with the closest definition to your zoa-config folder.

    For example, if you want to send SMF_042_9, copy H_SMF_042_6.lsf and N_SMF_042_6.lsf and rename them to H_SMF_042_9.conf and N_SMF_042_9.conf.

    Change the following line in both files (note the lower case in the new value):

       if [sourceType] == "zOS-SMF_042_6" { 

    to

       if [sourceType] == "zos-smf_042_9" {

     

    and put the appropriate fields, as you defined them in your custom definition data set, into the H_SMF_042_9.conf file:

     
    # Field Annotation for stream zOS-SMF_042_9

    filter {
      if [sourceType] == "zos-smf_042_9" {
        csv {
          columns => [ "Correlator", "SMF42TME", "SMF42DTE", "SMF42SID",
                       "SMF42SSI", "S42ASYID", "S42JOBN", "S42AAUID",
                       "S42ASTPN", "S42FLAGS", "S42DSNME", "S42VOLSR",
                       "S42NEXT", "S42MCNME", "S42SCNME", "S42DCNME" ]
          separator => ","
        }
      }
    }

     

    If you enhanced your definitions on the z/OS side, you can now create a file with the enhanced definitions in the zoa-config folder on your Logstash server, for example H_SMF_042_9.conf:


    # Field Annotation for stream zOS-SMF_042_9

    filter {
      if [sourceType] == "zos-smf_042_9" {
        csv {
          columns => [ "UPDATE_NAME", "UPDATE_VERSION", "TIMESTAMP",
                       "MVS_SYSTEM_ID", "JOB_NAME", "USER_ID", "STEP_NUMBER",
                       "ABEND", "DATASET_NAME", "VOLSER", "NUMBER_OF_EXTENTS",
                       "MANAGEMENT_CLASS", "STORAGE_CLASS", "DATA_CLASS" ]
          separator => ","
        }
      }
    }

     

  2. Restart or refresh Logstash

    Restart or refresh Logstash and check whether you can see your records in Kibana:

    Record SMF_042_9 in Kibana

    or:

    SMF_042_9 in Kibana with enhanced definitions
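
    If you prefer not to restart Logstash for every definition change, you can enable its configuration reload feature in logstash.yml (this is a standard Logstash setting):

    # logstash.yml: reload changed pipeline configuration files automatically
    config.reload.automatic: true

    With this enabled, Logstash picks up your new H_* and N_* files without a restart, provided they are inside the configured pipeline path.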

    You can see a small yellow triangle before the values of most of the fields. That means these fields are not indexed in Elasticsearch.

  3. Create index in Elasticsearch

    Delete the index if previously created

    If you collected any data before the fields were indexed, delete the index first. Go to Management / Elasticsearch / Index Management:

    Index Management

    For each index you want to delete, click on Manage (bottom, right) and then Delete Index.

    All records related to that index will be deleted too.
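
    Alternatively, you can delete the index with a Dev Tools request; a minimal example, assuming your indices follow the zoa-zos-smf_042_9-* naming pattern used in the template below:

    DELETE zoa-zos-smf_042_9-*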

     

    Create index template

     

    PUT _template/zos-smf_042_9
    {
      "index_patterns": "zoa-zos-smf_042_9-*",
      "settings": {
        "index.refresh_interval": "1s"
      },
      "mappings": {
        "doc": {
          "dynamic": false,
          "properties": {
            "TIMESTAMP":         { "type": "keyword" },
            "JOB_NAME":          { "type": "keyword" },
            "MVS_SYSTEM_ID":     { "type": "keyword" },
            "USER_ID":           { "type": "keyword" },
            "STEP_NUMBER":       { "type": "keyword" },
            "ABEND":             { "type": "keyword" },
            "DATASET_NAME":      { "type": "keyword" },
            "VOLSER":            { "type": "keyword" },
            "NUMBER_OF_EXTENTS": { "type": "keyword" },
            "MANAGEMENT_CLASS":  { "type": "keyword" },
            "STORAGE_CLASS":     { "type": "keyword" },
            "DATA_CLASS":        { "type": "keyword" }
          }
        }
      }
    }

     

    Execute that in the Dev Tools console of the Kibana user interface:

    Index template in Dev Tools
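
    You can then verify that the template was stored with another Dev Tools request:

    GET _template/zos-smf_042_9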

     

     

    Refresh field list
     

    On Management / Kibana / Index Patterns click on Refresh field list.

    Dev Tools

     

    Now your data looks like this in the Kibana interface:

    Indexed SMF_042_9 record in Kibana

    Now you can search on the indexed data.
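
    For example, a Kibana search bar query over the newly indexed fields might look like this (the SYS1 value is only an illustration):

    MVS_SYSTEM_ID: "SYS1" AND ABEND: *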
