Overview

Skill Level: Intermediate

Many SMF records that IZOA does not support with dashboards and insights on its analytics platforms are still covered by ZCDP. But some records are supported by neither product. This recipe shows what you can do about such records on the Elastic Stack side, and how.

Ingredients

 

IBM Z Operations Analytics (IZOA) supports quite a few log records produced on the z/OS platform with dashboards and problem insights on the IBM Operations Analytics Log Analysis, Splunk, and Elasticsearch analytics platforms. IZOA uses the IBM Z Common Data Provider (ZCDP) capability to send log data to the target analytics platform. ZCDP can be a standalone product or a part of IZOA.

There are SMF records that are not supported by IZOA, which means that out-of-the-box dashboards and insights are not available for them. Because ZCDP is also available standalone, many records not covered by IZOA are covered by ZCDP. If you send those records to your target platform, it is your responsibility to create meaningful dashboards, searches, and insights there. But at least IZOA and ZCDP offer the capability to send all those records to your analytics platform.

Even though more and more SMF records come with ZCDP-supplied definitions, there are still some that do not. If you don’t want to wait for those records to become part of ZCDP or IZOA, you can add them yourself.

This recipe walks you through an example of the steps to be done on the Elasticsearch platform.

You need a supported z/OS operating system and one of these two products:

  • IBM Z Operations Analytics
  • IBM Z Common Data Provider

You also need the Elastic Stack components installed and started on any supported platform:

 

  • Logstash
  • Elasticsearch
  • Kibana

 

The minimum Elasticsearch version supported by the IZOA Insights Pack is 6.1. To deploy IZOA 4.1 on the Elastic Stack platform, check this link:

https://www.ibm.com/support/knowledgecenter/SS55JD_4.1.0/com.ibm.zosla.doc/topics/zoa_deploy_elastic.html.

If you are using only ZCDP, or if you want to use ZCDP definitions in addition to IZOA, check the official documentation at this link:

https://www.ibm.com/support/knowledgecenter/SSGE3R_2.1.0/elasticsearch_prep.html

 

So, let’s say you want to be able to receive SMF record SMF_042_9, which is supported by neither IZOA nor ZCDP.

 

Let’s assume the SMF RECORD, UPDATE, and custom data stream definitions have been created as explained in Part 1.

Step-by-step

  1. Add the definitions to Logstash

    You must add the SMF definition to the Elasticsearch platform. The easiest way to do that is to take the closest existing definition and make the required modifications.

    On the z/OS system where IZOA or ZCDP is installed, you can find all the SMF definitions supported by ZCDP. The default location is:

    /usr/lpp/IBM/zcdp/v2r1m0/DS/LIB/ibm_cdpz_ELK.tar.gz

    Download ibm_cdpz_ELK.tar.gz from your z/OS system, then copy and extract it on the server where your Logstash is running. When you extract the file, you get the folder cdp-config with all the definitions necessary for the Elastic Stack to process all ZCDP-supported records. If you are using IZOA dashboards and insights, the best approach is to take only what you need from that folder.
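A minimal sketch of the transfer and extraction, assuming you fetch the file with sftp (host name and user are placeholders; the z/OS path is the ZCDP default shown above):

```shell
# Fetch the archive from z/OS (placeholder host and user).
sftp user@zoshost:/usr/lpp/IBM/zcdp/v2r1m0/DS/LIB/ibm_cdpz_ELK.tar.gz
# Extract it on the Logstash server; the definitions land in cdp-config.
tar -xzf ibm_cdpz_ELK.tar.gz
ls cdp-config
```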

     


     

    So, copy the pair of files (H_* and N_*) with the closest definition to the target SMF record into your zoa-config folder.

    For example, if you want to send SMF_042_9, copy H_SMF_042_6.lsf and N_SMF_042_6.lsf and rename them to H_SMF_042_9.conf and N_SMF_042_9.conf.

     

    Change the line in both files (note the lower case):

       if [sourceType] == "zOS-SMF_042_6" { 

    to

       if [sourceType] == "zos-smf_042_9" {
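Scripted, the copy, rename, and sourceType change could look like this (a sketch, assuming the extracted cdp-config folder and your zoa-config folder sit side by side on the Logstash server):

```shell
# Copy the closest existing definition pair under the new record's name.
cp cdp-config/H_SMF_042_6.lsf zoa-config/H_SMF_042_9.conf
cp cdp-config/N_SMF_042_6.lsf zoa-config/N_SMF_042_9.conf
# Change every occurrence of the old sourceType to the new, lower-case one.
sed -i 's/zOS-SMF_042_6/zos-smf_042_9/g' zoa-config/H_SMF_042_9.conf zoa-config/N_SMF_042_9.conf
```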

     

    and put the appropriate fields, as you defined them in your custom definition data set, in the H_SMF_042_9 file:

     
    # Field Annotation for stream zOS-SMF_042_9

    filter {
      if [sourceType] == "zos-smf_042_9" {
        csv {
          columns   => [ "Correlator", "SMF42TME", "SMF42DTE", "SMF42SID",
                         "SMF42SSI", "S42ASYID", "S42JOBN", "S42AAUID", "S42ASTPN", "S42FLAGS",
                         "S42DSNME", "S42VOLSR", "S42NEXT", "S42MCNME", "S42SCNME", "S42DCNME" ]
          separator => ","
        }
      }
    }

     

    If you enhanced your definitions with more user-friendly names and filters on the z/OS side, you can now create a file with the enhanced definitions in the zoa-config folder on your Logstash server, for example H_SMF_042_9.conf:


    # Field Annotation for stream zOS-SMF_042_9

    filter {
      if [sourceType] == "zos-smf_042_9" {
        csv {
          columns   => [ "UPDATE_NAME", "UPDATE_VERSION", "TIMESTAMP",
                         "MVS_SYSTEM_ID", "JOB_NAME", "USER_ID", "STEP_NUMBER", "ABEND",
                         "DATASET_NAME", "VOLSER", "NUMBER_OF_EXTENTS", "MANAGEMENT_CLASS",
                         "STORAGE_CLASS", "DATA_CLASS" ]
          separator => ","
        }
      }
    }

      In this case, do not use the N_SMF_042_9.conf file.
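Optionally, if you want Kibana’s time axis to use the record’s own timestamp instead of the ingestion time, you could add a date filter to the same conditional. This is a sketch only; the ISO8601 pattern is an assumption, so check the actual content of your TIMESTAMP field first:

```conf
filter {
  if [sourceType] == "zos-smf_042_9" {
    # Assumes TIMESTAMP arrives in ISO 8601 form; adjust the pattern otherwise.
    date {
      match  => [ "TIMESTAMP", "ISO8601" ]
      target => "@timestamp"
    }
  }
}
```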

  2. Restart or refresh Logstash

    Restart or refresh Logstash and check whether you can see your records in the Kibana user interface:

    Record SMF_042_9 in Kibana

    or (with the enhanced definitions):

    SMF_042_9 in Kibana with enhanced definitions
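You can also check from the command line whether any documents arrived, assuming Elasticsearch listens on localhost:9200 and the zoa-zos-smf_042_9-* index naming used in this recipe:

```shell
# _cat/indices shows health, document count, and size per matching index.
curl -s 'http://localhost:9200/_cat/indices/zoa-zos-smf_042_9-*?v'
```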

    You can see a small yellow triangle before the value of most of the fields. It means that these fields are not indexed in Elasticsearch.

  3. Create index in Elasticsearch

    Delete the index if previously created

    If you collected any data before the fields were indexed, delete your index first. Go to Management / Elasticsearch / Index Management:

    Index Management

    For each index you want to delete, click Manage (bottom right) and then Delete index.

    All records related to that index will be deleted too.
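Alternatively, you can delete the indices from the Kibana Dev Tools console; the pattern below assumes the zoa-zos-smf_042_9-* index naming used in this recipe (a wildcard delete works unless action.destructive_requires_name has been set to true):

```
DELETE zoa-zos-smf_042_9-*
```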

     

    Create an index template

     

    PUT _template/zos-smf_042_9
    {
      "index_patterns": "zoa-zos-smf_042_9-*",
      "settings": {
        "index.refresh_interval": "1s"
      },
      "mappings": {
        "doc": {
          "dynamic": false,
          "properties": {
            "TIMESTAMP":         { "type": "keyword" },
            "JOB_NAME":          { "type": "keyword" },
            "MVS_SYSTEM_ID":     { "type": "keyword" },
            "USER_ID":           { "type": "keyword" },
            "STEP_NUMBER":       { "type": "keyword" },
            "ABEND":             { "type": "keyword" },
            "DATASET_NAME":      { "type": "keyword" },
            "VOLSER":            { "type": "keyword" },
            "NUMBER_OF_EXTENTS": { "type": "keyword" },
            "MANAGEMENT_CLASS":  { "type": "keyword" },
            "STORAGE_CLASS":     { "type": "keyword" },
            "DATA_CLASS":        { "type": "keyword" }
          }
        }
      }
    }

     

    Execute the definition in Dev Tools of the Kibana user interface:

    Index template in Kibana Dev Tools
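To verify that Elasticsearch stored the template, you can read it back in Dev Tools:

```
GET _template/zos-smf_042_9
```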

     

     

    Refresh field list
     

    On Management / Kibana / Index Patterns, click Refresh field list.

    Refresh field list on the Index Patterns page

     

    Now your data looks like this in the Kibana interface:

    SMF_042_9 with indexed fields in Kibana

    So now you can search on the indexed data.
