Overview

Skill Level: Intermediate

Many SMF records that are not supported out of the box by IZOA on the supported analytics platforms are covered by ZCDP, but not all of them. This recipe shows what you can do on the z/OS side to make such records available on your target analytics platform.

Ingredients

 

IBM Z Operations Analytics (IZOA) supports quite a few log records produced on the z/OS platform, with dashboards and problem insights on the IBM Operations Analytics Log Analysis, Splunk, and Elastic Stack analytics platforms. IZOA uses the IBM Z Common Data Provider (ZCDP) capability to send the log data to the target analytics platform. ZCDP can be a standalone product or part of IZOA.

There are SMF records that are not supported by IZOA, which means that out-of-the-box dashboards and insights are not available for them. Because ZCDP can also be a standalone product, many records not covered by IZOA are covered by ZCDP. If you want to send those records to your target platform, it is then your responsibility to create meaningful dashboards, searches, and insights on the target platform. But at least IZOA and ZCDP offer the capability to send all those records to your analytics platform.

Even though more and more SMF records are supported with definitions in ZCDP, there are still some that are not. If you don’t want to wait for those records to become part of ZCDP or IZOA, you can define them on your own.

This recipe walks you through an example of creating a custom SMF definition on the z/OS side, so that the records can be sent to your target platform. If you want to see how to ingest those records into the Elastic Stack platform, please check Part 2.

You need a supported z/OS operating system and one of these two products:

  • IBM Z Operations Analytics
  • IBM Z Common Data Provider

Step-by-step

  1. Create data set for custom definitions

Let’s say you want to be able to send SMF record SMF_042_9, which is not supported by either IZOA or ZCDP.

First, you need a separate data set for the custom definitions. Create a partitioned data set (PDS), for example IZOA.USER.DEFS. You can allocate it with the same attributes as the SHBODEFS or SGLADEFS data sets (RECFM=VBA, LRECL=255), as explained in step 1 here:

    IBM Knowledge Center – Creating a System Data Engine data stream definition

At a minimum, you need RECORD and UPDATE definitions for your SMF record.
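If you prefer to allocate the data set in batch, a minimal sketch could look like the following job. The job card, space values, and BLKSIZE=0 (system-determined block size) are examples; the essential attributes are RECFM=VBA and LRECL=255, matching SHBODEFS:

```jcl
//ALLOCDEF JOB (ACCT),'ALLOC USER DEFS',CLASS=A,MSGCLASS=H
//* Allocate a PDS for custom SDE definitions with the same
//* record format as the SHBODEFS data set (RECFM=VBA, LRECL=255)
//ALLOC    EXEC PGM=IEFBR14
//USRDEFS  DD  DSN=IZOA.USER.DEFS,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(15,15,10)),
//             DCB=(RECFM=VBA,LRECL=255,BLKSIZE=0,DSORG=PO)
```

The third SPACE value (10) requests directory blocks, which makes the data set a PDS.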

     

  2. Create RECORD and UPDATE definitions

Create custom definitions for the RECORD and the UPDATE; for example, create members MPERS042 for the record and MPEUS042 for the update. You can copy similar definitions from the ZCDP SHBODEFS data set (members HBORSX42 and HBOUS042) and adapt them. Those members cover, for example, SMF_042_6, which is supported by ZCDP.

You can find the record layouts for all SMF records in Knowledge Center under z/OS › z/OS MVS › z/OS MVS System Management Facilities (SMF). SMF_042 is explained here:

    IBM Knowledge Center – Record type 42 (2A) — DFSMS statistics and configuration

     

    Update the RECORD definition
     

Based on the above documentation, the definition for the SMF_042_9 record would look like this (member MPERS042 in IZOA.USER.DEFS):


    DEFINE RECORD SMF_042_9
    VERSION 'MPE.RW060320' -- @02C
    IN LOG SMF
    IDENTIFIED BY SMF42RTY = 42
    AND SMF42STY = 9
    FIELDS(
    SMF42LEN OFFSET 0 LENGTH 2 BINARY, -- Record length.
    SMF42SEG OFFSET 2 LENGTH 2 BINARY, -- Segment descriptor.
    SMF42FLG OFFSET 4 LENGTH 1 HEX, -- System indicator
    -- Bit Meaning When Set
    -- 0 New record format
    -- 1 Subtypes used
    -- 2 Reserved
    -- 3 MVS/SP Version 4
    -- 4 MVS/SP Version 3
    -- 5 MVS/SP Version 2
    -- 6 VS2
    -- 7 PR/SM machine.
    -- Note: For MVS/SP Version 4, bits 3,
    -- 4, 5, and 6 will be on.
    SMF42RTY OFFSET 5 LENGTH 1 BINARY, -- Record type: 42 (X'2A').
    SMF42TME OFFSET 6 LENGTH 4 TIME(1/100S),
    -- Time, in hundredths of a second,
    -- that the record was moved to the SMF
    -- buffer.
    SMF42DTE OFFSET 10 LENGTH 4 DATE(0CYYDDDF),
    -- Date that the record was moved to
    -- the SMF buffer, in the form 0cyydddF
    -- (where F is the sign).
    SMF42SID OFFSET 14 LENGTH 4 CHAR,
    -- System identification (from the SID
    -- parameter).
    SMF42SSI OFFSET 18 LENGTH 4 CHAR,
    -- Subsystem identification (blanks).
    SMF42STY OFFSET 22 LENGTH 2 BINARY, -- Record subtype.
    SMF42NT OFFSET 24 LENGTH 2 BINARY,
    -- Number of triplets in record.
    -- A triplet is a set of
    -- offset/length/number values that
    -- defines a section of the record.
    * OFFSET 26 LENGTH 2 CHAR, -- Reserved.
    /******************************************************************/
    /* The following three fields apply to the product section: */
    /******************************************************************/
    SMF42OPS OFFSET 28 LENGTH 4 BINARY,
    -- Offset to product section.
    SMF42LPS OFFSET 32 LENGTH 2 BINARY,
    -- Length of product section.
    SMF42NPS OFFSET 34 LENGTH 2 BINARY,
    -- Number of product sections.
    /******************************************************************/
    /* The following fields are only included with subtype 9 */
    /******************************************************************/
    SMF42ABO OFFSET 36 LENGTH 4 BINARY,
    -- Offset to X37 abend data section.
    SMF42ABL OFFSET 40 LENGTH 2 BINARY,
    -- Length of X37 abend data section.
    SMF42ABN OFFSET 42 LENGTH 2 BINARY,
    -- Number of X37 abend data section.
    SMF42SMO OFFSET 44 LENGTH 4 BINARY,
-- Offset to SMS data section (0 if data set
-- is not SMS managed).
SMF42SML OFFSET 48 LENGTH 2 BINARY,
-- Length of SMS data section (0 if data set
-- is not SMS managed).
    SMF42SMN OFFSET 50 LENGTH 2 BINARY)
    -- Number of SMS sections (always 1 if dset
    -- is SMS managed. Otherwise, 0)
    /**********************************************************************/
    /* Product Section */
    /**********************************************************************/
    SECTION SMF42PDS
    OFFSET SMF42OPS LENGTH SMF42LPS NUMBER SMF42NPS
    FIELDS(
    SMF42PDL OFFSET 0 LENGTH 8 CHAR, -- Product level.
    SMF42PDN OFFSET 8 LENGTH 10 CHAR, -- Product name.
    SMF42PSV OFFSET 18 LENGTH 1 BINARY, -- Subtype version number@02C
    * OFFSET 19 LENGTH 1 CHAR, -- Reserved @02A
    SMF42PTS OFFSET 20 LENGTH 8 TIMESTAMP(TOD),
    -- Interval Start or Open TOD
    SMF42PTE OFFSET 28 LENGTH 8 TIMESTAMP(TOD),
    -- Interval End or Close TOD
    * OFFSET 36 LENGTH 4 CHAR) -- Reserved
    /**********************************************************************/
    /* B37/D37/E37 abend data section (SMF42 subtype 9) */
    /**********************************************************************/
    SECTION SMF42B37
    OFFSET SMF42ABO LENGTH SMF42ABL NUMBER SMF42ABN
    FIELDS(
    S42ASYID OFFSET 0 LENGTH 4 CHAR,
    -- System ID
    S42JOBN OFFSET 4 LENGTH 8 CHAR,
    -- Job name
    S42RDST OFFSET 12 LENGTH 4 TIME(1/100S),
    -- Reader start time
    S42RDSD OFFSET 16 LENGTH 4 DATE(0CYYDDDF),
    -- Reader start date
    S42AAUID OFFSET 20 LENGTH 8 CHAR,
    -- User ID
    S42ASTPN OFFSET 28 LENGTH 1 BINARY ,
    -- Job step number
    S42FLAGS OFFSET 29 LENGTH 1 BINARY ,
    -- Flags
    -- 1... .... B37 abend
    -- .1.. .... D37 abend
    -- ..1. .... E37 abend
    * OFFSET 30 LENGTH 4 CHAR,
    -- Reserved
    S42DSORG OFFSET 34 LENGTH 2 BINARY ,
    -- Dataset organization
    S42ADISP OFFSET 36 LENGTH 1 BINARY,
-- Disposition
    S42DSNME OFFSET 37 LENGTH 44 CHAR,
    -- Data set name
    S42VOLSR OFFSET 81 LENGTH 6 CHAR,
    -- Volser
    S42UCBTP OFFSET 87 LENGTH 4 BINARY,
    -- UCB Type
    S42NEXT OFFSET 91 LENGTH 1 BINARY,
-- Number of extents on current volume
    S42TNTRK OFFSET 92 LENGTH 4 BINARY ,
    -- Number of tracks on this volume
    S42ASSAT OFFSET 96 LENGTH 4 BINARY ,
    -- Secondary allocation amount from the JFCB
    S42ADLRH OFFSET 100 LENGTH 3 BINARY ,
    -- Average block length if specified.
    * OFFSET 103 LENGTH 5 CHAR) -- Reserved PQ52051
    /*--------------------------------------------------------------------*/
    /* SMS Data Section (SMF42 subtype 9) */
    /*--------------------------------------------------------------------*/
    SECTION SMF42DSH
    OFFSET SMF42SMO LENGTH SMF42SML NUMBER SMF42SMN
    FIELDS(
    S42MCNME OFFSET 0 LENGTH 30 CHAR,
    -- Management class name
    S42SCNME OFFSET 30 LENGTH 30 CHAR,
    -- Storage class name
    S42DCNME OFFSET 60 LENGTH 30 CHAR)
    -- Data class name
    ;
    /**********************************************************************/
    /* Comment On for SMF 42 subtype 9 */
    /**********************************************************************/
    COMMENT ON RECORD SMF_042_9 IS
    'B37/D37/E37 abends'
    ;
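For orientation, the header formats declared above — DATE(0CYYDDDF), a packed-decimal Julian date, and TIME(1/100S), hundredths of a second past midnight — can also be decoded off platform. The following Python sketch is purely illustrative and not part of the recipe; the function name is made up:

```python
from datetime import datetime, timedelta

def smf_date_time(date_bytes: bytes, time_hundredths: int) -> datetime:
    """Decode an SMF header date (packed decimal 0cyydddF) and
    time (hundredths of a second past midnight) into a datetime."""
    digits = date_bytes.hex()          # e.g. b'\x01\x20\x33\x1f' -> '0120331f'
    century = int(digits[0:2])         # 0 = 19xx, 1 = 20xx
    year = 1900 + 100 * century + int(digits[2:4])
    day_of_year = int(digits[4:7])     # Julian day 1-366; the last digit is the sign F
    return (datetime(year, 1, 1)
            + timedelta(days=day_of_year - 1,
                        milliseconds=10 * time_hundredths))

# 2020 day 331 is 26 November; 7617500 hundredths past midnight is 21:09:35
print(smf_date_time(bytes.fromhex("0120331f"), 7617500))
```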

     

    Update the UPDATE definition
     

Create the simplest possible definition for the UPDATE. The member MPEUS042 in IZOA.USER.DEFS now looks like this:

    SET IBM_FILE = 'SMF04209';

    DEFINE UPDATE SMF_042_9
    VERSION 'MPE.110 '
    FROM SMF_042_9
    TO &IBM_UPDATE_TARGET
    &IBM_CORRELATION
    AS &IBM_FILE_FORMAT SET(ALL);

       

With the above UPDATE definition, you would send the entire content (all fields) of your SMF record SMF_042_9 to the subscriber. All fields from the SMF record would be present even if you don’t need them.


  3. Filtering the data - optional

I will explain how to filter the fields streamed to the target platform, so that only the fields that you need are sent to the subscriber.

There are at least two ways to filter the fields sent to the target platform:

    • By using TEMPLATE definition
    • By enhancing UPDATE definition

     

    Filtering the fields streamed to the target platform by using the TEMPLATE
     

To filter the streamed fields, add a DEFINE TEMPLATE statement for the update definition in the same member where the update is placed.

Base the template on the RECORD definition and include only the fields that you want to send to the final destination. You must include the date and time fields from the SMF record header. In this example, these fields are SMF42DTE and SMF42TME:

    DEFINE UPDATE SMF_042_9
    VERSION 'MPE.110 '
    FROM SMF_042_9
    TO &IBM_UPDATE_TARGET
    &IBM_CORRELATION
    AS &IBM_FILE_FORMAT SET(ALL);

    DEFINE TEMPLATE SMF_042_9 FOR SMF_042_9
    ORDER
    (
    SMF42TME,
    SMF42DTE,
    SMF42SID,
    -- System identification
    SMF42SSI,
    -- Subsystem identification (blanks).
    S42ASYID,
    -- System ID
    S42JOBN,
    -- Job name
    S42AAUID,
    -- User ID
    S42ASTPN,
    -- Job step number
    S42FLAGS,
    -- Flags
    -- 1... .... B37 abend
    -- .1.. .... D37 abend
    -- ..1. .... E37 abend
    S42DSNME,
    -- Data set name
    S42VOLSR,
    -- Volser
    S42NEXT,
-- Number of extents on current volume
    S42MCNME,
    -- Management class name
    S42SCNME,
    -- Storage class name
    S42DCNME)
    -- Data class name
    AS &IBM_FILE_FORMAT;

     

Based on this definition, you would get something like this on your target platform:

      The result in Kibana

     

Now, the field S42FLAGS contains the value “32”, which is not meaningful on its own. The reason is that S42FLAGS is a flag byte (8 bits), and the value 32, or “0010 0000”, means that an E37 abend happened (as documented for this record type).
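Since S42FLAGS carries exactly one abend bit, the decoding can be expressed with simple bit tests. This Python sketch is only an illustration of the documented mapping (the function name is made up); because only one bit is set at a time, bit tests and equality tests give the same result:

```python
def decode_abend(flags: int) -> str:
    """Map the S42FLAGS flag byte to the x37 abend it records."""
    if flags & 0x80:    # 1... ....  B37 abend
        return 'B37'
    if flags & 0x40:    # .1.. ....  D37 abend
        return 'D37'
    if flags & 0x20:    # ..1. ....  E37 abend
        return 'E37'
    return 'unknown'

print(decode_abend(32))  # the value seen in Kibana -> E37
```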

     

    Filtering the fields by enhancing the UPDATE definition
     

You can achieve a better result with an UPDATE statement. That way, you can do calculations or set field values based on certain conditions. It is best to look at the members in the IZOA SGLADEFS data set and borrow ideas from there.

The update member can now look like this:

     

    SET IBM_FILE = 'SMF04209';                                 

    DEFINE UPDATE SMF_042_9
    VERSION 'MPE.110 '
    FROM SMF_042_9
    TO &IBM_UPDATE_TARGET
    AS &IBM_FILE_FORMAT
    GROUP BY NONE
    SET (
    UPDATE_NAME = FIRST('SMF_042_9 '),
    UPDATE_VERSION = FIRST('IZOA.320 '),
    TIMESTAMP = FIRST(TIMESTAMP(SMF42DTE,SMF42TME)),
    MVS_SYSTEM_ID = FIRST(SMF42SID),
    JOB_NAME = FIRST(S42JOBN),
    USER_ID = FIRST(S42AAUID),
    STEP_NUMBER = MAX(S42ASTPN),
    ABEND = FIRST(CASE WHEN S42FLAGS=128
    THEN 'B37'
    WHEN S42FLAGS=64
    THEN 'D37'
    WHEN S42FLAGS=32
    THEN 'E37'
    ELSE 'F37'
    END),
    DATASET_NAME = FIRST(S42DSNME),
    VOLSER = FIRST(S42VOLSR),
    NUMBER_OF_EXTENTS = MAX(S42NEXT),
    MANAGEMENT_CLASS = FIRST(S42MCNME),
    STORAGE_CLASS = FIRST(S42SCNME),
    DATA_CLASS = FIRST(S42DCNME)
    );

     

In the above definition, you renamed the fields of the record and set the value of the ABEND field based on the value of the S42FLAGS field in the original SMF record.

With these definitions, you get a more user-friendly result on your target platform:

     The result in Kibana, after enhancing the definitions

     

As you can see in the picture above, there is now a limited number of fields, with more meaningful names and content.

     

    Test your definitions
     

    Before going to the next step, you can test your enhanced definitions with this job:


    //HBOSMFCB EXEC PGM=HBOPDE,REGION=0M
    //STEPLIB DD DISP=SHR,DSN=IZOA.V4R1.SHBOLOAD
    //*
    //* CONTROL STATEMENTS
    //*
    //HBOIN DD DISP=SHR,DSN=IZOA.V4R1.SHBODEFS(HBOCCSV)
    // DD DISP=SHR,DSN=IZOA.V4R1.SHBODEFS(HBOCCORY)
    // DD DISP=SHR,DSN=IZOA.V4R1.SHBODEFS(HBOLLSMF)
    // DD DISP=SHR,DSN=IZOA.USER.DEFS(MPERS042)
    // DD DISP=SHR,DSN=IZOA.USER.DEFS(MPEUS042)
    //*
    //* Sample COLLECT statement for processing log data
    //*
    // DD *
    COLLECT SMF
    WITH STATISTICS
    BUFFER SIZE 1 M
    /*
    //HBOLOG DD DUMMY
    //HBOOUT DD SYSOUT=*
    //HBODUMP DD SYSOUT=*

     

Check the output of the job to verify that the record and update definitions were processed successfully:

     


    2020-11-26 21:09:35 HBO0125I SMF_042_9 was successfully defined.       
    2020-11-26 21:09:58 HBO0201I Update SMF_042_9 was successfully defined.  

  4. Create a custom data stream definition

    Now you are ready to go to the configuration tool and click on “Manage custom data stream definitions”:

     Manage custom data stream definitions

     Click on Create System Data Engine data stream definition:

      Create System Data Engine data stream definition

     

Fill in the following fields:

     Define System Data Engine Data Stream

     

Name: the name of your stream (SMF)

Group: the group under which this stream will be placed in the policy

Subgroup: the subgroup under which this stream will be placed in the policy

SHBODEFS dataset members: the members that you created for your custom stream: MPERS042 and MPEUS042.

    Define System Data Engine Data Stream

     

After you press OK and the definition is created, you can check in the cdpConfig folder that your stream definition was created:

     cdpConfig folder

    You can update your policy now.

     

  5. Update your policy with the new data stream definition

Now click on your policy:

     CDPz Tool

     

    Inside the policy, click on Add Data Stream:

     Policy

     

    Select your new stream SMF_042_9 under “Custom Data Streams”:

     1-Select-Data-Stream

     

The data stream is added:

    2-Add-042

    Assign a subscriber to the data stream:

     3-Subscribe

     

The data stream is now connected to the subscriber:

 4-Subscribed

Now, click on System Data Engine to add your custom definition data set:

 Picture14

Enter the name of your custom definition data set under USER Concatenation:

     5-SDE

     

    And click OK:

 6-SDE

    Save the policy.

     

  6. Restart the started tasks

To activate the previous changes, you must restart the Data Streamer and the System Data Engine. Because the Log Forwarder sends its data to the Data Streamer, you must restart it too:

    1. Stop System Data Engine and Log Forwarder
    2. Stop Data Streamer
    3. Start Data Streamer
    4. Start System Data Engine and Log Forwarder
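On the console, the sequence could look like the following. The started-task names shown here (HBOSMF for the System Data Engine, GLAPROC for the Log Forwarder, HBODSPRO for the Data Streamer) are examples only; use the procedure names from your own installation:

```
P HBOSMF       1. stop the System Data Engine
P GLAPROC      1. stop the Log Forwarder
P HBODSPRO     2. stop the Data Streamer
S HBODSPRO     3. start the Data Streamer
S HBOSMF       4. start the System Data Engine
S GLAPROC      4. start the Log Forwarder
```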

     

  7. Updating custom definitions - optional

To update custom definitions, you must first delete them and then create new data stream definitions.

     

Deleting custom data stream definitions

    If you want to delete your custom data stream definitions, do the following:

1. In the Configuration Tool, remove the streams from all policies that use the custom data stream.
2. Delete the corresponding *.streams.json file from the cdpConfig folder.
3. Log off from the Configuration Tool.

     

Updating custom data stream definitions

If you want to update your data stream definitions, first delete them as explained in “Deleting custom data stream definitions” and then create new data stream definitions.

     

    Updating custom RECORD or UPDATE definitions

If you change a RECORD, TEMPLATE, or UPDATE definition, you must edit and save the policy that uses those definitions. After that, restart the started tasks as explained in the previous step.

     

  8. Next step

You can use these definitions to ingest the records into one of the log analysis platforms, for example:

• IBM Operations Analytics Log Analysis
    • The Elastic Stack (Elasticsearch, Logstash and Kibana)
    • Splunk

     

In Part 2, I explain what you can do for the Elastic Stack and how to do it.
