Overview

Skill Level: Intermediate

Many SMF records that IZOA does not support with dashboards and insights on the supported analytics platforms are still supported by CDPz. Some records, however, are supported by neither product. Here is what you can do about it, and how, on the z/OS side.

Ingredients

IBM Z Operations Analytics (IZOA) supports quite a few log records produced on the z/OS platform with dashboards and problem insights on the IBM Operations Analytics Log Analysis, Splunk, and Elastic Stack analytics platforms. IZOA uses the Common Data Provider for z Systems (CDPz) capability to send log data to the target analytics platform. CDPz can be a standalone product or part of IZOA.

Many SMF records that IZOA does not support with dashboards and insights are still supported by CDPz. If you want to send those records to your target platform, it is then your responsibility to create meaningful dashboards, searches, and insights on the target platform. But at least IZOA and CDPz offer the capability to send all those records to your analytics platform.

Even though more and more SMF records are supported with definitions by CDPz, there are still some that are not. If you don’t want to wait for those records to become part of CDPz or IZOA, you can add them yourself. Here is what you can do, and how, on the z/OS side. If you want to see how you can use that data in the Elastic Stack, check Part 2.

Let’s say we want to be able to send the SMF record SMF_042_9, which is not supported by either IZOA or CDPz.

You need a supported z/OS operating system and one of these two products:

  • IBM Z Operations Analytics
  • IBM Common Data Provider for z Systems

Step-by-step

  1. Create data set for custom definitions

    First, you need a separate data set for your custom definitions. Create a partitioned data set (PDS), for example GLA320U.DEFS. You can allocate it based on SHBODEFS (VBA) or SGLADEFS (FB, LRECL=80), as explained in step 1 here:

    IBM Knowledge Center – Creating a System Data Engine data stream definition
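
    For example, a minimal IEFBR14 allocation job could look like the following sketch (job card omitted; the space values and the FB, LRECL=80 attributes are illustrative and correspond to the SGLADEFS option, so adjust them to match the library you base your definitions on):

    //ALLOCDEF EXEC PGM=IEFBR14
    //DEFS     DD DSN=GLA320U.DEFS,DISP=(NEW,CATLG),
    //            UNIT=SYSDA,SPACE=(TRK,(15,15,10)),
    //            DCB=(DSORG=PO,RECFM=FB,LRECL=80,BLKSIZE=27920)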

    At a minimum, you need a RECORD definition and an UPDATE definition for your SMF record.

     

  2. Create RECORD and UPDATE definitions

    Create custom definitions for the record and the update; for example, create members MPERS042 and MPEUS042. You can copy similar definitions from the CDPz SHBODEFS data set (members HBORS042 and HBOUS042) and adapt them; those members cover SMF_042_6, which is supported by CDPz.
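
    A minimal IEBCOPY sketch for copying and renaming the two members could look like this (job card omitted; it assumes GLA320U.DEFS was allocated with the same record format as SHBODEFS, because IEBCOPY requires compatible data set attributes):

    //COPYDEF  EXEC PGM=IEBCOPY
    //SYSPRINT DD SYSOUT=*
    //INDEFS   DD DISP=SHR,DSN=GLA320.SHBODEFS
    //OUTDEFS  DD DISP=SHR,DSN=GLA320U.DEFS
    //SYSIN    DD *
      COPY OUTDD=OUTDEFS,INDD=INDEFS
      SELECT MEMBER=((HBORS042,MPERS042),(HBOUS042,MPEUS042))
    /*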

    You can find the record layouts for all SMF records in Knowledge Center › z/OS › z/OS MVS › z/OS MVS System Management Facility (SMF). SMF record type 42 is documented here:

    IBM Knowledge Center – Record type 42 (2A) — DFSMS statistics and configuration

     

    Update the RECORD definition
     

    Based on the above documentation, the definition for the SMF_042_9 record (member MPERS042 in GLA320U.DEFS) would look like this:

     


    DEFINE RECORD SMF_042_9
    VERSION 'MPE.RW060320' -- @02C
    IN LOG SMF
    IDENTIFIED BY SMF42RTY = 42
    AND SMF42STY = 9
    FIELDS(
    SMF42LEN OFFSET 0 LENGTH 2 BINARY, -- Record length.
    SMF42SEG OFFSET 2 LENGTH 2 BINARY, -- Segment descriptor.
    SMF42FLG OFFSET 4 LENGTH 1 HEX, -- System indicator
    -- Bit Meaning When Set
    -- 0 New record format
    -- 1 Subtypes used
    -- 2 Reserved
    -- 3 MVS/SP Version 4
    -- 4 MVS/SP Version 3
    -- 5 MVS/SP Version 2
    -- 6 VS2
    -- 7 PR/SM machine.
    -- Note: For MVS/SP Version 4, bits 3,
    -- 4, 5, and 6 will be on.
    SMF42RTY OFFSET 5 LENGTH 1 BINARY, -- Record type: 42 (X'2A').
    SMF42TME OFFSET 6 LENGTH 4 TIME(1/100S),
    -- Time, in hundredths of a second,
    -- that the record was moved to the SMF
    -- buffer.
    SMF42DTE OFFSET 10 LENGTH 4 DATE(0CYYDDDF),
    -- Date that the record was moved to
    -- the SMF buffer, in the form 0cyydddF
    -- (where F is the sign).
    SMF42SID OFFSET 14 LENGTH 4 CHAR,
    -- System identification (from the SID
    -- parameter).
    SMF42SSI OFFSET 18 LENGTH 4 CHAR,
    -- Subsystem identification (blanks).
    SMF42STY OFFSET 22 LENGTH 2 BINARY, -- Record subtype.
    SMF42NT OFFSET 24 LENGTH 2 BINARY,
    -- Number of triplets in record.
    -- A triplet is a set of
    -- offset/length/number values that
    -- defines a section of the record.
    * OFFSET 26 LENGTH 2 CHAR, -- Reserved.
    /******************************************************************/
    /* The following three fields apply to the product section: */
    /******************************************************************/
    SMF42OPS OFFSET 28 LENGTH 4 BINARY,
    -- Offset to product section.
    SMF42LPS OFFSET 32 LENGTH 2 BINARY,
    -- Length of product section.
    SMF42NPS OFFSET 34 LENGTH 2 BINARY,
    -- Number of product sections.
    /******************************************************************/
    /* The following fields are only included with subtype 9 */
    /******************************************************************/
    SMF42ABO OFFSET 36 LENGTH 4 BINARY,
    -- Offset to X37 abend data section.
    SMF42ABL OFFSET 40 LENGTH 2 BINARY,
    -- Length of X37 abend data section.
    SMF42ABN OFFSET 42 LENGTH 2 BINARY,
    -- Number of X37 abend data section.
    SMF42SMO OFFSET 44 LENGTH 4 BINARY,
    -- Offset to SMS data section (0 if data set
    -- is not SMS managed).
    SMF42SML OFFSET 48 LENGTH 2 BINARY,
    -- Length of SMS data section (0 if data set
    -- is not SMS managed).
    SMF42SMN OFFSET 50 LENGTH 2 BINARY)
    -- Number of SMS sections (always 1 if data set
    -- is SMS managed; otherwise, 0).
    /**********************************************************************/
    /* Product Section */
    /**********************************************************************/
    SECTION SMF42PDS
    OFFSET SMF42OPS LENGTH SMF42LPS NUMBER SMF42NPS
    FIELDS(
    SMF42PDL OFFSET 0 LENGTH 8 CHAR, -- Product level.
    SMF42PDN OFFSET 8 LENGTH 10 CHAR, -- Product name.
    SMF42PSV OFFSET 18 LENGTH 1 BINARY, -- Subtype version number@02C
    * OFFSET 19 LENGTH 1 CHAR, -- Reserved @02A
    SMF42PTS OFFSET 20 LENGTH 8 TIMESTAMP(TOD),
    -- Interval Start or Open TOD
    SMF42PTE OFFSET 28 LENGTH 8 TIMESTAMP(TOD),
    -- Interval End or Close TOD
    * OFFSET 36 LENGTH 4 CHAR) -- Reserved
    /**********************************************************************/
    /* B37/D37/E37 abend data section (SMF42 subtype 9) */
    /**********************************************************************/
    SECTION SMF42B37
    OFFSET SMF42ABO LENGTH SMF42ABL NUMBER SMF42ABN
    FIELDS(
    S42ASYID OFFSET 0 LENGTH 4 CHAR,
    -- System ID
    S42JOBN OFFSET 4 LENGTH 8 CHAR,
    -- Job name
    S42RDST OFFSET 12 LENGTH 4 TIME(1/100S),
    -- Reader start time
    S42RDSD OFFSET 16 LENGTH 4 DATE(0CYYDDDF),
    -- Reader start date
    S42AAUID OFFSET 20 LENGTH 8 CHAR,
    -- User ID
    S42ASTPN OFFSET 28 LENGTH 1 BINARY ,
    -- Job step number
    S42FLAGS OFFSET 29 LENGTH 1 BINARY ,
    -- Flags
    -- 1... .... B37 abend
    -- .1.. .... D37 abend
    -- ..1. .... E37 abend
    * OFFSET 30 LENGTH 4 CHAR,
    -- Reserved
    S42DSORG OFFSET 34 LENGTH 2 BINARY ,
    -- Dataset organization
    S42ADISP OFFSET 36 LENGTH 1 BINARY,
    -- Disposition
    S42DSNME OFFSET 37 LENGTH 44 CHAR,
    -- Data set name
    S42VOLSR OFFSET 81 LENGTH 6 CHAR,
    -- Volser
    S42UCBTP OFFSET 87 LENGTH 4 BINARY,
    -- UCB Type
    S42NEXT OFFSET 91 LENGTH 1 BINARY,
    -- Number of extents on current volume
    S42TNTRK OFFSET 92 LENGTH 4 BINARY ,
    -- Number of tracks on this volume
    S42ASSAT OFFSET 96 LENGTH 4 BINARY ,
    -- Secondary allocation amount from the JFCB
    S42ADLRH OFFSET 100 LENGTH 3 BINARY ,
    -- Average block length if specified.
    * OFFSET 103 LENGTH 5 CHAR) -- Reserved PQ52051
    /*--------------------------------------------------------------------*/
    /* SMS Data Section (SMF42 subtype 9) */
    /*--------------------------------------------------------------------*/
    SECTION SMF42DSH
    OFFSET SMF42SMO LENGTH SMF42SML NUMBER SMF42SMN
    FIELDS(
    S42MCNME OFFSET 0 LENGTH 30 CHAR,
    -- Management class name
    S42SCNME OFFSET 30 LENGTH 30 CHAR,
    -- Storage class name
    S42DCNME OFFSET 60 LENGTH 30 CHAR)
    -- Data class name
    ;
    /**********************************************************************/
    /* Comment On for SMF 42 subtype 9 */
    /**********************************************************************/
    COMMENT ON RECORD SMF_042_9 IS
    'B37/D37/E37 abends'
    ;

     

    Update the UPDATE definition
     

    The member MPEUS042 in GLA320U.DEFS now looks like this:

     

                                          
    SET IBM_FILE = 'SMF04209';

    DEFINE UPDATE SMF_042_9
    VERSION 'MPE.110 '
    FROM SMF_042_9
    TO &IBM_UPDATE_TARGET
    &IBM_CORRELATION
    AS &IBM_FILE_FORMAT SET(ALL);

       

    With the above UPDATE definition, you send the entire content of your SMF record SMF_042_9.

                                                               

  3. Filtering the data - optional

    I will explain how to filter the fields streamed to the target platform, so that the fields that are part of the record but that you don’t need are not sent.

    There are at least two ways to filter the fields sent to the target platform:

    • By using TEMPLATE definition
    • By enhancing UPDATE definition

     

    Filtering the fields streamed to the target platform by using the TEMPLATE
     

    To filter the streamed fields, add a DEFINE TEMPLATE statement for the update definition to the same member where the update is placed.

    You can start from the RECORD definition and include only the fields that you need to be streamed. You must include the date and time fields from the SMF record header; in this example, these fields are SMF42DTE and SMF42TME.

                                         

    DEFINE UPDATE SMF_042_9
    VERSION 'MPE.110 '
    FROM SMF_042_9
    TO &IBM_UPDATE_TARGET
    &IBM_CORRELATION
    AS &IBM_FILE_FORMAT SET(ALL);

    DEFINE TEMPLATE SMF_042_9 FOR SMF_042_9
    ORDER
    (
    SMF42TME,
    SMF42DTE,
    SMF42SID,
    -- System identification
    SMF42SSI,
    -- Subsystem identification (blanks).
    S42ASYID,
    -- System ID
    S42JOBN,
    -- Job name
    S42AAUID,
    -- User ID
    S42ASTPN,
    -- Job step number
    S42FLAGS,
    -- Flags
    -- 1... .... B37 abend
    -- .1.. .... D37 abend
    -- ..1. .... E37 abend
    S42DSNME,
    -- Data set name
    S42VOLSR,
    -- Volser
    S42NEXT,
    -- Number of extends on current volume
    S42MCNME,
    -- Management class name
    S42SCNME,
    -- Storage class name
    S42DCNME)
    -- Data class name
    AS &IBM_FILE_FORMAT;

     

    Based on this definition, you would get this result in your target platform:

      The result in Kibana

     

    All fields from the SMF record are present, even the ones you don’t need. Also, the field S42FLAGS contains the value “32”, which is not meaningful by itself. The reason is that S42FLAGS is a one-byte flag field (eight bit flags): a value of 32, that is “0010 0000” in binary, means that an E37 abend happened (similarly, 128 indicates a B37 abend and 64 a D37 abend).

     

    Filtering the fields by enhancing the UPDATE definition
     

    You can achieve a better result with an enhanced UPDATE statement. That way, you can do some calculations or set field values based on certain conditions. It is best to look at the members in the IZOA SGLADEFS data set and copy some ideas from there.

    The update member now looks like this:


    SET IBM_FILE = 'SMF04209';

    DEFINE UPDATE SMF_042_9
    VERSION 'MPE.110 '
    FROM SMF_042_9
    TO &IBM_UPDATE_TARGET
    AS &IBM_FILE_FORMAT
    GROUP BY NONE
    SET (
    UPDATE_NAME = FIRST('SMF_042_9 '),
    UPDATE_VERSION = FIRST('IZOA.320 '),
    TIMESTAMP = FIRST(TIMESTAMP(SMF42DTE,SMF42TME)),
    MVS_SYSTEM_ID = FIRST(SMF42SID),
    JOB_NAME = FIRST(S42JOBN),
    USER_ID = FIRST(S42AAUID),
    STEP_NUMBER = MAX(S42ASTPN),
    ABEND = FIRST(CASE WHEN S42FLAGS=128
    THEN 'B37'
    WHEN S42FLAGS=64
    THEN 'D37'
    WHEN S42FLAGS=32
    THEN 'E37'
    ELSE 'F37'
    END),
    DATASET_NAME = FIRST(S42DSNME),
    VOLSER = FIRST(S42VOLSR),
    NUMBER_OF_EXTENTS = MAX(S42NEXT),
    MANAGEMENT_CLASS = FIRST(S42MCNME),
    STORAGE_CLASS = FIRST(S42SCNME),
    DATA_CLASS = FIRST(S42DCNME)
    );

     

    In the above definition, you rename the fields of the record and set the value of the ABEND field based on the value of the S42FLAGS field in the original SMF record.

    With these definitions, you get a result like this one:

     The result in Kibana, after enhancing the definitions

     

    As you can see in the above picture, there is now a limited number of fields, with more meaningful names and content.

     

    Test your definitions
     

    Before going to the next step, you can test your enhanced definitions with this job:

                                              
    //HBOSMFCB EXEC PGM=HBOPDE,REGION=0M
    //STEPLIB DD DISP=SHR,DSN=GLA320.SHBOLOAD
    //*
    //* CONTROL STATEMENTS
    //*
    //HBOIN DD DISP=SHR,DSN=GLA320.SHBODEFS(HBOCCSV)
    // DD DISP=SHR,DSN=GLA320.SHBODEFS(HBOCCORY)
    // DD DISP=SHR,DSN=GLA320.SHBODEFS(HBOLLSMF)
    // DD DISP=SHR,DSN=GLA320U.DEFS(MPERS042)
    // DD DISP=SHR,DSN=GLA320U.DEFS(MPEUS042)
    //*
    //* Sample COLLECT statement for processing log data
    //*
    // DD *
    COLLECT SMF
    WITH STATISTICS
    BUFFER SIZE 1 M
    /*
    //HBOLOG DD DUMMY

     

    Check the output of the job to verify that the record and update definitions were processed successfully.

     

  4. Create custom data stream definition

    Now go to the CDPz Configuration Tool and click “Manage custom data stream definitions”:

     Manage custom data stream definitions

     

    Now click “Create System Data Engine data stream definition”:

      Create System Data Engine data stream definition

     

    Fill in the following fields:

     Define System Data Engine Data Stream

     

    Name: the name of your stream (SMF)

    Group: the group under which this stream will be placed in the policy

    Subgroup: the subgroup under which this stream will be placed in the policy

    SHBODEFS dataset members: the members you created for your custom stream, MPERS042 and MPEUS042

    Define System Data Engine Data Stream

     

    After you press OK and the definition is created, you can check in the cdpzConfig folder that your stream definition has been created:

     cdpzConfig folder

    You can update your policy now.

     

  5. Update your policy with the new data stream definition

    Now click on your policy:

     CDPz Tool

     

    Inside the policy, click on Add Data Stream:

     Policy

     

    Select your new stream SMF_042_9 under “Custom Data Streams”:

     

     Select data stream

     

     

    Data stream is added:

     

     Data stream added to the policy

     

    Assign a subscriber to the data stream:

     

     Assign a subscriber to the data stream

     

    The subscriber is now connected:

     

     Subscriber connected

     

    Now, click on System Data Engine to add your custom definition dataset:

      System Data Engine

     

    Put the name of your custom definition dataset under USER Concatenation:

     

     USER Concatenation

     

    And click OK:

     

     Configure System Data Engine properties

     

     

    Save the policy.

     

  6. Restart the started tasks

    To activate the previous changes, you must restart the Data Streamer and the System Data Engine. Since the Log Forwarder sends its data to the Data Streamer, you have to restart it too (see the example commands after this list):

    1. Stop System Data Engine and Log Forwarder
    2. Stop Data Streamer
    3. Start Data Streamer
    4. Start System Data Engine and Log Forwarder
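
    A minimal sketch of the corresponding MVS operator commands, assuming example started-task names (HBOSDE for the System Data Engine, GLAPROC for the Log Forwarder, and HBODSPRO for the Data Streamer; substitute the procedure names used in your installation):

    P HBOSDE
    P GLAPROC
    P HBODSPRO
    S HBODSPRO
    S HBOSDE
    S GLAPROC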

     

  7. Updating custom definitions - optional

    Deleting custom data stream definitions

    If you want to delete your custom data stream definitions, do the following:

    1. Remove the streams from all policies that use the custom data stream, in the Configuration Tool.
    2. Delete the appropriate file from the zFS file system in the cdpzConfig folder.
    3. Log off from the Configuration Tool.

     

    Updating custom data stream definitions

    If you want to update your data stream definitions, first delete them as explained in “Deleting custom data stream definitions” and then create new data stream definitions.

     

    Updating custom RECORD or UPDATE definitions

    If you change a RECORD, TEMPLATE, or UPDATE definition, you must edit and save the policy that uses those definitions. After that, you must restart the started tasks as explained in the previous step.

     

  8. Next step

    You can now use the data streamed with these definitions on one of the supported log analysis platforms:

    • IBM Operations Analytics Log Analysis
    • Elastic Stack
    • Splunk

     

    You might need to add definitions for the log record that are specific to that platform. Part 2 explains how you can do this for the Elastic Stack.
