Overview

Skill Level: Beginner

IOALA, Java

This recipe describes how to create an Insight Pack for IBM Log Analysis v1.3.5. I will walk through building a performance-optimized Insight Pack capable of splitting and annotating logs generated by the Log4j framework.

Ingredients

  • IBM Operations Analytics - Log Analysis version 1.3.5
  • IBM Installation Manager (IIM)
  • Eclipse Juno SR2
  • Oracle Java JDK 1.7
  • IBM InfoSphere BigInsights Eclipse Plugin
  • Log Analysis Eclipse Plugin

 

Step-by-step

  1. Install IBM Log Analysis

    This recipe is not intended to describe in detail how to install IBM Log Analysis; refer to the IBM documentation for more details. The installation will be performed on RHEL Server 7.2 64-bit, using the non-root user netcool.

     1. Download Required Packages

    • Download IBM Installation Manager (IIM) v1.8.9.2 from this IBM Support Fix Central URL using your IBM ID.
    • Download IBM Log Analysis for Linux on System x86_64: go to your Passport Advantage account and search for part number CNE6BEN.

     2. Prepare Environment

    Perform the following steps as root user:

    a. Change the hostname to be fully qualified, using a domain name like ".ibm.com":
    [root@localhost]# vi /etc/hostname
    [root@localhost]# systemctl restart systemd-hostnamed

    As a check, run the following commands; you should get the short and FQDN names:

    [root@localhost]# hostname -s
    analytics
    [root@localhost]# hostname -f
    analytics.ibm.com

    b. Enable 32-bit libs in yum
    [root@localhost]# vi /etc/yum.conf

    multilib_policy=all

    c. Configure /etc/hosts to contain both the short name and the fully qualified name for the server.
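    For example (the IP address is illustrative; use your own server's address):
    [root@localhost]# vi /etc/hosts

    192.168.1.10   analytics.ibm.com   analytics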

    d. Disable SELinux
    [root@localhost]# vi /etc/selinux/config

    SELINUX=disabled

    e. Disable firewall
    [root@localhost]# systemctl status firewalld
    [root@localhost]# service firewalld stop
    [root@localhost]# systemctl disable firewalld

    f. Configure ulimits
    [root@localhost]# vi /etc/security/limits.conf

    * hard nofile 200000
    * soft nofile 200000
    * hard nproc 200000
    * soft nproc 200000

    [root@localhost]# vi /etc/security/limits.d/20-nproc.conf
    * soft nproc unlimited
    root soft nproc unlimited

    g. Change OS locale:
    [root@localhost]# vi /etc/locale.conf
    LANG="en_US"

    h. Create the netcool user
    [root@localhost]# useradd -d /home/netcool -m -s /bin/bash netcool
    [root@localhost]# passwd netcool
    (set the password, for example object00, when prompted)

    i. Create the application directory and set its ownership
    [root@localhost]# mkdir /opt/IBM
    [root@localhost]# chown -R netcool:netcool /opt/IBM

    j. Install libraries
    [root@localhost]# yum install -y ksh* python* libXp* libXmu openmotif* libgcc* gcc* glibc* nss-softokn* nss* compat-glibc* compat-libstdc++* libstdc* libstdc++* libXtst* libpng12*

     3. Install IIM

    Log Analysis must be installed with IIM in User Mode; with IIM installed in Group Mode you will not be able to install the IBM Tivoli Log File Agent (LFA).

    As the netcool user:

    a. Create directories required for IIM installation
    [netcool@analytics ~]$ mkdir -p /opt/IBM/IM
    [netcool@analytics ~]$ cd /opt/IBM/IM
    [netcool@analytics IM]$ mkdir -p InstallationManager/eclipse
    [netcool@analytics IM]$ mkdir IBMIMData
    [netcool@analytics IM]$ mkdir IBMIMShared

    b. Go to the directory where you have the IIM zip file, and unzip it
    [netcool@analytics IM]$ cd /opt/IBM/software
    [netcool@analytics software]$ unzip agent.installer.linux.gtk.x86_64_1.8.9002.20181015_1517.zip

    c. Install IIM in user mode
    [netcool@analytics software]$ ./userinstc -dataLocation /opt/IBM/IM/IBMIMData -installationDirectory /opt/IBM/IM/InstallationManager/eclipse -configuration /opt/IBM/IM/etc -acceptLicense

     

     4. Install Log Analysis

    As the netcool user:
    1. Extract the IBM Log Analysis tar file
    2. Run IIM
         [netcool@analytics ~]$ cd /opt/IBM/IM/InstallationManager/eclipse/
         [netcool@analytics eclipse]$ ./IBMIM &
    3. Configure Repository to add the diskTag.inf under the newly extracted Log Analysis tar file
         File -> Preferences -> Add Repository
    4. Return to IIM menu and click Install to start installing
         a. Select installation package
         b. Accept license
         c. Shared Resources directory: /opt/IBM/IM/IBMIMShared
         d. Installation directory: /opt/IBM/LogAnalysis
         e. Select all features to install; you must see the IBM Tivoli Log File Agent 06.30.00.04 feature active

    [Screenshot: LFA feature selection]
         f. Use default ports
         g. Continue till the end of the installation

    5. Start Log Analysis, then check that its status is correct

    [netcool@analytics ~]$ /opt/IBM/LogAnalysis/utilities/unity.sh -start
    Thu Nov 8 14:44:20 EET 2018
    IBM Operations Analytics – Log Analysis v1.3.5.0 ENTRY EDITION Application Services Status:
    -------------------------------------------------------------------------------
    No.  Service                     Status    Process ID
    -------------------------------------------------------------------------------
    1    Derby Network Server        DOWN
    2    ZooKeeper                   DOWN
    3    Websphere Liberty Profile   DOWN
    4    EIF Receiver                DOWN
    5    Log File Agent instance     DOWN
    -------------------------------------------------------------------------------
    Getting status of Solr on analytics.ibm.com
    Status of Solr Nodes:
    -------------------------------------------------------------------------------
    No.  Instance Name       Host                Status  State
    -------------------------------------------------------------------------------
    1    SOLR_NODE_LOCAL     analytics.ibm.com   DOWN    ACTIVE
    -------------------------------------------------------------------------------
    All Application Services are in Stopped State
    Starting IBM Operations Analytics – Log Analysis v1.3.5.0 ENTRY EDITION Application Services…
    -------------------------------------------------------------------------------
    Starting Derby Network Server…
    Started Derby Network Server…
    Starting ZooKeeper Service…
    JMX enabled by default
    Using config: /opt/IBM/LogAnalysis/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Started ZooKeeper Service…
    Starting Solr on analytics.ibm.com
    Started Solr with PID 10125
    Maximum Solr start wait time set to 5 mins
    Starting Websphere Liberty Profile…
    Starting server Unity.
    Server Unity started with process ID 10568.
    Started Websphere Liberty Profile…
    Starting EIF Receiver…
    Started Data Collection Application…

    Starting Log File Agent Instance – default_workload_instance…
    Processing. Please wait…
    Starting Tivoli Log File Agent …
    Tivoli Log File Agent started
    Started Log File Agent Instance – default_workload_instance…
    -------------------------------------------------------------------------------
    Started All Services…
    [netcool@analytics ~]$
    [netcool@analytics ~]$
    [netcool@analytics ~]$
    [netcool@analytics ~]$ /opt/IBM/LogAnalysis/utilities/unity.sh -status
    Thu Nov 8 14:49:41 EET 2018
    IBM Operations Analytics – Log Analysis v1.3.5.0 ENTRY EDITION Application Services Status:
    -------------------------------------------------------------------------------
    No.  Service                     Status    Process ID
    -------------------------------------------------------------------------------
    1    Derby Network Server        UP        9901
    2    ZooKeeper                   UP        9959
    3    Websphere Liberty Profile   UP        10568
    4    EIF Receiver                UP        10749
    5    Log File Agent instance     UP        11134
    -------------------------------------------------------------------------------
    Getting status of Solr on analytics.ibm.com
    Status of Solr Nodes:
    -------------------------------------------------------------------------------
    No.  Instance Name       Host                Status  State
    -------------------------------------------------------------------------------
    1    SOLR_NODE_LOCAL     analytics.ibm.com   UP      ACTIVE
    -------------------------------------------------------------------------------
    All Application Services are in Running State
    Checking server initialization status: Server has initialized!
    [netcool@analytics ~]$

    6. Log in to the Unity GUI: open https://analytics.ibm.com:9987/Unity/login.jsp in a web browser (analytics.ibm.com is my server's FQDN; replace it with yours) and log in as user unityadmin with password unityadmin.

    [Screenshot: Unity login page]

     

  2. Prepare Eclipse IDE

    In this section I will describe how to correctly prepare the Eclipse IDE and install the required IBM plugins. I tested the installation on Windows 7 and Red Hat Enterprise Linux Server 7.2, and it was successful on both. This recipe focuses on development on Windows 7; the deployment to Log Analysis, however, is done on RHEL 7.2.

    1. Download Required Packages
       Download Eclipse  

      Insight Pack development requires Eclipse Juno SR2 for Java EE Developers; download the version that matches your operating system from this link:

       http://www.eclipse.org/downloads/packages/release/juno/sr2/eclipse-ide-java-ee-developers

       Download Oracle JDK 1.7  

      Eclipse Juno SR2 ships with Ant 1.8.3, which runs with Oracle JDK 1.7 but not with higher versions. Note that you should download the JDK, not the JRE, as the Ant launcher expects a JDK.

      Go to https://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html

      Select the Oracle Java SE Development Kit 7u80 suitable for your operating system, accept the license, and download it.

       Download IBM InfoSphere BigInsights Plugin  

      You need to download the IBM InfoSphere BigInsights Eclipse tools; they are used as Eclipse project libraries. They give you the ability to develop and deploy applications to a BigInsights server using Java MapReduce, JAQL, and Hive, and they also support developing text analysis programs (extractors) that run on InfoSphere BigInsights.

      To download the tools (a single zip file), go to your IBM account, search for OPERATIONS_ANALYTICS_LA1.3.2_LIN.zip, and download it.

       Retrieve Log Analysis Eclipse Plugin  

      You need to retrieve the Log Analysis Eclipse plugin for Insight Pack developers. The plugin comes with your Log Analysis installation.

      Go to /opt/IBM/LogAnalysis/unity_content/tools and get the LogAnalysis_1.0.4.v201610070752.zip from there.

    2. Install Eclipse IDE
       Installing the Eclipse IDE is simple: just extract the downloaded archive to a directory of your choice.
    3. Install Oracle JDK
       Double-click the downloaded JDK executable and follow the installation steps to the end.
    4. Install Log Analysis plugin
       a.    Go to the directory where you installed Eclipse

       b.    Run Eclipse and select the workspace (directory) where you want your projects to reside.

       

      c.    Click on Help in the toolbar, then select "Install New Software"


      d.    The Install window opens; click the "Add" button


      e.    The Add Repository window opens; click the "Archive" button


      f.    Select the LogAnalysis_1.0.4.v201610070752.zip plugin, and click the OK button


      g.    Select Log Analysis, and click the "Next" button


      h.    Complete the installation steps to the end; it should finish without error. Restart the IDE when asked to do so.


       i.    When the IDE restarts, check the success of the installation by clicking File -> New -> Project


      j.    You should find "Insight Pack Project" under the "Log Analysis" folder


    5. Install BigInsights plugin
       a.    Repeat steps 2-5 from "Install Log Analysis plugin"

       b.    Select the OPERATIONS_ANALYTICS_LA1.3.2_LIN.zip plugin


       c.    Select the IBM InfoSphere BigInsights category and click the Next button. Don't select the IBM InfoSphere BigInsights Patches category; it is required only when installing the BigInsights tools for Eclipse into environments with existing ILOG components, like Data Studio 4.1 or Rational Software Architect 9.0.


       d.    Complete the installation steps to the end; it should finish without error. Restart the IDE when asked to do so.


      e.    After the restart, you should see the "Task Launcher for Big Data" launched automatically.


  3. Create Insight Pack Eclipse Project

     1. Start your Eclipse IDE
     2. Click File -> New -> Project

     3. Scroll down and select "Log Analysis -> Insight Pack Project", or use the quick filter

     4. Choose a name for your project, then click Finish


     5. Expand the project in the Project Explorer view


     
     6. Change the JRE System Library to the JRE 1.7 that ships with the JDK 1.7 downloaded in the "Prepare Eclipse IDE" step. Skip this step if your Eclipse IDE already picks up the JRE of the JDK 1.7 installed on your environment.

     a. Right-click on the project name and select Properties from the context menu, or just select the project and hit Alt+Enter


     b. Select Java Build Path from the left panel (if not already selected), go to the Libraries tab, select the existing JRE, then hit the Remove button

     

     c. Click the "Add Library" button; the "Add Library" window opens so you can select your preferred library. Select JRE System Library, then hit the Next button


     d. Select "Alternate JRE", then hit the "Installed JREs" button

     

     e. The "Installed JREs" window opens; hit the "Add" button


     f. Select "Standard VM" as the "JRE Type" and click "Next"


     g. The "JRE Definition" window opens. Click the "Directory" button and select the directory where you installed the JDK, then click the "Finish" button; the other fields will be populated automatically.


     h. Tick JDK 1.7 to select it, then click the OK button


     i. Select the newly added JDK 1.7 from the "Add Library" window, and click the "Finish" button to return to the project properties window


     j. You should see the newly added JDK 1.7 in the Java Build Path; click the "OK" button to close the properties window


    7. Update the project compiler compliance level to match JDK 1.7

    [Screenshot: project compiler level settings]

     8. Create a new project folder, name it lib


     

     9. From the Log Analysis installation directory get these jars: commons-lang-2.6.jar, JSON4J.jar, log4j-1.2.16.jar, unity-analytics-framework.jar, unity-data-ingestion.jar, and copy them into the newly created lib folder. You will find them under /opt/IBM/LogAnalysis/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/lib
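     One way to fetch them from the server (assuming an scp client on your development machine; the braces are expanded by the remote shell):

     scp "netcool@analytics.ibm.com:/opt/IBM/LogAnalysis/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/lib/{commons-lang-2.6.jar,JSON4J.jar,log4j-1.2.16.jar,unity-analytics-framework.jar,unity-data-ingestion.jar}" lib/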


     10. Add the jars to the project Java build path: select all jars under the lib folder, right-click on them, and select "Build Path" -> "Add to Build Path".


    11. Add logging support to the insight pack

     a. Right-click on the project and select New -> File
     b. In the New File window, set the file name to log4j.properties

     c. Copy/paste the following content into the newly created log4j.properties file; this enables INFO-level logging in the Unity server logs:

    #Custom logger factory
    log4j.loggerFactory=com.ibm.tivoli.unity.common.logging.UnityLoggerFactory
    # define the root logger with two appenders writing to console and file
    #log4j.rootLogger=INFO, CONSOLE, FILE
    log4j.rootLogger=INFO, STDOUT, UNITY_GR_CONSOLE

    # Define a console appender
    # In this case we are going to contribute to the Generic Receiver
    # console appender.
    #log4j.logger.UnityGenericReceiver=DEBUG,STDOUT
    log4j.logger.UnityGenericReceiver=INFO,UNITY_GR_CONSOLE
    log4j.appender.UNITY_GR_CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.UNITY_GR_CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.UNITY_GR_CONSOLE.layout.ConversionPattern=%d [%t] %-5p %C - %m%n
    log4j.appender.UNITY_GR_CONSOLE.threshold=INFO

    log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
    log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
    log4j.appender.STDOUT.layout.ConversionPattern=%d{ISO8601} %c{2} %p %m%n

  4. Develop Insight Pack

    An Insight Pack is composed of two Java classes. The first class, the splitter, checks the validity of each log line and trims and prunes it; it implements the interface com.ibm.tivoli.unity.splitterannotator.splitter.IJavaSplitter. The second class, the annotator, parses the log line into tokens and constructs a JSON object from those tokens; it implements the interface com.ibm.tivoli.unity.splitterannotator.annotator.IJavaAnnotator.

     Note: the references section links to a GitHub repository with a fully functioning Eclipse project, including a large log file for testing under the logSamples folder.

     The split function takes the whole log text as a single JSON object and returns an array of JSON objects, each of which represents a single line of the log text after pruning. The annotate function takes a single object from the returned array as input and tokenizes its content.
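
     As a minimal sketch of the splitter side (the split() signature and the "text" key are my assumptions based on the description above; check the interface javadoc shipped with the Log Analysis plugin for the exact contract):

     import java.util.ArrayList;
     import com.ibm.json.java.JSONObject;
     import com.ibm.tivoli.unity.splitterannotator.splitter.IJavaSplitter;

     // Sketch only: emit one JSON record per pruned log line.
     public class ABCClientLogSplitter implements IJavaSplitter {

         public ArrayList<JSONObject> split(JSONObject batch) throws Exception {
             String text = (String) batch.get("text");   // whole log text (key name assumed)
             ArrayList<JSONObject> records = new ArrayList<JSONObject>();
             for (String line : text.split("\n")) {      // the real class avoids split(); see Performance Tuning
                 JSONObject record = new JSONObject();
                 record.put("text", ltrim(line));        // prune stray leading characters
                 records.add(record);
             }
             return records;
         }

         // Left trim: drop leading control and space characters.
         private static String ltrim(String s) {
             int i = 0;
             while (i < s.length() && s.charAt(i) <= ' ') {
                 i++;
             }
             return s.substring(i);
         }
     }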

     So, how should you start developing your split and annotate classes? Begin by looking into your log file and determining a common pattern for its lines. The log file I received from my client followed this common pattern:

     YYYY-MM-dd HH:mm:ss,SSS | Producer-Name | Severity | java-method(java-class.java:line-number) | Message

     e.g.

     2017-12-03 00:01:27,101 | Producer-CALLBACK-86 | INFO | com.abcClient.subscription.dao.CallbackDAO.getPendingRequests(CallbackDAO.java:125) | [0] pending callbacks retrieved in [53] milliseconds.

     However, there were some violations of that pattern:

    1. At the beginning of each line there were some stray characters, so I used a left-trim (ltrim) method in the split class to remove them.

    [Screenshot: stray characters at the beginning of log lines]

    2. Some lines consist of runs of asterisk characters, which tripped up the annotate method, so inside the split method I generate a synthetic line to flag the error.

    [Screenshot: lines of asterisk characters]

     

    Finally, in the annotate method I tokenize each line and return a JSON object for it. The object's schema will be used to render the search results after deployment into Log Analysis. For example:

    {
       "content":{
          "text":"2017-12-03 00:00:02,152 | Producer-CALLBACK-71 | INFO  | com.abcClient.subscription.dao.CallbackDAO.updateCallbackReceivedRequests(CallbackDAO.java:147) | Updating callback staus to Q for processor [102]"
       },
       "annotations":{
          "LineNumber":"147",
          "Severity":"INFO",
          "Time":"00:00:02,152",
          "Message":"Updating callback staus to Q for processor [102]",
          "Date":"2017-12-03",
          "Producer":"Producer-CALLBACK-71",
          "JavaMethod":"com.abcClient.subscription.dao.CallbackDAO.updateCallbackReceivedRequests",
          "JavaClass":"CallbackDAO.java"
       }
    }
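
     For illustration, here is a simplified sketch of the tokenization that produces such an object (this is my own condensed version; the full annotator is in the GitHub repository linked in the references):

     import com.ibm.json.java.JSONObject;

     // Illustration only: parse one pruned line into the annotation fields shown above.
     public class TokenizeSketch {

         static JSONObject tokenize(String line) {
             // timestamp | producer | severity | method(class.java:line) | message
             String[] parts = line.split("\\s*\\|\\s*", 5);   // limit 5 keeps any '|' inside the message intact
             String[] ts = parts[0].split(" ", 2);            // "YYYY-MM-dd" and "HH:mm:ss,SSS"

             String loc = parts[3];                           // e.g. ...updateCallbackReceivedRequests(CallbackDAO.java:147)
             int open = loc.lastIndexOf('(');
             int colon = loc.lastIndexOf(':');

             JSONObject annotations = new JSONObject();
             annotations.put("Date", ts[0]);
             annotations.put("Time", ts[1]);
             annotations.put("Producer", parts[1]);
             annotations.put("Severity", parts[2]);
             annotations.put("JavaMethod", loc.substring(0, open));
             annotations.put("JavaClass", loc.substring(open + 1, colon));
             annotations.put("LineNumber", loc.substring(colon + 1, loc.length() - 1));
             annotations.put("Message", parts[4]);
             return annotations;
         }
     }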

  5. Performance Tuning

    As you may have guessed, passing a large log file (on the order of gigabytes) has a performance impact on the split method. The impact stems from the fact that the standard Java APIs for splitting, taking substrings of, and replacing a string allocate each result as a single contiguous block of memory. The memory manager takes time to find each requested contiguous block before returning it, and if it cannot find a block large enough, your application fails with a heap memory error.

    As a solution I wrote modified versions of the standard Java methods, replacing those static allocations with dynamic ones. That saved a huge amount of time: instead of taking more than an hour and a half to process a 1.4 GB file and then failing with a heap error, processing took only 45 minutes on a Windows 7 machine with 32 GB of memory.

    In the ABCClientLogSplitter class code you will find these custom methods:

    • splitString
    • substring
    • replaceOnce
    • replace
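
    To give a flavor of the approach, here is a simplified sketch of a splitString along those lines (not the exact code from the repository): it scans the input once with indexOf on a literal delimiter, so no regex is compiled and the only new allocations are the token strings themselves.

    import java.util.ArrayList;
    import java.util.List;

    public class StringUtil {

        // Split on a literal delimiter character in a single pass, without regex.
        public static List<String> splitString(String input, char delimiter) {
            List<String> tokens = new ArrayList<String>();
            int start = 0;
            int end;
            while ((end = input.indexOf(delimiter, start)) >= 0) {
                tokens.add(input.substring(start, end));
                start = end + 1;
            }
            tokens.add(input.substring(start));   // trailing token after the last delimiter
            return tokens;
        }
    }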

     

  6. Build Insight Pack

    You need to configure the indexing system (Apache Solr) with the field names. Those field names come from the tokens split out by the annotate method. Recall that the log line follows this pattern:

    YYYY-MM-dd HH:mm:ss,SSS | Producer-Name | Severity | java-method(java-class.java:line-number) | Message

    So the annotate method contains these fields:

    1. Producer
    2. Severity
    3. JavaMethod
    4. JavaClass
    5. LineNumber
    6. Message

    I also thought it would be helpful to split the timestamp into two separate fields: a Date field and a Time field. Note that the timestamp field is configured in the indexing schema by default.

    To complete the build process perform the following steps:

     1. Double-click on indexconfig.spec.ucdk to open it and go to the Field Configuration tab

    • Click the Add button to add the eight fields in the order shown; this is the order in which the columns will appear after the Insight Pack is deployed. All fields are of Text type, all attributes are selected, and the Path is the field name preceded by "annotations." (for example, annotations.Severity).

    [Screenshot: field configuration order]

    • Change the timestamp format (yyyy-MM-dd HH:mm:ss,SSS) to reflect the one that appears in the log file

    [Screenshot: timestamp format]

    • Save configuration

     2. Create insight pack jar

    • Modify the content of build_fileSetJar.xml as follows and save it:

    <project>
        <target name="clean">
            <delete dir="build"/>
        </target>

        <!-- Compile custom Java source code into class files -->
        <target name="compile" description="Compile splitter/annotator Java source code into class files">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes">
                <classpath refid="compile.classpath"/>
            </javac>
            <copy file="log4j.properties" todir="build/classes"/>
        </target>

        <!-- Create JAR files containing custom Java class files -->
        <target name="jar" depends="compile" description="Create JAR files containing custom Java class files">
            <jar destfile="src-files/extractors/fileset/java/ABCClientExtractor.jar" basedir="build/classes">
            </jar>
        </target>

        <target name="run">
        </target>

        <!-- Define the CLASSPATH -->
        <path id="compile.classpath">
            <fileset dir="./lib">
                <include name="*.jar"/>
            </fileset>
            <fileset dir="build/classes">
                <include name="*/**"/>
            </fileset>
        </path>
    </project>

    • Build the jar: right-click on build_fileSetJar.xml, run it as an Ant build, and make sure you get a BUILD SUCCESSFUL message in the console:

    [Screenshot: Run As > Ant Build]

    [Screenshot: Ant build configuration]

    [Screenshot: BUILD SUCCESSFUL console output]

     

    • Check that the jar was created: refresh the project by pressing F5:

    [Screenshot: jar created under the project]

     

     3. Define the source type that will be created in Log Analysis after the Insight Pack is deployed:

    • Double-click on insightpack_spec.ucdkt (don’t forget to save after you finish)
    • Go to the File sets tab to define new file sets for the splitter and the annotator

    [Screenshot: splitter file set]

     

    [Screenshot: annotator file set]

     

    • Go to the Source types tab to define your source type

    [Screenshot: source type definition]

     4. Build Insight pack zip file

    • Right click on the project, and select Build Insight Pack

    [Screenshot: Build Insight Pack menu item]

    • After a successful build, locate the zip file under the dist folder:

    Insight Pack build is successful.
    C:\Users\IBM_ADMIN\workspace-juno-SR2\ABCClientInsightPack\dist\ABCClientInsightPack_v1.0.0.0.zip

  7. Deploy Insight Pack

    To deploy the Insight Pack, do the following as the netcool user:

    • Copy the Insight Pack (ABCClientInsightPack_v1.0.0.0.zip) from the dist folder under the Eclipse project to the Log Analysis server
    • Execute the command: /opt/IBM/LogAnalysis/utilities/pkg_mgmt.sh -install ABCClientInsightPack_v1.0.0.0.zip
  8. Test Insight Pack

    1. Create a Data Source

    a. Open your Unity console at https://analytics.ibm.com:9987/Unity/login.jsp (where analytics.ibm.com is my hostname; replace it with yours)

    b. Open Administrative Settings

    c. Go to the Data Types tab. Expand the Source Types panel and make sure you can find ABCClientLog_ST. Expand the File Sets panel and make sure you can find ABCClientLogSplitter and ABCClientLogAnnotator.

    d. Go to the Data Sources tab and click the Add Data Source button; the "Add Data Source" window opens. The most important configuration is in the Select Data tab. I split the 1.4 GB log file into 14 files of 100 MB each and enabled the rolling-file feature. Don't put all 14 files in place at the same time while configuring the data source; it is advisable to move them into the logging directory one by one. See the example command below, then the screenshots.
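
    One way to produce the 100 MB chunks on the server (GNU coreutils split; the file name abcclient.log is illustrative):

    [netcool@analytics ~]$ split -b 100m -d abcclient.log abcclient.log.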

    [Screenshot: Add Data Source, step 1]

    [Screenshot: Add Data Source, step 2]

    [Screenshot: Add Data Source, step 3]

    2. Configure Search

    a. Return to the Unity console, and select the New Search tab

    b. Select "Last Year" and "abcclient" as shown below

    [Screenshot: search scope and data source selection]

    c. Click the Search button. Notice that you will see the synthetic logs first; go further and filter by date

    [Screenshot: search results]

    3. IBM Log Analysis is preconfigured to return only 1000 search hits; you may need to increase that by doing the following:

    • Edit /opt/IBM/LogAnalysis/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/unitysetup.properties to change the property MAX_SEARCH_RESULTS to 100000
    • Restart Unity server
    • Change the formdataUploadLimitInKB property in the two files /opt/IBM/LogAnalysis/solrConfigs/coldTier/solrconfig.xml and /opt/IBM/LogAnalysis/solrConfigs/hotTier/solrconfig.xml. I changed it to 16777216 (16 GB) as my server has 24 GB of RAM.
    • To apply the formdataUploadLimitInKB changes, run the following (no need to restart the Unity server):

      export UNITY_HOME=/opt/IBM/LogAnalysis
      export WLP_HOME=$UNITY_HOME/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/lib
      export CLASSPATH=.:$WLP_HOME/*
      $UNITY_HOME/ibm-java/bin/java -cp $CLASSPATH com.ibm.tivoli.unity.solr.util.SolrConfigUpdater localhost 12181 $UNITY_HOME

     

  9. Handy Tools

    While working with Log Analysis you will need some handy tools, for example to delete a data source from the file system to save space, or to install and uninstall the pack.

    1. IBM Log Analysis comes with a delete tool that supports four use cases. usecase_3 and usecase_4 delete indices permanently from the file system. usecase_1 and usecase_2 only mark data for deletion, so the data no longer appears in searches but is still present in the index on disk.

    2. Use the Solr UI on port 8983 (http://analytics.ibm.com:8983/solr) to check whether the indexed data is still there

    a. Select a core from the core selector drop-down list

    b. Click the search (magnifying glass) icon to perform a select query; if nothing is returned, the data was deleted permanently

    [Screenshot: Solr admin query view]

     3. Tail the /opt/IBM/LogAnalysis/logs/GenericReceiver.log file while your data is being ingested by the Insight Pack to monitor its progress.
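
     For example:

     [netcool@analytics ~]$ tail -f /opt/IBM/LogAnalysis/logs/GenericReceiver.log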

  10. References

    1. Insight Packs (Create), https://developer.ibm.com/itom/docs/log-analysis/insight-pack-creating/
    2. Editing the Index Configuration, https://www.ibm.com/support/knowledgecenter/en/SSPFMY_1.3.5/com.ibm.scala.doc/extend/iwa_extend_tooling_editingindexconfig.html
    3. Source code on github as an Eclipse project: https://github.com/ahmedasaleh/IOALA_InsightPack

     
