Forwarding MQ JSON error logs to Elasticsearch



Matthew Whitehead
Published on 02/08/2019

 

Introduction

In 2016 I wrote an article demonstrating how you could take MQ’s error logs and send them to Elasticsearch for centralised logging and analysis. While that article showed some useful ways of centralising your MQ error logs, it also showed how complicated it was to parse MQ’s multi-line error log format into individual log entries that Elasticsearch could handle. Since then MQ has introduced an alternative JSON format for error logs, first available in MQ 9.0.4. This article demonstrates how to take those JSON formatted error logs and send them to Elasticsearch, with far simpler configuration and no need to handle multi-line error messages.

MQ JSON error logs

The Diagnostic message logging topic in the Knowledge Center gives an overview of the different error logging options you have and how to configure them. For the purposes of this article I’m going to enable JSON error logs for 3 queue managers, each writing to /var/mqm/qmgrs/QMNAME/errors/AMQERR0x.json.

You can also enable JSON logging for the system-wide error logs and for client error logs.
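As a sketch of what that looks like (the DiagnosticSystemMessages stanza name below is my reading of the documentation, so treat it as an assumption and confirm it against the Diagnostic message logging topic for your MQ level), the equivalent stanza for the system-wide error logs goes in mqs.ini rather than qm.ini:

DiagnosticSystemMessages:
   Service = File
   Name = JSONLogs
   Format = json
   FilePrefix = AMQERR

The same stanza in mqclient.ini should enable JSON logging for client error logs.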

Benefits of using JSON error logs

There are several reasons why you might want to configure MQ to write error logs as JSON:

  • To parse them more easily into their constituent parts, for example to extract the error code and QM name from each message
  • To make it easier to iterate over a list of error messages, because each JSON entry is on a single line in the file
  • To forward and consume the logs in tooling designed to store logs from multiple sources

It’s this last point that is the focus of this article. I’m going to be using Elasticsearch 7.2.1, Kibana 7.2.1 and Filebeat 7.3 to collect the error logs for all 3 of my queue managers. Filebeat will consume log entries as they are written to the files, pull out all of the JSON fields for each message, and forward them to Elasticsearch. I’ll then use Kibana to browse the log entries and create some visualisations to help me understand what is happening with my queue managers.

Turning on JSON error logs for a queue manager

To configure JSON errors I’m going to add the following stanza to the qm.ini file for each of my queue managers:

DiagnosticMessages:
   Service = File
   Name = JSONLogs
   Format = json
   FilePrefix = AMQERR

It is not possible to disable the existing AMQERR0x.LOG files, so the JSON error logs will be written in addition to the usual text error logs.
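Queue managers only pick up qm.ini changes when they restart, so after adding the stanza each queue manager needs to be stopped and started again, for example:

endmqm QM1
strmqm QM1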

Once I have done this and restarted my queue managers, the error directories for each queue manager now look like this:

mwhitehead@ubuntu:~$ ls /var/mqm/qmgrs/QM1/errors/
AMQERR01.json  AMQERR01.LOG  AMQERR02.json  AMQERR02.LOG  AMQERR03.json  AMQERR03.LOG
mwhitehead@ubuntu:~$

If we now take a look in AMQERR01.json we’ll see entries like this:

{"ibm_messageId":"AMQ5041I","ibm_arithInsert1":0,"ibm_arithInsert2":0,"ibm_commentInsert1":"LOGGER-IO",...

{"ibm_messageId":"AMQ8004I","ibm_arithInsert1":0,"ibm_arithInsert2":0,"ibm_commentInsert3":"QM1",...

{"ibm_messageId":"AMQ5029I","ibm_arithInsert1":10621,"ibm_arithInsert2":0,"ibm_commentInsert1":"SYSTEM.AMQP.SERVICE",...

You can see that each message is a separate JSON object on its own line, and each part of the error message is in its own JSON field. That makes it easy, with standard JSON tooling, to pull out all of the message IDs by extracting just the ibm_messageId field, or the message text itself by extracting just the message field.
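As a quick sketch using jq (assuming you have it installed), this prints exactly those two fields for every entry in QM1’s current log:

jq -r '.ibm_messageId + " " + .message' /var/mqm/qmgrs/QM1/errors/AMQERR01.json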

Configuring Filebeat to consume the files as JSON and forward them to Elasticsearch

Now that I have MQ error logs being written as JSON objects I can configure Filebeat to read the JSON files and forward them to Elasticsearch. I’m going to edit the default Filebeat configuration in /etc/filebeat/filebeat.yml. This file contains a lot of configuration options covering which files to read, how to parse them, and where to send the results. Below is the filebeat.inputs section I have configured in filebeat.yml; the rest of the configuration file has been left at its default settings:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/mqm/qmgrs/*/errors/*.json
  json.keys_under_root: true
  json.message_key: message

There are a few points to note about this configuration. Firstly, by using “/*/” in the path where the queue manager name would go, I can have Filebeat look for files in all of my queue manager directories. Secondly, by specifying the json.keys_under_root attribute I can tell Filebeat to extract each of the JSON fields in the MQ error messages and put them as top-level fields in the Elasticsearch message. This makes it easier to select and filter on each of the MQ JSON fields in Kibana.
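Two related Filebeat options are worth knowing about if you are concerned about malformed lines or key clashes. This is a sketch of the same input section with json.add_error_key, which adds an error field to the event when a line fails to decode as JSON, and json.overwrite_keys, which lets the decoded MQ fields win if they conflict with keys Filebeat has already set:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/mqm/qmgrs/*/errors/*.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.message_key: message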

Depending on where you have installed Elasticsearch and Kibana you may need to modify Filebeat’s default output configuration. If Elasticsearch and Kibana are running on the same host as Filebeat, on the default ports, the defaults should work as they are: simply start Filebeat and it will begin reading the MQ error logs and sending them to Elasticsearch.
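For completeness, this is a minimal sketch of the relevant output section of filebeat.yml, assuming Elasticsearch is listening on its default port on the same host:

output.elasticsearch:
  hosts: ["localhost:9200"]

On a systemd-based Linux install, Filebeat can then be started with:

sudo systemctl start filebeat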

Viewing and filtering error messages in Kibana

When you first load Kibana you will be asked to create an index pattern for your Filebeat data. Follow the instructions in Kibana for creating one.

When creating the index pattern you will be asked to choose which timestamp field to use as the time index for error log messages. If you select the @timestamp field, messages will be indexed based on the time Filebeat read them from the MQ error logs. If you would rather index them based on the time MQ wrote them to the error logs, choose the ibm_datetime field as the timestamp index instead:

Selecting the MQ timestamp field
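For reference, here is roughly what the two fields look like in a single forwarded document (the values here are illustrative): @timestamp is when Filebeat read the line, while ibm_datetime is the ISO 8601 timestamp MQ itself wrote:

"@timestamp": "2019-02-08T10:15:23.000Z",
"ibm_datetime": "2019-02-08T10:15:22.781Z"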

Once you have created the index pattern and allowed Filebeat some time to read and send the MQ error logs you can use the Discover view in Kibana to preview the data in Elasticsearch. By default you get a histogram view showing the number of error log messages written for the given time period, and a table listing the error log messages and their fields. Here’s my Kibana view after starting it for the first time:

Kibana dashboard showing MQ error log messages

The default view is quite noisy, so the first thing we can do is choose the MQ message fields we are interested in. Using the list of available fields on the left of the dashboard, select the MQ fields you want to see: for example message, which is the translated MQ error message text, and ibm_serverName, which contains the name of the queue manager:

Kibana dashboard with specific MQ fields

This view is more useful than the default dashboard, but it still shows us every single MQ error log message, many of which we may not be interested in seeing. One way of restricting the types of message shown in the dashboard is to filter the view based on the value of one or more MQ fields. For example, perhaps we would like to see any authentication attempts that failed because the application didn’t provide the correct credentials for the current CONNAUTH setup. We can create a filter that only includes messages whose ibm_messageId starts with “AMQ5542”:

Filtering to only show AMQ5542 messages

Notice how I’ve also added a new field to the view – ibm_commentInsert1. MQ error log messages have up to 2 arithmetic inserts and 3 comment inserts. If we know what each of the inserts contains for particular message IDs we can see more useful information about the message. For AMQ5542 messages the ibm_commentInsert1 field contains the user ID that failed the CONNAUTH check so adding it to the dashboard allows me to see which user IDs are failing CONNAUTH checks and against which queue managers.
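As an alternative to building the filter with the UI, the same restriction can be typed into Kibana’s query bar. This is a sketch in KQL, assuming the default Filebeat mapping for the ibm_messageId field:

ibm_messageId : AMQ5542*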

Finally, if you decide that you are only ever going to want to see higher severity messages in Kibana, there is an option in MQ to configure the severity of the messages written to the error logs. This has the advantage that you won’t be unnecessarily sending every single MQ message to Elasticsearch, but the downside that if you want to see informational messages in the future you’ll have to update your MQ configuration. To limit the severity of messages sent to the MQ JSON error logs, add the Severities attribute to your DiagnosticMessages stanza:

DiagnosticMessages:
   Service = File
   Name = JSONLogs
   Format = json
   FilePrefix = AMQERR
   Severities = W+

By specifying W+, MQ will write only messages of warning severity or higher. You could specify E+ to get only error severity messages (or higher). Here’s a screenshot of my Kibana dashboard for a queue manager that only shows warning messages or higher. Note that I’ve also added the loglevel field to the dashboard:

MQ only writing warning severity messages or higher

Summary

Hopefully this article has given you an overview of the JSON error logging options in MQ and how they make it much easier to send MQ error log messages to Elasticsearch for analysis and visualisation.

 
