WebSphere Message Broker and IBM Integration Bus provide a comprehensive range of diagnostic tools that can help you find, diagnose and fix problems throughout the entire lifecycle of an integration solution. This article gives you a quick tour of these facilities, with pointers on the sort of problems each method is particularly well suited to solving.

Visual Flow Debugger
Visual Studio Debugger
User Trace
Trace Nodes
Log4J Node
Activity Log
Stats and Accounting
Web Statistics
Resource Statistics
Event Monitoring
Record / Replay
The Broker Error Log
Exception List
Standard Output and Standard Error

Visual Flow Debugger
Problem Type: Flow logic, Data, Java
Environment Type: Development, Test
Applicable Versions: All

The Visual Flow Debugger is built into the Toolkit and provides an interface for debugging a message flow in the same way you would debug an application. The debugger allows you to set breakpoints within your message flow and then step through the flow, following execution as it happens. It is also possible to view the content of message trees as the message is processed through the flow. You can even step into transform nodes and debug Java code or ESQL code line by line.

This makes the Visual Flow Debugger an excellent tool for understanding problems with flow logic; being able to quickly zoom in and track the behavior of a single message at any level of detail is an invaluable aid to message flow developers.

The Visual Flow Debugger is implemented as an extension to the Eclipse Java debugger. This means that if you step into a Java Compute Node and you have the source available in your workspace, you can use all the facilities of the Java debugger, including viewing the content of variables during message flow execution.

In order for the Visual Flow Debugger to work you must enable a debug port on the Execution Group or Integration Server that you are debugging. This can be done through Message Broker Explorer or through the command line using the command:

mqsichangeproperties &lt;broker name&gt; -e &lt;execution group name&gt; -o ComIbmJVMManager -n jvmDebugPort -v &lt;port number&gt;

Each Execution Group must use its own port, and this port must not be used by any other application on the system. After setting this parameter you must restart the Execution Group, because enabling the debug port for the Execution Group's underlying JVM must be done during the initial creation of the JVM.

Once the debug port is open you can attach to the JVM from the Message Broker Toolkit, which can be located on a different machine from the Broker being debugged. To do this, create a new debug configuration of type "Message Broker Debug" and fill in the relevant host and port details.

It is important to note that having a debug port enabled on an Execution Group does have a performance impact so the Visual Flow Debugger is not generally suitable for use on Performance Testing or Production systems.

For more information on configuring the Visual Flow Debugger, see the InfoCenter topic on debugging message flows.


Visual Studio Debugger
Problem Type: Flow Logic, Data, .NET
Environment Type: Development, Test (Windows only)
Applicable Versions: V8, V9

When developing with .NET on Windows platforms, the Visual Studio Debugger can be used to debug .NET assemblies deployed to the Broker, providing the same rich experience available to developers of standalone .NET applications. To use the Visual Studio debugger you simply attach to any Broker process hosting an App Domain from within the Visual Studio development environment, ensuring that the debugger is set to attach to "Managed Code".

Once you are debugging you can set breakpoints in .NET code and single step through execution as well as examine the contents of .NET variables during execution. The debugger will work with any of the .NET languages supported by the Broker runtime.

Unlike the Eclipse-based Java debugger, you do not need to open a port on the Broker before debugging. It should still be noted that debugging live on a production system is discouraged: while an application is being debugged, message flow processing threads are paused. This means that the Broker may not be able to process as much work as it otherwise could, and any requests to stop or start resources will not be processed until the debug session releases the paused thread.

For more information on using and configuring the Visual Studio Debugger, see the Using WebSphere Message Broker V8 in Mid-Market Environments Redbook.


User Trace
Problem Type: Flow Logic, ESQL
Environment Type: Development, Test, Production
Applicable Versions: All

You can use UserTrace to work out what path a message followed through a message flow, and why. These trace messages are a subset of the information logged in a full service trace, and the trace points are chosen to be meaningful to a message flow developer. The trace messages are translated into the broker's local language during the collection and formatting process.

UserTrace is written to binary log files stored in $MQSI_WORKPATH/common/log, and needs to be enabled, collected and formatted using a series of commands run on the same machine as the broker itself. UserTrace is particularly useful for diagnosing issues related to the ESQL code executed by Compute Nodes, because each ESQL expression that is evaluated logs a line of UserTrace with its result.

Other nodes produce events at key points where information about the activity in the node can help with problem diagnosis; for example, the MQInput node logs a UserTrace entry with the MQCC and MQRC every time it performs an MQGET.

UserTrace can also be set as the target destination for Trace Nodes (discussed below) and used to log application-specific tracing; you can even issue your own cciUserTrace calls from C user-defined nodes.

For more information on enabling and collecting user trace see Debugging with User Trace


Trace Nodes
Problem Type: Flow Logic, Parsing
Environment Type: Development, Test, Production
Applicable Versions: All

Trace Nodes are special nodes that you can place in your message flow to log information to the local error log, a file on the filesystem, or UserTrace. When you drag a trace node onto the canvas you can set the "Pattern" property to contain any correlation path that you would like to log. Whenever execution of a message flow passes through this node the expression is evaluated and the output sent to the destination you have selected.

Trace nodes are most commonly used to output portions of the message tree that are relevant to the section of flow that they are being logged from. For example, following a decision point in the flow a trace node may be configured to log the part of the message that the decision is based upon. Alternatively the trace nodes can be used to record the entire message tree as a form of logging.

While Trace Nodes are somewhat analogous to writing to the stdout stream from application code, there are several benefits to using them. Firstly, they can be turned on and off dynamically, meaning that on a production system they can be disabled by default to avoid the performance cost of tracing; if a problem is encountered, they can be dynamically enabled to help debug what went wrong.

Additionally, if you choose to write to UserTrace then the broker automatically handles rolling over the binary log files when they grow beyond the maximum trace size set with the mqsichangetrace command.

One key "gotcha" to remember when using Trace Nodes is that tracing part of the message tree can change the timing of the parsing performed on the message. For example, if the parse timing on an input node is set to "On Demand", then a trace node logging ${Root} immediately after the input node will cause the entire message to be parsed in order to output the trace data. This can subtly change the behaviour of the flow, so place Trace Nodes with care to ensure that the appropriate exception handling branches of the flow are executed if the parse performed by a trace node fails.
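The parse-timing effect is easy to picture with a toy example. The sketch below is plain Python rather than anything from the Broker API: a message body that is only parsed on first access behaves differently once something, such as a trace of the whole tree, touches it.

```python
# A toy "On Demand" message: the body is not parsed until first access.
class LazyMessage:
    def __init__(self, raw):
        self.raw = raw
        self._tree = None                       # parse deferred

    @property
    def tree(self):
        if self._tree is None:
            if not self.raw.startswith(b"<"):   # stand-in for a real parser
                raise ValueError("input is not well formed")
            self._tree = {"Root": self.raw.decode()}
        return self._tree

msg = LazyMessage(b"not xml at all")

# A flow that routes on headers alone never touches msg.tree, so the bad
# body goes unnoticed.  Tracing the whole tree forces the parse -- and
# surfaces the failure at the trace point instead:
try:
    print(msg.tree)
except ValueError as exc:
    print("parse failed at the trace point:", exc)
```

In the same way, a Trace Node logging ${Root} moves the point at which a parser exception can be thrown, which is why the surrounding error handling needs checking.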

Despite this restriction, the Trace Node can be invaluable in determining what data was actually being processed by the flow at different points, especially when the transport being used does not make it easy to log the messages in flight.

For more information about using Trace Nodes see the Trace Nodes InfoCenter Topic


Log4J Node
Problem Type: Flow Logic, Data, Java, Parsing
Environment Type: Development, Test, Production
Applicable Versions: V6.1, V7, V8

The Log4J Node is a Category 2 SupportPac which provides access to the Log4J Java logging framework through either a new plugin node or an ESQL API. The SupportPac can be downloaded from IBM SupportPacs and needs to be installed both on the toolkit where you will be developing your flows and on the runtime broker to which you will be deploying them.

The plugin node provided by the SupportPac presents an interface similar to a Trace Node: the parts of the message tree to be traced are given as an XPath expression. However, the output can be sent to any Log4J Logger defined in a specified XML configuration file, which has a few advantages over the built-in capability of Message Broker.

Firstly, the Log4J project supports a large number of options for rotating log files based on either size or date. This makes managing large amounts of logging data much easier on busy production systems.

Secondly, the node allows the user to specify a "severity" level for each log event. The severity level that is actively logged can be varied dynamically, meaning that by default a production system can be configured to log only "SEVERE" events; if a problem is encountered, the log level can be lowered so that fine-grained logging can be collected for problem determination.
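As an analogy (using Python's standard logging module rather than Log4J itself, but the level model is the same), the severity threshold can be changed at runtime without touching any of the logging call sites:

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

log = logging.getLogger("flow.claims")
log.addHandler(handler)
log.propagate = False
log.setLevel(logging.ERROR)        # production default: severe events only

log.debug("message tree detail")   # suppressed at ERROR level
log.error("queue unavailable")     # recorded

log.setLevel(logging.DEBUG)        # problem hit: raise verbosity on the fly
log.debug("message tree detail")   # now recorded too

print(buf.getvalue())
```

The call sites never change; only the threshold does, which is exactly what makes dynamic log levels safe to leave in production code.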

Since the Log4J SupportPac provides an ESQL interface, and Log4J can easily be used directly from within Java Compute Nodes, it is also possible to provide a unified logging approach throughout the whole message flow using this SupportPac.

For more details see Configuring the Log4j node in WebSphere Message Broker.


Activity Log
Problem Type: Operational Monitoring
Environment Type: Production
Applicable Versions: v8 and above

ActivityLog is a lightweight record of the most important actions performed by flows during operation. It is enabled by default and acts like an in-flight data recorder: a set number of entries are cached per thread and then overwritten in a circular fashion, so that the most recent activity for each thread is always available.
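The circular-buffer idea can be sketched in a few lines. This is an illustration of the concept only, not the Broker's actual implementation:

```python
from collections import deque

class ActivityRecorder:
    """Keep only the N most recent entries per thread, like a flight recorder."""

    def __init__(self, entries_per_thread=5):
        self.n = entries_per_thread
        self.logs = {}

    def record(self, thread_id, entry):
        # deque with maxlen silently discards the oldest entry when full
        self.logs.setdefault(thread_id, deque(maxlen=self.n)).append(entry)

    def recent(self, thread_id):
        return list(self.logs.get(thread_id, []))

rec = ActivityRecorder(entries_per_thread=3)
for i in range(10):
    rec.record("Thread-1", f"MQGET message {i}")

print(rec.recent("Thread-1"))   # only the three most recent survive
```

However many entries are recorded, memory use stays bounded and the newest activity for each thread is always retained.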

To view the ActivityLog for a flow, simply right-click the flow in Message Broker Explorer (MBX) and select "Open ActivityLog". It is also possible to filter the output to quickly drill down to the information that is needed. By modifying the ActivityLog configurable service it is also possible to log the information to a file.

ActivityLog is a great tool for providing a general overview of the health of each flow; it is possible, for example, to see at a glance whether a flow is processing messages and whether those messages are reaching the expected output nodes.

Find more information in the Activity log overview.


Stats and Accounting
Problem Type: Performance
Environment Type: Test, Production
Applicable Versions: All

Stats and Accounting allows you to track the performance of your Message Flows in a lightweight, non-invasive manner suitable for use in production environments. Using this facility you can determine how many times particular flows or nodes have been executed as well as how much CPU time and elapsed time was spent in each node. This information can be vital for identifying performance bottlenecks and helping you to focus optimizations to where they will be most effective. You can also use the output from Stats and Accounting to measure the effect of performance tuning operational parameters like the number of additional instances.

When enabled, the Stats and Accounting facility causes the Broker to collect snapshot performance data, which can either be published as XML documents to a topic or written to UserTrace for subsequent analysis. You can also use Message Broker Explorer to collect the stats and accounting data and produce graphs of it.
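Because the snapshot data can be published as XML, it is straightforward to post-process. The document below is a deliberately cut-down illustration -- the real statistics schema has many more fields and the attribute names here are simplified assumptions -- but the approach of picking out the most expensive node is the same:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified snapshot document (not the real schema):
snapshot = """
<MessageFlow Name="failingMQFlow" TotalInputMessages="120">
  <Node Label="MQ Input"  TotalCPUTime="900"  CountOfInvocations="120"/>
  <Node Label="Compute"   TotalCPUTime="5400" CountOfInvocations="120"/>
  <Node Label="MQ Output" TotalCPUTime="700"  CountOfInvocations="118"/>
</MessageFlow>
"""

flow = ET.fromstring(snapshot)

# Find the node that consumed the most CPU time -- the likely bottleneck:
slowest = max(flow, key=lambda n: int(n.get("TotalCPUTime")))
print(slowest.get("Label"), slowest.get("TotalCPUTime"))
```

This kind of scripted triage scales to many flows in a way that reading raw snapshots by eye does not.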


Web Statistics
Problem Type: Performance
Environment Type: Test, Production
Applicable Versions: v9

At IIB version 9 the Web Admin Console was enhanced to provide an easy way to enable and disable Stats and Accounting data collection, and to provide visualizations of the data within the console. As well as graphs of key metrics over time, the Web UI can display a visualization of the flow along with a summary of the performance data collected for it. The information available is exactly the same as the data described in the section above, but this provides a much easier way of accessing and visualizing it.

For more information see Viewing accounting and statistics data in the web user interface in the InfoCenter.


Resource Statistics
Problem Type: Performance
Environment Type: Test, Production
Applicable Versions: All

Resource Statistics are similar to the Stats and Accounting feature, but instead of being aggregated at the flow or application level, these statistics give you information from a Resource Manager perspective. You can use them to see activity across an entire resource type, for example all uses of Parsers or all uses of SOAP.

Each individual Resource Manager reports data pertinent to its resource type; for example, the TCPIPClient Nodes Resource Manager reports the number of open, closed and failed connections and the bytes sent and received, whereas the JVM Resource Manager reports the amount of memory used by the JVM heap and the performance of garbage collection.

The Resource Managers available at v9 are:

CICS
CORBA
Decision services
DotNet App Domains
DotNetGC
File
File Transfer Protocol
FTEAgent
Global cache
IBM Sterling Connect:Direct
Java Virtual Machine (JVM)
JMS
JDBC connection pools
ODBC
Outbound sockets
Parsers
Security
SOAP
TCPIP Client Nodes
TCPIP Server Nodes

Statistics can be turned on in MBX/IBX by selecting the Execution Group you are interested in, right-clicking and selecting Statistics -> Start collecting Resource Statistics. Once collection is enabled you can either subscribe to receive Resource Statistics snapshot data, or view it in MBX/IBX by selecting the Execution Group, right-clicking and selecting Statistics -> View Resource Statistics.

For more information see Resource Statistics


Event Monitoring
Problem Type: Operational Monitoring
Environment Type: Production
Applicable Versions: v8 and above

Event Monitoring provides "hooks" into key points in a message flow's execution that allow "Event Messages" to be published when flow execution reaches those points. Events can be configured to be emitted on transaction start, end and rollback, or when execution passes through specific terminals in the flow. The content of these messages can be customized to contain parts of the message content, and it is easy to configure automatic correlation so that events from a single message flow invocation can be grouped and filtered.

Monitoring events can be configured either using the toolkit and the integrated XPath wizards accessible through the "Monitoring" tab on any node, or by directly modifying a "monitoring profile" XML file.

Events themselves are published to topics so you can either subscribe to publications, or you can integrate directly with compatible monitoring products such as WebSphere Business Monitor.
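A subscriber to the event topics can reassemble individual flow invocations from the stream of independent publications. The sketch below is illustrative only: the event payloads and the correlId field are simplified stand-ins for the real event schema.

```python
from collections import defaultdict

# Hypothetical, already-decoded events from a subscription (field names
# are assumptions, not the real monitoring event schema):
events = [
    {"correlId": "A1", "event": "transaction.Start"},
    {"correlId": "B2", "event": "transaction.Start"},
    {"correlId": "A1", "event": "terminal.MQ Output.in"},
    {"correlId": "A1", "event": "transaction.End"},
    {"correlId": "B2", "event": "transaction.Rollback"},
]

# Group by correlation id to reconstruct each invocation end to end:
by_invocation = defaultdict(list)
for e in events:
    by_invocation[e["correlId"]].append(e["event"])

for correl, seq in sorted(by_invocation.items()):
    print(correl, "->", " / ".join(seq))
```

Grouping this way makes it easy to spot, for instance, which invocations ended in a rollback rather than a normal transaction end.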

For more information see Monitoring Basics


Record / Replay
Problem Type: Operational Monitoring, Audit
Environment Type: Production
Applicable Versions: v8.0.0.1 and above

Record and Replay extends the Event Monitoring capability to provide a way to capture the messages that drive a flow and save them in a database (a DataCaptureStore). Once captured, these messages can be viewed or stored for audit purposes. From a problem determination perspective, however, the most valuable feature is being able to replay the messages at a later date and run them through the flow again.

This allows failing messages, for example, to be archived and then replayed while the Visual Flow Debugger is attached or trace is turned on, at a time when production traffic will not be impacted. Stored messages can be viewed and replayed through an integrated view in the Web Admin Console, and configuration of the DataCaptureStore and of which message flows to capture messages from can be done through MBX/IBX.

For more information see Monitoring Basics.


The Broker Error Log
Problem Type: Administration, Deployment
Environment Type: All
Applicable Versions: All

IIB and WMB write all messages to the relevant system error log: on Unix platforms this is the syslog, on Windows the Event Viewer, and on z/OS the job log. Each message output to the log has a unique "BIP number" corresponding to the cause of the message. Messages have three severity levels:

I = Informational: these messages are for informational purposes and do not require further action; for example, informational messages are issued when a Broker starts or when a connection is made to a JMS provider.

W = Warning: these messages draw attention to conditions that, while not errors, may be unintended and may require action to reconfigure the affected flows.

E = Error: these messages indicate an error that requires attention.

Each BIP message also gives the name of the Broker, the process ID, the thread ID and the Execution Group name where the problem occurred, making it easier to group errors for particular workloads.

All BIP messages have a recommended "User Response" section which indicates how the error may be resolved; however, due to limitations in the syslog technology, this is only visible by default on Windows platforms. To see the user response for a particular BIP message you can use the command:

mqsiexplain &lt;BIP number&gt;

In general, messages can be grouped into two main categories:

1.) Messages raised during the processing of messages

When a node in a message flow throws an exception, the exception is assigned a BIP number. As it is propagated back up the flow, possibly being processed by exception handling routines attached to the Catch terminals of nodes, it may be "wrapped" in several outer exceptions. This forms a "stack" of exceptions known as an ExceptionList (see the Exception List section below).

If an exception reaches an input node without being handled by any branch of the flow, the input transaction is rolled back and each message in the ExceptionList is output to the syslog. On IIB v9 the position within a nested stack of exceptions is indicated by "(Msg 1/4)" style decorators.

For many transports, if the root cause (usually the innermost exception of the ExceptionList) is transient, a retry mechanism means these errors are resolved without user intervention. For instance, an MQInput node will retry the message until the "backout threshold" is reached, at which point it requeues the message to the backout queue or dead letter queue.

Where errors are a result of message processing at least one message in the exception stack will usually identify the message flow that is causing the problem.

BIP messages as a result of message processing are therefore most often of interest to message flow developers who understand the flow logic.
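Because the log format is regular (broker name, process ID, thread ID, an optional "(Msg X/Y)" position, then the BIP number and severity letter), it also lends itself to scripted triage. A small Python sketch, using a syslog line of the shape shown later in this article:

```python
import re

# Matches the thread id, the optional "(Msg X/Y)" position marker, and
# the BIP number with its trailing severity letter (I, W or E).
LINE = re.compile(
    r"\[Thread (?P<thread>\d+)\]\s+"
    r"(?:\(Msg (?P<pos>\d+)/(?P<of>\d+)\)\s+)?"
    r"(?P<bip>BIP\d+)(?P<sev>[IWE]):\s+(?P<text>.*)"
)

sample = ("May 20 15:57:21 host IIB[6139]: IBM Integration Bus v9001 "
          "(BRK9.default) [Thread 14702] (Msg 3/3) BIP2666E: "
          "An error occurred in node 'failingMQFlow.MQ Output'")

m = LINE.search(sample)
print(m.group("bip"), m.group("sev"), "msg", m.group("pos"), "of", m.group("of"))
```

Grouping matched lines by thread ID reassembles each ExceptionList even when several flows are logging errors at once.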

2.) Administrative messages

As well as messages logged to the syslog as a result of message processing, messages are also emitted for administrative actions such as deployment or configuration of a broker component, and as reports about the state of the broker itself. For instance, if the Broker crashes unexpectedly, a message is written to inform the Administrator of the problem before the process is restarted.

Errors that are the result of administrative actions will usually need action from a Broker Administrator in order to resolve.


Exception List
Problem Type: Message Processing
Environment Type: All
Applicable Versions: All

WebSphere Message Broker controls error behaviour through the use of exceptions. When a node encounters a problem, an exception is thrown; this may be caught and handled by any node with a wired Catch terminal upstream in the flow. When an exception is caught but cannot be handled by a node, it is generally wrapped in another exception which adds context information and then rethrown.

This creates a nested “stack” of exceptions where typically the innermost exception contains the root cause of the problem and the outer exceptions provide increasingly general information about the path of the exception through the flow.
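Conceptually the stack can be modelled as a nested structure, and finding the root cause is simply a walk to the innermost entry. A small sketch in plain Python, mirroring how the ExceptionList tree is usually read:

```python
def root_cause(exc):
    """Follow the chain of wrapped exceptions to the innermost one."""
    while exc.get("inner") is not None:
        exc = exc["inner"]
    return exc

# Shape mirroring the BIP2230 -> BIP2666 example later in this article:
stack = {
    "number": 2230, "text": "Caught exception and rethrowing",
    "inner": {"number": 2666, "text": "Failed to open queue", "inner": None},
}

print(root_cause(stack)["number"])   # the root cause: 2666
```

However many wrapping layers the flow adds, the diagnosis usually starts from that innermost entry and works outward for context.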

Most nodes in WMB do not have direct access to the exception objects (the .NETCompute and JavaCompute nodes are the exceptions), so instead the broker provides a way to process exceptions which is a natural extension of the existing transformation capabilities: when execution is propagated from either a Catch terminal or a Failure terminal, a new message tree called "ExceptionList" is added to the message assembly. This can be navigated like any other tree in the product using any of the usual transformation capabilities.

Additionally, each exception is mapped onto a "BIP" error message, which serves two main purposes:

1.) If the exception reaches an input node without being handled, the entire exception stack is output to the Broker Error Log, with the individual exceptions serialized as BIP messages as described above.

2.) The BIP message provides structure when building the ExceptionList message tree

Let's examine an example. In the following scenario an error is thrown by an MQ Output node because the queue named in its Queue Name property does not exist. The Catch terminal has been wired and a Trace node used to display the contents of the ExceptionList.


In this case the output from the trace node looks like this:

( ['MQROOT' : 0x7fc950050090]
  (0x01000000:Name):RecoverableException = (
    (0x03000000:NameValue):File = '/build/slot1/S900_P/src/DataFlowEngine/MQLibrary/linklib/ImbMqOutputNode.cpp' (CHARACTER)
    (0x03000000:NameValue):Line = 877 (INTEGER)
    (0x03000000:NameValue):Function = 'ImbMqOutputNode::evaluate' (CHARACTER)
    (0x03000000:NameValue):Type = 'ComIbmMQOutputNode' (CHARACTER)
    (0x03000000:NameValue):Name = 'failingMQFlow#FCMComposite_1_2' (CHARACTER)
    (0x03000000:NameValue):Label = 'failingMQFlow.MQ Output' (CHARACTER)
    (0x03000000:NameValue):Catalog = 'BIPmsgs' (CHARACTER)
    (0x03000000:NameValue):Severity = 3 (INTEGER)
    (0x03000000:NameValue):Number = 2230 (INTEGER)
    (0x03000000:NameValue):Text = 'Caught exception and rethrowing' (CHARACTER)
    (0x01000000:Name):Insert = (
      (0x03000000:NameValue):Type = 14 (INTEGER)
      (0x03000000:NameValue):Text = 'failingMQFlow.MQ Output' (CHARACTER)
    )
    (0x01000000:Name):MessageException = (
      (0x03000000:NameValue):File = '/build/slot1/S900_P/src/DataFlowEngine/MQLibrary/linklib/ImbMqOutputNode.cpp' (CHARACTER)
      (0x03000000:NameValue):Line = 1989 (INTEGER)
      (0x03000000:NameValue):Function = 'ImbMqOutputNode::putMessage' (CHARACTER)
      (0x03000000:NameValue):Type = 'ComIbmMQOutputNode' (CHARACTER)
      (0x03000000:NameValue):Name = 'failingMQFlow#FCMComposite_1_2' (CHARACTER)
      (0x03000000:NameValue):Label = 'failingMQFlow.MQ Output' (CHARACTER)
      (0x03000000:NameValue):Catalog = 'BIPmsgs' (CHARACTER)
      (0x03000000:NameValue):Severity = 3 (INTEGER)
      (0x03000000:NameValue):Number = 2666 (INTEGER)
      (0x03000000:NameValue):Text = 'Failed to open queue' (CHARACTER)
      (0x01000000:Name):Insert = (
        (0x03000000:NameValue):Type = 2 (INTEGER)
        (0x03000000:NameValue):Text = '-1' (CHARACTER)
      )
      (0x01000000:Name):Insert = (
        (0x03000000:NameValue):Type = 5 (INTEGER)
        (0x03000000:NameValue):Text = 'MQW101' (CHARACTER)
      )
      (0x01000000:Name):Insert = (
        (0x03000000:NameValue):Type = 2 (INTEGER)
        (0x03000000:NameValue):Text = '2085' (CHARACTER)
      )
      (0x01000000:Name):Insert = (
        (0x03000000:NameValue):Type = 5 (INTEGER)
        (0x03000000:NameValue):Text = '' (CHARACTER)
      )
      (0x01000000:Name):Insert = (
        (0x03000000:NameValue):Type = 5 (INTEGER)
        (0x03000000:NameValue):Text = 'IDONTEXIST' (CHARACTER)
      )
    )
  )
)

Here we can see there are two nested exceptions in the following structure:

BIP2230
  -> BIP2666

The innermost message contains the inserts relating to the MQ error, whereas the outer message provides context indicating that the MQOutput node was unable to handle the exception and allowed it to be rethrown up the flow. No other node in the flow attempted to handle the exception, so we see no other entries in the exception stack.

If we unwired the Catch terminal and allowed the message to roll back, we would see the exception list output to the Broker Error Log as shown below:

May 20 15:57:21 prionson IIB[6139]: IBM Integration Bus v9001 (BRK9.default) [Thread 14702] (Msg 1/3) BIP2628E: Exception condition detected on input node 'failingMQFlow.MQ Input'.
May 20 15:57:21 prionson IIB[6139]: IBM Integration Bus v9001 (BRK9.default) [Thread 14702] (Msg 2/3) BIP2230E: Error detected whilst processing a message in node 'failingMQFlow.MQ Output'.
May 20 15:57:21 prionson IIB[6139]: IBM Integration Bus v9001 (BRK9.default) [Thread 14702] (Msg 3/3) BIP2666E: An error occurred in node 'failingMQFlow.MQ Output' when opening queue 'IDONTEXIST' on queue manager ''. State = '-1' 'MQW101' '2085' ''
May 20 15:57:22 prionson IIB[6139]: IBM Integration Bus v9001 (BRK9.default) [Thread 14702] (Msg 1/1) BIP2648E: Message backed out to a queue; node 'failingMQFlow.MQ Input'.

The first three entries correspond to the ExceptionList in the trace node example; here the message text has been rendered in the appropriate language for the locale and the inserts copied in to make the error human readable. The MQInput node has also wrapped the exception list in a further BIP2628 exception, indicating that it caught the exception and made some effort to process it (for example, it checked whether the Catch terminal was wired in case the user wanted to process the exception in the flow). The "(Msg X/Y)" indicator was added at IIB v9 and shows the relative position of messages within the ExceptionList. Finally, the MQInput node has output a separate BIP message, not part of the exception list, to say that a message was backed out because it could not be processed.

The main benefit of the ExceptionList structure is that you can work with failures within the flow, either to provide a unified reporting framework or to handle particular errors with changes in application logic. For example, the following ESQL snippet walks the exception list using standard tree navigation and flattens it into an XMLNSC message containing one element per exception:

CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    CALL CopyMessageHeaders();
    CREATE LASTCHILD OF OutputRoot DOMAIN('XMLNSC');
    SET OutputRoot.XMLNSC.(XMLNSC.XmlDeclaration)*.(XMLNSC.Attribute)Version = '1.0';
    SET OutputRoot.XMLNSC.(XMLNSC.XmlDeclaration)*.(XMLNSC.Attribute)Encoding = 'UTF-8';
    CREATE LASTCHILD OF OutputRoot.XMLNSC NAME('ERRORMESSAGE');
    DECLARE outputRef REFERENCE TO OutputRoot.XMLNSC.ERRORMESSAGE;
    -- Start at the outermost exception; each entry's last child holds
    -- the next nested exception, so MOVE ... LASTCHILD walks inward.
    DECLARE exceptionRef REFERENCE TO InputExceptionList.*[1];
    WHILE LASTMOVE(exceptionRef) DO
        IF exceptionRef.Number IS NOT NULL THEN
            CREATE LASTCHILD OF outputRef NAME('ERROR') VALUE ('ERRORNUM: ' ||
                CAST(exceptionRef.Number AS CHARACTER) || ' TEXT:' ||
                exceptionRef.Text);
        END IF;
        MOVE exceptionRef LASTCHILD;
    END WHILE;
    RETURN TRUE;
END;

This produces output like the following:

ERRORNUM: 2230 TEXT:Caught exception and rethrowing
ERRORNUM: 7499 TEXT:CLAIMSPROCESSING FAULT. Code: ERR004
ErrorText: Could not view claim. Reason: Claim CLAIM101
does not exist.

Standard Output and Standard Error
Problem Type: Third-party libraries, Java, JCN Code
Environment Type: Development, Test, Production
Applicable Versions: All

All applications have a stdout and stderr stream associated with them. The output of these streams are what you see when you run a typical application from the console. Message Broker is designed to run as a non-interactive task and so its stdout and stderr streams are not automatically output to the command console.

Instead these streams are redirected to files on the file system. On Windows systems the streams are redirected to:

Broker

%MQSI_WORKPATH%\components\&lt;broker name&gt;\console.txt (contains both stdout and stderr)

Execution Group

%MQSI_WORKPATH%\components\&lt;broker name&gt;\&lt;execution group&gt;\console.txt (contains both stdout and stderr)

On Unix systems the streams are redirected to:

Broker

$MQSI_WORKPATH/components/&lt;broker name&gt;/stdout

$MQSI_WORKPATH/components/&lt;broker name&gt;/stderr

Execution Group

$MQSI_WORKPATH/components/&lt;broker name&gt;/&lt;execution group&gt;/stdout

$MQSI_WORKPATH/components/&lt;broker name&gt;/&lt;execution group&gt;/stderr

On z/OS the stdout and stderr streams are appended to the job log.

WMB itself generally avoids relying on the stdout and stderr streams for debugging information; however, if you are using any third-party libraries with Message Broker, or if message flow developers use System.out.println statements in Java Compute Nodes, then debugging information can end up in these files. It is also common for third-party Java code to call printStackTrace() when catching errors.

For these reasons it is sometimes useful to check the stdout and stderr streams when troubleshooting, as a failure in a Java Compute Node or a third-party library may not be properly logged by the message flow developer.

Note that these files are not rotated or otherwise managed, so they will continue to grow until they fill all available file system space. If you need to clear them out, it is safe to delete them while the Broker is stopped; the Broker will recreate them on startup.
