We’re thrilled to announce the availability of IBM App Connect Enterprise v11.0.0.4. This is the fourth fix pack released for App Connect Enterprise software. We provide regular fix packs, approximately once per quarter, a cadence which we intend to continue through 2019. Fix packs provide regular maintenance for the product as well as new functional content. This blog post summarizes all the latest and greatest capabilities:

  • LDAP Authentication for ACE Administration
  • Global Cache
  • Record and Replay
  • Policy Redeploy
  • Sticky settings for Monitoring and Statistics including REST API PATCH verbs
  • Support for zLinux
  • Toolkit Enhancements including an editor for user defined policies
  • New Resource Manager settings
  • The new Exception Log Resource Manager

Read the sections below to find out more!

LDAP Authentication for ACE Administration

When executing administrative actions against a Fix Pack 4 App Connect Enterprise integration node (or stand-alone integration server), you can now configure the ACE runtime to delegate the task of authenticating your user identity to an LDAP server. The example we will follow here uses an LDAP server at host ibmexample1. If you would like to try out a similar simple example and you don’t have an LDAP server to hand, you could install Apache Directory Studio on Windows. Some more detailed instructions for getting this software installed and configured were included in one of our blog posts from earlier this year. In our example, we have an LDAP server set up as follows:

LDAP server ibmexample1

  • This LDAP server is set up to receive requests on port 10389
  • This LDAP server has an administrator user set up named admin with password admin123
  • Anonymous connections to ibmexample1 are not allowed
  • The userid bthomps is defined in the LDAP server (this is the person shown in the picture above with cn=Ben Thompson)

With the LDAP server set up, next we create an integration node and define some credentials:

  • mqsicreatebroker TESTNODE
  • mqsistart TESTNODE
  • mqsicreateexecutiongroup TESTNODE -e default
  • mqsisetdbparms TESTNODE -n ldap::alias -u "uid=admin,ou=system" -p admin123
  • mqsiwebuseradmin TESTNODE -c -u bthomps -x -r aceadmin

After authentication, the LDAP userid bthomps will be mapped to the role aceadmin, as set up by the mqsiwebuseradmin command above. Having created the integration node, open its configuration file, node.conf.yaml, and provide the settings described below:

  • The ldapUrl property specifies the URL used to locate the LDAP server.
  • The ldapBindDn and ldapBindPassword properties specify an alias (which we’ve handily named ldap::alias!) which is used to refer to the admin credentials for communicating with the LDAP server. This maps to the entry we made using the mqsisetdbparms command.
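For illustration, here is a minimal sketch of what the relevant fragment of node.conf.yaml might look like. The property names are those described above; the section they belong in (assumed here to be the admin REST listener configuration) and the LDAP URL syntax should be checked against the product documentation for your fix pack level:

RestAdminListener:
  port: 4414                          # admin port used by the web UI and REST API
  basicAuth: true                     # challenge administrative connections for a userid and password
  # Assumed LDAP properties, matching the descriptions above:
  ldapUrl: 'ldap://ibmexample1:10389/ou=users,ou=system?uid?sub'   # illustrative search URL for ibmexample1
  ldapBindDn: 'ldap::alias'           # alias created earlier with mqsisetdbparms
  ldapBindPassword: 'ldap::alias'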

Now everything is prepared, restart the integration node:

  • mqsistop TESTNODE
  • mqsistart TESTNODE

Open a web browser pointing at the integration node’s admin port, and you should be faced with a login box. Supplying the correct LDAP userid and password (in this example bthomps) should get you logged in successfully:

If you would like to try some more tests, note that there is a handy logout option available from the menu in the top right corner:

Global Cache

Fix pack 4 adds the embedded global cache feature to App Connect Enterprise for the first time. This v11 capability enables message flows to store data in memory and share information between flows deployed to separate integration servers. The participating servers can even be distributed across separate physical machines or containers. The embedded cache is delivered using WebSphere eXtreme Scale technology. From a developer’s point of view, this feature will be very familiar to users of IBM Integration Bus: just like in v10, when designing a message flow you can use graphical Mapping nodes or JavaCompute nodes to store and retrieve data in the global cache. From an administration point of view, there are a few changes and we have attempted to simplify the configuration. Placing global cache configuration information at the integration server level has also enabled us to support this technology both for servers owned by integration nodes and for independent integration servers. The global cache is disabled by default; you control the cache participation of each integration server by setting properties in its server.conf.yaml configuration file. For example, you might want to nominate particular integration servers to host the catalog and container components for performance tuning reasons. Here’s a quick recap describing the two main components of the global cache:

Global Cache Container servers:

A container server is a component, embedded in an integration server, that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once. If more than one container exists, the default cache policy ensures that all data is replicated at least once. In this way, the global cache can cope with the loss of container servers without losing data.

Global Cache Catalog servers:

The catalog server is a component, embedded in an integration server, that controls the placement of data and monitors the health of containers. You must have at least one catalog server in your global cache. To avoid losing cache data when a catalog server is lost, you can specify more than one catalog server. For example, if the cache is shared by two integration servers, each of which hosts a catalog server, then if one catalog server fails, the remaining catalog server can still be accessed.

Common Cache Topologies

To get up and running with the global cache easily, ACE provides sample configuration files in the directory product_install_directory\server\samples\globalcache. These cover commonly used global cache configurations that you might require:

  • basic_1_catalog_1_container: This scenario has one integration server, which hosts both a catalog server and a container server. It is recommended for development purposes.
  • basic_1_catalog_4_containers: This scenario has four integration servers. One integration server hosts both a catalog server and a container server, and the other three integration servers host a container server each.
  • basic_2_catalogs_4_containers: This scenario has four integration servers. Two integration servers host both a catalog server and a container server, and the other two integration servers host a container server each.
  • ha_multi_instance: This scenario has four integration servers. Two integration servers host both a catalog server and a container server, and the other two integration servers host a container server each and must be used as part of a multi-instance integration node. Just like in IIBv10, a multi-instance integration node cannot be used to host an integration server with a catalog server.

Each scenario is provided with an architectural diagram, such as the examples below showing the first two scenarios from the bulleted list above:
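To give a flavour of the configuration itself, here is a hedged sketch of the kind of GlobalCache settings you might expect for the simplest scenario, in which a single integration server hosts both the catalog server and a container server. The property names below are indicative only; use the shipped sample server.conf.yaml files as the definitive starting point:

ResourceManagers:
  GlobalCache:
    cacheOn: true                             # enable the embedded global cache for this integration server
    cacheServerName: 'CatalogAndContainer1'   # a name for this server's cache component (illustrative)
    enableCatalogService: true                # this server hosts the catalog server
    enableContainerService: true              # this server also hosts a container server
    catalogServiceEndPoints: 'localhost:2800' # host:port where the catalog service listens
    catalogDomainName: 'WMB_MyCacheDomain'    # all servers sharing the cache must use the same domain name
    listenerHost: 'localhost'
    listenerPort: 2800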

A Quick note about Migration:

In IBM Integration Bus V10, cache policy XML files were used to configure the global cache. In IBM App Connect Enterprise, the properties that were set in those files are now set in the server.conf.yaml configuration files instead. In IIBv10, users of the Global Cache will also be familiar with the objectgrid.xml and deployment.xml files which had to be copied into set locations beneath your integration node and integration server workpaths, before they would take effect. In ACEv11, you set the objectGridCustomFile and deploymentPolicyCustomFile properties in the server.conf.yaml configuration file to point to these files. You can choose to put the files in the same workpath locations as used previously in IIBv10 but if you do so, you must specify these custom file properties in the server.conf.yaml file to point to this location.
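As a sketch (with hypothetical workpath locations, and assuming these two properties sit alongside the other GlobalCache settings in server.conf.yaml), that might look like:

ResourceManagers:
  GlobalCache:
    objectGridCustomFile: '/var/mqsi/components/TESTNODE/servers/default/objectgrid.xml'       # hypothetical path
    deploymentPolicyCustomFile: '/var/mqsi/components/TESTNODE/servers/default/deployment.xml' # hypothetical path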

Connect to an External WebSphere eXtreme Scale grid:

In addition to the embedded global cache, it is also possible to store data in a WebSphere eXtreme Scale grid external to App Connect Enterprise, and then access data in the grid from ACE message flows.
Configuration information to tell App Connect Enterprise how to locate the grid used to be held in a configurable service, but in ACEv11 this concept is replaced with a WXS Server Policy in a Policy Project. The screen shot below shows an example of such a policy:

As with many other policies, the security identity property provides an abstracted alias which links to credentials supplied using the mqsisetdbparms command, as in this example:

mqsisetdbparms IntegrationNodeName -n wxs::myaliasidentity -u userId -p password

Record and Replay

This App Connect Enterprise fix pack brings back the Record and Replay facility which many users will be familiar with from IIBv10, updated and improved with a new web user interface. This feature may be helpful if you need an audit record of messages that pass through the integration node, if you need to keep a history of messages for development and test purposes, or if you need help with problem determination. To determine which data should be recorded, you must configure monitoring events on your message flows. After you have recorded data, you can view it using the web interface, or interact with the recorded data using the ACE administrative REST API.

Here is an example of some basic Record and Replay configuration from the integration server’s server.conf.yaml file:
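The sketch below gives an idea of the shape of that configuration. It assumes a data source named RECORDSDB has already been associated with the server (for example via mqsisetdbparms) and that monitoring events are being emitted by a flow in an application named App1. The section and property names are indicative rather than definitive; check the commented template in the shipped server.conf.yaml for the exact schema:

RecordReplay:
  recordReplayEnabled: true           # master switch for Record and Replay on this integration server
  RecordReplayStore:
    MyStore:
      dataSource: 'RECORDSDB'         # hypothetical data source holding the recorded messages
      schema: 'recordreplay'          # database schema containing the record and replay tables
      storeMode: 'record'             # record the events routed to this store
  RecordReplaySource:
    MySource:
      topic: '$SYS/Broker/+/Monitoring/default/App1'   # illustrative monitoring topic to subscribe to
      recordReplayStore: 'MyStore'    # route matching events to the store defined above
  RecordReplayServer:
    MyWebUI:
      recordReplayStore: 'MyStore'    # expose this store for viewing and replay in the web UI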

On the messages panel, you can see a display of all the messages that have been recorded in a particular data source. You can select which columns are displayed, and also choose to display timestamps in browser local time or Coordinated Universal Time (UTC).

You can also build queries which will filter the messages on display down to those meeting certain criteria, such as a particular Event Name or Time.

Individual messages can be downloaded to your local machine by clicking on the downward arrow symbol:

You can select messages and mark them for replay:

Switching to the replay tab, you can select a message, choose a destination and then hit the Replay button:

Policy Redeploy

App Connect Enterprise v11 provides developers with the ability to create Policies inside Policy Projects using the ACE Toolkit. Policies are used to control connection properties and operational properties which are required by the ACE runtime. A policy can be used by an administrator to override or abstract specific property values, for example sensitive data which might differ between runtime environments such as Dev / Test / Production. Policies can be deployed to the ACE runtime alongside message flows as part of a Broker Archive (BAR) file. Whilst many users of ACEv11 will be deploying the software within a container-based architecture in which containers are simply and quickly restarted in order to be reconfigured, we also acknowledge that some users still like architectural approaches involving long-lived servers and nodes. With this requirement in mind, fix pack 4 introduces the ability to effect dynamic changes to an integration server’s configuration through the redeployment of certain types of policy. You can now change and redeploy the following types of policy after they have been deployed, by redeploying the policy project:

  • Aggregation
  • CDServer
  • CICSConnection
  • Collector
  • EmailServer
  • FtpServer
  • Resequence
  • SAPConnection
  • SMTP
  • Timer
  • WorkloadManagement

When you redeploy the policy project, all message flows that are using the policy are stopped and restarted. Other types of policy (not listed above) cannot be redeployed; in this situation you must delete all deployed resources from the integration server and then deploy a new version of the policy. If you attempt a redeploy from the Toolkit which is not allowed, you will receive an error as shown below:

Fix pack 4 also introduces a new Toolkit option to delete a deployed Policy Project:

Note that at fix pack 4, in order to delete a policy project which contains non-dynamic policies, you must delete all deployed resources from the integration server and then redeploy all resources.

Sticky settings for Monitoring and Statistics including REST API PATCH verbs

ACEv11.0.0.4 introduces “sticky” settings which are preserved across the restart of an integration server. These changes are applied using PATCH verbs in the ACE administrative REST API, and the results are persisted to disk as JSON-format files stored in the overrides sub-folder of the server’s working directory. These REST API enhancements have also been built into the commands which ACE provides for changing monitoring and statistics settings. This means that, for the first time, changes made using these commands are persisted across server restarts by default. This change in default behaviour follows strong feedback over a long period from the majority of our users, who would prefer that a restart did not undo previous instructions. Note that if you prefer the previous behaviour, the commands also provide a --non-persist flag. Consider this simple example of a stand-alone integration server which has a deployed Application named App1 containing a message flow named Flow1. Here’s the output from the mqsireportflowstats command:

Note that the lines are indented to provide information about the Integration Server level of the hierarchy, then the Application level, and then the Message Flow level. Each of these levels has two lines of information, the first reporting the currently active settings and the second reporting the configured settings. Using the PATCH verbs available in the administrative REST API will cause the persisted (configured) settings to be updated as well as the active settings. If you use the non-persistent “non-CRUD” actions which are also available in the administrative REST API, then only the active settings will be affected and the configured settings won’t change. The screenshot below shows an example mqsichangeflowstats command (which under the covers utilises the PATCH verbs in the administrative REST API), and then a repeat of the mqsireportflowstats command afterwards. Note in particular that the active and configured settings have both been updated, and also note that, due to the configured inherit settings on the message flow, the outputFormat has been updated to json, usertrace.
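For reference, the configured values that these commands persist correspond to the same snapshot statistics settings that you can set statically in the Statistics section of server.conf.yaml. A sketch of that section might look like this (property names indicative; the shipped server.conf.yaml documents the permitted values):

Statistics:
  Snapshot:
    publicationOn: 'active'          # turn snapshot statistics publication on
    outputFormat: 'json,usertrace'   # emit statistics for the web UI (json) and to user trace
    nodeDataLevel: 'basic'           # include per-node statistics
    threadDataLevel: 'none'
    accountingOrigin: 'none'
  Resource:
    reportingOn: true                # also report resource statistics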

The following commands have been updated to utilise PATCH:

  • mqsichangeflowstats
  • mqsichangeresourcestats
  • mqsichangeflowmonitoring

The Toolkit has also been enhanced to make it easy to update monitoring and statistics settings:

Support for zLinux

In addition to the existing options on Linux x86-64 and Microsoft Windows x86-64, fix pack 4 introduces a new supported platform, zLinux:

Linux on IBM Z / LinuxONE (ACEv11.0.0.4 and above)

  • Red Hat Enterprise Linux V7.6
  • Ubuntu 16.04

Toolkit Enhancements including an editor for user defined policies

There have also been enhancements made to the Toolkit in ACEv11.0.0.4, which provide menu options for changing the runtime’s monitoring and statistics settings:

The Toolkit also provides a new editor which makes it much easier for you to create User-defined Policies as shown in the screen shot below:

By clicking the Add, Delete and Edit buttons, you can easily create new properties or change existing ones!

New Resource Manager settings

Fix pack 4 enhances ACE to provide a broad range of Resource Manager settings. In the Integration Server’s configuration file, server.conf.yaml, you will find several new sub-sections under ResourceManagers:

The Resource Managers which can now be configured in this way are as follows:

  • JVM
  • HTTPConnector
  • HTTPSConnector
  • ActivityLogManager
  • DatabaseConnectionManager
  • SocketConnectionManager
  • ContentBasedFiltering
  • FTEAgent
  • ParserManager
  • ESQL
  • XMLNSC
  • JSON
  • MQConnectionManager
  • XPathCache
  • AsyncHandleManager
  • GlobalCache

You can also issue queries against the Integration Server’s REST Administration API to retrieve information about the Resource Managers:

The new Exception Log Resource Manager

New to both ACEv11.0.0.4 and IIBv10.0.0.16 is the Exception Log. This is a new form of tracing, enabled on an Integration Server, that logs exceptions at their point of creation. It can be used by a flow developer or administrator to quickly diagnose issues with their message flows, especially ones which might be caused by incorrect exception handling.

The Exception Log is not enabled by default, and is controlled by three properties:

  • enabled – Set this option to true to enable the exception log.
  • showNestedExceptionDetails – By default, the exception log will not include the details of any nested exceptions, as they will already have been printed earlier in the log. However, setting this option to true will re-print nested exceptions and is useful if your exception log has many concurrent exceptions (in which case it may not be obvious which previous entry is the child of the current exception).
  • includeFlowThreadReporter – If the exception occurred on a message flow thread and this option is set to true, an additional contribution from the Flow Thread Reporter is added to the exception, showing the flow thread history and stack. These details are quite verbose but are useful if you want to work out the precise location in a flow at which an exception occurred, or what the passage through the flow was.

In ACEv11 these properties can be found in the ExceptionLog sub-section of the ResourceManagers section of the server.conf.yaml file. In IIB v10.0.0.16 these properties can be viewed and modified using the mqsireportproperties and mqsichangeproperties commands, for example:

  • mqsireportproperties IntegrationNodeName -e IntegrationServerName -o ExceptionLog -a
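In ACEv11 terms, the equivalent fragment of server.conf.yaml looks broadly like this (the property names are exactly those listed above, nested under ResourceManagers as described):

ResourceManagers:
  ExceptionLog:
    enabled: true                       # switch the exception log on
    showNestedExceptionDetails: false   # set to true to re-print nested exceptions in full
    includeFlowThreadReporter: false    # set to true to add flow thread history and stack to entries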

Once enabled, the exception log will be written to WorkDirectory/config/common/log/integration_server.IntegrationServerName.exceptionLog.txt for an ACEv11 integration server, or in MQSI-REGISTRY/common/log/IntegrationNodeName.IntegrationServerName.exceptionLog.txt for an ACEv11 node-owned server. This is a rolling log file, so restarting the integration server will cause the old log to be moved and a numeric suffix appended to it.

The Exception Log includes a summary of the exception, a list of its inserts, and the message numbers of any nested exceptions. For example, if an HTTP Input node is configured to receive JSON messages with immediate parsing, but the input message is not JSON, then the exception log will look something like this:

Notice that the exceptions are listed in order of creation; each includes a short description, a list of message inserts where appropriate, and a list of the message numbers of any nested exceptions.
