How to fix integration server roles in a Global Cache configuration in IBM Integration Bus and WebSphere Message Broker V8

Fri May 15, 2020 11:32 AM

Originally published Oct 2 2013

Introduced with WebSphere Message Broker 8.0.0.1, the Global Cache allows message flow developers to design, develop and deploy message flows that share data across integration servers (execution groups) and integration nodes (brokers). The Global Cache is built on top of another IBM product, WebSphere eXtreme Scale, and WebSphere eXtreme Scale components are hosted within integration server JVMs in order to provide the cache. The components that can be hosted within an integration server are:

  • Catalog servers, which control the placement of data and monitor the health of container servers.
  • Container servers, which hold a subset of the data stored in the cache. Between them, all container servers in the Global Cache host all of the cache data at least once.

IBM Integration Bus 9.0 offers two mechanisms for configuring the Global Cache. Integration servers can be automatically configured at the integration node level by defining a cache policy on the integration node. Alternatively, integration servers can be manually configured on an individual basis. Configuring integration servers manually requires an understanding of WebSphere eXtreme Scale and the format of the properties required by the Global Cache.

Built into the product is the default policy, which, when selected, automatically configures a Global Cache across all the integration servers in a single integration node. That cache is limited in scope to that integration node, and the default policy cannot be used to configure a cache across two or more integration nodes.

In order to automatically configure a Global Cache across two or more integration nodes, an Integration Bus administrator can supply a cache policy file that contains a list of integration nodes. The cache policy file also specifies the host names and port ranges for each integration node, as well as how many catalog servers each integration node should host (none, one or two). When all of the integration nodes are configured with the same cache policy file, they join together to provide a single cache.

When a cache policy is in use, the integration servers within an integration node have no fixed role. The order in which the integration servers start determines their roles. For example, for a single integration node configured with the default policy:

  • The first integration server to start hosts both a catalog server and a container server.
  • The second, third and fourth integration servers to start host only a container server.
  • Subsequent integration servers host no Global Cache components - however, they can still connect to the Global Cache as clients.

You can determine what role an integration server has taken after it has started by examining the integration server level cache manager properties. From the command line, run mqsireportproperties:
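For example, for an integration server named ISCATALOG in an integration node named IBNODE1 - the names, and the output shown here, are illustrative, and the exact list of properties varies by version:

[sstone1@sagitta ~]$ mqsireportproperties IBNODE1 -e ISCATALOG -o ComIbmCacheManager -r
ComIbmCacheManager
  uuid='ComIbmCacheManager'
  connectionEndPoints='localhost:2800'
  enableCatalogService='true'
  enableContainerService='true'
  ...
BIP8071I: Successful command completion.
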
The important settings are enableCatalogService and enableContainerService, both of which are set to true - so this integration server hosts both a catalog server and a container server.

You can also check these properties from the Integration Explorer. Navigate to the integration server you wish to check, right-click it and click on "Properties...". You can then click on the "Global Cache" tab on the left to see the current set of properties - for example, an integration server that hosts only a container server will show the container service enabled and the catalog service disabled.


Whilst this policy-based control is simple to use, and makes it easy for both Integration Bus developers and administrators to get started with the Global Cache and configure cache policies across multiple integration nodes, Integration Bus administrators may have valid reasons for fixing the roles of the integration servers. Some of these reasons include, but are not limited to:

  • The administrator wants to know where these components are hosted at all times. For example, if all the catalog servers for the Global Cache are shut down, then the Global Cache is lost. If that occurs, then all integration servers participating in the Global Cache must then be restarted in order for the cache to work again. Knowing in which integration servers all of the catalog servers are currently located allows the administrator to avoid accidentally shutting them all down at the same time.
  • The administrator wants to keep the Global Cache components separate from the message flows deployed within that integration node. The Global Cache components run inside an integration server's JVM, and use JVM heap storage. If a message flow also makes heavy use of the JVM heap storage, then that integration server may require a much larger JVM heap in order to host both the Global Cache components and the message flows. Also, if the message flows cause the integration server to restart, or crash, then that will also terminate the Global Cache components being hosted within that integration server at the time.
  • The administrator needs to deploy message flows that connect to external WebSphere eXtreme Scale grids using SSL to integration servers that are guaranteed not to start catalog or container servers. Integration servers that host catalog or container servers cannot make client connections to external WebSphere eXtreme Scale grids using SSL.

To address these problems, an Integration Bus administrator can configure the Global Cache using a cache policy, and then fix that configuration in place by switching from the automatic configuration provided by the cache policy to manual configuration.

By setting the integration node cache policy to none, the integration servers preserve their Global Cache configuration from the last time they were started under the control of a cache policy. This configuration stays in place until the cache policy on the integration node is changed from none, or until the integration server level cache manager properties are manually altered.

Setting up new integration nodes with fixed integration server roles

First, I'll walk through setting up a new Global Cache configuration with two integration nodes. Each integration node will host four integration servers. One integration server in each integration node will host a catalog server and a container server, and the other three integration servers will just host container servers. Further integration servers can then be added to the integration nodes to host message flows.

Building the cache policy file and creating the integration nodes

Because we want to build a Global Cache that spans two integration nodes, we need to build a cache policy file. My integration nodes will be called IBNODE1 and IBNODE2. Here's the cache policy file I'll be using - alter it for your own systems - the template is available from your Integration Bus install as <install directory>/sample/globalcache/policy_two_brokers_ha.xml:
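A sketch of the policy file, modelled on that shipped sample - the host names and port ranges here are placeholders, and the sample in your own installation shows the exact schema:

<?xml version="1.0" encoding="UTF-8"?>
<cachePolicy xmlns="http://www.ibm.com/xmlns/prod/websphere/messagebroker/globalcache/policy-1.0">
  <!-- IBNODE1 hosts one catalog server, using ports 3800 to 3819 -->
  <broker name="IBNODE1" listenerHost="host1.example.com">
    <catalogs>1</catalogs>
    <portRange>
      <startPort>3800</startPort>
      <endPort>3819</endPort>
    </portRange>
  </broker>
  <!-- IBNODE2 hosts the second catalog server, using ports 3820 to 3839 -->
  <broker name="IBNODE2" listenerHost="host2.example.com">
    <catalogs>1</catalogs>
    <portRange>
      <startPort>3820</startPort>
      <endPort>3839</endPort>
    </portRange>
  </broker>
</cachePolicy>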


Now that the cache policy file is ready, we can create the integration nodes. Create the integration nodes IBNODE1 and IBNODE2 using either the Integration Explorer, or the command line. Ensure that both integration nodes are running, but don't create any integration servers inside the nodes just yet.

Once the integration nodes have been created, we need to set the cache policy on both integration nodes to the file we just created. From the command line, run mqsichangeproperties, as shown below. Alternatively, this can be accomplished from the Integration Explorer: for each integration node, right-click it in the Integration Explorer and click on "Properties...", then click on the "Global Cache" tab on the left, enter the path to the cache policy file in the "Cache policy" field, and click "Apply".
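A sketch of the commands, assuming the cache policy file has been saved as /var/mqsi/policy_two_brokers_ha.xml (the path is illustrative):

[sstone1@sagitta ~]$ mqsichangeproperties IBNODE1 -b cachemanager -o CacheManager -n policy -v /var/mqsi/policy_two_brokers_ha.xml
BIP8071I: Successful command completion.
[sstone1@sagitta ~]$ mqsichangeproperties IBNODE2 -b cachemanager -o CacheManager -n policy -v /var/mqsi/policy_two_brokers_ha.xml
BIP8071I: Successful command completion.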

Creating the integration servers

Now that the integration nodes are up and running, and configured with the cache policy file, it is time to create the integration servers. It is important that the order of these steps is followed, and that the integration servers are not restarted in between steps. Otherwise, the integration servers may end up hosting different components to those expected!

The first integration server to start in each integration node will host both a catalog server and a container server. Create an integration server in both integration nodes. I have called this integration server ISCATALOG. Check the integration server level cache manager properties of ISCATALOG to confirm that it is running both a catalog server and a container server. Try to create these two integration servers at around the same time. Because there are multiple catalog servers, they need to handshake before they can fully start up. Until they start, the cache will not be available.

The second, third and fourth integration servers to start in each integration node will host a container server. Create three further integration servers in both integration nodes. I have called these integration servers ISCONTAINER2, ISCONTAINER3 and ISCONTAINER4. Note that I've started the numbering at two, because ISCATALOG is hosting container server number one. Check the integration server level cache manager properties of these three integration servers to confirm that they are running container servers. The Integration Explorer should now show a similar configuration in both integration nodes.

Fixing the roles of the integration servers

Now that all the integration nodes and servers are running with the configuration we want, it is time to fix the roles of the integration servers. We do this by changing the cache policy of both of the integration nodes to none. This can be done on the command line by running mqsichangeproperties:
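For example, for the two integration nodes in this walkthrough:

[sstone1@sagitta ~]$ mqsichangeproperties IBNODE1 -b cachemanager -o CacheManager -n policy -v none
BIP8071I: Successful command completion.
[sstone1@sagitta ~]$ mqsichangeproperties IBNODE2 -b cachemanager -o CacheManager -n policy -v none
BIP8071I: Successful command completion.
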
Alternatively, change the cache policy to none in the Integration Explorer by entering none in the "Cache policy" field on the "Global Cache" tab of each integration node's properties.
The change to the none policy takes place immediately, and the roles of the integration servers are now fixed. Stop both the integration nodes, and then start them again. Once all of the integration servers have successfully started, confirm that each integration server has the expected integration server level cache manager settings. The ISCATALOG integration server in each integration node should be hosting a catalog server and a container server, and the ISCONTAINER2, ISCONTAINER3 and ISCONTAINER4 integration servers should be hosting container servers.

The administrator can now proceed with creating further integration servers to host message flows, or deploy message flows to the integration servers that are now hosting the Global Cache components.

Fixing integration server roles within existing integration nodes

Finally, I'll walk through reconfiguring a pre-existing set of integration nodes and integration servers to enable the Global Cache, and also fix the roles of the integration servers. I have two integration nodes, named IBNODE1 and IBNODE2, and they each have a set of eight integration servers - IS1 to IS8. I will set up the integration servers so that IS1 hosts a catalog server and a container server, and IS2, IS3 and IS4 each host a container server.

Building the cache policy file

We will need a cache policy file. Since the names of the integration nodes haven't changed, I'll use the same cache policy file from the first walkthrough. We need to set the cache policy on both integration nodes to that file. From the command line, run mqsichangeproperties, as shown below. Alternatively, this can be accomplished from the Integration Explorer: for each integration node, right-click it in the Integration Explorer and click on "Properties...", then click on the "Global Cache" tab on the left, enter the path to the cache policy file in the "Cache policy" field, and click "Apply".
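The commands are the same as in the first walkthrough - again, the path to the policy file is illustrative:

[sstone1@sagitta ~]$ mqsichangeproperties IBNODE1 -b cachemanager -o CacheManager -n policy -v /var/mqsi/policy_two_brokers_ha.xml
BIP8071I: Successful command completion.
[sstone1@sagitta ~]$ mqsichangeproperties IBNODE2 -b cachemanager -o CacheManager -n policy -v /var/mqsi/policy_two_brokers_ha.xml
BIP8071I: Successful command completion.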

Stopping and starting the integration servers in the right order

Now that the integration nodes are configured with the cache policy file, if the integration servers are restarted they will determine which Global Cache components they host depending on the order in which they start. Since we want to control which integration servers host which components, we need to control the order in which the integration servers start. To accomplish this, we need to first stop all of the integration servers.

In order to stop an integration server, run mqsistopmsgflow on the command line:
[sstone1@sagitta ~]$ mqsistopmsgflow IBNODE1 -e IS8
BIP1188I: Stopping the execution group 'IS8'...
BIP1189I: The execution group 'IS8' is reported as stopped.
BIP8071I: Successful command completion.
Alternatively, stop the integration server from within the Integration Explorer by right-clicking the integration server, going to "Stop" and then clicking on "Integration server".

Once all of the integration servers IS1 to IS8 in both IBNODE1 and IBNODE2 are stopped, the Integration Explorer should show all of them in the stopped state.
The next step is to start the integration server IS1 in both integration nodes. Since it will be the first integration server to start, it should host a catalog server and a container server. Try to start these two integration servers at around the same time: because there are multiple catalog servers, they need to handshake before they can fully start up, and until they do, the cache will not be available. In order to start an integration server, run mqsistartmsgflow on the command line:
[sstone1@sagitta ~]$ mqsistartmsgflow IBNODE1 -e IS1
BIP1186I: Starting the execution group 'IS1'...
BIP1187I: The execution group 'IS1' is reported as started.
BIP8071I: Successful command completion.
[sstone1@sagitta ~]$ mqsistartmsgflow IBNODE2 -e IS1
BIP1186I: Starting the execution group 'IS1'...
BIP1187I: The execution group 'IS1' is reported as started.
BIP8071I: Successful command completion.
Alternatively, start the integration server from within the Integration Explorer by right-clicking the integration server, going to "Start" and then clicking on "Integration server". Now that the integration server IS1 is started in both IBNODE1 and IBNODE2, confirm that both integration servers are running both a catalog server and a container server by checking the integration server level cache manager properties.

Now, start the integration servers IS2, IS3 and IS4 in both integration nodes. Since these integration servers will be the second, third and fourth to start, they will host a container server. When they have started, confirm that all of these integration servers are running a container server by checking the integration server level cache manager properties.

The Global Cache components are now all running in their intended integration servers. Note that the remaining integration servers IS5, IS6, IS7 and IS8 are still in the stopped state. Leave them stopped for now, until the configuration of the first four integration servers has been fixed. In the Integration Explorer, IS1 to IS4 should now be running in both integration nodes, with IS5 to IS8 stopped.

Fixing the roles of the integration servers

Now that all the integration nodes and servers are running with the configuration we want, it is time to fix the roles of the integration servers. We do this by changing the cache policy of both of the integration nodes to none. This can be done on the command line by running mqsichangeproperties, as shown below, or by changing the cache policy to none in the Integration Explorer, as described in the first walkthrough.
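As in the first walkthrough:

[sstone1@sagitta ~]$ mqsichangeproperties IBNODE1 -b cachemanager -o CacheManager -n policy -v none
BIP8071I: Successful command completion.
[sstone1@sagitta ~]$ mqsichangeproperties IBNODE2 -b cachemanager -o CacheManager -n policy -v none
BIP8071I: Successful command completion.
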
The change to the none policy takes place immediately, and the roles of the integration servers are now fixed. Stop both the integration nodes, and then start them again. Once all of the integration servers have successfully started, confirm that each integration server has the expected integration server level cache manager settings. The integration server IS1 in each integration node should be hosting a catalog server and a container server, and the integration servers IS2, IS3 and IS4 should be hosting container servers.

The administrator can now restart the integration servers IS5, IS6, IS7 and IS8 in order to restore all previously deployed message flows to a running state. However, in order to connect to the Global Cache, these integration servers will now need to be correctly configured so that they can locate the catalog servers. See the next section for changing the value of the connectionEndPoints parameter.

Creating new integration servers in an integration node with fixed integration server roles

When an integration node has been configured with fixed integration server roles, the Integration Bus administrator may wish to create further integration servers to host message flows. If the message flows deployed to these integration servers are to connect to the Global Cache, then they need to be configured with the correct properties in order to locate the catalog servers that have been configured. When the integration node is configured with a cache policy, this configuration is supplied automatically.

There's only one property to configure on the integration server level cache manager - connectionEndPoints. Once you have created a new integration server, you need to find the value for this parameter from an existing integration server in the integration node that can successfully connect to the cache. This can be done by running mqsireportproperties on the command line:
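For example, assuming the integration server IS1 from the previous walkthrough can successfully connect to the cache (the host names and ports in the output are illustrative):

[sstone1@sagitta ~]$ mqsireportproperties IBNODE1 -e IS1 -o ComIbmCacheManager -n connectionEndPoints
host1.example.com:3800,host2.example.com:3820
BIP8071I: Successful command completion.
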
You can also check the value of this property from the Integration Explorer. Navigate to the integration server you wish to check, right-click it and click on "Properties...". You can then click on the "Global Cache" tab on the left to see the current set of properties, including the "Connection endpoints" value.
Once you've retrieved the value of the connectionEndPoints parameter, change the setting on the new integration server so that it has the same value. This can be done by running mqsichangeproperties on the command line:
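For a new integration server named IS9 (the name and endpoint values are illustrative - use the value you retrieved above):

[sstone1@sagitta ~]$ mqsichangeproperties IBNODE1 -e IS9 -o ComIbmCacheManager -n connectionEndPoints -v "host1.example.com:3800,host2.example.com:3820"
BIP8071I: Successful command completion.
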
NOTE: Pay special attention to the quotes required around the value - any commas in the value will be treated as separate parameters unless the quotes are correct!

Alternatively, this can be changed in the Integration Explorer. Navigate to the integration server you wish to change, right-click it and click on "Properties...". You can then click on the "Global Cache" tab on the left to see the current set of properties. Change the value of the "Connection endpoints" parameter and click "Apply".
After the new value for connectionEndPoints has been set, restart the integration servers so that they pick up the configuration change and connect to the cache.

Isolating catalog and container servers

As an Integration Bus administrator, you may also wish to isolate the catalog server so that it is hosted in an integration server that doesn't host a container server. This way, there is no risk of the container server that is running in the same JVM consuming all of the available JVM heap storage and causing an out of memory error that causes the catalog server to crash. This can be achieved by changing the enableContainerService parameter of the integration server level cache manager to false. You can do this by running mqsichangeproperties on the command line:
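For example, to stop the integration server ISCATALOG from the first walkthrough from hosting a container server:

[sstone1@sagitta ~]$ mqsichangeproperties IBNODE1 -e ISCATALOG -o ComIbmCacheManager -n enableContainerService -v false
BIP8071I: Successful command completion.
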
Alternatively, this can be changed in the Integration Explorer. Navigate to the integration server you wish to change, right-click it and click on "Properties...". You can then click on the "Global Cache" tab on the left to see the current set of properties. Untick the checkbox labelled "Container server enable" and click "Apply".
After the new value for enableContainerService has been set, restart the integration server so that the container server it was hosting is shut down.

Conclusion

Using the policy-based control over the Global Cache components hosted within integration nodes gives both Integration Bus developers and administrators the power to easily create Global Cache configurations that span either a single integration node or multiple integration nodes. Once the desired Global Cache configuration has been created, the configuration of the individual integration servers can then be fixed with little effort.
#IIBV9
#IIB
#Global-Cache
