The global cache provides a repository for data that you want to reuse beyond a specific message flow node, instance of a message flow, integration server, or integration node. The global cache builds on WebSphere eXtreme Scale technology, and integration servers can host the components that provide the cache. The components that can be hosted are:
- Catalog servers – control the placement of data and monitor the health of container servers.
- Container servers – hold a subset of the cache data. Collectively, the container servers in the global cache hold all of the cache data at least once.
The global cache components are managed by a cache manager. A cache policy, specified on the integration node and provided to the cache manager, defines the topology of the global cache components. In addition to the cache policy, an ObjectGrid descriptor XML file and a deployment policy descriptor XML file define the structure and configuration of the global cache. IBM Integration Bus 10.0.0.4 provides a mechanism that allows customised descriptor XML files to be used in place of the default files, giving you greater control over the configuration of the maps in your embedded global cache based on their usage, so that you can improve cache management and optimise performance.
Setting the descriptor XML files to be used
The IBM Integration Bus 10.0.0.4 installation provides sample ObjectGrid and deployment policy descriptor XML files – these are located in the install_dir/server/sample/globalcache directory. Create a copy of the objectgrid.xml and deployment.xml files and modify them appropriately for your global cache implementation.
The customised descriptor XML files must now be picked up by the cache manager. The cache manager uses the following precedence order, from highest to lowest, to determine which descriptor XML files to use:
- Integration server level property
- Integration node level property
- IBM Integration Bus workpath
- Default descriptor XML files
Once your descriptor XML files are ready, complete one of the following steps to configure the embedded global cache to use the properties from these files:
- Configure the location of the ObjectGrid and deployment policy descriptor XML files at the integration server level by running the following commands:
mqsichangeproperties integrationNode -e integrationServer -o ComIbmCacheManager -n objectGridCustomFile -v pathToFile/objectgrid.xml
mqsichangeproperties integrationNode -e integrationServer -o ComIbmCacheManager -n deploymentPolicyCustomFile -v pathToFile/deployment.xml
- Configure the location of the ObjectGrid and deployment policy descriptor XML files at the integration node level by running the following commands:
mqsichangeproperties integrationNode -b cacheManager -o CacheManager -n objectGridCustomFile -v pathToFile/objectgrid.xml
mqsichangeproperties integrationNode -b cacheManager -o CacheManager -n deploymentPolicyCustomFile -v pathToFile/deployment.xml
- Alternatively, copy the objectgrid.xml and deployment.xml files to the workpath/common/wxs directory, where workpath is the full path to the working directory on the integration node.
NB. When a cache policy is in use on an integration node, integration server properties are read-only and inherit from the integration node properties. You can turn off policy control by selecting an integration node cache policy of none; you can then set the properties explicitly for each integration server. The properties most recently set by the integration node-level policy are retained as a starting point for customisation. See How to fix integration server roles in a Global Cache configuration in IIB for more information.
Once you have completed one of these steps to set the configuration files, you must restart the integration node so that all cache components are reset and the changes take effect.
If you have configured your global cache to span multiple integration servers or integration nodes, each integration node and integration server that participates in the solution must use the same ObjectGrid descriptor XML and deployment policy descriptor XML files to maintain data consistency.
Configuring the descriptor XML files
The following customisations can optionally be made to the sample descriptor XML files provided in IBM Integration Bus 10.0.0.4:
- Configure the lockStrategy for your backing maps appropriate to usage
- Configure read operations from replica shards to distribute workload
- Configure maps independently when more than one is utilised
These customisations allow the lockStrategy to be set appropriately for how each cache map is used, which, together with enabling the replicaReadEnabled property, can provide significant performance advantages over the default settings.
The deployment policy descriptor XML file must be used together with an ObjectGrid descriptor XML file, and the two must be compatible. The map elements that are defined within the objectgridDeployment element must be consistent with the backingMap elements in the ObjectGrid descriptor XML file.
Template maps are defined in the ObjectGrid descriptor XML file by setting the template attribute to true. When a map is requested that has not already been defined, and the new map name matches the regular expression of a template map, the map is created dynamically and assigned the name of the requested map. This newly created map inherits all of the settings of the template map as defined in the ObjectGrid descriptor XML file. When defining template maps, ensure that the map names you use are distinct enough that the application can match each one to only one template map. If a map name matches more than one template map pattern, an IllegalArgumentException results.
Choosing a locking strategy
The available locking strategies are PESSIMISTIC, OPTIMISTIC_NO_VERSIONING, and NONE. When choosing a locking strategy, consider the use case and access pattern for the particular cache map:
- PESSIMISTIC – Use the PESSIMISTIC locking strategy when cache data is updated frequently from multiple sources and where keys may often collide.
- OPTIMISTIC_NO_VERSIONING – Use this strategy when the majority of operations are read, and if cache data is updated from multiple sources but the cache record is unlikely to be updated by two sources simultaneously.
- NONE – Use the NONE locking strategy for applications that are read only – such as those using the cache as a look-up table. The NONE locking strategy does not obtain any locks or use a lock manager. Therefore, this strategy offers the most concurrency, performance and scalability.
To configure locking strategies for cache maps in the embedded global cache, open your ObjectGrid descriptor XML file and add or edit the appropriate backingMap template entries in the objectGrid element, using patterns that match the names or naming conventions of your cache maps. Set the lockStrategy attribute to PESSIMISTIC, OPTIMISTIC_NO_VERSIONING or NONE for each backingMap element. For example:
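A minimal sketch of such a template entry is shown below. The grid name WMB and the schema references follow the conventions of the WebSphere eXtreme Scale sample files; the USER.OPTIMISTIC.* map name pattern is illustrative, so substitute the naming convention used by your own application:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns="http://ibm.com/ws/objectgrid/config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd">
    <objectGrids>
        <objectGrid name="WMB">
            <!-- Template entry: any map whose name matches USER.OPTIMISTIC.*
                 is created dynamically with this locking strategy -->
            <backingMap name="USER.OPTIMISTIC.*" template="true"
                        lockStrategy="OPTIMISTIC_NO_VERSIONING"/>
        </objectGrid>
    </objectGrids>
</objectGridConfig>
```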
Here the backingMap configuration is a template for any cache maps whose names start with USER.OPTIMISTIC, and it specifies the OPTIMISTIC_NO_VERSIONING locking strategy.
When using the OPTIMISTIC_NO_VERSIONING or NONE locking strategies a “near cache” is provided to optimise performance. If your application is performing a number of operations that update the cache (update, remove, etc.) then data in the near cache can become stale. Set the nearCacheInvalidationEnabled attribute to true on your backingMap to enable the removal of stale data from the near cache as quickly as possible.
NB. Near cache invalidation requires the embedded grid to be configured to use eXtreme IO (XIO). To enable this, the integration node must have a function level of 10.0.0.2 or later. For information about checking and setting function levels, see Changing the function level of your integration nodes.
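For example, a backingMap template that enables near cache invalidation might look like this (the map name pattern is illustrative):

```xml
<!-- Stale entries are removed from the near cache as soon as possible -->
<backingMap name="USER.OPTIMISTIC.*" template="true"
            lockStrategy="OPTIMISTIC_NO_VERSIONING"
            nearCacheInvalidationEnabled="true"/>
```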
The following example file shows four entries, and includes an entry (SYSTEM.BROKER.*) that matches the default cache map for the IBM Integration Bus embedded global cache.
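A sketch of such a file is shown below. The SYSTEM.BROKER.* entry matches the default cache map of the embedded global cache; the three USER.* patterns and the locking strategies assigned to them are illustrative assumptions, chosen to show one entry per strategy:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns="http://ibm.com/ws/objectgrid/config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd">
    <objectGrids>
        <objectGrid name="WMB">
            <!-- Matches the default cache map used by IBM Integration Bus -->
            <backingMap name="SYSTEM.BROKER.*" template="true"
                        lockStrategy="PESSIMISTIC"/>
            <!-- Read-mostly maps that are still occasionally updated -->
            <backingMap name="USER.OPTIMISTIC.*" template="true"
                        lockStrategy="OPTIMISTIC_NO_VERSIONING"
                        nearCacheInvalidationEnabled="true"/>
            <!-- Maps updated frequently from multiple sources -->
            <backingMap name="USER.PESSIMISTIC.*" template="true"
                        lockStrategy="PESSIMISTIC"/>
            <!-- Read-only look-up tables -->
            <backingMap name="USER.NONE.*" template="true"
                        lockStrategy="NONE"/>
        </objectGrid>
    </objectGrids>
</objectGridConfig>
```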
Configuring to read from a replica shard
If more than one container exists, the default cache policy ensures that all data is replicated at least once – such that each container server can host primary and replica shards of the cache data.
By default, read operations are performed against the primary shards, and the replica shards are provided to facilitate failover. If your application mostly performs reads, allowing reads on replica shards improves performance by distributing the workload.
To enable read operations from replica shards, open your deployment policy descriptor XML file and set the replicaReadEnabled attribute to true on the mapSet element.
The following example file is compatible with the example ObjectGrid descriptor XML file in the previous section, and shows the replicaReadEnabled property set on the mapSet element:
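A sketch of such a deployment policy file is shown below. The grid name, mapSet name, and partition and replica counts are illustrative values in the style of the sample deployment.xml; the map ref entries must match the backingMap names defined in your ObjectGrid descriptor XML file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd">
    <objectgridDeployment objectgridName="WMB">
        <!-- replicaReadEnabled distributes read requests across
             a partition primary and its replicas -->
        <mapSet name="mapSet" numberOfPartitions="13"
                minSyncReplicas="0" maxSyncReplicas="1"
                replicaReadEnabled="true">
            <map ref="SYSTEM.BROKER.*"/>
            <map ref="USER.OPTIMISTIC.*"/>
            <map ref="USER.PESSIMISTIC.*"/>
            <map ref="USER.NONE.*"/>
        </mapSet>
    </objectgridDeployment>
</deploymentPolicy>
```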
If the replicaReadEnabled attribute is set to true, read requests are distributed between a partition primary and its replicas. If the replicaReadEnabled attribute is false, read requests are routed to the primary only.
Enabling read requests to be distributed between primary and replica shards spreads the workload more evenly across the container servers and can improve application performance. If the global cache performs mostly read operations, allowing reads on replica shards can improve performance. If the global cache performs a large proportion of write operations, a replica may return stale data, so your application must be able to tolerate this.
Using the steps provided to customise the ObjectGrid and deployment policy descriptor XML files enables greater control over the configuration of your maps based on usage.
- If your application mostly performs read operations, consider the NONE or OPTIMISTIC_NO_VERSIONING locking strategies and the replicaReadEnabled attribute.
- If your application performs a number of update operations but is still mainly performing reads (or cannot tolerate stale data), also configure the nearCacheInvalidationEnabled attribute.
- If your application frequently performs update operations from multiple sources, the PESSIMISTIC locking strategy may be most appropriate.
Configuring these files appropriately can help to optimise performance and reduce running costs and resource usage.