An active/active deployment of IBM Integration Bus
The following diagram shows a simple active/active environment, such as you would get by installing IBM Integration Bus on two machines, creating a single integration node on each, and balancing your workload externally.

[Diagram: active/active deployment for IIB]

Administration of an active/active environment
IBM Integration Bus has no feature equivalent to the WebSphere ESB deployment manager. Each integration node is self-contained, and integration flows are deployed independently to each integration node.

Instead, simple scripting interfaces are provided so that you can script remote deployment across your integration nodes, and a Java API is also available for remote administration and deployment.
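As an illustration of scripting deployment across nodes, the sketch below builds an mqsideploy invocation for each integration node and runs it against each node in turn. The host names, port, integration server name, and BAR file name are hypothetical examples, not values from this article.

```python
# Sketch: scripting the same BAR-file deployment to both nodes of an
# active/active pair via the mqsideploy command. Host names, ports and
# server names below are hypothetical.
import subprocess

NODES = [
    {"host": "iibnode1.example.com", "port": 4414},
    {"host": "iibnode2.example.com", "port": 4414},
]

def build_deploy_command(node, server, bar_file):
    """Build one mqsideploy invocation for a remote integration node.

    -i/-p identify the remote node, -e names the integration server,
    and -a points at the BAR file to deploy.
    """
    return [
        "mqsideploy",
        "-i", node["host"],
        "-p", str(node["port"]),
        "-e", server,
        "-a", bar_file,
    ]

def deploy_everywhere(server, bar_file, run=subprocess.run):
    """Deploy the same BAR file to every node, one node at a time."""
    for node in NODES:
        run(build_deploy_command(node, server, bar_file), check=True)
```

Because each node is self-contained, the loop can just as easily deploy different BAR files to different nodes, or skip a node during a hot rollout.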

While a small amount of planning is needed to design your operational procedures for deployment, the decentralized approach has benefits.

From an operational perspective, you can perform a hot rollout of new versions selectively, to only some integration nodes.

From a functional perspective, you can choose to deploy different sets of integration flows to each node, which is important for certain stateful or sequence-sensitive integration flows.

Cases where a simple active/active environment is sufficient
A simple active/active environment such as the one in the diagram above is easy to configure, but it is not suitable in all cases.

It is suitable for stateless integration flows that are exposed over HTTP or JMS, do not require strict message ordering, do not use XA transactions, and are externally load balanced.

Most WebSphere ESB deployments have some integration flows that are suitable for an active/active deployment.

11 comments on "Active/Active environment in IBM Integration Bus"

  1. Hi, I have the same problem: IIB installed on 2 physical machines. Both have flows in which the DatabaseInput node is used, and both flows are polling a single table. Duplicate records are processed in the flows. I need to know how to handle that situation so that if a record is processed by Machine 1 of IIB, Machine 2 of IIB does not process the same record.

  2. Danny Goyes August 10, 2019

    We have an active/active (2 multi-instance brokers: BRK1, BRK2) environment, and we have consumer adapter flows using the DatabaseInput node. The problem is that the flows on the different nodes consume the same table and create duplicated transactions. Is there something we can do in the node configuration, or is it something that must be configured on the Sybase database? Thanks

    • BenThompsonIBM August 13, 2019

      Hi Danny, although most users stick with the default template, the ESQL code which is used by the DatabaseInput node when communicating with the event table across ODBC is under the user’s control, so if you would like, you could change this to control which records are handled by which of your two active flows. For example, you could get BRK1 to read even-numbered records and BRK2 to read odd-numbered records. In the event of a failover, half of the events would not be processed during the period of the failover itself, but given that this should be a small window of time before the affected node is live again, this design tends to be sufficient for most users. Cheers, Ben
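The even/odd split described above can be sketched as follows. This is an illustrative model only: in IIB the real filter would be a predicate added to the DatabaseInput node's customised event-table ESQL, and the record shape here is hypothetical.

```python
# Sketch of parity partitioning: each broker claims only the event
# records whose key matches its parity, so the two active flows never
# process the same record. Record shape is hypothetical.

def records_for_broker(records, broker_index, broker_count=2):
    """Return the subset of event records this broker should process.

    broker_index 0 (e.g. BRK1) takes even keys and index 1 (BRK2) odd
    keys, mirroring a WHERE (RECORD_ID % 2) = n style filter in the
    event-table query.
    """
    return [r for r in records if r["id"] % broker_count == broker_index]

events = [{"id": n} for n in range(6)]
brk1 = records_for_broker(events, 0)   # records with ids 0, 2, 4
brk2 = records_for_broker(events, 1)   # records with ids 1, 3, 5
```

As Ben notes, the trade-off is that while one broker is failing over, its share of the records waits unprocessed until it is live again.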

  3. Danny Goyes August 08, 2019

    I’ve implemented active/active HA for a couple of nodes that run the same consumer adapter using DatabaseInput nodes. We have duplicate transactions because both nodes consume the same table events at almost the same time. The database is Sybase, and I’ve read some articles about locking tables, but the DBAs at my company say it is not possible. Is there something else I can do to avoid this problem?

    • BenThompsonIBM August 20, 2019

      Hi Danny, apologies this post went unanswered last week … I approved your other comment, which I think covers the same issue, but this one slipped through the net.

  4. Samuel Alejos April 10, 2019

    Nice post Ben. At an insurance company I worked for, we had an integration server dedicated only to queue message processing, while 2 other instances handled online work.

  5. Waleed Abu Yahia February 09, 2018

    What is the best approach for MQ workload if I have an active/active environment? What I mean is: if I have active/active and I need to send MQ messages to IIB, should I send them to the local queue manager in IIB, or can I create another remote MQ server and connect both IIB message flows to get the messages from it? Please note I do not mean the queue manager which IIB uses for managing global transactions and the Aggregate nodes.

    • BenThompsonIBM February 15, 2018

      IIBv10 provides MQInput nodes which can be configured to use a local server binding connection to the IIB node’s queue manager (if you choose to associate your node with a queue manager), or alternatively a client connection to a remote queue manager. If you have an active/active environment (two separate IIB nodes, with separate identities, both running all the time on different machines), then you could deploy functionally equivalent message flows to both nodes to scale your integration logic across both machines. These nodes need a feed of messages.

      If you distribute messages to the local queue managers underpinning the two nodes (for example, by making the two queue managers part of an MQ cluster), then this will achieve workload distribution. You could also choose to send messages to a single queue manager (separate from the two nodes) and then make client connections to that queue manager to feed the two nodes. The latter topology has a slight advantage: if one of the two nodes were to fail over, then during the period of failover the other node would continue to take messages from the queue manager. In the former topology it would be a similar story, because the cluster would continue to send newly arriving messages to the queue manager which was still active; however, any messages already on the queue manager on the failing machine would be inaccessible while the node was failing over.

      So, one answer to your question would be “client connections to a remote queue manager are better”, but life is a bit more complicated than this, because if you went with the client connection option, you would be justified in asking “what happens when that queue manager fails over?” You would need to ensure there is HA for that queue manager too. Essentially, the client connection option does have the advantage of separating your HA needs for MQ from those of IIB, but it is not as simple as saying one is “better” than the other. If you need a local queue manager under the IIB node for other purposes in any case, such as two-phase commit coordination or Aggregate nodes, then it is probably simplest to use the classic local bindings connection with the MQ cluster approach. You might also want to consider performance comparisons of MQ server bindings versus client bindings, as this might be another factor. Hope this helps …
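The failover difference between the two topologies described above can be sketched as a small model. This is illustrative only; queue manager names and message contents are hypothetical.

```python
# Sketch of why the client-connection topology keeps all messages
# reachable during a node failover, while the local-bindings topology
# strands the failing machine's queued messages until failover completes.

def accessible_messages(queues, failed_qms):
    """Messages on a failed queue manager are inaccessible until that
    queue manager has itself failed over; all others remain readable."""
    return [m for qm, msgs in queues.items() if qm not in failed_qms
            for m in msgs]

# Local-bindings topology: messages spread across each node's own QM.
local = {"node1_qm": ["m1", "m2"], "node2_qm": ["m3", "m4"]}
# Client topology: all messages held on one separate gateway QM.
remote = {"gateway_qm": ["m1", "m2", "m3", "m4"]}

# If node1's machine fails, its local QM's messages are stranded:
assert accessible_messages(local, {"node1_qm"}) == ["m3", "m4"]
# With the separate gateway QM, the surviving node can still read everything:
assert accessible_messages(remote, set()) == ["m1", "m2", "m3", "m4"]
```

As the reply notes, the gateway queue manager then becomes its own single point of failure and needs HA of its own, so neither topology is simply "better".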


  6. Marisa –

    We have an active/active (2 multi-instance brokers: MIBRK1, MIBRK2) environment and would like to use the embedded global cache feature in IIB. However, the IBM Knowledge Center tells me that the catalog servers cannot be started on a multi-instance broker. So, if I configure the catalog server on MIBRK1 and it fails over to MIBRK1′, would I lose the catalog server and have to reinitialize it once the failover has been reset?

    • BenThompsonIBM February 15, 2018

      Hi Sadhvenk,
      Apologies that your comment slipped through our net unanswered for so long.
      In case others stop by and see this, there is a page in our Knowledge Center documentation which covers this kind of scenario here:
      Applying this information to the scenario you describe: you have 2 separate multi-instance brokers, one identified as MIBRK1 and the other MIBRK2. When MIBRK1 fails across to the standby instance, the global cache container servers belonging to MIBRK1’s integration servers would start up again, and so long as there is an available catalog server which was *not* defined on MIBRK1, those container servers would rejoin the global cache. However, note that a multi-instance node cannot host a catalog server (if a node owning a catalog server is stopped, the catalog server cannot “rejoin” the cache). For this reason, people using multi-instance nodes typically run their catalog servers in a server associated with a non-multi-instance node.

      You are correct that if a catalog server is stopped, the cache would need to be entirely reinitialized. This can be scheduled as soon as possible after a failover has occurred, to minimise the time spent running with a single catalog server.
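The placement rule in the reply above can be sketched as a one-line check: after a node fails over, its container servers can rejoin the cache only if at least one catalog server runs somewhere other than the failed node. Node names here are hypothetical.

```python
# Sketch of the catalog-server placement rule: the global cache survives
# a node failover only if some catalog server lives outside the failed
# node. Node names are illustrative.

def cache_survives_failover(catalog_hosts, failed_node):
    """True if at least one catalog server is hosted outside failed_node."""
    return any(host != failed_node for host in catalog_hosts)

# Catalog server hosted on a separate, non-multi-instance node: safe.
assert cache_survives_failover(["CATNODE"], "MIBRK1") is True
# Catalog server on the failing multi-instance node itself: cache lost,
# and it must be reinitialized after the failover.
assert cache_survives_failover(["MIBRK1"], "MIBRK1") is False
```

This is why the documentation steers multi-instance users towards hosting catalog servers on a non-multi-instance node.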
