Let us look at some key reasons (other than correlation state, which we will cover in detail in a different section) why you might choose to implement HA failover, either as the entire HA solution for your environment, or as an extension to active/active processing.
While an active/active topology allows new work to be submitted almost immediately after an integration node terminates, the persistent messages held on that integration node will not be recovered automatically unless HA failover is configured for each integration node. With HA failover configured, the integration node, including its messaging runtime, fails over to the secondary machine, and once the failover completes, those persistent messages are processed there.
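As an illustrative sketch, the failover behaviour described above can be achieved with a multi-instance configuration of the queue manager and integration node. The commands below follow the IBM Integration Bus style (`crtmqm`/`strmqm`, `mqsicreatebroker`, `mqsiaddbrokerinstance`); command names and flags vary between product versions, and the names `MYQM`, `MYNODE`, and the `/shared/...` paths are placeholders for your own values, with shared network storage assumed to be mounted at the same location on both machines:

```shell
# --- On the primary machine ---
# Create the queue manager with its data and logs on shared storage,
# so the standby machine can take them over on failover.
crtmqm -md /shared/qmdata -ld /shared/qmlog MYQM

# Create the integration node with its shared work path on the same
# storage (-e enables multi-instance operation).
mqsicreatebroker MYNODE -q MYQM -e /shared/mynode

# Start the queue manager permitting a standby instance (-x),
# then start the integration node.
strmqm -x MYQM
mqsistart MYNODE

# --- On the standby machine ---
# Register a standby instance of the same integration node against
# the shared work path, then start both components; they wait in
# standby until the primary fails.
mqsiaddbrokerinstance MYNODE -e /shared/mynode
strmqm -x MYQM
mqsistart MYNODE
```

If the primary machine fails, the standby instances take ownership of the shared data, and persistent messages on the queue manager are recovered and processed on the standby machine.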
One common reason for choosing HA failover is that you have sequence-sensitive flows. These cannot generally be deployed to multiple active integration nodes, and hence must run active/passive (as is the case with flows controlled by the event sequencing feature in WebSphere ESB, although the details of that feature are not covered in this article).
If you configure your integration nodes for HA failover, you can deploy each sequence-sensitive flow to a single integration node in the environment, making that specific flow active/passive within the environment.
Another common reason is file-triggered processing: if you are triggering processing on the arrival of a file over a network via SFTP, or via Managed File Transfer (Connect:Direct, MQ, and so on), you need to ensure that only one destination integration node can pick up the file. In this case, you might configure HA failover for that integration node so that the destination does not become a single point of failure.