Do you want an easy-to-implement Disaster Recovery (DR) solution for your IBM MQ queue managers? If so, the replicated data DR capabilities added in IBM MQ Advanced 9.0.5 may be just what you need.

The DR solution uses the same replication technology as the RDQM High Availability (HA) support that was added to IBM MQ Advanced in 9.0.4. For DR, replication is between two servers, there is no automatic failover, and you can choose between synchronous and asynchronous data replication.

What do I need?

Two systems (Red Hat Enterprise Linux 7.3 or 7.4, x86_64), each with a volume group for the queue manager data. And that’s it: there’s no need for shared storage or cluster file systems.

Is it easy to configure?

Yes, that’s the intention. On both systems:

  • Install the RDQM feature in IBM MQ Advanced 9.0.5 (a sample script Advanced/RDQM/installRDQMsupport is provided)
  • Create a volume group called drbdpool
  • Configure the firewall if required (a sample script is provided in MQ_INSTALLATION_PATH/samp/rdqm/firewalld/)
  • Configure SELinux if required (semanage permissive -a drbd_t)

And that’s all the system configuration that’s required.
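The preparation steps above can be sketched as the following commands, run on both servers as root. The device name /dev/sdb is an assumption; substitute whatever disk or partition you want to dedicate to queue manager data. The port range 7000-7100 matches the range used by the sample firewall script.

```shell
# Mark a spare disk as an LVM physical volume, then create the
# volume group. RDQM requires the volume group name 'drbdpool'.
pvcreate /dev/sdb
vgcreate drbdpool /dev/sdb

# Optional: open the replication port range through the firewall
# (a sample script is also shipped under MQ_INSTALLATION_PATH/samp/rdqm/firewalld/).
firewall-cmd --permanent --add-port=7000-7100/tcp
firewall-cmd --reload

# Optional: if SELinux is enforcing, make the drbd_t domain permissive.
semanage permissive -a drbd_t
```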

How do I create a DR queue manager?

The creation and deletion of a replicated data DR queue manager requires root authority. Either issue the commands as root, or request that the system administrator grant the mqm user sudo permission for the crtmqm and dltmqm commands.

You’ll need the following pieces of information:

  • The IP addresses of the two servers (these are the IP addresses of the network interfaces that will be used for replication)
  • The system name of the secondary system (uname -n)
  • A unique port per DR queue manager (the sample firewall script uses ports in the range 7000-7100)
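The information above can be gathered on each server with standard commands. The interface name eth1 is an assumption; use whichever interface carries your replication traffic.

```shell
# Print the system name (the value you pass via -rn on the other server).
uname -n

# Show the IPv4 address of the interface to be used for replication.
ip -4 addr show dev eth1
```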

To create a DR queue manager on the primary system, use a command of the following form:

crtmqm -rr p -rl DRLocalIP -ri DRRemoteIP -rn DRRemoteName -rp DRPort QMgrName

For example:

crtmqm -rr p -rl DRLocalIP -ri DRRemoteIP -rn drsys2 -rp 7050 qm1
Creating replicated data queue manager configuration.
IBM MQ queue manager created.
Directory '/var/mqm/vols/qm1/qmgr/qm1' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'qm1'.
Default objects statistics : 84 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
Issue the following command on node 'drsys2' to create the DR secondary queue manager:
crtmqm -rr s -rl DRLocalIP -ri DRRemoteIP -rn drsys1 -rp 7050 qm1

The DR queue manager can then be created on the secondary system by running the command shown in that output.

How do I start a DR queue manager?

The DR queue manager can be started on the primary system with the strmqm command.

How do I display the replication status of my DR queue manager?

The rdqmstatus command can be used. For example:

rdqmstatus -m qm1
Queue manager status:                   Running
CPU:                                    0.03%
Memory:                                 103MB
Queue manager file system:              58MB used, 2.9GB allocated [2%]
DR role:                                Primary
DR status:                              Normal
DR type:                                Asynchronous
DR port:                                7050
DR local IP address:          
DR remote IP address:         

How can I test a failover to the secondary system?

The rdqmdr command can be used. Like the crtmqm and dltmqm commands, rdqmdr requires root authority. Either issue the command as root, or request that the system administrator grant the mqm user sudo permission for the rdqmdr command.

The following steps can be used:

  1. End the DR queue manager on the primary system
    endmqm -w qm1
  2. Make the DR queue manager secondary on the primary system
    rdqmdr -m qm1 -s
  3. Make the DR queue manager primary on the secondary system
    rdqmdr -m qm1 -p
  4. Start the queue manager on the secondary system
    strmqm qm1

Where can I get more information?

The IBM Knowledge Center contains detailed information on RDQM for both DR and HA.

RDQM disaster recovery:

RDQM high availability:
