Overview

Skill Level: Advanced

Tags: Q Replication, MQ, Db2, SQL

Sometimes a production Q Replication site needs to change its MQ setup, e.g., after naming standards have changed or when scenarios have to be migrated to the new standards. This is part 2 of a series on how to change the Q Replication MQ setup non-disruptively.

Ingredients

IBM Q Replication

IBM Data Replication is a data replication platform that replicates information between heterogeneous, cross-platform data stores in near real time. Q Replication is one data replication discipline within this platform. Q Replication includes log-based change capture for Db2 for z/OS, Db2 (LUW), and Oracle databases. Common usage scenarios are the delta provisioning of data warehouse and data lake systems including historization, the real-time maintenance of read mirrors, zero-downtime data migrations, data synchronization between multiple active databases (continuous availability, active/active), and others.

Q Replication Control Tables

The Q Replication Control Tables are regular database tables in the replication source and replication target database. The tables are created during the Q Replication setup. The Control Tables store metadata and monitoring data about all replication subscriptions (a subscription defines how a table is replicated from the source to the target database). Usually the Control Tables are updated only by the Q Replication setup interfaces (UI or ASNCLP commands) and by the runtime programs Q Capture and Q Apply; but for some procedures - like this one - you can update the Control Tables with custom SQL. Remember: if you choose to do that, do it carefully.

Q Managers and Queues

Q Replication leverages IBM MQ as a fast, reliable, and secure transport infrastructure. Q Replication supports a 'local' setup with one Q Manager only (Q Capture and Q Apply run locally to this Q Manager or connect to the Q Manager via MQ Client) and a 'remote' setup with two Q Managers (which are connected with MQ server channels). Metadata about the MQ setup is stored within the Q Replication Control Tables.

Remark: At the time of writing this recipe, the procedures documented here were not part of the official product documentation and thus not fully supported. Nonetheless, they have been tested in different scenarios on different platforms (z/OS and LUW) and were used in a live project. So, if you want to reuse the procedures, test carefully to be sure they will also work in your specific setup.

Step-by-step

  1. Overview

    The standard procedures of Q Replication always allow you to change a running replication scenario by stopping and deleting all replication subscriptions and creating a new setup from scratch. But stopping/dropping and recreating/starting replication subscriptions – regardless of how fast and automated this can be achieved – usually requires re-loading the target tables to prevent data loss, because transactions happening between the stop and the start of the subscriptions will not be replicated. A dedicated approach to change the replication queues for existing Q Replication subscriptions can be found in the standard product documentation here. But this procedure also requires an interim stop of source applications to maintain replication consistency.

    If re-loading the target tables or an interim stop of source applications is not tolerable, more sophisticated procedures are required which update an existing and active Q Replication setup without losing any transaction. The procedure documented here requires an intermediate stop/start of the replication processes and therefore temporarily affects the replication latency, but it maintains replication consistency at all times.

    This recipe is the second of a series documenting the procedures to change replication queues of an active Q Replication setup. Part 1 of the series deals with moving Q Replication subscriptions from one MQ queue manager to another. It can be found here.

    Other Q Replication maintenance procedures can be found in the IBM Knowledge Center here.

  2. Definition of the new MQ Queues

    To start with, create the new replication send queues and receive queues in your MQ queue manager(s) according to the Q Replication guidelines and your naming standards. This procedure can be used to change the replication queues at the Q Capture server and at the Q Apply server. If you only want to change the send queue or the receive queue, the timing would have to be adapted slightly.

    If you not only want to change the replication queues but also the queue managers, do it in a two-step approach: first change the queue managers as documented in another DeveloperWorks recipe here, and after you have successfully achieved that, change the queue names as documented in this recipe.
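
    For example, the new queues could be defined with MQSC commands like the following sketch. The queue names are illustrative (matching the Queue Map example used later in this recipe), and the attributes must be aligned with your own standards; DEFPSIST(YES) matters because replication messages should be persistent.

    * on the Q Capture queue manager (remote setup): the new send queue
    * as a remote queue definition pointing at the new receive queue
    DEFINE QREMOTE('ASN.SOURCEDB.TARGETDB.MYNEWSENDQ') +
           RNAME('ASN.SOURCEDB.TARGETDB.MYNEWRECVQ') +
           RQMNAME('<target_qmgr>') XMITQ('<transmission_queue>') +
           DEFPSIST(YES)

    * on the Q Apply queue manager: the new local receive queue
    DEFINE QLOCAL('ASN.SOURCEDB.TARGETDB.MYNEWRECVQ') +
           DEFPSIST(YES) MAXDEPTH(500000)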

  3. Definition of a new Replication Queue Map

    After the definition of the new replication send queues and receive queues in MQ, define a new Q Replication Queue Map to introduce the new MQ queues to Q Replication. You can use the Q Replication UI or the ASNCLP command language to accomplish this task. Here is an example ASNCLP CREATE REPLQMAP command to create a new Queue Map:

    CREATE REPLQMAP SOURCEDB_ASN_TO_TARGETDB_ASN
    USING ADMINQ "ASN.SOURCEDB.ADMINQ"
    RECVQ "ASN.SOURCEDB.TARGETDB.MYNEWRECVQ"
    SENDQ "ASN.SOURCEDB.TARGETDB.MYNEWSENDQ"
    NUM APPLY AGENTS 12 MEMORY LIMIT 512 HEARTBEAT INTERVAL 60000 MAX MESSAGE SIZE 1000;

  4. Stop and Start of Q Capture and Q Apply and let them run for a while

    Stop and start the Q Capture process/task and the Q Apply process/task (both with STARTALLQ=Y if that is not your default anyway). With this step, both Q Capture and Q Apply pick up the new send and receive queues, even though no replication subscription has been defined for the new Queue Map yet.

    To check if Q Capture and Q Apply are using STARTALLQ=Y, query the Control Tables IBMQREP_CAPPARMS and IBMQREP_APPLYPARMS and verify that the parameters found in the Control Tables are not overridden by startup parameters in the Q Capture and Q Apply start scripts / JCL.

    -- connect to <capture_server>;
    -- SET CURRENT SCHEMA = '<capture_schema>';
    SELECT STARTALLQ FROM IBMQREP_CAPPARMS;

    -- connect to <apply_server>;
    -- SET CURRENT SCHEMA = '<apply_schema>';
    SELECT STARTALLQ FROM IBMQREP_APPLYPARMS;

    After starting Q Capture, the new send queue is listed and maintained in the RESTARTQ. You can verify this by browsing the RESTARTQ, for example using the asnqmfmt command:

    asnqmfmt <RestartQ> <Capture_QMgr>

    After starting Q Apply, have a look at Q Apply’s log messages to see that Q Apply started a browser thread and agent threads for the new receive queue.

    It is very important that Q Capture and Q Apply pick up the new queues before you actually proceed with the following steps. Let Q Capture and Q Apply run like this as long as you like, and stop and start the processes as often as necessary. But stopping and starting Q Capture and Q Apply once, as done in this step, is absolutely necessary to guarantee replication consistency when moving the subscriptions to the new queues.

  5. Stop Q Capture with a SIGNAL

    Now we are preparing the actual change of the existing replication setup. First we need to stop the processes with empty queues.

    Before stopping Q Capture with a SIGNAL, check that Q Capture and all Q Apply processes replicating from that Capture instance are running and that the replication latency is low (usually a few seconds or less).

    Next check that no monitoring or automation is in place which restarts your Q Capture task in case it is down. Then, stop Q Capture with a STOP SIGNAL using the option ‘DATA_APPLIED’.

    -- connect to the Q Capture Server
    SET CURRENT SCHEMA = '<capture_schema>';
    INSERT INTO IBMQREP_SIGNAL
    ( SIGNAL_TIME
    , SIGNAL_TYPE
    , SIGNAL_SUBTYPE
    , SIGNAL_INPUT_IN
    , SIGNAL_STATE)
    VALUES
    (CURRENT TIMESTAMP
    , 'CMD'
    , 'STOP'
    , 'CURRENT_TIMESTAMP;DATA_APPLIED'
    , 'P');

    After receiving the SIGNAL (after reading the log record of the INSERT into the SIGNAL table from the Db2 log), Q Capture stops publishing transactions. Then it waits for a confirmation message (via the ADMIN queue) from all affected Q Apply programs that all sent transactions have been applied to the target. After receiving all confirmation messages for all send queues, Q Capture will stop.

    If Q Capture does not stop, check the Q Capture log messages (e.g., in IBMQREP_CAPTRACE) to find out which Q Apply process / which send/receive queue has not sent the confirmation yet.
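
    A quick way to inspect the latest Q Capture messages is to query the trace Control Table directly, for example with a statement like this sketch:

    -- connect to the Q Capture Server
    SET CURRENT SCHEMA = '<capture_schema>';
    SELECT TRACE_TIME, OPERATION, DESCRIPTION
    FROM IBMQREP_CAPTRACE
    ORDER BY TRACE_TIME DESC
    FETCH FIRST 20 ROWS ONLY;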

    After Q Capture has stopped, all send and receive queues are empty.

  6. Check that Q Apply has processed all messages on the receive queue and stop Q Apply

    To convince yourself that there is no remaining work for Q Apply, check the queue depth of all your receive queues. It must be 0. You can do this with MQ tooling or with this asnqmfmt command:

    asnqmfmt <receive_queue_name> <q_mgr_name> -l 5

    The output of the asnqmfmt command should be empty (no message in the queue).
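
    Alternatively, the queue depth can be checked directly in MQ, e.g., with an MQSC DISPLAY command (run in runmqsc on LUW, or via the console on z/OS; the queue name is a placeholder):

    DISPLAY QLOCAL('<receive_queue_name>') CURDEPTH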

    After this check, stop your Q Apply program(s).

    To be safe, also check that the new send and receive queues are empty (they could have been used for a previous test). If the new queues are not empty, clear all messages from these queues.
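
    If needed, a queue can be emptied with the MQSC CLEAR command while no program has it open (queue name is a placeholder; note that in a remote setup the send queue is a remote queue definition, so you would clear the corresponding transmission or receive queue instead):

    CLEAR QLOCAL('<new_receive_queue_name>')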

  7. Update the Q Capture Control Tables

    Now, as Q Capture is down, update the Q Capture Control Table IBMQREP_SUBS to link the existing Q Replication subscriptions to the new send queue which has been created and introduced during the previous steps:

    -- connect to the Q Capture Server
    SET CURRENT SCHEMA = '<capture_schema>';

    UPDATE IBMQREP_SUBS
    SET SENDQ = '<new send queue name>'
    WHERE SENDQ = '<old send queue name>'
    -- optionally just some subscriptions
    -- AND SUBNAME = '<subname>'
    ;
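
    To verify the update, you can check that no subscription references the old send queue anymore; the following sketch should return no rows:

    SELECT SUBNAME, SENDQ, STATE
    FROM IBMQREP_SUBS
    WHERE SENDQ = '<old send queue name>';
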

  8. Update the Q Apply Control Tables

    Next, as Q Apply is down, update the Q Apply Control Tables IBMQREP_TARGETS and IBMQREP_TRG_COLS to link the existing Q Replication subscriptions to the new receive queue which has been created and introduced during the previous steps:

    -- connect to the Q Apply Server
    SET CURRENT SCHEMA = '<apply_schema>';

    UPDATE IBMQREP_TARGETS
    SET RECVQ = '<new receive queue name>'
    WHERE RECVQ = '<old receive queue name>'
    -- optionally just some subscriptions
    -- AND SUBNAME = '<subname>'
    ;

    UPDATE IBMQREP_TRG_COLS
    SET RECVQ = '<new receive queue name>'
    WHERE RECVQ = '<old receive queue name>'
    -- optionally just some subscriptions
    -- AND SUBNAME = '<subname>'
    ;
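
    Analogous to the Q Capture side, you can verify that no target definition references the old receive queue anymore; the following sketch should return no rows:

    SELECT SUBNAME, RECVQ, STATE
    FROM IBMQREP_TARGETS
    WHERE RECVQ = '<old receive queue name>';
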

  9. Start Q Capture and Q Apply

    Next start Q Capture and Q Apply with your standard procedures.
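
    If you start the programs from the command line, the invocations typically look like this sketch (server and schema names are placeholders; on z/OS you would start your Q Capture and Q Apply tasks via JCL / started tasks instead):

    asnqcap capture_server=<capture_server> capture_schema=<capture_schema>
    asnqapp apply_server=<apply_server> apply_schema=<apply_schema>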

  10. Conclusion

    Although it is not an everyday use case to switch the replication queues in an active and running Q Replication scenario – it happens. When you start implementing Q Replication, naming standards are often not yet clear.

    Later when the scenario grows, it can be advantageous to separate Q Replication workloads and to move replication subscriptions to dedicated queues – for example to increase the degree of replication parallelism. In this case, some or all Q Replication Subscriptions need to move to new MQ queues. But this should be possible without reloading hundreds or even thousands of Q Replication target tables.

    It is good that IBM Q Replication offers ways to switch the replication queues in a non-disruptive manner – without the need to drop and recreate all Q Replication subscriptions and without the need to reload all replication target tables.

    The procedures documented here have been tested in different scenarios on different platforms (z/OS and LUW) and were used in a live project. But they are not a standard procedure. So, if you want to reuse the procedures, test carefully to be sure they will also work in your specific setup.

1 comment on "Q Replication Maintenance - Changing Replication Queues"

  1. Olaf.Stephan January 03, 2018

    Great article …..again !