In this article we will look at how to connect to a queue manager running on Cloud Pak for Integration. In an OCP 4.2 cluster the worker nodes do not have public IP addresses, so you may not be able to connect to the queue manager the same way you did earlier.

You can connect to the Queue Manager running on Cloud Pak for Integration from within the cluster or from outside the cluster depending on your requirement.

1. Connecting to the Queue Manager from within the Cluster.

If the application that connects to the Queue Manager is also deployed on the same cluster, use the Kubernetes service name to connect to the Queue Manager.

To get the service name, run the command below:

oc get svc -n <namespace>

Note that 9443 is the default port for the WebUI and 1414 is the default port for the Queue Manager listener.
If your application is deployed in a different namespace from the Queue Manager, qualify the service name with the namespace, as below:

<service-name>.<namespace>.svc


For example, in this case the service name will be:

mq-tls-rel-ibm-mq.mq.svc


The connection information would be as below:

Queue Manager Name : <Name of the Queue Manager>
Host: <service-name>.<namespace>.svc
Port: <Listener Port>
Channel: <Server connection Channel name>
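The in-cluster host name follows directly from the service name and namespace. Here is a minimal Python sketch that assembles the connection details; the service name, namespace, channel, and queue manager name are the example values used in this article (the queue manager name is an assumed placeholder), so substitute your own:

```python
# Assemble in-cluster connection details for a queue manager service.
service_name = "mq-tls-rel-ibm-mq"   # from 'oc get svc -n mq'
namespace = "mq"

def in_cluster_host(service: str, ns: str) -> str:
    """Return the cluster-internal DNS name <service>.<namespace>.svc."""
    return f"{service}.{ns}.svc"

connection = {
    "queue_manager": "QM1",          # assumed queue manager name (placeholder)
    "host": in_cluster_host(service_name, namespace),
    "port": 1414,                    # default listener port
    "channel": "DEF.SVRCONN",        # server connection channel used in this article
}

print(connection["host"])  # mq-tls-rel-ibm-mq.mq.svc
```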


The image below shows how you can connect to it from the MQInput node in ACE.

For this connection scenario, SSL configuration has not been done on the channel ‘DEF.SVRCONN’ and the connection is not over TLS.

2. Connecting to the Queue Manager from outside the Cluster

In many scenarios the application that needs to connect to the queue manager is deployed outside the OpenShift cluster where the queue manager runs. Up to CP4I 2019.3.x, which was deployed on OCP 3.11, you could expose a NodePort for the queue manager listener and connect using the cluster hostname and the NodePort. From CP4I 2019.4, which runs on OCP 4.2, this is no longer an option.

You need an OpenShift Route to connect an application to an IBM® MQ queue manager from outside a Red Hat OpenShift cluster. You must enable TLS on your IBM MQ queue manager and client application, because Server Name Indication (SNI) is only available in the TLS protocol. The OpenShift Container Platform (OCP) Router uses SNI for routing requests to the IBM MQ queue manager. The required configuration of the OpenShift Route depends on the SNI behavior of your client application.

The SNI will be set either to the MQ channel name or to the host name, depending on the client:

CONDITION 1: The SNI is set to the MQ channel name when using:

  • IBM MQ C Client v8 and above
  • Java/JMS Client version 9.1.1 and above with a TLS v1.2 or higher CipherSuite and Java 8
  • .NET Client in unmanaged mode

CONDITION 2: The SNI is set to the host name when using:

  • IBM MQ C Client v7.5 or below
  • IBM MQ C Client with AllowOutboundSNI set to NO
  • Java/JMS Client version 9.1.0 and below
  • .NET Client in managed mode
  • AMQP or XR clients

Refer to the Knowledge Center link below:
Connecting to a queue manager deployed in an OpenShift cluster


  • 2.1 Configure a TLS secret while deploying an MQ helm-chart

    When you deploy a Queue Manager on Cloud Pak for Integration, create a TLS secret containing the private key and public certificate that you will use for the MQ TLS connection. You can generate a self-signed key pair as below:

    openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt


    Now create a TLS secret as below:

    oc create secret tls tls-secret --key="tls.key" --cert="tls.crt" -n <namespace>

    Now supply this tls-secret, along with the key and certificate names, when deploying the MQ helm chart from the Platform Navigator. Look at the screenshot below:

    You can add more keys if you intend to use different keys for different channels. However, keep one key pair with the name ‘default’, as it is used by the MQSC scripts supplied with the helm chart to configure the ‘default’ CERTLABL.
    The screenshot below shows two key pairs being added, one with the label name ‘default’ and the other with the label name ‘label2’. Two secrets, one for each key pair, need to be created; in this case they are named ‘tls-secret’ and ‘tls-secret2’.

    Also, add the public certificates of connecting clients if you are implementing mutual authentication. In this article we only cover server-side authentication, so no client certificate has been added.
    If you need to add keys or certificates after the queue manager has been deployed, follow the standard helm release upgrade procedure.
    Notice the key name ‘default’. This will be set as the CERTLABL at the queue manager/channel level during configuration. When you add more keys, give each one a different name, so that a CERTLABL is created for each key you supply and can be used appropriately at the queue manager and/or channel level.

    The CP4I helm chart takes care of doing the required configurations for TLS.


  • 2.2 Configure Queue Manager and Channel for TLS

    After the Queue Manager is deployed, configure the TLS settings at Queue Manager and Channel level.
    When you are implementing the DevOps pipeline, these steps would be part of your MQSC script.

    Click on Queue Manager Properties and go to the SSL tab:

    Notice the ‘Cert label’ name. This is the ‘name’ of the key and certificate pair that we supplied in the helm chart. If you have configured more than one key pair, provide the appropriate label name here. This ‘Cert label’ will be used when you connect from the clients described under ‘CONDITION 2’ above, and the CERTLABL supplied at the channel level will be ignored.

    Note that by default CHLAUTH and CONNAUTH are enabled. You may keep them enabled, or disable them if you do not require them. For this demonstration, to keep it simple, we have disabled both.

    Under ‘Extended’ tab, delete the entry in the ‘Connection authentication’ field.

    Under the ‘Communication’ tab, disable ‘CHLAUTH records’

    Now go to the SSL tab of the server connection channel. Specify the ‘SSL cipher spec’ and the appropriate ‘Cert label’ that was created as part of the helm deployment. Since we are not using mutual authentication, ‘SSL authentication’ has been set to ‘Optional’. Setting it to ‘Optional’ disables client authentication.

    If your client application is using the clients as specified in this section above under ‘CONDITION 1’, the ‘Cert label’ specified at the channel will be used and the ‘Cert label’ specified at Queue Manager will be ignored.

    If you intend to administer the queue manager remotely, you may specify the MCA user as ‘mqm’, which gives clients connecting via this channel complete authority over the queue manager. This is not recommended; configure authentication appropriately instead.

    Make sure that you run ‘REFRESH SECURITY’ on the queue manager after making security-related changes. To refresh the security, go to the Queue Manager properties and refresh all three security types.


  • 2.3 Import the certificate into the client’s truststore

    A client that connects to the queue manager over TLS must trust the certificate presented by the queue manager. This requires importing the TLS certificate into the client’s truststore. If you are using a Java truststore, you can use the keytool command or the iKeyMan tool supplied with IBM MQ to import the certificate.
    In this case we created tls.key and tls.crt in step 2.1. Import tls.crt into the client’s truststore and give it any label name.


  • 2.4 Connect from clients specified under CONDITION 2

    If you are using MQ clients as specified in this section above under ‘CONDITION 2’, you can proceed with the connection now.
    Let us connect to the Queue Manager from MQ Explorer 9.1.0.

    Click on ‘Add Remote Queue Manager’ and enter the Queue Manager Name and click Next

    Get the route name for the Queue Manager service.

    oc get route -n <namespace>


    Enter the route host name as the host name, 443 as the port, and the channel name.

    Click on ‘Next’ thrice to reach the SSL configuration page. Click on ‘Enable SSL key repositories’ and enter the path of client truststore and password for truststore.

    Click on Next. Enable SSL options and select a cipher spec. Here we have selected ANY_TLS12. Note that we also specified the ANY_TLS12 cipher spec on the server connection channel, so we can select any cipher spec here that TLS 1.2 supports.

    Click on Finish. It will connect to the Queue Manager successfully.


  • 2.5 Connect from clients specified under CONDITION 1

    If you are connecting from clients specified under CONDITION 1, the SNI will be set to the MQ channel. Such client applications require a new OpenShift Route to be created for each channel you wish to connect to. You must also use channel names that are unique across your Red Hat OpenShift cluster, to allow routing to the correct queue manager.
    To determine the required host name for each of your new OpenShift Routes, you need to map each channel name to an SNI address as documented here:
    https://www.ibm.com/support/pages/ibm-websphere-mq-how-does-mq-provide-multiple-certificates-certlabl-capability

    The SNI address used by MQ is based upon the channel name that is being requested, followed by a suffix of “.chl.mq.ibm.com”.

    Since we are using the channel ‘DEF.SVRCONN’ here, it translates to the SNI address below:

    def2e-svrconn.chl.mq.ibm.com


    Refer to the link above to translate the SNI address for your channel name.
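    To illustrate the mapping for the common case of an all-uppercase channel name, here is a small Python sketch: uppercase letters are lowercased, and a character such as ‘.’ is replaced by its two-digit hex value followed by a hyphen (‘.’ becomes ‘2e-’), then the suffix is appended. Channel names containing lowercase letters or digits follow additional rules described in the IBM page linked above, which this sketch deliberately does not cover.

    ```python
    def channel_to_sni(channel: str) -> str:
        """Map an all-uppercase MQ channel name to its SNI address.

        Uppercase letters are lowercased; '.' (and similar non-letter
        characters) become their two-digit hex value plus a hyphen.
        Lowercase letters and digits have extra mapping rules (see the
        linked IBM support page) and are rejected by this sketch.
        """
        out = []
        for ch in channel:
            if "A" <= ch <= "Z":
                out.append(ch.lower())
            elif ch == ".":
                out.append(format(ord(ch), "02x") + "-")  # '.' -> '2e-'
            else:
                raise ValueError(f"character {ch!r} not handled by this sketch")
        return "".join(out) + ".chl.mq.ibm.com"

    print(channel_to_sni("DEF.SVRCONN"))  # def2e-svrconn.chl.mq.ibm.com
    ```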

    You must then create a new OpenShift Route (for each channel) by applying the following yaml in your cluster:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: <provide a unique name for the Route>
      namespace: <the namespace of your MQ deployment>
    spec:
      host: <SNI address mapping for the channel>
      to:
        kind: Service
        name: <the name of the Kubernetes Service for your MQ deployment (for example "<Helm Release>-ibm-mq")>
      port:
        targetPort: 1414
      tls:
        termination: passthrough

    Let us create the yaml file for our ‘DEF.SVRCONN’ channel in this case.
    Get the service name.

    oc get svc -n mq


    Create a yaml file with the content below, say ‘mqroute.yaml’

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: defsvrconnmqroute
      namespace: mq
    spec:
      host: def2e-svrconn.chl.mq.ibm.com
      to:
        kind: Service
        name: mq-tls-rel-ibm-mq
      port:
        targetPort: 1414
      tls:
        termination: passthrough


    Now create the route with the command below:

    oc create -f <route yaml file>
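    If you have many channels, generating one Route manifest per channel can be scripted. The sketch below (plain string templating in Python; the argument values are the example names from this article) renders a manifest equivalent to mqroute.yaml:

    ```python
    def route_manifest(name: str, namespace: str, sni_host: str, service: str) -> str:
        """Render an OpenShift Route manifest (TLS passthrough to port 1414)
        for one MQ channel. All arguments are caller-supplied."""
        return f"""apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: {name}
      namespace: {namespace}
    spec:
      host: {sni_host}
      to:
        kind: Service
        name: {service}
      port:
        targetPort: 1414
      tls:
        termination: passthrough
    """

    manifest = route_manifest(
        name="defsvrconnmqroute",
        namespace="mq",
        sni_host="def2e-svrconn.chl.mq.ibm.com",
        service="mq-tls-rel-ibm-mq",
    )
    print(manifest)
    ```

    You could write each rendered manifest to a file and apply it with ‘oc create -f’, one Route per channel.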


    Now let us connect to the Queue Manager from MQ Explorer 9.1.3.

    Click on ‘Add Remote Queue Manager’ and enter Queue Manager name.

    Click Next and enter the route host name as the host name, 443 as the port, and the channel name.
    You can get the route host name using the command below:

    oc get route -n <namespace>

    Click on Next thrice to reach the SSL configuration page. Specify the client truststore that holds the public certificate for the CERTLABL specified on the channel, and enter the password to open the truststore.

    Click on Next. Enable SSL options and specify the SSL cipher spec. Since we specified ‘ANY_TLS12’ on the channel, you can use any cipher spec supported by TLS 1.2.

    Click on Finish and it would successfully connect to the Queue Manager.

    Refer to the article below to connect to Queue Manager over TLS.
    MQ with TLS
