Co-Author – Anand Awasthi

Browse this page for answers to some of the frequently asked questions you might have about IBM App Connect Enterprise on Cloud Pak for Integration. Click on a question to see the answer.

For questions about this FAQ, you can add a comment to the bottom of this page.

Install and Configure

How do I install IBM Cloud Pak for Integration (CP4I) 2019.4.1 on OCP 4.2?

How do I pass the additional configuration or secrets to my Integration Server deployment?

If the integration server requires any configuration to be applied, you will need to use the Download configuration package option to provide that configuration prior to the helm install. Refer to the README.md inside the downloaded package for how to create the required secrets.
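A minimal sketch of the sequence, assuming the package downloads as config.tar.gz and using my-ace-secret as a placeholder secret name:

# Extract the downloaded configuration package
mkdir ace-config && tar -xzf config.tar.gz -C ace-config
cd ace-config
# Edit setdbparms.txt, serverconf.yaml, certificate files, etc. as described in README.md,
# then create the Kubernetes secret that is referenced at deploy time:
./generateSecrets.sh my-ace-secret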

For detailed information, refer to section 7 of the blog:
Modernizing Integration – Migration from IIB to App Connect running on IBM Cloud Pak for Integration (CP4I)

How do I configure my MQ instance running in CP4I on OCP 4.2 to accept connections from external MQ client applications (such as MQ Explorer or RFHUtil) over SSL?

Connecting to Event Streams from ACE (using Kafka nodes)

In this section we describe the steps to integrate an ACE integration flow running in CP4I with an Event Streams service in the same CP4I cluster.

  1. Event Stream Topic and Connection details
  2. Configuring Kafka nodes in ACE Integration Flow with endpoint details
  3. Creating configuration secrets for accessing Event Streams over SASL_SSL

Event Stream Topic and Connection details

Log in to the CP4I platform navigator.

Navigate to your Event Streams instance

Create a topic, if it hasn’t been created before.

Enter a name for the topic; you can leave all other options at their default values.

Click on Create Topic

You can see the topic on the Topics panel

To connect to this Kafka instance from other Kafka client applications such as ACE, you need to obtain the connection information of your Kafka cluster. To do this, click the ‘Connect to this cluster’ option on the topic page.

Copy the Bootstrap Server address. We will need this while configuring the node properties on the Kafka nodes in the ACE integration flow.

Click on Generate API Key.

In the pop-up dialog box, provide a name for your application.

Provide the name of the topic you want to allow access to, or select ‘All Topics’.

Click ‘Copy API Key’. Save the API key, as we will need it when configuring the secrets for the ACE integration server.

Back on the cluster connection panel, in the Certificates section, download the PEM file.

The PEM certificate is typically named es-cert.pem. Save this file, as we will need it while creating the secret for the ACE integration server.

Configuring Kafka nodes in ACE Integration Flow with Event Streams endpoint details

We create a simple integration flow to publish messages to the Kafka topic. For this purpose we use the KafkaProducer node available in ACE.

We update the Kafka node properties with the following details, based on the information we obtained from the Event Streams instance as described in step 1.

On the Basic tab, enter the Topic name and Bootstrap servers details.

On the Security tab, set the Security protocol to SASL_SSL; the Event Streams instance in CP4I supports only this protocol. Set the SSL protocol to TLSv1.2.
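If you prefer to set these endpoint details at build time in a pipeline rather than editing the flow in the Toolkit, the standard BAR override mechanism can be used. This is only a sketch: the application name, flow name, node label, and the assumption that the overridable property names are bootstrapServers and topicName are all hypothetical, and should be verified against the properties listed for your own BAR file:

# Override the KafkaProducer node endpoint details in the BAR before deployment
mqsiapplybaroverride -b KafkaPublisher.bar -o KafkaPublisher-cp4i.bar -k KafkaPublisher \
  -m "PublishToKafka#Kafka Producer.bootstrapServers=es-bootstrap.example.com:443;PublishToKafka#Kafka Producer.topicName=MYTOPIC"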

Creating configuration secrets for accessing Event Streams over SASL_SSL

You need to pass the PEM certificate that you obtained from your Event Streams instance as a truststore certificate for your integration server, and supply the associated truststore password.

Configure truststoreCert-<key alias>.crt

Copy the es-cert.pem file that we obtained from Event Streams into the configuration package (downloaded while deploying the BAR file) as truststoreCert-mykey.crt. Here, mykey is the key alias; you may change it to match your naming conventions.

truststorePassword.txt file:

Update this file with the password of your truststore, as in the sketch below.
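For example, assuming the configuration package was extracted to ./ace-config, mykey is your chosen alias, and the password value is a placeholder:

cp es-cert.pem ace-config/truststoreCert-mykey.crt
echo 'myTruststorePassword' > ace-config/truststorePassword.txt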

Then update the setdbparms.txt and serverconf.yaml files with the truststore password and the location where the truststore file will be hosted inside the container.

setdbparms.txt file:

Add the truststore password and the Event Streams API key (the SASL credentials use the user name token, with the API key as the password) using the setdbparms format shown below:

setdbparms::mytruststorePass dummy <password>
kafka::KAFKA token YI8IUXHz7TAmz-P9ugmh1774fH64440Ood8xf_nJLCii


serverconf.yaml file:

The truststore file is stored inside the container at /home/aceuser/ace-server/truststore.jks:

ResourceManagers:
  JVM:
    truststoreType: 'JKS'
    truststoreFile: '/home/aceuser/ace-server/truststore.jks'
    truststorePass: 'setdbparms::mytruststorePass'


Create Kubernetes Secret

Now we create the secret to use with the ACE integration server.

Ensure that you are logged in to the ‘ace’ namespace/project (or the project in which you will be deploying your integration server). Then, from the extracted configuration package directory, run the following command:

$ ./generateSecrets.sh my-kafka-ace-secret
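To confirm that the secret was created, you can list it (assuming the ace project):

$ oc get secret my-kafka-ace-secret -n ace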

Deploy the BAR file to an ACE Integration Server using ACE Dashboard

Follow the usual procedure to deploy an integration server using the helm chart. In addition to the usual configuration details on the helm chart, enter the secret name (my-kafka-ace-secret) and the truststore key alias (mykey) in the corresponding configuration fields.

Upon successful deployment of the integration server, you can invoke the integration flow and verify that the message has been published to the Kafka topic.

As an example, we post a message through our integration flow, which starts with an HTTPInput node:

curl -X POST -i 'http://ace-kafka-prod-consume-http-ace.apps.acel3icp4i.os.fyre.ibm.com/publishtokafka' --data 'This is the test message'


In the Event Streams instance, go to the Topics panel, click on the Messages tab, and view the published message.

How do I configure HTTPS connectivity for my integration flows in CP4I?

Configuring Keystore

The following are the main steps in setting up the keystore for your HTTPS connection.

  1. Download the configuration package config.tar.gz when you are deploying a BAR file to your integration server using the ACE dashboard
  2. Extract the config package into a separate directory
  3. Copy your keystore key (which could be your .pem file) as keystore-mykey.key. Here, mykey is the alias name; you can change it to any other name of your choice
  4. Copy your keystore certificate (which could be your .crt file) as keystore-mykey.crt. You should use the same alias name for your key and certificate files
  5. Add the keystore entries to serverconf.yaml, as sketched below

    Note: Keep the literal parts (such as the setdbparms:: prefix) exactly as they are; substitute only your own alias, paths, and password
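    A sketch of the entries (the path, port, and property names here are assumptions based on the HTTPSConnector section of server.conf.yaml and the listener port shown in step 10; check them against the template in your own serverconf.yaml):

    ResourceManagers:
      HTTPSConnector:
        ListenerPort: 7843
        KeystoreFile: '/home/aceuser/ace-server/keystore.jks'
        KeystorePassword: 'setdbparms::keystorepass'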

  6. Add the line below in setdbparms.txt

    setdbparms::keystorepass dummyuser <password>

    Note: Ensure that the name after setdbparms:: (keystorepass in this example) matches the keystore password reference specified in the serverconf.yaml file

  7. Create the Kubernetes secret with the above information. This secret will then be used while deploying the integration server
    $ ./generateSecrets.sh myacesecret
  8. When deploying the integration server using the helm chart, specify the following details in the chart configuration:

    List of key aliases for the keystore: specify the key alias you defined in step 3 above. In this case it is mykey

    The name of the secret to create or to use that contains the server configuration: this is the secret name that we created using the generateSecrets.sh script. In this case it is myacesecret

  9. Once the integration server pod is deployed, you can check the keystore and certificate files by opening a shell in the running pod/container.
    $ oc rsh <pod name> -n <namespace>

    Then navigate to the integration server's work directory (/home/aceuser/ace-server) and verify the generated keystore and certificate files

  10. The event log will show a BIP message if the HTTP listener has started successfully on your https connections.
    $ cd /home/aceuser/ace-server/log
    $ cat integration_server.testhttpsflow.events.txt

    2020-04-23 22:03:22.945238Z: [Thread 502] (Msg 1/1) BIP3132I: The HTTP Listener has started listening on port '7843' for 'https' connections.

Configuring Truststore

You can follow a similar procedure to the keystore steps described above to define and configure truststore certificates for an integration server deployed in a CP4I environment.

Development and Deployment

Can I deploy more than one BAR file to an integration server?

The ACE dashboard (via the platform navigator) allows you to deploy only one BAR file per container. However, you may be able to deploy multiple BAR files by building a CI/CD pipeline. It is advisable, however, to follow the Agile Integration guidelines on how you group your applications/BAR files as a unit of deployment per container, instead of deploying several applications to a single integration server. This is described in the article:
Grouping integrations in a containerized environment

How do I implement a CI/CD pipeline for ACE BAR deployments in CP4I?

Refer to section 7.5 in the IBM Redbook on Agile Integration at:
http://www.redbooks.ibm.com/abstracts/sg248452.html?Open

I am using multi-instance integration nodes. How can I migrate them to CP4I?

At the application level, it is the pods that provide high availability. Kubernetes allows you to run multiple pod replicas (redundancy), and if one of the pods or containers fails, Kubernetes spins up a replacement pod. This way you can ensure the availability of your services at all times.

A detailed discussion of the high availability aspects is available in Section 7.7 of the IBM Redbook:
http://www.redbooks.ibm.com/abstracts/sg248452.html?Open

I am not able to change the User Defined Properties (UDPs) of my deployed flows

User Defined Properties have traditionally been used to control the runtime behavior of message flows. In a traditional deployment, you could update a UDP value on a deployed message flow. However, when you deploy the flow in a container on CP4I, any code or configuration change should create a new container that replaces the existing one. If you changed a UDP in place while running multiple replicas of the ACE container, the change would not apply to all of them, leaving the replicas in an inconsistent state; the change would also not persist when a container is recreated for any reason.
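A container-friendly alternative is to bake the required UDP values into the BAR file at build time in your CI/CD pipeline, so every replica starts with the same configuration. A sketch using the standard BAR override tool, where the application, flow, and property names are hypothetical:

# Override the UDP 'targetUrl' on message flow MyFlow, writing a new BAR for deployment
mqsiapplybaroverride -b MyApp.bar -o MyApp-prod.bar -k MyApp -m MyFlow#targetUrl=https://prod.example.com/api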

How do I configure Aggregation nodes (or in general EDA nodes) in the OpenShift environment?

Starting from Fix Pack 11.0.0.7, it is possible to use the EDA message flow nodes without a local queue manager, and instead direct them to a remote queue manager over an MQ client connection. To get the maximum value from moving to a container environment, it is beneficial to be able to scale ACE containers independently of MQ queue manager containers.

For more information, refer to this blog post:
Explore the new features in App Connect Enterprise version 11.0.0.7
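The remote queue manager is identified through the server.conf.yaml file. A sketch, assuming an MQEndpoint policy named MyMQEndpoint in a policy project named MyPolicies that points at your remote queue manager:

remoteDefaultQueueManager: '{MyPolicies}:MyMQEndpoint'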

Migration and Upgrades

Can I migrate from OCP 3.11 to OCP 4.2?

While ACE v11 is supported on OpenShift 4.2, there is no migration path from OpenShift 3.11; you will need to start from scratch. You can export the BAR files and re-import them, but there is no in-place migration, as the OCP platform itself cannot be upgraded from 3.11 to 4.2.

How can I update my CP4I with an ACE certified container image?

Refer to section 10, Upgrading ACE certified container image in CP4I, in the following blog article:
Modernizing Integration – Migration from IIB to App Connect running on IBM Cloud Pak for Integration (CP4I)

Is there a migration guide to move existing IIB v10 Apps to ACE running in CP4I containers?

How can I rollback my deployment changes?

A Helm release can be rolled back by going to the release in the Cloud Pak foundation console and clicking ‘Rollback’.

This opens a pop-up window where you can select the revision to which you want to roll back. Select the revision and click Rollback.
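The same rollback can also be performed from the CLI (a sketch; the release name and revision number are placeholders, and --tls is typically required when talking to the Cloud Pak's Helm v2 Tiller):

helm rollback my-ace-release 1 --tls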

Can I change the image type when upgrading the Helm release? For example, if it was previously ACE only and I now need to upgrade the integration server with a flow that uses an MQ client or server, or vice versa?

Yes. It is possible that the initial release of your integrations did not use MQ and was built with an ACE-only image, but new requirements now call for connecting to a queue manager. When you upgrade the helm release with changes to your integration artifacts, you can choose the ACE with MQ client or ACE with MQ server image as appropriate. Similarly, you may change the image back to ACE only during the helm upgrade if MQ connectivity is no longer required.

Troubleshooting and Serviceability

Where can I see the logs for my ACE integration server pod?

Get the list of your ACE pods using:

$ oc get pods

Then, for each ACE pod that you want the logs for, run the following command:

$ oc logs <ace pod>

You can also view the logs in the Kibana dashboard

How do I enable trace for my ACE integration server running in a container in CP4I?

Edit the deployment of your integration server:

$ kubectl edit deployment <integration server deployment>

Find the environment variable entry:

- name: USE_QMGR
  value: "true"

Add the following entries below it:

- name: MQSI_FORCE_DEBUG_TRACING
  value: "true"
- name: MQSI_PLAINTEXT_TRACE
  value: "true"
- name: MQSI_FORCE_TRACE_SIZE
  value: "2G"

Save the change.

How do I copy ACE integration server pod logs/traces out of the container?

oc rsync <pod>:<absolute path of the log or trace directory inside the container> <local directory>

In ACE containers, traces can typically be found under:

/home/aceuser/ace-server/config/common/log/

The event log is available under /home/aceuser/ace-server/log
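For example, to copy the trace directory to a local directory (the pod name is a placeholder; note that oc rsync operates on directories):

mkdir -p ace-traces
oc rsync my-ace-pod-5f7c9d8b4-abcde:/home/aceuser/ace-server/config/common/log/ ./ace-traces/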

How do I enable Logging for ACE Dashboard?

To enable debug-level logging for the ACE dashboard, run the following command:

$ kubectl edit deployment acedash-ibm-ace-dashboard-icp4i-prod

Below the CIP_SERVICES_ENDPOINT environment variable, add the following entry:

- name: ACE_DASHBOARD_LOG_LEVEL
  value: "debug"

Save the changes.

How do I view ACE dashboard logs?

Run the following command:

$ kubectl logs <dashboard pod> -c <dashboard container>

(Note: there are three dashboard containers: content server, control UI, and content server init.)

How do I investigate performance issues for integration flows running in containers on CP4I?

These are some of the options that you can consider to monitor the performance of your integration flows.

  1. Operations dashboard: for an end-to-end trace/view of your transactions (across multiple components within CP4I)
    https://www.ibm.com/support/knowledgecenter/SSGT7J_19.4/op_dashboard.html
  2. Message flow accounting and statistics
    Enable snapshot accounting and statistics in CSV format via the configuration secret for the integration server (i.e. by modifying the server.conf.yaml file). This can be done by modifying your ACE config secret and then doing a helm upgrade with the new config secret, as sketched below.
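    For example, a typical snapshot statistics stanza in server.conf.yaml (standard ACE v11 settings; adjust the data levels to your needs):

    Statistics:
      Snapshot:
        publicationOn: 'active'
        outputFormat: 'csv'
        nodeDataLevel: 'basic'
        threadDataLevel: 'none'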

    Once you have recreated the performance issue, copy the statistics out of the ACE pod using:

    oc rsync <pod>:<absolute path of the stats directory inside the container> <local directory>

    The statistics files are created under the following directory path in the ACE container:
    /home/aceuser/ace-server/config/common/stats/

  3. Check how many replicas are allocated for your integration server and whether you need to increase them further to cater to the load.
  4. Check Pod Resource usage using the Grafana dashboard and see if CPU usage is nearing its allocated limits.

    OCP Console → Administrator → Monitoring → Dashboards
    or
    CP4I platform navigator → Monitoring

    Select the namespace and pod from the dropdown lists

Are ACE certified container images supported on CP4I or the OpenShift Kubernetes platform?

Yes, the ACE certified container images are fully supported on OpenShift, including the helm charts.

Are ACE certified container (ACEcc) images/dockerfiles supported on third-party Kubernetes?

Use of ACEcc images is supported on any Kubernetes environment that uses one of the following container runtimes: Docker, CRI-O, or containerd. The helm charts are not supported on non-OpenShift environments, nor is any other custom-written configuration used to present the container image to a non-OpenShift Kubernetes system (for example, operators and helm charts). Please refer to the SOE (https://www.ibm.com/support/pages/node/609043) for the current support statement.

How do I stop my ACE integration server or integration flows running in CP4I?

In the container world, you typically do not control the integration server’s lifecycle directly; rather, you control it at the pod/deployment level. In other words, if you do not need the integration server for some time, you would typically remove the deployment/helm release, and when you need the integration flows again, you would deploy them back via a new helm release.
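A sketch with the Helm v2 CLI (the release name is a placeholder; --purge removes the release history as well):

helm delete my-ace-release --purge --tls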
