One of the challenges of a heavily centralized integration runtime (an Integration Node with several Integration Servers underneath it, and each of those servers running several message flow applications) was that it was hard to make changes safely and quickly without affecting other integrations. Deploying a change could destabilize unrelated interfaces running on the centralized ESB.

In the sections below, we describe how to apply a fixpack to your base ACE image, and how to apply an i-fix for a specific issue you may have encountered while running one of your integration flows. The objective is to apply each fix only to the affected integration flows.

A) Upgrading the Helm chart for installing an ACE fixpack

Suppose you are running your integration servers at the GA level or an initial fixpack level and want to move up to the most recent available fixpack.
You can do this by performing the following steps:

1) Create a new Docker image with the desired fixpack level

To create a Docker image with an updated fixpack, you can take the sample Dockerfile available at
https://github.com/ot4i/ace-docker/blob/master/11.0.0.0/ace/ubuntu-1604/base/Dockerfile
and make the necessary changes as described below.

  1. Update the Dockerfile with the location of the fixpack. It could be downloaded and stored locally, or you may prefer to fetch it directly from IBM Fix Central.
  2. In this demonstration we have downloaded the fixpack locally and keep it in the same directory as the Dockerfile. We make the following changes in the Dockerfile:

    FROM ubuntu:16.04
    WORKDIR /opt/ibm
    # Copy the fixpack archive to a temporary directory
    COPY ace-11.0.0.1.tar.gz /tmp
    RUN apt update && apt -y install --no-install-recommends curl rsyslog sudo \
      && tar -xf /tmp/ace-11.0.0.1.tar.gz --directory /opt/ibm  \
      && /opt/ibm/ace-11.0.0.1/ace make registry global accept license silently \
      && apt remove -y curl \
      && rm -rf /var/lib/apt/lists/*

  3. Build the Docker image:

     $ docker build -t ace11fp01:11.0.0.1 .

  4. Tag the image so it can be pushed to the ICP image registry:

     $ docker tag ace11fp01:11.0.0.1 mycluster.icp:8500/default/ace11fp01:11.0.0.1

2) Push the image to the ICP repository

# docker login mycluster.icp:8500
# docker push mycluster.icp:8500/default/ace11fp01:11.0.0.1
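The tag and push commands above follow the ICP private registry's naming scheme, `<registry-host>:<port>/<namespace>/<image>:<tag>`. A minimal sketch of composing that reference, using this article's example values for the registry host, namespace, image name, and tag:

```shell
#!/bin/sh
# Compose the fully qualified image reference expected by the ICP private
# registry: <registry-host>:<port>/<namespace>/<image>:<tag>
REGISTRY="mycluster.icp:8500"   # ICP private registry endpoint
NAMESPACE="default"             # ICP namespace the image is pushed into
IMAGE="ace11fp01"
TAG="11.0.0.1"

TARGET="$REGISTRY/$NAMESPACE/$IMAGE:$TAG"
echo "$TARGET"

# With Docker available, the retag-and-push then becomes:
#   docker tag "$IMAGE:$TAG" "$TARGET"
#   docker push "$TARGET"
```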

3) Create the Helm Chart

  1. Update the values.yaml with the new image name and tag:

     image:
       # repository is the container repository to use, which defaults to the IIB docker registry hub image
       repository: mycluster.icp:8500/default/ace11fp01
       # tag is the tag to use for the container repository
       tag: 11.0.0.1

  2. Update the Chart.yaml with the new version number and description:

     name: ibm-ace-prod-fp01
     version: 1.0.1
     description: App Connect Enterprise Server FP01

  3. Package the Helm chart and load it into the ICP catalog:

     # helm package ibm-ace-prod-fp01
     # bx pr login -a https://mycluster.icp:8443 -u admin -p admin -c id-mycluster-account --skip-ssl-validation
     # bx pr load-helm-chart --archive ibm-ace-prod-fp01-1.0.1.tgz --clustername mycluster.icp

  4. Navigate to Catalog → Helm Charts → ibm-ace-prod-fp01. You will see the Helm chart with the updated version, 1.0.1.
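If you prefer the command line over the catalog UI, the same check can be done with the Helm CLI. This is a sketch; it assumes the ICP internal chart repository is configured in your Helm 2 client under the name local-charts (ICP's default repository name):

```shell
# Refresh the local chart cache, then list every available version of the chart.
# The -l flag lists all versions, so the new 1.0.1 chart should appear
# alongside the original 1.0.0.
helm repo update
helm search local-charts/ibm-ace-prod-fp01 -l
```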

4) Upgrade the Helm release or delete the old release and deploy the new one.

You can upgrade the Helm release by navigating to Menu → Workloads → Helm Releases. Locate your Helm release and click the Actions menu.



A pop-up dialog box will appear where you can select which version of your Helm chart you want to upgrade to. Alternatively, you can delete the existing Helm release and create a new one from the desired version of the Helm chart.

In the next section we will see detailed steps on how you can upgrade the currently running image. The upgrade operation spawns a new Pod from the new image and then terminates the old Pod.
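The Pod replacement described above can also be watched from the command line. This is a sketch; the release label and the release name acefp01 are assumptions based on the labels the ACE chart applies, and the Deployment name shown is hypothetical:

```shell
# Watch Pods belonging to the release being upgraded; you should see the new
# Pod start, become Ready, and the old Pod terminate.
kubectl get pods -l release=acefp01 --watch

# Or check the rollout status of the Deployment created by the chart
# (deployment name is hypothetical; list your deployments to find yours):
kubectl rollout status deployment/acefp01-ibm-ace-prod
```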

B) Applying i-fixes to a specific deployment

Assume you are running your containers on ICP at the ACE V11 FP01 level. Suppose you encounter a product bug in one of your integration servers and receive an i-fix from IBM Support. You now want to apply the fix to the affected integration server without disturbing the other integration servers that may be running in your ICP environment.

The following steps describe how to apply an i-fix.

1) Create a Dockerfile with the commands to install i-fix.

FROM ace11fp01:11.0.0.1

WORKDIR /opt/ibm

# Copy the i-fix archive to a temporary directory
COPY 11.0.0.1-ACE-LinuxX64-TFP12776.tar.gz /tmp
USER root
WORKDIR /home/aceuser
RUN  tar -xf /tmp/11.0.0.1-ACE-LinuxX64-TFP12776.tar.gz --directory /home/aceuser
RUN /home/aceuser/mqsifixinst.sh  /opt/ibm/ace-11.0.0.1/ install

USER aceuser
WORKDIR /home/aceuser
ENV BAR1=LargeXMLProcessing.bar

# Create the work directory; need to use bash to make the profile work
RUN bash -c 'mqsicreateworkdir /home/aceuser/ace-server1'

# Unpack the BAR file into the work directory
RUN bash -c 'mqsibar -w /home/aceuser/ace-server1 -a /tmp/$BAR1 -c'
# Set entrypoint to run management script
CMD ["/bin/bash", "-c", "/usr/local/bin/ace_license_check.sh && mqsiservice -v && IntegrationServer -w /home/aceuser/ace-server1"]


Note: We create the new Docker image from the base image of ACE V11 FP01. Therefore, we specify

FROM ace11fp01:11.0.0.1

in the Dockerfile. This assumes the image is present on your system from the activity performed in section A; otherwise you may have to provide the repository location for Docker to pull the image from.
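A quick way to make sure the base image is available before building is the following sketch, which falls back to pulling from the ICP registry and retagging if the local copy from section A is missing:

```shell
# Use the local image if present; otherwise pull it from the ICP registry
# and retag it with the short name the Dockerfile's FROM line expects.
if ! docker image inspect ace11fp01:11.0.0.1 >/dev/null 2>&1; then
  docker pull mycluster.icp:8500/default/ace11fp01:11.0.0.1
  docker tag mycluster.icp:8500/default/ace11fp01:11.0.0.1 ace11fp01:11.0.0.1
fi
```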

2) Create a new Docker image and push it to the ICP image repository

# docker build -t acefp1fix:1.0 .
# docker login mycluster.icp:8500
# docker push mycluster.icp:8500/default/acefp1fix:1.0
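Before moving on to the Helm chart, you can sanity-check the new image locally. mqsiservice -v, the same command the image's entrypoint runs at startup, prints the installed product level and applied fixes. A sketch, assuming Docker is available on the build machine:

```shell
# Run the version check inside a throwaway container built from the new image.
docker run --rm acefp1fix:1.0 bash -c 'mqsiservice -v'
# Look for the i-fix identifier (11.0.0.1-ACE-LinuxX64-TFP12776) in the output.
```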

3) Create the Helm Chart

  1. Update the values.yaml with the new image name and tag:

     image:
       # repository is the container repository to use, which defaults to the IIB docker registry hub image
       repository: mycluster.icp:8500/default/acefp1fix
       # tag is the tag to use for the container repository
       tag: 1.0

  2. Update the Chart.yaml with the new version number and a description of the i-fix:

     name: ibm-ace-prod-fp01
     version: 1.0.2
     description: App Connect Enterprise Server FP01 with i-fix 11.0.0.1-ACE-LinuxX64-TFP12776

  3. Package the Helm chart and load it into the ICP catalog:

     # helm package ibm-ace-prod-fp01
     # bx pr login -a https://mycluster.icp:8443 -u admin -p admin -c id-mycluster-account --skip-ssl-validation
     # bx pr load-helm-chart --archive ibm-ace-prod-fp01-1.0.2.tgz --clustername mycluster.icp

  4. Navigate to Catalog → Helm Charts → ibm-ace-prod-fp01. You will see the Helm chart with the updated version, 1.0.2.

4) Upgrade the version of the running Helm release

At this point our Helm chart ibm-ace-prod-fp01 has releases deployed at two different versions: 1.0.0, the initial base release, and 1.0.1, deployed as a result of our exercise in section A. Now that we have published v1.0.2, it shows as the most recent available version.


Since the i-fix was built on top of v1.0.1, we upgrade only the relevant Helm release, i.e. acefp01.
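The same upgrade can be performed from the Helm 2 CLI instead of the ICP console. A sketch; local-charts as the ICP repository name and acefp01 as the release name are assumptions from this article's examples, and ICP's Tiller typically requires the --tls flag:

```shell
# Upgrade the existing release to chart version 1.0.2, keeping the values
# that were set when the release was first deployed.
helm upgrade acefp01 local-charts/ibm-ace-prod-fp01 \
  --version 1.0.2 \
  --reuse-values \
  --tls
```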



A pop-up dialog box will appear where you can select which version of your Helm chart you want to upgrade to.


From the drop-down list, select the desired version. We select v1.0.2 in this demonstration because that is the version containing our i-fix. You can also set other parameters, such as license acceptance and any other parameters exposed by the Helm chart.



Click the Upgrade button to initiate deployment of the new Pod and termination of the old Pod. You can observe the intermediate status while the rollout progresses.


Once the new Pod is fully up and running, the Helm release status changes to 'Up to date'.


You can also confirm that your Pod is now running the code that contains the i-fix by checking the deployment logs.
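From the command line, that check might look like the following sketch; the release label selector and grepping for the i-fix identifier are assumptions based on this article's examples:

```shell
# Find the Pod created by the upgraded release and inspect its startup logs.
POD=$(kubectl get pods -l release=acefp01 -o jsonpath='{.items[0].metadata.name}')
kubectl logs "$POD" | grep -i "TFP12776"
# mqsiservice -v runs at container start, so the applied i-fix should be
# reported near the top of the log output.
```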

Conclusion

In a cattle-like environment, any adjustment (changing an integration, adding a new one, changing property values, adding product fixpacks, applying a specific i-fix, and so on) is made by creating a new container image, starting a new instance based on it, and shutting down the current container. The reason is that any live change to a running server makes it different from the image it was built from: it changes the runtime state, which would mean the container orchestration engine could no longer re-create containers at will for failover and scaling.
