This article has moved due to the impending closure of this site and can now be found here: An approach to build DevOps pipeline for ACE on Cloud Pak for Integration



We previously published a recipe on developerWorks to automate the build and deployment of ACE projects on Cloud Pak for Integration. In this blog, we discuss a pragmatic approach that enterprises can use to quickly configure a pipeline for the build and deployment of ACE integration flows on IBM Cloud Pak for Integration.

Link to the developerWorks recipe: Building CI-CD Pipeline for IBM App Connect Enterprise on Cloud Pak for Integration

In this blog, we will demonstrate the approach with an example so that it is closer to a real-world experience.

Scenario:
A customer wants to automate the build and release process for IBM App Connect Enterprise to deploy on CP4I. They have considered using:

  • Git as source control
  • Nexus to store versioned BAR files
  • Jenkins as the CI tool

The diagram below depicts their target state.

Note that you might have multiple logical ACE environments on the same CP4I cluster isolated by namespaces, or the different ACE environments could be on different CP4I clusters spread across private or public clouds.

As you can see, this diagram depicts a basic DevOps flow. Lots of interesting things can be done on top of this, like:

  • Implement continuous testing
  • Automatically roll back to the previous successful release if tests fail
  • Automatically create an issue in a bug-tracking tool, such as JIRA, for failures and assign it to a developer
  • Further enhance it to DevSecOps by introducing security tests, and so on

One of the major benefits of moving to containers is that it eliminates compatibility issues: the 'it works on my machine' problem goes away. You can test your application locally with the same image that you are going to use in live environments. You would typically pull the respective ACE image from the container registry and deploy the app on your local workstation. You can also attach a debugger to your flows running in containers on your local workstation. You can follow the GitHub documentation below to deploy ACE containers on your local workstation for development, testing and debugging.

https://github.com/ot4i/ace-docker
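
As a sketch of that local workflow, the commands below pull a prebuilt ACE image and start an Integration Server on a workstation. The image name, server name and ports are assumptions based on the ot4i/ace-docker README; adjust them to match the image you actually use.

```shell
# Pull a prebuilt ACE image and run an Integration Server locally.
# Port 7600 serves the admin web UI and port 7800 the HTTP listener
# (per the ot4i/ace-docker documentation).
docker pull ibmcom/ace:latest
docker run --name aceserver \
  -p 7600:7600 -p 7800:7800 \
  -e LICENSE=accept \
  -e ACE_SERVER_NAME=ACESERVER \
  ibmcom/ace:latest
```

Once the container is running, you can deploy BAR files to it and attach the toolkit debugger exactly as you would against a local Integration Server.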

Let us see how this basic DevOps flow can be implemented. In most scenarios:

  1. The developer checks in the code after testing it locally with the same image
  2. A manual or automated trigger starts the Jenkins build job
  3. The Jenkins build job compiles the ACE application/service, creates versioned BAR files for the respective environments by taking configuration values from each environment's properties file, and tags the source code with the version number
  4. The Jenkins build job stores these versioned BAR files in a repository, here Nexus
  5. A manual or automated trigger starts the Jenkins deployment job for the DEV environment
  6. The Jenkins DEV deployment job pulls the BAR file(s) from the repository (here, Nexus), builds an image with the BAR file(s) baked into the base ACE image and pushes the image to the OCP registry of CP4I
  7. The Jenkins DEV deployment job creates a configuration secret with the required configurations for the Integration Server, performs the deployment from the created DEV image and configures other objects such as the Horizontal Pod Autoscaler policy, route, etc.
  8. Upon successful testing in the DEV environment, a trigger starts the Jenkins QA deployment job, and similar steps (6 and 7) are performed for the QA environment
  9. After validation and all required testing, similar steps (6 and 7) are performed for the Production environment
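
The build step above (compile, override per environment, tag) can be sketched with the standard ACE command-line tools. The workspace path, application name and use of the Jenkins BUILD_NUMBER as the version are assumptions for illustration:

```shell
# Compile the ACE application into a BAR file (mqsicreatebar ships with
# the ACE toolkit installed on the build server).
mqsicreatebar -data /var/workspace -b Test.App.bar -a Test.App -cleanBuild

# Produce one overridden, versioned BAR per environment from its
# properties file, so each environment gets its own configuration.
for env in DEV QA; do
  mqsiapplybaroverride -b Test.App.bar \
    -o Test.App_${env}_${BUILD_NUMBER}.bar \
    -p properties/${env}.properties
done

# Tag the source with the same version so BAR files and code stay traceable.
git tag -a "v${BUILD_NUMBER}" -m "Build ${BUILD_NUMBER}"
git push origin "v${BUILD_NUMBER}"
```

The versioned BAR files would then be zipped and uploaded to Nexus, as the pipeline script later in this article does.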

You could create the image(s) for your ACE application during the build process itself; however, building the image in a separate deployment job has the advantages described below:

  • You have versioned BAR files for the respective environments and the source code is tagged with that version, so you can always trace the source code version associated with the respective BAR files.
  • You may include BAR files from one or more build jobs and deploy them in one Integration Server.
  • You have control over the deployment, i.e. you can make it manual or automated. In many scenarios, you may want to test the applications manually before deploying to the target environment.

For scenarios where more than one ACE application/service needs to be deployed in the same Integration Server, the Jenkins deployment job can pull the respective BAR files from the repository (here, Nexus) and bake them into the ACE image.

The diagram below depicts two ACE applications being built and deployed in an Integration Server on CP4I.

1. Configure Build Server

Ensure that you have configured the build server by following steps 1 to 5 of the recipe mentioned below:

Building CI-CD Pipeline for IBM App Connect Enterprise on Cloud Pak for Integration

Note that the instructions in this blog do not use the OpenShift and Docker plugins, so you may skip installing these plugins in Jenkins; however, the docker and oc clients must be installed on the build server.

2. Configure ACE project and Build Job

For this demonstration, we will create two additional folders inside the ACE project:

Properties folder: This folder contains properties files for the different environments, e.g. DEV.properties and QA.properties. These files hold the environment-specific parameters used by the mqsiapplybaroverride command to override node properties and UDPs.
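
As an illustration, a DEV.properties file might look like the following. The flow, node and property names here are invented; the name=value form is the one mqsiapplybaroverride expects:

```properties
# DEV.properties - overrides applied via mqsiapplybaroverride
# Node property override: <flowName>#<nodeName>.<propertyName>=<value>
Test.App#HTTP Input.URLSpecifier=/ping
# User-defined property override: <flowName>#<udpName>=<value>
Test.App#backendUrl=http://dev-backend.example.com
```

QA.properties would carry the same keys with the QA environment's values.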

Build folder: This folder contains the Jenkins pipeline script for the ACE project, so the build script for the ACE project is also checked in to source control.

For this demonstration, I have created two ACE projects which are checked in at the locations below:

Test.App: https://github.com/awasthan/Test.App.git

Sum_Service: https://github.com/awasthan/Sum_Service.git

Both these projects have two properties files (DEV.properties and QA.properties) inside the ‘properties’ folder. Let us examine the ‘jenkinsfile’ script for the Test.App project.

Now let us create the Jenkins build job for this Test.App project.

Create a Jenkins pipeline project with the name ‘Test.App’.

Specify to read the pipeline script from SCM and enter the GitHub project location, the credential to access GitHub and the path of the jenkinsfile in the project.

Click on Save.

Now build the project so that it uploads the BAR files to the repository and tags the code. The snippet below shows a successful build of the Test.App project.

Now let us check the Nexus repository to see whether the BAR files for this version (version 10 here) have been uploaded. The snapshot below shows the zip file containing the version 10 BAR files.

Now let us check GitHub to see whether the Test.App source code has been tagged with version 10. The snippet below shows the tag.

Similarly, configure the second ACE project (Sum_Service).

3. Configure Deployment Job

Now configure the deployment jobs for each environment. You could have a single parameterized job that deploys to different CP4I environments; however, for security and compliance reasons, you would probably have separate deployment jobs for the respective environments.

Before the Integration Server starts, the container checks the folder /home/aceuser/initial-config for configuration.
Refer to https://github.com/ot4i/ace-docker to understand how ACE images can be built dynamically.
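
For orientation, a typical initial-config layout might look like the tree below. The subfolder names follow the ot4i/ace-docker documentation, but check the version of that repository you are using, as the supported folders vary; the file contents are illustrative:

```
/home/aceuser/initial-config/
├── serverconf/
│   └── server.conf.yaml      # Integration Server property overrides
├── setdbparms/
│   └── setdbparms.txt        # security credentials (mqsisetdbparms format)
├── truststore/
│   └── truststore.jks        # truststore for TLS connections
└── bars/
    └── MyApp.bar             # BAR files to deploy on start-up
```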

We can have a deployment project containing the configuration information, Dockerfile and Jenkinsfile, as depicted in the diagram below.

I have a sample deployment job that contains these two files:

  • Dockerfile: This file has instructions to build the image baked with BAR files
  • Jenkinsfile: This file has the pipeline script to deploy to the CP4I environment, create the route, the Horizontal Pod Autoscaler policy, etc.

ACE helm chart version 3.1.0 (CP4I 2020.1.1) creates the route as well, so you may use that route unless you prefer a different hostname.
To apply other configurations to the Integration Server, you must pre-create the configuration secret containing the required configurations. A generic template for the configuration parameters and a script to generate the secret can be downloaded from the Platform Navigator while doing a manual deployment.

The sample deployment job used in this example is checked-in to Github:
https://github.com/awasthan/Test.App.CP4I.Deployment.git

Another sample deployment job, with configuration parameters to set up connectivity with an Event Streams instance, is checked into GitHub as well. This deployment job shows how to apply other required configurations to the Integration Server in addition to baking the BAR files into the base image.
https://github.com/awasthan/Kafka_APIs_CP4I_prod5_Deployment.git
The configuration folder in this project contains parameters in 'server.conf.yaml', 'truststorePassword.txt' and 'setdbparms.txt'. The other files are left blank because those parameters are not used in this Integration Server configuration. 'server.conf.yaml' contains the fields that need to be overridden, 'setdbparms.txt' contains the security credentials and 'truststorePassword.txt' contains the password for the truststore. Note that in real life, sensitive information such as passwords would not be stored in source control; you would put it in a protected place and have Jenkins pull it dynamically while executing the pipeline.
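
For illustration only, these files might look like the fragment below. The resource name, credentials and override fields are invented placeholders, not the values used in the sample project:

```yaml
# server.conf.yaml - only the fields to override, e.g. pointing the
# server at a mounted truststore (field names per ACE server.conf.yaml)
forceServerHTTPS: false
BrokerRegistry:
  brokerTruststoreFile: '/home/aceuser/ace-server/truststore.jks'

# setdbparms.txt (separate file) - one credential per line:
#   <resourceName> <userId> <password>
#   e.g.  kafka::KAFKA es-user es-password
# truststorePassword.txt (separate file) - a single line holding the
# truststore password.
```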
This deployment job deploys BAR files from the two projects below:
https://github.com/awasthan/Publisher_api.git
https://github.com/awasthan/Consumer_API.git

Let us examine the Dockerfile.
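
The Dockerfile from the sample project is not reproduced here, but a minimal sketch of the 'bake BAR files into the base image' step might look like the following. The base image name, tag and target path are assumptions following ot4i/ace-docker conventions:

```dockerfile
# Start from the ACE-only base image already pushed to the OCP registry
FROM image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-server-prod:11.0.0.6.1

# Bake the environment-specific BAR files into the image so the
# Integration Server deploys them on start-up
COPY *.bar /home/aceuser/initial-config/bars/
```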

You can add other configuration parameters as well, for example for SSL configuration. Follow the documentation at https://github.com/ot4i/ace-docker

Let us examine the parts of the jenkinsfile. This pipeline has been parameterized to accept some parameters dynamically. It performs the following tasks:

  • Checks out the deployment project
  • Pulls the specified versions (ACE_APP1_VERSION and ACE_APP2_VERSION) of the BAR files from the Nexus repository for ACE_APP1 and ACE_APP2
  • Logs in to the OCP registry of the specified CP4I instance
  • Builds the docker image using the Dockerfile and the DEV BAR files and pushes it to the OCP registry
  • Performs the helm deployment. Notice the name of the ACE image (image.aceonly) in the helm install command. Since we are using an ACE-only image, the value of 'imageType' is specified as 'ace'. If you are using ACE with MQ client or ACE with MQ server, the value of 'imageType' would be 'acemqclient' or 'acemqserver' respectively. Make sure to specify the image parameter based on the imageType
  • If a new helm release is being installed, the route and Horizontal Pod Autoscaler policy (HPA) are created
  • If it is an upgrade to an existing helm release, only the helm release is upgraded; the route and HPA policy are left unchanged
  • Performs the clean-up
timestamps {

node () {

wrap([$class: 'Xvfb']) {
	stage ('Test.App.CP4I.Deployment - Checkout') {
 	 checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'jenkins-github', url: 'https://github.com/awasthan/Test.App.CP4I.Deployment.git']]]) 
	}
	stage ('Test.App.CP4I.Deployment - Build') {
 	
artifactResolver artifacts: [artifact(artifactId: '${ACE_APP1}', extension: 'zip', groupId: 'com.ibm.esb', version: '${ACE_APP1_VERSION}'), artifact(artifactId: '${ACE_APP2}', extension: 'zip', groupId: 'com.ibm.esb', version: '${ACE_APP2_VERSION}')], targetDirectory: '/var/icp-builds/${ACE_APP1}'
sh label: '', script: '''#!/bin/sh
# Fetch the versioned BAR archives pulled from Nexus and unpack them
cd /var/icp-builds/${ACE_APP1}
unzip ${ACE_APP1}-${ACE_APP1_VERSION}.zip
chmod 777 *
cp ${ACE_APP1}_DEV_${ACE_APP1_VERSION}.bar /var/lib/jenkins/jobs/${JOB_NAME}/workspace
unzip ${ACE_APP2}-${ACE_APP2_VERSION}.zip
chmod 777 *
cp ${ACE_APP2}_DEV_${ACE_APP2_VERSION}.bar /var/lib/jenkins/jobs/${JOB_NAME}/workspace
cd ${WORKSPACE}
# Log in to OCP and its internal image registry, then build and push the image
oc login ${OCP_API_URL} -u admin -p ${REGISTRY_PASSWORD} --insecure-skip-tls-verify
docker login ${OpenshiftRegistryURL} -u $(oc whoami) -p $(oc whoami -t)
docker build -t ${imagename}:${BUILD_NUMBER} .
docker tag ${imagename}:${BUILD_NUMBER} ${targetrepo}/${imagename}:${tag}-amd64
docker push ${targetrepo}/${imagename}:${tag}-amd64
source /etc/profile
cloudctl login -a ${ICP_CONSOLE_URL} -n ${Namespace} -u admin -p ${REGISTRY_PASSWORD} --skip-ssl-validation
cd /opt/certs
helm init --client-only
helm repo add local-charts --ca-file ${HELM_CERT_FILE_NAME} ${ICP_CONSOLE_URL}:443/helm-repo/charts
if test ${DeploymentType} = \'install\'; then
helm ${DeploymentType} --name $ReleaseName local-charts/ibm-ace-server-icp4i-prod --version v3.0.0 --namespace ${Namespace} --set imageType=ace --set image.aceonly=image-registry.openshift-image-registry.svc:5000/ace/${imagename}:${tag} --set image.acemqclient=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-mqclient-server-prod:11.0.0.6.1 --set image.acemq=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-mq-server-prod:11.0.0.6.1 --set image.configurator=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-icp-configurator-prod:11.0.0.6.1 --set image.designerflows=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-designer-flows-prod:11.0.0.6.1 --set image.pullSecret=${IMAGE_PULL_SECRET} --set persistence.enabled=false --set persistence.useDynamicProvisioning=false --set integrationServer.name=testace --set aceonly.replicaCount=1 --set license=accept --tls
oc project ace
oc expose svc ${ReleaseName}-ibm-ace-server-icp4i-prod --port=7800
oc autoscale deployment/${ReleaseName}-ibm-ace-server-icp4i-prod --min 1 --max 3 --cpu-percent=25
else
helm ${DeploymentType} $ReleaseName local-charts/ibm-ace-server-icp4i-prod --version v3.0.0 --namespace ${Namespace} --set imageType=ace --set image.aceonly=image-registry.openshift-image-registry.svc:5000/ace/${imagename}:${tag} --set image.acemqclient=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-mqclient-server-prod:11.0.0.6.1 --set image.acemq=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-mq-server-prod:11.0.0.6.1 --set image.configurator=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-icp-configurator-prod:11.0.0.6.1 --set image.designerflows=image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-designer-flows-prod:11.0.0.6.1 --set image.pullSecret=${IMAGE_PULL_SECRET} --set persistence.enabled=false --set persistence.useDynamicProvisioning=false --set integrationServer.name=testace --set aceonly.replicaCount=1 --set license=accept --tls
fi
cd /var/icp-builds
rm -rf ${ACE_APP1}
rm -rf ${ACE_APP2}''' 
	}
}
cleanWs()
}
}

Now let us look at the Jenkins pipeline job for this.
Parameters have been defined for all the placeholders in the jenkinsfile above.

The pipeline script has been configured to pull from GitHub.

Similarly, you can configure the deployment job for the QA environment, and so on.

4. Perform Deployment

Now let us deploy an Integration Server that contains version 15 of Test.App and version 16 of Sum_Service. Also note that the base ACE image pulled in the Dockerfile is version 11.0.0.6.1. In the next step, we will apply fix pack 11.0.0.7 to this Integration Server.

Click on ‘Build with Parameters’ for the deployment job.

Supply the parameter values

Click on Build.
Below is a snippet of the build logs.

Let us take a look at the ACE Dashboard.


Test.App contains a simple ping flow. You can test the flow by hitting the URL in a web browser. The URL will be:
http://<route>/ping

You can get the route by using 'oc get route'. Also notice the HPA.
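
A quick smoke test from the command line might look like this, using the release name and 'ace' namespace from the pipeline earlier; the service and route names follow the chart's naming convention and should be checked against your cluster:

```shell
# Look up the route host created for the release and call the ping flow
ROUTE=$(oc get route ${ReleaseName}-ibm-ace-server-icp4i-prod \
  -n ace -o jsonpath='{.spec.host}')
curl "http://${ROUTE}/ping"

# Confirm the Horizontal Pod Autoscaler policy is in place
oc get hpa -n ace
```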

5. Apply Patch – Upgrading the release

You may need to apply a patch to your Integration Server for various reasons. Some of the reasons could be:

  • Applying a new fix pack for ACE
  • Deploying a new version of an application
  • Deploying another application on the same Integration Server
  • Removing a deployed application
  • Changing configuration parameters

And so on.

In this scenario, let us assume that we want to apply a new ACE fix pack. The current image was built on ACE 11.0.0.6.1, so first of all you need to upload the new IBM-provided image for the ACE fix pack. Follow step 10 in the article below to upload the ACE fix pack image to the OCP registry.

Modernizing Integration – Migration from IIB to App Connect running on IBM Cloud Pak for Integration(CP4I)

If you are pulling the images online from the IBM Entitled Registry, you do not need to do this step.

After this is done, let us update the Dockerfile of the deployment job to pull the new ACE fix pack, in this case 11.0.0.7-r1.
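
The change amounts to updating the FROM instruction of the Dockerfile to the new base image tag (the image name and path shown are illustrative):

```dockerfile
# Pull the 11.0.0.7-r1 fix pack base image instead of 11.0.0.6.1
FROM image-registry.openshift-image-registry.svc:5000/ace/ibm-ace-server-prod:11.0.0.7-r1
```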

Now click on 'Build with Parameters' for the deployment job.

Notice that the 'DeploymentType' is set to 'upgrade' and the name of the existing helm release to upgrade has been provided. Selecting the value 'upgrade' for DeploymentType executes the else branch in the jenkinsfile, which means it will not try to recreate the route and HPA policy.

Now click on Build. Below is a log snippet of the deployment job execution.

Now the Integration Server is running on the ACE 11.0.0.7-r1 fix pack.

6. Rolling back to previous version

As part of the DevOps process, you can implement automatic rollback in case of deployment or test failure. However, in certain scenarios you may be asked to roll back to a previous version for some other reason.
To do that, go to 'Cloud Pak Foundation' → Administer → Helm Releases.

Find the Helm release and click on Rollback.

Select the revision to roll back to and click on Rollback.

You can perform the rollback using the command line as well.

helm rollback <RELEASE> [REVISION] --tls
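
Before rolling back, you can list the release history to find the revision number. The release name below is a placeholder:

```shell
# List the revisions of the release, then roll back to a chosen one
helm history myrelease --tls
helm rollback myrelease 2 --tls
```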

The sample Github projects used in this demo are attached here.

Test.App-master
Sum_Service-master
Test.App.CP4I.Deployment-master

Thanks to Amar Shah for his contribution to this article.

2 comments on "An approach to build DevOps pipeline for ACE on Cloud Pak for Integration"

  1. pankaj gupta April 16, 2020

    one question,

    Instead of creating separate images for each environment, how about passing the ENV value (for various environments) as a parameter. Base image can be customized to look for this parameter and pick the barfile accordingly and deploy it on integration server.

    Do you see any major issues with this approach?

  2. Great article, all the steps worked as explained and I was able to deploy successfully using Jenkins, Nexus to Openshift v3.11 on IBMCloud. Thanks for publishing the article.
