1. Overview

This article provides guidance to teams looking to build a CI/CD pipeline for deploying IBM App Connect Enterprise (ACE) bar files onto an Integration Server running on IBM Cloud/OpenShift/ICP4I. The sample is a simple REST ACE message flow that receives a request, writes the payload to an MQ queue, and sends a reply back.

The scenario:
1) Developer commits the ACE project to the GitHub repository.
2) Jenkins build job pulls the ACE projects from the GitHub repository.
3) Jenkins build job creates bar files and uploads them to the Nexus artifact repository.
4) Jenkins build job tags the GitHub project with the Jenkins build number.
5) Jenkins deploy job:
> downloads the bar files from Nexus
> creates a docker image containing the bar files
> uploads the docker image to the OpenShift Container Registry
> deploys the docker image to create the ACE Integration Server.

The Build Server is a RHEL machine. For this recipe I have installed Jenkins, Nexus, Xvfb, IBM App Connect Enterprise v11.0.0.9, and the CLI tools for IBM Cloud, OpenShift, ICP, and helm on it:

1) GitHub as the source code repository.
2) Jenkins for build & deployment.
3) Nexus to store bar files.
4) Xvfb – required to run the mqsicreatebar command.
5) IBM ACE v11.0.0.9 to run the mqsi commands mqsicreatebar and mqsiapplybaroverride.
6) Command line tools to connect to IBM Cloud, OpenShift, helm, and the IBM Cloud Pak foundation (a quick check follows this list).
2. Component Diagram

3. OpenShift Cluster

> Create an OpenShift v4.3 cluster from IBM Cloud. I created a single-zone cluster with 3 worker nodes, each with 8 vCPUs and 64 GB RAM.

4. IBM CloudPak For Integration – icp4i

Create the IBM Cloud Pak for Integration service.
From IBM Cloud > search for "cloud pak for integration", and select "IBM Cloud Pak for Integration". Follow the wizard and create the ICP4I instance.

5. Command Line Tools

5.1 OpenShift CLI Tools

Install the command line tools on the Build Server,

Open OpenShift Cluster Console > click ? on top right > select “Command Line Tools” option
Follow the CLI install instructions and install them on the Build Server.

5.2 icp-console and icp-proxy URLs

Capture the icp-console and icp-proxy URLs; these will be used later during deployment. A quick way to find them from the command line is shown below.
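
If you already have oc access to the cluster, you can list the routes and filter for the console and proxy entries (the namespace depends on your ICP4I install, so searching all namespaces is the safest option):
$ oc get routes --all-namespaces | grep -E 'icp-console|icp-proxy'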

5.3 cloudctl CLI

You need to install the Cloud Pak command line tools on the Build Server.

Open the “Offering Dashboard” for ICP4I. Navigate to it from IBM Cloud > Schematics > workspace > your-workspace > “Offering Dashboard” on top right.

Click on "Cloud Pak Foundation"

Click on the user icon on top right > Configure Client > Install CLI. Follow the download instructions, and install “IBM Cloudpak CLI”, “Kubernetes CLI”, and “Helm CLI” onto the build server.

6. ibm-entitled-charts

On the Build Server, add ibm-entitled-charts to the local helm repository.
$ mkdir /home/admin-id/openshift
$ cd /home/admin-id/openshift
$ openssl s_client -connect raw.githubusercontent.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > ibm-entitled-cert.pem

$ export HELM_HOME=$HOME/.helm
$ helm repo add ibm-entitled-charts https://raw.githubusercontent.com/IBM/charts/master/repo/entitled/ --ca-file ibm-entitled-cert.pem --cert-file $HELM_HOME/cert.pem --key-file $HELM_HOME/key.pem
"ibm-entitled-charts" has been added to your repositories
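
To confirm the repository was added and that the ACE chart used later in the deploy job is visible (helm v2 syntax, matching the client used in this article):
$ helm repo list | grep ibm-entitled-charts
$ helm search ibm-entitled-charts/ibm-ace-server-icp4i-prod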

7. MQ Queue Manager

Open the “Offering Dashboard” for ICP4I. Navigate to it from IBM Cloud > Schematics > workspace > your-workspace > “Offering Dashboard” on top right.


Create MQ Queue Manager from the offering dashboard.

$ oc project mq
$ oc get pods
$ oc rsh mq-yourqmgr-ibm-mq-0

Configuration

sh-4.4$ runmqsc QMESBD01
def qlocal(sb.rest2mq.in)
def chl(QMESBD01.SVRCONN) CHLTYPE(SVRCONN)
alter qmgr chlauth(enabled)
alter qmgr connauth('')
set chlauth(QMESBD01.SVRCONN) TYPE(BLOCKUSER) USERLIST('nobody')
set chlauth(QMESBD01.SVRCONN) TYPE(USERMAP) CLNTUSER('aceuser') MCAUSER('mqm') USERSRC(MAP)
end
sh-4.4$
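
You can optionally verify the queue, channel, and CHLAUTH rules with standard MQSC display commands in the same runmqsc session:
display qlocal(sb.rest2mq.in)
display channel(QMESBD01.SVRCONN)
display chlauth(QMESBD01.SVRCONN)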

8. App Connect Enterprise install

On the Build Server, install ACE to /opt/IBM/ace-11.0.0.9. The ACE mqsicreatebar and mqsiapplybaroverride commands will be used by the Jenkins jobs.

Download App Connect Enterprise from https://idaas.iam.ibm.com/idaas/mtfim/sps/authsvc?PolicyId=urn:ibm:security:authentication:asf:basicldapuser

$ tar -xzvf 11.0.0-ACE-LINUXX64-FP0009.tar.gz
$ sudo mv ace-11.0.0.9 /opt/IBM
$ cd /opt/IBM/ace-11.0.0.9
$ sudo chmod -R 755 /opt/IBM/ace-11.0.0.9
Accept license
$ sudo ./ace make registry global accept license silently
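
To verify the install, source the mqsiprofile and print the product version (a quick sanity check; mqsiservice -v reports the version and build details):
$ . /opt/IBM/ace-11.0.0.9/server/bin/mqsiprofile
$ mqsiservice -v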

9. Jenkins Installation/Configuration

9.1 Installation

Jenkins install instructions,
https://wiki.jenkins.io/display/JENKINS//Installing+Jenkins+on+Red+Hat+distributions

Open Jenkins URL.
Manage Jenkins > Manage Plugins
Install the following plugins:
– OpenShift Client Plugin
– Git plugin
– Xvfb plugin
– docker-build-step plugin
– Repository Connector plugin
– Nexus Artifact uploader plugin
– SSH Agent plugin

Restart Jenkins.

9.2 Configuration

Manage Jenkins > Configure System
9.2.1

Obtain the URL from OpenShift Console > You (user on top right) > Copy Login Command > Get Token > (the URL in the oc login command). This will be used later on as well.

9.2.2


Docker Builder: unix:///var/run/docker.sock

<Save> Configuration Changes.

Manage Jenkins > Global Tool Configuration

Check “oc” command path on the Build server, by running “which oc”.

<Save> Configuration Changes.

9.3 SSH to GitHub setup

On the Jenkins Build Server,
generate an ssh keypair, load the private key into Jenkins, and add the public key to GitHub.com.

$ sudo adduser -r jenkins
$ sudo su -s /bin/bash jenkins
$ cd /var/lib/jenkins
$ ssh-keygen
Hit Enter and accept the default values for all questions.

This generates id_rsa, and id_rsa.pub files under /var/lib/jenkins/.ssh folder.
$ cat .ssh/id_rsa.pub

Copy the public key to clipboard.

1) github.com – Add the public key,
Settings > SSH and GPG keys > New SSH Key > “jenkins-github” > paste id_rsa.pub contents.

2) Jenkins Url – Add the private key
> Credentials > System > Global Credentials (unrestricted) > Add Credentials > SSH Username with private key >
ID: jenkins-github
Description: Private key to connect from Jenkins to github.com
Username: jenkins
Private Key: click "Enter Directly" > click "Add" and paste the contents of the private key (id_rsa) generated above ($ cat .ssh/id_rsa).

We need to add GitHub to known hosts. Let’s do it by simply connecting to GitHub server.

On the Build Server,
$ whoami # make sure it says 'jenkins'
$ ssh -T git@github.yourcompany.com (if you are using public GitHub, use git@github.com)
The result should be a welcome message.
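
Alternatively, on a headless Build Server you can pre-populate known_hosts with ssh-keyscan instead of connecting interactively (replace github.com with your enterprise GitHub hostname if applicable):
$ ssh-keyscan github.com >> /var/lib/jenkins/.ssh/known_hosts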

10. Nexus Installation

Nexus will be used as a repository to store ACE Bar files:

$ sudo mkdir /app && cd /app
$ sudo wget -O nexus.tar.gz https://download.sonatype.com/nexus/3/latest-unix.tar.gz

$ sudo tar -xvf nexus.tar.gz
$ sudo mv nexus-3* nexus
$ sudo adduser nexus
$ sudo chown -R nexus:nexus /app/nexus
$ sudo chown -R nexus:nexus /app/sonatype-work

$ sudo vi /app/nexus/bin/nexus.rc
Copy / paste the content between "========".
========
run_as_user="nexus"
========

Running nexus as a service:
$ sudo vi /etc/systemd/system/nexus.service

Copy / paste the content between "========".
========
[Unit]
Description=nexus service
After=network.target

[Service]
Type=forking
LimitNOFILE=65536
User=nexus
Group=nexus
ExecStart=/app/nexus/bin/nexus start
ExecStop=/app/nexus/bin/nexus stop
Restart=on-abort

[Install]
WantedBy=multi-user.target
========
$ sudo ln -s /app/nexus/bin/nexus /etc/init.d/nexus
$ sudo chkconfig --add nexus
$ sudo chkconfig nexus on
$ sudo systemctl start nexus
(or)
$ sudo service nexus start
$ sudo service nexus status
$ tail -f /app/sonatype-work/nexus3/log/nexus.log (check nexus.log to make sure the service started successfully)

Check port 8081 is up and running.
$ netstat -an | grep 8081

If running, open the Nexus URL:
http://nexus-server:8081 – bookmark the URL.

Sign in with the admin user and the temporary password. Follow the wizard and accept the defaults.

Create a new repository. I created a "raw (hosted)" repository, but you can use a different type, such as Maven.
Settings > Repositories > Create Repository > raw (hosted) > name it "generic" > click the "Create Repository" button at the bottom of the screen. A quick upload test is shown below.
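
To confirm the repository accepts uploads, you can push a small test file with curl, using the same URL pattern the Jenkins build job uses later in this article (the credentials, hostname, and path below are placeholders to adjust):
$ echo "nexus upload test" > test.txt
$ curl -u admin:password --upload-file test.txt http://nexus-server:8081/repository/generic/test/test.txt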

11. Xvfb Install

on the build server,
$ sudo yum install Xvfb
Start Xvfb:
$ export DISPLAY=
$ /usr/bin/Xvfb :1 -screen 0 1024x768x24 &
$ ps -ef | grep Xvfb (Make sure Xvfb process is running)

Auto-starting Xvfb during system reboot (see the systemd alternative below):
$ sudo vi /etc/rc.local
Xvfb :0 -screen 0 1024x768x24 &
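
Note that on recent RHEL releases /etc/rc.local only runs if it is executable (chmod +x /etc/rc.d/rc.local). An alternative is to run Xvfb as a systemd service, following the same pattern used for the Nexus service above. A sketch, assuming the unit file is saved as /etc/systemd/system/xvfb.service (adjust the display number to your setup):
========
[Unit]
Description=Xvfb virtual framebuffer for mqsicreatebar
After=network.target

[Service]
ExecStart=/usr/bin/Xvfb :1 -screen 0 1024x768x24
Restart=on-abort

[Install]
WantedBy=multi-user.target
========
$ sudo systemctl enable xvfb
$ sudo systemctl start xvfb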

12. Docker Install

On the Build Server,
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce
$ sudo systemctl start docker
$ sudo systemctl enable docker

Check the docker status and version:
$ sudo systemctl status docker
$ docker -v

13. /etc/profile

Update the default profile /etc/profile and add /opt/IBM/ace-11.0.0.9/server/bin to the PATH, as below.

PATH=$PATH:$HOME/bin:/opt/IBM/ace-11.0.0.9/server/bin:/usr/local/bin
export PATH
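
To confirm the change, source the updated profile and check that the mqsi commands resolve (a minimal check):
$ source /etc/profile
$ which mqsiapplybaroverride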

14. Configure Access to OpenShift Docker Registry

On the build server,

$ oc login https://youropenshiftcluster:port -u apikey -p YOUR_IBMCLOUD_APIKEY
Note: the URL is the OpenShift cluster API URL captured above during the OpenShift Client Plugin setup. apikey – create an IBM Cloud IAM API key if you do not have one already.
$ oc project openshift-image-registry
$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
Note: "oc patch" creates the default OpenShift route "default-route".
$ oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'
Note: copy the route – it will be used later in the article.

$ cd /etc/docker
$ sudo mkdir certs.d (if not there already)
$ cd certs.d
$ sudo mkdir "output-from-above-command_oc_get_route_docker-registry"
$ sudo chmod 777 "output-from-above-command_oc_get_route_docker-registry"
$ cd "output-from-above-command_oc_get_route_docker-registry"

$ ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts -connect "output-from-above-command_oc_get_route_docker-registry":443) -scq > client-ca.crt

$ sudo chown root:root client-ca.crt
$ sudo chmod 744 client-ca.crt
$ sudo vi /etc/docker/daemon.json
Add the line below:
{
"insecure-registries" : ["output-from-above-command_oc_get_route_docker-registry"]
}

$ sudo service docker restart
$ sudo systemctl is-active docker
$ sudo chmod 777 /var/run/docker.sock
$ docker login "output-from-above-command_oc_get_route_docker-registry" -u $(oc whoami) -p $(oc whoami -t)

15. App Connect Enterprise Toolkit

> Import sample project
Note: if you do not want to deal with MQ, then simply remove the MQOutput Node from the message flow.

SB_REST2MQ_API

> Update MQEndpoint Policy with your Queue Manager configuration.

> Check-in the projects to GitHub (the sample project's deploy/Dockerfile is shown below for reference).
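
The deploy job in section 16.2 expects a Dockerfile in the project's deploy folder. Based on the docker build output shown later in this article, it looks roughly like the sketch below; the base image tag must match an ACE server image your entitlement key can pull from cp.icr.io:
========
FROM cp.icr.io/cp/icp4i/ace/ibm-ace-mqclient-server-prod:11.0.0.8-r1-amd64
COPY *DEV*.bar /home/aceuser/initial-config/bars/
EXPOSE 7600 7800 7843 9483
ENV LICENSE accept
========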

16. Create Jenkins Jobs

16.1 Build Job

Open the Jenkins URL and create a Freestyle job. This job creates the App Connect Enterprise bar files and uploads them to the Nexus repository.

Under Build Environment, check "Start Xvfb before the build starts" and "Delete workspace before build starts".

Under the Build section, add an "Execute Shell" step.
===============
echo "Creating bar file"

APP_NAME=SB_REST2MQ_API
GROUP_ID=com.ibm.esb

ACE_HOME=/opt/IBM/ace-11.0.0.9

echo "Creating bar file"
$ACE_HOME/tools/mqsicreatebar -data $WORKSPACE -b barfiles/${APP_NAME}.bar -cleanBuild -a ${APP_NAME} -deployAsSource -trace -v createbartrace.txt

echo "Setup mqsi command environment - mqsiprofile"
. $ACE_HOME/server/bin/mqsiprofile

mqsiapplybaroverride -b barfiles/${APP_NAME}.bar -p ${APP_NAME}/properties/dev.properties -o barfiles/${APP_NAME}_DEV_${BUILD_NUMBER}.bar -r
mqsiapplybaroverride -b barfiles/${APP_NAME}.bar -p ${APP_NAME}/properties/qa.properties -o barfiles/${APP_NAME}_QA_${BUILD_NUMBER}.bar -r

# Nexus upload
zip barfiles/${APP_NAME}_${BUILD_NUMBER}.zip ${APP_NAME}/deploy/* barfiles/${APP_NAME}_DEV_${BUILD_NUMBER}.bar barfiles/${APP_NAME}_QA_${BUILD_NUMBER}.bar

curl -u admin:password --upload-file barfiles/${APP_NAME}_${BUILD_NUMBER}.zip \
http://localhost:8081/repository/generic/com/ibm/esb/${APP_NAME}/${BUILD_NUMBER}/${APP_NAME}.zip

# Git - tag the version
git tag -a ${BUILD_NUMBER} -m 'Tagged by Jenkins build'
git push git@github.ibm.com:your-name/SB_REST2MQ_API.git --tags

echo "... END ..."
=========================

Run the job ("Build Now") and check the console output. If it ran successfully, the job creates the bar files and uploads them to the Nexus repository.

16.2 Jenkins Deploy job

Create another Jenkins freestyle job to:
> download the bar file generated by the build job
> create a docker image containing the bar file
> upload the image to the OpenShift registry
> install the image on OpenShift

Execute Shell
==================
#!/bin/bash

APP_NAME=SB_REST2MQ_API
PARENT_BUILD_NUMBER=25
IAM_APIKEY=yourapikey
OCP_API_URL=openshiftapiurl
# Retrieve OCP_IMAGE_REGISTRY using (oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
OCP_IMAGE_REGISTRY=default-route-openshift-image-registry.esb-dev1-xxxxx-xxxxxx-0000.us-south.containers.appdomain.cloud
ICP_CONSOLE_URL=https://icp-console.esb-dev1-xxxx-Xxxxxxx-0000.us-south.containers.appdomain.cloud
ICP_USER=admin
# you can define password variable in jenkins>credentials section
ICP_PASSWORD=password

# Use link to get the entitlement key https://github.ibm.com/CloudPakOpenContent/cloudpak-entitlement
ICP4I_ENTITLEMENT_KEY_CP_ICR_IO=xxxxxxxxxxxxxxxxx

IMAGE_NAME=ace-sb-rest2mq-api
DEPLOYMENT_TYPE=install
HELM_RELEASE_NAME=ace-sb-rest2mq-api
INTEGRATION_SERVER=sb-rest2mqapi
tag=latest

# Artifact download (nexus download)
curl -X GET -u admin:Welcome@2020 \
http://localhost:8081/repository/generic/com/ibm/esb/${APP_NAME}/${PARENT_BUILD_NUMBER}/${APP_NAME}.zip \
-o ${APP_NAME}.zip

unzip -o ${APP_NAME}.zip
chmod -R 755 .

# DEV bar file
cp barfiles/${APP_NAME}_DEV_${PARENT_BUILD_NUMBER}.bar /var/lib/jenkins/jobs/${JOB_NAME}
# Dockerfile
cp $WORKSPACE/${APP_NAME}/deploy/Dockerfile /var/lib/jenkins/jobs/${JOB_NAME}

oc login -u apikey -p ${IAM_APIKEY} --server=${OCP_API_URL}

# Build docker image with bar files
docker login cp.icr.io --username ekey --password ${ICP4I_ENTITLEMENT_KEY_CP_ICR_IO}
ls -l /var/lib/jenkins/jobs/${JOB_NAME}
docker build -t ${IMAGE_NAME}:${BUILD_NUMBER} /var/lib/jenkins/jobs/${JOB_NAME}
rm /var/lib/jenkins/jobs/${JOB_NAME}/*.bar
docker tag ${IMAGE_NAME}:${BUILD_NUMBER} ${OCP_IMAGE_REGISTRY}/ace/${IMAGE_NAME}:${tag}-amd64
docker logout

# Push the image to OpenShift Registry
oc login -u apikey -p ${IAM_APIKEY} --server=${OCP_API_URL}

docker login https://${OCP_IMAGE_REGISTRY} -u $(oc whoami) -p $(oc whoami -t)

docker push ${OCP_IMAGE_REGISTRY}/ace/${IMAGE_NAME}:${tag}-amd64

source /etc/profile

echo "{Begin} cloudctl login"
cloudctl login -a ${ICP_CONSOLE_URL} -n ace -u ${ICP_USER} -p ${ICP_PASSWORD} --skip-ssl-validation
echo "{End} cloudctl login"

oc login -u apikey -p ${IAM_APIKEY} --server=${OCP_API_URL}

openssl s_client -connect raw.githubusercontent.com:443 \
-showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > ibm-entitled-cert.pem

helm init --client-only
helm version --tls

helm repo add ibm-entitled-charts \
https://raw.githubusercontent.com/IBM/charts/master/repo/entitled/ \
--ca-file ibm-entitled-cert.pem \
--cert-file $HOME/.helm/cert.pem \
--key-file $HOME/.helm/key.pem

if test ${DEPLOYMENT_TYPE} = "install"; then
oc project ace
helm ${DEPLOYMENT_TYPE} \
--name $HELM_RELEASE_NAME ibm-entitled-charts/ibm-ace-server-icp4i-prod \
--version v3.1.0 \
--namespace ace \
--set imageType=acemqclient \
--set acemq.initVolumeAsRoot=false \
--set image.pullPolicy=Always \
--set image.acemqclient=image-registry.openshift-image-registry.svc:5000/ace/${IMAGE_NAME}:${tag} \
--set persistence.enabled=false \
--set persistence.useDynamicProvisioning=false \
--set integrationServer.name=${INTEGRATION_SERVER} \
--set aceonly.replicaCount=1 \
--set license=accept \
--tls
#oc expose svc ${HELM_RELEASE_NAME} --port=7800
oc autoscale deployment/${HELM_RELEASE_NAME}-ibm-ace-server-icp4i-prod --min 1 --max 3 --cpu-percent=25
else
echo "helm update -- TBD"
fi

echo "... END ..."
==================================
Run "Build Now" and check the console output. If successful, you should see output similar to the following.

Output:
========
Running as SYSTEM
Building in workspace /var/lib/jenkins/jobs/SB_REST2MQ_API_NEXUS_ICP4I_DEPLOY_DEV/workspace
[WS-CLEANUP] Deleting project workspace…
[WS-CLEANUP] Deferred wipeout is used…
[WS-CLEANUP] Done
[workspace] $ /bin/bash /tmp/jenkins3614704886025186971.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 –:–:– –:–:– –:–:– 0
100 18051 100 18051 0 0 1568k 0 –:–:– –:–:– –:–:– 1602k
Archive: SB_REST2MQ_API.zip
inflating: SB_REST2MQ_API/deploy/Dockerfile
inflating: barfiles/SB_REST2MQ_API_DEV_29.bar
inflating: barfiles/SB_REST2MQ_API_QA_29.bar
Login successful.

You have access to 70 projects, the list has been suppressed. You can list all projects with ‘oc projects’

Using project “ace”.
WARNING! Using –password via the CLI is insecure. Use –password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Step 1/4 : FROM cp.icr.io/cp/icp4i/ace/ibm-ace-mqclient-server-prod:11.0.0.8-r1-amd64
—> c092e49741ba
Step 2/4 : COPY *DEV*.bar /home/aceuser/initial-config/bars/
—> Using cache
—> 7cbca45bdf90
Step 3/4 : EXPOSE 7600 7800 7843 9483
—> Using cache
—> 8c1d4c326578
Step 4/4 : ENV LICENSE accept
—> Using cache
—> 92c101c57954
Successfully built 92c101c57954
Successfully tagged ace-sb-rest2mq-api:59
Not logged in to https://index.docker.io/v1/
Login successful.

You have access to 70 projects, the list has been suppressed. You can list all projects with ‘oc projects’

Using project “ace”.
WARNING! Using –password via the CLI is insecure. Use –password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
The push refers to repository [default-route-openshift-image-registry.esb-dev1-nnn-nnnnnnnn-0000.us-south.containers.appdomain.cloud/ace/ace-sb-rest2mq-api] 7f0899f240e5: Preparing
5ca3db1c00fb: Preparing
cc462519999e: Preparing
434b83712593: Preparing
d120f8762989: Preparing
434b83712593: Layer already exists
cc462519999e: Layer already exists
7f0899f240e5: Layer already exists
d120f8762989: Layer already exists
5ca3db1c00fb: Layer already exists
latest-amd64: digest: sha256:26486a6a0bab2ab165a30577e1d2a465d786dabf99adf15e153729f1fa5bed52 size: 1372
“{Begin} cloudctl login”
Authenticating…
OK

Targeted account mycluster Account

Targeted namespace ace

Configuring kubectl …
Property “clusters.mycluster” unset.
Property “users.mycluster-user” unset.
Property “contexts.mycluster-context” unset.
Cluster “mycluster” set.
User “mycluster-user” set.
Context “mycluster-context” created.
Switched to context “mycluster-context”.
OK

Configuring helm: /var/lib/jenkins/.helm
OK
“{End} cloudctl login”
Login successful.

You have access to 70 projects, the list has been suppressed. You can list all projects with ‘oc projects’

Using project “ace”.
$HELM_HOME has been configured at /var/lib/jenkins/.helm.
Not installing Tiller due to ‘client-only’ flag having been set
Happy Helming!
Client: &version.Version{SemVer:”v2.12.3″, GitCommit:”eecf22f77df5f65c823aacd2dbd30ae6c65f186e”, GitTreeState:”clean”}
Server: &version.Version{SemVer:”v2.12.3+icp”, GitCommit:””, GitTreeState:””}
“ibm-entitled-charts” has been added to your repositories
Already on project “ace” on server “https://c100-e.us-south.containers.cloud.ibm.com:nnnnn”.
NAME: ace-sb-rest2mq-api
LAST DEPLOYED: Wed Jun 17 19:49:38 2020
NAMESPACE: ace
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod 1 1 1 0 1s

==> v1/Route
NAME AGE
ace-sb-rest2mq-api-http 1s
ace-sb-rest2mq-api-https 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-78c4fc78d7-fw6rv 0/1 ContainerCreating 0 1s

==> v1/ConfigMap
NAME DATA AGE
ace-sb-rest2mq-api-ibm-ace–623f-create–d350 2 1s

==> v1/ServiceAccount
NAME SECRETS AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-serviceaccount 2 1s

==> v1/Role
NAME AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-role 1s

==> v1/RoleBinding
NAME AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-rolebinding 1s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-ace-metrics ClusterIP nn.nn.nn.nn 9483/TCP 1s
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod ClusterIP nn.nn.nn.nn 7600/TCP,7800/TCP,7843/TCP 1s

NOTES:

If you launched the deploy from the ACE Dashboard, then you can return to the ACE Dashboard to manage the server.

The HTTP and HTTPS endpoints for the ACE Integration Server are exposed with Routes.

export ACE_HTTP_HOSTNAME=$(kubectl get route ace-sb-rest2mq-api-http -n ace -o jsonpath=”{.status.ingress[0].host}”)
export ACE_HTTPS_HOSTNAME=$(kubectl get route ace-sb-rest2mq-api-https -n ace -o jsonpath=”{.status.ingress[0].host}”)

echo “HTTP workload can use: http://${ACE_HTTP_HOSTNAME}”
echo “HTTPS workload can use: https://${ACE_HTTPS_HOSTNAME}”

Error from server (NotFound): deployments.extensions “ace-sb-rest2mq-api” not found
“… END …”
Finished: SUCCESS
============

$ oc get pods
NAME READY STATUS RESTARTS AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-b644d677c-bb5lk 1/1 Running 0 110s

$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod ClusterIP 172.21.55.80 7600/TCP,7800/TCP,7843/TCP 4m8s
ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-ace-metrics ClusterIP 172.21.203.104 9483/TCP 4m8s

$ oc port-forward ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-b644d677c-bb5lk 7600 ( To check the Integration Server Console ).
Open browser, http://localhost:7600

$ kubectl get route ace-sb-rest2mq-api-http -n ace -o jsonpath=”{.status.ingress[0].host}”
Output: ace-sb-rest2mq-api-ibm-ace-server-icp4i-prod-ace.esb-dev1-nnnnnn-nnnnn-0000.us-south.containers.appdomain.cloud
Note: use this route output to test the API in the next section.

17. Test the API

Request:
$ curl -X POST http://<output from the above kubectl get route>/rest2mq/v1/createcustomer --data '{"cnum":"100", "cname": "larry Smith"}'
Response:
{"status":"200","message":"Hello from integration_server-development"}
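
Since the flow also writes the payload to the MQ queue, you can optionally confirm the message arrived by checking the queue depth from the queue manager pod (assuming the queue manager and queue created in section 7):
$ oc rsh mq-yourqmgr-ibm-mq-0
sh-4.4$ echo "DISPLAY QLOCAL(sb.rest2mq.in) CURDEPTH" | runmqsc QMESBD01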

18. ACE Project Interchange

SB_REST2MQ_API

19. Acknowledgements

I would like to thank Joel Gomez Ramirez from IBM WW Synergy Team for reviewing the article & implementing step-by-step on his lab environment. His feedback was extremely helpful for fine-tuning the article.
