Rashid A Aljohani | Published April 9, 2019
With the rise of multi-cloud strategies, it is becoming crucial to rethink hybrid cloud integration as well. According to Liftr Cloud Insights, 81% of enterprises are pursuing a multi-cloud strategy, and 51% are deploying a hybrid cloud solution that combines public and private clouds.
To deal with unexpected workloads and a wide variety of environments, agility, scalability, and adaptability are critical characteristics of a modern integration solution. To reach this kind of architecture, take a look at the cloud-native design pattern. One of the benefits of modernizing monolithic applications into micro-components is the agility to lift and shift the components as needed. Applying the same principles to the integration platform can empower organizations to adapt to change.
This concept is called agile integration architecture. The approach relies on a decentralized integration layer, and it benefits from the containerization and API technology. There are three main pillars in agile integration architecture: fine-grained integration deployment, decentralized integration ownership, and cloud-native integration infrastructure.
Before starting, make sure that you have the following tools and skills:
- Docker installed, and basic familiarity with building images
- kubectl installed and configured for your target cluster
- Access to a Kubernetes environment, such as IBM Cloud Private or Google Kubernetes Engine
- An integration solution packaged as a BAR file with IBM App Connect Enterprise
It should take you approximately 20 minutes to complete this tutorial.
One of the challenges of a traditional integration pattern is the tight coupling between components, regardless of their relationships with each other. Even a simple change, such as an upgrade, becomes a heavyweight task and might cause several issues across the hub.
After moving from a huge centralized integration hub to a more fine-grained integration layer, each integration service runs independently in small, managed components, as shown in the following illustration:
The cloud-native pattern also affects an organization’s culture: teams can be formed around business-focused, project (service) requirements. For example, a single group might combine a developer, an integrator, a designer, and a consultant. The team has full autonomy and the confidence to make changes without affecting other services.
Teams focus on their own responsibilities, and their expertise deepens. This approach brings a new level of productivity and innovation.
Part of the transition to agile integration architecture is applying the cloud-native “cattle, not pets” approach with the support of technologies like Docker and Kubernetes. It takes seconds to create or dispose of containers to scale workloads up and down. A considerable improvement is start-up time: because containers share the host’s operating system kernel, there is no need to load a full operating system each time a container starts or stops.
If you’re interested in learning more about the concept, check out Agile integration architecture: Useful links on the IBM Integration community.
Now let’s move on to the fun part! In the next sections, I demonstrate a few steps to get started with modernizing (containerizing) your integration solution. Also, you learn to test the portability of the solution into two different cloud environments: IBM Cloud Private and Google Cloud Platform, as shown in the following diagram:
I prepared a simple integration flow that can be invoked through an HTTP POST operation. I also used the pre-built cloud connectors in IBM Cloud Integration to integrate with third-party systems like SAP, Salesforce, and Slack.
At this stage, you only need the BAR file. Navigate to the directory where you saved the BAR file, and then follow the next steps. The following screen capture shows IBM App Connect. To learn more, see Get started.
Use the following Dockerfile for the integration solution (the base image shown is the publicly available ibmcom/ace image; adjust the tag to match your environment). You need to insert your BAR file name on line 3.
FROM ibmcom/ace:latest
ENV LICENSE accept
ENV BAR=<your-bar-file>.bar
COPY $BAR /tmp
RUN bash -c 'mqsicreateworkdir /home/mywrk/myaceworkdir && mqsibar -w /home/mywrk/myaceworkdir -a /tmp/$BAR -c'
Run the following command to build the container image:
$ docker build -t <hostname>/<namespace>/<image-name>:<tag> .
$ docker build -t cluster.icp:8500/default/myintegration-flow:1.0 .
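The image tag follows the registry-address/namespace/image:tag convention. As a minimal sketch, the full reference can be composed from its parts in shell (the values below are the example values used in this tutorial):

```shell
# Compose the image reference from its parts (example values from this tutorial).
HOSTNAME="cluster.icp:8500"     # private registry address
NAMESPACE="default"             # registry namespace
IMAGE="myintegration-flow"      # image name
TAG="1.0"                       # version tag
IMAGE_REF="${HOSTNAME}/${NAMESPACE}/${IMAGE}:${TAG}"
echo "${IMAGE_REF}"             # prints cluster.icp:8500/default/myintegration-flow:1.0
# Then build with: docker build -t "${IMAGE_REF}" .
```

Keeping these parts in variables makes it easy to retarget the same build at a different registry later.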
Run the following command to push the container image:
$ docker login <cluster-registry-address>
$ docker push <hostname>/<namespace>/<image-name>:<tag>
$ docker login cluster.icp:8500
$ docker push cluster.icp:8500/default/myintegration-flow:1.0
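Because the registry address is embedded in the image reference, pushing the same solution to another cloud is only a re-tag away. A hedged sketch for Google Container Registry (PROJECT_ID is a placeholder for your own GCP project; the docker commands themselves are shown as comments):

```shell
# Re-tag the same image for Google Container Registry (PROJECT_ID is a placeholder).
PROJECT_ID="my-gcp-project"
GCR_IMAGE="gcr.io/${PROJECT_ID}/myintegration-flow:1.0"
echo "${GCR_IMAGE}"
# docker tag cluster.icp:8500/default/myintegration-flow:1.0 "${GCR_IMAGE}"
# docker push "${GCR_IMAGE}"
```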
By now you should have the Docker image ready, and you can check by running the following command:
$ docker images | grep myintegration-flow
You see results like the following example (the image ID, timestamp, and size will differ):
REPOSITORY                                    TAG   IMAGE ID     CREATED     SIZE
cluster.icp:8500/default/myintegration-flow   1.0   <image-id>   <created>   <size>
To start deploying your integration solution into Kubernetes environments, begin by creating a deployment file that includes the solution and app definitions (for example, the image path and environment variables). As shown in the following example, you can define multiple instances (replicas) of the same app to make your app resilient (line 8). The name, image path, and server name shown are examples; replace them with your own values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myintegration-flow
  labels:
    app: myintegration-flow
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myintegration-flow
  template:
    metadata:
      labels:
        app: myintegration-flow
    spec:
      containers:
      - name: myintegration-flow
        image: cluster.icp:8500/default/myintegration-flow:1.0
        env:
        - name: LICENSE
          value: "accept"
        - name: ACE_SERVER_NAME
          value: "ACESERVER"
        ports:
        - containerPort: 7600
        - containerPort: 7800
        - containerPort: 7843
Update and save the file in your directory, and run the following command:
$ kubectl create -f deployment.yaml
Your deployment is ready to be made available to external users and applications so they can use your integration solution. The following example uses a Kubernetes Service to define a logical set of pods and a policy to access them, such as protocols and ports. Similar to step 2, use the following .yaml file (the service name is an example; the NodePort type exposes the ports on each cluster node):
apiVersion: v1
kind: Service
metadata:
  name: myintegration-flow
  labels:
    app: myintegration-flow
spec:
  type: NodePort
  selector:
    app: myintegration-flow
  ports:
  - name: webui
    port: 7600
  - name: http
    port: 7800
  - name: https
    port: 7843
Save the file in your directory, and run the following command:
$ kubectl create -f service.yaml
Double-check the status of your app by running the following commands:
$ kubectl get po
$ kubectl get svc
The following screen captures show the IBM Cloud Private environment and the Google Cloud environment.
As you can see, both environments are running your integration solution successfully, and Kubernetes is handling incoming traffic on the node ports and mapping it to your target ports (for example, 32060 mapped to 7600 on IBM Cloud Private and 31318 mapped to 7600 on Google Cloud Platform).
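You can read this mapping straight from the PORT(S) column of kubectl get svc, where each entry has the form targetPort:nodePort. A small sketch that extracts the node port from a sample column value (the value below is illustrative; your cluster’s ports will differ):

```shell
# Illustrative PORT(S) column from `kubectl get svc` for a NodePort service.
PORTS="7600:32060/TCP,7800:31318/TCP,7843:31742/TCP"
# Extract the external NodePort that maps to container port 7600.
NODE_PORT=$(echo "${PORTS}" | tr ',' '\n' | grep '^7600:' | cut -d: -f2 | cut -d/ -f1)
echo "${NODE_PORT}"   # prints 32060
```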
Now get the node public IP address to check your service:
$ kubectl get node --output=wide
Then, form your URL from the node IP address and the service NodePort, in the form http://<node-ip>:<node-port>/<resource-path>.
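With the node IP and NodePort in hand, you can compose the request URL and invoke the flow with curl. A sketch with placeholder values (203.0.113.10 is a documentation IP, and /test is a hypothetical resource path for the flow; the actual POST is left as a comment):

```shell
NODE_IP="203.0.113.10"   # placeholder -- use the IP from `kubectl get node --output=wide`
NODE_PORT="32060"        # placeholder -- use the NodePort from `kubectl get svc`
URL="http://${NODE_IP}:${NODE_PORT}/test"   # /test is a hypothetical flow path
echo "${URL}"
# Invoke the integration flow with an HTTP POST:
# curl -X POST "${URL}" -H "Content-Type: application/json" -d '{"name":"example"}'
```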
The outputs from the two environments should look like the following example:
In a few steps, you took one integration solution, containerized it, and deployed it on multiple cloud environments (IBM and Google). You can apply the same steps to any other integration apps and solutions while keeping your integration platform agile, scalable, and adaptable. The secret behind this approach is the integration runtime environment.
The examples in this tutorial use IBM App Connect Enterprise, which provides a lightweight integration runtime for cloud-native, container-based deployment. Like App Connect Enterprise, all other IBM integration components are ready-made for container-based implementation to support your multi-cloud strategy through the complete IBM Cloud Integration Platform. You can try it for yourself.