With the rise of multi-cloud strategies, it is becoming crucial to rethink hybrid cloud integration strategy as well. According to Liftr Cloud Insights, 81% of enterprises have a multi-cloud strategy, and 51% deploy a hybrid cloud solution that combines public and private clouds.
To deal with unexpected workloads and varied environments, agility, scalability, and adaptability are critical characteristics of a modern integration solution. To reach this kind of architecture, take a look at cloud-native design patterns. One of the benefits of modernizing monolithic applications into micro-components is the agility to lift and shift components as needed. Applying the same principles to the integration platform can empower organizations to adapt to change.
This concept is called agile integration architecture. The approach relies on a decentralized integration layer and benefits from containerization and API technologies. There are three main pillars in agile integration architecture: fine-grained integration deployment, decentralized integration ownership, and cloud-native integration infrastructure.
Before starting, make sure that you have the following tools and skills:
It should take you approximately 20 minutes to complete this tutorial.
Fine-grained integration deployment
One of the challenges of a traditional integration pattern is the tight coupling between components, regardless of how they relate to each other. Even a simple change, such as an upgrade, carries significant overhead and can cause issues across the entire hub.
After moving from a huge centralized integration hub to a more fine-grained integration layer, each integration service runs independently as a small, managed component, as shown in the following illustration:
Decentralized integration ownership
The cloud-native pattern also affects an organization’s culture: teams can be formed around business-focused project (service) requirements. For example, a single group might mix a developer, an integrator, a designer, and a consultant. The team has full autonomy and the confidence to make changes without affecting other services.
Teams focus on their own responsibilities, and their expertise deepens. This approach brings a new level of productivity and innovation.
Cloud-native integration infrastructure
A part of the transition to agile integration architecture is applying the cloud-native “cattle, not pets” approach with the support of technology like Docker and Kubernetes. It takes seconds to create or dispose of containers to scale workloads up and down. A considerable improvement is start-up time: containers share the host operating system kernel, which eliminates the need to reproduce operating system files when a container starts or stops.
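As a small illustration of this elasticity, scaling a containerized integration workload up or down is a single command in Kubernetes. The deployment name below is a hypothetical placeholder:

```shell
# Hypothetical deployment name; substitute your own deployment.
DEPLOYMENT=myintegration-deployment

# Scale up to three identical "cattle" replicas to absorb a traffic spike...
kubectl scale deployment "$DEPLOYMENT" --replicas=3

# ...and dispose of the extra containers just as quickly afterwards.
kubectl scale deployment "$DEPLOYMENT" --replicas=1
```

Because every replica is created from the same image, no instance needs individual care; any one of them can be replaced at any time.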
If you’re interested in learning more about the concept, check out Agile integration architecture: Useful links on the IBM Integration community.
Now let’s move on to the fun part! In the next sections, I demonstrate a few steps to get started with modernizing (containerizing) your integration solution. Also, you learn to test the portability of the solution into two different cloud environments: IBM Cloud Private and Google Cloud Platform, as shown in the following diagram:
- Create a container image.
- Push the image to the cloud registry.
- Run the application on multi-clouds.
Technologies used in this example
- IBM App Connect Developer Edition: a platform that supports the full breadth of integration needs across a modern digital enterprise.
- Docker containers: a lightweight, standalone, executable package of software that includes everything needed to run an application — code, runtime, system tools, system libraries, and settings — so it runs reliably from one computing environment to another.
- Kubernetes: an open-source system for automating deployment, scaling, and management of containerized applications.
I prepared a simple integration flow, which can be invoked through an HTTP POST operation. Also, I used the pre-built Cloud connectors at IBM Cloud Integration to integrate with other third-party systems like SAP, Salesforce, and Slack.
At this stage, you only need the BAR file. Navigate to the directory where you saved the BAR file, and then follow the next steps. The following screen capture shows IBM App Connect. To learn more, see Get started.
Step 1. Create the Dockerfile.
Use the following file to create your Dockerfile for the integration solution. Insert your BAR file name in the ENV instruction.
FROM ibmcom/ace
ENV BAR=<filename>.bar
COPY $BAR /tmp
RUN bash -c 'mqsicreateworkdir /home/mywrk/myaceworkdir && mqsibar -w /home/mywrk/myaceworkdir -a /tmp/$BAR -c'
Step 2. Build the container image.
Run the following command to build the container image:
$ docker build -t <hostname>/<namespace>/<image-name>:<tag> .
$ docker build -t cluster.icp:8500/default/myintegration-flow:1.0 .
Step 3. Push the container image.
Run the following command to push the container image:
$ docker login <cluster-registry-address>
$ docker push <hostname>/<namespace>/<image-name>:<tag>
$ docker login cluster.icp:8500
$ docker push cluster.icp:8500/default/myintegration-flow:1.0
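The IBM Cloud Private registry is only one target. Because this tutorial also deploys the solution to Google Cloud Platform, the same image can be re-tagged and pushed to Google Container Registry; the project ID below is a hypothetical placeholder:

```shell
# Hypothetical GCP project ID; replace with your own project.
PROJECT_ID=my-gcp-project
GCR_IMAGE="gcr.io/${PROJECT_ID}/myintegration-flow:1.0"

# Re-tag the locally built image for Google Container Registry,
# then push it (assumes 'gcloud auth configure-docker' has been run).
docker tag cluster.icp:8500/default/myintegration-flow:1.0 "$GCR_IMAGE"
docker push "$GCR_IMAGE"
```

The same image therefore serves both clouds; only the registry reference in the deployment file changes.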
Step 4. Create the deployment file.
By now you should have the Docker image ready, and you can check by running the following command:
$ docker images | grep myintegration-flow
You see results like the following example:
To start deploying your integration solution into Kubernetes environments, you begin by creating a deployment file that includes the solution and app definitions (for example, the image path and environment variables). As shown in the following example, you can define multiple instances (replicas) of the same app to make your app resilient (the replicas field).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: ace
  name: myintegration-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ace
    spec:
      containers:
      - name: myintegration-deployment
        image: <hostname>/<namespace>/<image-name>:<tag>
        env:
        - name: LICENSE
          value: accept
        - name: ACE_SERVER_NAME
          value: ACESERVER
Update and save the file in your directory, and run the following command:
$ kubectl create -f deployment.yaml
deployment/myintegration-deployment created
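Before moving on, you can confirm that Kubernetes finished rolling out the deployment and that its pod is running (the label below matches the app: ace label in the deployment template):

```shell
APP_LABEL="app=ace"   # the label defined in the deployment template

# Block until the deployment has finished rolling out...
kubectl rollout status deployment/myintegration-deployment

# ...then confirm the pod created for it is in the Running state.
kubectl get pods -l "$APP_LABEL"
```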
Step 5. Create the service file.
Your deployment is now ready to be made available to external users and applications so they can use your integration solution. The following example uses a Kubernetes Service to define a logical set of pods and a policy for accessing them, such as protocols and ports. Similar to the previous step, use the following .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: myintegration-service
  labels:
    app: myintegration-service
spec:
  type: NodePort
  ports:
  - port: 7600
    targetPort: 7600
    protocol: TCP
    name: webui
  - port: 7800
    targetPort: 7800
    protocol: TCP
    name: ace-http
  - port: 7843
    targetPort: 7843
    protocol: TCP
    name: ace-https
  selector:
    app: ace
Save the file in your directory, and run the following command:
$ kubectl create -f service.yaml
service/myintegration-service created
Double-check the status of your app by running the following commands:
$ kubectl get po
$ kubectl get svc
The following screen captures show the IBM Cloud Private environment and the Google Cloud environment.
As you can see, both environments are running your integration solution, and Kubernetes is mapping the incoming node ports to your target ports (for example, 32060 mapped to 7600 on IBM Cloud Private, and 31318 mapped to 7600 on Google Cloud Platform).
Now get the node public IP address to check your service:
$ kubectl get node --output=wide
Then, form your URL like the following example:
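For instance, with a node IP of 35.200.10.15 and a webui NodePort of 31318 (both hypothetical values — read the real ones from the kubectl output above), the URL can be assembled like this:

```shell
# Hypothetical values; read the real ones from 'kubectl get node -o wide'
# and 'kubectl get svc myintegration-service'.
NODE_IP=35.200.10.15
NODE_PORT=31318          # the NodePort that maps to the webui port 7600

URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"              # http://35.200.10.15:31318
```

Opening this URL in a browser reaches the App Connect web UI; the ace-http NodePort exposes the flow's HTTP endpoint in the same way.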
The outputs from the two environments should look like the following example:
In a few steps, you took one integration solution, containerized it, and then deployed it on multiple cloud environments (IBM and Google). You can apply the same measures to any other integration apps and solutions while keeping your integration platform agile, scalable, and adaptable. The secret behind this approach is the integration runtime environment.
The examples in this tutorial use IBM App Connect Enterprise, which provides a lightweight integration runtime environment for cloud-native and container-based deployment. Like App Connect Enterprise, all other IBM integration components are ready-made for container-based implementation to support your multi-cloud strategy through the complete IBM Cloud Integration Platform. You can try it for yourself here.