Introduction

Today, every enterprise strives for an agile business IT environment by prioritising the portability, flexibility, and scalability of its cloud infrastructure. Enabling this agile vision can be difficult and expensive, given the sheer variety of methods available and the number of requirements that must be satisfied.

Many enterprises find themselves using multiple cloud providers, so a multi-cloud integration strategy is essential to rapid development of new services and features: many services and applications need to communicate within and across clouds.

The IBM® Cloud Pak for Integration™ provides a combination of all of the IBM integration capabilities, deployed onto Kubernetes with a unified integration experience. The platform is based on a Kubernetes distribution delivered by IBM Cloud Private, which means you can deploy it on premises or on whichever supported cloud provider you prefer. Because IBM Cloud Pak for Integration has a consistent architecture no matter where it is deployed, you no longer need specialised skills and experience siloed within your organisation, enabling you to drive productivity and efficiency.

Now we’ll get you up and running with IBM Cloud Pak for Integration on Microsoft Azure. I’m doing this on macOS, but the instructions should work on whichever standard OS you’re using; anywhere they deviate, I’ll at the very least point you at the documentation to follow instead.

Setting up Azure

Account Creation

You need to create and set up a Microsoft Azure account. If you already have one or are sharing one, make sure you have full editor permissions in both the “Subscription” and “Azure Active Directory”. If you don’t have one, simply create an account at the Microsoft Azure homepage, which sorts all of this out for you.

Increase vCPU quota

In this example we are going to set up a single instance of every integration capability. To do this, you need more processing power than an Azure subscription provides by default.

The processing power you need is determined by the VM types you choose (which we’ll come to later), but for now you need to request a higher vCPU limit; specifically, standard DSv3 family vCPUs.

  1. Select your subscription.
  2. Under settings, select “Usage + Quota” and select the “Request increase” button.
    This should open a new support request; “Issue type” and “Subscription” should already be set up for you.
  3. Set “Quota type” to “Compute-VM (cores-vCPUs) subscription limit increases”.
  4. Select the location, set “SKU family” to “DSv3 series”, and then set the new limit to 50.
[Screenshot: the quota increase request options in the Azure portal]

This opens a request to increase your cores, which should be accepted in a matter of minutes. You then have permission to use the resources required to run everything in this tutorial.
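
If you prefer the command line, you can also check your current usage against the limits before (or after) raising the request. This assumes you already have the Azure CLI installed (covered in the next section); the region name here is just an example, so swap in your own:

# Show current vCPU usage against the subscription limits for a region
az vm list-usage --location eastus --output table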

Generate aadClientId and aadClientSecret

Now you need to generate a Service Principal with a client secret, which you will pass into the Terraform project so that it can authenticate to Azure (for more information, check out the Terraform docs).

Start by downloading the Azure CLI and logging in to it from your terminal:

az login

This enables you to interact with any or all of your Azure instances from the terminal. Now, you need to get your subscription ID:

az account list

You can also get this from the Azure Portal itself, but this is an easy way to list all of the subscriptions you belong to. Now you can generate your Service Principal, which is what gives you the aadClientId and aadClientSecret:

az account set --subscription="SUBSCRIPTION_ID"
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTION_ID"

Output:

{
  "appId": "{aadClientId}",
  "displayName": "azure-cli-2017-06-05-10-41-15",
  "name": "http://azure-cli-2017-06-05-10-41-15",
  "password": "{aadClientSecret}",
  "tenant": "00000000-0000-0000-0000-000000000000"
}

Make a note of the “appId” and “password” (or aadClientId and aadClientSecret) as you will need them later.
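
Purely as a convenience (these environment variable names are my own, not anything Azure or Terraform looks for), you can stash the values so they’re easy to paste into the .tfvars file later:

# My own naming convention; copy the values from the JSON output above
export AAD_CLIENT_ID="<appId from the output>"
export AAD_CLIENT_SECRET="<password from the output>"
export SUBSCRIPTION_ID="<your subscription id>"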

High Availability

For a production deployment you want a highly available configuration, making sure that your Kubernetes cluster has no single point of failure. This is achieved by using multiple nodes of each type (master, worker, etc.) as well as multiple availability zones. The following diagram shows an architectural view of this:

[Diagram: high availability architecture]

Azure, together with the Terraform script discussed next, allows you to achieve this.

Installing ICP via Terraform

The next step is to install IBM Cloud Private onto Azure, which requires you to set up a number of VMs, a route table, disk space, and a virtual network. Rather than doing this by hand, you can use the following Terraform project: https://github.com/ibm-cloud-architecture/terraform-icp-azure.

Terraform is “infrastructure as code”: it allows you to define a configuration and pass in specific variables to fit your exact requirements. This project installs IBM Cloud Private on Azure; you just have to pass it a file containing the specifics you need.

  1. Download and install Terraform from the official website: https://www.terraform.io/. 
  2. Clone the terraform-icp-azure repo linked above, then navigate into the /templates/icp-ce directory, or /templates/icp-ee-az for a high availability configuration.
  3. Create a .tfvars file as a simple text file. Provide an appropriate name (for example, terraform-example.tfvars) as you’re going to explicitly point Terraform at it.
  4. Define the following parameters in the .tfvars file you just created. You can use the example I’ve provided but ensure that you fill in the blanks as needed (for the ssh public key, I generated this locally):
virtual_network_name = "johnsmith-vnet"
virtual_network_cidr = "10.0.0.0/21"
network_cidr = "10.0.0.0/22"
subnet_name = "johnsmith-subnet"
subnet_prefix = "10.0.4.0/27"
cluster_name = "johnsmith-net"
instance_name = "johnsmith-net"

storage_account_tier = "Premium"
route_table_name = "johnsmith-route"

aadClientSecret = ""
aadClientId = ""
ssh_public_key = "ssh-rsa randomfakepublickey"

resource_group = "johnsmith-rgroup"

boot = {
    nodes         = "0"
    name          = "jsmith-bootnode"
    os_image      = "ubuntu"
    vm_size       = "Standard_A2_v2"
    os_disk_type  = "Standard_LRS"
    os_disk_size  = "100"
    docker_disk_size = "100"
    docker_disk_type = "StandardSSD_LRS"
    enable_accelerated_networking = "false"
}

master = {
    nodes         = "1"
    name          = "jsmith-master"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}

proxy = {
    nodes         = "1"
    name          = "jsmith-proxy"
    vm_size       = "Standard_D2s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}

management = {
    nodes         = "1"
    name          = "jsmith-mgmt"
    #vm_size      = "Standard_A4_v2"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}

worker = {
    nodes         = "3"
    name          = "jsmith-worker"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}
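
The ssh_public_key value above is one I generated locally; if you don’t already have a key pair, something like the following creates one (the file path is just an example) and prints the public key to drop into the .tfvars file:

# Generate an RSA key pair; the .pub contents go into ssh_public_key
ssh-keygen -t rsa -b 4096 -f ~/.ssh/icp-azure -N ""
cat ~/.ssh/icp-azure.pub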

If you’re opting for high availability, add an ICP image location by specifying either of the following anywhere in your .tfvars file:

  • The variable “image_location”, which points to the installation tarball, alongside the variable “image_location_key” if one is required.
  • The variables “private_registry”, “registry_username” and “registry_password” to point to a private docker registry where the ICP installation image is located.

For example:
image_location = "https://example-location/ibm-cloud-private-x86_64-3.1.1.tar.gz"

You will also need to alter your VM variables to give each node type more nodes, plus some extra setup. You can copy the relevant blocks out of variables.tf (in the terraform-icp-azure repo) and simply update the name and vm_size fields.

Next, start the Terraform build by running these two commands (this is going to take around an hour, so you may want to nohup it):

terraform init
terraform apply -var-file={.tfvars_file} -auto-approve  
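
If you do decide to background it with nohup, it looks something like this (assuming the terraform-example.tfvars file name from earlier):

# Run the apply in the background and keep the output for later inspection
nohup terraform apply -var-file=terraform-example.tfvars -auto-approve > terraform-apply.log 2>&1 &
tail -f terraform-apply.log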

Once all this is done, there should be some output giving your cluster details (you will want to save these); it looks something like:

ICP Admin Password = {password}
ICP Admin Username = {username}
ICP Console URL = https://{cluster_ca_domain}:8443
ICP Kubernetes API URL = https://{cluster_ca_domain}:8001
ICP Proxy = {proxy_hostname}
cloudctl = cloudctl login --skip-ssl-validation -a https://{cluster_ca_domain}:8443 -u {username} -p {password} -n default -c {account_id}

You can now open the “ICP Console URL” link to display your ICP dashboard and log in using your credentials (“ICP Admin Username” and “ICP Admin Password”). You will also want to log in to the CLI using the cloudctl = … command, as it also sets up your Kubernetes CLI to interact with your cluster.
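
A quick way to confirm the CLI is wired up correctly is to list the cluster nodes; you should see the master, proxy, management, and worker VMs that Terraform created:

# Confirm kubectl is now pointed at your new cluster
kubectl get nodes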

Configure authentication for Docker CLI

At this point, you have a private image repository sitting on your instance of Azure. Now you need to be able to work with that repository, as it’s where you’re going to upload and work with all of your images; to do that, you just need to quickly set up some authorisation so your CLI can communicate with it. Make sure you have Docker installed. I’m following the documentation in IBM Knowledge Center, specifically the instructions for macOS; if you aren’t on a Mac, follow that link and do the same for your OS.

Firstly, you need to edit your /etc/hosts file, appending the following:

{master_ip} {cluster_ca_domain}

Where {master_ip} is the IP of your master node (which you can find on your instance of Azure), and {cluster_ca_domain} comes from the URL of your dashboard (basically the bit after https:// but before :{port_number}).
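
As a concrete (entirely made-up) example, if your master IP were 52.170.0.10 and your console URL were https://johnsmith-net.icp:8443, you could append the entry like this:

# Illustrative values only; use your own master IP and cluster CA domain
echo "52.170.0.10 johnsmith-net.icp" | sudo tee -a /etc/hosts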

Now grab a copy of the registry certificate from the master node and add it to your keychain:

mkdir -p ~/.docker/certs.d/{cluster_ca_domain}\:8500
scp vmadmin@{cluster_ca_domain}:/etc/docker/certs.d/{cluster_ca_domain}\:8500/ca.crt ~/.docker/certs.d/{cluster_ca_domain}\:8500/ca.crt
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/.docker/certs.d/{cluster_ca_domain}\:8500/ca.crt
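
An aside if you’re on Linux rather than macOS: there’s no keychain involved, as the Docker daemon reads registry certificates straight out of /etc/docker/certs.d, so a rough equivalent is:

# Linux sketch: place the registry certificate where the Docker daemon looks for it
scp vmadmin@{cluster_ca_domain}:/etc/docker/certs.d/{cluster_ca_domain}\:8500/ca.crt /tmp/ca.crt
sudo mkdir -p /etc/docker/certs.d/{cluster_ca_domain}:8500
sudo mv /tmp/ca.crt /etc/docker/certs.d/{cluster_ca_domain}:8500/ca.crt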

Finally, restart Docker and then run…

docker login {cluster_ca_domain}:8500

…using the username and password generated by the Terraform script.
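
If you want to sanity check the registry connection, you can push a throwaway image; the hello-world image and the default namespace here are purely for the test, and any namespace you’re authorised for would do:

# Push a small test image to confirm the private registry is reachable
docker pull hello-world
docker tag hello-world {cluster_ca_domain}:8500/default/hello-world
docker push {cluster_ca_domain}:8500/default/hello-world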

Setting up namespaces

For the purposes of this article we’re going to deploy each of the integration capabilities into its own namespace. This involves using the UI to create each namespace with the correct pod security policy, and then uploading the capability into that namespace, which also sets up the image pull secrets you will use later. Feel free to only create namespaces for the capabilities you’re interested in from this list.

Navigate to the namespace creation page by selecting the following menu options: 

Hamburger Button (Top Left) -> Manage -> Namespaces -> Create Namespace

Then create the namespaces, which can be called anything you like. For this example, I name the namespaces as follows:

Capability                  Namespace   Pod Security Policy
ICP4I Platform Navigator    icp4i       ibm-restricted-psp
MQ                          icp4i-mq    ibm-anyuid-psp
App Connect                 icp4i-ace   ibm-anyuid-psp
API Connect                 icp4i-api   ibm-anyuid-hostpath-psp
Event Streams               icp4i-es    ibm-restricted-psp
DataPower                   icp4i-dp    ibm-anyuid-psp
Aspera                      icp4i-asp   ibm-anyuid-hostpath-psp
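
Once they’re created, a quick check from the CLI should list them all (the grep pattern assumes the icp4i- prefix from my table; adjust it if you chose different names):

# List the namespaces created through the ICP console
kubectl get namespaces | grep icp4i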

Uploading images

With the namespaces all set up, you can now upload the images you need. I have them saved as .tgz files; if you run the following commands for each file, they upload the images required for each capability and do most of the setup (sorting out image pull secrets and the like):

cloudctl target -n {namespace}
cloudctl catalog load-archive --archive {capability-package} --registry {cluster_ca_domain}:8500/{namespace}
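
As a concrete example, loading an MQ archive into the icp4i-mq namespace would look like the following; the archive file name is illustrative, so use whatever your capability package is actually called:

# File and namespace names are illustrative; substitute your own
cloudctl target -n icp4i-mq
cloudctl catalog load-archive --archive ibm-mq-advanced-prod.tgz --registry {cluster_ca_domain}:8500/icp4i-mq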

Setting up Azure Premium Storage

You now need to set up some persistent storage, and Microsoft provides a method for this known as Azure Premium Storage. You’re going to set up both the Azure Disk and Azure File provisioners, and we’re only going to go as far as creating storage classes for them. This allows the capabilities to provision their own storage as required, based on these storage classes.

Make sure you’re logged in to the cloudctl CLI as described in the Installing ICP via Terraform section.

To set up Azure Premium Storage, you need four YAML files: two for the storage classes, one for the ClusterRole, and one for the ClusterRoleBinding (in the extract below, each file is separated by a dashed line, ---):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:azure-cloud-provider
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs:     ['get','create']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:azure-cloud-provider
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:azure-cloud-provider
subjects:
- kind: ServiceAccount
  name: persistent-volume-binder
  namespace: kube-system
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS

Copy the preceding extract into each yaml file then run…

kubectl apply -f {file_name}

…on each of them. Then, if you check storage classes (kubectl get sc) you should see one called azurefile and one called azuredisk.
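
If you want to confirm that dynamic provisioning actually works before deploying any capabilities, a throwaway PersistentVolumeClaim against the azuredisk class (the claim name is arbitrary) makes an easy test:

# Create a small test claim against the azuredisk storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azuredisk-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azuredisk
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should go from Pending to Bound once the disk is provisioned
kubectl get pvc azuredisk-test
kubectl delete pvc azuredisk-test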

Deploying IBM Cloud Pak for Integration Platform Navigator

Now you get to deploy the actual Navigator; from there you can then deploy all of the other capabilities.

Log in to your ICP console, then select Catalog in the top right corner. This takes you to the range of things you can deploy onto ICP. For now we’re only interested in the Navigator, so go ahead and search for CIP, which should return only one result (if not, it’s called ibm-cip-prod). Select it and get it all configured…

Helm Release Name: nav
Target Namespace: icp4i ({navigator_namespace})
License: Ticked

All Parameters
Image pull secret: sa-icp4i (sa-{namespace})
Hostname of the ingress proxy to be configured: {proxy_hostname}

Then click Install, wait a couple of minutes for the pods to start running (you can see this in the Helm release section of ICP), and then click Launch. You should be greeted by a welcome screen before seeing an overview of the various capabilities, alongside the ability to create instances of them.
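
If you prefer watching from the terminal rather than the Helm release page, you can follow the pods as they start (the namespace assumes the icp4i value from earlier):

# Watch the Platform Navigator pods until they reach Running
kubectl get pods -n icp4i -w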

[Screenshot: the Platform Navigator with a couple of instances running]

Capability Deployment Example (Event Streams)

Deploying a capability follows the same pattern as the Navigator: find its chart in the catalog, point it at the namespace you created earlier, and fill in its parameters. For Event Streams, the settings I used were:
Helm Release Name: events
Target Namespace: icp4i-es ({eventstreams_namespace})
License: Ticked


All Parameters

Image pull secret: sa-icp4i-es (sa-{namespace})
Enable persistent storage for Apache Kafka: Ticked
Use dynamic provisioning for Apache Kafka: Ticked
Storage class name: azuredisk

As usual, select Install and wait a little while for it all to be set up. Launch it from the Navigator and it should take you to the Event Streams dashboard, where there’s a very nice little status check to tell you when everything is green and good to go.

The methods for other capabilities are very similar, and you can view the capability documentation for specifics on how to set them up.

Conclusion

You now have the tools to successfully implement the Platform Navigator with an instance of each integration product running, all on Microsoft Azure. 

So, what next? 

Well, that depends on what you need. You now have a wealth of integration capabilities at your disposal which can unlock innovation, from message & event handling to API management. You can learn more about each capability, how they can work together, and how you can modernise your integration by reading the capability documentation.  

If you have any questions or want to continue the discussion, feel free to contact me directly on Twitter or LinkedIn. You can also join the conversation on our Slack in the ICP4I channel, or leave a comment below!
