Introduction

Today, every enterprise strives to achieve an agile business IT environment by prioritising the portability, flexibility, and scalability of its cloud infrastructure. Enabling this agile vision can be difficult and expensive, given both the sheer variety of methods available and the number of requirements that must be satisfied.

Many enterprises find themselves using multiple cloud providers, and because many services and applications need to communicate across and within those clouds, a multi-cloud integration strategy is essential to enabling rapid development of new services and features.

The IBM® Cloud Pak for Integration™ provides a combination of all of the IBM integration capabilities, deployed onto Kubernetes with a unified integration experience. The platform is based on a Kubernetes distribution delivered by IBM Cloud Private, which means that you can deploy the platform on premises or on whichever supported cloud provider you desire. Because IBM Cloud Pak for Integration has a consistent architecture no matter where it is deployed, you no longer need specialised skills and experience siloed within your organisation, enabling you to drive productivity and efficiency.

Now we’ll get you up and running with IBM Cloud Pak for Integration on Microsoft Azure. I’m doing this on macOS, but the instructions should work on whichever standard OS you’re using. Wherever the steps deviate, I will at the very least point at the documentation you need to follow instead.

Setting up Azure

Account Creation

You need to create and set up a Microsoft Azure account. If you already have one or are sharing one, make sure you have full editor permissions in both the “Subscription” and “Azure Active Directory”. If you don’t have one, simply create an account at the Microsoft Azure homepage, and it should sort all of this out for you.

Increase vCPU quota

In this example we are going to set up a single instance of every integration capability. To do this, you need more processing power than Azure provides by default.

The processing power you need is determined by the VM types you choose (which we’ll come to later), but for now you need to request a higher vCPU limit; specifically, standard DSv3 family vCPUs.

  1. Select your subscription
  2. Under settings, select “Usage + Quota” and select the “Request increase” button.
    This should open a new support request. “Issue type” and “Subscription” should already be set up for you.
  3. Set “Quota type” to “Compute-VM (cores-vCPUs) subscription limit increases”.
  4. Select the location, set “SKU family” to “DSv3 series”, and then set the new limit to 50.
Screenshot of requesting extra cores on Microsoft Azure
View of the request options in the portal.

This submits a request to increase your cores, which should be accepted in a matter of minutes. You now have permission to use the resources required to run everything you need in this tutorial.
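Once you have the Azure CLI installed (covered in the next section), you can also check your current vCPU usage and limits from the terminal; a quick sketch, where the location is an example to substitute with your own:

az vm list-usage --location westeurope --output table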

Generate aadClientId and aadClientSecret

Now you need to generate a Service Principal with a client secret, which you can pass into the Terraform project so that it can authenticate to Azure (for more information, check out the Terraform docs).

Start by downloading the Azure CLI and logging in to it from your terminal:

az login

This enables you to interact with any or all of your Azure instances from the terminal. Now, you need to get a subscription ID:

az account list

You can also get this from the Azure Portal itself, but this is an easy way to list all the subscriptions you belong to. Now you can generate your Service Principal, which is what gives you the aadClientId and aadClientSecret:

az account set --subscription="SUBSCRIPTION_ID"
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTION_ID"

Output:

{
  "appId": "{aadClientId}",
  "displayName": "azure-cli-2017-06-05-10-41-15",
  "name": "http://azure-cli-2017-06-05-10-41-15",
  "password": "{aadClientSecret}",
  "tenant": "00000000-0000-0000-0000-000000000000"
}

Make a note of the “appId” and “password” values (your aadClientId and aadClientSecret, respectively) as you will need them later.
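If you’d rather not write the secret into a file, note that Terraform can also read input variables from environment variables of the form TF_VAR_{variable_name}. A minimal sketch, assuming the variable names used by the Terraform project discussed below:

export TF_VAR_aadClientId="{appId}"
export TF_VAR_aadClientSecret="{password}"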

High Availability

For a production deployment you want a highly available configuration, making sure that your Kubernetes cluster has no single point of failure. This is achieved by using multiple nodes of each type (master, worker, etc.) as well as multiple availability zones. The following diagram shows an architectural view of this:

A diagram showing the architecture of a Highly available system
High availability architecture

Azure and the Terraform project discussed next allow you to achieve this.

Installing ICP via Terraform

The next step is to get IBM Cloud Private (ICP) installed onto Azure, which requires you to set up a set of VMs, a route table, disk space, and a virtual network. Rather than doing all of this by hand, you can use the following Terraform project: https://github.com/ibm-cloud-architecture/terraform-icp-azure.

Terraform is “infrastructure as code”: it allows you to define a configuration and pass in specific variables to fit your exact requirements. This project installs IBM Cloud Private on Azure; you just have to pass it a file containing the specifics you need.

  1. Download and install Terraform from the official release page (I have 0.11.14): https://releases.hashicorp.com/terraform/.
  2. Clone the terraform-icp-azure repo linked above, then navigate into the /templates/icp-ce directory, or /templates/icp-ee-az for a high availability configuration.
  3. Create a .tfvars file as a simple text file. Provide an appropriate name (for example, terraform-example.tfvars) as you’re going to explicitly point Terraform at it.
  4. Define the following parameters in the .tfvars file you just created. You can use the example I’ve provided, but ensure that you fill in the blanks as needed (for the SSH public key, I generated this locally; see the sketch after the example file):
virtual_network_name = "johnsmith-vnet"
virtual_network_cidr = "10.0.0.0/21"
network_cidr = "10.0.0.0/22"
subnet_name = "johnsmith-subnet"
subnet_prefix = "10.0.4.0/27"
cluster_name = "johnsmith-net"
instance_name = "johnsmith-net"

storage_account_tier = "Premium"
route_table_name = "johnsmith-route"

aadClientSecret = ""
aadClientId = ""
ssh_public_key = "ssh-rsa randomfakepublickey"

resource_group = "johnsmith-rgroup"

boot = {
    nodes         = "0"
    name          = "jsmith-bootnode"
    os_image      = "ubuntu"
    vm_size       = "Standard_A2_v2"
    os_disk_type  = "Standard_LRS"
    os_disk_size  = "100"
    docker_disk_size = "100"
    docker_disk_type = "StandardSSD_LRS"
    enable_accelerated_networking = "false"
}

master = {
    nodes         = "1"
    name          = "jsmith-master"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}

proxy = {
    nodes         = "1"
    name          = "jsmith-proxy"
    vm_size       = "Standard_D2s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}

management = {
    nodes         = "1"
    name          = "jsmith-mgmt"
    #vm_size      = "Standard_A4_v2"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}

worker = {
    nodes         = "3"
    name          = "jsmith-worker"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}
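The ssh_public_key value above is a fake; you need to supply your own. A minimal way to generate a key pair locally with standard OpenSSH (the file name is arbitrary):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/icp_azure_rsa -N ""
cat ~/.ssh/icp_azure_rsa.pub   # paste this output into ssh_public_key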

If you’re opting for high availability, add an ICP image location by specifying either of the following anywhere in your .tfvars file:

  • The variable “image_location”, which points to a tarball, alongside the variable “image_location_key” if required.
  • The variables “private_registry”, “registry_username” and “registry_password” to point to a private docker registry where the ICP installation image is located.

For example:

image_location = "https://example-location/ibm-cloud-private-x86_64-3.1.1.tar.gz"

You will also need to alter your VM variables to have more nodes per type, as well as some extra setup. Copy the setup out of variables.tf (in the terraform-icp-azure repo) and update the nodes, name, and vm_size fields, as in the sketch below.
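For instance, a highly available master pool might look like the following; this is an illustrative sketch in the same format as the example file above, not the exact defaults from variables.tf:

master = {
    nodes         = "3"
    name          = "jsmith-master"
    vm_size       = "Standard_D8s_v3"
    os_disk_type  = "Standard_LRS"
    docker_disk_size = "100"
    docker_disk_type = "Standard_LRS"
}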

Next, you need to start the Terraform build by running these two commands (this is going to take around an hour, so you may want to nohup it):

terraform init
terraform apply -var-file={.tfvars_file} -auto-approve  

This may fail if you have Terraform v0.12.xx, as it introduces some breaking changes (as of July 2019). If this is the case, you’ll want to get Terraform v0.11.14 (https://releases.hashicorp.com/terraform/0.11.14/) and run it again.
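You can check which version you are running with:

terraform version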

Once all this is done, there should be some output giving your cluster details (you will want to save these). It looks something like this:

ICP Admin Password = {password}
ICP Admin Username = {username}
ICP Console URL = https://{cluster_ca_domain}:8443
ICP Kubernetes API URL = https://{cluster_ca_domain}:8001
ICP Proxy = {proxy_hostname}
cloudctl = cloudctl login --skip-ssl-validation -a https://{cluster_ca_domain}:8443 -u {username} -p {password} -n default -c {account_id}

You can now open the “ICP Console URL” link to display your ICP dashboard and log in using your credentials (“ICP Admin Username” and “ICP Admin Password”). You will also want to log in to the CLI using the cloudctl command shown in the output, as it sets up your Kubernetes CLI to also interact with your cluster.
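Once you’ve logged in with cloudctl, a quick way to confirm that kubectl is talking to your cluster (assuming you have kubectl installed locally):

kubectl get nodes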

Configure authentication for Docker CLI

You now have a private image registry sitting on your instance of Azure. You need to be able to work with that registry, as it’s where you’re going to upload and work with all of your images. To do this, you just need to quickly set up some authorisation so your CLI can communicate with it. Make sure you have Docker installed. I’m going to be using the documentation in IBM Knowledge Center, following the steps specifically for macOS. If you aren’t on a Mac, then follow that link and do the same for your OS.

Firstly, you need to edit your /etc/hosts file, appending the following:

{master_ip} {cluster_ca_domain}

Here, {master_ip} is the IP of your master node (which you can find on your instance of Azure), and {cluster_ca_domain} comes from the URL of your dashboard (basically the bit after https:// but before :{port_number}).
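For example, if your master’s public IP were 203.0.113.10 and your console URL were https://johnsmith-net.westeurope.cloudapp.azure.com:8443, the entry would be (both values are illustrative):

203.0.113.10 johnsmith-net.westeurope.cloudapp.azure.com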

Now you’ll want a copy of the registry certificate from the master node, and then you’ll want to add it into your keychain:

mkdir -p ~/.docker/certs.d/{cluster_ca_domain}\:8500
scp vmadmin@{cluster_ca_domain}:/etc/docker/certs.d/{cluster_ca_domain}\:8500/ca.crt ~/.docker/certs.d/{cluster_ca_domain}\:8500/ca.crt
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/.docker/certs.d/{cluster_ca_domain}\:8500/ca.crt

Finally, restart docker and then run…

docker login {cluster_ca_domain}:8500

…using the username and password generated by the Terraform script.

Setting up namespaces

For the purposes of this article, we’re going to deploy each of the integration capabilities into its own namespace. This involves using the UI to create a namespace with the correct pod security policy, and then uploading the capability into that namespace, which sets up some image pull secrets that you will use later. Feel free to grab only the capabilities you’re interested in from this list.

Navigate to the namespace creation page by selecting the following menu options: 

Hamburger Button (Top Left) -> Manage -> Namespaces -> Create Namespace

Then create the namespaces, which can be called anything you like. For this example, I name the namespaces as follows:

Capability                 Name       Pod Security Policy
ICP4I Platform Navigator   icp4i      ibm-restricted-psp
MQ                         icp4i-mq   ibm-anyuid-psp
App Connect                icp4i-ace  ibm-anyuid-psp
API Connect                icp4i-api  ibm-anyuid-hostpath-psp
Event Streams              icp4i-es   ibm-restricted-psp
DataPower                  icp4i-dp   ibm-anyuid-psp
Aspera                     icp4i-asp  ibm-anyuid-hostpath-psp

Uploading images

With the namespaces all set up, you can now actually upload the images you need. I’ve got them saved in the form of .tgz files; if you run the following commands for each file, it should upload the images required for each capability alongside most of the setup (sorting out image pull secrets and the like):

cloudctl target -n {namespace}
cloudctl catalog load-archive --archive {capability-package} --registry {cluster_ca_domain}:8500/{namespace}
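For example, to load a (hypothetical) Event Streams package into the icp4i-es namespace created earlier, it would look something like this; the archive file name is illustrative and depends on the package you downloaded:

cloudctl target -n icp4i-es
cloudctl catalog load-archive --archive eventstreams-capability.tgz --registry {cluster_ca_domain}:8500/icp4i-es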

Setting up Azure Premium Storage

You now need to set up some persistent storage, and Microsoft provides a method for this known as Azure Premium Storage. You’re going to set up both the Azure Disk and Azure File provisioners, going only as far as making storage classes of them. This allows the capabilities to create their own storage space as required, based on these storage classes.

Make sure you’re logged into the cloudctl CLI as specified in the Installing ICP via Terraform section.

To set up Azure Premium Storage, you’re going to need four YAML files: two for the actual storage classes, and two for the ClusterRole and ClusterRoleBinding (each YAML file is separated by the dashed line):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:azure-cloud-provider
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs:     ['get','create']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:azure-cloud-provider
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:azure-cloud-provider
subjects:
- kind: ServiceAccount
  name: persistent-volume-binder
  namespace: kube-system
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS

Copy each part of the preceding extract into its own YAML file, then run…

kubectl apply -f {file_name}

…on each of them. Then, if you check the storage classes (kubectl get sc), you should see one called azurefile and one called azuredisk.
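If you want to sanity-check dynamic provisioning before deploying anything, you can create and then remove a small test claim against one of the new classes (a throwaway sketch; the claim name is arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azuredisk-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azuredisk
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc azuredisk-test
kubectl delete pvc azuredisk-test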

Deploying IBM Cloud Pak for Integration Platform Navigator

Now you get to deploy the actual Platform Navigator; from there, you can deploy all of the other capabilities.

Log in to your ICP console, then select Catalog in the top right corner. This takes you to the range of things you can deploy onto ICP. For now, we’re only interested in the Navigator, so go ahead and search for ICP4I, which should return only one result (if not, it’s called ibm-icp4i-prod). Select that and get it all configured…

Helm Release Name: nav
Target Namespace: icp4i ({navigator_namespace})
License: Ticked

All Parameters
Image pull secret: sa-icp4i (sa-{namespace})
Hostname of the ingress proxy to be configured: {proxy_hostname}

Then click Install, wait a couple of minutes for the pods to start running (you can see this in the Helm release section of ICP), and then click Launch. You should be greeted by a welcome screen before being able to see an overview of the various capabilities, alongside the ability to create instances of them.
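If you’d rather watch from the terminal than the Helm release page, you can follow the pods coming up with:

kubectl get pods -n icp4i -w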

The IBM Cloud Pak for Integration Navigator
View of the Platform Navigator with a couple of instances in it

Capability Deployment Example (Event Streams)

Deploying a capability follows the same pattern as the Navigator. For Event Streams, find it in the Catalog and configure it as follows:

Helm Release Name: events
Target Namespace: icp4i-es ({eventstreams_namespace})
License: Ticked


All Parameters

Image pull secret: sa-icp4i-es (sa-{namespace})
Enable persistent storage for Apache Kafka: Ticked
Use dynamic provisioning for Apache Kafka: Ticked
Storage class name: azuredisk

As usual, select Install and wait a little while for everything to be set up. Launch it from the Navigator and it should take you to the Event Streams dashboard, where a very nice little status check tells you when everything is green and good to go.

The methods for the other capabilities are very similar, and you can view the capability documentation for specifics on how to get them set up.

Conclusion

You now have the tools to successfully implement the Platform Navigator with an instance of each integration product running, all on Microsoft Azure. 

So, what next? 

Well, that depends on what you need. You now have a wealth of integration capabilities at your disposal which can unlock innovation, from message & event handling to API management. You can learn more about each capability, how they can work together, and how you can modernise your integration by reading the capability documentation.  

If you have any questions or want to continue the discussion, feel free to contact me directly on Twitter or LinkedIn. You can also join the conversation in the ICP4I channel on our Slack, or leave a comment below!

12 comments on “IBM Cloud Pak for Integration on Microsoft Azure”

  1. Hi James,
    Are there any additional steps involved (like configuring OpenShift) if the OS is Red Hat Linux?

    • Hey Kishore,

      Yes, if you want to deploy this to an OpenShift cluster, there will be extra steps to set it up. We haven’t tested this specific scenario (ICP4I on OpenShift on Azure) ourselves, but once you get ICP and OpenShift set up on Azure, the ICP4I steps should be the same.

  2. Hi James, is there a similar document available for Cloud Pak Integration on Azure Stack ? Thank you.

    • Hey Kishore,

      This is the document for getting IBM Cloud Pak for Integration onto an Azure stack.

      Cheers,
      James

  3. Suresh Patnam July 04, 2019

    Thanks James. I increased the quota to 100 and I was able to fix the error. Now I am getting a new error.

    Any advice or help is appreciated.

    —Error Details—
    Error: Error applying plan:

    6 errors occurred:
    * module.icpprovision.null_resource.icp-cluster[3]: error executing "/tmp/terraform_363528867.sh": Process exited with status 127
    * module.icpprovision.null_resource.icp-cluster[2]: error executing "/tmp/terraform_1414783906.sh": Process exited with status 127
    * module.icpprovision.null_resource.icp-cluster[5]: error executing "/tmp/terraform_510671063.sh": Process exited with status 127
    * module.icpprovision.null_resource.icp-cluster[0]: error executing "/tmp/terraform_641774278.sh": Process exited with status 127
    * module.icpprovision.null_resource.icp-cluster[1]: error executing "/tmp/terraform_1079780461.sh": Process exited with status 127
    * module.icpprovision.null_resource.icp-cluster[4]: error executing "/tmp/terraform_309349283.sh": Process exited with status 127

  4. Suresh Patnam July 01, 2019

    Hi, I am running this on Azure and facing some issues. I followed the steps for basic deployment. I get the below error when I issue the terraform command from the templates\icp-ce folder.

    I just updated the .tfvars file and didn’t touch any of the .tf files.

    Any help is appreciated

    Warning: This Data Source has been deprecated in favour of the ‘azurerm_role_definition’ resource that now can look up role definitions by names.

    As such this Data Source will be removed in v2.0 of the AzureRM Provider.

    on msi.tf line 6, in data “azurerm_builtin_role_definition” “builtin_role_definition”:
    6: data “azurerm_builtin_role_definition” “builtin_role_definition” {

    Warning: “public_ip_address_allocation”: [DEPRECATED] this property has been deprecated in favor of `allocation_method` to better match the api

    on network.tf line 49, in resource “azurerm_public_ip” “bootnode_pip”:
    49: resource “azurerm_public_ip” “bootnode_pip” {

    Warning: “public_ip_address_allocation”: [DEPRECATED] this property has been deprecated in favor of `allocation_method` to better match the api

    on network.tf line 59, in resource “azurerm_public_ip” “master_pip”:
    59: resource “azurerm_public_ip” “master_pip” {

    Warning: “public_ip_address_allocation”: [DEPRECATED] this property has been deprecated in favor of `allocation_method` to better match the api

    on network.tf line 69, in resource “azurerm_public_ip” “proxy_pip”:
    69: resource “azurerm_public_ip” “proxy_pip” {

    Error: Unsupported block type

    on instances.tf line 58, in resource “azurerm_availability_set” “workers”:
    58: tags {

    Blocks of type “tags” are not expected here. Did you mean to define argument
    “tags”? If so, use the equals sign to assign it a value.

    Error: Unsupported argument

    on instances.tf line 134, in resource “azurerm_virtual_machine” “master”:
    134: identity = {

    An argument named “identity” is not expected here. Did you mean to define a
    block of type “identity”?

    Error: Unsupported argument

    on instances.tf line 180, in resource “azurerm_virtual_machine” “proxy”:
    180: identity = {

    An argument named “identity” is not expected here. Did you mean to define a
    block of type “identity”?

    Error: Unsupported argument

    on instances.tf line 225, in resource “azurerm_virtual_machine” “management”:
    225: identity = {

    An argument named “identity” is not expected here. Did you mean to define a
    block of type “identity”?

    Error: Missing resource instance key

    on network.tf line 85, in resource “azurerm_network_interface” “boot_nic”:
    85: network_security_group_id = “${azurerm_network_security_group.boot_sg.id}”

    Because azurerm_network_security_group.boot_sg has “count” set, its attributes
    must be accessed on specific instances.

    For example, to correlate with indices of a referring resource, use:
    azurerm_network_security_group.boot_sg[count.index]

    Error: Missing resource instance key

    on network.tf line 91, in resource “azurerm_network_interface” “boot_nic”:
    91: public_ip_address_id = “${azurerm_public_ip.bootnode_pip.id}”

    Because azurerm_public_ip.bootnode_pip has “count” set, its attributes must be
    accessed on specific instances.

    For example, to correlate with indices of a referring resource, use:
    azurerm_public_ip.bootnode_pip[count.index]

    • James Kirk July 02, 2019

      Hey Suresh,

      Since releasing this blog, Terraform 0.12.xx came out, which introduces these breaking changes. I will update the post with the fix, but the best way around it is to downgrade your Terraform version to 0.11.xx (which is the same version I used locally).

      Hope this helps!

      • Thanks James. I reverted to an older version of Terraform and am now able to proceed with the steps. But the Terraform process failed with the below error. Any advice on how I can get past this error?

        In my .tfvars file, I used the same config as yours for ssh:

        ssh_public_key = “ssh-rsa randomfakepublickey”

        ——Error Info——-

        Error: Error applying plan:

        12 errors occurred:
        * azurerm_virtual_machine.master: 1 error occurred:
        * azurerm_virtual_machine.master: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid." Target="linuxConfiguration.ssh.publicKeys.keyData"

        * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[2]: timeout - last error: Error connecting to bastion: dial tcp 40.74.37.206:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
        * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[4]: timeout - last error: Error connecting to bastion: dial tcp 40.74.37.206:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
        * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[1]: timeout - last error: Error connecting to bastion: dial tcp 40.74.37.206:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
        * azurerm_virtual_machine.proxy: 1 error occurred:
        * azurerm_virtual_machine.proxy: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid." Target="linuxConfiguration.ssh.publicKeys.keyData"

        * azurerm_virtual_machine.management: 1 error occurred:
        * azurerm_virtual_machine.management: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid." Target="linuxConfiguration.ssh.publicKeys.keyData"

        * azurerm_virtual_machine.worker[0]: 1 error occurred:
        * azurerm_virtual_machine.worker.0: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid." Target="linuxConfiguration.ssh.publicKeys.keyData"

        * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[3]: timeout - last error: Error connecting to bastion: dial tcp 40.74.37.206:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
        * azurerm_virtual_machine.worker[1]: 1 error occurred:
        * azurerm_virtual_machine.worker.1: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid." Target="linuxConfiguration.ssh.publicKeys.keyData"

        * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[0]: timeout - last error: Error connecting to bastion: dial tcp 40.74.37.206:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
        * azurerm_virtual_machine.worker[2]: 1 error occurred:
        * azurerm_virtual_machine.worker.2: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid." Target="linuxConfiguration.ssh.publicKeys.keyData"

        * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[5]: timeout - last error: Error connecting to bastion: dial tcp 40.74.37.206:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

        Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure.

        • James Kirk July 03, 2019

          Ah sorry! The SSH key I used there is a made-up one (as I didn’t want to publicly post my own public key!). You’ll need to generate your own (an example of how to do it on macOS here – https://docs.joyent.com/public-cloud/getting-started/ssh-keys/generating-an-ssh-key-manually/manually-generating-your-ssh-key-in-mac-os-x).
          If you then look at your public key, it should take the form of “ssh-rsa {key-data}”; just change that ssh-rsa randomfakepublickey to that!

          (NOTE: I’ve noticed that sometimes Terraform fails unless I delete the .tfstate file; if you want a completely clean run, do that.)

            • Thanks James. I got through the SSH error. Now the deployment keeps failing because of the number of cores.

            I increased the quota to 60, but it’s still asking for more. What should I set as the maximum number of cores?

            I am not using HA; I am just doing the basic install.

            Thanks for all the help so far.

            — Error Info—-

            10 errors occurred:
            * azurerm_virtual_machine.worker[1]: 1 error occurred:
            * azurerm_virtual_machine.worker.1: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="OperationNotAllowed" Message="Operation results in exceeding quota limits of Core. Maximum allowed: 60, Current in use: 60, Additional requested: 8. Please read more about quota increase at https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/{\"subId\":\"3bdaee0d-2e87-486b-9d17-d22bda6de9dc\",\"pesId\":\"15621\",\"supportTopicId\":\"32447243\"}."

            * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[2]: timeout - last error: Error connecting to bastion: dial tcp 52.137.63.197:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
            * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[3]: timeout - last error: Error connecting to bastion: dial tcp 52.137.63.197:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
            * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[1]: timeout - last error: Error connecting to bastion: dial tcp 52.137.63.197:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
            * azurerm_virtual_machine.worker[0]: 1 error occurred:
            * azurerm_virtual_machine.worker.0: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="OperationNotAllowed" Message="Operation results in exceeding quota limits of Core. Maximum allowed: 60, Current in use: 60, Additional requested: 8. Please read more about quota increase at https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/{\"subId\":\"3bdaee0d-2e87-486b-9d17-d22bda6de9dc\",\"pesId\":\"15621\",\"supportTopicId\":\"32447243\"}."

            * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[0]: timeout - last error: Error connecting to bastion: dial tcp 52.137.63.197:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
            * azurerm_virtual_machine.worker[2]: 1 error occurred:
            * azurerm_virtual_machine.worker.2: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="OperationNotAllowed" Message="Operation results in exceeding quota limits of Core. Maximum allowed: 60, Current in use: 60, Additional requested: 8. Please read more about quota increase at https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/{\"subId\":\"3bdaee0d-2e87-486b-9d17-d22bda6de9dc\",\"pesId\":\"15621\",\"supportTopicId\":\"32447243\"}."

            * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[5]: timeout - last error: Error connecting to bastion: dial tcp 52.137.63.197:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
            * azurerm_virtual_machine.management: 1 error occurred:
            * azurerm_virtual_machine.management: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="OperationNotAllowed" Message="Operation results in exceeding quota limits of Core. Maximum allowed: 60, Current in use: 60, Additional requested: 8. Please read more about quota increase at https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/{\"subId\":\"3bdaee0d-2e87-486b-9d17-d22bda6de9dc\",\"pesId\":\"15621\",\"supportTopicId\":\"32447243\"}."

            * module.icpprovision.null_resource.icp-cluster-preconfig-hook-stop-on-fail[4]: timeout - last error: Error connecting to bastion: dial tcp 52.137.63.197:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

          • James Kirk July 04, 2019

            Odd! It seems to be requesting 8 more, so maybe add that plus a bit of a buffer (75-80)?
