
Develop a Terraform template to deploy enterprise applications

IBM Cloud Pak for Multicloud Management Managed Services provides Terraform templates that use Chef cookbooks to deploy a select set of middleware. If the out-of-the-box middleware templates do not meet your organization’s requirements, you might need to develop new Terraform templates that use your own in-house Chef cookbooks, or you might choose to use the Chef cookbooks that are provided by the Chef Supermarket.

The Chef Supermarket contains the largest selection of Chef cookbooks (more than three thousand) available on the internet. Before you develop your own in-house cookbook, consider looking for one in the Chef Supermarket to reduce your development time and effort.

In this tutorial, you will learn how to use cookbooks from the Chef Supermarket to deploy your own enterprise-ready applications with IBM Cloud Pak for Multicloud Management Managed Services. Specifically, the tutorial shows how to use the Elasticsearch cookbook from the Chef Supermarket to develop a Terraform template that deploys Elasticsearch on a virtual machine in a cloud environment.

Prerequisites

The following items are needed prior to working on this tutorial:

  1. A working IBM Cloud Pak for Multicloud Management installation. Follow the steps in the IBM Documentation.
  2. A working IBM Cloud Pak for Multicloud Management Managed Service environment deployed within the IBM Cloud Pak for Multicloud Management installation.
  3. Access to a working cloud environment, for example, VMware, AWS or IBM Cloud.

Also, you must have a working knowledge of IBM Cloud Pak for Multicloud Management Managed Services Content Runtime and Chef.

Steps

In this tutorial, you will use one of the IBM-provided Terraform templates from GitHub as an exemplar, and modify the template to deploy Elasticsearch.

Here are the steps that you will follow:

  1. Pull the latest Pattern Manager from Docker Hub
  2. Load the Elasticsearch cookbook from the Chef Supermarket
  3. Clone and modify the example Terraform template
  4. Import the modified template into IBM Cloud Pak for Multicloud Management
  5. Deploy the template to a cloud environment

Step 1: Pull the latest Pattern Manager from Docker Hub

The Pattern Manager is a component of the IBM Cloud Pak for Multicloud Management Managed Services content runtime, and it is responsible for interacting with the Chef server. The Pattern Manager APIs allow you to load and manage content in the form of cookbooks, roles, and recipes, and to orchestrate deploying that content to virtual machines. For more information about the IBM Cloud Pak for Multicloud Management Managed Services content runtime, refer to the IBM Documentation.

The first step is to pull an upgraded Pattern Manager Docker container that is capable of loading cookbooks from the Chef Supermarket.

The preferred method to get an updated Pattern Manager Docker container is to deploy a new content runtime from the IBM Cloud Pak for Multicloud Management user interface. Follow the instructions in the IBM Documentation to deploy a new content runtime.

Alternatively, you can use the content runtime script, which pulls the latest Docker images for the Pattern Manager and Software Repository without removing the existing content in your Chef server or the product binaries in your Software Repository. In addition to updating the Docker containers, the script loads the latest versions of the IBM middleware cookbooks onto your Chef server, so back up any cookbooks you want to preserve before you run it. To use the content runtime script, SSH to your existing content runtime virtual machine and run the following command:

cd /root/advanced-content-runtime; ./launch-docker-compose.sh
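
If you want to keep a copy of a cookbook before the script reloads content, one option is the knife CLI on the content runtime. The following is a minimal sketch; the cookbook name, version, and backup directory are placeholders, and it assumes knife is already configured against your Chef server:

# Download a cookbook (placeholder name and version) from the Chef server to a local backup directory
knife cookbook download my_cookbook 1.0.0 --dir /root/cookbook-backups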

Step 2: Load the Elasticsearch cookbook from the Chef Supermarket

Adding a cookbook from the Chef Supermarket is as easy as knowing the cookbook name and its dependencies. If you don’t know its dependencies, the API returns an error message that lists the dependencies you are missing. There are also optional parameters for specifying the cookbook version and for installing prerequisite cookbooks. If a prerequisite cookbook already exists in your Chef server, the API does not overwrite it unless the overwrite_existing parameter is set to true.

This tutorial shows you how to load the Elasticsearch cookbook from Chef Supermarket, including a number of prerequisite cookbooks. Use the Pattern Manager APIs to download the Elasticsearch cookbook and all of the dependencies from the Chef Supermarket in a single request. The Pattern Manager access token is required to make this request.

The examples in this tutorial are written generically so that you can use the REST tool of your choice to make the requests. The example below shows the HTTP method, URL, header, and request body rather than the syntax of any specific command-line client.

POST https://{patmgr_ip}:5443/v1/upload/chef

Header: Authorization = Bearer $ACCESS_TOKEN

{
  "cookbook_name": "elasticsearch",
  "cookbook_version": "3.4.3",
  "source_repos": "chef_supermarket",
  "deps": [
    {
      "cookbook_name": "apt",
      "cookbook_version": "6.1.4"
    },
    {
      "cookbook_name": "ark",
      "overwrite_existing": "true"
    },
    {
      "cookbook_name": "build-essential"
    },
    {
      "cookbook_name": "chef-sugar"
    },
    {
      "cookbook_name": "homebrew"
    },
    {
      "cookbook_name": "java"
    },
    {
      "cookbook_name": "mingw"
    },
    {
      "cookbook_name": "ohai"
    },
    {
      "cookbook_name": "seven_zip"
    },
    {
      "cookbook_name": "windows"
    },
    {
      "cookbook_name": "yum"
    }
  ]
}
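
For example, you can send the request with curl. The following is a minimal sketch, assuming the request body above is saved in a file named elasticsearch-cookbooks.json, the Pattern Manager access token is exported as ACCESS_TOKEN, and -k is acceptable because the content runtime uses a self-signed certificate:

# Load the Elasticsearch cookbook and its dependencies through the Pattern Manager
curl -k -X POST "https://{patmgr_ip}:5443/v1/upload/chef" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d @elasticsearch-cookbooks.json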

A successful request will result in a response like this, where you can see which cookbooks have been loaded from the Chef Supermarket:

{
    "cookbooks": {
        "message": "Cookbooks uploaded",
        "stderr": [],
        "stdout": ["Uploading apt            [6.1.4]",
                           "Uploading ark            [3.1.0]
                           "Uploading build-essential [8.0.4],
                           "Uploading chef-sugar     [3.6.0]",
                           "Uploading elasticsearch  [3.4.3]",
                           "Uploading homebrew       [4.3.0]",
                           "Uploading java           [1.50.0]",
                           "Uploading mingw          [2.0.1]",
                           "Uploading ohai           [5.2.0]",
                           "Uploading seven_zip      [2.0.2]",
                           "Uploading windows        [3.4.4]",
                           "Uploading yum            [5.1.0]",
                           "Uploaded all cookbooks."]
    }
}

Verify that the cookbooks have been loaded from the Chef Supermarket using the /v1/info/chef API:

GET https://{patmgr_ip}:5443/v1/info/chef

Header: Authorization = Bearer $ACCESS_TOKEN

{
  "cookbooks": [
    "apt 6.1.4",
    "ark 3.1.0",
    "build-essential 8.0.4",
    "chef-sugar 3.6.0",
    "elasticsearch 3.4.3",
    "homebrew 4.3.0",
    "java 1.50.0",
    "mingw 2.0.1",
    "ohai 5.2.0",
    "seven_zip 2.0.2",
    "windows 3.4.4",
    "yum 5.1.0"
  ],
  "recipes": [
    "apt",
    "apt::cacher-client",
    "apt::cacher-ng",
    "apt::unattended-upgrades",
    "ark",
    "build-essential",
    "build-essential::_windows",
    "chef-sugar",
    "elasticsearch",
    "java",
    "java::default_java_symlink",
    "java::homebrew",
    "java::ibm",
    "java::ibm_tar",
    "java::notify",
    "java::openjdk",
    "java::oracle",
    "java::oracle_i386",
    "java::oracle_jce",
    "java::oracle_rpm",
    "java::purge_packages",
    "java::set_attributes_from_version",
    "java::set_java_home",
    "java::windows",
    "mingw",
    "ohai",
    "seven_zip",
    "windows",
    "yum",
    "yum::dnf_yum_compat"
  ],
  "roles": []
}
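
As with the upload request, you can run this verification with the REST tool of your choice; a curl sketch under the same assumptions as before looks like this:

# List the cookbooks, recipes, and roles currently loaded on the Chef server
curl -k -X GET "https://{patmgr_ip}:5443/v1/info/chef" \
  -H "Authorization: Bearer $ACCESS_TOKEN"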

Step 3: Clone and modify the example template

First, clone the template_ibm_mq_v9_standalone repository by using the git clone command. After you clone the repository, you will see a directory named template_ibm_mq_v9_standalone on your system. The repository contains templates to deploy IBM MQ 9.0 on a single virtual machine in VMware vSphere, IBM Cloud, and AWS.
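
A minimal sketch of the clone follows; the repository URL is a placeholder, so substitute the location where the template is hosted (or your own fork):

# Clone the IBM MQ 9.0 standalone template repository (placeholder URL)
git clone https://github.com/<your-git-org>/template_ibm_mq_v9_standalone.git
cd template_ibm_mq_v9_standalone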

In this tutorial, we will use the template under the template_ibm_mq_v9_standalone/vmware/terraform folder as an exemplar and modify it to deploy Elasticsearch on VMware vSphere. The original template deploys IBM MQ 9.0 into VMware vSphere. If you want to deploy Elasticsearch on IBM Cloud or AWS instead, start from the template under the template_ibm_mq_v9_standalone/ibmcloud/terraform or template_ibm_mq_v9_standalone/amazon/terraform folder.

The two files that need to be modified are camvariables.json and ibm_mq_v9_standalone.tf. They are found in the vmware/terraform folder of the IBM MQ V9 template folder. The file ibm_mq_v9_standalone.tf is the Terraform template file and camvariables.json contains a list of all of the variables needed to deploy the template.

|-- template_ibm_mq_v9_standalone
    |-- vmware
        |-- terraform
            |-- camvariables.json
            |-- ibm_mq_v9_standalone.tf

At a high level, you will remove all parameters from the mqnode01 group from both files, add a property to specify the public port for the Elasticsearch server, and update the call to the CAMC Provider to specify a new run list and node attributes to deploy the Elasticsearch Chef cookbook.

  1. Open camvariables.json and remove all variables that belong to the mqnode01 group. Be careful to only remove the variables that belong to the mqnode01 group and not the variables that have MQNode01 in the name but belong to the virtualmachine group. Keep the MQNode01 group in the input_groups as you will reuse it for the new variable.

  2. Add a single variable to the MQNode01 group in camvariables.json:

    {
      "name": "MQNode01-es_http_port",
      "type": "string",
      "description": "The HTTP port for elasticsearch",
      "default": "9200",
      "hidden": false,
      "label": "HTTP Port",
      "secured": false,
      "required": true,
      "immutable": false,
      "immutable_after_create": true,
      "group_name": "mqnode01"
    }
    
  3. Open ibm_mq_v9_standalone.tf and remove the same set of variables. The mqnode01 variables are grouped together in a section that starts with the MQNode01 variables comment and ends just before the section titled virtualmachine variables. Again, be careful to remove only the variables that belong to the mqnode01 group.

  4. Add the HTTP port variable to ibm_mq_v9_standalone.tf:

    #Variable : MQNode01-es_http_port
    variable "MQNode01-es_http_port" {
      type        = "string"
      description = "The HTTP port for elasticsearch"
    }
    
  5. The vsphere_virtual_machine and camc_bootstrap resources do not require any updates.

  6. Update the runlist and node_attributes properties in the camc_softwaredeploy resources. Set the runlist to specify the Java and Elasticsearch recipes and add java and elasticsearch properties to the node_attributes:

    "runlist": "recipe[java],recipe[elasticsearch]",
    "node_attributes": {
    "ibm": {
    "sw_repo": "${var.ibm_sw_repo}",
    "sw_repo_user": "${var.ibm_sw_repo_user}"
    },
    "java": {
    "jdk_version": "8"
    },
    "elasticsearch": {
    "configure": {
    "configuration": {
      "http.port": "${var.MQNode01-es_http_port}",
      "network.host": "0.0.0.0"
    }
    }
    }
    }
    

Step 4: Import the template

To deploy your template, you first need to import the modified template into IBM Cloud Pak for Multicloud Management Managed Services. In this example, you will upload the individual files through the user interface. When your templates are ready for production, store them in a Git repository, such as GitHub or GitLab, and import them from there.
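
If you later move the modified template into source control, a minimal sketch might look like the following; the remote name and URL are placeholders for your own repository:

# Commit the modified template files and push them to your own Git repository (placeholder remote)
git add vmware/terraform/camvariables.json vmware/terraform/ibm_mq_v9_standalone.tf
git commit -m "Adapt the MQ template to deploy Elasticsearch"
git remote add template-repo https://github.com/<your-git-org>/<your-template-repo>.git
git push template-repo HEAD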

  1. Log into the IBM Cloud Pak for Multicloud Management user interface. Select Automate Infrastructure > Manage Services to open the Manage Services web console.

  2. In the Manage Services user interface, click the hamburger menu (horizontal lines in top left corner), and then select Library > Terraform Templates.

  3. Click Import template. In the Import Template dialog, select From Scratch for Import template source. Enter a title and a description, and then select VMware vSphere as Cloud Provider. Then, click Import.

  4. On the page that loads, click the Manage Template tab. In the Manage Template tab, click Add your template source code here to open the Add template source dialog. In the Add template source dialog, select From File for Import type and upload the .tf file that you edited earlier. Finally, click Add to upload the file.

  5. Next, click Update parameters to upload your camvariables.json file.

  6. Click Save. You now have an Elasticsearch template that is ready to deploy.

Step 5: Deploy the template

  1. Deploy the template as you would any other template. The only difference here is that the Elasticsearch HTTP port is now a customizable parameter; to see it in action, change it from the default of 9200 to 9201.

  2. Once the deployment is successful, the IP address assigned to the virtual machine and the Elasticsearch server port are shown in the deployment properties.

  3. Enter the IP address and port in your web browser and you will see your running Elasticsearch server.
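
You can also check from a terminal. The following is a quick sketch that assumes the virtual machine was assigned the example address 192.0.2.10 and that you kept port 9201; a running server returns its name, cluster name, and version information as JSON:

# Query the Elasticsearch root endpoint on the deployed virtual machine (example IP address)
curl http://192.0.2.10:9201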

Summary

In this tutorial, you learned how to load a cookbook from the Chef Supermarket, modify a Terraform template, and then use IBM Cloud Pak for Multicloud Management to deploy the template.