
IBM Cloud Private is an application platform for developing and managing on-premises, containerized applications. The platform can be deployed off-premises on a public cloud provider or on-premises within a client data center. It is an integrated environment for managing containers that includes the Kubernetes container orchestrator, a private image repository, a management console, and monitoring frameworks.

The IBM Cloud Private pattern covered in this tutorial is a PureApplication system pattern that deploys an IBM Cloud Private application in different topologies, for a Community Edition (CE) or an Enterprise Edition (EE). The pattern offers a fast, reliable, and reproducible approach to installing and managing the IBM Cloud Private application on a PureApplication System.

This tutorial shows you how to install IBM Cloud Private CE or EE on PureApplication environments using the IBM Cloud Private pattern. It covers the steps required to install the pattern, describes the IBM Cloud Private topologies that the pattern templates deploy, and reviews the administrative operations available after deployment.

What you need to deploy IBM Cloud Private on IBM PureApplication

The IBM Cloud Private pattern is supported on both Intel and Power IBM PureApplication environments:

  1. PureApplication System W1500 V2.2.3 or higher
  2. PureApplication System W2500 V2.2.3 or higher
  3. PureApplication Platform W3500 V2.2.3 or higher
  4. PureApplication Platform W3550 V2.2.3 or higher
  5. PureApplication Software V2.2.3 or higher (Intel only)
  6. PureApplication Service on SoftLayer V2.2.3 or higher
  7. PureApplication System W3700 V2.2.3 or higher

Note: When using a Power-based IBM PureApplication System W3700 environment, clients must bring their own Linux Power PC 64 Little Endian virtual image.

Patterns are available for both Intel and Power from IBM Fix Central as an IBM PureApplication emergency fix.

  1. IBM Cloud Private Pattern type version 1.0.0.2 to install IBM Cloud Private version 2.1.0.2 for Linux 64-bit, x86_64
  2. IBM Cloud Private Pattern type version 1.0.0.2 to install IBM Cloud Private version 2.1.0.2 for Power PC 64 Little Endian

After you download the pattern, follow the instructions in IBMCloudPrivatePattern_QSG.pdf to install the pattern on your PureApplication environment. You need to have the pattern installed on your PureApplication environment before you can follow the steps described in this tutorial.

Both the Intel and Power pattern types have a dependency on the Docker pattern type version 1.0.6.0. This pattern type is included with PureApplication 2.2.5 on Intel. The pattern is also supported on PureApplication 2.2.3 and 2.2.4 Intel environments, but there you will have to download and install the Docker pattern type version 1.0.6.0 or higher yourself. This pattern type is available as part of the "Group_Content_PureApplicationSystem_2.2.5.0_Intel" package on IBM Fix Central.

Note: On a Power-based IBM PureApplication System W3700 environment, you must download and install the Docker pattern type version 1.0.6.0 from IBM Fix Central. (It is not part of "Group_Content_PureApplicationSystem_2.2.5.0_Power".)

In addition to the IBM Cloud Private Pattern and its dependencies, you also need to ensure that you have the following in place when using Intel-based PureApplication environments:

  • A Red Hat Enterprise Linux 7.x virtual image, installed as IBM OS Image for Linux for Red Hat Linux Systems V3.0.8.0 (included with IBM PureApplication 2.2.4) or V3.0.9.0 (included with IBM PureApplication 2.2.5).
  • If you are using an older version of the IBM OS Image for Linux for Red Hat Linux Systems, you also need a fully configured and working Red Hat Satellite Server 6.0 integration. This is required to download and install the OS packages that IBM Cloud Private requires, so you need to have the shared service "Red Hat Satellite Six Service (External)" deployed. This service can be set up to integrate either with an existing Red Hat Satellite Server 6.0 server or with a deployed instance of the "Red Hat Satellite Server Version 6.2" virtual system pattern.

Note: IBM PureApplication Software clients have to bring their own Red Hat Enterprise Linux 7.x virtual image.

Step 1. Select the IBM Cloud Private Pattern template

The IBM Cloud Private pattern type includes a set of virtual system patterns that can be used to deploy an IBM Cloud Private cluster. These virtual system patterns are effectively a set of templates, found in the PureApplication user interface under Patterns > Virtual System Patterns. By entering “IBM Cloud Private” as a filter, only the IBM Cloud Private virtual system patterns are shown.

Using IBM Cloud Private as a filter

These template virtual system patterns allow you to deploy a number of different IBM Cloud Private topologies. Refer to the IBM Cloud Private Knowledge Center to learn more about the architecture of IBM Cloud Private.

  1. IBM Cloud Private—deploys boot, master, proxy, management, and worker nodes on separate hosts. You can choose to deploy 1, 3, or 5 master and proxy hosts, up to 3 management hosts, and up to 10 worker hosts.
  2. IBM Cloud Private Boot Master Proxy same host—deploys boot, management, master, and proxy nodes on the same host, and worker nodes on separate hosts. You can choose to deploy up to 10 worker hosts but only one host for the boot, master, and proxy.
  3. IBM Cloud Private Master Proxy same host—deploys boot on one host, master and proxy nodes on the same host, and worker and management nodes on separate hosts. You can choose to deploy 1, 3, or 5 master and proxy hosts, up to 3 management hosts, and up to 10 worker hosts. There will be only one boot host.
  4. IBM Cloud Private Test Environment—deploys all nodes (boot, management, master, proxy, and worker) on the same host.
  5. IBM Cloud Private – GPFS master registry—provides the same deployment configuration as the IBM Cloud Private template, with the addition of IBM General Parallel File System (GPFS) shared file system support. IBM GPFS is now known as IBM Spectrum Scale. The GPFS shared file system here is used as shared storage across master and proxy nodes for /var/lib/registry and /var/lib/icp/audit (for the private image registry).
  6. IBM Cloud Private Master Proxy same host – GPFS master registry—provides the same deployment configuration as the IBM Cloud Private Master Proxy same host template, with the addition of GPFS shared file system support across master nodes for /var/lib/registry and /var/lib/icp/audit (for the private image registry).

For templates 5 and 6 above, you must have a GPFS Shared Service instance deployed in the same environment profile with this deployment. Make sure the file system name you specify on the GPFS Client Policy Master node matches the file system name available on the GPFS Server instance that the GPFS Shared Service points to, and that the file system size is at least 60GB.

Note: GPFS is not currently used to provide a highly available persistent storage provider for IBM Cloud Private.

All virtual system patterns except the IBM Cloud Private Test Environment support adding or removing IBM Cloud Private worker, proxy, or management nodes after deployment. This is done through Manual Scaling Operations from the Virtual System Instance console. Table 1 provides an overview of the types of IBM Cloud Private nodes, the number that are supported, and whether they can be scaled after deployment.

Table 1. IBM Cloud Private nodes
Node type       | Number of nodes supported | Can be added/removed after deployment | Comments
HA boot node    | 1                         | N/A                                   |
Master node     | 1, 3, or 5 (must be odd)  | No                                    | The "Virtual IP for Cluster Master HA" pattern parameter must be set when deploying multiple masters
Management node | 1 or more                 | Yes                                   |
Proxy node      | 1 or more                 | Yes                                   | The "Virtual IP for Proxy Master HA" pattern parameter must be set when deploying multiple proxies
Worker node     | 1 or more                 | Yes                                   |

Note: IBM Cloud Private (and Kubernetes) supports only an odd number of master nodes, so that quorum can be maintained. Refer to High Availability IBM Cloud Private clusters in the IBM Cloud Private Knowledge Center for more details.
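The quorum requirement can be made concrete with a little arithmetic: a cluster of N masters needs floor(N/2) + 1 members in agreement, so it tolerates the loss of N minus quorum members. A small sketch (the function names are ours, just for illustration):

```shell
# Quorum for an N-member cluster is floor(N/2) + 1; the cluster keeps
# working as long as no more than N - quorum members are down.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 2 3 4 5; do
  echo "masters=$n quorum=$(quorum $n) tolerated_failures=$(tolerated $n)"
done
```

Note that 4 masters tolerate no more failures than 3 (one in each case), which is why only odd master counts are offered.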

Note: IBM Cloud Private supports one or more proxy nodes, but the current IBM Cloud Private pattern has a limitation: it only supports an odd number of proxy nodes at deployment time. Proxy nodes can be added or removed afterward, however.

In this tutorial, we will use the IBM Cloud Private virtual system pattern template to demonstrate the automated deployment of a more complex IBM Cloud Private topology as shown in the following diagram.

IBM Cloud Private topology for this tutorial
Step 2. Deploy the IBM Cloud Private virtual system pattern

Select Patterns > Virtual System Patterns and select the IBM Cloud Private Virtual System Pattern, then click on the Deploy icon to start a new deployment.

Deploying the pattern

Pattern attributes

The following pattern attributes will be displayed for the new deployment:

Pattern attributes

All required values are preset, so you can always deploy directly without making any changes. However, in this example we will make some changes.

Installation type

This is the type of the IBM Cloud Private binaries that are used by the deployment: "IBM Cloud Private-ee" for Enterprise Edition (EE) or "IBM Cloud Private-ce" for Community Edition (CE).

The CE binaries are packaged with the IBM Cloud Private pattern on IBM Fix Central. You can also import other IBM Cloud Private binaries, CE or EE, and use them with the pattern. Refer to the IBM Cloud Private pattern documentation in IBMCloudPrivatePattern_QSG.pdf to see how to upload these binaries to the Storehouse that will be used by your deployment.

If the VMs deployed on your PureApplication environment have access to the internet, you can skip the step that uploads the CE binaries to the Storehouse. You must set the Installation type to "IBM Cloud Private-ce" and specify the version of the IBM Cloud Private Community Edition to be downloaded from Docker Hub. At deployment time, the pattern will first attempt to get the binaries from the Storehouse; when the binaries are not found there, it will download them from Docker Hub.
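For reference, the CE installer is distributed on Docker Hub as the ibmcom/icp-inception image, which is what the pattern fetches when the Storehouse holds no binaries. This sketch only assembles the image reference the pattern would pull for this tutorial's release; the pattern drives the actual pull itself:

```shell
# Hypothetical manual equivalent of what the pattern does for a CE
# install with internet access: pull the inception (installer) image.
ICP_VERSION="2.1.0.2"
IMAGE="ibmcom/icp-inception:${ICP_VERSION}"
echo "Would pull: ${IMAGE}"
# On a host with Docker and internet access, you would run:
# docker pull "${IMAGE}"
```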

Remember that if you want to deploy more than one master and/or proxy host:

  1. You need to use IBM Cloud Private EE binaries. Deploying more than one master or proxy node is considered an HA configuration and is available only with the EE version.
  2. If more than one master host is deployed, the Virtual IP for Cluster Master HA property must be set. Similarly, the Virtual IP for Proxy Master HA property must be set if you select more than one proxy node.
  3. The Virtual IP for the master or proxy cluster must be an IP address that is not used by other deployments. It must also be accessible to the deployment that is using it on the specified Ethernet interface. A quick way to make a virtual IP available is to deploy a dummy Base OS virtual system, stop it, and then use the IP of this instance as the virtual IP for the IBM Cloud Private HA deployment.
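The availability check in step 3 can be scripted. This is only a sketch: the candidate IP and interface name below are placeholders that you would replace with your own values, and a single ping probe is a heuristic, not a guarantee.

```shell
# Returns success if the address does not answer a single ping probe,
# i.e. appears to be unassigned on the local network.
vip_free() { ! ping -c 1 -W 1 "$1" >/dev/null 2>&1; }

VIP="172.17.37.102"   # hypothetical candidate virtual IP
IFACE="eth1"          # hypothetical HA Ethernet interface

if vip_free "$VIP"; then
  RESULT="OK: $VIP appears to be free"
else
  RESULT="WARNING: $VIP already responds - choose another virtual IP"
fi
echo "$RESULT"

# The interface named in the pattern parameters must exist on the VMs:
ip link show "$IFACE" >/dev/null 2>&1 || echo "note: interface $IFACE not found on this host"
```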

IBM Cloud Private version

This is the version of the binaries that’s uploaded to the Storehouse. You can set a different version here if you have uploaded other IBM Cloud Private binaries to the Storehouse. You can have more than one set of IBM Cloud Private binaries uploaded to the Storehouse; this could be a mix of CE and EE editions or different versions. Each deployment will pick up the right edition and version based on the selected installation type and IBM Cloud Private version properties. For example, you can deploy an IBM Cloud Private 2.1.0.1 CE instance using the IBM Cloud Private Test Environment template and an IBM Cloud Private 2.1.0.2 EE instance with the IBM Cloud Private template.

System administrator password

This is the password for the IBM Cloud Private console admin user. The default value is “admin,” but we strongly recommend changing this at deployment time.

Note: Both the username and password of the admin user can be changed after deployment. Refer to Changing the cluster administrator access credentials in the IBM Cloud Private Knowledge Center for detailed instructions.

Installation directory for IBM Cloud Private

The default installation directory for IBM Cloud Private is /ibm-cloud-private.

Kubernetes API Insecure Port

This is the port used by Kubernetes. The default is 8888, but this conflicts with the PureApplication maestro agent, which runs on all VMs and uses the same port. Therefore, the IBM Cloud Private pattern's default for this port is 8989.

PureApplication Maestro port

This is the port that’s used by the internal PureApplication maestro agent. You should update it only if your PureApplication environment has been changed to use a different port. The default is 8888.
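You can verify these port assignments on a deployed VM before changing anything. This bash sketch uses the shell's /dev/tcp redirection to test whether something is already listening locally on a port (on a deployed VM, 8888 would be the maestro agent and 8989 the Kubernetes API):

```shell
# bash-only: opening /dev/tcp/HOST/PORT succeeds only if something is
# already listening on that port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 8888 8989; do
  if port_in_use "$p"; then
    echo "port $p: in use"
  else
    echo "port $p: free"
  fi
done
```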

IBM Cloud Private HA properties

An HA boot host is present in all virtual system patterns except "IBM Cloud Private Test Environment." The name of this VM can be confusing: the IBM Cloud Private boot node is not necessarily highly available, and there is only a single IBM Cloud Private boot node.

When deploying IBM Cloud Private with multiple master and/or proxy nodes—or when you intend to scale to multiple proxy nodes after deployment—you must specify additional pattern parameters at deployment time:

  • Ethernet interface name for master HA
  • Virtual IP for cluster master HA
  • Ethernet interface name for proxy HA
  • Virtual IP for proxy master HA

When these pattern parameters are provided, virtual IP addresses are automatically assigned to handle multiple IBM Cloud Private master and proxy nodes. Under the covers, the virtual IP manager assigns the Virtual IP for Cluster Master HA to the network interface of the active IBM Cloud Private master node (as specified by "Ethernet interface name for Master HA"). Should that IBM Cloud Private master node become temporarily unavailable, the virtual IP manager will re-assign the virtual IP address to one of the remaining IBM Cloud Private master nodes. So if you deploy multiple IBM Cloud Private master nodes, the Virtual IP for Cluster Master HA is also used to access the IBM Cloud Private console.

The mechanism described here also applies to the Virtual IP for Proxy Master HA. Refer to High availability IBM Cloud Private clusters in the IBM Cloud Private Knowledge Center for more details.

Note: When using the Virtual IP for Cluster Master HA or Virtual IP for Proxy Master HA pattern parameters, make sure that those IP addresses are unique, available, and accessible from the corresponding network interface (for example, eth1). One mechanism for doing this would be to deploy a dummy base OS virtual system instance, stop it, and then use the IP address of the VM of this instance as the virtual IP of the IBM Cloud Private pattern deployment.

Deployment options for the nodes

As you saw in the topology diagram above, this tutorial uses the following deployment options for the various IBM Cloud Private nodes. There is always just a single HA boot node.

  • 3 master nodes
  • 2 worker nodes
  • 1 proxy node
  • 2 management nodes

Note: All node types except the master can still be scaled after deployment as well, so adding an additional worker node to handle more deployments within the IBM Cloud Private cluster is not a problem. We will demonstrate this later.

Follow these steps to configure the deployment options for the IBM Cloud Private nodes:

  1. Configure a total of three master nodes to be deployed, as shown below. Make sure to enable CPU-based scaling and/or memory-based scaling if you want to dynamically scale in or out the master nodes after deployment. Configuring the Master nodes
  2. Configure a single proxy node to be deployed; again, enable CPU-based scaling or memory-based scaling if you wish to support dynamic scale in or out for the proxy nodes. Configuring the Proxy nodes
  3. In a similar fashion, set the number of IBM Cloud Private workers to 2 by configuring the deployment options of the worker host.
  4. Finally, set the number of IBM Cloud Private management nodes to 2 by configuring the deployment options of the management host.
  5. After you make the necessary changes to the deployment options, click Quick Deploy to start the deployment.

When the deployment has completed, examine your IBM Cloud Private Virtual System Instance. It should consist of a single HA boot node, three master nodes, one proxy node, two management nodes, and two worker nodes.

Virtual System Instances
Step 3. Access the IBM Cloud Private console

After the Virtual System Instance has been deployed, you can access the IBM Cloud Private console.

  1. Under the Virtual Machines perspective of the IBM Cloud Private Virtual System Instance, expand the IBM Cloud Private HA Boot Host virtual machine. Expand IBM Cloud Private HA Boot Host VM
  2. At the bottom of the section, click the IBM Cloud Private Console link. Select console
  3. This will take you to https://172.17.37.102:8443/console. (Note that the IP address matches what we specified as the Virtual IP for Cluster Master HA at deployment time; in our case, this was 172.17.37.102.) Login page with IP address
  4. Log in to the IBM Cloud Private console and click Platform > Nodes to see the IBM Cloud Private worker nodes. This should match the topology you’ve deployed. IBM Cloud Private Worker nodes
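The console address follows directly from the deployment parameters. This sketch simply assembles it from the virtual IP used in this tutorial (substitute your own) and shows the curl probe you could run from a host that can reach the cluster:

```shell
VIP="172.17.37.102"    # Virtual IP for Cluster Master HA from our deployment
CONSOLE_URL="https://${VIP}:8443/console"
echo "Console: ${CONSOLE_URL}"

# From a machine with access to the cluster (-k accepts the default
# self-signed certificate), you could verify reachability with:
# curl -ks -o /dev/null -w "HTTP %{http_code}\n" "${CONSOLE_URL}"
```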
Step 4. Manage the deployed IBM Cloud Private virtual system instance

You can use the IBM Cloud Private Virtual System Instance Console to add or remove IBM Cloud Private worker, proxy, or management nodes, extend the size of the disks used by the deployment, or view the IBM Cloud Private cluster status. The Instance Console can also be used to access log files to troubleshoot deployment errors.

Adding or removing IBM Cloud Private worker nodes

As you saw earlier, you can specify the number of IBM Cloud Private worker nodes that are initially deployed. After deployment, you can add (scale out) or remove (scale in) those nodes.

  1. From the Instance Console, click Operations and select Worker_Host.Worker_Host-Image. Instance Console - select image
  2. The "Horizontal Scaling – Add nodes" and "Horizontal Scaling – Remove nodes" operations allow you to manually increase or decrease the number of IBM Cloud Private worker nodes. Note that IBM does not currently recommend using CPU-based scaling.
  3. To manually add one or more IBM Cloud Private worker nodes, expand the "Horizontal Scaling – Add nodes" section under the Worker_Host.Worker_Host-Image section. For example, enter an "Instance count" of 2 to add 2 more IBM Cloud Private worker nodes to your deployment and click Submit. Manually adding IBM Cloud Private worker nodes
  4. The operation is displayed in the operation execution results. You should see the two new IBM Cloud Private worker node VMs listed within the Virtual System Instance, as shown in the following screen capture. Operation Execution results
  5. Now wait for the two new nodes to be added to the IBM Cloud Private cluster; this could take 10 to 15 minutes. The operation is complete when the middleware status for the two worker nodes is set to “Running.” Worker node status set to Running
  6. After the add worker operation completes, you should see the new nodes in the IBM Cloud Private console: New nodes in IBM Cloud Private console In a similar fashion, you can remove an IBM Cloud Private worker node.
  7. Expand the "Horizontal Scaling – Remove nodes" option and click Submit. This removes a single IBM Cloud Private worker node. Removing a node
  8. The operation is listed under the operation execution results. Operation execution results
  9. You should notice that the corresponding IBM Cloud Private worker stops and is removed from the Virtual System Instance. IBM Cloud Private worker removed from Virtual System Instance
  10. After the VM is removed from the Virtual System Instance, validate that the IBM Cloud Private worker node is no longer visible within the IBM Cloud Private console. IBM Cloud Private worker node no longer visible in IBM Cloud Private console

Following a similar approach, you can add or remove IBM Cloud Private proxy and management nodes.

Extend disk size

Use this operation on any of the IBM Cloud Private roles to extend the disk size for /var/lib/elasticsearch/data, /var/lib/docker, or /var/lib/registry. The IBM Cloud Private pattern sets the default disk size for these data paths as recommended by IBM Cloud Private. However, if your deployment requires it, you can use this operation to increase the disk size as needed.
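Before extending a disk, it helps to check how full each of the managed mounts actually is. A sketch (the helper function name is ours; the mount points are the ones listed above):

```shell
# Print the df usage line for a mount point, or a note if it is absent.
check_mount() {
  if [ -d "$1" ]; then
    df -h "$1" | tail -n 1
  else
    echo "$1: not present on this host"
  fi
}

for mp in /var/lib/docker /var/lib/registry /var/lib/elasticsearch/data; do
  check_mount "$mp"
done
```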

  1. Expand the Extend disk size operation on one of the hosts, for example the HA_Boot_Host.HA_Boot_Node-Part where images and other IBM Cloud Private content resides. Extending disk size
  2. Select the mount point for the disk you want to extend.
  3. Enter the storage size you want to add to the existing disk and click Submit. Specifying storage size
  4. A new entry called "Extend disk size" appears under the Operation Execution Results list.
  5. When completed, the status of the operation changes from “Active” to “Complete,” and the Return Value column displays the result of the operation as a link. Operation Execution Results
  6. When you click that link, a new tab opens and displays the output of the operation. Verify that there are no errors in the log, as shown below:
       
    [11/01/17 19:05:02 UTC] Current size for mount point /var/lib/docker is 59(G)
    [11/01/17 19:05:02 UTC] Scale up mount point /var/lib/docker with extra 20(G)
    [11/01/17 19:05:02 UTC] Begin to start scaleNode task for node Test_Environment_Host.11509542655979, disksize: 20
    ...
    [11/01/17 19:05:59 UTC] current task 4435 is succeed
    [11/01/17 19:05:59 UTC] End to check scaleNode task 4435, rc = 0
    [11/01/17 19:05:59 UTC] Begin to handle post-scaleNode
    [11/01/17 19:05:59 UTC] format new VMFS disk
    [11/01/17 19:05:59 UTC] Begin to format new local disk
    [11/01/17 19:05:59 UTC] End to format new disk, rc = 0
    [11/01/17 19:05:59 UTC] succeeded in formatting local disk
    [11/01/17 19:05:59 UTC] succeed to handle 3post scaleup on VM Test_Environment_Host.11509542655979
    [11/01/17 19:05:59 UTC] End to handle post-scaleNode
    [11/01/17 19:05:59 UTC] End to scaleNode task for Node Test_Environment_Host.11509542655979
    [11/01/17 19:05:59 UTC] Current size for mount point /var/lib/docker is 79(G)
                    

Viewing the IBM Cloud Private pattern logs

Sometimes things do not work as you would expect. The IBM Cloud Private pattern logs information in a number of different places. Let’s quickly review some of the most common ones.

The output of the operations described above can be accessed from the Logging View on each of the deployed virtual machines. On the deployment, click Manage > Logging then select any of the deployed virtual machine sections and expand IBMCloudPrivate /../ICp/logs.

The IBM Cloud Private product installation logs are available on the boot host only, located under IBMCloudPrivate /../ICp/logs/icp.log. The other hosts log their own deployment information under the same IBMCloudPrivate /../ICp/logs/icp.log path.

The cluster hosts and config.yaml files are also available under the same IBMCloudPrivate folder.

IBMCloudPrivate folder

Container logs can be found in the IBMCloudPrivate../log/containers folder.

Containers folder

Conclusion

In this tutorial, you learned how to use the IBM Cloud Private virtual system pattern type to deploy and manage IBM Cloud Private clusters in the topologies supported by the pattern. With this PureApplication pattern, you can install IBM Cloud Private in a fast, reliable, and reproducible manner. You can install different versions of the product (Community Edition or Enterprise Edition) directly from Docker Hub or without an internet connection.

Now you can get started working with IBM Cloud Private in your particular environment.

Acknowledgements: We would like to thank Joe Wigglesworth, Sandeep Minocha, and Dennis Lauwers for their help with this tutorial.