
IBM Cloud Private on IBM Power Systems

Starter Kit


So you’ve heard of IBM Cloud Private (ICP) and want to get it running on Power Systems? The good news is that ICP runs on all POWER8 servers — from an OpenPOWER-based S822LC to the large enterprise E880. It even runs on the CS822 systems powered by Nutanix.

IBM Cloud Private delivers a container-based private cloud built on Kubernetes. It has basic infrastructure requirements, but can work across many different infrastructure layers. Read on to find out how to set up your infrastructure for Power and then deploy ICP in a starter configuration.

Deployment elements

We’ll deploy a total of two virtual machines (VMs). See the overview section of the Knowledge Center to understand the terminology.

  • VM-1: Boot, Master, Proxy, and Management Node
    Recommended size: 24GB RAM, 6 vCPU, 100GB disk.
  • VM-2: Worker Node
    Recommended size: 32GB RAM, 8 vCPU, 100GB disk. You can go smaller, but fewer, bigger worker nodes are preferred.
  • Storage: Eventually, you will need persistent storage that is accessible across your worker nodes. This topic gets its own section below.

Required hardware

IBM Cloud Private really just needs enough hardware resources to satisfy the above requirements for VM-1 and VM-2. That is, you don’t need dedicated hardware, but rather just two or more VMs (LPARs). For the starter configuration (without HA), consider the following example hardware:

  • PowerVM Enterprise Systems
    Any Enterprise POWER8 server. You’ll need a total of 56GB RAM, 14 virtual CPUs, and 200GB of disk.
  • IBM Hyperconverged Systems powered by Nutanix
    A three-node cluster of CS821 or CS822 systems. The three-node cluster is a minimum configuration for Nutanix, but gives you plenty of headroom to run additional worker nodes, or other VM-based workloads.
  • KVM-based OpenPOWER LC Systems
    An S822LC for Commercial Computing with 20 cores @ 2.92 GHz, 256GB RAM, and 4TB of SSD provides plenty of headroom for a starter environment. While SSDs are not a hard requirement, they provide a noticeably better experience, on par with what you’d see with the other infrastructure options.

If you don’t have your own Power hardware, not to worry. You can get access to Power resources to try out the instructions in this Starter Kit. Check them out; many are free!

When you’re ready to get started, begin by creating the VM infrastructure.

Creating the VM infrastructure

Create your VM infrastructure via PowerVC

PowerVC can manage any PowerVM Enterprise Power server (and soon, KVM-managed LC systems). A very simple deployment mechanism is available for any OpenStack-based Infrastructure-as-a-Service (IaaS) layer. Hop over to GitHub to learn how to get things deployed. The mechanism leverages Terraform: you answer a few questions up front in a variables file, then hit Go. The VMs get created and ICP gets installed.

Check out this SlideShare presentation: Deploy IBM Cloud Private on Power Systems via PowerVC in Four Simple Steps.

Create your VM infrastructure via Nutanix

For Power Systems powered by Nutanix, creating the VM infrastructure is similarly simple. Create two Ubuntu 16.04 or RHEL 7.1/7.2/7.3 VMs according to the specs listed in the Deployment elements section. Make sure the two VMs are on the same network.

Once your VMs are up and running, you can follow the manual installation instructions available in the IBM Cloud Private Knowledge Center.

Create your VM infrastructure via KVM

What if you have an OpenPOWER LC system that doesn’t (yet) have PowerVC or OpenStack? No problem — just follow the instructions listed for the Nutanix solution above. That is, create the VMs, and follow the Knowledge Center instructions.

Next up, creating the storage infrastructure.

Creating the storage infrastructure

Many containers/workloads need persistent storage. The storage provided for the worker node above is not intended to be used for anything other than ephemeral (non-persistent) storage. Let’s get a few concepts under our belt:

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).

A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.

The access modes are:
ReadWriteOnce — the volume can be mounted as read-write by a single node
ReadOnlyMany — the volume can be mounted read-only by many nodes
ReadWriteMany — the volume can be mounted as read-write by many nodes

“Persistent Volumes.” Kubernetes, kubernetes.io/docs/concepts/storage/persistent-volumes.
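To make the claim concept concrete, here is a minimal PVC manifest requesting 10Gi of ReadWriteMany storage. The claim name and size are placeholders for illustration, not values any particular chart expects.

```shell
# Write a minimal PersistentVolumeClaim manifest; the claim name and
# requested size are placeholders.
cat > example-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
# Apply it against the cluster (requires kubectl configured for ICP):
#   kubectl apply -f example-pvc.yaml
```

Kubernetes then binds this claim to an available PV whose size and access modes satisfy the request.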

For simplicity’s sake, we’ll use NFS for our storage volumes. Unfortunately, NFS doesn’t support dynamic provisioning natively; however, if you really want to get that working, take a look at the nfs-provisioner incubator project.

Each of the Power IaaS layers has native volume support — be it PowerVC Cinder volumes or the Nutanix DSF. While we expect to support these native volumes in the future, the current recommended model is to leverage an NFS server. GlusterFS support on Power, which allows dynamic provisioning, should be available soon, but for now we’ll use NFS.

To do this:

  1. Create a large (e.g., 4TB) volume in your infrastructure layer (PowerVC or Nutanix), and assign it to your master node.
  2. Format and mount the filesystem on your master node.
  3. Install the nfs server packages on your master node.
  4. Create your NFS exports file to export the NFS directory to all worker nodes, then reload the exports. For example, add /nfs 192.168.0.192/32(rw,sync) to /etc/exports and run exportfs -a.
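The steps above can be sketched as shell commands run on the master node. The device name /dev/sdb and the Ubuntu package name are assumptions; substitute the volume you attached in step 1 and your distro’s NFS server package. Privileged commands are shown commented so you can review them before running.

```shell
# Step 2: format and mount the volume (placeholder device /dev/sdb)
#   sudo mkfs.ext4 /dev/sdb
#   sudo mkdir -p /nfs && sudo mount /dev/sdb /nfs

# Step 3: install the NFS server packages (Ubuntu package name assumed)
#   sudo apt-get install -y nfs-kernel-server

# Step 4: build the exports entry for a worker node, append it to
# /etc/exports, and reload the exports
WORKER_CIDR=192.168.0.192/32   # your worker node's address
EXPORT_LINE="/nfs ${WORKER_CIDR}(rw,sync)"
echo "$EXPORT_LINE"            # append this line to /etc/exports, then:
#   sudo exportfs -a
```

Repeat the exports entry (or widen the CIDR) for each worker node that needs access.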

At this point, whenever you go to deploy an app from the catalog, you will first need to create a PersistentVolume (Menu -> Platform -> Storage) of type NFS, and plug in the following Key/Value parameters (server=<IP>, path=/nfs/volX). You must do this for every persistent volume that you need — so read the documentation for the app’s helm chart carefully to see what persistent storage it needs.
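If you prefer the command line to the UI form, the same PersistentVolume can be described in a manifest. This is a minimal sketch; the server IP, volume name, size, and export subdirectory are all placeholders — use your master node’s address and whatever path and capacity the app’s chart expects.

```shell
# Write an NFS-backed PersistentVolume manifest with placeholder values.
cat > nfs-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 50Gi           # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.100   # master node (NFS server) IP -- placeholder
    path: /nfs/vol1         # subdirectory under the exported /nfs
EOF
# Apply it against the cluster (requires kubectl configured for ICP):
#   kubectl apply -f nfs-pv.yaml
```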

Conclusion and next steps

Congratulations — you’ve got IBM Cloud Private up and running! You can now start deploying Helm charts from the catalog. One caveat to keep in mind as you deploy apps from the catalog is that some may require a tweak or two to work properly on Power. If a deploy fails, check that it pulled the correct image. Some Docker images support multiple architectures (including Power), while others require an explicit Power image (often denoted image_name-ppc64le). If needed, you can often modify the image used in the Helm deploy.
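As a sketch of that image override: the chart name, image name, and the image.repository values key below are all hypothetical — check the chart’s own values.yaml for the real keys before overriding anything.

```shell
# Hypothetical chart and ppc64le image names -- substitute the real
# ones from the catalog and from the chart's values.yaml.
CHART=stable/example-app
IMAGE=example-app-ppc64le
# Inspect the chart's configurable values first:
#   helm inspect values "$CHART"
# Then override the image at install time:
#   helm install "$CHART" --set image.repository="$IMAGE"
echo "helm install $CHART --set image.repository=$IMAGE"
```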

See Accessing IBM Cloud Private on GitHub for next steps.