
IBM Cloud Private enablement guide for ISV and open source software

Learn how to add your application to the IBM Cloud Private catalog

Overview of the enablement process

IBM Cloud Private is shipped with a catalog of components that developers can use to build the next generation of cloud-native applications. We’ve created this enablement guide to encourage ISVs and open source developers to help us extend the catalog by adding their application(s) to it. If you choose to complete the on-boarding process, we ask that you enable (develop, test, and support) your component on the IBM Cloud Private platform on one, or more, of the supported hardware infrastructures: IBM Power Systems, IBM z Systems, and x86.

This guide will show you how to create, test, and publish a Helm chart that will install your component on IBM Cloud Private. The high-level steps are as follows:

  1. Package the application as a Docker container
  2. Create a multi-architecture container (fat manifest)
  3. Upload the container
  4. Create the Kubernetes YAML files
  5. Create the Helm chart
  6. Publish the Helm chart

The Helm chart values – values metadata topic describes a minimum single-server configuration that you could use as an initial pilot and to further explore the capabilities of IBM Cloud Private. If you plan to work through Docker-based tutorials available on the web, we recommend that you establish a separate, Docker-only environment rather than experimenting with the Docker environment that runs your IBM Cloud Private infrastructure.

If you have feedback about your experience or suggestions to help us improve any part of the process, please let us know.

Packaging the application as a Docker container

After you have established the Docker/Kubernetes environment on your target platform, the process for packaging your application as a Helm chart should be platform agnostic. However, establishing the Docker environment on your particular platform does require some platform-specific steps. While the x86 processor-based steps are documented in many places, the IBM POWER processor-based steps are less widely documented. For further details, refer to the following resource on the Linux on IBM Power Developer Portal at: https://developer.ibm.com/linuxonpower/docker-on-power/

After installing Docker, refer to the following guide for an overview on how to get started with Docker: https://docs.docker.com/get-started/

Note: We recommend that you use a separate sandbox environment for performing the initial Docker experimentation and avoid using the Docker environment that is hosting your IBM Cloud Private environment.

Next: Create a multi-architecture container (fat manifest)

Create a multi-architecture container (fat manifest)

A fat manifest describes a multi-architecture container: it is effectively a wrapper that points to a set of architecture-dependent images of the same application, ideally with identical configurations, entry points, functionality, and so on. This means that if you reference the container from an Intel, ARM, POWER, or z platform, you get the image that is appropriate for that platform. The goal is that the specifics associated with a particular hardware platform can be managed by this fat manifest, enabling the Helm chart and its associated files to be hardware agnostic.
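
For illustration only, a manifest list retrieved from a registry looks roughly like the following JSON (the digests are shortened placeholders); each entry points to an architecture-specific image:

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:aaaa...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:bbbb...",
      "platform": { "architecture": "ppc64le", "os": "linux" }
    }
  ]
}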


Next: Upload the container

Upload the container

Docker Hub and Docker Store are probably the easiest options for uploading your images. Hosting your charts or Dockerfiles on GitHub gives maximum exposure but the least control, so you might prefer your own private repository, which offers the best opportunity to track usage.

Next: Create Kubernetes YAML file

Create Kubernetes YAML file

The Kubernetes YAML files define the resources that your containers need, such as:

  • Deployment
  • Service
  • StatefulSet
  • PersistentVolumeClaim
  • And so on…

Kubernetes provides a basic overview of the process, including an all-in-one example file at: https://kubernetes.io/docs/concepts/configuration/overview
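
The following minimal sketch shows the shape of such files for a simple stateless application; the names, image, and ports are illustrative only:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:1.0      # hypothetical multi-architecture image reference
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp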

Next: Create the Helm chart

Create the Helm chart

To create your Helm chart, review the overview of Helm charts and their format at: https://github.com/kubernetes/helm/blob/master/docs/chart_best_practices/README.md

The Helm chart wraps the Kubernetes YAML files that you created previously and points to the container's public location.
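
For reference, a Helm chart is simply a directory with a well-known layout. A minimal chart (the directory name is illustrative) looks like this:

mychart/
  Chart.yaml        # chart metadata: name, version, description, keywords
  values.yaml       # default configuration values exposed to users
  templates/        # the Kubernetes YAML files, templated with Go template syntax
  README.md         # usage, configuration, and storage requirements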


NOTE: The IBM Cloud Private catalog uses filters to help users quickly locate content. The filters are based on keywords in the Chart.yaml file. One of the filters is for platform selection (x86, ppc64le, or Linux on z Systems). You must include the “ppc64le” keyword in order for your application to show up when the user selects the “ppc64le” platform filter. For example:

apiVersion: v1
name: sample
version: 0.7.1
description: Helm chart for xxxxxxx
tillerVersion: ">=2.7.2"
keywords:
  - amd64
  - ppc64le
  - Tech
  - ICP

Next: Publish the Helm chart

Publish the Helm chart

IBM Cloud Private allows you to extend the catalog by linking in additional repositories. It provides a filter that allows you to display only the content from a specific repository. One option you have as the provider of the Helm chart is to publish it in your own repository and provide users with instructions on how to add your repository to the IBM Cloud Private list of repositories.

The preferred option is to work with your IBM contact to have them publish your Helm chart to this GitHub repository, which was created to provide a central repository for ISVs and open source developers who have gone through this process (and enabled on Power).

The current plan of record is to have this repository included in the list of repositories preloaded into the IBM Cloud Private catalog. Until then, you can easily add it to the IBM Cloud Private catalog by going through the Add Repository dialog and providing this URL for the repository link: https://raw.githubusercontent.com/ppc64le/charts/master/repo/stable/
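
If you prefer the command line, the equivalent step with the Helm CLI would look like the following (the repository alias is arbitrary):

helm repo add ppc64le-charts https://raw.githubusercontent.com/ppc64le/charts/master/repo/stable/
helm repo update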

Best practices and technical guidance

Review the following topics to get the most out of the IBM Cloud Private platform and to maximize your application’s value by enabling the many features and functions of IBM Cloud Private. IBM can provide additional details and technical guidance on any of the following topics if you need it.

IBM Cloud Private architecture and component versions

This topic, while not specifically related to creating a Helm chart or getting your component added to the IBM Cloud Private catalog, provides some high-level technical insight about what’s “in the box” that you may find helpful.

Architecture

An IBM Cloud Private cluster has four main classes of nodes: boot, master, worker, and proxy. You can optionally specify a management node in your cluster. You determine the architecture of your IBM Cloud Private cluster before you install it. After installation, you can add or remove only worker nodes from your cluster. You cannot add a management node, convert a standard cluster into a high availability cluster, or add more master or proxy nodes to a high availability cluster.

Important: The boot, master, proxy, and management nodes in your cluster must use the same platform architecture. Only the worker nodes can use a different platform architecture. For example, if you plan to use Linux on Power 64-bit LE nodes as master nodes, you must use a Linux on Power 64-bit LE boot node.

  • Boot node
    A boot or bootstrap node is used for running installation, configuration, node scaling, and cluster updates. Only one boot node is required for any cluster. You can use a single node for both master and boot. You can use a single boot node for multiple clusters. In such a case, the boot and master cannot be on a single node. Each cluster must have its own master node. On the boot node, you must have a separate installation directory for each cluster. If you are providing your own certificate authority (CA) for authentication, you must have a separate CA domain for each cluster.
  • Master node
    A master node provides management services and controls the worker nodes in a cluster. Master nodes host processes that are responsible for resource allocation, state maintenance, scheduling, and monitoring. Multiple master nodes are in a high availability (HA) environment to allow for failover if the leading master host fails. Hosts that can act as the master are called master candidates.
  • Worker node
    A worker node is a node that provides a containerized environment for running tasks. As demands increase, more worker nodes can easily be added to your cluster to improve performance and efficiency. A cluster can contain any number of worker nodes, but a minimum of one worker node is required.
  • Proxy node
    A proxy node is a node that transmits external requests to the services created inside your cluster. Multiple proxy nodes are deployed in a high availability (HA) environment to allow for failover if the leading proxy host fails. While you can use a single node as both master and proxy, it is best to use dedicated proxy nodes to reduce the load on the master node. A cluster must contain at least one proxy node if load balancing is required inside the cluster.

Component versions

A detailed list of the component versions currently in use in ICP Version 2.1.0.1 is available at: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/getting_started/components.html

Best practices for multi-architecture images

An individual container image contains binaries that have been compiled for a specific architecture. However, using a concept referred to as a fat manifest (more technically, a manifest list), it is possible to serve multiple architectures from a single image reference. When the Docker daemon accesses such an image, it automatically redirects to the image that matches the currently running platform architecture.

In order to use this capability, a Docker image must be pushed to the registry for each architecture, followed by the fat manifest.
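
As a sketch (the image and registry names are illustrative), the per-architecture push step looks like this, typically run on, or built for, each architecture before the manifest list is created:

docker tag myapp:1.0 docker.io/myorg/myapp:1.0-amd64
docker push docker.io/myorg/myapp:1.0-amd64
# Repeat for the other architectures, for example 1.0-ppc64le and 1.0-s390x,
# then create and push the fat manifest as described in the next section.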

Deploying the fat manifest

The recommended method of deploying a fat manifest is to use Docker tooling, namely the manifest sub-command. It is currently in the PR review process but can be easily used to create a multi-arch image and push it to any Docker registry. To get the amd64 binary, run the following commands:

curl -fsSL -o docker-cli https://github.com/clnperez/cli/releases/download/v0.1/docker-linux-amd64 \
&& chmod +x docker-cli
               

There are other platform binaries (for example OSX, Microsoft Windows, ppc64le, and s390x), which can be found at: https://github.com/clnperez/cli/releases/tag/v0.1

Here’s how to use the new Docker CLI sub-command to create manifest lists.

A little terminology first:

  • Name of the multi-arch image: mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1
  • Name of the x86_64 image: mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-x86_64
  • Name of the IBM Power image: mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-ppc64le
  • Name of the IBM z image: mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-s390x

You can interactively create and push a manifest list from your local host by using the following sequence of commands:

./docker-linux-amd64 manifest create mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1 mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-x86_64 mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-ppc64le mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-s390x

./docker-linux-amd64 manifest annotate mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1 mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-x86_64 --os linux --arch amd64 

./docker-linux-amd64 manifest annotate mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1 mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-ppc64le --os linux --arch ppc64le 

./docker-linux-amd64 manifest annotate mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1 mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1-s390x --os linux --arch s390x

./docker-linux-amd64 manifest inspect mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1 

./docker-linux-amd64 manifest push mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1

Note: It is important to understand that pushing a multi-arch image to a registry does not push the image layers. It only pushes a list of pointers to accessible images. This is why it is better to think of a multi-arch image as what it really is: a manifest list. In addition, when creating the fat manifest, you must make sure that all your platform-specific Docker images have been pre-imported into the registry. Otherwise, you will get an error saying, cannot use source images from a different registry than the target image: docker.io != mycluster.icp:8500.

After you have pushed your manifest list to a registry, you can use it just as you would have previously used an image name.

The manifest push command supports a --purge option that removes the local copy of the manifest list after the push. We recommend using it, because if a local copy remains, manifest inspect returns the local copy rather than the registry copy, which can be confusing. If you want to keep a local copy, simply omit --purge.
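
Assuming the CLI build described above supports the option, the push with --purge looks like this:

./docker-linux-amd64 manifest push --purge mycluster.icp:8500/default/ibmcom/web-terminal:2.8.1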

If for some reason you have trouble using the Docker CLI, there is an alternative method for deploying a fat manifest. You can use the open source manifest tool provided by Phil Estes of IBM along with several other contributors. To install the tool, run the following commands:

cd $GOPATH/src
mkdir -p github.com/estesp
cd github.com/estesp
git clone https://github.com/estesp/manifest-tool
cd manifest-tool && make binary

Then, push the manifest using the following commands:

./manifest-tool push from-args \
  --platforms linux/amd64,linux/ppc64le,linux/s390x \
  --template foo/bar-ARCH:1.0 \
  --target foo/bar:1.0

For detailed usage, refer to the README.md file in the manifest-tool repository.

Although the public IBM Bluemix® registry supports images with multiple architectures, it does not yet have support to define the manifest list. For more information, refer to the following blog: Create and use multi-architecture docker images.

Best practices and guidance for Helm charts

Review the following topics for some best practices and guidance for working with Helm charts and installing them in IBM Cloud Private:

Helm chart values grouping and naming

Helm chart values are used to configure Helm charts during installation. The Helm Best Practices for Values page contains guidelines for naming conventions, usage (maps, not arrays), YAML formatting, and clarifying types. IBM Cloud Private standards build on these to provide a consistent user experience across charts by using common names, allowed values, and a grouping mechanism. A nested structure has been defined, with a grouping token as the first token when multiple instances exist (that is, when multiple persistent volume claims are required, parameters should be nested under grouping tokens such as pvc1, pvc2, and so on).

Required parameters

Parameters must consist of one or more tokens with nested values separated by a period (‘.’). Reading from left to right, the tokens must consistently use the following order and naming (if the parameter is applicable to a given chart):

  1. Grouping / Naming token (if multiple instances – that is, pvc1, pvc2)
  2. Qualifier (that is, persistence)
  3. Parameter (that is, enabled)

The following list shows each parameter, its definition, and its allowed values:

  • image.pullPolicy: Kubernetes image pull policy. Allowed values: Always, Never, or IfNotPresent. Defaults to Always if the :latest tag is specified, or IfNotPresent otherwise.
  • image.repository: Name of the image, including the repository prefix (if required). See the extended description of Docker tags on Docker Hub.
  • image.tag: Docker image tag. See the Docker tag description.
  • persistence.enabled: Persistent volume enabled. Allowed values: true, false.
  • persistence.storageClassName or [volume].storageClassName: StorageClass pre-created by the Kubernetes system admin. Allowed values are the ibmc-* classes (for example, ibmc-file-bronze). Future proposed classes include ibmc-ObjectFS-TBD and ibmc-*-v2-* variants.
  • persistence.existingClaimName or [volume].existingClaimName: Name of a specific pre-created persistent volume claim.
  • persistence.size or [volume].size: Amount of storage that the application requires (Gi, Mi).
  • resources.limits.cpu: Maximum amount of CPU allowed. See Kubernetes – meaning of CPU.
  • resources.limits.memory: Maximum amount of memory allowed. See Kubernetes – meaning of memory.
  • resources.requests.cpu: Minimum amount of CPU required. If not specified, it defaults to the limit (if the limit is specified), or otherwise to an implementation-defined value. See Kubernetes – meaning of CPU.
  • resources.requests.memory: Minimum amount of memory required. If not specified, it defaults to the limit (if the limit is specified), or otherwise to an implementation-defined value. See Kubernetes – meaning of memory.
  • service.type: Type of service. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. See Publishing services – service types.
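
The following values.yaml sketch shows how these parameters and the grouping convention might look in practice; the image name and the dataPVC grouping token are illustrative only:

image:
  repository: myorg/myapp
  tag: "1.0.0"
  pullPolicy: IfNotPresent
persistence:
  enabled: true
  useDynamicProvisioning: true
dataPVC:
  name: data
  storageClassName: ""
  existingClaimName: ""
  size: 1Gi
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
service:
  type: ClusterIP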

General guidance

  • Use quoted strings to avoid type conversion errors.
  • The license token (if required) should default to “not accepted” in values.yaml and be set to “accept” at installation time. The license approach is subject to change as it evolves to cover the license to accept on the image, the license for the Helm chart source, and the license to use the chart.
  • Name values the same as, or similar to (taking user experience into account), the attribute under spec to which the information will be mapped.

Helm chart values metadata

IBM Cloud Private provides an option for creating metadata definitions that allow your Helm chart to describe how values are presented and validated by the IBM Cloud Private user interface (UI). The metadata definitions will allow you to display a description for each parameter, specify required/not required support, define lists that go into a dropdown list, create password fields, define the value type for a parameter, and possibly set a field to read-only.

Values-metadata.yaml specifications

Metadata for each parameter in the YAML file is defined with the __metadata key. It can have the following possible attributes:

  • label: Title of the attribute. If label is not specified, then the key from the values.yaml file is used.
  • description: Description of the attribute to assist user and inform user about the attribute.
  • type: (optional) Type of the attribute. Default type is string. Possible types are string, boolean, number, and password.
  • required: (optional) Describes whether the attribute is required. Possible values are true or false.
  • validation: (optional) Regular expression used to validate an attribute value.
  • immutable: (optional) Marks the attribute as read-only. Possible values are true or false (not supported in the current version of ICP).
  • hidden: (optional) Hides the attribute in the ICP UI. Possible values are true or false (not supported in the current version of ICP).
  • options: (optional) Array of label and value.
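
The following values-metadata.yaml sketch (the attribute values are illustrative) shows how these attributes might be applied to two parameters:

persistence:
  enabled:
    __metadata:
      label: "Enable persistence"
      description: "Use a persistent volume for application data"
      type: "boolean"
      required: true
service:
  type:
    __metadata:
      label: "Service type"
      description: "Kubernetes service type for the application"
      type: "string"
      required: true
      options:
        - label: "ClusterIP"
          value: "ClusterIP"
        - label: "NodePort"
          value: "NodePort"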

Helm chart predefined values

The following predefined values are available to every template, and cannot be overridden. As with all values, the names are case sensitive.

  • Release.Name: The name of the release (not the chart).
  • Release.Time: The time the chart release was last updated. This will match the Last Released time on a release object
  • Release.Namespace: The namespace the chart was released to.
  • Release.Service: The service that conducted the release. Usually this is Tiller.
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.IsInstall: This is set to true if the current operation is an install.
  • Release.Revision: The revision number. It begins with 1 and increments with each helm upgrade.
  • Chart: The contents of the Chart.yaml. Thus, the chart version is obtainable as Chart.Version and the maintainers are in Chart.Maintainers.
  • Files: A map-like object containing all non-special files in the chart. This will not give you access to templates, but will give you access to additional files that are present (unless they are excluded using .helmignore). Files can be accessed using {{index .Files "file.name"}}, {{.Files.Get name}} or {{.Files.GetString name}} function. You can also access the contents of the file as []byte using {{.Files.GetBytes}}.
  • Capabilities: A map-like object that contains information about the version of Kubernetes ({{.Capabilities.KubeVersion}}), the version of Tiller ({{.Capabilities.TillerVersion}}), and the supported Kubernetes API versions ({{.Capabilities.APIVersions.Has "batch/v1"}}).

Helm chart syntax

Helm runs each file in the templates directory through a Go Template rendering engine. Helm extends the template language, adding a number of utility functions for writing charts.

Go template syntax is documented at the following URL: https://golang.org/pkg/text/template/
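
As a small illustration of how the predefined values appear in a template (the ConfigMap itself is hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
data:
  installed.by: {{ .Release.Service | quote }}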

Best practices and guidance for working with Kubernetes services

Kubernetes provides a resource type called a service. Conceptually, a service is used to “front” a set of running pods within a cluster or to point to an externally named function. Services provide a layer of abstraction between the (pods/external functions) running behind the service and other functions that want to communicate with them.

There are several types of Kubernetes services:

  • ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the service on each node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You will be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
  • ExternalName: Maps the service to the contents of the externalName field (for example, foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or later of kube-dns.

Services use selectors and resource labels to determine the back-end resources that should be fronted by that service. You should have a basic understanding of Kubernetes services before proceeding. Review the Services topic found on the Kubernetes web page for more information.

Recommendation

Do not expose ports.port, ports.targetPort, or ports.name to users through values.yaml. ports.port refers to the listening port of the ClusterIP address. NodePort is a secondary listener for the service, which has its own address and port. Consequently, this impacts how your service is accessed within the Kubernetes cluster. For instance, if you run an IBM WebSphere® container on port 9080, you can map that back to 80 for HTTP and 443 for HTTPS, and then also listen on some node port. Communication within the cluster does not need to know the IBM WebSphere Application Server (WAS) specific 9080 address; it just uses HTTP port 80. From outside the cluster, you need to know the NodePort number. As a best practice, the role of the service is to act as a façade for the implementation details found at the pod level, so the service should expose the standard protocol ports.
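
A sketch of a service template that follows this recommendation, based on the WebSphere example above, might look like the following (9080 is assumed to be the application's listening port):

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
spec:
  type: NodePort
  ports:
  - name: http
    port: 80          # standard port used by other workloads in the cluster
    targetPort: 9080  # port the container actually listens on
    protocol: TCP
  selector:
    app: {{ template "fullname" . }}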

Adding secure endpoints

When you externalize a port in the service template, the IBM Cloud Private console (Workloads > Deployments) displays an endpoint link that points to your running application. Because IBM Cloud Private generates a random external port (also known as a NodePort), the endpoint link makes it easy to access the running application. IBM Cloud Private 2.1.0.0 had a known defect where it did not generate secure (that is, HTTPS) endpoint links; this defect is fixed in IBM Cloud Private 2.1.0.1. To generate secure endpoints, you must include https as part of the port name in the service template. For example:

spec:
   type: NodePort
   ports:
   - port: 9443 
     targetPort: 9443
     protocol: TCP
     name: abc-https

Best practices and guidelines for setting up storage

Kubernetes abstracts details of how storage is provided and how it is consumed using two API resources: PersistentVolume and PersistentVolumeClaim. In addition, a StorageClass abstraction exists which enables containers to dynamically provision storage based on the category required for a given workload (that is, ibm-file-silver for higher-intensity workloads).

Review the following topics to help you set up storage for your workload:

Storage scenario 1: emptyDir – default POD volume

emptyDir storage is allocated on a node when the pod is assigned to that node. When the pod is removed from that node, all of the data is lost. This type of storage is referred to as emptyDir. You might choose this mode when your application is stateless or when your application uses file storage only as a dynamic cache.

To use this type of storage, your deployment volumes spec section would look as shown in the following example:

kind: Deployment
apiVersion: extensions/v1beta1
spec:
  volumes:
    - name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
      emptyDir: {}


Note: A container crashing does not remove a pod from a node. So, the data in an emptyDir volume is safe across container crashes.

emptyDir can also be backed by memory-based tmpfs storage. For example:

kind: Deployment
apiVersion: extensions/v1beta1
spec:
  volumes:
    - name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
      emptyDir:
        medium: Memory

Note: Be aware that unlike disks, tmpfs is cleared on node reboot and any files you write will count against your container’s memory limit.

Storage scenario 2: Pre-create a persistent volume

Pre-create a persistent volume, and then define a persistent volume claim in your chart that binds to (reserves) the pre-created persistent volume.

Note: There is a new type of persistent volume called a local-volume. It is listed as alpha in Kubernetes 1.8 but several IBM Cloud Private platform components are already using it. See https://kubernetes.io/docs/concepts/storage/volumes/#local for more information.

You may choose PV mode when:

  • Your IBM Cloud Private cluster is not set up with dynamic provisioning.
  • You need to pre-create a pool of volumes that could be used by many different workloads, and leave it up to the chart's persistent volume claim to bind to one from the pool. You could create different pools of storage by using PV labels and PVC selectors.

Note: Creating a Kubernetes persistent volume resource does not interact with your storage infrastructure! Your Kubernetes admin must allocate the real storage before you create a PV.

NFS prerequisite:

Your NFS admin must ensure that the proper NFS exports are created. This may require modifications to the /etc/fstab setup on your NFS server with the proper path and access setup. The worker nodes in your IBM Cloud Private cluster must be able to act as NFS clients. On Ubuntu worker nodes, this requires the installation of nfs-common by using the following command:

apt-get install nfs-common 
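
As a sketch (the server address and export path are hypothetical), an NFS-backed persistent volume that your Kubernetes admin might pre-create looks like this:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv-sample
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # hypothetical NFS server
    path: /exports/data        # hypothetical export path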

GlusterFS prerequisite:

Your GlusterFS admin must pre-create a Gluster volume to be referenced by the PV. Gluster volumes can be created in many different ways. The following snippet from a sample glusterpv chart creates a PV backed by a Gluster volume that was pre-created by using Heketi.

Note: The label is set to env=development

kind: PersistentVolume
apiVersion: v1
metadata:
  name: {{ .Release.Name }}-pv
  labels:
    env: development
spec:
  storageClassName: "manual"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: {{ .Release.Name }}-ep
    path: {{ .Values.glusterVolName }}

To use this PV, your chart must include a resource definition to create a PVC. The following code snippet creates a PVC that binds to a PV that has storageClassName=manual, accessMode=ReadWriteOnce, size >= 1Gi, and a label of env=development.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
spec:
  storageClassName: "manual"
  # use selectors in the binding process
  selector:
  #  matchLabels:
  #    {{ .Values.myDataPVC.selector.label }}: "{{ .Values.myDataPVC.selector.value }}"
    matchExpressions:
      - {key: "env", operator: In, values: ["development"]}
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: 1Gi

Use the PVC created above by referencing its claimName:

kind: Deployment
apiVersion: extensions/v1beta1
spec:
  volumes:
    - name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
      persistentVolumeClaim:
        claimName: {{ template "fullname" . }}-{{ .Values.myDataPVC.name }}

Storage scenario 3: Pre-create both persistent volume and persistent volume claim

Pre-create both a persistent volume and a persistent volume claim. Use the pre-created persistent volume claim by name within your Helm chart.

You may choose this mode when:

  • Your IBM Cloud Private cluster is not set up with dynamic provisioning.
  • You want to create the persistent volume and pre-reserve it so that it cannot be accidentally bound to another chart looking for storage with the same characteristics. There is a one-to-one relationship between a PV and a PVC.

To use this type of storage, your chart would not include a resource definition for a PVC. Instead, your deployment refers to the pre-created PVC by its existingClaimName, as shown in the following example:

kind: Deployment
apiVersion: extensions/v1beta1
spec:
  volumes:
    - name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
      persistentVolumeClaim:
        claimName: {{ .Values.myDataPVC.existingClaimName }}

Tip: If you have a dynamic provisioning setup with a default storageClass, you can still use the IBM Cloud Private user interface to pre-create a PVC. Creating the PVC will dynamically create a persistent volume backed by a real storage implementation.

Storage scenario 4: Dynamic provisioning

Use Kubernetes dynamic provisioning to create both persistent volume and persistent volume claim as part of the Helm chart deployment.

You may choose this mode when:

  • You want to minimize the prerequisite or setup steps required to deploy a Helm chart.
  • You optionally configured GlusterFS or VMware storage during your IBM Cloud Private installation.
  • Your IBM Cloud Private cluster is set up with a storage infrastructure that supports dynamic provisioning. There are many backing storage implementations that currently support dynamic provisioning.
  • Your Kubernetes admin has created StorageClasses.
  • Your organization allows users to manipulate storage.

Here is a snippet that creates a StorageClass named gluster-test:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-test
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.198.2:8080"

Notes:

  • Use kubectl get storageclasses to get a list of storageclasses defined in your environment.
  • Use kubectl describe storageclass/<name> to show details about a specific storage class.

To use dynamic provisioning, your chart will include resource definitions to create a PVC using storageClassName=gluster-test and reference the PVC by name in the deployment, as shown in the following example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
spec:
  # dynamic provisioning using storageClass
  storageClassName: "gluster-test"

Use the PVC by referencing it by name:

kind: Deployment
apiVersion: extensions/v1beta1
spec:
  volumes:
    - name: "{{ template "fullname" . }}-{{ .Values.myDataPVC.name }}"
      persistentVolumeClaim:
        claimName: {{ template "fullname" . }}-{{ .Values.myDataPVC.name }}

Compliance recommendations

  • Do not create persistent volumes within your chart. PVs must either be pre-created ahead of time or dynamically provisioned based on how you define your persistent volume claims.
  • Do not use hostPath as a persistent volume (single-node testing only – local storage is not supported in any way and will not work in a multi-node cluster).
  • Do not use the new alpha-level Local Storage provisioner. We should not be using alpha support at this time. We will readdress this in future IBM Cloud Private releases.
  • For charts that use the kind: Deployment resource, create 1 to N persistent volume claims in your chart or reuse an existing persistent volume claim by specifying an existingClaimName (this does not apply to StatefulSets).
  • For charts that use the kind: StatefulSet resource, create 1 to N volumeClaimTemplates in your chart.
  • Include a section in your README.md file that indicates any unique storage requirements your chart has. The README.md file must also indicate the access modes used by all persistent volume claims.

Guidelines to consider

  • For all charts, externalize the following attributes at the top level in the values.yaml file. These settings are global for the chart.
    • persistence.enabled: Persistent volume enabled. Default: true.
    • persistence.useDynamicProvisioning: Use StorageClasses to dynamically create persistent volumes. Default: true.
  • For charts that are kind: Deployment based, externalize the following attributes for each persistent volume claim in the values.yaml file. Each of the following parameters should be prefixed with a unique grouping name. This allows your chart to have multiple persistent volume claims, each with its own persistence attributes.
    • [group].name: Name used to uniquely identify a PVC. Default: application specific.
    • [group].storageClassName: StorageClass pre-created by the Kubernetes sysadmin. An empty string uses the StorageClass that is marked as default. Default: the IBM StorageClass that provides the type of storage the application requires.
    • [group].existingClaimName: Name of a specific pre-created persistent volume claim. Default: nil (not used in conjunction with StatefulSets).
    • [group].size: Amount of storage the application requires (in Gi, Mi). Default: application specific.
    • [group].selector.label: Label to be used during PVC binding. Default: application specific.
    • [group].selector.value: Value for selector.label. Default: application specific.
  • For charts that are kind: StatefulSet based, externalize the following attributes for each entry in volumeClaimTemplates in the values.yaml file. Each of the following parameters should be prefixed with a unique grouping name. This allows your chart to have multiple persistent volume claims, each with its own persistence attributes.
    • [group].name: Name used to uniquely identify a PVC. Default: application specific.
    • [group].storageClassName: StorageClass pre-created by the Kubernetes sysadmin. An empty string uses the StorageClass that is marked as default. Default: the IBM StorageClass that provides the type of storage the application requires.
    • [group].size: Amount of storage the application requires (in Gi, Mi). Default: application specific.
  • Use persistent volume labels and persistent volume claim selectors to control how PVCs bind or reserve storage. If you would like to provide finer grained control over how PVCs decide to bind or reserve persistent volumes, use labels and selectors.

    Here is a usage scenario:

    • IBM Cloud Private is installed in a customer’s data center where the same IBM Cloud Private cluster is used for both developer usage and integration testing.
    • Dynamic provisioning is not configured.
    • Developer-based deployments should be using slower persistent storage. Integration testing should be using fast-SSD persistent storage.
    • Both use cases require storage that is 10 Gi ReadWriteOnce. So each time developers or integration testers deploy a Helm chart, they need a PV with the same size and accessMode but the developer should be using slower storage and tester should be using fast-SSD storage.

    One way to achieve this is by using PV labels and PVC selectors. The Kubernetes admin could create two “pools” of available persistent volumes. Each pool contains 10Gi ReadWriteOnce volumes, with one pool coming from a slower 1Gb network (“slower”) and the other coming from a 40Gb high-speed network with SSD-backed storage (“fast-SSD”).

    • The PVs that make up the “slower” pool are labeled:
      labels:
        env: "slower"
    • The PVs that make up the “fast-SSD” pool are labeled:
      labels:
        env: "fast-SSD"
    • When developers deploy a Helm chart, they set up a PVC selector to match the slower label:
      selector:
        matchLabels:
          env: "slower"
      
    • When the integration testers deploy a Helm chart, they set up a PVC selector to match the fast-SSD label:
      selector:
        matchLabels:
          env: "fast-SSD"
  • During deployment, the developer chart will bind to a 10Gi/ReadWriteOnce volume in the slower pool whereas the integration tester will bind to fast-SSD storage.
  • Storage nuances you should understand:
    • When you are trying to bind a PVC to a PV, size is not matched exactly; if there is an available PV with matching labels and AccessMode whose size is greater than or equal to what you are requesting, Kubernetes will claim it.
    • If you are dynamically provisioning persistent volumes, the reclaim policy by default is set to Delete. If you delete the chart, you lose your data.
    • If you are using StatefulSets, your PVCs and PVs are not deleted by running helm delete. You need a manual process to clean up the storage.
    • A persistent volume claim can be bound to one and only one persistent volume.
    • A persistent volume claim can be used or referenced by more than one pod within your deployment/statefulset chart.
    • AccessModes are about how the nodes access the storage. Here are the three modes:
      • ReadWriteOnce – the volume can be mounted as read/write by a single node. ReadWriteOnce means that at any point in time only one of the nodes can be reading from and writing to the device (that is, a block device). If you are using this mode, you also need to think about how your pods run within the cluster. You may need to use a nodeSelector or nodeAffinity setup to ensure that multiple pods that are using the same PVC are scheduled to run on the same node.
      • ReadOnlyMany – the volume can be mounted read-only by many nodes. ReadOnlyMany means that at any point in time many nodes can simultaneously read from the backing device (that is, shared read-only file system).
      • ReadWriteMany – the volume can be mounted as read/write by many nodes. ReadWriteMany means that at any point in time many nodes can simultaneously read and write the backing device (that is, a shared file system such as NFS).
    • PersistentVolumeReclaimPolicy setup on PersistentVolumes. When setting up a persistent volume, you need to select the desired behavior for when the volume is released (that is, when its claim is deleted):
      • Retain: When the PVC for this PV is deleted, the real backing storage is not deleted and the PV will be in a phase/state of Released. The data on the backing store is not deleted and you manually need to clean it up. PVs in this phase are not considered as candidates for another PVC creation.
      • Recycle: When the PVC for this PV is deleted, the real backing storage is not deleted and the PV will be in a phase/state of Available. The data on the backing store can be scrubbed.
      • Delete: This is the default policy when PVs are dynamically created. When the PVC is deleted, both the Kubernetes PV object and the real backing storage are deleted.
      Note: Not all storage implementations support all three of these policies. Check the specifications for the storage environment that you are using.

Using subpath in a deployment

If you would like to use one volume for multiple purposes in a single pod, the volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root, like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data    # PVC that backs the shared volume (must exist)

IBM Public Cloud StorageClasses

To enable workloads to easily move from IBM Cloud Private to IBM Public cloud, a common set of storageClasses is defined. A default StorageClass can be created during installation or by following the Knowledge Center instructions.

Notes:

  1. IBM Cloud Private pre-packages two provisioners as of the October release: GlusterFS and VMware vSphere datastore (only valid when VMware is the IaaS).
  2. Refer to this blog for comparison: File vs Block vs Object Storage.

Local volumes versus hostpath

There is a new storage option called local-volume that you might want to consider if you are currently using hostPath. If you use hostPath today, you should be aware of the following implications:

  • hostPath is not a persistent volume; the Kubernetes scheduler is not aware of hostPath. If your pod has to be recycled, there is no guarantee that it will be restarted on the host that it was originally running on. It could be scheduled and run on another node where the hostPath contains completely different information. You can work around this by manually making sure that all worker nodes have the same shared file system backing the hostPath.
  • The files or directories created on the underlying hosts are writable only by root. You either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a hostPath volume.

Local-volume support eliminates these issues.

Local volumes are now proper objects in Kubernetes. They add the ability to use node-affinity rules within the PV spec to influence how the pods that use the PV are scheduled to a node. The following snippet illustrates the new capability.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
        "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
                { "matchExpressions": [
                    { "key": "kubernetes.io/hostname",
                      "operator": "In",
                      "values": ["example-worker-node-1"]
                    }
                ]}
            ]}
        }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1

This example shows the use of the new volume.alpha.kubernetes.io/node-affinity annotation. It sets up a constraint that reads: “This persistent volume at location /mnt/disks/ssd1 exists on the Kubernetes worker node with hostname = example-worker-node-1”. The Kubernetes scheduler uses this constraint to ensure that it always schedules pods that consume this PV on the same worker node.

Using Kubernetes local volume provisioner

Follow the instructions from the GitHub repository at: https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume

The local volume provisioner is an out-of-tree storage provisioner. This means that it is not packaged as part of base Kubernetes and is still considered alpha.

The local volume provisioner is different from other provisioners: it does not perform dynamic provisioning. Its purpose is to help manage persistent volumes in Kubernetes. It automatically creates PVs and manages them after they are created. You can install it by using a Helm chart, but there are several prerequisites that need to be set up before you run the chart. The prerequisites and installation are covered below. Here is a high-level diagram of how it works:

Diagram showing how local volume provisioner works

Kubernetes local volume provisioner behavior:

  • When the daemonset pod starts, it reads data in the Discovery path and creates persistent volumes.
  • When a new file system is added (Gold Vol3 in the diagram), the daemonset pod detects it and adds a corresponding PV.

Set up and enable the required feature gates

The current version of IBM Cloud Private 2.1.0.1 is built on Kubernetes 1.8 and has the setting PersistentLocalVolumes=true but not MountPropagation.

To enable the MountPropagation=true gate, you need to update /etc/cfc/pods/master.json and then restart the k8s-master pod. This sets the gate for the following three containers that are running in this pod:

  • controller-manager
  • apiserver
  • scheduler
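
As a rough sketch (the exact contents of master.json vary by release), the command for each of these containers ends up carrying a feature-gates argument along the following lines; compare it with the verified output below:

--feature-gates=PersistentLocalVolumes=true,MountPropagation=true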


Verify the settings using kubectl describe -n kube-system po/k8s-master:

$ kc describe po/k8s-master-9.5.28.17 -n kube-system
Name: k8s-master-9.5.28.17
Namespace: kube-system
Node: 9.5.28.17/9.5.28.17
Start Time: Fri, 12 Jan 2018 13:35:57 -0600
Labels:
Annotations: kubernetes.io/config.hash=846cb2eb2e68a7de9acff292c3c79f0b
kubernetes.io/config.mirror=846cb2eb2e68a7de9acff292c3c79f0b
kubernetes.io/config.seen=2018-01-15T19:05:40.572589416Z
kubernetes.io/config.source=file
kubernetes.io/psp=privileged
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 9.5.28.17
Controllers:
Containers:
controller-manager:
Container ID: docker://36ad21dfc1a6bfca8e48757dc7359e55c253964e594dda410373f463b740096e
Image: registry.ng.bluemix.net/mdelder/kubernetes:v1.8.3-ee
Image ID: docker-pullable://registry.ng.bluemix.net/mdelder/kubernetes@sha256:fc877ca687d279f5e1997afe92e60b980b220aa2becee9bb79ad0d68fadce528
Port:
Command:
/hyperkube
controller-manager
--master=https://9.5.28.17:8001
--service-account-private-key-file=/etc/cfc/conf/server.key
--feature-gates=TaintBasedEvictions=true,PersistentLocalVolumes=true,MountPropagation=true
--root-ca-file=/etc/cfc/conf/ca.crt
--min-resync-period=3m

Make sure that the kubelet on all worker nodes has the right gates

The kubelet service definition is in /etc/systemd/system/kubelet.service.

  1. Stop the service with systemctl stop kubelet.
  2. Edit /etc/systemd/system/kubelet.service to add the required feature gates.
  3. Start the service and make sure that it picks up the changes to the service definition. In our testing, running systemctl daemon-reload alone did not pick up the changes; we needed to stop and start the service again.
  4. Verify the values with systemctl status kubelet:

systemctl status kubelet
kubelet.service - Kubelet Service
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-01-16 13:00:47 UTC; 41min ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 30016 (hyperkube)
Tasks: 40
Memory: 76.7M
CPU: 1min 54.682s
CGroup: /system.slice/kubelet.service
─30016 /opt/kubernetes/hyperkube kubelet --feature-gates Accelerators=true,PersistentLocalVolumes=true,MountPropagation=true,ExperimentalCriticalPodAnnotation=true --allow-privileged=true --docker-disable-shared-pid --require-kubeconfig --k

Deploy the local-volume provisioner Helm chart

First, you need to create the new service account and its cluster role bindings. The Kubernetes YAML file needed to do this is contained in the local-volume directory. In the following example, we cloned the external-storage Git repository to ~/repos. Change to the local-volume directory and run the following commands:

$ pwd

OUTPUT: 
/Users/huizenga/repos/external-storage/local-volume

$ kubectl create -f ./provisioner/deployment/kubernetes/admin_account.yaml

OUTPUT:
serviceaccount "local-storage-admin" created
clusterrolebinding "local-storage-provisioner-pv-binding" created
clusterrolebinding "local-storage-provisioner-node-binding" created

Deploy the Helm chart using the new service account

Deploying the Helm chart by using the new service account installs the provisioner, which is really a daemonset that runs on each node plus a configmap.

$ pwd

OUTPUT: 
Users/huizenga/repos/external-storage/local-volume/helm/provisioner

$ helm install -n local-provisioner . --debug

Verify that the provisioner is working

$ kc logs local-volume-provisioner-tjhsb
ERROR: logging before flag.Parse: $  Could not read file: /etc/provisioner/config/..data due to: read /etc/provisioner/config/..data: is a directory
ERROR: logging before flag.Parse: $ Configuration parsing has been completed, ready to run...
 Creating client using in-cluster config
 Starting controller
 Initializing volume cache
 Starting Informer controller
 Waiting for Informer initial sync
Controller started
 Found new volume of volumeType "file" at host path "/mnt/disks/vol2" with capacity 8412532736, creating Local PV "local-pv-ff2737db"
 Created PV "local-pv-ff2737db" for volume at "/mnt/disks/vol2"
 Found new volume of volumeType "file" at host path "/mnt/disks/vol3" with capacity 8412532736, creating Local PV "local-pv-77ee3266"
 Added pv "local-pv-ff2737db" to cache
 Added pv "local-pv-77ee3266" to cache
 Created PV "local-pv-77ee3266" for volume at "/mnt/disks/vol3"
 Found new volume of volumeType "file" at host path "/mnt/disks/vol1" with capacity 8412532736, creating Local PV "local-pv-24019a08"
 Updated pv "local-pv-ff2737db" to cache
 Added pv "local-pv-24019a08" to cache
 Created PV "local-pv-24019a08" for volume at "/mnt/disks/vol1"
 Updated pv "local-pv-77ee3266" to cache
 Updated pv "local-pv-24019a08" to cache
$
$ kubectl get pv
local-pv-d0bb00bf 8022Mi RWO Delete Available fast-disks 2h
local-pv-f6c4e1e5 8022Mi RWO Delete Available fast-disks 2h
$
$ kubectl get pv/local-pv-24019a08 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: local-volume-provisioner-9.5.28.20-e0727daa-f7cf-11e7-b368-005056ba76d1
    volume.alpha.kubernetes.io/node-affinity: '{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["9.5.28.20"]}]}]}}'
  creationTimestamp: 2018-01-15T18:08:14Z
  name: local-pv-24019a08
  resourceVersion: "407894"
  selfLink: /api/v1/persistentvolumes/local-pv-24019a08
  uid: 0e4f844c-fa1f-11e7-b368-005056ba76d1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8022Mi
  local:
    path: /mnt/disks/vol1
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-disks
status:
  phase: Available

Verify that adding a new local-volume after setup is working

This is done by creating a new shared file system under /mnt/disks:

$ for vol in vol4; do mkdir /mnt/disks/$vol; mount -t tmpfs $vol /mnt/disks/$vol; done

Verify that the change is detected by the local-volume provisioner:

$ kubectl logs local-volume-provisioner-hdzkh
 Found new volume of volumeType "file" at host path "/mnt/disks/vol4" with capacity 8412532736, creating Local PV "local-pv-45482171"
Created PV "local-pv-45482171" for volume at "/mnt/disks/vol4"
Added pv "local-pv-45482171" to cache
Updated pv "local-pv-45482171" to cache

Deploy a chart using storageClassName

To deploy a chart that uses a local PV, disable dynamic provisioning and use the storage class name that matches the one the local-volume provisioner used when you configured it.

The magic happens as part of the Kubernetes scheduling decision about where to deploy the pod. Recall that local volumes now contain node affinity rules that tie the PV to the physical node it was created on; the Kubernetes scheduling function uses these PV-level node affinity rules during the scheduling process.
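
For example, a PVC in your chart that targets the storage class created by the local-volume provisioner in the sample output above (the claim name is illustrative) might look like this:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-data-claim
spec:
  storageClassName: fast-disks
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi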

Delete a chart and its corresponding PVC

When you delete a PVC that is bound to a local volume that was created by the local-volume provisioner, the corresponding PV goes from the Bound state back to Available.

Best practices for connecting workloads to the logging service

There are two primary means of shipping application logs to a deployed ELK stack. The simplest approach, if your application design can support it, is to write logs to the stderr and stdout streams, which are collected automatically by Docker and shipped to the IBM Cloud Private logging service by default. The second approach, packaging a Filebeat container as a sidecar, is more involved but flexible enough to adapt to most workloads that write their logs to one or more log files.

Automatic log collection using stderr and stdout

Most Docker-enabled applications launch their main process using the Dockerfile command, ENTRYPOINT. For example:

FROM busybox:latest
# Copy application binaries
ENTRYPOINT ["npm", "start"]

In this example, Docker will automatically capture any content written by the Node.js application to the stdout or stderr pipes. That content is then written by Docker to a system-level log file accessible to Kubernetes daemonset pods.

The logging service deploys Filebeat into daemonset pods that monitor and automatically ship the Docker system-level logs to the Elastic stack. So your application logs that are written to stdout and stderr will be available in the logging service with no additional work.

Using a Filebeat sidecar image

Most other applications store logs locally in discrete files. On Linux systems, they are typically stored somewhere under the /var/log/ directory. In either case, these files are not directly accessible to other containers, and you must configure another method to stream the logs outside of the container. The most effective solution is to add another container to your pod (often termed a sidecar) that has visibility to those logs and runs a streaming tool, such as Filebeat. The sidecar approach is effective because the main application container can share the folders under which logs are stored without affecting the application in any way.

For detailed instructions on configuring a sidecar image for your pods, refer to the Build a sidecar section of the public logging documentation in the IBM Cloud Private Knowledge Center at: https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/manage_metrics/logging_elk.html
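
As an illustrative sketch only (the image tag, paths, and ConfigMap name are assumptions, not the documented procedure), a pod spec with a Filebeat sidecar that shares the application's log directory through an emptyDir volume might look like this:

spec:
  containers:
  - name: app
    image: myorg/myapp:1.0                          # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/myapp                     # directory the application writes logs to
  - name: filebeat-sidecar
    image: docker.elastic.co/beats/filebeat:5.5.1   # use the version that matches your ELK stack
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/myapp                     # same log directory, visible to Filebeat
    - name: filebeat-config
      mountPath: /usr/share/filebeat/filebeat.yml
      subPath: filebeat.yml
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: filebeat-config
    configMap:
      name: filebeat-config                         # hypothetical ConfigMap that holds filebeat.yml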

Best practices for monitoring your cluster and applications

You can use the IBM Cloud Private cluster monitoring dashboard to monitor the status of your cluster and applications. The monitoring dashboard uses Grafana and Prometheus to present detailed data about your cluster nodes and containers. For more information about Grafana, see the Grafana documentation. For more information about Prometheus, see the Prometheus documentation.

When possible, workloads should integrate with the platform-provided monitoring tools rather than packaging their own Prometheus and Grafana containers.

Workload products and applications need to make metrics data available in the Prometheus format to integrate with the monitoring service. Pre-built Prometheus exporters are available for many open source products at https://Prometheus.io/docs/instrumenting/exporters/. For products that do not already expose metrics data by using a Prometheus endpoint, refer to https://Prometheus.io/docs/instrumenting/ for guidance on adding this capability to your product. By default, IBM Cloud Private deploys Prometheus and Grafana for system monitoring. You can also deploy more monitoring stacks from the catalog with customized configurations to monitor your environment.

Integrating Workloads with the Monitoring Service

For workloads that expose a Prometheus metrics endpoint, you need to define the metrics endpoint as a Kubernetes service with the annotation prometheus.io/scrape: 'true'. For example, you could create a file named metrics-service.yaml that contains the following content:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: {{ template "fullname" . }}
  name: {{ template "fullname" . }}-metrics
spec:
  ports:
  - name: {{ .Values.service.name }}-metrics
    targetPort: 9157
    port: 9157
    protocol: TCP
  selector:
    app: {{ template "fullname" . }}
  type: ClusterIP
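
The service shown above only advertises the metrics endpoint; the container in the corresponding deployment must actually listen on the port referenced by targetPort (9157 is simply the example port used here). A minimal, illustrative excerpt from deployment.yaml might look like this:

spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - name: metrics
              containerPort: 9157    # port on which the application serves its /metrics endpoint
              protocol: TCP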

Using collectd

Collectd is an open source metrics-gathering tool. If your workload already provides metrics in the collectd format, or a collectd plugin exists for your workload (as is the case for many open source products), the collectd exporter provides an easy way to expose your data to Prometheus. If your workload does not already use collectd or have a collectd plugin, you can disregard this section.

For workloads that use collectd and depend on the collectd exporter to expose metrics to Prometheus from inside the application container, you need to update the collectd configuration file: load the network plugin and point it at the collectd exporter.

LoadPlugin network

<Plugin network>
  Server "${RELEASE}-exporter" "25826"
</Plugin>
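
The configuration above streams metrics to a host named ${RELEASE}-exporter on port 25826 (the collectd network protocol port), where ${RELEASE} stands for the Helm release name. One way to make that hostname resolvable inside the cluster, sketched below purely as an illustration, is to front the collectd exporter with a Kubernetes service of the same name; the exporter's own Prometheus metrics endpoint can then be exposed through an annotated service as shown in the previous section. The selector label is a hypothetical one for the exporter pod:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-exporter           # must match the ${RELEASE}-exporter host in the collectd configuration
spec:
  selector:
    app: {{ template "fullname" . }}-exporter  # hypothetical label on the collectd exporter pod
  ports:
    - name: collectd-network                   # collectd network plugin traffic
      port: 25826
      targetPort: 25826
      protocol: UDP
  type: ClusterIP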


Best practices for metering and licensing workloads

Metering and licensing for workloads on IBM Cloud Private are based on the virtual processor cores that are available to, capped for, or utilized by the containerized components that make up the product offering.

Virtual core information is automatically collected by a metering daemon running in the ICP Kubernetes cluster. Workloads must identify themselves to this daemon so that the appropriate metrics can be gathered and attributed to the running offering.

The metadata is used to associate the metrics gathered for metering purposes with the deployed offering. Licensing for these offerings is governed by the terms and conditions of entitlement when the offering is purchased, not by the metering service. The metering service measures the metrics for the running offering and reports this usage to the customer.

Integration with the metering service is required even for free workloads, because customers may also use the utilization data generated by the metering service for chargeback purposes.

Workload products must specify their product ID, product name, and product version for metering purposes. This can be done either in their Helm charts as metadata annotations on the pods, or by using Docker image labels. Both approaches will continue to be supported, and workloads should choose the approach that best meets their needs. Note that if metering metadata is specified at both the chart level and the image level, the metadata in the chart will override the metadata provided by the image.

Defining metering metadata in a Helm chart

Product teams should specify their product ID, product name, and product version for the meter reader using metadata annotations on the pods. This is defined in the spec template section of the Helm chart for a specific deployment.

  • A product identifier (productID) uniquely identifies the offering
  • A product name (productName) is the human readable name for the offering
  • A product version identifier (productVersion) specifies the version, release, modification, and fix level (v.r.m.f) of the offering

These are specified as follows in the deployment.yaml file of a Helm chart (in addition to any other existing pod metadata annotations defined in that file):

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        productName: WebSphere Application Server Liberty
        productID: fbf6a96d49214c0abc6a3bc5da6e48cd
        productVersion: 17.0.0.1

Charts containing multiple products in separate containers

If a single chart contains multiple containers, you can specify a separate value per container in the same string by providing a pipe character (‘|’) followed by a containerName:productString key/value pair for each container (so the first character of the string is the ‘|’ pipe character). For example, in a chart containing three containers, this productName value would specify a different product name for each container:

productName: '|containerName1:Product Name 1|containerName2:Second Product Name|containerName3:Final Product'

Use the same syntax for productID and productVersion.
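
For illustration, a chart with two hypothetical containers named web and worker might carry the following pod annotations (the container names and values are placeholders):

annotations:
  productName: '|web:My Product Web Tier|worker:My Product Worker'
  productID: '|web:MyCoProductWeb_100_perpetual_00000|worker:MyCoProductWorker_100_perpetual_00000'
  productVersion: '|web:1.0.0|worker:1.0.0'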

Product ID formatting

Each offering of each product on IBM Cloud Private must be able to be uniquely identified by the metering service. This means that if you want two different editions of the same product to be seen differently by the metering service (for example, a developer edition and an enterprise edition) you need to provide different product ID metadata for the two offerings.

Workloads should adhere to the following standard for the product ID string: productName_productVersion_licenseType_uniqueKey. The four fields, described below, together define your product ID and must be separated with an underscore ( _ ) character. Do not use spaces or special characters other than underscores in your product ID string.

  • productName: A string representing your company and product name
  • productVersion: A string representing the version of the product in the image
  • licenseType: A string that reflects the type of license in this build (remember this is just a unique string, so if you need to capture multiple factors in this field, you can)
  • uniqueKey: A 5-digit numerical string that allows you to build or track multiple versions of a product with the same version and license type over time, if needed. If you do not require this field, use ‘00000’. If you need to ship a revision later, you can increment it to 00001, and so on.

For example: IBMIntegrationBusStandardEdition_10009_perpetual_00000

These rules are designed to accomplish the following two goals:

  • Ensure that each product on the platform has a unique product ID label.
  • Ensure that self-defined product IDs have an obvious visual meaning.

Enabling GPU support

Refer to the IBM Cloud Private Knowledge Center for instructions on configuring GPU support at: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_applications/deploy_app_gpu.html

For applications that utilize NVIDIA “persistence mode”, please note the following: NVIDIA has deprecated its current, or “legacy”, persistence implementation in favor of a more elegant “Persistence Daemon”. You can read more about this change in sections 3, 4, and 5 of the persistence documentation on the NVIDIA Developer Zone site.

ICP has switched over to the Persistence Daemon implementation. To complete the implementation for your application, you need to add a volume mount for the “persistence indicator”. For example, add the following to the “spec:” section of your deployment:

spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      ports:
      volumeMounts:
        - mountPath: "/var/run/nvidia-persistenced"
          name: nvidia-pers-indicator
      resources:
        limits:
          alpha.kubernetes.io/nvidia-gpu: 1
  volumes:
    - name: nvidia-pers-indicator
      hostPath:
        path: /var/run/nvidia-persistenced
        type: Directory

Hardware configuration recommendations

IBM Cloud Private is a Docker-based infrastructure. You can deploy it on a single node for initial testing and proof of concept (POC) work or, ideally, on a cluster of servers.

The minimum hardware configuration is a single IBM Power® server that supports either a single-node deployment or a cluster of four kernel-based virtual machines (KVMs). More information can be found at: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/supported_system_config/hardware_reqs.html

Minimum configuration recommendations

  • Single node: At least eight cores with at least 2.4 GHz, 16 GB RAM, and 151 GB free disk space
  • Multi-node cluster
    • Boot / Master (one or more): At least two cores with at least 2.4 GHz, 4 GB RAM, and 151 GB free disk space
    • Proxy (one or more): At least two cores with at least 2.4 GHz, 4 GB RAM, and 40 GB free disk space
    • Worker (one or more): At least one core with at least 2.4 GHz, 4 GB RAM, and 100 GB free disk space

Single physical server topology example

[Diagram: single physical server example for IBM Cloud Private]

Note: Consider using 64 GB RAM if you’re planning a KVM deployment. 64 GB also maximizes the use of the available memory bandwidth (populate at least half of the DIMM slots).

Support and technical resources

We’re committed to supporting your enablement efforts and have several ways for you to get help should you need it, including: