
The IBM Platform Common Services (referred to in this article simply as the common services) are used within the IBM Cloud Paks to provide consistency across them. This post discusses how to manage the common services within OpenShift clusters where you want to control which nodes they run on, focusing on the IBM Cloud Pak for Integration (CP4I).

This post applies to any CP4I installation that was installed using the inception style installer. You can tell whether this method was used by checking whether the common services pods (e.g. auth-idp) are running in the kube-system namespace; if they are, then they were installed using the inception style installer.
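
For example, a quick check with oc (the auth-idp prefix here assumes the default pod naming):

oc get pods -n kube-system | grep auth-idp

If this returns one or more running pods, the inception style installer was used.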

What are the IBM Platform Common Services?

The common services were originally part of the IBM Cloud Private Kubernetes distribution, where they made up the value-add of using IBM Cloud Private over a basic Kubernetes cluster. They provide enterprise-quality, cluster-wide services that have been hardened for security.

The services used by IBM Cloud Pak for Integration include:

  • Single Sign-On
  • Identity and Access Management (IAM)
  • Helm
  • Management UI for Helm
  • License Advisor for tracking product usage
  • An nginx ingress controller for applications
  • Cluster-wide logging service (based on the ELK stack)
  • Cluster-wide monitoring service (based on Prometheus and Grafana)

How do I run the common services on specific nodes?

Under IBM Cloud Private, the common services run on dedicated nodes, and the same principles are used to run them on OpenShift. When installing the Cloud Pak, you can control which nodes are used for the services in the config.yaml file:

# Nodes selected to run common services components.
#
# The value of the master, proxy, and management parameters is an array;
# by providing multiple nodes, the common services will be configured
# for high availability.
#
# It is recommended to install the components onto one or more OpenShift
# worker nodes. The master, proxy, and management components can all share
# the same node or set of nodes.
cluster_nodes:
  master:
    - <your-openshift-node-to-deploy-master-components>
  proxy:
    - <your-openshift-node-to-deploy-proxy-components>
  management:
    - <your-openshift-node-to-deploy-management-components>
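
As a purely hypothetical example, for a cluster with worker nodes named worker-0, worker-1, and worker-2 (placeholder names), you might dedicate one node to each set:

cluster_nodes:
  master:
    - worker-0
  proxy:
    - worker-1
  management:
    - worker-2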


The common services are broken down into three sets: master, proxy, and management. These sets should not be confused with the OpenShift node types; there is no need for the services in the master set to run on an OpenShift master node, or for the services in the management set to run on an OpenShift infrastructure node. They should all run on OpenShift worker nodes.

The services in each set are:

  • Master: Single sign-on, IAM, Helm, Helm UI, License Advisor
  • Proxy: nginx ingress controller
  • Management: Logging, Monitoring
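
After installation, you can check which node each of the common services pods has been scheduled onto; the -o wide output includes a NODE column:

oc get pods -n kube-system -o wide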

The common services also run an instance of MongoDB as part of the master set, which is used for data persistence within many of the common services.

You can specify more than one node for each of the common services sets; the services in a set with multiple nodes will run in a highly available (HA) configuration (see the example after this list):

  • Some services run a replica on each active node in the set using a DaemonSet (IAM, Helm UI, nginx ingress controller)
  • Some services run one replica per node in the set using a Deployment (MongoDB, Elasticsearch, Logstash, Platform API)
  • Other services run a single replica, which fails over to an alternate node in the set if the scheduled node fails
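
For example (again with placeholder node names), a hypothetical HA layout might spread the master set across three workers while the proxy and management sets share a pair of nodes:

cluster_nodes:
  master:
    - worker-0
    - worker-1
    - worker-2
  proxy:
    - worker-3
    - worker-4
  management:
    - worker-3
    - worker-4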

Note: In CP4I 2019.4.1, a few of the common services are missing their node affinities. These will be corrected in a future release; until then, there is a script on gist that adds the missing node affinities. You can run it on any common services version installed using the inception style installer, and it is safe to run more than once.

How do I make common services nodes dedicated to only common services workload?

The common services all run with a toleration that matches the NoSchedule effect of a taint with the key dedicated and any value, making it simple to stop other workloads from running on a node dedicated to the common services.
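
In pod spec terms, that toleration looks like the following sketch; operator: Exists matches any value of the dedicated key:

tolerations:
  - key: dedicated
    operator: Exists
    effect: NoSchedule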

You can apply the taint to the node with kubectl or oc:

kubectl taint nodes <node> dedicated=infra:NoSchedule


or

oc adm taint nodes <node> dedicated=infra:NoSchedule


These commands will not evict workload that is already running on the dedicated nodes, so if required, remove any pods on those nodes that are not part of the common services and allow Kubernetes to reschedule them onto other nodes. Deleting pods directly does not respect any configured PodDisruptionBudget, so consider instead draining and then uncordoning the node to move the services off cleanly.
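
For example (a sketch: --ignore-daemonsets is needed because some common services run as DaemonSets, and --delete-local-data may be required for pods that use emptyDir volumes):

oc adm drain <node> --ignore-daemonsets --delete-local-data
oc adm uncordon <node>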

Note: In CP4I 2019.4.1, a few of the common services are missing their tolerations. These will be corrected in a future release; until then, there is a script on gist that adds the missing tolerations. You can run it on any common services version installed using the inception style installer, and it is safe to run more than once.
