Example exercises to differentiate OpenShift and Kubernetes

Recently, I was asked what I thought the difference was between OpenShift and Kubernetes, and an analogy came to mind: Red Hat OpenShift is to Kubernetes as Red Hat Enterprise Linux is to the Linux kernel. These technologies are not fundamentally different; rather, to a large extent, one contains the other.

In particular, Red Hat OpenShift can be thought of as a distribution of Kubernetes, although with some important and useful differences. Rather than rehash blogs that do a great job of pointing out and explaining these differences (see, for example, 10 most important differences between OpenShift and Kubernetes and What’s the difference between Kubernetes and OpenShift?), this tutorial provides getting-started exercises that emphasize a few of these differences. Specifically, the tutorial demonstrates routes and routers, and the BuildConfig/ImageStream/DeploymentConfig triad that enables simple yet very useful touchless pipelines. The examples in this tutorial use Red Hat OpenShift on IBM Cloud.

Prerequisites

You should have basic Kubernetes knowledge. The tutorial assumes that an OpenShift cluster has already been created. The cluster used in these examples was created on Red Hat OpenShift on IBM Cloud. The tutorial also assumes that the files that comprise the sample application are in a GitHub repository, so that a webhook can be added. Finally, it assumes that Calico can be installed on the local environment that you use to control the target OpenShift cluster through the command-line interface.

To begin, you should have already created an OpenShift cluster to deploy your applications. A basic cluster with three worker nodes in a single zone should be enough for this tutorial. After you have access to the cluster, you can open its web console by using the OpenShift web console button (in blue at the top right), as shown in the following screen capture from Red Hat OpenShift on IBM Cloud:

Cluster web console with OpenShift web console button

Your cluster's web console also has an Access tab that provides information and instructions for setting up the OpenShift CLI tools that you use to log in and run commands to control your cluster.
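For example, the login command that the Access tab provides typically looks like the following, where the token and server values are placeholders for your cluster's values:

oc login --token=<api-token> --server=<cluster-api-endpoint>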

Estimated time

Assuming that you created the OpenShift cluster and the GitHub repository and installed Calico, as described in the prerequisites, completing the steps in this tutorial should take about 30 minutes. Reading time is 10 to 15 minutes.

Create a simple pipeline

Consider a simple scenario for an application that demonstrates how the route and router work and exercises the BuildConfig/ImageStream/DeploymentConfig features of OpenShift. You need a simple HTTP server that takes GET REST requests and responds with a simple message. The HTTP server should be accessible externally through a public URL. Also, you should be able to make changes to the server's code that are reflected on the running instance without any more intervention than a push to the server's Git repo.

For this tutorial, assume that you have a Git repo that contains the code for your server. This repo contains at least three files. The first file is the actual code for the server, like the following example:

File with example code for your server

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Automates a curl-style GET request against the server's route URL:
# curl <url> --resolve <host>:443:<ip-address> --cacert <sslcertfile>
# Notice that as long as the host/IP-address pair is in /etc/hosts, --resolve
# is not needed, and requests does not need to deal with it

import logging
import argparse
import json
import requests

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--url', default='https://simple-http-server-route-default.voc-sandbox-cluster-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud', help='url is set to (default: %(default)s)')
    parser.add_argument('--sslcertfile', default='/Users/isilval/Devt/ssl-cert/CertBundle.p7b', help='sslcertfile is set to (default: %(default)s)')
    return parser.parse_args()

if __name__ == '__main__':
    config = vars(parse_args())
    logger.info(json.dumps(config))

    headers = {'Content-Type' : 'application/json'}
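    # Note: verify=False below disables TLS certificate verification for this
    # simple test; to verify the server's certificate instead, pass the path
    # of a PEM CA bundle as the verify argument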
    response = requests.get(config['url'], headers=headers, verify=False)

    logger.info('response code: {}, message: {}'.format(str(response.status_code), response.text))

The second and third files are a Dockerfile that OpenShift uses to create a Docker image for your app, and a script to start the HTTP server, as in the following examples:

FROM python:3

COPY simple_http_server.py .
COPY start_simple_http_server.sh .
COPY ssl ./ssl
CMD [ "sh", "./start_simple_http_server.sh" ]
#!/usr/bin/sh

python simple_http_server.py

To create the pipeline that maintains your app and takes it seamlessly from code in a Git repo to a running instance in a Kubernetes pod in OpenShift, you first create a BuildConfig, which builds a Docker image out of the three files you have in your repo.

A BuildConfig can take one or more triggers of various types (for example, Generic, GitHub, or ConfigChange). For this tutorial, you want the GitHub trigger, which kicks off a build of your image whenever there is a new commit to your repo (through a push, for example). The source of the BuildConfig in this case is also Git: you specify the GitHub server, the project, and the repo where your application code lives.

You also need to provide a sourceSecret that contains the authentication information that OpenShift needs to connect to your Git repo. You can specify all of this information in a .yaml file of the BuildConfig kind. But the OpenShift CLI also has a new-build command that takes the important values and creates the .yaml file for you. Here's an example of the command:

oc new-build --name=simple-http-server-bc git@github.ibm.com:MarketingSystems/OpenShiftTest.git --source-secret gitsecret

where gitsecret is created using the following command:

oc create secret generic gitsecret --from-file=ssh-privatekey=<ssh-private-key-file> --type=kubernetes.io/ssh-auth
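After the BuildConfig exists, you can watch the builds that it kicks off from the CLI. The following quick checks assume the BuildConfig name given in the new-build command above:

# List the builds kicked off for this BuildConfig
oc get builds

# Stream the logs of the latest build
oc logs -f bc/simple-http-server-bc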

For GitHub to actually notify your BuildConfig, and thus trigger the desired build, you also need to provide GitHub with a webhook that allows it to make the connection back into OpenShift and your BuildConfig. You can get the specific URL to use in this webhook from the OpenShift console page corresponding to your BuildConfig, like the following example:

Example OpenShift console page for the BuildConfig, showing the webhook URL
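If you prefer the CLI, you can also retrieve the webhook URL with oc describe, again assuming the BuildConfig name used earlier:

oc describe bc/simple-http-server-bc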

The output of the BuildConfig is an ImageStream. By default, it takes the name given to the BuildConfig in the new-build command, with a latest tag. Also, notice that OpenShift defines the strategy of the BuildConfig to be a dockerStrategy, using an ImageStream as the source for the base image, as given by your Dockerfile. This additional ImageStream also gets created, with a tag given by the name of the base (or FROM) image in your Dockerfile. You can inspect the ImageStreams that are created using the CLI as well, for example:

Inspect the ImageStream created using the CLI
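For example, the following commands list the ImageStreams in the current project and show the details, including the internal Docker repo and tags, of the one that your build produces:

# List all ImageStreams in the current project
oc get is

# Show details for the ImageStream that the build outputs
oc describe is/simple-http-server-bc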

Notice that the Docker repo that gets used is an internal one, maintained by OpenShift, and contained within your cluster.

By now, you might have an idea of what an ImageStream represents: a sequence of image versions, contained in an internal Docker registry or repo, that serves as a sink for a BuildConfig on one hand and as a source for a DeploymentConfig on the other. It takes care of the second step in your pipeline, and you don't even need to create anything explicitly.

The third step in your pipeline is to deploy your app into a managed pod. However, rather than using a regular Deployment object to manage the deployment of your app into pods, as you would in plain Kubernetes, OpenShift provides a DeploymentConfig kind of object. As Tomasz Cholewa points out, “OpenShift chose to have a different way of managing deployments” that “is implemented not by controllers, but rather by sophisticated logic based on dedicated pods controlling whole process.”

A DeploymentConfig, as specified in YAML, is very similar to a regular Deployment, including the Docker registry path to the image for your app, in a containers section. Notice that this Docker registry path for the image is given by the Docker repo item that is part of the ImageStream that was previously created as the output of the BuildConfig. However, one important difference in a DeploymentConfig is a triggers element. There are various types, including ConfigChange and ImageChange. For this tutorial, the important trigger type is ImageChange.

The labels element (part of the metadata) is an important element in a DeploymentConfig. It is also common to other artifact kinds, including Deployment. Whatever labels you decide to use are key to connecting the DeploymentConfig to the Service that you use to expose the app later. The following example DeploymentConfig is used in this tutorial:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: simple-http-server-dep
  namespace: default
  labels:
    app: simple-http-server-dep
spec:
  template:
    metadata:
      labels:
        name: simple-http-server-dep
        app: simple-http-server-dep
    spec:
      containers:
        - name: simple-http-server-ctr
          image: 'docker-registry.default.svc:5000/default/simple-http-server-bc:latest'
          ports:
            - containerPort: 80
              protocol: TCP
  replicas: 1
  triggers:
    - type: "ConfigChange"
    - type: "ImageChange"
      imageChangeParams:
        automatic: true
        containerNames:
          - "simple-http-server-ctr"
        from:
          kind: "ImageStreamTag"
          name: "simple-http-server-bc:latest"
  strategy:
    type: Rolling
  paused: false
  revisionHistoryLimit: 2
  minReadySeconds: 0

Use the following command to apply the DeploymentConfig file:

oc apply -f simple-http-server-dc.yaml

This command creates a pod that contains a running container for your app, which you can see at the bottom of the page for the replica of the deployment:

Example page for the replica of the deployment in Red Hat OpenShift on IBM Cloud
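You can also confirm the rollout from the CLI. A minimal check, using the names from the DeploymentConfig above:

# Wait for the latest rollout of the DeploymentConfig to finish
oc rollout status dc/simple-http-server-dep

# List the pods that the deployment created
oc get pods -l app=simple-http-server-dep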

Expose your app externally

So far, your simple HTTP server is running, but you can only access it internally, from a terminal connected to the pod running its container.

Exposing your app externally is also simple, however. All you need to do is expose it with a Service and then create a route, connected to a router, that directs traffic to its running container. To create a Service, you can define a YAML file in much the same way as you do in plain Kubernetes. The YAML file looks like the following example:

apiVersion: v1
kind: Service
metadata:
  name: simple-http-server-service
  namespace: default
  labels:
    name: simple-http-server-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8089
  selector:
    app: simple-http-server-dep

A few items are important to highlight. One is the ports element, which defines the port that the Service exposes (port) and the port that the app listens on inside its container (targetPort). Another key item is the selector, which refers declaratively to the DeploymentConfig that manages the running app. Notice that this selector must match the labels in the pod template of the DeploymentConfig, both in label name and value. In the following example, you see a single label:

app: simple-http-server-dep
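After you save the Service definition, apply it in the same way as the DeploymentConfig (the file name here is illustrative):

oc apply -f simple-http-server-service.yaml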

With this Service, you can create a route to direct traffic. A route needs to be associated with a router, which must also exist. If you are willing to live in the default project or namespace (like the simple example in this tutorial does), OpenShift provides a default router, so you don't need to create one.

As a sidebar, notice that this tutorial treats project and namespace as synonymous. This is another difference between OpenShift and Kubernetes, although an OpenShift project is basically a namespace with a few add-ons. For more details on the differences between projects and namespaces, see Section 9: OpenShift projects are more than Kubernetes namespaces.

Back to creating your route, you can also use the default router page:

Example router page

Click Create route, provide a name, a host name, and a path for the app's external URL, and select the Service that you defined to expose your app. For the host name, you have two options. One is to leave it blank and let OpenShift generate it from the name you gave to the route, the name of the project, the cluster name, and a generated unique identifier. For the example in this tutorial, the following host name is generated:

simple-http-server-route-default.voc-sandbox-cluster-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud

I know, it's kind of a mouthful. But the alternative option is to provide your own host name, which you then need to configure in OpenShift, and for which you need to handle whatever DNS configuration is necessary. For the purposes of the simple example in this tutorial, keep the generated host name. That way, there is nothing else for you to do to enable it.

An additional item that you can provide in the definition of your route is its security level. You can check the Secure route box to enable TLS security and the use of the HTTPS protocol in your URL. Use Passthrough TLS termination, rather than Edge or Re-encrypt, for this tutorial, and let OpenShift use its default certificates.
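If you prefer the CLI over the console, you can create an equivalent route with a single command. The following sketch assumes the Service name defined earlier and lets OpenShift generate the host name:

# Create a passthrough TLS route for the Service
oc create route passthrough simple-http-server-route --service=simple-http-server-service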

Notice that an OpenShift route is very similar to a Kubernetes ingress or an Istio gateway.

The last piece of the puzzle to enable external access to your app is to open the appropriate firewall flow. This tutorial uses Calico to manage firewall flows. After Calico is installed, you can use the following command from the OpenShift CLI environment to generate the corresponding Calico configuration:

ibmcloud oc cluster config --cluster <cluster-id> --admin --network

You can find the <cluster-id> on the Overview tab for your OpenShift cluster.

Now you can point Calico to your cluster’s config by moving the corresponding file, as in:

mv ~/.bluemix/plugins/container-service/clusters/<cluster-id>-admin/calicoctl.cfg /etc/calico

Then you can define the Calico flow with a YAML like the following example:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-simple-http-server
spec:
  applyOnForward: true
  ingress:
  - action: Allow
    destination:
      nets:
      - 169.63.135.10/32
      ports:
      - 443
    protocol: TCP
    source: {}
  preDNAT: true
  selector: ibm.role=='worker_public'
  order: 1800
  types:
  - Ingress

You can get the nets IP address in this example from the Ingress Points item on the default router's page, as previously shown. Also notice that the policy opens port 443, the conventional port for HTTPS traffic.

You apply this YAML not with the OpenShift CLI but with the Calico CLI, using the following command:

calicoctl apply -f allow-simple-http-server.yaml
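You can then confirm that the policy is in place, for example:

calicoctl get globalnetworkpolicy allow-simple-http-server -o yaml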

Now you can simply invoke your app with curl, using a URL that contains the generated host name.
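For example, a quick smoke test might look like the following, where the -k flag skips verification of OpenShift's default certificates and the host name is the one generated earlier:

curl -k https://simple-http-server-route-default.voc-sandbox-cluster-7d4bdc08e7ddc90fa89b373d95c240eb-0001.us-east.containers.appdomain.cloud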

Summary

So, now you have a simple HTTP server that takes GET REST requests and responds with a simple message, and the HTTP server is accessible externally through a public URL.

You can also verify that you can make changes to the code on the server that are reflected on the running instance, without any more intervention than a push to the server’s Git repo.

I made the following video of these steps, starting with changing the code locally: OpenShift exercise recording. After a push to Git, you can see how the next steps flow, and you can verify that the new output message is displayed when you issue the same curl command that you ran earlier. This video helps you see how you can try these steps in your own environment.