A brief history of Kubernetes, OpenShift, and IBM

The recent introduction of Red Hat® OpenShift® as a choice on IBM Cloud sparked my curiosity about its origins and why it is so popular with developers. Many of the developers I sat beside at talks or bumped into at lunch at a recent KubeCon conference mentioned how they used OpenShift. I heard from developers at financial institutions running analytics on transactions and at retailers creating new experiences for their customers.

OpenShift is a hybrid-cloud, enterprise Kubernetes application platform. IBM Cloud now offers it as a hosted solution or an on-premises platform as a service (PaaS). It is built around containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux.

With the growth of cloud computing, OpenShift became one of the most popular development and deployment platforms, earning respect based on merit. As cloud development becomes more “normal” for us, it is interesting to consider where OpenShift fits as another tool in the toolbox for creating the right solution. It might mix with legacy on-premises software, cloud functions, Cloud Foundry, or bare metal options.

In this blog post, my colleague Olaph Wagoner and I step back in time to understand where OpenShift came from, and we look forward to where it might be going in the world of enterprise application development with Kubernetes.

The following graphic shows a timeline of OpenShift, IBM, and Kubernetes:

OpenShift, IBM, and Kubernetes timeline

Early OpenShift: 2011-2013

OpenShift was first launched in 2011 and relied on Linux containers to deploy and run user applications, as Joe Fernandes describes in Why Red Hat Chose Kubernetes for OpenShift. OpenShift V1 and V2 used Red Hat’s own platform-specific container runtime environment and container orchestration engine as the foundation.

However, the story of OpenShift began sometime before its launch. Some of the origins of OpenShift come from the acquisition of Makara, announced in November of 2010. That acquisition provided software as an abstraction layer on top of systems and included runtime environments for PHP and Java applications, Tomcat or JBoss application servers, and Apache web servers.

Early OpenShift used “gears,” a proprietary type of container technology, so OpenShift nodes included a form of containerization from the start. The gear metaphor was based on what was contained: OpenShift called these isolated environments gears because, like a gear, each was capable of producing work without tearing down the entire mechanism. An individual gear was associated with a user. To make templates out of those gears, OpenShift used cartridges, a technology that came from the Makara acquisition.

OpenShift itself was not open source until 2012. In June 2013, V2 went public, with changes to the cartridge format.

Docker changes everything

Docker started as a project at a company called dotCloud and was made available as open source in March 2013. It popularized containers with elegant tools that let people build images and transfer their existing skills to the platform.

Red Hat was an early adopter of Docker, announcing a collaboration in September 2013. IBM forged its own strategic partnership with Docker in December 2014. Docker is one of the essential container technologies, and multiple IBM engineers have contributed code to it since the early days of the project.


Kubernetes arrives: 2014

Kubernetes surfaced from work at Google in 2014 and became the standard way of managing containers.

Although originally designed by Google, it is now an open source project maintained by the Cloud Native Computing Foundation (CNCF), with significant open source contributions from Red Hat and IBM.

According to kubernetes.io, Kubernetes aims to provide “a system for automating deployment, scaling, and operations of application containers” across clusters of hosts. It works with a range of container tools, including Docker.

With containers, you can move to a modular application design in which, for example, the database is independent, and you can scale applications without scaling your machines.

Kubernetes is another open source project to which IBM was an early contributor. The following graphic shows the percentage of IBM’s contributions to Docker, Kubernetes, and Istio relative to the top five organizations contributing to each of those container-related projects. It highlights the importance of container technology for IBM, as well as the volume of its open source work.

Some of IBM's contributions to open source container technology

OpenShift V3.0: open and standard

Red Hat announced its intent to use Docker in OpenShift V3 in August 2014. Under the covers, the jump from V2 to V3 was substantial: OpenShift went from gears and cartridges to containers and images. To orchestrate those images, V3 introduced Kubernetes.

The developer world was warming to the attraction of Kubernetes too, for some of the following reasons:

  • Kubernetes pods allow you to deploy one or multiple containers as a single atomic unit.

  • Services can access a group of pods at a fixed address and can link those services together using integrated IP and DNS-based service discovery.

  • Replication controllers ensure that the desired number of pods is always running and use labels to identify pods and other Kubernetes objects.

  • A powerful networking model enables managing containers across multiple hosts.

  • The ability to orchestrate storage allows you to run both stateless and stateful services in containers.

  • Simplified orchestration models allow applications to get up and running quickly, without the need for complex two-tier schedulers.

  • An architecture that recognized the differing needs of developers and operators took both sets of requirements into consideration, eliminating the need to compromise either of these important functions.
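The pod, label, and service concepts above can be sketched in a few lines of YAML. This is a minimal, hypothetical example (the names and image are illustrative, not from the original article): a pod carries a label, and a service selects pods by that label and gives them a stable in-cluster address.

```yaml
# A hypothetical pod with a label, and a service that selects it.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web            # labels identify this pod to services and controllers
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web             # reachable in-cluster via DNS as "web"
spec:
  selector:
    app: web            # the service forwards traffic to pods with this label
  ports:
    - port: 80
```

A replication controller (or, today, a Deployment) would use the same `app: web` label to keep the desired number of such pods running.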

OpenShift introduced powerful user interfaces for rapidly creating and deploying apps with Source-To-Image and pipelines technologies. These layers on top of Kubernetes simplify and draw in new developer audiences.

IBM was already committing code to the key open source components OpenShift is built on. The following graphic shows a timeline of OpenShift with Kubernetes:

OpenShift and Kubernetes timeline

OpenShift V4.0 and the future

Red Hat clearly proved to be at the forefront of container technology, second only to Google in contributions to CNCF projects. Another recent Red Hat accomplishment worth mentioning is the acquisition of CoreOS in January of 2018. The CoreOS flagship product was a lightweight Linux operating system designed to run containerized applications, and Red Hat is making it available in V4 of OpenShift as “Red Hat Enterprise Linux CoreOS”.

And that’s just one of many exciting developments coming in V4. As shown in the previous timeline graphic, OpenShift Service Mesh will combine the traffic management and telemetry of Istio with the tracing of Jaeger and the observability console of Kiali. Knative serverless capabilities are included, as well as Kubernetes operators to facilitate the automation of application management.

The paths join up here, also. IBM is a big contributor of open source code to Istio, Knative, and Tekton. These technologies are the pathways of container-based, enterprise development in the coming decade.

OpenShift V4.0 has only recently been announced. And Red Hat OpenShift on IBM Cloud™ is a new collaboration that combines Red Hat OpenShift and IBM Cloud Kubernetes Service. For other highlights, review the previous timeline graphic.

Some conclusions

Researching the origins and history of OpenShift was interesting. Looking through the lens of OpenShift makes it clear that, in terms of software development, this really is the decade of the container.

It is impressive how much energy, focus, and drive Red Hat put into creating a compelling container platform, layer by layer, advancing the same technologies that IBM has shown interest in and dedicated engineering resources to over the past decade.

We’re looking forward to learning and building with all of these cloud technologies in the years ahead.

Catch up with the latest Kubernetes details at KubeCon

As an open source platform, Kubernetes has become a core component in the digital transformation of enterprises worldwide. Kubernetes minimizes outages and disruptions through self-healing, intelligent scheduling, horizontal scaling, and load balancing. Developers can easily roll out and roll back application versions, whether collaborating in development and test environments or deploying to production. IBM is a key player in Kubernetes, Istio, and Knative open source projects, and IBM developers are also contributing to several high-profile Special Interest Groups.

If you are attending KubeCon Europe 2019 May 20-23 in Barcelona, Spain, check out the following sections to learn how developers at IBM are participating. And even if you can’t attend this time, you can learn about the areas we are focusing on during the changing times in the cloud-native landscape.

The containerd project

A relatively new project with the Cloud Native Computing Foundation (CNCF), containerd is a core component of many cloud platforms and other products based on container runtime environments. Over the past year, support for Docker-compatible containers and for virtual machines in Kubernetes integrations grew tremendously, with contributions from IBM and other cloud platform providers. Mike Brown and Phil Estes from IBM are two of the owners and maintainers of containerd. Mike focuses on the container runtime integration with Kubernetes, and Phil focuses on the core components of containerd. Both are members of the Open Container Initiative (OCI), where they frequently contribute to and maintain projects. OCI is an important standards-based group that IBM invests in to ensure customer success with hybrid cloud and multi-cloud solutions.

Mike is joined by Wei Fu of Alibaba (recently added as a containerd maintainer) for a session on how users can enhance containerd without first needing to modify containerd’s internals. They cover building custom snapshotters for special storage needs and integrating with custom runtime environments for stronger isolation. Mike highlights the architecture and data flow, which are key to extending containerd’s built-in functionality.

The session also explains the internals of containerd, covering its components and dataflows and how external plug-ins work. The presentation discusses using containerd’s smart client API and plug-ins to make new and custom integrations. Mike and Wei demonstrate how various sandbox technologies can be integrated with containerd to work with Kubernetes, including Amazon’s Firecracker and Google’s gVisor. Attendees can expect to leave the talk understanding how they can extend and modify containerd to support enhanced integration for custom production deployments.


The Knative project

Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. IBM is contributing to the Knative project on several fronts. Greg Haynes is one of the maintainers of the Serving component for Knative.

One of the main advantages of serverless platforms is scale-to-zero functionality, which means application developers incur almost no cost when their applications are idle. But with this approach, developers face the dreaded cold-start problem: when an application has not been used for an extended period, the request that reactivates it can take significantly longer to complete.

Greg, who leads the effort to reduce cold-start time in Knative, presents a talk on his cold-start performance work. He discusses low-level performance issues that Knative has encountered with Kubernetes as it tries to deliver sub-second cold-start times.


The Istio project

Istio is an open platform to connect, manage, and secure networks of microservices. Istio was jointly launched by IBM, Google, and Lyft. Today, IBM is actively collaborating with the open community and contributing to the Istio open source project.

The IBM team is hosting the Kubernetes and Istio workshop Get the Cert: Build Your Next App With Kubernetes + Istio. Join us to learn how Kubernetes and Istio make it easy to bind your app to advanced services like Watson, Blockchain, and IoT. Our developers walk through each step in this hands-on lab, and you can walk away with a certification badge.

The Kubernetes Service Catalog

IBM contributes to and uses the Kubernetes Service Catalog in IBM product offerings. Jonathan Berkhahn, a co-chair of the Service Catalog Special Interest Group (SIG), presents both an introduction and deep-dive sessions about Service Catalog. The intro session covers a basic overview of Service Catalog, what the Open Service Broker (OSB) API is, and why Kubernetes would want to use it. The deep-dive session includes a more in-depth overview of how Service Catalog actually works, some design challenges faced while implementing it, and recent work such as namespaced resources and future plans for the project.
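As a rough sketch of what using Service Catalog looks like in practice, a ServiceInstance asks an OSB-compatible broker to provision a service, and a ServiceBinding materializes the credentials as a secret. The class, plan, and names below are hypothetical, for illustration only:

```yaml
# Provision a hypothetical database through an OSB broker.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-db
  namespace: default
spec:
  clusterServiceClassExternalName: example-db   # hypothetical broker class
  clusterServicePlanExternalName: standard      # hypothetical plan
---
# Bind to the instance; credentials land in the named secret.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-db-binding
  namespace: default
spec:
  instanceRef:
    name: my-db
  secretName: my-db-credentials
```

Application pods can then mount or reference `my-db-credentials` without knowing anything about the broker behind it.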

Jonathan also is participating in the contributor summit and meeting with other members of the SIG to discuss the group’s activities. The SIG plans for the next major release to include transitioning Service Catalog to custom resource definitions (CRDs), implementing support for user-provided services, and updating Service Catalog to include the most recent features of the OSB API.

The Kubernetes Conformance program

IBM is actively participating in many aspects of the Kubernetes Conformance program. Srinivas Brahmaroutu is leading and driving conformance work with the Cloud Native Computing Foundation community to add more conformance coverage.

Srinivas presents a new proposal on validation suites at the Conformance Deep Dive. In the working group meeting, Srini highlights the current progress of the work and also discusses documenting conformance work for each Kubernetes release.

The IBM Cloud Special Interest Group

The IBM Cloud Special Interest Group (SIG) for Kubernetes focuses on building, deploying, maintaining, supporting, and using Kubernetes on IBM public and private clouds. This group plans many activities in Barcelona.

At the Contributor Summit, Sahdev Zala leads a face-to-face session for current contributors to discuss SIG activities and a future roadmap for IBM Cloud SIG. Sahdev also represents the group at the meet-and-greet session for the new Kubernetes contributors who want to learn and contribute to the IBM Cloud SIG.

At KubeCon, Sahdev joins Khalid Ahmed, Nimesh Bhatia, and Brad Topol for an introduction and deep-dive session about the IBM Cloud SIG. Join us to learn about SIG activities and sub-projects, including the newly created IBM Cloud Provider for Cluster API and ongoing work on the IBM Cloud Provider interface.

Twelve-Factor methodology

The Twelve-Factor App Methodology sets out best practices for building scalable microservice applications that are deployed to the web with portability, resilience, and scalability. IBM’s Brad Topol and Michael Elder give an overview of the methodology and describe how developers can leverage the core constructs provided by Kubernetes to support the 12 factors for scalable web apps. This talk includes live demonstrations of how Kubernetes can support the Twelve-Factor App Methodology for both newer cloud-native applications and legacy enterprise middleware applications that include stateful and transactional workloads.

“Kubernetes in the Enterprise”

The IBM booth at KubeCon hosts a series of talks at the mini-theater. Dr. Brad Topol presents Deploying Kubernetes in the Enterprise on Wednesday, May 22 at 2:30. This session shows how to use Kubernetes to deliver existing applications or more resilient cloud-native applications with speed and efficiency. A book signing event for the O’Reilly book “Kubernetes in the Enterprise” immediately follows the session.

Catch IBMers at the following talks

Monday, May 20

Tuesday, May 21

Wednesday, May 22

Thursday, May 23

Define a simple CD pipeline with Knative

In this blog post, I describe how to set up a simple CD pipeline using Knative pipeline. The pipeline takes a Helm chart from a git repository and performs the following steps:

  • It builds three Docker images by using Kaniko.
  • It pushes the built images to a private container registry.
  • It deploys the chart against a Kubernetes cluster.

I used IBM Cloud™, both the IBM Cloud Kubernetes Service (IKS) and the IBM Container Registry, to host Knative as well as the deployed Health app. I used the same IKS Kubernetes cluster to run the Knative service and the deployed application; they are separated by using dedicated namespaces, service accounts, and roles. For details about setting up Knative on IBM Cloud, check out my previous blog post. The full code is available on GitHub. This post was originally published at andreafrittoli.me on Feb 6th, 2019.

Knative pipelines

Pipelines are the newest addition to the Knative project, which already included three components: serving, eventing, and build. Quoting from the official README, “The Pipeline CRD provides k8s-style resources for declaring CI/CD-style pipelines”. The pipeline CRD is meant as a replacement for the build CRD. The build-pipeline project introduces a few new custom resource definitions (CRDs) to extend the Kubernetes API:

  • tasks
  • taskrun
  • pipeline
  • pipelinerun
  • pipelineresource

A pipeline is made of tasks; tasks can have input and output pipelineresources. The output of a task can be the input of another one. A taskrun is used to run a single task. It binds the task inputs and outputs to specific pipelineresources. Similarly, a pipelinerun is used to run a pipeline. It binds the pipeline inputs and outputs to specific pipelineresources. For more details on the Knative pipeline CRDs, see the official project documentation.
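To make the taskrun/pipelineresource binding concrete, a minimal taskrun for the source-to-image task defined later in this post might look like the following sketch (the resource references are illustrative; resourceRef names must match pipelineresources that exist in the cluster):

```yaml
# Hypothetical minimal taskrun: binds one git input and one image output
# to the source-to-image task.
apiVersion: pipeline.knative.dev/v1alpha1
kind: TaskRun
metadata:
  name: source-to-image-run-1        # illustrative name
spec:
  taskRef:
    name: source-to-image            # the task to execute
  trigger:
    type: manual
  inputs:
    resources:
      - name: workspace              # task input name
        resourceRef:
          name: health-helm-git-knative   # a git pipelineresource
  outputs:
    resources:
      - name: builtImage             # task output name
        resourceRef:
          name: health-api-image          # an image pipelineresource
```

Applying this manifest with kubectl would run the single task once, outside of any pipeline.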

The “Health” application

The application deployed by the helm chart is OpenStack Health, or simply “Health,” a dashboard to visualize CI test results. The chart deploys three components:

  • An SQL backend that uses the postgres:alpine docker image, plus a custom database image that runs database migrations through an init container.
  • A Python-based API server, which exposes the data in the SQL database via an HTTP-based API.
  • A JavaScript front end, which interacts with the HTTP API on the client side.


The OpenStack Health dashboard

The continuous delivery pipeline

The pipeline for “Health” uses two different tasks and several types of pipelineresources: git, image, and cluster. In the diagram below, circles represent resources and boxes represent tasks:

continuous delivery pipeline

As of today, the execution of tasks is strictly sequential; the order of execution is the one specified in the pipeline. The plan is to use inputs and outputs to define task dependencies, and thus a graph of execution, which would allow independent tasks to run in parallel.

Source to image (task)

The docker files and the helm chart for “Health” are hosted in the same git repository. The git resource is thus an input both to the source-to-image task, for the docker images, and to the helm-deploy task, for the helm chart.

apiVersion: pipeline.knative.dev/v1alpha1
kind: PipelineResource
metadata:
  name: health-helm-git-knative
spec:
  type: git
  params:
    - name: revision
      value: knative
    - name: url
      value: https://github.com/afrittoli/health-helm

When a resource of type git is used as an input to a task, the Knative controller clones the git repository and prepares it at the specified revision, ready for the task to use. The source-to-image task uses Kaniko to build the specified Dockerfile and to push the resulting container image to the container registry.

apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: source-to-image
spec:
  inputs:
    resources:
      - name: workspace
        type: git
    params:
      - name: pathToDockerFile
        description: The path to the dockerfile to build (relative to the context)
        default: Dockerfile
      - name: pathToContext
        description: >
          The path to the build context, used by Kaniko - within the workspace.
          The git clone directory is set by the GIT init container which sets up
          the git input resource - see https://github.com/tektoncd/pipeline/blob/master/pkg/reconciler/v1alpha1/taskrun/resources/pod.go#L107
        default: .
  outputs:
    resources:
      - name: builtImage
        type: image
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      command:
        - /kaniko/executor
      args:
        - --dockerfile=${inputs.params.pathToDockerFile}
        - --destination=${outputs.resources.builtImage.url}
        - --context=/workspace/workspace/${inputs.params.pathToContext}

The destination URL of the image is defined in the image output resource and then pulled into the args passed to the Kaniko container. At the moment of writing, the image output resource is only used to hold the target URL. In the future, Knative will enforce that every task that defines image as an output actually produces the container image and pushes it to the expected location; it will also enrich the resource metadata with the digest of the pushed image, for consuming tasks to use. The health chart includes three docker images; accordingly, three image pipelineresources are required. They are very similar to each other in their definition; only the target URL changes. Here is one of the three:

apiVersion: pipeline.knative.dev/v1alpha1
kind: PipelineResource
metadata:
  name: health-api-image
spec:
  type: image
  params:
    - name: url
      description: The target URL
      value: registry.ng.bluemix.net/andreaf/health-api

Helm deploy (task)

The three source-to-image tasks build and push three images to the container registry. They don’t report the digest of the image, however. They associate the latest tag with that digest, which allows the helm-deploy task to pull the right images, assuming only one CD pipeline runs at a time. The helm-deploy task has five inputs: one git resource to get the helm chart, three image resources that correspond to the three docker files, and finally a cluster resource, which gives the task access to the Kubernetes cluster where the helm chart is deployed:

apiVersion: pipeline.knative.dev/v1alpha1
kind: PipelineResource
metadata:
  name: cluster-name
spec:
  type: cluster
  params:
    - name: url
      value: https://host_and_port_of_cluster_master
    - name: username
      value: health-admin
  secrets:
    - fieldName: token
      secretKey: tokenKey
      secretName: cluster-name-secrets
    - fieldName: cadata
      secretKey: cadataKey
      secretName: cluster-name-secrets

The resource name must match the cluster name. The username is that of a service account with enough rights to deploy helm to the cluster. For isolation, I defined a namespace in the target cluster called health. Both helm itself and the health chart are deployed in that namespace only. Helm runs with a service account health-admin that can only access the health namespace. The manifests required to set up the namespace, service accounts, roles, and role bindings are available on GitHub. The health-admin service account token and the cluster CA certificate should not be stored in git. They are defined in a secret, which can be set up using the following template:

apiVersion: v1
kind: Secret
metadata:
  name: __CLUSTER_NAME__-secrets
type: Opaque
data:
  cadataKey: __CA_DATA_KEY__
  tokenKey: __TOKEN_KEY__

The template can be filled in with a simple bash script. The script works after a successful login has been performed via ibmcloud login and ibmcloud target.

eval $(ibmcloud cs cluster-config $CLUSTER_NAME --export)

# This works as long as config returns one cluster and one user
SERVICE_ACCOUNT_SECRET_NAME=$(kubectl get serviceaccount/health-admin -n health -o jsonpath='{.secrets[0].name}')
CA_DATA=$(kubectl get secret/$SERVICE_ACCOUNT_SECRET_NAME -n health -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret/$SERVICE_ACCOUNT_SECRET_NAME -n health -o jsonpath='{.data.token}')

sed -e 's/__CLUSTER_NAME__/'"$CLUSTER_NAME"'/g' \
    -e 's/__CA_DATA_KEY__/'"$CA_DATA"'/g' \
    -e 's/__TOKEN_KEY__/'"$TOKEN"'/g' cluster-secrets.yaml.template > ${CLUSTER_NAME}-secrets.yaml

When a cluster resource is used as an input to a task, it generates a kubeconfig file that can be used in the task to perform actions against the cluster. The helm-deploy task takes the image URLs from the image input resources and passes them to the helm command as set values; it also overrides the image tags to latest. The image pull policy is set to Always since the tag doesn’t change, but the image does. The upgrade --install command is used so that both the first and subsequent deployments work.

apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: helm-deploy
spec:
  serviceAccount: health-helm
  inputs:
    resources:
      - name: chart
        type: git
      - name: api-image
        type: image
      - name: frontend-image
        type: image
      - name: database-image
        type: image
      - name: target-cluster
        type: cluster
    params:
      - name: pathToHelmCharts
        description: Path to the helm charts within the repo
        default: .
      - name: clusterIngressHost
        description: Fully qualified hostname of the cluster ingress
      - name: targetNamespace
        description: Namespace in the target cluster we want to deploy to
        default: "default"
  steps:
    - name: helm-deploy
      image: alpine/helm
      args:
        - upgrade
        - --debug
        - --install
        - --namespace=${inputs.params.targetNamespace}
        - health # Helm release name
        - /workspace/chart/${inputs.params.pathToHelmCharts}
        # Latest version instead of ${inputs.resources.api-image.version} until #216
        - --set
        - overrideApiImage=${inputs.resources.api-image.url}:latest
        - --set
        - overrideFrontendImage=${inputs.resources.frontend-image.url}:latest
        - --set
        - overrideDatabaseImage=${inputs.resources.database-image.url}:latest
        - --set
        - ingress.enabled=true
        - --set
        - ingress.host=${inputs.params.clusterIngressHost}
        - --set
        - image.pullPolicy=Always
      env:
        - name: "KUBECONFIG"
          value: "/workspace/${inputs.resources.target-cluster.name}/kubeconfig"
        - name: "TILLER_NAMESPACE"
          value: "${inputs.params.targetNamespace}"

The pipeline

The pipeline defines the sequence of tasks to be executed, with their inputs and outputs. The from syntax can be used to express dependencies between tasks; that is, the input of a task is taken from the output of another.

apiVersion: pipeline.knative.dev/v1alpha1
kind: Pipeline
metadata:
  name: health-helm-cd-pipeline
spec:
  params:
    - name: clusterIngressHost
      description: FQDN of the ingress in the target cluster
    - name: targetNamespace
      description: Namespace in the target cluster we want to deploy to
      default: default
  resources:
    - name: src
      type: git
    - name: api-image
      type: image
    - name: frontend-image
      type: image
    - name: database-image
      type: image
    - name: health-cluster
      type: cluster
  tasks:
    - name: source-to-image-health-api
      taskRef:
        name: source-to-image
      params:
        - name: pathToContext
          value: images/api
      resources:
        inputs:
          - name: workspace
            resource: src
        outputs:
          - name: builtImage
            resource: api-image
    - name: source-to-image-health-frontend
      taskRef:
        name: source-to-image
      params:
        - name: pathToContext
          value: images/frontend
      resources:
        inputs:
          - name: workspace
            resource: src
        outputs:
          - name: builtImage
            resource: frontend-image
    - name: source-to-image-health-database
      taskRef:
        name: source-to-image
      params:
        - name: pathToContext
          value: images/database
      resources:
        inputs:
          - name: workspace
            resource: src
        outputs:
          - name: builtImage
            resource: database-image
    - name: helm-init-target-cluster
      taskRef:
        name: helm-init
      params:
        - name: targetNamespace
          value: "${params.targetNamespace}"
      resources:
        inputs:
          - name: target-cluster
            resource: health-cluster
    - name: helm-deploy-target-cluster
      taskRef:
        name: helm-deploy
      params:
        - name: pathToHelmCharts
          value: .
        - name: clusterIngressHost
          value: "${params.clusterIngressHost}"
        - name: targetNamespace
          value: "${params.targetNamespace}"
      resources:
        inputs:
          - name: chart
            resource: src
          - name: api-image
            resource: api-image
            from:
              - source-to-image-health-api
          - name: frontend-image
            resource: frontend-image
            from:
              - source-to-image-health-frontend
          - name: database-image
            resource: database-image
            from:
              - source-to-image-health-database
          - name: target-cluster
            resource: health-cluster

Tasks can accept parameters, which are specified as part of the pipeline definition. One example in the pipeline above is the ingress domain of the target Kubernetes cluster, which is used by the helm chart to set up the ingress of the deployed application. Pipelines can also accept parameters, which are specified as part of the pipelinerun definition. Parameters make it possible to keep environment- and run-specific values confined to the pipelinerun and pipelineresources. You may notice that the pipeline includes an extra task, helm-init, which is invoked before helm-deploy. As the name suggests, the task initializes helm in the target cluster and namespace. Tasks can have multiple steps, so this could be implemented as a first step within the helm-deploy task. However, I wanted to keep it separate so that helm-init runs using a service account with an admin role in the target namespace, while helm-deploy runs with an unprivileged account.

apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: helm-init
spec:
  serviceAccount: health-admin
  inputs:
    resources:
      - name: target-cluster
        type: cluster
    params:
      - name: targetNamespace
        description: Namespace in the target cluster we want to deploy to
        default: "default"
  steps:
    - name: helm-init
      image: alpine/helm
      args:
        - init
        - --service-account=health-admin
        - --wait
      env:
        - name: "KUBECONFIG"
          value: "/workspace/${inputs.resources.target-cluster.name}/kubeconfig"
        - name: "TILLER_NAMESPACE"
          value: "${inputs.params.targetNamespace}"

Running the pipeline

To run a pipeline, a pipelinerun is needed. The pipelinerun binds the task inputs and outputs to specific pipelineresources and defines the trigger for the run. At the moment of writing, only a manual trigger is supported. Since there is no native mechanism available to create a pipelinerun from a template, running a pipeline again requires changing the pipelinerun YAML manifest and applying it to the cluster again. During execution, the pipelinerun controller automatically creates a taskrun for each task as it is executed. Any Kubernetes resources created during the run, such as taskruns, pods, and PVCs, stay once the pipelinerun execution is complete; they are cleaned up only when the pipelinerun is deleted.

apiVersion: pipeline.knative.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: health-helm-cd-pr-
spec:
  pipelineRef:
    name: health-helm-cd-pipeline
  params:
    - name: clusterIngressHost
      value: mycluster.myzone.containers.appdomain.cloud
    - name: targetNamespace
      value: health
  trigger:
    type: manual
  serviceAccount: 'default'
  resources:
    - name: src
      resourceRef:
        name: health-helm-git-mygitreference
    - name: api-image
      resourceRef:
        name: health-api-image
    - name: frontend-image
      resourceRef:
        name: health-frontend-image
    - name: database-image
      resourceRef:
        name: health-database-image
    - name: health-cluster
      resourceRef:
        name: mycluster

To execute the pipeline, all the static resources must be created first; then the pipeline can be run. Using the code from the GitHub repo:

# Create all static resources
kubectl apply -f pipeline/static

# Define and apply all pipelineresources as described in the blog post
# You will need one git, three images, one cluster.

# Generate and apply the cluster secret
echo $(cd pipeline/secrets; ./prepare-secrets.sh)
kubectl apply -f pipeline/secrets/cluster-secrets.yaml

# Define a pipelinerun based on the demo above and create it
kubectl create -f pipeline/run/run-pipeline-health-helm-cd.yaml
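Because pipelineruns cannot be templated natively yet, one workaround is to keep a template and stamp parameters into it before each run, much like the sed-based secret templating used elsewhere in this post. A minimal sketch (the file names and the __TARGET_NS__ placeholder are hypothetical):

```shell
# Hypothetical template fragment for a pipelinerun parameter
cat > pipelinerun.yaml.template <<'EOF'
    - name: targetNamespace
      value: __TARGET_NS__
EOF

# Stamp in the concrete value before each run
sed 's/__TARGET_NS__/health/' pipelinerun.yaml.template > pipelinerun.yaml
grep 'value:' pipelinerun.yaml
# -> value: health
```

Because the manifest uses generateName, running kubectl create -f on the stamped file starts a fresh run with a unique name each time.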

A successful execution of the pipeline will create the following resources in the health namespace:

kubectl get all -n health
NAME                                   READY   STATUS    RESTARTS   AGE
pod/health-api-587ff4fcfb-8hmpd        1/1     Running   64         11d
pod/health-frontend-7c77fbc499-r4r6p   1/1     Running   0          11d
pod/health-postgres-5877b6b564-fwzfk   1/1     Running   1          11d
pod/tiller-deploy-5b7c84dbd6-4vcgr     1/1     Running   0          11d

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
service/health-api        LoadBalancer   80:32693/TCP     11d
service/health-frontend   LoadBalancer   80:30812/TCP     11d
service/health-postgres   LoadBalancer   5432:32158/TCP   11d
service/tiller-deploy     ClusterIP              44134/TCP        11d

NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/health-api        1         1         1            1           11d
deployment.apps/health-frontend   1         1         1            1           11d
deployment.apps/health-postgres   1         1         1            1           11d
deployment.apps/tiller-deploy     1         1         1            1           11d

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/health-api-587ff4fcfb        1         1         1       11d
replicaset.apps/health-frontend-7c77fbc499   1         1         1       11d
replicaset.apps/health-postgres-58759444b6   0         0         0       11d
replicaset.apps/health-postgres-5877b6b564   1         1         1       11d
replicaset.apps/tiller-deploy-5b7c84dbd6     1         1         1       11d

All Knative pipeline resources are visible in the default namespace:

$ kubectl get all
NAME                                                                           READY   STATUS      RESTARTS   AGE
pod/health-helm-cd-pipeline-run-2-helm-deploy-target-cluster-pod-649959        0/1     Completed   0          4m
pod/health-helm-cd-pipeline-run-2-helm-init-target-cluster-pod-4da57e          0/1     Completed   0          7m
pod/health-helm-cd-pipeline-run-2-source-to-image-health-api-pod-4f0362        0/1     Completed   0          32m
pod/health-helm-cd-pipeline-run-2-source-to-image-health-database-pod-60b15f   0/1     Completed   0          14m
pod/health-helm-cd-pipeline-run-2-source-to-image-health-frontend-pod-c23e71   0/1     Completed   0          24m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP           443/TCP   12d

NAME                                        CREATED AT
task.pipeline.knative.dev/helm-deploy       12d
task.pipeline.knative.dev/helm-init         11d
task.pipeline.knative.dev/source-to-image   12d

NAME                                                                                         CREATED AT
taskrun.pipeline.knative.dev/health-helm-cd-pipeline-run-2-helm-deploy-target-cluster        4m
taskrun.pipeline.knative.dev/health-helm-cd-pipeline-run-2-helm-init-target-cluster          7m
taskrun.pipeline.knative.dev/health-helm-cd-pipeline-run-2-source-to-image-health-api        32m
taskrun.pipeline.knative.dev/health-helm-cd-pipeline-run-2-source-to-image-health-database   14m
taskrun.pipeline.knative.dev/health-helm-cd-pipeline-run-2-source-to-image-health-frontend   24m

NAME                                                    CREATED AT
pipeline.pipeline.knative.dev/health-helm-cd-pipeline   10d

NAME                                                             CREATED AT
pipelinerun.pipeline.knative.dev/health-helm-cd-pipeline-run-2   32m

NAME                                                          CREATED AT
pipelineresource.pipeline.knative.dev/af-pipelines            12d
pipelineresource.pipeline.knative.dev/health-api-image        12d
pipelineresource.pipeline.knative.dev/health-database-image   12d
pipelineresource.pipeline.knative.dev/health-frontend-image   12d
pipelineresource.pipeline.knative.dev/health-helm-git-knative 12d

To check if the “Health” application is running, you can hit the API using curl:

HEALTH_URL=http://$(kubectl get ingress/health -n health -o jsonpath='{..rules[0].host}')
curl ${HEALTH_URL}/health-api/status

You can point your browser to ${HEALTH_URL}/health-health/# to see the front end.


Even though the project only started at the end of 2018, it is already possible to run a full CD pipeline for a relatively complex Helm chart. It took me only a few small PRs to get everything working. There are several limitations to be aware of, however; the team knows about them and is working quickly toward resolving them. Some examples of the limitations are:

  • Tasks can only be executed sequentially.
  • Images as output resources are not really implemented yet.
  • Triggers are only manual.

There’s plenty of room for contributions, in the form of bug reports, documentation, code, or even simply sharing your experience with Knative pipelines with the community. A good place to start is the set of GitHub issues marked as help wanted. All the code I wrote for this blog is available on GitHub under Apache 2.0, so feel free to reuse any of it. Feedback and PRs are welcome.


The build-pipeline project is rather new; at the moment of writing, the community is working towards its first release. Part of the API may still be subject to backward-incompatible changes, and the examples in this blog post may stop working eventually. See the API compatibility policy for more details.

Next steps

So you’ve mastered creating a CD pipeline with Knative. Now what? As mentioned above, feel free to provide feedback or PRs on my code. If you want to play around some more with Knative, IBM recently announced Knative support as an experimental managed add-on to the IBM Cloud Kubernetes Service.

You can also check out our other Knative tutorials and blogs here: https://developer.ibm.com/components/knative/

Develop Knative pipelines on the cloud

In this blog post, I will describe how to set up a development environment for Knative’s build pipeline by using your laptop and IBM Cloud™: both the container service (IBM Cloud Kubernetes Service) and the IBM Container Registry. I won’t get into platform-specific details about how to configure kubectl and other tools on your laptop; instead, I will provide links to existing excellent documents. In the next blog post, I will describe how to set up the CD pipeline. This post was originally published at andreafrittoli.me on Feb 5th, 2019.

Knative pipelines

Pipelines are the newest addition to the Knative project, which already included three components: serving, eventing, and build. Quoting from the official README, “The Pipeline CRD provides k8s-style resources for declaring CI/CD-style pipelines.” The build-pipeline project introduces a few new custom resource definitions (CRDs) that make it possible to define pipelineresources, tasks/taskruns, and pipelines/pipelineruns directly in Kubernetes.
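To make those CRDs concrete, a minimal task at the time of writing looks roughly like the following sketch; the echo-hello name and busybox step are illustrative, not taken from the project docs:

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: busybox
      args: ["echo", "hello"]
```

A taskrun then references this task by name to execute it, and a pipeline composes several such tasks.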

Preparing your laptop

Before you start, you need to set up the development environment on your laptop. Install git, Go, and the IBM Cloud CLI. Make sure your GOPATH is set correctly. Either /go or ~/go is a good choice, but I prefer the former to keep paths shorter.

# Create the required folders
sudo mkdir -p $GOPATH
sudo chown -R $USER:$USER $GOPATH
mkdir -p ${GOPATH}/src
mkdir -p ${GOPATH}/bin

# Set the following in your shell profile, so that you may download, build and run go programs
export GOPATH=/go   # or ~/go, matching your choice above
export PATH=${GOPATH}/bin:$PATH

# Install the IBM Cloud CLI
curl -sL https://raw.githubusercontent.com/IBM-Bluemix/developer-tools-installer/master/linux-installer/idt-installer | bash

You also need an IBM Cloud account. If you don’t have one, you can create one for free. Knative development benefits from ko to build and deploy its components seamlessly. You will use ko to build Knative container images and publish them to the container registry. Let’s go ahead and install it.

go get github.com/google/go-containerregistry/cmd/ko

Next you need to configure ko to push images to the cloud registry:

# Login to the cloud and to the container registry (CR)
ibmcloud login
ibmcloud cr login
REGISTRY_ENDPOINT=$(ibmcloud cr info | awk '/registry/{ print $3 }' | head -1)

# Create a CR token with write access
CR_TOKEN=$(ibmcloud cr token-add --readwrite --description ko_rw --non-expiring -q)
# Backup your docker config if you have one
cp ~/.docker/config.json ~/.docker/config.json.$(date +%F)
# Setup docker auth so it may talk to the CR.
echo '{"auths":{"'$REGISTRY_ENDPOINT'":{"auth":"'$(echo -n token:$CR_TOKEN | base64)'"}}}' > ~/.docker/config.json
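For clarity, the auth value in that config is just the base64 encoding of token:<CR_TOKEN>. A quick local check with dummy values (not a real endpoint or token) shows the shape of the resulting file:

```shell
# Dummy values purely for illustration -- not a real endpoint or token
REGISTRY_ENDPOINT=registry.example.com
CR_TOKEN=abc123

# "auth" is base64 of "token:<CR_TOKEN>"
AUTH=$(echo -n "token:${CR_TOKEN}" | base64)
echo '{"auths":{"'$REGISTRY_ENDPOINT'":{"auth":"'$AUTH'"}}}'
# -> {"auths":{"registry.example.com":{"auth":"dG9rZW46YWJjMTIz"}}}
```

Docker (and ko) read this auths map from ~/.docker/config.json to authenticate against the registry endpoint.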
# Create a CR namespace to host the images
ibmcloud cr namespace-add $CR_NAMESPACE
# Configure ko to push images to the CR namespace
export KO_DOCKER_REPO=${REGISTRY_ENDPOINT}/${CR_NAMESPACE}

You need a Kubernetes cluster on which to deploy Knative. If you don’t have one, provision one in the IBM Cloud Kubernetes Service (IKS). Store the cluster name in the IKS_CLUSTER environment variable.

export IKS_CLUSTER=<your cluster name>
eval $(ibmcloud ks cluster-config $IKS_CLUSTER --export)

Installing Knative from source

Everything is ready now to set up Knative. Obtain the source code:

mkdir -p ${GOPATH}/src/github.com/knative
cd ${GOPATH}/src/github.com/knative
git clone https://github.com/tektoncd/pipeline build-pipeline

Deploy Knative build pipeline:

cd ${GOPATH}/src/github.com/knative/build-pipeline
ko apply -f config/

In the last step, ko compiles the code, builds the Docker images, pushes them to the registry, updates the YAML manifests to include the correct image path and version, and finally applies all of them to the Kubernetes cluster. The first time you run this, it takes a bit longer. The manifest file creates a namespace knative-build-pipeline and a service account within it called build-pipeline-controller. This service account cannot pull the images from the CR until we define the default image pull secret to be used in every pod created with that service account.

# Copy the existing image pull secrets from the default namespace to the knative namespace
kubectl get secret bluemix-default-secret-regional -o yaml | sed 's/default/knative-build-pipeline/g' | kubectl -n knative-build-pipeline create -f -

# Patch the service account to include the secret
kubectl patch -n knative-build-pipeline serviceaccount/build-pipeline-controller -p '{"imagePullSecrets":[{"name": "bluemix-knative-build-pipeline-secret-regional"}]}'

Delete the controller pods so that they are restarted with the right secrets:

kubectl get pods -n knative-build-pipeline | awk '/build/{ print $1 }' | xargs kubectl delete pod -n knative-build-pipeline

If everything went well, you will see something like this:

$ kubectl get all -n knative-build-pipeline
NAME                                             READY   STATUS    RESTARTS   AGE
pod/build-pipeline-controller-85f669c78b-nx7hp   1/1     Running   0          1d
pod/build-pipeline-webhook-7d6dd99bf7-lrzwj      1/1     Running   0          1d

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/build-pipeline-controller   ClusterIP           9090/TCP   7d
service/build-pipeline-webhook      ClusterIP           443/TCP    7d

NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/build-pipeline-controller   1         1         1            1           7d
deployment.apps/build-pipeline-webhook      1         1         1            1           7d

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/build-pipeline-controller-7cd4d5495c   0         0         0       5d
replicaset.apps/build-pipeline-controller-85f669c78b   1         1         1       5d
replicaset.apps/build-pipeline-controller-dd945bf4     0         0         0       5d
replicaset.apps/build-pipeline-webhook-684ccc869b      0         0         0       7d
replicaset.apps/build-pipeline-webhook-7d6dd99bf7      1         1         1       5d

Prepare a service account

You configured ko to be able to push images to the registry and the build-pipeline-controller service account to be able to pull images from it. The pipeline will execute the build and push images by using the PIPELINE_SERVICE_ACCOUNT in the PIPELINE_NAMESPACE, so you need to ensure that PIPELINE_SERVICE_ACCOUNT can push images to the registry as well. Create a container registry read/write token, in the same way as you did for configuring ko. Define the following secret template:

apiVersion: v1
kind: Secret
metadata:
  name: ibm-cr-token
  annotations:
    build.knative.dev/docker-0: __CR_ENDPOINT__
type: kubernetes.io/basic-auth
stringData:
  username: token
  password: __CR_TOKEN__

Fill in the endpoint and token values from the environment variables:

# Create the secret manifest
sed -e 's/__CR_TOKEN__/'"$CR_TOKEN"'/g' \
    -e 's/__CR_ENDPOINT__/'"$REGISTRY_ENDPOINT"'/g' \
    cr-secret.yaml.template > cr-secret.yaml

# Create the secret in kubernetes
kubectl apply -f cr-secret.yaml

# Alter the service account to use the secret
kubectl patch -n $PIPELINE_NAMESPACE serviceaccount/$PIPELINE_SERVICE_ACCOUNT -p '{"secrets":[{"name": "ibm-cr-token"}]}'

Making a code change to Knative

To verify that the development workflow is set up correctly, let’s make a small code change to the Knative pipeline controller:

$ git diff
diff --git a/cmd/controller/main.go b/cmd/controller/main.go
index e6a889ea..34cd26ae 100644
--- a/cmd/controller/main.go
+++ b/cmd/controller/main.go
@@ -63,7 +63,7 @@ func main() {
        logger, atomicLevel := logging.NewLoggerFromConfig(loggingConfig, logging.ControllerLogKey)
        defer logger.Sync()

-       logger.Info("Starting the Pipeline Controller")
+       logger.Info("Starting the Customized Pipeline Controller")

        // set up signals so we handle the first shutdown signal gracefully
        stopCh := signals.SetupSignalHandler()

You can build and deploy the modified controller with just one command:

ko apply -f config/controller.yaml

The output looks like the following:

2019/02/04 13:03:37 Using base gcr.io/distroless/base:latest for github.com/knative/build-pipeline/cmd/controller
2019/02/04 13:03:44 Publishing registry.ng.bluemix.net/knative/controller-7cb61323de6451022678822f2a2d2291:latest
2019/02/04 13:03:45 existing blob: sha256:d4210e88ff2398b08758ca768f9230571f8625023c3c59b78b479a26ff2f603d
2019/02/04 13:03:45 existing blob: sha256:bb2297ebc4b391f2fd41c48df5731cdd4dc542f6eb6113436b81c886b139a048
2019/02/04 13:03:45 existing blob: sha256:8ff7789f00584c4605cff901525c8acd878ee103d32351ece7d7c8e5eac5d8b4
2019/02/04 13:03:54 pushed blob: sha256:6c40cc604d8e4c121adcb6b0bfe8bb038815c350980090e74aa5a6423f8f82c0
2019/02/04 13:03:58 pushed blob: sha256:4497e3594708bab98b6f517bd7cfd4a2da18c6c6e3d79731821dd17705bfbee6
2019/02/04 13:03:59 pushed blob: sha256:7aaa1004f57382596bab1f7499bb02e5d1b5b28a288e14e6760ae36b784bf4c0
2019/02/04 13:04:00 registry.ng.bluemix.net/knative/controller-7cb61323de6451022678822f2a2d2291:latest: digest: sha256:f7640cd1e556cc6fe1816d554d7dbd0da1d7d7728f220669e15a00576c999468 size: 918
2019/02/04 13:04:00 Published registry.ng.bluemix.net/knative/controller-7cb61323de6451022678822f2a2d2291@sha256:f7640cd1e556cc6fe1816d554d7dbd0da1d7d7728f220669e15a00576c999468
deployment.apps/build-pipeline-controller configured

Changing the code of the controller causes the controller pod to be destroyed and recreated automatically, so if you check the controller logs you can see the customized startup message:

$ kubectl logs pod/$(kubectl get pods -n knative-build-pipeline | awk '/controller/{ print $1 }') -n knative-build-pipeline
{"level":"info","caller":"logging/config.go:96","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n  \"level\": \"info\",\n  \"development\": false,\n  \"sampling\": {\n    \"initial\": 100,\n    \"thereafter\": 100\n  },\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}\n"}
{"level":"info","caller":"logging/config.go:97","msg":"Logging level set to info"}
{"level":"warn","caller":"logging/config.go:65","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","logger":"controller","caller":"controller/main.go:66","msg":"Starting the Customized Pipeline Controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"taskrun/taskrun.go:122","msg":"Setting up event handlers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"taskrun/taskrun.go:134","msg":"Setting up ConfigMap receivers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.pipeline-controller","caller":"pipelinerun/pipelinerun.go:110","msg":"Setting up event handlers","knative.dev/controller":"pipeline-controller"}
W0204 12:04:44.062820       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"level":"info","logger":"controller.taskrun-controller.config-store","caller":"configmap/store.go:166","msg":"taskrun config \"config-entrypoint\" config was added or updated: &{gcr.io/k8s-prow/entrypoint@sha256:7c7cd8906ce4982ffee326218e9fc75da2d4896d53cabc9833b9cc8d2d6b2b8f}","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller","caller":"controller/main.go:143","msg":"Waiting for informer caches to sync"}
{"level":"info","logger":"controller","caller":"controller/main.go:156","msg":"Starting controllers"}
{"level":"info","logger":"controller.pipeline-controller","caller":"controller/controller.go:215","msg":"Starting controller and workers","knative.dev/controller":"pipeline-controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"controller/controller.go:215","msg":"Starting controller and workers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"controller/controller.go:223","msg":"Started workers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.pipeline-controller","caller":"controller/controller.go:223","msg":"Started workers","knative.dev/controller":"pipeline-controller"}

Note this line in the output:

{"level":"info","logger":"controller","caller":"controller/main.go:66","msg":"Starting the Customized Pipeline Controller"}

This means that you successfully set up your Knative pipeline development environment on IBM Cloud! Congratulations!


The Knative pipeline manifest that configures the build-pipeline-controller service account does not support configuring imagePullSecrets; this is why the service account has to be patched after the initial install. When developing on Knative, however, it is convenient to simply issue a ko apply -f config/ command to apply all code changes to the cluster at once. That command would revert the service account and drop the imagePullSecrets. I use git stash to work around this issue as follows:

  • On a clean code base, alter config/200-serviceaccount.yaml, to include the imagePullSecrets:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: build-pipeline-controller
      namespace: knative-build-pipeline
    imagePullSecrets:
    - name: bluemix-knative-build-pipeline-secret-regional
  • Run git stash to restore a clean code base.

The deployment workflow then becomes:

  # Commit your changes locally, then pop the service account changes out of stash

  # git add ... / git commit ...
  git stash pop

  # Redeploy
  ko apply -f config

  # Stash the service account change away
  git stash


You can follow a similar approach to set up other Knative components as well. In the next blog post, I will continue by describing how to set up a CD pipeline through the Knative pipeline service that you just installed.

Extending Kubernetes for a new developer experience

Most people already know about Kubernetes as a de facto hosting platform for container-based applications. And if you manage a Kubernetes cluster, you probably already know about many of its extensibility points because of customizations that you installed. Or you may have developed something yourself, such as a custom scheduler. Maybe you even extended the Kubernetes resource model by creating your own Custom Resource Definition (CRD) along with a controller that manages those new resources. But of all these options available to extend Kubernetes, most tend to be developed for the benefit of Kubernetes itself as a hosting environment, meaning they help manage the applications running within it. Two recently introduced projects, however, when combined, could radically change how application developers use and view Kubernetes.

Let’s explore these two projects and explain why they could cause a significant shift in the Kubernetes application developer’s life.

Istio: The next-gen microservice network management

Istio was introduced back in 2017 in a joint collaboration between IBM, Google, and Lyft as an open source project to provide a language agnostic way to connect, secure, manage, and monitor microservices. Built with open technologies such as Envoy, Prometheus, Grafana, and Jaeger, it provides a service mesh that allows you to:

  • Perform traffic management, such as canary deployment and A/B testing.
  • Gather, visualize, and export detailed metrics and tracing across your microservices.
  • Secure service-to-service communication with authentication, authorization, and automatic traffic encryption.
  • Enforce mesh-wide policies, such as rate limiting and white/blacklisting.

Istio does all of the above, and more, without making any modifications to the application itself. Istio extends Kubernetes with new CRDs and injected Envoy proxy sidecars that run next to your application to deliver this control and management functionality.
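For example, the canary-style traffic management mentioned above is expressed through Istio's routing CRDs. A sketch of a 90/10 split (the reviews service and subset names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

The v1 and v2 subsets would be defined in a companion DestinationRule; the Envoy sidecars then enforce the weights without any change to the application.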

If we look under the covers, we can see that the Istio architecture is split into two planes:

  • The data plane is composed of a set of intelligent proxies (Envoy), deployed as sidecars that mediate and control all network communication among microservices.
  • The control plane is responsible for managing and configuring proxies to route traffic and enforce policies at runtime.

Istio’s architecture also comprises these components:

  • Envoy – the sidecars running alongside your applications to provide the proxy
  • Pilot – configuration and propagation to the entire system
  • Mixer – policy and access control and gathering telemetry data
  • Citadel – identity, encryption and credential management
  • Galley – validates user authored Istio API configuration

While all of this by itself is pretty exciting (and Istio is definitely causing quite a buzz and adoption in the industry), it’s still targeted to a DevOps engineer/operator persona – someone who is responsible for administrative tasks on your Kubernetes cluster and applications. Yes, mere mortal software developers could configure Istio routing and policies themselves, but in practice it’s not clear that your average developer will do so – or even want to do so. They just want to focus on their application’s code, and not on all of the details that are associated with managing their network configurations.

However, Istio adds to Kubernetes many of the missing features that are required for managing microservices. And Istio moves the needle closer to Kubernetes becoming a seamless platform for developers to deploy their code without any configuration. Just like Kubernetes, Istio has a clearly defined focus, and it executes on that focus well. If you view Istio as a building block or a layer in the stack, it enables new technologies to be built on top. That’s where Knative comes into the picture.

Knative: A new way to manage your application

Like Istio, Knative extends Kubernetes by adding some new key features:

  • A new abstraction for defining the deployment of your application to enable a set of rich features aimed at optimizing its resource utilization – in particular “scale to zero.”
  • The ability to build container images within your Kubernetes cluster.
  • Easy registration of event sources, enabling your applications to receive their events.

Starting with the first item, there’s a Knative component called “serving” that is responsible for running, exposing, and scaling your application. To achieve this, a new resource called a Knative “Service” is defined (not to be confused with the core Kubernetes “Service” resource). The Knative “Service” is actually more akin to the Kubernetes “Deployment,” in that it defines which image to run for your application along with some metadata that manages it.

The key difference between a Knative Service and a Deployment is that a Service can be scaled down to 0 instances if the system detects that it is not being used. For those familiar with Serverless platforms, the concept here is the same as the ability to “scale down to zero,” thus saving you from the cost of continually having at least one instance running. For this reason, Knative is often discussed as a Serverless hosting environment. In reality, it can be used to host any type of application (not just “Functions”), but this is one of the bigger use cases driving its design.
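At the time of writing (the v1alpha1 API), a minimal Knative Service can be sketched as follows; the name and image are placeholders:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/example/hello  # placeholder image
```

From this single resource, Knative creates the underlying Configuration, Revisions, and Route, and scales the revision down to zero when it is idle.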

Within the Knative Service, there’s also the ability to specify a “roll-out” strategy to switch from one version of your application to another. For example, you can specify that only a small percentage of the incoming network requests be routed to the new version of the application and then slowly increase it over time. To achieve this, Istio is leveraged to manage this dynamic network routing. Along with this is the ability for the Service to include its “Route” or endpoint URL – in essence, Knative will set up all of the Kubernetes and Istio networking, load balancing, and traffic splitting that are associated with this endpoint for you.

One of the other big features available in the Knative Service is the ability to specify how the image used for deployment should be built. In a Kubernetes Deployment, the image is assumed to be built already and available via some container image registry. However, this requires the developer to have a build process that is separate from his/her application deployment. The Knative Service allows for all of this to be combined into one – saving the developer time and resources.

This “build” component that is referenced from the Service is the second key component of the Knative project. While there is flexibility to define any type of build process you want, typically the build steps will be very similar to what developers do today: it will extract the application source code from a repository (e.g., GitHub), build it into a container image, and then push it to an image registry. The key aspect here, though, is that this is now all done within the definition of the Knative Service resource, and does not require a separately managed workflow.
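A typical build of this kind can be sketched with the Build resource; the repository URL and destination registry are placeholders, and kaniko is just one common choice of builder:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/app.git  # placeholder repository
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=registry.example.com/example/app:latest
```

When referenced from a Knative Service, this build runs in-cluster before the resulting image is deployed, so no separately managed build workflow is needed.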

This brings us to the third and final component of the Knative project, “Eventing.” With this component, you can define and manage subscriptions to event producers and then control how the received events are then choreographed through your applications. For example, an incoming event could be sent directly to a single application, to multiple interested applications, or even as part of a complicated workflow where multiple event consumers are involved.

In bringing this all together, it should now be clearer how all of these components working together could be leveraged to define the entire workflow for an application’s lifecycle.

A simplistic scenario might be:

  1. A developer pushes a new version of his/her code to a GitHub repository.
  2. A GitHub event is generated as a result of the push.
  3. The push event is received by Knative, which is then passed along to some code that causes the generation of a new revision/version of the application to be defined.
  4. This new revision then causes the building of a new version of the container image for the application.
  5. Once built, this new image is then deployed to the environment for some canary testing, and then the load on the new version is slowly increased over time until the old version of the application can be removed from the system.

This entire workflow can be executed and managed within Kubernetes, and it can be version controlled right alongside the application. And, from the developer’s point of view, all he/she ever deals with is a single Knative Service resource to define the application – not the numerous resource types that developers would normally need to define when using Kubernetes alone.

While the above set of Knative features is pretty impressive, Knative itself (like Kubernetes) is just another set of building blocks available for the community to leverage. Knative is being designed with a set of extensibility points to allow for customizations and future higher order tooling to be developed.

Where will we go next?

What’s different about the development of Istio and Knative is that, when combined, they’re focused on making life easier for the application developer. As good as Kubernetes is, many developers’ first exposure to it (especially if they’re coming from other platforms like Cloud Foundry) is probably a bit daunting. Between pods, replicaSets, deployments, ingress, endpoints, services, and Helm, there are a lot of concepts to learn and understand. When all a developer really wants to do is host some code, it can seem like more trouble than it’s worth. Knative, with its leveraging of Istio, is a big step forward in helping developers get back to being application developers instead of DevOps experts. It’ll be exciting to see how the community reacts as these projects mature.

Get all your containers news in one place

IBM Developer is teeming with great content for developers. But it can sometimes be daunting to navigate such a wealth of knowledge when you already know what you’re looking for. That’s why, if you’re a regular visitor to our Containers technology page, you can rest easy.

Beginning in February, you’ll be able to receive a monthly digest of everything you need to know about containers. We’ll send you direct links to code patterns, tutorials, blogs, and announcements – all relevant to you, the containers-minded developer. With the newsletter, it will be easier for you to find great content tailored to your needs. Our developer advocates are constantly working on awesome code and content for you and we don’t want you to miss any of it! We’ll also regularly highlight the items that other developers are watching the most, so you can keep an eye out.

The first newsletter will go out on February 5, so be sure to sign up in time for that first issue. It will also feature and highlight all the happenings around containers, Kubernetes, Knative, and more at this year’s Think conference, so make sure to check it out.

Subscribe now!

Introducing Knctl: A simpler way to work with Knative, Part 2

In Part 1 of this two-part blog series, we introduced Knctl, a command-line interface (CLI) for interacting with Knative, and we showed you how to deploy pre-built Docker images with Knative. In this part, we explore newer features of Knctl, specifically, deploying applications from source, splitting traffic across revisions, and connecting Knative services to databases.

Deploy from source code

Knative deploys applications based on container images. In part one, we showed how you can use Knctl to deploy an image with a simple command. But what if you are still actively working on the application source code, and therefore do not have an image for it yet?

Knative Build is a project that allows you to turn source code into a container image. There are two options for using it to build from source: 1) with a Dockerfile, and 2) with a Buildpack build template.

Before you can deploy, you must set up secrets for Knative Build to connect to an image registry for pushing and pulling images. The following example uses DockerHub and assumes that your user name and password are saved in DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD environment variables:

$ knctl basic-auth-secret create -s docker-push --docker-hub ...

$ knctl basic-auth-secret create -s docker-pull --docker-hub ...

$ knctl service-account create -a docker-hub -s docker-push -s docker-pull

After your service account and secrets are created, clone the sample Go app, which includes a Dockerfile. Note that the example shows the generic case of a private DockerHub account, which requires both push and pull secrets. With a public account, you might not need to set up a pull secret (because all images are public).

$ git clone https://github.com/cppforlife/simple-app

$ cd simple-app


The Dockerfile included in the sample application at github.com/cppforlife/simple-app is fairly straightforward and just builds our Go application with a go build command. See Best practices for writing Dockerfiles if you are unfamiliar with creating Dockerfiles.

$ cat Dockerfile

FROM golang:1.10.1
WORKDIR /go/src/github.com/mchmarny/simple-app/
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -v -o app

FROM scratch
COPY --from=0 /go/src/github.com/mchmarny/simple-app/app .

Then you can issue a knctl deploy ... command to build from source and pass options to access the DockerHub secrets:

$ knctl deploy --service hello \
    --directory . \
    --service-account docker-hub \
    --image docker.io/dkalinin/simple-app \
    --env SIMPLE_MSG=custom-built

Name  hello

Waiting for new revision to be created...

Tagging new revision 'hello-00001' as 'latest'

Tagging new revision 'hello-00001' as 'previous'

[2018-12-04T12:29:04-08:00] Uploading source code...

[2018-12-04T12:29:06-08:00] Finished uploading source code...

Watching build logs...

build-step-credential-initializer | {"level":"info","ts":1543955343.2327511,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized."}
build-step-build-and-push | INFO[0000] Downloading base image golang:1.10.1
build-step-build-and-push | ERROR: logging before flag.Parse: E1204 20:29:08.377124       1 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
build-step-build-and-push | ERROR: logging before flag.Parse: E1204 20:29:08.381355       1 metadata.go:159] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
build-step-build-and-push | INFO[0000] Executing 0 build triggers
build-step-build-and-push | INFO[0000] Unpacking rootfs as cmd RUN CGO_ENABLED=0 GOOS=linux go build -v -o app requires it.
build-step-build-and-push | INFO[0016] Taking snapshot of full filesystem...


build-step-build-and-push | INFO[0029] cmd: /bin/sh
build-step-build-and-push | INFO[0029] args: [-c CGO_ENABLED=0 GOOS=linux go build -v -o app]
build-step-build-and-push | net
build-step-build-and-push | vendor/golang_org/x/net/lex/httplex
build-step-build-and-push | crypto/x509
build-step-build-and-push | vendor/golang_org/x/net/proxy
build-step-build-and-push | net/textproto
build-step-build-and-push | crypto/tls
build-step-build-and-push | net/http/httptrace
build-step-build-and-push | net/http
build-step-build-and-push | github.com/mchmarny/simple-app
build-step-build-and-push | INFO[0034] Taking snapshot of full filesystem...
build-step-build-and-push | INFO[0039] Skipping paths under /builder/home, as it is a whitelisted directory
build-step-build-and-push | INFO[0072] Skipping paths under /workspace, as it is a whitelisted directory
build-step-build-and-push | INFO[0072] COPY --from=0 /go/src/github.com/mchmarny/simple-app/app .
build-step-build-and-push | INFO[0072] Taking snapshot of files...
build-step-build-and-push | INFO[0072] EXPOSE 8080
build-step-build-and-push | INFO[0072] cmd: EXPOSE
build-step-build-and-push | INFO[0072] Adding exposed port: 8080/tcp
build-step-build-and-push | INFO[0072] ENTRYPOINT ["/app"]
build-step-build-and-push | ERROR: logging before flag.Parse: E1204 20:30:21.112897       1 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
build-step-build-and-push | ERROR: logging before flag.Parse: E1204 20:30:21.116650       1 metadata.go:159] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
build-step-build-and-push | 2018/12/04 20:30:22 pushed blob sha256:515ea77a06a3031d0d58aec1afa1c0e6156fed191e01f3c26669bea1ec8d95c9
build-step-build-and-push | 2018/12/04 20:30:22 pushed blob sha256:5e6d1dd21ad5ce9ae385df20fa1323679ba29a26926467cf11162afe442c71be
build-step-build-and-push | 2018/12/04 20:30:23 index.docker.io/dkalinin/simple-app:latest: digest: sha256:2840ce2ae3da7a9caf53463bdc4cc33d54c6c3ce5b1e7b1e03b4f8c3063688c7 size: 428
nop | Build successful

Waiting for new revision 'hello-00001' to be ready for up to 5m0s (logs below)...

hello-00001 > hello-00001-deployment-7d6b588b8-mwv4t | 2018/12/04 20:30:29 Simple app server started...

Revision 'hello-00001' became ready

Continuing to watch logs for 5s before exiting


With the knctl deploy command, you can view the output during execution. When building from source, you see the Docker build process and the image that is uploaded to the registry. After the build completes successfully, you can access the deployed service and even redeploy it with the resulting image; redeploying under the same name adds a new revision of the service.

$ knctl curl --service hello

Running: curl '-H' 'Host: hello.default.example.com' 'http://x.x.x.x:80'

Hello World: custom-built!


BuildTemplate objects in Knative Build allow you to build images in a variety of ways. The Buildpack template is based on the popular Buildpack process, first created by Heroku and now popularized by Cloud Foundry and other platforms as a service (PaaS).

First, you need to add a BuildTemplate object to Kubernetes. The object provides directions on how to actually build and configure the container image:

$ kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/39146dffac752f187618ffef9d2d712aa9c8d243/buildpack/buildpack.yaml

Then you can use the buildpack template with Knctl. Issue the deploy command with the --template buildpack flag. You still need to include the secrets for your registry because the resulting container image needs to be saved somewhere for subsequent access:

$ knctl deploy --service hello \
    --directory . \
    --service-account docker-hub \
    --image docker.io/dkalinin/simple-app \
    --env SIMPLE_MSG=custom-built \
    --template buildpack \
    --template-env GOPACKAGENAME=main

Name  hello

Waiting for new revision (after revision 'hello-00001') to be created...

Tagging new revision 'hello-00002' as 'latest'

Tagging older revision 'hello-00001' as 'previous'

[2018-12-04T12:34:11-08:00] Uploading source code...

[2018-12-04T12:34:12-08:00] Finished uploading source code...

Watching build logs...

build-step-credential-initializer | {"level":"info","ts":1543955647.333156,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized."}
build-step-build | -----> Go Buildpack version 1.8.26
build-step-build | -----> Installing godep 80
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/godep/godep-v80-linux-x64-cflinuxfs2-06cdb761.tgz]
build-step-build | -----> Installing glide 0.13.1
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/glide/glide-v0.13.1-linux-x64-cflinuxfs2-aab48c6b.tgz]
build-step-build | -----> Installing dep 0.5.0
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/dep/dep-v0.5.0-linux-x64-cflinuxfs2-52c14116.tgz]
build-step-build | -----> Installing go 1.8.7
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/go/go1.8.7.linux-amd64-cflinuxfs2-fff10274.tar.gz]
build-step-build |        **WARNING** Installing package '.' (default)
build-step-build | -----> Running: go install -tags cloudfoundry -buildmode pie .
build-step-export | 2018/12/04 20:35:23 mounted blob: sha256:21324a9f04e76c93078f3a782e3198d2dded46e4ec77958ddd64f701aecb69c0
build-step-export | 2018/12/04 20:35:23 mounted blob: sha256:a5733e6358eec8957e81b1eb93d48ef94d649d65c69a6b1ac49f616a34a74ac1
build-step-export | 2018/12/04 20:35:23 mounted blob: sha256:6be38da025345ffb57d1ddfcdc5a2bc052be5b9491825f648b49913d51e41acb
build-step-export | 2018/12/04 20:35:23 mounted blob: sha256:1124eb40dd68654b8ca8f5d9ec7e439988a4be752a58c8f4e06d60ab1589abdb
build-step-export | 2018/12/04 20:35:23 pushed blob sha256:a9b41e0563c89746fc7a78bb3b96af6e34858027ac85b063ae785a22be733645
build-step-export | 2018/12/04 20:35:24 pushed blob sha256:0db6b2994ab377e8b484e8b96b5b65ca9bbd43b299fda85504ab38e0efd0bd7e
build-step-export | 2018/12/04 20:35:24 index.docker.io/dkalinin/test-simple-app:latest: digest: sha256:724c471dec76bcc2af0fe2abcb9bc15ed6fd17af51bdc4dce79f99ecf583fe69 size: 1082
nop | Build successful

Waiting for new revision 'hello-00002' to be ready for up to 5m0s (logs below)...

hello-00002 > hello-00002-deployment-76b4c6bf65-899nr | 2018/12/04 20:36:01 Simple app server started...

Revision 'hello-00002' became ready

Continuing to watch logs for 5s before exiting


Note that while we used the same application that includes a Dockerfile, you could remove that file when using the --template buildpack option. With this build template, the container image is created automatically: the buildpack process attempts to discover your application type and creates the image accordingly. You can then verify that your service was deployed, as before, with the knctl curl command.

A word of caution: While deploying with buildpacks appears easier because you don’t need to create your own Dockerfile, it can be more difficult to track dependencies. Tracking dependencies is especially difficult when your application deviates from the common practices of the frameworks used to create it. We therefore recommend creating a Dockerfile for your applications and functions to guarantee repeatability of the image building process.

Traffic splitting

With traffic splitting, you can control how much traffic Knative delivers to a specific revision of your service, which is useful for a variety of reasons. One common example is rolling out service updates (and rolling them back) incrementally to reduce the risk of breaking users. The so-called blue-green deployment is easy to do with the knctl rollout command.

The first thing to do when splitting traffic is to create a service with multiple revisions:

$ knctl deploy --service hello \
    --image gcr.io/knative-samples/helloworld-go \
    --env TARGET=first

$ knctl deploy --service hello \
    --image gcr.io/knative-samples/helloworld-go \
    --env TARGET=second

The latest revision is tagged as latest and the previous one as previous. You can use these tags to split traffic between the revisions. You can also choose to specify your own tags when deploying, for example, v1, v2, and so on, or add tags after deployment with the $ knctl revision tag ... command.

$ knctl rollout --route hello -p hello:latest=50% -p hello:previous=50%

Because the example splits traffic evenly across the two revisions, the easiest way to verify the split is to curl the service multiple times. You should see about 50% of the traffic routed to the latest revision and the other 50% to the previous one.

$ while true; do knctl curl --service hello; done

Hello first!
Hello first!
Hello second!

Connecting to a database

Most cloud applications, services, and functions do not exist in a vacuum. That is, for any meaningful cloud service, you need to connect the service to some other external service. The most common example is to connect to a cloud database in order to store your data and allow it to be searched and queried securely and efficiently. Although you can hard code service dependency connections and credential information directly into source code, it is not typically a good idea.

A common solution is to use environment variables, Kubernetes secrets, or external configuration files to hold sensitive information. We found a sample Go application that uses a MySQL database to store and retrieve blog posts, and we changed it to read its database connection string from a DATABASE_URI environment variable.

Before we deployed this application, we created a MySQL service instance from the IBM Cloud service catalog. We then created credentials for the instance and downloaded the JSON document that includes the connection information. The following screen capture shows the IBM Cloud user interface; the remaining steps are self-explanatory after you create the service instance.

Image of MySQL instance creation on IBM Cloud

(For our adventurous readers, you can try to install the Kubernetes Service Catalog project and instead provision a MySQL service instance through the svcat command-line interface.)

After downloading the credential JSON, you need a YAML file with a Secret object similar to the one in our example:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-ibm
type: Opaque
stringData:
  uri: "admin:xxx@(sl-us-south-1-portal.45.dblayer.com:17051)/compose"

Note that for a Go application to successfully connect to our MySQL database, the host-port pair (sl-us-south-1-portal.45.dblayer.com:17051) needs to be in parentheses.
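To make that format requirement concrete, here is a small, self-contained Go sketch (a hypothetical helper, not part of the sample application) that checks whether a DATABASE_URI value follows the user:password@(host:port)/dbname shape shown above:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// dsnPattern matches the user:password@(host:port)/dbname shape, with the
// host-port pair wrapped in parentheses as the connection string requires.
var dsnPattern = regexp.MustCompile(`^[^:@]+:[^@]*@\([^():]+:\d+\)/.+$`)

// validDSN reports whether uri looks like a well-formed connection string.
func validDSN(uri string) bool {
	return dsnPattern.MatchString(uri)
}

func main() {
	// Read the connection string from the environment instead of hard coding it;
	// fall back to the (redacted) example value from this post for demonstration.
	uri := os.Getenv("DATABASE_URI")
	if uri == "" {
		uri = "admin:xxx@(sl-us-south-1-portal.45.dblayer.com:17051)/compose"
	}
	fmt.Println(validDSN(uri))
}
```

A check like this catches the missing-parentheses mistake at startup rather than as an opaque connection error later.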

The following example shows how to create the secret in the cluster:

$ kubectl apply -f mysql-ibm-secret.yml

After we created the secret, we deployed the Go MySQL sample application and referenced the secret through the --env-secret=DATABASE_URI=secret-name/uri flag, as shown in the following example:

$ git clone https://github.com/cppforlife/go-mysql-sample-app

$ cd go-mysql-sample-app

$ knctl deploy --service mysql \
    --directory . \
    --image docker.io/dkalinin/sample-mysql \
    --service-account docker-hub \
    --env-secret DATABASE_URI=mysql-ibm/uri # passing DB URI to allow service to connect to it

To verify that the application can successfully use the database, open it in your browser. If you can’t set up DNS for your Knative installation, you can try using kwt to temporarily configure DNS on your local machine:

$ sudo -E kwt net start --dns-map-exec 'knctl dns-map'

# ... once finished
$ kwt net clean-up

What’s next?

The important next steps for the Knctl project are not only socializing it with Knative enthusiasts and new users, but also connecting with the Knative community to increase adoption and perhaps make it the de facto CLI.

We demonstrated these Knctl features recently at a “show and tell” meeting with the community. We will work with the community to achieve the next steps in defining the CLI for Knative, hopefully using Knctl as the base.


Now you have seen examples of how to run a Knative service from a local directory, split traffic across several revisions of a Knative service, and connect your service to a MySQL database. Doing all of this with a few commands demonstrates our goal for Knctl: simplifying Knative for developers.

As we keep Knctl in sync with upcoming releases of Knative, we continue to welcome your feedback via our GitHub project and look forward to your pull requests.

Michael Maximilien (IBM) and Dmitriy Kalinin (Pivotal)

Introducing Knctl: A simpler way to work with Knative

We believe that most application developers should not worry about lower level platform primitives. Rather, they should focus on their application code. The recently announced Knative open source software project was created to simplify the application developer experience on top of Kubernetes by offering higher level primitives.

Knative reduces the effort required to scale an application to the required capacity. It simplifies the ongoing deployment of new versions of an application, trivializes building source code by packaging it as an executable application within a container image, and advances event-driven application architectures.

To encourage adoption of Knative and aid early evaluators of this open source project, we both collaborated to create Knctl – a command-line interface (CLI) that makes interacting with Knative simple. Our aim is to make development and deployment workflows easier than using kubectl for managing Knative resources.

Motivation for creating Knctl

Knative goes a long way toward simplifying the steps needed to use Kubernetes for deploying typical 12-factor applications, but it still requires developers to manipulate various YAML files through kubectl. While this workflow is usable, it is not as easy as it could be. A Knative CLI could simplify the developer experience and accelerate adoption of Knative.

With knctl, we tried to capture what we think a more streamlined Knative experience might look like. One interesting design point of knctl is that it doesn’t prevent direct use of Kubernetes resources: users can fall back to kubectl if necessary.

The following sections present an example deployment workflow. We start from a new Kubernetes cluster and deploy a sample application. Spin up a Kubernetes cluster (minimum required version is 1.10) on your favorite provider, and follow along.

Installing Knative and Knctl

Before installing Knative, make sure that your Kubernetes cluster is ready and you can communicate with it through kubectl. A good command to run is kubectl get nodes, which lists the nodes in your cluster and their readiness. The output should look like the following example:

$ kubectl get nodes

NAME       STATUS    ROLES     AGE       VERSION
x.x.x.x    Ready     <none>    20d       v1.10.5+IKS
x.x.x.x    Ready     <none>    20d       v1.10.5+IKS
x.x.x.x    Ready     <none>    20d       v1.10.5+IKS

Next, install knctl by grabbing the pre-built binaries from the Knctl releases page. On macOS, clicking the link downloads the binary into your home directory’s Downloads folder.

# compare checksum output to what's included in the release notes
$ shasum -a 256 ~/Downloads/knctl-*

# move binary to your system’s /usr/local/bin -- might require root password
$ mv ~/Downloads/knctl-* /usr/local/bin/knctl

# make the newly copied file executable -- might require root password
$ chmod +x /usr/local/bin/knctl

Use the knctl install command to install Knative Serving and Build; in this release, knctl does not install Knative Eventing. (Here’s an ASCII cast of the installation procedure.) Depending on the size of your cluster (the available resources) and network latency, installing Knative can complete in less than a minute or take up to five minutes or so. (Note that we currently don’t recommend using Minikube with Knative due to resource availability. However, if you do give it a try, consider using knctl install --node-ports --exclude-monitoring instead.)

$ knctl install

Installing Istio
Installing Knative
# ...snip...
Waiting for Istio to start...
Waiting for Knative to start...


You can run a quick check to verify that your Knative installation is operational by listing the Istio ingresses that Knative configures. On a fresh installation, there should be one ingress created. It might take a little bit of time for your provider to provision a load balancer, therefore the Addresses column might not be populated immediately.

$ knctl ingress list


Name                    Addresses  Ports         Age
knative-ingressgateway  x.x.x.x    80,443,32400  18h

1 ingress


Example deployment workflow

Before deploying a sample application, there are several Knative concepts you should be familiar with. Consider the following definitions and a diagram from Knative Docs:

  • Revision: The revision resource is a point-in-time snapshot of the code and configuration for each modification that is made to the application. Revisions are immutable objects and can be retained for as long as they are useful. Every time an application is deployed, a new revision is created.
  • Route: The route resource maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes. By default, each service has one associated route.
  • Service: The service resource holds associated revisions and routes (commonly one). Each application is represented as a service, as shown in the following diagram.
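Knctl creates these resources for you, but it helps to see what they look like. As a rough sketch (using the v1alpha1 serving API that Knative exposed at the time; field names may differ in later releases), a hand-written Service manifest for the sample hello application might look like this:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
            env:
            - name: TARGET
              value: Max
```

Applying such a manifest with kubectl creates the service, an initial revision, and a route in one shot; knctl deploy wraps the same resources behind a single command.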


Now that you have a basic mental model of Knative resources, you can try deploying a sample application with the knctl deploy command. (Watch an ASCII cast of the entire workflow.)

This example uses a pre-built container image that includes the helloworld Go application. This application responds with Hello World: ${TARGET} content. Near the end of this workflow, additional documentation links demonstrate how to use the knctl deploy command with the local source code directory:

$ knctl deploy --service hello \
    --image gcr.io/knative-samples/helloworld-go \
    --env TARGET=Max


The service named hello is created, and now is visible in the list of services:

$ knctl service list

Services in namespace 'default'

Name   Domain                     Annotations  Age
hello  hello.default.example.com  -            1d

1 service

Ultimately, services are backed by pods, so check that at least one pod is in the “Running” state:

$ knctl pod list --service hello

Pods for service 'hello'

Revision     Name                                    Phase    Restarts  Age
hello-00001  hello-00001-deployment-c9cc8b88c-8hw4x  Running  0         10s

1 pod

Make an HTTP request with a knctl curl command to the deployed service. You can verify that your deployed service responded with appropriate content:

$ knctl curl --service hello

Running: curl '-H' 'Host: hello.default.example.com' 'http://x.x.x.x:80'

Hello World: Max!

Because you originally configured your application with the TARGET=Max environment variable, this sample application includes Max in its response.

Note: Unless you have configured your DNS provider to point to the Knative ingress IP, you can’t use your browser to access your application. In the previous example, the curl command sends an explicit HTTP Host header when making a request. The HTTP Host header lets the ingress gateway decide which service you are trying to access.

You can also see logs emitted by the application. It happens to log a line when it starts, and when it receives requests. The knctl logs -f command continues following application logs, until you stop it with Ctrl+C.

$ knctl logs -f --service hello
hello-00001 > hello-00001-deployment-7d4b4c5cc-v6jvl | 2018/08/02 17:21:51 Hello world sample started.
hello-00001 > hello-00001-deployment-7d4b4c5cc-v6jvl | 2018/08/02 17:22:04 Hello world received a request.

Now that you have deployed the first version of the application, change the TARGET environment variable value so that you have a new version running:

$ knctl deploy --service hello \
    --image gcr.io/knative-samples/helloworld-go \
    --env TARGET=Tom

By deploying the service again with the same name but different environment variable, Knative creates a revision of that service. Confirm the update by making another HTTP request. It might take a little bit of time for the change to take effect, as new pods start and traffic shifts.

$ knctl curl --service hello

Running: curl '-H' 'Host: hello.default.example.com' 'http://x.x.x.x:80'

Hello World: Tom!

To verify that you have multiple application versions, or revisions as they are called in the Knative world, use the knctl revision list command:

$ knctl revision list --service hello

Revisions for service 'hello'

Name         Allocated Traffic %  Serving State  Age
hello-00002  100%                 Active         2m
hello-00001  0%                   Reserve        3m

2 revisions

You can delete any deployed service using the knctl service delete --service hello command. This command deletes the service and all of its revisions.

Here’s a summary of what you achieved:

  • First, you deployed an application which Knative automatically started and assigned a route, so that it’s reachable.
  • Next, you used knctl to observe several application aspects such as logs and pods.
  • Then, you deployed an updated copy of this application and saw that a new version is actively running.

You did all of these tasks without having to dig deep and understand how Knative manages custom Kubernetes resources!

Now that you looked at how to deploy a sample application from a container image through knctl and Knative, you might want to follow similar workflows. Check out the following links, and learn how to use knctl deploy to deploy local source code or how to use buildpacks:

After you finish experimenting with Knative, you can uninstall it with the knctl uninstall command. (See the ASCII cast of the uninstall procedure.) This task takes a few minutes; the Knative and Istio system namespaces are removed. However, any Knative resources that you have not deleted beforehand remain (such as the hello service). You can also delete the knctl executable with rm /usr/local/bin/knctl.

What’s next?

Our immediate next step for Knctl is to collect feedback about the CLI user experience. We would also love to hear your feedback on features that should be our next priority, for example Knative Eventing commands or support for traffic splitting. We are also working closely with the Knative community to push for wide adoption of Knctl, hopefully as the standard CLI for Knative.


Knctl streamlines application developer and deployment workflows for using Knative and Kubernetes by exposing a curated set of commands. Then developers can focus on their code, and rely on Knative to do application management behind the scenes.

In combination, Knctl – with Knative on top of Kubernetes – can be a powerful and friendly way to deploy your applications without losing access to raw Kubernetes APIs. If you’re interested in more advanced features, like deploying apps from source, splitting traffic across revisions, and connecting Knative services to databases, see Part 2 of our blog series about Knctl.

We welcome your feedback through our Github project and look forward to your pull requests. Our goal is to keep knctl in sync with releases of Knative and test it on various leading Kubernetes cluster providers.

Michael Maximilien (IBM) and Dmitriy Kalinin (Pivotal)