We believe that most application developers should not worry about lower-level platform primitives. Rather, they should focus on their application code. The recently announced Knative open source project was created to simplify the application developer experience on top of Kubernetes by offering higher-level primitives.

Knative reduces the effort required to scale an application to the required capacity. It simplifies the ongoing deployment of new versions of an application, trivializes building source code by packaging it as an executable application within a container image, and advances event-driven application architectures.

To encourage adoption of Knative and aid early evaluators of this open source project, we collaborated to create Knctl – a command-line interface (CLI) that makes interacting with Knative simple. Our aim is to make development and deployment workflows easier than managing Knative resources directly with kubectl.

Motivation for creating Knctl

Knative goes a long way toward simplifying the steps needed to use Kubernetes for deploying typical 12-factor applications. But it still requires developers to manipulate various YAML files through kubectl. While this workflow is usable, it is not as easy as it could be. We believe a dedicated Knative CLI can simplify the developer experience and accelerate adoption of Knative.

With knctl, we tried to capture what we think a more streamlined Knative experience might look like. One of the interesting design points of knctl is that it doesn’t prevent direct use of Kubernetes resources: users can fall back to kubectl whenever necessary.
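For example, once Knative is installed (covered in the next section), everything knctl manages is stored as regular Knative custom resources, so you can inspect the same objects with kubectl at any time. The resource names below assume the custom resource definitions (CRDs) that Knative Serving registers:

# list the Knative CRDs registered in the cluster
$ kubectl get crd | grep knative

# list the underlying Knative Serving objects directly
$ kubectl get services.serving.knative.dev --all-namespaces
$ kubectl get revisions.serving.knative.dev --all-namespaces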

The following sections present an example deployment workflow. We start from a new Kubernetes cluster and deploy a sample application. Spin up a Kubernetes cluster (minimum required version is 1.10) on your favorite provider, and follow along.

Installing Knative and Knctl

Before installing Knative, make sure that your Kubernetes cluster is ready and you can communicate with it through kubectl. A good command to run is kubectl get nodes, which should list the nodes in your cluster and their readiness. The output should look like the following example:

$ kubectl get nodes

NAME             STATUS    ROLES     AGE       VERSION
10.148.127.166   Ready     <none>    20d       v1.10.5+IKS
10.148.127.168   Ready     <none>    20d       v1.10.5+IKS
10.148.127.190   Ready     <none>    20d       v1.10.5+IKS

Next, install knctl by grabbing pre-built binaries from the Knctl releases page. On macOS, clicking on this link downloads the binary into your Downloads folder.

# compare checksum output to what's included in the release notes
$ shasum -a 256 ~/Downloads/knctl-*

# move binary to your system's /usr/local/bin -- might require root password
$ mv ~/Downloads/knctl-* /usr/local/bin/knctl

# make the newly copied file executable -- might require root password
$ chmod +x /usr/local/bin/knctl
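As a quick sanity check, ask the newly installed binary for its usage information (we assume here the standard --help flag provided by knctl's command-line framework):

# print the list of available knctl commands
$ knctl --help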

Use the knctl install command to install Knative Serving and Build (in this release knctl does not install Knative Eventing). (Here’s an ASCII cast of the installation procedure.) Depending on the size of your cluster (the available resources) and network latency, installing Knative can complete in less than a minute or take up to 5 minutes or so. (Note that we currently don’t recommend using Minikube with Knative due to resource availability. However, if you are going to give it a try, consider using knctl install --node-ports --exclude-monitoring instead.)

$ knctl install

Installing Istio
Installing Knative
# ...snip...
Waiting for Istio to start...
Waiting for Knative to start...

Succeeded
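Because knctl installs Knative as ordinary Kubernetes resources, you can also verify the result with kubectl. As a rough check (the namespace names below reflect a typical Knative installation and may vary across releases), the Istio and Knative system namespaces should now exist:

# istio-system and the knative-* namespaces should appear in this list
$ kubectl get namespaces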

You can run a quick check to verify that your Knative installation is operational by listing the Istio ingresses that Knative configures. On a fresh installation, there should be one ingress. It might take a little while for your provider to provision a load balancer, so the Addresses column might not be populated immediately.

$ knctl ingress list

Ingresses

Name                    Addresses  Ports         Age
knative-ingressgateway  x.x.x.x    80,443,32400  18h

1 ingress

Succeeded
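If you prefer, the same information is available through kubectl, since the ingress gateway is backed by a regular Kubernetes Service of type LoadBalancer (the namespace below assumes the default installation layout):

# the EXTERNAL-IP column corresponds to the Addresses column above
$ kubectl get service knative-ingressgateway --namespace istio-system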

Example deployment workflow

Before deploying a sample application, there are several Knative concepts you should be familiar with. Consider the following definitions and a diagram from Knative Docs:

  • Revision: The revision resource is a point-in-time snapshot of the code and configuration for each modification that is made to the application. Revisions are immutable objects and can be retained for as long as they are useful. Every time an application is deployed, a new revision is created.
  • Route: The route resource maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes. By default, each service has one associated route.
  • Service: The service resource holds associated revisions and routes (commonly one). Each application is represented as a service, as shown in the following diagram.

[Diagram from the Knative Docs showing the relationship between the Service, Route, and Revision resources]
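To make these concepts concrete, here is a rough sketch of the kind of Service manifest that knctl creates on your behalf when you deploy an application (this assumes the serving.knative.dev/v1alpha1 API used by current Knative releases; exact fields may change as the project evolves). It is shown for illustration only (in the next step we let knctl generate it for us), but you could also apply it directly with kubectl:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
            env:
            - name: TARGET
              value: Max
EOF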

Now that you have a basic mental model of Knative resources, you can try deploying a sample application with the knctl deploy command. (Watch an ASCII cast of the entire workflow.)

This example uses a pre-built container image that includes the helloworld Go application. This application responds with Hello World: ${TARGET} content. Near the end of this workflow, additional documentation links demonstrate how to use the knctl deploy command with a local source code directory:

$ knctl deploy --service hello \
    --image gcr.io/knative-samples/helloworld-go \
    --env TARGET=Max

Succeeded

The service named hello is created and is now visible in the list of services:

$ knctl service list

Services in namespace 'default'

Name   Domain                     Annotations  Age
hello  hello.default.example.com  -            1d

1 service

Ultimately, services are backed by pods, so check that at least one pod is in the “Running” state:

$ knctl pod list --service hello

Pods for service 'hello'

Revision     Name                                    Phase    Restarts  Age
hello-00001  hello-00001-deployment-c9cc8b88c-8hw4x  Running  0         10s

1 pod
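These are ordinary Kubernetes pods, so the same information is available through kubectl whenever you want the lower-level view:

# the hello-00001-deployment-... pod should show up here as well
$ kubectl get pods --namespace default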

Make an HTTP request to the deployed service with the knctl curl command, and verify that the service responds with the expected content:

$ knctl curl --service hello

Running: curl '-H' 'Host: hello.default.example.com' 'http://x.x.x.x:80'

Hello World: Max!

Because you originally configured your application with the TARGET=Max environment variable, this sample application includes Max in its response.

Note: Unless you have configured your DNS provider to point to the Knative ingress IP, you can’t use your browser to access your application. In the previous example, the curl command sends an explicit HTTP Host header when making a request. The HTTP Host header lets the ingress gateway decide which service you are trying to access.
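In other words, until DNS is configured you can still reach the service from any HTTP client as long as you set the Host header yourself. The raw curl call that knctl runs for you looks like this (substitute the address from knctl ingress list for x.x.x.x):

# equivalent raw request against the ingress address
$ curl -H 'Host: hello.default.example.com' 'http://x.x.x.x:80'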

You can also see logs emitted by the application. It happens to log a line when it starts and when it receives requests. The knctl logs -f command keeps following application logs until you stop it with Ctrl+C.

$ knctl logs -f --service hello
hello-00001 > hello-00001-deployment-7d4b4c5cc-v6jvl | 2018/08/02 17:21:51 Hello world sample started.
hello-00001 > hello-00001-deployment-7d4b4c5cc-v6jvl | 2018/08/02 17:22:04 Hello world received a request.

Now that you have deployed the first version of the application, change the TARGET environment variable value so that you have a new version running:

$ knctl deploy --service hello \
    --image gcr.io/knative-samples/helloworld-go \
    --env TARGET=Tom

When you deploy the service again with the same name but a different environment variable, Knative creates a new revision of that service. Confirm the update by making another HTTP request. It might take a little bit of time for the change to take effect, as new pods start and traffic shifts.
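You can watch the rollout happen with the same pod listing command used earlier; a pod for the new revision appears while the old revision’s pod is eventually scaled down once it no longer receives traffic:

# a hello-00002 pod should appear alongside (and eventually replace) hello-00001
$ knctl pod list --service hello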

$ knctl curl --service hello

Running: curl '-H' 'Host: hello.default.example.com' 'http://x.x.x.x:80'

Hello World: Tom!

To verify that you have multiple application versions, or revisions, as they are called in the Knative world, use the knctl revision list command:

$ knctl revision list --service hello

Revisions for service 'hello'

Name         Allocated Traffic %  Serving State  Age
hello-00002  100%                 Active         2m
hello-00001  0%                   Reserve        3m

2 revisions

You can delete any deployed service using the knctl service delete --service hello command. This command deletes the service and all of its revisions.
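For example, after deleting the service you can confirm that it is gone by listing services again:

$ knctl service delete --service hello

# 'hello' should no longer appear in the output
$ knctl service list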

Here’s a summary of what you achieved:

  • First, you deployed an application, which Knative automatically started and assigned a route so that it is reachable.
  • Next, you used knctl to observe several application aspects such as logs and pods.
  • Then, you deployed an updated copy of this application and saw that the new version was actively running.

You did all of these tasks without having to dig deep and understand how Knative manages custom Kubernetes resources!

Now that you have seen how to deploy a sample application from a container image with knctl and Knative, you might want to follow similar workflows. Check out the following links to learn how to use knctl deploy with local source code or with buildpacks:

After you finish experimenting with Knative, you can uninstall it with the knctl uninstall command. (See the ASCII cast of the uninstall procedure.) This task takes a few minutes. The Knative and Istio system namespaces are removed; however, any Knative resources that you have not deleted beforehand remain (such as the hello service). You can also delete the knctl executable with rm /usr/local/bin/knctl.

What’s next?

Our immediate next step for Knctl is to collect feedback about the CLI user experience. We would also love to hear which features should be our next priority, for example Knative Eventing commands or support for traffic splitting. We are also working closely with the Knative community to push for wide adoption of Knctl, hopefully as the standard CLI for Knative.

Conclusion

Knctl streamlines application development and deployment workflows on Knative and Kubernetes by exposing a curated set of commands. Developers can then focus on their code and rely on Knative to manage their applications behind the scenes.

In combination, Knctl – with Knative on top of Kubernetes – can be a powerful and friendly way to deploy your applications without losing access to raw Kubernetes APIs. If you’re interested in more advanced features, like deploying apps from source, splitting traffic across revisions, and connecting Knative services to databases, see Part 2 of our blog series about Knctl.

We welcome your feedback through our GitHub project and look forward to your pull requests. Our goal is to keep knctl in sync with Knative releases and to test it on various leading Kubernetes cluster providers.

Michael Maximilien (IBM) and Dmitriy Kalinin (Pivotal)