
IBM Developer Blog


Learn about key features of the open source software for running serverless workloads on Kubernetes

Today we join the Knative community in celebrating the project’s biggest milestone: Knative 1.0 is generally available. In this blog post, we briefly retrace the history of Knative, discuss the 1.0 features, highlight IBM and Red Hat contributions, and imagine possible future directions.


Kubernetes has captured the cloud, the enterprise, and modern application containerization. However, Kubernetes is designed as a base platform, not as the end-user experience. It is meant to be extended and abstracted with simplified layers on top that best meet the needs of enterprise users, who increasingly use it to modernize their workloads.


One set of features missing from base Kubernetes is the primitives for building serverless workloads. By serverless, we mean workloads that you want to run in the cloud but also want to scale down to zero when you are not using them, to save costs: think of cloud-scale resource pools that are available on demand as managed services. Because all of this is managed for you, you can focus on writing code rather than on the hosting infrastructure.

Brief history

Knative started at Google in 2018 as a project to create a serverless substrate on Kubernetes. In addition to dynamic scaling (with the ability to scale to zero on Kubernetes), the original goals of the project included the ability to process and react to CloudEvents, and to build (create) the images for the components of your system.

While the two initial big components survived, the build aspect of Knative was folded into what is now Tekton, the open source CI/CD pipelining project that is part of the Continuous Delivery Foundation. The rest of Knative continued to grow over the years since, reaching 1.0 today.

Knative features

Now that Knative has finally reached the 1.0 release, it’s worth examining the features that constitute this major milestone. We summarize in broad brush strokes; the Knative community publishes detailed release notes for anyone who wants the full list.


The primary feature of Knative is the serving component. This is the set of APIs and features that enable serverless workloads. Briefly, it defines a comprehensive custom resource for serverless workloads that includes current and past revisions of the resource.

Users can also define custom domains to access their services, and they can split traffic to their services with fine-grained control. Additional features to improve performance, such as freezing pods when not in use to allow quick startup, are being considered to make Knative Serving the best serverless substrate for Kubernetes.
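As a minimal sketch of what such a resource looks like, the following Knative Service manifest routes 90% of traffic to an older revision and 10% to a new one. The service name and revision names are hypothetical, and the image is a public Knative sample:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # hypothetical service name
spec:
  template:
    metadata:
      name: hello-v2               # name for the revision this template creates
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative 1.0"
  traffic:
    - revisionName: hello-v1       # a previously created revision
      percent: 90
    - revisionName: hello-v2
      percent: 10
```

Applying this manifest with `kubectl apply -f` creates the service, its route, and the new revision in one step, while keeping the older revision addressable for the traffic split.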


A key component of the serving APIs is the autoscaling feature. We think this is the singular feature that turns Kubernetes into a serverless platform. Knative users can define how their workloads scale, and the scaling works in both directions: the number of pods increases under load and decreases to zero when the service no longer receives incoming requests.

Scaling smoothly and efficiently is hard because, at any point in time, there is no prior knowledge of the incoming requests (we cannot predict the future request flow) or of how long a service takes to execute each request. So the Knative community devised sophisticated algorithms that use the current state of the system, past request information, resource utilization, and user preferences to determine how to scale each workload up or down.
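Users express those scaling preferences through per-revision annotations. The fragment below is a sketch based on the documented `autoscaling.knative.dev` annotation keys; the values are illustrative:

```yaml
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # allow scale to zero when idle
        autoscaling.knative.dev/max-scale: "10"   # upper bound on pod count
        autoscaling.knative.dev/target: "50"      # target in-flight requests per pod
```

With these settings, the autoscaler adds pods whenever the per-pod concurrency approaches the target, and removes all pods after the service has been idle for the configured window.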


The second pillar of Knative is the eventing component, which is designed to provide the primitives for event-based reactive workloads. All events are internally converted into CloudEvents, which can be produced, forwarded, or converted from heterogeneous sources. The system enables the integration of custom events as CloudEvents in addition to brokering existing eventing sources.
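To sketch the eventing model, a Trigger subscribes a service to events flowing through a broker, filtered by CloudEvent attributes. The broker, event type, and service names below are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: greeter-trigger
spec:
  broker: default                    # broker the events flow through
  filter:
    attributes:
      type: dev.example.greeting     # hypothetical CloudEvent type to match
  subscriber:
    ref:                             # deliver matching events to this service
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter
```

Because the subscriber is a Knative Service, it scales up from zero when events arrive and back down when the stream goes quiet.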

Miscellaneous features

Besides the two main components of serving and eventing, there are smaller components that complete the Knative offering. Some of these are described below.

Client command-line interface (CLI)

The client CLI is the Knative user interface and experience for developers. By using the kn command, developers can manipulate all aspects of Knative at the command line with an interface that quickly becomes familiar and matches the Knative APIs.

Important features and recent additions to the CLI include the ability to connect to event sources and sinks, split traffic across revisions, and create custom domains, along with the primary features of creating serverless services and customizing their scaling characteristics.
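For illustration, the kn commands below sketch those primary features. The service and revision names are hypothetical, the image is a public Knative sample, and the flag names follow the kn documentation but may vary across client versions:

```shell
# Create a serverless service from a container image
kn service create hello --image gcr.io/knative-samples/helloworld-go

# Customize scaling: allow scale to zero, cap at 10 pods
kn service update hello --scale-min 0 --scale-max 10

# Split traffic between two revisions
kn service update hello --traffic hello-00001=90 --traffic hello-00002=10
```

Each update creates a new revision, so the traffic split in the last command can roll a change out gradually and roll it back just as easily.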

CLI plug-ins

The CLI has a built-in extension mechanism that allows end users and third parties to add new commands and command groups. The plug-ins are self-contained and the end user can decide which plug-ins to add to their environments.

func CLI plug-in

The func plug-in is a canonical plug-in that allows end users to quickly build function-as-a-service (FaaS) style workloads with Knative. It provides the ability to define simple functions in different languages (Node, Java, Go, Python, and others). By using func, developers can convert a function into a running serverless service and connect it with event sources to trigger the function.

Other plug-ins

The community created a variety of additional plug-ins to meet different needs. For instance, the event source plug-ins make it easy to connect Knative services to event sources and event brokers directly with kn.

The kafka-source plug-in allows users to manage Kafka sources from the command line to import Kafka messages as CloudEvents into Knative Eventing.

There is an admin plug-in that streamlines DevOps activities with Knative clusters, such as the ability to control domains and the many knobs that a Knative cluster manager can change.

A quickstart plug-in helps you to get started quickly with Knative with one command.

A migration plug-in lets users migrate Knative services from one cluster to another.

The diag plug-in facilitates debugging of Knative services by showing you a comprehensive view of each service’s primitives and various annotations and labels, as well as displaying a textual graph at the command line.


The Knative operator is designed to make it easy for you to deploy, update, and administer Knative installations by using a custom-made Kubernetes operator. The operator’s advanced features make it easy for a Knative administrator to install ingress plug-ins (Istio, Contour, and Kourier); install eventing sources; configure node selectors, affinity, and toleration; configure replicas, labels, and annotations; and configure all ConfigMaps via the operator. In summary, the Knative operator 1.0 enables efficient and optimized management of any Knative installation.
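For example, a KnativeServing custom resource like the following sketch selects an ingress and overrides a ConfigMap key through the operator. The apiVersion shown is the current one and may differ by operator release:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    kourier:
      enabled: true          # choose Kourier as the ingress layer
  config:
    network:                 # overrides keys in the config-network ConfigMap
      ingress-class: kourier.ingress.networking.knative.dev
```

The operator reconciles this single resource into the full set of deployments and ConfigMaps, so upgrades and configuration changes go through one declarative object instead of many.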

IBM and Red Hat involvement

IBM and Red Hat have been involved in the Knative project from the start. We continued this involvement by adding more engineers and by proposing and leading various aspects of the project. Indeed, we currently lead over 50% of the most active subprojects, and our contributors have been elected to the Technical Oversight Committee (TOC), Steering Committee (SC), and Trademark Committee.

What’s next

While the 1.0 release constitutes a major milestone for the Knative community, it is the start of a journey. Early adopters who built products on Knative, such as IBM Cloud Code Engine, Red Hat OpenShift Serverless, and Google Cloud Run, identified limitations. For example, the current release improves the startup time for workloads, but startup is still far from optimal.

As we celebrate Knative 1.0, let us imagine what might come next: for example, performance improvements to make services start and scale faster. We also have a special working group focused on security and multitenancy. We hope that the outcome of that working group increases the confidence of vendors that want to use Knative in a secure, multitenant, enterprise environment.

The Knative project is pushing the boundaries of innovation by working on cold start reductions, such as freezing containers, and trying other optimizations, which makes now a great time to join the community to contribute and learn more about serverless.

We look forward to continuing our work with the community to make Knative the best open source, serverless layer for Kubernetes developers, end users, and vendors.