Extending Kubernetes for a new developer experience
How Istio and Knative are changing the way developers approach Kubernetes
Most people already know Kubernetes as the de facto hosting platform for container-based applications. If you manage a Kubernetes cluster, you probably already know about many of its extensibility points, thanks to customizations you’ve installed. Or perhaps you’ve developed something yourself, such as a custom scheduler. Maybe you’ve even extended the Kubernetes resource model by creating your own Custom Resource Definition (CRD), along with a controller to manage those new resources. But most of these extensions tend to be developed for the benefit of Kubernetes itself as a hosting environment, meaning they help manage the applications running within it. Now, two recently introduced projects, when combined, stand to radically change how application developers use and view Kubernetes.
Let’s explore these two projects and explain why they could cause a significant shift in the Kubernetes application developer’s life.
Istio: The next-gen microservice network management
Istio was introduced back in 2017 in a joint collaboration between IBM, Google, and Lyft as an open source project to provide a language agnostic way to connect, secure, manage, and monitor microservices. Built with open technologies such as Envoy, Prometheus, Grafana, and Jaeger, it provides a service mesh that allows you to:
- Perform traffic management, such as canary deployment and A/B testing.
- Gather, visualize, and export detailed metrics and tracing across your microservices.
- Secure services with authentication, authorization, and automatic traffic encryption.
- Enforce mesh-wide policies, such as rate limiting and allowlist/blocklist.
Istio does all of the above, and more, without making any modifications to the application itself. Istio extends Kubernetes with new CRDs and injected Envoy proxy sidecars that run next to your application to deliver this control and management functionality.
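To make the traffic-management point concrete, here is a sketch of the kind of routing rule an operator might write for a canary deployment using Istio’s CRDs. The service name `reviews`, the version subsets, and the 90/10 split are all hypothetical, and the exact API version may differ across Istio releases:

```yaml
# Hypothetical canary rule: route 90% of traffic to subset v1 and 10% to v2.
# A matching DestinationRule (not shown) would define the v1 and v2 subsets.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Note that this rule lives entirely outside the application: shifting the weights over time rolls the canary forward without touching application code.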
If we look under the covers, we can see that the Istio architecture is split into two planes:
- The data plane is composed of a set of intelligent proxies (Envoy), deployed as sidecars that mediate and control all network communication among microservices.
- The control plane is responsible for managing and configuring proxies to route traffic and enforce policies at runtime.
Istio’s architecture comprises these components:
- Envoy – the sidecar proxies running alongside your applications
- Pilot – configures the proxies and propagates routing rules throughout the mesh
- Mixer – enforces policy and access control, and gathers telemetry data
- Citadel – manages identity, encryption, and credentials
- Galley – validates user-authored Istio API configuration
While all of this is pretty exciting by itself (Istio is definitely generating considerable buzz and adoption in the industry), it’s still targeted at a DevOps engineer/operator persona – someone responsible for administrative tasks on your Kubernetes cluster and applications. Yes, mere mortal software developers could configure Istio routing and policies themselves, but in practice it’s not clear that your average developer will do so – or even want to. They just want to focus on their application’s code, not on all the details associated with managing their network configuration.
However, Istio adds to Kubernetes many of the features it was missing for managing microservices, and it moves the needle closer to Kubernetes becoming a seamless platform where developers can deploy their code without any configuration. Just like Kubernetes, Istio has a clearly defined focus, and it executes it well. If you view Istio as a building block, or a layer in the stack, it enables new technologies to be built on top of it. That’s where Knative comes into the picture.
Knative: A new way to manage your application
Like Istio, Knative extends Kubernetes by adding some new key features:
- A new abstraction for defining the deployment of your application to enable a set of rich features aimed at optimizing its resource utilization – in particular “scale to zero.”
- The ability to build container images within your Kubernetes cluster.
- Easy registration of event sources, enabling your applications to receive their events.
Starting with the first item, there’s a Knative component called “serving” that is responsible for running, exposing, and scaling your application. To achieve this, a new resource called a Knative “Service” is defined (not to be confused with the core Kubernetes “Service” resource). The Knative “Service” is actually more akin to the Kubernetes “Deployment,” in that it defines which image to run for your application along with some metadata that manages it.
The key difference between a Knative Service and a Deployment is that a Service can be scaled down to 0 instances if the system detects that it is not being used. For those familiar with Serverless platforms, the concept here is the same as the ability to “scale down to zero,” thus saving you from the cost of continually having at least one instance running. For this reason, Knative is often discussed as a Serverless hosting environment. In reality, it can be used to host any type of application (not just “Functions”), but this is one of the bigger use cases driving its design.
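As a sketch of what this looks like in practice, a minimal Knative Service might be written roughly as follows. The exact schema varies across Knative releases, and the name and image below are hypothetical:

```yaml
# Hypothetical minimal Knative Service (schema per early serving.knative.dev releases).
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
spec:
  runLatest:                  # always run the latest Revision of this configuration
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/example/helloworld   # assumed image; replace with your own
```

From this single resource, Knative derives the underlying Deployments, Routes, and autoscaling behavior – including scaling to zero when no requests are arriving.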
Within the Knative Service, there’s also the ability to specify a “roll-out” strategy to switch from one version of your application to another. For example, you can specify that only a small percentage of the incoming network requests be routed to the new version of the application and then slowly increase it over time. To achieve this, Istio is leveraged to manage this dynamic network routing. Along with this is the ability for the Service to include its “Route” or endpoint URL – in essence, Knative will set up all of the Kubernetes and Istio networking, load balancing, and traffic splitting that are associated with this endpoint for you.
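Depending on the Knative release, this roll-out can be expressed directly in the Service through a traffic block. The revision names and percentages below are hypothetical:

```yaml
# Hypothetical traffic split across two Revisions of the same Knative Service.
spec:
  traffic:
  - revisionName: helloworld-00001   # current version keeps most of the traffic
    percent: 90
  - revisionName: helloworld-00002   # new version receives a small canary share
    percent: 10
```

Under the covers, Knative translates this declaration into the corresponding Istio routing configuration, so the developer never writes the Istio rules directly.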
One of the other big features available in the Knative Service is the ability to specify how the image used for deployment should be built. In a Kubernetes Deployment, the image is assumed to be built already and available via some container image registry. However, this requires the developer to have a build process that is separate from his/her application deployment. The Knative Service allows for all of this to be combined into one – saving the developer time and resources.
This “build” component that is referenced from the Service is the second key component of the Knative project. While there is flexibility to define any type of build process you want, typically the build steps will be very similar to what developers do today: it will extract the application source code from a repository (e.g., GitHub), build it into a container image, and then push it to an image registry. The key aspect here, though, is that this is now all done within the definition of the Knative Service resource, and does not require a separately managed workflow.
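A sketch of such a Build resource, using the standalone Knative Build API of the time, might look like the following. The repository URL, image name, and the `kaniko` build template are assumptions for illustration:

```yaml
# Hypothetical Build: fetch source from Git, build an image, push it to a registry.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/app.git   # assumed source repository
      revision: master
  template:
    name: kaniko                                # assumed installed BuildTemplate
    arguments:
    - name: IMAGE
      value: docker.io/example/app              # destination image in the registry
```

The same steps can instead be referenced from within a Knative Service, so a single resource describes both how to build the image and how to run it.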
This brings us to the third and final component of the Knative project, “Eventing.” With this component, you can define and manage subscriptions to event producers and then control how the received events are then choreographed through your applications. For example, an incoming event could be sent directly to a single application, to multiple interested applications, or even as part of a complicated workflow where multiple event consumers are involved.
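As a rough sketch, early Knative Eventing modeled this with Channel and Subscription resources. The names below are hypothetical and the schema varies by release:

```yaml
# Hypothetical Subscription: deliver events from a Channel to a Knative Service.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: github-push-subscription
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: github-events            # assumed Channel fed by a GitHub event source
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: app-builder            # assumed Service that consumes the events
```

Fanning an event out to multiple consumers is then a matter of adding more Subscriptions against the same Channel.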
Bringing this all together, it should now be clearer how these components can be combined to define the entire workflow of an application’s lifecycle.
A simplistic scenario might be:
- A developer pushes a new version of his/her code to a GitHub repository.
- A GitHub event is generated as a result of the push.
- The push event is received by Knative, which passes it along to some code that generates a new revision/version of the application.
- This new revision then causes the building of a new version of the container image for the application.
- Once built, the new image is deployed to the environment for some canary testing, and the load on the new version is slowly increased over time until the old version of the application can be removed from the system.
This entire workflow can be executed and managed within Kubernetes, and it can be version controlled right alongside the application. And, from the developer’s point of view, all he/she ever deals with is a single Knative Service resource to define the application – not the numerous resource types that developers would normally need to define when using Kubernetes alone.
While the above set of Knative features is pretty impressive, Knative itself (like Kubernetes) is just another set of building blocks available for the community to leverage. Knative is being designed with a set of extensibility points to allow for customizations and future higher order tooling to be developed.
Where will we go next?
What’s different about the development of Istio and Knative is that, combined, they’re focused on making life easier for the application developer. As good as Kubernetes is, many developers’ first exposure to it (especially if they’re coming from other platforms like Cloud Foundry) is probably a bit daunting. Between Pods, ReplicaSets, Deployments, Ingresses, Endpoints, Services, and Helm, there are a lot of concepts to learn and understand. When all a developer really wants to do is host some code, it can seem like more trouble than it’s worth. Knative, with its leveraging of Istio, is a big step forward in helping developers get back to being application developers instead of DevOps experts. It’ll be exciting to see how the community reacts as these projects mature.