Apache OpenWhisk is a serverless, open source cloud platform that executes functions in response to events at any scale. As distributed software, OpenWhisk can be deployed as a container cluster in which each component runs in its own container. This allows users to deploy OpenWhisk on any infrastructure platform that supports container management and orchestration.
Kubernetes is a well-known container orchestration tool for deploying container-native applications. Deploying OpenWhisk on Kubernetes leverages the capabilities Kubernetes provides to better control and manage the OpenWhisk containers, which can result in a more stable OpenWhisk runtime. There is an Apache OpenWhisk project in progress for deploying OpenWhisk on Kubernetes, but it currently requires several manual configuration and deployment steps. In this post, we explore using Helm, a more Kubernetes-native approach, to deploy OpenWhisk on Kubernetes.
Deploying OpenWhisk on Kubernetes with Helm
Helm is a tool for managing Kubernetes charts, where a chart is a package of pre-configured Kubernetes resources. In other words, users can write charts, which are in template format, to define a set of Kubernetes resources (each resource standing for a component of the application), and use Helm to deploy those charts to a Kubernetes cluster. For more information about Helm and charts, please visit the official Helm repository.
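As a rough sketch of how such a chart is organized (the directory and file names below are illustrative, not taken from any actual project), a chart for a single OpenWhisk component might look like this:

```
openwhisk-couchdb/            # one chart per OpenWhisk component (name is illustrative)
├── Chart.yaml                # chart metadata: name, version, description
├── values.yaml               # default, user-overridable configuration values
└── templates/
    ├── deployment.yaml       # template for the component's Deployment/StatefulSet
    └── service.yaml          # template for the Service that exposes it in the cluster
```

Helm renders the files under `templates/` with the values from `values.yaml` (plus any user-supplied overrides) and submits the resulting resources to the Kubernetes API.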
To deploy OpenWhisk over Kubernetes, we need to design charts for each of the necessary components of the OpenWhisk runtime:
- Kafka Node: Kafka provides the messaging service for the OpenWhisk controller and invoker. In a Kubernetes cluster, we can deploy a `StatefulSet` to run the Kafka queue, and this `StatefulSet` will be bound to a corresponding `Service` to expose Kafka to the controller and invoker nodes. Since running Kafka requires a Zookeeper runtime, we also need to deploy a `Deployment` for Zookeeper and bind a `Service` to it to expose Zookeeper to Kafka.
- CouchDB Node: CouchDB provides the database service for the OpenWhisk controller and invoker. In the Kubernetes cluster, we can deploy a `Deployment` to run CouchDB and bind a corresponding `Service` to provide a public service endpoint for the controller and invoker nodes.
- Controller Node: this is a standard OpenWhisk component that depends on the Kafka and CouchDB services. To deploy the controller, we can simply create a `StatefulSet` (so that multiple controllers each have an index to identify themselves) in the Kubernetes cluster.
- Invoker Node: this is a standard OpenWhisk component that depends on the Kafka and CouchDB services. To deploy the invoker, we can simply create a `Deployment` (so that users can auto-scale invokers in the future) in the Kubernetes cluster. Note that all of the action containers are created and controlled by the invoker on a Kubernetes node; these containers are not under Kubernetes' control.
- Nginx Node: this is the web server that forwards outside client requests to the controller node. To deploy the Nginx node, we can create a `Deployment` in Kubernetes to run the Nginx server, along with a Kubernetes `Service` of type `NodePort` to provide a public IP/port, so that outside clients can reach the OpenWhisk cluster through the Nginx server.
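To make the last item concrete, here is a minimal, hand-written sketch of what the Nginx pieces could look like as plain Kubernetes manifests, before templating; the image, labels, and port numbers are assumptions for illustration, not values copied from the OpenWhisk charts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openwhisk-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openwhisk-nginx
  template:
    metadata:
      labels:
        app: openwhisk-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable       # assumed image; the real chart bakes in OpenWhisk's Nginx config
        ports:
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: openwhisk-nginx
spec:
  type: NodePort                  # exposes the port on every cluster node
  selector:
    app: openwhisk-nginx
  ports:
  - port: 443
    nodePort: 30443               # illustrative: outside clients reach OpenWhisk at <node-ip>:30443
```

In a chart, the hard-coded values above (replica count, image, ports) would be pulled from `values.yaml` so that operators can override them at install time.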
Based on this design, we can create a Helm chart for each of the components above. Once all the necessary charts are collected together, we can deploy OpenWhisk with a single Helm command. A charts project for deploying OpenWhisk is already available; readers can follow the steps described in its README.md to deploy OpenWhisk.
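Assuming the charts have been cloned locally (the chart path and release name below are illustrative, not taken from the project; consult its README.md for the exact commands), the deployment then reduces to something like:

```sh
# fetch the charts
git clone https://github.com/xingzhou/Deploy_OpenWhisk_with_Helm.git
cd Deploy_OpenWhisk_with_Helm

# one command renders the component charts and creates all the resources
helm install ./openwhisk --name openwhisk --namespace openwhisk

# watch the component pods (kafka, zookeeper, couchdb, controller, invoker, nginx) come up
kubectl get pods --namespace openwhisk
```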
Using Helm allows users to deploy OpenWhisk with a single command, which simplifies an operator's daily work. Helm also makes configuring, managing, and upgrading complex distributed software easier. For more information about using Helm to deploy OpenWhisk, please visit: https://github.com/xingzhou/Deploy_OpenWhisk_with_Helm.