Orchestrate multi-tier application deployments on Kubernetes using Ansible, Part 1: A comparison between technologies

In the era of microservices, containers, Kubernetes, and DevOps, deployment of complex multi-tier systems in the public cloud is achieved with continuous integration and continuous delivery (CI/CD) pipelines. However, things become complicated when delivering applications to private cloud or on-premises systems, where CI/CD isn’t owned by development, or doesn’t exist at all.

In this article, I’ll discuss existing approaches to on-premises delivery of containerized software, identify their strengths, and point out common pitfalls.

Orchestration in the world of Kubernetes

Deploying complex application systems is not a new problem. Over the past few decades, the need for automated configuration and management, or orchestration, of software has been identified many times. In the operating systems space, configuration management tools like Chef, Puppet, Salt, and, finally, Ansible orchestrate the configuration of OS-native applications. Tools like AWS CloudFormation, OpenStack Heat, and Terraform orchestrate the deployment of Infrastructure-as-a-Service (IaaS) virtual resources, including machines, block storage devices, and software-defined networks (SDNs). The order in which these tools were created illustrates how complex, server-oriented solutions have faded, making room for specialized, serverless tools that perform one kind of task particularly well (and fast).

For containers, Kubernetes acts as the operating system and maintains the desired state of containerized systems using declarative object management and an extensible object type model. While no doubt powerful, in practice the declarative approach proves quite difficult to work with when implementing complex, multi-application deployments. This is especially true for automating lifecycle management procedures, which naturally fit scripting languages. Unfortunately, introducing scripts into Kubernetes resource definitions increases the complexity of the overall solution and triggers several problems, especially at technology integration points. This hurts stability and testability, and it justifies the need to grow new solutions that help conceptualize the deployment flow. The lengthy list in Declarative application management in Kubernetes by Brian Grant shows that Kubernetes-targeted deployment technologies are blossoming at scale, perhaps at a rate never seen before.
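To ground the discussion, here is a minimal sketch of what declarative object management looks like in practice: a Deployment manifest that describes the desired state, which Kubernetes then continuously reconciles. The names and image are purely illustrative.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                   # illustrative name
    spec:
      replicas: 2                 # desired state: keep two pods running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.17   # illustrative image
              ports:
                - containerPort: 80

Applied with kubectl apply -f, the manifest says nothing about how to reach that state; the control plane works out the imperative steps. That is precisely why step-by-step lifecycle procedures, which are naturally scripted, feel awkward when forced into resource definitions alone.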

Now, let’s briefly discuss some of the most prominent players in the Kubernetes orchestration area.

Helm

Perhaps the most widely adopted and promoted Kubernetes orchestration tool is Helm. An incubating Cloud Native Computing Foundation (CNCF) project advertised as “the package manager for Kubernetes”, Helm packages applications as charts: collections of metadata files and Kubernetes resource configuration files customized using Go templates. While many love Helm, others have pointed out the issues they have with the Kubernetes package manager. These issues include:

  • The use of Tiller, a server-side agent
  • Size-restricted configmap-based storage
  • The use of the Go template language, which isn’t very popular in the DevOps landscape

As of the time I’m writing this article, version 3 of Helm, which promises to get rid of Tiller and introduce support for Lua scripting, is still in beta and not yet generally available. While great for automating single-application deployments, Helm doesn’t provide a fully featured dependency resolution model.
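As a minimal sketch of how chart templating works, the excerpt below shows a Deployment template alongside the values that feed it; the chart layout and values are illustrative, not taken from any real chart.

    # values.yaml (illustrative defaults)
    replicaCount: 2
    image:
      repository: nginx
      tag: "1.17"

    # templates/deployment.yaml (excerpt)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-web
    spec:
      replicas: {{ .Values.replicaCount }}
      template:
        spec:
          containers:
            - name: web
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Values can be overridden at install time (for example, with --set replicaCount=3), which is convenient for a single application but does not add up to dependency resolution across many charts.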

Operator Framework

A technology originating at CoreOS, the Operator Framework is one of the foundations of OpenShift Container Platform (OCP) version 4. It builds on the idea of modeling applications as Kubernetes resources, extending the Kubernetes API with custom, application-oriented object types. Think of an Operator as “the runtime that manages this type of application on Kubernetes”. Operators seem to best fit complex applications with demanding lifecycle management (databases, message brokers, and so on) and large multi-tenant clouds where the cost of running an Operator is negligible. While orchestrating an individual application is what makes Operators fly, it is not clear how to lift orchestration to the level of a product composed of many applications. Since its inception, the Operator Framework has supported writing Operator code in Go. Perhaps due to Go’s relatively steep learning curve, community adoption of the Operator Framework still seems quite low. There are many proof-of-technology projects, but few of them are maturing. The adoption rate may improve, though, thanks to the recently added support for Helm and Ansible as implementation technologies.
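To illustrate the idea of application-oriented object types, here is a hypothetical custom resource that an Operator might watch; the PostgresCluster kind and its fields are invented for this example and do not come from any specific Operator.

    apiVersion: example.com/v1alpha1   # hypothetical API group, defined by a CustomResourceDefinition
    kind: PostgresCluster              # hypothetical application-oriented type
    metadata:
      name: orders-db
    spec:
      replicas: 3                      # the Operator's reconcile loop turns this into StatefulSets, Services, backups, etc.
      version: "11"
      storageSize: 20Gi

A single Operator encapsulates the lifecycle knowledge for one application type; composing many such applications into one product-level deployment is the part that remains unclear.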

Kustomize

Kustomize recently gained a lot of popularity after officially becoming part of kubectl with the Kubernetes 1.14 release. Kustomize defines complex applications declaratively and lets you apply domain-specific, “common case” changes to resource configurations. One big differentiating factor is that Kustomize does not use templates at all. Instead, you overlay (or patch) base resource configurations. This approach has many proponents, but shifting away from a template-pervasive world takes a good deal of self-discipline and focus, which surprisingly gives this simple tool a pretty steep learning curve. Another commonly raised issue is that Kustomize handles the burden of complex deployments poorly.
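A minimal sketch of the overlay idea, with invented file names: a base holds the plain resource configurations, and an overlay patches them without any templating.

    # base/kustomization.yaml
    resources:
      - deployment.yaml
      - service.yaml

    # overlays/production/kustomization.yaml
    bases:
      - ../../base
    namePrefix: prod-
    patchesStrategicMerge:
      - replica-count.yaml

    # overlays/production/replica-count.yaml (a partial Deployment used as a patch)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 5

Rendering and applying with kubectl apply -k overlays/production keeps every variant as plain YAML, but organizing bases, overlays, and patches for dozens of applications is where the approach starts to strain.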

Automation Broker

Automation Broker is a server-oriented solution that implements the Open Service Broker API from Cloud Foundry. It is very different from lightweight tools like Kustomize and features a large server component as a conscious design element. Applications deployed by Automation Broker are described using the Ansible Playbook Bundle (APB) format. An APB is essentially a container image that includes an Ansible runtime and application-specific Ansible playbooks. Bundling an interpreter together with the application automation makes the resulting package relatively heavy, which becomes an issue for deployments that consist of tens of applications. With its focus on self-service provisioning and the business aspects that come with it, Automation Broker does not seem to fit the world of lightweight, distributed, modern DevOps technologies.

Ansible Kubernetes Module

Ansible is a flexible, general-purpose automation tool that comes with a complete set of capabilities for performing a variety of configuration management tasks across multiple systems at the same time. Much like a domain-specific language (DSL), it can become the foundation of specialized automation solutions. The Kubernetes module (k8s) allows the creation of tasks that invoke the Kubernetes API. The module depends on the OpenShift Python library, which needs to be installed separately and may raise questions about vendor neutrality. The Ansible Kubernetes Module is also an important ingredient in both the Operator Framework and Ansible Playbook Bundles.
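Here is a minimal sketch of what a task-based deployment looks like with the k8s module; the namespace, host setup, and file paths are illustrative.

    # playbook.yaml -- assumes the openshift Python library is installed on the control node
    - hosts: localhost
      connection: local
      tasks:
        - name: Ensure the application namespace exists
          k8s:
            api_version: v1
            kind: Namespace
            name: demo
            state: present

        - name: Deploy the web tier from a local manifest
          k8s:
            state: present
            namespace: demo
            src: files/web-deployment.yaml

Because these are ordinary Ansible tasks, they can be combined with loops, conditionals, and handlers, which is exactly the kind of procedural glue that purely declarative tooling struggles to express.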

Summary

In my opinion, deployment automation in the world of Kubernetes is still waiting for that one game-changing technology. The abundance of solutions should match any taste, but somehow the Chardonnay of container orchestration has not yet surfaced. In my next article, I’ll share an alternative to the market leaders, one that may appeal to DevOps engineers who seek flexibility powered by battle-proven, widely adopted open source tools. I’ll take lessons from historical achievements in the orchestration area and apply them in the context of containers and Kubernetes. Stay tuned!

Marcin Lewandowski