As discussed in Part 1, a narrow “lift-and-shift” approach to moving to containers is unlikely to reap the many benefits that containerization can provide. This article expands on some of the most common additional actions you need to take over and above the move to container infrastructure to get the best from the initiative:

  1. Fine-grained components
  2. Container orchestration
  3. Disposable components
  4. Pipeline automation
  5. Image-based deployment
  6. Infrastructure as code
  7. Organizational decentralization
  8. Agile development methodology
  9. Self-service developer experience

This article looks at each of these topics in turn, discussing how they complement the move to containers, and then summarizes the benefits of moving to containers.

Fine-grained components

Containers are so lightweight that it is feasible, indeed advisable, to break your current components down into more fine-grained elements that can run completely independently of one another. In theory, you could do the same with virtual machines (VMs). But in reality, the resource overheads, the time to create images, and the start-up time of an entire operating system make it prohibitive.

In the application development space, you can break an application into discrete and independent functions (or services), which is a core aspect of microservice architecture. Fine-grained components can be designed, deployed, scaled, and maintained completely independently of the other components.

It is only fair to note that deciding on the appropriate granularity of components is a complex topic in its own right. The associated refactoring can vary significantly in difficulty, depending on how well decoupled the internal components are in the current design.

Container orchestration

Any container initiative quickly results in many more containers than the number of virtual machines you would otherwise use. Why? Containers are so much easier to provision and scale, and you are likely to move to a more fine-grained model.

Today it pretty much goes without saying that you need a container orchestration platform. This article uses Kubernetes in its examples because it is currently the most prevalent of these platforms.

At a minimum, these platforms provide a framework for efficiently aligning resources with containers, mechanisms to manage container lifecycle and scaling, load balancing across containers, routing between them, and control over how they are exposed beyond the container platform. Some provide standardized logging, for example through an ELK stack. More recent advances introduce service meshes such as Istio to enable sophisticated routing, embedded logging, and traffic management patterns such as Circuit Breaker.
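
As a minimal sketch of what the orchestration platform takes on, the following Kubernetes resources (the component name and image are illustrative, not from a real system) declare a scaled, load-balanced component:

    # deployment.yaml: Kubernetes keeps the declared number of replicas running
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: catalog-service          # illustrative name
    spec:
      replicas: 3                    # scaling policy enforced by the platform
      selector:
        matchLabels:
          app: catalog-service
      template:
        metadata:
          labels:
            app: catalog-service
        spec:
          containers:
          - name: catalog-service
            image: example.com/catalog-service:1.0.0   # illustrative image
            ports:
            - containerPort: 8080

    # service.yaml: load balances across the replicas and controls exposure
    apiVersion: v1
    kind: Service
    metadata:
      name: catalog-service
    spec:
      selector:
        app: catalog-service
      ports:
      - port: 80
        targetPort: 8080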

Disposable components

In this case, disposable means that the containers are designed so that they can be stopped at will by an owning orchestration platform (Kubernetes in the examples in this article). A disposable container stops as rapidly as possible with minimal shutdown actions, does not exclusively own any state, and has no exclusive affinity to particular infrastructure or other components.

Another commonly used term for disposable is stateless, but stateless conveys only a subset of the aspects of disposability and is often misinterpreted. Disposable speaks more to the “why” question: components need to be disposable so that an orchestration platform can look after their lifecycle without having to know anything about how they work on the inside. As a result, disposable components enable the orchestration platform to enforce availability and scalability policies in a standardized way across all types of container.
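
To make this concrete, here is a minimal sketch (the grace period, probe path, and names are illustrative assumptions) of the pod settings that let Kubernetes stop and replace containers at will:

    # Pod template settings that support disposability (extends the earlier
    # Deployment sketch; values are illustrative)
    spec:
      terminationGracePeriodSeconds: 10   # the platform can stop the container at
                                          # will; shutdown must complete quickly
      containers:
      - name: catalog-service
        image: example.com/catalog-service:1.0.0
        readinessProbe:                   # traffic is routed only to replicas that
          httpGet:                        # report themselves ready
            path: /health                 # illustrative health endpoint
            port: 8080
        # No exclusively owned volumes and no node affinity: any replica can be
        # stopped and replaced without coordination.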

Pipeline automation

Fine-grained containers enable you to make changes more independently and rapidly. Automating the build pipeline lets you iterate on those changes much more quickly.

A big difference when working with containers is that an image can be defined simply as a layered file system. For example, the actions in a Dockerfile used to define and build a Docker image primarily result in new files being added onto the existing file-system image.

Modern runtime environments can typically be installed through scripts, so you can simply add those scripts to the automated pipeline without needing proprietary installation tools.
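
For example, here is a minimal sketch of a Dockerfile (the base image, paths, and commands are illustrative). Each instruction adds a layer of files on top of the image below it, and the dependency installation is just a scripted command:

    # Each Dockerfile instruction adds a layer onto the existing file-system image.
    FROM node:18-alpine           # start from an existing layered image (illustrative)

    WORKDIR /app

    COPY package*.json ./         # layer: dependency manifests copied from source
    RUN npm install --omit=dev    # layer: dependencies installed by script, with no
                                  # proprietary installation tooling

    COPY . .                      # layer: the application code itself

    CMD ["node", "server.js"]     # metadata only; adds no files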

Most of the build amounts to copying files from repositories (for example, GitHub) and then combining them into a container image, which is itself placed in a standard repository (for example, DockerHub). Builds can easily be triggered by actions on the source code repository. You can use the resultant update of the image repository to trigger automated deployments and tests in early environments. Using standard commands such as file copies, rather than complex proprietary commands for installing, compiling, and deploying, makes the build and deployment pipeline simpler both to automate and to subsequently maintain.
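
As a sketch of such a pipeline, the following GitHub Actions workflow (the image name and secrets are illustrative assumptions) builds an image on every push to the source repository and publishes it to an image repository:

    # .github/workflows/build.yaml: build and publish an image on every push
    name: build-and-push
    on:
      push:
        branches: [main]      # actions on the source repository trigger the build
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4    # copy the files from the source repository
          - name: Log in to Docker Hub
            run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          - name: Build the container image
            run: docker build -t example/catalog-service:${{ github.sha }} .
          - name: Push to the image repository   # this update can in turn trigger
            run: docker push example/catalog-service:${{ github.sha }}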

Image-based deployment

Containers are so lightweight that, rather than deploying code to a running server, you can deliver an entire stack as an image. For example, the image can include your code, its runtime, and any associated configuration. This approach enables dramatic improvements in consistency across deployments in different environments.

Compared to VM images, container images are quicker to build, faster to start and stop, and smaller to store and move around (even more so if you have refactored into fine-grained components). You have the opportunity to deliver your components to an environment in a completely new way. Traditionally, you would deploy your components to a running infrastructure, such as an application server on a VM, and you often cannot bring down that VM because it also contains other components owned by other teams.

With containers, you can build an entire stack into the image, including, for example, your code, the specific configuration of the application server, and the language runtime environments and libraries it requires. Then you can deploy it to destination environments as a self-contained unit and start it up rapidly, completely independently of everything else around it. This structure enables consistency across deployments in different environments, results in a standardized deployment mechanism that does not require knowledge of the contained runtimes, and contributes to the standardization of scaling.

Infrastructure as code

Developers are used to deploying components onto an existing shared infrastructure. On a container platform, you effectively build out a unique topology for each container deployment. The characteristics of this topology, including scaling, load balancing, distribution across zones, routing, and security, are specified declaratively in files. These files should effectively become part of the codebase of the component within the container. This approach ensures that the component is delivered consistently in every environment onto a topology that is specifically suited to its needs.

Infrastructure as code means that all the non-functional characteristics of the topology (such as availability and scaling) for the container are specified declaratively in configuration files that are stored alongside the functional code for the component. Indeed, they should be considered part of the code.

Using Kubernetes as an example, a Helm chart can specify how and when the container is to be scaled (replication policy), how it should be spread across multiple nodes, what componentry should remain together (Kubernetes pods), what the encapsulation boundary is (Kubernetes namespace), and what ports should be made accessible to other components. Any variable properties that change between development, test, and production are externalized (Kubernetes configMap or secrets) and picked up from those environments (typically at deployment time). This approach dramatically improves the consistency and predictability of deployment, ensuring maximum similarity across environments, simplifying the task of performing deployments, and enabling greater deployment automation.
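
A minimal sketch of this idea (the chart layout, names, and values are illustrative) shows how a Helm chart keeps the topology declarative and externalizes per-environment configuration:

    # values.yaml: declarative settings stored alongside the component's code
    replicaCount: 3                         # scaling policy, varied per environment

    # templates/deployment.yaml (excerpt): the chart renders values into resources
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}
      namespace: {{ .Release.Namespace }}   # the encapsulation boundary
    spec:
      replicas: {{ .Values.replicaCount }}
      selector:
        matchLabels:
          app: {{ .Release.Name }}
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}
        spec:
          containers:
          - name: app
            image: example.com/catalog-service:1.0.0
            ports:
            - containerPort: 8080           # made accessible to other components
            envFrom:
            - configMapRef:
                name: app-config            # variable properties picked up from the
                                            # target environment at deployment time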

Organizational decentralization

Organizational decentralization is critical to empowering teams to make their own decisions and enabling full, end-to-end ownership of componentry. It means relaxing the strict governance and controls that often impose “one-size-fits-all” policies in an attempt to simplify the management of complex IT landscapes. To be productive, teams need to be able to quickly make their own decisions about the technologies, frameworks, and methods that they use. Equally important is the ability to self-provision the capabilities they need.

The following sections show how changes in architecture, design, and technology enable this autonomy. But the first step is about restructuring the organization to accommodate it. Create small teams, aligned with business needs, that each contain decision makers empowered to make autonomous decisions. Of course, the caveat is that this decentralization of control doesn’t make sense for every enterprise. It might make sense for only certain parts of an enterprise, or certain types of initiative in an enterprise.

Ultimately, seek a balance between enabling parts of the company to innovate and explore in order to retain market leadership, and ensuring that you do not compromise the integrity of the core competencies of the business with constant change and increasing divergence. There is still a need to pursue a level of consistency in approach and technology across the enterprise. Microservices and agile methodologies often approach this need through “guilds”: groups of individuals drawn from the separate teams who focus on encouraging (rather than enforcing) common approaches and tools, based on real-world experience in their own teams.

Agile development methodology

There is a clear synergy between agile methodologies and containers, largely because they both enable empowered (decentralized) teams to achieve rapid change cycles that are more closely aligned with the business.

To deliver changes more effectively, technology alone is not enough. Agile methodologies use iterative development cycles and regular engagement with the business side of the organization to ensure that projects stay focused on the business goals. Consider, too, the agility that your teams can achieve if they deliver fine-grained components into a self-orchestrating container platform through an automated pipeline. And you can improve further still by building on the consistency and quality provided by image-based deployment and infrastructure as code.

Self-service developer experience

It could be argued that many of the previously described actions cannot be achieved unless you enable teams, at the infrastructure level, to work autonomously, with as few contact points with a central governance body as possible. They must be able to independently and trivially set up resources such as source code repositories, build automation capabilities, and image repositories. And, of course, they must be able to set up container orchestration platforms and the associated compute, memory, storage, and network resources.

Setting up this self-service experience might be as simple as allowing teams to use public managed services such as GitHub, DockerHub, and the container orchestration capabilities provided by cloud vendors. However, for many enterprises, public capabilities might be inappropriate for some resources and workloads. To shorten what could be a relatively lengthy process of establishing such capabilities internally, many enterprises are turning to private cloud platforms that let them stand up these facilities more quickly and consistently.

Summary

So, what are the benefits of a broad containerization strategy? Containerization offers wide-reaching benefits across the following areas:

  • Agility and productivity: Accelerated development, improved consistency across environments, and empowered autonomous teams improve productivity and quality.
  • Fine-grained resilience: Independent deployment of highly available components removes single points of failure.
  • Scalability and infrastructure optimization: Fine-grained dynamic scaling and maximized component and resource density make the best use of infrastructure resources.
  • Operational consistency: Homogeneous administration of heterogeneous components reduces the range of skillsets required to operate the environments.
  • Component portability: Portability across nodes, environments, and clouds ensures choice when selecting platforms.

These benefits are explored in much greater detail in Part 1.

You can achieve broader benefits only if you take the right approach and recognize that a move to containers is an opportunity for broader modernization. Determine which of the actions discussed in this article are appropriate for the specific needs of your containerization program.

Acknowledgements

Many thanks to Brian Petrini, with whom the ideas behind this series of articles were formed.