As developers embrace containers in large numbers, the way in which we run containers in production environments is getting more attention. An increasing number of technology experts argue for redefining the cloud management stack around containers as first-class citizens, creating what can be called a container-native cloud.
A container-native cloud platform offers containers as the primary workload encapsulation mechanism and uses containers to virtualize the infrastructure. Unlike container services provided by many vendors, which essentially offer predefined images and automation to stand up a container management system in user-owned VMs, a container-native platform is fully managed by the cloud provider. Users are not aware of the infrastructure technology underneath, they do not have access to it, and they don’t take any responsibility for managing it or the container platform itself. They can focus on defining and managing image and container content.
Container-native platform = serious benefits
First, a container-native platform allows a cloud provider to leverage container transparency to offer fully managed, always-on, tamper-proof, workload-aware services. Perhaps the most interesting category of such services is security and compliance. Deep container introspection and monitoring allow a cloud provider to discover incorrect, insecure, or non-compliant packages and configurations, and to provide alerts and remediation guidance to users. An example of such a service is Vulnerability Advisor for live containers. Maintaining application security and compliance is a challenge that interferes with agile software evolution, so services that automate detection of such problems have the potential to profoundly transform the way application DevOps is done.
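The core of such an audit can be sketched in a few lines: compare package versions read from an image or a live container against a vulnerability feed. Both the feed and the package data below are made up for illustration; a production service such as Vulnerability Advisor works against real image layers and live container state.

```python
# Toy audit: flag installed packages whose versions appear in a
# vulnerability feed. Both dictionaries here are illustrative.
VULN_FEED = {"openssl": {"1.0.1f"}}  # hypothetical vulnerable versions

def audit(installed):
    """installed: package -> version, as read via container introspection."""
    return sorted(pkg for pkg, ver in installed.items()
                  if ver in VULN_FEED.get(pkg, set()))

print(audit({"openssl": "1.0.1f", "bash": "4.3"}))  # → ['openssl']
```

Because the provider sees every container's contents, this check can run continuously and transparently, with no agent installed by the user.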
Second, a container-native platform takes full advantage of the hardware capabilities available in a given system, with no significant effort and minimal overhead. Lower overhead implies lower resource costs for the user. Containers running without a hypervisor layer have been shown to offer near-bare-metal performance and to allow for much higher workload density. Special hardware and accelerators can be leveraged easily, since no emulation is needed to make them available to running containers. Users get better control of, and visibility into, the infrastructure their workloads are running on.
Finally, a container-native platform is simpler for both the user and the cloud provider. A user interfaces with the platform through container-centric APIs, just as in the development environment, and does not need to go through any additional platform set-up and management steps. She is also charged only for the containers she is using, not for the hosts they are running on. A cloud provider, on the other hand, deals with a leaner stack: there is no virtual machine layer, so only the container platform needs to be operated and maintained.
Towards a container-native platform: Meeting the challenges
A provider aspiring to this vision will likely find that the only practical way to offer a container-native platform is to have multiple users share it. Multi-tenancy is required to keep the cost of operations reasonable. The biggest challenge is therefore exposing a container-native orchestration API suitable for the multi-tenant nature of the cloud. Neither of the two current contenders, Kubernetes and Swarm, is engineered for the secure tenancy a provider requires. A provider currently either develops their own tenancy additions on top of the current APIs, or hands each tenant a VM running Docker and leaves them to manage their own API endpoint (which eliminates the benefits of the container-native approach).
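To make the gap concrete, here is a minimal sketch, in Python with invented names, of the kind of tenancy shim a provider ends up writing in front of a single-tenant orchestration API: every resource is keyed by tenant, and listing or acting on resources never crosses a tenant boundary.

```python
# Illustrative tenancy layer: scope every container resource by tenant
# so one tenant can never see or touch another tenant's containers.
class TenantScopedRegistry:
    def __init__(self):
        self._containers = {}  # (tenant, name) -> container spec

    def create(self, tenant, name, spec):
        self._containers[(tenant, name)] = spec

    def list(self, tenant):
        # Only resources belonging to this tenant are visible.
        return sorted(n for (t, n) in self._containers if t == tenant)

registry = TenantScopedRegistry()
registry.create("acme", "web", {"image": "nginx"})
registry.create("globex", "web", {"image": "redis"})
print(registry.list("acme"))  # → ['web']
```

A real implementation must also thread this scoping through authentication, networking, image storage, and quota enforcement, which is precisely the engineering the current orchestration APIs leave to the provider.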
The next challenge for a provider is deploying multiple tenants in a secure, isolated fashion. This is not because the base container technology is insecure, but because in a container-native setup the configuration must be precisely managed to ensure security, as described in the whitepaper by Dimitrios Pendarakis et al. Container manageability must also be precisely restricted: containers can be deployed only with limited capabilities and access privileges. Some older, pre-container applications will not be able to cope with these restrictions. This level of precision is beyond most current providers’ orchestration systems, and is likely why they will advise you to deploy containers within VMs instead to ensure security and isolation. Yet, with a deployment that follows state-of-the-art tools and practices and employs dynamic introspection capabilities, containers can be more secure than VMs.
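As an illustration of what "limited capabilities and access privileges" looks like in practice, the sketch below assembles a `docker run` command line that drops all Linux capabilities, adds back only the ones the workload needs, disables privilege escalation, and mounts the root filesystem read-only. The flags are standard Docker CLI options; the helper function and its defaults are illustrative.

```python
def restricted_run_args(image, needed_caps=("NET_BIND_SERVICE",)):
    """Build an argv for `docker run` with a locked-down default posture."""
    args = ["docker", "run", "--cap-drop=ALL"]       # start from zero capabilities
    args += [f"--cap-add={c}" for c in needed_caps]  # whitelist, not blacklist
    args += ["--security-opt=no-new-privileges",     # forbid privilege escalation
             "--read-only",                          # immutable root filesystem
             image]
    return args

print(" ".join(restricted_run_args("nginx:alpine")))
```

A container-native provider would enforce a posture like this on every deployment rather than leave it to user discipline; applications that assume broad root privileges will need adaptation to run under it.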
Also, in a fully managed platform, a cloud provider must exercise greater care when deciding which APIs to offer. She needs to consider the security aspects of any new API, as discussed above, as well as its operational cost. For this reason, users of a container-native platform should expect its set of APIs to be more restrictive than in a user-managed system. On the other hand, a cloud provider needs to augment the offered capabilities with manageability APIs that compensate for the user no longer having access to the host system, so that the user retains sufficient visibility and control to manage the life cycle of applications and diagnose problems.
Do the benefits outweigh the challenges? We at IBM believe they do; this is why we built IBM Containers, one of the very few container-native cloud platforms in the industry. We believe that with this approach we can not only achieve workload portability and lightweight deployment, but also transform the way applications are operated when running on the cloud, thanks to dynamic introspection capabilities. As James Bottomley puts it in his blog, this has the potential to transform the way workload security and compliance are handled in the cloud: to make them easier for the user and more robust.