By Michael Maximilien, Nima Kaviani, Julian Friedman | Published December 3, 2017 - Updated August 13, 2018
As the use of cloud technology matures, companies find the need to move up the stack to get value. In particular, platform-as-a-service (PaaS) provides a one-stop, complete operating system for the cloud.
Enterprises that use PaaS solutions like Cloud Foundry don’t have to worry about the details of how to manage cloud resources to run their apps. The platform takes care of that for them. All they have to focus on is their application code and access to a catalog of services. The platform does the rest: scaling, healing, high availability, disaster recovery, and so on.
With constant innovation, early platforms like Cloud Foundry can continue to maintain their leadership in this space. However, it’s also important to keep up with the innovation and excitement that come with newer approaches to solving the challenge of orchestrating containers, such as Docker Swarm and Kubernetes.
While platforms such as Cloud Foundry have enjoyed considerable success in recent years, they also have their detractors. Specifically, by enforcing a particular view of the cloud world (apps, services, orgs, users, and so on), these platforms put certain limits on the kinds of applications you can deploy.
For example, Cloud Foundry is not well suited to deploying a service such as a database. In Cloud Foundry, this is done by using BOSH (a multi-cloud scalable release engineering tool) directly and deploying onto an infrastructure-as-a-service (IaaS) using VMs or container clusters running on the VMs. These kinds of limitations have led to the rise of alternative platforms like Kubernetes and Docker Swarm, which give developers complete freedom by allowing them to directly manage clusters of containers and run their applications on them.
Alternative platforms to Cloud Foundry are a welcome addition, because they encourage innovation on all sides. However, there are a number of myths or misconceptions about the Cloud Foundry cloud operating system that are worth dispelling. This article explores five things you might not know about Cloud Foundry.
We are not highlighting these issues to fully compare or contrast Cloud Foundry with other platforms, but rather to make sure that the misconceptions are rectified and facts are exposed. We hope to illuminate the current Cloud Foundry architecture and design points so that enterprise users who are trying to use a PaaS can make the best decisions possible.
As interest in containers has reached a fever pitch with the advent of Docker and Kubernetes, it’s important to understand how Cloud Foundry relates to these technologies. First, Cloud Foundry, like many other PaaS environments, uses containers. This has been true from the very start, and it predates all the current container orchestration platforms. If you think about what containers do, you understand why they have always been central to Cloud Foundry.
Containers are a means of isolation in UNIX/Linux systems. Using various kernel features, you can run applications in so-called containers in Linux so that they have their own isolated view of system resources, as well as limits on resource use.
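Real containers combine kernel namespaces (isolated views of PIDs, mounts, networking) with cgroups (limits on resource use for whole process trees). As a minimal, runnable stand-in for the “limits on resource use” half, the sketch below caps a child process with an rlimit, the simplest per-process limit the kernel offers; this is an illustration of kernel-enforced limits, not how Garden actually builds containers:

```python
import subprocess
import sys

# Run a child process whose address space is capped by the kernel.
# An oversized allocation then fails inside the child, even though
# the parent process is unrestricted.
child = """
import resource
# Cap this process's address space at ~1 GiB.
resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
try:
    buf = bytearray(4 * 1024 ** 3)  # ask for 4 GiB
    print("allocated")
except MemoryError:
    print("denied")
"""
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

On Linux the kernel refuses the 4 GiB mapping, so the child prints `denied`; namespaces and cgroups extend this same idea of kernel-enforced boundaries to files, processes, and networking.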
Cloud Foundry used containers from the start. With recent releases, the container layer in Cloud Foundry (Garden-runC) was upgraded to contribute to and conform with emerging industry standards in container technologies by adopting runC, the reference container runtime of the Open Container Initiative (OCI). So every time you run a Cloud Foundry application, you are running it inside a runC container.
What Cloud Foundry adds that is unique is the management of these containers, and it hides that complexity from end users. The Diego runtime in Cloud Foundry is an efficient scheduler for containers. Its goal (in part) is to maximize use of the underlying virtual machines that Cloud Foundry is instantiated on.
Why do you need a scheduler for containers? Because resources in an IaaS are offered in discrete units of capacity. For example, you can create VMs with 10 GB of memory, 64 CPUs, and 10 Gb Ethernet networking. When you install Cloud Foundry on VMs, you might end up with 4 VMs that are dedicated to running the applications for end users. Part of the job of the Cloud Foundry Diego scheduler is to determine how and where to place applications, and what portion of the available resources to assign to any one application. The scheduler's goal is to place applications so as to maximize utilization and allow the largest number of apps to run efficiently within the limitations of your environment.
Specifically, the Diego scheduler allocates user application instances so that they stay available, and it recovers and re-provisions applications when they fail or are upgraded. Diego exploits the fact that all Cloud Foundry apps are stateless, so an app can always be safely moved to another host as long as at least one instance stays running. Statelessness lets Diego provide resilience, health management, and efficient placement for user applications while remaining relatively simple at its core. This capability differentiates Diego from other, more general schedulers.
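Diego's actual auction-based scheduling is considerably more involved, but the two goals described above, pack cells efficiently and spread an app's instances for availability, can be sketched as a simple placement loop. All names and numbers here are illustrative, not Diego's real algorithm or data model:

```python
# Toy placement: assign each app instance to a cell (a VM that runs
# containers). Prefer cells not already hosting the same app (spread
# for availability), then the cell with the most free memory (balance).

def place(instances, cells):
    """instances: list of (app_name, mem_mb) pairs.
    cells: dict mapping cell name -> free memory in MB (mutated).
    Returns {instance_index: cell_name}."""
    placement = {}
    app_cells = {}  # app_name -> set of cells already hosting it
    for i, (app, mem) in enumerate(instances):
        candidates = [c for c, free in cells.items() if free >= mem]
        if not candidates:
            raise RuntimeError(f"no cell can fit {app} ({mem} MB)")
        candidates.sort(key=lambda c: (c in app_cells.get(app, set()),
                                       -cells[c]))
        chosen = candidates[0]
        placement[i] = chosen
        cells[chosen] -= mem            # account for the reserved memory
        app_cells.setdefault(app, set()).add(chosen)
    return placement

cells = {"cell-1": 4096, "cell-2": 4096}
instances = [("web", 1024), ("web", 1024), ("worker", 2048)]
placement = place(instances, cells)
print(placement)
```

With these inputs the two `web` instances land on different cells, so losing one VM still leaves the app running, which is exactly the availability property the stateless model makes cheap to provide.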
When Cloud Foundry was initially released, it worked on a few clouds (such as VMware and AWS), and that was enough to satisfy early multi-cloud users. In time, as the first version of BOSH emerged and matured, other clouds were added, mainly vSphere, vCloud, and OpenStack. BOSH supported a clear Cloud Provider Interface (CPI), but adding new CPIs could be difficult because the various CPIs were embedded in the BOSH director code.
In 2015, the BOSH team started rectifying this situation by cleaning up and externalizing CPIs in the BOSH director code base. CPIs are now packaged as BOSH releases, and they can be deployed and updated separately from the director. Also, the CPIs can be in any language and use whatever technology is required to target the cloud they are designed for.
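What makes this externalization possible is that a CPI is just an executable speaking a small JSON request/response protocol with the director, which is why a CPI can be written in any language. The sketch below mimics the general shape of that exchange; the method names echo the CPI API, but this dispatcher, its simplified argument lists, and the in-memory fake cloud are illustrative, not a real CPI:

```python
import json

# Fake IaaS backend: stands in for real cloud API calls.
class FakeCloud:
    def __init__(self):
        self.vms = {}
        self.next_id = 0

    def create_vm(self, stemcell_id, cloud_properties):
        vm_id = f"vm-{self.next_id}"
        self.next_id += 1
        self.vms[vm_id] = {"stemcell": stemcell_id,
                           "props": cloud_properties}
        return vm_id

    def delete_vm(self, vm_id):
        self.vms.pop(vm_id, None)
        return None

def handle(cloud, request_json):
    """Dispatch one director request ({"method": ..., "arguments": [...]})
    and return a JSON response with a result or an error."""
    req = json.loads(request_json)
    method = getattr(cloud, req["method"], None)
    if method is None:
        return json.dumps({"result": None,
                           "error": {"message": f"unsupported: {req['method']}"}})
    return json.dumps({"result": method(*req["arguments"]), "error": None})

cloud = FakeCloud()
resp = handle(cloud, json.dumps(
    {"method": "create_vm",
     "arguments": ["stemcell-123", {"flavor": "small"}]}))
print(resp)
```

Because the contract is this narrow, targeting a new cloud means implementing a handful of such methods against that cloud's API, packaging them as a BOSH release, and nothing in the director has to change.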
After all of the existing CPIs were converted into their own repos and a separate CPI team was created, it was time for the Cloud Foundry community to take notice. As a consequence of this new extensible mechanism, many more clouds and even container platforms can be used to deploy Cloud Foundry. These platforms include SoftLayer, Azure, and GCP, as well as Docker and Kubernetes, to name just a few.
With more than 15 different CPIs available, including a bare-metal SSH CPI and a multi-CPI that allows you to target more than one cloud, Cloud Foundry is easily the most cross-platform PaaS available.
As Cloud Foundry matured, one of the biggest challenges it has faced is scalability. Companies like IBM, SAP, Pivotal, and GE bet on the Cloud Foundry code base as a core element of their public cloud strategy for application services. Multiple parts of the platform were rewritten in Golang, and the addition of more modern distributed systems technology with Diego provided a more modern and stable runtime environment for Cloud Foundry. But did these changes also provide a more scalable environment?
The Diego team spent the better part of 2016 fine-tuning the runtime environment to achieve linear scaling characteristics. The team revisited and refined various component choices and ran scaling experiments. They provisioned 1,250 Diego cells, each corresponding to a base VM for running applications. They scaled the platform to run 250,000 containers, as shown in the following graph.
Given the one-to-one mapping between an application instance and a container, Diego scalability tests showed the possibility for Cloud Foundry to manage 250,000 applications while keeping them routable and responsive throughout the entire scalability test. And that does not account for the fact that Cloud Foundry currently runs in live production environments with hundreds of thousands of apps.
We don’t actually know the limits of Cloud Foundry scaling, because it’s so costly to run such large experiments, and the Diego team did not try to push the limits to reach a breaking point. So in theory, Cloud Foundry could scale to 500,000 or even a million applications. More details on the scalability test are in the 250k Containers In Production: A Real Test For The Real World blog post.
From its beginning, Cloud Foundry was designed to work with any language and framework. This design point was maintained so that new languages and frameworks supported in Cloud Foundry are not only widely available, but they are also customizable. Java support is a good example. There are many approaches to running Java applications, and whether you want to use the OpenJDK or the IBM Liberty Java VM, there is a supported Java buildpack.
And it’s not just well-established languages and frameworks that get to play in the Cloud Foundry ecosystem; new and even esoteric languages do too. For example, as soon as Apple open-sourced its Swift language in December 2015, Cloud Foundry buildpacks for Swift started showing up. Today, at least two versions are available through the community.
Furthermore, buildpacks have maintained compatibility with Heroku, so buildpacks released by that community are usable in a Cloud Foundry installation. The Cloud Foundry community has also innovated on buildpacks by exploring multi-buildpack support for applications that need more than one runtime, and private buildpacks for when you need to fork a language runtime or a framework to solve an esoteric issue with an application that cannot be solved through other means.
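The Heroku compatibility mentioned above comes from a simple shared contract: each buildpack ships detect, compile, and release scripts, and the platform runs each buildpack's detect against the app in order until one claims it. The sketch below models that detection loop with plain predicates standing in for the detect scripts; the buildpack names and file checks are illustrative:

```python
import os
import tempfile

# Each buildpack's "detect" inspects the app directory and says
# whether this buildpack can build the app. The first match wins.
def java_detect(app_dir):
    return os.path.exists(os.path.join(app_dir, "pom.xml"))

def node_detect(app_dir):
    return os.path.exists(os.path.join(app_dir, "package.json"))

BUILDPACKS = [("java_buildpack", java_detect),
              ("nodejs_buildpack", node_detect)]

def select_buildpack(app_dir):
    for name, detect in BUILDPACKS:
        if detect(app_dir):
            return name
    raise RuntimeError("no buildpack detected this app")

# Fake app directory containing only a package.json.
app = tempfile.mkdtemp()
open(os.path.join(app, "package.json"), "w").close()
print(select_buildpack(app))
```

This ordering-based detection is also why adding a new language is low friction: a community buildpack only has to recognize its own app layout, and the rest of the platform is untouched.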
Cloud Foundry has been criticized for lacking extensibility and extension points. While the platform is completely open and could, in theory, be modified by anyone who submits a GitHub pull request, there was no way to extend the platform in a systematic fashion, and no way for exploratory work on any part of the platform to be created, and seen, by everyone. In other words, how could you allow innovation in the spirit of a “let a thousand flowers bloom” strategy?
To solve this issue, in late 2016 the project management council (which controls the direction of the platform) decided to divide all the projects that constitute Cloud Foundry into three sub-PMCs: Runtime, BOSH, and Extensions. The mission of the new Extensions project management council was to encourage extensions to the platform, to put some structure into the evolution of these extensions, and to allow the community to explore all kinds of options (fruitful or not-so-fruitful).
For six months the council organized and retrofitted existing projects that fit as extensions and considered new projects. Cloud Foundry now has an established extension process and a means for anyone in the community to extend the platform. The extensions include APIs, tools, CPIs, connectors to other platforms, buildpacks, services, and more.
As the community continues to add more extensions, some will disappear and some will graduate to become core. Regardless, the primary goal is to ensure that the platform remains open and vibrant, and that anyone in the community can extend the platform with their next brilliant idea.
Platform as a service continues to evolve. With constant innovation, early platforms like Cloud Foundry can maintain their leadership in this space, but they must also keep pace with the innovation and excitement that newer approaches to container orchestration, such as Docker Swarm and Kubernetes, bring with them.
In many ways, container orchestration is not a zero-sum game, and we expect that many platforms can succeed. The problem space these platforms are trying to solve spans all of IT application and service management.
This article highlighted five important facts (some well known and some not) that can help enterprise IT teams and their managers make the right decision when deciding between Cloud Foundry and other PaaS environments.