Serverless architectures are one of the hottest trends in cloud computing this year, and for good reason. There are several technical capabilities and business factors coming together to make this approach very compelling from both an application development and deployment cost perspective.
At the recent Cloud Native Day in Toronto, I outlined some of these trends and showed how the OpenWhisk open source project provides a serverless platform for emerging event-driven workloads and microservices applications. In this post, I’ll highlight the key points from my talk. You can find the full presentation (and recording) online.
The evolution of cloud means developers are able to write apps better, faster, and cheaper
Over the past ten years the industry has seen a rapid improvement in how quickly developers can create applications and deliver them to users. Starting with the moment that service providers matched virtualized compute, storage, and networking resources with self-service APIs and called it a cloud, speed to market has been improving at an accelerating pace.
Recently we’ve seen developers shift from using Infrastructure-as-a-Service to Platform-as-a-Service where they can concern themselves more with the application runtimes (Node.js, Ruby, Java, for example) and the higher level services that they depend on (databases, key-value stores, queues) than the underlying virtualized operating systems, disks, and networks.
And this year, we’ve seen the rise of Functions-as-a-Service programming, enabled by platforms like Amazon Lambda and OpenWhisk. Here the developer focuses even more sharply on the unique features of their application by narrowing the scope of their concern down to smaller units of code – commonly packaged as single files – that provide their core business logic.
New architectures built on this new model are called “serverless” since a greater number of operational concerns are hidden from the developer and because the compute resources needed for applications are transient, leaving no trace on the bottom line when application code is not running.
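The "smaller units of code, commonly packaged as single files" described above can be made concrete with a minimal sketch of an OpenWhisk action: a single JavaScript file whose `main` function receives event parameters and returns a JSON result, while the platform handles everything else. The greeting logic here is illustrative, not from the talk.

```javascript
// A complete serverless "application" in OpenWhisk terms: one file, one function.
// The platform invokes main() with the event's parameters as a JSON object
// and treats the returned object as the activation result.
function main(params) {
  const name = params.name || 'stranger';
  return { greeting: 'Hello, ' + name + '!' };
}
```

There is no server to configure, no port to bind, and no process to keep alive; deploying this file (for example with the `wsk` CLI) is the entire operational footprint of the function.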
Serverless architectures abstract away many of the operations-specific cloud native 12 Factors
Well designed cloud native applications – those that are built with the cloud in mind, rather than traditional three-tier systems migrated to the cloud – are developed and deployed around a set of proven practices for distributed applications. Several important guidelines were published by experts from Heroku and have become well known as the 12 Factors.
A Twelve-Factor App is developed according to a set of principles that govern how it should be created, deployed, and managed in production. These rules have proven enormously valuable as developers create cloud native microservices based applications that they push to Heroku or IBM Bluemix – two leading PaaS platforms.
Despite this prescriptive guidance, it's still not a trivial matter to build and deploy applications properly. The 12 Factors cover several areas where the developer may not have strong skills, such as handling zero-downtime deployments and dealing with operating-system-level concerns like managing container lifecycles.
This is a major reason why serverless architectures deployed to platforms such as OpenWhisk are becoming attractive. These systems address many of the operations focused 12 Factors on behalf of the developer, making it easier to create cloud native applications like microservices.
| Factor | Name | Responsibility |
| --- | --- | --- |
| I | Codebase | Handled by developer (manage versioning of functions on their own) |
| II | Dependencies | Handled by developer, facilitated by serverless platform (runtimes and packages) |
| III | Configuration | Handled by platform (environment variables or injected event parameters) |
| IV | Backing services | Handled by platform (connection information injected as event parameters) |
| V | Build, release, run | Handled by platform (deployed resources are immutable and internally versioned) |
| VI | Processes | Handled by platform (single stateless containers often used) |
| VII | Port binding | Handled by platform (actions or functions are automatically discovered) |
| VIII | Concurrency | Handled by platform (process model is hidden and scales in response to demand) |
| IX | Disposability | Handled by platform (lifecycle is hidden from the user; fast startup and elastic scale are prioritized) |
| X | Dev/prod parity | Handled by developer (the developer is the deployer; scope of what differs is narrower) |
| XI | Logs | Handled by platform (developer writes to console.log, platform handles log streaming) |
| XII | Admin processes | Handled by developer (no distinction between one-off processes and long-running) |
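Factors III and IV (configuration and backing services) illustrate the shift well: instead of reading environment variables at process start, a serverless function receives configuration as injected parameters, which the platform can bind to an action or package at deploy time. The sketch below assumes a hypothetical `dbUrl` bound parameter standing in for a backing-service connection string.

```javascript
// Sketch of Factors III and IV in a serverless action: configuration and
// backing-service credentials arrive as injected parameters rather than
// environment variables. 'dbUrl' is a hypothetical bound parameter, e.g. a
// database connection string attached to the action at deploy time.
function main(params) {
  if (!params.dbUrl) {
    // Fail fast with a structured error when required config is missing.
    return { error: 'Missing required configuration: dbUrl' };
  }
  // A real action would open a connection here; we just echo the target.
  return { status: 'configured', target: params.dbUrl };
}
```

Because the binding happens in the platform rather than in the code, the same function can be promoted between environments simply by rebinding its parameters, which is exactly what Factor X (dev/prod parity) asks for.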
In particular, serverless platforms like OpenWhisk cover the practices associated with important operational capabilities such as automating scale up or down in response to load. Combine this with the fact that serverless architectures can hide the complexity of deploying highly available, geographically distributed applications and the appeal becomes even more evident.
Newer workloads moving to the cloud are a better fit for event-driven programming
Another trend driving this new model of cloud native application development is the emergence of many more non-web workloads that require the benefits of cloud computing (for example, elasticity, scale, and cost reduction). These use cases join HTTP and REST based applications that have long taken advantage of IaaS and PaaS capabilities. This category includes applications that need to:
- Execute app logic in response to database triggers
- Execute app logic in response to sensor data
- Execute app logic in response to cognitive trends
- Execute app logic in response to scheduled tasks
- Provide easy server-side backends for mobile apps
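The first case in the list, reacting to database triggers, can be sketched as an action wired to a changes feed. The event shape below (`id`, `deleted`) loosely mirrors a Cloudant/CouchDB change record, but the field names and the indexing response are illustrative assumptions rather than a specific feed contract.

```javascript
// Hedged sketch: an action invoked once per database change event.
// A trigger (e.g. from a changes feed) fires for each modified document,
// and the platform invokes this action with the change as parameters.
function main(params) {
  if (params.deleted) {
    // Tombstone records carry no body worth processing.
    return { action: 'skip', reason: 'document deleted' };
  }
  // Hand the document id to whatever downstream step needs it,
  // e.g. a search indexer or an audit log writer.
  return { action: 'index', id: params.id };
}
```

The important property is that no code runs, and no resources are billed, between changes; the platform maps each event to one short-lived invocation.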
For many of these scenarios, event-driven programming models that rely on alternative protocols (such as MQTT) – as opposed to synchronous request/response interaction over HTTP – are a better fit for serverless architectures, which can provide temporary resources on demand for intermittent jobs.
One example I often highlight is the use of OpenWhisk in an IoT use case that improves customer service for a smarter home appliance maker. In this model a serverless architecture can be built with individual units of logic that come together to handle periodic messages from a connected refrigerator.
These functions – or actions in OpenWhisk parlance – can employ analytics to determine whether service is needed, then create order records to automatically deliver replacement parts based on the customer’s warranty state.
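One of those individual units of logic might look like the sketch below: a single action in the sequence that inspects a reading from the refrigerator and decides whether a service order should be created. The temperature threshold, field names, and warranty check are assumptions for illustration, not a real appliance or warranty API.

```javascript
// Illustrative action from the connected-refrigerator scenario: decide
// whether a periodic sensor reading warrants service, and whether a parts
// order should be created automatically. Thresholds and field names are
// hypothetical.
function main(params) {
  const tempC = params.temperatureC;
  const underWarranty = params.warranty === 'active';

  if (tempC > 8) { // fridge compartment running too warm (assumed threshold)
    return {
      serviceNeeded: true,
      createOrder: underWarranty, // auto-ship parts only under warranty
      reading: tempC
    };
  }
  return { serviceNeeded: false, reading: tempC };
}
```

In OpenWhisk, an action like this could be composed with an upstream analytics action and a downstream order-creation action, with each step invoked only when a message actually arrives.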
With the increasing number of devices connecting to the Internet, the ability to scale effectively becomes ever more critical, along with the need to asynchronously link disparate systems using an event-driven architecture that can be deployed on OpenWhisk.
Serverless cost models promise a better match between resources used and value delivered
Applications deployed using a serverless architecture rely on a platform that provides the resources needed on demand, at that very moment. This has obvious implications for managing scale more efficiently, but it also means that business logic execution can be billed against actual compute time used (in milliseconds) rather than memory reserved for anticipated usage (in gigabyte-hours).
This forms a stronger linkage between the cloud resources an application consumes and the business operations that are executed. While many applications must still be deployed in a daemon model that reserves resources for a larger window of time, serverless deployment models provide an alternative that can mean substantial cost savings for a variety of event driven workloads.
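A back-of-envelope calculation shows why this matters for intermittent workloads. The prices below are hypothetical placeholders, not published rates; the point is the shape of the comparison, not the exact numbers.

```javascript
// Hypothetical rates, for illustration only.
const PRICE_PER_GB_SECOND = 0.00002; // assumed serverless metering rate, USD
const PRICE_PER_GB_HOUR = 0.05;      // assumed reserved-instance rate, USD

// An intermittent workload: 100,000 invocations per month,
// each running 200 ms in a 256 MB (0.25 GB) container.
const gbSeconds = 100000 * 0.2 * 0.25;
const serverlessCost = gbSeconds * PRICE_PER_GB_SECOND;

// The daemon alternative: one 256 MB instance reserved around the clock
// for a 30-day month, whether or not requests arrive.
const reservedCost = 24 * 30 * 0.25 * PRICE_PER_GB_HOUR;
```

Under these assumed rates the serverless deployment costs cents while the always-on instance costs dollars; the gap narrows (and can invert) as utilization rises, which is why the daemon model remains the right fit for steadily loaded services.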
Taken together, the availability of finer-grained compute models, the emergence of more event-driven workloads such as IoT and mobile, and cost models that tie business logic directly to resource consumption are driving the growing interest in serverless architectures.
Enter OpenWhisk, a platform for cloud native, event-driven applications
To address this emerging need for event-driven programming workloads that run on "serverless" (abstracted, or temporary) cloud resources, IBM created OpenWhisk. In a nutshell, OpenWhisk is an open source cloud platform that executes code in response to events. OpenWhisk differs from existing hosted serverless platforms like Amazon Lambda because it offers:
- An open source and open ecosystem built on proven open source foundations like Docker, Kafka, Consul, and Akka that can be extended with new languages and capabilities.
- The ability to be deployed in public, private, and hybrid models to enable serverless architectures outside of black box public cloud provided services.
Join us to build a cloud native platform for the future!
If the technical and business benefits of serverless architectures appeal to you as an application developer, you can experiment with them now on Bluemix. There’s a hosted instance of OpenWhisk pre-integrated with several IBM Watson and third party services like The Weather Company. Bluemix also provides a browser based IDE and integrated monitoring console for debugging applications. Searching the web for “OpenWhisk” will help you discover many sample apps to use as an inspiration (and foundation) for your next project.
If you have more of a systems interest in distributed, cutting edge, cloud software like OpenWhisk, you may want to start by poking around GitHub to learn about the open source technology. Then join the Slack team (in the #openwhisk channel) to engage the developers and ask any questions. We invite you to contribute your expertise in making the platform better for everyone.
Check out the full Cloud Native Day presentation here