You’ve heard of serverless computing, right? Aside from the term itself being somewhat of a misnomer (like many things in IT), serverless computing was one of those topics that sort of made sense in my head, but when I tried to explain it, I just couldn’t do it. If you’re like I was, then this blog post is for you.
Any technology is only useful if it solves a problem (or ideally an entire set of problems). So the next logical question is: what problem(s) does serverless computing solve? That’s the first thing we’ll talk about.
DevOps is hard
As I point out in this dW blog post, there are many challenges in DevOps. Deployment automation is the dream for DevOps, but in reality DevOps engineers spend most of their time troubleshooting failed builds and supporting development teams with build, development environment, and other deployment issues.
Regardless of how much development team support is needed, the development servers still need to be patched and regularly upgraded, and this usually falls to DevOps. And even in organizations with dedicated staff to maintain those environments, DevOps is still expected to perform some level of integration testing once patching is done (or to recruit someone from the development team and drive that testing to completion).
It’s no wonder that DevOps engineer jobs are among the toughest to fill.
Right-sizing production servers is risky
A dedicated server, or even a Virtual Private Server (VPS) with the capacity and speed necessary for some applications, is expensive. If you’ve done capacity planning, I don’t have to tell you that correctly anticipating capacity needs is tricky (sometimes more art than science).
And because server loads tend to occur during peak periods, not all applications need powerful servers all the time, so much of that capacity you’re paying for goes unused.
What Serverless Computing is NOT
Here’s the reality behind the buzzwords.
Right for every application
Serverless computing is often called (among many things) Function as a Service (FaaS) – or Infrastructure as a Service (IaaS), depending on whom you talk to – and by its very nature, it can take some finite amount of time to spin up the necessary resources to process a function request, creating a latency between the time the request is submitted and when the function actually runs. This can be fine for many applications, but if you need high performance and low latency, serverless computing might not be for you.
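To make the "function request" concrete, here is a minimal FaaS-style handler. It uses AWS Lambda's Python handler signature (`event`, `context`), since Lambda is the most mature offering mentioned later in this post; the function body itself is just an illustrative hello-world.

```python
import json

def handler(event, context):
    # The platform spins up a container, loads this code, and then invokes it.
    # The first ("cold") invocation pays that startup cost; subsequent "warm"
    # invocations reuse the container and run with far less latency.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Note that the code says nothing about which server it runs on, how many copies are running, or when the container is recycled – that abstraction is exactly the point, and exactly why cold-start latency is outside your control.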
By design, you are abstracted from the environment in which your application function(s) runs, so if you need tight monitoring or control of the servers your applications are using, then serverless computing might not be for you.
A flash in the pan
Not every good idea survives in IT, but I believe that serverless computing is not only a good idea, it represents a sea change in the way DevOps is done. That is a bold claim, I know, but take a look at the players getting into (or already in) the serverless computing provider arena. Amazon has the most mature offering with AWS Lambda, but there’s also IBM with OpenWhisk, Microsoft with Azure Functions, and Google with Cloud Functions.
As we’ll see, because of the way the FaaS/IaaS model works, large companies like Amazon can offer powerful servers to small companies that could not otherwise afford them. It’s a win-win, so the model is definitely sustainable.
The downsides (like latency and monitoring tools) are also diminishing all the time. Services orchestration tools are coming out as well to support the ecosystem and make it even easier to use, lowering the barrier to entry for companies that are particularly risk averse and/or not early adopters.
So how does serverless computing address the problems I mentioned earlier?
DevOps is hard
Serverless computing is “no ops” from the developer’s standpoint, and “less ops” from the DevOps engineer’s view. It frees developers up to focus on writing code, and DevOps to concentrate on automated deployment. Repeatable code/test/deploy cycles finally become a reality.
And while DevOps still have to worry about supporting the builds, tackling integration/upgrade issues, and supporting the development team, they are freed of patching servers and many integration headaches (though I think it’s naive to assume all integration headaches will evaporate, at least at this stage of the game).
Right-sizing production servers
Since the FaaS/IaaS providers offer a wide range of on-demand capacity on a pay-as-you-go model, capacity planning with serverless computing should become much easier to do well.
Speaking of pay-as-you-go, serverless computing offers pricing models where you only pay for what you use, so your investment in infrastructure doesn’t sit idle waiting for a peak load. As a result, the savings over owning or leasing a server can be substantial.
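To see why the savings can be substantial, here is a back-of-the-envelope cost sketch. The two rate constants below are illustrative placeholders I chose for this example, not any provider's actual price sheet – real FaaS pricing typically combines a per-request fee with a charge per GB-second of compute.

```python
# Assumed, illustrative rates (NOT a real provider's prices):
PRICE_PER_MILLION_REQUESTS = 0.20  # dollars per million invocations
PRICE_PER_GB_SECOND = 0.0000167    # dollars per GB-second of compute

def monthly_cost(requests, avg_duration_s, memory_gb):
    """Estimate monthly FaaS cost: per-request fee plus metered compute time."""
    request_cost = (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Example: 3 million requests a month, 200 ms each, 512 MB of memory.
print(f"${monthly_cost(3_000_000, 0.2, 0.5):.2f}")  # → $5.61
```

A few dollars a month for a workload that would otherwise justify a dedicated server illustrates the point: when traffic is bursty, you pay for invocations, not for idle capacity.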
And speaking of capacity, serverless computing offerings run on powerful servers. Big players like Amazon, IBM, and Google can offer hardware for FaaS-based applications that smaller organizations could not realistically afford to buy or even lease.
Good Serverless Computing Use Cases
Now, you may wonder: what are a few Serverless Computing use cases? So I thought I would conclude this piece with three common use cases, all perfect for FaaS:
- Notifications (SMS and email) – these are on-demand, inherently asynchronous, and potentially highly resource intensive (depending on the number of recipients and what needs to be sent).
- App-driven (sensor) data submission (IoT) – another potentially resource-intensive operation is processing IoT data, where the data formats vary wildly, and may require serious computing horsepower to meet the transformation requirements of the backend system.
- User-driven content submission – photo resizing and Optical Character Recognition are resource-intensive operations that are needed on demand.
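The notification use case above can be sketched as an OpenWhisk-style action. OpenWhisk Python actions expose a `main(params)` entry point that takes and returns a dictionary; the `send_email` helper here is a hypothetical stand-in for a real provider SDK, not part of OpenWhisk itself.

```python
def send_email(recipient, message):
    # Hypothetical placeholder: a real action would call a mail provider's
    # API here (and return its delivery status).
    return {"recipient": recipient, "channel": "email", "status": "queued"}

def main(params):
    """OpenWhisk-style action: fan a message out to a list of recipients."""
    recipients = params.get("recipients", [])
    message = params.get("message", "")
    if not recipients or not message:
        return {"error": "recipients and message are required"}
    # Each invocation handles one batch; the platform scales out by running
    # more copies of this function as notification volume grows.
    results = [send_email(r, message) for r in recipients]
    return {"sent": len(results), "results": results}
```

Because the work is asynchronous and bursty, this is a natural fit for FaaS: nothing runs (and nothing is billed) between notification batches.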
DevOps and capacity planning are major challenges for enterprise IT, but Serverless Computing offers a solution that makes both easier. As Serverless Computing takes hold in the enterprise, companies will be able to get more for their IT dollar (and maybe let the DevOps folks have a day off once in a while).
Learn more about serverless computing and IBM OpenWhisk
- OpenWhisk at developerWorks Open
- OpenWhisk blogs
- Build a user-facing OpenWhisk application with Bluemix and Node.js