
Lambda functions for rapid prototyping

In recent months, the word “serverless” has been popping up more and more at tech conferences and on blogs. The promise of serverless is that specific services or frameworks can free you from thinking about servers, allowing you to just push code out into the wild. Of course, the servers are still out there, so what’s new in the current infrastructure as a service (IaaS) landscape? Lambda architectures are an interesting new paradigm that divides a project into plain functions made available across the network. This is the same concept that enables tech teams to convince their CTOs to move to microservices.

Lambda architectures are powerful because they make projects easier to think about and much more cost effective. But like microservices, lambda architectures pose new challenges: service discovery, team education, resilience, multiple languages, and multiple clouds, to name a few. Everything and everyone needs to be coordinated. Despite that, lambda functions are a perfect fit for early projects and experimentation. This article introduces some tools you can use in the stack for your next project.

Effective workflow

Architectures based on lambdas basically deploy one compute unit for one service that performs one thing. Needless to say, when you need to manage hundreds of services with execution steps that are triggered by each other, you can end up with spaghetti-logic flows (see Figure 1).

Figure 1. Complex logic flow

I won’t go further into this scenario, but I do recommend that you consult with senior architects or read additional articles on how to manage technical debt, and how microservices can bite back.

But for now, let’s focus on a fresh project, one that is simple, clear, and effective. First, you want to quickly complete the basics—a landing page, user management, and payment—and then you can add other features. However, it’s still too early to tell which features will appeal to users enough to make them want to pay for the product.

At the end of the day, the goal is to iteratively and safely build the product by defining blocks that you can think of in isolation and only when you need them. Dedicating lambda functions to those blocks fits this agile process and makes for a clear minimum viable product roadmap.

You probably don’t need to migrate all of your professional infrastructures under the lambda hat. You probably do need fewer frameworks and more winners that just work. And you definitely need to define how to deploy and maintain hundreds of functions.

Whatever your existing stack is, follow your intuition and quickly run a new lambda. Or kill it. Or replace it. You will have safe experimentations and quick iterations.

You may notice that this is similar to microservices in containers like Docker, but mentally cheaper. You can call it “nano” if you want to distinguish between them, but I believe containers or lambdas are an implementation detail that enjoys the same architectural benefits described above. Lambdas enforce one service by being one function, and have the benefit of dying when they are no longer needed. This forces developers to reason around small, stateless processing tasks: code that solves one problem with minimal side effects. Think of it as a combination of good old UNIX philosophy and the latest functional trend. I’m biased, but I think this serves developer happiness, and there is strong community consensus around it. And what does it take to do this?

module.exports = function(context, callback) {
    callback(200, "Hello, world!\n");
};

Most frameworks and SaaS platforms require you to expose a single function that receives a context object, which carries things like secrets and HTTP query attributes. Of course, nothing prevents you from writing 2,000 lines of code between module.exports and callback(), but that quickly starts to smell like an anti-pattern.

And deployment is easy enough that splitting your problem into small, manageable endpoints quickly becomes simpler than maintaining one big function:

$ #completely copy/pasted from

$ fission function create --name hello --env nodejs --code hello.js

$ fission route add --function hello --url /hello

$ curl http://router.fission/hello

Hello, world!

Now you have some code that will greet you over HTTP, thanks to Fission. This requires that you have a Kubernetes setup on hand, but in 2018 who doesn’t have a self-hosted “open-source system for automating deployment, scaling, and management of containerized applications?” But seriously, if you don’t have the time or don’t enjoy administering distributed clusters, you are still in good company. Big hosting players now offer competitive platforms to push lambdas to. Indeed, Webtask 101 can get you set up in 30 seconds, without costing you a penny:

$ #again, shamelessly copy/pasted from

$ echo "module.exports = function(cb) { cb(null, 'hello world'); }" > foo.js

$ wt create foo.js     

If you have ever had to kickstart a project with, say, React or Webpack, you will find that this is amazingly painless, yet full featured:

$ wt cron schedule -n mongocron -s 47592/webtask-examples 10m foo.js

And the lambda function now runs periodically, which as you know is a fairly common use case.

In all respects, the barriers to entry are very low here: You write code as functions, with no new domain-specific language and no complex configuration. And any provider-specific details have been quickly wrapped behind popular frameworks, as Serverless does for various cloud providers. This is important because new paradigms that don’t force you to re-learn everything, yet are immediately actionable for projects, usually get a lot of traction. And a lot of traction means a robust ecosystem of services, libraries, help, and articles. This feeds the movement, and so on.

These frameworks and services are especially important for serverless because the whole point is to save you the hassle of managing anything that’s not related to your core project. Under the hood, however, they rely on non-trivial technologies and manage complex infrastructures of ephemeral compute instances, dynamically mapped onto gateways that route traffic.

Self-hosted alternatives like OpenWhisk bring control over your serverless stack in house, built on the same kinds of technologies, such as proxies and containers. You will need a DevOps team and some investment ready to make that happen. This means you have no third-party reliance, and full customization allows for very specific business cases (or shortcut solutions). But it also means that servers are back, so it’s important to be aware of the challenges that come with them: provisioning, configuration, maintenance, monitoring, performance, and security.

In the meantime, let’s focus on how you can leverage existing tools for fun and profit.

Early cloud projects: startup feedback and iteration

As you have seen, code can be written quickly, and deployments can be cheap. Going from idea to online is as frictionless as possible, and this can be a double-edged sword. As with Twitter, you can put a vanity idea up in the cloud in no time, or you can live by the Lean Startup philosophy and use it to iterate fast. Whatever the quality of your initial idea, you can make it public or share it privately to gather feedback and experience. This can be a good proxy for assessing whether your initial design or technology is relevant. Alternatively, you can build a landing page and gauge interest. Then you will quickly find out if your idea is something people want, or something that isn’t worth additional time and effort. Engineers don’t usually like to cut off an arm and give up on something they really want to build (unless they decide to redo it from scratch or switch to another fancy tech). I can’t help you fight this syndrome, but at least you will have the numbers you need to make your decision.

Lambdas can also serve the principle of don’t repeat yourself (DRY) at the architecture level. I feel this argument is a bit weak; like many best practices in computer science, it is only as good as the person who implements it. And yet, the serverless workflow promotes decoupled services with a single purpose, so you can probably share some of them, like payment processing, static site rendering, user registration, and so on. I definitely did this, and as you develop projects you will start to extract patterns and share more and more services, so you can go faster at the prototyping level. I do believe, however, that this will probably break as your project grows: from experience, we tend to specialize blocks to a point where they solve really specific edge cases. The remedy is keeping things as simple as possible, implementing only the features you need, and following the open/closed principle.
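As a sketch of that DRY argument, here is one single-purpose block reused by two thin endpoints instead of being re-implemented in each. The validator, both handler names, and the response strings are hypothetical examples, not part of any real service:

```javascript
// A shared, single-purpose block: validate an email address once, reuse it everywhere.
function isValidEmail(address) {
    return typeof address === "string" && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(address);
}

// Two thin handlers (signup and newsletter) both lean on the shared block.
function signupHandler(context, callback) {
    if (!isValidEmail(context.query.email)) return callback(400, "invalid email\n");
    callback(200, "registered\n"); // persistence deliberately left out of the sketch
}

function newsletterHandler(context, callback) {
    if (!isValidEmail(context.query.email)) return callback(400, "invalid email\n");
    callback(200, "subscribed\n");
}
```

The shared block stays closed for modification: new endpoints compose it rather than editing it, which is the open/closed principle at the service level.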

Let’s add some open source and community ingredients to our discussion. Since we are talking about decoupled services with clear boundaries, sharing code can extend beyond your own stack. Stdlib recently raised 2 million USD for offering a common library of functions that are accessible over the network. Everything seems to point in this direction, but I’m not sure we are there yet. There is a lot to weigh around trusting third-party code, in-house development trade-offs, and adoption rates, and so far not many projects leverage this opportunity. Yet, as a small idea, this is something you can actually do to (once again) go faster. For example, every service that is common to early SaaS products but doesn’t represent a competitive edge can just be plugged in.

Although this approach clearly has its limitations, it is worth noting that it is achievable inside an organization to empower development teams.

The cloud landscape: containers and trendy technologies

So far, this article has covered the arguments in favor of using lambda technologies for small prototypes. I hope it resonates with your experience and maybe pushes you to try it for your next project. But in case any of my points or keywords caught you off guard, I want to clear up any confusion about where lambdas fit into the current DevOps landscape (a small part of it, to be fair).

First, none of these terms are mutually exclusive:

  • Microservices is an architectural pattern that promotes decoupled services with clear, small boundaries and interfaces.
  • Containers are a Linux kernel feature that allows for lighter isolation of processes than VMs, thanks to cgroups and namespaces.
  • Lambdas are ephemeral functions that wake up on specific events, process them, and then die.

You can design a microservices architecture using lambdas that run code inside containers. This is no accident, since serverless emerged a few years after Docker popularized containers, which unlocked powerful cloud orchestration and provided the agility lambdas need to fire on events and die right after processing.

The way I see it, this all comes down to developer usage and business goals. Nothing prevents you from putting elephants in containers, or deploying microservices on bare-metal machines. These are tools you can leverage and combine depending on your constraints. Kubernetes, for example, markets itself as production-grade container orchestration, but with a few tweaks (Fission being one of them) you can get a serverless framework.


The startup where I work recently organized its first hackathon. We built a lot of projects in 24 hours, and many of us ended up deploying lambda functions because they are fast and cheap (free, actually). We were able to divide the work between teams of three or four, the scope was clearly defined, and it was a perfect fit for event-based projects (like developing Slack bots). Some of us were working with the framework for the first time, and yet we were operational in just a few minutes, writing, deploying, and monitoring lambda code.

All of this can quickly break as your projects grow. Yet as we improve developer tooling, I think this paradigm takes us in the right direction: lazy execution, effective costs, ease of usage, and ease of sharing.

So jump aboard! This is already an exciting technology for actionable reasons, and we still have a lot more to contribute.