
Beyond buzzwords: A brief history of microservices patterns

Explore the impact of past software design patterns on the creation of microservices

By Kyle Brown

Microservices are the hot new thing in commercial application development. The term microservice has replaced Agile, DevOps, and RESTful as the buzzword that every resume and conference talk has to feature. But microservices are more than a passing fad. In fact, they are the evolution of all of these earlier concepts, and an approach that has begun to show real promise for cutting through a number of long-standing issues in application development.

Evolution of microservices

To understand this evolution, we need to take a step back in time to examine what microservices are, what they replaced, and why they became necessary. Let’s start in the early 1980s with the introduction of the first major systems distribution technology: Remote Procedure Calls (RPC). RPC was the concept behind Sun Microsystems’ initial ONC RPC as well as the basic idea behind DCE (1988) and CORBA (1991).

In each of these technologies, the basic idea was to make remote calls transparent to developers. The promise was that if developers didn't have to care whether the procedure calls or methods they were invoking were local or remote, then they could build larger systems that spanned machines and avoided the processing and memory scalability limits of the day. (Remember that the most common processors at the time were 16-bit processors with a 64K address space!)

As processors improved and local address spaces became larger, this issue became less important. What's more, the first set of large implementations of DCE and CORBA taught architects an important lesson about distributed computing: Just because something can be distributed doesn't mean it should be distributed.

Once large memory spaces became commonplace, it became apparent that making poor choices about distributing your methods across machines could have a horrendous effect on system performance. The earlier push to distribute everything resulted in many systems with very chatty interfaces, even to the point of distributing variable getters and setters in object-oriented languages. In a system like this, the networking overhead vastly outweighed the advantages of distribution.

This led us to our first pattern, one meant to address the observation above, and one that John Crupi, Martin Fowler, and I each arrived at independently. In each case, we started from Erich Gamma and his co-authors' book Design Patterns: Elements of Reusable Object-Oriented Software and noticed the Facade pattern. The Facade pattern is about encapsulating the unstructured interfaces of a large system behind a single, more structured interface in order to reduce "chattiness"; in other words, it is about reducing the interface cross-section of the system. The Session Facade approach that we developed applied this pattern to distributed systems by identifying the key, large-grained interfaces of an entire subsystem and exposing only those for distribution.
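To give a feel for the idea, here is a minimal, hypothetical sketch in Java (the interface and types are invented for the example): the facade exposes one coarse-grained business operation for remote invocation instead of a chatty series of fine-grained getters and setters.

import java.util.List;

// Hypothetical Session Facade sketch: the remotely exposed interface offers one
// coarse-grained business operation instead of many fine-grained getters and
// setters, so a client pays the network round-trip cost only once.
public interface OrderFacade {
    OrderSummary placeOrder(String customerId, List<OrderLine> lines);
}

// Assumed value objects that travel in a single request/response rather than
// being fetched from the server field by field.
record OrderSummary(String orderId, double total) {}
record OrderLine(String sku, int quantity) {}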

SOA and SOAP

We implemented our first Session Facades with Enterprise JavaBeans (EJBs), which, while fine if you were only working in Java, were complicated, difficult to debug, and not interoperable with other languages or even other vendor products. That lack of interoperability led directly to the next effort of the early to mid 2000s: what would become known as Service Oriented Architecture (SOA). SOA didn't start out with such grandiose terminology, though. It began as a "do the simplest thing that could possibly work" effort called the Simple Object Access Protocol (SOAP), first released by Microsoft in 1999.

At its heart, SOAP was nothing more than a way of invoking object methods over HTTP. It took advantage of two artifacts of the computing world in the early 2000s: the growing support for HTTP in corporate networks, and the fact that this support included mechanisms for logging and debugging text-based networking calls.

The first flush of effort around SOAP was helpful because it quickly established that you could easily interoperate between systems implemented in many different languages and on many different platforms. But where SOA as a whole failed was in going well beyond this simple beginning and adding layer upon layer of additional concepts on top of simple method invocation: exception handling, transaction support, security, and digital signatures were all bolted onto what some felt was already a complicated protocol. This led to the next major observation: Trying to make a distributed call act like a local call always ends in tears.

REST

The industry as a whole slowly began to pivot toward a rejection of the procedural, layered concepts inherent in SOAP and the WS-* standards. Instead, what became more widespread was the adoption of Representational State Transfer (REST), which dated back to Roy Fielding's Ph.D. dissertation in 2000. The foundational principle of REST is maddeningly simple: Treat HTTP as HTTP. Rather than layering procedural call semantics over HTTP, REST treats the HTTP verbs as they were specified, mapping them to create, read, update, and delete semantics. It also relies on another accepted principle of the web, the URI, to give each entity a unique name.
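To make that concrete, here is a minimal sketch of "treating HTTP as HTTP" using JAX-RS (the resource, entity, and paths are invented for the example, and newer platforms use the jakarta.ws.rs namespace instead of javax.ws.rs): each verb maps onto one CRUD operation for an entity named by its URI.

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Illustrative JAX-RS resource: each HTTP verb maps directly to a CRUD
// operation on an entity that is named by its URI, rather than tunneling a
// procedure call through a single POST endpoint.
@Path("/customers")
public class CustomerResource {

    @POST                         // create
    public Response create(Customer customer) {
        // persist the new customer, then report "201 Created"
        return Response.status(201).build();
    }

    @GET
    @Path("/{id}")                // read
    public Customer read(@PathParam("id") String id) {
        // look the entity up by the identity embedded in the URI
        return new Customer(id, "placeholder");
    }

    @PUT
    @Path("/{id}")                // update
    public Response update(@PathParam("id") String id, Customer customer) {
        return Response.noContent().build();
    }

    @DELETE
    @Path("/{id}")                // delete
    public Response delete(@PathParam("id") String id) {
        return Response.noContent().build();
    }
}

// Assumed entity for the example.
record Customer(String id, String name) {}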

Application servers

At the same time, the industry was also moving to reject another legacy of the Java Platform, Enterprise Edition (JEE) and SOA world: the large farm of application servers. Ever since Enterprise Java was introduced in 1999 (oddly as version 1.2), there had been an ongoing tension between application owners and application administrators.

When JEE was introduced, many corporations embraced the notion of an application server as a host for a number of different applications because it resembled existing IT models from the mainframe world. A single operations group would control, monitor, and maintain a "farm" of identical application servers from Oracle or IBM and would deploy different departmental applications onto that farm. This standardization and consistency were great for the operations team and reduced operating costs overall. But it created conflict with application developers, because development and test environments were large, difficult to create, and required the involvement of the operations team. It could often take months for new environments to be created, slowing projects down and increasing their development costs. What's more, because these environments were out of the team's control, there were often inconsistencies in application server versions, patch levels, application data, and installed software between environments.

What developers preferred were smaller, lighter-weight application platforms, usually open-source options such as Open Liberty or Quarkus. At the same time, the complexity of JEE was being shunned in favor of the supposed simplicity of the Spring platform as techniques like Inversion of Control and Dependency Injection became common. The takeaway was that when development teams gained the ability to consistently build and deploy their applications themselves, in development, test, and production environments that were as close to one another as possible, delivery was not only faster but also less error-prone, because entire classes of errors stemming from environmental inconsistencies were eliminated. This led to the next observation: Whenever possible, your programs and their runtime environments should be entirely self-contained.

Fowler's microservices design principles

These three observations are at the heart of Fowler's description of what microservices are all about. One of Fowler's microservices design principles is that microservices are "Organized around business capabilities." That stems directly from the discovery that just because you can distribute something doesn't mean you should. The entire notion of the Facade pattern, in its various incarnations, was about defining a specific external API for a system or subsystem. The subtext was that this API would be business-driven; Fowler makes that context explicit.

Often, understanding what this means is a roadblock for some development teams: they simply are not used to designing in terms of business interfaces, and they might find themselves quickly devolving to technical interfaces (such as Login or Logging). In these cases, many teams have found several patterns from Eric Evans' book Domain-Driven Design to be applicable. In particular, his Entity and Aggregate patterns are useful in identifying specific business concepts that map directly into microservices. Likewise, his Services pattern provides a way of fitting operations that do not correspond to a single entity or aggregate into the entity-based approach that you need for microservices.
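As a small, hypothetical illustration of the Aggregate idea (the Order domain and all of the names here are invented): an aggregate root owns its child entities, refers to other aggregates only by identity, and thus makes a natural boundary for a single microservice.

import java.util.ArrayList;
import java.util.List;

// Hypothetical Aggregate in the Domain-Driven Design sense: Order is the
// aggregate root, it owns its OrderLine entities, and all changes go through
// the root. An aggregate like this is a natural candidate for the boundary of
// an Order microservice; other services hold only the orderId.
public class Order {
    private final String orderId;
    private final String customerId;              // reference to another aggregate, by identity only
    private final List<OrderLine> lines = new ArrayList<>();

    public Order(String orderId, String customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }

    public void addLine(String sku, int quantity) {
        // invariants for the whole aggregate are enforced here, inside the root
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        lines.add(new OrderLine(sku, quantity));
    }

    public String getOrderId() {
        return orderId;
    }
}

// Child entity: never modified except through its aggregate root.
class OrderLine {
    private final String sku;
    private final int quantity;

    OrderLine(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }
}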

Likewise, Fowler's rule to employ "smart endpoints and dumb pipes" stems from the experience of teams that used EJBs, SOAP, and other complex distribution technologies, experience that led to the observation that trying to make a distributed system look local always ends in tears. Finally, Fowler's dictates around Decentralized Governance and Decentralized Data Management stem from the hard-won discovery that your programs and runtime environments should be self-contained.

Where does this leave us?

Fowler, Adrian Cockcroft, and others have now made a convincing case for why development teams should adopt microservices. But if we look at the ways in which all of the lessons that led to the microservices architecture were learned, we can draw a conclusion that differs a bit from the developer-centric story I’ve just told. In particular, you have to look at the realities of making microservices work in a corporate world of existing applications, and you also have to realize the added emphasis that the microservices architecture places on the operations side of DevOps.

Microservices in the corporate world

The microservices architecture began to garner attention after a number of success stories were published by companies like Netflix, Gilt.com, and Amazon. However, all of these companies, and many of the other microservices successes, had one thing in common: they were born-on-the-web companies that were developing new applications or that did not have a substantial legacy code base to replace. When a traditional corporation adopts microservices, one of the issues it runs into, after the first green-field applications are chosen to test the waters, is that some of the tenets of the microservices architecture, particularly the "Decentralized Data Management" and "Decentralized Governance" principles, are difficult to put in place when you must refactor a large monolithic application.

But luckily, an approach to that issue has been around for several years in the form of a pattern that Martin Fowler originally documented in 2004, several years prior to his work on microservices. His concept is called the "strangler application pattern," and it is meant to address the fact that you almost never actually live in a green field. The programs that need microservices the most are the ones that are the biggest and nastiest on the web, but again, taking advantage of the architecture of the web can provide us with a strategy for managing the refactoring that is required.

The strangler application is a simple concept that's based on the analogy of a vine that strangles the tree it’s wrapped around. The idea is that you use the structure of a web application — the fact that it is built out of individual URIs that map functionally to different aspects of a business domain — to split an application up into different functional domains and replace those domains with a new microservices-based implementation one domain at a time. These two aspects form separate applications that live side-by-side in the same URI space. Over time, the newly refactored application strangles, or replaces, the original application until you are ultimately able to shut off the monolithic application.
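One common way to realize this is a thin routing layer in front of both applications. The sketch below (the hosts, paths, and the class itself are invented for illustration) sends already-migrated URI subtrees to the new microservices and everything else to the monolith.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// Hypothetical strangler "router": requests for URI subtrees that have already
// been migrated go to the new microservices; everything else still goes to the
// monolith. Over time, more entries move into the MIGRATED map until the
// legacy base is no longer needed.
public class StranglerRouter {
    private static final String LEGACY_BASE = "http://legacy-monolith.internal";
    private static final Map<String, String> MIGRATED = Map.of(
            "/catalog", "http://catalog-service.internal",
            "/orders",  "http://order-service.internal");

    private final HttpClient client = HttpClient.newHttpClient();

    public String route(String path) throws Exception {
        String base = MIGRATED.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse(LEGACY_BASE);               // not migrated yet: still served by the monolith
        HttpRequest request = HttpRequest.newBuilder(URI.create(base + path)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}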

But that’s not the only pattern that we’ve found to be useful in making the microservices approach work in the corporate world. Another important aspect is that in many cases, a development team does not get to have decentralized control over its data. This is the reason for a pattern we call the Adapter Microservice, which is an extension of the original Adapter pattern from Design Patterns by Erich Gamma and others.

In an Adapter Microservice, you adapt between two different APIs. The first is a business-oriented API that's built using RESTful or lightweight messaging techniques and designed using the same domain-driven techniques as a traditional microservice. The second API, the one it adapts to, is an existing legacy API or a traditional WS-* SOAP service. Purists might object to this approach and insist that if you're not adopting decentralized data, then you're not doing microservices. However, corporate data exists for a reason, and more often than not there are good reasons other than organizational inertia that have kept it in place. It might be that a significant number of legacy applications still need access to that data in its current form and cannot be easily adapted to a new API, or perhaps the sheer weight of the data (often measured in hundreds of terabytes or even petabytes) precludes migrating it to a new form that's owned by a single service.
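Sketched in Java (the resource, the legacy client, and all of the type names are assumptions for the example), an Adapter Microservice exposes a small, domain-oriented REST API on the outside while delegating to the existing legacy or SOAP API on the inside.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Hypothetical Adapter Microservice: the outward-facing API is a small,
// business-oriented REST resource; internally it adapts each call onto an
// existing legacy/SOAP customer system that cannot easily be changed.
@Path("/customers")
public class CustomerAdapterResource {

    // Stand-in for a WSDL-generated SOAP client or other legacy API wrapper.
    private final LegacyCustomerPort legacy = new LegacyCustomerPort();

    @GET
    @Path("/{id}")
    public CustomerDto get(@PathParam("id") String id) {
        LegacyCustomerRecord legacyRecord = legacy.fetchCustomerRecord(id);   // call into the legacy system
        return new CustomerDto(legacyRecord.custId(), legacyRecord.fullName()); // translate to the new domain model
    }
}

// Assumed legacy-side types; in practice these would come from the legacy system's client library.
class LegacyCustomerPort {
    LegacyCustomerRecord fetchCustomerRecord(String id) {
        return new LegacyCustomerRecord(id, "Jane Doe");
    }
}

record LegacyCustomerRecord(String custId, String fullName) {}
record CustomerDto(String id, String name) {}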

Microservices help put the ops back in DevOps

Another important aspect of microservices deals with the operations side of the set of practices known as DevOps. This has its roots in a number of patterns that were originally developed for conventional application management. Fowler emphasized the importance of this in his original paper on microservices, where he stated that it is necessary to adopt infrastructure automation as part of a DevOps process that's built on Continuous Delivery and Continuous Integration. However, the need for this is not always clear to teams that begin adopting microservices on a small scale. The issue is that while microservices make it easier to change and deploy a single service quickly, they also make the overall effort of managing and maintaining a set of services greater than it would be for a corresponding monolithic application.

For instance, this is one reason why many common frameworks, such as Netflix's microservices framework and Amalgam8, adopt the Service Registry pattern: by avoiding hard-coded microservice endpoints in your code, you can change not only the implementation of downstream microservices but also the location of a service at different stages of your DevOps pipeline. Without a Service Registry, your application would quickly flounder as code changes started propagating upward through a call chain of microservices.
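As a minimal illustration of the idea (the registry interface and its entries are invented stand-ins for a real registry such as Eureka or Consul), the caller resolves a logical service name at request time instead of hard-coding a host and port.

import java.net.URI;
import java.util.List;
import java.util.Map;

// Minimal Service Registry sketch: callers resolve a logical service name at
// request time instead of hard-coding hosts and ports, so the actual location
// can differ per environment or deployment without changing calling code.
interface ServiceRegistry {
    List<URI> lookup(String serviceName);
}

// Toy in-memory registry; a real one would be populated by service registration.
class InMemoryRegistry implements ServiceRegistry {
    private final Map<String, List<URI>> entries = Map.of(
            "order-service", List.of(URI.create("http://10.0.1.17:8080")));

    @Override
    public List<URI> lookup(String serviceName) {
        return entries.getOrDefault(serviceName, List.of());
    }
}

public class OrderClient {
    private final ServiceRegistry registry;

    public OrderClient(ServiceRegistry registry) {
        this.registry = registry;
    }

    public URI orderServiceBase() {
        // Resolve by name; in test this might point at a stub, in production at a cluster.
        return registry.lookup("order-service").get(0);
    }
}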

This idea of achieving better isolation while at the same time making it possible to more easily debug microservices is at the heart of several of the DevOps patterns that we’ve identified, particularly Correlation ID and Log Aggregator. The Correlation ID pattern was identified and documented in a specific form in Gregor Hohpe’s book Enterprise Integration Patterns, but we’ve now seen the concept generalized in projects like OpenTracing that allow trace propagation through a number of microservices that are written in several different languages. Log Aggregator is a new pattern that has been implemented in a number of open-source and commercial products (such as the open-source ELK stack); it complements Correlation IDs by allowing the logs from a number of different microservices to be aggregated into a single, searchable repository. Together, these allow for efficient and understandable debugging of microservices regardless of the number of services or depth of each call stack.
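A small, hedged sketch of Correlation ID propagation (the header name X-Correlation-Id is a common convention rather than a requirement of any of the tools above, and the URL is invented): reuse the incoming ID if one arrived, otherwise mint one, include it in every log line, and forward it on each downstream call so the aggregated logs can be stitched back together.

import java.net.URI;
import java.net.http.HttpRequest;
import java.util.UUID;

// Illustrative Correlation ID handling: reuse the inbound ID if one is present,
// otherwise generate one, log it with every message, and forward it as a header
// on each downstream call so the log aggregator can reassemble the full story.
public class CorrelationIdExample {

    static String ensureCorrelationId(String incomingHeader) {
        return (incomingHeader != null && !incomingHeader.isBlank())
                ? incomingHeader
                : UUID.randomUUID().toString();
    }

    static HttpRequest downstreamRequest(String correlationId, URI uri) {
        return HttpRequest.newBuilder(uri)
                .header("X-Correlation-Id", correlationId)    // propagate to the next service in the chain
                .GET()
                .build();
    }

    public static void main(String[] args) {
        String correlationId = ensureCorrelationId(null);     // nothing arrived on the inbound request
        System.out.println("[" + correlationId + "] calling inventory service");
        HttpRequest request = downstreamRequest(correlationId,
                URI.create("http://inventory-service.internal/stock/42"));
        System.out.println(request.headers().map());          // shows the propagated header
    }
}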

Finally, another aspect of DevOps that is a crucial bridge between development and operations is one that Fowler calls out in his article: the importance of designing for failure. In particular, Netflix's Hystrix framework has become an important part of many microservices implementations because it implements the Circuit Breaker pattern, which was first documented in Michael Nygard's 2007 book Release It!. With a Circuit Breaker, you avoid wasting time handling downstream failures that you already know are occurring: a circuit breaker placed around upstream service calls detects when a downstream service is malfunctioning and stops trying to call it. The benefit is that each call "fails fast," so you can provide a better overall experience to your users and avoid tying up resources like threads and connection pools on downstream calls that are destined to fail.
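A bare-bones sketch of the Circuit Breaker idea (the thresholds and time-outs are arbitrary, and a production implementation such as Hystrix does considerably more): after a run of failures the breaker opens and fails fast instead of tying up threads on calls that are destined to fail.

import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Bare-bones Circuit Breaker sketch: after failureThreshold consecutive
// failures the breaker opens and fails fast; once retryAfter has elapsed it
// lets a call through to probe whether the downstream service has recovered.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final Duration retryAfter;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration retryAfter) {
        this.failureThreshold = failureThreshold;
        this.retryAfter = retryAfter;
    }

    public <T> T call(Supplier<T> downstream, Supplier<T> fallback) {
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(retryAfter))) {
            return fallback.get();                   // open: fail fast, don't touch the network at all
        }
        try {
            T result = downstream.get();
            consecutiveFailures = 0;                 // success closes the breaker again
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();            // trip the breaker
            }
            return fallback.get();
        }
    }
}

A caller would then wrap each downstream request, for example breaker.call(() -> inventoryClient.stockLevel(sku), () -> cachedDefault), where inventoryClient and cachedDefault stand for whatever client and fallback value the application already has.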

This kind of resource management used to be the exclusive province of the operations side of DevOps, but the microservices architecture more effectively brings the two sides together as they both work toward making the resulting applications more reliable, performant, and resilient.

What's next

In this article, we explored the historical antecedents of microservices and examined how the microservices architecture came about. We also discussed the kinds of patterns you need to follow to successfully apply microservices in the corporate world, and the kinds of challenges you can encounter along the way.