In a recent post on integration architecture, Alan Glickenhouse touched on the question of when and where API management should be used in relation to microservices architecture. This post explores that question at the next level of depth, examining how API management can be positioned to embrace microservices architecture and yet still manage the complexity it can introduce.
It is not uncommon for a large enterprise to have hundreds or even thousands of core applications containing the data and functions that help them run their day-to-day business. If all these applications were refactored into microservices architecture, each application might result in tens or even hundreds of microservice components. Whilst many applications will never be refactored into microservices architecture, some will, or at least parts of them will. Certainly many new applications will be written using these fine-grained microservice components in order to gain the benefits of greater agility, more independent and elastic scalability, and truly independent resilience models.
Clearly microservices architecture by its very nature implies an enormous increase in the number of components sitting on the network, and we need to consider how to handle that complexity.
One aspect of this complexity is the interfaces that these microservice components make available. Most microservice components will make their capabilities available via an interface such as RESTful HTTP/JSON based APIs. Just as the number of microservice components on the network increases, so does the number of exposed APIs on those components.
How do we find the APIs we want from the overwhelming set available? Which are we “allowed” to re-use in other contexts? Ideally, of course, microservice components are completely independent, but in reality there will always be some invocations between them. How will we know which sets of components are dependent on one another, and how far failures will permeate? We need to better understand this increasing number of possible linkages between components across the enterprise landscape.
Is this the level at which API management should work? It might be tempting to think that we should apply API management at this fine grained level and it would allow us to administer this increasing number of interfaces. However, whilst API management has its place in a microservice world, we first need to re-establish some notion of boundaries and ownership.
Things were a lot simpler with traditional siloed applications. These often represented only one large component sitting on the network, or perhaps two for high availability. Whilst within the silo there might have been much communication between the different parts of the application, this was typically hidden, and indeed unavailable to anything beyond the application’s boundaries. It certainly wasn’t made available as APIs on the network. An example would be the calls between EJBs in a Java application. These are internally available calls only, and may well be made in memory as local calls, never reaching down to the network.
Only capabilities that the siloed application wanted to make available to other applications would be exposed via a network-level interface such as a web service, or more typically now, a RESTful JSON/HTTP based interface – we’ll generically refer to these as “APIs” for simplicity from this point on.
In microservices architecture, an application is broken down into multiple independent microservice components. Although ideally these microservice components are as independent as possible, there will always be a need for some intercommunication, and since each microservice is a separate network component, they will typically do this internal intercommunication via APIs too.
So, now we have a plethora of APIs, some of which are really only used in the narrow context of a close-knit set of microservice components, and some that are intended for much wider re-use across the enterprise, and perhaps beyond. Technically, however, they all look the same. How do we find the ones we need, and indeed how do we stop consumers from calling the ones they shouldn’t? Indeed, the idea that there even is an application boundary has potentially been lost. It could be said that there is no boundary at all, unless we choose to create one.
Without this notion of an application boundary, anything can call anything. Perhaps more importantly, we have little indication of ownership and accountability. Who has the responsibility for ensuring that a set of microservice components work together reliably to provide a business capability? How do we provide the fine-grained access control to ensure that microservice components are only called by those that know how to use them appropriately?
It becomes clear that the only way to manage such a large number of components is to bring back some notion of the original application boundary concept. We want components within the boundary to be able to talk to one another’s APIs at will, and then only make some APIs available beyond the boundary.
We can use network-level mechanisms to create protected communication within the boundary, using, for example, Kubernetes namespaces, and perhaps further security mechanisms such as certificates or token-based authentication to ensure that internal communication is secured. We would not expect to see a full API management capability intercepting communication internal to the application. These components know about one another, and are likely created and maintained by the same group of people. The specifications of their interfaces are part of the internal design of the application. Ideally, these essentially internal components of the application should communicate directly with one another, with no additional latency or complexity introduced.
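As a minimal sketch of what such a boundary could look like at the network level, the following Kubernetes NetworkPolicy allows inbound traffic to pods only from other pods in the same namespace. The namespace and policy names here are purely illustrative assumptions, not taken from any real application:

```yaml
# Hypothetical policy: pods in the "orders-app" namespace accept
# traffic only from pods in that same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-to-namespace
  namespace: orders-app
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # a bare podSelector matches pods in the
                           # policy's own namespace only
```

Note that a policy like this only blocks traffic at the network layer; the certificate or token-based checks mentioned above would still be needed to authenticate callers within the boundary.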
Next we need to explicitly expose specific URLs beyond the boundary for use by other applications. We would likely use, for example, the Ingress functionality in Kubernetes to make these APIs available outside the namespace boundary. But how will the owners of the microservices make the definitions of the APIs they want to expose easily discoverable? How will consumers explore what APIs are available? How will we administer their access?
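As an illustrative sketch (the hostname, path, and service name are assumptions for the example, not taken from the post), a Kubernetes Ingress might expose just the one externally intended API route, leaving every other endpoint in the namespace unreachable from outside:

```yaml
# Hypothetical Ingress: only the order-service API path is routed
# in from outside the namespace boundary; all other services in
# "orders-app" remain internal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-external
  namespace: orders-app
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders/v1
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 8080
```

An Ingress like this answers the routing question but, as the post goes on to argue, not the discovery, subscription, or analytics questions – that is where API management sits in front of it.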
This is where API management comes in, enabling us to control which consumers can discover which of the available APIs, whether they can self-subscribe to use them, and enabling us to capture analytics on that usage.
So, summarizing all that, we’re saying that inter-microservice communication within an application boundary is different from inter-application communication that goes across different application boundaries. Although they may both be performed using web APIs, their implementation may be radically different.
Now, whether the boundaries we introduce in a microservices architecture represent the same groupings we would have had originally, had they been siloed applications, is an interesting question. We are certainly not tied to those boundary definitions. What is perhaps more interesting is that we could potentially change our minds on the shape of these new boundaries over time. It would often have been impractical, and maybe even impossible, to move code from one siloed application to another. Now, since the boundaries are just arbitrary decisions made by us and defined by mechanisms such as the exposure of services via API management, those boundaries can be changed much more easily.
So, we have asserted a number of things in this blog post:
- Some form of grouping of microservices, which we might describe as an “application”, is necessary to manage the increased number of components this architectural style produces.
- Each group of components must have an owner at the group level, in addition to the owners at the component level, in order to ensure a consistent design of the overall application.
- These groups of components need to live within some form of enforceable boundary, perhaps via security models, or even down at the network level to enable inter-communication within the boundary that should not be available beyond it.
- Communication within the boundary should be “light touch”, meaning it does not need to go via a formal API gateway.
- Any APIs exposed beyond these boundaries are destined for broader re-use by consumers outside the ownership domain of the boundary. As such they should be exposed using some form of API management to provide discovery, self-subscription, traffic management, and more.
There is certainly more to be said on this topic. For the time being, hopefully this serves to provide clear guidance on where API management itself fits within a microservices architecture.
Update: 13 Nov 2018 – further post published extending this discussion to explore the role of a Service Mesh in comparison to that of API Management to see how their roles differ and yet also complement one another.