In this part of the world, Christmas is right around the corner and if you have a sweet tooth like me, you are probably thinking about your capacity to ingest vast amounts of cookies, candy and other goodies. So I thought I would write a piece about capacity related to APIs 🙂

The most basic capacity control mechanism on an API is the plan under which the API is shared and consumed. The API plan should define not only the security model, but also the capacity that you as an API consumer have “acquired” by signing up to consume the API under that plan. The question is: are API plans sufficient for end-to-end capacity planning and control?
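As a concrete illustration, an API plan might bundle a security model with acquired capacity along these lines. This is a minimal sketch; the `ApiPlan` fields, tier names, and numbers are assumptions for illustration, not any specific platform's plan model:

```python
from dataclasses import dataclass

@dataclass
class ApiPlan:
    """Hypothetical API plan: a security model plus 'acquired' capacity."""
    name: str
    auth_scheme: str          # e.g. "api_key" or "oauth2"
    requests_per_minute: int  # sustained rate the consumer signed up for
    monthly_quota: int        # hard cap per billing period

bronze = ApiPlan("bronze", "api_key", requests_per_minute=60, monthly_quota=100_000)
gold = ApiPlan("gold", "oauth2", requests_per_minute=1_000, monthly_quota=5_000_000)
```

The point of the sketch is simply that the plan captures what the consumer has bought, not what the infrastructure can actually deliver.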

If you are in IT operations, you know by experience that there is a difference between the amount of traffic that could happen if all systems are running at full load, and the amount of traffic that actually does happen. This is simply due to the fact that in a well-planned and well-balanced environment, not all systems have their expected peak loads in the same time periods. So you plan for “expected combined peak” plus a percentage that is your “insurance” against unforeseen changes in traffic patterns.
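The arithmetic behind that rule of thumb can be sketched as follows; every number here (the per-system peaks, the overlap factor, the headroom percentage) is an illustrative assumption:

```python
# Per-system expected peaks (requests/sec) -- assumed illustrative numbers.
peaks = {"orders": 400, "inventory": 250, "billing": 150}

# Worst case: every system peaks at the same time.
sum_of_peaks = sum(peaks.values())  # 800 rps

# Well-planned reality: peaks fall in different time windows, so the
# expected *combined* peak is lower; 0.7 is an assumed overlap factor.
expected_combined_peak = sum_of_peaks * 0.7  # 560 rps

# Add "insurance" headroom against unforeseen changes in traffic patterns.
headroom = 0.25
capacity_to_provision = expected_combined_peak * (1 + headroom)  # 700 rps
```

The gap between the 800 rps worst case and the 700 rps actually provisioned is exactly the difference between "could happen" and "does happen" described above.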


In the world of APIs there is the same fundamental difference between what the API owner sells in terms of API plan access and the traffic capacity the underlying IT infrastructure can and should provide. Having spare capacity to support the maximum potential traffic corresponding to all API plans “sold” can be very costly.

To prevent prohibitive runtime costs on the API platform itself, make sure that your API runtime is highly scalable (so that actual loads matter less) or apply traffic throttling when current load surpasses available capacity to smooth out traffic spikes. This addresses capacity planning for the API platform in isolation.
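Throttling of this kind is commonly implemented as a token bucket, which admits a sustained rate while absorbing short spikes up to a burst limit. A minimal sketch, not tied to any particular API platform:

```python
import time

class TokenBucket:
    """Minimal token-bucket throttle: admits requests at a sustained
    rate while absorbing short bursts up to the bucket's capacity."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should queue, delay, or reject (e.g. HTTP 429)
```

A spike that exceeds the refill rate drains the bucket, and further requests are smoothed out (delayed or rejected) until capacity becomes available again.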

But what about the backend services that provide data and function to not just one, but multiple different APIs? In that situation, any control added to an individual API will not help the service owner control and manage the overall capacity required on a given service. In APIs versus Services I argued that APIs and Services should be independently managed objects. When that is your approach, the services provide an additional control point where global traffic policies can be applied – assuming, of course, that your service integration platform supports such traffic policies.

Look for end-to-end integration platforms where you can apply traffic policies independently to managed APIs (on your API platform) and the services supporting them (typically in your Enterprise Service Bus, which, as I argued in API platforms are different from just another ESB, should be distinct from your API platform). Have the API business owner define the plans that control API consumption. Then have IT operations set the global traffic policies on backend services according to traditional capacity planning methods, based on a mixture of traffic from various APIs and potentially other direct service consumers.
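The two control points can be pictured together: per-API plan limits owned by the business, plus one global limit on the shared backend service owned by IT operations. A minimal fixed-window sketch, where all names and numbers are illustrative assumptions:

```python
from collections import defaultdict

class TwoTierLimiter:
    """Per-API counters (business-owned plan limits) plus one global
    service counter (operations-owned policy), per time window."""

    def __init__(self, plan_limits: dict, service_limit: int):
        self.plan_limits = plan_limits      # per-API plan limits
        self.service_limit = service_limit  # global backend service policy
        self.api_counts = defaultdict(int)
        self.service_count = 0

    def admit(self, api_name: str) -> bool:
        """A request passes only if both its own API plan and the
        shared backend service still have capacity in this window."""
        if self.api_counts[api_name] >= self.plan_limits[api_name]:
            return False  # this API's plan is exhausted
        if self.service_count >= self.service_limit:
            return False  # backend service at its global limit
        self.api_counts[api_name] += 1
        self.service_count += 1
        return True
```

Note that the global service limit can be deliberately lower than the sum of the plan limits "sold" – that over-subscription is exactly what traditional capacity planning on the service tier protects against.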

Connect with me on @ClausTorpJensen or read my earlier blogs. You can also subscribe to the blog list here.
