A few months ago I happened upon an interesting post on InfoQ summarizing a debate on rate limiting and throttling on public APIs and their effect on innovation around such APIs.
As someone who has worked on publicly consumable APIs and is now focused on an internal (for the moment) API that gets hammered by other internal services, I can say outright that the API owners are correct in this debate: hard limits need to be in place.
An organization that exposes an API typically does so only when there is a perceived business gain in opening its data and functionality for third-party apps to take advantage of. Such apps will only be written if there is utility to be gained by actually consuming that API. It’s a business partnership without money changing hands: the partners are the provider of the API and the developers writing applications that consume it. It’s a nice symbiosis that end users ultimately benefit from.
However, when such a partnership becomes too one-sided, it behooves the short side to alter the deal. Without limits on the API, simple game theory tells us that API consumers will use the API as much as they can, irrespective of what that means for the system providing it.
But if you are the provider, you must ask yourself why you would ever let your system arrive in a state where it can be abused in the first place. Unless rate and access bounds exist on an API, the users of apps written against it will hit bounds anyway - except they will be technical bounds in the underlying infrastructure, and the result will be a denial of service rather than arbitrary but reasonable usage limits. Applying back pressure is a useful tool in many aspects of software architecture, and imposing rate limits at the API tier is one such application.
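To make that concrete, here is a minimal sketch of one common way to impose such limits: a token-bucket rate limiter. The class name and parameters are illustrative, not from any particular framework; a production system would typically track one bucket per API key and back it with shared storage.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens per second,
    holding at most `capacity` tokens (the allowed burst size)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full: a fresh client may burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would respond with HTTP 429 Too Many Requests
```

A gateway would call `allow()` on each request and return a 429 (ideally with a `Retry-After` header) when it fails - that rejection is exactly the back pressure signal the consumer needs in order to slow down instead of toppling the infrastructure.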
There was once a great Stack Overflow podcast in which Jeff Atwood discussed their hard-won conclusion that enforcing bounds on everything is absolutely vital. Below is a snippet from that transcript:
Atwood: Well I look at it this way, if somebody is going to design a system like StackOverflow, asking “how do I design this?”, I would say look, you've got to bound everything. From day one. Just put in the boundings, because we didn't and, you know, I kind of realised we'd have to do some of it but I didn't realise how pervasive those boundings would be, like in every aspect of what we do there's boundings in the system that you have to have.

Spolsky: So the system is counting basically.

Atwood: Yeah you're just making sure that nothing happens too much. Because if anything happens too much it's just bad. It just leads to really really bad things happening, both from the reputation system to the scoring perspective to the hardware perspective, it's pervasive throughout the system.
If your system depends on implicit bounds that aren’t technically enforced, your users will inevitably find a way to exceed and exploit those bounds to their individual advantage - perhaps to your detriment and to the detriment of other users.
An unintended DoS from greedy API consumers and a user gaming a question-and-answer voting system to gain a gazillion points in a day have a common root cause: failure to enforce constraints.
For innovation around an API to occur, the API in question must offer reasonable availability guarantees - probably a number featuring three or more 9s. Imposing no usage limits would make such a number even harder to achieve than it already is, irrespective of the talent and resources of the provider.
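To see what those 9s actually buy, a quick back-of-the-envelope calculation of the annual downtime budget at each level (a simple arithmetic illustration, not a formal SLA definition):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

# Annual downtime budget allowed at each availability level.
for availability in (0.99, 0.999, 0.9999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability -> "
          f"{downtime_minutes:.1f} minutes of downtime per year")
```

At three 9s the provider gets under nine hours of total downtime per year; a single unthrottled consumer stampede can burn through that budget in one afternoon, which is why the limits belong in the API tier rather than in the consumers' goodwill.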