This would be improved by a recognition that security involves trade-offs, particularly when you have existing apps in production whose compatibility you don't want to break. That isn't a laughable consideration: "A security problem in a customer's code? Fire them as a customer!" would indeed be effective at reducing exposure, but it is not in the company's interests and has certain drawbacks with regard to one's odds of remaining employed as a security professional. Many apps - probably most apps - are not under active development, because the customer believes them to work. Sudden breaking API changes are bad news for them, possibly irreversibly bad news if e.g. the original developer can't be located to fix the problem.
A more sensible resolution to this issue, as a hypothetical security professional at an API company, would be moving more of the burden for security onto the company rather than the API consumer, by e.g. limiting the downside risk of a credentials compromise. (Similar to how banks don't say "The bad guys got your password? Sucks to be you, your balance is now $0", one would be well-advised to eventually have something on the roadmap to e.g. prevent unauthorized or poorly considered code from causing business catastrophe without first getting a human in the loop.) You'd also want to e.g. make sure your sample code is secure out of the box, and write your first-party libraries to "just work" to the maximum extent possible, such that greenfield development would tend to be secure by default.
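To make that concrete, here's a minimal sketch of one way to limit the downside of a compromised credential: give each API key a conservative per-key cap, and park anything above it in a human-review queue instead of executing it. Everything here (`ApiKey`, `Gatekeeper`, the field names, the thresholds) is invented for illustration, not any real company's API.

```python
# Hypothetical sketch: cap the damage a stolen API key can do by
# requiring human review for actions above a per-key limit.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ApiKey:
    daily_limit_cents: int        # conservative default cap per key
    spent_today_cents: int = 0

@dataclass
class Gatekeeper:
    review_queue: list = field(default_factory=list)

    def authorize(self, key: ApiKey, amount_cents: int) -> str:
        """Execute small actions immediately; park large ones for a human."""
        if key.spent_today_cents + amount_cents <= key.daily_limit_cents:
            key.spent_today_cents += amount_cents
            return "executed"
        # A compromised key can only queue work, not drain the account.
        self.review_queue.append((key, amount_cents))
        return "pending_human_review"

gate = Gatekeeper()
key = ApiKey(daily_limit_cents=10_000)
print(gate.authorize(key, 2_500))    # within the cap: runs immediately
print(gate.authorize(key, 50_000))   # over the cap: waits for a human
```

The point isn't this particular policy (a real system would also want velocity checks, anomaly detection, scoped keys, etc.) - it's that the API provider absorbs the failure mode rather than passing it straight through to the consumer.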
That's being pretty generic, but it's my understanding that many API companies actually do put quite a bit of work into internal anti-abuse tooling, for this and related reasons.
I don't think this is something you can mitigate with libraries, especially since you'd need client libraries for a dozen or so languages, and I (at least) wouldn't be willing to install software just to try out some service -- you pointed out on your blog how much users dread installing software. In my mind client libraries are even worse: you have to install them and their dependencies (which may or may not conflict with your own), which may involve more or less crappy tooling (e.g. my JRuby build now depends on Maven, because I need to use your client libraries), before you can even test the service.