Tips to prevent adoption of your API (chrislukic.com)
85 points by clukic on June 16, 2021 | 41 comments


This appears to be the Garmin API, which supports OAuth 1a and has no public documentation; its limitations are also detailed pretty well on the author's company blog: https://blog.smashrun.com/2021/05/21/a-new-garmin-api/.

What an awful experience, I hope I never have to use it.


If this is Garmin, I'm sadly not certain the difficulties are intentional. Their software has always been a major detraction from their really solid hardware. Connect IQ seems to be an honest attempt to provide functionality, but it ends up being a strange bastardization of JavaScript and Java that compiles to Java bytecode. The experience was riddled with the same types of alpha-quality oversights, and the target market started moving to Coros watches. By the time I dropped the project, all the watch crashes and firmware updates made much more sense.


When I saw the OAuth 1a, I cringed and audibly said "Oh no!"

I have to deal with OAuth 1a to make API calls to Jira and other Atlassian products. Hooray for whoever came up with OAuth 2 and did away with the requirement for the client to do cryptographic operations.

Surprisingly enough, OAuth 1a was the easy part for the article's author. May God have mercy on his soul.


I had the same reaction when I saw OAuth 1a. I wrote some code that called NetSuite's REST API, which also uses that; it was more than a little annoying.


I had a wild guess this would be Garmin too. Between this and the problems evidenced by their ransomware payout, their software engineering could use a serious overhaul.

A shame, because their hardware is generally good.


A couple of things I omitted for brevity:

The async server response doesn't contain the data; it contains a link to retrieve the data. So you make a request, then wait for a link to be sent to your server, which you can use to retrieve the data for a limited time.

The responses sent to the server don't correspond one to one to the requests. One response may contain data for many requests, and many responses may contain data for one request. Each response contains data for many UATs.
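
The receiving end of that flow looks roughly like this (a minimal Python/Flask sketch; the route, field names, and save() helper are all hypothetical, not the actual API's schema):

    import requests
    from flask import Flask, request

    app = Flask(__name__)

    # Hypothetical callback shape: the POST body carries short-lived
    # links rather than data, and one callback may cover several of our
    # original requests (and several user access tokens).
    @app.route("/fitness-callback", methods=["POST"])
    def fitness_callback():
        payload = request.get_json()
        for item in payload.get("items", []):
            # Each link is only valid for a limited time, so fetch now.
            data = requests.get(item["link"], timeout=30).json()
            save(item["user_token"], data)
        return "", 200

    def save(user_token, data):
        """Persist the fetched data to our own store (stub)."""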


This reminds me of the time I was dealing with a REST API that exposed an image thumbnail... except the response contained a file path, and the response completed before the file was actually written to disk.

They were paying me to work with this, yet they couldn't be bothered to fix this on their end.

What a terrible experience.


wow, that sounds less like an API and more like a user/customer-facing feature, like Google Checkout.


I’m a bit confused as to why a “unit test” would ever be hitting a real, external API. I would call that an integration test, E2E test, etc. that probably shouldn’t be run frequently (as part of a normal build, for example).

This API sounds like garbage, don’t get me wrong; and because of that, mocking its behavior is going to be almost impossible.


I don't think I realized this was a thing. Are other developers mocking API endpoints to test their code as they develop and maintain it? It seems like a tremendous amount of work - recreating a service that will make an async call back to your endpoint as a means of testing. I've thought about it, but the fact that it seemed like so much additional work, and that I'd be mocking a moving target, just made me think it would be a visit to crazy town.


I avoid writing automated tests that hit an external API - especially one out of my control - because I don't want my CI runs to ever fail because someone else's service wasn't responding. I want CI to be a completely closed box, such that any failures mean there's a bug in my code.

If I want to test external APIs I'll do that in a separate set of integration tests which are run as part of a separate system, not as part of my CI for every code commit to my repo.

I mostly use Python, and the APIs I talk to are mostly accessed via the requests or httpx libraries - both of which have excellent companion libraries for productive mocking (quick example below the links):

- https://requests-mock.readthedocs.io/en/latest/pytest.html

- https://github.com/Colin-b/pytest_httpx
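
For example, with the requests-mock pytest fixture (the endpoint and payload here are made up):

    import requests

    def fetch_activity(activity_id):
        resp = requests.get(f"https://api.example.com/activities/{activity_id}")
        resp.raise_for_status()
        return resp.json()

    # The requests_mock fixture comes from the requests-mock plugin;
    # no real network traffic happens in this test.
    def test_fetch_activity(requests_mock):
        requests_mock.get(
            "https://api.example.com/activities/123",
            json={"id": 123, "distance_km": 5.2},
        )
        assert fetch_activity(123)["distance_km"] == 5.2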


Depends on the API and how much testing you need. You want to test your code, not the API's availability or correctness.

But it can be as easy as using a fake http library and mocking the responses, or using an httptest server: https://onsi.github.io/gomega/#ghttp-testing-http-clients

If the API is complicated and you have to write your own fake server, that might not make sense for small projects.
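
For the fake-library option, a rough Python equivalent using unittest.mock (ghttp itself is Go; the endpoint here is hypothetical):

    from unittest import mock

    import requests

    def fetch_status():
        return requests.get("https://api.example.com/status").json()

    def test_fetch_status():
        fake_response = mock.Mock()
        fake_response.json.return_value = {"ok": True}
        # Patch the HTTP call itself and hand back a canned response;
        # nothing touches the network.
        with mock.patch("requests.get", return_value=fake_response) as get:
            assert fetch_status() == {"ok": True}
            get.assert_called_once_with("https://api.example.com/status")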


We did this at my previous company. What our app did was get data from a queue, enrich it with the API, then send it off to another queue. They provided test data and a test environment, but it was horrible: it changed unpredictably and data frequently went missing. We recorded all endpoints hit during our integration tests, saved them to files, and mocked their API (mostly GETs with no state, so backed by a simple REST service with a hashmap).

The API team was an internal team to the company and there was absolutely no communication possible with them, so we kinda had no choice.


It depends on a lot of things, but let's say (for concreteness) you are using Java or something similar. I'd write classes to represent the request and response models, call them A and B. Then I'd write an interface/class with a method from A to Future<B>. That method would be mocked everywhere, and most of your code doesn't deal with the API nonsense. (B is the data you want, not the link to get the data.)

I may or may not actually test the implementation of that method.

In short: wrap up your external dependency.
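
A rough Python sketch of the same shape (every name here is hypothetical):

    from concurrent.futures import Future
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class ActivityRequest:   # "A": what we ask for
        user_token: str
        start_date: str
        end_date: str

    @dataclass
    class ActivityData:      # "B": the data we want, not the link to it
        points: list

    class ActivityClient(Protocol):
        def fetch(self, req: ActivityRequest) -> "Future[ActivityData]":
            """The one method the rest of the codebase depends on."""

    # The real implementation hides the callback/link-retrieval dance
    # behind fetch(); everything else mocks ActivityClient.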


The startup costs are high, but once you're in the flow of doing it, it's not so bad.

I usually have a switch in our tests: default is to run against local mocks, but the test data also works against the third party sandbox environment. Periodically we revalidate our mocks against the sandbox responses to make sure the other API hasn't drifted.

CI always runs against local mocks, for speed and reliability.
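
Roughly like this (the URLs and environment variable are made up):

    import os

    import requests

    # Default target is a local mock server; a periodic job sets
    # API_TARGET=sandbox to replay the same canned test data against
    # the third party's sandbox and catch drift.
    BASE_URL = (
        "https://sandbox.example.com"
        if os.environ.get("API_TARGET") == "sandbox"
        else "http://localhost:8089"  # local mock
    )

    def test_activity_roundtrip():
        resp = requests.get(f"{BASE_URL}/activities/123")
        assert resp.json()["id"] == 123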


It totally depends on the API and how critical of a code path you're writing tests for. For simple GET/POST calls, it's usually pretty easy to do this and I think you get a lot of value out of it. I've used an in-process HTTP server to mock such calls for a unit test, or you can add a layer to the nginx docker image to have a composable mock of the service.
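
The in-process server is only a few lines with Python's standard library, for instance (hypothetical endpoint):

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import requests

    class FakeAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"id": 123}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep test output quiet

    def test_against_in_process_server():
        server = HTTPServer(("127.0.0.1", 0), FakeAPI)  # port 0: any free port
        threading.Thread(target=server.serve_forever, daemon=True).start()
        try:
            url = f"http://127.0.0.1:{server.server_port}/activities/123"
            assert requests.get(url).json()["id"] == 123
        finally:
            server.shutdown()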


The unit under test could be the API interaction itself. If I wrap an external API in a client library, I would test its behavior before then mocking that out when testing other code that uses the library.

I think situations like external APIs sit on a fuzzy line between unit and integration. For testing an API unit I would reach for some way to save, replay, and re-record those interactions, just for sanity's sake.

To me an integration test would be multi-step behavior, not just testing a specific request does a specific thing but a chain of requests, or verifying side-effects.


One reason for tests to hit a real external API is if you're using a "record and replay" test framework to capture the interactions so that you can run the tests against the recorded data quickly later. But because the API calls you make change (and the external implementation changes) you need to re-record from time to time.

This strikes a balance where 99% of the time you are making calls that never leave the process for fast testing, but can validate against the real implementation as needed.
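
vcrpy is one such framework for Python; something like this records on the first run and replays from the cassette afterwards (the endpoint is made up):

    import requests
    import vcr  # vcrpy

    # record_mode="once": hit the real API only if the cassette file
    # doesn't exist yet; delete the file to re-record.
    @vcr.use_cassette("fixtures/activities.yaml", record_mode="once")
    def test_fetch_activities():
        resp = requests.get("https://api.example.com/activities")
        assert resp.status_code == 200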


This. Unit tests should not connect to external services, imo.

And I do not get the gripe with a production rate limit tbh.


Rate limits are of course fine. I think token-level limits make more sense, because application-level limits force the consumer either to track a rate window across asynchronous processes or to make the calls synchronously. But, I mean, that's fine too. I think it's just like OAuth 1a: fine in itself, but add enough of these things on top of one another and you've created a technical hurdle that's just too difficult to leap.
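
For illustration, a token-level limit on the provider side is basically one bucket per token, which is why it's so much easier for the consumer to reason about (a minimal sketch, not anything specific to this API):

    import time
    from collections import defaultdict

    class TokenBucket:
        def __init__(self, rate_per_sec: float, burst: int):
            self.rate, self.burst = rate_per_sec, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # One bucket per user token: consumers working on different users
    # never have to coordinate on a shared app-wide rate window.
    buckets = defaultdict(lambda: TokenBucket(rate_per_sec=1.0, burst=5))

    def may_call(user_token: str) -> bool:
        return buckets[user_token].allow()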


Same. I’d be more worried about an API with no rate limit - say, if another customer ships a bug that DoS’s the API by accident.


I would think a user-token-level limit would prevent this, although I could imagine a case where a bug simultaneously affected all user tokens. But I'd imagine you'd set that app-level limit pretty high, because otherwise you'd be making life very difficult for legitimate use cases.


I've seen quite a few people that just call all automated tests "unit tests". I wouldn't read too much into it.


On the same topic: API Practices If You Hate Your Customers https://queue.acm.org/detail.cfm?id=3375635


It sounds pretty malevolent to me.

But I'm disinclined to attribute malevolence to people without a motive. So I'm inclined to think they had to implement an API for some contractual reason; they put their best architect on it, with a brief to minimise server load at all costs; he then handed it off to the most junior developer team.

BTW: API implementors don't generally design their API around the unit-testing requirements of the API's users. Please don't test the API I wrote; I've already tested it. Test your own code.


Any guesses which "major fitness brand" this is?


Further up they suggest Garmin


I'd put my money on Strava.


Gotta say, I've never heard of an API like this before - one that serves everything exactly once. Forever.


Not quite as bad, but I've worked with plenty of APIs that have an endpoint returning all new data since the last time you made the call. If something goes wrong with that call, it's basically impossible to see just the new data again.


Wow, that's an awful design. It's even harder to implement that than to leave it stateless and require the client to pass in a date range.


Think about it like a message queue. You can't keep things stateless and serve old data forever, or you have a queue that grows forever.

What you can do however is have an explicit "ack" for data being received and stored, before it's discarded. And apparently that API didn't.
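
In protocol terms, something like this on the consumer side (endpoints entirely hypothetical):

    import requests

    BASE = "https://api.example.com"  # hypothetical

    def drain_new_data(session: requests.Session):
        batch = session.get(f"{BASE}/changes").json()
        store_locally(batch["items"])  # durable write comes first
        # Only after storing do we let the server discard the batch;
        # if we crash before the ack, the next GET returns it again.
        session.post(f"{BASE}/changes/{batch['batch_id']}/ack")

    def store_locally(items):
        """Persist to our own database (stub)."""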


> Instead of responding to your API requests we’ll call you back.

How...? Why?


My guess is that they do not want to serve those requests synchronously. They just put the request in a queue and have some workers fulfilling them at their own pace, without worrying about HTTP timeouts. It sucks for the user of the API, because handling that involves a lot of additional complexity.
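
i.e. something like this on their side (pure speculation about the internals):

    import queue
    import threading

    jobs = queue.Queue()

    def handle_api_request(req):
        # Accept immediately; the heavy work happens later, with no
        # HTTP timeout in play.
        jobs.put(req)
        return {"status": "accepted"}, 202

    def worker():
        while True:
            req = jobs.get()
            link = process_and_stage_result(req)      # stub below
            post_callback(req["callback_url"], link)  # "we'll call you back"
            jobs.task_done()

    def process_and_stage_result(req):
        """Do the work, store the result, return a short-lived link (stub)."""
        return "https://api.example.com/results/123"

    def post_callback(url, link):
        """POST the link to the consumer's registered endpoint (stub)."""

    # Workers drain the queue at whatever pace suits the provider.
    threading.Thread(target=worker, daemon=True).start()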


Right. I think when you're dealing with massive traffic and you want to create a highly scalable API, this is one technique. But then, if everything is hitting a queue, why have an application-level rate limit? Adding items to the queue costs essentially nothing, and you get to it when you get to it. If you think an app is abusing your API, then change the rate at which you process the queue for that app, or, you know, reach out to the app developer and ask them to stop.


The queue has to be stored somewhere, and that takes memory and/or disk. Neither is infinite, and load is prone to spikes (e.g. from another service suddenly waking up and calling you).


That's true. And really, of all these things, the app-level rate limit is perhaps the least worth mentioning. Not wanting to get into the weeds, what I didn't say is that there's no way of querying the extent of the data available for a given user. So the approach recommended by the support team is to always request all data, which easily maxes out the rate limit. Feedback that this lack of transparency is a problem for both consumer and provider alike fell on deaf ears.


This sounds like translation services. You hit an API, there's an automatic translation, then a human looks at it, then it gets sent back.


The author's comment that they "might call you back in an hour" suggests that keeping a connection open for that long is impractical. I don't doubt there are APIs that misuse this pattern, but in some places it is the right way to go, even though it might not be obvious to the user.


Sounds like webhooks, but horribly misused.


At some point you seriously need to say enough. This is so ridiculous as to not be remotely worth it.



