Forgive my ignorance (I'm on HN reading a lot in an attempt to educate myself) - I can see why that would be a bad idea, but what is the correct alternative to hardcoding the secret/key in the app?
Store the third-party credentials server-side, and send or proxy the requests that need them through your own server (authenticated with user-specific credentials).
You don't have to ship the API keys to the user's phone at all. You can set up a proxy API that calls the third-party service on behalf of an authenticated user.
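A minimal sketch of such a proxy, using only the JDK's built-in HTTP server and client; the endpoint path, upstream URL, and header names here are made up for illustration:

```
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeyProxy {
    // the third-party secret lives only here, never in the app
    private static final String THIRD_PARTY_KEY = System.getenv("THIRD_PARTY_KEY");
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/report-error", exchange -> {
            try {
                // authenticate the *user* (a per-user session token), not the app
                String auth = exchange.getRequestHeaders().getFirst("Authorization");
                if (auth == null || !isValidSession(auth)) {
                    exchange.sendResponseHeaders(401, -1);
                    return;
                }
                // attach the secret server-side and forward to the third party
                HttpRequest upstream = HttpRequest.newBuilder()
                        .uri(URI.create("https://errors.example.com/v1/reports"))
                        .header("X-Api-Key", THIRD_PARTY_KEY)
                        .POST(HttpRequest.BodyPublishers.ofInputStream(exchange::getRequestBody))
                        .build();
                HttpResponse<byte[]> resp =
                        CLIENT.send(upstream, HttpResponse.BodyHandlers.ofByteArray());
                byte[] body = resp.body();
                exchange.sendResponseHeaders(resp.statusCode(), body.length == 0 ? -1 : body.length);
                exchange.getResponseBody().write(body);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(502, -1);
            } finally {
                exchange.close();
            }
        });
        server.start();
    }

    private static boolean isValidSession(String token) {
        return !token.isBlank(); // placeholder: look the session up in your user store
    }
}
```

The app only ever holds its own session token, which you can revoke or rate-limit per user.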
I didn't mean this for scenarios where the user has an account with private content at the third party (e.g. Twitter), but for a shared backend resource, such as sending an error report to the developers for user-facing errors.
Full disclosure, I work for a company called CriticalBlue that helps customers with these types of issues. We have a recent blog article about setting up a proxy for API keys in this way that might be useful at https://www.approov.io/blog/protect-your-api-keys-with-appro... . We have a way of then locking down that proxy so only approved apps can use it - without simply circling back to the original problem of hiding secrets in apps.
Many apps do not require account registration, so then what? I guess the app could generate a server-side API token on first launch and then persist it. But then you need an API, which adds unnecessary cost.
Unnecessary cost? It's the cost of preventing hackers from being able to spin up 100s of EC2 instances using your account.
Make your own damn API. Have it do exactly what users are allowed to do on your behalf. It's not an unnecessary cost; it is 100% necessary if you are interfacing with external services from your app.
That EC2 example is a bit extreme, but in general I agree with what you are saying. Unfortunately, the reality is that most people paying for software development are more willing to accept these risks than to pay for security.
That said, security is not all-or-nothing. And anyone who believes that modern web-connected software can be 100% secure is either misinformed or trying to sell you something that you probably don't need.
If, instead of exposing your API keys, you implement an API that proxies the responsibilities your app would have needed those keys for, how are your API keys not 100% secure? You even get the added benefit of separating the implementation of those responsibilities from your app (and with app update processes being what they are, that's a huge deal when you need to make an emergency change to how you're using those APIs).
Then you could just sniff the traffic, or, if you can't MITM the connection because of pinning etc., you could still run the app inside a debugger and get the keys that way.
Note that if the keys are valid for all users, you'd only need to do this once per service.
Something like

```
// illustrative: assemble the key at runtime instead of storing it as a
// plain string literal, so it doesn't show up in a simple string dump
private static final String keyVal =
        new StringBuilder("t3rc3s").reverse().toString(); // "s3cr3t"
```

obfuscates the `keyVal` from many class decompilers.
I'd love better ways to do this. But when you do Android development (like when you do front-end web dev), it's easier to just assume that everything you wish to keep private will be visible, and to avoid keeping or saving critical information in the client app.
Stuff in native code (for example, XOR'd keys) is also marginally more difficult to access. It won't protect you from someone actively attacking your app, but it makes you less likely to get hit by people running automated scraping on apps for keys.
At the end of the day it's all security by obscurity, and the real answer is proxying calls or token vending machines (TVMs) to limit what individual keys can do, but it doesn't hurt not to be the lowest-hanging fruit either.
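For anyone wondering what the XOR trick looks like, here is the idea sketched in Java for brevity (the comment above is about native code, where decompilation is harder; the key and mask bytes are just an illustration):

```
import java.nio.charset.StandardCharsets;

public class XorKey {
    // "s3cr3t" XOR-ed byte-by-byte with 0x6b, so the plaintext never
    // appears as a string literal in the compiled binary
    private static final byte[] OBFUSCATED = {0x18, 0x58, 0x08, 0x19, 0x58, 0x1f};

    static String key() {
        byte[] plain = new byte[OBFUSCATED.length];
        for (int i = 0; i < plain.length; i++) {
            plain[i] = (byte) (OBFUSCATED[i] ^ 0x6b);
        }
        // anyone attaching a debugger still sees the result; this only
        // defeats automated string scraping, not an active attacker
        return new String(plain, StandardCharsets.US_ASCII);
    }
}
```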
All of your "solutions" still involve having the keys on an untrusted computer: the one in my hand. It isn't correct enough to say it is "easier to just assume that everything you wish to keep private will be visible"... it absolutely, fundamentally doesn't work to put something sensitive in the hands of the attacker, even momentarily. For people who just can't grok this, imagine some incredibly trivial-to-pull-off scenarios: your app is modified, the Java virtual machine is modified, the Linux kernel is modified, the phone is actually an emulator and the "hardware" is modified... you can't trust anything in the hands of an attacker, and trying to hide things from them using sleight of hand is foolish: I'll just log the final network traffic and work backwards (which for many or even most kinds of credentials is something that can be trivially automated) instead of trying to slog forward from the output of a static analyzer.
Hardcoding your AWS stuff is nasty, because any bored script kiddie can extract it and make spurious requests, and at the end of the month you receive a larger-than-usual bill.
But Twitter API keys? These just allow access to the Twitter API. If they leak, the biggest risk is a third party exhausting your rate limits (a denial of service), which Twitter may catch and respond to by revoking the credentials. The developer then has to create a new credential and update the app for it to keep working.
In this day and age, when mainstream apps update often anyway, I doubt this is seen as a serious risk for the long tail of apps. If this approach obviates maintaining a Backend-as-a-Service proxy that the developer pays for out of their own pocket, it can be more cost-effective.
Is it that hard to set up a BaaS lambda function that just increments a DB field and checks it before making the Twitter call, to rate-limit users? Even outside of abuse, I'd want to be able to rate-limit my API actions to the realm of stuff humans can do, to keep out bots.
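Not hard at all. A rough sketch of that check, where CounterStore is a hypothetical interface over whatever does atomic increments for you (Redis INCR, a DynamoDB atomic update, etc.):

```
import java.time.Instant;

public class TwitterRateLimiter {
    // hypothetical counter with an atomic increment; in practice this would
    // be backed by something like Redis INCR or a DynamoDB atomic update
    interface CounterStore {
        long incrementAndGet(String key);
    }

    private static final int MAX_CALLS_PER_MINUTE = 10; // roughly "what a human could do"
    private final CounterStore store;

    TwitterRateLimiter(CounterStore store) {
        this.store = store;
    }

    /** Returns true if this user may make another Twitter call right now. */
    boolean allow(String userId) {
        // one counter per user per minute; old buckets can expire via a store TTL
        String bucket = userId + ":" + (Instant.now().getEpochSecond() / 60);
        return store.incrementAndGet(bucket) <= MAX_CALLS_PER_MINUTE;
    }
}
```

The lambda would call allow() before forwarding the request to Twitter and answer with a 429 when it returns false.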
I think the risk arises from the bad practice of hardcoding any keys in your mobile app. The habit of hardcoding Twitter keys can extend to AWS or, say, Stripe keys, which can lead to serious security breaches.
This post could be meant to warn developers against this practice going forward and, as others have suggested, to recommend storing third-party credentials server-side, making the call on behalf of the client app, and then returning the results to the client.
What this looks like to me is the same adage: never trust the client software.
When doing web stuff, I can't trust the client with my API secrets; that's why I use OAuth. My secrets are (theoretically) safe on the server side, and the client ends up with a short-lived token that lets me access services on their behalf if need be. Google's Drive file picker is a pretty good example of how that works. It's awfully complicated (just logging in requires me to open an extra tab and listen for events on it for when the login flow completes), but it does work as a way to keep me from having to push secrets out to the client.
Doing this right basically requires two things. First, you must make your own API server that can run code on behalf of the client. You can trust the client to ask for stuff off of the API if it's properly authenticated, but you can't trust the client to run its own authorization code - anything that requires authorization rather than authentication has to run on the backend somewhere.
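To make that split concrete, here's a minimal sketch with made-up names (Sessions, Documents): the client may ask for anything, but both the identity check and the ownership check happen on the server:

```
public class DocumentApi {
    // hypothetical server-side stores
    interface Sessions { String lookup(String token); }
    interface Documents { boolean isOwner(String docId, String userId); void delete(String docId); }

    private final Sessions sessions;
    private final Documents documents;

    DocumentApi(Sessions sessions, Documents documents) {
        this.sessions = sessions;
        this.documents = documents;
    }

    /** Authentication says who is asking; authorization decides whether they may. */
    String deleteDocument(String sessionToken, String docId) {
        String userId = sessions.lookup(sessionToken);  // authentication
        if (userId == null) return "401 Unauthorized";
        if (!documents.isOwner(docId, userId))          // authorization: never trust the client's own check
            return "403 Forbidden";
        documents.delete(docId);
        return "204 No Content";
    }
}
```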
Second, your users have to trust you a little bit, so minimize that trust surface. With stuff like Google APIs, only ask for stuff you really need. I know I'm one of the few people who really read approval screens, but I have rejected and will continue to not use services that ask for too many permissions. Find the ones that just fit what you need (such as drive.file instead of drive), and don't ask for more. Don't ask for offline access unless you really mean it - I'd rather have to go through the flow again if my 30-minute token expired than let you keep a refresh token on your server (but again, the danger of the refresh token is mitigated by asking for the minimum).
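As a concrete illustration of minimal scope, here's roughly what a Google authorization URL with only drive.file access looks like (the client id and redirect URI are placeholders):

```
public class GoogleAuthUrl {
    // placeholders: your real OAuth client id and registered redirect URI
    static final String CLIENT_ID = "YOUR_CLIENT_ID";
    static final String REDIRECT_URI = "https://example.com/oauth/callback";

    public static void main(String[] args) {
        String authUrl = "https://accounts.google.com/o/oauth2/v2/auth"
                + "?client_id=" + CLIENT_ID
                + "&redirect_uri=" + REDIRECT_URI
                + "&response_type=code"
                // narrow scope: per-file Drive access rather than the whole Drive
                + "&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.file";
        // note what's absent: no "access_type=offline", so no long-lived
        // refresh token gets issued to sit on anyone's server
        System.out.println(authUrl);
    }
}
```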
Take Twitter keys, for example (the most shared secrets, according to the article). Instead of hard-coding the secret in the app, just implement the OAuth authentication as it should be done: you store the secret key on your server, obtain the request token from your server, and then redirect the user to Twitter's authorization page, instead of issuing the request to Twitter from the app itself.
You will say: why should I do that if an attacker could just issue requests to my server and still interact with Twitter under my app credentials? Here are some reasons (a sketch of the server-side signing step follows the list):
1. You now have control over accepting the requests or not (banning IPs, etc.). If your app is pretty popular, spammers may try to use your app credentials to interact with Twitter, since you will have a better reputation than newly generated credentials.
2. If your key is disabled by Twitter for abuse, you can replace it with a new one without having to update the app itself.
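The crucial property of that flow is that the consumer secret never leaves your server: it is only ever used there to sign requests. A minimal sketch of the signing step (HMAC-SHA1 per RFC 5849; building the signature base string from the normalized request parameters is omitted here, since most people use a library for that part):

```
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class OAuthSignature {
    /**
     * HMAC-SHA1 over an OAuth 1.0a signature base string (RFC 5849, section 3.4).
     * For the initial request-token call, tokenSecret is the empty string.
     */
    static String sign(String baseString, String consumerSecret, String tokenSecret)
            throws Exception {
        String key = percentEncode(consumerSecret) + "&" + percentEncode(tokenSecret);
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8)));
    }

    static String percentEncode(String s) {
        // RFC 3986 encoding; URLEncoder does form encoding, so fix up the differences
        return URLEncoder.encode(s, StandardCharsets.UTF_8)
                .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
    }
}
```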
Disclaimer: I work for Uber. These questions/opinions are entirely my own as a curious engineer.
> These secrets belonged to a lot of different 3rd party services, for example Uber’s secret which can be used to send in-app notification via the uber app.
In every Apple application - aren't these keys a one off, created by the client?
"The device token included in each request represents the identity of the device receiving the notification. APNs uses device tokens to identify each unique app and device combination. It also uses them to authenticate the routing of remote notifications sent to a device. Each time your app runs on a device, it fetches this token from APNs and forwards it to your provider. Your provider stores the token and uses it when sending notifications to that particular app and device. The token itself is opaque and persistent, changing only when a device’s data and settings are erased. Only APNs can decode and read a device token."
> In every Apple application - aren't these keys a one off, created by the client?
You just need the server token and a phone number to send the notification. (The GCM token already in Uber's possession can be used to send notifications to anyone at will.)
A related question from an ignoramus: how do you do this correctly in a JavaScript library? E.g. I was working on a tiny JS library which uses the Flickr API to fetch images from a specific album, and having a separate server just to hold the Flickr API key or talk to the Flickr API seems like overkill.
There is no other way. You can't do any IP blocking, since it is a client-side web app, and for the same reason all data and strings will be visible to the users.
I imagine, however, that a small wrapper won't be too hard to write. Make sure you put a good limit on it, though (based on user, IP, or a combination, whatever fits), so yours is not used to freely access Flickr.
Authenticating using OAuth is one pattern that can avoid having to maintain an API passthrough service of your own. It might not make sense for all apps, though - especially if your app doesn't have any notion of an account or signing in right now.
You can generate keys for S3 on your server, allowing temporary upload or download access to specific files. Generating an IAM role per app user is not practical.
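For S3 specifically, presigned URLs are the standard way to hand out that kind of temporary, per-object access. A sketch with the AWS SDK for Java v2 (bucket and key names are placeholders):

```
import java.time.Duration;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

public class UploadUrlService {
    // the presigner reads AWS credentials from the server's environment;
    // those credentials never reach the app
    private final S3Presigner presigner = S3Presigner.create();

    /** A URL the app can PUT to for the next 10 minutes, scoped to one object. */
    public String temporaryUploadUrl(String userId) {
        PutObjectRequest put = PutObjectRequest.builder()
                .bucket("my-app-uploads")               // placeholder bucket
                .key("users/" + userId + "/photo.jpg")  // key scoped to this user
                .build();
        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))
                .putObjectRequest(put)
                .build();
        return presigner.presignPutObject(presignRequest).url().toString();
    }
}
```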
I like to intercept iOS apps' traffic with Burp Suite[1] or Fiddler[2]. The trick is to have two adapters running on the same OS, one for the public Internet and the other acting as an ad-hoc hotspot. It's simply a case of letting Burp Suite sniff the traffic on the ad-hoc network and seeing what 'goodies' you find, like API keys.
Ignoring the results here, isn't the act of disassembling random Android APKs illegal? Theoretically, could one of the affected companies sue them?
Also, if services only give you a single secret key per developer, what is the alternative? Proxying all API requests through your servers might work, but only at low volume, and only if the service doesn't have per-IP rate limits.
Most other schemes would require cooperation from the API providers, right?
No, in most countries it's not illegal. It may be a violation of the Terms of Service or other contracts, but that doesn't apply if you didn't have to sign it to get the APK.
> Proxying all API requests through your servers might work, but only at low volume, and only if the service doesn't have per-IP rate limits.
No paid API (intended to be used by a server rather than an end user) has per-IP rate limits. If it's a free API, you are either using it in a way they did not intend, or the API secret is generally not truly secret (granting you special privileges), just a way to identify developers.
As for low volume: obviously you should have servers to handle the traffic from your app. If you aren't low-volume, I'm sure you can afford more servers.
There's no hidden motive here. It's interesting work, but usually this type of research is done on malware samples and the like, and not "regular" apps. That's why I'm curious if the authors should fear legal ramifications, and if so why (or why not).
I'm mainly interested in answers for the US, Germany/EU, and Switzerland.