The infuriating thing is that this isn't necessary for CLI tooling. This approach is taken because you need a way to get the token to a local process even when the user is authenticating in a browser. It can be avoided by having the process listen on localhost and then having the login flow redirect to localhost (including the token) on successful completion.
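A minimal sketch of that loopback approach, assuming a provider that redirects back with the token in the query string (the port and parameter name are illustrative):

```python
# Sketch: a CLI starts a one-shot HTTP server on localhost, sends the user
# to the provider's login page with redirect_uri pointing back at that
# server, and reads the token off the redirect. Hypothetical endpoint names.
import http.server
import urllib.parse

def wait_for_token(port=8912):
    """Block until the browser redirect delivers ?token=... to localhost."""
    result = {}

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            query = urllib.parse.urlparse(self.path).query
            result.update(urllib.parse.parse_qs(query))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Login complete; you can close this tab.")

        def log_message(self, *args):  # keep the CLI's output clean
            pass

    server = http.server.HTTPServer(("127.0.0.1", port), Handler)
    server.handle_request()  # serve exactly one request, then return
    server.server_close()
    return result.get("token", [None])[0]
```

The CLI would open something like `https://provider.example/login?redirect_uri=http://127.0.0.1:8912/` in the browser before calling `wait_for_token()`.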
Unfortunately this doesn't work for CLI tools on a remote machine (say, VS Code Remote over SSH). The browser redirect to localhost won't work because the CLI tool isn't on localhost.
Something akin to ssh agent-forwarding ("oauth-forwarding"?) is really needed.
And it needs to be integrated as well as support for jumphosts is.
Haven't seen anything like this, I'll try to bring this up with the openssh folks.
Curl can connect over unix domain sockets and ssh can forward them. This seems like a decent way to forward authentication, since filesystem access-control rules would apply to the sockets.
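A sketch of what that could look like: the local machine serves the token over a unix socket, forwards it with something like `ssh -R /tmp/auth.sock:/tmp/auth.sock host`, and the remote side fetches it with `curl --unix-socket /tmp/auth.sock http://localhost/`. The socket path, permissions, and wire format here are all illustrative:

```python
# Sketch: serve a locally obtained token over a unix domain socket so that
# ssh socket forwarding can carry it to a remote machine. Illustrative only.
import os
import socket

def serve_token_once(sock_path, token):
    """Answer a single HTTP-ish request on a unix socket with the token."""
    if os.path.exists(sock_path):
        os.unlink(sock_path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    os.chmod(sock_path, 0o600)  # the access control the comment refers to
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)  # discard the request line/headers
    body = token.encode()
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                 % (len(body), body))
    conn.close()
    srv.close()
```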
My SSH usage has multiple servers (staging, dev, etc.) and multiple clients (laptop, desktop). Some of those connections go through jumphosts.
Setting up SSH tunnels would be possible, but a major pain: every source/destination combination would need its own port, and every sign-in would have to specify the port number.
Compared to the current system, which prints a URL in the terminal that I just need to click, it would be a major usability regression.
Some logins require identity and access management (IAM) with a web-only interface. If such a gateway exists, I guess a CLI tool would have to give the user a link to open themselves?
"If" is a good word. The setting above is usually found in environments where network security is a bit paranoid, so shell access won't help against the lack of a hole through the firewall somewhere in between, unless there's a way to use avian carriers.
The thread we are in is talking specifically about using CLI tools on a remote machine, which is why I mentioned it. If you have that, you don't need a hole in the firewall.
Their machine would be pwned, but their second factor would not be compromised if it were something like a YubiKey, so the attacker couldn't use the compromised host to SSO into other systems and enlarge the compromise. That's why the YubiKey requires a touch: an attacker can't remotely trigger it even if they totally own the host it's plugged into.
That's the point of TFA - unphishable second factors and ways to make them phishable. I'm saying that using the clipboard would be a bad idea in this case.
This attack, like OP says, is not new. In a corporate environment you simply prevent all users except one or two admins/approvers from allowing third-party authorizations.
For consumers, my suggestion is that federation providers (Auth0, GitHub, Google, etc.) review and human-approve applications that ask users for authorizations.
What's worse is if you can get rudimentary access to the target. If you can force a deauth (usually by just DoSing the domain), you can force them through the flow again. And while the domain is DoSed, you can authenticate at the same time from a non-DoSed route, so they end up authenticating the attacker instead of themselves.
In my experience, tools don't distinguish a 409 from a disconnect. They just see "error, need to reauth" (Docker, cough).
It would be great if OAuth could include some form of cryptographic attestation—similar to what is already present in WebAuthn—to ensure that only trusted devices can get authorization tokens.
In the case of CLI app authorization (where you are proving that the refresh + access tokens are being retrieved on the same device that issued the request), the CLI could generate a local key, store it in the TPM/keychain, and then in the browser you could prove that you have access to that same key.
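The shape of that possession proof could look like the following challenge-response sketch. An HMAC over a device-local secret stands in for the asymmetric TPM/keychain key a real flow would use, and all function names are illustrative:

```python
# Sketch: token issuance gated on proof that the browser session and the
# CLI share access to the same device-local key. HMAC over a random secret
# stands in for a real asymmetric key held in a TPM/keychain.
import hashlib
import hmac
import secrets

def cli_generate_key():
    """CLI side: generate the local key (would live in the TPM/keychain)."""
    return secrets.token_bytes(32)

def cli_prove(key, challenge):
    """CLI side: answer the server's challenge using the local key."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def server_verify(registered_key, challenge, proof):
    """Server side: only issue refresh + access tokens if the proof shows
    the requester holds the same key that started the flow."""
    expected = hmac.new(registered_key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)
```

With an asymmetric key the server would hold only the public half, so even a leaked server database couldn't forge proofs.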
For devices, direct attestation could authenticate the device making the request (e.g. as a legitimate MacBook Pro, or something).
Of course, this depends on services choosing to implement such flows, and when you introduce a requirement for a TPM or similar, plus multiple cryptographic steps, implementors are likely to get lazy and just do something that works but is insecure (or they implement the flows badly with home-rolled crypto).
> It would be great if OAuth could include some form of cryptographic attestation
This is, as they say, a "known issue". Bearer tokens were defined in RFC 6750 and the thought was that more types of tokens would follow, including some that bound tokens and clients.
It took a while.
RFC 8705, mentioned elsewhere in thread, is one approach.
I guess this is what Heroku was pushing for [1] when client tokens were leaked. They wanted GitHub to adopt RFC 8705 [2], which combines mutual TLS auth with the tokens, so that the tokens can only be used by authorized clients, not just anyone in possession of them.
I think the only MacBook Pros with real TPMs were the 1- and 2-series from 2006/2007. Right now Apple doesn't allow you to do device identity attestation with the secure element. But in the world where you take this approach, you don't just tie issuance to "I have a hardware keystore"; you tie it to "this hardware keystore has an identity that I know belongs to a computer I own."
But the entire thread is about delegating the authentication, granting access to another machine or process. You want to give access to one device (e.g. TV) from the trusted one (e.g. iPhone).
Both devices might be very pricey, real, up-to-date machines with TPM and keys and everything you want. The problem is that you can't tell whether the TV is the one TV that the iPhone's owner means to sign in.
Yes, but the idea is that you'd enroll the device (and its TPM), and thence wouldn't be phishable like this. Granted, there might still be a problem at device enrollment time.
See sibling thread, we're being duplicated; the point of this OAuth flow is to sign in on a different device, using the trusted one. That different device might be a legitimate TV with a TPM and cryptographic attestations that it truly belongs to John Doe; there is still no way for your iPhone and Apple to check whether you meant to sign in to John Doe's TV, or whether John Doe is a scammer who sent you the (legitimate) sign-in link over email.
Okta actually supports this with their device identity policy, but in this scenario the IdP doesn't necessarily have insight into who's issuing the token (The AWS case involves AWS getting a valid auth from Okta and then issuing the token) so that wouldn't work. RFC 8705 covers binding tokens to an mTLS Identity but basically nobody supports that.
WebAuthn uses such a directory already. Most implementations validate the attestation against a public database of ‘trusted’ device types (and DAA enables this to be done without compromising anonymity, up to the uniqueness of a device type)
That's not a trust statement, and it's not reliable as a proof. You can reliably tell you've seen this authenticator before, but that doesn't solve the problem being described here.
Their most interesting suggestion is to use the Hybrid transport of CTAP2.2 (not published yet) to perform cross device authorization in a secure way.
This involves proving proximity over Bluetooth Low Energy, plus a key exchange. The WebAuthn flow then happens over an encrypted channel through a TURN server.
Problem is that your CLI tool now needs access to BLE. We're not there yet.
I don't quite understand the flow here, can someone explain?
It seems to me that you're on evilsite.com and you get a screen to authorize your AWS account, which evilsite.com then gets and can log in to your AWS account. In that case, however, I'm aware that I'm browsing evilsite.com, so what's the issue?
It's like evilsite.com requesting OAuth permissions to my Twitter account, no? We don't need the RFC for that, it's just what OAuth normally does, and you're supposed to be careful who you give permission to, no?
You could receive a spearphishing email from an AWS lookalike address, which includes a link to the real AWS login page, which you auth to legitimately but which then sends your credentials to somebody else.
This has a link expiry problem, as described in the article, but you can do similar things with e.g. a background tab that swaps itself out, starting the flow on demand when you go back to the tab.
That might catch quite a few people: a web page which, after X minutes in the background, changes its favicon to AWS, and then the next time you open it immediately starts a device code flow and redirects to the real AWS login page. It looks just like you've gone back to an AWS tab that's just been logged out, and it's the real bona fide login, so you can't tell the difference unless you remember where the tab came from. I imagine if you're working in AWS all day it wouldn't be so unusual, and you'd just log in again.
Normally, IIRC, OAuth will just transport the token in a browser redirect to the allowed domain that linked to the login page. evil.com will not be allowed, or if it is, the token will end up scoped to evil.com.
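The check being described is the authorization server matching the requested redirect target against the client's pre-registered redirect URIs before sending the token/code anywhere. A toy sketch (client IDs and URIs are made up):

```python
# Sketch: the authorization server only redirects (with code/token
# attached) to URIs pre-registered for that client, so evil.com can't
# receive the token even if it starts the flow. Illustrative data only.
REGISTERED = {
    "my-twitter-dashboard": ["https://dashboard.example/callback"],
}

def allowed_redirect(client_id, redirect_uri):
    """Exact-match the requested redirect_uri against the client's
    registered list; anything else is rejected outright."""
    return redirect_uri in REGISTERED.get(client_id, [])
```

Exact string matching (rather than prefix matching) is what specs recommend here, since prefix matches have historically been bypassed with open redirects on the allowed domain.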
If evil.com requests access to manage my Twitter account, and it fools me into accepting, why does it matter how the token is transported? Evil.com now has access to my Twitter account.
> "This can be avoided by having the process listen on localhost, and then have the login flow redirect to localhost (including the token) on successful completion."
I think this is what the AWS Client VPN client for Ubuntu does. So AWS does have the method in their tool set somewhere, though I imagine it's owned by an entirely different team than their CLI.
> This can be avoided by having the process listen on localhost, and then have the login flow redirect to localhost (including the token) on successful completion.
I'm confused, isn't having the device listen on localhost necessary for the device authorization grant flow? What's the alternative (that, apparently, people are doing but shouldn't be)?
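For what it's worth, in the device authorization grant the device never listens anywhere: it shows the user a code and a URL, then polls the token endpoint until the user approves on another device. A sketch of that shape, with `request_codes` and `poll_token` standing in for the real HTTP calls to the provider:

```python
# Sketch of the device authorization grant: show user_code + URL, then
# poll the token endpoint; no localhost listener involved. The endpoint
# functions are injected stand-ins for real HTTP requests.
import time

def device_flow(request_codes, poll_token, interval=0.01, attempts=50):
    """Run the device grant against injected endpoint functions."""
    codes = request_codes()  # {"device_code", "user_code", "verification_uri"}
    print(f"Visit {codes['verification_uri']} and enter {codes['user_code']}")
    for _ in range(attempts):
        token = poll_token(codes["device_code"])  # None until user approves
        if token is not None:
            return token
        time.sleep(interval)
    raise TimeoutError("user never approved the device")
```

The localhost-listener approach is the alternative for CLIs running on the same machine as the browser; the device grant exists precisely for when that isn't possible, which is also what makes it phishable as the article describes.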