My DH comment was a bit beside the point I probably should have made. (Also, apologies for my sarcasm re: bash pipes -- that was unnecessary and probably unproductive.)

> Public key cryptography can be used with JWT tokens but they don't solve the problem of how the client will generate key pairs, demonstrate proof of possession of the private key, and enrol the public key with the API.

JWT is not in any way attempting to solve the problem of client identity and authentication. Rather, it addresses the question of federated user identity and how to validate that the identity assertion came from a trusted source (which is where the PKI and assertion signing come in).

Furthermore, the token is signed and carries, among other claims, an audience claim, so you can cryptographically verify that it was issued with a user's authorization, by a trusted service (your JWT provider, via whatever authN methods it allows), and to a given client. This should give a substantial enough audit trail to reasonably prove that an end user authorized a client (which itself had to authenticate to the provider) to perform an API transaction, provided you can show that the signature was validated and that the token issuer was clear about exactly what the user was authorizing the client to do.
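
For illustration, here's a minimal verification sketch using Python's PyJWT library; the audience and issuer values are placeholders, not anything from this thread:

    # A minimal sketch with PyJWT (pip install pyjwt[crypto]); the audience
    # and issuer values below are placeholders.
    import jwt

    def verify_token(token: str, provider_public_key_pem: str) -> dict:
        # Raises an exception if the signature, audience, issuer, or
        # expiration check fails; returns the verified claims otherwise.
        return jwt.decode(
            token,
            provider_public_key_pem,
            algorithms=["RS256"],             # pin the algorithm; never trust the header alone
            audience="my-client-id",          # placeholder: your registered client_id
            issuer="https://id.example.com",  # placeholder: your provider's issuer URL
        )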

OAuth2, OIDC, and all modern standards I'm aware of also require client validation of some form. From the OAuth2 spec:

    Confidential clients are typically issued (or establish) a set of
    client credentials used for authenticating with the authorization
    server (e.g., password, public/private key pair).
This implementation is unspecified in OAuth2, but it could (and in your case probably should) include digitally signing each API request (much like Twitter and Amazon require) with a private key, then validating the signature against the client's registered public keys as well as the token's constraints (especially audience, scope, and expiration).
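
As a rough sketch of that idea (the header names and canonical string format here are invented for illustration; a real deployment would pick and document its own):

    # Hypothetical per-request signing sketch; assumes the client's RSA
    # public key was registered with the API out of band.
    import base64
    import hashlib
    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def sign_request(method: str, path: str, body: bytes) -> dict:
        # Canonicalize exactly what the server will independently reconstruct.
        timestamp = str(int(time.time()))
        body_hash = hashlib.sha256(body).hexdigest()
        canonical = "\n".join([method, path, body_hash, timestamp]).encode()
        signature = private_key.sign(canonical, padding.PKCS1v15(), hashes.SHA256())
        return {
            "X-Timestamp": timestamp,
            "X-Signature": base64.b64encode(signature).decode(),
        }

    headers = sign_request("POST", "/v1/payments", b'{"amount": 100}')

The server recomputes the canonical string from the request it received, verifies the signature against the client's registered public key, and rejects stale timestamps to block replays.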

If your goal is to provide a non-repudiable audit trail of user identity and authorization, and of client identity and agency (authorized by the user to perform X), then OIDC, JWT, and client authN via request signing with registered keys should be more than sufficient to avoid liability in the case of rogue clients or shady users. As always, the audit trail is the most important piece, along with sound crypto and standard practices that have been audited by appropriate experts, so that your audit evidence cannot reasonably be called into question.




I completely agree with everything you said, and I know a bit about OIDC too, but since I'm far from mobile/3rd-party app development, there is one thing I don't understand: how do client credentials (confidential client) work for a mobile app installed from the marketplace? You'd need dynamic client registration, right? I know there's a spec for that, and I think I understand the mechanics. That would let you identify the client, but I'm not sure you can ever identify the app developer with it (if needed for audit purposes). Or am I missing something, maybe?


I think for that you'd probably need even more robust client authentication, like allowing client developers to give you a CSR for their own CA, which can then sign CSRs generated by each app installation, so the chain can be traced from an individual app, through the developer's CA, back to a trusted internal root certificate.

That lets developers maintain key confidentiality (devs keep their CA private keys) and maintain control over the app installations' access to signed certs (as well as cert lifetime).
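
A rough sketch of that chain with Python's cryptography library (the names, key sizes, and lifetimes are illustrative, not anything prescribed here):

    # Sketch: a developer CA signs a per-install CSR. The CA's own cert
    # would in turn chain up to your trusted internal root.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Each app installation generates its own key pair and a CSR.
    install_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "app-install-1234")]))
        .sign(install_key, hashes.SHA256())
    )

    # The developer's CA signs the CSR; its private key never leaves the dev.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Dev CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_name)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))  # short cert lifetime
        .sign(ca_key, hashes.SHA256())
    )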

Even if it's not a full CA, OIDC has some brief words on signing JWTs with a registered keypair, which gives a similar, though less robust, ability to keep the private key secret.
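
That looks roughly like the following (a sketch in the style of OIDC's private_key_jwt client assertion; all values are placeholders):

    # Sketch: the client authenticates by signing a short-lived JWT with
    # the private key whose public half it registered with the provider.
    import time
    import jwt  # pip install pyjwt[crypto]

    def make_client_assertion(client_id: str, token_endpoint: str, key_pem: str) -> str:
        now = int(time.time())
        claims = {
            "iss": client_id,       # the client asserts its own identity
            "sub": client_id,
            "aud": token_endpoint,  # bound to this specific token endpoint
            "iat": now,
            "exp": now + 60,        # short-lived on purpose
        }
        return jwt.encode(claims, key_pem, algorithm="RS256")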

No matter what, all of these scenarios still involve figuring out a way to trust that the installed app is authorized by the resource owner and the client developer to obtain a signed cert/token (thus shifting real financial liability onto them in OP's scenario). Which probably means requiring the end user to register for your service as well, validating the user again rather than the app.

The fundamental fact is that the human mind remains the only truly secret place, which is why passwords aren't going anywhere, and why DRM solutions have to rely on making it illegal to attempt to recover the decryption key embedded in a device, or on making attempted recovery physically destroy the key.


Yeah, I was thinking about the same thing. But then I saw the OP mention somewhere in the comments below that he's only thinking about the server-to-server scenario (2LO/client credentials), so it's the comments above discussing fake login UIs that confused me into thinking it was about the 3-legged flow.


I had the same questions, and it's very hard to find the answer - it took me a very long time to piece this together, but this is how Google does it:

1) You create a "normal" client in the Google Developer Console (i.e. a web client).

2) You create a native/Android client in the same project. This client is shared across all phones.

3) You add a scope of audience:server:client_id:$NORMAL_CLIENT_ID to auth requests from the mobile client.

4) You get back a token minted for the web client, from the native client!

This is how it works:

https://developers.google.com/identity/protocols/CrossClient...
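
To make step 3 concrete, the authorization request from the native client looks something like this (both client IDs and the redirect URI are made-up placeholders; the scope format is from the linked docs, and the endpoint shown is one of Google's standard OAuth2 endpoints):

    # Illustrative only: building the auth request with the cross-client
    # audience scope. Everything here except the scope format is a placeholder.
    from urllib.parse import urlencode

    NORMAL_CLIENT_ID = "1234-web.apps.googleusercontent.com"      # step 1's web client
    NATIVE_CLIENT_ID = "1234-android.apps.googleusercontent.com"  # step 2's native client

    params = {
        "client_id": NATIVE_CLIENT_ID,
        "response_type": "code",
        "redirect_uri": "http://localhost:8080",  # only localhost-style URIs allowed
        # Step 3: ask for a token minted for the web client.
        "scope": "openid email audience:server:client_id:" + NORMAL_CLIENT_ID,
    }
    print("https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params))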

The reason it is safe is that you can only do the cross-client stuff from a mobile client, which disallows any redirect URLs except for localhost and a couple of other special URIs (see https://developers.google.com/identity/protocols/OAuth2Insta...)

It's OK that the secret is not really secret, because it's not possible to use it to make a phishing site since the redirect URL is localhost.

I guess that doesn't answer your "how does it identify the app developer" question, but it does tell you how these things are deployed, at least, and the important fact that there's just one client (not one for every device).


I understand that. The problem is that I can "steal" another dev's app client_id and use it in my app. So it seems impossible to use such a client_id for auditing/evidence. With a web client I cannot do that, since I don't own the domain, so I can be proven to be a party in some transaction.


They should allow for push notifications. That'd be more secure.

At the end of the day, though, everyone has to sign their apps with certs that are pretty well validated. So it really cuts down on the funny business you mention.


The official recommendations for native apps are here: https://tools.ietf.org/html/draft-ietf-oauth-native-apps-01

They suggest using PKCE (challenge-response) https://tools.ietf.org/html/rfc7636 to authenticate clients that can't be trusted with a client secret.
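
The core of PKCE is small enough to sketch (per RFC 7636's S256 method):

    # Minimal PKCE sketch (RFC 7636, S256): the client keeps code_verifier
    # secret for the session and sends only the derived code_challenge in
    # the authorization request; it reveals code_verifier at the token
    # endpoint so the server can recompute and compare.
    import base64
    import hashlib
    import secrets

    code_verifier = secrets.token_urlsafe(64)  # unguessable, 43-128 chars
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

    # Authorization request carries: code_challenge, code_challenge_method=S256
    # Token request later carries:   code_verifier

This way an attacker who intercepts the authorization code still can't redeem it, because only the legitimate app instance knows the matching code_verifier.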


The problem with this is that you're rewriting client SSL, which isn't a very good idea considering how battle-tested and hardened it is. How do you know you've dealt with all the edge cases that the TLS spec writers have been wringing their hands over forever? Why do a one-off implementation of client auth inside the OAuth protocol? Why not use extremely well-baked pre-existing infrastructure like WebCrypto? It's not like you're magically going to be compatible with everything just because you follow the overly vague guidelines of OAuth2. All you get kudos for is reading the spec - but how does that help your users?

Security is a very hard problem, especially asymmetric crypto security. Rolling your own is generally not advised.

If there is a way to define and use client TLS according to the current spec, that would be best. If not, I agree that it's probably a good idea to create a new spec.

I agree though that the curlness of the spec is orthogonal to the discussion.


I don't think you read the piece?

- I am not reinventing TLS

- This has nothing to do with OAuth

- It uses WebCrypto for key gen and CSR signing


I was replying to stuart, not to you. He was trying to reinvent TLS.



