Cloudflare is only the first to market with a solution. If this proposal catches on, every WAF vendor under the sun will have it implemented before the next sales cycle. Enforcement of this standard will be commoditized down to nothing.
It cracks me up to no end how the dev tools are much better MCP clients than the web chatbots. Claude Code is so _so_ much better at MCP than Claude Web, which has issues managing DCR client state, is comparatively terrible at surfacing debug information, doesn't let regular users see under the hood to learn how tools are described or called, etc.
Using Claude Code or your IDE of choice to book a hotel is a fun unintended side effect of this.
Refresh tokens are only really required if a client is accessing an API on behalf of a user. The refresh token tracks the specific user grant, and there needs to be one refresh token per user of the client.
If a client is accessing an API on behalf of itself (which is a more natural fit for an API Key replacement) then we can use client_credentials with either client secret authentication or JWT bearer authentication instead.
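As a sketch of that flow: a client_credentials exchange is just a form-encoded POST to the token endpoint. The URL, client ID, secret, and scope below are all made up for illustration.

```python
import urllib.parse

# Hypothetical token endpoint and credentials, purely for illustration.
token_url = "https://auth.example.com/oauth/token"
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-service",
    "client_secret": "s3cret",
    "scope": "api.read",
})
# POST `body` to token_url with Content-Type: application/x-www-form-urlencoded.
# The JSON response carries access_token and expires_in; no refresh token is
# involved, because there is no user grant to track.
```

The same request shape works with a signed JWT client assertion instead of a raw secret, at the cost of a little more client-side machinery.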
That is a very specific form of refresh token but not the only model. You can just as easily have your "API key" be that refresh token. You submit it to an authentication endpoint, get back a new refresh token and a bearer token, and invalidate the previous bearer token if it was still valid. The bearer token will naturally expire; if you're still using it, just use the refresh token immediately, and if it's days or weeks later you can use it then.
There doesn't need to be any OIDC or third party involved to get all the benefits of them. The keys can't be used by multiple simultaneous clients, they naturally expire and rotate over time, and you can easily audit their use (primarily due to the last two principles).
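A toy sketch of that rotation model (class and storage names are invented; a real service would persist this state and add hashing, TTLs, and auditing):

```python
import secrets

class TokenService:
    """Sketch of the 'API key as refresh token' model described above."""

    def __init__(self):
        self.refresh_tokens = set()   # currently valid refresh tokens
        self.bearer_tokens = {}       # bearer token -> refresh token that minted it

    def issue_initial_key(self) -> str:
        rt = secrets.token_urlsafe(32)
        self.refresh_tokens.add(rt)
        return rt

    def exchange(self, refresh_token: str):
        if refresh_token not in self.refresh_tokens:
            raise PermissionError("unknown or already-rotated refresh token")
        # Rotate: the old refresh token dies, and so does any bearer it minted.
        self.refresh_tokens.discard(refresh_token)
        for bearer, rt in list(self.bearer_tokens.items()):
            if rt == refresh_token:
                del self.bearer_tokens[bearer]
        new_rt = secrets.token_urlsafe(32)
        bearer = secrets.token_urlsafe(32)
        self.refresh_tokens.add(new_rt)
        self.bearer_tokens[bearer] = new_rt
        return new_rt, bearer
```

Because each exchange invalidates the prior refresh token, two clients sharing a key will quickly lock each other out, which is exactly the "can't be used by multiple simultaneous clients" property.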
Many "softer" forms of SSO have trickled down too. Google + Microsoft OAuth are ubiquitous today without any upcharge. OAuth from a Google Workspace account managed by an IT admin has many of the same security guarantees as SAML or OIDC from a Google Workspace account, at least for a small player. There are some sketches like https://easie.dev/ that explore this further.
For extra security, an intermediary can set Content Security Policy (CSP) headers that instruct browsers to only connect to certain domains. CSP headers aren't a total solution, but they're a good tool in the toolkit for redundancy against exfiltration.
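For example, a restrictive policy might look like the header below (the API domain is hypothetical); `connect-src` is the directive that limits where fetch/XHR/WebSocket requests may go:

```
Content-Security-Policy: default-src 'none'; connect-src 'self' https://api.example.com
```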
SSO chaining is super common in large corporate environments. Different orgs might have their own SSO IDP, acquisitions often bring their own, etc. Once a provider is in use, it is quite difficult to tear out later while keeping everyone in their proper accounts in all the apps that tie in. Many apps are really bad at SSO migrations, or deduplicating multiple SSO identities to a single user account.
You'd need something at the browser/UA level to unsubscribe or to make the subscription exist for only a single message. Bad content publishers have taught us to never allow Web Push notifications since they always get inundated with marketing and other nonsense - being able to bake protections against that into the spec could be interesting.
I _love_ JWTs for API authentication - one of the nicest APIs I ever consumed was essentially JSON RPC over JWTs. Unfortunately they represent a huge usability hit over API Keys for the average joe. Involving cryptography to sign a JWT per request makes an API significantly harder to consume with tools like Postman or CURL. You can no longer have nice click-to-copy snippets in your public docs. You either have an SDK ready to go in your customer's language or ecosystem of choice, or you're asking them to write a bunch of scary security-adjacent code just to get to their first successful request. No, I don't have a JWT library recommendation for Erlang, sorry.
Not that an API couldn't support both API Keys and JWT based authentication, but one is a very established and well understood pattern and one is not. Lowest common denominator API designs are hard to shake.
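For a sense of the "scary security-adjacent code" involved, this is roughly what a minimal HS256 signer looks like with only the Python standard library. The claims and secret are placeholders; a real deployment would use a vetted JOSE library rather than hand-rolling this.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    # Compact-serialize header and payload, then HMAC-SHA256 the signing input.
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# Placeholder claims and secret, purely illustrative.
token = sign_jwt({"sub": "customer-42", "iat": int(time.time())}, b"shared-secret")
```

Compare that to pasting `Authorization: Bearer <key>` into curl, and the usability gap is obvious.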
That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request? What does that give you?
Also what made that API so nice? Was this a significant part of it?
> That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request?
Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
This was the way bearer tokens were supposed to be used.
> Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
The desirable properties for tokens are that they have some means of verifying their integrity, that they are being sent by the authorized party, and that they are being consumed by the authorized recipient.
A "reusable" bearer JWT with a particular audience satisfies all three - as long as the channel and the software are properly protected from inspection/exfiltration. Native clients are often considered properly protected (even when they open themselves up to supply chain attacks by throwing unaudited third party libraries in); browser clients are a little less trustworthy due to web extensions and poor adoption of technologies like CSP.
A proof of possession JWT (or auxiliary mechanisms like DPoP) will also satisfy all three properties - as long as your client won't let its private key be exfiltrated.
It is when you can't have all three properties that you start looking at other risk mitigations, such as making a credential one time use (e.g. first-use-wins) when you can't trust it won't be known to attackers once sent, or limiting validity times under the assumption that the process of getting a new token is more secure.
Generally an extremely short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful. Persisting every token ever received would be costly and have a latency impact. Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing. Five minutes is a common value here, and some software will reject a lifetime of over an hour because of the cost of enforcing the single use policy.
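To make the statefulness concrete: a first-use-wins guard has to remember every jti it has accepted until that token's exp passes, which is why short lifetimes keep the policy affordable. A Python sketch (names are illustrative):

```python
import time

class FirstUseWins:
    """Track each accepted jti until its exp passes; reject replays. Sketch only."""

    def __init__(self):
        self.seen = {}  # jti -> exp timestamp

    def accept(self, jti: str, exp: float) -> bool:
        now = time.time()
        # Evict expired entries so storage stays bounded by tokens still in flight.
        self.seen = {j: e for j, e in self.seen.items() if e > now}
        if exp <= now or jti in self.seen:
            return False
        self.seen[jti] = exp
        return True
```

With five-minute tokens the map stays small; with week-long tokens it grows without bound, which is the cost being described.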
I haven't seen recommendations for single-digit-minute times for re-issuance of a multi-use bearer token though (such as for ongoing API access). Once you consider going below 10 minutes of validity there, you really want to reevaluate whatever your infrastructure requirements were that previously ruled out proof-of-possession (or whether your perceived level of risk aversion is accurately represented in your budget).
> The desirable properties for tokens are that they have some means of verifying their integrity, that they are being sent by the authorized party, and that they are being consumed by the authorized recipient.
Those are desirable properties.
But session hijacking is a known problem. You have to expect a scenario where an attacker fishes one of your tokens and uses it on your behalf to access your things.
To mitigate that attack vector, either you use single-user tokens or short-lived tokens.
Also, clients are already expected to go through authentication flows to request tokens with specific sets of claims and/or scopes to perform specific requests.
Single-use tokens were expected to be the happy flow of bearer token schemes such as JWTs. That's how you eliminate a series of attack vectors.
> Generally an extremely short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful.
Not quite.
Single-use tokens are stateful because resource servers need to track a list of revoked tokens. But "stateful" only means that you have to periodically refresh a list of IDs.
Short-lived tokens are stateless. A JWT features "issued at" time, "not before" time, and "expiration" time. Each JWT already specifies the time window when resource servers should deem it valid.
> Persisting every token ever received would be costly and have a latency impact.
No need. As per the JWT's RFC, JWTs support the JWT ID property. You only need to store the JWT ID of a revoked token, not the whole token. Also, you only need to store it during the time window it's valid.
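To make that concrete, here is a sketch of that check in Python. It assumes the signature has already been verified, and the claim names follow RFC 7519 (`nbf`, `exp`, `jti`); the revocation store is a plain dict for illustration.

```python
import time

revoked = {}  # jti -> exp; entries can be dropped once exp has passed

def revoke(jti: str, exp: float) -> None:
    revoked[jti] = exp

def is_valid(claims, now=None) -> bool:
    # Stateless time-window check (nbf/exp), then a lookup in the small
    # revocation map, which only ever holds jtis of still-live tokens.
    now = time.time() if now is None else now
    if not (claims.get("nbf", 0) <= now < claims["exp"]):
        return False
    jti = claims.get("jti")
    return jti not in revoked or revoked[jti] <= now
```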
> Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing.
I think "easier" is the only argument, and it's mostly about laziness.
Authentication flows already support emitting both access tokens and refresh tokens, and generating new tokens is a matter of sending a request with a refresh token.
Ironically, the "easy" argument boils down to arguing in favor of making it easy to pull off session hijacking attacks. That's what developers do when they fish tokens from some source and send them around.
> I haven't seen recommendations for single-digit-minute times for re-issuance of a multi-use bearer token though (such as for ongoing API access). Once you consider going below 10 minutes of validity there, you really want to reevaluate whatever your infrastructure requirements were that previously ruled out proof-of-possession (or whether your perceived level of risk aversion is accurately represented in your budget).
This personal belief is not grounded in reality.
It's absurd to argue that clients having to make a request every 5 minutes is something that requires you to "reevaluate your infrastructure requirements". You're able to maintain infrastructure that handles all requests from clients, but you draw the line at sending a refresh request every few minutes?
It's also absurd to argue about proof-of-possession and other nonsense. The token validation process is the same: is the token signed? Can you confirm the signature is valid? Is the token expired/revoked? This is something your resource servers already do on each request. There are no extra requirements.
You're effectively talking about an attacker breaking HTTPS, aren't you? Unless you can detail another way to get at a user's token. I'm curious to hear about it.
I did, and XSS and session sniffing, both listed on the OWASP page, would be prevented by following OAuth flows. So that just leaves MITM, which, as I said, effectively means breaking HTTPS.
Each JWT was passed as a query param over a 307 redirect from my service to the other side, so the JWT itself was the whole request to prevent tampering from the browser. It was for an internal tool that did one thing, did it well, and never caused me any problems.
Back in the day I worked at a place that had HMAC signing on an http endpoint.
50% of the support issues were because people could not properly sign requests and it caused me to learn how to make http in all sorts of crap to help support them.
Easy to imagine that haha. That’s part of the reason I’d lean on a standard like JOSE and make signing happen automatically for users who prefer to use an SDK.
> Unfortunately they represent a huge usability hit over API Keys for the average joe. Involving cryptography to sign a JWT per request makes an API significantly harder to consume with tools like Postman or CURL.
Just generate the JWT using, e.g. https://github.com/mike-engel/jwt-cli ? It’s different, and a little harder the first time, but not any kind of ongoing burden.
IMO this is a tooling issue. You can make your SDK generate keys and even base64 encode them so they appear opaque to the uninitiated (like an API key)
Installing a dependency for myself is indeed just "a little harder the first time." Asking every developer who will ever consume my service over CURL to install a dependency is absolutely an ongoing burden.
I am pretty sure that with the right tooling, JWTs (or something similar) could be much easier to use and could serve more needs/use cases than they do today.
Even the very foundational libraries needed to create/sign/handle JWTs in many programming languages are kind of clunky. And I think subconsciously as developers when we encounter clunky (ie high accidental complexity) libraries/apis we sense that the overall project is kind of amateurish, or will take some trial and error to set up properly. Sometimes that's no big deal, but with auth you can't afford to risk your company or product on someone's side project.
For example, in Go, there is really only one major jwt implementation in use [0] and it's the side project of some guy with a day job working on protobufs [1,2]. Also, with all due respect to the contributors because it's a good library considering the level of support it has, it is just not easy to use or get started with.
Part of the problem is also that the JWT specification [3,4] is a bad mix of overly prescriptive and permissive regarding "claims". I actually think it needs to be replaced with something better because it's a serious problem: it adds a bunch of unnecessary fluff to deal with special claims like "email_verified" when that use case could easily just be treated like any other application-specific jwt payload data, AND it then adds a bunch of complexity because almost everything is optional.
Then of course there's the giant problem of handling your own private keys and identity/security infrastructure + all the associated risks. Nothing mature makes that easy, so everybody naturally prefers to delegate it to auth providers. But that tends to make it hard to fully leverage the underlying jwts (eg with custom claims) and might force you into an authorization model/impl that's less flexible than what JWTs actually support, because now you have to use your auth provider's apis.
I think there really needs to be some kind of all-in-one library or tool for developers to operate their own hmac/jwks/authn/authz safely with good enough defaults that typical use cases require ~no configuration. And maybe either a jwtv2 or second spec that strips out the junk from the jwt spec and prescribes basic conventions for authorization. That's actually the only realistic path to fully leveraging jwt for identity and authz, because you couldn't build something like that on top of auth providers' APIs since they're too restricted/disparate (plus the providers are incentivized to sneakily lock you in to their APIs and identity/authz).
Anyway, this is a project I've been toying with for about a year now and we have some funding/time at my company to start tackling it as an open source project. Hit me up if you're interested.