Great article and I agree with all of the criticisms.

There is a big problem with internet standards where the "division of thought" leaves a gap in the middle. The standard author thinks "I am just designing a template for people to follow, it's up to the implementor to consider whether what they're doing actually makes sense for their use-case" and the person implementing that standard thinks "I have implemented it exactly as described, therefore all uses must be sane/secure/whatever".

For example, the HTTP signatures draft standard: https://tools.ietf.org/id/draft-cavage-http-signatures-12.ht...

It specifies a pattern for signing and verifying HTTP requests. You can choose what headers to include in the signature. Now "obviously" any implementation of this standard should require you to specify up-front what headers clients must include in the signature for a request to be valid, but the specification does not mention this at all in the verification section, and none of the reference implementations actually allow you to do this! So as an attacker, I can craft any request I want and just tell the server "I'm not including any headers in the signature".
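
A minimal sketch of the missing check, assuming the draft's Signature header format (keyId="...",headers="(request-target) host date",signature="..."); the required-header list, names, and fail-closed default are my own choices, not anything the spec or the reference implementations provide:

    // Headers the verifier insists the signature must cover.
    const REQUIRED_SIGNED_HEADERS = ["(request-target)", "host", "date", "digest"];

    // Pull the space-separated list out of the `headers` parameter.
    // If the parameter is missing, treat nothing as covered (fail closed).
    function signedHeaders(signatureHeader: string): string[] {
      const match = signatureHeader.match(/headers="([^"]*)"/);
      return match ? match[1].toLowerCase().split(/\s+/) : [];
    }

    // Reject any request whose signature omits a required header.
    function coversRequiredHeaders(signatureHeader: string): boolean {
      const covered = new Set(signedHeaders(signatureHeader));
      return REQUIRED_SIGNED_HEADERS.every((h) => covered.has(h));
    }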

Similarly: you can include the date header in the signature to help prevent replay attacks, but nowhere does it say that you probably want to check that the date on the request is actually within some threshold of the current time.
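
Something like the following would do; the five-minute window is an arbitrary illustrative choice, not something the spec suggests:

    // Reject requests whose Date header is too far from the server clock.
    const MAX_CLOCK_SKEW_MS = 5 * 60 * 1000; // 5 minutes, chosen arbitrarily

    function dateWithinWindow(dateHeader: string, now: Date = new Date()): boolean {
      const sent = new Date(dateHeader); // parses dates like "Tue, 07 Jun 2014 20:51:35 GMT"
      if (Number.isNaN(sent.getTime())) return false; // unparseable dates fail closed
      return Math.abs(now.getTime() - sent.getTime()) <= MAX_CLOCK_SKEW_MS;
    }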

And again: you can include the message digest in the signature, but nowhere does it say that you should actually verify that digest against the body. Worse, message digests are an entirely separate spec, and no constraints are given on which digest algorithms should be used (many of them are not designed to be cryptographically secure). On top of that, message digests only make sense for requests with a body, so even if your library can "require certain headers to be present", there is likely no way to enforce a digest header only on requests that have a body.
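
A sketch of what a verifier could do instead: pin the algorithm, recompute over the raw body, and only expect a Digest when there is a body. None of this comes from the spec or the reference implementations; it is just one way to close the gap.

    import { createHash, timingSafeEqual } from "node:crypto";

    // Verify an RFC 3230 style "Digest: SHA-256=<base64>" header against the raw body.
    function verifyDigest(digestHeader: string | undefined, body: Buffer): boolean {
      if (body.length === 0) return digestHeader === undefined; // no body: no Digest expected
      if (!digestHeader) return false;                          // body without Digest: reject

      const idx = digestHeader.indexOf("=");
      if (idx < 0) return false;
      const algorithm = digestHeader.slice(0, idx).trim().toUpperCase();
      const expected = digestHeader.slice(idx + 1).trim();
      if (algorithm !== "SHA-256") return false; // refuse MD5, UNIXsum, etc.

      const actual = createHash("sha256").update(body).digest("base64");
      const a = Buffer.from(actual);
      const b = Buffer.from(expected);
      return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
    }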




The "division of thought" you talk about is the opposite of "opinionated" designs/standards.

The former assumes that the reader is going to take their time, think about their own implementation and their own specific use case, and make several careful decisions about how to make it work.

The latter assumes that ain't no reader got time fo dat, and is going to try to npm install any packages that have the same name as keywords in the title of your standard and assume that they're fine.

Development these days is unfortunately slanted towards the latter at a lot of companies that only measure output by the number of features you've shipped or the impact of products you've launched, so every incentive is to ship things as soon as you can.

However, there's no field in package repository metadata that says "this package has sharp edges, use with care" or "this package comes with safe defaults, but use with care nonetheless".


Not so sure - you can design things to be unopinionated but still secure. As an analogy, the whole point of static typing is to allow all sensible programs to be written while keeping out everything that contradicts itself. I.e., if PASETO were a type and an implementation a program, then you wouldn't be able to write an implementation that adheres to the type but is insecure. That doesn't appear to be the case with JWT and co.
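
A loose illustration of that idea at the API level (names are made up and this is not PASETO): make the "verified" result a type that only the verification function can produce, so code that skips the check has nothing of that type to pass around.

    import { createHmac, timingSafeEqual } from "node:crypto";

    // The only way to obtain VerifiedClaims is through verify().
    type VerifiedClaims = { readonly claims: Record<string, unknown>; readonly __verified: true };

    function verify(token: string, key: string): VerifiedClaims | null {
      const [payload, signature] = token.split(".");
      if (!payload || !signature) return null;
      const expected = createHmac("sha256", key).update(payload).digest();
      const given = Buffer.from(signature, "base64url");
      if (expected.length !== given.length || !timingSafeEqual(expected, given)) return null;
      try {
        const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
        return { claims, __verified: true };
      } catch {
        return null; // malformed payload fails closed
      }
    }

    // Handlers take VerifiedClaims, never a raw string, so the type checker
    // rejects any code path that reaches them without going through verify().
    function handleRequest(auth: VerifiedClaims): void {
      console.log(auth.claims);
    }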


Regarding verification: checking the presence of specific headers, checking that the date falls within a certain window, and checking a Digest are all good things to do. It's not so difficult to do them yourself, even if the spec doesn't require it. You could say the spec doesn't go far enough.

Apigee (API Gateway) has an HTTPSignature verification callout that does all of those things. You just need to configure it.



