
I've worked on the UX team @ a Canadian bank and briefly touched a project that dealt with passwords.

Based on the bank I've worked at and the limited information I've received, it really isn't about the customer experience. It's likely related to legacy systems (someone alluded to this in another comment mentioning COBOL) and some outdated security practice in some of those systems. I think the security teams would ideally like to increase password strength, but costs (and non-progressive technology teams) tend to get in the way of making that change. As the article mentions, they try to mitigate this with reactionary measures and stricter restrictions on invalid login attempts.

For what it's worth, two-factor is being explored by banks and if I'm not mistaken, the banks do have existing two-factor authentication schemes -- just not for retail banking customers.




It seems to me that by taking a good KDF-hash of the user's password, then "compressing" it (either by truncating or re-hashing it to a new output length), you could get it down to a size where the base32 representation of those bytes would both A. fit into the legacy password field, and B. obey its alphanumeric-only restrictions.

Naively, you'd think the compression would make the password field much more prone to preimage attacks (e.g. JtRing a hacked DB dump), but calculating each attempt would still require a full run of the KDF, even if large parts of the KDF's output keyspace are getting conflated at the end. You'd likely have a lot of accidental collisions between passwords that happen to hash to the same value in your restricted keyspace, but they'd be no help in finding a purposeful collision, because the input keyspace would be unlimited.

Of course, if you only have 40 bits of output keyspace available (i.e. a field with a capacity for eight base32 characters), you "only" need to KDF 2^40 passwords to generate a probabilistically-complete rainbow table. So, make sure to salt the compression step with something user-specific, e.g. the UID. One simple method of achieving this would be to have the output look like

    base32(trunc(40, kdf(pw)) XOR trunc(40, kdf(pw ++ uid)))
This basically means that a successful preimage attack on your compressed KDF would require a unique collision between the salted and unsalted hash outputs—effectively doubling the output keyspace one would have to search, as well as giving you the regular benefits of per-user salts.
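
As a rough illustration (not anyone's production scheme), that construction might look something like this in Python, with PBKDF2 standing in for the KDF and made-up parameters throughout:

    import base64
    import hashlib

    def legacy_password(pw: str, uid: str) -> str:
        # Sketch of the scheme above: KDF the password twice (unsalted, and
        # salted with the UID), truncate each digest to 40 bits, XOR them,
        # and base32-encode the result into 8 alphanumeric characters.
        def kdf(data: bytes) -> bytes:
            # Placeholder KDF and parameters; substitute whatever slow KDF
            # and iteration count your security team mandates.
            return hashlib.pbkdf2_hmac("sha256", data, b"app-wide-pepper", 200_000)

        a = kdf(pw.encode())[:5]                  # trunc(40, kdf(pw))
        b = kdf(pw.encode() + uid.encode())[:5]   # trunc(40, kdf(pw ++ uid))
        mixed = bytes(x ^ y for x, y in zip(a, b))
        # 5 bytes base32-encode to exactly 8 characters (A-Z, 2-7), no padding.
        return base64.b32encode(mixed).decode()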

---

...or you could just put the hash of a random opaque token in the password field; stand up a modern, separate server that runs a "password service" (in the SOA sense) to keep a map of full (UID, KDF hash) pairs to the opaque tokens; and then have your web service hit the password service to (potentially) return it a token to use as the password in the conversation with the legacy auth system. You know, either way.
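
Sketched very roughly (the names and in-memory map are invented; a real service would persist the mapping and sit behind proper auth), the password-service side might look like:

    import hashlib
    import secrets

    # Stand-in for the password service's (UID, KDF hash) -> opaque token map.
    _token_map: dict[tuple[str, bytes], str] = {}

    def kdf(uid: str, pw: str) -> bytes:
        # Placeholder KDF; scrypt with the UID as salt, illustrative parameters.
        return hashlib.scrypt(pw.encode(), salt=uid.encode(), n=2**14, r=8, p=1)

    def enroll(uid: str, pw: str) -> str:
        # Mint a random opaque token; its hash is what the legacy system
        # stores in its password field.
        token = secrets.token_hex(8)
        _token_map[(uid, kdf(uid, pw))] = token
        return token

    def lookup(uid: str, pw: str) -> str | None:
        # Called by the web service at login: returns the token to pass as
        # the "password" to the legacy auth system, or None if no match.
        return _token_map.get((uid, kdf(uid, pw)))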


You are assuming that the passwords will be hashed and salted, something I doubt happens in most of these systems.



