On Phone Numbers and Identity (medium.com/the-coinbase-blog)
301 points by cmurf on Sept 28, 2016 | hide | past | favorite | 113 comments



It sounds like Coinbase's security is on point, which is good to know in an industry where security flaws are disastrous: characterised most prominently by the MtGox hack, and more recently Bitfinex's.

I've long maintained that a phone number is not an identity, and (shameless plug) this month released a service allowing anyone to acquire phone numbers anonymously, with Bitcoin: https://smsprivacy.org/


>I've long maintained that a phone number is not an identity, and (shameless plug) this month released a service allowing anyone to acquire phone numbers anonymously, with Bitcoin: https://smsprivacy.org/

This is pretty cool (and I think I've mentioned the need for it previously on HN). Is there any current solution for phone calls (over Tor or something) without giving up identity?


https://dtmf.io/

These guys operate in roughly the same space, and they provide phone calls.

My service provides voicemail, but currently not phone calls!


>Can I connect using Tor? Yes, but voice will not work as Tor does not support UDP. We are looking at ways to work around this.


My mistake! I don't know the answer then.


I guess someone could sign up for Google Voice using your SMS and then only use it over Tor. Don't know how good the quality would be then.

https://support.google.com/a/answer/1279090?hl=en implies it doesn't need UDP.


I thought Google Voice required you to receive an automated phone call with a code to sign up.


My service can listen to a phone call and play back a recording (a voicemail sort of thing); it just doesn't let you talk back.


Thanks so much for releasing this. I think it serves equally well as a useful service and as a proof of concept that trying to tie identity to phone numbers is folly.


Are these phone numbers "clean"? Even if a number isn't explicitly blocked by your SMS provider, sites like Craigslist still build their own account-registration blacklists by getting lists from all the "burner phone" providers (and infrastructure providers, like Twilio) of the numbers they're currently assigned.


Where do you get the phone numbers? Twilio?


I use Nexmo https://www.nexmo.com/ but it's more or less equivalent to Twilio.


Out of curiosity, why did you choose Nexmo over Twilio?


I've used them in the past, and I think it's also slightly cheaper.


In a way, this seems like an at-speed version of the Social Security Number's slow-motion train wreck: just as SSNs are not secret (and in many cases are actually pretty easily guessable), so too is possession of a phone number too easily transferred.

We need a good solution to identity. Right now most web services use 'has control of an email account,' while some others — to include email services — use 'has control of a phone number.'

I don't trust governments in general, so 'is vouched for by a state' doesn't really work either, although for many institutions (e.g. banks) that's probably good enough (if a government wishes to confiscate one's funds, it will do that, rather than create a fake driver's license and hand it to someone to clean one's accounts out).

Perhaps some sort of social identity is possible? 'Charles is vouched for by Alice, Bob, Dave and Frank' might work, although there's the issue that someone has to vouch for each of Alice, Bob, Dave and Frank — there's a Sybil attack where someone, or a group of real people operating together, could create false identities.

For my own accounts though, if I nominate two persons I work with, four persons I go to church with, four more in my family and three friends then I'd feel pretty good if nine of those were allowed to vouch for my email. I think.
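A k-of-n vouching rule like this is easy to state precisely. A minimal sketch (all names and the threshold are hypothetical placeholders):

```python
def recovery_approved(nominated, vouches, threshold):
    """Hypothetical k-of-n vouching: approve account recovery only when at
    least `threshold` of the nominated vouchers have vouched. Vouches from
    anyone not nominated are ignored (a partial Sybil mitigation)."""
    return len(nominated & vouches) >= threshold

# Thirteen nominated people (placeholders), nine of whom vouch:
nominated = {"alice", "bob", "carol", "dave", "erin", "frank", "grace",
             "heidi", "ivan", "judy", "ken", "laura", "mike"}
vouches = {"alice", "bob", "carol", "dave", "erin", "frank", "grace",
           "heidi", "ivan"}
print(recovery_approved(nominated, vouches, threshold=9))  # True
```

The hard part, of course, isn't the counting; it's preventing an attacker from manufacturing the thirteen nominated identities in the first place.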


> We need a good solution to identity

We have one: public keys (or more appropriately, their fingerprints). Building social trust on top of that is another layer of scope; you use the fingerprint to ensure you're communicating with a consistent digital entity, and then have an out-of-band way of confirming (for yourself) that this consistent digital entity is equivalent to this other consistent physical entity. Note that here I'm treating a web of trust as an out of band exchange: it only works because you have an existing (out of band) connection to someone, or because you are implicitly trusting consensus. "Bootstrapping" trust, at least in this context, is a psychological/social question that really cannot be answered technologically, and is different for every person.
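As an illustration of the mechanical part (a fingerprint is just a digest of the key's bytes), here's a minimal sketch; the encoding and truncation are arbitrary conventions chosen for the example, not any particular standard:

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Return a short, human-comparable fingerprint of a public key:
    SHA-256 over the serialized key bytes, truncated to 128 bits and
    rendered as colon-separated groups of hex digits."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    return ":".join(digest[i:i + 4] for i in range(0, 32, 4))

# Any stable serialization of the key works as input (PEM shown as a stand-in):
key = b"-----BEGIN PUBLIC KEY-----\n...example bytes...\n-----END PUBLIC KEY-----\n"
print(fingerprint(key))
```

The point is that two parties can read these short strings to each other out of band; if the strings match, they are talking to the same digital entity.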

That's all well and good, but the problem is finding a business-sustainable way to scalably deploy such a system.


> That's all well and good, but the problem is finding a business-sustainable way to scalably deploy such a system.

Is this effectively not how keybase.io works / what they are doing? Bootstrapping identity is pretty easy, unless you're working remotely. Even then, trust-on-first-use can probably go marginally farther than what we have now. I don't think the problem is scaling that web of trust, as not everyone should need to know / sign for every employee immediately.

I think the bigger issue is just getting people to adopt public keys and digital identities based on them. Hijacking identity through social engineering is pretty rare, even if it's not particularly "hard." So most people won't want to bother with something like this until it's too late.


Keybase is definitely working on creating a trust bridge between physical identity and (a specific type of) digital identity. Whether they've managed to do it in a scalable, business-sustainable way remains to be seen.

Let me be clear: that's not a slight against them; from a business standpoint I can find no good analysis or speculation on their financial numbers, and see no publicly commented monetization strategy. The reality of the world today is that if you can't make something economically self-perpetuating, it will almost certainly eventually fail -- especially if the problem you're trying to solve is one of as large a magnitude as literally any social interaction on the internet (they all involve connecting digital identities to physical ones).

From a technical standpoint, the primary problem I have with Keybase is that it uses PGP. I'm not [1] the only [2] person (and, since I'm effectively 100% unknown, I'm far, FAR from the most prominent, who is arguably Bruce Schneier [3]) to criticize PGP, but at the core of it, even ignoring substantial usability issues, have you ever tried to forward a PGP message? It's sort-of possible, if you re-encrypt and re-sign everything, but it's certainly not practical and definitely not scalable. Sharing and re-sharing is the cornerstone of much of the social internet, and PGP renders it effectively impossible.

Keybase has a really, really cool body of work surrounding "connect this PGP key to this physical identity", but unfortunately PGP keys are only one in a small subset of cryptographic identities, which is itself an even smaller subset of digital identities in general. I get why they've done it this way; the alternative is to write your own cryptographic protocol that nobody is using, which is a much riskier thing (speaking from experience [4]). But at the end of the day, all of Keybase hinges around using PGP (or a PGP-like approach), and I'm very skeptical that's a viable solution to the social problems we have today.

[1] https://blog.cryptographyengineering.com/2014/08/13/whats-ma...

[2] https://moxie.org/blog/gpg-and-me/

[3] https://www.schneier.com/blog/archives/2015/11/testing_the_u...

[4] https://github.com/Muterra/doc-golix


> have you ever tried to forward a PGP message?

I believe this is a solvable UI / UX problem; what you are in fact criticizing is not the OpenPGP format of asymmetric keys, but rather the conventional implementations of PGP, based around GPG. I agree that GPG is not very user-friendly for the use cases at hand; but surely things can be improved on this front while not ditching the general PGP model itself?

Edit: as to the more technical issues (e.g. lack of PFS, the core issue of initializing the WoT, the defaults in the PGP format, etc.), yes, they suck; but surely one could iterate on the UX side of things, abstracting all internals behind a more general PGP API, and later gracefully changing the internals themselves, too? Not saying it's a piece of cake or anything!


To your first point: I agree that it's not the key format that's the problem with forwarding -- so, you could use a PGP key definition for a different message format that worked better, and as long as you ignored all of the extra stuff in a PGP public key definition, it's no harm no foul. At its core, it's just a public key.

But that's not what breaks forwarding; the actual message format does, and that isn't just a UI/UX problem. You must personally decrypt and re-encrypt the message against the public key of the person you're forwarding it to, and if the original author signed it you then need to somehow encapsulate the signature (which is outside of the PGP spec), and then you're still left with the key distribution problem, which becomes exponentially more difficult with each time the message is forwarded. That's not just a UI/UX problem, that's also a fundamental technical failing of the PGP message. PGP is incompatible with social.

That's not to say, by any stretch of the imagination, that these are unsolvable problems. Quite the opposite, actually; I hope at the very least that my company has made some decent progress down the road to solving them. I'm saying only that PGP simply isn't the vehicle to take us there, because it is (in my experience) a very poor design for general-purpose usage (especially social).

To your edit: unfortunately PGP encapsulation is really, really poorly suited to building general-purpose abstractions on top of. It just doesn't work with reused (i.e. forwarded, replied, etc.) data. Building an abstraction facade on top of it to ease the transition to a new format, though technically possible, is definitely infeasible: both your storage and computation requirements are going to increase linearly with each reuse of existing data (meaning they would grow exponentially with the number of shares, assuming the worst case, that everyone re-shares it).


Public keys are a good solution to identifiers.

All the stuff you said was out of scope is part of identity.

Public key fingerprints probably aren't a solution that users are ready for either. There's an analogy with PGP. It works, but good luck getting people to use it under normal circumstances.


"We need a good solution to identity. Right now most web services use 'has control of an email account,' while some others — to include email services — use 'has control of a phone number.'"

I nominate Oh By Codes[1], for obvious and selfish reasons.

We are currently working on very simple, human- (and machine-) readable markup tags that you can put into your own, personal Oh By Code, which can then be used as your comprehensive contact token - and possibly for identity.

I just ordered a small stack of business cards that have nothing on them but:

0 x JOHN

and I will populate that, my personal code[2], with things like <pgppubkey></pgppubkey> and <phone></phone> and <email></email>. Then, instead of a cluttered up business card (or annoying, long interactions over the phone trying to spell out my email address, etc.) I can just direct interested parties to my Oh By Code which will contain whatever it is that they need.

Since the usable bits are in tags, you could do things like log into services, or address email, or dial a phone not with the phone number, but with the Oh By Code.
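Assuming the tag format shown above, extracting the machine-readable fields could be as simple as this sketch (the tag names and the body format are the commenter's examples; the parsing approach is purely illustrative):

```python
import re

# Extract hypothetical <phone>/<email>/<pgppubkey> tags from an Oh By Code body.
TAG_RE = re.compile(r"<(pgppubkey|phone|email)>(.*?)</\1>", re.S)

def parse_ohby_tags(body: str) -> dict:
    """Return a dict of the machine-readable fields found in the body."""
    return {name: value.strip() for name, value in TAG_RE.findall(body)}

body = "Call me! <phone>+1 555 0100</phone> <email>john@example.com</email>"
print(parse_ohby_tags(body))  # {'phone': '+1 555 0100', 'email': 'john@example.com'}
```

A dialer or mail client could then resolve "0xJOHN" to the body and pull out the field it needs.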

Yes, I am well aware of the enormous, almost insurmountable chicken and egg problem of Oh By Codes not being useful until everyone knows what they are.

[1] https://0x.co

[2] free codes are random and immutable. Custom codes can be chosen and can be edited.


> Yes, I am well aware of the enormous, almost insurmountable chicken and egg problem of Oh By Codes not being useful until everyone knows what they are.

I fail to see what an "oh by" code achieves that cannot be achieved by a URL, save for the subjective beauty of the format looking like 0xffffff.

Plus, if this became a thing, everybody would have to put their total trust in "PERFECT PRIVACY, LLC" or "Oh By, Inc.", which seems a step backwards from the distributed nature of DNS.

Would you mind expanding a bit and explaining how this system can be used to log into services? That doesn't seem to be so obvious.


"I fail to see what an "oh by" code achieves that cannot be achieved by a URL, save for the subjective beauty of the format looking like 0xffffff."

You're correct. Sadly, in 2016, it appears that most people aren't creating and maintaining their own web pages. I wish this weren't the case, but it is.

What's nice about an Oh By Code, other than being a lightweight (throwaway?) website that you can (optionally) create for free, is that you can communicate the "URL" in real life. You don't need a computer to pass on an Oh By Code. You can chalk it on the sidewalk or write it on a post-it note in a way that is difficult with URLs or other schemas (like email addresses).

In fact, it's even shorter and quicker than a phone number.

"Plus, if this became a thing, everybody would have to put their total trust in "PERFECT PRIVACY, LLC""

That's true. I'd say we have as good a track record as anyone. It's worth mentioning that the rsync.net warrant canary, which was the first warrant canary, turned ten this year.

"Would you mind expanding a bit and explaining how this system can be used to log into services? That doesn't seem to be so obvious."

I'm not sure. We're brainstorming ways in which it makes sense to give people an Oh By Code instead of giving people a phone number and this seemed to relate to this discussion. As I said, my new business card will have nothing on it except 0xJOHN.


A phone number is presumed to have some level of identity proofing behind it (i.e. control over the phone number means you have a credit card, bank account, photo id, etc.).

As the article points out, this turns out to be rather easy to spoof.

How do "oh by" codes solve this issue? What if I print up a bunch of business cards with 0xJOHN?

Identity proofing is a hard problem!


> Perhaps some sort of social identity is possible?

That won't work for people with no friends.

> We need a good solution to identity.

Perhaps an offline device that uses your retina scan, fingerprint(s), and DNA encoding to — I don't know the correct cryptography jargon to describe all this — construct a temporary key, which you use with an online service to create a username.

Then, whenever you need to log in to that service, you use the scanning device again to generate another temporary key, and use that as a password.

That way, your physical attributes are your identity, but they are never stored or transmitted in a form that can be translated back to its inputs. You could also set a secret word on the device itself so your memory is also part of your identity, as is the norm now.

This system is of course still vulnerable to brute force [1], but to mitigate that, the scanning device might allow you to set a fake "panic" password, which would generate an incorrect key, and maybe alert authorities.

[1] https://xkcd.com/538/
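Ignoring the hard part (biometric readings are noisy, so a real device would need a fuzzy extractor to turn them into a stable digest first), the key-derivation step might look roughly like this sketch. Everything here is a hypothetical illustration, not a proposed protocol:

```python
import hashlib

def derive_login_key(biometric_digest: bytes, secret_word: str, service: str) -> str:
    """Toy sketch: derive a per-service login key from a (pre-stabilized)
    biometric digest plus a memorized word. Using the service name as the
    salt yields a different key per service, so no service learns anything
    reusable about the underlying biometrics."""
    key = hashlib.pbkdf2_hmac(
        "sha256",
        secret_word.encode() + biometric_digest,  # "something you are" + "something you know"
        service.encode(),                         # per-service salt
        200_000,                                  # slow down brute force
    )
    return key.hex()
```

The panic-password idea would just feed a different memorized word into the same derivation, producing a key the service can recognize as a duress signal.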


> 'Charles is vouched for by Alice, Bob, Dave and Frank' might work

Vouched for to be _what_ exactly? Have the name 'Charlie'? There's lots of Charlies.


Their keys, of course:-)


keybase.io combines public keys and social network identity verification


"That other little thing the attacker did? He started a port of the phone number from Verizon to a VOIP provider, and that port had completed overnight."

One takeaway is how SMS as 2FA depends on a flawed assumption. The other, which I found cute, is using random strings as answers to the stupid account-recovery questions companies continue to ask.

One thing not mentioned in the blog is exactly what personal information the attacker had that authenticated him to Verizon. A PIN? Some portion of his SSN? It must not have been account recovery questions, because the target of the attack used random strings as answers.


2FA is supposed to be 2-factor authentication: a password and a phone number. If you allow the phone number to reset the password, it becomes 1-factor authentication.


> One take away is how SMS as 2FA depends on a flawed assumption.

NIST (among many others) has recommended you stop using SMS for MFA: https://threatpost.com/nist-recommends-sms-two-factor-authen...


Some time ago I started doing the random strings too. I use 1Password for everything, so I just add the questions as free-form fields to 1Password, set up password fields for the answers, and have 1Password generate (pronounceable) passwords to use as the answers.


...then I called my bank, they asked me what my grandmother's name was, and I answered "Oh, it's a very long, random string". I don't remember how secure the other questions were, but she seemed satisfied with such an answer (and granted me access).


Is this a hypothetical or something that actually happened? Because if it's the latter then the rep that you talked to needs to be retrained, because that's equivalent to saying "oh it's some street in San Francisco" when asked what street you grew up on.

In any case, I have 1Password generate pronounceable passwords specifically to avoid the problem of the rep trying to compare random alphanumeric strings. 1Password can even generate random phrases of N words (e.g. xkcd passwords) which works well for this sort of thing because it's easy to say those over a phone.


> Because if it's the latter then the rep that you talked to needs to be retrained.

This is kinda the point though. We can cook up the most elaborate security scheme in the world, but ultimately the keys to our accounts lie with undertrained, minimum-wage call center employees.

(EDIT: and hey, I've done some grinding minimum-wage bullshit jobs, I completely understand)


It did really happen, but as I said, I don't remember whether the other questions were tougher. Perhaps they allow one question to be unknown. For example, when you recover a Google account, they ask you when you created it and the subject of your last email, among other questions. I'm pretty sure I had these ones wrong, but they were apparently satisfied with my other answers.


Ah, yeah if they ask multiple questions then it would be reasonable for them to allow one wrong answer.


In my case, I forgot I had a randomly generated answer, gave them the real answer, and the representative got very confused when he tried to compare that to what he had in the system.


That's what banks here in Australia do; instead of needing everything right, they ask half a dozen questions, most of which need to be correct. That's insecure, if you ask me, but the heavily weighted questions are things like "what's your account balance on <account> here?" and "what account types do you have?" which aren't public info, I guess.


Using diceware strings as the secret questions/answers is good because they are then readable and pronounceable over the phone if you ever actually have to use them.
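The diceware approach can be sketched in a few lines of Python using the `secrets` module. The word list here is a tiny placeholder; a real diceware list has thousands of words, which is what gives the answers their entropy:

```python
import secrets

# Placeholder word list -- substitute a real diceware list (~7776 words).
WORDS = ["maple", "otter", "gravel", "comet", "tundra", "spindle",
         "velvet", "harbor", "quartz", "nimbus"]

def security_answer(n_words: int = 3) -> str:
    """Generate a random but pronounceable 'secret question' answer,
    e.g. 'otter quartz comet' -- easy to read out over the phone."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(security_answer())
```

Store the generated answer in your password manager alongside the question, exactly as described above.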


Did this really happen? I mean, you know what they say about the weakest link...


Yep, same here, but to make them more "phone call friendly" rather than random pronounceable passwords I generate random two or three word phrases via sites like this: http://watchout4snakes.com/wo4snakes/Random/RandomPhrase


Will 1Password detect and autofill these or do you have to look it up?


I have to look it up. There's no way 1Password would be able to detect this. But I can look it up via the 1Password Mini window (the same thing you use to fill the password in the same place) so it's no trouble.


Verizon definitely does ask for account pin whenever I call in, usually entered via number keypad before you're routed to an actual person.


Using social engineering to bypass this is a big enough issue that even the FTC has a blog post about it [1]. It's happened to popular YouTube producer H3H3 (even with a PIN) [2]. Fusion did a thing about it at DefCon this year as well, demonstrating exactly how easy it is. The whole video [3] is worth watching, but I'll link to the relevant bit.

[1] https://www.ftc.gov/news-events/blogs/techftc/2016/06/your-m...

[2] https://www.youtube.com/watch?v=caVEiitI2vg

[3] https://youtu.be/bjYhmX_OUQQ?t=98


Holy crap, watch video #3; it's worth it. She adds herself to the account by calling and asking.


Wow that really was crazy easy for her to do that. I guess it also taps into emotions with the crying baby as well.


I have many accounts that require a PIN. Many of them I set up years ago and have no idea what the PIN would be. I'm always able to talk my way out of it, though. (And this is not something I practice or know how to do well. I literally just say something like, "PIN? I have no idea. I don't even remember having the option to set one.")


Apparently, the PIN is opt-in.

>With Verizon on the phone, it was a fairly simple matter to re-reset the portal password, set an account PIN to prevent attacker re-entry and un-do the phone forward.


The problem is that the default PIN (which is what 99% of customers still have probably) is just the last four digits of the primary account holder's SSN.


I've raised the issue with Coinbase support before, and I don't know if it's fixed (this was probably sometime last year), but when I used to use Coinbase heavily, it was actually impossible to turn off SMS 2FA. Even now, I have TOTP codes, but still get an SMS with a code whenever I try to log in to their service.

This was a real issue for me, and an actual reason I stopped using their service for most of last year: as I use Google Voice / Skype for my number, it's not really two-factor.

I mentioned this issue in an e-mail chain with their support, and their response was basically that I should use Authy instead of TOTP (Google Authenticator), or some weird workaround involving installing/uninstalling Authy in a certain sequence, which didn't work for me. However, we see here in this post that all the attacker had to do to compromise Authy was to compromise the phone number! (I also raised this issue with their support.)

Their response to the Authy question was that since Authy / e-mail are running on two different servers, it's considered 2-factor by them... hrm..


Unfortunately, in countries like China, your identity is your cellphone number. Almost all of the internet giants (QQ, WeChat, Weibo, Taobao, AliPay, Online Banking apps) predominantly use SMS to authenticate user login and/or as a 2FA step.

As a result, identity thefts and telecommunications frauds are very common. Sad state of reality.


identity thefts and telecommunications frauds are very common

Not sure I agree; let's tackle those in order. First, QQ is like ICQ: it's almost dead and therefore irrelevant.

On WeChat, phone number is simply used to look people up so you can find them. If you lose a phone or WeChat account, the social identity thing is used to recover it (you get a few friends to send a special magic number to you, then a reset is made). It doesn't trust your number.

No idea about Weibo.

Taobao does take your phone number but they have to because they need to be able to get in touch with suppliers when there are issues/complaints, and many customers want merchant phone numbers anyway. I don't see trust issues. You can log in with your phone but it's the pre-authenticated instance of your account on a mobile device that it cares about, not your phone number.

AliPay, no idea.

Online banking is not frequently used because until recently it totally sucked, and often was Windows only (yet macs are really popular), and people here like cash.

I think the Chinese ID card number (largely guessable with a birthday and knowledge of where someone was born) is far more dangerous here than a phone number: https://en.wikipedia.org/wiki/Resident_Identity_Card#Identit...

I feel like most financial fraud here is confidence and pyramid schemes.


In fact, a more important step is to notify the customer immediately through all set channels of critical account changes.

1) Password changes

2) Adding a subuser

3) Weird login attempts

Something like whatever Valve is doing in Steam.


It's always great to see effective incident response and proactive employees. Congrats Coinbase, and thank you for sharing. Your experience will make the rest of us more aware.

Two months ago NIST announced that SMS for out-of-band authentication was deprecated. It makes sense: phone numbers present a much bigger attack surface than a YubiKey or Google Authenticator. This incident is a perfect example.

https://techcrunch.com/2016/07/25/nist-declares-the-age-of-s... https://news.ycombinator.com/item?id=12163046


The best advice for SMS 2FA I've seen was what was given to YouTuber boogie2988 by his attacker after he was hacked the same way - use a separate prepaid phone number that you never make public.


It doesn't scale, as you'd need to share the number across different services. To be completely decoupled you'd need one number per account, all bundled into one SIM card. I don't think that's something you can set up with current services and phone software.


I mean, you could do it with Flowroute or Twilio. I already pay $1.25/mo for a DID for my home from Flowroute, if I cared enough about security buying another isn't going to put a dent in my pocketbook and I could just have a small little flask app somewhere forward them to a prepaid phone.
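The forwarding idea can be sketched without any external dependencies: Twilio's incoming-SMS webhook POSTs fields like `From` and `Body` and expects a TwiML XML reply, which a tiny Flask route (as the comment suggests) could return with Content-Type `text/xml`. The destination number below is a placeholder:

```python
from xml.sax.saxutils import escape

PREPAID_PHONE = "+15551234567"  # placeholder: the never-published prepaid number

def forward_sms_twiml(from_number: str, body: str) -> str:
    """Build a TwiML response that relays an inbound SMS to the prepaid
    phone, prefixed with the original sender so 2FA codes stay attributable.
    A small web route would return this string for each incoming-SMS webhook."""
    text = escape(f"[{from_number}] {body}")
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f'<Response><Message to="{PREPAID_PHONE}">{text}</Message></Response>'
    )

print(forward_sms_twiml("+15550001111", "Your code is 123456"))
```

Since the DID number that services see never rings the prepaid phone directly, an attacker who ports the DID still can't intercept codes without also compromising the forwarding account.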


Or once a company gets hacked, since that number will be unencrypted in the users table, e.g. the LinkedIn data dump.


It'd be nice to have a special device whose sole purpose is SMS 2FA, and that's very small (easier to carry than a spare phone). A secret number of course, but also running on a service with no human operators who can intervene.


Good banks issue true tokens or one time random code cards for exactly this purpose.

A token is a device that generates a known sequence of pseudorandom numbers. These are relatively cheap - yet many modern auth systems instead depend on the token being installed on much less secure shared hardware, such as a phone. Still, that's harder to crack than SMS.


A cheap featurephone is pretty small, and you can even run a J2ME TOTP client, besides SMS.


Would there be an issue with just using a Google Voice number and never sharing it?


> Call your cell phone provider and set up a PIN or password, ask for a port freeze and ask to lock your account to your current SIM.

Seriously — do this. I’ve twice had the experience of walking into the retail store of a major UK network, supplying my mobile number, name, date of birth and postcode, and walking out with a replacement SIM card activated for my current number. In both cases I subsequently complained loudly to the network management and received apologies and assurances that the retail staff were not following protocol, training procedures would be reviewed etc.


I just went to see what my recovery options are for my phone provider's portal. It creates a temporary password that is sent to your phone through SMS so you can access your account. This makes sense, since then the only ways to gain access to your account are cracking your password or controlling your phone. I have no idea why Verizon would give access to the account through a voice call instead of sending an SMS to your phone number. I really can't think of any situation where you'd forget your password and also not be able to access your phone, while urgently needing to get into your portal. If you were to find yourself in that situation, Verizon's position should be: OK, first sort out the phone thing (that is, go to a store and get a new SIM card and/or phone), and then we'll reset your account.


> I really can't think of any situation where you'd forget your password and also not be able to access your phone, while urgently needing to get into your portal.

'I just lost my phone, and I haven't logged into the portal in years. All my passwords were on my phone. I'm out of the country/in the sticks and there is no Verizon store nearby.'

Yes, Verizon's response ought to probably be something like, 'go to a Verizon store where an employee will verify your identity,' but that of course means trusting many more than just call-centre employees to verify identity; it also means that a government identity-card issuer would still be able to covertly reset one's number.


> it also means that a government identity-card issuer would still be able to covertly reset one's number.

Sure, but that seems like a much greater crime than just impersonating somebody on a phone call, plus greater exposure and chances of getting caught.

> and I haven't logged into the portal in years.

My point is exactly that, in which situation would you need to urgently access your portal?


> My point is exactly that, in which situation would you need to urgently access your portal?

The lost-phone situation: the number-owner has lost his phone, has bought a new one and wishes to change service immediately. It's not at all unlikely, and is likely to be perceived as urgent (I certainly wouldn't want all my account access messages going to a lost phone …).


> I certainly wouldn't want all my account access messages going to a lost phone …

Wouldn't that be solved by a fail-deadly setup where your phone is automatically wiped unless you perform a specific action?


That particular issue, yes — but it wouldn't help me set up a new phone.


I thought a significant portion of Americans simply don't have ID as well? (or voter ID laws wouldn't be controversial)


That number is around 1%, which may or may not qualify as "significant" depending on how important you think that is.

In any case, we seem to have no problem requiring ID for a lot of stuff. You need one to open a bank account or check into a hotel. Requiring one to reset a password would be fine.

It's problematic for voting because voting is seen as a much more fundamental right than getting a bank account, and because politicians deliberately make it more difficult for voters on the opposing side to obtain an ID if it's required.


Trying to research this reveals the controversial nature - reported numbers range from the 1% you quote up to 16% (one survey said 95% claimed to have ID, but further investigation showed 10% of those were expired or invalid).


That's interesting, I turned up 1% and then didn't look any further.


The bar for "allowed to vote" is very different from "allowed to reset a cellphone account". One needs to never block any legal voter from voting, the other can be a balance between ease of access and ease of fraud. I would be very cautious conflating the two cases, because they are very different.


So what happens if, as a non-US citizen who just happens to be around on voting day, I enter a voting station and ask to vote (and obviously lie about my identity)?


Not to mention it's a felony and it will be prosecuted.

In your situation, you would almost surely have to know an already-registered voter's name and address -- I'm not aware of any states that let you register on the spot without government ID. Many people don't register, and it's not automatic, so that's already an obstacle.

Also, the log books your voting location maintains get marked when a registered citizen votes. So if the voter you impersonated comes and votes later, your impersonation will be discovered.


In order to cast enough votes to affect most elections, you would need a large number of votes. The more fraudulent votes you cast, the more likely you'll hit a voter that already voted. If there's a significant volume of double vote attempts, that's a big red flag.


You'd have to know who to impersonate, successfully impersonate them (i.e. good luck trying to impersonate DeWayne Jamal Jones as a Ukrainian), and know where (and on what date) they are supposed to vote, and for all that work you have affected all of one vote.


You'll probably get to cast a vote. But this is so rare: the rate of true voter fraud is exceptionally low, and the kind of fraud that voter ID laws are supposed to stop is rarer still, so it's just not worth it.


Would anyone care to comment on which cellphone providers allow the highest security? The article recommended the following: "Call your cell phone provider and set up a PIN or password, ask for a port freeze and ask to lock your account to your current SIM."

It seems maybe we need a website/git-repo along the lines of https://twofactorauth.org/, but explaining phone provider security supported by provider.

Other than that there was a suggestion to have a special cellphone which is only used for SMS auth, and which you don't publish the number for.


I use Google Voice as my main number, and the google account that it's set up under has 2FA with my "real" number with ringplus. How vulnerable am I?


Aside: do you see enormous latency on calls w/RP?


Not really, no. I recently switched to a MAD plan which routes it over Sprint directly, but even before that, calls were OK.


I was really impressed that a Korean friend had an overseas bank card with an e-ink display on it for displaying TFA codes, just like Authy/Authenticator.


> We also put out some company wide guidance on cellphone account security.

Are there published best-practices somewhere on a per-provider basis?


... Just as we should no longer trust SMS for two-factor authentication, we shouldn’t trust it for account recovery. Disable this anywhere you can.

This is interesting, but, instead of SMS, what should be used for 2FA?


From the article: personally, I recommend (in order) U2F, Push-based and TOTP/token-based


TOTP tokens?


DigitalOcean does this, but then they fall back on SMS in case you uninstall the TOTP app.


This is unfortunately quite common. Facebook won't even let you use TOTP unless you have a validated phone number to use as a fallback.


Furthermore, you have to install their entire app to use TOTP. Their app includes everything but the kitchen sink, and murders phone batteries even when in the background. On top of this, I don't see a point in installing apps if the functionality is also provided by their website. If I'm using my phone for 2FA, I'd like my phone to be secure, and I trust the security of my phone with just Chrome on it more than I trust my phone with Chrome and a million other apps.


This is the comment I was looking for. Not only do they do this (which is not a good excuse), they also BEG you to put in your phone number every other day.

I've seen websites (I believe it was Netflix) that ask for your phone number in order to provide a way to reset your password... what happened to good old email resets?


Which they pretty much have to - what happens if you drop your phone and break it?


Presumably, TOTP tokens not delivered via SMS


Do you know where can I learn about designing a U2F key? Or any other similar token?

And I mean from the programming perspective, not the hardware per se: something like an open-source firmware for this.


I'm a little confused on why these other 2FA approaches are seen as more secure. Is it simply because the port transfer process can happen?


The point of 2FA is that you combine something you have, and something you know. For the something you have to be a strong authentication factor, you have to maintain tight control over it. When you use SMS as the factor, you're effectively mailing the second factor to someone with a lot of intermediate steps in the chain of custody.

When I use an authentication app on my phone (assuming that I destroyed the seed immediately after loading it), or I use my physical OTP token, then those keys are pretty direct proof that I have possession of that object at that time. If I lose my token, then I know I lost it and I can disable it and get a new one. As this thread demonstrates, it's possible for someone to hijack your phone without you realizing it for a period of hours to days.


Pardon my ignorance but what's to stop them from impersonating your authentication app, too?


No need to apologize.

With an authentication app, you seed it with a key which only you and the service share. Each successive PIN is generated by doing a complex mathematical operation on the key, and then dropping most of the digits, so it's basically impossible to infer the key by seeing even dozens or hundreds of the responses. Typically, each PIN is only good for a minute or so, and/or is only good for one login attempt. As long as nobody else has that key, it's virtually impossible for anyone to figure out what token your authentication app will offer up next. The key only exists inside the app, and a copy lives on the host that authenticates your login.

For someone to impersonate your authentication app, they would have to:

a. Gain access to your phone and somehow extract the key, which probably means getting past the phone's native encryption as well as the authentication app's encryption. You do have a passphrase on your phone, right?

b. Find a leftover copy of the key. Don't leave one of these lying around for someone to find ...

c. Hack the authentication software used on the server side. If hackers can do this, then it's pretty much game over anyway.

So, basically, if you never give someone your unlocked phone (in a context where they could take a memory dump, not just casually so they can type in their number), and you don't leave copies of the key lying around, it will be very hard for someone to spoof your authenticator app.

OK, I left one attack out -- someone could man-in-the-middle you and get you to give your password and the latest key to them, which they then use to log in. That's why you should only operate over a secure channel (HTTPS or equivalent) where you are sure you're talking with a known endpoint (manually enter the URL, or at least check it in the address bar).
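To make the "complex mathematical operation" concrete, here's a minimal sketch of how these codes are derived, in the style of RFC 4226 (HOTP) and RFC 6238 (TOTP): HMAC the counter (or the current 30-second window) with the shared key, then truncate to 6 digits. Real authenticator apps add encrypted key storage and clock-drift tolerance on top of this.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a counter (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)   # keep only the last 6 digits

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based variant: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Note that each displayed code exposes only ~20 bits of a 160-bit HMAC output, which is why observing many codes doesn't let an attacker recover the key.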


Some of the authentication apps I use like this require that I register the serial number of the app (I guess that is like the public key of the app, except it's really short) with the backend of the app. So I'd open the authentication app, call an IT help desk, tell them who I am, tell them my app number, then I can 2FA with the app.

Isn't this process vulnerable to social manipulation? Someone could feasibly impersonate me over the phone and register their own app serial number instead of mine? This seems to be a common weakness in many authentication schemes (2FA or not).


"As long as nobody else has that key, it's virtually impossible for anyone to figure out what the next token your authentication app will offer up."

According to the birthday paradox, TOTP can in fact be broken quite easily if the service does not rate-limit to defend against brute-force attacks on the 6-digit generated TOTP code. An attacker only needs to try 1000 different PINs within a minute or two (and some services allow codes after 5 minutes) to have a 50% chance of getting through.


Please educate me if I'm wrong, but I don't believe that is correct. Let me explain:

-- There are two kinds of OTP systems: ones that regenerate the code every n seconds, and ones that regenerate the code for every attempt

-- Considering the time-based code, each attempt has a 1-in-n chance of being correct, so your odds of matching the code depend on how many attempts you can make before the code changes. Since guesses carry no information forward once the code changes, you have to start over fresh every time. So your cumulative chance of breaking the code after x guesses is roughly x in n. Therefore, you'd have to try 500,000 codes to have a 50% chance of forcing a random collision on a 6-digit code, and the odds only go up linearly with the number of guesses.

-- For the case where the code changes every time, each guess is again a 1-in-n chance. Once the code is regenerated, all information from the old code is lost (there is no forward bias on the next number), so each subsequent guess is again 1-in-n. Ultimately, you get to the same answer: a roughly x-in-n chance of guessing the code after x attempts.

Now, most OTP systems will accept several codes before and after the current one, in case the tokens have gotten out of sync, so there may be as many as 5 codes that will work at a given time. That is just a straight multiplier, meaning that you have a 5x better chance at any one time of guessing the right answer.

All that said, any sane password checker should be rate-limiting attempts, and locking out a user after, say, 10 or 20 attempts.
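The arithmetic above is easy to check. Assuming each guess at a 6-digit code is an independent 1-in-10^6 shot (a simplification that ignores the adjacent-window codes mentioned above), the chance of at least one hit after x guesses is 1 - (1 - 1/n)^x:

```python
import math

def p_hit(guesses: int, n: int = 10 ** 6) -> float:
    """Probability of at least one correct guess among `guesses` independent tries."""
    return 1 - (1 - 1 / n) ** guesses

print(round(p_hit(1_000), 4))     # ~0.001 -- 1000 guesses gives ~0.1%, not 50%
print(round(p_hit(500_000), 2))   # ~0.39
# Guesses needed for a true 50% chance:
print(math.ceil(math.log(0.5) / math.log(1 - 1e-6)))  # ~693,000
```

So even with no rate limiting at all, an attacker needs on the order of half a million to 700,000 guesses for a coin-flip chance; with any sane lockout policy the attack is hopeless.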


"Therefore, you'd have to try 500,000 codes to have a 50% chance of forcing a random collision on a 6-digit code, and the odds only go up linearly with the number of guesses."

No, not according to the birthday paradox (https://en.wikipedia.org/wiki/Birthday_problem).

There's actually a simple formula to work out the number of attempts required for a 50% chance as shown below.

For TOTP (Time-based One-time Password Algorithm):

Assuming only the latest code per current time window is allowed (and not codes on either side), the number of brute-force attempts required for a 50% chance of collision against this code is given by:

Math.pow(10,6/2) // base=10, digits=6

This gives roughly 1000 attempts for a system that only accepts the current code and not codes on either side.

I am sure there are plenty of systems out there that don't think to rate-limit 2FA codes.


But if the employee had long passwords, was his Facebook account compromised through SMS-based password reset, then?


Yes. They didn't need to know his strong password to log on, they just needed access to his mobile phone SMS in order to complete the account recovery process and change his account password to a value they chose.


That's why Authy is a joke


How is authy a joke? I have to enter my backup password even after I auth with my phone number on a new phone (same number).


Authy's developer FAQ implies that ownership of the original phone number is sufficient to restore API-created tokens (e.g., Coinbase).

> What happens when a person loses their phone?

> Authy automatically synchronizes all accounts. If a user loses his phone when he buys a new one he'll be able to access back all of his account's by registering the app using the same phone number he previously had.

https://docs.authy.com/developer_faq.html


So I spent some time playing with the Authy app.

Here's the drop:

1) Dave has an active Authy account.

2) Dave configures 2FA for Coinbase using Authy.

3) Coinbase, rather than asking Dave to manually enter a TOTP secret, registers a 2FA token on Dave's Authy account via the Authy TOTP API.

4) Mallory takes over Dave's phone number.

5) Mallory resets Dave's Authy account at https://www.authy.com/phones/reset.

6) Mallory gains access to Dave's email account (using SMS recovery?).

7) Mallory confirms the Authy reset email.

8) Mallory logs into Authy using Dave's stolen phone number.

9) Mallory now has access to all of Dave's Authy API tokens (Coinbase, CloudFlare, etc).

10) Mallory can try guessing the backup password for Dave's other non-API tokens. Assuming Authy transmits the entire encrypted blob (I'm guessing it does), offline brute-forcing is possible.

tl;dr: Don't use Authy for services using the Authy API.


Yeah, I really hate that CloudFlare doesn't support any other OTP method at all



