Google flagged our site as malware on our prelaunch announcement day (wp-abtesting.com)
72 points by softmodeling on Aug 20, 2013 | 42 comments



The domain that was blacklisted was not demos.shapingrain.com (I checked); shapingrain.com itself was blacklisted, as you can see from the Google Safe Browsing report here: http://safebrowsing.clients.google.com/safebrowsing/diagnost...

The theme currently in use on http://wp-abtesting.com/ has a main stylesheet called style.css which contains the URL http://www.shapingrain.com in its comments in the header.

It looks like shapingrain.com itself was infected on 2013-08-19 but cleaned by 2013-08-20. It was likely infected via a JavaScript injection linking to the site lartedio.com, which served the actual payload (likely something trying to self-install, break out of the box, etc.).

After shapingrain.com was infected and flagged by Google Safe Browsing, wp-abtesting.com would then have been flagged when Google analyzed the CSS file and saw what appeared to be a resource link to an infected site. This appears to be a limitation of the scanner, which scans CSS comments and treats them as valid code, though that is not without precedent: certain browsers will evaluate what is contained in comments under certain circumstances (see IE conditional comments).
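
To make that hypothesized failure mode concrete, here is a minimal TypeScript sketch (the theme header below is illustrative, not the actual style.css) of how a scanner that extracts URLs from a stylesheet without first stripping comments would pick up the theme author's URL, and how stripping comments first avoids it:

    // Hypothetical sketch only: a naive URL extractor that does not strip CSS
    // comments will "see" the theme author's URL in the style.css header and
    // treat it like a resource link. The header below is illustrative.
    const styleCss = `/*
    Theme Name: Example Theme
    Theme URI: http://www.shapingrain.com
    */
    body { background: url("images/bg.png"); }`;

    // Naive scan: search the whole file, comments included.
    const naive = styleCss.match(/https?:\/\/[^\s"')]+/g);
    console.log(naive); // [ "http://www.shapingrain.com" ] -> flagged if that domain is blacklisted

    // Stripping /* ... */ comments first leaves no external URLs to flag.
    const noComments = styleCss.replace(/\/\*[\s\S]*?\*\//g, "");
    console.log(noComments.match(/https?:\/\/[^\s"')]+/g)); // null

Minifying/combining the CSS, as suggested elsewhere in this thread, would have the same effect, since minifiers strip comments.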

So, in the end, it looks like shapingrain.com was infected yesterday and Google blacklisted that site as well as any sites pulling resources from the infected site, erring on the side of caution (possibly) and interpreting URLs within comments in CSS as possible resource links.


Hi John,

Thanks a lot for your detailed response. Really appreciated.

Probably both sites were infected at some point. Again, I never saw the malware message myself but the guy that alerted us first copied the message he got and it was explicitly mentioning the demo site: “Content from demos.shapingrain.com, a known malware distributor, has been inserted into this web page. Visiting this page now is very likely to infect your Mac with malware.”

And now we're going to clean the CSS immediately, since it had not occurred to us that this could be the cause of the problem. Let's make sure we are not blacklisted again!


Sure thing. I've had some experience with the blacklist detection due to a JS file hosted on a trusted 3rd party site that had another section of their site hacked (meaning their site was blacklisted and anything pulling resources from their site was similarly blacklisted). I researched more about how things worked then and have been sharing when I can since then. I hadn't seen your specific situation before but it is my guess based on an understanding of how similar scanning setups work.

For future reference, another useful tool is Sucuri SiteCheck, which will show you the results of multiple website malware blacklists on one page: http://sitecheck.sucuri.net/scanner/

(I checked and http://wp-abtesting.com/ is clean)


You shouldn't pull static assets from 3rd party sites like that. Looks like you've fixed it. You should also use W3 Total Cache, WP Minify, or a similar plugin to minify and combine your CSS and JS. You have a lot of files loading, which slows the site down.


They apparently weren't pulling static assets from a 3rd party site. They were using a theme developed by a 3rd party. That theme is hosted on their own server but has the developer's name and URL listed in the comments of the CSS (which is fairly common practice). No assets were being loaded from an external site, but it looks like that URL in the comments triggered Google's alerts. That said, combining and minifying CSS and JS would have stripped the comments and prevented the issue with Google's scanner.


I can't be the only one here that thinks Google has way too much power for anyone's good.

This being said, I have no idea what could be done, if anything, to avoid being in a situation where Google can mistakenly blacklist you from the Internet.

This is why I stopped using Chrome, and learned to love Firefox, again.


> This is why I stopped using Chrome, and learned to love Firefox, again

Firefox uses Google for safe browsing filtering.


We could require that Google, Microsoft, Mozilla et al use only third-party open blacklists (and contribute to those projects with their blacklist data if they feel they know more than the project does, but where the project has final say in what does or doesn't go in.) This decouples the incentive structure.


My experience is that third-party blacklists (especially for e-mail spam) have a much worse false positive problem. As I said in another comment, economics is working against you. There is little or no cost to a blacklist provider if there is a false positive, and there are a lot of people who care very much about spam, or malware, who put a lot of pressure on blacklist providers to be as comprehensive as possible and to react as quickly as possible.

Heck, if you are looking at this from a larger social good perspective, it might even be better for society at large to have the blacklist provider be much more aggressive about blacklisting sites quickly. What's the cost of a malware compromising someone's machine, and requiring someone to take their desktop off-line for a day or more while they reinstall everything from scratch (and find out that they no longer can find their MS Office reinstall disks, and the new MS Office requires them to relearn where all controls are on the reorganized toolbar)? Versus the economic cost of some minor web site getting blocked for a few days?

In any case, given that you as the minor web site won't be providing any payments to the blacklist provider, why do you assume the incentive structure will be any better with third-party blacklists?


I think the interesting thing about decoupling the incentive structure isn't so much how the blacklist operator would react (more liberal blacklists), but how the browser maker would react.

Since the browser maker will no longer control the blacklist, they will now have users telling them that sites are broken (because they've been blacklisted), and they won't be able to do anything about it on the blacklist side. So, what they will be incentivized to do, is to make whitelisting a blacklisted site (especially those that only get loaded through invisible iframes etc.) have a much simpler/easier/clearer UX, so that their complaints go down. This is good for everyone, but it's not something they'll do when they still have the option "just remove X from the blacklist."


But the sites aren't really _broken_. There will be a warning displayed to the user, but the user can always say, "give me the site anyway". And users do hate it when they hit a web site which trashes their machine with malware. So they might be in favor of more stringent blacklists as well.

It might be a bad assumption that users will demand a more liberal blacklist. That's certainly not how e-mail blacklists have worked out. Sometimes the people most in favor of the blacklists that hit all sorts of innocent mail senders are the users sick and tired of spam.


Have you seen the "warning" lately? It's a pretty dire message. The only button visible is "Go Back." You have to click a tiny "advanced" text link to even see that there's an option to go to the site anyway. Try for yourself in Chrome:

malware.testing.google.test/testing/malware/


I'm not sure we want to start down the route of legislation. Could you imagine the end result of Google-as-regulated-utility? Spammer's dream!


I didn't really mean legislation--just strong grassroots demand. These are corporations, and they'll build the features their customers want, after all.

Maybe we just need a Kickstarter to pay for the engineering time required to get this coded into Firefox and Chromium. If Mozilla or Google don't like it, call the result a fork and start a campaign for people to switch, as happened with OpenOffice+LibreOffice, or with MariaDB. Of course, I think both Google and Mozilla are smart enough that they'd see what was happening there, and just adopt the open blacklists.

As browser-users, I believe that we really do have the ability to affect the decisions of these companies, if we have something we can all get behind. :)

(...at least, if it doesn't directly interfere with their bottom line, like including an ad-blocker in Chrome would for Google. Can't have everything.)


The author acknowledged ITT that the site probably did serve malware. The right thing was done IMO. Wouldn't it be worse if the OP was blamed for infecting his users?

"Probably both sites were infected at some point. Again, I never saw the malware message myself but the guy that alerted us first copied the message he got and it was explicitly mentioning the demo site: “Content from demos.shapingrain.com, a known malware distributor, has been inserted into this web page. Visiting this page now is very likely to infect your Mac with malware.”"


I'm ashamed to confess that when I saw this, I wished the world was full of Internet Explorer users :-)


We had the same thing happen to us yesterday (our launch day) by Facebook.

We launched our application, announced it to the world, got a flood of users that was apparently "abnormal", and were flagged by their ban bot.

Our app was summarily deleted (without warning or notification). All links to the application were flagged as "abusive". And all data published by users of our application was deleted.

The only reason we discovered this was because we were alerted by our users that the application had disappeared from their bookmarks and that they were unable to access it.

Of course, when we brought this to Facebook's attention they restored the application within a couple of hours. Unfortunately, significant damage had already been done: everything published by our users was (apparently) permanently deleted, our open graph stories and actions were completely deleted, and our subscriptions to the payments apis were deleted.

The particularly insidious one is the deletion of the payments subscription. For the last 12 hours, anyone who has tried to make an in-app purchase has been charged by Facebook but not had their purchase relayed to us for fulfillment.


Reading your experience I think I should be happy about what happened to us today. Could have been much worse :-)


When I saw the headline, I immediately thought, "they must be using WordPress". WP is a giant exploitable target, and I've personally told Matt this. Automattic saw an opportunity long ago and started VaultPress for WP security. He argued it's not a WP problem, and I frankly disagree, but he obviously understands the situation better than anyone. WP is free, but security is not, because self-hosted WP is so exploitable.

A launch for a client also went through the same problem in 2010, and that was after 5 years of managing other WP installs (including 2 VIP WP sites). I've seen it happen too many times for it not to be Automattic's problem to address more so than they're doing now.

Stay away from self-hosted WP unless your install is absolutely bulletproof. And cross-linking, especially to resource files on other WP sites, is the last thing you should ever do, because you do not control their security, which can directly affect yours, or at least your blacklist exposure due to associated content.

Our office used to be above Automattic's in SF, and I love those guys and what Matt has done for the web, but with great power comes great responsibility.


I'm a great promoter of Drupal, including in those cases where it competes with WordPress. However, in all honesty I can't really play the security card against WP. I think WP itself meets generally accepted security standards; the 3rd party code loaded into it sometimes does not, but that's the case with any framework that allows modules.

WP probably gets a bit of a bad rap because the types of sites made with it often don't have the budget to bring high quality development. When you serve 20% of the web, and people choose you precisely because they can get cheap developers, there will be some problem sites out there running WP.

In this particular case, it seems to me that they would have been flagged if they had been running anything, Drupal or Jekyll or a static site: they had an external theme provider who referred to a domain in CSS comments that was listed by Google.

The problem seems to be the accuracy of Google's flagging, not WordPress.


I think the situation is unfortunate because of their new launch, but the real problem, as the author mentioned, is that Google Webmaster Tools did not alert them, so they were not able to address it in time. I also think their vendor should have alerted them (their clients) that Google had flagged them, and treated that as something to deal with immediately, which would have saved them some headaches. If those two problems had been taken care of, then I think we would mostly all agree it is good to be alerted about potential malware, which no one wants on their machine.


Happened to me as well: I developed a website for a client and they started to get that malware warning on a specific page.

There was absolutely nothing special about that page except the content. I noticed that the content included words like "visa, passport, license, etc.", so Google classified it as a probable scam.

After a few submissions, I think the page got whitelisted.

It's very annoying indeed, especially since this functionality is now built into the dominant browser.


I feel like I should share a story here. The fight against spam is becoming a serious problem for legitimate businesses.

I've suffered from this same malware issue before. Since we use ad networks for advertising, it works as follows: one ad network has about 3000 different ads running in different locations. If any of those domains gets compromised and blacklisted, and Google notices that you served something from that domain - BOOM! You're on stopbadware.org and need to get your website reviewed. You block the ad and apply for a review. To their credit, the whole process takes about 24 hours (the review happens, they delist you, Google's caches update, etc.). However, for those 24 hours anyone who comes to your site, or clicks on a Google result for you, or clicks anywhere to come to you, gets a massive warning. The cost in terms of lost revenue and reputation damage is almost incalculable. This has happened to me several times in the past. I have severed relationships with several ad networks and yet this keeps happening, even with the most reputable networks. You try to stay ahead of the curve, but if you fall behind even a bit, you may get blacklisted again. All this for a site that makes maybe $200/month.

Another problem I faced was with SURBL. It's a spam blacklist that works on a bizarre system: basically, if they find your domain name being spammed around the internet (it could be anyone else doing it), they will blacklist you. What's worse is that providers like bitly, Facebook, etc. use the SURBL blacklist. So what happens? Well, one day someone goes and spams your domain on internet forums, SURBL picks up on it, and suddenly all your Facebook links, bitly links shared on Twitter, etc. start showing warning pages. Basically someone clicks on your link on Facebook and gets a page saying "This site may harm your computer". Ditto for bitly as well.

I tried to get delisted but no one at SURBL would respond. I kept trying to get in touch through their online form but no one responded for 2-3 weeks. Finally, I did a whois on the domain, found the admin contact and emailed him. I also sent him a text on his phone. At last after about 4 weeks of being on SURBL I managed to get delisted.

That. Was. An. Ordeal.

This, in my opinion, is absolutely unacceptable. Spam blacklists have a responsibility to be correct in their assessments, and if they do have a false positive for any reason, they should have a streamlined resolution process. Unfortunately, the internet is the wild west and shooting before asking questions appears to be quite acceptable in these parts. I recognize that a lot of this is an attempt to protect users, but when I open my mom's PC I still see a bunch of browser toolbars, bookmarking widgets, etc., etc. (malware, right?). Folks are still getting phished. This fight needs to be rethought.


I'm on the other side of this debate. Two years ago I was innocently using Chrome to read reddit when I suddenly got a virus on my computer that took me four hours to clean up. It turned out it came from an ad on reddit. With things like ad networks distributing malware, Google and others should shoot first and ask questions later.

http://www.reddit.com/r/announcements/comments/e7988/a_numbe...


The unfortunate thing is that, more often than not, it isn't the ad network that gets punished but rather the downstream ad-space seller.

Half of the time it comes down to browsers having bad security principles. Any iframe (on any domain) can change the top window's location, and the new sandboxing for iframes is HORRIBLE (not to mention that support for sandboxing itself is barely there). Ad networks work by passing the user around between networks until an ad is found, but this removes any chance of finding who is responsible for a given bad ad.

The entire thing is a security nightmare and the only "fix" Google can provide consists of blacklisting downstream websites that may indirectly link to a bad ad via ad networks. I don't disagree with Google's "fix" but I do think they should approach the problem on Chrome and show what good security means when it comes to ads for the other browser vendors.


It's complicated. Modern frame navigation evolved a few years ago out of this work: http://www.adambarth.com/papers/2009/barth-jackson-mitchell-...

Those changes represented a pretty dramatic improvement, and all modern browsers are now in alignment. But the resulting behavior does make numerous concessions for compatibility with existing content. Take your complaint about any child frame being able to navigate top. That behavior was retained both for content compatibility and because it was a necessary security measure.

To explain the security concern, standardized frame communication predates the broad adoption of X-Frame-Options. So, the only reliable, cross-browser click-jacking defense at the time was frame-busting via top-navigation. And because we're talking about the Web, you have to consider a transition path of many years for existing content.
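
For readers who haven't seen it, frame-busting via top-navigation looked roughly like this (a minimal TypeScript sketch; real deployments layered additional checks on top):

    // Classic frame-buster: if this page is being rendered inside someone
    // else's frame, navigate the top-level window to this page's own URL.
    // Setting a cross-origin top's location is permitted even though reading
    // it is not, which is why child frames were kept able to navigate `top`.
    if (window.self !== window.top) {
      window.top!.location.href = window.self.location.href;
    }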

It may not seem like a great explanation, but the fact is that most of the confusing security behaviors on the Web really are the result of being boxed in by the need to keep existing content working. And, unfortunately, it takes many years to get improved mechanisms widely supported among browsers and to then migrate the majority of the Web onto them.


I appreciate the complexity of the subject a lot. I understand we can't just break iframes. The frustrating thing is that sandboxed iframes left out one essential option: allow-plugins. Yes, allow-plugins would be a security concern, and it goes against the point of sandboxing, but so does allow-top-navigation. We can go back and forth forever on that subject, but the fact is that ads use Flash, and that limits the usefulness of iframe sandboxing (which is what makes me say it is horrible).
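
A minimal sketch of the gap being described here, using an illustrative ad URL (the sandbox tokens shown are real ones; the point is that no token re-enables plugin content):

    // Create a sandboxed ad frame. Scripts and top navigation can be
    // selectively re-enabled, but Flash/plugin content cannot, which is why
    // sandboxing is a non-starter for most display ads.
    const adFrame = document.createElement("iframe");
    adFrame.src = "https://ads.example.com/slot.html"; // hypothetical ad URL
    adFrame.sandbox.add("allow-scripts", "allow-top-navigation");
    // there is no "allow-plugins" token to add here, which is the gap
    document.body.appendChild(adFrame);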

Fundamentally, something better should exist, and websites take the flak for bad iframe security. Annoyingly, a site with an iframe on it can't determine what locations are being accessed within the iframe, so it can't accurately offer a "report ad" feature that grabs the sites involved. Sometimes ad network iframes are nested 3-4 deep (disgusting but true) and it is impossible to read their locations (yet these iframes can set window.top.location with ease).
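
A sketch of that asymmetry, assuming an ad iframe served from another origin (the selector and URL are illustrative):

    const slot = document.querySelector<HTMLIFrameElement>("iframe.ad-slot")!;
    try {
      // The embedding page cannot see where the nested ad chain ended up:
      // reading a cross-origin frame's location throws a SecurityError.
      console.log(slot.contentWindow!.location.href);
    } catch (err) {
      console.log("cannot read the ad's final URL:", err);
    }
    // ...yet script *inside* that same cross-origin frame can still run
    // window.top.location.href = "https://somewhere.else/" and hijack the page.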

I don't know what needs to exist but trying to work against these bad actors is nearly impossible for web devs (yet the burden is placed on the site).

Thanks for taking the time to comment - I understand it's very complex but I feel like this important subject hasn't seen a lot of real world use (we've decided against using sandboxing because it doesn't work for ads).


I appreciate your frustration, but I think you may be misunderstanding the security impact of browsers allowing top navigation. Either way, an exploit can still be launched from within the nested browsing context (be it a plugin or HTML frame). So, the risk here is that some of the content (no matter how deeply nested) is being served from the same location as known malware. And the risk is the same regardless of whether or not the nested context can navigate top.


Given that most successful malware infections via a good browser (like Chrome) come from downloads (rather than exploits), I'd say allowing top navigation is a pretty big concern. But if there is an exploit that targets a specific browser, there is very little that anyone can do in advance if it is being served on an ad network.

So, if a normal user (someone who can't tell a malware dialog from an OS-level dialog) sees a big warning in their trusted browser (or a popup) without having clicked on anything at all, they might be compelled to download some malware. So yes, getting sandboxed iframes right would have been amazing, but in my mind it was a failure.


Just to clarify "sees a big warning" - I had meant the fake warnings that a lot of malware display: "1000 INFECTIONS FOUND, DOWNLOAD THIS NOW!!!"


Yes, you shoot the malware itself.

You don't shoot a site that links to a site that links to a site that links to malware.

You don't label a website as malware for spamming.


The problem with the blocklist business model is that false positives have a much lower cost to the blacklist provider than false negatives. Users who find that some spammer doesn't get blocked will get really annoyed and make their displeasure known to the blacklist provider (or just switch to some other blacklist). As long as it's not a very high visibility site that is getting blocked (i.e., it's just some startup or a relatively low-profile blog), it's no skin off the blocklist provider's back. So the problem with blacklists like SURBL is going to be a very hard one to fix; the dead hand of economics doesn't really care about irrelevant things like fairness or justice.


Have you heard about the Blue Coat filter? Insane categories with insane definitions. For example, http://sitereview.cwfservice.net/catdesc.jsp?catnum=38&catma... : the 3 sites listed as examples of the category are not actually in that category.


Those sites seem to fit the "Computers" category to me. (Note that sites can be in more than one category.)


Is it possible that false positives like this open Google to defamation lawsuits? They're making an assertion that is false (and that their webmaster tools show they know to be false), and broadcasting that in a way that definitely has a financial impact on the businesses involved.


Good point. I don't have the knowledge to answer this, but what is true beyond any doubt is that a false statement like this causes financial damage to the site.


Yes. The very hard part is the actual legal battle.


And if you still see the malware warning, well, join me in appreciation of Google's efforts to protect internet users against very evil businesses like mine :-(

(I guess they think they're the only ones that follow the "Don't be evil" motto)


The domain name looks like that of a spam website. You should avoid hyphens when possible.


A slightly related problem is that we cannot use WordPress as part of the domain name, which would probably make it easier to distinguish ourselves (and all other WordPress-related products) from other kinds of sites.


Also (slightly OT):

* Link to homepage is on the right-side!?

* Somehow, content text feels too thin/dull. Little bit difficult to read :)


Thanks for the input. We'll look into it (the link to the homepage is just because the menu starts from the right and right now there is only one menu item).



