Hacker News | nbpoole's comments

As far as I'm aware this is the first time I've been mentioned in a conspiracy theory. So thanks for that. :-)


Interesting: this sounds like a recurrence of the same issue which was described a number of months back:

https://www.digitalocean.com/blog_posts/resolved-lvm-data-is...

At the time, the blog post claimed that the issue was resolved and that data was now being wiped by default. I wonder why that would have changed.


I'm the one who reported it. I was able to recover someone else's web logs from 29 December 2013 on a VM an hour old.


I think tptacek's point is that the answer to your question is spelled out very clearly on the page. ;-)

From the page:

> You should own Ferguson and Schneier’s follow-up, Cryptography Engineering (C.E.). Written partly in penance, the new book deftly handles material the older book stumbles over. C.E. wants to teach you the right way to work with cryptography without wasting time on GOST and El Gamal.

Plus a whole section at the end which starts with "If this stuff is interesting to you, here’s some additional reading:"


Thank you, I missed that part, and the additional reading is over my head.


If that were true it would be a major security vulnerability. ;-)

The Google Translate content is served up from a subdomain of googleusercontent.com. This is a domain designated by Google for user-supplied content so that it can be rendered without affecting the safety of pages on google.com and elsewhere.

The demonstration here is that one page on googleusercontent.com can affect another page on googleusercontent.com. This is perfectly acceptable under the same-origin policy.
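
As a rough illustration of that rule, here's a minimal sketch (in TypeScript) of the same-origin check a browser applies before letting one frame script another. The hostnames are purely illustrative, not Google's actual sandbox layout:

  // Origin = scheme + host + port; all three must match exactly.
  function sameOrigin(a: string, b: string): boolean {
    const ua = new URL(a);
    const ub = new URL(b);
    return ua.protocol === ub.protocol &&
           ua.hostname === ub.hostname &&
           ua.port === ub.port;
  }

  // Two translated pages served from the same googleusercontent.com
  // subdomain share an origin, so frames[0].document access is allowed...
  console.log(sameOrigin(
    "https://translate.googleusercontent.com/a",
    "https://translate.googleusercontent.com/b")); // true

  // ...but neither of them can reach into google.com itself.
  console.log(sameOrigin(
    "https://translate.googleusercontent.com/a",
    "https://www.google.com/")); // false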


I think glebm is implying the same thing: "iframe code can access `frames[0].document` cross domain" means the access happens through translate.google.com, and "modifies target page on another domain" means modifying a page that shares the same googleusercontent.com domain but is a rendering of content from another domain.



Not sure why this is getting voted up so much. The author came across a report of IE freezing/crashing, replicated it, and Microsoft fixed it. The same security update (http://technet.microsoft.com/en-us/security/bulletin/ms13-03...) describes 10 other vulnerabilities in the same way. Why is this particular vulnerability noteworthy or interesting, other than the fact that someone stumbled across it and documented it before it was reported to Microsoft?

In fact, CVE-2013-1297 from that same security update (which I didn't know existed until now) is far more interesting from a security perspective (http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1...).

> Microsoft Internet Explorer 6 through 8 does not properly restrict data access by VBScript, which allows remote attackers to perform cross-domain reading of JSON files via a crafted web site, aka "JSON Array Information Disclosure Vulnerability."

This kind of JSON information disclosure can be very serious for a web application. http://haacked.com/archive/2009/06/24/json-hijacking.aspx describes the general issue in some depth. The fact that it was possible to use VBScript to read cross-domain JSON is significant from a security perspective.
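
To illustrate the defensive side, here's a minimal sketch (in TypeScript) of the standard mitigations that article discusses: never return a bare top-level JSON array, and prefix responses with a guard string that a cooperating client strips before parsing. The guard string and wrapper key are assumptions for illustration, not any particular framework's format:

  // Illustrative guard prefix; some frameworks use a string like this.
  const GUARD = ")]}',\n";

  function serializeSafely(rows: unknown[]): string {
    // Wrapping the array in an object means a cross-domain <script src=...>
    // include can't evaluate the response as a useful array literal.
    return GUARD + JSON.stringify({ d: rows });
  }

  function parseGuardedJson(body: string): unknown {
    // A same-origin XHR/fetch client strips the guard before parsing.
    return JSON.parse(body.slice(GUARD.length));
  }

  const wire = serializeSafely([{ user: "alice", email: "a@example.com" }]);
  console.log(parseGuardedJson(wire)); // { d: [ { user: "alice", ... } ] }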


I think a lot of folks are voting it up because they found it interesting and informative and it gives a real-world example of using a widely-available tool (pageheap) to diagnose bugs.

It may not be dropping any new super-advanced fuzzing or exploit techniques, but it's the story of a guy who did the legwork to run down the exploitability of a bug from public crash reports.


What is unique is that the original report of the bug was public. I was the one who figured out that it was exploitable and sent it to MSRC.


Right. But I can very easily find reports of reliable ways to crash IE via CSS: https://www.google.com/search?q=crash+ie+css

I don't have a problem with your blog post. It documents how to reproduce the issue referenced in a particular CVE. But I'm curious what value people are deriving from reading it.


Not all are exploitable.


Right. But your post shows that you can reliably get the browser to crash. It doesn't demonstrate that the crash is exploitable, unless I'm missing something.


I was able to prove that it was potentially exploitable to MSRC, which is how I got them to fix it. There are a lot of non-exploitable crashes such as null pointer dereferences that MSRC will not consider as security bugs.


Luckily, this question has been asked and answered fairly recently!

Specifically https://news.ycombinator.com/item?id=5955043 from last week, complete with a response from Matt Cutts (https://news.ycombinator.com/item?id=5955374):

> It's not that PG has a grudge against Google (or vice versa) or anything like that. I believe that search engine bots crawl Hacker News hard enough that PG blocks most crawling by bots. In the case of Google, he does allow us to crawl from some IP addresses, but it's true that Google isn't able to crawl/index every page on Hacker News.

> Here's a link where I answered the same question about three weeks ago: https://news.ycombinator.com/item?id=5837004 , so this isn't a new issue. In fact, PG has been blocking various bots since 2011 or so; https://news.ycombinator.com/item?id=3277661 is one of the original discussions about this.

> And to show this isn't a Google-specific issue, note that Bing's #1 result for the search [hacker news] is a completely different site, thehackernews.com: http://www.bing.com/search?q=hacker+news

> In general, I think PG's priority is to have a useful, interesting site for hackers. That takes precedence and is the reason why I believe PG blocks most bots: so that crawling doesn't overload the site.


Well, I can no longer find the site by searching for the brand, and I'm sure this will dramatically change the traffic the site gets.


Good.


I'm not buying the answer Matt gave. HN was "banned" because of the anti-Google comments expressed on this site.

Google doesn't want other people (who are unaware that Google is selling their privacy to the NSA) to accidentally stumble across the comments on HN.



I believe that's how Rails works, except using an HMAC on the cookie instead of AES (since AES itself doesn't prevent tampering).
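
For illustration, here's a minimal sketch (in TypeScript, using Node's crypto module) of the signed-cookie idea described above. This is not Rails' actual implementation or cookie format; the separator and secret handling are assumptions:

  import { createHmac, timingSafeEqual } from "node:crypto";

  const SECRET = "server-side-secret"; // assumed to never reach the client

  function signCookie(value: string): string {
    const mac = createHmac("sha256", SECRET).update(value).digest("hex");
    return `${value}--${mac}`;
  }

  function verifyCookie(cookie: string): string | null {
    const sep = cookie.lastIndexOf("--");
    if (sep === -1) return null;
    const value = cookie.slice(0, sep);
    const given = Buffer.from(cookie.slice(sep + 2), "hex");
    const expected = createHmac("sha256", SECRET).update(value).digest();
    // Constant-time comparison; any tampering with the value breaks the MAC.
    if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
      return null;
    }
    return value;
  }

The client can read the cookie's value but can't forge a new one without the server-side key, which is the tamper-protection property being described.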


You can't just read cookies for arbitrary domains in an iframe.


"Stateless" CSRF protection as described here is strictly inferior to other forms of protection. The reasons are somewhat laid out in this blog post's comments:

1. JavaScript on any subdomain can set a cookie scoped to the parent domain, which the browser will then send to every sibling subdomain. That means an attacker who controls (or can inject script into) one subdomain can plant their own token and bypass the CSRF protection entirely (as sketched below).

2. The "replay protection" means you must still maintain state on the server (ostensibly to prevent duplicate requests), so the scheme isn't actually stateless.
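
To make point 1 concrete, here's a minimal sketch (in TypeScript) of a naive double-submit check and how a cookie planted from a sibling subdomain defeats it; the names and request shape are illustrative:

  interface IncomingPost {
    cookies: Record<string, string>;
    body: Record<string, string>;
  }

  // Naive double-submit check: accept the request if the cookie and the
  // form token match. Nothing ties the token to a server-side secret.
  function doubleSubmitOk(req: IncomingPost): boolean {
    const cookieToken = req.cookies["csrf_token"];
    return !!cookieToken && cookieToken === req.body["csrf_token"];
  }

  // JavaScript on evil.example.com can run something like
  //   document.cookie = "csrf_token=attacker; domain=.example.com; path=/";
  // and that cookie is then sent to app.example.com, so a cross-site form
  // posting the same value sails through without knowing any secret:
  console.log(doubleSubmitOk({
    cookies: { csrf_token: "attacker" }, // planted via the sibling subdomain
    body: { csrf_token: "attacker" },    // supplied by the attacker's form
  })); // true -- the protection is bypassed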

The author has actually gone on to propose a "triple submit" system for CSRF protection (http://www.slideshare.net/johnwilander/stateless-anticsrf), which is still vulnerable if an attacker can use a related subdomain to set many cookies.

For a more thorough discussion of CSRF mitigations, check out https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(...


Thanks for this info. That last link includes a section on double-submit cookies with a little further discussion: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(...


So there is no good way of implementing stateless CSRF protection?


I believe that depends on the attack vector. Any extra protection probably won't do any harm; simple obstacles like this can stop a bunch of script kiddies who are using pre-made software.

I had a small project a while back in which I tried to prevent use of a widely used cracking application. What I built was basically CSRF protection implemented with WebSockets, which fetched the CSRF data on the onsubmit event. In the end I did succeed in blocking the application, which could have saved bandwidth for large file-hosting and porn sites when bundled with a CAPTCHA (although it worked against the application even without one). For example, when the Spotify API was released it was exploited at a rate of a few thousand login requests per minute, while some sites still get hammered by tens of concurrent bots making more formal login attempts from who knows how many different locations. However, since I used client-side JS, the events could be fired before the actual POST, which rendered it useless against more customized attacks. Ultimately the original purpose of the project was a success; I got some crackers on their toes and was essentially banned from the community.

My point is that even small security improvements may be worth it, since you never know who you are up against.


Not in general, no. You can drop replay protection as a requirement, which gets you to actual statelessness. If you then keep your website on a single domain and never put anything else on other subdomains, the only remaining risk is, in theory, that your application itself is vulnerable to XSS. But you shouldn't build your security on assumptions like that if you can help it.
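
For what it's worth, here's a minimal sketch (in TypeScript, using Node's crypto module) of the kind of token that approach implies: derive it from the session identifier with a keyed MAC so the server stores nothing per form. The names and key handling are assumptions:

  import { createHmac, timingSafeEqual } from "node:crypto";

  const CSRF_KEY = "another-server-side-secret"; // assumed server-only

  function csrfTokenFor(sessionId: string): string {
    return createHmac("sha256", CSRF_KEY).update(sessionId).digest("hex");
  }

  function csrfTokenValid(sessionId: string, submitted: string): boolean {
    const expected = Buffer.from(csrfTokenFor(sessionId), "hex");
    const given = Buffer.from(submitted, "hex");
    // No replay protection: the same token stays valid for the lifetime
    // of the session, which is exactly the trade-off described above.
    return given.length === expected.length && timingSafeEqual(given, expected);
  }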

