I think tptacek's point is that the answer to your question is spelled out very clearly on the page. ;-)
From the page:
> You should own Ferguson and Schneier’s follow-up, Cryptography Engineering (C.E.). Written partly in penance, the new book deftly handles material the older book stumbles over. C.E. wants to teach you the right way to work with cryptography without wasting time on GOST and El Gamal.
Plus a whole section at the end which starts with "If this stuff is interesting to you, here’s some additional reading:"
If that were true it would be a major security vulnerability. ;-)
The Google Translate content is served up from a subdomain of googleusercontent.com. This is a domain designated by Google for user-supplied content so that it can be rendered without affecting the safety of pages on google.com and elsewhere.
The demonstration here is that one page on googleusercontent.com can affect another page on googleusercontent.com. This is perfectly acceptable via the same origin policy.
I think glebm implies the same: "iframe code can access `frames[0].document` cross domain" means the access goes through translate.google.com, and "modifies target page on another domain" means it modifies a page whose content originally lives on another domain but is rendered under the same googleusercontent.com domain.
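A minimal sketch of what that same-origin frame access looks like, assuming the outer page and the framed page are both served from the same origin (as when both are rendered through the translate proxy); none of this would work across genuinely different origins:

```javascript
// Runs in the outer page. Both pages are assumed to be served from the same
// origin (e.g. both rendered through the translate proxy), so the same-origin
// policy allows this access; across real origins the browser would throw.
const framedDoc = window.frames[0].document;

// Read from and write into the framed page.
console.log(framedDoc.title);
framedDoc.body.insertAdjacentHTML('beforeend', '<p>modified by the parent page</p>');
```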
Not sure why this is getting voted up so much. The author came across a report of IE freezing/crashing, replicated it, and Microsoft fixed it. The same security update (http://technet.microsoft.com/en-us/security/bulletin/ms13-03...) describes 10 other vulnerabilities in the same way. Why is this particular vulnerability noteworthy or interesting, other than the fact that someone stumbled across it and documented it before it was reported to Microsoft?
> Microsoft Internet Explorer 6 through 8 does not properly restrict data access by VBScript, which allows remote attackers to perform cross-domain reading of JSON files via a crafted web site, aka "JSON Array Information Disclosure Vulnerability."
Similar JSON information disclosure can be very serious for a web application. http://haacked.com/archive/2009/06/24/json-hijacking.aspx describes the general issue in some depth. The fact that it was possible to use VBScript as a way to read cross-domain JSON is very significant from a security perspective.
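For context, here's a rough sketch of the general JSON-hijacking pattern the haacked.com post describes (not the VBScript vector itself); it relied on old browsers letting a page hook how JSON literals were evaluated, and the endpoint and field names below are hypothetical:

```javascript
// Attacker-controlled page. This only worked in old browsers (the pattern in
// the linked haacked.com post); modern engines define object-literal
// properties directly, so inherited setters like this never fire.
var stolen = [];

// Hook assignment of a property name expected in the victim's JSON objects.
// 'email' is a hypothetical field name.
Object.prototype.__defineSetter__('email', function (value) {
  stolen.push(value);
});

// Include the victim's authenticated JSON endpoint as a <script>. A top-level
// JSON array is a valid JavaScript expression, and the browser attaches the
// victim's cookies, so the response contains their private data.
var s = document.createElement('script');
s.src = 'https://victim.example/api/contacts.json'; // hypothetical endpoint
document.body.appendChild(s);
```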
I think a lot of folks are voting it up because they found it interesting and informative and it gives a real-world example of using a widely-available tool (pageheap) to diagnose bugs.
It may not drop any new super-advanced fuzzing or exploit techniques, but it's the story of a guy who did the legwork to run down the exploitability of a bug starting from public crash reports.
I don't have a problem with your blog post. It documents how to reproduce the issue referenced in a particular CVE. But I'm curious what value people are deriving from reading it.
Right. But your post shows that you can reliably get the browser to crash. It doesn't demonstrate that the crash is exploitable, unless I'm missing something.
I was able to prove to MSRC that it was potentially exploitable, which is how I got them to fix it. There are a lot of non-exploitable crashes, such as null pointer dereferences, that MSRC will not consider security bugs.
> It's not that PG has a grudge against Google (or vice versa) or anything like that. I believe that search engine bots crawl Hacker News hard enough that PG blocks most crawling by bots. In the case of Google, he does allow us to crawl from some IP addresses, but it's true that Google isn't able to crawl/index every page on Hacker News.
> And to show this isn't a Google-specific issue, note that Bing's #1 result for the search [hacker news] is a completely different site, thehackernews.com: http://www.bing.com/search?q=hacker+news
> In general, I think PG's priority is to have a useful, interesting site for hackers. That takes precedence and is the reason why I believe PG blocks most bots: so that crawling doesn't overload the site.
"Stateless" CSRF protection as described here is strictly inferior to other forms of protection. The reasons are somewhat laid out in this blog post's comments:
1. JavaScript on any subdomain can set a cookie scoped to the parent domain, which the browser then sends to every sibling subdomain. That means an attacker who can run script on one subdomain can set the token cookie and override the CSRF protection entirely (see the sketch below).
2. The "replay protection" means that you must continue to maintain state on the server (ostensibly to prevent duplicate requests).
The author has since gone on to propose a "triple submit" system for CSRF protection (http://www.slideshare.net/johnwilander/stateless-anticsrf), which is still vulnerable if an attacker can use a related subdomain to set many cookies.
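To make point 1 concrete, a hedged sketch of the cookie override; the domain names and cookie name are made up for illustration:

```javascript
// Script running on any sibling subdomain, e.g. https://blog.example.com
// (perhaps via an XSS flaw there). Domain and cookie names are hypothetical.
// Scoping the cookie to the parent domain makes it visible to app.example.com.
document.cookie = 'csrf_token=attacker-chosen-value; Domain=example.com; Path=/';

// The attacker's cross-site form then submits the same value:
//   <input type="hidden" name="csrf_token" value="attacker-chosen-value">
// A "stateless" double-submit check sees cookie == submitted value and passes,
// even though the request was forged.
```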
I believe that depends on the attack vector. Any sort of extra protection will probably not do any harm; simple obstacles such as this can stop a bunch of script kiddies who are using pre-made software.
I had a small project a while back in which I tried to prevent the use of a widely used cracking application. What I made was basically CSRF protection built with WebSockets, which fetched the CSRF data in the onsubmit event. In the end, I did succeed in blocking the application, which could have saved bandwidth for large file-hosting and porn sites when bundled with a CAPTCHA (although it worked against the application even without one). For example, when the Spotify API was released it was abused at a rate of a few thousand login requests per minute, and some sites still get hammered by tens of concurrent bots making login attempts from who knows how many different locations. Anyhow, since it relied on client-side JS, the events could be fired before the actual POST, which rendered it useless against more customized attacks. Ultimately the original purpose of the project was a success; I kept some crackers on their toes and was essentially banned from the community.
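A rough sketch of that kind of setup on the client side, with hypothetical endpoint, form, and field names (the server half that issues tokens over the socket isn't shown):

```javascript
// Fetch a one-time CSRF token over a WebSocket only when the form is actually
// submitted, then attach it to the POST. Names and URLs are hypothetical.
const socket = new WebSocket('wss://example.com/csrf');

document.querySelector('#login-form').addEventListener('submit', (event) => {
  event.preventDefault(); // hold the POST until a token arrives
  const form = event.target;

  socket.addEventListener('message', (msg) => {
    form.querySelector('input[name="csrf_token"]').value = msg.data;
    form.submit(); // send the real POST with the freshly issued token
  }, { once: true });

  socket.send('token-request'); // ask the server for a one-time token
});
```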
My point is that even small security measures may be worth it, since you never know who you are up against.
Not in general, no. You can drop replay protection as a requirement, and that gets you to actual statelessness. If you then keep your website on a single domain and never put anything else onto other subdomains, then in theory the only remaining risk is that your single application is vulnerable to XSS. But you shouldn't build your security on assumptions like that if you can help it.
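For what it's worth, a minimal sketch of that fully stateless variant (double submit with no replay tracking); the cookie name, header name, and port are made up for illustration:

```javascript
// Minimal sketch of a stateless double-submit check in plain Node.
// There is no server-side token store and no replay tracking: the check is
// simply "the cookie value must equal the value submitted with the request".
const http = require('http');
const crypto = require('crypto');

function parseCookies(header = '') {
  return Object.fromEntries(
    header.split(';').filter(Boolean).map((part) => {
      const [name, ...rest] = part.trim().split('=');
      return [name, rest.join('=')];
    })
  );
}

http.createServer((req, res) => {
  const cookies = parseCookies(req.headers.cookie);

  if (req.method === 'GET') {
    // Issue a random token as a cookie; the page would echo it in its forms.
    const token = crypto.randomBytes(16).toString('hex');
    res.setHeader('Set-Cookie', `csrf_token=${token}; Path=/; SameSite=Lax`);
    res.end('form page goes here');
    return;
  }

  // For state-changing requests, require the submitted token (sent as a
  // header here for brevity) to match the cookie. Nothing is stored server-side.
  const submitted = req.headers['x-csrf-token'];
  if (!submitted || submitted !== cookies.csrf_token) {
    res.statusCode = 403;
    res.end('CSRF check failed');
    return;
  }
  res.end('ok');
}).listen(3000);
```

Note that this still inherits point 1 above: script on any sibling subdomain can overwrite `csrf_token`, which is why keeping everything on a single domain matters.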