For what it's worth, I love reading about this stuff, though I specialize in InfoSec so this sort of thing is actually pretty common in our communities.
You would have definitely had a much easier time with them than you're having right now.
But for what it's worth, this will blow over soon enough; the internet does not have the greatest memory (unless you actually did something horrendous, which you didn't).
The issue is that some web applications don't load what were traditionally discrete pages with their own URLs (e.g. PJAX, i.e. pushState + AJAX). It's a trend you'll find in sites built to feel more like applications. Scroll to the bottom of an onion.com article and watch your URL update to the next page without a page refresh. This was done so modern sites built like this could still let the user navigate back and forward: the site updates the browser's location history, and effectively what URL the back button will point to. I could imagine blocking this behavior if it points to a site off the TLD and its subdomains. I'm hard pressed to figure out how else they could prevent this; it's definitely a flaw in the trust model, but probably worth the trade-off.
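For what it's worth, browsers do already require that `history.pushState` targets stay on the same origin (a cross-origin URL throws a SecurityError), which is roughly the check described above. A minimal sketch of that kind of same-site rule, with illustrative names (`isSameSiteTarget` is not a real browser API, just a model of the check):

```typescript
// Sketch of the check a browser could apply before letting a page rewrite
// the visible URL via history.pushState: only allow targets on the same
// host or one of its subdomains. Purely illustrative, not a browser API.
function isSameSiteTarget(currentHref: string, targetHref: string): boolean {
  const current = new URL(currentHref);
  // Resolve relative URLs (e.g. "/article/2") against the current page.
  const target = new URL(targetHref, currentHref);
  return (
    target.hostname === current.hostname ||
    target.hostname.endsWith("." + current.hostname)
  );
}

// Same host, relative URL: allowed
console.log(isSameSiteTarget("https://onion.com/article/1", "/article/2")); // true
// Subdomain: allowed
console.log(isSameSiteTarget("https://onion.com/a", "https://www.onion.com/b")); // true
// Entirely different site: would be blocked
console.log(isSameSiteTarget("https://onion.com/a", "https://evil.example/b")); // false
```

Real browsers compare full origins (scheme, host, and port) rather than just hostnames, so this is looser than what's actually enforced.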
Hopefully, but even then, it's good that you are making people more aware of just how sketchy it can get.
Chrome will always have nasty exploits because it's dealing with the flexibility of the world wide web. It's more important that we, the users, are aware of the tricks that attackers employ than that we have clean solutions.
I don't trust that any software is secure, and to date that mindset hasn't burned me yet!