
HTTPS everywhere would solve this, and the Comcast JavaScript injection too. I wonder how many more people will deploy things like this before that happens?



Actually, the "HTTPS Everywhere" plugin [1] for Firefox and Chrome is an incomplete solution. The reason is simple: not all sites support HTTPS, so the plain HTTP-only sites are still vulnerable.

Injecting a script into insecure HTTP is just one of many abuses possible by ISPs. Replacing images on the fly is another. Recompressing (degrading) images and video is another. Messing with DNS responses is another, and so on...

A far better solution is to use a VPN service, since, when configured correctly, it encrypts all traffic passing through your ISP. Of course, this really just moves the trust problem rather than solving it, but at least using a VPN service makes it your decision whom to trust. I use Tunnelr.com [2] because, by reputation, similar interests, and years of traded emails, I know the people who run it.

[1] https://www.eff.org/https-everywhere

[2] http://tunnelr.com


I meant actually demanding that content providers deploy HTTPS everywhere instead.


Sorry about that. It seems my reading comprehension skills are still flawed. ;)


And here's your annual reminder of the upside-down-ternet http://www.ex-parrot.com/pete/upside-down-ternet.html


tunnelr.com... "Powered by OpenBSD"... I like it already.


You shouldn't have to encrypt your data to stop your ISP from actively 1) scanning and 2) corrupting it.

What is the FCC good for?


If you don't encrypt your data and verify the sites to which you connect (both of which HTTPS does), then anybody between you and them can intercept and alter the transmission.

I agree you shouldn't have to do it, but you need to worry about more than just your ISP.

Just assume any unsecured internet connection is actively hostile, and you'll be better off.


Indeed. What's with the Internet being a set of tubes?


And tubes based on the honor system at that.


It is not just about your ISP; it is about all the ISPs between you and the website.


Transit and peering providers aren't going to do that. Certainly not tier 1 networks.

1) They couldn't get away with it. 2) It would slow everything to a crawl; inspecting traffic is expensive at the scale most large providers operate at.


You shouldn't have to, I agree, but apparently you do have to. "Should" counts for nothing.


One would hope the ads/no-ads arms race would end there, but I could easily see some unscrupulous or greedy ISPs then resorting to setting up SSL proxies to MITM your ostensibly secure traffic, as some private organizations (schools, corporations) already do.


They'd have to have their certs installed on your computer, or be an existing CA. Schools and corps (including the one I work for) can do this because they have admin control over destination machines.
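For what it's worth, you can check which CA actually issued the certificate your connection is presenting, which is a quick way to spot this kind of proxy. A minimal sketch in Python (the hostname is just a placeholder):

    import socket, ssl

    host = "example.com"  # any HTTPS site you want to check
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as s:
        cert = s.getpeercert()
        # On a clean connection this is the site's real CA; behind a
        # corporate SSL proxy it will be the proxy's own root instead.
        print(cert["issuer"])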


You're absolutely right; I didn't mean to imply that what schools and corps currently do is shady in any way (as long as you're aware that they're doing it).


That's a good way to get yourself the CA death penalty.


So? If Comcast does it, what are they going to do?


A few financial institutions holding large quantities of Comcast's paper (and their executives' IRAs) pick up the phone and explain how they feel about having their secure websites' identity impersonated.


HTTPS is the wrong solution. It is for preventing others from seeing what you're sending and receiving, not for verifying the integrity of what is sent and received. Well, it does do that too, but it adds extra unneeded overhead by encrypting everything. Besides, the ISP can easily man-in-the-middle any connection you make and then inject their ads into the webpage, even if you use HTTPS.

The correct solution is signing the webpage (but not necessarily encrypting it). More technically, that means the server/website would hash the source of the webpage and then send the webpage, the signed hash, and, if needed, the cert it used to sign the hash. Upon receiving both the webpage and the signed hash, the browser would check that the signature can be trusted (using a chain of trust, the same way we already do with certs for HTTPS pages), hash the webpage source it received, and verify that that hash matches the signed hash it received from the website.

It doesn't matter if any of that is sent in plaintext, because there is no sensitive information, and as long as the hashing algorithm used is strong (i.e. the SHA-2 family, not MD5), the ISP can do fuck all to inject JavaScript.
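To make the idea concrete, here's a rough sketch of what the two ends would do, using RSA over SHA-256 via Python's cryptography package (the page contents and key handling are obviously placeholders; in reality the public key would arrive via a cert in a trust chain):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.exceptions import InvalidSignature

    # Server side: hash and sign the page body.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    page = b"<html><body>hello</body></html>"
    signature = key.sign(page, padding.PKCS1v15(), hashes.SHA256())

    # Browser side: verify the signature against the page it received.
    try:
        key.public_key().verify(signature, page,
                                padding.PKCS1v15(), hashes.SHA256())
        print("page is authentic")
    except InvalidSignature:
        print("page was tampered with in transit")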


If TLS could negotiate certificates instead of supporting one and only one, the backbone of any sane "virtual host" system, then https: wouldn't be a big deal. It'd be the default.

As it stands, you need a separate IP (expensive) or port (annoying) for each virtual host configured with a different SSL cert. This has to stop, but it will not be easy to fix.


RFC 3546, which includes TLS Server Name Indication, has existed since June 2003. The problem preventing deployment is the lack of client support, especially Internet Explorer on Windows XP and Android 2.x [1].

--

1. http://en.wikipedia.org/wiki/Server_Name_Indication#No_suppo...
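For what it's worth, the server side of SNI is straightforward once clients send the extension; a rough sketch with Python's ssl module (the hostnames and file names are made up):

    import ssl

    # One certificate/key pair per virtual host.
    contexts = {}
    for name in ("a.example.com", "b.example.com"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"{name}.crt", f"{name}.key")
        contexts[name] = ctx

    default = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default.load_cert_chain("a.example.com.crt", "a.example.com.key")

    def pick_cert(sock, server_name, original_ctx):
        # Called during the handshake with the SNI name the client sent;
        # swap in the matching context so the right cert is served.
        if server_name in contexts:
            sock.context = contexts[server_name]

    default.sni_callback = pick_cert
    # `default` would then be used to wrap the listening socket.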


Ha-ha, and let's call this HTTPSEC http://cr.yp.to/talks/2013.02.07/slides.pdf !

(Spoiler: HTTPSEC is not a real thing; it's what we'd have had if the people who invented DNSSEC, or people like the parent commenter, had designed something like TLS.)


Wow, that is an annoying slide deck. A lot of the issues it raises either are DNS-specific and don't exist with HTTP (slide 110: "Attacker forges many UDP request packets from victim's IP address to many HTTPSEC servers."), also exist for HTTPS (slide 106: "Each HTTPSEC key/signature is another file to retrieve. Often your browser needs a chain of keys from several servers. Could be a serious slowdown."), or are self-contradictory (slide 13 says HTTPSEC "allow[s] for verification of the origin, authenticity, and integrity of data" obtained through HTTP, but slide 123 says "The data signed by HTTPSEC doesn't actually include the web pages that the browser shows to the user"). I think the HTTPSEC you are referencing is partly a straw man, because that HTTPSEC does the opposite of what I proposed, which was signing the webpage itself, not the redirect info. I do have some questions about issues brought up in the slides:

Is the delay/cost of signing versus encrypting data really so huge that it's infeasible to sign dynamic pages?

Also, why does each non-existent HTTP page need its own 404? Wouldn't a static 404 response be just fine?


It was meant to be a jab at DNSSEC, so yeah it's probably nonsensical if you try to read it as a real HTTP proposal.


If the ISP can MITM the HTTPS connection (hint: they can't, because they can't provide a valid certificate that matches the domain and has been signed by a trusted CA), then they can MITM any signing system you may come up with.

Google has already provided statistics showing that HTTPS adds a negligible amount of CPU load to servers (and most websites aren't CPU bound anyway).


> Besides, the ISP can easily man-in-the-middle any connection you make and then inject their ads into the webpage, even if you use https.

How would they do this without triggering certificate warnings? Or are you simply saying that everyone ignores certificate warnings?


"You're almost done setting up your internet connection! For your security, please add our CA to your computer. This handy-dandy program that you can download from our website or from a usb stick our installer has with him/her will automatically do this for you.

At ShitISP we care about your cyber safety. In order to prevent viruses and other Bad Things from infecting your computer and the other computers on our network, you will be unable to do some things on the internet until you install our certificate.

Thank you for helping us keep your computer and our network safe!"


> adds extra unneeded overhead by encrypting everything

Baloney. :-) Or rather, please cite something in the last 5 years showing that the overhead of the symmetric encryption is a significant cost in HTTPS.


Assuming you just want to browse websites and not transfer information that needs to be private, the extra RTT for establishing the SSL connection is the unneeded overhead. For something like streaming a movie, that won't matter because the SSL connection will be long-lived. For something like web browsing, where most websites require a new TCP connection for each GET, SSL is painful.

Sure, the extra RTT is preferable to JavaScript injection, but signing the webpage is sufficient to prevent JavaScript injection, and it wouldn't add extra RTT delay (aside from fetching a cert in the trust chain, which HTTPS can also suffer from in exactly the same way).

Depending on the algorithms used, on-the-fly signing of dynamic pages might be (read: almost certainly is) more painful than SSL/TLS in terms of computation time, but to the user it would still be quicker in most cases than the RTT delay added by SSL/TLS.
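If you want to see the handshake cost being discussed here for yourself, a quick-and-dirty measurement in Python (example.com is a stand-in for any HTTPS site):

    import socket, ssl, time

    host = "example.com"

    t0 = time.monotonic()
    raw = socket.create_connection((host, 443))
    t_tcp = time.monotonic() - t0

    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(raw, server_hostname=host)  # full TLS handshake
    t_tls = time.monotonic() - t0
    tls.close()

    print(f"TCP connect:          {t_tcp * 1000:.1f} ms")
    print(f"TCP + TLS handshake:  {t_tls * 1000:.1f} ms")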


Note that the primary HTML resource for most web pages takes multiple roundtrips to transmit completely. If you sign it only at the end, you've made the browser feel slower. So you'd have to avoid the RTT by accepting, processing, and presenting the data immediately but only later authenticating it retroactively. The bad guy can just drop or delay the legitimate signature.

This is inherently error-prone. It gives the developers a big, convenient, and reassuring assumption which the attacker is able to violate. For a complex and evolving endpoint like a web browser, I don't think you'd ever see the end of security bugs. More: https://www.ietf.org/mail-archive/web/tls/current/msg04017.h...

Furthermore, retroactive authentication still doesn't preclude the encryption: https://www.ietf.org/mail-archive/web/tls/current/msg08722.h...


The issue of bad guys dropping or delaying the sig applies equally to SSL during the handshake. But waiting for the sig and the whole page to arrive, combined with the size of modern webpages, is a problem I hadn't fully considered. I suppose per-packet signatures might work to fix the delay issue, but then you'd have to violate abstraction layers, and the added CPU time would be completely untenable. At that point you might as well just copy SSL's DH handshake followed by a block cipher, but only provide integrity and not privacy, which is stupid. Yeah, OK, I'm convinced: stupid idea for practical reasons. Just use HTTPS with the extra RTTs.


It's not a stupid idea; it's a problem that smart folks have been banging their heads against for a long time now. Take a look at the "TLS Snap Start" proposal to see the lengths to which one must be willing to go to avoid that round trip.

But some low-hanging fruit remains. Improvements to clients and servers that increase TLS session resumption rates would help too.
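As a rough illustration of what resumption buys you, Python's ssl module can carry a session from one connection to the next (the host is just an example):

    import socket, ssl

    host = "example.com"
    ctx = ssl.create_default_context()

    # First connection: full handshake, server hands back a session.
    s1 = ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host)
    session = s1.session
    s1.close()

    # Second connection: resume that session, skipping most of the handshake.
    s2 = ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host, session=session)
    print("resumed:", s2.session_reused)
    s2.close()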


It would solve it for many sites, but there are plenty that don't have full HTTPS support (and some that have none at all).



