HTTP/2.0 – Please admit defeat (w3.org)
323 points by hungryblank on May 26, 2014 | 108 comments



Related [1]:

    Wired: How has your thinking about design changed over the past decades?

    Brooks: When I first wrote The Mythical Man-Month in 1975, I counseled
    programmers to “throw the first version away,” then build a second one.
    By the 20th-anniversary edition, I realized that constant incremental
    iteration is a far sounder approach. You build a quick prototype and
    get it in front of users to see what they do with it. You will always
    be surprised.
[1] http://www.wired.com/2010/07/ff_fred_brooks/


I am not sure how well that applies to a protocol that is supposed to be codified in every single browser, web server and utility library in the world. The iteration cycle will be slow, and improvements cannot happen overnight.

Now, if the name were something like HTTP/1.8-alpha it might be a different thing. At least then it wouldn't carry the label of the "next big thing for everyone". It's sad, but names (and branding) do matter. Forcing a known-broken implementation upon the world is not exactly good engineering.


WebSockets went through a series of iteration cycles like this. It wasn't entirely smooth sailing, but it seems to have worked out.


Google can iterate on SPDY and QUIC by just releasing new versions of Chrome, which is where all the knowledge behind HTTP 2.0 came from.


This is often overlooked as the single biggest failing of SPDY... it's a protocol designed to work well for Google, which operates completely differently from every other web property.

Google in-houses everything so a single fast multiplexed connection to a single server makes sense. Every other website has external content, ads, like buttons, etc. and you end up having to spin up 30-40 independent SPDY connections, eliminating all benefit.


This is true for application software you write. Standards work differently.


This

And I'm very afraid of the following phrase:

"we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them."

This usually means "ending up with a protocol that has a lot of corner cases and a lot of backward-compatible crap or maybe some half-baked stuff that was left because of some feature that nobody uses"

I am very skeptical of protocols/standards that are born from a committee (take a look at telephony protocols/standards if you doubt me).


How does the word "simplifications" denote "corner cases and stuff that was left because of some feature nobody uses"? That's the opposite of how the message sounded to me. I thought it meant they would somehow jettison corner cases and pet features in order to reduce scope.


I'm surprised that people still believe the Mythical Man-Month. As mentioned it was written almost 40 years ago.

It was revolutionary at the time but people have moved on and found many improvements to the original and also outright mistakes.

(As the author seems to have acknowledged when releasing a new improved iteration of his book.)


The impressive thing about the Mythical Man-Month isn't that, 40 years later, it is accurate in every detail and utterly relevant to your life. The impressive thing is that 40 years later, it still has a substantial amount of stuff that is accurate and relevant to your life, and that 40 years later, people still get wrong what we knew was wrong 40 years ago.

I would also accept an interpretation where that's an indictment on the field.


> I'm surprised that people still believe the Mythical Man-Month. As mentioned it was written almost 40 years ago.

That hardly means that what it has to teach isn't still valid. Admittedly, I've only read a few chapters of it, but the central point, that throwing more manpower at a late project only serves to make it later, is at least as relevant today as when the book was first published.

> It was revolutionary at the time but people have moved on and found many improvements to the original and also outright mistakes.

Could you be more specific?

> (As the author seems to have acknowledged when releasing a new improved iteration of his book.)

So, do your points above refer only to the first edition of the book then?


> throwing more man power at a late project only serves to make it later

This is only part of the book, and as per this thread there's a lot more to it. I.e. we are specifically talking about its comments on prototyping, which the original OP mentioned in their email.

It's been years since I read it, but I'll toss in this review with their points - http://www.goodreads.com/review/show/882155551?book_show_act...

To me, taking one person's opinionated book from 40 years ago as relevant today is just plain wrong. Even things like the way science was done back then are questionable today.

Systems they developed will be improved on, technology and societal change will make specifics no longer totally accurate, and one person won't get an entire book right.


Mistakes? Improvements? Please share.

Few books in our industry have been more prescient.


I think this (http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...) follow-up makes a valid point:

> In the old days we had different protocols for different use cases. We had FTP and SSH and various protocols for RPC. Placing all our networking needs over HTTP was driven by the ubiquitous availability of HTTP stacks, and the need to circumvent firewalls. I don’t believe a single protocol can be optimal in all scenarios. So I believe we should work on the one where the pain is most obvious - the web - and avoid trying to solve everybody else’s problem.

If we're not careful, we're just going to end up cycling back around again and find ourselves 20 years in the past.

That said, I do think to some extent "that ship has sailed". The future of network programming seems like it will be "TCP --> HTTP -(upgraded connection)-> WebSockets --> actual application layer protocol". See, for example, STOMP over WebSockets. While it is annoying that this implies we've added a layer to the model, it's hard to argue with the real-world portability/ease of development that this all has enabled.
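To make that layering concrete, here's a rough sketch (Python, using the third-party websockets library; the broker URL is made up) of an application protocol riding on a WebSocket that itself rides on an upgraded HTTP connection over TCP:

    import asyncio
    import websockets  # third-party: pip install websockets

    async def main():
        # TCP + TLS + the HTTP upgrade + WebSocket framing all happen below this call.
        async with websockets.connect("wss://broker.example/stomp") as ws:
            # The "actual application layer protocol": a STOMP 1.2 CONNECT frame,
            # i.e. command line, headers, blank line, NUL terminator.
            await ws.send("CONNECT\naccept-version:1.2\nhost:broker.example\n\n\x00")
            print(await ws.recv())  # expect a CONNECTED frame back

    asyncio.run(main())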


The importance of firewall punching can't be overstated. There are plenty of end users in workplaces or on other people's wifi who find that all outgoing ports other than 80 and 443 are blocked. Yes, this is incredibly stupid, but they're not going to do anything about it.


So we tunnel everything through 80/443, and then proxies are going to deep packet inspect that, and selectively block some over-http protocols, so we're going to tunnel stuff through a more innocuous looking protocol over HTTP, and tunnel through the tunnel through the tunnel through the tunnel.

I propose OpenVPN/SOCKS/WebSocket/HTTP/TCP/IP as the new de-facto standard connection protocol. Maybe we can FTP through that VPN connection some time, please wait while I cook up a JavaScript FTP/OpenVPN/SOCKS/WebSocket/HTTP/TCP/IP client.


Every 5 years the concept of the application port will be re-invented at the higher level of the stack.


Looked like a good idea until you mentioned JavaScript. Bummer.


If you're so against a js client, you could certainly whip one up in another language...


Whoosh...

EDIT: An OpenVPN/SOCKS/WebSocket/HTTP/TCP/IP stack is a terrible idea and I'm assuming it was meant as a joke. Thus, the addition of JavaScript to the comment makes no difference at all. Thus, my comment was intended as a joke, since it's irrelevant (again) in which language a bad proposal is implemented.

There, you took all the fun out of it. Happy? ;-)


If it's any consolation, I laughed harder at this than I did at the original joke.


I guess I'm naive and think that comments on HN should try to be constructive, so your sarcasm was lost on me.

If the proposed idea is a bad one, don't you think it would have been much more helpful if you added your opinion on that to the convo rather than snark?

My apologies for ruining your fun.


The original "idea" itself was itself sarcasm. Your parent was simply underscoring it. You just failed to identify the original sarcasm, and ended up confused and a bit too serious about it all. It's just some obscure humor, no need to get worked up over it :)


Actually, I didn't get it at first and now I think it's pretty funny :-)


So what you're saying is that since incompetent firewall admins have blocked all applications except the web and no amount of reason can convince them to do otherwise, all applications of the future should be tunnelled through HTTP.

And thus naturally HTTP must be made to accommodate for all those applications.

You may think that makes sense, but you don't create something as long-lasting as internet-scale architecture based on hacks around incompetence.

Besides, if that's the path we're walking, DPI-based firewalls with HTTP-level application firewalling will become the new norm, and we've gotten no further. Except we now have an even bigger mess to work with.

While the OSI model may be going a bit overboard for some aspects, making all future application protocols get squashed through HTTP is madness. This thinking is of the same quality and mindset as that of PHP developers.


> So what you're saying is that since incompetent firewall admins have blocked all applications except the web and no amount of reason can convince them to do otherwise, all applications of the future should be tunnelled through HTTP.

The product I work with has some protocols that are non-HTTP. Some of our customers are complaining to us because _their_ users can't get through their own firewall (i.e. the users' own firewall, not our customers' firewall). These users are often on corporate networks, and getting the users to convince their employer to fix their firewall is likely a waste of energy. In fact, our customers may consider switching to another vendor if we don't change to an HTTP-based protocol.

Of course the same firewall administrators may introduce deep packet inspection to only allow real web traffic later, creating more problems, but as someone delivering third-party solutions now, you are forced to use HTTP or HTTPS.


> you don't create something as long-lasting as internet-scale architecture based on hacks around incompetence

On the contrary, the only reason the internet works is because it's robust enough to survive incompetent network admins.


I did not say that. What I am saying is that applications of the future that tunnel over HTTP will be much, much wider adopted than those that don't. Regardless of how suitable it is for the purpose. I'm not saying this imposes any constraint on the HTTP designers.

Tunnelling will probably have to be HTTPS everywhere too as a countermeasure against both surveillance and DPI.

Arguably the internet is a hack around incompetence and power-hoarding; the phone companies potentially had the technology to deliver many of the things we see as internet services as early as the 1980s, but were too bureaucratic to deliver innovation and especially cost reductions. (Compare MINITEL, for example)

That the internet makes service-based billing hard is also a feature.


> you don't create something as long-lasting as internet-scale architecture based on hacks around incompetence

The right half of your brain is wired to the left half of your body (and vice versa). That's just the standard, go-to, basically harmless example of stupidity in the design of long-lasting systems.

From http://uncyclopedia.wikia.com/wiki/Unintelligent_Design :

> Unintelligent Design is the theory that the world was designed by some higher power, but this higher power did a piss poor job at it. There are many theories as to how the universe could have been so stupidly and half-heartedly spilt into existence.

Or from the slightly more serious http://en.wikipedia.org/wiki/Unintelligent_design :

Your optic nerve originates at the front of your retina and pierces through it, instead of more sensibly originating at the back:

> The retina sends electrical signals to the brain through the optic nerve and people see images. The optic nerve, however, is connected to the retina on the side that receives light, essentially blocking a portion of the eye and giving humans a blind spot. A better structure for the eye would be to have the optic nerve connected to the side of the retina that does not receive the light, such as in cephalopods.

You stupidly breathe through the same tube you eat and drink with, causing a staggering number of unnecessary deaths:

> If the [pharynx and larynx] were not connected and did not share a portion of their travel paths, choking would not be an issue, as it isn’t for most other animals in the world.


I think people are missing your point: human beings seem to be built on hacks around incompetence, yet we've made it for quite some time. It's not a great point, but it nevertheless makes sense.


That's one of two points. Indeed, humans are a much, much longer-lasting system than the internet. But you can also think about why these kinks in the design of living things arose in the first place; as wikipedia nicely points out, a lot of glaring mistakes in one animal work the way you'd expect in other animals. The "mistakes" arise because they're the fastest way to produce a desirable result, and they persist for great periods because making changes to a working system is very hard. In a certain sense, history does show that adding hacks on to the "outside" of a working system, so that it stays working at all times, can be a superior strategy to trying to revamp the whole thing in an elegant manner.


If we're going to pull on evolutionary biology metaphors, I think the "Red Queen Hypothesis" is the more apt one for this situation...


HTTP is an arms race? Between what and what?


Until a big enough player decides to use something else.

Case in point: 5223/tcp, used by Apple's Push Notification service (amongst other things). Push notifications not being delivered on so many networks made their users angry and made tech support calls (and costs) boom, and soon had 5223/tcp unblocked, even on free, non-encrypted, coffee-shop wifi networks.

I understand that most developers just choose to roll with it these days, but I really believe that putting everything atop of 443/tcp because of clueless/incompetent sysadmins is a huge mistake.

Keeping protocol stacks small, efficient and secure should be a design goal.


> because of clueless/incompetent sysadmins is a huge mistake

I think clueless is a little strong. We've just gotten to a time where the usual connection between network administrators and users is pretty loose. You don't actually know who is running that coffee shop wifi, so you can't ask for your applications to be unblocked.


Here's why I think firewall punching is stupid and should not be supported.

IP addresses are logical numbers used to communicate with a host. In order to make it easier for humans to know what host to communicate with, DNS records were created so a particular host can be referred to by name.

In addition to figuring out what host we want to communicate with we need a way to specify what application we want to use. So service names were created and assigned to a static list of application port numbers. Two of these application service names are "http" and "https", which each provide a different application protocol.
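(That static service-name-to-port mapping still ships with every OS; for example, in Python:)

    import socket
    # The assignments the IANA registry hands out, usually backed by /etc/services:
    print(socket.getservbyname("http", "tcp"))   # 80
    print(socket.getservbyname("https", "tcp"))  # 443
    print(socket.getservbyname("ssh", "tcp"))    # 22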

We've built so many things that depend on these two application protocols that developers have finally outgrown what the protocols are capable of. But to get support for a new protocol is "hard". You have to do three things to successfully roll out a new application protocol:

1. Write a server for it and make people want to use it.
2. Write a client for it and get users to want to use it.
3. Get network administrators to open up their firewalls to support it.

Sadly, many developers are playing a cat-and-mouse game with network administrators. They want their applications to be used by anyone, anywhere, automatically. So they employ tricks to get their content past network administrators, like re-using the same application service names used by protocols which are ubiquitously supported throughout worldwide networks.

But this just creates an arms race with network admins. For example, mandating strong encryption to use a service means a network admin traditionally can't restrict what kind of applications work over it. So in order to fight their users subverting their approved applications policies, more and more organizations are doing SSL inspection by injecting CA certs into their employees' computers to block unauthorized applications.

Fighting network admins is a losing game, and does not benefit the users of your applications, or the developers. Instead of hiding more features and overlaying protocols, it would be more productive to create protocols that use their own service names and provide their own functionality and let users demand it be supported by the network admins. This is how all popular application services have been adopted over the years and provides the best quality of service to everyone involved.


On the other hand, there are firewall administrators who do extensive filtering on ports 80 and 443 and allow other outgoing traffic by default. We get pretty ticked off when some thermostat, credit card machine, or postage meter decides to send its bizarre and undocumented data stream over port 80 or 443 in the name of firewall compatibility and therefore becomes incompatible with our firewall. (Yes, we can put in exceptions for those machines' IP addresses, but if they didn't try to use HTTP ports, they would Just Work.)


Let me tell you that where I live, networks like yours are in the minority. Sadly, when 99% take the "80/443 are free, forget about the rest" approach, I'm going to "tunnel" my stuff over 443, because I don't like being called 10-20 times a day with "nothing works"-type complaints that I can't properly remotely debug because, obviously, nothing works.

Edit: my "tunnel" is actually WebSockets over SSL on port 443. Not sure if you guys would block that as supposedly this should be no different from, say, gmail or facebook traffic.


Yes, websockets should be fine. Making my proxy support websockets was a little difficult, but I think it works now.

If the traffic is compliant with the protocols that ports 80 and 443 are supposed to use, it's not a problem. The one that really gets me is a thermostat using a proprietary protocol over port 443. This protocol is one where the server sends the first data over the connection; in TLS the client always sends first. So my proxy was just waiting for the TLS client hello while the thermostat was waiting for its server's message. If the thermostat had sent something first, the proxy could have seen that it was invalid TLS and passed it through; instead it deadlocked.
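For anyone hitting the same thing: one way to avoid that deadlock is to peek at the first client bytes with a short timeout before committing to TLS. A rough sketch (Python; the 2-second timeout and the names are purely illustrative, not what my proxy actually does):

    import socket

    TLS_HANDSHAKE = 0x16  # first byte of a TLS record; a real ClientHello starts with it

    def first_bytes_look_like_tls(client_sock: socket.socket, timeout: float = 2.0) -> bool:
        client_sock.settimeout(timeout)
        try:
            first = client_sock.recv(1, socket.MSG_PEEK)  # peek without consuming
        except socket.timeout:
            return False  # client stayed silent: likely a server-speaks-first protocol
        finally:
            client_sock.settimeout(None)
        return bool(first) and first[0] == TLS_HANDSHAKE

    # If this returns False, splice the bytes through untouched instead of
    # waiting forever for a ClientHello that will never come.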


> The future of network programming seems like it will be "TCP --> HTTP -(upgraded connection)-> WebSockets --> actual application layer protocol".

It's not like WebSockets runs "over" HTTP; it just uses an HTTP-like handshake. Besides that, it's just a simple framing protocol over TCP.

That's why you can use a simple proxy to "websocket-ify" applications that use plain-old raw TCP connections.
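As a rough illustration of how thin that layer is, here's a toy "websocket-ifying" proxy sketch (Python with the third-party websockets library; the local port and gateway URL are placeholders): it listens on plain TCP and relays bytes into a WebSocket and back.

    import asyncio
    import websockets  # third-party: pip install websockets

    async def handle(reader, writer):
        # A legacy client connects over plain TCP; relay its bytes through a WebSocket.
        async with websockets.connect("wss://gateway.example/tunnel") as ws:

            async def tcp_to_ws():
                while data := await reader.read(4096):
                    await ws.send(data)                 # raw bytes become binary frames

            async def ws_to_tcp():
                async for msg in ws:                    # frames become raw bytes again
                    writer.write(msg if isinstance(msg, bytes) else msg.encode())
                    await writer.drain()

            await asyncio.gather(tcp_to_ws(), ws_to_tcp())

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 1234)
        async with server:
            await server.serve_forever()

    asyncio.run(main())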


> it's just a simple framing protocol over TCP

Websockets is ridiculous for this reason.

It has a stream abstraction (the WebSocket API) over a packet-based protocol (the WebSocket protocol) on top of a streaming protocol (TCP) on top of a packet protocol (IP).

This shit has to stop.

We had everything we needed. WebSockets should have just exposed regular sockets to the browser.

I don't buy the HTTP proxy argument.


Except we have to deal with a lack of IPv4 addresses, DDoS attacks, load balancing, NATs, firewalls...


I used to think that. Until I realized that WebSockets added the following features that raw TCP doesn't have:

- Multiplexing multiple logical WebSocket servers on a single port, through the HTTP host header and URI.

- Authentication through HTTP basic or digest authentication.

Very, very useful features.
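Both come for free because the opening handshake is an ordinary HTTP request. A sketch of what goes on the wire (Python building the request text; the host, path and credentials are made up, and the Sec-WebSocket-Key is the sample value from RFC 6455):

    import base64

    credentials = base64.b64encode(b"alice:s3cret").decode()
    handshake = (
        "GET /chat/room42 HTTP/1.1\r\n"                # the URI selects the logical service
        "Host: chat.example.com\r\n"                   # Host multiplexes many servers on one port
        f"Authorization: Basic {credentials}\r\n"      # plain HTTP auth, before any framing starts
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )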


Right, that's why I said we're adding one layer (not two).

Still, if TCP is the transport layer, and HTTP/SMTP/etc. are the application layer, then what do we call WebSockets when we implement an application layer protocol on top of it? The application compatibility layer?

In the past we had operating systems with network stacks and shared libraries for working with application protocols. Now we have browsers with network stacks and JavaScript libraries for working with application protocols. Wasn't it just a few days ago that someone was proposing an OS that would do just enough to boot a browser, and then let the browser handle most of the "traditional" operating system tasks? How long before we see a proposal for "lowering" WebSocket connections to the TCP layer?

This is what I mean when I say that we risk finding ourselves 20 years in the past. HTTP isn't ubiquitous because it is better. It's better because it is ubiquitous. If we don't pay attention to why things like WebSockets are easier to use today than raw TCP, then history will repeat itself.


Preach it.

I haven't seen anything that actually requires a new OSI layer, only people trying to reinvent lower-level layers poorly to address gross incompetence.

The Internet is more than the Web, and the Web will eventually be replaced.


> Wasn't it just a few days ago that someone was proposing an OS that would do just enough to boot a browser, and then let the browser handle most of the "traditional" operating system tasks?

We already have that - Firefox OS: https://www.mozilla.org/en-US/firefox/os/


I am so glad there is at least one prominent name advocating this line, because I feel like this quote from another IETF discussion is becoming more and more relevant:

> Is there an IETF process in place for "The work we're doing would harm the Internet so maybe we should stop?" - http://www.ietf.org/mail-archive/web/trans/current/msg00238....

HTTP/2.0 has been rammed through much faster than is reasonable for the next revision of the bedrock of the web. It was always clearly a single-bid tender for ideas, with the call for proposals geared towards SPDY and the timeline too short for any reasonable possibility of a competitive idea to come up.

There has never been any good reason that SPDY could not co-evolve with HTTP, as it had already been doing quite successfully. If it were truly the next step, that would have become clear soon enough. All jamming it through as HTTP/2.0 does is create a barrier to entry for similar co-evolved ideas to come about and compete on an even footing.


PHK has always been skeptical of HTTP/2 & SPDY

He wants radical change in the protocol but, when given the opportunity, submitted a (by his own admission) half-baked proposal - there's also the question of what a protocol like HTTP/2 means for his product.

Although HTTP/2 started from SPDY it has evolved, and in different ways e.g. see the framing comments from the thread the OP links to.

We need a better protocol for the web now. Yes, we could wait around longer for more discussion, but where did that get us with HTTP/1.1? I'd be quite happy if the IETF had just adopted SPDY lock, stock and barrel (and no, I don't work for Google).


I'm aware that he's always been skeptical; I watched his posts as my own excitement over the idea of HTTP/2.0 died on the vine while I was subscribed to the WG mailing list.

There never has been, and never will be, a point in time where we don't need "a better protocol for the web now." The issue is that canonization was unnecessary; adoption of SPDY has been progressing fine without it. And HTTP/2 diverging significantly from SPDY does not inspire confidence, either. Rather it just reminds me of a famous xkcd [1] and again begs the question of whether trying to turn SPDY into HTTP/2 even manages to achieve any of the goals the process set out for.

The whole thing just seems like a big fat SNAFU.

[1] http://xkcd.com/927/


There is another interesting thread about the Internet of Things: http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...

"So it looks like HTTP 2 really needs (at least) two different profiles, one for web hosting/web browser users ("HTTP 2 is web scale!") and one for HTTP- as-a-substrate users. The latter should have (or more accurately should not have) multiple streams and multiplexing, flow control, priorities, reprioritisation and dependencies, mandatory payload compression, most types of header compression, and many others."

http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...

"First and foremost, it needs to be recognized that HTTP/2 has been designed from the start to primarily meet the needs of a very specific grouping of high volume web properties and browser implementations. There is very little evidence that ubiquitous use of the protocol is even a secondary consideration -- in fact, the "they can just keep using HTTP/1.1" mantra has been repeated quite often throughout many of the discussions here on this, usually as a way of brushing aside many of the concerns that have been raised. So be it. It's clear at this point that HTTP/2 is on a specific fixed path forward and that, for the kinds of use cases required by IoT, alternatives will need to be pursued."


Pretty sure every e-commerce site will benefit from deploying HTTP/2.

(although their tendency to fill sites full of third party components may reduce some of its benefits)


Well the complexity makes it hard to build a performant server.

https://groups.google.com/forum/#!searchin/mechanical-sympat...


Not exactly sure under what specific scenarios "every e-commerce site" will be running, but if HTTP/2 == SPDY, it's not looking good on mobile: http://conferences.sigcomm.org/co-next/2013/program/p303.pdf <== TL;DR: the current SPDY implementation has a negative impact on mobile performance. That SPDY sucks on mobile is sort of a known fact, but this paper shows solid evidence that it's true.


The conclusion of "As a result, there is no clear performance improvement with SPDY in cellular networks, in contrast to existing studies on wired and WiFi networks." is rather different to "current SPDY implementation has negative impact on Mobile performance."

Of course, as SPDY only uses a single connection, it's more vulnerable to issues with that connection.


So you agree that a single connection is a problem :)? On mobile, where high RTT is the norm rather than the exception, this has a compounding negative effect on performance (long response times mistaken for packet loss, a reduced congestion window that makes all flows suffer, etc.). Everything else being equal, applying SPDY on mobile introduces this new mobile "vulnerability" - isn't that SPDY having a negative impact on mobile? Sure, if cellular networks had the same delay/packet-loss characteristics as wired/wifi, SPDY would fly, but mobile and wired/wifi are clearly not the same.

Also, I used that paper just as an example showing SPDY's performance problem on mobile (with a very nice detailed analysis of why SPDY suffers). My conclusion that "SPDY sucks on mobile" comes from my experience, not from that paper; I just used the paper to make my point. Actually, I think the paper's conclusion is a little bit too "polite" toward SPDY. [Edit: adding reference to the paper] In that paper, the sentence right before the "conclusion" you quoted is "In cellular networks, there are fundamental interactions across protocol layers that limit the performance of both SPDY as well as HTTP."


I've tested SPDY over wired / wifi but not over mobile so my mobile experience is anecdotal.

That said, all my mobile browsing (minus HTTPS) runs over SPDY (via the Google proxy) and I wouldn't describe it as sucking.

Even in the HTTP case it will still depend on which resource the packet loss etc. occurs for - e.g. if it's something on the critical rendering path, will it make that much difference?


Glad that you brought up the critical rendering path. Because SPDY uses a single connection for all requests, once a packet loss/retransmission happens, the entire connection's congestion window will be cut, which affects all requests - so it most definitely hurts requests on the critical rendering path. Actually, good old HTTP/1.1 doesn't suffer from this: if multiple connections are used and only one connection suffers packet loss, only the request using that connection suffers (assuming pipelining is not enabled); the other requests on other connections won't. This actually reduces the chance of resources on the critical rendering path suffering from packet loss.


But is there any research into how often the 'independent' connections actually mitigate against packet loss?

Even with HTTP, if the packet loss comes in the middle of negotiating the connection for the CSS, the page is still going to be waiting for the three-second timeout before re-negotiating the connection.


We've seen in our work that under high RTT and high packet loss rates (you can simulate a similar effect using things like dummynet, but it won't be exactly what you'd see on a cellular network, for the reasons mentioned in the AT&T Labs paper), SPDY results in performance degradation relative to raw HTTP. Also, I'm not saying "independent" connections can mitigate packet loss - they can't - and in the case you mentioned we'd definitely take a performance hit (either first paint or page load). It's just that SPDY makes it worse than the default HTTP behavior, and that's understandable because it only uses a single connection, and a single connection suffers from all sorts of problems - which you seem to agree with from your first comment.


Interesting response from the WG chair: http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...

An aside: I find it odd how HN users jump to agreement when a link to a single mailing-list message is posted, ignoring other discussion on the thread. I think it's because the UI makes it hard to see the rest of the conversation (unlike, say, the comments UI on HN itself).


To put the comment and its author in context, Poul-Henning Kamp is the main developer of Varnish, a widely used high-performance standard-compliant HTTP cache.

PHK has experience of HTTP both from the server point of view (the main job of Varnish is acting as a fast HTTP server) and from client point of view (Varnish acts as client to the slow upstream HTTP servers).

As a side note, he also refrained for years from adding TLS support to Varnish after his review of OpenSSL and SSL in general (see https://www.varnish-cache.org/docs/trunk/phk/ssl.html ).


Good one. More context on the ideas and proposal Poul-Henning has put on the table: http://phk.freebsd.dk/words/httpbis.html


A much better comment to link to would have been the grandparent, by Greg Wilkins from Jetty, who gives a lot more substance and context to the debate: http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...

It does seem a little shocking that the WG chair is proposing last call while there's still serious discussion of things like dropping HPACK.


I'm here in case you want to ask me anything.

Poul-Henning


For those of us who don't follow this discussion in detail, why do you think the protocol needs to be scrapped outright rather than modified? Is it simply the complexity Greg Wilkins mentioned, or are you really thinking about bigger philosophical changes like dropping cookies as we know them? Dropping HPACK seems like a great engineering call, but that seems like a relatively minor change rather than starting over.

http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/... made me wonder what your standby plan would be – let the people who really need to care about performance use SPDY until a more ambitious HTTP 2.0 stabilizes? One of the concerns I have is that many people want performance now and it seems like HTTP 2.0 might turn into the next XHTML if it takes too long to emerge.


I think the fundamental problem with HTTP/2.0 is that it is an inadequate rush job for no good reason.

If you really want to gain performance, for instance, the way to go is to get rid of the enormous overhead of cookies, replace the verbose but almost content-free User-Agent header, and so on.

Likewise, wrapping all the small blue 'f' icons and their associated tracking icons in TLS/SSL does not improve privacy on the net in any meaningful way.

But the entire focus has been to rush out a gold-plated version of SPDY, rather than to actually solve these "deep" problems in HTTP.

Similarly: Rather than accept that getting firewalls fixed will take a bit of time, everything gets tunneled through port 80/443, with all the interop trouble that will cause.

And instead of working with the SCTP people on getting a better transport protocol than TCP? Stick it all into the HTTP protocol.

Nobody seems to have heard the expression "Festina Lente" in this WG.


> Likewise, wrapping all the small blue 'f' icons and their associated tracking icons in TLS/SSL does not improve privacy on the net in any meaningful way.

It absolutely does, global passive attackers have been documented using the associated unencrypted tracking cookies to find targets.


All these concerns overlook the practicalities of the web.

On the protocol level, there is huge knowledge and API momentum that makes any protocol that fundamentally differs from HTTP an uphill battle for adoption. Whatever changes are made to HTTP, if it is too different from the application point of view, it will lag behind.

Same thing about tunneling. It may not be elegant, but it's the way to fight the system. You won't change IT behaviour with a protocol change. One possible way forward would be to make the new version of HTTP have a working mode that reduces overhead to a minimum for any tunneling operation.

(The same argument holds for SCTP vs. TCP.)


> All these concerns overlook the practicalities of the web.

There's nothing "practical" about HTTP/2.0. I've read the spec, since I was interested in it, and it confused the hell out of me. Then I learned that even people who implement networking specs for a living, and have tons of experience at it, are stuck trying to figure out how to implement the current spec.

So if you're going to push for shitty-but-practical, at least make sure you have the practical bit.


A well-designed HTTP/2.0 would not "fundamentally differ from HTTP", and it would be trivial for web servers like nginx and Apache or frameworks like PHP to mask the differences from the application code.

For instance moving cookies to the server side would just require a simple key-value store lookup.
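As a rough sketch of what that lookup amounts to (illustration only; an in-memory dict stands in for whatever key-value store a real server would use):

    import secrets

    store = {}

    def new_session() -> str:
        nonce = secrets.token_urlsafe(16)   # information-free session nonce
        store[nonce] = {}                   # the state that used to live in cookies
        return nonce

    def state_for(nonce: str) -> dict:
        return store.get(nonce, {})         # the "simple key-value store lookup"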


Forgive me if I'm not well enough informed to partake productively in this discussion. I just wanted to ask:

Can't anyone implementing a web client and web service on top of common web servers and browsers choose to forgo cookies and keep the state server-side? If so, then why do people choose to use cookies if they add more overhead?

Also, you would need to keep some sort of unique identifier on the client that the client can send to the server, in order for the server to be able to look up the session state (a session id). Isn't this what cookies are often used for? I'm guessing this is probably what you meant the "information-free session nonces" would solve above. This sounds interesting - could you explain this scheme to me, or maybe point me in the direction of a good resource?


They could, if the client provided a session-id so they knew which "cookie" to retrieve from server-side storage.

My proposal is to do that, and have the top bit in the session-id mean "Persistent" or "Anonymous", so that the client indisputably controls whether anything will be stored about the session.

A Persistent session-ID would be the same next time you visit the site; an Anonymous one would be random, and thus not retrieve any state on the server side, even if they saved it last time.

This would put the privacy decision in the hands of the client, provided we also eliminate crap like almost-per-user-unique user-agent headers.
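A sketch of the client side of that idea (the 128-bit width and the "top bit set = Anonymous" convention are picked purely for illustration; neither is fixed):

    import secrets

    ANONYMOUS_BIT = 1 << 127                # illustration: top bit of a 128-bit id

    def make_session_id(anonymous: bool) -> int:
        sid = secrets.randbits(127)         # random either way
        if anonymous:
            return sid | ANONYMOUS_BIT      # fresh every visit, so no state can be tied to it
        # A persistent id would be generated once, then stored and re-sent by the
        # client on later visits, so the server can look up the saved state.
        return sid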


Can't the client side do this already (with HTTP/1.1) by turning off cookies?

Or are you pushing for browsers to turn off cookies by default?


If I understand your argument correctly, you are opposed to opportunistic encryption because, with no identity validation, it does not improve privacy.

But an active man-in-the-middle attack at least has a chance of being detected, as opposed to the current passive sniffing being done on a wholesale basis. Do you not see any value in that?

(I have not followed the HTTP/2 development closely enough to comment on other areas of concern.)


You read me wrong then.

I'm opposed to a protocol which claims to "improve privacy" while leaving some of the most troubling privacy invasions in place.

If we are going to the trouble to upgrade HTTP, we fix all the serious problems we can.


> If you really want to gain performance for instance, the way to go is to get rid of the enormous overhead of cookies, to replace the verbose but almost content-free User-Agent header and so on.

Yes, they add request and response overhead but what evidence is there that removing them would deliver greater performance improvements than a multiplexed protocol does?


Comparing to multiplexing is wrong. You should compare it to the header compression ("HPACK"), which is a major complication and risk in HTTP/2.0.

The answer is that well-designed cookies are incompressible because they come out of cryptographic algorithms.

Also, cookies are wrong, because they store the server's state on the client computer, which led the EU to legislate against them.

Using information-free session nonces would have neither of these problems, and take up much less bandwidth.


I understand the issue of complexity in header compression, but the impression I get is that you object to the whole basis of HTTP/2 because it's not ambitious enough.

If it didn't have HPACK but still did multiplexing would you still say abandon it?


It's a tradeoff: The added complexity must provide sufficient benefit to be worth it. HTTP/2.0 doesn't IMO, not even close.


> Also, cookies are wrong, because they store the server's state on the client computer, which led the EU to legislate against them.

Can you spell this out in more detail? (Maybe you've written an article about this at some point?)

When, if ever, is it OK to store state on the client? It seems like you'd at least make an exception for one session ID. I presume you'd also make exceptions for client-side caches? Anything else?

Do you advocate against all forms of client-side storage? IndexedDB, localStorage, Web SQL, etc?

RESTful advocates keep telling me not to store state on the server. Do you agree with that? If I can't store state on the server and I can't store it on the client, where am I supposed to store it?


What do you think about the http/0.2 proposal? http://http02.cat-v.org/

Of course we don't live in Plan 9's 9P world, and I don't think we ever will, but if you think about it, it makes a lot more sense - in every aspect. 9P could make lots of troubled/tied (think XML standards such as WebDAV) and historical standards (FTP, NFS) obsolete. It is sane, simple, fast and secure because it is just a stream of bytes, displayed as a filesystem. There is no need for tons of library code. And http/0.2 could be backwards compatible. With http/0.2 you could also have session ids. Besides that, with mounting httpfs there is no absolute need for a browser; you could use the standard command-line tools, although the browser is going to be used in almost any case.

All I want to say is that I agree with your ideas. HTTP/2 is probably going to be with us for a long time, so thinking it over and starting from scratch would be a good idea IMO. With 9P it could be a real dealmaker.


I think it makes sense to take the time to get it right. Every version of HTTP will effectively have to be supported by just about every server and client forever or otherwise the web will break.

An incredible aspect of the web is that Tim Berners-Lee's first website back at CERN still works in modern browsers. Same with things like basically the entire Geocities archive.

When it gets to core infrastructure like HTTP you can't just iterate quickly and expect the entire internet to constantly upgrade along with you.

What works for early stage lean startups won't work here.


What an awful way to try and get a point across. An aggressive tone and negative words baked into every other sentence, making the statements very loaded. There's probably a lot of missing context from viewing only this link, though.


I think that one of the best things about HTTP/1.0 (and to a lesser extent 1.1) is its simplicity. The reason that simplicity is so vital, to me, is that it has fostered large amounts of innovation.


It should be noted that the sender of this e-mail is Poul-Henning Kamp, known among other things for another e-mail from back in the day (relatively speaking): http://bikeshed.com/


I haven't been following the development of HTTP/2.0. What are the most egregious "warts and mistakes" in SPDY?


Mobile is one of them, although the OP's arguments are valid as well. (http://conferences.sigcomm.org/co-next/2013/program/p303.pdf) Also, among many other things: everything over SSL. A single connection. Also, less of a technical problem, but as many have already mentioned, it is too complicated compared to plain-text, human-readable HTTP/1.x.


Also SSL everywhere without authentication by default, i.e. slower, encrypted, but not actually safe.


Would it be accurate to suggest that the rushing of Google's SPDY, as HTTP/2.0, through IETF standardisation is roughly equivalent to the situation a few years ago when Microsoft pushed Office Open XML through as an ECMA standard? Or is that just a huge mischaracterisation?


That's a huge mischaracterization.

There are a few different implementations of SPDY, and a clear use case where it applies. Also, it's a clear standard, made to be used.

What's happening here is that there is a group of very active people that create most of the software we use on the web, and have a use case they want to support. At the same time, there are lots and lots of people that are not as active, with a huge amount of use cases that will be hindered, but since they are not active, they have very little voice.


Thanks for the explanation, I haven't been following the progress of SPDY so I'm not familiar with the detail. What use cases would be hindered by this new standard?


There are people complaining that it will break things because it's binary, that they can't have mandatory encryption, and that it's just too complex to fit in limited resources. There are probably other complaints that I didn't see.

I've never read it in enough depth to verify those claims, but the response from the standards group is always "then use HTTP 1.1", which is as much of a non-solution as it gets.

SPDY was great exactly because it was not the standard; it was an extra option, available if everybody agreed to it. Call it HTTP 2, and it will become mandatory in no time. The IETF calling it optional won't change a thing.


I'm in favor of dumping HTTP/S and using a faster and more secure (by default) transport protocol altogether. Post-Snowden, we should be focusing our energy on that, rather than continuing to hack around this old protocol to make it faster and more secure.


MinimaLT [1] comes to mind. Minimal latency through better security sounds very appealing, especially when it's not a marketing trick but a paper signed by people like DJB.

The way I see it, though, the question is not only having a protocol, but how to get adoption. Especially when you're talking about network protocols, you need rock-solid stacks in all major operating systems, which is not an easy feat to accomplish.

[1]: http://cr.yp.to/tcpip/minimalt-20130522.pdf


Well, SPDY might be a "prototype", but it's solving real problems today. I care less whether it's perfect or not, whether it solves all problems or not, as long as it's easy to implement, has a decent footprint, and offers significant improvements over HTTP/1.1. An imperfect working prototype is better than a perfect blueprint that materializes in a distant future where the problems and environment can differ greatly.


Well, according to this mail from the Jetty team, it doesn't seem like HTTP/2.0 is easy to implement at all: http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...

Furthermore, if HTTP/3.0 is already being discussed, why not just skip HTTP/2.0 entirely and live with the current HTTP/1.1+SPDY situation until the work towards a new standard for HTTP is actually done?


What does "LC" mean in that post?



Wait 5 more years? No thanks!


But then you need to support 1.0, 1.1, 2.0 and 3.0. I'd rather wait a bit to avoid the need to support yet another version. I can bet every new version of HTTP will cost millions of dollars across the industry. It is not agile, where you just drop in a new increment. This is the base everybody then needs to support, and you won't be able to stop supporting it in the foreseeable future.


> I can bet every new version of HTTP will cost millions of dollars across the industry.

That would be a fairly easy bet, but I doubt anybody would take the other side. My own estimate for rolling out a major HTTP protocol revision across the industry would be in the hundreds of millions. Take into account that we have approximately 750 million web sites and 3 billion clients.


But you can continue to use it even if it is not the standard, as it has widespread support.


It's not perfect, so start over. Seems like the definition of why v2 is always so hard.


Sounds to me like the real problem is lack of IP addresses, and the best strategy would be to hold off on updating HTTP and work on IPv6 ubiquity first. I can see why Google went a different route, but we don't all have to follow.


Just like they admitted that XHTML 2 was a mistake and scrapped it, I feel they should do the same with this nonsense.

Nothing about SPDY or HTTP/2.0 sparks any sort of confidence with regard to proper, robust protocol design, keeping things simple, or properly separating concerns.


It's funny that you mention XHTML 2, because I think it demonstrates the opposite of what you are arguing.

XHTML 2 is a lot more like what PHK is proposing: an attempt to "rethink" HTML, come up with something simpler, revolutionary rather than evolutionary, "The Right Thing." It was an attempt to reinvent the space from first principles, and had lots of ideas that were theoretically good but unproven at large scale.

When that went nowhere, the world settled on HTML5: evolutionary, incremental, and based on standardizing existing practice. Much less sexy, but more useful in practice.

There is a time and a place for bold new ideas, but a standards body designing v2 of a protocol isn't it. Standards are for codifying proven ideas. When standards bodies try to innovate you end up with XHTML, VRML, P3P, SPARQL, etc.


> Just like they admitted that XHTML 2 was a mistake and scrapped it

But "they" (the W3C) didn't do that when people were just complaining about issues with the XHTML 2.0 approach, they did it after a competing approach was developed via an extensive, multi-year process through an outside group (WHATWG), and even then only that after a short period when both approaches were the focus of official W3C working groups.

They didn't adopt a "this is limited, let's throw it away and start over" approach as the original article here calls for with regard to HTTP/2.0.


You know what we need, we need to pick one of those people and give 'em one day to invent HTTP/2.0 and it'll be a better spec compared to letting them all "decide" together by nerd-fighting each other into eternity.

No standard is perfect, but the worst standard is no standard.

Make up your fucking mind already.




