How Will Consumers Use Faster Internet Speeds? (freedom-to-tinker.com)
43 points by simoncion on Dec 30, 2015 | 68 comments



I live in Kansas City and have enjoyed Google Fiber for the last three years. (First the free 5Mbps, then the $70/month Gigabit)

The answer to this question, for me, is "Doin' the same shit. Slightly faster."

The speed test results are always fantastic, but real internet usage isn't quite as utopian as the hype makes it out to be. In some cases, I suspect there are bottlenecks between my house and whatever server I'm connecting to. In other cases, it almost feels like intentional throttling.

Netflix and Hulu performance is outstanding, but Amazon Prime videos occasionally drop into that super-blocky low bandwidth mode for seemingly no reason.

Most websites work great, but I've had super weird experiences with sites and servers that use IPv6 when I'm on my MacBook.

It's these weird quirks that probably keep me from noticeably changing my internet habits. But, as usual, YMMV.


To me the main difference is upload speed. Suddenly you can start using the internet as a LAN, and only then does it start to make sense to work in the cloud: save all your files there, not worry about the house burning down or being burglarised. Probably even more useful for smaller businesses.


I totally agree with that. I can throw a 100MB ZIP into my Dropbox folder without a second thought. Box and Google Drive are equally snappy.

With Google Fiber "Free" and Time Warner before it, I was capped at 1 Mbps upstream and cloud services were much more of a chore.


In light of this, would you now revise your original comment?


Not really. Before or after gigabit, I'd throw that file in my Dropbox. It just sucks less now.

A fast connection doesn't mean that I'm deciding to throw more stuff online or make more use of cloud services. It's just nicer that the stuff I have to transfer moves a helluva lot faster now. So I haven't noticed a significant difference in my usage.


1 Gbit/s is roughly 120 MB/s in practice - about the average read/write speed of an HDD. So yeah, maybe you can back up your personal PC or two, but you still won't be using it for frequent reads/writes (latency issues aside). I have 1GbE LAN cabling at home and I'd really want 10GbE. Also, to put things in perspective: HDMI 2.0 does 18 Gbit/s, and PCIe 4.0 x16 works out to roughly 250 Gbit/s of bandwidth. Now please forgive me for being ridiculous and quoting these figures. :) But I don't think you need to be that crazy to imagine a few cool use cases for such bandwidths. Or the kind of resource wastefulness (in a positive sense) it would allow :-D

My point is: 1 Gbit/s is not as utopian as it first sounds.
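
To put rough numbers on it, here's a quick back-of-the-envelope in Python (decimal units, no protocol overhead, and the PCIe figure is approximate):

    # Rough transfer-time comparison for a few link speeds (decimal units,
    # ignoring protocol overhead), just to put 1 Gbit/s in perspective.
    LINKS_GBPS = {
        "1 GbE / Google Fiber": 1,
        "10 GbE": 10,
        "HDMI 2.0": 18,
        "PCIe 4.0 x16 (approx.)": 250,
    }
    PAYLOAD_GB = 250  # e.g. backing up a 250 GB drive

    for name, gbps in LINKS_GBPS.items():
        mb_per_s = gbps * 1000 / 8  # 1 Gbit/s = 125 MB/s raw
        minutes = PAYLOAD_GB * 1000 / mb_per_s / 60
        print(f"{name:24s} {mb_per_s:7.0f} MB/s -> {minutes:5.1f} min for {PAYLOAD_GB} GB")

At gigabit speeds a full-drive backup is still a half-hour affair, which is why it feels more like a fast LAN than like local storage.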


So a 1 Gbit/s connection like Google Fiber provides is around the same bandwidth as a typical HDD. A typical HDD has around 13 ms of latency, which is also easily in the realm of latency to a regional datacenter.

So any technical barriers to making all HDDs obsolete are evaporating. I guess with the rise of SSDs the only draw of HDDs is now price, and I suppose that feature won't be supplanted by some service provider any time soon.


Not to mention that most devices these days connect over wifi, which almost always offers less than 1Gbps speed.


Even using Ethernet and open source firmware on consumer routers (e.g., TomatoUSB on Broadcom-based ASUS), I wasn't able to get >100 Mbps. I guess many developers' own Internet connections don't have such bandwidth, so they're not incentivized to make improvements.


Google provided both modems and routers, which was absolutely essential. Very few people had/have the hardware to take advantage of the fiber speed.

Anyone who wants to roll out a fast service should certainly take that cue.


Anyone looking to roll out fast service should try to get Google to sell them the routers. Google actually bothered to get the routers and modems right in ways that almost no retail gear does, and that few ISPs could even write detailed product requirements for.


You should check out the EdgeRouter series from Ubiquiti. They are cheap ($50-$350) and can route traffic at gigabit speeds.

Here is the EdgeRouter Lite: http://amzn.to/1UhB9SF

It's not a 'lazy developer' issue, but hardware really is the limiting factor. You would be surprised how much a router can impact your speed once you get above 50Mbps.


Another brand to recommend is MikroTik. Depending on your needs, they have routers that scale from small home use up to medium or large offices.


Yeah, you have to do a bit of research to find routers that can route/firewall at ~1Gbps. The Ubiquiti EdgeRouter Lite is one such router, with a typical retail price of just south of 100 USD. (Though -for whatever reason- you do have to manually enable hardware offload to get 1Gbps perf out of the device.)


Here is the system I purchased - http://www.pcengines.ch/apu1d4.htm

With both OpenBSD and Debian, I've had no problems routing at gigabit speed locally and the maximum 350 Mbps on WAN.


That looks pretty neat. Can you route and firewall at gigabit rates?

Also, it looks like it would cost ~150-300 USD to make a complete (board, case, storage, AC adaptor) router?


That's right about the cost. I think it's a good deal compared to something like https://store.pfsense.org/SG-2440/

What do you mean by firewall, exactly? I do nothing extreme: just a pf or iptables ruleset, plus dnsmasq, privoxy - those sorts of things. Happy to run any performance benchmarks you can recommend.


> What do you mean by firewall, exactly?

The basic stuff: NAT translation, port forwarding, connection rejection and the like.

For bonus points, doing stuff like traffic shaping/prioritizing (like CoDel) at gigabit speed would be rather interesting.

At the moment the EdgeRouter Lite can do limited amounts of packet inspection using the offload hardware. As time goes on, the Ubiquiti folks are figuring out how to better use the offload chip, but -for now- rate limiting and traffic shaping have to run through the thing's CPU, which -IIRC- gives you somewhere north of 100 Mbit/s of throughput.

> I think it's a good deal compared to something like...

Oh, for something that I would expect to be able to keep using for 10 years, I think it's quite a reasonable price. :D

Edit: Yeah, I don't really have any perf benchmarks to recommend. I guess -if I had the time and the gumption- I'd do something like set up iperf (probably 3 so you can use TCP) on two machines (each in a different subnet so packets would pass through the router) and adjust the iperf listen port so that my various firewall rules triggered, and compare performance.
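Concretely, the sort of thing I'm picturing is below. It's only a sketch: it assumes iperf3 is installed on both boxes, that servers are already listening on the target host ("iperf3 -s -p <port>"), and the host/port values are placeholders you'd swap for ones that actually exercise your ruleset.

    # Sweep a few iperf3 ports so different firewall rules fire, and record
    # the measured throughput for each.
    import json
    import subprocess

    TARGET = "192.168.2.10"    # host in the other subnet (placeholder)
    PORTS = [5201, 8080, 443]  # pick ports that trigger different rules

    for port in PORTS:
        out = subprocess.run(
            ["iperf3", "-c", TARGET, "-p", str(port), "-t", "10", "--json"],
            capture_output=True, text=True, check=True,
        )
        bps = json.loads(out.stdout)["end"]["sum_received"]["bits_per_second"]
        print(f"port {port}: {bps / 1e6:.0f} Mbit/s")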


Using open-source firmware won't get you much in the way of performance benefits if it's not using recent Linux kernels and well-maintained in-tree drivers.

And it was only recently that any routers started including CPUs that are powerful enough for high-speed packet processing. Prior to 802.11ac, basically everything was based on '90s-era single-core MIPS, and the only hope of 100+Mbps throughput was to use hardware offloads that severely curtail the kinds of packet processing you can accomplish (and usually aren't supported by any open-source driver). Now we've got 1+GHz multi-core ARM processors in wireless router SoCs, but no mature drivers for them.


My mid-range router simply didn't have a fast enough processor to handle gigabit speeds, so shortly after having the service installed in my apartment I upgraded to a much nicer device. It was well worth it.


For me, it's less about the throughput than the total experience given the services I use.

I have a less than stellar connection for the US (~25/5-ish?), and getting my VoIP to work took some real tuning in pfSense. The move to pfSense was basically motivated by the fact that I couldn't even do the thing I wanted (prioritize VoIP) via the GUI provided by the mainstream combo wifi/router I had before. Without tuning (firewall, QoS settings), it would frequently sound terrible as soon as someone fired up something bandwidth intensive.

I wonder how much is HW vs. SW, since the router I'm using these days is a low-end single-core (2x hyperthread) Atom box from Soekris that rarely sees >10% CPU load. There appear to be much faster HW options available from various vendors these days at a lower price point than I paid for the Soekris when it was new, but pfSense wasn't available on my cheap MIPS wifi router.


I have that same experience with Amazon Prime videos. I instinctively want to blame Amazon, but who knows?


"Me three" re: Amazon Prime videos. Netflix and Hulu work wonderfully. We only occasionally watch something on Amazon Prime and I don't think there's been one time that we didn't have an issue with it (and I have fiber at home, 50x25 Mbps, and my ISP is well-connected network-wise).


I also blame Amazon. I only have 15 Mbps down, but Netflix never has a problem after it initially buffers. Amazon almost always has a few buffering periods throughout a video.


I had Google Fiber in an apartment with roommates in Provo, Utah. In the same way, individual tasks didn't seem amazingly faster (bottlenecks and device speed could often have been the culprit, for sure), but the parallelism was great. Netflix, Clash of Clans and Battlefield 4 all running through the same pipe without a hitch. We never felt like we were stepping on each other's toes bandwidth-wise. And I rarely felt the need to turn off my backup utility or stop large Steam downloads.

Fun story though, it was fast enough that I got kicked for an hour from my school's network because I was downloading way too much output data, way too quickly over SCP from the school supercomputer.


It's not about what they'll do in a "what we always did, just more/faster" kind of way. It's about the new, unimagined things that will take off once everyone is linked well at 1 Gbps.

When we went from modems to "broadband" (>1 Mbps or so) it grabbed the good-ole-boy music industry by the throat and shook them so hard they've still got PTSD. At >10 Mbps the movie/TV industry had its reckoning. Consumers won huge both times. Can't wait to see what's next.


But we had music piracy before 1 Mbps, only limited. And we had video torrents before 10 Mbps (at low quality), so those use cases were clear, and we could have guessed that legal versions of those activities might also appear.

But currently the limits are tied more to physiology: the highest-bandwidth channel into a human is the eyes, and we know their limits (somewhere between 1080p and 4K depending on various things), and we've almost covered that.

The only open question is VR streaming, but I wonder whether that will work, considering the extremely low latency required.


> But we had music piracy before 1mbps , only limited.

Quality has a quality all its own. Music piracy in the era of Old Napster (before they went legitimate) was terrible. Horrible mislabeled MP3s at a bitrate best described as "wax cylinder quality" with no album artwork or other extras buying a CD would provide. Plus, of course, you were stuck behind dial-up, and even poor-quality MP3s go at about a megabyte of data per minute of music, so you were looking at substantial download times over unreliable connections for anything approaching a full album.

It was wonderful only in that it worked at all, and that it had music record stores wouldn't touch, at least not if you were out in the boonies. It was one of the few pre-YouTube channels for obscure and out-of-print music. However, that didn't make it convenient in any absolute sense, and it was usually only high-quality compared to not having the music at all.

These days, with most torrent software, you can download a whole discography, in high-quality FLAC with full album images and so on, well-organized and correctly labeled, simply by copying-and-pasting a magnet link. Multiple megabyte downloads go by in seconds. The difference in the experiences is night and day.

The music industry got angry at Old Napster. They're being crushed by Bittorrent.


Sure, there were definitely improvements in music quality. And sure, 1 Mbps wasn't enough. But it was easy to forecast demand for bandwidth: it's reasonable that people would want higher quality music, we had decent guesses about the maximum bandwidth that would require, and the amount of music people listened to stayed about the same - i.e., a lot.

But sure, BitTorrent definitely had a bigger impact on the music industry.


>But currently the limits are tied more to physiology: the highest-bandwidth channel into a human is the eyes, and we know their limits (somewhere between 1080p and 4K depending on various things), and we've almost covered that.

This runs counter to my experience and thinking. I am sure this is true on a phone form factor, but 8K TV is definitely better than 4K on a 60"+ screen (I've never seen them side by side on a smaller device, and I am unsure how big the demo setup was).

Current 4K has limited colour capability; ramping up to richer palettes will increase the bandwidth demand, as will higher frame rates and the increase in resolution to 8K.

VR streaming is interesting, but I guess that filming it will be a big challenge!


Beyond resolution, sure there are other factors.

But if we only talk about resolution, 4K reaches the limits of the eye at around 1.1 meters [1], and most people sit farther than that from their TV.

But usually at display demos they let you sit closer to the TV; this and other factors (compression, placebo, framerate, maybe brightness, etc.) might help create better experiences, or illusions/salesmanship, with 8K.

Another possibility is that for a rare few the retina limit is higher.

[1] Please use this calculator to see the physiological limits of the eyes: http://isthisretina.com/
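
If you'd rather not trust a web calculator, the arithmetic behind it is short enough to redo yourself. This sketch assumes the usual ~1 arcminute per pixel figure for 20/20 vision and a 16:9 panel:

    # "Retina" distance: how far you must sit before one pixel subtends one
    # arcminute (the standard 20/20-vision figure). Assumes a 16:9 panel.
    import math

    def retina_distance_m(diagonal_inches, horizontal_px):
        width_in = diagonal_inches * 16 / math.hypot(16, 9)
        pixel_in = width_in / horizontal_px
        distance_in = pixel_in / math.tan(math.radians(1 / 60))
        return distance_in * 0.0254

    for res, px in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
        print(f'55" {res}: indistinguishable beyond ~{retina_distance_m(55, px):.1f} m')

For a 55" panel that puts 4K at roughly 1.1 m, which is where the number above comes from; by the same arithmetic 8K only matters if you sit closer than about half a meter.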


All sorts of new media experiences open up with more bandwidth. At the beginning it was just HTML. Then everyone was able to get streaming music and streaming video! YouTube was a huge leap in what people's internet experience was like. Now, with higher upload speeds, we're getting broadcasting services like Twitch. What's next? Maybe streaming game services can finally take hold. Or maybe just completely decentralized computing for all tasks.


I had 100 Mbps; it was $50/month. They sent out a promo for 1000 Mbps at $65/month. I got it, and over wired ethernet I can actually get around 950 Mbps on the test sites.

Saw basically no difference in day to day usage. Even saw 0 difference in download speeds off of VPS servers etc.

Was hoping my employer's VPN would feel like I'm on site (or as close as one could get). Bottleneck there as well.


Given that so many folks nowadays just pipe their cable or DSL modems straight into an 802.11g or n wi-fi router and then use that to connect most, if not all, of their devices, I think many customers who pay for these gigabit connections might end up disappointed.


While I understand that gigabit Internet needs to be useful to the average consumer in order to be sustainable economically, having more bandwidth (and IPv6) at home allowed me to do things I would have never thought of doing before. As someone in business where every penny counts, it makes a difference.

When your average consumer is stuck on low speeds, imagine what the consequences are on those who rely on this for their work (not to mention for those who tinker and innovate in their "garage").

Where I live in Canada, even for your average business, it's either a barely sufficient DSL/cable line at $70-100/month, or fiber, which usually starts at $500/month. It's ridiculously expensive to get good bandwidth. Sometimes it's easier to stick point-to-point antennas on the roof and connect directly to a datacenter.

Faster speeds: they will find a use for it, and everyone wins.


The limiting factor will become the local wifi network. I still find that real wifi speeds are a fraction of what is advertised and I know very few people who connect their devices with ethernet.


Yeah, wireless standards tend to be advertised as the maximum speed you can get under perfect conditions, but WiFi drops the complexity of the modulation the farther you get from the router and/or the more interference you have around.

802.11ac can drop all the way back to 6Mbps under really bad conditions!


In northern Europe gigabit to the home isn't uncommon. As long as you start to wire up new developments with off-the-shelf ethernet equipment, the rest pretty much follows. I'd wager ethernet is probably less expensive than cable-TV coax, and ethernet carries everything.

The big difference with faster speeds is that you don't have to muck about with QoS. Just throw bandwidth at the problem. This means you can actually sell a product that would depend on it, since support becomes manageable.

1 Mbps is adequate for VoIP. But at 10 Mbps I can use it without thinking. I can get an account with any provider, start simultaneous calls, and there are no dropouts even when torrenting.

100 Mbps is adequate for IPTV, in broadcast HD quality, when the streams are over a controlled network. But with 1Gbps it becomes just another service, even if my kids stream too.
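
As a rough illustration of why the QoS fiddling goes away (per-stream bitrates here are my own ballpark guesses, not anything official):

    # How many concurrent streams fit in a pipe, ignoring everything else on
    # it. Per-stream bitrates are rough guesses, not official numbers.
    STREAM_MBPS = {"SD IPTV": 3, "Broadcast HD": 8, "4K": 25}

    for link_mbps in (10, 100, 1000):
        fits = {name: link_mbps // mbps for name, mbps in STREAM_MBPS.items()}
        print(f"{link_mbps:4d} Mbps link:", fits)

At 1 Gbps even a handful of simultaneous 4K streams barely dents the pipe, so per-household prioritisation stops mattering.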

The big difference, as someone else pointed out here, is when upload speeds catch up. Asymmetric broadband quickly becomes ACK-limited, which disturbs real-time streaming in the other direction. Latency is key here, and just getting rid of that cable modem and hitting the wire directly really helps.

Regarding piracy, I think it is obvious that it goes down when real-time streaming gets practical. It's more convenient, really. So more broadband really does transform markets. The problem for the past ten years has been that it's not really deployed globally, and many of the rich western countries are stuck with cable modems because of lock-in effects. This should be an obvious opportunity for an entrepreneurial spirit: just wire up a suitable area and you can be the first to sell IP services to them. Cable TV and phone service are moving to IP; it's just a matter of time.


When 4K (2160p) video becomes mainstream we're going to need more bandwidth to stream it (Netflix, for example). Even with H.265 or VP9 it's at least double the bitrate compared to 1080p.


Or we could stream 2K with less compression. They show 2K in most cinemas and it looks pretty decent with low compression.


To load web pages with a gig of tracking Javascript on them.


Ha ha ha… sigh. It's funny, yet also sad, because it's true. As someone who's been building web sites since the late '90s, I've watched page weights grow even more radically than the typical home connection speed. Connections get faster, but page loads get slower.

Don't forget the CSS "frameworks," gratuitous fonts, and gigantic auto-playing background videos, too.


It's strange to me that these researchers don't look at countries that have had 1 Gb/s residential Internet connectivity for years. Why not investigate how Internet usage changed in Slovenia, or South Korea, or some other country more bandwidth-advanced than the USA?


A before/after in a whole country is not as clean an experiment as the randomized trial done in this study. With the whole country, there could be a whole bunch of other variables changing in the before and after time frames. (Like maybe some new service comes online that results in people using a lot more bandwidth.) The design of this study allows the researcher to hold those things constant.


I co-run an ISP in Montana that started out wireless (40/10) and now does FTTH (100/100 Mbps to 1/1 Gbps). We don't see much difference between our wireless and fiber customers in terms of consumption at the moment. I am interested to see what happens when streaming VR becomes prevalent. Our biggest streams are Netflix 4K streams; biggest downloads are Steam/console game downloads (~25 GB).

It is fun to watch new customers' bandwidth consumption. It maxes out for a few days to a week... and then drops. As far as we can tell, people back everything up, update everything, torrent everything... and then run out of things to download/upload.


Obligatory educational link on bufferbloat

http://www.bufferbloat.net/projects/bloat/wiki/Introduction

'One of the most insidious things about bufferbloat is that it easily masquerades as something else: underprovisioning of the network. But buying fatter pipes doesn't fix the bufferbloat cascades, and buying larger buffers actually makes them worse!'
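
You can see it on your own connection with nothing fancier than ping: sample the RTT with the link idle, then again while you saturate the uplink yourself. This is only a crude sketch; the target host is a placeholder and it assumes a Unix-style ping.

    # Crude bufferbloat check: average ping RTT idle vs. under load. Start a
    # big upload/download yourself in another window when prompted; a large
    # jump in RTT under load is the classic bufferbloat symptom.
    import re
    import subprocess

    def avg_rtt_ms(host="8.8.8.8", count=10):
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True, check=True).stdout
        rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
        return sum(rtts) / len(rtts)

    input("Leave the link idle, then press Enter...")
    idle = avg_rtt_ms()
    input("Now saturate the link (big upload), then press Enter...")
    loaded = avg_rtt_ms()
    print(f"idle: {idle:.1f} ms  loaded: {loaded:.1f} ms  bloat: {loaded - idle:.1f} ms")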


I wonder if there is a sampling bias, given that they selected users who were already paying for 100 Mbps, which is a pricier premium tier.


You could always ask. :)

The fellow who wrote the article is one of the folks who performed the study.


A good portion of the bandwidth that has been added so far has simply been consumed by more bandwidth intensive versions of existing products. The user behavior hasn't necessarily changed.

See Facebook's auto-playing videos as an example.


I won't be using faster internet speeds. At least I won't until I'm assured data caps are a thing of the past.

I "self-throttle" data speeds by opting for for the lowest tolerable internet service to keep me under my cap.

F@#$ing Comcast.


Data caps are likely to become more prevalent with faster end user access pipes.

ISPs have been (much like you) using limited access speeds to self-throttle customers' data usage and limit their aggregate backhaul costs.


If data caps are to become more prevalent, I'd like to see ISPs regulated as a utility. If I use more power or make more phone calls, I'm happy to pay more for that use, because the connection fees are equal between high- and low-demand consumers.

If I pay higher monthly connection costs for high speed access (the monthly service charge) and pay to actually use the service at the intended speed (additional fees generated by a data cap), I am being double billed for service.

Data caps in conjunction with higher costs for high-speed access are a predatory business model.


They won't use it - not soon, anyway. There just isn't much of a use case for gigabit in the home, even if you're a family that needs multiple 4K streams for some bizarre reason.


The real impact of Google Fiber--assuming it doesn't get sued to oblivion by one of the other ISPs--is that it sets a new standard. Because Google is now providing better service than, say, AT&T, AT&T must now up their game to keep customers.

Thus, as Fiber rolls out to new areas, expect the pre-existing players to either improve their own services (to everyone's benefit) or try to undercut Google (to their own detriment).


Well, I'd hope that many of the 'consumers' will become 'producers'. Although I'm not too optimistic, considering Internet companies have done their best to turn the fundamentally decentralized architecture of the Internet into a centralized one over the past decade or so.


Seems to me that it's more about what opportunities will arise to take advantage of higher speeds. Most services aim at the typical consumer (the general public), so until the general public has high-speed access on average, there will be few "reimagine how you do X" type services that leverage it.


It shouldn't take that much bandwidth to do things. I suspect that what's really happening is that increasing the end bandwidth reduces the packet loss rate in the cable system's routers. Someone needs to test this with something that plays streaming video while noting packet loss rates.
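
A crude version of that test: run a long ping past your ISP's head-end while the video plays and note the loss figure. ICMP loss is only a proxy for what the video flow actually sees, and the target host below is a placeholder.

    # Run while a video is streaming; reports the packet-loss percentage from
    # ping's summary line (Linux/macOS format). Host and duration are arbitrary.
    import re
    import subprocess

    HOST = "8.8.8.8"   # placeholder target beyond the cable plant
    COUNT = 300        # ~5 minutes at one packet per second

    out = subprocess.run(["ping", "-c", str(COUNT), HOST],
                         capture_output=True, text=True).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    print(f"loss while streaming: {match.group(1)}%" if match else out)

Run it once while streaming on the slow tier and once on the fast tier and compare.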


Low latency gaming, I hope.

That could even open the door to P2P multiplayer paradigms, which might let MMOs support more players in the same world.


4K virtual reality web in 3D. The next attempt at VRML / Second Life might be it.


As internet speed increases, we will see computers turning into thin clients. Why pay for a ridiculously powerful desktop that's idle 98% of the time?

Once you reach gigabit speeds, you can do all kinds of crazy cool things, like editing 4K videos remotely.

https://www.youtube.com/watch?v=VxHOzG-1KvM


Why pay for a ridiculously powerful desktop that's idle 98% of the time?

Because you actually own the physical machine; some may disagree, especially the DRM crowd, but there's a big difference between having hardware that is on your property and just having volatile control over a machine in some unknown location.


People used to buy DVD machines and DVDs, now they have Netflix and set top boxes.


...and buy Blu-rays instead of DVDs.


This always seems to be one of those things on the verge of happening, but every effort to promote it has failed.

Even 10ms of latency can be a serious drag on your user interface. Thin clients kind of suck for this reason.


Until the speed of light starts increasing, latency will always be an issue for thin clients.

Fun experiment: Try ssh-ing into a web host or a school account. Peruse a few directories and open a few files in VI. Type in a paragraph of text. Now try the same on your local machine's terminal and compare the responsiveness!
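
If you want to put a number on it rather than eyeballing vi, you can time a no-op round trip over an already-established SSH connection against running it locally. This is just a sketch: the hostname is a placeholder, and it assumes key-based auth and OpenSSH connection multiplexing.

    # Rough measure of the thin-client tax: a no-op over a warm SSH master
    # connection vs. the same no-op locally. Multiplexing keeps us from
    # timing the key exchange on every run. Hostname is a placeholder.
    import subprocess
    import time

    HOST = "user@example.com"
    CTRL = ["-o", "ControlMaster=auto", "-o", "ControlPath=/tmp/ssh-%r@%h",
            "-o", "ControlPersist=60"]

    subprocess.run(["ssh", *CTRL, HOST, "true"], check=True)  # warm up the master

    def avg_ms(cmd, runs=20):
        start = time.perf_counter()
        for _ in range(runs):
            subprocess.run(cmd, check=True, capture_output=True)
        return (time.perf_counter() - start) / runs * 1000

    print(f"local no-op : {avg_ms(['true']):6.1f} ms")
    print(f"remote no-op: {avg_ms(['ssh', *CTRL, HOST, 'true']):6.1f} ms")

Even over a multiplexed connection, the remote figure never drops below your network RTT, which is the whole point.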


I'm a developer, I SSH into remote machines and type code all the time. As long as you're 10-15ms away, it's not an issue.


10-15ms is good enough of a ping for an FPS. That's fantastic!


Not really. So far the use cases have become wider rather than simpler, and we've simply found use for that extra power. You need a pretty powerful machine just to watch 4K video, and to fill up that gigabit pipe. And if you want to play modern games you need to get a very powerful one.



