
"Has lower latency than" fiber. Which is not so shocking. And, yes, technically a valid use of the word "faster" but I think I'm far from the only one who assumed they were going to make a bandwidth claim rather than a latency claim.


I wonder where the idea of "fast" being about throughput comes from. For me it has always, always only ever meant latency.


Latency to the first byte is one thing, latency to the last byte, quite another. A slow-starting high-throughput connection will bring you the entire payload faster than an instantaneously starting but low-throughput connection. The larger the payload, the more pronounced is the difference.
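The point is easy to see with a toy model: total transfer time is roughly startup latency plus payload divided by throughput. (The link parameters below are made-up illustrative numbers, not measurements.)

```python
def time_to_last_byte(latency_s: float, throughput_mb_s: float, payload_mb: float) -> float:
    """Seconds until the final byte of the payload arrives."""
    return latency_s + payload_mb / throughput_mb_s

# Low-latency but slow link: 5 ms startup, 1 MB/s.
# Slow-starting but fast link: 500 ms startup, 100 MB/s.
for payload_mb in (0.01, 100.0):
    low_lat = time_to_last_byte(0.005, 1.0, payload_mb)
    high_tp = time_to_last_byte(0.500, 100.0, payload_mb)
    print(f"{payload_mb} MB: low-latency link {low_lat:.3f} s, high-throughput link {high_tp:.3f} s")
```

For a tiny payload the low-latency link wins; at 100 MB the high-throughput link delivers the last byte in about 1.5 s versus roughly 100 s, exactly the "larger payload, more pronounced difference" effect.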


ehh... latency is an objective term that, for me at least, has always meant something like "how quickly can you turn on a light bulb at the other end of this system"


The term under discussion is "speed", which goes beyond latency. A link with the same latency but higher bandwidth is "faster" in the sense of "time to last byte".

Latency is well defined and nobody is quibbling about that.


An SR-71 Blackbird flies faster than a 747. Nevertheless, a 747 can get 350 people from LA to New York faster than the SR-71.


If I have to download a 4 GB movie, the round-trip latency is not so important. At 4 MB/s I can get the file in 1000 s; at 40 MB/s I can get it in 100 s.
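The arithmetic checks out once latency is negligible relative to transfer time (taking 4 GB as 4000 MB for round numbers):

```python
payload_mb = 4000  # 4 GB movie, using 1 GB = 1000 MB

for rate_mb_s in (4, 40):
    seconds = payload_mb / rate_mb_s
    print(f"{rate_mb_s} MB/s -> {seconds:.0f} s")
```

Even a generous 100 ms round trip adds only 0.01% to the slower download, so bandwidth dominates here.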


Until pretty recently, throughput dominated the actual human-relevant latency of time-until-action-completes on most connections for most tasks. "Fast" means that your downloads complete quickly, or web pages load quickly, or your e-mail client gets all of your new mail quickly. In the dialup age, just about everything took multiple seconds if not minutes, so the ~200ish ms of latency imposed by the modem didn't really matter. Broadband brought both much greater throughput and much lower latency, and then web pages bloated and you were still waiting for data to finish downloading.


I think it's just because ISPs have ingrained in people that "speed" means bandwidth when it comes to the internet. Improving bandwidth is pretty cheap compared to improving latency, because the latter requires changing the laws of physics.


If only the bottleneck was the laws of physics. In reality, it's mostly legacy infrastructure, which is of course much harder to change than the laws of physics.


> I wonder where the idea of "fast" being about throughput comes from.

A cat video will start displaying much sooner with 1 Mbps of bandwidth compared to 100 Kbps:

> taking a comparatively short time

* https://www.merriam-webster.com/dictionary/fast § 3(a)(2)

> done in comparatively little time; taking a comparatively short time: fast work.

* https://www.dictionary.com/browse/fast § 2

So an online experience happens sooner (= faster in time) with more bandwidth.
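The cat-video case can be sketched the same way: playback starts once an initial buffer has arrived, and that wait shrinks with bandwidth. (The 2 Mb buffer size below is an assumed illustrative figure, not a real player's setting.)

```python
buffer_bits = 2_000_000  # assume ~2 Mb must arrive before playback starts

for bandwidth_bps, label in ((1_000_000, "1 Mbps"), (100_000, "100 Kbps")):
    wait_s = buffer_bits / bandwidth_bps
    print(f"{label}: playback starts after {wait_s:.0f} s")
```

At the assumed buffer size, the 1 Mbps link starts the video in 2 s versus 20 s at 100 Kbps, so "faster" here is experienced entirely through bandwidth.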


A 9600 baud serial connection between two machines in the '90s would have low latency, but few would have called it fast.

Maybe it's all about sufficient bandwidth - now that it's ubiquitous, latency tends to be the dominant concern?


Presumably from end users who care about how much time it takes to receive or send some amount of data.


I assumed they were going to make a bandwidth claim and was prepared to reject it as nonsense.


Instantly assumed that it was clickbait.

So basically: Lower latency, lower bandwidth?


> So basically: Lower latency, lower bandwidth?

No: DAC and (MMF/SMF) fibre will (in this example) both give you 10Gbps.



