Hacker News

Depends on if you value latency for the user. Trying both and going with whichever comes back first hurts no one except servers that are no longer shielded by a client cache. But there's absolutely no reason to believe a client cache exists to mask requests: AFAIK no standard says clients cache for parsimony rather than purely for latency.

In fact I think this is a good idea whenever consulting the cache can take time; the trade-off is more bandwidth consumption, which we are awash in. If you care that much about masking requests, run a caching proxy and use that: you get the same effect of the cache masking requests, and I'd say it's superior, because the proxy always tries its local cache first and doesn't waste user time on edge conditions in its cache coherency. (Cache coherency comes from Netscape, which famously convinced everyone it's one of the hardest problems.)

That leads to the final benefit: the cache doesn't have to cohere. If it's too expensive at that moment to cohere and query, the network source wins the race. Again, the only downside is that network bandwidth is used more consistently. I'd be hard pressed to believe most Firefox users aren't grossly over-provisioned on bandwidth, and nobody would notice the fraction of a cable line a browser spends re-fetching things it also has cached.
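The "try both and take whichever comes first" idea can be sketched with a cache/network race. This is a minimal illustration, not Firefox's implementation; `fetch_from_cache` and `fetch_from_network` are hypothetical stand-ins and the sleep durations are invented latencies.

```python
import asyncio

async def fetch_from_cache(url):
    await asyncio.sleep(0.005)  # simulated disk read latency
    return ("cache", f"cached body of {url}")

async def fetch_from_network(url):
    await asyncio.sleep(0.050)  # simulated network round trip
    return ("network", f"network body of {url}")

async def racing_fetch(url):
    # Fire both lookups concurrently and take whichever completes first;
    # the loser is cancelled rather than awaited.
    tasks = [
        asyncio.create_task(fetch_from_cache(url)),
        asyncio.create_task(fetch_from_network(url)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return done.pop().result()

source, body = asyncio.run(racing_fetch("https://example.com/app.js"))
print(source)  # "cache" here, since the simulated disk is faster than the network
```

With a healthy disk the cache wins the race almost every time, so the user-visible behavior matches a cache-first design; the difference only shows up when the cache is slow.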


> Depends on if you value latency for the user.

I do. Which is why it's silly to throttle the cache IO.

The read latency for disks is measured in microseconds. Is it possible for the server to respond faster? Sure. But unless you're within 20 miles of the server, I can't see how it could be (speed of light and everything).

These design considerations depend greatly on where you are. It MIGHT be the case that eschewing a client cache for server-to-server traffic is the right move, because your servers are likely physically close together. That would let the server do a better job making multiple clients' requests fast through its own caching, saving the memory/disk space the clients would otherwise need.

There is also the power consideration. It takes a lot more power for a cell phone to handle a network request than a disk request. Shouting into the ether isn't cheap.


I think you're misunderstanding the context Firefox works in. Have you ever sat down at someone's choking machine and wondered how on earth they're hitting blocking IO, because something misbehaving in the background is drowning out the disk's ability to service work? Browsers don't run in a well-controlled server farm, and even there you can see drives behaving poorly and blocking on IO, wrecking cache performance. In a well-behaved environment the cache works fine; in a degraded one a cache-first design only falls back to the network after some timeout. The dual attempt minimizes that "decision time": instead of waiting for a timeout and then requesting, it issues both requests and goes with whatever comes back first, effectively a 0s timeout, even if you get no value from the local cache at all. Even on a well-managed host, something can wake up and start indexing, thinking it's a good quiet time, and for short periods cache behavior slows down. Even on a normal, well-tuned multiprocess machine you'll sometimes see the cache running slower than the network.
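The degraded-disk case above can be sketched the same way: race the cache against the network and the winner flips automatically when background IO stalls the disk, with no timeout to wait out. All latencies here are invented for illustration, and `read_cache`/`read_network` are hypothetical stand-ins.

```python
import asyncio

async def read_cache(url, disk_busy):
    # A busy disk (antivirus scan, background indexer) turns a
    # near-instant read into a long stall.
    await asyncio.sleep(0.200 if disk_busy else 0.005)
    return "cache"

async def read_network(url):
    await asyncio.sleep(0.050)  # simulated network round trip
    return "network"

async def race(url, disk_busy):
    # Issue both requests at once; take the first result, cancel the other.
    tasks = [
        asyncio.create_task(read_cache(url, disk_busy)),
        asyncio.create_task(read_network(url)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return done.pop().result()

healthy = asyncio.run(race("https://example.com/a.css", disk_busy=False))
degraded = asyncio.run(race("https://example.com/a.css", disk_busy=True))
print(healthy, degraded)  # cache wins on a healthy disk, network on a stalled one
```

The point is that the fallback requires no explicit detection of a slow disk: the race resolves it per-request.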

The cost is that the cache no longer masks the remote infrastructure's true demand. But there's no spec that says it must, AFAIK. I think the most interesting angle on this is that the infrastructure operators serving the data are paying for this latency optimization. Is that right? My view is that this is the concerning part: if widely adopted, it could materially increase costs for service providers, some of whom operate on thin or negative margins, or with no income at all.


What happens when your 200 browser tabs are all looking for updates?


It makes sense to apply some throttling (and reduce CPU priority) for inactive tabs.

But the active tab in the focused window should work without any throttling, for lower response time.
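That policy can be sketched as a shared concurrency budget that only background tabs queue behind. This is a toy model, not Firefox's actual scheduler; the limit of 2 and the helper names are arbitrary assumptions.

```python
import asyncio

async def fetch_update(tab_id):
    await asyncio.sleep(0.01)  # simulated update poll
    return tab_id

async def tab_poll(tab_id, is_active, background_limit):
    if is_active:
        # Active tab: no throttling, lowest response time.
        return await fetch_update(tab_id)
    # Inactive tabs queue behind a shared semaphore, so at most
    # background_limit of them poll concurrently.
    async with background_limit:
        return await fetch_update(tab_id)

async def main():
    background_limit = asyncio.Semaphore(2)  # assumed budget for inactive tabs
    # Tab 0 is active; tabs 1..9 poll in the background, at most 2 at a time.
    return await asyncio.gather(
        *(tab_poll(i, i == 0, background_limit) for i in range(10))
    )

results = asyncio.run(main())
print(results)
```

Dropping CPU priority for background tabs would be a separate mechanism; the semaphore only limits how many of their requests are in flight at once.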


If your 200 browser tabs are saturating the 8 Gbps link to your M.2 SSD, what do you think they'll do to your 10 Mbps connection to the internet?


What if it's a virus scan or some background indexer browning out IO resources on the disk path? Or, given it's a consumer browser, some malware eating up resources unbeknownst to the user? Surely you've experienced noisy neighbors :-)

It's an interesting approach because it drops the assumption that browser caches mask request volume for the infrastructure's sake rather than for user latency. I think it can overwhelm costs for low- or no-income services: if this is widely adopted, we're asking the operators of our servers to pay for that latency improvement.



