
For a specific example of latency incompetency that immediately came to mind while reading this: Chrome.

Chrome will not run properly on first execution (as in, run for the first time after a cold start of the computer) when executed off an HDD. Why? Because the HDD takes too long to read data. Chrome expects SSD latency, and fuck your computer if it isn't residing on one.

When executed off an HDD, I've found Chrome only runs properly from the second execution onwards, after the underlying operating system has cached most of the stuff Chrome wants in RAM in anticipation of subsequent executions.

I want to say this is optimization for ever more powerful hardware, but I'm inclined to say it's also sheer incompetence that Chrome literally can't fall back gracefully if it doesn't get data as quickly as it wants.


Made some offhand comment about Chrome perf on Twitter earlier this year, and a Google friend replied something like: "Well, pretty much the whole Chrome team just got upgraded to local test machines with at least 32 GB of RAM. Godspeed everyone."


Makes you wonder what would happen if companies occasionally did the exact opposite to their engineers.

"Oh, you know that 32 GB machine you've got? We're replacing it with this new 16 GB one. If the test suite is too slow on your new machine, I guess you'll just have to make the tests faster."


What would happen is those engineers would rightly be concerned that their leadership had lost their marbles, and would quickly find new jobs elsewhere. These kinds of “fun” thought experiments don’t pan out in the real world.


You having fun / being able to develop fast isn't your customer's problem, or the problem of the people actually using the things you build. Windows Vista devs with 8 GB of DDR2, when real-world customers had 512 MB of DDR, learned this lesson the hard way.

EDIT: Also, client-side native software and web dev are insanely different. Web/server-side people seem to disregard this. Constantly.


> You having fun / being able to develop fast isn't your customer's problem, or the problem of the people actually using the things you build.

I cannot parse this sentence. What does Vista having its minimum requirements poorly defined have to do with being forced to develop on underpowered hardware?

If my boss says “we are giving you a worse machine because we think that will make you write better code” I am out of there. There are plenty of ways to emulate weaker hardware and do performance testing and to make it a development priority that don’t involve intentionally hamstringing your engineers.
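For instance, here's a minimal sketch of emulating a weaker machine on Linux (assuming systemd; MemoryMax and CPUQuota are real systemd resource-control properties, but ./run-tests.sh is a made-up entry point, and note this caps RAM/CPU without emulating HDD latency):

    // Sketch: run the test suite under an artificial memory/CPU cap,
    // approximating a low-end machine without downgrading the dev box.
    import { execFileSync } from "node:child_process";

    execFileSync("systemd-run", [
      "--user", "--scope",
      "-p", "MemoryMax=512M",   // pretend we only have 512 MB of RAM
      "-p", "CPUQuota=25%",     // pretend the CPU is a quarter as fast
      "./run-tests.sh",         // hypothetical test-suite entry point
    ], { stdio: "inherit" });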


>I cannot parse this sentence. What does Vista having its minimum requirements poorly defined have to do with being forced to develop on underpowered hardware?

Basically: an end-user has a computer with 4 GB of DDR3. Devs for <software> wrote for and tested on machines with 64 GB of DDR5. <Software> ends up running like shit on the end-user's computer.

It isn't the end-user's problem that the software runs like shit, because the devs programmed to an unrealistic common denominator. The end-user is going to find <software> that doesn't run like shit on his computer, and the devs only have themselves to blame for losing a customer because they were so out of tune with reality.


> It isn't the end-user's problem that the software runs like shit

It may not be their fault but it almost certainly is their problem…


Tell me you've never developed native applications outside an iOS simulator, or done OS development, without telling me...


Do you actually have something to add to the discussion? Or you just want to take potshots at me?

You’re all over the place. Please explain why you think using underpowered hardware is the only legitimate way to write software that works on that hardware.


You can write your code on a nice fast machine. A dev machine should be as fast as possible. Those devs with their big fast machines should be required to run and test on much lower spec machines though.

Testing only in a VM on a beefy dev box leads to terribly performing software on customer machines. There's a multitude of performance problems that only come up when a system starts paging to disk, a machine has an HDD, or a CPU gets maxed out. These issues will be completely masked on a dev machine with tons of RAM, 16 cores, and an NVMe disk.

Far too many developers have the beefy dev box with no requirements to test on more prosaic configurations. Even limiting a VM's CPU and memory isn't a good environment for performance testing because it's still faster than actual low end hardware.


And it can't be a cost issue; the crappiest machines are cheap laptops from any big-box retailer. Okay, that fact itself might make getting them harder, but still. Just buy a few-hundred-euro laptop once a year and add it to the pile. Rotate them out in 5-10 years or as they fail.


Preach, my friend.


Yes, I started this discussion thread you're responding to. No, there isn't really a way to do native development without experiencing what your customers do. Build locally on the high-powered machine, run on your lower-spec'd one. This isn't a potshot. This is me being annoyed that, 15 years later, people keep making the same mistake of not testing on "real world" hardware. Your VM isn't a real user representation. Stop thinking so.


I think the problem is the either/or nature. One dev, one box is a mistake. The team should have a few machines they share.

In particular, I often have to fight to keep the slowest machine from leaving the office. It should stay around for a long time, set aside for testing.


It's a matter of tact. They could say, "Here are your 32 GB RAM machines. Your old one? You get to keep it too. Make sure Chrome works perfectly on your old machine."


This is how it should go, not setting the high-perf machine as the benchmark.


At least not every engineer. I love using older machines and any chance to make things work in a hardware-friendly way.

I have to say, I appreciate the insane expertise browser-engine developers have in making JS and layout run fast.


This is what CI (Continuous Integration) and CD (Continuous Delivery) systems were designed for. If certain tests exceed the performance budget of a low-resource environment in CI/CD, the engineer responsible will be required to fix it before a release can ship.
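As a hedged sketch of such a gate (the budget number and loadStartupFixture are invented placeholders; a real setup would run this on hardware spec'd like the low-end target, and assumes Node run as an ES module):

    // Sketch: fail the CI job when a measured operation blows its
    // performance budget, blocking the release.
    const BUDGET_MS = 2000; // illustrative budget, not a real number

    async function loadStartupFixture(): Promise<void> {
      // stand-in for the slow path under test (cold start, first
      // paint, opening a large document, ...)
      await new Promise((resolve) => setTimeout(resolve, 100));
    }

    const t0 = performance.now();
    await loadStartupFixture();
    const elapsedMs = performance.now() - t0;

    if (elapsedMs > BUDGET_MS) {
      console.error(`Perf budget exceeded: ${elapsedMs.toFixed(0)} ms > ${BUDGET_MS} ms`);
      process.exit(1); // block the release
    }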


You’d have less fallout if you just delayed giving them upgrades long past the point they’re due.


This is what CI can be used for


32 GB is hilariously low for Google; my machine has 196 GB of RAM.


Bragging about having 196 GB in a thread about real-world performance is exactly why this thread happened.


No, this thread happened because not enough effort was put into automating performance testing.

Which, unlike making everyone develop on Palm Pilots, is the correct solution to performance problems.


Wasn't meant to be a brag, just letting you know that 32 GB of RAM is on the low end for most employees.


I can tell from the build-time memory requirements of Blink.


I was just whining about how, when a Chrome-based browser first opens on my 5400 RPM HDD, I paste in the URL and press Enter, then it loads the default home page and wipes out what I pasted... "you're goin' nowhere!"


Opening Google Maps, the text entry is editable far, far earlier than it should be. As soon as the page loads, I can start typing. Then the JavaScript starts running and helpfully selects the text entry, placing the cursor at the beginning of the text field. Then some cached suggestion loads, inserting a suggested search for the area being viewed.

The end result is that anything I type gets jumbled or overwritten multiple times before the page settles down and can actually be used.
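The page-side fix is cheap, for what it's worth. A sketch (the #search id and the suggestion plumbing are invented for illustration):

    // Sketch: don't let late-arriving JavaScript clobber what the user
    // already typed. #search and applySuggestion are hypothetical names.
    const input = document.querySelector<HTMLInputElement>("#search")!;
    let userHasTyped = false;

    input.addEventListener("input", () => { userHasTyped = true; });

    // Called whenever a cached suggestion finally arrives.
    function applySuggestion(suggestion: string): void {
      // Only prefill or steal focus if the user hasn't interacted yet.
      if (userHasTyped || document.activeElement === input) return;
      input.value = suggestion;
      input.focus();
    }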


This is the general curse of async UI these days, and it's everywhere from the web to native desktop apps to the OS itself.

Obviously, it can be done right, but it seems that most devs who are so eager to jump on the async bandwagon have no idea that they have to make that effort now.


That's far from being only a Chrome problem. That extremely irritating behavior is going to be with us until OS developers see the light and we get hard/soft realtime GUIs.


I've seen this exact behaviour in a few Electron apps on a Raspberry Pi 400.

Make sure nothing else is running, start the application, expect failure, start it again, and it works as expected.


In the first-boot-on-HDD scenario, why would latency make the program fail to start at all? I'd expect it to just start slowly.


Unanticipated race condition, perhaps? A process takes 2 minutes instead of 5 seconds, and then a later part of the startup fails because it has no way to handle the totally unexpected lack of data.
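A sketch of that failure mode (readConfig and the 5-second figure are illustrative, not Chrome's actual startup code):

    // Sketch: a startup step that races a disk read against a timeout
    // and treats "slow" as "absent". On an HDD cold start the read can
    // take minutes, the race times out, and later startup code trips
    // over the missing data.
    function timeout(ms: number): Promise<never> {
      return new Promise((_, reject) =>
        setTimeout(() => reject(new Error("timed out")), ms));
    }

    async function readConfig(): Promise<string> {
      // stand-in for a read that's fast on SSD, glacial on a cold HDD
      return new Promise((resolve) => setTimeout(() => resolve("{}"), 50));
    }

    let config: string | null;
    try {
      config = await Promise.race([readConfig(), timeout(5_000)]);
    } catch {
      config = null; // graceful code would keep waiting or fall back
    }                // to defaults here, not limp on and crash later
    console.log(config ?? "no config; startup continues without data");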


For starters, Chrome will not wait for extensions to load/initialize. It's possible that in the HDD case there are more things that simply time out, or that Chrome disables because it doesn't want to wait.


JIT. It's using a "known state"/cached blob to start, then quickly falls apart as it does its "SSD expected" memory-management voodoo.


This seems weird to me; can you define "run properly"?


Chrome also runs a full built-in antivirus scan of your whole PC on first launch after updating.
