
A place I recently left had a 10M-record MongoDB collection without indexes, which would take tens of seconds to query. Celery was running in cron mode every 2 seconds or so, meaning jobs would just pile up and Redis eventually ran out of memory. No one understood why this was happening, so everyone would just restart everything after the PagerDuty alert…
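
For anyone who hasn't hit that particular combination: a minimal sketch of the usual fixes - an index for the hot query, an expiry on the scheduled task so stale messages don't sit in Redis forever, and a lock so runs can't overlap. This assumes pymongo, Celery and redis-py; the collection, field and task names are made up for illustration, not taken from the setup described above.

    # Minimal sketch, assuming pymongo, Celery and redis-py.
    # Collection, field and task names are hypothetical.
    from celery import Celery
    from pymongo import ASCENDING, MongoClient
    from redis import Redis

    # 1) Give the hot query an index; without one, every lookup is a
    #    full collection scan over all ~10M documents.
    mongo = MongoClient("mongodb://localhost:27017")
    mongo.mydb.records.create_index(
        [("status", ASCENDING), ("created_at", ASCENDING)]
    )

    app = Celery("tasks", broker="redis://localhost:6379/0")
    redis_client = Redis.from_url("redis://localhost:6379/0")

    # 2) Let stale messages expire instead of queueing forever in Redis.
    app.conf.beat_schedule = {
        "poll-every-2-seconds": {
            "task": "tasks.poll_records",
            "schedule": 2.0,
            "options": {"expires": 2},  # drop the message if not consumed in 2s
        },
    }

    @app.task
    def poll_records():
        # 3) Skip this tick if the previous run is still grinding away,
        #    so slow queries can't cause overlapping runs.
        lock = redis_client.lock("poll_records", timeout=60)
        if not lock.acquire(blocking=False):
            return
        try:
            for doc in mongo.mydb.records.find({"status": "pending"}).limit(1000):
                pass  # ...process the document...
        finally:
            lock.release()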


Yikes. Don’t get me wrong, it’s always been this way to some extent - there are never enough people who can look into a problem and understand what’s happening well enough to make things actually work correctly, so teams just iterate with the new shiny thing instead.

It seems like the last 4-5 years though have really made it super common again. Bubble maybe?

Huge horde of newbs?

Maybe I’m getting crustier.

I remember it was SUPER bad before the dot-com crash, all the fake-it-’til-you-make-it too. I even had someone claim 10 years of Java experience who couldn’t write out a basic class on a whiteboard at all, and tons of folks starting out who literally couldn’t write a hello world in the language they claimed experience in - and this was before decent GUI IDEs.


> It seems like the last 4-5 years though have really made it super common again. Bubble maybe?

Cloud providers have successfully redefined the baseline performance of a server in the minds of a lot of developers. Many people don't understand just how powerful (and at the same time cheap) a single physical machine can be when all they've used is shitty overpriced AWS instances. So it's no wonder they have no confidence in putting a standard RDBMS on one when anything above 4GB of RAM costs an arm and a leg, and they go looking for "magic" workarounds instead - which the business often accepts, because it's easier to get them to pay lots of $$$$ for a "web-scale" DB than to pay the same amount for a Postgres instance, or, God forbid, actually opt for a bare-metal server outside of the cloud.

In my career I've seen a significant amount of time & effort wasted on workarounds such as deferring very trivial tasks onto queues or building an insanely distributed system, where the proper solution would've been to throw more hardware at it (even expensive AWS instances would've been cost-effective if you count the developer time spent working around the problem).


Just to give a reference for those who don't know: I rent a dedicated server with 128 GB of RAM, a 16-core processor (32 threads), 2 TB of local SSD storage, and virtually unlimited traffic for $265 USD a month. A comparable VM on AWS would be around $750 a month (if you reserve it long term), and then of course you'll pay through the nose for traffic.


Technically we were a tech startup with 10+ “senior” engineers scraping the entire web ;D


Of those, the one most likely to be humming along fine is Redis, in my experience. I once SSH'd to the Redis box (EC2), which was hugely critical to the business: 1 core, the instance had been up for 853 days, just chilling and handling things like a boss.



