From Cloud Chaos to FreeBSD Efficiency (dragas.net)
32 points by mpweiher 9 months ago | 7 comments



I'm a fan of FreeBSD, but nothing in this article is really specific to switching to FreeBSD, aside from the customers lucking out that the crypto-miner injected into their system wasn't able to run there.

This feels more like an "experienced dev/sysadmin shows an inexperienced dev team how to better manage their environments" type of article. That in itself is a good and worthwhile message, but the FreeBSD stuff is a bit beside the point.


I disagree. I think, for better or worse, the ecosystem around Linux and SaaS companies pushes everyone toward operational complexity and ease of scaling fast. So, so many companies don't need (and shouldn't use) that amount of complexity, but it's massively normalized.

I think Kubernetes is an incredible piece of software, but by the time you've really adopted it in the way the ecosystem recommends you have dozens and dozens of moving parts just to run a single service on it. Conversely, the FreeBSD ecosystem would push you to more OS-based solutions (such as jails and the backup solutions mentioned in the article).
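
For what it's worth, a single service in a FreeBSD jail can be a handful of lines of config. A minimal sketch (paths, hostname and address here are all made up):

  # hypothetical /etc/jail.conf entry -- names are examples only
  web {
      path = "/usr/local/jails/web";
      host.hostname = "web.example.org";
      interface = "em0";
      ip4.addr = "192.0.2.10";
      exec.start = "/bin/sh /etc/rc";
      exec.stop = "/bin/sh /etc/rc.shutdown";
      mount.devfs;
  }

Enable it with sysrc jail_enable=YES and start it with service jail start web; compare that with the Deployment, Service, Ingress and friends you'd write to run the same thing on Kubernetes.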

Could you take the same approach on Linux? Sure, and I did that for a decade, but these days if you're googling around for best practices, that's not remotely where you're pushed, or where the upstream effort is being spent.


I agree, but with "funding model" rather than "ecosystem". With VC funding the cost of failure is low, and arguably failing to scale will make you look worse than simply running out of money. In a bootstrapped business, the same pressure doesn't apply. Also, I imagine it is easier for a small team spending their own money to avoid slipping into resume-driven development.


Interesting point; I think there is a large chunk of truth to this in real life.

Stretching it a bit further, I can imagine things like "we've used the best clouds money can buy to run it fast, but the task is so complex that not every project could deal with it, thus the current performance is at its peak. Also, we used the best DX practices, so all team members, while tired, do feel happy about the project's progress".


I agree with your view on this too (though I'm not a fan of FreeBSD, quite the opposite); the OS doesn't seem very relevant here.

I do observe similar cases here and there, especially on "small"-sized projects with Laravel/WPs - basically there is no one to say "hey guys, 20 years ago the same type of load ran faster on my dual Pentium 3 server with 512MB of RAM - what is your code really doing under the hood?"


Kind of my take as well... I'm a pretty big fan of using containers on Linux, which is separate from K8s. I like the use of read-only containers, the isolation, and the ease of reproduction. It's not a free pass, and there are issues that can and will occur. I'm not sure there are any guarantees on any given platform.
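
As a rough sketch of what I mean (the image name and port are just placeholders):

  # hypothetical invocation -- example/app is a made-up image
  docker run --read-only --tmpfs /tmp --cap-drop ALL -p 8080:8080 example/app:1.0

Root filesystem mounted read-only, scratch space on a tmpfs, no extra capabilities - and none of it needs a cluster behind it.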


>Kubernetes......

>they found a sufficiently powerful machine, with 128GB of RAM, 2 NVMe drives of 1TB each, and two spinning disks of 2TB each for less than 100 euros per month.

We could get a 192-, or even 288-core CPU in a single socket, depending on your single-thread performance requirements. And we are only about 12 months away from 256 cores with 512 vCPUs. 8TB of memory, and NVMe drives so fast that they make DRAM caching nearly irrelevant for most web workloads.

I just don't understand why people aren't pushing more complexity, or CAPEX, to hardware, and instead push all the complexity to software, which has a recurring cost.



