
I wonder if there is some major propaganda push on HN to discourage actual software engineers from ever running their own servers. Judging from the comments in this and similar posts, it sounds like it's a common belief that running bare-metal services is super expensive and a full-time job. AWS and GCP free credits have created a generation of developers who think running a webserver, a webapp, and a database is rocket science, and that when stuff breaks the whole company might go bankrupt because no one could ever save a corrupted database.

Now, a company with thousands of servers should probably leverage big cloud offerings, but when we see a post where someone is able to save 10x by going bare metal, and they don't seem to be running a large operation, we should celebrate that rather than parroting that it's a bad idea, oh god what have you done, no one will ever be able to run servers on their own, it's not worth it.

For comparison, I manage half a dozen servers for clients and I literally do 1 hour of maintenance a year.




I think so. People create and/or sell new technologies, then try to market them, for instance by writing articles exaggerating the advantages and/or downplaying the disadvantages. This then gets picked up by other folks who want something new and interesting to write about, and suddenly there is a new and hot thing, even if there are lots of cases where it doesn't help.


In my experience running AWS etc. is something that takes at least one dev who is a subject matter expert, so it doesn't seem like it would be at all problematic to have the same thing for bare metal - a dev who is the SME on the bare metal.

Also, if you have everything ready to push to a cloud provider in case the server goes down, someone like Vercel or Netlify (easier to maintain than AWS in my experience), then I guess you are pretty well set up.


Disagreeing with your thesis isn't propaganda.

If anything, you could argue the tweet is propaganda, because the labels for the AWS bar chart are missing. (Not that I think that's what's happening.)

Hetzner nodes aren't actually 10x cheaper than AWS instances, so there's stuff like bandwidth usage going on.

My point being there's no silver bullet, sometimes cloud makes sense and sometimes it doesn't.


> Disagreeing with your thesis isn't propaganda.

The issue is not that people disagree, it's that:

1) People use surface level arguments that have been debunked a few times (SLA, backups, reduction in headcount)

2) There is an aggressively high frequency of these arguments.

If you couple that with the marketing spend of the major cloud providers, it's easy to paint a picture of at least a small contingent of people whose livelihoods depend on keeping the cloud growing, combined with a contingent of people who have skilled up on a particular cloud (sunk-cost fallacy) and don't want to reskill, and you have a decent self-reinforcing propaganda machine.

It likely helps that there's nobody except a tiny minority of nerds who really cares to refute the marketing claims. There aren't many colocation or hosting companies with even 1% of the marketing spend of even the third-largest hyperscaler (Google).

I wrote about the three major reasons you'd want to use a cloud provider here: https://blog.dijit.sh/gcp-the-only-good-cloud/


It's not comparing exactly the same thing, but if you look strictly at the specs of the hardware you get, AWS can easily be 5-10 times as expensive as Hetzner. If you include bandwidth, the difference can be far more ridiculous if you get anywhere close to using the amount of traffic that is included in the price with Hetzner.

If I take the Hetzner AX102 (16 cores, 128GB ECC RAM, 2x1.92 TB NVMe SSD) that costs about $0.2 per hour. An EC2 instance (on demand) with 32 vCPUs starts at around $1.2 per hour, so 6 times as expensive. And this doesn't include storage which would be something like an additional $0.3 per hour for AWS.

Of course this isn't a fair comparison. Using on-demand instances is somewhat unfair to AWS, but reserved instances also wouldn't be entirely fair, as with Hetzner you're only locked in for a month (with a setup fee for most servers) and not 1-3 years. I'm guessing on the CPUs here and assuming that 2 vCPUs are roughly one real core. And the storage is not comparable at all.
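Plugging the rough hourly figures quoted above into a quick back-of-the-envelope script (these are the commenter's estimates, not current price-list numbers):

```python
# Back-of-the-envelope monthly cost comparison using the rough
# on-demand figures quoted above (estimates, not current prices).
HOURS_PER_MONTH = 730

hetzner_ax102 = 0.20  # $/h, 16 cores, 128 GB ECC RAM, 2x1.92 TB NVMe
aws_compute = 1.20    # $/h, 32-vCPU on-demand EC2 instance (estimate)
aws_storage = 0.30    # $/h, comparable storage on AWS (estimate)

hetzner_monthly = hetzner_ax102 * HOURS_PER_MONTH
aws_monthly = (aws_compute + aws_storage) * HOURS_PER_MONTH
ratio = aws_monthly / hetzner_monthly

print(f"Hetzner: ${hetzner_monthly:.0f}/mo, "
      f"AWS: ${aws_monthly:.0f}/mo, ratio: {ratio:.1f}x")
# -> Hetzner: $146/mo, AWS: $1095/mo, ratio: 7.5x
```

So with storage included, the quoted numbers come out to roughly 7.5x rather than the 6x compute-only figure.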

You do get a lot of other things with AWS, but you usually also pay for those. Which can certainly be worth it. But I am slightly tired of people arguing that you save money with the cloud because you can scale down automatically. And this argument doesn't really work in many cases if you get that much more hardware for your money.


Yeah, Hetzner nodes aren't 10x cheaper, because the products are not comparable. The AWS/cloud innovation is that their offering is incomparable, set up in a way that makes comparison very difficult.

It is not only the hardware cost that saves you money, but also the fact that if you rent dedicated servers you tend to develop your infra in a different way which is ultimately way more cost effective.

Dedicated servers definitely aren't a silver bullet, nobody is saying that. Managing your own servers requires work - how much, that is arguable and depends on many factors. To me the problem is more that AWS and cloud companies are selling a very expensive silver bullet, which really isn't even a silver bullet. In my experience most of the time it doesn't make sense to buy that.


I often find myself in an awkward mid-size situation. Hetzner doesn't provide SLAs, so if I have to offer one, I need to run my clusters across multiple providers. If I need to support full DR in 8 hours, I need to have something in another data center; I can't just run my AWS Terraform in a different region. If my infra were 10x larger I'd go dedicated, but right now I'm happy enough to eat the AWS premium.

> develop your infra in a different way

Totally agree, but for many people that ship has sailed.


> Hetzner doesn't provide SLAs

They do if you enter an agreement with them for professional services; I'm not certain, but I think there's also an SLA on their "managed" servers: https://www.hetzner.com/managed-server/

However, if you want a "no-human" SLA with bare-metal rented servers, I could recommend Gcore: https://gcore.com/hosting/dedicated

Their managed kubernetes even supports bare-metal nodes, which is actually something I'm using.


But the managed servers look comparable in price to AWS instances at the same specs? (I'm eyeballing, I don't want to pull numbers on my phone).

I don't want bare metal nodes, they put a high lower bound on the cost of a distributed cluster. I want a "herd" distributed across data centers, and a system that spins up new ones if the old ones die. I don't care if AWS shuts down an ec2 instance without warning, because a new one will automatically replace it.
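The "herd" idea can be sketched in a few lines. This is only a toy reconciliation loop; `provision_node` and `decommission_node` are hypothetical callbacks standing in for whatever your provider's API or autoscaler actually does:

```python
# Toy sketch of a self-healing "herd": poll each node's health
# endpoint and replace any node that stops responding. Real setups
# delegate this to an autoscaling group or an orchestrator.
import urllib.request


def is_healthy(node_url: str, timeout: float = 2.0) -> bool:
    """Return True if the node answers its /health endpoint with 200."""
    try:
        with urllib.request.urlopen(f"{node_url}/health",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False


def reconcile(nodes, provision_node, decommission_node):
    """Replace every unhealthy node with a freshly provisioned one."""
    result = []
    for node in nodes:
        if is_healthy(node):
            result.append(node)
        else:
            decommission_node(node)  # e.g. release the dead instance
            result.append(provision_node())  # e.g. spin up a new one
    return result
```

Run on a timer, this is the "I don't care if an instance dies" property in miniature: the herd converges back to full strength on its own.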


It's weird how certain everyone is that it would take a ton of time to maintain. I mean, maybe. Depends on what you're running.


People want simple answers. "It depends" annoys everyone :-\


I think this is partly the result of people simply parroting shallow statements about 'backups, availability, disaster recovery, ...' they've read or heard elsewhere, and partly the result of a general push towards a service economy where it is considered smart to outsource to 'the experts'. While this may make sense in some cases, e.g. where the task at hand is not recurring and requires knowledge or equipment that is not at hand and hard/expensive to come by, it is often more expensive and less flexible than just doing the thing yourself.

Of course HN is a site frequented by people who are wont to start those same service companies so it is not surprising for there to be a tendency to play up the 'risks' of doing things yourself instead of 'leaving it to the experts'.


I have a 10kW 4090 training cluster in my garage with water cooling.

It paid for itself in 3 months, compared to what the same workload would have cost on AWS.


Not something that requires HA, backups, scaling, auditing, logs, security, and such, though.

Would you run your prod inference cluster the same way?


Its little brother is the prod inference cluster. It's had an uptime of 100% over the last two years.

People seriously don't realize just how _big_ computers have gotten. The AWS mentality is still stuck in 2005 when a t2.micro was a decent computer.


> The AWS mentality is still stuck in 2005 when a t2.micro was a decent computer.

The T2 EC2 instance family got announced in 2014: https://aws.amazon.com/blogs/startups/announcing-amazon-ec2-...


The first instance type was the m1.small; its specs are marginally better than those of the t2.micro, and it hasn't been generally available since 2022, if memory serves.

https://aws.amazon.com/blogs/aws/ec2-instance-history/

https://instances.vantage.sh/aws/ec2/t2.micro

https://instances.vantage.sh/aws/ec2/m1.small

Please nitpick better. Nothing is more embarrassing than being pedantic and wrong.


And t2.micro was not a decent computer at that time.


You don't need most of that until you're making enough money to hire someone to worry about it.


Is that enough to heat your house in the winter?


> I wonder if there is some major propaganda push on HN to discourage actual software engineers from ever running their servers

How to scare a computer engineer in 2024? Invite him to configure a server.


> I wonder if there is some major propaganda push on HN to discourage actual software engineers from ever running their servers. Judging from the comments in this and similar posts, sounds like it is common belief that it's super expensive and a full time job to run bare metal services. AWS and GCP free credits have created a generation of developers that think running a webserver, a webapp and a database is rocket science, and when stuff breaks the whole company might go bankrupt because no one could ever save a corrupted database.

Related to this observation, you really should watch DHH's keynote talk at Rails World 2024, from about minute 20 to 30: https://youtu.be/-cEn_83zRFw?t=1296

He addresses exactly this phenomenon.

EDIT: changed URL to include timestamp to the start of the relevant content.


>sounds like it is common belief that it's super expensive and a full time job to run bare metal services.

Remember the age old saying: Time is money.

If you run your own servers, you are spending your own time and thus money to manage them.

If you buy services from hosting ("cloud") providers, you spend money to have someone else manage them.

The question then becomes: Do you value your time or your money more? And then you pick the one that spends the one you value less.

Some people value their time, some people value their money. YMMV.


Your comment doesn't take into account that some things cost more than others and might not be worth their price.

Running a server yourself isn't more time consuming than having AWS do it for you. The issues that cause very long downtimes are things like flooding of the server room, subpar connectivity, low-quality hardware, or a CPU on fire.

Just don't host your production services in your basement. Any VPS saves you from all of these, at 1/10th the AWS cost. This is obvious to anyone who has the basic skills to run a Linux host.

The fact that it needs to be spelled out means that these days few have the skills, and/or the rest have been convinced that it is "a waste of time" and "only makes sense if your time is worthless", which is total FUD.


I've realized that my time only becomes more valuable as I get older, to the point that if a problem can be solved by just throwing more money at it then that is a very compelling solution.

Sure, buying a VPS or whatever from a hosting provider might be cheaper than buying a full blown cloud solution, but at what personal time cost?

Time is a finite resource, much more than most of us ever realize. Worse, time is a resource that cannot be replenished. Money, on the other hand, is a replenishable and theoretically infinite resource. Sometimes, paying big bucks for premium services is the cheaper solution to moving on in life.


It takes me more time to find what I need in the infernal hellhole of an AWS web interface and to understand their hopelessly complex jungle of lingo and abstraction layers. Hell, I'll deploy a fresh Hetzner bare-metal server with Debian in a matter of 3-4 minutes and roll with it. 10 times more bang for your buck, no vendor lock-in, and guess what: I am in control. I know what runs on my server. I understand the platform inside out. I don't trust developers who are afraid of setting up their own Linux server.


Both you and the parent commenter I replied to need to drop the notion that these are people who are "afraid" of setting up their own servers.

No, these are people who don't care about setting up their own servers.

Similarly to how I just bought a NAS unit from Synology and just use Tailscale because I don't care about assembling a homebrew NAS or setting up my own VPN infrastructure, most people don't care about setting up their own servers.

Why do we not care? Because time is finite and we have more important shit we need or want to attend to.


If it were the case that a cloud deployment needed less total aggregate time, we would see it reflected in operations headcounts.

What I have witnessed is actually the opposite; as mentioned elsewhere in this thread I was on a team of 6 sysadmins (who knew perl/python and C as a requirement for employment, so more like "devops" but with worse tools): and we ran 1% of web traffic on a few hundred physical machines.

However, Ubisoft has 5% of its total headcount as IT staff, and my last job had roughly as many operations staff for a multiplayer game as I had for a huge SaaS web store that even ran its own payment systems (i.e. before Stripe existed) and had regulatory compliance issues to take care of. And Ubisoft is using AWS/Azure/GCP and has moved away from bare-metal hosting, while my last job only ever ran on cloud providers, as that was du jour when they started the company.

I have yet to see any evidence that it actually saves time. I think it's weaponised short-term thinking: going from the console to a running system indeed takes less time, but the tinkering with CDK/Terraform/IAM/VPC and trying to architect your solution to be cost effective and reliable seems to take at least an equivalent amount of time.

Statistics don't lie here: if "devops" and "SRE" are the new sysadmin, then headcounts have not fallen. Operations staff are in as much (or even more) demand as they were in 2010. We just made operations more complex and vendor specific.


Because setting up your server takes an infinite amount of time, while configuring things on AWS takes zero minutes, right?


I think the gap here is that UNIX sysadmin skills are a lot rarer than many people imagine, and are also incredibly time consuming to learn. UNIX is many things, user friendly isn't one of them. AWS offers GUIs that are well documented. Linux has ... man pages. Probably some blog posts from 2007 if you're lucky.

I mean, I admin my own servers. But I learned Linux as a kid when time was cheap. If you didn't have that experience then yeah it may make sense to just use as much cloud as possible. Sysadmin is just unpleasant even when you do know how to do it.


But I don't see how AWS actually saves you from needing to understand linux. Maybe the full serverless lambda stuff, maybe, but otherwise you're gonna have to set up your stuff on some virtual boxes


Yeah exactly, the more you buy into the cloud platform the less Linux you have to learn. Learning how to admin your own Postgres is part of "learning Linux", but there's nothing fulfilling about learning apt, installing Postgres, and then discovering you can't connect to it out of the box. Then you have to learn vim, learn what pg_hba.conf is and how to edit it, how to use systemctl to make the edit take effect, what sudo is, etc. It all takes time, and it's not like it's a foundation of deep knowledge that other stuff builds on. Everything is wildly inconsistent, and learning how to configure Postgres doesn't help much with MySQL or other services. It's just a big pile of UNIX trivia.
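For what it's worth, the "can't connect out of the box" fix being described here is only a few lines of config; a sketch assuming a Debian-style install (the exact paths vary by Postgres version and distro):

```ini
# /etc/postgresql/16/main/postgresql.conf
# By default Postgres only listens on localhost; open it up:
listen_addresses = '*'

# /etc/postgresql/16/main/pg_hba.conf
# Allow remote clients to log in with a password:
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    scram-sha-256
```

Then `sudo systemctl restart postgresql` to apply both changes (a reload is enough for pg_hba.conf alone, but changing listen_addresses needs a restart). Which rather proves the point: none of this is hard, but none of it is discoverable either.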


If you think knowing apt / vim / postgresql / bash / linux intimately is a waste of time and just a pile of Unix trivia, then you and I are entirely different types of engineers / developers, and I don't think we even speak the same language :) I'm actually grateful that your mindset is becoming more prevalent, as it only serves to increase the value of my own skillset.


It only increases the value of that skillset if it's in demand. Look, I'm on your side on this, I've learned all this stuff and it'd be nice if it was in high demand. But it's just not. Most people's experience of apt-get stops at a Dockerfile, most people never learned vim and never will because they have VSCode, most people do not really know how to use bash and have never recompiled a kernel.

It's a truism that you can do stuff a lot cheaper yourself if you have these skills than paying a cloud to do it for you, but cloud services have grown like crazy for over a decade now and show no sign of slowing down. UNIX is fundamentally a user-hostile operating system, it will never change, and it's nice for those of us who learned it that we can sometimes convert our rapidly-obsoleting skills into savings (sometimes), but it doesn't seem likely to stay that way. The Linux vendors just haven't improved the usability of their platform at anywhere near a fast enough pace to keep up with the cloud vendors.


Look at it this way: Most of us don't care how cars work, we outsource the production and maintenance because time is valuable compared to the money we spend.

Time is finite and there are more things in this world than we can ever afford to care in a single lifetime, not caring is a matter of budgeting that finite time so we can live our lives in a fruitful manner.


Again, making the false argument that instead of learning a system engineer skillset you need to learn nothing at all. As if instead of spending time on systems, everything will be magically done for you and it costs zero FTE headcount.

Wrong, wrong, wrong. You spend a similar amount of time, but on different knowledge and tasks. And I'm saying I find my skillset more enjoyable and more useful compared to the vendor lock-in and the Tower of Babel of cloud services that is AWS/Azure/Google.


So many people in these conversations seem to miss that buying something also takes time. There's an unspoken assumption that because you're paying someone else to do it, it'll take less of your time, but very frequently that isn't true. Cloud is a good example: I've not seen a company using cloud that didn't spend a lot of time managing it, frequently far more time than they would spend managing their own servers (especially because they need to jump through a lot of hoops to keep spending from spiraling out of control).


Honestly, I'm just going to cite the Greatest Hacker News Comment Of All Time(tm)[1] and be done with it.

[1]: https://news.ycombinator.com/item?id=9224


If we're playing that game I'll link https://devnull-as-a-service.com/



