I got everything off the cloud and am paying less (twitter.com/rameerez)
62 points by baxtr 3 months ago | 118 comments



Where "10x less money" is still less than the cost of a single developer. If this needs any extra work then it's just been busy work and change for the sake of it.

I'm all for monoliths and hosted solutions, but let's not pretend that saving $1k / month is going to make or break most businesses.

What OP appears to have done is take on risk to save costs.

Another thing is, even when the cloud goes down, the story is "Global outages caused by Azure/AWS/GCloud/etc going down", and people are generally understanding.

If you have an outage of your machine, the story is "<Your Company> services down".


I’m a business owner. It’s not just about the cost. It’s also about the improvement in control, visibility and efficiency. This has long term compounding effects, such that well-architected in-house applications may be literally millions of times more efficient than the cloud. The cloud has an incentive to charge you for every megabyte that goes in and out of a wire, whereas if you own that wire, the megabyte is basically free. This enables you to do things that aren’t possible otherwise.


> If you have an outage of your machine, the story is "<Your Company> services down".

That is silly for two reasons:

1. Cloud doesn't save you from outages

2. Outages happen, and are usually very rare, whatever your host

That said, I have fewer outages on my machines than AWS has on theirs. I run a service for a client, on one server, that has had 100% uptime during workdays for 7 years.


> That said, I have fewer outages on my machines than AWS has on theirs.

It gets even better (or worse, depending on your viewpoint) if you factor in services you're dependent on. In my career I've probably had more outages due to external partners running their stuff on AWS than I ever did because our own servers were down.

That's not to say that AWS is bad, but it takes a very skilled administrator to do AWS correctly, and it costs a lot of money to get the required redundancy. The whole "I can't vacuum because US-EAST-1 is down" is because someone didn't want to pay what it costs to do redundancy in AWS.


> someone didn't want to pay what it costs to do redundancy in AWS

Seems a little pointless to pay a premium on everything else then.


Loss of power because the grid is down, or loss of internet connectivity, are probably the major causes of outages, which is why small companies don't want to self-host.


What? UPS boards are cheap. LiFePO4 batteries are cheap. Metered ATS PDUs are cheap.

I've had far less downtime (read: zero) from my custom power redundancy solutions on my homelab than from power issues with my colo provider.

Small companies would be far better off with local power redundancy than some gargantuan complex industrial Liebert solution with a 4 hour SLA.


True story ^^^^^

Add to this that you can have outages due to operational complexity, which tends to happen in complex cloud-native setups.


> If you have an outage of your machine, the story is "<Your Company> services down".

Yes. Have you attributed a cost to these stories?

Downtime is rare no matter what kind of infra you have. When you get downtime, fix it and the business will continue. In my experience, people way overestimate how bad an outage is for many services. People aren't going to your competitor because of a single outage. Likely 98% of your customers won't even notice it. If you fix it and communicate honestly with your customers, the amount of money you lose is minimal.


He's an indie developer from the looks of it. $1k is a decent sum of money and maybe the effort is worth it.


Well, it depends. There are companies and individuals with tight budgets. And if you set something like that up yourself, in my experience it needs very little maintenance, if done right. My servers have been running without interruptions or interventions for months now. YMMV.


Until a disaster happens. If you’re running a commercial site that sells products and it goes down… that translates to money lost.

If you were paying 15k a year to put a portfolio site online, then I can see where it can be too much.


It's not like cloud providers don't have outages either. And for disasters you should always have backups and a migration plan, even if you are in the cloud, no? I'm not preaching against cloud, just saying that there are some cases where going bare metal is a better option instead of using cloud by default "because everyone does it".


I'm not arguing for or against cloud, but there are more costs to running bare metal than it would seem.

I'm a sysadmin and have run schools on bare metal and on cloud. There are things like drive failures, hardware failures, connectivity issues, networking, user access: all of those things are much easier in the cloud. Much easier to spin up or down if you no longer need them.

If you're running networked storage, then that needs to have both a fail-over node and backup solution. You probably need to run extra hardware, cabling, etc. Cloud trivializes this.

If you're saving 15k in terms of hard cost, but spending 55k/yr for a sysadmin on-prem to maintain, then you're not saving money at all.

A disaster can be as simple as a failed disk, or overheated server because you're not a sysadmin and you put the server in a cabinet (I've run into this, dealing with faculty). Dead computer, no lessons for the week - lost time for the school and the professor in question.

Every place is different; you have to do a cost analysis and it's not as simple as "I saved 10x!"


Just to be clear, I was not talking about on-prem bare metal, but hosted bare metal from providers like Hetzner or OVH, like the guy in the article has. Speaking of drive failures, cables, and other hardware failures: the provider takes care of that and replaces failed parts.


This is the issue in cloud vs. metal discussions online.

All the time I think I'm reading apples and then realize people were talking bananas.


> Much easier to spin up or down if you no longer need them.

It takes 5 minutes to set up a Docker Swarm Mode cluster. It takes maybe 15 minutes for k3s or microk8s. After that, auto-scaling is dead simple, and no MORE complex than some shitty vendor locked-in cloud solution.
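
For a sense of scale, a minimal Swarm stack file for a replicated service looks something like this (a sketch; the service name and image are illustrative, not from the thread):

    # stack.yml - one web service, three replicas, rescheduled on failure
    version: "3.8"
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"
        deploy:
          replicas: 3
          restart_policy:
            condition: on-failure

Run "docker swarm init" once, deploy with "docker stack deploy -c stack.yml web", and scale with "docker service scale web_web=5".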

> that needs to have both a fail-over node and backup solution

ZFS pool replication, Ceph, GlusterFS, etc. Lots of options here. These are long-solved problems.

> A disaster can be as simple as a failed disk

Right, which is why you design your on-prem cluster with N+2 redundancy in the first place, and with a locked cabinet with spare parts. Cattle not pets, and all that. Do you think your EBS storage never fails? You'd need to do exactly the same thing in the cloud, anyway.

> but spending 55k/yr for a sysadmin on-prem to maintain

First, if you're paying only 55k for sysadmins, you should be planning to fail anyway. Competence is compensated quite a bit north of there.

Second, assuming the context is a small business, you're going to have role crossover anyway; it's inevitable. Chances are your developers are administering this. Not every business is Facebook.


> If you’re running a commercial site that sells products and it goes down… that translates to money lost.

True! It's also true that if you control your own servers, then you can fix the problem yourself instead of waiting until some provider somewhere gets around to it.


>Until a disaster happens.

Why not fail over to the cloud while you fix your disaster, then shut the cloud down once the disaster is fixed?


> Where "10x less money" is still less than the cost of a single developer.

Okay? Did you factor into that not needing the "cloud administrator", or whatever it is called now?

When your cloud goes down because payment got blocked, or your account got suspended for suspicious activity (pressing F5 in browser) or something similar, do you still get understanding?


I think I agree. I host my own stuff on a home server, and I love that, but I'd never consider for a moment that my employer or clients should do anything similar.

If you've got some spare time and the skills, alright. This can be a great way to save money. If you're especially confident in your ability to make this work and your margins are very thin, this might even be a wise move. You really can save a lot.

But what if you get slammed with traffic? Can you scale in any direction? Do you have a means of balancing loads with your $1000 server? How will you ensure it's secure? What if it lights on fire (figuratively/literally)?

The cloud does some useful stuff. You shouldn't always pay with your limbs for that, but in some cases, that stability, redundancy, scalability, and flexibility is worth every penny.


Getting slammed with traffic on a cloud system is the stuff of nightmares. Suddenly you wake up with a huge bill, and the tools to manage and restrict the cost are miserable in cloud systems, because that is something the vendors don't find useful. It's extremely cheap to get a system with a high-speed link that, for most use cases, you'll never come close to saturating. In the unlikely event you ever do, something is wrong, and your service going slow or failing is better than trying to keep up and getting a huge bill out of the blue. Maybe not for every situation, but for most.

Most of the truly skilled sysadmins I know recommend burst-to-the-cloud rather than pure cloud.


I only self-host stuff I can live without if it goes down for a week or two.

So no email, no important documents etc. live on my self-hosted services.

If the motherboard on my Unraid server shorted out right now, I'd be slightly inconvenienced, but I wouldn't lose sleep over it. Everything actually important is (also) on a service I pay monthly for.


Totally agree. Of course there are savings in monthly costs, and perhaps a lot of learning along the way (the first time), but two factors are being missed here: the cost of your time spent on this, and what I've been calling Day-2 costs: backups, disaster recovery, scaling, and what to do when a hard drive fails (they do, believe me!).


> backups, disaster recovery ...

Are you saying people using AWS/Azure/GCP/etc. don't need to take care of these?


Sometimes I wonder if people underestimate the effort and hours that go into getting many of these features to work in whatever cloud platform they're on. And then testing them, or fixing issues when something unexpected happens...


No, they do, but the amount of work required is usually much less. You can always define the problem in such a way that there is no difference between a cloud and non-cloud solution, but after 12 years in the business of deploying to cloud and non-cloud, the types of issues I've seen are almost always easier to get around in the cloud.


It's really not. It takes about the same amount of time to add an rsync job to my crontab as it does to click through the backup scheduling options in the Linode control panel.
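
To make that concrete, "an rsync job in my crontab" is a single line along these lines (schedule, paths, and host are hypothetical):

    # nightly at 03:30: mirror the web root to a backup host
    30 3 * * * rsync -a --delete /var/www/ backup@backup.example.com:/backups/www/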

A lot of people here have fallen for the cloud marketing efforts. Nobody is saving any time.


Good point. Adding the cost of a team of professional cloud engineers, compared to a few underpaid sysadmins hosting stuff on-prem, the cost savings wouldn't be 10x but 20x or 30x :)


Now all it needs is an outage on a Sunday evening and your savings for the next year will be eaten up by the people having to fix it at overtime rates on a weekend...

Yes, it's cheaper when everything is going well, it always is. That's not why we pay through the nose for AWS. We pay because Someone Else will be running around the colo facility swapping hardware and debugging networking issues if there's a problem.

Hiring a 24/7/365 redundant rotation of people doing that is a LOT more than $1000/month...


Exactly. If you work for $100 an hour, which would be on the cheap side for skilled devops people, and you put 100 hours into moving a lot of servers off the cloud, that's going to cost $10k. That would be a small project, a bit over two weeks for a single person. Then add support, regular maintenance, etc., and the cost keeps adding up. And to get to 24x7 uptime, you need skilled people on standby all the time, all at $100/hour/person. That's a lot of cost to factor into your calculations. Mostly it doesn't make sense for small companies to be doing this.

With many companies, the realistic cost of moving off cloud in labor would be higher than years of cumulative bills for cloud hosting. Even if you don't value your own time, you might want to consider doing something more valuable with it than devops.


Aside from the 'it's a cargo cult' discussion, when we start dissecting these technical merits we often miss the business angle of __Accountability__. This is exactly the reason why you see Fortune 500 companies paying millions to external consultants for what seem to be fairly trivial advisory/decision making tasks.

If you are running something big enough and business critical enough and you roll your own custom _anything_, even if you save a gazillion, you are a hero for the next quarter. But if your custom solution goes fubar you are packing your boxes. The effort/time/cost of managing/configuring cloud anything is negligible if it allows people in companies where Software is a cost centre to deflect liability if something goes fubar.

It's pretty much the revamped 21st-century version of the 80's meme "nobody got fired for buying IBM". Nobody will get fired at big corp for going all in on Azure or AWS. Look at CrowdStrike's cluster fuck: zero heads rolled for using the services impacted by it. The very opposite would be true if it had happened on bare metal servers. Google, Amazon, and Microsoft carry a "too big to go wrong" halo with them (at least in public discourse), and that just sells.


The flip side of this is control.

If s3 goes down, and therefore your service is down, there's little you can do about it whilst your customers are screaming at you.


Yes but you can blame S3. I think the OP had a major point. In large organizations the incentive is to have someone to blame that is not you.


If I was paying a large amount of money for a service that was then down, and then the vendor just shrugged with "s3 is down, nothing I can do", I'd be re-tendering that very day.


It’s not about you the customer though. It’s about the internal politics in the company.

If it’s self hosted and it breaks you are in an IT firefight. If it’s a cloud SaaS product and it breaks you can twiddle your thumbs and yell at someone else and credibly shirk responsibility.


I get what you're saying, it's about incentives and the slopey-shoulderyness that characterises the "someone else's problem" culture.

My point is that this is a bad culture for the company. There's no point playing internal politics and finger-pointing if your revenue is walking out the door.


This is a hilariously small deployment. Anybody who assumes bare metal is easy at scale has not had to build out and cool data centers. Heck, even designing and keeping the HVAC running properly requires expertise. Same with powering all this stuff. One fried rat in the wrong place can really ruin your weekend.

I still believe running your own servers is cheaper, but there is a ton of logistics you don't really want to deal with.


I ran 1% of Web Traffic in 2011 without running my own datacenter.

Our "platform" team was 6 people, two network engineers and four "devops" (Sysadmins that could write code, but were still called SysAdmins then).

We had two datacenters: one was SAVVIS in Winnersh Triangle, the other in Santa Clara, USA.

We used remote hands, rarely, in the event that a cable died or a hard disk needed to be replaced, though such events are exceptionally rare even at the scale we had.

Mostly we would collect up failures and visit one day every 3 months.


It's weird how the same straw man argument comes back whenever this topic is discussed, as if the only two options in the world are cloud and on-premises bare metal. The best solution is often to simply rent a bare metal server from one of the big hosting providers like Hetzner. They take care of cooling, physical security, hardware failures, network infrastructure, etc. All you have to do is manage the server, which isn't more or less work than managing the horror that is AWS (just 1000x more enjoyable, in my opinion) while getting 10 times more bang for your buck.


Did you ever hear of colocation, root servers and VPSs?


The vast majority of applications are “hilariously small deployments” and only use the cloud or k8s because the engineer wants that on their resume.

And there’s a huge range of sizes between “hilariously small” and “needs a whole data center”.

This conversation always seems to assume everything is and needs to be Google-scale.


Reading through these comments I'm not sure everyone is on the same page as to what "Cloud" actually is.

In both the Twitter thread and the HN comments, people seem to think a VPS is no longer "cloud". IMO you are not "off the cloud" unless you own the hardware. A VPS is not that much different from an EC2 instance.

So in reality you just converted from cattle to pets[1] while just being on a different type of cloud.

And once you get into actual colocation you have to buy the hardware, have redundancy, and the colocation facility is probably charging you quite a bit (and perhaps even extra for bandwidth and power consumption).

[1] "cattle" refers to a disposable and easily replaceable resource, like virtual machines or containers, while "pets" are unique, carefully tended servers that are individually managed and not easily replaceable.



The fact that this guy is using gp2 EBS volumes right off the bat makes me question his AWS savvy. If he'd optimized in the cloud from the start, the story could have been about "How I Saved 2x on Monthly Infrastructure Costs", with a meaningful discussion around trade-offs and architecture choices. Instead, he's running an idle server consuming 400W like it's still 2007.
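
For what it's worth, a gp2 volume can be migrated to gp3 in place, with no downtime, in a single call (the volume id here is hypothetical):

    aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3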


> his AWS savvy

It seems to me that at this point, being an AWS customer is about as difficult as running a server manually. It's just different skills. The risk of running your own servers is certain kinds of downtime may be harder to fix (though if you have some "sysadmin savvy", ie, backups, it shouldn't be much), and the risk of being an AWS customer is accidentally paying 10x what you should.


You say that as if AWS pricing isn't designed to be confusing in order to get people to spend more money. There are people who do nothing but consult on AWS because it's so complicated. I don't want to learn AWS, I want to build things.


Until recently, the AWS pricing page had no prices on it. (Or, if it did have prices, I couldn't find them.)


It's probably the beginning of the next phase in the "we should centralise everything" / "no, that's bad, let's decentralise everything" cycle.

The 'one stop shop' idea describes it pretty well https://www.investopedia.com/terms/o/onestopshop.asp


Too scared to ask, but isn't it still the cloud? The OP is just not using managed services anymore, which is obviously less expensive.

I always thought that all kind of servers in datacenters could be defined as cloud server.


"Cloud" is hyperscalers and managed services.

The idea was, I think, related to network diagrams that showed everything outside of your own border as a cloud icon (i.e., very foggy and unknown).

It never really made much sense, but normally what separates "cloud" from an MSP or colocation is the availability of managed object storage and an abstraction over the network.

However, now it seems to have been co-opted and means basically anything that isn't in your closet. We ran many physical machines we paid for in colocation facilities, and when AWS was coming up there was an understanding among the masses that "that's not what the cloud is", despite us managing those machines over the internet. So a fuzzy term has become even fuzzier.

So much for us being "engineers".


Paying less is only one potential benefit. Another, arguably more important, guaranteed benefit is control.

For me control is the number one issue with computers today. Not cost, not features, not convenience or anything else. "Cloud" computing has been defined as "someone else's computer". Bloggers and other pundits have also cited a "war on general purpose computing". These complaints, and others, to me, are summarised by the issue of control.


10x less money sounds impressive, but the actual savings is $15k a year, less than half what it would cost to hire a full-time sysadmin to keep things updated, running smoothly, and backed up. And if you never hire a sysadmin? That's OK, but then you're spending your own time to maintain it. At 10h a week to keep things running smoothly, that works out to roughly $29 per hour over the year. Do you earn more than that, or is your time worth $29 an hour?
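
A quick sanity check on that hourly figure, using the thread's numbers (a rough sketch in Python):

    savings = 15_000        # claimed yearly savings, $
    hours = 10 * 52         # 10 h/week of maintenance
    print(savings / hours)  # ~28.85 $/hour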


In my experience maintaining a few wikis and blog sites, maintenance time of the entire server is something more like 10h per year, and even that is being pessimistic.


Sounds like there's no DevOps/SysAdmin staff needed when using the cloud? ;)


Never implied that. But it's much easier for an IT "department" of 1 to maintain a fleet of cloud services than a bunch of machines in a closet.

When dealing with numbers like 15k/year... it's implied that we're not dealing with a full staff.


So you hire someone who enjoys building computers in their free time instead of someone who is an AWS expert where both have a 0.2 loading on infra.


> ... a fleet of cloud services than a bunch of machines in a closet.

Maybe for a bunch of machines in a closet. But modern dedicated servers and similar are pretty easy to maintain. ;)


That savings is significant when bootstrapping. And cloud costs often increase in short order. With a colo server you don't have to worry about a runaway process or config issue skyrocketing costs. That $120 is fixed, and there's probably a ton of headroom in there for growth.

It’s been several years since I’ve run a business, but I had moved from AWS to a colo ESXi setup. I saved a bunch of money and got a better product. My costs were fixed, the hardware was better, the VM sizes more configurable, and I had high availability. I wasn’t spending anywhere near 10h / week to keep it running. Linux and most daemons are pretty robust.

I’d gladly acquire and exercise general sysadmin knowledge over some vendor-specific thing. AWS may run fine if you have a light load, but their API docs can lie, they have hidden constraints, and many apps end up using the wrong service or architecture and need to go through costly migrations. It’s hardly set it and forget it. There’s a ton of services and constraints to learn. There’s time involved in learning the tooling, the API, and the invariable bugs integrating with the devops tool of the day.

Then you need to factor in the time spent trying to speed up your app to compensate for the slow cloud machine you’re running on. You can choose beefier machine types, but you’ll pay through the nose. Performance can be wildly inconsistent if you’re on an oversubscribed host. So, you can get a cloud bare metal machine, but that’s even more money and good luck getting the spend approved. To get out of that hole teams will often layer in brittle caching that would be otherwise unnecessary.

These costs rarely get mentioned when we talk about cloud vs bare metal. That was a substantial time sink for the startups I was at. Large companies have whole teams dedicated to devops, which isn’t far off from having a traditional ops team.


The cloud is so incredibly good. But it’s expensive.

This setup seems like quite a maintenance hell. Let's see what happens to his SLAs.


His SLAs are probably fine.

The hidden cost is the time he spends.

However, if running computers is your business, spending time on it to earn money is called having a job.


> The hidden cost is the time he spends.

Everyone seems to think that managing AWS is free.


On a related note: what is 10x less? Usually people mean 90% off. But couldn't "10x less" be read as follows: list price is $1,000; you get a rebate of $1 and pay $999. Then you get a really good discount, 10x that rebate, and you pay "10x less" while still paying $989?


It just means x/10


That's a tenth. I agree with GP, "ten times less" is ambiguous.


If "ten times more" is "x * 10", then "ten times less" is "x / 10".


You understand '10 times more', right? Well, '10 times less' is the inverse of that.


His setup is pretty uncomplicated:

- database (postgres)
- redis cache
- webserver

How to exit the cloud (aka run everything in VPS) if you also need this?

- reliable and scalable block storage
- queue / task management
- trusted email-sending provider

Not questioning the trend, just seriously interested in finding solutions to this.


> His setup is pretty uncomplicated

Like the majority of AWS-deployed sites.


This discussion is interesting, but I'm always cautious viewing discussion when an argument contrary to the interests of a very, very, very large industry is in play.



If you're running a small operation, as I do, there aren't many reasons to use Big Cloud.

I've been running my tiny business on the same provider as OP for more than a year, and I've done very little maintenance to keep things up and running. Granted, you need to set things up the first time, but after that, it just keeps chugging along.


Yeah, the Emperor's New Clouds. Sometime ago I thought I was crazy and everybody else was right :-) but now we are seeing more and more people doing the numbers.

https://logical.li/blog/emperors-new-clouds/

I wonder, though, why the topic is so contentious. This is supposed to be an engineering discipline, not a fight between religions.


You could do configuration management in the 90s with rdist well enough on 100 servers. The problem with IT in the early 2000s was that IT departments were shoveling money to companies like VMWare and EMC and overbuilding all their hardware. Amazon built everything on commodity hardware and open source and didn't have vendors acting like financial boat anchors on their scalability. IT departments also usually had massive amounts of friction and gatekeeping to getting anything done, and you needed a design review meeting to spin up a single server for someone. And really I think it was elimination of that centralized control and bureaucracy that made cloud computing so popular.


First time on HN?


Using the cloud = losing your know-how + gaining vendor lock-in


Some people even say 'cloud == deskilling'.


It really depends on the scope of what you run in the cloud and what kind of support/uptime you expect/require.

If you're going to run your own servers with the same level of support, it will definitely cost much more than $120/month.

But this is, alas, not really a tech issue; it's about understanding why cloud is more interesting for certain use cases, and on-prem better for others.

It is definitely not one-solution-fits-all.

edit: https://rameerez.com/how-i-exited-the-cloud/

Oh wait, he did not get anything "off the cloud"; he is still running on a "cloud"-based solution, just with a different company.

LOL


I wonder if there is some major propaganda push on HN to discourage actual software engineers from ever running their servers. Judging from the comments in this and similar posts, sounds like it is common belief that it's super expensive and a full time job to run bare metal services. AWS and GCP free credits have created a generation of developers that think running a webserver, a webapp and a database is rocket science, and when stuff breaks the whole company might go bankrupt because no one could ever save a corrupted database.

Now a company with thousands of server should probably leverage big cloud offerings, but when we see a post that someone is able to save 10x by going bare metal, and they don't seem to be running a large operation, we should celebrate that, rather than parrot that it is a bad idea, oh god what have you done, no one will ever be able to run servers on their own, it's not worth it.

For comparison, I manage half a dozen servers for clients and I literally do 1 hour of maintenance a year.


I think so. People create and/or sell new technologies, then they try to market them, for instance by writing articles exaggerating the advantages and/or downplaying the disadvantages. Then this gets picked up by other folks who want something new and interesting to write about. And suddenly there is a new and hot thing, even if there are lots of cases where it does not help.


In my experience running AWS etc. is something that takes at least one dev who is a subject matter expert, so it doesn't seem like it would be at all problematic to have the same thing for bare metal - a dev who is the SME on the bare metal.

Also, if you have everything ready to push to a cloud provider in case the server goes down, someone like Vercel or Netlify (easier to maintain than AWS in my experience), then I guess you are pretty well set up.


Disagreeing with your thesis isn't propaganda.

If anything, you could argue the tweet is propaganda, because the labels for the AWS bar chart are missing. (Not that I think that's what's happening.)

Hetzner nodes aren't actually 10x cheaper than AWS instances, so there's stuff like bandwidth usage going on.

My point being there's no silver bullet, sometimes cloud makes sense and sometimes it doesn't.


> Disagreeing with your thesis isn't propaganda.

The issue is not that people disagree, it's that:

1) People use surface level arguments that have been debunked a few times (SLA, backups, reduction in headcount)

2) There is an aggressively high frequency of these arguments.

If you couple that with the marketing spend of major cloud providers, it's easy to paint a picture of at least a small contingent of people whose livelihoods depend on keeping cloud growing, combined with a contingent of people who have skilled into a cloud (sunk-cost fallacy) and don't want to reskill, and you have a decent self-reinforcing propaganda machine.

It likely helps that there's nobody except a tiny minority of nerds who really care to refute the marketing claims. There aren't many colocation or hosting companies with even 1% of the marketing spend of even the third-largest hyperscaler (Google).

I wrote about the three major reasons you'd want to use a cloud provider here: https://blog.dijit.sh/gcp-the-only-good-cloud/


It is not comparing the same thing, but if you look strictly at the specs of the hardware you get, then AWS can easily be 5-10 times as expensive as Hetzner. If you include bandwidth, the difference can be far more ridiculous if you get anywhere close to using the amount of traffic that is included in the price with Hetzner.

If I take the Hetzner AX102 (16 cores, 128GB ECC RAM, 2x1.92 TB NVMe SSD) that costs about $0.2 per hour. An EC2 instance (on demand) with 32 vCPUs starts at around $1.2 per hour, so 6 times as expensive. And this doesn't include storage which would be something like an additional $0.3 per hour for AWS.
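
A rough monthly comparison using the figures quoted above (a sketch; ~730 hours in an average month):

    HOURS = 730                      # hours in an average month
    hetzner = 0.20 * HOURS           # AX102, storage included: ~$146
    ec2 = (1.20 + 0.30) * HOURS      # 32 vCPUs plus EBS estimate: ~$1095
    print(ec2 / hetzner)             # ~7.5x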

Of course this isn't a fair comparison. Using on-demand instances is somewhat unfair to AWS, but reserved instances also wouldn't be entirely fair, as you're only locked in for a month with Hetzner (with a setup fee for most servers) and not 1-3 years. I'm guessing on the CPUs here, assuming 2 vCPUs are roughly one real core. And the storage is not comparable at all.

You do get a lot of other things with AWS, but you usually also pay for those. Which can certainly be worth it. But I am slightly tired of people arguing that you save money with the cloud because you can scale down automatically. And this argument doesn't really work in many cases if you get that much more hardware for your money.


Yeah, Hetzner nodes aren't 10x cheaper, because the products are not comparable. The AWS/cloud services innovation is that their offering is incomparable and set up in a way that makes comparison very difficult.

It is not only the hardware cost that saves you money, but also the fact that if you rent dedicated servers you tend to develop your infra in a different way which is ultimately way more cost effective.

Dedicated servers definitely aren't a silver bullet, nobody is saying that. Managing your own servers requires work - how much, that is arguable and depends on many factors. To me the problem is more that AWS and cloud companies are selling a very expensive silver bullet, which really isn't even a silver bullet. In my experience most of the time it doesn't make sense to buy that.


I often find myself in a mid-size situation that's awkward. Hetzner doesn't provide SLAs, so if I have to provide one then I need to run my clusters across multiple providers. If I need to support full DR in 8hrs, I need to have something in another data center; I can't just run my AWS terraform in a different region. If my infra were 10x larger I'd go dedicated, but right now I'm happy enough to eat the AWS premium.

> develop your infra in a different way

Totally agree, but for many people that ship has sailed.


> Hetzner doesn't provide SLAs

They do if you enter an agreement with them for professional services; I'm not certain but I think there's an SLA also on their "managed" servers: https://www.hetzner.com/managed-server/

However, if you want a "no-human" SLA with bare-metal rented servers, I could recommend Gcore: https://gcore.com/hosting/dedicated

Their managed kubernetes even supports bare-metal nodes, which is actually something I'm using.


But the managed servers look comparable in price to AWS instances at the same specs? (I'm eyeballing, I don't want to pull numbers on my phone).

I don't want bare metal nodes, they put a high lower bound on the cost of a distributed cluster. I want a "herd" distributed across data centers, and a system that spins up new ones if the old ones die. I don't care if AWS shuts down an ec2 instance without warning, because a new one will automatically replace it.


It's weird how certain everyone is that it would take a ton of time to maintain. I mean, maybe. Depends on what you're running.


People want simple answers. "It depends" annoys everyone :-\


I think this is partly the result of people simply parroting shallow statements about 'backup, availability, disaster recovery, ...' that they've read or heard elsewhere, and partly the result of a general push towards a service economy where it is considered smart to outsource to 'the experts'. While this may make sense in some cases - e.g. those where the task at hand is not recurring and requires knowledge or equipment which is not at hand and hard/expensive to come by - it is often more expensive and less flexible than just doing the thing yourself.

Of course HN is a site frequented by people who are wont to start those same service companies so it is not surprising for there to be a tendency to play up the 'risks' of doing things yourself instead of 'leaving it to the experts'.


I have a 10kW 4090 training cluster in my garage with water cooling.

It paid for itself within 3 months compared to what the same workload would have cost on AWS.


Not something that requires HA, backups, scaling, auditing, logs, security, and such, though.

Would you run your prod inference cluster the same way?


Its little brother is the prod inference cluster. It's had an uptime of 100% over the last two years.

People seriously don't realize just how _big_ computers have gotten. The AWS mentality is still stuck in 2005 when a t2.micro was a decent computer.


> The AWS mentality is still stuck in 2005 when a t2.micro was a decent computer.

The T2 EC2 instance family got announced in 2014: https://aws.amazon.com/blogs/startups/announcing-amazon-ec2-...


The first instance type was the m1.small; its specs are marginally better than those of the t2.micro, and it's not been generally available since 2022, if memory serves.

https://aws.amazon.com/blogs/aws/ec2-instance-history/

https://instances.vantage.sh/aws/ec2/t2.micro

https://instances.vantage.sh/aws/ec2/m1.small

Please nitpick better. Nothing is more embarrassing than being pedantic and wrong.


And t2.micro was not a decent computer at that time.


You don't need most of that until you're making enough money to hire someone to worry about it.


Is that enough to heat your house in the winter?


> I wonder if there is some major propaganda push on HN to discourage actual software engineers from ever running their servers

How to scare a computer engineer in 2024? Invite him to configure a server.


> I wonder if there is some major propaganda push on HN to discourage actual software engineers from ever running their servers. Judging from the comments in this and similar posts, sounds like it is common belief that it's super expensive and a full time job to run bare metal services. AWS and GCP free credits have created a generation of developers that think running a webserver, a webapp and a database is rocket science, and when stuff breaks the whole company might go bankrupt because no one could ever save a corrupted database.

Related to this observation, you really should watch DHH's keynote talk at Rails World 2024, from about minute 20 to 30: https://youtu.be/-cEn_83zRFw?t=1296

He addresses exactly this phenomenon.

EDIT: changed URL to include timestamp to the start of the relevant content.


> sounds like it is common belief that it's super expensive and a full-time job to run bare metal services.

Remember the age old saying: Time is money.

If you run your own servers, you are spending your own time and thus money to manage them.

If you buy services from hosting ("cloud") providers, you spend money to have someone else manage them.

The question then becomes: Do you value your time or your money more? And then you pick the one that spends the one you value less.

Some people value their time, some people value their money. YMMV.


Your comment doesn't take into account that some things cost more than others, and might not be worth their price.

Running a server by yourself isn't more time consuming than having AWS do it for you. The issues that cause very long downtimes are flooding of the server room, subpar connectivity, low-quality hardware, or a CPU on fire.

Just do not host your production services in your basement. Any VPS saves you from these, at 1/10th the AWS cost. This is obvious to anyone that has the basic skills to run a Linux host.

The fact that it needs to be spelled out means that these days few have the skills, and/or the rest have been convinced that it is "a waste of time" and "only makes sense if your time is worthless", which is total FUD.


I've realized that my time only becomes more valuable as I get older, to the point that if a problem can be solved by just throwing more money at it then that is a very compelling solution.

Sure, buying a VPS or whatever from a hosting provider might be cheaper than buying a full blown cloud solution, but at what personal time cost?

Time is a finite resource, much more than most of us ever realize. Worse, time is a resource that cannot be replenished. Money, on the other hand, is a replenishable and theoretically infinite resource. Sometimes, paying big bucks for premium services is the cheaper solution to moving on in life.


It takes me more time to find what I need in the infernal hellhole of the AWS web interface, and to understand their hopelessly complex jungle of lingo and abstraction layers. Hell, I'll deploy a fresh Hetzner bare metal server with Debian in a matter of 3-4 minutes and roll with it. 10 times more bang for your buck, no vendor lock-in, and guess what: I am in control. I know what runs on my server. I understand the platform inside out. I don't trust developers who are afraid of setting up their own Linux server.


Both you and the parent commenter I replied to need to drop the notion these are people who are "afraid" of setting up their own servers.

No, these are people who don't care about setting up their own servers.

Similarly to how I just bought a NAS unit from Synology and just use Tailscale because I don't care about assembling a homebrew NAS or setting up my own VPN infrastructure, most people don't care about setting up their own servers.

Why do we not care? Because time is finite and we have more important shit we need or want to attend to.


If it was the case that we needed less total aggregate time for a cloud deployment, we would see it represented in operations headcounts.

What I have witnessed is actually the opposite; as mentioned elsewhere in this thread I was on a team of 6 sysadmins (who knew perl/python and C as a requirement for employment, so more like "devops" but with worse tools): and we ran 1% of web traffic on a few hundred physical machines.

However, Ubisoft has 5% of its total headcount as IT staff, and my last job had roughly as many staff for a multiplayer game as I had for a huge SaaS web-store which even ran its own payment systems (i.e., before Stripe existed) and had regulatory compliance issues to take care of. And Ubisoft is using AWS/Azure/GCP and has moved away from bare-metal hosting, while my last job only ever ran on cloud providers, as that was du jour when they started the company.

I have yet to see any evidence that it actually saves time. I think it's weaponised short-term thinking: going from the console to a running system indeed takes less time, but the tinkering with CDK/Terraform/IAM/VPC and trying to architect your solution to be cost effective and reliable takes at least an equivalent amount of time, it seems.

Statistics don't lie here: if "devops" and "SRE" are the new sysadmin, then headcounts have not fallen. Operations staff are in as much (or even more) demand as they were in 2010. We just made operations more complex and vendor-specific.


Because setting up your server takes an infinite amount of time, while configuring things on AWS takes zero minutes, right?


I think the gap here is that UNIX sysadmin skills are a lot rarer than many people imagine, and are also incredibly time consuming to learn. UNIX is many things, user friendly isn't one of them. AWS offers GUIs that are well documented. Linux has ... man pages. Probably some blog posts from 2007 if you're lucky.

I mean, I admin my own servers. But I learned Linux as a kid when time was cheap. If you didn't have that experience then yeah it may make sense to just use as much cloud as possible. Sysadmin is just unpleasant even when you do know how to do it.


But I don't see how AWS actually saves you from needing to understand Linux. Maybe the full serverless Lambda stuff, maybe, but otherwise you're gonna have to set up your stuff on some virtual boxes.


Yeah, exactly: the more you buy into the cloud platform, the less Linux you have to learn. Learning how to admin your own Postgres is part of "learning Linux", but there's nothing fulfilling about learning apt, installing it, and then discovering you can't connect out of the box. Then you have to learn vim, learn what pg_hba.conf is and how to edit it, how to use systemctl to make the edit take effect, what sudo is, etc. ... it all takes time, and it's not like it's a foundation of deep knowledge for other stuff to build on. Everything is wildly inconsistent, and learning how to configure Postgres doesn't help much with learning how to run MySQL or other services. It's just a big pile of UNIX trivia.
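
For what it's worth, the "can't connect out of the box" dance usually boils down to two edits plus a restart, something like this (database/user names and subnet are hypothetical; paths are the Debian/Ubuntu defaults):

    # /etc/postgresql/16/main/postgresql.conf
    listen_addresses = '*'

    # /etc/postgresql/16/main/pg_hba.conf
    host  myapp  myuser  10.0.0.0/8  scram-sha-256

    # then: sudo systemctl restart postgresql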


If you think knowing apt / vim / postgresql / bash / linux intimately is a waste of time and just a pile of Unix trivia, then you and I are entirely different type of engineers / developers, I don't think we even speak the same language :) I'm actually grateful that your mindset is becoming more prevalent, as it only serves to increase the value of my own skillset.


It only increases the value of that skillset if it's in demand. Look, I'm on your side on this, I've learned all this stuff and it'd be nice if it was in high demand. But it's just not. Most people's experience of apt-get stops at a Dockerfile, most people never learned vim and never will because they have VSCode, most people do not really know how to use bash and have never recompiled a kernel.

It's a truism that you can do stuff a lot cheaper yourself if you have these skills than paying a cloud to do it for you, but cloud services have grown like crazy for over a decade now and show no sign of slowing down. UNIX is fundamentally a user-hostile operating system, it will never change, and it's nice for those of us who learned it that we can sometimes convert our rapidly-obsoleting skills into savings (sometimes), but it doesn't seem likely to stay that way. The Linux vendors just haven't improved the usability of their platform at anywhere near a fast enough pace to keep up with the cloud vendors.


Look at it this way: Most of us don't care how cars work, we outsource the production and maintenance because time is valuable compared to the money we spend.

Time is finite and there are more things in this world than we can ever afford to care in a single lifetime, not caring is a matter of budgeting that finite time so we can live our lives in a fruitful manner.


Again, making the false argument that instead of learning a system engineer skillset you need to learn nothing at all. As if instead of spending time on systems, everything will be magically done for you and it costs zero FTE headcount.

Wrong, wrong, wrong. You spend a similar amount of time, but on different knowledge and tasks. And I'm saying I find my skillset more enjoyable and more useful compared to the vendor lock-in and the Tower of Babel of cloud services that is AWS/Azure/Google.


So many people in these conversations seem to miss that buying something also takes time. There's this unspoken assumption that because you're paying someone else to do it, it'll take less of your time, but extremely frequently this isn't actually true. Cloud is a good example of this in a lot of cases: I've not seen a company using cloud that didn't spend a lot of time managing it, frequently far more time than they would spend managing their own servers (especially because they need to jump through a lot of hoops to keep spending from spiraling out of control).


Honestly, I'm just going to cite the Greatest Hacker News Comment Of All Time(tm)[1] and be done with it.

[1]: https://news.ycombinator.com/item?id=9224


If we're playing that game I'll link https://devnull-as-a-service.com/



