> Rack mount hardware is almost always expensive, loud, and power hungry.
Buying old rack-mount server hardware for home use is almost always a mistake. Old server hardware may feel cheap when you see an old dual-socket rack mount server on eBay with hardware that was fast 8 years ago, but you can probably meet or exceed the performance with something like a cheap 8-core Ryzen.
Rack mount servers are also exceptionally loud. Unless you love the noise of small, high-RPM server fans, you don't want rack mount server hardware in your house.
And don't forget the power bill. Some old servers idle at hundreds of watts, which will add up over the several years you leave it running. 24/7 server hardware is a good example of where it makes sense to be mindful of power consumption.
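To put a number on "hundreds of watts adding up": a quick back-of-the-envelope calculation (the $0.15/kWh rate is an assumed example, not a quoted figure; your rate will vary) shows the yearly cost of a machine left on 24/7:

```python
def annual_power_cost(idle_watts: float, rate_per_kwh: float = 0.15) -> float:
    """Yearly cost of a machine running 24/7 at a constant draw.

    rate_per_kwh is an assumed illustrative rate, not a real tariff.
    """
    kwh_per_year = idle_watts * 24 * 365 / 1000  # watts -> kWh over a year
    return kwh_per_year * rate_per_kwh

# A server idling at 250 W costs roughly $330/yr at this rate,
# versus about $40/yr for a ~30 W mini PC doing the same jobs.
print(round(annual_power_cost(250), 2))  # 328.5
print(round(annual_power_cost(30), 2))   # 39.42
```

That gap, compounded over several years, is often more than the price difference between the old server and a modern low-power box.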
> What makes even less sense is wanting to use software like vSphere or ESXi, since it's about 10x more complicated than just using virt-manager/QEMU.
I disagree. ESXi is actually extremely easy to use, as long as you pick compatible hardware up front. The GUI isn't perfect, but it's intuitive enough that I feel confident clicking around to accomplish what I need instead of looking up a tutorial first.
I've always ignored advice when people say something's too hard or not worth it, and I pretty much never regret it.
I absolutely regret trying to get a used rack mount server running.
The combination of the steep learning curve going from workstations to server hardware, plus parts that were failing but tested OK, made troubleshooting and getting it running right extremely difficult.
And that's before you get to the quirks of getting it to boot and installing an OS, drivers, and software.
I love it now that it works, but it easily took 100x the time (yes 100x) and probably 2-2.5x the total expected cost getting it to that point.
Not counting the additional AC unit I installed to keep it (somewhat) quieter.
I usually expect one or two aspects of my projects to have unexpected roadblocks, but for this it was issues with what seemed like every single step.
My experience was buttery smooth: plug it in, replace the drives with new SATA spinning disks for bulk storage and SSDs for fast storage, install Proxmox, and I had my first apps running within a few hours of starting. This was first on a Dell R720 and later on an HP DL380 Gen9.
I got complete servers from decommissioning projects and they just worked. In 5 years, I’ve replaced a SAS controller battery backup unit on one of them.
The plural of anecdote isn’t data and all that, but if you buy complete gear that just aged out, it worked on the last day they used it and is very likely to work on the first day you use it.
The fan noise and power draw are annoying. Running a house full of VMs (I've got about 20 containers plus VMs), it pulls about 290 watts per the meter. That doesn't feel outrageous on the power side and is certainly convenient. (It's about $500/yr in power.)
I have used consumer hardware and enterprise hardware in my rack. There’s pros and cons either way.
Consumer hardware is cheaper, more power efficient, and quiet.
But, there are a few reasons I have switched to enterprise hardware:
* remote management and redundancy features make it more reliable if I’m traveling and want to remote in to do something
* some software works better with enterprise hardware features (e.g. special disk controller modes)
* I feel like my development experience is closer to what I can expect in production
* 100+ gigs of RAM on one system without a big upfront expense
* the occasional PITA of working with enterprise hardware helps me understand what I might expect from an infrastructure team in production, or design ways to make their lives easier
> And don't forget the power bill. Some old servers idle at hundreds of watts, which will add up over the several years you leave it running. 24/7 server hardware is a good example of where it makes sense to be mindful of power consumption.
My rack has an always-on laptop for applications that always "need" to be running. I then have an Arduino in my office whose sole purpose is to wake the servers (via Wake-on-LAN, or power-on via the UPS) until they're online when I turn my switch to the "on" position, and put them to sleep when it's in the "off" position. Any servers still on after 15 minutes in the "off" position just get halted.
Once I did that, the friction of powering on/off was so low that my power consumption went way down.
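For anyone wanting to replicate the wake half of a setup like this: a Wake-on-LAN "magic packet" is just six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, broadcast over UDP (port 9 by convention). A minimal Python sketch, with a placeholder MAC:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN; the target NIC must have WoL enabled."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Placeholder MAC for illustration; swap in your server's address.
pkt = magic_packet("aa:bb:cc:dd:ee:ff")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

The same packet format is what WoL libraries for Arduino build, so the microcontroller and desktop versions are interchangeable.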
The latest Ryzen's CPU performance and its 128 GB ECC UDIMM ceiling should be enough for most users, but it still lacks homelabby features and flexibility. I need more than 16 PCIe lanes with ACS support, plus IPMI, so I went for an EPYC build. It's great except for idle power consumption. I wish they'd improve it to Xeon levels, but maybe the chiplet architecture (8 CCDs plus a massive I/O die) isn't good at that.
Threadripper was my initial thought, since the X399D8A-2T motherboard (X399, but with IPMI) existed for TR 2000. But they changed the chipset for TR 3000, and TR 3000 doesn't offer a 16-core SKU (which would be enough for me). I also found that even a TR 2000 build isn't much cheaper than EPYC, because it needs ECC UDIMMs, which are rarely sold cheaply, unlike ECC RDIMMs. So I finally went with EPYC Rome.