Aren’t vSphere licenses a bit expensive for the average homelab? I personally went with a larger former workstation with more cores and RAM, because I expect to only ever have a single node, even if I’d love to have a full cluster.
I recently found that they have a $500 license for 6 CPUs now! I think it's doable since it's a lifetime license (though without updates after a year(?)).
The newer version is even more interesting and has just hit the market: the NUC 9 Extreme/Pro Kits. [1]
It's got i9/Xeon options, 3 NVME drives, and even PCIe x4 and PCIe x16 slots. It's bigger but still quite small. The PCIe slots are the killer feature to me.
Yeah, it's touching $2k USD without RAM, storage, or an OS. I don't know if many homelabbers are going to buy that just for the form factor. Building a small box with an 8-core Ryzen would be an interesting exercise.
There's no practical reason for homelab folks to use this new box compared to some older micro workstations from a year or so ago that had E3-class Xeons in them. At a $2k entry price, it's easily worth it to instead find some old Xeon D mini-ITX boards and plop them into inexpensive cases with SFX PSUs. Old gear that people are dumping cheaply is the bread and butter of the homelab experience. The hard part with an AMD build is mostly the ECC DIMM motherboard support situation, but SMB-class hardware is still a Good Idea for prosumer and homelab folks due to software compatibility (read: drivers and fewer cost-driven gotchas a la winmodems) compared to consumer hardware.
Me too, they are so tiny and powerful! Just got the 64GB Frost Canyon one, awesome little thing. Running it with an Nvidia 2080 Ti via a Razer Core Chroma and Ubuntu 20.04. GPU compute (CUDA) works, which is really the only thing I need it for, though I can't get it working properly with a display.
One thing I really liked about ESXi the last time I used it was the built-in virtual switch; other solutions in the KVM world were not up to par. Maybe it's different today.
I’m very much a lightweight and I found that ESXi was great when combined with Fusion on a Mac. Being able to build VMs locally then upload them is nice. Maybe other platforms have methods like this too?
I've always preferred Proxmox because of its container support. I have a number of Proxmox hosts that don't have any VMs on them at all, just containers.
I'm looking for a hypervisor recommendation... I am hoping to virtualize at least two if not all three of my machines: Unraid (pass-thru HDDs); pfSense (pass-thru NIC); and Windows 10 (pass-thru Graphics). I am hoping to have Windows 10 running fast enough to do some light photo editing. Which hypervisor would you think to try first?
KVM works well. You can use system-wide libvirt-based management to have the VMs automatically come up and go down with the host machine, and treat them like services.
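A minimal sketch of that, assuming libvirt's system instance and a VM named "router" (the name is just an example):

  # start the VM whenever libvirtd comes up
  virsh --connect qemu:///system autostart router
  # have running guests be suspended/shut down and resumed along with the host
  sudo systemctl enable libvirt-guests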
That said, there's value in having your router/firewall separate from other systems (virtual or otherwise), which will substantially reduce complexity and make it less likely that you accidentally make a system accessible on the wrong side.
I've just recently spun up some VMs on my mid-range laptop with KVM. Once I got it up and running, I was pretty happy with the performance, but getting it there was not super easy.
In particular, qemu and apparmor don't always play nicely together and the error messages you get by default don't tell you that it's an apparmor problem.
Importing a VMDK package was also tricky, I ended up having to get familiar with some XML incantations to get the machine to import properly. Feedback from qemu or its friends was totally obtuse and unhelpful there too.
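For anyone hitting the same wall, one route that avoids most of the hand-written XML is converting the disk and importing it with virt-install; a rough sketch, with placeholder names:

  # convert the VMware disk image to qcow2
  qemu-img convert -f vmdk -O qcow2 appliance.vmdk appliance.qcow2
  # wrap a libvirt domain around the existing disk, no installer needed
  virt-install --name appliance --memory 4096 --vcpus 2 \
    --disk path=appliance.qcow2,format=qcow2 --import \
    --os-variant generic --network network=default --noautoconsole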
Do you know if it's possible in KVM to create a separate virtual subnetwork managed by KVM (e.g. adding VMs to that network, reading their IPs) which is accessible from the parent network?
I have a home network and wanted KVM to control which VM gets which IP, but I couldn't make them accessible from outside without setting static IPs on the VM side. When I use a virtual bridge, KVM can't set the IPs, but at least the VMs are accessible from my home network.
It's not a KVM-level thing unless you want to fiddle by hand with bridges and obscure command line options. libvirt can do it reasonably easily on top however (see "virsh net-list" etc).
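One sketch of what that looks like: a libvirt network in routed mode with fixed DHCP leases, so libvirt hands out the addresses while the VMs stay reachable from the LAN (the name, MAC, and ranges below are made up):

  cat > lab-net.xml <<'EOF'
  <network>
    <name>lab</name>
    <forward mode='route'/>
    <bridge name='virbr10'/>
    <ip address='192.168.50.1' netmask='255.255.255.0'>
      <dhcp>
        <range start='192.168.50.10' end='192.168.50.99'/>
        <host mac='52:54:00:aa:bb:01' name='vm1' ip='192.168.50.11'/>
      </dhcp>
    </ip>
  </network>
  EOF
  virsh net-define lab-net.xml && virsh net-start lab && virsh net-autostart lab

You still need a static route for 192.168.50.0/24 pointing at the KVM host on your home router so the rest of the LAN can reach the VMs.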
I've done all of that on XenServer, or rather XCP-ng now. (pfsense (nic passthrough), freenas (SAS controller passthrough) and windows 10 with gpu passthrough)
It doesn't have the same hype as the others but is far more open. The biggest drawback is that the configuration software is Windows-only (you can get it running in Wine, but it's a bit hacky; Xen Orchestra is also available as a free web-based configuration utility, free for personal use if you build it from source).
I would not want to be at the mercy of VMware. I also don't want to invest time into a system with such restrictions on use.
Proxmox might be a decent alternative but I have a few systems and the cost would be prohibitive for hobby use.
XCP-ng Center (the Windows client) will disappear and Xen Orchestra is the only officially supported client. In the next month, we'll bundle "XO Lite" into hosts so you can manage a single machine without having to install anything!
I'd usually recommend just using KVM on Fedora/CentOS, but if you're not that Linux-savvy or don't want to spend time fiddling around, I'd probably say give Proxmox a go, or XCP-ng if you want something that's easier to cluster between hosts and don't mind the storage not being quite as quick.
I have the same impulse to want to virtualise my router. I’ve repeatedly agonised over this but keep coming back to the decision that as cool as it would be, so long as internet uptime matters, the router probably needs to be separate hardware.
You can get good, power efficient pfsense hardware for around $200 USD. A few bonus notes:
—You could spend a little bit more and get hardware that's good enough to add a virtualisation layer to your dedicated router.
—If your uplink is less than 300 megabit, and you have a managed switch with VLANs, your pfsense box doesn't need to be dual NIC. This could open the door for cheaper or re-purposed hardware.
I have had pfSense running on Proxmox for several months now.
Upgraded to the latest pfSense version a couple of days ago and the total downtime was in the order of minutes.
It's a bit too early to say whether it's a good idea or not; the main concern I have is that you have two points of failure instead of only one. Other than that worry it has been very smooth.
I used to run my (pfSense) firewalls in ESXi, but then I got tired of having to wait for VMware to come up before the firewall(s) would come up.
Actually, the ESXi web UI works well for managing VMs from the 'outside', and for connecting to the 'inside' there is the official 'VMware Remote Console' app on the App Store.
I haven't actually used Proxmox, but I had X11/XQuartz on macOS and found it enough to manage them. You don't even need to open X manually; it gets started when there is a connection:
ssh -X -- user@host virt-manager &
Also getting started on KVM on Debian is as easy as:
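(On a current Debian release that's roughly the following; exact package names can vary a bit between releases:)

  sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager
  sudo adduser $USER libvirt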
I dumped it years ago when my container work kicked off and they started paywalling everything behind vCenter.
If web works with base ESXi great! If not then I'm not interested.
I just don't work with a lot of companies that are in the position of needing VMware anymore. Either they're all in on AWS or they're too small to justify the expense.
The Windows ESXi client has been EOL for a while now and is not supported by current versions. It's all managed through the web UI or the esxcli commands now.
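For quick checks over SSH, a few of the basics look like this:

  esxcli system version get               # host version/build
  esxcli vm process list                  # VMs currently running on the host
  esxcli network ip interface ipv4 get    # management network addressing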
That's almost its strong point. It's pretty quirky from the outside, but I like Proxmox. You can see pretty quickly, with a ~$50/month experiment on OVH's discount arm (SoYouStart.com), whether it's a good fit.
You can mix and match normal VMs with LXC containers, and it does quite well for single physical hosts or small clusters. Past some scale I'd go with something else, but it's great in the < 20 physical host range. It's a fairly unsung open source area. OpenVZ used to be a semi-for-profit, open-source-but-we-want-money competitor here. Guessing they are mostly dead now. (Not celebrating that... they did a lot of good.)
The good thing about Proxmox is the flexibility that comes from it being Debian-based.
I can do anything I can do on Debian, like custom monitoring, managing the filesystem, etc.
I've just discovered PhotoPrism [1] through this article, which looks like a great solution for organising my photos in my homelab/home server, an HP N54L MicroServer (Gen 7) recently reformatted to run FreeNAS; loving it so far (previously it was some Frankenstein mix of Windows Server 2012 running an Ubuntu VM via Hyper-V, with the storage being ZFS within the VM... terrible performance!).
There is also PhotoStructure [2] that I like the look of, but PhotoPrism is OSS and already integrates machine learning for identifying objects in photos... yes, I know I can't expect great performance out of my N54L on that front!
If you've signed up for beta access, know that I'm sending out invites in batches, and hope to send out another this week with the release of v0.8.0: https://photostructure.com/about/release-notes/
(sorry if you've been waiting a while!)
Briefly, there are 3 builds of PhotoStructure: a desktop build (packaged with Electron), a Docker image, and a git repo you can pull, `yarn install`, and run directly. More details about why I felt the need to quit my office job and work on it full time are here: https://photostructure.com/about/introducing-photostructure/
If you've got any questions, feel free to ask them here or email hello@photostructure.com.
off-topic:
Here's a weird thing... I have to admit that one concern I have is that I'll like PhotoPrism (this is what's keeping me a little bit afraid of trying it). The issue is that I'm a Go programmer and it seems to be a good piece of software, but it uses the GPL license, and my motto is to avoid reading GPL'ed source code, especially when it affects what I do for a living. It'd be even worse if I think about contributing to it...
I discovered it a few months ago when I was searching the web to see what's out there. Before moving to iPhoto (which I regretted a lot), I used digiKam and I enjoyed it.
* I don't want to risk reading something I like there and ending up copying the idea or unwittingly writing similar code, as this license is viral.
> but it uses the GPL license, and I've as a motto to avoid reading GPL'ed source code
So, don't? Just pretend that it's a proprietary application with no source that just drops from the sky as a binary or docker image. Can you not get what you want as just a "normal" user?
> Before moving to iPhoto (which I regretted a lot)
I used iPhoto for a while and am actively considering buying a used one to run it on again for family photos. What didn't you like about it, and is there anything that can take the metadata from iPhoto and translate it?
I wouldn't recommend using the older versions: it corrupted my library, and it removed some photos from Flickr after I removed a set (I hated it so much... it wasn't supposed to work like this, Apple) and made the originals very hard to find. The new Photos app seems way better, though. However, I admit my trust in it isn't great either, because I'm afraid these experiences might happen again... So I'd either stick to Lightroom or go with something lightweight where I know I can always fall back on organizing the files by directories on my filesystem.
I run my homelab on NUCs as well but no hypervisor. Everything I run is in Docker containers. So the NUCs have CentOS installed and then just containers across the Swarm cluster.
Storage is an NFS mount from my Synology NAS.
The only reason I would really care to have a hypervisor is because then I can do more from Terraform.
I use Terraform to configure my DNS, my Ubiquiti gear, and the Swarm cluster but that leaves a gap where I need to do something to manage the actual CentOS machine. There isn't much to manage (users, SSHd config, SSH keys, packages such as vim, docker, and htop, and then NFS mounts) but the less I have to manage myself the better. Just don't think it's worth adding a whole hypervisor just to pull possibly some of the networking into Terraform.
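For what it's worth, one way to wire up a share like that is a named Docker volume; a rough sketch (server address and export path are made up):

  docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.20,nfsvers=4,rw \
    --opt device=:/volume1/docker \
    nas-docker
  docker run -d --name demo -v nas-docker:/data alpine sleep infinity

Files written from the container hit the share with the container's uid/gid, so lining those up with the Synology export's permission/squash settings is the usual sticking point.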
That's exactly what I want to do. Do you have any guides you used? I'm having a little trouble with NFS and Docker permissions (uid/gid I think) trickling down to the NFS share on my Synology.
I currently run my Sandstorm.io server on a 7th Gen i5 NUC.
I recently picked up some old budget Gigabyte BRIXs that will take on some other fun roles. (A 6th Gen Celeron and a 4th Gen i5.)
I have wanted to run an AD domain for managing my gaming PCs, but I had some difficulty with ESXi IIRC (probably also NIC issues, like in the article), and Intel has decided to be a complete tool about network drivers for "consumer" chipsets being installable on Windows Server. One of the perks of the old 4th Gen BRIX unit: A Realtek LAN NIC.
One of the things I'm sad about on newer NUCs is the removal of the LED ring. 7th gen NUCs in particular have a neat multicolor LED ring around the front panel that can be programmatically controlled... pretty neat for server status like uses. But I suspect they found it rarely used and so left it out for cost or something.
VMs work OK, but I am much happier with Linux containers. NUCs get bogged down pretty easily and go from quiet purring kittens to screeching and whining when they get under load. A dozen containers with different distros and versions runs nicely.
The network device is annoying - I couldn't get it to work on Debian stable, but it is supported in Debian testing thanks to kernel 5.5.
I also had problems with the HDMI video - it didn't work with the HDMI->DVI cable I got with either of my spare monitors, nor did it work with my TV and the existing HDMI cable I had there.
I did get it to work by borrowing the HDMI cable from a friend's HTC Vive.
I have a couple of Raspberry Pis and Onion Omegas on my LAN and a few Linux and BSD VMs on my MacBook (NATed). They can mostly ssh amongst themselves, but the biggest problem I have is that I haven't been able to come up with a satisfactory DNS/mDNS setup. I just don't like fiddling with IP addresses. Any suggestions?
I thought the point of mdns is that it just works with zero configuration.
Other alternatives include running pihole on one machine (or all machines for backup), or giving them static IPs in your DHCP server and then copying a hosts file to them all.
Oh yes, when it works it doesn't need any configuration :) (Well, aside from the initial setup, but that was years ago on Arch, so it might be easier now...) But my personal experience was that it didn't always work, and then it was "fun" to troubleshoot. But seriously, do take this with a grain of salt; I abandoned it years ago, so it might "just work" now.
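If you do want to poke at it, the avahi-utils tools make it easy to see whether mDNS is actually doing anything (the hostname below is just an example):

  avahi-browse --all --terminate    # dump everything currently advertised on the LAN
  avahi-resolve --name somehost.local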
Running your own DNS is the best option. I use dnsmasq, which functions as both a DHCP and DNS server. Any DHCP request from a client also contains a hostname, which dnsmasq then stores in its DNS table. So if your hostname is `macbook-air`, dnsmasq will resolve that to the DHCP IP address assignment.
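A minimal dnsmasq.conf for that setup is something like this (interface, domain, and range are just examples):

  interface=eth0
  domain=lan
  expand-hosts                              # qualify bare hostnames with the local domain
  local=/lan/                               # never forward queries for .lan upstream
  dhcp-range=192.168.1.100,192.168.1.200,12h

With that, a client announcing itself as macbook-air resolves as both macbook-air and macbook-air.lan for everything on the network.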
On the cheap side, I got old Lenovo m93p - USFF with 8GB RAM, i5 (2 cores, 4 threads) with 4 USB 3.0 ports, which is just enough for lightweight VMs, some test containers and ZFS mirror for backups / NAS. Whole package with 2x2TB disks was around 350$.
I still swear by my Ameridroid H2s - Celeron quad core, dual-channel DDR4, 2 x 1Gbps NIC, 1 x NVMe, 2 x SATA, 4K video - less than $120. I have 3 that I got at the end of last year, tricked out with 16GB DDR4 and 2x480GB SATA - $250 each.
A few days ago I picked up a SFF Dell (precision T1700) with a Xeon E3, 32GB DDR3, small SSD + HDD, 11 USB ports, and a thunderbolt pcie card. All for $200.
The CPU is roughly equivalent to a 6th or 7th gen i5, but with 8 threads instead of 4.
Installed proxmox with 1 VM and several containers. Couldn't be happier with everything. My wife can't even hear it under decent load in the living room.
Picked up an HP Z230 SFF with a Xeon E3 a month ago. Fits two hard drives, a SATA SSD, two NVMe drives, as well as a USB-C PCIe adapter. It runs Ubuntu 19.10 with ZFS, and VMs using libvirt and hand-defined XML.
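For anyone wondering what hand-defined XML looks like in practice, a minimal sketch is something on this order (paths and names are placeholders):

  cat > testvm.xml <<'EOF'
  <domain type='kvm'>
    <name>testvm</name>
    <memory unit='GiB'>2</memory>
    <vcpu>2</vcpu>
    <os><type arch='x86_64'>hvm</type><boot dev='hd'/></os>
    <devices>
      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/var/lib/libvirt/images/testvm.qcow2'/>
        <target dev='vda' bus='virtio'/>
      </disk>
      <interface type='network'><source network='default'/><model type='virtio'/></interface>
      <graphics type='vnc'/>
    </devices>
  </domain>
  EOF
  virsh define testvm.xml && virsh start testvm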
Anyone with a similar homelab setup - do you use the free ESXi 5.5 or the more recent versions, which are quite pricey? If 5.5, how do you orchestrate it? The vSphere Client on Windows is awful.
> I discovered the Ethernet NIC wasn’t working.
> More Ethernet issues
Likewise with 5.5, where enabling the ethernet is often nontrivial (e.g. injecting the drivers for the Realtek NICs).
ESXi is basically free. You just can't manage your ESXi hosts as a cluster using vCenter, which gives you things like vMotion, or back your VMs up using the vCenter API.
Also, the Windows client was dropped a while ago and the Flash version is now deprecated.
I think I signed up for a trial licence and got 6.7 u3, but I can’t seem to find a link. It’s been running a while so it isn’t a 60 day one.
VMware's website is utterly impenetrable, and it's like a form of hell trying to get out of the loops you end up in with licence agreements and download options.
I have an older NUC that I've been repaving as needs change, using it as a build agent for some projects, a homelab to experiment with Hasura, etc. It's always done well, but its perf relative to other options would never make me consider using it as a type 1 hypervisor like this. I may need to pick up a NUC10I7FNH.
I got a NUC8i7BEH with ESXi. Works great; I run a dev lab out of it, publicly facing, VLAN'd off on my home fiber.
The only negative I ran into: I bought the bigger one to fit both NVMe and a regular SSD, but I couldn't fit the SSD in alongside an NVMe drive with a heatsink :-/ Besides that, this thing is snappy and uptime has been perfect.
I have an old Intel NUC D54250WYKH which I run ESXi on as my core applications server and where all my inbound homelab traffic gets routed through. There's a single CentOS VM with Docker installed which has all the host resources allocated to it.
Currently running the following containers (a sample invocation for one of them follows the list):
proxy [1] - Nginx proxy host for all ingress traffic, has a nice web interface and works with Let's Encrypt
pihole [2] - Primary DNS server running Pi-hole to block ads
cf-ddns [3] - Client to update Cloudflare records with my IP address (wildcard record *.myhomelab.dev which I run the proxy hosts through)
unifi-controller [4] - UniFi Controller appliance
watchtower [5] - Keeps the docker images up-to-date
portainer [6] - Web interface for managing docker containers
guacamole [7] - Apache Guacamole remote desktop gateway
postfix-relay [8] - Open SMTP relay on my internal network which forwards everything to Amazon SES, makes email notifications easier
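To give a flavour of how lightweight these are, the Pi-hole one is roughly just the stock image with the usual ports (timezone and password below are placeholders):

  docker run -d --name pihole \
    -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
    -e TZ=Europe/London -e WEBPASSWORD=changeme \
    -v pihole-data:/etc/pihole \
    --restart unless-stopped \
    pihole/pihole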
It's a great piece of kit and I love the small form factor; it sits in my network comms cabinet and I've had no issues. I pass a monitoring cable through from a UPS in the cabinet to the CentOS VM, which detects if there's a power cut and shuts the ESXi host down safely.
I initially ran PhotonOS with Docker but had some networking issues so switched to CentOS 7. I have larger, more powerful SuperMicro hypervisor hosts which run the bigger application containers.
The portability of being able to lift a VM off the hardware and onto another piece of hardware is a nice feature when you are using bargain consumer hardware for your "servers". The overhead is pretty minimal.
I'm not currently doing it with my NUC servers, but it's a pretty good choice to do so, and ESXi is free if you don't need to, you know, manage it in any competent way.
He means the shell of ESXi, i.e. its console. As the article says, the ESXi 'shell' is not a full shell. ESXi is designed to be managed via the web interface or via remote command line tools.
By the way, it's not my take (OP), it's VMware's official take on the matter. I don't think my take is very relevant because I don't have experience with it.
That's true for the OS, but as an application, VMware products are perfectly happy to be interacted with via PowerCLI. And if you're still running an older version with that godforsaken Flash interface, it's really preferable.
O_o what the heck are you trying to run in a home lab where that isn't enough? I currently run 20+ containers per NUC in my home lab. I have plenty of capacity left and nothing else to really run...
* dual NVMe drives for an all-NVMe vSAN
* dual nics work out of the box for ESXi
* dual thunderbolt 3 ports to enable 10GbE via TB3 adaptor or PCIe expansion
* Linux-friendly AMD GPU that has no problems being passed through to a *nix or Windows VM
* low profile and easily stackable for a rack shelf
* Supports 2x32GB SODIMMs
* Power efficient
* Low noise
Love these things!