Homelab: Intel NUC with the ESXi Hypervisor (henvic.dev)
124 points by henvic on April 20, 2020 | 109 comments



The ultimate NUC for a vSphere lab is the "hades canyon" NUC8i7HVK. I have three of them in a vSAN cluster.

* dual NVME drives for all NVME vSAN

* dual nics work out of the box for ESXi

* dual thunderbolt 3 ports to enable 10GbE via TB3 adaptor or PCIe expansion

* Linux-friendly AMD GPU that has no problems being passed through to a *nix or Windows VM

* low profile and easily stackable for a rack shelf

* Supports 2x32GB SODIMMs

* Power efficient

* Low noise

Love these things!


Aren’t vSphere licenses a bit expensive for the average homelab? I personally went with a larger former workstation with more cores and RAM, because I expect to only ever have a single node, even if I’d love to have a full cluster.


VMUG Advantage membership ($200/year) gets you a 6 socket, non-production license for basically the entire VMWare catalog.


For $0/year I can run Linux and ovirt.


That won't help you very much if the goal is to maintain familiarity with VMware products (which is the entire point of the VMUG licensing scheme).


I found recently that they have a 500 bucks license for 6 CPUs now! I think it is doable since it is a lifetime license (though without updates after a year(?)).


just the ESXi hypervisor is free for non-production use.


The newer version is even more interesting and was just released on the market: the NUC 9 Extreme/Pro Kits. [1]

It's got i9/Xeon options, 3 NVME drives, and even PCIe x4 and PCIe x16 slots. It's bigger but still quite small. The PCIe slots are the killer feature to me.

[1]: https://www.intel.com/content/www/us/en/products/boards-kits...


It's incredible. I just wish pricing came down a bit; I am happy to pay a premium for the form factor but this is too much.


Yeah, it's touching 2k USD without RAM / storage or OS. I don't know if many homelabbers are going to buy that just for the form factor. Building a small box with an 8-core Ryzen would be an interesting exercise.


There's no practical reason for homelab folks to use this new box compared to some of the older micro workstations that came out a year or so ago with E3-class Xeons in them, at least. Against a $2k entry price, it's easily worth it to find some old Xeon D mini-ITX boards and plop them into inexpensive cases with SFX PSUs. Old gear that people are dumping is the bread and butter of the homelab experience. The hard part with an AMD build is mostly the ECC DIMM motherboard support situation, but SMB-class hardware is still a Good Idea for prosumer and homelab folks due to software compatibility (read: proper drivers and fewer cost-driven gotchas a la winmodems) compared to consumer hardware.


With support for Optane memory too, nice.


If you just want network between them, you should actually be able to connect them directly with a thunderbolt cable, no need for Ethernet in between.


Me too, they are so tiny and powerful! Just got the 64GB Frost Canyon one, awesome little thing. Running it with an Nvidia 2080 Ti via a Razer Core Chroma and Ubuntu 20.04. GPU compute (CUDA) works, which is really the only thing I need it for, but I cannot get it working properly with a display.


Do these have remote management? IIRC there are only a few NUC models that have BMCs (or Intel's equivalent - AMT).


KVM and Proxmox also run very well on that platform, if you're more used to those stacks (and the management tools are free).


One thing I really liked about ESXi the last time I used it was the built-in virtual switch; other solutions in the KVM world were not up to par. Maybe it's different today.


I’m very much a lightweight and I found that ESXi was great when combined with Fusion on a Mac. Being able to build VMs locally then upload them is nice. Maybe other platforms have methods like this too?


VMware Workstation runs on Windows and Linux and supports the same "Connect to ESXi" functionality that Fusion does on macOS.
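
For a CLI route, VMware's standalone OVF Tool should also be able to export a locally built VM and push it to an ESXi host; a rough sketch (host, datastore, and paths are placeholders):

    # Export the local VM to an OVA first (paths are hypothetical)
    ovftool ~/vms/devbox/devbox.vmx ~/exports/devbox.ova

    # Deploy it to the ESXi host (prompts for the password)
    ovftool --datastore=datastore1 --name=devbox \
      ~/exports/devbox.ova vi://root@esxi.local/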


Thank you.


Some additional options:

* XCP-ng (based on xenserver)

* FreeBSD bhyve.


I've always preferred Proxmox because of its container support. I have a number of Proxmox hosts that don't have any VMs on them at all, just containers.


I'm looking for a hypervisor recommendation... I am hoping to virtualize at least two if not all three of my machines: Unraid (pass-thru HDDs); pfSense (pass-thru NIC); and Windows 10 (pass-thru Graphics). I am hoping to have Windows 10 running fast enough to do some light photo editing. Which hypervisor would you think to try first?


KVM works well. You can use system-wide libvirt-based management to have the VMs automatically come up and go down with the host machine, and treat them like services.
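
A minimal sketch of that libvirt autostart setup, assuming a systemd-based host (the domain name is a placeholder):

    # Have libvirtd start this VM automatically on boot
    virsh autostart pfsense

    # Have guests shut down cleanly together with the host
    systemctl enable --now libvirt-guests
    # /etc/default/libvirt-guests (Debian/Ubuntu):
    #   ON_SHUTDOWN=shutdown
    #   SHUTDOWN_TIMEOUT=120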

That said, there's value in having your router/firewall separate from other systems (virtual or otherwise), which will substantially reduce complexity and make it less likely that you accidentally make a system accessible on the wrong side.


I've just recently spun up some VMs on my mid-range laptop with KVM. Once I got it up and running, I was pretty happy with the performance, but getting it there was not super easy.

In particular, qemu and apparmor don't always play nicely together and the error messages you get by default don't tell you that it's an apparmor problem.
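
For anyone hitting the same thing: the denials usually do land in the kernel log even when qemu's own error is opaque; a couple of quick checks, assuming Ubuntu-style AppArmor tooling:

    # Look for AppArmor denials right after a failed VM start
    dmesg | grep -i 'apparmor.*denied'

    # List loaded profiles and whether they are in enforce or complain mode
    sudo aa-status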

Importing a VMDK package was also tricky; I ended up having to get familiar with some XML incantations to get the machine to import properly. Feedback from qemu or its friends was totally obtuse and unhelpful there too.
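
For simple cases, virt-install's import mode can sidestep most of the hand-written XML; a rough sketch (the VM name, sizes, and paths are made up, and converting the VMDK to qcow2 first is optional):

    # Optionally convert the VMware disk format first
    qemu-img convert -f vmdk -O qcow2 appliance.vmdk appliance.qcow2

    # Wrap a libvirt domain around the existing disk instead of running an installer
    virt-install --name appliance --memory 4096 --vcpus 2 \
      --disk path=/var/lib/libvirt/images/appliance.qcow2 \
      --import --os-variant generic \
      --network network=default --noautoconsole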


Do you know if it's possible in KVM to create a separate virtual subnetwork managed by KVM (i.e. adding VMs to that network, reading their IPs) which is accessible from the parent network?

I have a home network and wanted KVM to control which VM gets which IP, but I couldn't make them accessible from outside without setting static IPs on the VM side. When I use a virtual bridge, KVM cannot set the IPs, but at least the VMs are accessible from my home network.


It's not a KVM-level thing unless you want to fiddle by hand with bridges and obscure command line options. libvirt can do it reasonably easily on top however (see "virsh net-list" etc).
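
A sketch of what that can look like with a routed libvirt network and fixed DHCP leases (every name, MAC, and address below is a placeholder):

    # Define a libvirt-managed network with a static lease per VM
    cat > homelab-net.xml <<'EOF'
    <network>
      <name>homelab</name>
      <forward mode='route'/>
      <bridge name='virbr10'/>
      <ip address='192.168.50.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.50.100' end='192.168.50.200'/>
          <host mac='52:54:00:12:34:56' name='vm1' ip='192.168.50.10'/>
        </dhcp>
      </ip>
    </network>
    EOF
    virsh net-define homelab-net.xml
    virsh net-start homelab && virsh net-autostart homelab
    # The rest of the LAN still needs a route to 192.168.50.0/24 via the KVM host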


Cockpit with Virtual Machines plugin also works quite well for normal VM scenarios - create/edit/view console etc. works great.


I've done all of that on XenServer, or rather XCP-ng now. (pfsense (nic passthrough), freenas (SAS controller passthrough) and windows 10 with gpu passthrough)

It doesn't have the same hype as the others but it is far more open. The biggest drawback is that the configuration software is Windows-only (you can get it running in Wine, but it is a bit hacky; Xen Orchestra is also a web configuration utility you can use, free for personal use if you build it from source).

Would not want to be at mercy of vmware. Also don't want to invest time into a system with such restrictions on use.

Proxmox might be a decent alternative but I have a few systems and the cost would be prohibitive for hobby use.


XCP-ng Center (the Windows client) will disappear and Xen Orchestra is the only officially supported client. In the next month, we'll bundle "XO Lite" into hosts to manage a simple machine without having to install anything!


So there will be no free alternative left?

Not sure if a gimped shareware-like(?) option sounds that appealing.


I'd usually recommend just using KVM on Fedora/CentOS, but if you're not that Linux savvy or don't want to spend time fiddling around, I'd probably say give Proxmox a go, or XCP-ng if you want something that's easier to cluster between hosts but don't mind the storage not being quite as quick.


You don’t have to be that Linux savvy to manage virtual machines with Cockpit.


Have you ruled out unraid? It has built-in VM management and supports pass-through.


The only issue is I didn't want to lose my pfSense connection when Unraid reboots for upgrades.


I have the same impulse to want to virtualise my router. I’ve repeatedly agonised over this but keep coming back to the decision that as cool as it would be, so long as internet uptime matters, the router probably needs to be separate hardware.

You can get good, power efficient pfsense hardware for around $200 USD. A few bonus notes:

—You could spend a little bit more and get hardware that's good enough to add a virtualisation layer to your dedicated router.

—If your uplink is less than 300 megabit, and you have a managed switch with VLANs, your pfsense box doesn't need to be dual NIC. This could open the door for cheaper or re-purposed hardware.


I have had pfSense running on Proxmox for several months now.

Upgraded to the latest pfSense version a couple of days ago and the total downtime was in the order of minutes.

It's a bit too early to say whether it's a good idea or not; the main concern I have is that you have two points of failure instead of only one. Other than that worry it has been very smooth.


At the company that makes pfSense, we used to run our (pfSense) firewalls in ESXi, but then I got tired of having to wait for VMware to come up before the firewall(s) would come up.

That, and CARP on VMware is ... tiresome.


Proxmox. The ESXi client is Windows-only and most of KVM's clients are either Linux-only, CLI-only, or dogshit.

Proxmox is great as long as you have the mental fortitude to ignore the tickler that pops up on login if you're on community support only.


Actually, the ESXi web UI works well for managing VMs from the 'outside', and for connecting to the 'inside' there is the official 'VMware Remote Console' app on the App Store.

I haven't actually used Proxmox, but I had X11/XQuartz on macOS, which was enough to manage them. You don't even need to open X manually; it gets started when there is a connection.

    ssh -X user@host -- virt-manager &
Also getting started on KVM on Debian is as easy as:

    apt install -y libvirt-daemon-system libvirt-clients virt-manager



The ESXi web UI is good. Why do you need an app for this?


I dumped it years ago when my container work kicked off and they started paywalling everything behind vcenter.

If web works with base ESXi great! If not then I'm not interested.

I just don't work with a lot of companies in the position of needing VMware anymore. Either they're all in on AWS or they're too small to justify the expense.


the Windows ESXi client has been EOL for a while now and is not supported by current versions. It's all managed through the web or the esxcli commands now.


Do you need vcenter for that?


You really don't want to virtualize unRAID.


I’m not that into hypervisors, but wouldn’t hyper-v be a good choice here? You could have near native windows graphics performance.


Can u run proxmox on a single developer machine ?

(so that you boot up your machine into one of the virtualized OSes)


That's almost its strong point. It's pretty quirky from the outside, but I like Proxmox. You can see, pretty quickly, with a ~$50/month experiment on OVH's discount arm (SoYouStart.com) if it's a good fit.

You can mix and match normal VMs with LXC containers, and it does quite well for single physical hosts or small clusters. Past some scale, I'd go with something else, but it's great in the < 20 physical host range. It's a fairly unsung open source area. OpenVZ used to be a semi-for-profit, "open source but we want money" competitor here. Guessing they are mostly dead now. (Not celebrating that... they did a lot of good.)


The good thing about Proxmox is the flexibility you get because it's Debian-based. I can do anything that I can do on Debian, like custom monitoring, managing filesystems, etc.


> Can u run proxmox on a single developer machine ?

Yes. Proxmox is built on Debian (or was, last I played with it) and you can actually install it on a Debian host, if desired.

> (so that you boot up your machine into one of the virtualized OSes)

No.


gpu + usb passthrough and autostart the vm on boot


Proxmox boots to a shell as far as I have seen. The interface is all web-based.

Also have seen some folks who have used a Pi or a Chromebit to remote into any machine they choose.


No, you'd want something more like unRAID for that.


I've always disliked the fact that Proxmox is basically incapable of hosting virtual machine images to spawn VMs at will.

You have to fiddle with template images, which is kind of a tedious task.


I was almost sold to bhyve as I intended to have FreeBSD on this machine, but at the last minute I decided to give ESXi a try first.


If you decide to give bhyve a whirl at some point, I highly recommend giving vm-bhyve[1] a try for managing the VMs. Makes lots of tasks much easier.

[1]: https://github.com/churchers/vm-bhyve


bhyve is cool, but ESXi has some good tools.


I've just discovered PhotoPrism [1] through this article, which looks like a great solution for organising my photos in my homelab/home server - which is an HP N54L MicroServer (Gen 7), recently reformatted to run FreeNAS; loving it so far (previously it was some Frankenstein mix of Windows Server 2012 running an Ubuntu VM via Hyper-V, with the storage being ZFS within the VM... terrible performance!)

There is also PhotoStructure [2] that I like the look of, but PhotoPrism is OSS and already integrates machine learning for identifying objects in photos... yes I know I can't expect great performance out of my N54L on that front!

[1] https://photoprism.org/

[2] https://photostructure.com/


Hi, author of PhotoStructure here.

If you've signed up for beta access, know that I'm sending out invites in batches, and hope to send out another this week, with the release of v0.8.0 : https://photostructure.com/about/release-notes/

(sorry if you've been waiting a while!)

Briefly, there are 3 builds of PhotoStructure: a desktop build (packaged with Electron), a Docker image, and a git repo you can pull, `yarn install`, and run directly. More details about why I felt the need to quit my office job and work on it full time are here: https://photostructure.com/about/introducing-photostructure/

If you've got any questions, feel free to ask them here or email hello@photostructure.com.


You'd never guess from the Photoprism website that it actually exists. It's just a huge ad.

But there is actually code, tucked away in the hard to find github link: https://github.com/photoprism/photoprism

Unfortunately, it requires docker, and cannot be installed standalone. That's a deal-breaker for me.


Off-topic: here's a weird thing... I have to admit that one concern I have is that I might like PhotoPrism (this is what is keeping me a little bit afraid of trying it). The issue is that I'm a Go programmer and it seems to be a good piece of software, but it uses the GPL license, and I have a motto of avoiding reading GPL'ed source code, especially when it affects what I do for a living. It'd be even worse if I were to think about contributing to it...

I discovered it a few months ago when I was searching the web to see what is out there. Previously to moving to iPhoto (regretted a lot), I used digiKam and I enjoyed it.

* I don't want to risk reading something I like there and ending up copying the idea or writing similar code unwittingly, as this license is viral.

Thanks for showing me PhotoStructure!


> but it uses the GPL license, and I've as a motto to avoid reading GPL'ed source code

So, don't? Just pretend that it's a proprietary application with no source that just drops from the sky as a binary or docker image. Can you not get what you want as just a "normal" user?


I'd argue this is hard to do when it's so easy to explore a bit further, so I'd rather just stay away from trouble :)


> Previously to moving to iPhoto (regretted a lot)

I used iPhoto for a while and am actively considering buying a used one to run it on again for family photos. What didn't you like about it, and is there anything that can take the metadata from iPhoto and translate it?


I'd not recommend using the older versions because it corrupted my library, and it removed some photos from Flickr after I removed a set (I hated it so much... it wasn't supposed to work like this, Apple), and made the original ones very hard to find. The new Photos app seems way better, though. However, I admit my trust in it is not great either because I'm afraid these experiences might happen again... So I'd either stick to Lightroom or go to something light that I know I can always just rely on organizing the files by directories on my filesystem.


I run my homelab on NUCs as well but no hypervisor. Everything I run is in Docker containers. So the NUCs have CentOS installed and then just containers across the Swarm cluster.

Storage is an NFS mount from my Synology NAS.

The only reason I would really care to have a hypervisor is because then I can do more from Terraform.

I use Terraform to configure my DNS, my Ubiquiti gear, and the Swarm cluster but that leaves a gap where I need to do something to manage the actual CentOS machine. There isn't much to manage (users, SSHd config, SSH keys, packages such as vim, docker, and htop, and then NFS mounts) but the less I have to manage myself the better. Just don't think it's worth adding a whole hypervisor just to pull possibly some of the networking into Terraform.

Anyways, NUCs + Docker Swarm = great win.


That's exactly what I want to do. Do you have any guides you used? I'm having a little trouble with NFS and Docker permissions (uid/gid I think) trickling down to the NFS share on my Synology.


With regards to NFS permissions, I set my Synology up as so:

* Client: 192.168.0.8/29 (whatever the smallest subnet is I can use that includes my NUC)

* Privilege: Read/Write

* Squash: No mapping

* Asynchronous: Yes

* Non-privileged port: Denied

* Cross-mount: Denied

This is fairly out of date (I keep my own stuff in a private monorepo, this was a snapshot I posted for a friend), but here is the basics of a script to setup my NUC: https://github.com/regner/homelab-stacks/blob/master/scripts...

Of note, the "Adding NFS share to fstab" part doesn't actually do that...
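
On the Docker side, one way to avoid fstab entirely is a named volume backed by NFS; a minimal sketch (the Synology address and export path are placeholders):

    # Named volume that mounts the Synology export over NFS
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.0.2,rw,nfsvers=4 \
      --opt device=:/volume1/docker \
      nas-docker

    # Containers then use it like any other volume
    docker run --rm -it -v nas-docker:/data alpine ls /data

For Swarm services, the same type/o/device options go under driver_opts in the stack file's volumes section.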


I currently run my Sandstorm.io server on a 7th Gen i5 NUC.

I recently picked up some old budget Gigabyte BRIXs that will take on some other fun roles. (A 6th Gen Celeron and a 4th Gen i5.)

I have wanted to run an AD domain for managing my gaming PCs, but I had some difficulty with ESXi IIRC (probably also NIC issues, like in the article), and Intel has decided to be a complete tool about network drivers for "consumer" chipsets being installable on Windows Server. One of the perks of the old 4th Gen BRIX unit: A Realtek LAN NIC.

One of the things I'm sad about on newer NUCs is the removal of the LED ring. 7th gen NUCs in particular have a neat multicolor LED ring around the front panel that can be programmatically controlled... pretty neat for server status like uses. But I suspect they found it rarely used and so left it out for cost or something.


VMs work ok, but I am much happier with Linux containers. NUCs get bogged down pretty easily and go from quiet, purring kittens to screeching and whining when they get under load. A dozen containers, with different distros and versions, runs nicely.


I just got one of those NUCs.

The network device is annoying - I couldn't get it to work on Debian stable, but it is supported in Debian testing thanks to kernel 5.5.

I also had problems with the HDMI video - it didn't work with the HDMI->DVI cable I got with either of my spare monitors, nor did it work with my TV and the existing HDMI cable I had there. I did get it to work by borrowing the HDMI cable from a friend's HTC Vive.


I have a couple of Raspberry Pis and Onion Omegas on my LAN and a few Linux and BSD VMs on my MacBook (NATed). They can mostly ssh amongst themselves, but the biggest problem I have is that I haven't been able to come up with a satisfactory DNS/mDNS setup. I just don't like fiddling with IP addresses. Any suggestions?


I thought the point of mdns is that it just works with zero configuration.

Other alternatives include running pihole on one machine (or all machines for backup), or giving them static IPs in your DHCP server and then copying a hosts file to them all.


Oh yes, when it works it doesn't need any configuration :) (Well, aside from the initial setup, but that was years ago on Arch so it might be easier now...) But my personal experience was that it didn't always work, and then it was "fun" to troubleshoot. But seriously, do take this with a grain of salt; I abandoned it years ago, so it might "just work" now.


A couple of my systems have resolvers that can't resolve .local names (OpenBSD, OpenWrt/musl)



I've always just given static local IPs to everything in my router settings - usually it lets you map MAC addresses to desired IPs.

Then you pass traffic on certain ports to certain IPs if you want to be able to access anything from outside.


You can assign them static IPs in DHCP, then run your own DNS server that returns local IPs for those hosts.

Or bridge those VMs on your macbook so it's a flat network, then use Avahi for mDNS so they all have .locals.


Running your own DNS is the best option. I use dnsmasq, which functions as both a DHCP and a DNS server. Any DHCP request from a client also contains a hostname, which dnsmasq then stores in its DNS table. So if your hostname is `macbook-air`, dnsmasq will resolve that to the DHCP IP address assignment.
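
A minimal dnsmasq.conf sketch of that setup (the interface, domain, and addresses are just examples):

    # /etc/dnsmasq.conf
    interface=eth0                                 # LAN interface to serve
    domain=lan                                     # DHCP clients get names under .lan
    expand-hosts                                   # also append .lan to /etc/hosts entries
    local=/lan/                                    # never forward .lan queries upstream
    dhcp-range=192.168.1.100,192.168.1.200,12h     # dynamic lease pool
    dhcp-host=aa:bb:cc:dd:ee:ff,macbook-air,192.168.1.50   # optional fixed lease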


On the cheap side, I got an old Lenovo M93p - USFF with 8GB RAM, an i5 (2 cores, 4 threads), and 4 USB 3.0 ports, which is just enough for lightweight VMs, some test containers, and a ZFS mirror for backups / NAS. The whole package with 2x2TB disks was around $350.


I still swear by my Ameridroid H2's - quad-core Celeron, dual-channel DDR4, 2x 1Gbps NICs, 1x NVMe, 2x SATA, 4K video - less than $120. I have 3 that I got at the end of last year - tricked out with 16GB DDR4 and 2x 480GB SATA - $250 each

edit: to clarify, it was $250 each


A few days ago I picked up a SFF Dell (precision T1700) with a Xeon E3, 32GB DDR3, small SSD + HDD, 11 USB ports, and a thunderbolt pcie card. All for $200.

The CPU is roughly equivalent to a 6th or 7th gen i5, but with 8 threads instead of 4.

Installed proxmox with 1 VM and several containers. Couldn't be happier with everything. My wife can't even hear it under decent load in the living room.


I got a rack full of Dell and HP blade servers off eBay years ago - nice gear - dual quad-core Xeons with 48GB RAM, SSDs, and gigabit networking...

Thing is, running all 16+ blades pushes my electric bill over $500/month, and it sounds like a fighter jet getting ready to take off :-P

Having silent gear is totally underrated!


Picked up a HP Z230 SFF with a Xeon E3 a month ago. Fits two hard drives, a sata ssd, two nvme, as well as a USB-C Pcie adapter. It runs Ubuntu 19.10 with ZFS, and VMs using libvirt and hand defined XML.


Anyone with a similar homelab setup - do you use the free ESXi 5.5 or the more recent versions, which are quite pricey? If 5.5, how do you orchestrate it? The vSphere Client on Windows is awful.

> I discovered the Ethernet NIC wasn’t working.

> More Ethernet issues

Likewise with 5.5, where enabling the ethernet is often nontrivial (e.g. injecting the drivers for the Realtek NICs).


ESXi is basically free. You just can't manage your ESXi hosts as a cluster using vCenter, which gives you things like vMotion, or back your VMs up using the vCenter API.

Also, the Windows client was dropped a while ago and the Flash version is now deprecated.


I think I signed up for a trial licence and got 6.7 u3, but I can’t seem to find a link. It’s been running a while so it isn’t a 60 day one.

VMware's website is utterly impenetrable and it's like a form of hell trying to get out of the loops you end up in with licence agreements and download options.


> VMware's website is utterly impenetrable and it's like a form of hell trying to get out of the loops

Oh man, yes. The only way to download the installation binary of something specific was a direct link to VMware's servers found somewhere on the internet.


I torrented it the last few times. I have a licence and am unclear if this is a breach of TOS.


I have an older NUC that I've been repaving as needs change, using it as a build agent for some projects, a homelab to experiment with Hasura, etc. It's always done well, but its perf relative to other options would never make me consider using it as a type 1 hypervisor like this. I may need to pick up a NUC10I7FNH.


For my setup I was running ESXi on a 2012 i7 Mac Mini with 16GB of RAM

At the time I bought it (2013?) it was far better value for money than any of the Intel NUCs.

I've always wanted someone to produce a NUC in a similar form factor, with good design, but there doesn't seem to be much demand.

And alas, the current Mac Minis are completely un-upgradable.


You can upgrade the memory on the 2018 mini, but yes that's it.


I got a NUC8i7BEH with ESXi. Works great; I run a dev lab out of it, publicly facing, VLAN'd off on my home fiber.

The only negative I ran into: I bought the bigger one to fit both NVMe and a regular SSD, but I couldn't fit the SSD in alongside an NVMe drive with a heatsink :-/ Besides that, this thing is snappy and uptime has been perfect.


I have an old Intel NUC D54250WYKH which I run ESXi on as my core applications server and where all my inbound homelab traffic gets routed through. There's a single CentOS VM with Docker installed which has all the host resources allocated to it.

Currently running the following containers-

  proxy [1] - Nginx proxy host for all ingress traffic, has a nice web interface and works with Let's Encrypt
  pihole [2] - Primary DNS server running Pi-hole to block ads
  cf-ddns [3] - Client to update Cloudflare records with my IP address (wildcard record *.myhomelab.dev which I run the proxy hosts through)
  unifi-controller [4] - UniFi Controller appliance
  watchtower [5] - Keeps the docker images up-to-date
  portainer [6] - Web interface for managing docker containers
  guacamole [7] - Apache Guacamole remote desktop gateway
  postfix-relay [8] - Open SMTP relay on my internal network which forwards everything to Amazon SES, makes email notifications easier
It's a great piece of kit and I love the small form factor; it sits in my network comms cabinet and I've had no issues. I pass through a monitoring cable from a UPS in the cabinet to the CentOS VM, which monitors for a power cut and shuts the ESXi host down safely.

I initially ran PhotonOS with Docker but had some networking issues so switched to CentOS 7. I have larger, more powerful SuperMicro hypervisor hosts which run the bigger application containers.

[1] https://hub.docker.com/r/jc21/nginx-proxy-manager
[2] https://hub.docker.com/r/pihole/pihole
[3] https://hub.docker.com/r/joshava/cloudflare-ddns
[4] https://hub.docker.com/r/linuxserver/unifi-controller
[5] https://hub.docker.com/r/containrrr/watchtower
[6] https://hub.docker.com/r/portainer/portainer
[7] https://hub.docker.com/r/oznu/guacamole
[8] https://hub.docker.com/r/simenduev/postfix-relay


Why run ESXi at all and not just install CentOS directly on the NUC?


The portability of being able to lift a VM off the hardware and onto another piece of hardware is a nice feature when you are using bargain consumer hardware for your "servers". The overhead is pretty minimal.

I'm not currently doing it with my NUC servers, but it's a pretty good choice to do so, and ESXi is free if you don't need to, you know, manage it in any competent way.


Yes, the Thunderbolt 1 to Ethernet adapter works with ESXi. I use it with my Mac Mini 2011 running ESXi.


That was quite a thorough and interesting post, I was pleasantly surprised! :)


Thanks! By the way, I noticed this video https://www.youtube.com/embed/WkA0dMudUG0 showing TempleOS didn't make it in when building from Markdown to HTML.

* I'm going to fix this later today + some misspellings.


After playing around with ESXi, Proxmox, and Hyper-V, I converged my mini-homelab to:

- Any linux-based OS, Ubuntu in this case.

- Ansible / Terraform for task automation.

- K8s / Docker for containerization.

Given I have only 1 desktop + laptop, this option is good enough for me to learn cluster setup / networking / automation.


[flagged]


He means the shell of ESXi, i.e. its console. As the article says, the ESXi 'shell' is not a full shell. ESXi is designed to be managed via the web interface or via remote command-line tools.


By the way, it's not my take (I'm the OP), it's VMware ESXi's official take on the matter. I don't think my take is very relevant because I don't have experience with it.

Well... I sure love CLIs and shells... :)


That's true for the OS, but as an application, VMware products are perfectly happy to be interacted with via PowerCLI. And if you're still running an older version with that godforsaken Flash interface, it's really preferable.


Why is this better than running VirtualBox in a host OS?


6 cores and 64GB max memory? It’s hardly enough to run anything.


O_o what the heck are you trying to run in a home lab where that isn't enough? I currently run 20+ containers per NUC in my home lab. I have plenty of capacity left and nothing else to really run...


I think sqldba's post was sarcasm. As in, once you open Slack, Gmail and a few Chrome tabs, poof there goes 10GB of RAM.


It's enough to run a lot of stuff.



