This is quite interesting. Are there major performance benefits to using something like AtmanOS? My main concern is whether something like this will ever be able to run bare metal and cut out the overhead of a hypervisor; if performance is critical, a dedicated box is the likely choice.
Then again, this brings us to the same issues that the *BSD and Darwin kernels have: really lackluster driver availability. While a big corporation like Sony might get AMD to build it a performant GPU driver, the tragedy-of-the-commons situation recurs with BSD-licensed projects: major improvements aren't reliably contributed upstream, because there's no business reason to do so.
That is why you use Xen as a base. Xen lets the dom0 (primary domain) handle the device drivers and then gives guests access to them via paravirtualised interfaces. In other words, you get Linux driver compatibility, and then you can run any OS that has paravirtualised drivers written for it.
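As a rough illustration of that split (the guest name, file paths, and bridge name here are hypothetical), a minimal xl guest config along these lines gives the guest a disk and a NIC that are really just paravirtualised frontends, with dom0's real drivers doing the work:

```
# /etc/xen/app-vm.cfg -- hypothetical minimal PV guest config
name   = "app-vm"
kernel = "/var/lib/xen/app-vm/kernel"     # unikernel or PV-capable kernel image
memory = 512
vcpus  = 1
# Virtual disk: dom0's block driver performs the actual I/O
disk   = ["file:/var/lib/xen/app-vm/disk.img,xvda,w"]
# Virtual NIC: bridged through dom0's network stack
vif    = ["bridge=xenbr0"]
```

The guest never touches the real SATA controller or NIC; it only speaks the PV disk/net protocols to dom0.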
Generally, if you are going to use something like this or MirageOS, you will choose to pass through certain real hardware to the guest and write drivers for it.
Say, for instance, you pass through an Intel NIC and then have your application embed a TCP stack and DPDK-like components, so that the application can run at line rate.
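Concretely (the guest name and PCI address below are hypothetical), the passthrough-plus-userspace-driver setup looks something like:

```
# In dom0: hand the Intel NIC to the guest
xl pci-attach app-vm 0000:03:00.0

# In the guest: unbind the NIC from the kernel driver
# and hand it to DPDK's userspace vfio-pci driver
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
```

After that, the application's poll-mode driver owns the NIC directly; packets never traverse the guest kernel's network stack, which is what makes line rate feasible.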
Not trying to be stupid, but I'm not really clear on what the advantage of this system is. Do I understand correctly that the dom0 OS is still providing drivers and hardware abstraction? So both that and some userland exist somewhere in the stack; it's just not duplicated in the virtualized OSes?
Is the main goal simplicity, performance, or something else?
Userland isn't duplicated; there are two totally separate userlands. In the Xen case they may be the same distro (for example, RHEL 7), but they still lead separate lives: they are different installations. So that's also multiple userlands to which you have to deploy patches (RPMs, Debs, etc.), for example for your SSH install.
And yes, I see some value in the reduction of maintenance costs, but you're not doing this manually: you have a package manager of some sort, and you're automatically tracking some standard image (either your company's own or one from an upstream maintainer like Canonical). So conceptually I get the simplicity argument, but practically speaking, it's not really more work to maintain two userlands vs. one, right?
I guess there's also an argument about the resource overhead (disk, memory footprint, etc.) of the second userland. It's not clear to me how significant that is, which was part of my question.