From my read, it counters Bryan Cantrill's claim that "unikernels are undebuggable."
From personal experience I'm also quite certain Bryan Cantrill's claim is spurious in that regard: I've used both debugging and tracing facilities with LING unikernels to assess a number of runtime and clustering issues.
Yup, if you're running unikernels in a VM for "improved performance", you're doing it wrong. Containers are a much better solution. Unfortunately, people now see containers as a packaging mechanism rather than as an alternative to hardware virtualization. It also doesn't help that everyone runs their applications in The Cloud, where you're required to run on a VM. I'd really like to see container services like Joyent's take off.
> I'd really like to see container services like Joyent's take off.
Interesting. I'd like to see a return to owning/renting whole machines and running containers on them directly (as several major tech companies do). A lot of the value proposition for the cloud service providers is a multi-host hypervisor that's really easy to get started with (compared to buying vSphere). If it becomes really easy to deploy your own instance of an open-source container scheduler across your own boxes, we can return to cost-competitive commodity boxes instead of overpaying for AWS.
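For flavor, here is what "deploy your own instance of an open-source container scheduler across your own boxes" can look like today. This is only a sketch with one possible scheduler (kubeadm); the CIDR is an example, and `<control-plane-ip>`, `<token>`, and `<hash>` are placeholders for values your own setup would produce.

```shell
# On the first commodity box, bring up a control plane
# (assumes kubeadm, kubelet, and a container runtime are installed).
kubeadm init --pod-network-cidr=10.244.0.0/16

# On every other box, join the cluster using the token and CA-cert
# hash that `kubeadm init` prints. The bracketed values below are
# placeholders, not real values.
kubeadm join <control-plane-ip>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Once joined, workloads get scheduled straight onto hardware you own or rent, with no per-instance hypervisor margin baked into the price.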
Agreed. Private cloud systems that leverage OS-level virtualization, like Joyent's solution, are completely production-ready from a software perspective. IMO, the biggest challenge past getting CTO approval is finding production 24/7 support for the complete stack.
For SMB private clouds, it's a no-brainer; I just buy a FreeNAS or TrueNAS box (FreeBSD-based) from iXSystems and have them customize the box with extra RAM and CPU. You get ZFS, DTrace, a wicked NAS, OS-level virtualization with jails, and even Linux binary support if required. I've done this successfully in production and could not be happier.
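For anyone curious what the jails part looks like in practice, a minimal classic `/etc/jail.conf` entry on FreeBSD is roughly this. The hostname, address, and path are made-up illustrative values, not from any real deployment:

```conf
# /etc/jail.conf -- minimal jail definition (illustrative values only)
www {
    host.hostname = "www.example.org";       # hypothetical hostname
    ip4.addr = 192.168.1.50;                 # hypothetical address
    path = "/usr/local/jails/www";           # jail's root filesystem
    exec.start = "/bin/sh /etc/rc";          # boot the jail's userland
    exec.stop  = "/bin/sh /etc/rc.shutdown"; # clean shutdown
    mount.devfs;                             # give the jail a devfs
}
```

Start it with `service jail onestart www` and you have a full FreeBSD userland running with no hypervisor involved.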
Note: I don't work for iXSystems but I do love their products and services.
> I'd really like to see container services like Joyent's take off.
Same here, especially since the zones that the Joyent containers are built on are really easy to use and provide entire miniature yet full-fledged UNIX servers; not to mention it's all open source and gratis, and the community is really competent.
Apart from disassembling the machine code, assuming one could even attach a debugger to such an application, how would you debug a unikernel application in production?
Just like debugging Java and .NET applications running on bare metal in production.
By having a rich runtime (kernel) that exposes its internals to the world, via tools like Mission Control, TraceViewer, and the respective debuggers, when the right set of flags/authentication is enabled.
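To make the Java side of this concrete, the kind of switches being referred to look roughly like the following. This is a sketch assuming a modern HotSpot JVM (JDK 11+); the port, recording filename, and `app.jar` are illustrative values, and a real deployment would also configure the JMX password/SSL files:

```shell
# Sketch: enabling production observability on a HotSpot JVM.
# -XX:StartFlightRecording turns on JDK Flight Recorder, whose
# recordings JDK Mission Control can open; the jmxremote properties
# expose the runtime's internals over JMX with auth and SSL enabled.
java -XX:StartFlightRecording=disk=true,filename=/var/log/app.jfr \
     -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=true \
     -Dcom.sun.management.jmxremote.ssl=true \
     -jar app.jar
```

The point is that the observability lives in the runtime itself, not in a general-purpose OS underneath it, which is exactly the situation a unikernel is in.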
Unikernels are no different from running embedded applications bare metal.
I know they are no different since they are one and the same. Now imagine your unikernel application is running inside of a vehicle and there is a bug in the head-up display code. Without re-inventing DTrace and kdb from Solaris / illumos from scratch, how would you debug your unikernel application in order to find and even fix the bug? (With kdb, it can be done on the fly.)
Inside of a vehicle, not yet; production example, yes.
Back in the day, there was a kernel bug which prevented the Solaris 10 installer from booting on systems with a Pentium III processor. The workaround was to boot the kernel with -k, let it crash and drop into kdb, patch a certain kernel structure, then drop out of the debugger and continue execution. The same mechanism would work in a car or in a Boeing 787, which I understand actually ran Solaris as its control & navigation system.
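For flavor, the on-the-fly patch in Solaris mdb/kmdb syntax looks roughly like this. The actual structure involved isn't named above, so `some_kernel_var` is purely a hypothetical stand-in, and the whole snippet is a sketch of the mechanism rather than the real fix:

```shell
# At the boot prompt, load the kernel with the debugger attached:
#   boot -k
# When the kernel crashes and drops into the debugger, overwrite the
# offending field and resume (`some_kernel_var` is a made-up name):
some_kernel_var/W 0    # /W writes a 32-bit value at that address
:c                     # continue kernel execution
```

That write-and-continue loop is the whole trick: no reboot, no rebuild, just live patching of kernel state.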
How are theoretical applications, designed to do just one specific thing, a threat to a fully featured hypervisor with 15,000 packages used in production?
From my read, the benefits do not outweigh the costs. If you want lightweight microservices, OS-level virtualization is the way to go.