
> Any company (let alone Fly) doing this won't go against Nvidia Enterprise T&C?

Good question for a lawyer. All the more reason (beyond the maintenance cost) that this would best be done by Nvidia itself. The qCUDA paper cites a couple dozen references on API remoting research: https://www.cs.nthu.edu.tw/~ychung/conference/2019-CloudCom....
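For readers unfamiliar with API remoting: instead of emulating GPU hardware, the guest serializes API calls and ships them to a host-side stub that executes them and returns results. Here's a toy sketch of the idea in Python; all names here (`host_stub`, `guest_call`, the `vector_add` "API") are invented for illustration, and real systems like qCUDA forward CUDA calls over virtio/shared memory rather than JSON over a socket.

```python
import json
import socket
import threading

def host_stub(conn):
    """Host side: receive one serialized 'API call' and execute it."""
    # Stand-in for the real API surface (e.g. CUDA runtime calls).
    api = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)]}
    req = json.loads(conn.recv(4096).decode())
    result = api[req["fn"]](*req["args"])
    conn.sendall(json.dumps({"result": result}).encode())

def guest_call(conn, fn, *args):
    """Guest side: marshal the call, send it, block for the reply."""
    conn.sendall(json.dumps({"fn": fn, "args": list(args)}).encode())
    return json.loads(conn.recv(4096).decode())["result"]

# A socketpair stands in for the guest<->host transport.
guest, host = socket.socketpair()
t = threading.Thread(target=host_stub, args=(host,))
t.start()
out = guest_call(guest, "vector_add", [1, 2, 3], [10, 20, 30])
t.join()
print(out)  # [11, 22, 33]
```

The appeal of this approach is that only the API boundary has to be virtualized, not the device; the cost is that the remoting layer must track the API it forwards.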

> If it has been contributed to QEMU, it isn't GPL/LGPL?

Not contributed, but integrated with QEMU by commercial licensees. Since the GPGPU emulation code isn't public, presumably it's a binary blob.

> I hope I'm not talking to DeepSeek / DeepResearch (:

Will take that as a compliment :) Not yet tried DS/DR.



NVIDIA support is not special as far as QEMU is concerned: the special parts are all in their proprietary device driver, and they talk to QEMU via the VFIO infrastructure for userspace drivers. They just reimplemented the same thing in Cloud Hypervisor.
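Concretely, the hypervisor is just a VFIO userspace client: the host detaches the device from its native driver and binds it to vfio-pci, and QEMU (or Cloud Hypervisor) maps it into the guest. A rough sketch, with a hypothetical PCI address (these commands need root and real hardware, so treat them as illustrative):

```shell
# Load the generic VFIO PCI driver.
modprobe vfio-pci

# Detach the GPU (hypothetical address 0000:65:00.0) from its host driver
# and steer it to vfio-pci via driver_override.
echo 0000:65:00.0 > /sys/bus/pci/devices/0000:65:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:65:00.0/driver_override
echo 0000:65:00.0 > /sys/bus/pci/drivers_probe

# Hand the device to a guest; either hypervisor consumes it the same way.
qemu-system-x86_64 -enable-kvm -device vfio-pci,host=0000:65:00.0 ...
```

Nothing in that flow is NVIDIA-specific, which is the point: the proprietary logic lives in the driver on either end, not in the hypervisor.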

Red Hat, for one, doesn't ship any functionality that isn't available upstream, much less proprietary code, and they have large customers using virtual GPUs.


> NVIDIA.. just reimplemented the same thing in Cloud Hypervisor.

Was that recent, with MIG support for GPGPU partitioning? Is there a public mailing list thread or patch series for that work?

Nvidia has a 90-page deployment guide for vCS ("Virtual Compute Server") on Red Hat KVM: https://images.nvidia.com/content/Solutions/data-center/depl...
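For reference, MIG partitioning itself is driven through nvidia-smi rather than the hypervisor. A sketch of the usual flow (profile IDs vary by GPU model; `9` is assumed here and needs root plus a MIG-capable GPU, so these commands are illustrative only):

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
nvidia-smi -i 0 -mig 1

# Create two GPU instances from an assumed profile ID, with default
# compute instances (-C). Run `nvidia-smi mig -lgip` to list real IDs.
nvidia-smi mig -cgi 9,9 -C

# List GPUs and the resulting MIG devices.
nvidia-smi -L
```

Each resulting MIG device can then be passed to a VM through VFIO like any other partition.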


Not NVIDIA; fly.io reimplemented the parts that CH didn't already have. I know that CH is developed on GitHub but I don't know whether the changes are public or in-house.

That said, the slowness of QEMU's startup is routinely exaggerated. Whatever they did with CH they could have done with QEMU.



