VT-d in particular, paired with SR-IOV, gives you the ability to expose one piece of hardware (one that knows how to partition itself) as multiple devices on the PCIe bus: SR-IOV splits the device into virtual functions, and VT-d's IOMMU gives each function its own isolated DMA mapping. With cgroup support, a container could be assigned one of the split devices and act within the container as if it were the whole device. This is what regular hypervisors do, but they require a full set of virtualized devices (a virtual CPU, virtual memory, etc.), while this approach lets you virtualize only the resources your containers actually contend over.
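For concreteness, here's a minimal sketch (Python, assuming a Linux host with the IOMMU enabled and an SR-IOV-capable NIC; the interface name "eth0" is a placeholder) of the standard sysfs knob that splits one physical NIC into several PCIe virtual functions:

```python
# Sketch: enable SR-IOV virtual functions on a NIC via Linux sysfs.
# Assumes root, a VT-d/IOMMU-enabled host, and an SR-IOV-capable device;
# "eth0" is a hypothetical interface name.
from pathlib import Path

def enable_vfs(iface: str, num_vfs: int) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # The kernel requires resetting the count to 0 before changing it.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs("eth0", 4)  # each VF then shows up as its own PCIe function
```

Once created, each VF appears in lspci as a separate device that can be bound to its own driver or handed off individually.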
So you could have, say, one virtual ethernet card per container (letting you run a container as a promiscuous-mode packet filter for its own VPC subnet, while still being unable to snoop on other VPCs' traffic) or one virtual GPU per container (letting you containerize OpenCL apps), while your containers otherwise act like regular processes.
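As a sketch of the per-container NIC idea, moving a VF's network interface into a container's network namespace makes it look like a dedicated card from inside (the interface name and PID here are hypothetical; in practice you'd look up the container's init PID from your runtime, e.g. via `docker inspect`):

```python
# Sketch: hand an SR-IOV VF's netdev to a running container by moving it
# into the container's network namespace with iproute2.
import subprocess

def assign_vf_to_container(vf_iface: str, container_pid: int) -> None:
    # After this, the VF is visible only inside the container's netns,
    # where it behaves like a dedicated physical NIC.
    subprocess.run(
        ["ip", "link", "set", vf_iface, "netns", str(container_pid)],
        check=True,
    )

assign_vf_to_container("enp1s0f0v0", 12345)  # placeholder iface and PID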
In order to get one virtual slice of a device per container, I just need the device's driver to support that, or I need a layer on top of the driver that partitions the device. Since the different cgroups share the same kernel, and thus the same set of drivers, I see no advantage in splitting physical devices. What am I missing?