Not only do people do this; it's generally how VPS providers work. Most machines barely use the CPU most of the time (web servers, etc.), so reserving a full CPU core for a VPS is horribly inefficient. It doesn't matter anyway, because SMT isn't relevant for this particular bug.
SMT is relevant in the VM case of this bug, because it determines whether the bug can reach data outside the VM or not.
Providers usually won't disable SMT completely; instead they'd run a scheduler that only allows one VM to use both SMT threads of a core. Ultra-cheap VPS providers may still find that not worth the pennies, though: if you mostly sell single-core VPSes, then the majority of your SMT threads are still unavailable even with the scheduler approach.
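For reference, the mechanism for this on Linux is core scheduling: threads get tagged with a cookie, and the kernel only co-schedules threads with matching cookies on the SMT siblings of a core. A rough sketch of how a host could tag a VM's vCPU threads (needs kernel 5.14+; error handling and the actual thread spawning are elided, and real hypervisors wire this up differently):

    /* Sketch only: tag this process (and the vCPU threads it spawns,
     * which inherit the cookie) with a core-scheduling cookie so the
     * kernel never co-schedules it on SMT siblings with anyone else. */
    #include <sys/prctl.h>
    #include <linux/prctl.h>
    #include <stdio.h>

    int main(void)
    {
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                  PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0) != 0) {
            perror("PR_SCHED_CORE");  /* needs CONFIG_SCHED_CORE */
            return 1;
        }
        /* ... spawn vCPU threads here; they share the cookie ... */
        return 0;
    }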
Fully dedicated cores aren't necessarily required because in the timesliced case the registers are unloaded and reloaded when different VMs are shuffled on and off the core. That said, they definitely prevent the cross-vm-data-leak case of this bug.
> Fully dedicated cores aren't necessarily required because in the timesliced case the registers are unloaded and reloaded when different VMs are shuffled on and off the core. That said, they definitely prevent the cross-vm-data-leak case of this bug.
Registers are unloaded and reloaded when different processes / threads are scheduled within a running VM, too. That should protect the register contents, but because of this issue it doesn't, so I don't see why it would when it's a hypervisor switching VMs instead of an OS switching processes. If you're running a vulnerable processor on vulnerable microcode, it seems like you can potentially read things put into the vulnerable registers by anything else running on the same physical core, regardless of context.
Context switching for processes is done in software (i.e. by the OS) via traps, because the TSS doesn't store all the registers and doesn't offer a way to be selective about what the process actually needs to load (= slower). This limits its visibility to what's in the actively mapped registers, and it doesn't guarantee the procedure even tries to reload all the registers. In this case, even if the OS does restore certain registers, it has no way to know the processor left specific bits of one speculatively set in the register file.
On the other hand, "context switching" for VMs is done via hardware instructions like VMSAVE/VMLOAD or VMWRITE/VMREAD, which do save/load the entire guest register context, including the hidden context not accessible by software that this CVE relies on. Not that it's impossible for this to be broken as well, but it's a completely different procedure, and one the hardware is actually responsible for completely clearing instead of "supposed to be reset by software".
So while the CVE still affects processes inside of VMs, the loading/unloading behavior between VMs should actually behave as a working sandbox and protect against cross-VM leaks, barring the note by lieg about SMT still possibly being a problem (I don't know enough about how the hardware maintains the register table between SMT threads of different VMs to say for sure, but I'm willing to guess it's still vulnerable on register remappings).
There may well be other reasons I'm completely mistaken here, but they'd have to explain why the inter-VM context restore is broken, not just why the inter-process restore is. The article already explains why the latter happens, but it doesn't make a claim about the former.
I can't easily find good documentation on the instructions you mentioned, but are you sure those save and load the whole register file, and not just the architecturally visible registers? There are some registers that aren't typically explicitly visible that I'd expect to also be saved, or at least be manipulable from a hypervisor; but just like the cache state isn't saved, I wouldn't expect the register file to be saved.
If we assume the register file isn't saved, just the visible registers, what's happening is the visible registers are restored, but the speculative dance causes one of the other values in the register file to become visible. If that's one of the restored registers, no big deal, but if it was someone else's value, there's the exploit.
If you look at the exploit example, the trick is that when the register rename happens, you're re-using a register file entry, but the upper bits aren't actually cleared; the hardware just sets a flag indicating they're zero. Then when the mispredicted vzeroupper is rolled back, the flag is unset and the upper bits of the stale register file entry are revealed.
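To make that concrete, here's a rough sketch of the shape of the trigger in C intrinsics. This is only an illustration of the three ingredients just described, NOT a working exploit; the real PoC does this in hand-tuned assembly and actually measures the leaked bytes:

    #include <immintrin.h>
    #include <stdio.h>
    #include <stdint.h>

    /* Sketch only: (1) a YMM register whose upper half gets "zeroed" by
     * vzeroupper, which on the affected parts just sets a zeroed flag on
     * the register-file entry; (2) that vzeroupper sits on a path the
     * branch predictor was trained to take; (3) a rare misprediction
     * whose rollback clears the flag again, so the upper bits read back
     * from a stale register-file entry that may hold someone else's data. */
    int main(void)
    {
        uint8_t out[32];
        for (int i = 0; i < 100000; i++) {
            volatile int take = (i % 1000 != 0);   /* train the predictor */
            __m256i v = _mm256_set1_epi8(0x41);    /* fill all 256 bits   */
            if (take)
                _mm256_zeroupper();  /* rarely mispredicted, then rolled back */
            _mm256_storeu_si256((__m256i *)out, v);
        }
        /* A real PoC would scan here for bytes that are neither 0x41 nor 0x00. */
        printf("upper-lane byte: %02x\n", out[16]);
        return 0;
    }

(Compile with -mavx; register choices and the training loop are simplified.)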
Reading more, the VM* instructions definitely load/save more than just the normally visible registers; the descriptions in the AMD architecture manual are very explicit about that. However, it looks like (outside the encrypted-guest case, where everything is done in one instruction) the hypervisor still calls the typical XRSTOR for the float registers, which is no different from the normal OS case. If that's true, then I can see how the register file is still contaminated in the non-SMT case.
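For anyone who wants to see what that save/restore path looks like, the userspace-visible shape of it is the XSAVE/XRSTOR pair. A trivial sketch (compile with -mxsave -mavx; the component mask and area size are simplified, real code sizes the area via CPUID leaf 0xD):

    #include <immintrin.h>
    #include <stdint.h>

    /* Sketch only: the shape of the float-register save/restore being
     * discussed. Note what it does NOT touch: the physical register file
     * behind the renamed registers, which is why restoring architectural
     * state doesn't scrub the stale entries this bug reads. */
    static uint8_t area[4096] __attribute__((aligned(64)));  /* XSAVE area */

    void save_restore_fpu(void)
    {
        const uint64_t mask = 0x7;  /* x87 | SSE | AVX state components */
        _xsave(area, mask);         /* save architectural FPU/vector state */
        /* ... run someone else's code ... */
        _xrstor(area, mask);        /* restore it; register file untouched */
    }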
Well, you don't have to reserve any CPU cores per VM. There's no law saying you can't have more VMs than logical cores. They're just processes, after all, and we can have thousands of them.
Of course not, but the vulnerability works by exploiting the shared register file, so to mitigate this entire class of vulnerabilities you'd need to dedicate a CPU core, and as much of its associated cache as possible, to a single VM.
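Pinning is the easy half of that. A rough sketch of pinning a vCPU thread to one core's two hardware threads (the CPU numbers here are made up; the real sibling pairing comes from /sys/devices/system/cpu/cpuN/topology/thread_siblings_list):

    #define _GNU_SOURCE
    #include <sched.h>

    /* Sketch only: pin a vCPU thread to one physical core's two hardware
     * threads. CPUs 2 and 18 are hypothetical; look the pairing up in
     * sysfs on the actual machine. */
    int pin_vcpu_thread(pid_t tid)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        CPU_SET(18, &set);
        return sched_setaffinity(tid, sizeof(set), &set);
    }

Dedicating the cache is much harder; you'd need something like resctrl/CAT to partition it, and even then it's only approximate.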
Since SMT effectively gives you twice the schedulable cores on a CPU for most workloads, disabling it would roughly double the cost for most providers!
There are VPS providers that will let you rent dedicated CPU cores, but they often cost 4-5x more than a normal virtual CPU. Overprovisioning is how virtual servers are available for cheap!