Close but not quite -- sibling hyperthreads (logical cores) share cache state, while separate physical cores do not share L1/L2 cache state. Different processes, threads, or VMs on sibling hyperthreads (by definition on the same physical core) can infer each other's memory access patterns from that shared cache state.
If the attacker is pinned to one hyperthread and the victim is pinned to another that isn't its sibling, none of the Spectre attacks will work, since the cache state isn't shared.
As an attacker with code execution on the machine, you can theoretically play games with the OS scheduler until you're running on the sibling hyperthread of your victim thread/process/VM.
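A minimal sketch of the pinning half of this on Linux, assuming an x86 box; the CPU numbers and sibling layout are hypothetical and would have to be read from /sys/devices/system/cpu/cpuN/topology/thread_siblings_list on the real machine. Pinning only controls where *you* run; whether the victim lands on the sibling is still up to the scheduler.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Pin the calling thread to one logical CPU (hyperthread). */
    static void pin_to_cpu(int cpu) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            exit(1);
        }
    }

    int main(void) {
        /* Hypothetical layout: logical CPUs 0 and 4 are siblings on the
         * same physical core. If the victim is believed to run on CPU 0,
         * pinning ourselves to CPU 4 puts us on its sibling hyperthread. */
        pin_to_cpu(4);
        printf("now running on CPU %d\n", sched_getcpu());
        return 0;
    }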
That's not true. Spectre works because speculative execution leaks memory contents through a microarchitectural side channel; that memory can be in use by any thread, not just the ones on sibling hyperthreads.
The side channel for most of the Spectre variants is the latency difference between cache hits and misses on particular cache lines. The L1 and L2 caches are local to a physical core. As far as I know, nobody has made any of the Spectre variants work by measuring the latency of L3 misses -- the L3 is shared within a NUMA node, if I understand correctly -- but I'd love to hear otherwise.
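A rough sketch of that hit-vs-miss timing on x86, assuming rdtscp and clflush are available and using GCC/Clang intrinsics; the actual cycle counts vary by microarchitecture, and any threshold an attack uses would have to be calibrated per machine.

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtscp, _mm_clflush, fences */

    static uint8_t probe[4096];

    /* Time a single load of *addr in cycles, fencing so the load retires
     * before the second timestamp is taken. */
    static uint64_t time_load(volatile uint8_t *addr) {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;
        _mm_lfence();
        return __rdtscp(&aux) - start;
    }

    int main(void) {
        volatile uint8_t *p = probe;

        (void)*p;                 /* warm the line: next load should hit  */
        uint64_t hit = time_load(p);

        _mm_clflush(probe);       /* evict the line: next load should miss */
        _mm_mfence();
        uint64_t miss = time_load(p);

        /* Typically a hit is tens of cycles and a miss is hundreds. */
        printf("hit: %llu cycles, miss: %llu cycles\n",
               (unsigned long long)hit, (unsigned long long)miss);
        return 0;
    }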
The most recent round of Spectre-class variants leaked data through the line fill buffers and other structures that are local to a physical core's memory subsystem.