23 years ago I sat in a meeting with Sun, Intel, Mellanox, and one or two others. In that meeting we discussed putting an RDMA interface on individual hard drives, trays of RAM, CPUs, and other more exotic devices (like battery-backed RAM; there were no conventional SSDs in those days, of course). You'd install RAM one 42U rack at a time, disks likewise, CPUs in another rack, and so on. All of it partitioned, controlled, managed, and of course billed for by a "data center OS".
Disaggregation costs an absolute fortune. The network is roughly 10% of total datacenter cost, and that network doesn't carry memory or PCIe traffic. If you make the network 10x faster to carry that traffic, the network alone now costs roughly as much as the entire rest of the datacenter combined.
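A quick back-of-the-envelope check, assuming the 10% baseline share stated above and that network cost scales roughly linearly with bandwidth (both simplifications):

```python
# Back-of-the-envelope: what happens to the network's share of datacenter
# cost if it must get ~10x faster to carry memory/PCIe traffic.
# Assumptions: network starts at 10% of total cost, and network cost
# scales roughly linearly with bandwidth.

baseline_total = 100.0            # normalize total datacenter cost to 100
network = 0.10 * baseline_total   # network starts at 10% of total
everything_else = baseline_total - network

scaled_network = 10 * network     # 10x the bandwidth at ~10x the cost
new_total = everything_else + scaled_network

print(f"network share of new total: {scaled_network / new_total:.0%}")      # ~53%
print(f"network vs. rest of datacenter: {scaled_network / everything_else:.1f}x")  # ~1.1x
```

Under those assumptions the network goes from a tenth of the bill to roughly half of it, i.e. it costs about as much as everything else put together.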
I was in the room for similar discussions as part of the Open Compute Project, which centered on remote I/O and resource disaggregation. Systems like this exist, and they may make sense for certain use cases, but generally speaking, hyperscalers today are built around virtualization, which doesn't align well with this model.