There was a moment, back when spinning rust was the norm, when it made sense to put storage in a networked device rather than locally. Now it rarely makes sense unless an incredibly fast interconnect is used.
Of course this example is still interesting and cool.
For a couple of years I had a Linux NAS box under my desk with like 8 Samsung 850 pros in a big array connected to my desktop over 40GbE. Then NVMe became a common thing and the complexity wasn't worthwhile.
Yes, that was my point: 10 Gbps is just way too slow. Even full Thunderbolt bandwidth can easily be saturated by a RAID configuration; NVMe drives are just incredibly fast.
25 Gb Ethernet roughly matches a PCIe Gen 3 NVMe drive's max throughput; 50 Gb matches Gen 4. Both are RDMA capable.
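A rough back-of-envelope comparison of nominal link rates against typical spec-sheet drive numbers (protocol overhead ignored here, and Thunderbolt's usable PCIe bandwidth is actually lower than its 40 Gb/s headline rate):

    # Nominal numbers only -- ignores Ethernet/RoCE/NVMe protocol overhead.
    links_gbps = {
        "10GbE": 10,
        "25GbE": 25,
        "Thunderbolt 3/4 (headline)": 40,
        "50GbE": 50,
    }
    drives_gbps = {
        "NVMe Gen 3 x4 (bus ceiling)": 3.9 * 8,  # ~3.9 GB/s
        "NVMe Gen 4 (fast drives)": 7.0 * 8,     # ~7 GB/s
    }

    for name, gbps in {**links_gbps, **drives_gbps}.items():
        print(f"{name:30s} ~{gbps / 8:5.2f} GB/s")

So 25GbE (~3.1 GB/s) sits just under a Gen 3 x4 drive, 50GbE (~6.25 GB/s) sits just under a fast Gen 4 drive, 10GbE (~1.25 GB/s) is nowhere close, and even a two-drive Gen 4 RAID 0 (~14 GB/s) would blow past Thunderbolt.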
It seems 25 Gb dual-port Mellanox CX-4 cards can be found on eBay for about $50. The cables will be a bit pricey. If not going back-to-back, the switch will probably be very pricey.
Cheap though... it's a small fraction of the price for a new one.
Are there better options around?
Looking at eBay just now, I'm seeing some Mellanox IB switches around the same price point. Those things are probably super noisy though, and IB means more mucking around (needing an Ethernet gateway for my use case).
Assuming jumbo frames are used with RoCE, every 4096 bytes of data carries 70 bytes of protocol overhead [1]. This means that a 25 Gb/s Ethernet link can deliver no more than about 3.07 GB/s of payload throughput.
Each lane of PCIe Gen 3 can deliver 985 MB/s [2], meaning the typical drive that uses 4 lanes would max out at 3.9 GB/s. Surely there is some PCIe/NVMe overhead, but 3.5 GB/s is achievable if the drive is fast enough. There are many examples of Gen 4 drives that deliver over 7 GB/s.
Supposing NVMe-oF is used, the NVMe protocol overhead over Ethernet will be similar to that over PCIe.
10 Gbps doesn't come close.
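For anyone who wants to check the arithmetic, here's a quick sketch of the numbers above (70 bytes of overhead per 4096-byte payload is the RoCE figure from [1]; 985 MB/s per Gen 3 lane is from [2]):

    # Sketch of the throughput math above, assuming jumbo frames with RoCE:
    # 70 bytes of protocol overhead per 4096 bytes of payload [1].
    def payload_gbs(link_gbps, payload=4096, overhead=70):
        """Usable payload throughput (GB/s) for a given Ethernet line rate."""
        return (link_gbps / 8) * payload / (payload + overhead)

    pcie_gen3_x4 = 4 * 0.985  # 985 MB/s per Gen 3 lane [2] -> ~3.94 GB/s ceiling

    for gbps in (10, 25, 50):
        print(f"{gbps} Gb/s Ethernet: ~{payload_gbs(gbps):.2f} GB/s payload")
    print(f"PCIe Gen 3 x4 ceiling: ~{pcie_gen3_x4:.2f} GB/s")

    # -> 10 Gb/s: ~1.23 GB/s, 25 Gb/s: ~3.07 GB/s, 50 Gb/s: ~6.14 GB/s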