
You need more like 50 Gbps to saturate a modern Nvme SSD.

10 Gbps doesn't come close.




10 Gbps does not, but 10 GBps, as written above, is 80 Gbps, which matches your estimate.
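The whole confusion in this subthread is bytes vs bits, which is a one-line conversion (1 byte = 8 bits):

```python
# GB/s (gigabytes) vs Gb/s (gigabits): multiply by 8 to go from bytes to bits.
gbytes_per_s = 10
gbits_per_s = gbytes_per_s * 8
print(gbits_per_s)  # 80 -- so "10 GBps" is an 80 Gbps link
```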


I've tried the same method and it's about 10 Gbps, not 10 GBps.


Sounds like you were actually using USB 3.1 Gen 2 or USB 3.2 Gen 2[x1], not Thunderbolt 4.


Thunderbolt 4 can't do 80 Gbps either.


I tried it with M2 Max MacBooks. Definitely TB4/USB4 capable.


With a certified TB4 cable?


Yes.


There was a moment, back in the spinning-rust era, when it made sense to keep storage in a networked device rather than locally, but now it rarely makes sense unless an incredibly fast interconnect is used.

Of course this example is still interesting and cool.


For a couple of years I had a Linux NAS box under my desk with like 8 Samsung 850 pros in a big array connected to my desktop over 40GbE. Then NVMe became a common thing and the complexity wasn't worthwhile.


InfiniBand can match NVMe bandwidth, and its latency is similar. Newer network cards can also present an NVMe-oF drive as a local NVMe drive.


Yes, that was my point - 10 Gbps is just way too slow; even full Thunderbolt bandwidth can easily be saturated by a RAID configuration. NVMe drives are just incredibly fast.


25 Gb Ethernet roughly matches a PCIe Gen 3 NVMe drive’s max throughput; 50 Gb will match Gen 4. These are RDMA capable.

It seems 25 Gb dual-port Mellanox CX-4 cards can be found on eBay for about $50. The cables will be a bit pricey. If not doing back-to-back, the switch will probably be very pricey.


There are 100 Gb/s Intel Omni-Path switches currently on eBay for cheap:

https://www.ebay.com/itm/273064154224

And yep, they do apparently work ok if you're running Linux. :)

https://www.youtube.com/watch?v=dOIXtsjJMYE

Haven't seen info about how much noise they generate though, so not sure if suitable homelab material. :/


I saw that, but didn’t consider it particularly cheap. Also, the power draw of these things is likely a concern if run continuously.


Yeah, power draw could be a problem. :(

Cheap though... it's a small fraction of the price for a new one.

Are there better options around?

Looking at eBay just now, I'm seeing some Mellanox IB switches around the same price point. Those things are probably super noisy though, and IB means more mucking around (needing an Ethernet gateway for my use case).


It can match a single drive.


Assuming jumbo packets are used with RoCE, every 4096 bytes of data will have 70 bytes of protocol overhead [1]. This means that a 25 Gb/s Ethernet link can deliver no more than 3.07 GB/s of throughput.

Each lane of PCIe Gen 3 can deliver 985 MB/s [2], meaning the typical drive that uses 4 lanes would max out at 3.9 GB/s. Surely there is some PCIe/NVMe overhead, but 3.5 GB/s is achievable if the drive is fast enough. There are many examples of Gen 4 drives that deliver over 7 GB/s.

Supposing NVMe-oF is used, the NVMe protocol overhead over Ethernet and PCIe will be similar.

1. https://enterprise-support.nvidia.com/s/article/roce-v2-cons...

2. https://en.wikipedia.org/wiki/PCI_Express
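The arithmetic above can be sanity-checked with a quick sketch (payload/overhead figures taken from the RoCE article in [1], per-lane rate from [2]):

```python
# RoCE v2 with jumbo frames: ~70 bytes of protocol overhead
# per 4096-byte payload on a 25 Gb/s link.
link_gbps = 25
payload, overhead = 4096, 70
efficiency = payload / (payload + overhead)
goodput_gb_per_s = link_gbps / 8 * efficiency
print(round(goodput_gb_per_s, 2))   # 3.07 GB/s of deliverable data

# PCIe Gen 3: 985 MB/s per lane; a typical NVMe drive uses 4 lanes.
lane_mb_per_s = 985
drive_gb_per_s = lane_mb_per_s * 4 / 1000
print(round(drive_gb_per_s, 2))     # 3.94 GB/s before PCIe/NVMe overhead
```

So a 25 Gb/s link comes in just under a Gen 3 x4 drive's ceiling, which is why it "roughly matches" rather than fully keeps up.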



