
Datacenter interconnects have gotten way better relative to Infiniband over the past five years or so. Stuff like FPGAs doing data-plane routing, and all of the "converged Ethernet" standards like RoCE, have really narrowed the gap between Ethernet and Infiniband.
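(For concreteness, the "gap" people usually mean is small-message latency, and the standard way to measure it on either fabric is an MPI ping-pong microbenchmark. Here's a minimal sketch, assuming an MPI implementation such as Open MPI built against whichever interconnect you're testing; the message size and iteration count are arbitrary illustration values, not tuned settings:

    /* Minimal MPI ping-pong latency sketch.
     * Build: mpicc pingpong.c -o pingpong
     * Run:   mpirun -np 2 ./pingpong  (one rank per node to exercise the fabric) */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        const int iters = 10000;
        const int nbytes = 8;        /* small message, so the timing is latency-bound */
        char *buf = malloc(nbytes);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)  /* each iteration is a round trip, hence the factor of 2 */
            printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Run over plain TCP Ethernet, RoCE, and IB on the same hardware generation and you see exactly how much of the gap has closed.)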


However much Ethernet has changed, it's not appropriate for typical HPC use (cf. IB, OPA, Aries, ...). For instance, I investigated Cisco's UCS at some length after they persuaded an important person to buy it, and UCS is at least partly the baby of an MPI person. (I recall Infiniband was originally designed as a "datacentre" thing.)


Can you expand on why?



