I don't think it's because MPI is better; it's just that most of the supercomputers I have access to require the use of MPI-like constructs.


And that's often because the network hardware understands MPI and can move data between nodes at far lower latency than TCP.
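
For illustration, here's a minimal sketch of what those MPI constructs look like from the application side (a hypothetical ring example, not from this thread). The point is that the same MPI_Sendrecv call runs unchanged whether the library maps it onto TCP or onto an RDMA-capable fabric like InfiniBand; the application never touches sockets.

    /* Hypothetical sketch: pass a token around a ring of ranks.
     * Build: mpicc ring.c -o ring   Run: mpirun -np 4 ./ring
     * The wire protocol underneath is chosen by the MPI implementation
     * (verbs/RDMA on InfiniBand, TCP otherwise). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int next = (rank + 1) % size;
        int prev = (rank + size - 1) % size;
        int token = rank, received = -1;

        /* Send my rank to the next node, receive from the previous one. */
        MPI_Sendrecv(&token, 1, MPI_INT, next, 0,
                     &received, 1, MPI_INT, prev, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d got token from rank %d\n", rank, received);
        MPI_Finalize();
        return 0;
    }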


That's really cool. Source?


I used to work in HPC. The Mellanox gear, specifically InfiniBand, is very good.

Fun fact: if you're working at a Saudi Arabian HPC center, say KAUST, your interconnects are purely Ethernet. Mellanox is (partially?) an Israeli company, and that makes procurement politically uncomfortable.


Better than what? Not necessarily disagreeing, but I'm not sure what the alternatives even are at the same level of abstraction. I mean, there's PNNL's Global Arrays [1], but that's higher level, or Sandia's Portals [2], which is lower/transport level. Perhaps there are newer/alternative options I don't know about?

[1] http://hpc.pnl.gov/globalarrays/

[2] http://www.cs.sandia.gov/Portals/portals4-libs.html


Global Arrays is normally run over MPI anyhow. I guess there's SHMEM, but that's integrated with at least OpenMPI (and others, I think). Charm++ has been used at scale, but it's semi-proprietary.
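
For the curious, here's a minimal OpenSHMEM sketch (hypothetical, assuming a standard OpenSHMEM 1.2+ install such as the one shipped with OpenMPI) showing the one-sided, PGAS-style puts that set it apart from MPI's two-sided send/recv:

    /* Hypothetical sketch: each PE writes its number into the next PE's
     * symmetric buffer with a one-sided put.
     * Build: oshcc ring_put.c -o ring_put   Run: oshrun -np 4 ./ring_put */
    #include <shmem.h>
    #include <stdio.h>

    int main(void) {
        shmem_init();

        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric allocation: the same address is valid on every PE. */
        long *slot = shmem_malloc(sizeof(long));
        *slot = -1;
        shmem_barrier_all();

        /* One-sided: write into the next PE's memory without that PE
         * posting a matching receive. */
        long value = me;
        shmem_long_put(slot, &value, 1, (me + 1) % npes);
        shmem_barrier_all();

        printf("PE %d received %ld\n", me, *slot);

        shmem_free(slot);
        shmem_finalize();
        return 0;
    }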



