
The preemptive scheduler caps the execution window for anything that would block. If one piece of the system would take a long time to finish, it doesn't interfere with the other parts of the system completing their work on schedule. That one blocking piece will finish more slowly, but every other moving part in the system will keep responding as expected.
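A minimal sketch of the idea, using Python threads (which the OS and interpreter preempt on a time slice, not Erlang's scheduler): one CPU-bound thread stands in for the "blocking piece," and a second thread keeps making progress anyway.

```python
import threading
import time

progress = {"ticks": 0}

def busy():
    # CPU-bound loop standing in for the piece that would block.
    end = time.monotonic() + 0.5
    while time.monotonic() < end:
        pass

def responsive():
    # Keeps responding on schedule despite the busy thread.
    end = time.monotonic() + 0.5
    while time.monotonic() < end:
        progress["ticks"] += 1
        time.sleep(0.01)

t1 = threading.Thread(target=busy)
t2 = threading.Thread(target=responsive)
t1.start(); t2.start()
t1.join(); t2.join()

# The responsive thread made progress even while busy() never yielded.
print(progress["ticks"] > 0)
```

Because scheduling is preemptive, `busy()` never has to cooperate for `responsive()` to keep running; it is simply sliced in.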


>If one piece of the system would take a long time to finish, it doesn't interfere with the other parts of the system completing their work on schedule. That one blocking piece will finish more slowly, but every other moving part in the system will keep responding as expected.

That's how it works in pretty much any language that supports message passing. I used to do MPI programming in C, and everything you said holds for MPI in C. Later I did some MPI in Python; it holds there as well. If you have MPI in any language, it holds for that language.


My understanding is that those languages rely on cooperative scheduling within a thread, meaning that the running code has to relinquish control to the scheduler. Threads themselves are preemptively scheduled at the OS layer, but OS threads are much heavier and limited in how many can run. A Java thread's stack is 1024 KB by default, for example, compared to an Erlang process at roughly 0.5 KB.
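The cooperative case can be illustrated with Python's `asyncio` (used here as a stand-in for any cooperatively scheduled runtime): a task that never awaits monopolizes the event loop, and other tasks can't run until it relinquishes control.

```python
import asyncio
import time

order = []

async def polite():
    for _ in range(3):
        order.append("polite")
        await asyncio.sleep(0)  # explicitly yields to the scheduler

async def hog():
    time.sleep(0.05)            # blocking call: never yields
    order.append("hog")

async def main():
    # hog() is scheduled first and runs to completion before
    # polite() gets a single turn: no preemption within the loop.
    await asyncio.gather(hog(), polite())

asyncio.run(main())
print(order)
```

Under a preemptive scheduler like Erlang's, the hog would be sliced out after its reduction budget; here it holds the loop until it finishes.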


>My understanding is that those languages rely on cooperative scheduling within a thread

MPI doesn't require threads.

I think people here are confusing parallel programming with multithreaded programming. The latter is only one form of the former.



