
Seems like the solution is "the Unix way": use a lot of little, single-purpose applications for completing larger tasks. That way you can let the OS worry about the parallelization.

Even if you're making a large, multi-purpose app like, say, a spreadsheet, you could still subdivide zillions of little tasks into their own applications. On some operating systems this makes little sense because of the overhead of spawning new processes, but on Linux it'd be great.
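
A minimal sketch of that structure in Rust, assuming a hypothetical single-purpose "./worker" binary that handles one numbered chunk of work:

    use std::process::Command;

    fn main() {
        // One OS process per chunk of work; the kernel schedules the
        // children in parallel across cores.
        let children: Vec<_> = (0..4)
            .map(|chunk| {
                Command::new("./worker")
                    .arg(chunk.to_string())
                    .spawn()
                    .expect("failed to spawn worker")
            })
            .collect();

        // Wait for every child so per-chunk failures surface as exit statuses.
        for mut child in children {
            child.wait().expect("worker wasn't running");
        }
    }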

> you could still subdivide zillions of little tasks into their own applications.

This is the approach Rust + Rayon takes. Rust is in a good position to enable this because of its language-level guarantees. Using "a lot of little, single-purpose" tasks means each task has to own its memory; Unix enforces this separation with process boundaries, but that comes with some overhead. Language-level discipline can achieve much the same isolation more efficiently.
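
For concreteness, a minimal Rayon sketch (assuming the rayon crate as a dependency): the parallel iterator splits the work across a thread pool, and the borrow checker guarantees the closures can't race on shared state:

    use rayon::prelude::*;

    fn main() {
        let data: Vec<u64> = (1..=1_000_000).collect();
        // par_iter() divides the slice among worker threads; each closure
        // only reads its own elements, so no locking is needed.
        let sum_of_squares: u64 = data.par_iter().map(|&x| x * x).sum();
        println!("{}", sum_of_squares);
    }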
