> For me the biggest challenge when sharing state is that the only benefit I can see for parallelism is performance, so if I'm not gaining performance there is no reason to use parallelism.
Aside from raw performance, another very common reason is keeping the UI responsive rather than locked against the user. Even in UI-less programs, parallelism gives you the ability to abort an operation that is taking too long. Another is evening out the latency of compute tasks, even in cases where handling them sequentially would be faster overall. Without some degree of parallelism, none of these things are possible. A rough sketch of the abort case follows below.
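Here's a minimal sketch of that abort point, in Python with made-up task names: the work runs on a worker thread and cooperatively checks a cancellation flag, so the calling thread stays free to give up on it.

```python
import threading
import time

cancel = threading.Event()

def long_running_task():
    # Hypothetical long task that checks the flag between chunks of work.
    for step in range(1_000_000):
        if cancel.is_set():          # cooperatively abort when asked
            print("task aborted at step", step)
            return
        time.sleep(0.001)            # stand-in for a chunk of real work
    print("task finished")

worker = threading.Thread(target=long_running_task)
worker.start()

time.sleep(0.05)                     # main thread decides it has waited long enough
cancel.set()                         # request the abort
worker.join()
```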
Consider a web server. Without parallelism, every single request completely locks the program until it's complete. With parallelism, you can spawn off each request and handle new ones as they come in. Perceived performance for the majority of users is significantly improved even on a single-processor system - e.g. you have 99 requests which each take one second, and one which takes 101 seconds. Total processing time is 200 seconds / 100 requests = 2 seconds average per request, but if that 101-second request comes in first, the other 99 are stuck queued behind it, so the average time to get a response is now well over 100 seconds per request ...
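As a concrete sketch of that, Python's standard library offers both variants: `HTTPServer` handles requests one at a time, while `ThreadingHTTPServer` spawns a thread per request, so a single slow request no longer blocks everyone else. The `/slow` path and the 10-second delay are just stand-ins for the 101-second request above.

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import time

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slow":
            time.sleep(10)           # the pathological long request
        body = b"done\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Swap ThreadingHTTPServer for plain HTTPServer and a request to /slow
# locks the whole server until it finishes.
if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8000), Handler).serve_forever()
```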