
Exactly. It's about blocking vs. making guaranteed forward progress.



To support what dkersten said: there's a trade-off when deciding between lock-free and lock-based structures. Lock-based structures tend to use fewer synchronization instructions in total, but the inherent mutual exclusion means that usually only one thread can make progress at a time. (And with cheap locks, which thread gets the lock next is non-deterministic.) Lock-free algorithms tend to use more synchronization instructions in total, but all threads tend to be able to make progress at all times.

In general, lock-based structures and algorithms are better in low-contention situations, and lock-free structures and algorithms are better in high-contention situations. The intuition is that under low contention (either few threads, or few interfering threads), there's little harm in forcing mutual exclusion, so it's not worth paying for a bunch of extra synchronization instructions. But under high contention (lots of threads, or lots of interfering threads), mutual exclusion can mean that most of the threads are sitting idle waiting for the right to do some work, so it's worth paying a bit more overhead so that the whole system can make progress more often.
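To make the trade-off concrete, here's a minimal C++ sketch (the struct names are mine, not from the comment above): the lock-based counter does very little synchronization work per operation but serializes all threads, while the lock-free counter pays for a CAS retry loop in exchange for never blocking anyone.

    #include <atomic>
    #include <mutex>

    // Lock-based: cheap per operation, but only one thread is in the
    // critical section at a time, and a stalled holder stalls everyone.
    struct LockedCounter {
        std::mutex m;
        long value = 0;
        void increment() {
            std::lock_guard<std::mutex> g(m);
            ++value;
        }
    };

    // Lock-free: more synchronization work (a CAS retry loop), but a
    // failed CAS just means some other thread's increment succeeded.
    struct LockFreeCounter {
        std::atomic<long> value{0};
        void increment() {
            long old = value.load(std::memory_order_relaxed);
            while (!value.compare_exchange_weak(old, old + 1)) {
                // 'old' is refreshed with the current value on failure; retry.
            }
        }
    };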


> Lock-based structures tend to use fewer synchronization instructions in total, but the inherent mutual exclusion means that usually only one thread can make progress at a time [...]. Lock-free algorithms tend to use more synchronization instructions in total, but all threads tend to be able to make progress at all times.

Technically not correct. In a lock-based implementation, it is possible that no thread is making progress, i.e. there are no guarantees. Lock-free is the guarantee that at least one thread makes progress. Wait-free is the guarantee that every thread makes progress in a bounded number of steps.
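For what it's worth, the three guarantee levels map onto familiar primitives. A hedged C++ sketch (the function names are mine, not from the thread; the wait-free claim assumes the platform implements fetch_add as a single atomic instruction, as x86 does):

    #include <atomic>
    #include <mutex>

    std::mutex m;
    long plain = 0;
    std::atomic<long> a{0};

    // Blocking (no guarantee): if the lock holder is preempted or dies
    // inside the critical section, nobody makes progress.
    void blocking_inc() { std::lock_guard<std::mutex> g(m); ++plain; }

    // Lock-free (at least one thread progresses): an individual thread
    // may retry indefinitely, but each failed CAS means another
    // thread's CAS succeeded.
    void lock_free_inc() {
        long old = a.load();
        while (!a.compare_exchange_weak(old, old + 1)) {}
    }

    // Wait-free (every thread progresses): fetch_add completes in a
    // bounded number of steps regardless of what other threads do.
    void wait_free_inc() { a.fetch_add(1); }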


Technically correct because I deliberately used weasel words (tend to, usually). Refining the specifics of a post doesn't have to be couched as a contradiction!


Maybe I don't understand atomics well enough. If all threads hit an atomic compare-and-swap on the same address at the same time, doesn't the CPU have to make sure that only one thread is updating the value at a time? Wouldn't this manifest as slightly increased instruction latency for the other threads?


Yes, the CPU would have to make sure that only one thread is updating the value at a time, and this would definitely increase latency for the other threads.

"Guaranteed forward progress" is barely of any interest here. They actually mean, that in the event of a thread randomly stopping for some reason, other threads will still be able to perform atomic operations, because no thread can actually stop in the middle of an atomic operation.



