
Your explanation doesn't make sense to me.

> "you commit the result if no other thread has stomped on your data"

What does "committing" mean here? If it means performing an atomic write (a-la CAS), then you're using a lock (see my next point).

> "If another thread has stomped on your data, you start over."

So your thread sits there in a spin loop, retrying the CAS. That's a lock for all intents and purposes, and your system can still "deadlock" (read: your thread may never get to win the race) if your data keeps getting stomped on in between reads (which are obviously non-atomic, otherwise you'd be locking there too).
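
To spell out what I assume is meant, here's a rough C++ sketch of that pattern (a lock-free stack push; the std::atomic head and Node type are my own guesses, not anything from your explanation). The while loop is exactly the spin I'm talking about:

    #include <atomic>

    struct Node { int value; Node* next; };
    std::atomic<Node*> head{nullptr};

    void push(int v) {
        Node* node = new Node{v, head.load()};
        // The "commit": publish the new node with a single CAS, but only if
        // head still holds the value we read, i.e. nobody has stomped on it.
        while (!head.compare_exchange_weak(node->next, node)) {
            // CAS failed: another thread changed head (or the weak CAS failed
            // spuriously). node->next has been reloaded with the current head,
            // so we just spin and try to commit again.
        }
    }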

> "Usually, there is some sort of guarantee that lock-free systems always make progress."

Getting a time slice to keep running in a tight loop, retrying the CAS, is not the same as "making progress".

"Making progress" would be - you'd get a chance to commit your changes and proceed to your next bit of business logic. But with contention, that's not guaranteed to happen at-all?




I think you’ve just mixed up wait-free with lock-free.

- Lock-free: at least one thread will make progress

- Wait-free: all threads will make progress

The difference between locks and lock-free is very noticeable if a thread holding a lock gets suspended (which can happen on a modern system just by touching memory that has been paged out). On a real-time system, lock-free code also makes it possible to guarantee that high-priority tasks make progress regardless of how low-priority tasks behave. With locks, a low-priority task holding a lock can easily prevent a high-priority task from running at all (priority inversion).
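
To make the distinction concrete, here's a rough sketch, assuming the shared state is nothing more than a std::atomic<int> counter (my example, not anything from the explanation above):

    #include <atomic>

    std::atomic<int> value{0};

    // Lock-free: this thread may have to retry, but a failed CAS means the
    // value changed underneath us, i.e. some other thread's update succeeded.
    // The system as a whole keeps moving, even though one thread can starve.
    void lock_free_increment() {
        int expected = value.load();
        while (!value.compare_exchange_strong(expected, expected + 1)) {
            // expected has been reloaded with the current value; retry with it.
        }
    }

    // Wait-free: every thread finishes in a bounded number of steps, no matter
    // how the other threads are scheduled or suspended. (On most hardware
    // fetch_add is a single atomic instruction.)
    void wait_free_increment() {
        value.fetch_add(1);
    }

The failed-CAS case is exactly why the lock-free guarantee reads "at least one thread makes progress": if my CAS failed, it's because somebody else's update succeeded.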


A system cannot deadlock with CAS.

It can livelock.

I think you’re possibly confusing lock-free and wait-free, which is a stronger guarantee.



