
```
In this sense, the lock in lock-free does not refer directly to mutexes, but rather to the possibility of “locking up” the entire application in some way, whether it’s deadlock, livelock – or even due to hypothetical thread scheduling decisions made by your worst enemy. That last point sounds funny, but it’s key. Shared mutexes are ruled out trivially, because as soon as one thread obtains the mutex, your worst enemy could simply never schedule that thread again.

[...]

A final precision: Operations that are designed to block do not disqualify the algorithm. For example, a queue’s pop operation may intentionally block when the queue is empty. The remaining codepaths can still be considered lock-free.
```

I don't get it; how could "locking a mutex" not be considered an "operation that is designed to block"?

Would somebody be kind enough to tell me what I'm missing here? Thanks!




> I don't get it; how could "locking a mutex" not be considered an "operation that is designed to block"?

That's not what is being claimed here. What is being said is that an algorithm that blocks a thread can still be considered lock-free. There's a difference between lock-freedom and wait-freedom.


The basic differentiator is that a locking algorithm has the potential to execute without making progress. Lock-free code is guaranteed to make progress whenever it runs.

The queue implementation that blocks on an empty queue is fine if inserting into the queue is still lock-free, i.e., your insert operation eventually completes or makes progress regardless of whether a reader is spinning, waiting to pop a value from the queue.

That means that while popping a value is blocking, inserting is not. Hence the blocking pop does not turn the queue into a locking algorithm.


I think it's saying that not _all_ blocking operations are inherently disqualifying. That doesn't mean they all get a pass, though; some (like acquiring a shared mutex) still do disqualify an algorithm.


> your worst enemy could simply never schedule that thread again

That seems too extreme to me. If the scheduler itself is potentially malicious, then no multi-threaded implementation is safe, since we can't assume that any thread we start will ever actually run.


This is an important observation!

If the properties of concern for your thread or program are only safety properties (“never does a bad thing”) then the fact that the program may never do anything at all is just fine!

Then, if you want some liveness properties (“eventually does something good”) you’ll need to embed some assumption about fairness/scheduling/progress just as you’ve said!

This appears in early concurrency literature (TLA etc which I can’t be bothered to look for)


This isn’t about malice.

Remembering that you are your own worst enemy, what this implies is that some other perfectly sane decision you made in the past has come to bite you in the arse.


Alistarh et al. (https://arxiv.org/abs/1311.3200) show that assuming we have a kinder, non-malicious scheduler, the situation looks even better: lock-free (some process can make progress) algorithms have a stronger progress property of being wait-free (every process can make progress).

As omazurov mentioned as a top-level comment, threads can also die, which is rather like never being scheduled. In practice, I suspect that you're right that it's often too extreme of a model—many programs use locks and don't worry too much about this issue.


"Malice" doesn't need to come into play. A thread could simply be waiting in line behind a non-preemptible thread which does not relinquish the CPU due to a bug.


One real-world example is assuming that the audio thread (scheduled at realtime priority if possible) is guaranteed to run, while the GUI thread may or may not run, or get descheduled at any point. Even if the GUI thread locks up (for example the user is resizing the window), the audio thread should not stop producing sound.


"your worst enemy" can be a stand-in for "a faulty algorithm running on another thread".

Or, another way to look at it - a lock-free algorithm that allows starvation or livelock/deadlock.


It assumes that at least one thread is running at all times. There are algorithms that can make progress under that assumption.


A thread that never ran can’t lock up, so I guess it’s still technically lock free?


Yes, it’s lock-free, but it doesn’t actually work.



