
I have to admit that I have an extremely visceral, negative feeling whenever I see a mutex, simply because I've had to debug enough code written by engineers who don't really know how to use them; a large part of my previous jobs has been removing locks from code and replacing them with some kind of queue or messaging abstraction [1].

It's only recently that I've been actively looking into different locking algorithms, just because I've been diving head-first into a lot of pure concurrency and distributed computing theory, a lot of which is about figuring out clever ways of doing mutexes with different tradeoffs.

I've gotten a lot better with them now, and while I still personally will gravitate towards messaging-based concurrency over locks, I do feel the need to start playing with some of the more efficient locking tools in C, like nsync (mentioned in this article).

[1] Before you give me shit over this, generally the replacement code runs at roughly the same speed, and I at least personally think that it's easier to reason about.




Message passing is just outsourcing the lock, right? For example a Go channel is internally synchronized, nothing magic about it.

Most of the mutex tragedies I have seen in my career have been in C, a useless language without effective scopes. In C++ it's pretty easy to use a scoped lock. In fact I'd say I have had more trouble with people who are trying to avoid locks than with people who use them. The avoiders either think their program order is obviously correct (totally wrong on modern CPUs) or that their atomics are faster (wrong again on many CPUs).


It's definitely doing synchronization behind the scenes, no argument here. BlockingQueues in Java seem to use ReentrantLocks everywhere. It's outsourcing the lock to people who understand locks better.

It just abstracts this detail away for me, and I personally trust the libraries implementing these abstractions to be more correct than some ad hoc thing I write. It's an abstraction that I personally find a lot easier to reason about, and so my thinking is this: if my reasoning is more likely to be correct because of the easier abstraction, and the internal synchronization is more likely to be correct, then it's more likely that my code will be correct.

I don't do super low-level stuff at all; most of my stuff ends up touching a network, so the small differences between the built-in synchronized structures vs the regular ones really don't matter, since any small gains I'd get there will be eaten the first time I hit the network. A considerably higher ROI for me is almost always figuring out how to reduce latency.

If I did C or C++, I'd probably have different opinions on this stuff.


Every abstraction is about outsourcing the thing it's abstracting away. If using a queue solves your problem, you no longer have to deal with all the headaches that you can run into using a bare mutex.


> Message passing is just outsourcing the lock, right?

Kind of. If you can architect such that each channel has exactly 1 reader and 1 writer, you can send messages in a single direction with no locks. The basic idea is that you have a circular buffer with a start index and an end index. The writer can write an element and then increment the end index (as long as end index + 1 != start index, modulo the buffer size, a check that doesn't have to be an atomic read-modify-write), while the reader can read an element and then increment the start index (as long as start index != end index). This strategy does need atomic loads and stores for the indices themselves (which are basically free when uncontended, which they will be as long as the queue has a few elements in it).
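
A minimal Java sketch of that single-producer/single-consumer idea (the class name and capacity handling are made up for illustration, not a production queue):

    import java.util.concurrent.atomic.AtomicInteger;

    // Single-producer/single-consumer ring buffer: one thread calls offer(),
    // one thread calls poll(). No locks; only atomic loads/stores on the indices.
    final class SpscQueue<T> {
        private final Object[] buf;
        private final AtomicInteger head = new AtomicInteger(); // next slot to read
        private final AtomicInteger tail = new AtomicInteger(); // next slot to write

        SpscQueue(int capacity) { buf = new Object[capacity]; }

        boolean offer(T item) {                       // writer thread only
            int t = tail.get();
            if ((t + 1) % buf.length == head.get())   // full: one slot stays empty
                return false;
            buf[t] = item;                            // plain write...
            tail.set((t + 1) % buf.length);           // ...published by the index store
            return true;
        }

        @SuppressWarnings("unchecked")
        T poll() {                                    // reader thread only
            int h = head.get();
            if (h == tail.get())                      // empty
                return null;
            T item = (T) buf[h];
            head.set((h + 1) % buf.length);           // frees the slot for the writer
            return item;
        }
    }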


> C, a useless language

You misspelled “fast as fuck” and “lingua franca of all architectures.”


> Message passing is just outsourcing the lock, right?

More or less, yeah. You can write an MPSC queue that doesn't explicitly use a lock (or even anything that looks like a lock).


> C, a useless language without effective scopes

Mutexes can be handled safely in C. It's "just another flavor" of resource management, which does take quite a bit of discipline. Cascading error paths / exit paths help.


What are some examples of people using mutexes wrong? I know one gotcha is you need to maintain a consistent hierarchy. Usually the easiest way to not get snagged by that is to have critical sections be small and pure. Java's whole MO of letting people add a synchronized keyword to an entire method was probably not the greatest idea.


When, how, and why.

The biggest part of mutexes and how to properly use them is thinking of the consistency of the data that you are working with.

Here's a really common bug (pseudocode):

    if (lock {data.size()} > 0) {
      value = lock { data.pop() }
      lock { foo.add(value) }
    }
The issue here is that the size, the contents of data, and foo can all change in unexpected ways between each of the acquired locks.

The right way to write this code is

    lock {
      if (data.size() > 0) {
        value = data.pop()
        foo.add(value)
      }
    }
That ensures the data is all in a consistent state while you are mutating it.

Now, what does make this tricky is that someone well-meaning might have decided to push the lock down into a method.

Imagine, for example, you have a `Foo` where all methods operate within a mutex.

This code is also (likely) incorrect.

    value = foo.bar()
    if (value.bat()) {
      foo.baz(value)
    }
The problem here is exactly the same problem as above. Between `foo.bar()` and `foo.baz()` the state of foo may have changed such that running `foo.baz(value)` is now a mistake. That's why the right thing to do is likely to have a `foo.barBaz()` method that encapsulates the `if` logic to avoid inconsistency (or to add another mutex).

In Java, the most common manifestation (that I see) of this looks like this:

    var map = new ConcurrentHashMap();
    if (map.get(foo) == null)
      map.put(foo, new Value());
Because now you have a situation where the value stored under `foo` in the map could be two or more different Values depending on who gets it. So, if someone is mutating `Value` concurrently, you have a weird, hard-to-track-down data race.

The solution to this problem in Java is

    map.computeIfAbsent(foo, (unused)->new Value());


This is one of the areas where Zig's combination of anonymous blocks and block-based defer really pays off. Creating a locked region of code is just this:

    {
        mutex.lock();
        defer mutex.unlock();
        // Do mutex things
    }
It's possible to get this wrong still, of course, but both the anonymous scope and the use of `defer` make it easier to get things right.

Nothing can prevent poor engineering around mutex use though. I'd want a critical path for a concurrent hashmap to look like this:

    {
        shared_map.lock();
        defer shared_map.unlock();
        if (shared_map.getOrNull(foo) == null) {
            shared_map.put(foo, new_val);
        }
    }
Where the SharedMap type has an internal mutex, and a way to check it, and all operations panic if no lock has been acquired. It could have `shared_map.lockAndGet(OrNull?)(...)`, so that the kind of problem pattern you're describing would stand out on the page, but it's still a one-liner to do an atomic get when that's all you need the critical path to perform.

I don't think these invariants are overly onerous to uphold, but one does have to understand that they're a hard requirement.


This doesn't seem to add anything over and above what std::mutex in C++ or a synchronized block in Java offer?


Less than C++. defer() is strictly inferior to RAII.


I digress, but my autistic brain couldn't help itself. Provided that it's a recursive lock, you could do this instead of adding a new method `foo.barBaz`:

    foo.lock {
        value = foo.bar() // foo.lock within this method is ignored
        if(value.bat()) {
            foo.baz() // foo.lock within this method is ignored
        }
    }
Also, to catch this bug early, you could assert foo is locked in `value.bat` or something. But that may or may not be feasible depending on how the codebase is structured


Composing locks is where Java usually blows up.

And computeIfAbsent can end up holding the lock for too long if the function is slow.


Composing locks isn't a Java problem - it's a fundamental abstraction problem with locks. This is one of the reasons why you usually reach for higher level abstractions than mutexes.

> And computeIfAbsent can end up holding the lock for too long if the function is slow.

How is this different from any other lock-holding code written anywhere?


I’m saying Java is exceptionally bad at this because every object is its own mutex.

And you end up having to trade single-core performance for multi-core by deciding to speculatively calculate the object. If there's no object to make, the critical section is very small. But as the object sprouts features you start smashing face-first into Amdahl.


> because every object is its own mutex.

Not true in any practical sense.

> And you end up having to trade single core performance for multi core by deciding to speculatively calculate the object.

What is the alternative you suggest? If you care about having the predicate actually hold, and you also don't want to have to hold the lock while constructing the object, then you're going to end up in an optimistic-concurrency scenario where you check the predicate under lock, compute the object, and check again before swapping the value in. You may end up having to throw your work away when you discover the predicate changed. Java is no better nor worse at doing this than anything else.
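
For concreteness, one common shape of that optimistic approach, sketched in Java (ValueCache and makeExpensiveValue are invented names, not anyone's actual API): compute the value with no lock held, publish it atomically, and throw the work away if another thread won.

    import java.util.concurrent.ConcurrentHashMap;

    // Speculative construction: build the value outside any lock, then publish it
    // atomically; the losing thread simply discards its work.
    final class ValueCache {
        private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

        Object getOrCreate(String key) {
            Object existing = cache.get(key);
            if (existing != null) return existing;
            Object candidate = makeExpensiveValue(key);         // computed with no lock held
            Object winner = cache.putIfAbsent(key, candidate);  // atomic check-and-publish
            return (winner != null) ? winner : candidate;       // candidate may be thrown away
        }

        private Object makeExpensiveValue(String key) {         // stand-in for the slow computation
            return new Object();
        }
    }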


> Not true in any practical sense.

This is going to put a damper on any further conversation.

Even with coarsening and elision, every synchronized method takes a lock on the enclosing object.


Do people actually use `synchronized` methods in Java these days? It's been commonly described as an anti-pattern (for all the reasons discussed upthread here) two decades ago already.


The more useful question is whether it has been expunged from the JDK and common libraries. I think it's been more like 10-12 years since it really started being talked about outside certain subcommunities, and that's almost 20 years' worth of existing libraries.

OpenTelemetry is a fairly recent library. Even if you ignore some test fakes (where, let's face it, who cares), it still uses it in a few places, and uses lock objects in others. I don't see much evidence of recursion going on with the former. But that's how things always start and later there's running and screaming.


Some amount of legacy cruft is not unexpected, but it's sad that it can be seen in new code. In .NET, which has similarly problematic semantics with lock(), linters have been flagging lock(this) for ages.

I wonder where this patently bad idea of every object carrying its own publicly accessible mutex originated in the first place. Did Java introduce it, or did it also copy that from somewhere else? And what was the motivation?


Can't attest to the history of the `lock` statement off the top of my head, but the API shape of lock and the Monitor.Enter/Exit methods it is desugared to looks like Win32's EnterCriticalSection and LeaveCriticalSection. Other Monitor methods like Wait and Pulse look like pthread's condvar and mutex functions.

.NET also has MethodImplOptions.Synchronized like Java does. However, the only place I have ever seen this attribute was on TextWriter.Synchronized implementation in CoreLib and nowhere else.

Java itself has `Lock` and `Condition`. In the end, most synchronization primitives do the same high-level actions and bound to end up having similar API.

As for `lock(this)`, much like with many other historically abused techniques that have become frowned upon - it's not bad per se if you own the type and know that it is internal and will not be observed outside of the assembly it is defined in, provided it is small enough. It's footgun-prone, but generally very few code paths will lock an arbitrary object instance at all, so it's something you see so rarely it has become "just write a comment why and move on" when using it. Of course this requires more deliberation, and it's easier to default to blanket policies that ignore context. It can be difficult to get people into a "use the appropriate tool" mentality.

.NET is also getting its own separate `Lock` type, on top of all the existing synchronization primitives, to move a little further away from other legacy aspects of `lock`ing on object instances.


It's not Monitor itself that's problematic. It's that every object is implicitly associated with one, and anyone who holds a reference to an object can lock it. It doesn't matter if the type is internal - it can still be upcast to System.Object and leaked that way.

In practice this means that unless you can guarantee that you never, ever leak a reference anywhere, you don't know who else might be locking it. Which makes it impossible to reason about possible deadlocks. So the only sane way to manage it is to have a separate object used just for locking, which is never ever passed outside of the object that owns the lock.

And yes, this is absolutely bad design. There's no reason why every object needs a lock, for starters - for the vast majority of them, it's just unnecessary overhead (and yes, I know the monitors are lazily created, but every object header still needs space to store the reference to it). Then of course the fact that it's there means that people take the easy path and just lock objects directly instead of creating separate locks, just because it's slightly less code - and then things break. It's almost always the wrong granularity, too.

Thing is, I haven't seen this design anywhere outside of Java and .NET (which copied it from Java along with so many other bad ideas). Everybody else uses the sane and obvious approach of creating locks explicitly if and when they are needed.


Monitors came from Tony Hoare in the 70s and Java put an OO spin on them.


"every synchronized function"

Right. Synchronized is the key word here. The vast majority of code doesn't involve synchronized, and therefore the vast majority of objects don't have locks associated with them. That's quite important.

Those classes which do use synchronized were just going to create a ReentrantLock held for the duration of the call anyway, in which case it's all monitorEnter and monitorExit, regardless.

> This is going to put a damper on any further conversation.

Sadge.


> in which case it's all monitorEnter and monitorExit, regardless.

Oops, I need to correct myself!

ReentrantLock doesn't depend upon monitorEnter/Exit, but rather AbstractQueuedSynchronizer and LockSupport, which ultimately delegate to Unsafe methods like park/unpark and CAS (*compareAndSet*). Don't know why I had that confused in my head.

In any case, the point holds that "synchronized" as a language feature has mostly a zero cost for code that doesn't use it. It's a red herring when discussing modern Java concurrency.


Might want to move foo.add() out of the lock scope (assuming foo is a thread-private resource):

    value = nil
    lock {
      if (data.size() > 0) {
        value = data.pop()
      }
    }
    if (value) {
        foo.add(value)
    }


Personally I've had issues with performance because of people using `synchronized` too liberally, where they end up locking a lot more code than necessary. I've also had issues with fairly typical circular-dependencies, causing deadlock, or at least pauses that aren't strictly necessary. Deadlock doesn't happen nearly as often as textbooks have led me to believe, but it can happen with sloppily written code.

In regards to Java, at this point I almost never use the `synchronized` keyword anymore and instead (if I can't easily map to some kind of queuing abstraction) use the ReentrantLock object, simply because of the ability to have lock acquisition time out, and because it lets you opt in to fairness if you'd like. It's not as pretty, but it's more flexible, and as far as I'm aware it doesn't affect performance much.
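
For illustration, the timeout and fairness features look roughly like this (class and field names are invented):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    final class Account {
        // 'true' requests a fair lock: waiting threads acquire it roughly in FIFO order.
        private final ReentrantLock lock = new ReentrantLock(true);
        private long balance;

        boolean tryDeposit(long amount) throws InterruptedException {
            // Unlike synchronized, acquisition can give up after a timeout.
            if (!lock.tryLock(100, TimeUnit.MILLISECONDS))
                return false;
            try {
                balance += amount;
                return true;
            } finally {
                lock.unlock();
            }
        }
    }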

For the most part, though, in Java, you can get away without (explicit) locks by simply abusing the built-in data structures. I know they're using their own synchronization techniques behind the scenes, but I trust those to be correct more than some ad-hoc stuff I'd write as an engineer.


Java's take on monitors was definitely not great, and people were emulating mutexes with them even in the language's earliest days.

Still there are a lot of things that can go wrong with mutexes: forgetting to unlock in the case of exceptions, priority inversion, recursive locking, deadlock, needlessly high contention, etc.

Java has had an excellent concurrency runtime with abstractions that are typically a better fit than a bare mutex for over 20 years now (c.f. Doug Lea). Synchronized still exists, because of Java's excellent backwards compatibility.


I've always disliked that avoiding cyclic lock dependencies is discussed as a hierarchy, when what it really comes down to is a linear order on locks.

The problem with lock _hierarchies_ as a concept is that a lock really should represent serialization of access to a particular pool of data, and should make no assumptions that it being held implies some other lock's domain is also held. The code that results when people do not maintain this kind of rigor is quite terrible, but hierarchies tend to steer people into thinking that way because they imply recursively taking locks.

Stated differently: locks should be taken and released in a fixed order - so locks are ranked - but there should not be a model where all lower-ranked locks must be held for a given lock to be taken. The lock protects its domain and the ordering of take and release is to prevent deadlock, but there's no requirement for completeness.
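
A small sketch of that ranking idea in Java (names invented): two locks with a fixed order when both are needed, but either can be taken alone.

    import java.util.concurrent.locks.ReentrantLock;

    // Rank: accountsLock (1) before auditLock (2). Either may be taken alone;
    // when both are needed they are always taken in rank order, never the reverse.
    final class Ledger {
        private final ReentrantLock accountsLock = new ReentrantLock(); // rank 1
        private final ReentrantLock auditLock = new ReentrantLock();    // rank 2
        private long total;
        private long auditedEntries;

        void addEntry(long amount) {
            accountsLock.lock();                 // rank 1 first
            try {
                total += amount;
                auditLock.lock();                // then rank 2
                try {
                    auditedEntries++;
                } finally {
                    auditLock.unlock();
                }
            } finally {
                accountsLock.unlock();
            }
        }

        long auditedCount() {
            auditLock.lock();                    // rank 2 alone; no need to hold rank 1
            try {
                return auditedEntries;
            } finally {
                auditLock.unlock();
            }
        }
    }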


I feel similarly about C's "volatile" (when used in multithreaded code rather than device drivers). I've seen people scatter volatile around randomly until the problem goes away. Given that volatile significantly disturbs the timing of a program, any timing-sensitive bugs can be masked by adding it around randomly.


There seems to be a lot of voodoo beliefs around concurrent programming that lead to really bad things.

One of the best books I've read on it is Java concurrency in practice [1]. It does an excellent job of dispelling these occultic beliefs and letting the reader know exactly when and how concurrency should be implemented. It is applicable to more languages than just java, especially since many have adopted large parts of the java memory model.

The worst things I usually find when reviewing concurrent code are people either not using locks when they should, using locks when they shouldn't, or having inconsistent data guards. I've seen people throw in random locks to guard local non-shared state, which is just crazy town: "Multiple threads are running this code, so I'm adding a lock".

I certainly prefer message passing over shared state. However, it's a little baffling to me why it's so hard for devs to grasp how to properly maintain shared state. Instead of just learning the basic rules, it gets couched in "It's just too hard to understand so keep adding things until it works".

[1] https://www.amazon.com/Java-Concurrency-Practice-Brian-Goetz...


> However, it's a little baffling to me why it's so hard for devs to grasp how to properly maintain shared state. Instead of just learning the basic rules, it gets couched in "It's just too hard to understand so keep adding things until it works".

Probably because most people aren't aware that there are basic rules to be learned. I'd imagine the typical experience is, you're very familiar with single-threaded code, and now you're trying to let other threads work with your data. You have heard that there are many pitfalls, and that there are special-purpose tools like mutexes to avoid those, but you look at the examples and find them mostly baffling. "Why do they perform these incantations for this data but not that data, or in this place but not that place?" So you come up with some weird mental model and move on with your life, never aware that there are underlying principles for maintaining shared state.

Personally, I didn't understand mutexes very well at all, until I started looking into what the atomic memory orderings from C++ et al. were supposed to mean.


Not too sure what the basic rules are and I'm not able to find any list of such rules.

For me the biggest challenge when sharing state is that the only benefit I can see for parallelism is performance, so if I'm not gaining performance there is no reason to use parallelism. If I use coarse-grained mutexes then I end up with straightforward-to-reason-about code, but I lose the performance benefit and in fact can end up with slower-than-single-threaded code.

If I use very fine grained mutexes then I end up with faster code that has very hard to find bugs that happen on very rare occasion.

And then on top of that even if you do write correct fine grained locking, you can still end up with slow code due to cache behavior such as false sharing and cache coherence.

So ultimately I disagree that writing parallel code is simple unless you're willing to give up performance in which case you may as well just stick to single threaded code or use parallelism among independent data. Writing correct parallel software that shares state and actually delivers substantial performance benefits is incredibly difficult, and I am skeptical that there is a set of simple rules that one can simply read about.


> Not too sure what the basic rules are and I'm not able to find any list of such rules.

The actual rules are completely terrifying because they involve the physics of microprocessors. If you've watched Grace Hopper's lectures where she gives out physical nanoseconds (pieces of wire that are the same length as the distance light travels in a nanosecond, thus, the maximum possible distance data could travel in that time) you can start to appreciate the problem. It is literally impossible for the intuitive Sequentially Consistent model of how computers work to apply for today's fast yet concurrent processors. Light is too slow.

However generally people mean either Java's memory model or the C++ 11 (and subsequently 14, 17, 20) memory models used in languages such as C++, C and Rust. Those rules are less terrifying but still pretty complicated and the programming language promises to somehow provide an environment where these rules (not the terrifying ones) are all you need to know to write software. So that's nice.

It can be simple to write parallel code for a language designed to make that easy. Yes, even if there's shared data. It only starts to get trickier if the shared data is modified; so long as it isn't, we can make copies of it safely, and modern CPUs will do that without actual work by the programmer.


Are there popular languages that don't have memory models which make reasoning about concurrent programs easier?

A language with a notion of threading and shared state is going to have something akin to read/write barriers built into the language memory model to tame the beast.


I think tialaramex is overselling the complexity of concurrent memory models in practice, at least for end users. In reality, all modern memory models are based on the data-race-free theorem, which states that in the absence of data races--if your program is correctly synchronized--you can't tell that the hardware isn't sequentially consistent (i.e., what you naïvely expected it to do).

Correct synchronization is based on the happens-before relation; a data race is defined as a write and a conflicting read or write such that neither happens-before the other. Within a thread, happens-before is just regular program order. Across threads, the main happens-before edge that matters is that a release-store on a memory location happens-before an acquire-load on that memory location (this can be generalized to any memory location if they're both sequentially consistent, but that's usually not necessary).

The real cardinal rule of concurrent programming is to express your semantics at the highest possible level of what you're trying to do, and find some library that does all the nitty-gritty of the implementation. Can you express it with fork-join parallelism? Cool, use your standard library's implementation of fork-join and just don't care about it otherwise.
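
A tiny Java sketch of that release-store/acquire-load edge, using VarHandle (the class and field names are illustrative): the plain write to data happens-before the read that observes the release write to ready.

    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.VarHandle;

    // Producer writes data, then does a release-store to 'ready'; the consumer's
    // acquire-load of 'ready' establishes happens-before, so it sees 'data' fully written.
    final class Handoff {
        private static final VarHandle READY;
        static {
            try {
                READY = MethodHandles.lookup().findVarHandle(Handoff.class, "ready", boolean.class);
            } catch (ReflectiveOperationException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private int data;
        private boolean ready;

        void publish(int value) {                  // producer thread
            data = value;                          // plain write
            READY.setRelease(this, true);          // release-store
        }

        Integer tryConsume() {                     // consumer thread
            if ((boolean) READY.getAcquire(this))  // acquire-load
                return data;                       // guaranteed to see the write above
            return null;
        }
    }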


C?


C has the same model as C++ from the same era, so C11 is the C++ 11 model, C23 is C++ 20 and so on.

It's C so you don't get a comprehensive set of bells, whistles and horns like the C++ standard library, but the actual model is the same. At a high level it's all the same as C++ 11, the details are not important to most people.


> Not too sure what the basic rules are and I'm not able to find any list of such rules.

I'd suggest the book in my original comment, Java concurrency in practice.

> If I use very fine grained mutexes then I end up with faster code that has very hard to find bugs that happen on very rare occasion.

I agree this is a real risk if you are doing fine-grained mutexes. But the rules are the same whether or not you want to follow them. If you have shared state (A, B, C) and you want to do a calculation based on the values of (A, B, C), then you need a mutex which locks (A, B, C). Certainly, that becomes a problem if you have calculations that just require (A, C) and you might want to avoid locking for B. In that case, you need a more complicated mechanism for locking than just simple mutexes, which is certainly easy to get wrong. When the (A, B, C) actions happen you have to ensure that the (A, C) actions can't happen at the same time.

This isn't a complicated rule, but it is one that can be hard to follow if you are trying to do super fine grained locking. It's even trickier if you are going to abuse the platform to get correct results.

But fine v coarse isn't the problem I'm referring to when I say people get the simple rules wrong. Rather, than worrying about fine vs coarse grained locking, I very frequently see code where mutexes and concurrency primitives are just peppered everywhere and haphazardly. We might call that super coarse grained.
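
A minimal Java sketch of that rule (field names invented): every operation that touches A, B and C does so under the single lock that guards all three.

    final class Stats {
        private final Object guard = new Object(); // guards a, b and c together
        private long a, b, c;

        void update(long da, long db, long dc) {
            synchronized (guard) {      // all three mutated under one lock
                a += da;
                b += db;
                c += dc;
            }
        }

        long combined() {
            synchronized (guard) {      // the calculation over (a, b, c) also takes the lock
                return a + b + c;
            }
        }
    }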


> For me the biggest challenge when sharing state is that the only benefit I can see for parallelism is performance, so if I'm not gaining performance there is no reason to use parallelism.

Aside from performance, another very common reason is to not lock the UI from the user. Even in UI-less programs, there's the ability to abort some operation which is taking too long. Another is averaging out the performance of compute tasks, even in the case where it would be faster to handle them sequentially. Without some degree of parallelism these things are not possible.

Consider a web server. Without parallelism every single request is going to completely lock the program until it's complete. With parallelism, you can spawn off each request, and handle new ones as they come in. Perceived performance for the majority of users in this case is significantly improved even in the case of a single-processor system - e.g. you have 99 requests which each take a single second, and then one which takes 101 seconds. Total request time is 200 seconds / 100 requests = 2 seconds average per request, but if that 101-second request comes in first, the other 99 are stuck behind it for 101 seconds, so the average is now > 100 seconds per request ...


>Aside from performance, another very common reason is to not lock the UI from the user.

This is not a good fit for parallelism; this is pretty much always accomplished using concurrency, i.e. async/await.


Assuming that the APIs & libraries that you need are async. Which is, unfortunately, not always the case for historical reasons.


async/await uses threads underneath. And sometimes even that paradigm is not sufficient.


> Not too sure what the basic rules are and I'm not able to find any list of such rules.

You may want to consider https://marabos.nl/atomics/ for an approachable overview that's still quite rigorous.


+1 for the Java Concurrency in Practice book. It's the book I recommend to nearly everyone who wants to get into concurrent programming. Goetz makes it a lot more approachable than most other books.


Goetz has come a long way. I knew one of the people who contributed to that book, and he was a little frustrated about having to explain things to him that he felt he shouldn't have had to. The implication was he'd already had this conversation with some of the other contributors.

Sometimes though, the newbie is going to write the clearest documentation.


I loved concurrent code when I was starting out. I’d taken a pretty good distributed computing class which started the ball rolling. They just fit into how my brain worked very well.

Then I had to explain my code to other devs, either before or after they broke it, and over and over I got the message that I was being too clever. I’ve been writing Grug-brained concurrent code for so long I’m not sure I can still do the fancy shit anymore, but I’m okay with that. In fact I know I implemented multiple reader single writer at least a few times and that came back to me during this thread but I still can’t remember how I implemented it.


That's something I'm afraid of for my latest project. I did some concurrent stuff that wasn't 100% clear would actually work, and I had to write a PlusCal spec to exhaustively prove to myself that what I was doing is actually OK.

It works pretty well, and I'm getting decent speeds, but I'm really scared someone is going to come and "fix" all my code by doing it the "normal" way, and thus slow everything down. I've been trying to comment the hell out of everything, and I've shared the PlusCal spec, but no one else on my team knows PlusCal and I feel like most engineers don't actually read comments, so I think it's an inevitability that my baby is killed.


Maybe because I had a complete semester of multiprogramming at university, I find it almost trivial to work in such environments, and cannot comprehend why there is so much mysticism and voodoo around it. It's actually pretty simple.


I feel like it's not terribly hard to write something that more or less works using mutexes and the like, but I find it exceedingly hard to debug. You're at the mercy of timing and the scheduler, meaning that often just throwing a breakpoint and stepping through isn't as easy as it would be with a sequential program.

I feel like with a queue or messaging abstraction, it can be easier to debug. Generally your actual work is being done on a single thread, meaning that traditional debugging tools work fine, and as I've said in sibling comments, I also just think it's easier to reason about what's going on.


In most cases (in a C or C++ compiler, not Java) it's just straight up incorrect to use volatile for something other than memory mapped I/O. Yes, POSIX guarantees that in a specific case (signal handling IIRC) it'll do what you meant if you use volatile int. That's nice, but more generally this is not the right choice.

Unfortunately Microsoft enshrined the situation (on Windows, on their compiler, on x86 and x86-64 only) that volatile primitive types are effectively atomics with Acquire-Release ordering. This is of course awkward when Microsoft tries to bring people to a non-x86 architecture and it can't just give them this because it would suck really hard, so finally they have to grow up and teach their developers about actual atomics.

!! Edited to fix: Previously this said Relaxed ordering, the ordering guaranteed by Microsoft is in fact Acquire-Release, hence it's expensive to provide for architectures where that's not the default.


When Java implemented volatile it didn’t do anything. Later when they fixed the memory model to deal with out of order execution they made it part of the fence semantics, and then it actually made some sense.


The "volatile" keyword should never be used for C/C++ multithreaded code. It's specifically intended for access to device-mapped addresses and does not account for any specific memory model, so using it for multithreading will lead to breakage. Please use the C/C++ memory model facilities instead.

(As a contrast, note that in Java the "volatile" keyword can be used for multithreading, but again this does not apply to C/C++.)


> Please use the C/C++ memory model facilities instead

I should point out that for more than half of my professional career, those facilities did not exist, so volatile was the most portable way of implementing e.g. a spinlock without the compiler optimizing away the check. There was a period after which compilers were aggressively inlining and before C11 came out in which it could be otherwise quite hard to otherwise convince a compiler that a value might change.


The problem is that volatile alone never portably guaranteed atomicity nor barriers, so such a spinlock would simply not work correctly on many architectures: other writes around it might be reordered in a way that make the lock useless.

It does kinda sorta work on x86 due its much-stronger-than-usual guarantees wrt move instructions even in the absence of explicit barriers. And because x86 was so dominant, people could get away with that for a while in "portable" code (which wasn't really portable).


There's a lot to unpack here.

TL;DR: The compiler can reorder memory accesses and the CPU can reorder memory accesses. With a few notable exceptions, you usually don't have to worry about the latter on non-SMP systems, and volatile does address the former.

The volatile qualifier makes any reads or writes to that object a side-effect. This means that the compiler is not free to reorder or eliminate the accesses with respect to other side-effects.

If you have all 3 of:

A) A type that compiles down to a single memory access

B) within the same MMU mapping (e.g. a process)

C) With a single CPU accessing the memory (e.g. a non-SMP system)

Then volatile accomplishes the goal of read/writes to a shared value across multiple threads being visible. This is because modern CPUs don't have any hardware concept of threads; it's just an interrupt that happens to change the PC and stack pointer.

If you don't have (A) then even with atomics and barriers you are in trouble and you need a mutex for proper modifications.

If you don't have (B) then you may need to manage the caches (e.g. ARMv5 has virtually tagged caches so the same physical address can be in two different cache lines)

If you don't have (C) (e.g. an SMP system) then you need to do something architecture specific[1]. Prior to C language support for barriers that usually means a CPU intrinsic, inline assembly, or just writing your shared accesses in assembly and calling them as functions.

Something else I think you are referring to is if you have two shared values and only one is volatile, then the access to the other can be freely reordered by the compiler. This is true. It also is often masked by the fact that shared values are usually globals, and non-inlined functions are assumed by most compilers to be capable of writing to any global so a function call will accidentally become a barrier.

1: As you mention, on the x86 that "something" is often "nothing." But most other architectures don't work that way.


I’m surprised that’s true. C borrowed very heavily from Java when fixing the NUMA situations that were creeping into modern processors.


The C/C++ memory model is directly derived from the Java 5 memory model. However, the decision was made that volatile in C/C++ specifically referred to memory-mapped I/O stuff, and the extra machinery needed to effect the sequential consistency guarantees was undesirable. As a result, what is volatile in Java is _Atomic in C and std::atomic in C++.

C/C++ also went further and adopted a few different notions of atomic variables, so you can choose between a sequentially-consistent atomic variable, a release/acquire atomic variable, a release/consume atomic variable (which ended up going unimplemented for reasons), and a fully relaxed atomic variable (whose specification turned out to be unexpectedly tortuous).


Importantly, these aren't types, they're operations.

So it's not that you have a "release/acquire atomic variable"; rather, you have an atomic variable and it so happens you choose to do a Release store to that variable. In other code maybe you do a Relaxed fetch from the same variable; elsewhere you have a compare-exchange with different ordering rules.

Since we're talking about Mutex here, here's the entirety of Rust's "try_lock" for Mutex on a Linux-like platform:

        self.futex.compare_exchange(UNLOCKED, LOCKED, Acquire, Relaxed).is_ok()
That's a single atomic operation: we hope the futex is UNLOCKED; if it is, we store LOCKED to it with Acquire ordering, but if it wasn't, we use a Relaxed load to find out what it was instead of UNLOCKED.

We actually don't do anything with that load, but the Ordering for both operations is specified here, not when the variable was typed.


If you only use volatile in C without any atomic operations or fences, then your multithreaded code is certainly incorrect.


> remove locks from code and replace with some kind of queue or messaging abstraction

Shared-nothing message passing reflects the underlying (modern) computer architecture more closely, so I'd call the above a good move. Shared memory / symmetric multiprocessing is an abstraction that leaks like a sieve; it no longer reflects how modern computers are built (multiple levels of CPU caches, cores, sockets, NUMA, etc).


If you are doing pure shared nothing message passing, you do not need coherent caches; in fact cache coherency gets in the way of pure message passing.

Vice versa, if you do pure message passing you are not benefiting from hardware-accelerated cache coherency and are leaving performance (and usability) on the floor.


That's good to hear! I am pretty removed from underlying hardware now, so it makes me happy to hear that better way of doing things is catching on even in low-level land.


> some kind of queue or messaging abstraction

Agreed. I find things like LMAX Disruptor much easier to reason about.


Even within Java, something like BlockingQueue will get you pretty far, and that's built into the runtime.

If I am allowed to use libraries, I end up using Vert.x for nearly everything. I think that their eventbus abstraction is easy enough to reason about, and even without using it simply using the non-blocking stuff it provides ends up being pretty handy.
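
As a minimal sketch of how far BlockingQueue alone gets you (the task strings and poison pill here are invented for illustration):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // One producer thread hands work to one consumer thread; put() blocks when the
    // queue is full and take() blocks when it is empty, so no explicit locks appear here.
    public final class QueueDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        String task = queue.take();
                        if (task.equals("STOP")) break;   // poison pill ends the loop
                        System.out.println("handled " + task);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();

            for (int i = 0; i < 5; i++) {
                queue.put("task-" + i);
            }
            queue.put("STOP");
            consumer.join();
        }
    }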


Shared-nothing is typically The Right Choice in my experience as well. Maybe the odd atomic...



