Even in your heavily contrived example, I can think of cases where the optimization isn't what the programmer wants. For instance, I might have a special handler via userfaultfd(2) that detects an increment through the null pointer and handles it in some special way, but can't handle just setting it to 10. For a more realistic example, I might have acquired a lock around do_substuff, and I might be okay with the thread segfaulting only if the lock wasn't held.
I don't find my example contrived at all: in my experience it's routine for functions that assume non-NULL pointers to call subroutines that defend against NULL anyway. Maybe "do_substuff" is called in contexts where the pointer could be NULL, or maybe its developer was paranoid.
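For concreteness, the shape I have in mind is something like this (a hypothetical reconstruction of the earlier example, not the actual code):

    void do_substuff(int *p) {
        if (p != NULL)       /* defensive NULL check */
            (*p)++;
    }

    void do_stuff(int *p) {
        *p = 9;              /* dereference: the compiler may now assume p != NULL */
        do_substuff(p);      /* after inlining, the NULL check is deleted and the
                                two accesses can be fused into a single *p = 10 */
    }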
I don't think it's reasonable to expect compilers to emit less optimized code to defend against coders doing clever things that are well beyond the scope of the standard. If you want to play fast and loose with segfault handlers, be my guest, but if you want to play with fire you'd better know what you're doing.
Note that many of C's UBs are also effectively unportable: different systems react differently to things like NULL pointer dereferences (segfault, access to some mapped memory, access to nothing at all, or a special handler like the one you described) and signed overflow (two's complement wraparound, saturation, trapping, etc.).
I think blaming UBs is frankly the wrong target. The problem with C is that it doesn't let you express those constraints so the compiler can enforce them and tell you when you're doing something wrong. I can't tell the compiler "this pointer is not nullable" and have it keep track of that for me, warning me if I mess up. In contrast, in a language like Rust I could use an Option<> type to encode this, and I'd get a compilation error if I have a mismatch.
That's what I want to see in a more modern C, not a babysitting compiler that emits suboptimal code because it's trying to make my coding mistakes somewhat less mistakeful.
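To be fair, you can get partway there today with compiler extensions. A minimal sketch, assuming GCC or Clang (the attribute is nonstandard, and it's advisory: a NULL can still slip through at runtime):

    #include <stddef.h>

    __attribute__((nonnull(1)))     /* callers promise p is non-NULL */
    void do_substuff(int *p) {
        *p += 1;
    }

    int main(void) {
        do_substuff(NULL);          /* -Wnonnull warns here, but it still
                                       compiles, unlike Rust's Option<> */
        return 0;
    }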
If the programmer really needs reads and writes through particular pointers to happen in a particular sequence, because the target memory follows different rules than the language ordinarily allows the compiler to assume, then it's the programmer's responsibility to use the annotation the language provides for exactly that purpose: volatile. If the compiler had to treat every pointer as volatile, just about all generated code would be slowed down dramatically.
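A typical use, sketched with a made-up device register address:

    #include <stdint.h>

    /* hypothetical memory-mapped status register; the address is invented */
    #define STATUS_REG ((volatile uint32_t *)0x40001000u)

    void wait_until_ready(void) {
        /* without volatile, the compiler could hoist the load out of the
           loop, since nothing in the loop appears to modify *STATUS_REG;
           with volatile, every iteration performs a real, ordered read */
        while ((*STATUS_REG & 0x1u) == 0)
            ;
    }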
As for locks, the language already has rules about which memory accesses are allowed to be reordered across lock acquire and release calls. Otherwise locks wouldn't work correctly at all.
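For example, with POSIX threads (a sketch):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_counter;

    void bump(void) {
        pthread_mutex_lock(&lock);
        shared_counter++;            /* may not be moved before the lock or
                                        after the unlock; the acquire/release
                                        semantics of the calls forbid it */
        pthread_mutex_unlock(&lock);
    }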
To be extra pedantic, that's not quite what volatile does. Volatile ensures that accesses through the variable behave strictly according to the rules of the C abstract machine, but what constitutes an "access" is implementation-defined. A compiler author could define "access" to be "reads from the variable" or "writes to the variable", or neither, and make the entire keyword useless. As long as they document that somewhere, it's compliant with the standard. You think it means "reads and writes", but it doesn't have to.
It's tempting to write a "malicious compliance C compiler" that complies with the standard but makes all the most perverse possible choices.
Your comment is technically correct, but the original description of “volatile” in the parent comment is much clearer. It may be fun to imagine what would happen if the compiler writer were malicious, but it doesn’t really help us understand C.
Just like C compilers assume that the program is correct, the writers of the C standard assume that the people reading it are not insane or malicious.
> A compiler author could define "access" to be "reads from the variable" or "writes to the variable", or neither, and make the entire keyword useless. As long as they document that somewhere, it's compliant with the standard.
Nobody is going to accept that as a compliant implementation.