Aliasing is a real problem. If any pointer of any type can write through to any value of any other type with defined behavior, many key optimizations (moving a value into a register!) become impossible.
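A minimal sketch of the kind of thing that breaks (function and parameter names are mine):

    /* If a store through an int* could legally modify a float, the
       compiler would have to reload *val from memory on every
       iteration. Strict aliasing lets it keep *val in a register. */
    float sum_repeated(float *val, int *flag, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; i++) {
            *flag = i;    /* under strict aliasing, cannot change *val */
            sum += *val;  /* so *val can stay in a register */
        }
        return sum;
    }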
However, many of the same key optimisations are useful when both pointers have the same type. Given multiple float* pointers, you still want to know whether they alias, for load/store forwarding and the like.
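For example (a sketch, with names of my own choosing), type-based rules say nothing here:

    /* Both pointers are float*, so distinct-types analysis is no help.
       Unless something (alias analysis, or restrict) proves a != b,
       the compiler must reload *b after the first store, because that
       store may have changed it. */
    void add_twice(float *a, float *b) {
        *a += *b;
        *a += *b;  /* reload of *b can be forwarded away only if no alias */
    }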
Once you've written the analysis to partition values of the same type into separate alias sets, that same analysis runs fine on values of different types.
Distinct types implying distinct alias sets tolerates separate compilation well and is cheap to compute. But the belief that it is the only way to achieve those key optimisations deserves some scepticism now that link-time optimisation and interprocedural analysis are fairly common.
In general I don't know how you make currently-UB aliasing safe. But in that specific case you can just say that the value not showing up through the bad alias is a possible defined behavior.
But whether it shows up or not is a function of the optimization decision, which might have gone the other way for any number of hard-to-define reasons.
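Concretely, this is roughly the classic strict-aliasing example (a sketch; the function name is mine):

    /* If i and f actually point at the same memory (a "bad" alias),
       a compiler that assumes they don't may keep *i in a register
       across the store to *f and fold the return to `return 1`, so
       the write through f never "shows up". Whether it does depends
       entirely on whether the optimizer made that decision. */
    int set_and_read(int *i, float *f) {
        *i = 1;
        *f = 2.0f;   /* assumed not to alias *i */
        return *i;   /* may be compiled as: return 1 */
    }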
I would have thought the best-case scenario for a bad alias is that you safely corrupt some other piece of data. (Assuming you let those writes go through.) So "it might not do that" seems like an improvement to me.
If you don't let those writes go through, then we're in a different, happier situation, and the register optimization causes no change in behavior.
I guess I'm not sure which definition you're proposing:
a) Writes through "bad" aliases never take effect.
or
b) Writes through "bad" aliases take effect sometimes, no guarantees.
I don't think either one makes the point you're hoping to make.
b) is just bog-standard undefined behavior. I guess you could be intending to constrain the range of effects compiler-emitted code can have in bad-aliasing situations (no nasal demons!), but it's not clear how much additional optimization latitude you get by ruling out only nasal demons. Compilers often save stack space by reusing stack slots for multiple local variables; a bad alias to one could corrupt an unrelated value, and you've got nasal demons again.
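A sketch of that stack-slot hazard (hypothetical code; the corruption is one plausible outcome of the UB, not a guarantee):

    #include <stdio.h>

    int main(void) {
        int *stale;
        {
            int a = 1;
            stale = &a;         /* pointer outlives a's lifetime */
        }
        {
            int b = 2;          /* may reuse a's stack slot */
            *stale = 99;        /* bad alias into the reused slot */
            printf("%d\n", b);  /* might print 99 instead of 2 */
        }
        return 0;
    }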
a) requires massive compile-time and run-time effort to dynamically distinguish "bad" aliases from "good" ones. This is along the lines of what Valgrind does, at great cost.
I thought the premise was that Someone Else already took care of those nasal demons somehow, and we're just worried about putting the register optimization back into place. So the range of effects has already been constrained to something safe. We're just adding in "sometimes it doesn't have those bad effects" to enable some optimizations, and that should have very little downside.
Options a and b are just different ways Someone Else could have implemented their solution. I'm not suggesting how they did that; I'm taking it as the premise. The problem of "how do we make bad aliasing safe" is much, much, much harder than "how do we still enable normal optimizations like this after we make bad aliasing safe".