I understand we disagree, so I'll try to clarify my point of view a bit, which will hopefully make things better, not worse.
One is often forced (despite promises not to) to optimize what one measures or shares. That may or may not be relevant to a particular project, but I doubt it is completely invalid (despite the claimed clean-room separation between descriptive and inferential procedures).
The idea is: if one believes absolute deviation is the one true measure, then it would not make sense to optimize over a different measure (variance).
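To make that concrete, here's a quick numeric sketch (entirely synthetic data, not from this thread): the two criteria really do land on different optima once the data are skewed, because minimizing absolute deviation picks out the median while minimizing squared deviation picks out the mean.

```python
import numpy as np

# Minimal sketch: on a skewed sample, the constant minimizing total
# absolute deviation is the median, while the constant minimizing
# total squared deviation is the mean.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2_000)  # skewed sample

grid = np.linspace(0.0, 5.0, 2_001)                     # candidate constants c
l1 = np.abs(x[:, None] - grid[None, :]).mean(axis=0)    # mean |x - c| per candidate
l2 = ((x[:, None] - grid[None, :]) ** 2).mean(axis=0)   # mean (x - c)^2 per candidate

print("L1 argmin:", grid[l1.argmin()], " median:", np.median(x))
print("L2 argmin:", grid[l2.argmin()], " mean:  ", x.mean())
```

On a lognormal sample the two answers differ by quite a bit (median around 1, mean around 1.65), which is the whole point: the choice of criterion is not innocuous.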
I in fact like quantile regression, but it has its own caveats.
So, let's say we want to make an industrial process more reliable by reducing the variability of its output. We repeatedly measure that variability by means of the MAD. To find out what drives it, we regress the MAD on various predictors. The regression then lets us optimize the MAD, but the regression itself is fit using ordinary least squares. I don't think anyone would object to that?
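Something like the following, as a hedged sketch of that setup (the predictor names and all data are invented for illustration; this isn't anyone's actual process):

```python
import numpy as np

# Each row is one production run: compute the MAD of that run's output
# measurements, then regress the per-run MAD on candidate predictors
# using ordinary least squares.
rng = np.random.default_rng(1)
n_runs, n_meas = 200, 50
temperature = rng.uniform(150, 250, n_runs)   # hypothetical predictor
feed_rate = rng.uniform(0.5, 2.0, n_runs)     # hypothetical predictor

# Simulate output whose spread depends on the predictors.
spread = 0.02 * temperature + 1.5 * feed_rate
output = rng.normal(0.0, spread[:, None], (n_runs, n_meas))

# Per-run MAD: mean absolute deviation from each run's own mean.
mad = np.abs(output - output.mean(axis=1, keepdims=True)).mean(axis=1)

# OLS fit of MAD on the predictors (intercept + two slopes).
X = np.column_stack([np.ones(n_runs), temperature, feed_rate])
coef, *_ = np.linalg.lstsq(X, mad, rcond=None)
print("intercept, temperature, feed_rate:", coef)
```

The fitted coefficients point at the settings driving the variability, even though the regression itself was estimated by least squares rather than by minimizing an L1 criterion.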
I guess you're thinking more along the lines of describing a model's performance in terms of the MAD and then optimizing using MAD / L1, which isn't always a wise choice? In that case we don't disagree at all. I do like MAD as an easy-to-communicate loss statistic in many cases (as well as the percentage of cases with predictions further than a domain-specific distance from the truth), but I don't think many people would consider loss a descriptive statistic at all – it describes not the world but a model of the world.
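For what it's worth, here's how I'd compute those two reporting statistics; a minimal sketch assuming numpy, where the tolerance of 5.0 is a placeholder standing in for whatever domain-specific distance applies:

```python
import numpy as np

def report_loss(y_true: np.ndarray, y_pred: np.ndarray, tol: float) -> dict:
    """MAD of the prediction errors, plus % of cases further than tol from the truth."""
    errors = y_pred - y_true
    return {
        "mad": float(np.mean(np.abs(errors))),
        "pct_beyond_tol": float(np.mean(np.abs(errors) > tol) * 100),
    }

# Usage with a stand-in model's predictions on synthetic data:
rng = np.random.default_rng(2)
y_true = rng.normal(100, 10, 1_000)
y_pred = y_true + rng.normal(0, 4, 1_000)
print(report_loss(y_true, y_pred, tol=5.0))
```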