While I'm pretty sure this is entirely true, it's crazy not to recognize that you're producing a language where basically everyone has been trained that way since kindergarten.
I question its truthiness. It's not as if there's some shadowy cabal conspiring to force people to put operators in the middle; we developed infix notation over a long period of time, and it seems likely there are good, though possibly not entirely logical, reasons for it. We humans have funny brains, after all.
And if you really want to dive down that rabbit hole, I think you have to establish that prefix is better than suffix as well. I'm somewhat dubious of even this claim, since I find suffix easier to reason about. But maybe that's just me.
I think path dependence is plenty to lock us into behavioral patterns whenever the alternatives aren't more than marginally better. There may be ways in which infix is genuinely better (I'd conjecture the flexibility of ordering permits more communication, in terms of emphasis and structure, in the same formula), but I would stand strongly by the notion that the primary reason we respond that it's "nice" is familiarity. I'm particularly recalling the trouble some of my peers had with order of operations; I think if it were flipped and we'd spent a decade dealing with prefix notation before being confronted with infix, we might find different things to struggle with as children, but having struggled, we would find prefix "nice" too.
I think looking at the pros and cons (in the abstract and in people's heads) of prefix vs. suffix could be interesting. What I like about suffix is that you can treat it as a stack. What I like about prefix is that I know what kind of node I'm building as I consider the arguments. I've not done enough of either to have much of an opinion on which matters more (some Lisp, some RPN calculators, but not enough). I certainly wasn't agitating for prefix (in particular) above.
One argument for infix is that it minimizes the distance between the salient token (the verb) and its arguments. When you see "a := b", the ":=" serves as a convenient visual anchor to check the variable name (to its left) or its value (to its right).
Note that English itself is an infix language: "Bob likes Clara" (SVO, infix), rather than "Likes Bob Clara" (VSO, prefix) or "Bob Clara likes" (SOV, suffix). A cursory search tells me SVO and SOV cover 75% of all languages. It would be interesting to see if people speaking SOV languages would prefer suffix notation. I would expect common patterns in (unrelated) world languages to loosely mirror natural dispositions towards syntax. In practice, that's probably a hodgepodge of prefix, suffix and infix depending on whether you're dealing with verbs, connectives, prepositions, etc.
Somewhat tangentially, I wonder if some of the appeal of object-oriented languages is that they usually explicitly mimic SVO: myArray.find(2) is closer to real natural language than array_find(myArray, 2).
Well, really, Java has a tendency to take these things to an extreme. Like the author of that post, I like that C++ gives you a choice.
Of course, usually there's an implicit subject (receiver): this or self. In a language like Ruby, which is very, very object-oriented and has a similar thing where every function call has a receiver, every instance gets a certain set of stuff (the Kernel module) mixed into it, so general-purpose functions can be called on that implicit receiver. It works pretty well for solving this problem from the other side.
I think the confusion with VSO (prefix) only arises when there is a single clear subject and verb.
Take for example a common prefix notation:
(< a b c d)
This stands for "are these all increasing?"
Since none of the arguments is really the subject, most Java-like languages (if they had this at all) would have to invent a subject: Integer.areIncreasing(myList). That no longer gives you a meaningful subject, just a made-up one.
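To make that concrete (a sketch in Clojure, though any Lisp with a variadic < reads the same way; the numbers are just for illustration):

(< 1 2 3 4) ;; => true, strictly increasing
(< 1 3 2 4) ;; => false

No subject in sight, and nothing feels missing.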
Certainly some operations (mostly arithmetic) are easier on the eyes in infix, since we've had a lot of practice with it.
Whenever I do arithmetic, I use threading macros to make it easier to read. Suddenly, it reads like infix, but with more flexibility.
(+ 4 (- 1 (/ 4 2))) ;; what?!
becomes
(_> 4 (/ _ 2) (- 1 _) (+ 4)) ;; ah
In infix, it would be:
(1 - (4 / 2)) + 4
The threading macro isn't quite as nice as the default infix, but it allows for both notations.
I think the second reads very well, all things considered: start with 4, divide by 2, subtract from 1, add 4. (_ is the placeholder.)
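If your Lisp doesn't have a placeholder-threading macro handy, Clojure's built-in as-> gets you roughly the same shape with a named placeholder (just a sketch of the same pipeline, nothing specific to my macro above):

(as-> 4 x
  (/ x 2)   ;; 2
  (- 1 x)   ;; -1
  (+ x 4))  ;; 3, same as the nested version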
That's really interesting. I can't say that I, personally, find it easier to read than the rather plain Polish notation of the first example. I find I have to work really hard to connect the _s together, and I find this process unpleasant. Maybe it's just an artefact of not finding bare Lisp in general all that appealing.
Re your example with <, the 'natural' object-oriented way to do this, to me, would be to treat the list itself as the subject: [a,b,c,d].isOrdered(), say. As for the fact that doing it this way in Java would be incredibly ugly, I'll only say that I'm not even remotely a fan of Java or, for that matter, of C++/Java/C#-style statically typed object orientation.
The threading macro idea is just meant to make it easier for humans to read, so if it's not easier for you, there's no sense bothering with it. As such, it's not really a prefix vs. infix tool, just a tool that's possible in languages with macros.
As to the "are increasing" example, yes, I suppose the list itself would be a more natural subject.
This is why I love macros (not really infix or prefix specifically): a macro makes it trivial to just have this:
(. [1 2 3 4].isOrdered)
turn into this:
(isOrdered [1 2 3 4])
That way both the human and the compiler get their preferred view.
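To be concrete, the rewrite itself can be tiny. A rough sketch in Clojure (the names svo and is-ordered? are made up for illustration, and the exact [1 2 3 4].isOrdered surface syntax would need a reader-level tweak that I'm skipping here):

;; hypothetical "subject first" macro: (svo subject verb & args) expands to (verb subject & args)
(defmacro svo [subject verb & args]
  `(~verb ~subject ~@args))

;; hypothetical predicate from the earlier example
(defn is-ordered? [xs]
  (apply < xs))

(svo [1 2 3 4] is-ordered?) ;; expands to (is-ordered? [1 2 3 4]) => true

The human writes the subject-first form, and the compiler only ever sees the plain prefix call.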