> too difficult, on a mathematical level, for mere humans to understand.
To me the current situation looks similar to when we kept building more and more complex combinations of epicycles in order to model, ever more precisely, the "Sun and the planets rotating around the Earth". Similarly, we now have a couple of dogmas, say the Copenhagen interpretation and the more recently emerged dark matter, which, while not yet confirmed by experiment, can't be questioned (a classic sign of dogma), and as a result science resources are available only for the complex epicycle-like constructs built on those dogmas.
Well, right now we have some perfectly valid mathematical constructs that nonetheless have nothing to do with reality. Gabriel's Horn is a good starter example: it has finite volume but infinite surface area, so you could fill it with paint yet never coat its surface. You can wave that away by giving the paint "atoms" a finite size, but more subtle constructs lie in wait, like the Banach–Tarski paradox.
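For readers who haven't met it, a quick sketch of the calculus behind that claim (my own summary, not from the thread): Gabriel's Horn is the surface obtained by rotating y = 1/x, for x ≥ 1, around the x-axis.

```latex
% Gabriel's Horn: rotate y = 1/x (x >= 1) about the x-axis.
% The enclosed volume is finite:
V = \pi \int_{1}^{\infty} \frac{dx}{x^{2}}
  = \pi \left[ -\frac{1}{x} \right]_{1}^{\infty} = \pi
% but the surface area diverges, since the integrand dominates 1/x:
A = 2\pi \int_{1}^{\infty} \frac{1}{x} \sqrt{1 + \frac{1}{x^{4}}}\, dx
  \;\ge\; 2\pi \int_{1}^{\infty} \frac{dx}{x} = \infty
```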
This will doubtless irritate some, but math for me is a tool, and if it is not used somehow in reality, I start wondering what it is for. Occasionally the utility pops up later, and that's fine, because you never know what will end up being handy, but some branches seem so esoteric that I am at a loss. Again, probably my own ignorance, but I worry that those brighter and better educated than I am might be led astray by castles in the sky, purely from the beauty and the sense of satisfaction you get from finding your way into the castle, if that makes sense.
> math for me is a tool, and if it is not used somehow in reality, I start wondering what it is for [...] some branches seem so esoteric that I am at a loss.
I'm of kind of the opposite opinion when it comes to math. Myself a math PhD dropout, I welcome ever more complex and esoteric constructions in math, without any regard for their relation, or lack thereof, to physical reality. When it comes to physics, though, I think the opposite rule should apply: any math constructions brought in should be checked against physical reality.
> any math constructions brought in should be checked against physical reality
But that's not how things work in practice. Parts of the mathematical framework used for describing a particular physical phenomenon are always a kind of mental scaffolding and do not have a direct representation in reality. This is how math always works in science. Think of complex numbers, for example: they are used extensively in quantum physics and electrical engineering, but that does not mean that you can see them, touch them or measure them.
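To make the scaffolding point concrete, here is a small Python sketch with made-up component values: the impedance of a series RC circuit is most naturally written as a complex number, but everything you could actually put a meter on is a real quantity derived from it.

```python
import cmath
import math

# Hypothetical series RC circuit driven at 1 kHz (illustrative values).
R = 1_000.0   # resistance in ohms
C = 100e-9    # capacitance in farads
f = 1_000.0   # drive frequency in hertz

omega = 2 * math.pi * f

# The complex impedance is the mental scaffolding: no instrument
# reads out "1000 - 1591.5j ohms" directly.
Z = R + 1 / (1j * omega * C)

# What you can actually measure are real numbers derived from it:
magnitude = abs(Z)                     # ohms, from a voltage/current ratio
phase = math.degrees(cmath.phase(Z))   # degrees, from an oscilloscope trace

print(f"|Z| = {magnitude:.1f} ohm, phase = {phase:.1f} deg")
```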
> Think of complex numbers, for example: they are used extensively in quantum physics and electrical engineering, but that does not mean that you can see them, touch them or measure them.
Yes, we use complex numbers there only because we have a very consistent mental map between their properties and behaviour in the math model and physical reality. Physical reality doesn't obey the math model; rather, the model is a successful and useful one that consistently describes and predicts reality within some range of conditions. So when a new model is brought in, you'd want it to have at least a partial but consistent mapping to reality, and as the model is refined the mapping is expected to improve. If refining the model instead turns into round after round of growing complexity without a matching improvement in how consistently it maps to physical reality, you'd naturally start to question the model, and maybe think, VC style, about redirecting a few percent of the resources feeding that complexity growth of the currently dominant, yet still far from satisfactory, model into exploring alternative models.
"One of the oldest tactile illusions is the Aristotle illusion. It is easy to perform. Cross your fingers, then touch a small spherical object such as a dried pea, and it feels like you are touching two peas. This also works if you touch your nose."
Math is its own reality. Well, many realities, depending on which axioms you start with. Computer science is the same, at least the theoretical computation side of it. In reality, we can only build pseudo-Turing machines with limited tapes, and in the end these are all equivalent to finite state automata (FSAs), albeit very large ones.
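To make the FSA point concrete, here is a rough counting argument (my own sketch, not from the comment): a machine with control states Q, tape alphabet Γ, and a tape capped at n cells has only finitely many distinct configurations, so its behaviour can in principle be unrolled into one gigantic finite state automaton.

```latex
% A configuration is (control state, head position, tape contents):
\#\text{configurations} \;=\; |Q| \cdot n \cdot |\Gamma|^{n}
% Finite for any fixed n, hence equivalent to an FSA with at most that
% many states; for, say, |Q| = 100, |\Gamma| = 2, n = 2^{30}, this is
% astronomically large but still finite.
```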
> Gabriel's Horn is a good starter example [...] You can wave that away by giving the paint "atoms" a finite size, but more subtle constructs lie in wait, like the Banach–Tarski paradox.
What about e or pi? Any transcendental number suffers the same problem as Gabriel's Horn. What point is there in knowing pi beyond a million digits? I recently saw a video explaining that we know almost nothing about a power tower of pi of height 4 (pi^pi^pi^pi), not even whether it is an integer. Side note: e^ln(2) = 2 is the trivial case of raising one transcendental number to another and getting a natural number, despite both numbers being even less related to reality than something like sqrt(2).
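A quick sketch (standard-library Python, my own illustration) of why that height-4 tower is so untouchable: the height-3 value is already around 10^18, so the height-4 value would need on the order of 10^17 decimal digits just to write down.

```python
import math

# Power tower of pi: t_1 = pi, t_{k+1} = pi ** t_k.
t1 = math.pi         # height 1: 3.14159...
t2 = math.pi ** t1   # height 2: ~36.462
t3 = math.pi ** t2   # height 3: ~1.34e18

print(t2, t3)

# Height 4 would be pi ** t3. Its decimal expansion has about
# t3 * log10(pi) digits -- on the order of 10**17 -- so even writing
# the number down (let alone deciding whether it is an integer) is
# far beyond any computation of pi to date.
digits_height4 = t3 * math.log10(math.pi)
print(f"height-4 tower has ~{digits_height4:.2e} decimal digits")
```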
But is the research important, or is it just impact factors and grants? There are so many researchers. It feels as if some kind of optimism, rather than, say, mathematical capability, predicts who goes into it. Certainly I think it's very different from 20 years ago. How has the number of publications changed over time? There just can't be enough original critical thinking to sustain that volume.