I really liked this, and it's also why it's so hard to teach mathematics, which is part of my current job.
Most people think in a context-dependent way. If you ask, suppose Jane has three apples and John gives her two more apples, how many does she have - then most kids at the appropriate level will visualise apples and count to five. Give exactly the same problem but with "Jane has five McGuffins" and you'll get a confused stare followed by "what's a McGuffin?". Except of course for the one kid who has no problem with the math because they misheard it as McMuffin and could visualise that!
But the math we teach in school is context-sensitive. It matters what set your inputs are from and what set the output is supposed to be in. We usually don't mention that we're doing math on real numbers; we just assume that from the context.
22 + 8 = 6
This would be incorrect in an average math class, but when you're dealing with a clock it's something everyone understands: 8 hours after 10 pm is 6 am.
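In code that's just modular arithmetic; a minimal Haskell sketch (the function name is mine):

```haskell
-- Clock arithmetic: hours live in the integers mod 24
addHours :: Int -> Int -> Int
addHours start delta = (start + delta) `mod` 24

main :: IO ()
main = print (addHours 22 8)  -- prints 6: 8 hours after 22:00 is 06:00
```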
ab = ba
We teach the commutative property as though it is universal, but it isn't. With real numbers? Sure! Swap the order of two matrices, though, and you're in trouble.
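To make that concrete, here's a quick sketch with hand-rolled 2x2 multiplication (no library assumed), showing that swapping the factors changes the answer:

```haskell
import Data.List (transpose)

-- 2x2 integer matrices represented as lists of rows
type Matrix = [[Int]]

-- Standard row-by-column matrix multiplication
mmul :: Matrix -> Matrix -> Matrix
mmul p q = [[sum (zipWith (*) row col) | col <- transpose q] | row <- p]

main :: IO ()
main = do
  let a = [[1, 2], [3, 4]]
      b = [[0, 1], [1, 0]]
  print (mmul a b)  -- [[2,1],[4,3]]
  print (mmul b a)  -- [[3,4],[1,2]], so ab /= ba here
```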
I don't think kids should necessarily be taught differently, but there is definitely (implicit) context involved in math. Even in geometry: the inner angles of a triangle add up to 180 degrees, right? But in spherical geometry the sum of the inner angles of a triangle can be larger: a triangle with one vertex at the north pole and two on the equator, a quarter turn apart, has three right angles, for a total of 270 degrees.
Yes, abstracting away from the context only works if you can tell that the problem really is context-independent. This works well with apples, not so well with e.g. commutativity once you get to things like matrices. So abstraction (from "apples" to "numbers" to "matrices") can sometimes reintroduce context that had previously been discarded.
I have a good friend who works as a high school physics teacher, and this is apparently exactly how they teach: they teach the intuition behind the problems so that the kids can visualise them.
And this works up until about high school physics, but not much further. Not only do many interesting mathematical objects lack an intuitive basis, some are interesting specifically because they behave counter-intuitively.
One of my favorite _rudimentary ideas about mathematics_ comes from philosopher Cathy Legg describing the work of Charles Sanders Peirce:
_"Peirce had a hypothetical interpretation of mathematics. So mathematics doesn’t talk about what’s actual at all. Mathematics makes no positive claims. Mathematics just tells you if you make this hypothesis, then this must follow. So mathematics is the science that draws necessary conclusions."_
If you get your head around that, then apples and McGuffins are both permissible.
I chased 2-3 linked articles deep and am still wondering what is meant by reasonable here. Or "reasonably effective."
Is that just an example of the ineffective reasonability of essays?
My best guess at this point is that reasonable is what a person expects. And if that's so, it's subjective. And math abstracts realities into imperfect but objective simulacra. So I think the claim is that math is made of abstract rules. A tautology? A deepity? I must be missing something.
It's a play on the "unreasonable effectiveness of math" essay, right?
I took it to mean the "unreasonable" ingredient which makes math so effective is context independence - since that's something which is not so easily attainable in other fields.
In the original essay, "reasonable" specifically meant "rational" in the sense of "able to be deduced from first principles." The point of the whole essay was that math was spookily good at modelling reality, empirically speaking, but that fact is super weird considering we have no rational basis to expect that to be the case, i.e. we have no first principles based in physical reality from which we could deduce that math (twiddling with the relationships between symbols using rules we basically just made up) should be able to model reality the way it does. And isn't that a strange mystery to contemplate.
Most essays that use the phrase really just mean "surprisingly effective," which is a pet peeve of mine, but I think this essay gets a pass because it's trying to actually address that "strange mystery."
The meta-mathematical assumptions (axioms) are the context.
Different axioms produce different truths; or, if you want, they produce different mathematical universes [1].
Maths is relative like Physics is relative - it depends on your frame of reference [2].
I'd put that a different way: the point of maths is not really being context-independent, but making it very clear what the context is. So, let us consider a statement A which is true provided that a certain set of hypotheses B is true. You might either consider that "A is true in the context of B" (commonly written as "B |- A"), so you have a context, but it is very clearly stated what it is. Or you can (often) write it as an implication: "B -> A". The whole sentence "B -> A" is an absolute; it has no context any more, because the context has been absorbed into the antecedent.
(yes, I know I am oversimplifying something, take this at the "philosophical" level)
From the lens of the Curry-Howard isomorphism where logic, category theory and type theory are just different perspectives on the same sort of mental human activity...
Implication (logic) is the same thing as internal hom (Category theory); or Function type (type theory).
It is just syntax. B |- A in logic translates to f::B -> A in Haskell.
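A minimal sketch of that reading (the names are mine): a term of the function type is a proof of the implication, and modus ponens is literally function application:

```haskell
-- Under Curry-Howard, the implication B -> A is the function type b -> a.
-- Modus ponens ("from B -> A and B, conclude A") is just application:
modusPonens :: (b -> a) -> b -> a
modusPonens f x = f x

-- Conjunction ("A and B") is the pair type; proving it is pairing:
andIntro :: a -> b -> (a, b)
andIntro x y = (x, y)

main :: IO ()
main = print (modusPonens (+ 1) (41 :: Int))  -- 42
```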
Yeah, context-dependence is a matter of degree. However, if you rephrase the article in terms of drastically reducing context dependence, particularly eliminating physical circumstance from the context, it still says something mostly true and important.
Yeah, but it's genuinely remarkable that a solution should ever be applicable to more than one problem, beyond the one where it was first devised. It's easy to be numb to that if you've grown up with math showing up all over the place, but that's what the "unreasonable effectiveness" thing is all about.
> So perhaps the best way to build efficient abstractions in systems is to think about the flow of the system in terms of axioms and conditionals. The abstractions are axioms that can be grouped together and the conditionals are the boundaries between them.
I wonder how you square this idea of generalization with Godel's incompleteness theorems?
Why is it even relevant? Godel's incompleteness theorem applies to almost any sufficiently strong formal system with self-referential abilities, so if you want to do a good formalization, you're bound to end up with one that satisfies the theorem's requirements.
Easy: have you ever had an epiphany about what an idiot you'd been, not understanding what everyone had been telling you for a long time? Then you realized something and everything just snapped into place. You now understand it all. Poof! An axiom or condition in you was changed by an experience. Suddenly you get what others were telling you. Welcome to another system of understanding.
IMO math seems effective because everything that works is called math. So yeah, that quote from the beginning is right, it's selection.
There are many different math concepts used to describe the world, everything from calculus to graph theory, geometry, and so on. These things have a two-way relationship with the real world: they don't necessarily have to correspond with anything real (like Hardy's quote about his number theory work that eventually ended up appearing in cryptography), but if something in the real world happens ahead of it, math will expand to swallow it.
Think of a scientific theory that isn't described with some kind of math. I'm not sure it can be done. My sense is that whatever you think of, even if it's completely new, will be called math. For instance general relativity relied on some quite new concepts at the time, but nobody would point at it and say it wasn't math.
I think this is a very widespread idea, I used to believe it too, perhaps due to our background. However, if you work on translating science to computers, you soon find it's not so true. There are so many ideas in science which are not mathematically encoded, but rather encoded in human language; it's kind of frustrating. The "low-level" sciences like physics have spoilt us with their very math-like nature. But even in those sciences there are a lot of things which are not well-defined, but rather rely on human intuition and language. If you go "upwards" in the stack, you find things like biology, where there is a lot of very formal scientific knowledge which is not maths. And I work in linguistics, so just imagine what it's like at this level ;)
I'm finding the deeper I study biology, the more certain I am that complex models with both classical and quantum parameters will eventually be able to predict the overwhelming majority of macromolecular behavior such as protein folding and DNA recombination.
Once you start dealing with concepts bigger than that you get into another mathematical description with Markov chain style models for cellular proliferation, followed by network analysis for tissue growth.
You can take that up further and further, I'm sure you're somewhat familiar.
My question is, even if you have some examples, what do you find to be the theoretical limit to modelling that would actually be accurate?
Not a limit to the accuracy, that must simply always exist, but a limit to what can be successfully modeled at least to "acceptably correct" for use in some application?
In biology: morphology of organisms, evolution, ecology. And those deal with systems, so they use a lot of math. But interspersed with the math, you always find natural language descriptions, definitions, explanations, which are necessary for understanding and complete modelling of the theory. These make reference to the shared human experience of the world, and are not formalized in logic. Not that they cannot be, or at least so I hope. But we're very far from it today, that's what I mean.
Maybe relatedly, humans think of the world in fuzzy terms. At some point we're going to need a system for formalizing fuzzy thought, and no, fuzzy logic is not it, because that's just a continuous extension to boolean logic. Human thinking is fuzzy beyond that. But, as a computational linguist, I sometimes worry that we already have that system: natural languages!
I don't think I was specific enough; I guess what I'm looking for is something that we can describe with language that doesn't have at least some sort of parameterization in regards to physics.
So take the original reaction that formed DNA from just inorganics: I typed those words, but I have no reference for what the model actually is. What I do have, however, is words for each of those things, and a set of impossibilities: things it could "not" mean.
However, the reference is not built out of nothing: each of those words points to a set of things that we do have models for; we have models for atoms, reactions, DNA, etc.
So in reality the sentence describes something that we simply can't point to specifics on, but is in no way "unexplainable" in terms of its logic.
Another example would be dark matter, we use those words, but really they just stand for a set of observations, empirical measurements just operating outside of the patterns we are used to, but certainly not without something to point to.
If there's some shared experience that we can't express logically, I'm at least personally unfamiliar with it, I would need some further understanding of what you have in mind.
I could also be wildly misreading what you mean, semantics are not my favorite over text.
> they don't necessarily have to correspond with anything real, ...
I would just say they correspond to encoded thought processes, encoded reasoning. If you can take a thought process and describe it in terms of sets and relations (i.e. subsets with certain properties), you have a mathematical structure and you can start trying to prove theorems.
You spend time thinking about a problem, then hopefully you start recognizing patterns, then you take the reasoning, clean it up, abstract it and generalize it to increase its ultimate utility, and package it for others to reuse and build upon.
> everything that works is called math
It is just fortunate that people have been able to "package" a lot of stuff this way. Like Riemann did with his geometry for example. It is not that mathematicians just decide to "take over" everything.
What? I'm sorry but this is utter bullshit. Math is not just anything that works. Every hot new theory is assumed to "work" in the era in which it is produced, and not everything is called "math". There has been no significant "wrong" result in the entire history of math since ancient times, nor has any significant result been jettisoned from the field of math, whereas every other field or discipline of study has been wrong at some point. If math were just "anything that works", then we'd be regularly purging stuff from the "math" label, but I can't think of anything that was called "math" in history and not called "math" now.
What about "experts"? Like, I would say there are people out there who are valuable because they know how to do stuff. It's not required for them to explain how they do it, and often times it's the case that they can't, otherwise they would simply explain and we'd all be experts. Rather, we need them precisely because the results they deliver are not able to be broken down into a sequence of steps that anyone could follow - since many of them can't explain. So it's something that "works" but isn't math.
I'll add that eventually experts are replaced, but then by that time there are new experts. The problem domain evolves and what used to require experts is replaced with math, and the new experts are working in the area where things can't be math.
Conceptually I think I'm on point here, but I don't know if my examples are super good. I'd say business, human language, politics, medicine, and art are all examples of things that have experts. In each of these fields there are things that work, but they're not yet backed up by math.
Maybe it's more accurate to say, given an infinite amount of time and intelligence, everything becomes math? And I think that makes sense, but I'm sort of inclined to believe in an objective, yet logistically intractable reality.
Sure, tacit knowledge is a real thing that people talk about. But I see expertise as a kind of navigation through murky waters rather than "theory" which tends to be an explicit thing.
One thing experts can do is tell you when a theory is applicable.
Reading just the top answers from those threads, I see no significant results that have been disproved. Only the "intuitions" and "footnotes" and some "trivial assumptions" of mathematicians, but not an actual published result that was cited by other results, where the mistake had significant consequences by invalidating those downstream results.
You can't open the conversation offering "the entire history of math since ancient times" and then demand thoroughly modern things like "an actual published result that was cited by other results" as counter-evidence.
Nonetheless many of the examples in the above links still fit your criteria.
A published result that cites another result is not "a thoroughly modern thing". Mathematicians have been citing each other since Pythagoras and Avicenna.
You seem to be arguing from a personally idealised view of math, which doesn't match reality.
Real math is full of false starts, dead ends, and established mistakes which are later corrected.
Math is exactly like science. There's a cumulative core we can be very confident about, and more exploratory edges where results are more tentative and subject to review, correction, and expansion.
Sure there are incomplete proofs and dead ends. But I have yet to see an example of an "established mistake" which disproved an entire line of research that depended on the mistake. Sure, mistakes have been published. But it never led to an entire branch of "knowledge" based on a false belief -- something that happens regularly in other fields.
Except for the corner cases. The trivial one is "angles in a triangle add up to 180", which works in a plane but not on the surface of a sphere, so navigation has to use more than trivial trigonometry functions for accuracy at scale.
That's not a corner case. Either you defined "triangle" and "angles" to mean a plane triangle and angles, or that is not a theorem. Within the theory you're looking at, the definitions are not part of the context, they are part of the theory itself. So you're not depending on the context.
Like the other response you basically said context is everything. You just prefer to call the context axioms. What is context free here is the arithmetic.
That's not a corner case, that's just more advanced math. No one ever claimed that triangulation on a plane is the same as triangulation on the surface of a sphere.
Arithmetic still depends on which ring/field axioms you're using. :) I think you would have to agree, though, that axioms are a much more tractable kind of context than "a whole honking physical situation with atoms and entropy", which I think is the real kernel of the article.