Are there better arguments for anything in pure math being a posteriori? I fail to see how 4CT (or any mechanized/computer-aided proof) could possibly support this idea.
According to Gödel, it's probable that there are true mathematical formulas that cannot be proven.
So, given a statement, we cannot know from our point of view whether it can be proven or not.
So all true statements in pure mathematics that we know are a posteriori true, since before we had the experience of proving one, we could not know whether it was a statement that could be proven or disproven.
The Incompleteness Theorem is a technical result about formal systems. It says nothing about the provability of mathematical formulae, since math isn't a formal system.
If you accept the a priori/posteriori distinction, you're abusing words to claim that pure mathematics - the classic example of a priori knowledge! - is a posteriori. A priori knowledge isn't knowledge that we come to know is true through an experience (like proving something), it's knowledge that is true regardless of any particular experience.
Of course math is a formal system. If you take Zermelo-Fraenkel set theory as the foundation of math, it has a very clear, formal set of axioms.
And we do not know for certain that math is objective truth. If we did, there would be no philosophy of mathematics.
Reasoning within mathematics is objective, because math is a formal system. But to think that we know anything about anything is frankly pure arrogance. We don’t know why we are here or what our universe even is at a fundamental level. Math is a human-imposed construct that we use to try and make sense of it all.
> But to think that we know anything about anything is frankly pure arrogance.
It would be arrogant to conflate a model and the subject being modeled, but we have some pretty successful models for physical reality, and you won't bring me to say we don't know anything about it.
That's actually a really interesting philosophical question!
Formal proofs are, of course, objects in a formal system. But as anyone who's tried to express a mathematician's proof of any real complexity in Coq can attest, they're a very small subset of the sum total of human proofs.
The proofs that mathematicians have been doing for thousands of years, by contrast, aren't formal (in the strict sense). They're, in a phrase, arguments that appeal to the human sense of deductive reason. And yes, formal logic is an attempt to codify and mechanize that as an object for study, but...well, like I said, it's a big philosophical, or even neurological, question how well it does that, or to what degree it can.
While I can't vouch for thousands of years of it, I'm pretty confident (and I think most of the mathematical community is too) that all of the mathematics of the last 200 years can be formalized. There isn't really much of a question of whether we can formalize modern informal, rigorous proofs. Indeed, a benchmark of whether a proof is rigorous is whether there's a clear path towards formalization (similar to how we might informally judge whether an algorithm is well-specified by whether there's a clear path to implementing it in a real programming language).
As such, mechanizing mathematics into formal proofs has yet to meet any fundamental difficulty I'm aware of. The main thing is that it's just a slog, and few people are working on it because it's such a slog. It can take orders of magnitude more time to formalize a proof than it took to come up with the informal but rigorous original. But the process doesn't really require deep insights, at least no more than translating an algorithm from a CS textbook or paper into real code does. It's mainly that there are reams and reams of tedium which an informal proof can compress into a single "mutatis mutandis" or the like.
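To give a flavor of the translation step, here's a toy instance: the informal one-liner "the sum of two even numbers is even" rendered as a machine-checked proof. This is my own sketch in Lean 4, assuming only the core library (the lemma name `Nat.mul_add` is an assumption about that library):

```lean
-- Informal claim: the sum of two even numbers is even.
-- "Even" is spelled out explicitly as divisibility by 2.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- The witness is a + b, and distributivity closes the goal.
    ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

Even here the formal version spells out details (the explicit witness, the appeal to distributivity) that an informal proof would wave through in half a sentence, which is the tedium factor at its smallest scale.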
At this point Mizar has formalized essentially all of undergraduate mathematics and is gradually working its way into graduate mathematics. It's yet to meet any deep roadblocks and it's not anticipated to meet any.
Either all mathematics is formalizable and so we're not going to run into any Incompleteness Theorem-related issues and it's irrelevant...or we run into mathematics that can't be formalized and therefore mathematics isn't a fully formalizable system and so Gödel is still irrelevant :)
Although I understand that you mean to say Godel's Incompleteness Theorems aren't relevant, this is not because of anything related to formalization. Godel's Incompleteness Theorems don't say any particular theorem can't be formalized. All they say is that there will always be a theorem whose proof requires the assumption of an additional axiom. Very different things.
A lack of a full formalization for mathematics is more akin to a failure of Godel's Completeness Theorem (which roughly states our mathematical abstractions precisely capture the commonality between different situations we choose to abstract over). The Completeness Theorem however is proven to hold in modern mathematics (e.g. first order logic which includes ZFC).
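For reference, a minimal statement of the theorem (my own gloss, in standard notation): for a first-order theory T and sentence φ,

```latex
% Godel's Completeness Theorem: syntactic provability from T
% coincides with truth in every model of T (semantic entailment).
T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi
```

The right-hand side is what "capturing the commonality between situations we abstract over" cashes out to: φ holds in every structure the theory applies to.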
Indeed, the choice of whether to use a system satisfying the Completeness Theorem is a conscious and genuine choice (unlike, say, Godel's Incompleteness Theorems, whose avoidance requires giving up arithmetic), and one most logicians and the overwhelming majority of mathematicians make. Even in higher-order logics, where the Completeness Theorem would appear to fail, it can be recovered by appealing to a first-order understanding of their semantics. It is so much a choice that, arguably, you are leaving the bounds of mathematics proper and entering the wider field of philosophy if you give up the Completeness Theorem in the foundations of your mathematical system (as opposed to giving it up locally in a subsystem and analyzing that subsystem in a semantically complete metatheory).
Put another way, mathematics is precisely the study of formal objects and its standards of rigor entail that its own proofs are themselves formal objects that can be studied in their own right.
Arguing against the formalization of modern mathematics is akin to arguing that there are certain CS algorithms which cannot be turned into code. While there are certainly conceivable processes which cannot be implemented in code, algorithms in CS are essentially definitionally required to be implementable in code. Anything that is not is outside the purview of algorithms as understood by CS.
If we find a proof which essentially cannot be formalized the reaction of the mathematical community will not be to reject formalization, but rather to question the rigor of that proof. This is similar to the reaction that the computing community would have if it were to find a purported specification of an algorithm that turned out to be essentially unimplementable.
Godel's incompleteness theorems are really syntactic results rather than semantic results. The main consequence they have for modern mathematics is that there is no "one axiom system to rule them all," since you can always extend many systems with a new axiom, which doesn't change much, since mathematics since time immemorial has been engaged in the practice of tweaking various axioms and seeing what consequences emerge.
Indeed Godel's completeness (not incompleteness) theorem (which applies to most mathematical settings such as anything that uses ZFC) is a strong indication that everything mathematicians would care to prove is in fact provable.
It's been a long time since I studied this stuff properly, but is it true to say that you can tell that a statement is provable (or otherwise) if it can be couched in the language of first order logic?
Not quite. One way of thinking about Godel's Completeness Theorem is to couch it in the language of abstractions. Every formal theory is an abstraction that seeks to abstract over many different concrete use cases. In the process we know that we give up some specificity for this generality, that is, we know that there are aspects of these concrete use cases that are "forgotten" by the abstract formal theory. In return we get greater reusability of our formal theory and of the consequences we can prove using it.
However, the question arises whether the abstract formal theory "forgets too much." That is, we know we must trade some specificity for increased generality, but is it possible we've traded too much? Are there attributes shared by all the concrete use cases to which our formal theory applies, but to which the theory itself is blind?
That is, perhaps our formal theory is applicable to humans, cows, mathematical sets, etc., and so it necessarily forgets about e.g. human-specific features, but there may nonetheless be some statement A that is true of all these things yet which our formal theory is unable to prove.
Godel's Completeness Theorem states that the answer is no. Our formal theory, and that which we can prove using it, capture exactly the attributes held in common across all its use cases, no more and no less. That is, our abstraction "leaks" no more than is strictly necessary.
So if we cannot prove or disprove a given statement in our formal theory, that is because there is at least one concrete use case, to which our formal theory applies, in which the statement is false, and at least one in which it is true. That is, unprovability is exactly synonymous with independence.
Many first-order theories, such as the theory that consists entirely of the sentence "there exists an element which is Special" can have sentences which cannot be proved or disproved in the theory (e.g. "all elements are Special."). But by the Completeness Theorem that just means there are use cases where sometimes the sentence is true and sometimes it isn't (the theory could be equally applied to a set of elements only one of which is marked as "Special" or where all the elements are marked "Special").
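For finite structures like these, the independence can even be checked mechanically. Below is a small sketch of my own in Python (the function and structure names are made up for illustration): it verifies that both structures are models of the theory, while the candidate sentence is true in one and false in the other.

```python
# Toy theory T = { "there exists a Special element" }.
# We represent a finite structure as a dict mapping each
# element to whether it is marked Special.

def exists_special(structure):
    """Does the structure satisfy the axiom ∃x. Special(x)?"""
    return any(structure.values())

def all_special(structure):
    """Does the structure satisfy the sentence ∀x. Special(x)?"""
    return all(structure.values())

# Two finite structures, both models of T:
m1 = {"a": True, "b": False}   # exactly one Special element
m2 = {"a": True, "b": True}    # every element Special

# Both satisfy the lone axiom of T...
assert exists_special(m1) and exists_special(m2)

# ...but the candidate sentence splits them, so by the
# Completeness Theorem T can neither prove nor disprove it.
print(all_special(m1), all_special(m2))  # → False True
```

The point of the sketch is just that "unprovable in T" here is witnessed concretely: m1 refutes the sentence and m2 affirms it, and both are legitimate use cases of the theory.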
Godel's Completeness Theorem and Incompleteness Theorems are independent phenomena. Not only is it possible for both to hold, for only one to hold, or for neither to hold; they also apply to different domains of discourse. The Completeness Theorem applies to entire logical systems (such as first-order logic), while the Incompleteness Theorems refer to specific theories (such as a single given theory in first-order logic).
The two taken together mean that e.g. for any first-order theory capable of expressing arithmetic there is always more than one concrete mathematical object that satisfies that theory. That is, the abstract theory is incapable of uniquely specifying a single mathematical object. Stated equivalently, any abstract theory containing arithmetic can always be indefinitely extended by new axioms.
This is a common theme of first-order logic and there are related results that show in other ways how first-order logic sometimes has problems pinning down unique objects (e.g. the Downward and Upward Lowenheim-Skolem Theorems).