Gödel's theorems have philosophical ramifications, but then again, so do basically all theorems, given the right framing.
It is nonetheless true that Gödel's theorems are a particular focal point of discussion in the philosophy of mathematics (though, again, one with surprisingly few ramifications for mathematics as a whole), and a very fruitful one at that. I'm not dismissing the idea of having a philosophical discussion around them; I'm only cautioning that such a discussion requires a deep understanding of all the seemingly paradoxical statements they present, which defy attempts to state their philosophical ramifications concisely.
My point in all of this is that I disagree with the pedagogical choice, common in introductory posts on the incompleteness theorems, of mixing the philosophical content in at the beginning. You can always do that later, after first working through the seemingly paradoxical statements that arise from Gödel's theorems (such as the ones I offered concerning Conway's Game of Life, sketched below, and the consistent system that proclaims the inconsistency of its own subtheory).
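To make the Game of Life point concrete, here is a minimal sketch of my own (assuming the standard rules over a sparse set of live cells; this code is an illustration I'm adding, not something from the earlier comments). Each generation is trivially computable, yet because Life can simulate a Turing machine, questions like "does this pattern ever die out?" admit no general algorithm.

```python
# One generation of Conway's Game of Life over a sparse set of live
# cells. Every step is plainly computable; the "paradox" is that
# long-run questions about this very same rule are undecidable.
from itertools import product

def step(live):
    """Return the next generation; `live` is a set of (x, y) cells."""
    counts = {}
    for x, y in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}      # oscillates with period 2, forever
print(step(step(blinker)) == blinker)   # True
```

The blinker is an easy case; the point is that no algorithm settles the "runs forever" question for arbitrary starting patterns, which is the Game-of-Life face of incompleteness.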
The trouble with presenting the philosophical parts up front, however, is that those tend to be what people latch onto, rather than the inner workings of the theorems (which must be understood for any subsequent philosophical conclusions to be coherent); people also latch onto the everyday, woolly meanings of the words "prove" and "truth" rather than their precise counterparts in the theorems.
The exaggeration I'm referring to in the article is:
> For example, it [Gödel's incompleteness theorem] may mean that we can’t write an algorithm that can think like a dog...but perhaps we don’t need to.
To address this directly: Gödel's incompleteness theorems have little to say here. I think the assumed chain of logic is "algorithms rely on a computable listing of axioms" -> "any such listing must have truths that are unprovable in it" (the Gödel step) -> "dogs perceive truth" -> "therefore there are things dogs perceive that cannot ever be derived by an algorithm."
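To pin down what "provable" means in the Gödel step, here is a toy sketch (the `is_valid_proof` checker is a hypothetical placeholder for a purely mechanical proof verifier, and the binary alphabet is a toy stand-in for a system's real symbols): a computable listing of axioms makes proof checking mechanical, so the theorems of the system can be enumerated by brute force.

```python
# Toy theorem enumeration: this is all that "provable in S" means in
# the Gödel step. `is_valid_proof` stands in for a hypothetical,
# purely mechanical proof checker for the system S.
from itertools import count

def strings(n, alphabet="01"):
    """Yield every string of length n over the alphabet."""
    if n == 0:
        yield ""
        return
    for prefix in strings(n - 1, alphabet):
        for ch in alphabet:
            yield prefix + ch

def theorems(is_valid_proof):
    """Dovetail over all (proof, sentence) pairs; yield each theorem eventually."""
    for n in count(0):
        for i in range(n + 1):                  # proof length
            for proof in strings(i):
                for sentence in strings(n - i):  # sentence length
                    if is_valid_proof(proof, sentence):
                        yield sentence
```

Gödel's first theorem then says that for a consistent system of this kind containing enough arithmetic, there is a sentence G such that neither G nor its negation ever appears in this stream; whether G deserves to be called "true" is exactly the subtlety below.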
But leaving aside the plausibility of the non-Gödelian steps in that chain, it relies on a subtle oversimplification of what "truth" means, and that oversimplification undermines the whole chain. Again, I would highly recommend thinking over the Conway's Game of Life and "consistent theory proving inconsistency" problems. Those two problems demonstrate that "any such listing must have truths that are unprovable in it" is often an oversimplification (so I have a system S, and the sentence "S is inconsistent" is consistent with S whenever S is consistent; what exactly is "true" here?).
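To spell out that parenthetical (my own compact summary in standard notation, assuming S is consistent, computably axiomatized, and strong enough to formalize basic arithmetic):

```latex
% The "consistent theory proclaiming inconsistency" puzzle, under the
% standard hypotheses on S stated above.
\begin{align*}
  \text{G\"odel II:} \quad & S \text{ consistent} \;\Longrightarrow\; S \nvdash \mathrm{Con}(S) \\
  \text{Hence:} \quad      & S' := S + \neg\mathrm{Con}(S) \text{ is consistent,} \\
                           & \text{since an inconsistency in } S' \text{ would let } S \text{ prove } \mathrm{Con}(S). \\
  \text{Yet:} \quad        & S' \vdash \neg\mathrm{Con}(S), \text{ and because } S \subseteq S',
                             \text{ also } S' \vdash \neg\mathrm{Con}(S').
\end{align*}
```

So ¬Con(S) is a sentence that is provable in a consistent theory, consistent to assume, and yet false in the standard model of arithmetic (S really is consistent). That gap between "provable", "consistent", and "true in the standard model" is exactly what the chain above papers over.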
This is closely analogous (and the analogy runs deeper than it looks, given the connections between computability and logic) to arguments built on the halting problem or Rice's theorem:
"Rice's theorem says that we cannot prove any abstract property about any program" -> "Static analysis attempts to prove abstract properties about programs" -> "Static analysis is impossible."
Or similar arguments that use the halting problem to argue that Strong AI is impossible.
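The reason these syllogisms fail is that Rice's theorem only rules out an exact, always-correct yes/no decider for a nontrivial semantic property; it says nothing against sound approximations that are allowed to answer "unknown". A toy illustration (a hypothetical checker invented for this comment, not a real tool):

```python
# A toy "may divide by zero" checker. Rice's theorem forbids an
# *exact* decider for semantic properties; it does not forbid a
# conservative analysis that may return UNKNOWN.
import ast

def may_divide_by_zero(source: str) -> str:
    """Classify a program as SAFE, UNSAFE, or UNKNOWN."""
    verdict = "SAFE"
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
            denom = node.right
            if isinstance(denom, ast.Constant):
                if denom.value == 0:
                    return "UNSAFE"    # a literal zero denominator
            else:
                verdict = "UNKNOWN"    # can't decide statically; punt
    return verdict

print(may_divide_by_zero("x = 1 / 0"))   # UNSAFE
print(may_divide_by_zero("x = 1 / y"))   # UNKNOWN
print(may_divide_by_zero("x = 1 / 2"))   # SAFE
```

Real analyzers scale this up: they keep soundness and give up completeness (the UNKNOWNs), a trade-off that Rice's theorem fully permits.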
More broadly, though, this sort of "anti-mechanistic" viewpoint appears all over the place in discussions of Gödel's theorems, to the point that the Stanford Encyclopedia of Philosophy has a whole section dedicated to it, one that plainly states: "These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail." https://plato.stanford.edu/entries/goedel-incompleteness/#Gd...
Having seen some of those arguments, I am very inclined to agree with the encyclopedia here.
Thank you for your clarity. This discussion chain may have kicked off an existential crisis. I think you might have exposed me to a Lovecraftian horror. The article and your discussion have been, if not mind-altering, very much mind-expanding.
I can't say I've followed all of the nuance in your (argument?) comments. I do know I'm going to be chewing on this for a while. I've come to grips with the horror of saccades; it's fascinating to learn about new ways my mind, uh, decides what is true.
In any case, I appreciate your long, thoughtful responses. It may not mean much to you, but to at least one reader it means a lot.
So you assert; many think differently.
Edit: To clarify, are you asserting that they do /not/ extend into philosophy? What exaggerations are you specifically referring to?