I found ChatGPT to be a game changer for people who want to read a math text without getting stuck because of a gap in their prerequisites.
ChatGPT essentially bridges the gap between a concept you do understand and a concept in the text.
You might not get the full treatment of the gap, but it’s often enough to let you move forward in a math text.
Also, the annoying “proof left as an exercise” is now “ask ChatGPT for the answer”.
I think not doing the proof exercises is problematic. Vaguely understanding how to get from A to B isn't really enough for math; you need to be able to come up with the route yourself. That is the hardest part of self-study, because skipping is easier than in a class or seminar setting (and outside a group you also can't discuss problems and see how other people approach them).
I think that is problematic for math, because figuring out the way is the math. You need to be very disciplined not to ask ChatGPT questions that stray into that territory, and instead present it with an approach you devised yourself and then discuss it (if possible).
Of course passively reading ChatGPT transcripts isn’t going to help you master math.
But equally problematic are the aforementioned roadblocks that, prior to ChatGPT, simply led many learners to stop learning. We finally have a mechanism for adjusting the difficulty to the learner's level of mastery. This is fantastic.
I think ChatGPT's effectiveness for self-studying depends on the subfield of pure math. For example, I believe real analysis is still best self-studied by just reading Baby Rudin and doing the examples and exercises. However, I really could not make much progress on topology until ChatGPT walked me through what the open set axioms actually meant in the context of metric spaces (which most of the topological spaces one encounters are); otherwise they just seemed very arbitrary.
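Roughly, the picture it walked me through (my paraphrase in standard notation, not ChatGPT's transcript):

    % Axioms: T is a topology on X iff
    %   (1) \emptyset, X \in T,
    %   (2) arbitrary unions of members of T are in T,
    %   (3) finite intersections of members of T are in T.
    % Metric-space instance: in (X, d), call U open iff
    U \in T \iff \forall x \in U\ \exists \varepsilon > 0:\;
        B_\varepsilon(x) := \{\, y \in X : d(x,y) < \varepsilon \,\} \subseteq U
    % (2) holds since a ball witnessing x \in U_i also witnesses x \in \bigcup_i U_i;
    % (3) holds since for x \in U_1 \cap \dots \cap U_n one can take \varepsilon = \min_i \varepsilon_i.

Once you see that the axioms are exactly what survives when you forget the metric and keep only the open sets, they stop looking arbitrary.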
In my opinion, it is less dependent on the subfield than on the textbook you use for that subfield. Unfortunately, math textbook recommendations are relatively subjective, with many popular choices unsuitable for self-study, or even for study at all.
With regards to topology, your experience rings true. In short, anyone with knowledge of calculus / basic real analysis wanting to learn topology should read "Real Analysis" by Carothers.
Usually topology is taught after real analysis, with extending results that hold on the reals as the main motivation. But this jump is quite abrupt without the intermediate context of metric spaces, leaving many people confused. It doesn't help that Baby Rudin is quite bad at teaching these concepts. Carothers' book, on the other hand, is a paragon of mathematical exposition: it excels at telling you why metric spaces, topological spaces, and all the definitions are made the way they are.
With regards to the parent, I have to say “proof is left as an exercise” is probably the number one thing that forces students to actually read the text. The best way to learn is to ask ChatGPT after you're stuck, not before.
I have worked through some of Carothers myself and like it a lot.
Are there other math books you think highly of that are similar, i.e. good for explaining why the definitions are the way they are as well as teaching the material?
> Also, the annoying “proof left as an exercise” is now “ask ChatGPT for the answer”.
ChatGPT is horrendously bad at mathematical proofs (I've written about this before), so I fear that this is a rather dangerous approach: you could be learning things that are wrong without realising it.
I just opened up Tao’s real analysis PDF and copy-pasted one of its exercises (no cherry-picking); it would be great if you could point out what is bad about the result.
Feel free to take another exercise from Tao’s real analysis PDF if you think you have a good counterexample; I would love to know its limits on honours-level undergrad math texts.
Establishing a proof of a statement in mathematics means giving a series of steps of the form
A(1) && A(2) && ... && A(n) && B => A(n+1),
where each A(i) has been proved earlier, B is one of the axioms you are working with, and => means derivation using some fixed rules. The axiom list for B is context dependent: a journal paper may use an extended set of the form "everything already known to the community, given a reference", while a textbook will use the lower-level textbooks mentioned in its introduction as its list of contextual axioms.
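To make the schema concrete, a toy chain of this shape (my own illustration, not the exercise in question):

    % A(1): \forall n:\; n + 0 = n                  (proved earlier)
    % A(2): \forall n, m:\; n + S(m) = S(n + m)     (proved earlier)
    % B:    S(0) \in \mathbb{N}                     (Peano axiom)
    % Derived step A(3), i.e. "adding one is taking the successor":
    n + S(0) = S(n + 0) = S(n)
    % first equality by A(2), second by A(1)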
IMHO the biggest issue with this ChatGPT proof is that, even though correct in principle, it really misses the context of your exercise: it does not know whether, e.g., the well-ordering principle has already been introduced, which exact definition of the natural numbers is being used (Peano? intuitive?), etc.
As a result, the “proof” it provides is primarily name-dropping: albeit correct in principle, it still requires filling in the actual argument. So it might be helpful as a hint for a student, but the proof still has to actually be produced.
ChatGPT would have no problem introducing the well-ordering principle if prompted, so a self-learner can dive into the details if needed.
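For reference, the standard statement it would give:

    % Well-ordering principle: every nonempty set of naturals has a least element.
    \forall S \subseteq \mathbb{N}:\; S \neq \emptyset \implies
        \exists m \in S\ \forall s \in S:\; m \le s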
Hints are probably what you want if you're a self-learner and stuck, so ChatGPT is actually doing a good job of guiding self-learners through a text they're stuck on.
Being stuck on one statement for days is not a strategic way to learn.
> ChatGPT would have no problem introducing the well-ordering principle
Yes, well, that is part of what I was trying to say. The statement of the principle is not important outside of the structure you are building when following one particular proof, or reading a book (so, following several proofs).
You could do just as well with Zorn's lemma or the axiom of choice as with the well-ordering principle; what if your course introduces one of those, but not the well-ordering principle, and then asks you to solve this particular exercise? In that case, the gist of the exercise would actually be to re-derive, say, the (axiom of choice) => (well-ordering) implication for the natural numbers. That point would be thoroughly lost on ChatGPT without the course context.
When ChatGPT’s output looks correct, it usually just means that it has seen the problem before, “learned” the answer verbatim, and is now applying some transformations to fit your context, giving the illusion of something more than what a search engine would have done.
It sucks at novel logic a three-year-old could handle, let alone math proofs.
This is about computability, not analysis, but I think the point still applies: ChatGPT is quick to give you an answer that sounds plausible but is actually complete nonsense.