Linear functional analysis - it has a tendency to produce proofs of unprovable statements; the proofs are logically argued and well structured, and then step 8 contains a statement that is obvious nonsense even to a beginning student like me. The professor, on the other hand, will ask why I'm trying to prove a false statement and expertly help me find my logic error.
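A minimal Lean 4 sketch of why machine checking catches exactly this failure mode (the theorem names here are my own, purely for illustration): a valid proof of a true statement checks, while any attempted proof of a false statement is rejected at the bogus step.

    -- A true statement: Lean accepts a valid proof term.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- An unprovable statement: no proof term can check.
    -- Uncommenting this fails to compile; the checker flags
    -- the bad step, much like the nonsense at "step 8".
    -- theorem bogus (a : Nat) : a + 1 = a := rfl

This is part of why pairing an LLM with a proof checker is more trustworthy than a free-form LLM-written proof: the hallucinated step cannot survive type checking.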
Specifics like this make it much easier to agree on LLM capabilities, thank you.
Automatic proof generation is a massive open problem across computer science and nowhere close to being solved. It's true LLMs aren't great at it, and more machinery is required, as with the geometry system DeepMind is making progress on (AlphaGeometry).
On the other hand, they can be very useful for explaining concepts and for interactive questioning to drill down and build understanding of complex mathematical ideas, all during a morning commute via the voice interface.
I just use my memory and verify later. Unlike an LLM, I have persistent, durable long-term storage of knowledge. I can typically pick out a hallucination pretty easily, though, because there's often a very clear inconsistency or a logical leap that is nonsense.