Specifics like this make it much easier to agree on LLM capabilities, thank you.

Automatic proof generation is a massive open problem across computer science and is not close to being solved. It's true that LLMs aren't great at it on their own; more machinery is required, as with AlphaGeometry, the geometry system DeepMind has been making progress on.
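To make that concrete, here is a minimal Lean 4 sketch (my own illustration, not anything from DeepMind's system) of what a machine-checkable proof looks like. The kernel only accepts proofs that type-check exactly, so plausible-but-slightly-wrong output of the kind LLMs often produce is rejected outright:

    -- Proving commutativity of addition on the naturals in Lean 4.
    -- An automated prover must produce a term the kernel accepts;
    -- there is no partial credit for an almost-correct proof.
    theorem add_comm' (m n : Nat) : m + n = n + m := by
      induction n with
      | zero => rw [Nat.add_zero, Nat.zero_add]
      | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]

Systems like AlphaGeometry pair a language model that proposes steps with a symbolic engine that verifies them, which is roughly the "more" I mean here.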

On the other hand, they can be very useful for explaining concepts and for interactive questioning that drills down into complex mathematical ideas and helps build understanding, all during a morning commute via the voice interface.

How do you debug its hallucinated misinformation via the voice interface while you commute?


I just use my memory and verify later. Unlike an LLM, I have persistent, durable long-term storage of knowledge. In practice I can usually pick out a hallucination anyway, because there's often a very clear inconsistency or a logical leap that is nonsense.
