
While the system 1/2 analogy and other commenters' points about tokenization are relevant, I'd like to highlight another observation: it's possible to teach ChatGPT to multiply correctly by asking it to go through the computation step by step. Note what this does: for n-digit inputs, it turns an O(n)-token response (just the answer) into an O(n^2)-token one, matching the work of the standard schoolbook algorithm. This makes sense: the model spends roughly a fixed amount of compute per token, so if it could reliably emit the product in O(n) tokens, it would effectively be multiplying faster than the fastest existing algorithm.
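For concreteness, here's a minimal sketch of the schoolbook algorithm that the step-by-step prompting mirrors; the function name and test values are mine, not from the article:

  def schoolbook_multiply(a: str, b: str) -> str:
      """Multiply two non-negative integers given as digit strings.
      The schoolbook method does O(n^2) digit operations for n-digit
      inputs -- the same work a step-by-step transcript spells out."""
      n, m = len(a), len(b)
      result = [0] * (n + m)
      # One partial product per pair of digits: n * m elementary steps.
      for i in range(n - 1, -1, -1):
          for j in range(m - 1, -1, -1):
              result[i + j + 1] += int(a[i]) * int(b[j])
      # Propagate carries from least to most significant position.
      for k in range(n + m - 1, 0, -1):
          result[k - 1] += result[k] // 10
          result[k] %= 10
      return "".join(map(str, result)).lstrip("0") or "0"

  print(schoolbook_multiply("1234", "5678"))  # 7006652

Emitting only the final line is O(n) tokens; emitting every partial product and carry is O(n^2), which is why the step-by-step version is the one ChatGPT can actually get right.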

I'd also like to criticize a point in the article: OP implies that rot13 is naturally a system 2 problem. But I bet that a human with enough training can do it via system 1. Cue Neo watching the Matrix.
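For reference, rot13 is just a fixed 26-letter substitution, i.e. a single small lookup table, which is why drilling it into a reflex seems plausible. A quick Python sketch (the example string is mine):

  import codecs

  # rot13 is a fixed letter-for-letter substitution -- a 26-entry mapping.
  # Something this small could plausibly be memorized into a system-1 skill.
  print(codecs.encode("Attack at dawn", "rot13"))  # Nggnpx ng qnja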


