
When we were first breaking it, people were wondering whether the developers were sitting in the threads looking for new exploits to block.

Now I’m wondering if the system has been modifying itself to fix exploits…



It does actually work. For some of the experiments I did with GPT-4, it made some mistakes because my initial prompt wasn't sufficiently precise. After discussing its mistakes with it, I asked it to write a better prompt that would prevent them. Sure enough, it did just that.
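The loop the commenter describes — run a prompt, discuss the mistake, then ask the model to rewrite the prompt so the mistake can't recur — can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual setup: `model` is a stand-in for a real GPT-4 API call, and `check` is whatever validation you apply to the answer.

```python
def self_refine(prompt, model, check, max_rounds=3):
    """Repeatedly run `prompt` through `model`; on failure, ask the
    model itself to rewrite the prompt to prevent the mistake.

    `model(text) -> str` stands in for a chat-completion call.
    `check(answer) -> (ok, feedback)` validates the answer.
    """
    answer = model(prompt)
    for _ in range(max_rounds):
        ok, feedback = check(answer)
        if ok:
            return prompt, answer
        # Ask the model to improve its own prompt, as in the comment above.
        prompt = model(
            f"This prompt:\n{prompt}\n"
            f"produced a mistake: {feedback}\n"
            "Rewrite the prompt to prevent that mistake."
        )
        answer = model(prompt)
    return prompt, answer
```

With a real model behind `model`, each round folds the critique back into the prompt, which matches the commenter's experience that the rewritten prompt actually avoided the earlier mistakes.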



