
I don't think unfettered progress has to be the thing Ilya is trying to slow down. I could imagine there is a push to hook more commercial capabilities up to the outputs of the models, and it could be that Ilya doesn't think they are competent or safe enough for that.

I think discussions of danger from AGI often presume the AI has become malicious, but an AI making mistakes while in control of, say, industrial machinery or weapons is probably the more realistic present concern.

Early adoption of these models as controllers of real-world systems is also where I could see such a disagreement suddenly becoming urgent.


