I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?


Literal wishful thinking ("powerful technology is always good") and vested interests ("I like building on top of this powerful technology"), same as always.


Because I work on AI alignment myself and had been training LLMs long before Attention is All You Need came out (which cites some of my work).


Someone is going to be right, but we also know that experts have been known to be wrong in the past, often to catastrophic effect.



