When 50% of AI engineers say there's at least a 10% chance this technology causes our extinction, it's completely laughable to think it can continue without a regulatory framework. I don't think OpenAI should get to decide what that framework is, but if this stuff is even 20% as dangerous as many people in the field claim, it obviously needs to be regulated.



What are the scenarios in which this would cause our extinction, and how would regulation prevent those scenarios?


You do realise it is possible to unplug something, right?



