Not loving that there are more details on safety than details of the actual model, benchmarks, or capabilities.

> That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.

"We believe safety relies on real-world use and that's why we will not be allowing real-world use until we have figured out safety."



Yeah, it would be way better if they just released it right away, so that political campaigns could use AI-generated videos of their opponents doing horrible/stupid things right before an election, before the general public had any idea that fake videos could be this realistic.


Let's make it safe by allowing only the government (the side we like) and approved corporations to use it.

That'll fix it.


you joke, but the hobbling of these 'safe' models is exactly what spurs development of the unsafe ones that are run locally, anonymously, and for who knows what purpose.

someone really interested in control would want OpenAI, or whatever centralized organization, to be able to sift through the results for dangerous individuals -- part of this is making sure to stymie development of alternatives to that concept.



