
I don’t think it is a straw man.

The point is that, for now, trying to make LLMs safe in reasonable ways has uncontrolled spillover effects.

You can’t (today) have much of the first without some wacky amount of the second.

But today is temporary: using AI to solve AI safety (independently trained critics, etc.), along with advances in general AI reasoning, will improve the situation.
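
For concreteness, here's the rough shape of the critic idea: a generator model paired with a separately trained scorer that gates what gets returned. This is just a sketch, and the names here (generate, critic_score) are hypothetical stand-ins, not any real API.

    from typing import Callable

    def safe_generate(
        prompt: str,
        generate: Callable[[str], str],             # the main LLM
        critic_score: Callable[[str, str], float],  # independently trained critic, scores 0..1
        threshold: float = 0.8,
        max_attempts: int = 3,
    ) -> str:
        # Sample candidates until the critic rates one as safe enough;
        # refuse if none pass within the attempt budget.
        for _ in range(max_attempts):
            candidate = generate(prompt)
            if critic_score(prompt, candidate) >= threshold:
                return candidate
        return "Sorry, I can't help with that."

The appeal is that the critic is trained independently, so the generator can't easily exploit its blind spots the way it can its own reward model.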


