
> My humble point is this: if we build "intelligence" as a formal system, like some silicon running some fancy-pants LLM or what have you, and we want rigor in its construction, i.e. if we want to be able to say "this is how it works", then we need to use a subset of our brain that's capable of formal and consistent thinking. And my claim is that _that subsystem_ can't capture "itself". So we have to use "more" of our brain than that subsystem. So either the "AI" that we understand is "less" than what we need and use to understand it, or we can't understand it.

I don't know if you've read Jacob Bronowski's The Origins of Knowledge and Imagination, but the latter part of his argument is essentially this. Formal systems are nice for determining truth, but they're limited, and there is always some situation that forces you to reinvent the formal system (edge cases, incorrect assumptions, limitations of the rules, ...).
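For what it's worth, the precise version of the "can't capture itself" claim is usually read as Gödel's second incompleteness theorem (the original comment doesn't cite it, so this mapping is my interpretation). A minimal statement:

\[
  T \text{ consistent, recursively axiomatizable, } T \supseteq \mathrm{PA}
  \;\Longrightarrow\;
  T \nvdash \mathrm{Con}(T)
\]

That is: any consistent formal system strong enough to do basic arithmetic cannot prove its own consistency sentence \(\mathrm{Con}(T)\). To establish \(\mathrm{Con}(T)\) you have to step up to a strictly stronger system, which is the "we have to use 'more' of our brain than that subsystem" move in formal dress.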


