
Why does an AGI need to have any knowledge about our reality? The principle behind an AGI should work just as well in a made-up world in which those puzzles play a part.


A concept that doesn’t relate to some aspect of reality, either directly or by abstraction from basic concepts that do, is meaningless and arbitrary. There is no way for intelligence to grasp it, let alone do something with it.

To put it another way, a thing that solves puzzles without an understanding of reality is a calculator. When it solves a problem, it is the creator’s intelligence solving the problem, not its own.


I agree that the puzzles alone are not enough; that's why I wrote "in a made-up world in which those puzzles play a part".

We are not looking for a superhuman, but for the (or a) mechanism of intelligence, which we can then transfer to a superhuman in the real world. But the mechanism itself should also work in an artificially made and very constrained world.


These problems are spatial problems; they are not some otherworldly problems.



