
There's a view that intelligence without context is meaningless. (You can't do pattern matching without a definition of what your signal is -- and if you just use a trivial measure, like entropy, then white noise is the best signal!) So you can't be generally intelligent. You can be great at a lot of things, but that means you have priors for those things, which inherently biases you against tasks that call for other priors. (There are of course basically infinitely many dimensions in which you can have priors, but let's assume some dimensions matter more than others.)
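
(A rough sketch of the entropy aside, in Python: score two signals purely by the entropy of their amplitude histogram and the structureless noise wins. The uniform-noise source, the 5 Hz sine "signal", and the 64-bin histogram are all illustrative choices of mine, nothing canonical.)

    import numpy as np

    def shannon_entropy(samples, bins=64):
        """Entropy (bits) of a signal's empirical amplitude distribution."""
        counts, _ = np.histogram(samples, bins=bins, range=(-1.0, 1.0))
        p = counts / counts.sum()
        p = p[p > 0]  # drop empty bins to avoid log(0)
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(0)
    n = 100_000
    white_noise = rng.uniform(-1.0, 1.0, n)               # no structure at all
    sine = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n))   # highly structured

    print(f"white noise: {shannon_entropy(white_noise):.2f} bits")
    print(f"sine wave:   {shannon_entropy(sine):.2f} bits")
    # the uniform noise sits near the 6-bit maximum, the structured sine below it,
    # so scoring "signal" by entropy alone rewards pure noise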

What we want is adaptability under goal permanence (stable meta-goals), usually called alignment. We want an intelligence that is very similar to us. And since we like to talk a lot, and we can talk about nearly everything, we can represent our intelligence through talking, which is why a Turing test could be a great way to measure how human-similar two "intelligences" are.



