
One of the tests no one really wants to think about is as follows:

LLMs feel real to us because they use our language, which embeds a ton of information.

A model trained on an alien language would be the exact same model (words get embedded as numbers anyway), but we wouldn't anthropomorphize it at all, because we wouldn't understand its output.
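To make that concrete, here's a minimal sketch (PyTorch; the vocab size, dimension, and token IDs are all made up): the embedding layer only ever sees integer token IDs, so an "alien" vocabulary is indistinguishable from an English one as far as the model is concerned.

    # Hypothetical sketch: an embedding table maps token IDs to vectors.
    # It has no notion of which surface language the IDs came from.
    import torch
    import torch.nn as nn

    vocab_size, dim = 50_000, 768  # assumed values, for illustration
    embed = nn.Embedding(vocab_size, dim)

    english_ids = torch.tensor([1045, 2293, 2017])  # e.g. tokens for "i love you"
    alien_ids   = torch.tensor([9031, 4411, 7777])  # arbitrary "alien" tokens

    # Both are just row lookups in the same table; the model is indifferent.
    print(embed(english_ids).shape)  # torch.Size([3, 768])
    print(embed(alien_ids).shape)    # torch.Size([3, 768])

The anthropomorphization happens entirely on our side, in how we read the output back out.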

Creating that distance makes it feel a bit more like what it is: unbelievably good stochastic prediction, too complex for us to back-solve. That is very different from something with wants (organisms shaped by evolution to reproduce).

Now if we start mating models…


