
Interesting analysis; it makes sense. I wonder how we should account for the "pre-built" knowledge that is transferred to a newborn genetically, and from the environment at conception and during gestation. Of course, factors like epigenetics also come into play.

The analogies get a little blurry here, but perhaps we can draw a distinction between information that an infant gets from their higher-level senses (sight, smell, touch, etc.) versus any lower-level biological processes (genetics, epigenetics, developmental processes, and so on).

The main point is that there is a fundamental difference: LLMs have very little prior knowledge [1], while humans contain an immense amount of information even before they begin learning through the senses.

We need to look at the billions of years of biological evolution, millions of years of cultural evolution, and the countless environmental factors, all of which shape us before birth and before any "learning" occurs.

[1] The model architecture probably counts as hard-coded prior knowledge present before training begins, but it is a vanishingly small amount of information compared to the complexity of living organisms.


