
My guess as an amateur neuroscientist is that what we call intelligence is just a 'measurement' of problem-solving ability in different domains. It can be emotional, spatial, motor, reasoning, and so on.

There is no special sauce in our brain. And we know roughly how much compute our brain has, so we can roughly estimate when these 'LLMs' will hit that level.

Language is important in human brain development as well. Kids who grow up deaf end up vastly less intelligent unless they learn sign language. Language allows us to process complex concepts that our brain can learn to solve, without having to be in those complex environments.

So in hindsight, it's easy to see why it took a language model to solve general tasks that other types of deep learning networks couldn't.

I don't really see any limits on these models.




interesting point about language, but i wonder if people misattribute the reason why language is pivotal to human development. your points are valid. i see human learning as 90% mimicry and 10% autonomous learning. most of what humans believe is taken on faith and passed on from the tribe to the individual; rarely is it verified even partially, let alone fully. humans simply dont have the time or processing power to do that. learning a thing without outside aid is a vastly slower and more energy- and brain-intensive process than copy learning, or learning through social institutions by dissemination.

the stunted development from lack of language might come more from reduced access to the collective learning process that language enables, or at least greatly enhances. i think a lot of learning, even when combined with reasoning, deduction, etc, is really at the mercy of brute-force exploration to find a solution, which individuals are bad at, but a society that collects random experienced “ah hah!” moments and passes them along is actually okay at.

i wonder if llms and language dont so much allow us to process these complex environments as preload our brains to get a head start in processing them once we arrive. i think llms store compressed relationships of the world, which obviously involves information loss compared to a neural mapping of the world that isnt just language based. but that compressed relationship, ie knowledge, doesnt exactly map back onto the world without a reverse key. its like artificially learning about real-world stuff in school, abstractly, and then going into the real world: it takes time for that abstraction to snap-fit onto the real world.

could you further elaborate on what you mean by limits? im happy to play contrarian against what i interpret you to be saying there.

also to your main point about what intelligence is: yeah, you sort of hit on my thoughts. its a combination of problem-solving abilities in different domains, like an amalgam of cognitive processes that achieve an amalgam of capabilities. while we can label alllllll that with a singular word, that doesnt mean its all a singular process; it seems like a composite. moreover i think a big chunk of intelligence (but not all) is just brute-force finding of associations and then encoding those via some reflexive search/retrieval. a different part of intelligence, of course, is adaptability and pattern finding.



