
Perhaps. But I think you'll find there are a lot of black rectangles on a PCB that aren't actually transistors. You'll end up having to teach the child a lot more if you want accurate results. And that's the same kind of training you'll have to give to an LLM.

In either case, your assertion that one _understands_, and the other doesn't, seems like motivated reasoning, rather than identifying something fundamental about the situation.



Then you explain that transistors have three wires coming off of them.


I mean, problem solving with loose specs is always going to be messy.

But at least with a child I can quickly teach them to follow simple orders, while this AI requires hours of annotation and training, even for simple changes in instructions.


Humans are the beneficiaries of millions of years of evolution, and are born with innate pattern-matching abilities that need no "training"; evolution is essentially our pre-training. Of course, it is superior to the current generation of LLMs, but is it fundamentally different? I don't know one way or the other, to be honest, but judging from how capable LLMs are despite all their limitations and their lack of any evolutionary head start, I wouldn't bet against it.

The other problem with LLMs today is that they don't persist any learning from their everyday inference and interactions with users, at least not in real time. That makes them harder to instruct in a useful way.
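
To make that concrete, here's a minimal sketch of why today's LLM "memory" is shallow. The `generate` function is a hypothetical stand-in for frozen-weights inference, not any real library's API; the point is that any continuity across turns exists only because the caller re-sends the conversation history, while the weights never change:

    history = []

    def generate(messages):
        # Stand-in for inference with frozen weights. A real system would
        # run the model here; crucially, nothing it "learns" from these
        # messages is ever written back into the weights.
        return f"(reply based on {len(messages)} messages of context)"

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        reply = generate(history)  # model sees only what we pass in
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat("My name is Ada."))   # context: 1 message
    print(chat("What is my name?"))  # works only because we re-sent history
    history.clear()
    print(chat("What is my name?"))  # context gone: the model never knew

Clear the history and the "learning" vanishes, which is exactly why instructing an LLM doesn't accumulate the way instructing a child does.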

But it seems inevitable that both their pre-training and their ability to keep learning afterward will improve over the coming years.



