Hacker News

It's just behaving like a child. A child could draw a bounding box around a dog and a cat, but would fail if you told them to draw a box around the transistors of a PCB. They have no idea what a transistor is, or what it looks like. They lack the knowledge and maturity. But you would never claim the child doesn't _understand_ what they're doing, at least not to imply that they're forever incapable of the task.


Yeah, but a child does one-shot learning much better. Just tell it to find the black rectangles and it will draw boxes around the transistors of a PCB, no extra training required.


Perhaps. But I think you'll find there are a lot of black rectangles on a PCB that aren't actually transistors. You'll end up having to teach the child a lot more if you want accurate results. And that's the same kind of training you'll have to give to an LLM.

In either case, your assertion that one _understands_, and the other doesn't, seems like motivated reasoning, rather than identifying something fundamental about the situation.


Then you explain that transistors have three wires coming off them.


I mean, problem solving with loose specs is always going to be messy.

But at least with a child I can quickly teach it to follow simple orders, while this AI requires hours of annotating + training, even for simple changes in instructions.


Humans are the beneficiaries of millions of years of evolution, and are born with innate pattern matching abilities that we don't need "training" for; essentially our pre-training. Of course, it is superior to the current generation of LLMs, but is it fundamentally different? I don't know one way or the other to be honest, but judging from how amazing LLMs are given all their limitations and their lack of any evolutionary head start, I wouldn't bet against it.

The other problem with LLMs today is that they don't persist any learning from their everyday inference and interaction with users; at least not in real time. That makes them harder to instruct in a useful way.

But it seems inevitable that both their pre-training and their ability to seamlessly continue learning afterward should improve over the coming years.


> It's just behaving like a child.

No it's not.




