
> humans do not need 10,000 examples to tell the difference between cats and dogs,

I swear, not enough people have kids.

Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

One thing kids do is they'll ask for confirmation of their guess. You'll be reading a book you've read 50 times before and the kid will stop you, point at a dog in the book, and ask "dog?"

And there is a development phase where this happens a lot.

Also kids can get mad if they are told an object doesn't match up to the expected label, e.g. my son gets really mad if someone calls something by the wrong color.

Another thing toddlers like to do is play silly labeling games, which is different from calling something the wrong name by accident; this is done on purpose, for fun. E.g. you point to a fish and say "isn't that a lovely llama!", at which point the kid will fall down giggling at how silly you are being.

The human brain develops really slowly[1], and a sense of linear time encoding doesn't really exist for quite a while (even at 3, everything is either yesterday, today, or tomorrow), so who the hell knows how things are being processed. But what we do know is that kids gather information through a bunch of senses that are operating at an absurd data collection rate for 12-14 hours a day, with another 10-12 hours of downtime to process the information.

[1] Watch a baby discover they have a right foot. Then a few days later figure out they also have a left foot. Watch kids who are learning to stand develop a sense of "up above me" after they bonk their heads a few times on the underside of a table. Kids only learn "fast" in the sense that they have nothing else to do for years on end.



> Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

I have kids so I'm presuming I'm allowed to have an opinion here.

This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Once they have the basics down concept acquisition time shrinks rapidly and kids can easily learn their new favorite animal in as little as a single example.

Compare this to LLMs which can one-shot certain tasks, but only if they have essentially already memorized enough information to know about that task. It gives the illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts.

Beyond just learning a new animal, humans are able to learn entirely new systems of reasoning in surprisingly few examples (though it does take quite a bit of time to process them). How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.


> kids can easily learn their new favorite animal in as little as a single example

Until they encounter a similar animal and get confused, at which point you understand the implicit heuristic they were relying on. (E.g. they mistook a dairy cow for a zebra, which means their heuristic was "black-and-white quadruped".)

Doesn't this seem remarkably close to how LLMs behave with one-shot or few-shot learning? I think there are a lot more similarities here than you give it credit for.

Also, I grew up in South Korea where early math education is highly prioritized (for better or for worse). I remember having to solve 2 dozen arithmetic problems every week after school with a private tutor. Yes, it was torture and I was miserable, but it did expose me to thousands more arithmetic questions than my American peers. All that misery paid off when I moved to the U.S. at the age of 12 and realized that my math level was 3-4 years above my peers. So yes, I think human intelligence accuracy also does improve with more training data.


Not many zebras where I live but lots of little dogs. Small dogs were clearly cats for a long time no matter what I said. The training can take a while.


This. My 2.5 y.o. still argues with me that a small dog she just saw in the park is a "cat". That's in contrast to her older sister, who at 5 is... begrudgingly accepting that I might be right about it after the third time I correct her.


The thing is that the labels "cat" and "dog" reflect a choice in most languages to name animals based on species, which manifests in certain physical/behavioral attributes. Children need to learn by observation/teaching and generalization that these are the characteristics they need to use to conform to our chosen labelling/distinction, and that other things such as size/color/speed are irrelevant.

Of course it didn't have to be this way - in a different language animals might be named based on size or abilities/behavior, etc.

So, your daughter wanting to label a cat-sized dog as a cat is just a reflection of her generalization of what you mean when you say "cat" vs "dog" not yet being aligned with yours.


And once they learn sarcasm, small dogs are cats again :-)


My favourite part of this is when they apply their new words to things that technically make sense, but don't. My daughter proudly pointed at a king wearing a crown and declared him a "sharp king" after learning about knives, saws, etc.


> How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.

Not just that: people learn mathematics mainly by _thinking over and solving problems_, not by memorising solutions to problems. During my mathematics education I had to practice solving a lot of problems dissimilar to what I had seen before. Even in the theory part, a lot of it was actually about filling in details in proofs and arguments, and reformulating challenging steps (with words or drawings). The notes I wrote on top of a mathematical textbook amount to much more than the text itself.

People think that knowledge lies in the texts themselves; it does not, it lies in what these texts relate to and the processes that they are part of, a lot of which are out in the real world and in our interactions. The original article is spot on that there is no AGI pathway in the current research direction. But there are huge incentives for ignoring this.


> Not just that: people learn mathematics mainly by _thinking over and solving problems_, not by memorising solutions to problems.

I think it's more accurate to say that they learn math by memorizing a sequence of steps that result in a correct solution, typically by following along with some examples. Hopefully they also remember why each step contributes to the answer as this aids recall and generalization.

The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly. This is just standard training. Understanding the motivation of each step helps with that memorization, and also allows you to apply that step in novel problems.

> The original article is spot on that there is no AGI pathway in the current research direction.

I think you're wrong. The research on grokking shows that LLMs transition from memorization to generalized circuits for problem solving if trained enough, and parametric memory generalizes their operation to many more tasks.

They have now been able to achieve near perfect accuracy on comparison tasks, where GPT-4 is barely in the double digit success rate.

Composition tasks are still challenging, but parametric memory is a big step in the right direction for that too. Accurate comparative and compositional reasoning sound tantalizingly close to AGI.
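
To make "trained enough" concrete, here's a minimal sketch (my own toy setup in PyTorch, not the setup from any particular paper) of the kind of experiment the grokking literature describes: train a small network on modular addition with heavy weight decay, hold out part of the table, and keep training long after training accuracy saturates. Hyperparameters are illustrative; reproducing the delayed jump in validation accuracy may take tuning.

    # Toy grokking-style setup: modular addition, half the table held out,
    # strong weight decay, training far past the memorization point.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    P = 97                                   # modulus for (a + b) mod P
    pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
    labels = (pairs[:, 0] + pairs[:, 1]) % P
    perm = torch.randperm(len(pairs))
    split = len(pairs) // 2                  # memorize half the table, test on the rest
    train_idx, val_idx = perm[:split], perm[split:]

    class TinyNet(nn.Module):
        def __init__(self, p=P, d=128):
            super().__init__()
            self.emb = nn.Embedding(p, d)
            self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))
        def forward(self, ab):
            x = self.emb(ab)                 # (batch, 2, d)
            return self.mlp(x.flatten(1))    # logits over P classes

    model = TinyNet()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

    def accuracy(idx):
        with torch.no_grad():
            return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

    for step in range(50_000):               # deliberately train long past memorization
        opt.zero_grad()
        loss = F.cross_entropy(model(pairs[train_idx]), labels[train_idx])
        loss.backward()
        opt.step()
        if step % 1000 == 0:
            # The claim under test: train accuracy hits ~1.0 early (memorization),
            # val accuracy jumps much later (generalization), if it happens at all.
            print(step, round(accuracy(train_idx), 3), round(accuracy(val_idx), 3))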


> The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly

Simply memorizing sequences of steps is not how mathematics learning works, otherwise we would not see so much variation in outcomes. Giving me and Terence Tao the exact same math training data would not yield two mathematicians of similar skill.

While it's true that memorization of properties, structure, operations, and what should be applied when and where is involved, there is a much deeper component: knowing how these all relate to each other and grasping their fundamental meaning and structure. Some people seem to be wired to be better at thinking about and picking out these subtle mathematical relations from just the description or from only a few examples (or to be able to at all, where everyone else struggles).

> I think you're wrong. The research on grokking shows that LLMs transition from memorization to generalized circuits

It's worth noting that for composition, key to abstract reasoning, LLMs failed to generalize to out of domain examples on simple synthetic data.

From: https://arxiv.org/abs/2405.15071

> The levels of generalization also vary across reasoning types: when faced with out-of-distribution examples, transformers fail to systematically generalize for composition but succeed for comparison.


> Simply memorizing sequences of steps is not how mathematics learning works, otherwise we would not see so much variation in outcomes

Everyone starts by memorizing how to do basic arithmetic on numbers, their multiplication tables and fractions. Only some then advance to understanding why those operations must work as they do.

> It's worth noting that for composition, key to abstract reasoning, LLMs failed to generalize to out of domain examples on simple synthetic data.

Yes, I acknowledged that when I said "Composition tasks are still challenging". Comparisons and composition are both key to abstract reasoning. Clearly parametric memory and grokking have shown a fairly dramatic improvement in comparative reasoning with only a small tweak.

There is no evidence to suggest that compositional reasoning would not also fall to yet another small tweak. Maybe it will require something more dramatic, but I wouldn't bet on it. This pattern of thinking humans are special does not have a good track record. Therefore, I find the original claim that I was responding to("there is no AGI pathway in the current research direction") completely unpersuasive.


I started by understanding. I could multiply by repeat addition (each addition counted one at a time with the aid of fingers) before I had the 10x10 addition table memorized. I learned university level calculus before I had more than half of the 10x10 multiplication table memorized, and even that was from daily use, not from deliberate memorization. There wasn't a day in my life where I could recite the full table.

Maybe schools teach by memorization, but my mom taught me by explaining what it means, and I highly recommend this approach (and am a proof by example that humans can learn this way).


> I started by understanding. I could multiply by repeat addition

How did you learn what the symbols for numbers mean and how addition works? Did you literally just see "1 + 3 = 4" one day and intuit the meaning of all of those symbols? Was it entirely obvious to you from the get-go that "addition" was the same as counting using your fingers which was also the same as counting apples which was also the same as these little squiggles on paper?

There's no escaping the fact that there's memorization happening at some level because that's the only way to establish a common language.


There's a difference between memorizing meanings of words (addition is same as counting this and then the other thing, "3" means three things) and memorizing methods (table of single digit addition/multiplication to do them faster in your head). You were arguing the second, I'm a counterexample. I agree about the first, everyone learns language by memorization (some rote, some by use), but language is not math.


> You were arguing the second, I'm a counterexample.

I still don't think you are. Since we agree that you memorized numbers and how they are sequential, and that counting is moving "up" in the sequence, addition as counting is still memorizing a procedure based on this, not just memorizing a name: to add any two numbers, count down on one as you count up on the other until the first number reaches zero, and the number that counted up is the sum. I'm curious how you think you learned addition without memorizing this procedure (or one equivalent to it).

Then you memorized the procedure for multiplication: given any two numbers, count down on one and add the other to itself until the counted down number reaches one. This is still a procedure that you memorized under the label "multiplication".
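
Spelled out as code, just to make "memorized procedure" concrete (a toy Python sketch, assuming non-negative integers; the multiply variant keeps a running total starting from zero rather than from the other number, but it's the same idea):

    def add(a, b):
        # count down on one number while counting up on the other
        while a > 0:
            a -= 1
            b += 1
        return b

    def multiply(a, b):
        # count down on one number, adding the other to a running total
        total = 0
        while a > 0:
            total = add(total, b)
            a -= 1
        return total

    print(add(3, 4))        # 7
    print(multiply(4, 3))   # 12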

This is exactly the kind of procedure that I initially described. Someone taught you a correct procedure for achieving some goal and gave you a name for it, and "learning math" consists of memorizing such correct procedures (valid moves in the game of math if you will). These moves get progressively more sophisticated as the math gets more advanced, but it's the same basic process.

They "make sense" to you, and you call it "understanding", because they are built on a deep foundation that ultimately grounds out in counting, but it's still memorizing procedures up and down the stack. You're just memorizing the "minimum" needed to reproduce everything else, and compression is understanding [1].

The "variation in outcomes" that an OP discussed is simply because many valid moves are possible in any given situation, just like in chess, and if you "understand" when a move is valid vs. not (eg. you remember it), then you have an advantage over someone who just memorized specific shortcuts, which I suspect is what you are thinking I mean by memorization.

[1] https://philpapers.org/rec/WILUAC-2


I think you are confusing "memory" with strategies based on memorisation. Yes, memorising (i.e. putting things into memory) is always involved in learning in some way, but that is too general and not what is discussed here. "Compression is understanding" possibly to some extent, but understanding is not just compression; that would be a reduction of what understanding really is, as it involves a certain range of processes and contexts in which the understanding is actually enacted rather than purely "memorised" or applied, and that is fundamentally relational. It is so relational that it goes all the way down to how motor skills are acquired or spatial relationships understood. It is no surprise that tasks like mental rotation correlate well with mathematical skills.

Current research in early mathematical education now focuses on teaching certain spatial skills to very young kids rather than (just) numbers. Mathematics is about understanding of relationships, and that is not a detached kind of understanding that we can make into an algorithm, but deeply invested and relational between the "subject" and the "object" of understanding. Taking the subject and all the relations with the world out of the context of learning processes is absurd, because that is in the exact centre of them.


Sorry, I strongly disagree.

I did memorize names of numbers, but that is not essential in any way to doing or understanding math, and I can remember a time where I understood addition but did not fully understand how names of numbers work (I remember, when I was six, playing with a friend at counting up high, and we came up with some ridiculous names for high numbers because we didn't understand decimal very well yet).

Addition is a thing you do on matchsticks, or fingers, or eggs, or whatever objects you're thinking about. It's merging two groups and then counting the resulting group. This is how I learned addition works (plus the invariant that you will get the same result no matter what kind of object you happen to work with). Counting up and down is one method that I learned, but I learned it by understanding how and why it obviously works, which means I had the ability to generate variants - instead of 2+8=3+7=... I can do 8+2=9+1=..., or I can add ten at a time, etc.

Same goes for multiplication. I remember the very simple conversation where I was taught multiplication. "Mom, what is multiplication?" "It's addition again and again, for example 4x3 is 3+3+3". That's it; from that point on I understood (integer) multiplication, and could e.g. wonder on my own why people claim that xy=yx and convince myself that it makes sense, and explore and learn faster ways to calculate it while understanding how they fit in the world and what they mean. (An exception is long multiplication, which I was taught as a method one day; it was simple enough that I could memorize it, and it was many years before I was comfortable enough with math that, whenever I did it, it was obvious to me why what I was doing calculates exactly multiplication. Long division is a more complex method: it was taught to me twice by my parents, and twice again in the slightly harder polynomial variant by university textbooks, and yet I still don't have it memorized, because I never bothered to figure out how it works nor to practice it enough to understand it.)

I never in my life had an ability to add 2+2 while not understanding what + means. I did for half an hour have the same for long division (kinda... I did understand what division means, just not how the method accomplishes it) and then forgot. All the math I remember, I was taught in the correct order.

edit: a good test for whether I understood a method or just memorized it would be, if there's a step I'm not sure I remember correctly, whether I can tell which variation has to be the correct one. For example, in long multiplication, if I remembered each line has to be indented one place more to the right or left but wasn't sure which, since I understand it, I can easily tell that it has to be the left because this accomplishes the goal of multiplying it by 10, which we need to do because we had x0 and treated it as x.
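
For what it's worth, here's the same point as a throwaway Python sketch (my own rendering, nothing canonical): the "shift one place to the left" step is exactly a multiplication by 10 for each successive digit.

    def long_multiply(x, y):
        # long multiplication, with the "shift left one place" step made explicit
        total = 0
        shift = 0
        while y > 0:
            digit = y % 10                  # current digit of y, right to left
            partial = x * digit             # one "row" of the written method
            total += partial * 10 ** shift  # shifting left = multiplying the row by 10
            y //= 10
            shift += 1
        return total

    print(long_multiply(123, 45))  # 5535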


The point is the memorization exercise requires orders of magnitude fewer examples for bootstrapping.


Does it though? It's a common claim but I don't think that's been rigorously established.


> The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly

Perhaps that is how you learned math, but it is nothing like how I learned math. Memorizing steps does not help; I sucked at it. What works for me is understanding the steps and why we use them. Once I understood the process and why it worked, I was able to reason my way through it.

> The practice of solving problems that you describe is to ingrain/memorize those steps so you don't forget how to apply the procedure correctly.

Did you look at the types of problems presented by the ARC-AGI test? I don't see how memorization plays any role.

> They have now been able to achieve near perfect accuracy on comparison tasks, where GPT-4 is barely in the double digit success rate.

Then let's see how they do on the ARC test. While it is possible that generalized circuits can develop in LLMs with enough training, I am pretty skeptical till we see results.


> Perhaps that is how you learned math, but it is nothing like how I learned math.

Memorization is literally how you learned arithmetic, multiplication tables and fractions. Everyone starts learning math by memorization, and only later start understanding why certain steps work. Some people don't advance to that point, and those that do become more adept at math.


> Memorization is literally how you learned arithmetic, multiplication tables and fractions

I understood how to do arithmetic for numbers with multiple digits before I was taught a "procedure". Also, I am not even sure what you mean by "memorization is how you learned fractions". What is there to memorize?


> I understood how to do arithmetic for numbers with multiple digits before I was taught a "procedure"

What did you understand, exactly? You understood how to "count" using "numbers" that you also memorized? You intuitively understood that addition was counting up and subtraction was counting down, or did you memorize those words and what they meant in reference to counting?

> Also, I am not even sure what you mean by "memorization is how you learned fractions". What is there to memorize?

The procedure to add or subtract fractions by establishing a common denominator, for instance. The procedure for how numerators and denominators are multiplied or divided. I could go on.


Fractions is exactly an area of mathematics where I learned by understanding the concept and how it was represented and then would use that understanding to re-reason the procedures I had a hard time remembering.

I do have the single digit multiplication table memorized now, but there was a long time where that table had gaps and I would use my understanding of how numbers worked to calculate the result rather than remembering it. That same process still occurs for double digit numbers.

Mathematics education, especially historically, has indeed leaned pretty heavily on memorization. That doesn't mean that's the only way to learn math, or even a particularly good one. I personally think over-reliance on memorization is part of why so many people think they hate math.


> Fractions is exactly an area of mathematics where I learned by understanding the concept and how it was represented and then would use that understanding to re-reason the procedures I had a hard time remembering.

Sure, I did that plenty too, but that doesn't refute the point that memorization is core to understanding mathematics; it's just a specific kind of memorization that results in maximal flexibility for minimal state retention. All you're claiming is that you memorized some core axioms/primitives and the procedures that operate on them, and then memorized how higher-level concepts are defined in terms of that core. I go into more detail on the specifics here:

https://news.ycombinator.com/item?id=40669585

I agree that this is a better way to memorize mathematics, eg. it's more parsimonious than memorizing lots of shortcuts. We call this type of memorizing "understanding" because it's arguably the most parsimonious approach, requiring the least memory, and machine learning has persuasively argued IMO that compression is understanding [1].

[1] https://philpapers.org/rec/WILUAC-2


Every time I see people online reduce the human thinking process to just production of a perceptible output, I start questioning myself, whether somehow I am the only human on this planet capable of thinking and everyone else is just pretending. That can't be right. It doesn't add up.

The answer is that both humans and the model are capable of reasoning, but the model is more restricted in the reasoning that it can perform since it must conform to the dataset. This means the model is not allowed to invest tokens that do not immediately represent an answer but have to be derived on the way to the answer. Since these thinking tokens are not part of the dataset, the reasoning that the LLM can perform is constrained to the parts of the model that are not subject to the straitjacket of the training loss. Therefore most of the reasoning occurs in between the first and last layers and ends with the last layer, at which point the produced token must cross the training loss barrier. Tokens that invest in the future but are not in the dataset get rejected, and that limits the ability of the LLM to reason.


> People think that knowledge lies in the texts themselves; it does not, it lies in what these texts relate to and the processes that they are part of, a lot of which are out in the real world and in our interactions

And almost all of it is just more text, or described in more text.

You're very much right about this. And that's exactly why LLMs work as well as they do - they're trained on enough text of all kinds and topics, that they get to pick up on all kinds of patterns and relationships, big and small. The meaning of any word isn't embedded in the letters that make it, but in what other words and experiences are associated with it - and it so happens that it's exactly what language models are mapping.


It is not "just more text". That is an extremely reductive approach on human cognition and experience that does favour to nothing. Describing things in text collapses too many dimensions. Human cognition is multimodal. Humans are not computational machines, we are attuned and in constant allostatic relationship with the changing world around us.


I think there is a component of memorizing solutions. For example, for mathematical proofs there is a set of standard "tricks" that you should have memorized.


Sure, memory helps a lot; it allows you to concentrate your mental effort on the novel or unique parts of the problem.


> How many homework questions did your entire calc 1 class have? I'm guessing less than 100…

I’m quite surprised at this guess and intrigued by your school’s methodology. I would have estimated >30 problems a week on average across 20 weeks for myself.

My kids are still in pre-algebra, but they get way more drilling still, well over 1000 problems per semester once Zern, IReady, etc. are factored in. I believe it’s too much, but it does seem like the typical approach here in California.


I preferred doing large problem sets in math class because that is the only way I felt like I could gain an innate understanding of the math.

For example after doing several hundred logarithms, I was eventually able to do logs to 2 decimal places in my head. (Sadly I cannot do that anymore!) I imagine if I had just done a dozen or so problems I would not have gained that ability.


> This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Sure, but they learn a lot of labels.

> How many homework questions did your entire calc 1 class have? I'm guessing less than 100

At least 20 to 30 a week, for about 10 weeks of class. Some weeks were more, and I remember plenty of days where we had 20 problems assigned a day.

Indeed, I am a huge fan of "the best way to learn math is to do hundreds upon hundreds of problems", because IMHO some concepts just require massive amounts of repetition.


> illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts

Now imagine how much your kid would learn if the only input he ever received was a sequence of words.


Are you saying it's not fair to LLMs, because the way they are taught is different?

The difference is that we don't know better methods for them, but we do know of better methods for people.


I think they're saying that it's silly to claim humans learn with less data than LLMs, when humans are ingesting a continuous video, audio, olfactory and tactile data stream for 16+ hours a day, every day. It takes at least 4 years for a human child to be in any way comparable in performance to GPT-4 on any task both of them could be tested on; do people really believe GPT-4 was trained with more data than a 4 year old?


> do people really believe GPT-4 was trained with more data than a 4 year old?

I think it was; the guesstimate I've seen is that GPT-4 was trained on 13e12 tokens, which over 4 years is 8.9e9/day, or about 1e5/s.

Then it's a question of how many bits per token — my expectation is 100k/s is more than the number of token-equivalents we experience, even though it's much less than the bitrate even of just our ears let alone our eyes.
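
The arithmetic, for anyone who wants to check it (the 13e12 token count is the guesstimate above, not a confirmed figure):

    tokens = 13e12                # guesstimated GPT-4 training tokens
    days = 4 * 365                # four years
    per_day = tokens / days       # ~8.9e9 tokens/day
    per_second = per_day / 86_400
    print(f"{per_day:.1e} tokens/day, {per_second:.1e} tokens/s")  # ~8.9e+09, ~1.0e+05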


Interesting analysis, makes sense. I wonder how we should account for the “pre-built” knowledge that is transferred to a newborn genetically and from the environment at conception and during gestation. Of course things like epi-genetics also come into play.

The analogies get a little blurry here, but perhaps we can draw a distinction between information that an infant gets from their higher-level senses (e.g. sight, smell, touch, etc) versus any lower-level biological processes (genetics, epi-genetics, developmental processes, and so on).

The main point is that there is a fundamental difference: LLMs have very little prior knowledge [1] while humans contain an immense amount of information even before they begin learning through the senses.

We need to look at the billions of years of biological evolution, millions of years of cultural evolution, and the immense amounts of environmental factors, all which shape us before birth and before any “learning” occurs.

[1] The model architecture probably counts as hard-coded prior knowledge contained before the model begins training, but it is a ridiculously small amount of information compared to the complexity of living organisms.


I think it's all fair that both LMMs and people get a certain (even unbounded) amount of "pretraining" before actual tasks.

But after the training, people are much better equipped to do single-shot recognition and cognitive tasks on imagery and situations they have not encountered before, e.g. identifying (from pictures) which animal is being shown, even if it is only the second time they've seen that animal (the first being when they were shown that this animal is a zebra).

So, basically, after initial training, I believe people are superior in single-shot tasks—and things are going to get much more interesting once LMMs (or something after that?) are able to do that well.

It might be that GPT-4o can actually do that task well! Someone should demo it, I don't have access. Except, of course, GPT-4o already knows what zebras look like, so it would have to be something other than exactly that.


> I think they're saying that it's silly to claim humans learn with less data than LLMs, when humans are ingesting a continuous video, audio, olfactory and tactile data stream for 16+ hours a day, every day.

Yeah, but they're seeing mostly the same thing day after day!

They aren't seeing 10k stills of 10k different dogs, then 10k stills of 10k different cats. They're seeing $FOO thousand images of the family dog and the family cat.

My (now 4.5yo) toddler did reliably tell the difference between cats and dogs the first time he went with us to the local SPCA and saw cats and dogs that were not our cats and dogs.

In effect, 2 cats and 2 dogs were all he needed to reliably distinguish between cats and dogs.


> In effect, 2 cats and 2 dogs were all he needed to reliably distinguish between cats and dogs.

I assume he was also exposed to many images, photos and videos (realistic or animated) of cats and dogs in children books and toys he handled. In our case, this was a significant source of animal recognition skills of my daughters.


> I assume he was also exposed to many images, photos and videos (realistic or animated) of cats and dogs in children books and toys he handled.

No images or photos (no books).

TV, certainly, but I consider it unlikely that animals in the animation style of Peppa Pig help the classifier.

Besides which, we're still talking under a dozen cats/dogs seen till that point.

Forget about cats/dogs. Here's another example: he only had to see a burger patty once to determine that it was an altogether new type of food, different from (for example) a sausage.

Anyone who has kids will have dozens of examples where the classifier worked without a false positive off a single novel item.


So a billion years of evolutionary search plus 20 years of finetuning is a better method?


Two other points - I've also forgotten a bunch, but also know I could "relearn" it faster than the first time around.

To continue your example, I know I've learned calculus and was lauded at the time. Now I could only give you the vagaries, nothing practical. However I know if I was pressed, I could learn it again in short order.


> This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Yes. All that learning is feeding off one another. They're learning how reality works. Every bit of new information informs everything else. It's something that LLMs demonstrate too, so it shouldn't be a surprising observation.

> Once they have the basics down concept acquisition time shrinks rapidly

Sort of, kind of.

> and kids can easily learn their new favorite animal in as little as a single example.

Under 5 they don't. Can't speak what happens later, as my oldest kid just had their 5th birthday. But below 5, all I've seen is kids being quick to remember a name, but taking quite a bit longer to actually distinguish between a new animal and similarly looking ones they already know. It takes a while to update the classifier :).

(And no, they aren't going to one-shot recognize an animal in a zoo that they saw first time on a picture hours earlier; it's a case I've seen brought up, and I maintain that even most adults will fail spectacularly at this test.)

> Compare this to LLMs which can one-shot certain tasks, but only if they have essentially already memorized enough information to know about that task. It gives the illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts.

Correct, in the sense that the models don't update their weights while you use them. But that just means you have to compare them with ability of humans to one-shot tasks on the spot, "thinking on their feet", which for most tasks makes even adults look bad compared to GPT-4.

> How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.

I don't believe someone could learn calc in 100 exercises or less. Per concept like "addition of small numbers", or "long division", or "basic derivatives", or "trivial integrals", yes. Note that in-class exercises count too; learning doesn't happen primarily by homework (mostly because few have enough time in a day to do it).


> But that just means you have to compare them with ability of humans to one-shot tasks on the spot, "thinking on their feet", which for most tasks makes even adults look bad compared to GPT-4.

This simply is not true as stated in the article. ARC-AGI is a one-shot task test that humans reliably do much, much better on than any AI model.

> I don't believe someone could learn calc in 100 exercises or less.

I learned the basics of integration in a foreign language I barely understood by watching a couple of diagrams get drawn out and seeing far less than 100 examples or exercises.


> not enough people have kids.

Second that. I think I've learned as much as my children have.

> Watch a baby discover they have a right foot. Then a few days later figure out they also have a left foot.

Watching a baby's awareness grow from pretty much nothing to a fully developed ability to understand the world around them is one of the most fascinating parts of being a parent.


My kid is about 3 and has been slow on language development. He can barely speak a few short sentences now. Learning names of things and concepts made a big difference for him and that's a fascinating watch and realization.

This reminds me of the story of Adam learning names, or how some languages can express a lot more in fewer words. And it makes sense that LLMs look intelligent to us.

My kid loves repeating the names of things he learned recently. For the past few weeks, after learning 'spider' and 'snake' and 'dangerous', he keeps finding spiders around; no snakes, so he makes up snakes from curly drawn lines and tells us they are dangerous.

I think we learn fast because of stereo (3d) vision. I have no idea how these models learn, and I don't know if 3d vision will make multimodal LLMs better and require exponentially fewer examples.


> I think we learn fast because of stereo (3d) vision.

I think stereo vision is not that important if you can move around and get spatial clues that way also.


Every animal/insect I can think of has more than 1 eye. Some have a lot more than 2 eyes. It has to be that important.


> the kid will stop you, point at a dog in the book, and ask "dog?"

Of course for a human this can either mean "I have an idea about what a dog is, but I'm not sure whether this is one" or it can mean "Hey this is a... one of those, what's the word for it again?"


Babies, unlike machine learning models, aren't placed in limbo when they aren't running back propagation.

Babies need few examples for complex tasks because they get constant infinitely complex examples on tasks which are used for transfer learning.

Current models take a nuclear reactor's worth of power to run backprop, on top of a small country's GDP worth of hardware.

They are _not_ going to generalize to AGI because we can't afford to run them.


> Current models take a nuclear reactor's worth of power to run backprop, on top of a small country's GDP worth of hardware.

Nice one. Perhaps we are to conclude the whole transformer architecture is amazingly overblown in storage/computation costs.

AGI or not, we need better approach to what transformers are doing.


> I swear, not enough people have kids.

My friend's toddler, who grew up with a cat in the house, would initially call all dogs "cat". :-D


My niece, 3yo, at the zoo, spent about 30 seconds trying to figure out whether a pig was a cat or a car.


I haven't seen 1000 cats in my entire life. I'm sure I learned how to tell a dog from a cat after being exposed to just a single instance of each.


I'm sure you saw over 1B images of cats though, assuming 24 images per second from vision.


> I'm sure you saw over 1B images of cats though, assuming 24 images per second from vision.

The AI models aren't seeing the same image 1B times.


Neither are you, during those 10 000 hours most of the time you aren't absolutely still.


> Neither are you, during those 10 000 hours most of the time you aren't absolutely still.

So? I'm still seeing the same object. Large models aren't trained on 10k different images of a single cat.


I have a small kid. When they first saw some jackdaws, the first bird they noticed could fly, they thought it was terribly exciting and immediately learned the word for them, and generalised it to geese, crows, gulls and magpies (plus some less common species I don't know what they're called in english), pointing at them and screaming the equivalent of 'jackda! jackda!'.


> Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

If I was presented with 10 pictures of 2 species I'm unfamiliar with, about as different as cats and dogs, I expect I would be able to classify further images as either, reasonably accurately.


Not to mention that babies receive petabytes of visual input to go with other stimuli. It’s up for debate how sample efficient humans actually are in the first few years of their lives.


Hardly. Visual acuity is quite low (limited to a tiny area of the FoV), your brain is filling in all the blanks for you.


Even at that resolution (about 0.35 MP per eye for just the fovea, before any processing), napkin math suggests 7.3TB per day. Over 5 years you get about 13PB if my math is right, assuming 16 waking hours per day.
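
One set of assumptions that lands near those figures (my guesses at the napkin math, not necessarily the ones used above): 0.35 MP per eye, both eyes, 3 bytes per pixel, roughly 60 "frames" per second, 16 waking hours a day.

    pixels_per_eye = 0.35e6          # fovea only, per the comment above
    eyes = 2
    bytes_per_pixel = 3              # assumed
    frames_per_second = 60           # assumed
    waking_seconds = 16 * 3600

    per_day = pixels_per_eye * eyes * bytes_per_pixel * frames_per_second * waking_seconds
    five_years = per_day * 365 * 5
    print(f"{per_day / 1e12:.1f} TB/day, {five_years / 1e15:.1f} PB over 5 years")
    # ~7.3 TB/day, ~13.2 PB over 5 years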


That’s all true, yet my 2.5 year old sometimes one-shots specific information. I told my daughter that woodpeckers eat bugs out of trees after doing what you said and asking “what’s that noise?” for the fifth time in a few minutes when we heard some this spring. She brought it up again at least a week later, randomly. Developing brains are amazing.

She also saw an eagle this spring out the car window and said “an eagle! …no, it’s a bird,” so I guess she’s still working on those image classifications ;)


I think your comment over-intellectualises the way children experience the world.

My child experiences the world in a really pure way. They don't care much about labels or colours or any other human inventions like that. He picks up his carrot; he doesn't care about the name or the color. He just enjoys it through purely experiencing eating it. He can also find incredible flow-state-like joy from playing with river stones or looking at the moon.

I personally feel bad that I have to teach them to label things and put things in boxes. I think your child is frustrated at times because it's a punish of a game: the departure from "the oceanic feeling".

Your comment would make sense to me if the end game of our brains and human experience is labelling things. It’s not. It’s useful but it’s not what living is about.




