There's obviously nothing wrong with learning by reading, but the way you tell whether what you read is true is by seeing whether or not it fits in with observation of reality. That's the reason we're no longer reading the books about phlogiston.
> the way you tell whether what you read is true is by seeing whether or not it fits in with observation of reality
The only way any of us ever gets to see "whether or not it fits in with observation of reality" is by seeing whether we get an A or an F on the test asking about it.
Seriously.
The "moons of Jupiter" question is the only one of the above that one gets to connect to an observation independent of humans, and even then they'd be wrong, because you can't just count all the moons of Jupiter from your backyard with a DIY telescope. We know the correct answer only because other people both built building-sized telescopes and threw a bunch of car-sized spacecraft at Jupiter - and unless you had a chance to operate either, then for you the "correct answer" is what you read somewhere and what you expect other people to consider correct - that is the only criterion you have available.
Independently checking the information you read in textbooks is very difficult, for sure. But it's still how we decide what's true and what's not. If a new moon somehow appeared in orbit around Jupiter tomorrow, we'd say the textbooks were wrong; we wouldn't say the moon isn't there.
What? That's (1) not true and (2) says, uh, a lot of unintentional things about the way you approach the world. I'm not sure you realize quite how it makes you look.
For one, it's not even internally consistent -- the people who built telescopes and satellites didn't "see" the moons, either. They got back a bunch of electrical signals and interpreted them to mean something. This worldview essentially boils down to the old "brain in a jar", which is fun to think about at 3am when you're 21 and stoned, but it's not otherwise useful, so we discard it.
For another, "how many moons does Jupiter have" doesn't have a correct answer, because it doesn't have an answer. There is no objective definition of what a "moon" is. There's not even a precise IAU technical definition. Jupiter has rings that are constantly changing, every single particle of those could be considered a moon if you want to get pedantic enough.
I'm always a bit shocked and disappointed with people when they go "well, you learn it on a test and that's how you know" because ...no, no that's not at all how it works. The most essential part of learning is knowing how we know and knowing how certain we are in that conclusion.
"Jupiter has 95 moons" is not a useful or true fact. "Jupiter has 95 named moons and thousands of smaller objects orbiting it, and the International Astronomical Union has decided it's not naming any more of them" is both useful and predictive [0], because you know there aren't going to be any more unless something really wild happens.
> I'm not sure you realize quite how it makes you look.
I probably don't.
> For one, it's not even internally consistent -- the people who built telescopes and satellites didn't "see" the moons, either. They got back a bunch of electrical signals and interpreted it to mean something.
I'm not trying to go 100% reductionist here; I thought the point was clear. I was focusing on the distinction between "learn from experience" vs. "learn from reading about it", and the corresponding distinction of "test by predicting X" vs. "test by predicting other peoples' reactions to statements about X" - because that's the distinction TFA assumes we're on the "left side" of and LLMs on the "right", and I'm saying humans are actually on the same side as LLMs.
> This worldview essentially boils down to the old "brain in a jar" which is fun to think about at 3am when you're 21 and stoned, but it's not otherwise useful so we discard it.
Wait, what's wrong with this view? It wasn't exactly refuted in any way, despite proclamations by the more "embodied cognition" folks, whose beliefs are to me just a religion retroactively fitting itself to modern science to counter the diminishing role of the human soul at the center of it.
> I'm always a bit shocked and disappointed with people when they go "well, you learn it on a test and that's how you know" because ...no, no that's not at all how it works. The most essential part of learning is knowing how we know and knowing how certain we are in that conclusion.
My argument isn't simply "learn for the test and it's fine". I was myself the kind of person who refused to learn "for the test" - but that doesn't change the fact that, in 99% of cases, what I was doing was anticipating the reactions of people (imaginary or otherwise) who hold accurate beliefs about the world, because it's not like I was able to test any of it empirically myself. And no, internal belief consistency is still text land, not hard-empirical-evidence land.
My point is to highlight that, for most of what we today call knowledge - anything not tied to directly experiencing the phenomenon in question - we're not learning in ways fundamentally different from what LLMs are doing. This isn't to say that LLMs are learning it well or understanding it (for whatever one means by "understanding") - just that the whole line of argument that "LLMs only learn from statistical patterns in text, unlike us, therefore they can't understand" is wrong, because 1) statistical patterns in text do contain that knowledge, and 2) that's what we're learning it from as well.
> Wait, what's wrong with this view? Wasn't exactly refuted in any way, despite proclamations by the more "embodied cognition" folks, whose beliefs are to me just a religion trying to retroactively fit to modern science to counter diminishing role of human soul at the center of it.
It's unfalsifiable, that's what's wrong with it. Sure, you could be a brain in a jar experiencing a simulated world, but there's nothing useful about that worldview. If the world feels real, you might as well treat it like it is.
> My point is to highlight that, for most of what we today call knowledge - anything not tied to directly experiencing the phenomenon in question - we're not learning in ways fundamentally different from what LLMs are doing
I get what you're trying to say -- nobody can derive everything from first principles, which is true -- but your conclusion is absolutely not true. Humans don't credulously accept what we're given in a true/false binary and spit out derived facts.
All knowledge is an approximation. There is very little absolute truth. And we're good at dealing with that.
Humans learn by building up mental models of how systems work: understanding when those models apply and when they don't, how much we can trust them, and how to test conclusions when we aren't sure.