After reading this, it's clear that:
1) It's more important to hit the target (what users need) than to throw quickly or with force.
2) We often don't realize how scalable things are.
Simply put, the author doesn't know what he's talking about.
This nostalgia for the "good old days" should have a name.
This comparison of "medieval learning" and "modern learning" is even more lopsided than "Wilt Chamberlain" versus "Michael Jordan." The sheer amount of knowledge and the use of high-level abstractions are incomparable. Imagine everything we've learned in the last 500 years, then drop all of it from the curriculum and teach children only what remains: they'd finish school in one year.
If your goal is to create a capable laborer for a specific job that won't change in their lifetime, apprenticeships are the way to go. But if your goal is to teach a person many abstract ideas not tied to any particular job, that approach won't work.
3. Teaching with real-world examples and learning by doing is a popular idea ("general abstractions" vs. "hyperspecific things," "do and watch" vs. "study"). But how many of the things you study in school can you actually touch or do? If you throw away the last 500 years, then yes, whatever is left you can definitely touch.
For example, the author refers to "economic reasoning." Okay, let's create a real-world example and have the student either do it or watch how someone else does it. Hard, right?
4. We actually could use "real-world examples" and "do and watch" for almost anything students learn. For example, we could take the class to a nuclear power plant and 'do and watch' there. There are two problems: students won't be allowed to do many of those things, and it would take years to get through a single year of school. We may call not doing this "an embarrassment," but there are reasons for it.
5. "Human beings, it appears, are nearly unique in the animal world for being able to learn something by watching somebody else do it." This is simply not true.
6. "It’s often not so important to understand the reasons why you should do something as it is to see it being performed correctly." We do so much of this in real life unintentionally. Imagine a world where we do it intentionally. What would happen if things changes faster and faster? Who would pay all of these "real-world example" learners during their transition?
7. "Most theories are wrong anyway." You may read this and skip the other parts.
8. He got the pyramid model wrong. Putting the base first doesn't mean learning all of math first, then physics, then chemistry, then biology. A biology base might cover the differences and similarities between animals and plants, and the main types of animals.
9. "...this must mean we have an ironclad theory of how scientific knowledge is produced. Except we don’t." Yes, we actually have a scientific method. He probably couldn't "do and watch" it, so it fell out of his sight.
What if LLMs get 'a mental model of requirements/code behavior'? An LLM can contain experts inside it, each with its own specialty. You can even combine several LLMs, each doing its own job: one designs the architecture, another writes documentation, a third critiques, a fourth writes code, a fifth creates and updates the "mental model," and so on.
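To make that concrete, here is a minimal sketch of such a role-based pipeline in Python. Everything in it is hypothetical: `call_llm` is just a stand-in for whatever chat-completion API you actually use, and the roles and prompts are only illustrative.

```python
# Hypothetical multi-role pipeline: several LLM "specialists" pass work along.
# call_llm() is a placeholder; in practice it would wrap a real chat-completion API.

def call_llm(role_prompt: str, context: str) -> str:
    """Stub that stands in for a real LLM call."""
    return f"[{role_prompt}] (answer based on {len(context)} chars of context)"

ROLES = {
    "architect": "Design a high-level architecture for the requirement.",
    "coder": "Write code that implements the architecture.",
    "critic": "Review the code and list concrete problems.",
    "doc_writer": "Write short documentation for the final code.",
}

def run_pipeline(requirement: str) -> dict:
    """Each specialist sees the requirement plus everything produced so far,
    which acts as the shared 'mental model' of the task."""
    mental_model = {"requirement": requirement}
    for role, role_prompt in ROLES.items():
        context = "\n\n".join(f"{k}:\n{v}" for k, v in mental_model.items())
        mental_model[role] = call_llm(role_prompt, context)
    return mental_model

if __name__ == "__main__":
    for role, output in run_pipeline("Users need to export invoices as PDF.").items():
        print(f"--- {role} ---\n{output}\n")
```

The design choice here is that the accumulated outputs double as the "mental model": each later role reads everything the earlier roles produced.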
I agree about the PM role, but its requirements are so low that anyone could fill it.
Understanding the business problem or goal is actually the context for correctly writing code. Without it, you start acting like an LLM that didn't receive all the necessary code to solve a task.
When a non-developer writes code with an LLM, the code quality suffers from their weaker programming skills, but at the same time it benefits from the extra "business context" they bring.
In a year or two, I imagine that a non-developer with a proper LLM may surpass a vanilla developer.
I don't think it's helpful to put words in the LLM's mouth.
To properly think about that, we need to describe how an LLM thinks.
It doesn't think in words, or shuffle vague, unwieldy concepts around and then translate them into words the way humans do. It works with words (tokens) and the probability of which token comes next. The key point is that those probabilities encode the "thinking" that originally stood behind the sentences in its training set, so in effect it manipulates words together with the meaning behind them.
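As a toy illustration (my own, not from the article) of "tokens and the probability of what comes next," here is a tiny Python sketch. A real model computes these probabilities with a neural network over the entire context rather than a lookup table, but the generation loop has the same shape:

```python
import random

# Toy next-token model: for each token, a distribution over possible next tokens.
# Real LLMs condition on the whole context, not just the previous token.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"bug": 0.5, "fix": 0.3, "code": 0.2},
    "a":       {"bug": 0.7, "test": 0.3},
    "bug":     {"<end>": 1.0},
    "fix":     {"<end>": 1.0},
    "code":    {"<end>": 1.0},
    "test":    {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> list:
    """Sample tokens one at a time according to the next-token distribution."""
    tokens = ["<start>"]
    while len(tokens) < max_tokens:
        words, weights = zip(*NEXT_TOKEN_PROBS[tokens[-1]].items())
        next_token = random.choices(words, weights=weights)[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens[1:]

print(" ".join(generate()))  # e.g. "the fix" or "a bug"
```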
Now, to your points:
1) Regarding adding more words to the context window: it's not about "more," it's about "enough." If you don't have enough context for your task, how will you accomplish it? It's the fairy-tale errand: "go there, I don't know where."
2) Regarding "problem solved," if the LLM suggests or does such a thing, it only means that, given the current context, this is how the average developer would solve the issue. So it's not an intelligence issue; it's a context and training set issue! When you write that "software engineers can step back, think about the whole thing, and determine the root cause of a problem," notice that you're actually referring to context. If the you don't have enough context or a tool to add data, no developer (digital or analog) will be able to complete the task.