
This is only one piece of a larger argument; you need to read on to see the rest of it.

The form of the argument is this: there is no direct evidence for X, but there is a mountain of circumstantial evidence supporting "not X", so therefore, almost certainly, "not X."

X = "we can teach students how to solve problems in general, and that will make them good mathematicians able to discover novel solutions irrespective of the content"




Nice to see a response from you!

I have read the rest of the argument. My take upon reading it, though, is that this is just one more contribution to a back-and-forth over every aspect of math education that has been studied. Even though it was published in 2010, the landscape in 2024 very much points to "it's unclear" as the answer to "is [anything] effective?", at least for me, unfortunately.


> the landscape in 2024 very much points to "it's unclear" as the answer to "is [anything] effective?", at least for me, unfortunately.

Interesting. Not sure if you saw the following post from a couple months ago, but if not, you may wish to check it out:

Which cognitive psychology findings are solid that I can use to help students? - https://news.ycombinator.com/item?id=40348986


I did! On MESE first, then on Hacker News.

Usually when there's a replication crisis, people talk about perverse incentives and p-hacking. But there are two things I want to mention that people don't talk about as much:

- Lack of adequate theoretical underpinnings.

- In the case of math education, we need to watch out for the differences in what researchers mean by "math proficiency." Is it fluency with tasks, or is it ability to make some progress on problems not similar to worked examples?


> Is it fluency with tasks, or is it ability to make some progress on problems not similar to worked examples?

That's an interesting point. Ideally students would have both. My impression is that the latter is far less trainable, and the best you can do is go through enough worked examples, spread out so that every problem in the space of expected learning is within a reasonably small distance to some worked example.

I.e., you can increase the number of balls (worked examples with problem-solving experiences) in a student's epsilon-cover (knowledge base), but you can't really increase epsilon itself (the student's generalization ability).
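To make the epsilon-cover picture concrete, here's a toy sketch (points in 1-D stand in for problems and worked examples; everything here is illustrative, not a model of learning):

```python
# Toy illustration of the epsilon-cover analogy. "Problems" and "worked
# examples" are hypothetical points on a number line; epsilon plays the
# role of the student's fixed generalization radius.

def is_covered(problems, worked_examples, epsilon):
    """True iff every problem lies within epsilon of some worked example."""
    return all(
        any(abs(p - w) <= epsilon for w in worked_examples)
        for p in problems
    )

problems = [0.0, 1.0, 2.0, 3.0]

# With a fixed epsilon of 0.6, sparse worked examples leave gaps...
print(is_covered(problems, [0.0, 2.0], 0.6))             # False
# ...but adding more balls (worked examples) closes them.
print(is_covered(problems, [0.0, 1.0, 2.0, 3.0], 0.6))   # True
```

The point of the sketch: holding epsilon fixed, coverage only improves by adding centers, which is the claim about worked examples above.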

But if you know of any research contradicting that, I'd love to hear about it.

> Lack of adequate theoretical underpinnings.

If you have time, would you mind elaborating a bit more on this?

My impression is that general problem-solving training falls into the category of lack of adequate theoretical underpinnings, but I doubt that's what you mean to refer to with this point.


> That's an interesting point. Ideally students would have both. My impression is that the latter is far less trainable, and the best you can do is go through enough worked examples, spread out so that every problem in the space of expected learning is within a reasonably small distance to some worked example.

I simply mean that research team A will claim a positive result for method A because their test measured task fluency, while team B will claim a positive result for method B because their test measured the ability to wade through new and confusing territory. (By the way, I think "generalization ability" is an unhelpful term here. The flip side of task fluency I think of more as debugging: turning confusing situations into unconfusing ones.)

> If you have time, would you mind elaborating a bit more on this?

I don't know what good theoretical underpinnings for human learning look like (I'm not a time traveler), but to make an analogy, imagine chemistry before the discovery of the periodic table, and specifically how off the mark both sides of any argument in chemistry must have been back then.

> My impression is that general problem-solving training falls into the category of lack of adequate theoretical underpinnings, but I doubt that's what you mean to refer to with this point.

By the way, I see problem solving as a goal, not as a theory. If your study measures mathematical knowledge without problem solving, your tests will look like the standardized tests given to high school students in the USA. The optimal way to teach a class for those tests will then be in the style of "When Good Teaching Leads to Bad Results," which Alan Schoenfeld wrote regarding NYC geometry teachers.



