> Not sure what'll fix it though. Perhaps efforts to promote good science as opposed to great science, like accepting publications for failed attempts (Michelson-Morley style) and replications of earlier work
Too often we try to solve social problems by "adding" something, whether it be adding an incentive or adding a program. I think to really solve the problem of the publish-or-perish mentality, we first need to understand the root cause or causes of this mentality, then work to remove them. What I'm seeing here is humans being shepherded by enormous economic and social pressure to engage in selfish behavior for survival and/or social acceptance. Adding an incentive or a program therefore ultimately does not work, because it does nothing about the fact that the humans are still largely enslaved by the aforementioned pressure. So we must remove the pressure. Remove the pressure, remove the selfish behavior. But how to remove the pressure?
Since this is simple to answer (remove all requirements to publish frequently, and hope that a lot of journals die naturally after that), the real question is: how do we distribute funding to scientists without forcing them to frequently show their work?
I could imagine a world where every scientist (from Ph.D. student onwards) is evaluated only on the basis of, say, a biyearly dissertation-style report, which includes all results (positive and negative) and all data/metadata/code/analysis. Rapid communication of interesting results can still happen at conferences and in the remaining journals.
But then who reads, reviews and ranks all this work? Who gets positions and funding?
The Carnegie Institution of Washington used to use this model — each year publishing a 'year book.' From 1902 through the 1980s, Institution-funded scientists contributed detailed reports — including figures and even new results — to the organization's Year Book. Year Books often exceeded 700 pages.
Today, the Year Book is little more than a glossy fundraising document.
I’ve come to simply realize the problem isn’t AIs, it’s humans.
More specifically, humans who feel the need to make a quick buck.
Actually, the problem isn’t those humans, it’s the societal structure creating incentives and pressures that lead to a large percentage of humans feeling like they need to make a quick buck. Whether that quick buck be for cost of living or for social status gain.
I'm sure there are thousands of resumes out there with a Medium blog URL on them, which probably impresses HR but not so much if you actually read them. It's all just another way to "build your personal brand".
You can solve this by working backwards from the outside in, replacing a (b x) with c a b x wherever you can (not inside out, or else you get into an infinite loop!)
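For instance, here is the outside-in rewriting played out on the three-function case (same c as in the Lean code below; each step rewrites one a (b x) into c a b x):

f (g (h x))
= c f g (h x)        -- a = f, b = g, x = h x
= c (c f g) h x      -- a = c f g, b = h, x = x
= c c (c f) g h x    -- inside c (c f g): a = c, b = c f, x = g
= c (c c) c f g h x  -- inside c c (c f): a = c c, b = c, x = f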
A cute alternative expression to solve the curried composition puzzle is c c c c c, just 5 c's in a row :)
Finally, Lean is a great language to do these puzzles in. See the following code:
/-- Compose! -/
def c (g : β → γ) (f : α → β) := g ∘ f

#reduce c (c c) c ?f ?g ?h ?x
/-
?f (?g (?h ?x))
-/

#reduce c c c c c ?f ?g ?h ?x
/-
?f (?g (?h ?x))
-/
I ran #reduce on each c c ... c to explore the "palette" I had to work with, and accidentally stumbled upon the answer that way. I stumbled on the c (c c) c solution the same way.
May as well include a formalized proof while we're at it:
def c (g : β → γ) (f : α → β) (x : α) : γ := g (f x)

example (h : γ → δ) (g : β → γ) (f : α → β) (x : α) :
    h (g (f x)) = (c (c c) c) h g f x :=
  rfl

-- Or
example :
    (fun (h : γ → δ) (g : β → γ) (f : α → β) (x : α) => h (g (f x)))
      = (c (c c) c) :=
  rfl
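For good measure, the five-c solution also checks out definitionally (a quick sketch using the same c as above):

example (h : γ → δ) (g : β → γ) (f : α → β) (x : α) :
    h (g (f x)) = (c c c c c) h g f x :=
  rfl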
> when the opposite side cannot be convinced by rationality, which is most of the time
Which possibility is the "failure" possibility here, that the opposite side gets convinced, or that the opposite side doesn't get convinced? I'd argue the opposite side getting convinced is the failure. This one singular exchange somehow managed to convince him of your viewpoint amidst his entire lifetime of experiences that led him to conclude the opposite.
I think it's the old rationalist way of thinking: "If your mind isn't changed by a syllogistically correct argument, you must not be arguing in good faith (or otherwise can't be convinced by rationality)."
Everyone arrives at conclusions about various topics from the data they obtain throughout their lives, through a continuous, complex inference process they cannot communicate. Included in this process is a judgment of which sources are more credible and which are less. Your friend is more credible than some guy showing you a study that contradicts your belief in order to change your mind. (There are lots of studies out there, so it is not hard to find one that supports your argument.) Trying to distill this complex inference process into a linear argument will necessarily lead to an inaccurate representation of how they arrived at the belief.
Doesn't mean everyone's right, though. Governments, cults, and friend groups are very good at shaping the data an individual receives. But even in these cases, you can see how, based on the limited information they do receive, it is quite reasonable from that internal perspective to believe what they do believe.
Arguments would be better if by "argument" we mean some effort, no matter how difficult, to communicate how you arrived at the beliefs you have, rather than a "well" crafted linear rational argument.
> Arguments would be better if by "argument" we mean some effort, no matter how difficult, to communicate how you arrived at the beliefs you have, rather than a "well" crafted linear rational argument.
Strong point, have you. How did you arrive at the belief that an argument strengthened by such effort would be better?
To be clear, I meant that the conversation (the argument) would be a better experience for everyone involved, not the argument itself being somehow strengthened by it.
Nevertheless, I'll try to recall how I arrived at this conclusion.
The biggest influence was seeing, for all my life, the left and right in America each claim the other is stupid, irrational, and un-convinceable by rational argument. When I was 16 it really did seem to me that the right was the irrational side here, but seeing everything come full circle with what's happening on the left these days, I realized it's kind of weird and unlikely that half the population, divided on a political line, is actually worse at thinking than the other half.
(See the bottom of this thread for a vivid example.)
Thanks for this. Indeed, I can confirm the conversation is better this way. That said, I am still not convinced that about half the population wouldn't be worse at thinking, however weird that may be.
> There is no body of research based on randomized, controlled experiments indicating that such teaching leads to better problem solving.
I'm sorry, but one doesn't exactly come across randomized controlled experiments in teaching very often... not to even mention ones that are well designed... so this isn't saying much.
This is only one piece within a larger argument. You need to read on to see the rest of the argument.
The form of the argument is this: there is no direct evidence for X, but there is a mountain of circumstantial evidence supporting "not X", so therefore, almost certainly, "not X."
X = "we can teach students how to solve problems in general, and that will make them good mathematicians able to discover novel solutions irrespective of the content"
I have read the rest of the argument. However, my take upon reading it is that this is just one more contribution in a back-and-forth argument about every aspect that has been studied in math education. Despite the fact that this was published in 2010, the landscape in 2024 very much points to "it's unclear" as the answer to "is [anything] effective?", at least for me, unfortunately.
Usually when there's a replication crisis, people talk about perverse incentives and p-hacking. But there are two things I want to mention that people don't talk about as much:
- Lack of adequate theoretical underpinnings.
- In the case of math education, we need to watch out for the differences in what researchers mean by "math proficiency." Is it fluency with tasks, or is it ability to make some progress on problems not similar to worked examples?
> Is it fluency with tasks, or is it ability to make some progress on problems not similar to worked examples?
That's an interesting point. Ideally students would have both. My impression is that the latter is far less trainable, and the best you can do is go through enough worked examples, spread out so that every problem in the space of expected learning is within a reasonably small distance to some worked example.
I.e., you can increase the number of balls (worked examples with problem-solving experiences) in a student's epsilon-cover (knowledge base), but you can't really increase epsilon itself (the student's generalization ability).
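(To spell out the metric-space analogy: a set C is an ε-cover of a space X with metric d when

∀ x ∈ X, ∃ c ∈ C, d(x, c) ≤ ε.

Here X is the space of problems, C is the set of worked examples, d is some informal "how different are these problems" distance, and ε is the student's generalization radius. The labels are mine, just to pin down the picture.)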
But if you know of any research contradicting that, I'd love to hear about it.
> Lack of adequate theoretical underpinnings.
If you have time, would you mind elaborating a bit more on this?
My impression is that general problem-solving training falls into the category of lack of adequate theoretical underpinnings, but I doubt that's what you mean to refer to with this point.
> That's an interesting point. Ideally students would have both. My impression is that the latter is far less trainable, and the best you can do is go through enough worked examples, spread out so that every problem in the space of expected learning is within a reasonably small distance to some worked example.
I simply mean that research team A will claim a positive result for method A because their test tested task fluency, while team B will claim a positive result for method B because their test tested the ability to wade through new and confusing territory. (Btw, I think "generalization ability" is an unhelpful term here. The flip side of task fluency I think of more as debugging, or turning confusing situations into unconfusing situations.)
> If you have time, would you mind elaborating a bit more on this?
I don't know what good theoretical underpinnings for human learning look like (I'm not a time traveler), but to make an analogy, imagine chemistry before the discovery of the periodic table, and specifically how off-the-mark both sides of any argument in chemistry must have been back then.
> My impression is that general problem-solving training falls into the category of lack of adequate theoretical underpinnings, but I doubt that's what you mean to refer to with this point.
By the way, I see problem solving as a goal, not as a theory. If your study measures mathematical knowledge without problem solving, your tests will look like the standardized tests given to high school students in the USA. The optimal way to teach a class for those tests will then be in the style of "When Good Teaching Leads to Bad Results," which Alan Schoenfeld wrote about NYC geometry teachers.
You seem to have linked a collection of general research on teaching and learning, which I am aware exists. I'm talking about randomized controlled trials, where you assign one group of students to receive the intervention and another group not to receive it, and, if it's single- or double-blinded, without them and/or the researchers knowing which group is which. Even writing this brings up logistical questions about how you might get a reliable research result doing this for teaching (unlike, say, medicine, where it's easy to fool a patient into thinking a placebo is the drug).
> Maybe you haven’t had reasons to come across such research before
Not the OP, but I've "come across" a lot of education research. By "come across", I mean I've read so much that it makes my eyes bleed.
There is some good research that yields interesting and compelling results. Rare, but out there. Usually by an individual researcher and maybe with a team. Almost never by a school of education of significant size or by (almost?) any specific field in education.
Results in education are challenging to replicate by a different researcher in a slightly different context, and it is often trivially easy to re-run a study and reach a competing/contrary conclusion by controlling for a variable that the original researcher mentioned but did not control for (e.g., motivated subjects versus unmotivated subjects).
Additionally, much research in education is not well designed, or is well designed but on a relatively meaningless topic. There is a lot of touchy-feely research out there (like the idea that folks can learn math with just problem-solving skills), and folks p-hack the hell out of data to support their a priori conclusions. It's a smart thing to do to maximize funding and/or visibility in academic journals, but it is absolutely irresponsible in the quest for "truth" and knowledge, which one would hope our education researchers would want (n.b., they largely don't).
I would agree there are a lot of problems with a lot of education research. Many purported findings do not replicate or are otherwise impossible to replicate.
However, there are also many findings that are actually legit. As you say, they're rare, but there are enough of them to paint a surprisingly complete picture when you pull them together.
You're right that there are no (smooth) flat embeddings of a torus into 3-space.
To understand how a torus can be flat, it's best to replace the idea of folding with the idea of placing portals on edges. Start with a square and put portals between the north and south edges and between the left and right edges. Intuitively this is flat, and this intuition does indeed capture the mathematical notion that a torus is flat.
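In more standard language (my gloss of the portal construction): the portal square is the quotient

T² = ℝ²/ℤ², identifying (x, y) ~ (x + m, y + n) for all integers m, n.

Since the identifications are translations, which are isometries of the Euclidean plane, the flat metric descends to the quotient, so the portal torus genuinely has zero curvature everywhere.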
Why can't I do the same thing with the surface of a sphere?
Take two circles, side by side, connected by an infinitesimally small overlap, and put portals all around the circles to their equivalent point in the other circle. Then fold this over and inflate it into a sphere.
If you're suspicious about the discontinuity where they overlap, I think that's a red herring - no one is claiming the sphere is isomorphic to two disjoint surfaces of any shape.
Alternatively, make it a cube. I can definitely fold a single piece of paper into a cube, or equivalently, I can put portals on my piece of paper to give it the topology of a cube. Intuitively that is flat. But the cube isn't one of the 18 forms proposed for the universe. Nor are any of the other millions of 3D shapes I can make with the same process.
Here's an easy way to test the curvature of these examples. Draw a circle (all points equidistant to a given one) centered at a point on one of the "glue" edges. The amount that the circumference of that circle is short of the expected 2πr is a measure of the curvature at that point.
In the two-discs sphere construction, a circle centered on the disc boundaries will appear half in each disc. But it will be short of the usual circumference due to the shape of the discs. All the curvature has been pushed to these boundaries.
In the cube, a circle centered at a vertex will appear on three sides; its circumference will be three-quarters of the usual value. Note that all the curvature of the cube is at the vertices, not the edges!
For the portal-torus, try drawing a circle anywhere, passing through the portals... it will have the usual circumference, zero curvature everywhere.
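To put numbers on the test: at a cone point with total angle θ, a circle of small radius r centered there has

C(r) = θr, so the deficit is 2πr − θr = (2π − θ)r,

and the concentrated curvature at that point is the angle deficit 2π − θ. At a cube vertex, θ = 3 · (π/2) = 3π/2, so the deficit is π/2 per vertex, matching the three-quarters circumference above. (The formulas are just a restatement of the circle test, not anything beyond it.)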
I love this! And even more is true -- you can read off the Euler characteristic from adding up how many fractions of a circle are lost over all the points.
For the cube, at each vertex you've lost a quarter circle, and there are 8 vertices -- hence the Euler characteristic of a cube is 2.
For the two-disk model of the sphere, a similar thing should be true, I think, but I haven't worked it out in detail -- the integral of "circles lost" over the sphere (the support of this integral is the shared boundary of the disks) should be 2 as well.
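The bookkeeping behind this is the discrete Gauss–Bonnet theorem: the angle deficits sum to 2π times the Euler characteristic,

Σ_v (2π − θ_v) = 2πχ.

For the cube: 8 · (π/2) = 4π = 2π · 2, so χ = 2. Dividing through by 2π is exactly the "fractions of a circle lost" count.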
For the two-disc sphere, I can't think of an intuitive way to "see" the "circles lost" integral. But here's a different intuitive way to see the total curvature.
Another way to measure the curvature is to look at how much the sum of the interior angles of an n-sided polygon exceeds the usual sum π(n - 2). It's most common to think about triangles, but we can also think about 2-gons... these are usually degenerate shapes with a sum of interior angles of zero.
But on the two disc-sphere, draw two lines, each from the center of one disc to the center of the other disc, passing straight through the glued boundary. These form a 2-gon with sum of interior angles (and also excess over the usual value) equal to twice the angle between the lines. To get the total curvature of the whole sphere, let each of the two interior angles be 2π, for a total of 4π... two circles, the Euler characteristic.
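In symbols: the interior angles are α₁ = α₂ = 2π, and the flat value is π(n − 2) = 0 for n = 2, so

excess = α₁ + α₂ − 0 = 4π = 2πχ, giving χ = 2,

consistent with the vertex-deficit count for the cube.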
One thing that helped me is considering how to deform the edges of your glued-circle example into a flat square. First, map the two sides of the seam to the two corners of the plane. This becomes a sphere by folding it into a triangle and then gluing (or portaling) up neighboring edges. Having the portals on neighboring edges means that they change directions in weird ways.
This is somewhat different than the torus with portals example, where the portals were placed on opposite edges of the plane.
That's a good response! I had to spend some time to work out what goes wrong here. But I figured it out.
Parallel transport is broken in your model of the sphere. Take this example: take a vector pointing north in circle 1 and send it through the north portal. It should be pointing south when it gets to the other circle. Fine -- north in circle 1 corresponds to south in circle 2.
Now send the north-pointing vector east instead. It is going to point north in circle 2.
So the north vector changes direction depending on which portal it goes through, which means parallel transport is path-dependent. Hence your model of the sphere is not flat.
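(The general fact in play, stated in holonomy terms -- my framing of the same argument: carrying a vector around a closed loop on a surface rotates it by the total curvature enclosed,

rotation angle = ∬_A K dA,

so on a genuinely flat surface, transport along homotopic paths must agree. The north-portal and east-portal routes here disagree, so there is curvature somewhere between them.)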
Thinking about this a bit more, I think I see what you mean - the example where you fold it into a torus is "intuitively flat" because when you leave the portal travelling north, you re-enter travelling north, etc. That property doesn't hold for my cube example.
I wouldn't say the curvature implied by this property is "intuitively" 0 - why not 1, or 360°, or 180°? - but I can see how it's an interesting property that only a few shapes can have.
The cube isn’t flat. The curvature is just concentrated into the corners. You can measure this curvature by adding the interior angles of a triangle that contains a corner.
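Concretely (my numbers, using the π/2 deficit from earlier in the thread): a small geodesic triangle drawn around one corner has interior angles summing to

α₁ + α₂ + α₃ = π + π/2 = 3π/2,

an excess of π/2 over the flat value π, which is exactly the curvature concentrated at that vertex.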