Teaching is a slow process of becoming everything you hate (dynomight.net)
801 points by dynm on March 12, 2022 | 489 comments


I've often thought the same thing about becoming an adult, especially a parent, in general. There are so many choices that I harshly judged older people for making (how to allocate their time and money, where to live, what to allow or not allow the kids to do, how to behave at work, etc.) that I now find myself making as a married guy in my mid-30s with four kids. It makes me sad, but on each point I'm like, "Oh, now I get it." I fear that this pattern could continue until I become my father in my 50s and 60s. I try not to judge people so much anymore.

Anyway, I appreciate the article as someone who will soon try my hand at teaching. I will have a lot to learn.


On the other hand, this observation must be qualified as not necessarily generalizable to all teenagers and parents.

Sometimes a person may grow up and realize that their parents were indeed quite lacking, and be right in that assessment. I think this other circumstance is important to at least mention because children of objectively lacking parents can have doubts in their mind about their own judgment. In part because of this common trope of teenagers growing up and reflecting that they were foolish and their parents were wiser than they expected, in part because familial norms are so private that it's difficult for one to know what's abnormal for families in a harmful way.

Sometimes, what underlies painful experiences for children aren't parents actually making a wise decision, but plain bad judgment on their part.


Those types of generalization have always been troublesome for me - While I admit I was an idiot as a kid, the respect I had for my parents has decreased even further the older I get. I understand more and more how harmful their behaviors were for me as a child, and have been intentional about not falling into the same behaviors with my kids.

Some parents are just really really inadequate or abusive as parents.


But there’s a reason for the generalization. Some parents are inadequate and abusive, but virtually every kid is an impulsive and short-sighted creature whose brain isn’t fully developed until their mid 20s.


It's a matter of probability, like everything in life. A good parent can lead to good outcomes, say, 60% of the time. A bad parent, 20% of the time. The numbers are made up, but that's the idea. In retrospect, the vast majority of parents I knew when I was a kid were "bad," not in the sense of abusive, but in the sense of having no idea. They didn't have a clue about their own lives; how could they have one about their children?

They didn't even want to inquire and then decide what was best for their children. Too many parents are given a pass because some sort of "sacred virtue" is assigned to parenting, and any mistake that does not include beating their children is somehow forgiven.

The "they did the best they could" is a cop-out: did they do the best they could given their laziness, poor impulse control, and vision that doesn't go beyond the fence of their backyard?

For example, I was an excellent student, the best in the school, in terms of awards and recognition. Not Einstein, mind you, but someone they should have identified as very talented from a scholastic/intellectual standpoint. My parents never saw this in me, for some reason, because now, I would immediately identify as very talented someone who did as well as I did. My parents never hit me or told me I was stupid or anything like that. They simply acknowledged that I was a good student and that was it. If they had said, this woman is talented, we need to support her, find her mentors, send her to the best universities, I have little doubt that I would have been very successful.

I was successful anyway, after traveling a winding and long road, but my path to professional and financial success could have been (these are my a priori odds) much easier and faster.

Now, you might say, if they had acted differently, maybe you wouldn't be here, etc., which is a very defeatist way of thinking. Their process was wrong and for no other reason than their "intellectual laziness".


>the best in the school, in terms of awards and recognition. Not Einstein, mind you

I thought the conventional wisdom was that Einstein wasn't academically accomplished until his great works were published.

A lot of your story reminds me of my own but I've reached a different conclusion. Specifically I suspect I would have been accepted into certain magnet schools (pre college) with the sourcing and simple suggestion of taking practice tests. If I had a kid that I suspected would be capable of excelling in that kind of environment that is exactly what I would do to help improve the odds of said kid achieving that goal.

But I don't resent my parents for not bothering to help in this way. I'm honestly not sure why but I suspect it has to do with an unspoken and mutual lack of respect for "academics". The system is so easily gamed, and a gameable system is a flawed one.


i think you're making the mistake of judging them with hindsight. they made the best possible decision with the resources they had, their mental model and their risk tolerance and belief in the world


"they made the best possible decision with the resources they had, their mental model and their risk tolerance and belief in the world"

Can this not be said for any action, any thought, including, if we are talking about parenting, beating the kids up? "It is the way I was brought up, my father always told me, a few slaps and they'll learn!". "I was feeling so alone and I could not control myself when they were crying, I am so sorry I decided to use the stove gas to put them to sleep, that's what my grandmother used to do!".

My belief is that that kind of "therapist's speech" ("you did your best", "those were the tools that you had available at the time"), which has become more popular as therapy became the fashionable path for people wanting to make changes in their lives, has caused many to avoid their responsibilities by paying therapists (people who got the "second-hand smoke" did not even have to pay), just as in times past people used priests to carry out the same function, with the same result: absolution and comfort.


> have been intentional about not falling into the same behaviors with my kids

Some behaviour is just plain borked, but other behaviour is just more nuanced.

As an example, some kids are extroverted, others introverted. What works for one may not work for another.

You may need to constantly work with an extrovert so that eventually, as an adult, they are in control of their own exuberance. And with an introvert, work with them, trying to help them expand their ability to interact.

An ability for both to live in a shared society.

But imagine a parent who was an extrovert, was constantly upset at, as a child, being told to calm down, or stop asking 1000 questions per second. Or even, just "give another a moment to talk".

So they, with an extrovert, do not do such things. And the extrovert does not learn control, and discipline, and to give others some space sometimes, and becomes less capable of interacting with others as an adult.

Of course, this is a poor example, and poorly phrased, but I hope my point comes across.

We should focus more strongly on what is correct for the child, not on how a parent may have misapplied childhood lessons to us. For those lessons may be right for your child, even if they were not right for you.


Usually when someone has abusive parents, they become abusive themselves. It is a good idea that they get professional help in order to break the cycle.

>and have been intentional about not falling into the same behaviors with my kids.

This is usually the problem. You are intentional about NOT falling into THE SAME behaviors, and will probably overcorrect toward the opposite behaviors, which can be just as abusive as the original.

Just because something is bad does not mean that the radically opposite is good.

It is not a good idea to focus on what not to do instead of on what to do.

E.g. I have seen parents who were too constrained as kids removing all limits for their children, the kids getting into bad friendships and destroying their lives as a result of the neglect from their parents.

Or someone raised as a Catholic with sexual restrictions promoting sexual promiscuity to their children, with very bad outcomes.


Personally I find the solution is to meet and hang out with very diverse crowds.

I find that whenever you make a decision, you are really sampling from what you’ve already seen.

If you haven't seen a lot, your decision making is really constrained. If you've only seen bad decisions, you will make a lot of the same ones. You won't even know that they're bad.

The hardest part of life really is figuring out what you don’t know yet. And it’s really, really hard.


My great grandfather used to beat my grandfather - going as far as to stab him through the leg with a pitchfork on occasion.

My grandfather, partly from trauma and partly from what I believe is hereditary mental illness, swung back the other way, peacing out instead of parenting, leaving my father to get into a lot of trouble as a kid, and witness some different traumatic things.

My father swung back towards being unable to control his anger, and with frequent mood swings between depression and rage. Although, aside from spanking he didn't beat his kids, just pets and walls.

I'm the first generation that has been working towards treating my hereditary mental issues - taking mental health meds, going to counseling. I'm hopeful that I can break the cycle, but enough of a realist to realize I am lacking on a lot of parenting things that I need to learn from outside sources instead.


The heuristic can be reworded as adult parents are more knowledgeable and capable than children.

Put the average child in the position of the average parent and you will see the difference.

Think of how abusive the average parent is, then imagine them trying to parent with 10-20 years less development.


I'm not sure what you're trying to say here. Are you suggesting that we should excuse parental abuse, because a child would be a worse parent than an adult?


not at all. Some people are objectively bad parents.

I'm just saying that in general, the heuristic is accurate that parents have better decision making than their children. The very real counterexamples (e.g. abuse) don't negate the trend.


And somehow also strangely arguing that most parents are abusive?


Not the best phrasing on my part. Better phrasing would be "consider how likely the average parent is to be abusive"

That said, all parents make bad decisions to the detriment of their children. The severity and frequency is what constitutes abuse vs mistakes.


Fortunately your children will almost certainly find new ways to critique and disparage your decisions as a parent, and their respect for you will also diminish as they grow older and calcify their opinions of what makes a good parent.


You're missing the point here. Ideally the 3rd gen kids will grow up to realize their 2nd gen parents made reasonable decisions even though they disagreed as kids, just as the cliché says (and like so many testimonials out there).

e.g. If gen 1 only used witty condescending remarks to teach their children that they're dumb, then gen 2 can realize that this severely damages their kids' self-esteem. If gen 2 recovers from that abuse they may choose the path of taking their kids seriously even with their shortcomings. Gen 3 might take some kind of offense at that, but there is less of a reason for them to grow in resentment towards gen 2 while they raise gen 3.


Take any family line. What generation are they at? Shouldn’t all bad parenting be weeded out by now?


People are not purely products of their parents behavior. You have not only genetic variation, but a lifetime of all the incredible complexity of life experience that happens outside the house and with all other humans. I could imagine these factors moving each person backwards and forwards along any scale of “progress” we chose to measure people by, generation by generation, at times overwhelming the positive or negative influence of their parents.


Maybe generation 3? One of my grandmothers worked on a farm in a developing country.

Times change. Subsistence farming doesn't cut it anymore. The lessons learned from that generation are something we have to grow out of.


This assumes that the kids themselves actually learn and grow at some point.

Some don't. It may be that the parents overcorrect and never teach their kids right/wrong, or it could just be the kid being a problem, or any number of other issues.


Well, think of how bad things used to be; the progress is very incremental. Our short-term perspectives as humans aren't good for seeing these patterns.


Also, there is no ‘right’, rather a series of attempts that fit on an ever shifting curve. There is no ‘ideal’ parenting except by sheer luck.


Doesn’t this assume a static target of ideal parenting?


Why would you think that parenting skills would improve in successive generations? Historically that doesn't seem to be the case.


And they learned a valuable lesson that it's okay/normal to terminate relationships with you.


My dad’s dad hit him with a belt when he misbehaved.

My dad spanked me with his hand when I was a toddler. Not overly hard, and not after age 4, but still.

Do I think my dad could have made better parenting choices? Yes. Am I happy that he made substantial advances from what he learned as a kid? Also yes.

A small number of parents are legitimately abusive and should have their kids taken away. Some parents are amazing and talented care givers. Most are just muddling through and fall somewhere in between.

As a parent I am doing my best to keep raising the bar - no corporal punishment over here! But I am sure my children will still find myriad ways in which I have failed them as a parent. Kids don’t come with a manual, and “professional advice” is astoundingly inconsistent/conflicting. I think most parents are doing their best, it is just a hard and poorly understood problem.


Totally agree. My mom might get roasted for the few times she swatted me on the ass, but nobody would say a word about her moving my brother and me to a place five hours away from our dad and huge extended family. I would take the former every day if it would have prevented the latter.


A similar thing happened to me. Except my parents decided, somehow, that moving one of us to live with my mom and the other to live with my dad five hours away was a good idea. I've tried to forgive them for doing this, but I'm not making much headway.


A helpful way to deal with childhood trauma, I've found, is to take a wide perspective. The speech that Tom Hanks as Walt Disney, who was abused by his father, gives in the 2013 movie "Saving Mr. Banks" beautifully encapsulates this idea and eloquently delivers this advice: overcome your past. Here is the full speech: https://moviewise.wordpress.com/2015/05/27/saving-mr-banks/


Thank you for responding, but I don't find this advice particularly helpful. Yes, I am tired. Yes, I would love to overcome my past. How does one go about doing that?

This quote also leaves out the second part of their conversation -- the speech's emotional climax -- where Disney tells Travers she needs to forgive herself. This doesn't apply to me. I have nothing to forgive myself for.


Literally nothing is wrong with a controlled spanking



Even better, a controlled whipping. Flick that switch hard. Draw blood. Nothing teaches ‘em like the old methods.


Not necessary; a child's pain tolerance is very low. The goal is not to hurt them but to dissuade them through an unpleasant stimulus. The spank doesn't even need to be very hard. But you knew this already.


And unfortunately sometimes people grow up with parents who are quite lacking, then themselves fall into the same patterns, and think, "Oh, now I get it," while taking away exactly the wrong lesson. I know people who grew up in an abusive household, then eventually went on to have kids, and different siblings from that same family ended up going in completely different directions, from carrying on many of their parents' hated behaviours on one side, to being excessively permissive on the other. (And one ended up being a great parent, having put serious thought into their childhood and what they wanted to do differently.)


These two observations can be synthesised into recognising that most people are poor at empathy and haven't seriously thought through the question of why other people do what they do (fun fact, a skill that can actually be practised by arguing about politics).

Most parents are probably bad at parenting, based on the observation that most people are bad at most things and there isn't much in the way of training available compared to the size of the task. But most youths are even worse parents and have wildly misunderstood the constraints, goals and threats involved.


True, and I wouldn't limit this to only be about my own parents, but more broadly about all kinds of people like teachers, coaches, aunts and uncles, older co-workers, people in the news... even slightly older siblings who had kids a few years before I did.


That is an excellent point. Yes, usually parents do "what is best for their children". However, good intentions do not guarantee good results.

It took me 20 years to undo some of the more direct damage done to me by forcing me into a path I didn't want. Parents being completely wrong for the right reasons is very much a thing.

That doesn't even count the psychological damage almost guaranteed to lurk in there pretty much forever.


I think people find it very easy to judge parenting from the outside and/or with hindsight, whether their own or other peoples.

I know that my children will wish I did things differently. Whether that is location, discipline, activities, internet access etc etc. I know they will wish I made more reasoned choices in the moment. That I was never tired, distracted, frustrated. That I let them spend their time as they wished not as I feel is best for them.

Parenting is a parade of tough choices using vague heuristics and life experience. Being well intentioned is a bloody good start.


I fully expect to make many mistakes as a parent, and there's a lot of things I could have done better.

As a child though, many decisions my parents made were with the view of demonizing outside groups and "protecting" their kids from any contrarian viewpoints. Catching up socially took years, and there are some extreme harms that I still deal with to this day. My goal is to make sure my kids have a well-rounded social life, a consideration for others, and an understanding of a gamut of ideas.


> I think people find it very easy to judge parenting from the outside and/or with hindsight, whether their own or other peoples.

Surely it is easier in hindsight. Nevertheless, there's this golden rule of "parents should always be forgiven" when in fact it should be "children should not be forced to keep their connection to parents if they don't want to".

You, as a parent, are not giving them the “chance to live”, you are bringing them to a world where they probably want to be happy, and if you failed to make them happy they should not be forced to keep you in their life because “a parent gave them the privilege of living”.

Yes parenting is hard. Yes I understand parents make unoptimal decisions all the time. That’s their problem.


I am glad you had a nice childhood. Now, would you try to be as understanding toward children as you are toward fellow parents?

Being well intentioned and caring is the bare expected minimum. Why do we need to repeat this?


I am talking about adults judging other adults decision with hindsight and out of real time with time to reflect.

You are welcome to throw stones if it makes you feel better, but try not to get hit when one flies back through your window in a few years.


[flagged]


> Bad parenting is a thing.

No shit sherlock. Here are some more hot takes for you:

    - too much screen time is bad
    - hitting children is generally counter productive
    - feeding a toddler highly processed foods full of sugar is not a good idea
Armed with this incredible wisdom you can go off and spend more time thinking how stupid and bad other people are at parenting. Have fun!


Ah, there it is. You somehow feel personally offended that someone might not like the important choices made for them.

Sorry this conversation frustrates you. Guess you're still dealing with toddlers which is not something I had considered until now. I thought we were talking about much, much older children. Teens even.


This isn't about "other people" though; recognising that your own parents are bad parents is very important, and I'm not sure why you try to dismiss that. People telling you that you are wrong, that your parents actually did good, is just gaslighting and damaging.


You seem to have a very distorted view. Have you considered the possibility that those people aren't gaslighting you, and that they might be correct?


Have you spent much time interacting with children of abusive parents? I encourage you to go do some research and learn how bad it can get, and how difficult it can be for them to struggle towards acknowledging and coming to terms with that abuse.


The “bare minimum” trope has to die.

It never adds anything meaningful to a discussion. Most often, it’s used by someone with little or no experience with a role or responsibility judging someone who is trying to carry out that role, however imperfectly.


If the assumption of a caring parent has to die, then what's the point in any of this?


> Being well intentioned is a bloody good start.

This sadly depends a lot on the community. Being well-intentioned but misinformed in fundamental ways can cause a lot of harm.


Older people knowing better than younger people is one of those heuristics that's almost always right [1]. It's hard to tell the difference between a stupid kid and a stupid parent, because stupid parents are rare. Yet people keep searching for stupid parents, because it's really important to find them when they exist. There's a constant rate of kids pulling false alarms on their parents, but you don't want to ignore them, because if one of them is for real, you don't want to be the asshole that ignored all the warning signs.

[1]: https://astralcodexten.substack.com/p/heuristics-that-almost...


Maybe I was just unlucky, but that isn't my retroactive judgement on my childhood at all. My experience was one of mostly adults making me do something that is really down to subjective preference because that was their preference. And now that I'm an adult and can make my own choices I'm much happier about it.


If a rational and emotionally mature adult still believes that the decisions their parents made were bad then there’s a much higher chance that they’re correct (compared to when a kid believes such things)


Especially in like circumstances; that is, with regard to their own children.


I've recently become a new father and have been listening to a Cat Stevens song that your post reminded me of:

“If they were right, I’d agree, But it’s them they know, not me.”


I think this is how it is supposed to be: parents making decisions for their kids using their best judgement (preferences). It is OK for kids to disagree when they grow up. It is less OK to make a problem out of it. Most parents really try hard to make rational decisions using the information and background they have. Criticizing them for making mistakes does not make sense to me. We are all human and we make mistakes.


> I think is how it is supposed to be - parents making decisions for their kids using their best judgement

Yes.

> (preferences).

No. The parent is supposed to use their best judgment about what is best for the kid given the kid's reasonable long-term preferences [1]--not the parent's preferences. Or, to put it another way, what is best for the actual kid, not what would be best for the parents if they were in the kid's position. The two are often very different.

[1] I say "reasonable long-term preferences" to make it clear that I am not saying parents should indulge a kid's preferences for sugary foods instead of nutritious ones, lots of TV instead of a better mix of intellectual activities, and no exercise. I'm talking about cases where the adult the kid will grow up to be will genuinely say they are different from the parents in this respect and wish their preferences as a kid had been respected.


I think we are on the same page. A parent has a mental model of the world and of the kid inside it. Based on this model the parent makes projections and takes actions. The problem is that this mental model is never the same between the parent and the kid.

> Or, to put it another way, what is best for the actual kid, not what would be best for the parents if they were in the kid's position.

Most parents act in the best interest of the kid, not themselves. The outcome varies because the parents' model of the world is usually flawed in many ways, and they actually make bad decisions thinking they have made good ones. E.g. my friend is obsessed with pre-school development. This is likely because she perceives herself as not smart and wants her son to be smart. She forces her son to exercise; he can read and do some math at the age of 3. Long term this is bad for the kid. Unfortunately she cannot realize this. Doing so would require a level of knowledge and self-awareness she does not have. In her eyes she makes perfect decisions.


> She force her son to exercise, he can read and do some math at the age of 3. Long term this is bad for the kid.

Why? I was exercising, reading, and doing some (very simple) math at age 3, and I turned out ok.


> stupid parents are rare

How did you arrive at this conclusion?


I had the same reaction, but from context, I think what he meant was "cases where the conflict is due to the stupidity of the parent instead of the stupidity of the kid". The base case is that kids are stupid, because kids are supposed to be stupid.


You are making very sweeping claims with next to no evidence (ironically, a very un-wise thing to do!).

The more I observe the more I understand good parents are exceedingly rare. The vast majority of parents fuck up one way or another. Which shouldn't necessarily be surprising, parenting is very hard!


Of course, if you're an adult of average intelligence, then 50% of parents will be dumber than you. Parents who are dumber than you are would not be rare at all.

But I'm not comparing adults with other adults. I'm comparing adults with children. I would be surprised to find a child that was smarter than their parents; it happens, but it's rare.


Stupid parents are very much everywhere


But stupidity is relative and stupid children are more common. The heuristic can be reworded as adult parents are more knowledgeable and capable than children.

Put the average child in the position of the average parent and you will see the difference.


"Children" aren't a homogenous group. Neither are "parents." And "stupid" is not a single metric.

Pre-teens don't have practical adult skills and have poor or non-existent emotional regulation. Teens have some adult skills, are learning or experimenting with others, and have patchy self-regulation.

So it makes perfect sense not to allow kids to do things that are dangerous to others, and to limit what teens can do with strong guide rails.

But when it comes to world view and insight, teens can certainly be wiser than their parents. They may not know what to do with their insight, but they're not wrong - just inexperienced.

Meanwhile many adults are hopelessly naive and may be actively self-harming, especially politically and culturally. And when adults have addiction and/or Cluster B issues, even fairly young kids are more likely to be reliable and responsible.


>They may not know what to do with their insight, but they're not wrong - just inexperienced.

taking insights from teenagers...really? Do you know what kind of "insights" I had in my teenage years? They are downright cringe.


> The heuristic can be reworded as adult parents are more knowledgeable and capable than children.

I think that's true in general, but when it comes to making decisions about the child, I think that has to be weighed against another heuristic that's often but not always true: that people know what's best for themselves better than other people do. I think people often wrongly neglect that principle when it comes to children, assuming that they don't know what they're talking about without taking the time to listen and understand their point.


> Older people knowing better than younger people is one of those heuristics that's almost always right

Up to mid 20's, yes.

> stupid parents are rare

Not even uncommon.


> Up to mid 20's, yes.

Are you by any chance over 25 but under 40? :)

I think it’s inarguable that experience is valuable, and age correlates very well with experience up to that mid-20s point, but around that point people start to settle into a groove and the extra n years experience is actually just the same year’s experience repeated n times.


The mid-20s transition is the most obvious, I think, simply because teenagers are so often incredibly hot-headed. But I believe emotional development continues throughout life as people acquire more experience with age (at least until senility kicks in).


I think so too, but it's much more gradual, and therefore, not uncommon that the age vs age maturity level heuristic fails for a given pair of people.


I frequently say that 16 y/o Chris would be very disappointed in 40 y/o Chris. But 16 y/o Chris was an idiot.

As you touched on, the fun twist is when you abstract the learning so it’s not just “I was wrong about X” but “I should be much more accepting of contrarian views.”


I remember reading Harry Potter on Kindle, and Dumbledore had a line: "Youth cannot know how age thinks and feels. But old men are guilty if they forget what it was to be young." Kindle has this feature to show how many other readers have highlighted a sentence, and this sentence was highlighted by thousands. I guess they're all teenagers.


Harry Potter on Kindle readers are probably young adults.


I’m reading it out loud to my family (every character has a different voice). It’s a very mixed group with lots of discussions.


I am really looking forward to doing that as well with my daughter when she is older. Narrating books well with distinct character voices is a lot of fun!


HP on Kindle is my go-to-sleep-at-night book. I’m in early 40s.


Amazon probably has the exact demographics.


> the fun twist is when you abstract the learning so it’s not just “I was wrong about X” but “I should be much more accepting of contrarian views.”

Yup. Until people abstract over their previous experiences they will continue to find themselves in situations where they have had and discarded 5 different previous viewpoints only to think to themselves: "I've got it right this time, and anyone that disagrees is stupid".

Some would call this "Wisdom". Also interesting that learning this lesson does not make your current understanding any more accurate - it just reduces your confidence in it. Wisdom != Ability to understand.


Don’t judge your former self too harshly either. It’s easy to forget why you were an idiot at 16.


Make that 25.


50


Larry Niven. Protector.

Human eats mutagenic yam. Among various changes, it greatly augments intelligence.

The first thought of a Protector, just awakened from his mutative trance, is, "Wow, I have been really really dumb".


I am in my late 30s and was recently thinking something like that about my 24-year-old self... but sometimes I think, what if 3X-year-old me is the one who's the idiot? Your brain has to go downhill at some point... judgment too, probably, even if the brain is not going downhill yet. How does one know it didn't start happening at 25/35/...?

Like, I was much more excited about tinkering with tech when I was 25. I know I was, I know it made me happy, I know that is what got me where I am now in terms of money/etc., but I can rarely summon these feelings now. That is clearly degradation; if I could swap some "wisdom" or whatever I gained for enthusiasm, I would do it.

What else has degraded?

Let's consider a more ambivalent one, I was excited when Facebook/etc. came out but I think TikTok/Snapchat etc. are stupid gimmicks... am I wiser, or just older and more boring?

Etc.


I just had my first child at age 40 and of course my experience is similar to yours. I can't help but wonder if delaying (or eschewing) having children is contributing to what seems to be a broadening generational gap. I'm learning lessons in early middle-age that previous generations learned in their early twenties. I could have used some of this new empathy I'm feeling for my parents and their generation 15-20 years ago.


Same boat, same feeling. But I also think there are mistakes we aren’t making, a maturity that will lead to perhaps enhanced outcomes for our kids even if not for us.


Even kids have less responsibility and more oversight today than generations ago. I wonder if that contributes to speed of maturation.


Which mistakes?


Not GP, but I suspect that increased emotional maturity would prevent one from lashing out, overreacting, underreacting, being unable to emotionally support one's child, etc.


You don't have to live your dreams through your kids (it happens to people with unfinished potential due to early parenthood).


For sure. Not to mention someone like me who is child free and never going to learn these lessons.


You clearly haven't had incompetent parents.

I'll give you a personal example: both my parents _completely_ failed to manage their finances, they kept on borrowing money, refinancing the house 3-4 times.

Since the age of 17, as a young software engineer, I was constantly asked to help with the payments. Now that my mom has passed away, my dad has 0 income, he didn't bother to plan anything for his retirement and thus, we are forced to sell the house and both my brother and I have to forgo our share of the money so that he can survive.

In addition, I _still_ have to help him financially, and he refused to put his money in an investment account so that he can at least profit from the returns. Now he will basically eat every single dollar, and then my brother and I will have to yet again provide for him a few years down the line.

Not only did I get no support from them, I had to _on top of it_ fight my way through an uphill battle (they didn't even want me to study CS) and provide for them. They took all the child tax benefit money and ate it. When I wanted to move out at 20, I left with nothing; I just took my clothes.

Sure, parents are wise when it's about "not eating ice cream before dinner", but don't tell me they actually make good decisions.


I'm always amazed how these kinds of folks survive. Do they luck out into well-paying jobs? What do they even spend the money on? Don't they notice the bad patterns after a while?

Soooo many questions :-)


> how these kinds of folks survive

By getting into massive amounts of debt


Debt has to be repaid and at some point you can't.


You don’t ‘have’ to do it.

You are choosing to do it despite them stealing from you.

At some point you bear responsibility for how you are treated.

Cut him off. Let him die alone. Not your problem.

I had to do the same with my alcoholic mother.

It was very hard, but the moment her antics started affecting the quality of life of my family, she had to go.


I think you're confusing "my parents make shitty decisions" with "I hate my parents."


> we are forced to sell the house and both my brother and I have to forgo our share of the money so that he can survive

The money from the house? Since your dad didn't save for retirement, there was never any money to share. If you meant that you're forgoing some of your current income so he can survive, that's a different issue.


I think most of what you are talking about here is your parents being incompetent people. Even if they hadn't had kids, they still would have been in financial trouble their entire lives.

I don't even think this is due to a lack of financial education on their part. Feels more like compulsive spending, something a therapist might have been able to help them through had they been willing to talk to one.

Either way, it really sucks that you and your brother had to suffer so much for their failures.


Maybe I'm weird. I've brought up my kids without applying the dumb rules I hated as a kid, and they're both happy and well adjusted. And I'm happier too, because I've avoided all the usual stupid family arguments that would otherwise result.

I see other parents repeatedly inflicting on their kids rules and behaviours that are completely unnecessary, but they think it's "the right thing."

I see this as truly stupid and a great way to sour your relationship with your kids once they get older and don't have to take it anymore.


I'm not a parent, but this really resonated with me. My childhood was full of dumb rules that I hated. After complaining, my parents would feed me platitudes like "you'll understand and appreciate this when you're older" (sometimes with "... and have your own kids" appended to the end).

Today (at age 40) I still believe these rules were dumb and pointless, and actively harmful to my childhood development.

I do expect that, not being a parent myself, I might be judging some of these things more harshly than I otherwise would. But certainly not all things, and I certainly would have turned out just as ok (and possibly more ok-er) had many of these dumb rules not existed in the first place.

(Don't get me wrong, I still have a fairly good impression of my childhood, and I don't think these dumb rules did any permanent damage. But they were still dumb, and created more strife between my parents and me than was necessary when I was young.)


Now this is a feel-good comment. Do you mind sharing one of the dumb rules that you might have been tempted to apply but decided against?


Saying “because I said so.”

Or “stop it,” out of frustration.

Family members have babysat my kids, and this is a mistake I see all the time. If you tell kids not to do something, always tell them why. And be honest about it. It’s kids’ job to learn about the world, and it’s your job as a parent to teach them about it.


That's not a dumb rule, that's a bad explanation of a rule that may in fact be good.

"You can't stay up until midnight because I said so."

Few parents would argue that not letting them stay up until midnight is a dumb rule. But the example certainly has a bad explanation.

The problem with this entire thread is that without concrete examples everyone is imagining their own which, not coincidentally, happen to match their already existing world view.


That’s true, I guess I think bad explanations of good rules are insidious: they alienate kids and create an us-vs-them mentality. If parents stay up late but don’t allow their kids to stay up late, it seems like parents are entitled and even a five year old can sense entitlement.

Here’s the thing, though: it’s true that going to bed early is a good rule, and it’s also true that parents feel entitled; that their rules apply to their kids but not them. That’s one reason kids hate their parents - they often deserve it!


I don't apply any of my parents' dumb rules to my kids... but I probably have my own dumb rules that my kids will choose not to use with their kids.


(Approx) How old are your kids?


Yep. Your point of view as a parent is often very different from your view as a teen/20-something. Different priorities. You're suddenly aware of all of the things in society influencing your kids. You're very aware when things you read online don't match up with reality, and especially with math.

It’s just life experience. I don’t know many parents who look back at their younger years thinking, “I had it all figured out back then.”

And the longer you watch it the more aware you become of the people trying to influence kids for different reasons specifically because those kids don’t look at it and call BS immediately.


As a parent it becomes super apparent how many interested parties, especially childless interested parties, are interested in capturing the minds of your kids. Be it to make a buck, to further their political cause, or for something sexual/nefarious. Or even usurping their minds and psyche into something destructive as an ignorant and unintended consequence.

Then all the "stupid" authoritarian, seemingly arbitrary and maybe even paranoid shit your parents pulled all of a sudden comes into focus and understanding. You, more easily see, how irrational panics happen/occur and sometimes when those panics aren't entirely unjustified


More age, and the requisite life experience, change you as a parent. In some ways it's better, in some ways it's worse. I've been a better parent and a worse parent for my younger children - IMO 30-32 is/was the right age to strike a balance in these things.

I expect that differs by cultural setting, nationality, etc..

Couple of examples: I'm less hot-headed as I age; I'm less physically able (much more than I expected).


Adults don't look at it and call it BS immediately, either. If anything kids are better at it. e.g. we all knew that the moral panic over violent video games was a bunch of BS.


I think these are two different classes of issues you’re comparing.

Kids are perhaps often intuitively good at evaluating the severity of unintended consequences.

Adults are much better at being aware of deliberate efforts to take advantage of developmental stages and tendencies.


My grandfather, when he was in his mid 80s, once said something to me like "you never stop being his dad, and he never stops being your son", when talking about his own father, who was in his 50s or something at that point. He wasn't being sentimental, he was saying that he was always in the role of trying to provide advice to his son, being older, and his son was always in the position of going to him for advice and looking to him for help.

I guess your comments reminded me of that in the sense that you always have something to learn from people who went before you. Sometimes they have made mistakes you don't want to repeat, and sometimes you all collectively face things no one has faced before, but usually people who are experienced have some wisdom to impart.

Sometimes I think ageism is partly a sign that the pendulum has swung too far in the direction of assuming everything we do is new. The Chesterton's Fence analogy in the original post is apt in this regard.


"When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much he had learned in seven years."

Always loved that quote.

Unfortunately it doesn't always work out that way. Parents can also get worse as you age and learn that some actions or behaviors are inexcusable. But as a younger person you either didn't understand the context or thought it was okay.


> Parents can also get worse as you age and learn that some actions or behaviors are inexcusable

Indeed, many would envy those in this thread who seem to have avoided having the truly stupid/insane/wicked adults in their lives.

> I was astonished at how much he had learned in seven years.

I actually interpret this as Twain saying that his father was a jerk, but that said father managed to mature with time. But he's written it in a way that the average person would chuckle and say "ah, see, his father wasn't a jerk all along!". Either that, or he's just written a perfect magic mirror.


The quote isn't about his temperament but his knowledge, so I think that interpretation is a bit of a stretch.

I think the same kind of quote would absolutely apply in exactly the same way to temperament though. The 14 year old thought his dad was a jerk who mellowed out by the time he was 21, the reality again being that the biggest change was actually the child's temperament.


Or it could have the same tongue in cheek sentiment. It's a lot harder to beat the crap out of a 21 year old than a 14 year old.


Why should you feel bad about it? You’re learning from your life experience. I too cringe at stuff I believed before I was in my mid-30s and married with three kids. I’m also slowly becoming my dad. A lot of that’s due to having seen a lot of things that made me realize my worldview when I was young had been limited. (Freedom of choice and freedom from norms sounds great when you’re young and feel invincible and think you’re in control of your destiny, but less so when you live life and see tons of people making all sorts of bad decisions that you managed to avoid because you did what your square parents told you to do.) That’s life.


> Freedom of choice and freedom from norms

I like to put it as: you give up some lower level freedoms (you have to wear a seatbelt) to gain other higher level ones (freedom to not die as an idiot on the way and to get to enjoy your trip to Disneyland).


“When I was a boy of 14, my father was so ignorant I could hardly stand to have the old man around. But when I got to be 21, I was astonished at how much the old man had learned in seven years.”

― Mark Twain


There's an author I read a lot who repeatedly makes a point in his writing: there are things in my writing you won't be able to understand until you get older; there is just no way. I have to say, it is so true. There are layers upon layers I can only see as I get older.

Remember when you were a teenager in high school and teachers explained the subtle messages in the reading? You and all your classmates were like: this is BS, totally made up by the teacher; even the author didn't think of those. Then, after you get older, you may realize: those subtle messages are just so obvious, and they just can't be explained to people without life experience.


I don’t know, even in hindsight there were plenty of “sometimes a door is just a door” moments with my teachers, haha.

Jokes aside, totally agree with you. Definitely something you learn with age


> You and all your classmates are like: this is bs, totally made up by the teacher, even the author didn't think of those. Then after you get older and you may realize: those subtle messages are just so obvious

I think about this a lot. When consuming media, I constantly notice these kinds of messages. As you said: It’s almost impossible not to. But back in school, I was convinced that 95% was made up.


Which author?


I don't know if this is who parent meant, but it's certainly one of C.S. Lewis's not-so-subtle themes in the Chronicles of Narnia.


Indeed- Which author?


I may not make the same mistakes I judge my parents for, but I will certainly make new mistakes my children will judge me for. Just trying to be better.


That might be true, but I think there is a threshold called "good enough" that every parent can reach with some effort.


Chesterton’s Fence is a nice mental model to ensure we don’t harshly judge choices.

https://fs.blog/chestertons-fence/


What’s wrong with doing the things that your more mature, logical self actually wants instead of what your younger, uninformed self thought you’d want?

There’s nothing stopping you from eating ice cream for breakfast, spending all of your money on Lamborghinis, and playing video games all day. As an adult you realize that those things won’t actually bring you true happiness and those decisions will cause significant negative consequences in the future that you’d rather avoid.

I don’t see what’s sad about that, other than maybe the disappointment of losing naiveté.


The root of it, for me, is that most of my choices as I grow older are compromises for the sake of ease, or from a sense of obligation to my wife and kids. Like I recognize that different, harder choices could make me richer, or leaner, or more accomplished, or whatever, but I'm often exhausted and such choices would often be harder on everyone else in the family too. So rather than ice cream and Lamborghinis and video games, my examples would be: living in a small downtown apartment, not having a TV, walking or bicycling everywhere, prioritizing 8 hours of sleep and another hour of exercise, and going into a long disconnected "deep work" state every day. Instead, we live in a big house in the suburbs with a two-car garage for our minivan and tons of other stuff, and everybody watches too much streaming video, and sleep and exercise are things I do only after everything else is done, and I remain near-constantly connected for the people who depend on me. I love them, but I also just wish that I could "have it all."


> and everybody watches too much streaming video, and sleep and exercise are things I do only after everything else is done

I don't know, this kind of feels like the adult version of ice cream for breakfast.

Maybe what you're describing is the difference between dreaming something and actually doing it. There's nothing forcing any of us to watch a ton of streaming video other than our own choices, although it's hard to decide to go out and do something different once you're in that habit.


Upon further reflection: What you wrote is actually central to the idea that I'm trying to express. When I was younger, I thought that I'd always make different choices and live my life "better" than older people I knew (e.g. going to the gym everyday for a serious workout), but once I grew up I understood why most people do what they do (e.g. working out irregularly, because having so many other demands on their time and energy makes it hard, or due to injury), and now I'm living much the same way in many areas of life for the same reasons. That's the sad part.


It's more that my kids watch too much, because my wife and I need breaks to get other things done or rest. Not way too much, and it's carefully curated content, but it's still one of those things that I thought, before having kids, I wouldn't allow. But now I get it. Once they stop napping, it really is a cheap way to buy a couple hours of peace and quiet.


I feel ya. As a childless person of the same age, I have several of those things you mentioned. I count them as blessings but also see that there has been a big, big trade made for them. I see that my siblings have made different choices with different blessings. I suppose this is just life.


> I don’t see what’s sad about that, other than maybe the disappointment of losing naiveté.

From my own experience, some sadness is due to treating a few people with disdain in my youth. At times, I was an arrogant, pissy teen. I wish I could make amends, but some of those people are gone.


Luckily some of those people might have recognized themselves doing the same thing when they were younger and accepted your behaviour for nothing more than youth.


Thank you. That makes me feel a little better!


It's sad to discover that the happiness you yearned for doesn't exist.


It’s still nice to finally have that ice cream for breakfast every now and then.


If it's fine now, it was fine back then also.


The number of people in the US with glucose monitors says differently.


There is a difference between "for breakfast every now and then" and "all day every day". I have a feeling that those who end up needing glucose monitors tend to fall more into the latter group.


I would guess 90% of people who'd eat ice cream for breakfast would be the type to consistently make bad health choices.


What you're saying is just that maybe it's not fine now, just as it was not fine then (as usual, this depends on the particular person).


I am definitely in the same boat.

For 30 years, I had a very specific vision of how I was going to be different. Whether in serious relationships or single, I was not really buying into my parents' generation's ideas OR my own generation's.

Then:

1. I met "The One"

2. We had kids

And now I'm living a fairly stereotypical North American lifestyle. And not... begrudgingly or resentfully or bitterly so! It's just that nothing could prepare me or give me true empathetic understanding of how my priorities, goals, and lifestyle would change.

An extremely superficial example is vehicles; I always had small sporty hatchbacks. I used to have endless debates with my SUV-loving friends about how unsafe and overly large they are.

I now have a minivan. I love it. It's not a necessity, it's a luxury, but it's a luxury I WANT as a married parent of kids in North American suburb who loves to travel with extended family.

Myriad other things about raising kids too; e.g. I will never ever again judge another human being when their kid misbehaves in public, or how they're handling it. I'm humbled with deeper understanding of how little of a clue I have as to their relationship, parenting style, or what long long LOOONG chain of events precipitated the outburst and what may be the right way to handle it given the background... or, what the parent's energy levels and day are like.


While I see some truth in your argument, I disagree on the whole.

I am a parent and a teacher and I think the process is very different. Yes I sometimes differ from how I thought I would raise my children, but only in the details.

Now with teaching the problem is that you have to deal with all the kids/adults that were raised in ways that you strongly disagree with.


On the other hand (as a father of two) I find myself thinking about how my dad handled situations, comparing how I just handled it, and going, “wow did he have that backwards” haha. Not all the time of course and like you I have definitely come to appreciate how tough it is to make the “right” call, but we certainly have learned some things. One thing Boomer parents were really bad about - at least in my experience - was building a healthy relationship with food and meals in general. So much punishment and reward centers around food, it’s quite upsetting when you really think about it.

I’ve had quite a few friends in my life - men and women - with eating disorders of all shapes and sizes. You can almost always find stories involving their parents at the core of them. Making everyone sit at the table until the last person has finished their plate, the old “starving children in Africa” line we’ve all heard at least secondhand, forcing toddlers to eat everything and then they get dessert as a direct reward (which often ignores teaching them how to read signs that they’re full). The list goes on.


I wonder if that mentality was a residual aspect of a time when empty calories weren’t so prevalent and food costs were a higher percentage of the monthly budget.


I think the Boomer "Eat all the food on your plate" meme came from their parents, who lived through the Great Depression and food insecurity. Mindlessly passed down from a time of scarcity to a time of abundance. They just repeated it, but this time with gigantic, obesity-levels of food on their kids' plates.

We usually just ask our kid how much she wants to eat. It seems to work a lot better than the way my parents did it, with a lot less drama, and she's not growing up with an antagonistic and/or compulsive attitude towards food.


Speaking of dinner time: consider the negative effects on young girls of all the boomer moms and their trendy diets at the time. Weight Watchers. Just the phrase “Weight Watchers” being in the house and in the air.


No kidding. My wife talks about this a lot actually - how basically every mom she was around constantly talked about diets, losing weight, I need to fit in x or y outfit, etc. Just constant (usually negative) body talk all the time by adult women.


Dang, this is something we do: "forcing toddlers to eat everything and then they get dessert as a direct reward (which often ignores teaching them how to read signs that they’re full). The list goes on."

Sometimes you do need another perspective. You go through the process without even thinking about the harm it could be causing.


Sometimes I wonder how much we just repeat what our parents did instead of trying to find a better way, just because that's what we already know...


There's a great quote about being a parent that's often attributed to Mark Twain:

> When I was a boy of 14, my father was so ignorant I could hardly stand to have the old man around. But when I got to be 21, I was astonished at how much the old man had learned in seven years.

It's unlikely he personally said it, because his father died when he was about 11. But it sounds like something that he would have said in one of his books. [0]

[0] https://en.wikipedia.org/wiki/John_Marshall_Clemens


> I fear that this pattern could continue until I become my father in my 50s and 60s. I try not to judge people so much anymore.

I'm getting to the same age but I don't have kids yet. Still, I've grown to come to the same realisation as you did somewhere in my mid to late 20s (older people were, for the most part, right and I must have been an insufferable twerp). I just thought it was something that came naturally to everybody with age.


I wonder if there's a way to reduce and soften the classic generational gap you describe.

Just like you, I find myself constantly realizing most of my intuitions were wrong from being partially informed (and too keen on believing my own perspective and intuitions).

On the other hand, our elders were young hotheads too at some point. They know what that's like; we don't. :)


This is a bit paradoxical because it also works the other way around.

I definitely have the same resonance in terms of understanding, but I also feel like, "Gosh, they had no clue what they were doing when they [gave me this advice], [prevented me from doing this], [scolded me for doing that]," etc.


I don't know if you've ever read Siddhartha by Hermann Hesse, but I get something different out of it at every stage of my life and your comment made me think of this book.


I'm already the river.


It is interesting but I had the opposite experience.

I think two things led to this.

1. In ante-natal classes the person took us through child development and what children are capable of at different ages. In particular, newborns to about 6-8 months do not even have a concept of themselves as separate people. All they can learn at that time is whether the world is a good place where their needs have a chance of being met. They are incapable of e.g. deliberately crying in order to get picked up. If you do not respond to their suffering you are just going to create a needy and insecure child. So we dodged that bullet.

2. The book "Parent Effectiveness Training" which was a revelation to me. The basic idea is that children own their lives and the consequences of their decisions. Of course there is a limit - you don't let a 3 year old run into traffic. But as far as possible let children make their own decisions. They learn really fast that way. If you micro-manage their lives you end up with 18 year old children.

This does not mean you do not have rules in your house. You are not allowed to play drums at 3am, but that would apply to everyone.

So many parents impose their own choices and preferences on their children for no good reason, and it creates resentment and stops children from learning from their own decisions.

As one example, I have lactose intolerance but I was forced to drink milk, with resulting stomach aches, etc, for many years. I literally knew better than my parents and school in this matter. Similarly if you think you know better than your children in every matter, may I suggest contemplating the spectacle of a 15 year old dressed for a party by Mom.

After our daughter turned 12 we only overrode her on two things - becoming a vegetarian (not allowed until she completed her growth) and a change of school. In both cases we carefully listened to her point of view and considered it, and explained why on these rare occasions we overruled her. Because this was so rare, and handled in a respectful manner, she accepted the decisions.

My own mother waited with great anticipation for the teen rebellion that she had forced her own children into, but it never came with my daughter. Why rebel when there is no need? She never lost the love of learning and ended up with a PhD in a hard science.

One other comment on the OP. There seems to be a wider issue here. If you let children not work, and fail, then the first time there will be a commotion. But if teachers did this consistently, word would get around and it would be accepted. Students whose life goals required passing the test would do the work. But in fact many school subjects are useless to many people and not studying is a rational response to being taught irrelevant nonsense.


I like myself.


This perfectly describes my experience as a TA in graduate school. At first I didn't understand why my advisor insisted on being so precise in assignment instructions. Then when TAing with him I saw how students could creatively misinterpret instructions, even when I could not imagine how to make them more precise. An exception for the new case would be added to the next iteration of the assignments. I only understood why we went to such lengths to prevent cheating because in my first year I watched my advisor spend two weeks of his time sitting down individually with each student and presenting evidence that they had cheated. Only about 10% of the students had cheated, but in a class of 1400, that's 140 students! I can't even imagine how much work that must have been for the head TA.


I thought the article was fairly strong except for in the two points you highlighted here. In the first case, I still don't understand why you don't just mark their answer from creatively misinterpreted instructions wrong and move on with life. And in the second case it seems like just not worrying about cheaters and letting it be their own funeral (or not) is optimal. I remember who the cheaters were in my classes and a couple decades later it's clear that to a one, I would much rather be in the shoes of the diligent hard workers than the cheaters.


Both questions were answered in the article. The reason for precise directions is because otherwise people will complain, and if you ignore their complaints, they will complain to your boss. At the end you'll win, but you'll waste a bunch of time defending yourself.

The reason for not allowing cheating is reputational. If you get a reputation for allowing cheaters, then all the cheaters will want to take your class, and eventually you'll have so many that your testing will be worthless. And if word gets out that your institution allows cheating, then your students will not be respected when they leave, harming the non-cheaters and your chance at keeping your job as fewer people want to attend a school known for allowing cheats.


There's a deeper reason for not allowing cheating: you are building cheaters. People who cheat in courses will cheat in industry, why wouldn't they? They normalize this behavior. So you end up with major corporations that steal, politicians that lie, etc.

If for example, Harvard and Yale's law schools stopped rampant cheating. Maybe so many of their graduates wouldn't go on to routinely lie to the public?

I don't teach because it's some sort of penance that I need to pay. I teach because I like it and I want to help build smart humans, not contribute to our society degenerating.


I would be willing to bet that most of the politicians/CEOs/etc. who currently lie to everyone's face and went to Harvard/Yale didn't need to cheat their way through, and more often than not didn't bother.


You would lose your bet. Let's just say that I know what I'm talking about first hand and I'm not making a conjecture.


Just out of curiosity, did they cheat more than the average cheater? I knew a few people who cheated in college but it was infrequent and varied by class, friend group, etc.


> The reason for precise directions is because otherwise people will complain, and if you ignore their complaints, they will complain to your boss. At the end you'll win, but you'll waste a bunch of time defending yourself.

So instead you force all your students to do busy work, like signing a statement accepting no grade if they use the wrong size breadboard, or photographing the breadboard next to a compass to prove its alignment?

To me this sounds like lazy teachers punishing students rather than working to solve the problem.


Sorry I wasn't clear, I saw and understood the explanations in the article, I just find them weak.


> I still don't understand why you don't just mark their answer from creatively misinterpreted instructions wrong and move on with life

Because the actual incidents are often in fuzzy areas where it seems possible the teacher's instructions were confusing. You're stuck making a character judgment of your student instead of evaluating knowledge. Over a career, it becomes easier to cordon off fuzzy areas than it is to risk a moral challenge.


> it seems possible the teacher's instructions were confusing.

Yes; I've been on both sides. I've written assignments that I thought were clear and unambiguous, only to find that a significant number of students misunderstood what I meant. They weren't intentionally trying to make the problems easier, they just weren't sure what I wanted. (And, of course, who is going to interpret an ambiguous problem so as to make more work for themselves? A few students will do it both ways -- the easier interpretation and the harder one -- but most won't.)

And on the other side, I've taken continuing education classes taught by other teachers where the instructions were confusing, ambiguous, or sometimes just plain impossible to follow ("You'll find the answers to this quiz in the article you just read." but the article was revised and now uses different terminology from the quiz.)


I find that students talk to each other and spread interpretations of the assignment. They might be correct, they might not - either way the interpretation spreads (never through anything like 'official' course forums set up for students to ask about interpretations, of course). They've also gone through shared experiences in other courses beforehand and will often simply come up with the same incorrect interpretation. For 5 years the basic assignment was clear and easily understood, then the next year it's almost universally misinterpreted. Those shared misunderstandings have easily outnumbered creative interpretations to help grades in my experience.


> who is going to interpret an ambiguous problem so as to make more work for themselves?

I did.

In fact I always tried to find a unique or novel solution to my problem sets, ambiguous or not. (If the problem set contained a hint I tried mightily to not use the hint, I'd always try to replace a proof by contradiction with a constructive proof etc...)

My marks suffered for it. I even almost failed a first year exam cos I didn't want to perform a grody 4x4 matrix multiplication. Later the prof said: "Your exam was crap, but you came up with a better answer for problem four than I'd thought of."

It's still one of my most cherished memories from undergrad.

I always hated the: "Will this be on the test" type of attitude. Are you there to learn and break new ground or to just get marks? I had crappy marks but my work spoke for itself.

Students should put more effort into creating their own body of work. If they spent half the energy they put into finding tricks and gaming the system, they'd be much better off for it.


I was never one to game the system until I was failing 3 classes while on academic probation (2 Fs would have gotten me kicked out). Then I gamed the shit out of the system.

That was my breaking point. Others it's losing a scholarship; others, getting a B.


I usually can tell which students will do well by how they answer ambiguous questions: they'll answer both ways, the easy way and the hard way.


In the first case, they complain, and there's ~750 of them (in the course I TAed) so even a small number can take up a lot of time. The right way to think about it is for a small additional bit of time spent clarifying instructions you save yourself a larger amount of time later.

In the second case, it does depend upon how much the instructor feels it's their duty to uphold the integrity of the grades in their class. I'm not sure if I would have made the same choice in my advisor's shoes, but that is the decision he made.


I was a TA too, though not for a course that large. I guess my feeling is that complainers should just be noted and ignored.


> In the first case, I still don't understand why you don't just mark their answer from creatively misinterpreted instructions wrong and move on with life.

Because your job is to educate them. They also complain about the task, which in effect wastes your time or gives you trouble.


> Because your job is to educate them.

"Creatively misinterpreting" instructions means to me that the students are intentionally doing this (to get away with doing less work, or whatever). I think marking them down and moving on is educating them: it very quickly tells them that sticking to the letter of the law but ignoring the spirit is not ok, and will not be tolerated. It's pretty good preparation for being in the real world, too.

Regardless, giving ridiculously over-specified assignments will not be good preparation for the real world, where many (most?) things are under-specified and ambiguous. Adults need to learn how to read between the lines, interpret things properly, be comfortable asking follow-up questions for things that are not clear, and just figure things out when such clarity doesn't exist.

> They also complain about the task, which in effect wastes your time or gives you trouble.

That sounds annoying, but to me it feels like over-specifying tasks in this way is the opposite of education. And it feels like the time dealing with the misinterpreters wouldn't be wasted; it would be spent actively teaching students that the world is not black and white, there's often no instruction manual, and that getting out of doing work through "creative misinterpretation" will not get you far.


It seems like a good learning experience to get an answer wrong because you didn't succeed in interpreting the question. Nobody takes pains to describe things in minute detail in real life.


> Then when TAing with him I saw how students could creatively misinterpret instructions, even when I could not imagine how to make them more precise.

The best part is if you do make it more precise by specifying the problem in more detail, they will just not read it and ask questions that you answered explicitly in the assignment.


sometimes "precise" in the mind of the instructor is "unintelligibly technical" to the student. I'm tutoring an (ESL) friend through an intro to programming course right now, and every time she gets an assignment she sends me the full text of it just to ask me what the instructions mean. to me, the instructions are almost describing line-by-line exactly what to write. but to someone who isn't already at the level where they can just read and understand random pages on cppreference, it's basically impenetrable. this is a course designed for people who not only have zero programming experience, but also don't even intend to pursue a CS major/minor.


At least if it is in the assignment, you can passive-aggressively copy-paste the text of the document to them.


My favorite phrase is "As per the syllabus..."


OTOH, I've definitely taken classes with years out-of-date syllabi. It is a funny thing, where some instructors consider it to be the fundamental contract between them and the student, and others consider it to be an annoying bit of extra busywork.


So, I don't know where I read this (might have been here on HN), but it went something like:

If you create the rules for the pathological cases, then you're "optimizing" for those, not for the majority.

Whereas the pathological cases should be dealt with as exactly that: pathological cases.

Though sure, sometimes explanations can be better, but you can only play that game up to a point.


On the other hand, if you don't address the pathological cases in writing, 90% of your time will be taken up by the 10% of people who rules-lawyer their way through life: Pointing out the lack of written clarity, complaining about 'hidden rules', writing a letter to object, appealing to your boss, appealing to boss's boss, lodging a formal complaint with leadership implying discrimination, getting actual lawyers involved, and on and on and on.

There are a small number of people who just live for the thrill of taking advantage of poorly documented rules or process. They act disingenuously under the guise of sincerity. "I'm just trying to clarify: Nowhere is it written that [$obvious_bad_behavior] is not allowed, therefore how am I supposed to know??" People who spend more time scrutinizing their university's Policies, Rules and Regulations, and Code Of Conduct, looking for exploitable flaws, than they would ever spend actually reading their assignments. Happens in the business world too. I've seen salesmen who couldn't multiply two three-digit numbers together turn into Albert Einstein when the year's bonus structure got published.


> I've seen salesmen who couldn't multiply two three-digit numbers together turn into Albert Einstein when the year's bonus structure got published.

I kinda think you are arguing against your point, here. IMO these sorts of sales people are a result of over-specifying homework questions to this degree, because they haven't been shut down or washed out at the stage where you find out they can't deal with a reasonable (or even too-low) level of detail.

But the problems you talk about in your first paragraph are real problems, and the solution is that the entirety of the school's administration needs to take a zero-tolerance approach with this sort of behavior. Rules-lawyering should be shut down at every step of the way. Yes, that might result in some actual lawsuits, which will suck up time and money, but I think that's just the price of educating people. And might still end up being less trouble overall.


Yeah that's why you can have a catch-all rule like "TA is conferred final discretion on evaluations"

Though as I said, some things are good to have in writing, if it's an exception that happens with some frequency or some corner case that's not as rare as thought


When I was a TA, I convinced my professor to stop giving graded assignments. It was obvious on tests who had done the assignments and who hadn't.


Having just started a grad program after 12 years in industry, I'd have to disagree. While a large fraction of homework is busy work designed to give the illusion of challenge and rigor, tests simply estimate whether someone has memorized the material sufficiently for a short 1 hour exam.

In CS, a ~4-20 hour project is vastly more representative of how well someone understands the material and could apply it in a real world setting than a 40 minute multiple choice exam. At the advanced levels this is true for fields such as Physics, English, History or any others.

Maybe we should ask ourselves how to give better assignments in a class that aren't simply busy work?


Many engineering programs have their most challenging courses set up as semester long projects.

In chemical engineering the final boss is the process design class, a project where you are asked to produce a chemical substance with desired properties at scale without losing money. Almost everything you learned during the program has to be used to pull it off. Programming, numerical methods, CAD, Transport phenomena, kinetics, physical chemistry, thermodynamics. It really is the best all around test for a chemical engineer.

While this is feasible for the senior year, I am not sure if you can convert for example calculus 1 into a semester long project.


Calculus 1 is an interesting subject, as there certainly is a degree of memorization required (you can't re-derive the derivative of x^n every time it comes up in your career). There is a similar element in intro to Organic Chemistry, Algorithms and Data Structures, intro to programming, etc. But the goal is to build detailed understanding of these methods more so than memorization.

On the other hand, we live in a world where access to derivative rules is trivial. I'd imagine mathematicians in 1800 would have assumed that you needed your multiplication tables memorized to be productive, rather than being reduced to pen and paper your entire career.

I wonder if there is an opportunity to push more challenging material into the earlier classes and make them more project like.
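Tangentially, for anyone who does doubt their memory of the rule: the power rule d/dx x^n = n·x^(n-1) is cheap to sanity-check numerically with a central difference (a throwaway sketch, nothing to do with any particular course):

```python
def numeric_derivative(f, x, h=1e-6):
    """Central finite difference: (f(x+h) - f(x-h)) / 2h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Check d/dx x^n = n * x^(n-1) at a few points.
for n in (2, 3, 5):
    for x in (0.5, 1.0, 2.0):
        approx = numeric_derivative(lambda t: t ** n, x)
        exact = n * x ** (n - 1)
        assert abs(approx - exact) < 1e-4, (n, x, approx, exact)
print("power rule holds numerically")
```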


I am currently in the last few days before submitting my Chemical Engineering Design Project (I'm designing a packed bed methanol reactor), and yes I can confirm it is absolutely fucking brutal and hands down the hardest thing I've done in my life so far


I get what you're saying but I also disagree with it as a generalization, and say it would depend on the subject. For theoretical subjects, an exam is about the only way to test your understanding. Memorization is not going to help you solve math problems.


I was a physics undergrad who hopped into a few grad classes, and to be honest I was terrible at homework and great at exams (mostly due to some youthful obstinacy about putting the time in on homework). At the time I believed that the exams showed who really knew the material and who applied time to solve the problem. With some time passed, I see that the larger/tougher problem sets were where the real challenge was.

I recall a few unique problem sets from Graduate QM such as

- Derive from first principles the color of the sky.

- Prove that charge must be quantized if there is one magnetic monopole in the universe.

The exam questions were far simpler than the theory questions asked in the problem sets. The work for the first question easily totals > 20 hours of pen and paper time.
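For the curious, the second problem is Dirac's classic argument. From memory (so treat this as a sketch and check the units yourself), the punchline is the quantization condition:

```latex
% Dirac quantization: if a magnetic monopole of charge g exists anywhere,
% single-valuedness of the electron wavefunction forces every electric
% charge q to satisfy (Gaussian units):
\frac{q\,g}{\hbar c} = \frac{n}{2}, \qquad n \in \mathbb{Z}
% i.e. electric charge comes in integer multiples of \hbar c / (2g).
```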


> The work for the first question easily totals > 20 hours of pen and paper time.

I guess grad students generally take less coursework than undergrads, but how could a professor expect students to have 20+ hours on hand to solve a single question, given other demands on a student's time?


Grad students usually take 1-2 classes at a time, and the problem sets are spread out over 2 weeks.

A problem like the above would be given as a single problem for students to solve over 2 weeks.


I had an undergraduate lab where I spent 20 hours per week on the lab write ups.

That's what motivated me to switch away from a physics major.


> Memorization is not going to help you solve math problems.

On the contrary, memorization is the way most people I know got through most of their math classes, at least through calculus and linear algebra. You memorize the steps by rote repetition without really learning why they work, then the test is mostly an exercise in guessing which steps and formulas you should apply to the given problem.


Is that really memorization? Memorizing multiplication tables is one thing. Practicing the techniques over and over isn't memorization, imo. In grad-level maths you are pretty much writing proofs; you can't just memorize facts in a textbook to do that.


It's memorization insofar as you can do all of that practice and become proficient at solving math problems without really knowing what they mean or why the steps work. You're regurgitating what you were taught, not making connections and using your understanding.

You used math as an example of a subject where tests are used to check understanding. I disagree, because most people that I know who did well in math did so by being good human computers, not by understanding anything.

I expect that doesn't continue to be true at the grad level, but most people don't get that far.


I’m someone who crammed their way through 4 years of computer engineering exams at a challenging university. It’s possible. It’s hard and the worst few weeks of life before exams, but it’s possible.


Cramming is not memorization. It's not optimal studying, sure, but you've still learned something.


In my experience there's little long term retention from cramming.


Can confirm. There's 0 retention. Maybe if I kept cramming over an extended period of time I could retain it. Typically though, I stop after taking the exam, so within about a week or two the things I thought I understood had disappeared.


>tests simply estimate whether someone has memorized the material sufficiently for a short 1 hour exam.

I feel a deep sadness reading this. Is your computer science curriculum more accurately described as a software engineering curriculum?

Memorization should be virtually irrelevant on most computer science exams. Proofs should be core to computer science exams; the ability to reason is the most fundamental skill to all scientists, especially for fields which are tightly coupled to mathematics.


> Is your computer science curriculum more accurately described as a software engineering curriculum?

Given that most CS students want to go into software engineering, it would surprise me if this isn't the case for most CS curriculums. In my experience CS students don't generally want to be scientists, so most CS classes are more application-oriented than proof-oriented.

Schools are starting to provide separate software engineering programs, but we're not all the way there yet.


I disagree, but at least you didn't use the word "regurgitate".

I always find it funny when people say that tests are just about "regurgitating" information. It's such a cliché that just gets regurgitated in every argument over testing, as though its visceral imagery actually gives it any real weight.

Tests can assess whether the student learnt the material covered in class. They can also test problem solving abilities.

Assignments test conscientiousness, and the ability to make good design trade-offs when working with a single customer who is buying 100 different custom products and doesn't really care about any of them.


Graded assignments are useful to give feedback to students. And more importantly they force students to work regularly and not wait for the last minute to study.


Thus we see the problem that universities are admitting students who aren't ready for tertiary education.


I think it is mostly the latter. At least -- I rarely got useful feedback other than a little x (best case it would be on the error, more likely on the questions).

Personally, when grading I keep a file of all my feedback so I can easily copy-paste it into their feedback files (since everything is digital nowadays). For a given assignment, usually only a handful of mistakes are made (repeated by each student). If anything, having the file makes my grading more consistent -- same points for the same error.

I'm under the impression that this is a not-unpopular system, but try as I might, I cannot get anyone else to adopt it.
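The comment-bank idea is easy to mechanize; here's a minimal sketch of the shape (the rubric codes, deductions, and messages are made up for illustration):

```python
# Hypothetical comment bank: code -> (points off, canned feedback).
RUBRIC = {
    "E1": (2, "Off-by-one in the loop bound; check the last iteration."),
    "E2": (5, "Missing base case, so the recursion never terminates."),
    "E3": (1, "Units omitted in the final answer."),
}

def grade(max_points, error_codes):
    """Apply the same deduction and feedback for the same error, every time."""
    lines, score = [], max_points
    for code in error_codes:
        points, msg = RUBRIC[code]
        score -= points
        lines.append(f"-{points}: {msg}")
    return max(score, 0), "\n".join(lines)

score, feedback = grade(20, ["E1", "E3"])
print(score)     # same errors always cost the same points
print(feedback)
```

The side benefit the parent mentions falls out for free: consistency, because the deduction lives in one place instead of being re-decided per student.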


Personally I always preferred quizzes for that. I've always been a very strong autodidact though, there are probably people who prefer getting dragged through things by homework.


This is tough because it creates a strong incentive for them to make bad long-term decisions. Think of it from the perspective of a student: you're taking 6 other courses, all of them very demanding with graded assignments, except for this one class where the assignments are not graded. You have a limited budget of time over the week, and time is getting short. Do you: a) work really hard on your ungraded assignment and turn in your best effort for no impact on your grade or b) tell yourself that you'll make up the work at a later point in time, and then focus on your other graded assignments to make sure you optimize those grades. Then you will focus on the other course later on during spring break or something.

Sure, everyone says they'll do (a), but really this sets a lot of students up for a trap. They think they will have time to make all of this up later, but really what will happen is they will just fall behind in the class. The assignments from other courses keep piling up, so the free time never really materializes. In fact, the same scenario repeats: the student will forego a second assignment, having already done so once before. Then the deferred responsibilities pile up and you end up with a student who is failing your course (even though on paper the grade is undetermined, kind of like a wave function, in all actuality it's just waiting to collapse to a grade of F at test time).

Look at it this way: it's like a reinforcement learning problem. If your reward schedule is that you only give a reward to the agent when it achieves the end goal, sometimes training that agent takes a very long time; if the search space is too large, then the agent can go any which way and will take a long time to reach that goal. That's ungraded assignments.

Instead, if you give the agent little rewards along the way when it makes some significant progress, then the agent can converge to the goal state much faster, in a way that avoids a lot of unpleasantness for everyone. I don't like giving Fs, and they don't like receiving Fs. I feel like if I give an F that's really more on me than them. Part of my job is not just to put course content into student brains, but to also shape their ability to manage their time and juggle a variety of projects. It's the kind of thing I spend many semesters (4) instilling in my students, and grades are one of the effective tools I use to do so.
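The sparse-vs-dense reward contrast can be illustrated with a toy search, in the spirit of the classic "weasel" demo (just an analogy, not real RL): scoring only exact success is like a single final exam, while scoring partial progress is like regular graded assignments.

```python
import random

TARGET = "GRADE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def score(s):
    # Dense feedback: one point per correct letter (graded assignments).
    return sum(a == b for a, b in zip(s, TARGET))

def dense_feedback_search(seed=0, max_steps=10_000):
    # Mutate one letter at a time, keeping any change that doesn't lower the score.
    rng = random.Random(seed)
    cand = [rng.choice(ALPHABET) for _ in TARGET]
    best = score(cand)
    for step in range(1, max_steps + 1):
        i = rng.randrange(len(TARGET))
        old = cand[i]
        cand[i] = rng.choice(ALPHABET)
        new = score(cand)
        if new < best:
            cand[i] = old  # reject regressions
        else:
            best = new
        if best == len(TARGET):
            return step  # attempts until success
    return None

def sparse_feedback_search(seed=0, max_steps=10_000):
    # Sparse feedback: only told pass/fail on the whole string (one final exam).
    rng = random.Random(seed)
    for step in range(1, max_steps + 1):
        if "".join(rng.choice(ALPHABET) for _ in TARGET) == TARGET:
            return step
    return None

print(dense_feedback_search(seed=1))   # converges in a few hundred steps
print(sparse_feedback_search(seed=1))  # almost certainly None: 26^5 possibilities
```

With partial credit along the way, the search converges quickly; with pass/fail only, it essentially never finds the target in the same budget.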

You may say just do away with all grades and we can talk about that. There are different models we could use. But as long as others are using grades it's kind of a baked in assumption at this point. Very hard to change that kind of system.


It does sound like a pointless arms race between different courses.

I majored in Law but took a couple CS courses on the side, so I saw the contrast between traditions in different departments. CS courses had a constant stream of non-trivial graded homework; even if I knew the material it took me quite some time to complete it. Law courses usually had one essay that counted for ~15-25% (or, less frequently, a mid-term test), with the rest riding on the final exam.

Obviously, both methods work (I guess). But if you're already in an environment where courses give out lots of graded assignments, your concerns definitely make sense.


When I was a college student, I wasn't diligent enough to always do ungraded assignments. I'd do reading, but for actual questions I'd only do them maybe 50% of the time when preparing for a test on the material. And out of my peers I felt like even doing assigned reading and trying to do ungraded homework put me well ahead of the pack.

I think it's a maturity thing. Probably until I was ~24 I just didn't have the executive function to be able to do things like that. It seems beneficial to have graded assignments as a forcing function especially given some college students are literally teenagers.

Also, I took an accelerated math curriculum as a freshman where I went from never having written a proof/knowing how to prove something rigorously, to pretty good at it. The feedback from the assigned homework was absolutely crucial in helping me learn these skills. It's easy to follow a proof from the answer section, but since there are usually several ways to prove something, it doesn't always help just to see an answer, plus you don't know what kind of divergences/hand waves are acceptable or not without feedback.


Professors grade the assignments to make sure that students do them.

There are many other options for evaluating the students, but not many to force them to learn something.


Did someone ask the students why they were cheating or creatively interpreted instructions? And then tried to address the underlying problem?


You can't address the underlying problem that the difference between an A and an A- could very well have lasting effects on a person's life.

You can't address the underlying problem of someone making it to their late teens and being a little shit.

You can't address the underlying problem that some people don't even really want to be in your class but "have" to take it because they want a degree.

You can't address the underlying problem that some students have spent the last 19 years rules-lawyering their parents and always getting their way.

You can't address the underlying problem that any concessions you make for the 20 year old mother of two struggling with two full-time jobs on top of college will also be vehemently claimed by the stoner 20 year old with a parent on the Board and who thinks college is awesome except for the classes.

You can't address the underlying problem that the university gave you a class size three times what it would need to be for you to be able to provide each student with the requisite attention to really address anything other than "did they meet the criteria".


Seems like a trivial thing to say there is an underlying cause. Students should still be failed for cheating. It's definitely not a research physicist's job to address a student's personal issues.


> class of 1400

Is it a normal thing in the west?

The largest class I’ve been part of in India had 105 students and I thought that was nuts. 1400 is like crazy to me.


"class of 1400" just means a given course has an enrollment of 1400 in a given semester, not necessarily that they packed 1400 students into one lecture hall and taught them all at the same time.


1400 is huge. It's common at large state universities for the introductory classes to have somewhere between 200 and 300 students. The professor lectures in a large auditorium, and grading (and questions!) are delegated to a staff of TAs.

If you get a good TA and have some good classmates, it's totally fine. Unfortunately, it's common for your TA to be crap, at which point grading becomes a nightmare.

I avoided all of this by taking introductory classes at the community college, where they teach the same material to classes of 25 students.


> every word of the assignment creatively misinterpreted

OTOH when I took operating systems I got an assignment that said “implement a job scheduler, using FIFO, LIFO or round-robin job scheduling”. So I picked FIFO, got it working and I had time left over so I thought, “what the hell? I’ll do LIFO too”. So I did, and I still had time so I took a crack at round-robin, but I didn’t have time so I turned in what I had, proud of myself for going above and beyond.

I got back a 66 on the assignment. I asked why and he said, “you didn’t even attempt round robin”. I pulled up the assignment where it VERY CLEARLY said “or” and he said, “well, it should have been obvious I meant ‘and’”.
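For what it's worth, the three policies in that assignment differ only in which job the ready queue hands over next. A toy sketch (Python for brevity, surely not what the course actually required):

```python
from collections import deque

def run(jobs, policy, quantum=1):
    """Simulate a toy scheduler. jobs = [(name, burst_time)].
    Returns the order in which jobs finish."""
    queue = deque(jobs)
    finished = []
    while queue:
        if policy == "fifo":      # oldest job runs to completion
            name, t = queue.popleft()
            finished.append(name)
        elif policy == "lifo":    # newest job runs to completion
            name, t = queue.pop()
            finished.append(name)
        elif policy == "rr":      # each job runs one quantum, then requeues
            name, t = queue.popleft()
            if t > quantum:
                queue.append((name, t - quantum))
            else:
                finished.append(name)
    return finished

jobs = [("A", 3), ("B", 1), ("C", 2)]
print(run(jobs, "fifo"))  # ['A', 'B', 'C']
print(run(jobs, "lifo"))  # ['C', 'B', 'A']
print(run(jobs, "rr"))    # ['B', 'C', 'A']
```

Each variant is maybe five lines on top of shared plumbing, which is presumably why doing two of the three felt like going above and beyond rather than tripling the work.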


> every word of the assignment creatively misinterpreted

I hear you, communication is hard, but trust me, your statement and the author's are an AND, not an OR :) The difference is scale: the students are about 1v30 when it comes to opportunities for badly written assignments. In a typical semester, I am 1v120+ when it comes to hostile attacks on my communication skills.

The number of students I have that actively presume I am trying to trick them is truly astonishing. It's a learned behavior, sure, but it is the behavior.

I've put parts of answers in the question text, underlined them, and then had students tell me they thought it was a trick. I'm at the point where, when I write tests, I leave pretty blatant hints, because I have found they are a useful way to punish students who have opted out of thinking and try to brute-force answers. I've had students actively invent parts of assignments in their head. In the last one, I asked for three examples in a short answer problem...and gave one of them as a 'for example'. Less than 10% of people used that. More than 10% told me they thought it was a trick.

I'm not mad, I'm worried. I get why IT is moving towards zero trust, but it is fundamentally at odds with actual learning. The saving grace is that I can actually use the term 'headcanon' with this generation and they know what I mean.


> The number of students I have that actively presume I am trying to trick them is truly astonishing.

> I've put parts of answers in the question text, underlined them, and then had students tell me they thought it was a trick.

Why wouldn't they assume that?

Tests are nothing but games where students figure out what answers you want. They can do that by learning the material, but there's a metagame. What were you thinking when you wrote down this question on the exam?

For example, I once took an English test which asked me the correct pronoun for ships. Choices: he, she, it, they. I know that "she" is almost always used to refer to ships, but I also know that "it" is also grammatically correct. So which is it? Is the teacher trying to test my understanding of this historical tradition? Does the teacher think "she" is antiquated? Will I lose points if I pick "she"? There's enough uncertainty here for me to pause even though it's an absurdly simple question. I know the answer but I'm not 100% sure what will happen if I choose it.

There are serious consequences to getting these answers wrong and students simply can't afford to trust you.

Entire books could be written about this teacher-student metagame. Another example: choice order. Suppose the choices to the above English question were ordered like so:

  Which pronoun is used to refer to ships?

  A. It
  B. He
  C. They
  D. She
Deliberately positioning an almost correct answer as the first choice was extremely common in the schools I attended. A student who's overly anxious or running low on time might read that, pick choice A without even reading the other options and move on. The second we noticed this, "letter A is never right" became a meme and to this day I feel uncomfortable choosing the first option presented to me in any context. Preparatory schools actually offered test taking classes where we were taught to read questions backwards because of this.

Teachers eventually noticed that we noticed and suddenly the right answers were the first choices. Do you have enough confidence in your knowledge to assert that letter A is correct? Now it's a deeper level metagame: does this teacher know that I know?

This is what you get when education determines your future job prospects. It is no longer about "actual learning". It is a game and our future hangs in the balance.


I think we are on the same page.

My point wasn't that they should or shouldn't, it's that they do. As you note, it has been socialized into them. The problem is systemic, not something any one educator can fix. Bad teachers and bad educational experiences create long-term damage beyond not learning content.

Good tests actually assess learning - what you mentioned is not a good test. Just as a test that causes stress ends up measuring stress, not learning.


I actually would have chosen "they" on that multiple-choice question, because it asks about ships, plural.


... And you would have been right.


> The number of students I have that actively presume I am trying to trick them is truly astonishing.

Because we all have experience of you guys doing just that.

I once had a question on an exam for high voltage power “why do high voltage transmission lines have 3 wires”. The answer I put seemed obvious, that each wire is for one of the 3 phases of electricity. This is the standard config for HV lines in the US.

The answer was wrong, because the professor was talking about INSIDE the cable and wanted an answer relating to how the electromagnetic field rotates around the wire so they bunch wires inside each line to improve efficiency.

Professors, especially elite ones, have seemingly perfected the art of being vague. As a student you have zero recourse against them for their terrible communication skills; your only hope is to dodge the wacky ones by befriending upperclassmen/being in a frat.


> In the last one, I asked for three examples in a short answer problem...and gave one of them as a 'for example'. less than 10% of people used that. More than 10% told me they thought it was a trick.

What did you expect? To me, it would be painfully obvious that the already given example is void and no longer up for grabs. I would never construct such an exam question. Dangerous slope, but in this case obvious.


I have to say I think it's quite possible and comical that the professor interpreted your work on two of the three questions as additional evidence that you did know it should be an "and", since I would guess relatively few students would attempt any more than _one_ permutation if the assignment only asked for one.

Damned if you do, damned if you don't.


But what did the other students in the class do? If almost everyone in the class did only one, the professor would have to give almost the entire class 33% for a mistake he made.


Looking back, I should have asked around. The job scheduler was one of several options so not everybody picked it. Still, somebody else must have.

The professor actually did relent and gave me a good (not perfect, but good) grade. Then he judged me extremely harshly on the final exam so I ended up with a B in the course anyway.


That’s pretty sad. As a CS Professor, he should understand the distinction between a conjunction and disjunction.


As an educated human, he should understand that when you provide written instructions it is implicitly a "do what I say" and not "do what [you infer] I mean".

It is obviously unfair and unprofessional to penalize the student for the professor's error.


Tell that to all the math professors that go: "this part is obvious"...


I think it's a question of scope, not of the logical operator. One way to interpret the question is:

1. You choose X or Y or Z

2. You provide a scheduler that does the thing you chose

Another is:

1. You provide a scheduler where I choose X or Y or Z and the scheduler does the thing I chose.

The question is whether the disjunction applies to the input configuration of the scheduler or the output of the assignment. That is, whether the output of the assignment has type (X_scheduler | Y_scheduler | Z_scheduler) or type (X | Y | Z) -> scheduler.
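To make the scope distinction concrete, here is a tiny Python sketch of the two readings. All the scheduler names and the `Policy` type are invented for illustration; nothing here comes from the actual assignment.

```python
from typing import Literal, Union

# Hypothetical scheduler classes, one per policy.
class FifoScheduler: ...
class LifoScheduler: ...
class RoundRobinScheduler: ...

Policy = Literal["fifo", "lifo", "rr"]
Scheduler = Union[FifoScheduler, LifoScheduler, RoundRobinScheduler]

# Reading 1: the student picks one policy; the submission has type
# (X_scheduler | Y_scheduler | Z_scheduler). Any single choice type-checks.
def assignment_reading_1() -> Scheduler:
    return FifoScheduler()

# Reading 2: the grader picks the policy; the submission has type
# (X | Y | Z) -> scheduler, so it must handle all three branches.
def assignment_reading_2(policy: Policy) -> Scheduler:
    if policy == "fifo":
        return FifoScheduler()
    if policy == "lifo":
        return LifoScheduler()
    return RoundRobinScheduler()
```

Under reading 1 the student's single-algorithm submission is fully correct; only under reading 2 is it a third of the work.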


I agree with you, but not as the assignment is worded:

> “implement a job scheduler, using FIFO, LIFO or round-robin job scheduling”

To me, "using" modifies "implement"; that is, the teacher is telling me to implement something, and then telling me what to use. (That is, "using" does not modify "job scheduler".) If the teacher wanted the second interpretation, they should have said:

"Implement a job scheduler that can use FIFO, LIFO, or round-robin scheduling."

In this case, "that can use" is clearly referring to the job scheduler, not to me. To be fair, though, I think some students might still (reasonably!) believe this means they only have to implement one algorithm. I would probably use this second wording, but also change the "or" to "and". Even better, though more wordy:

"Implement a job scheduler where the user can select between FIFO, LIFO, and round-robin scheduling."

Let's also remember that boolean logic and English usage are not the same thing. I don't think my two suggested wordings would be considered "over-specifying" in the same way that OP is talking about. They're just a better use of language.

The teacher here should just have re-read the problem instructions, shown understanding and empathy toward the student, and either a) just changed the grade right then and there, or b) offered to let the student complete the rest of the assignment and turn it in again. And then made the wording of the problem more clear for the next batch of students.


Not more clear; he should've fixed his mistake. "Or" is not "and". Period. I agree with you, but I wanted to emphasize the mistake.


While I think a student could read the instructions in the way the teacher intended (though I would not be one of those students either), I think the problem here is that the teacher is a poor communicator, and is too arrogant to believe that they could be fallible here.

The fix for this particular issue isn't over-specification, it's changing one word in the instructions. Or at most, adding a few more words to make things clearer.


This problem isn't a teaching problem. Evaluating someone's skills in any respect or context is basically an intractable problem. Interviewing, school, job performance, etc. etc.

If there were an organization that could "perfectly" evaluate people's skills in a fixed period of time, it would quickly become the top, and eventually only, company. It would use its own skills to remove low performers, perfectly, from its own organization. It would find all of the top performers outside of the organization, perfectly arbitrating wage vs. value. Profits from this would be reinvested into the organization, forming an infinite virtuous cycle.

Later it would supersede whatever nation it's in, conquering it by finding the best military leaders and soldiers using the same "perfect evaluation" ability. It would get the best diplomats and business leaders. Later it would turn an eye to other nations, then the world. Eventually the galaxy and the entire universe.


The first thing to understand is that the goal in teaching is not to evaluate anyone's skills. The goal in teaching is to make sure that students learn things. From a teacher's perspective, the evaluation part is entirely a hack to make sure that they do.


> The first thing to understand is that the goal in teaching is not to evaluate anyone's skills. The goal in teaching is to make sure that students learn things.

No, both learning and certification of learning (in a way legible to 3rd parties) are real and proper goals of teaching.


They're both important, but I think having exams given by a separate person or organization might be a win in some cases?

It might mean more teaching to the test, though.


I don't think it's true that in all contexts the role of a teacher is exclusively to teach. "Teachers" are also part of a credentialing system used in our society to identify people who are skilled or talented. This is discussed in the article when the author talks about the diffuse harms inflicted by cheaters.


If you have a goal that cannot be evaluated... how do you know if you're achieving it? How do you know if you're improving or worsening?

Evaluation is necessary. It doesn't have to be quantitative, we all know there are infinite problems with purely quantitative measures of human behavior, but evaluation itself is fundamentally part of any goal.


I think this is a crucial point missing in the above discussion.

However, tests should have a place in the teacher's perspective, because they improve the learning effect (they trigger memory retrieval reliably).

So maybe the problem is simply that tests are linked to grades ;-)


> So maybe the problem is simply, that tests are linked to grades ;-)

I agree with this. Students cannot afford to get questions wrong and learn from their mistakes. A bad grade can alter a person's future, there's too much at risk.


I disagree - without evaluation how do they “make sure that students learn things”?


> This problem isn't a teaching problem. Evaluating someone's skills in any respect or context is basically an intractable problem.

You're not engaging with author's argument. The author explicitly assumes for the sake of argument that perfect evaluation is possible. He's saying that even under this unrealistic assumption, teacher policies that naively look draconian are in fact hard to avoid given reasonable teacher effort.


That wasn’t my read. Ultimately the issues described are political. The author says as much when describing the “structural forces” and that systems with humans behave is funny ways.


That two things both involve politics does not, by itself, imply they cannot be usefully discussed separately. You need to actually argue that the issues the author discusses go away or become moot given the imperfection of real-world evaluation. It is not enough to argue that the world would look different were perfect evaluation possible.


In ancient times, all final exams would be oral ones in front of a panel of teachers. I'd guess that this technique would be pretty successful today too.


This works great, and it is still how evaluation is done when the stakes are higher: PhD defenses, executive hiring... even getting hired as an entry-level engineer at Google requires about five hours of what is basically oral examination.

But society is not willing to pay that kind of price for the earlier levels of evaluation. We want "scalable" systems. Unfortunately those same evaluations are often treated as more sensitive than they really are. For example, if you're comparing two students I'd argue that 3.0 vs 3.5 GPA gives you at least some signal, whereas 3.5 vs 3.6 GPA gives you basically no signal at all (maybe the 3.6 student took easier courses, maybe they were more lucky with cutoffs, etc.). And yet the distinction sometimes matters e.g. to graduate programs.

In well-designed systems, the GPA cutoff is set relatively low and more sensitive methods are used to select the best students from the pool. Often this includes an interview with a professor, which is also a form of oral exam.


You don't need oral arguments throughout the process. One at the end of the course is sufficient. It is then up to the students to learn the requisite knowledge in the given time period.


We could call it the Paperclip Maximizer.


There is such a system in the long run, it's called the free market.

If you can consistently outperform your peers while both parties have complete information, it's a sign of having some advantage.

Free markets are never perfect locally, but on a galactic scale they are pretty close, so the superior groups will dominate.


> so the superior groups will dominate.

That may be true by definition, if your definition is that superior groups eventually dominate, but that's of course just tautology.

However, depending on how you define "superior", for example "more intelligent and honest", or "more compassionate and fairer", could be what most people have in mind, then that may not be true at all. In human societies, throughout history, it's likely that who dominates is actually the most brutal and reckless, up to a point where people actually become accountable for their actions.


See, no, the second you put in "superior" then you left any idea of a free market. The idea of a free market doesn't claim to make any value judgement of what group is better, the free market is purely about selecting fair prices for commodities.

Your idea of superior groups and so on based on success on the free market is basically social darwinism.


Actually, free markets are only efficient if P=NP.

(They are not strong-form efficient, that was disproven long ago, and are only weak-form efficient if P=NP.)

Unfortunately, this hasn't stopped people from believing in them anyway, because they really, really want them to work, and people "feel" like they should.


They still outperform other mechanisms we have of resource allocation. They're massively parallel systems with ample signaling of different agents' state.

And the worst corner cases of externalities can be mostly offset with proper regulation, even if we struggle to do it.


This is TERRIFYINGLY accurate.

""" Here’s what will happen:

Like most other humans, your students will be lazy and fallible. So many of them will procrastinate and not do the homework. So they won’t learn anything. So they will get a terrible grade on the final. And then they will blame you for not forcing them to do the homework """

This is almost exactly how adjunct teaching went for me. It was not the experience I had hoped it would be in almost any way.


There is a breed of very narcissistic person in our culture that will always find a way to blame their inadequacies and their mistakes on those around them. In high school, if you are a teacher you have quite a lot of authority in the classroom and so even if your student is oriented in this way, they will just 'not like that teacher'. Helicopter and apologist parents are increasingly an issue but they aren't directly in the classroom.

In college however, students are grappling with their own burgeoning adulthood. They realize a TA is just another student with a few years on them. While the professor might be a bit out of reach, for a narcissistic person, it is easy to justify to themselves that they are actually above the TA in status/rank/morality/righteousness/sociability. Subsequently they can beat down the TA in the way that you mentioned. "All my problems are the result of your failures to address them". "I would have done better but the TA didn't like me." "Oh I hated that class the TA was a total nerd." "No one ever told me I had to do the assignments, I didn't realize I would be tested on this."

It doesn't help that people who choose to become a TA are often a 'helpful' kind of person, the exact kind of person that tends to be a little bit susceptible to these kinds of criticisms, even if they are untrue. The only way to move forward as a TA (and as a person) in this environment is to harden yourself in the ways that the article and many other commenters mention. That's my 2c anyhow.


Many, many TAs are only doing it because it is required for their PhD program (either explicitly or in order to receive funding). Some of them still take the teaching duty seriously, but not all. Having a bad TA is not a good excuse for failing a class I agree, but in my experience most TAs are not looking at their feedback because they're really only in it for the research. And a decent portion of them would deserve the negative feedback.


> There is a breed of very narcissistic person in our culture that will always find a way to blame their inadequacies and their mistakes on those around them.

There is a flip side... a lot of students are reaching college now without the experience of decisions or blame being on them. College is viewed as so important that it cannot be left to these children, who will screw it up. Grades are inflated, kids are told to do more activities but with less depth... everyone is trying to game every metric. It is a functionally nihilistic worldview that becomes embedded in students' minds in a way that totally lacks agency.

They feel that if they fail it's because the teacher failed them... because in their priors the teacher fixed it, because failing and the consequences were not an option - especially for the 'gifted' students. It isn't a personality, it's a lived experience. It's confusing to the students just as much as to me, because they don't understand what has happened or how their parents/teachers/leaders/society have failed them.


I'm sure it's theoretically possible to do poorly in a class because the teacher didn't like you, but statistically, it's got to be one of the most powerful red flags on a human. Steer clear.


There could be teachers who are fanatics about some political issue, and if you even hint you might disagree with them they'll try to lash out. I also recall some university level class (I didn't take it but a friend did) where religion was a topic and the professor intentionally set very loaded questions in online tutorials. While it's fair to ask people to question their own assumptions, it often seems the people most keen on this are ones who would become very irate if their own assumptions got questioned.

Perhaps that's why some groups of people tend to gravitate to STEM, where teacher bias is less likely to have an impact.


Ah, the student didn't know one of the secret college hacks: Teachers with an ideological axe to grind are almost always the easiest graders, because they can be manipulated by dog whistles, and need to be liked. I got my best grades in the obligatory religion classes.


I didn't get you; steer clear of who - the teacher or the student?


probably steer clear of the student who claims they failed because the teacher didn't like them


I really liked homework suggestions that (critically) included the answer key and walkthrough of solution!

Doing problem sets in university without this made it way less valuable because you need the immediate feedback loop to learn and waiting until office hours or recitation takes too long and you forget.

Good classes (from grade clarity perspective) were ones where it was clear what would be tested and how to prepare. Then you could leverage the optional homework to focus on areas you didn’t understand yet.

There were classes I enjoyed that did this poorly by either forcing homework grading without answer keys (feedback loop too slow, often can’t focus on what you don’t know) - or made it very hard to know what the test format would be like to prepare for.

I like learning and enjoyed my CS classes - I also kept a high gpa at a university known to be hard (was also preparing for medschool where gpa is critical in the US), but the stress around grades was miserable.

Getting good grades is a skill that’s related to learning, but also its own thing. Sometimes to optimize grades you have to do things that hurt learning (rather than focus on how a compiler works and digging into interesting details here, you must focus your attention on the specific types of puzzles that will be tested).

I get why this is done, but I still wish there was a better way to handle this. I think ISAs and job market validation of skills is an improvement (like lambda school) but those students still blame everyone else for their own failures even in that case so it’s a hard problem.


In CS courses, I always appreciated when we were given access to the grading scripts/unit tests used by course staff. It made that feedback loop immediate, and unless you were intentionally doing something weird you usually knew exactly what grade you were getting for your submission.

As a TA, it was funny to see the ways a few students would overfit those tests. In one extreme case I literally saw a student replace a complicated function definition with a if-else chain that just determined which of the 4 test cases it was being run on…
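As a hedged illustration of that pattern (the actual function isn't shown above, so this uses an invented string-reversal task), the overfit submission looks something like:

```python
# The assignment: reverse a string. A real implementation:
def reverse_correct(s: str) -> str:
    return s[::-1]

# The overfit submission: one branch per published grading input,
# with no general logic at all.
def reverse_overfit(s: str) -> str:
    if s == "abc":
        return "cba"
    if s == "hello":
        return "olleh"
    if s == "racecar":
        return "racecar"
    if s == "":
        return ""
    raise ValueError("input not in the grading suite")
```

Both functions pass the four published tests; swapping in even one unseen input at grading time immediately exposes the difference.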


As a CS instructor, my solution for that was to have a skeleton level of unit tests but to switch in a full suite once the deadline passed. Your grade was based on passing the full tests.

Practically everyone who "played fair" got the same grade for both. Occasionally, I would add a test that tripped everybody up and I'd have to go see what happened. Anybody who overfitted, however, got crushed.

My favorite assignment was always the next to last project (before end of semester deadlines start getting crushing). I created only a single unit test to verify the test suite runner was functional, and the students had to submit the rest of the pre-deadline unit tests. And I would switch in my full suite at the deadline. I told people that anybody who passed that suite would get bonus points that everybody was always bugging me for.

It was always absolute chaos. I only ever had one student pass the switched in suite (he got some nice bonus points out of that--didn't need them, of course). I would then reopen the project for a week (I planned for it) to let everybody else clean up their work and resubmit.

You could see and feel the difference in the students after that assignment. The fact that testing was an integral part of programming and that testing was, in and of itself, a difficult problem was a revelation--and not a particularly welcome one.


I'm currently dealing with a kind of similar situation, and honestly I find it a little odd -- I get questions from students which I'm pretty sure are honest and asked in good faith, but they seem to think they are very close to having solved the problem, despite having just over-fitted most of the tests.

It is possible that the problem is poorly written (I'm just a TA, I didn't write it, and it looks pretty clear to me), or it is possible that I'm just really gullible and they aren't actually asking in good faith (I'm a trusting person, but I've been doing this kind of work for a while and so I've seen most of the dishonest questions, this doesn't look like one to me). I dunno, I think I'll just chalk it up to the long tail of weirdness that can occur when dealing with a bunch of students.


My guess is CS1 is often the first time a student has been given a problem they have to actually reason about to solve (rather than follow a script).

This is hard to understand and adapt to and stresses out students that have been trained (for years) to learn the expected script.

You eventually get to this level in math, but only way later in postgraduate work (unless you're exceptional).

I think there’s likely a benefit in explicitly tackling this directly for young/new students, it might help them see the bigger picture. At least the earnest ones struggling to do well anyway.


I personally recorded hours of walkthroughs for my students (very basic react, svg drawing, etc).

It is such an immense amount of labor. Now I know why people regurgitate the same content everywhere or don't bother.


This is also how teaching went for me. I found out why teachers have attendance policies, quizzes, and all the other things that, as a student, seemed inane, if not counterproductive, to me. While those things sometimes are, and can be overly punitive or poorly applied (like anything, much of the apparatus around teaching can be done better or done worse), I now get why instructors do those things.


I also think there's always going to be a question of which students you are optimizing for. I had a professor that didn't have an attendance policy, but at the start of every year he would show a scatter plot of class attendance versus final grade with a fit line showing decent correlation. Of course if you looked closer, the effect was mostly that very good attendance led to A's. Low attendance was a crapshoot on the plot, with every letter grade represented including many of the A+'s.

The students that attended didn't need an attendance policy because they were inclined to attend anyway. The question is how much forced attendance would have improved the scores of the bad performance/low attendance group versus how much it would have hurt the good performance/low attendance group (including missed opportunities at the same time slot). I don't see a policy that realistically helps all of the struggling students without hurting any of the top students, so a tradeoff has to be made.

Perhaps offering attendance as extra credit without making it a penalty could be a good middle ground, but I don't think it would help all of the low attendance/poor performance students. By the time they realize they need extra credit they would already be behind, and they may not care about an extra credit offer at the start of the semester.


Good! Students should be failing out if they're unwilling to work hard.


>And you know what? When the students blame you, maybe they are right. The teacher is supposed to use their experience to help students learn. Shouldn’t you help the actual imperfect humans in front of them, rather than imagining a bunch of perfectly rational Platonic objects?


You could call it unwilling to work hard. Or it could be difficulty prioritizing work, or ADHD, or disorganization.

Are those traits what we’re testing for, or are we testing for knowledge of the subject?


As an advisor of PhD students I've learned that both things are important in different amounts. Much of what our education system measures right now is "willingness to work really hard from a young age." When I meet students from top-tier institutions I see a lot of this: it's really impressive. I also see a good deal of selection for what I'd consider raw problem-solving ability. I see a smaller degree of selection for raw creativity.

The most creative students I've met have been the ones that didn't accumulate credentials, and often suffered because of (possibly undiagnosed) ADHD. They did well when they found their passion, either because they found it later in life or because they really, really cared about it. Our system doesn't do as well with these people, but they can usually make their way through.

Unfortunately there's a downside to this: all the creativity in the world isn't going to help you if you can't execute. A brilliant idea only takes you so far. And gaining sufficient background to have brilliant ideas is often an even more demanding task, which passion alone doesn't suffice for. I don't exactly know what to do about all this. What I do know is that a system that bases future success on how well individuals do at age 16 is fundamentally, profoundly stupid... And I wish I had a better one.


Ironically, measures like discussed in this article can actually make classes much harder for students with ADHD. Keeping on top of busy work, maintaining a tight schedule, etc. is not easy with executive functioning issues, and could lead to a student that actually did learn the material and performed well on exams receiving a bad final grade.

This may be more relevant to "twice exceptional" students that can still pick up on the material without following the whole class. There is certainly heterogeneity and I don't mean to speak for all ADHD students in what they would prefer. I just think it is funny your comment could be read as supporting either side of the debate without the parent context. And given the parent context I have to say I disagree.

Now whether the hand holding of attendance policies and weekly assignments and the like is better for the class on the whole I can't comment on. It's not an easy tradeoff and I don't think the decision should be made primarily based on how people with ADHD perform, unless you are teaching a class where it is disproportionately represented.


I know this is harsh but if you have ADHD, difficulty prioritizing work or are disorganised to the extent that you can't perform academically at the required level to complete a degree then you shouldn't be wasting your money and time going to university and should instead look for something that better suits your talents.


I see where you are coming from in that there is only so much burden on the teacher that would be reasonable accommodation. However I think it comes across as harsh because many of these students could successfully complete a degree if they received adequate treatment from a healthcare provider.


It's their money and time to decide if an allocation of same is wasted or not; your opinion of what is or is not a waste of someone else's time is not relevant.

Source: extreme ADD sufferer who has "wasted" tons of time swimming upstream to learn to do things easy for some others but insanely difficult for me, simply because I wanted to.


People sometimes get diagnosed with ADHD in their 30s. It's not that they are incapable of doing things, depending on the kind of ADHD they have, they might just not feel any drive to do it, or have no sense for deadlines whatsoever. It can take years to get into the habits which help you overcome it.


Attention deficit disorder is particularly interesting because these people actually have hyperattention when it comes to stuff that's interesting to them. They'll have trouble even starting boring tasks like homework but they can easily maintain their attention for 12 hours straight on some programming problem if that's what they enjoy thinking about. They are likely to be incompatible with this mass education system where people sit in a classroom to listen to lectures, especially if there's associated hyperactivity.


Not to be insensitive, but ADHD is a learning disability. Are we saying that anyone can be taught anything? That seems unrealistic to me.


The testing is for a combination of knowledge of the subject and ability to apply it in practice.


They are perfectly willing to work hard... the problem is that their heuristics of what it means to work hard have been poorly developed.


One of the most effective exam techniques I have seen, which is uncommon in my country but maybe common elsewhere, is oral exams. I TA’d for a German professor who did them for his stat/ML course, and so got to sit in on all of them (I also took the course myself in an earlier year so had the experience as a student as well). The process was:

1. Give students a list of about 100 questions in advance, more than you could memorise. Some were simple like “write down the formula for X”, some were more complex like “derive the backprop update algorithm”.

2. Pick the first question difficulty based on the student’s assignment grades

3. If they get it right, pick a harder question, otherwise an easier one. If you aren't sure whether they really understand, interrogate them about the answer, ask follow-up questions, etc.

4. Choose a grade based on which questions they got right.
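Steps 2 through 4 amount to a small adaptive loop. A minimal Python sketch, assuming the question list is pre-sorted by difficulty; the function names and the grade mapping are invented:

```python
def oral_exam(questions, start_index, answers_correctly, rounds=5):
    """Walk a difficulty-sorted question list: move to a harder
    question after a correct answer, an easier one after a miss.
    Returns the hardest index answered correctly (-1 if none),
    which the examiner would then map to a grade."""
    i = start_index
    hardest_correct = -1
    for _ in range(rounds):
        if answers_correctly(questions[i]):
            hardest_correct = max(hardest_correct, i)
            i = min(i + 1, len(questions) - 1)
        else:
            i = max(i - 1, 0)
    return hardest_correct

# Simulated student who can answer anything up to difficulty 6,
# starting (per step 2) at the difficulty their assignment grades suggest:
grade_index = oral_exam(list(range(10)), start_index=3,
                        answers_correctly=lambda q: q <= 6)  # grade_index == 6
```

The appeal of this scheme is that, like a binary search, a handful of questions quickly brackets what the student actually knows.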

Firstly this was highly effective at making students actually learn the material because most were worried enough about embarrassing themselves in front of the professor that they prepared well. But also, it was extremely fair because it’s essentially impossible to cheat and fake what you know.

I suspect many professors would avoid this because it’s harder to justify the grades at the end to a third party. But if you record the exams, and the student is clearly failing to answer simple questions, it’s quite hard for them to argue they were treated unfairly.

Of all the written exams I’ve seen and taken, I’ve never seen a process as fair or effective as these oral exams.


> the student is clearly failing to answer simple questions, it’s quite hard for them to argue they were treated unfairly.

I know you didn't quite say that, but to be clear: Fairness isn't defined by someone being able to argue that they were treated unfairly. You still have room for bias in how you treat weaknesses in student answers, and few students are going to give perfect answers for everything. And even without any specific-to-the-student bias I'm going to be more consistent in grading the same question if I go through a stack of written answers en-bloc than if I grade replies with hours or days inbetween.

Randomized questions also have potential for bias that feels like it could be stronger than question selection for a written exam.

That's not to say there isn't value in doing oral exams, but they are not the solution for everything either.


Oral exams are awesome. The problem is they don't scale up to a high number of students. A teacher with hundreds of students literally cannot afford the time and effort to properly examine the knowledge of the students. So they make these multiple choice tests they can just apply to every student in parallel.

All of these problems with academia are rooted in the fact it's a mass education system designed to teach hundreds if not thousands of people all at once.


You’re right, it was a lot of effort on the part of the lecturer and it didn’t scale well as the course got more popular. I think it’s best suited to smaller, later year niche courses.


To back this up - I went through the oral exam experience on both sides (student and TA) with two different German professors and it worked incredibly well. We offered oral exams as a remedy for a few situations:

- suspected/known cheaters

- students who missed exams due to illness

- times in the course where we needed to get a good grip on whether our students understood the coursework the way we'd taught it

It was scary for a lot of students, especially the ones without great English, but it was always, in my view, incredibly fair.

edit: based on your comment history you're also in Australia - I have a feeling we might be talking about the same place :)


Haha small world! It was at the ANU, intro to statistical machine learning several years ago.


My experience is completely different. Given the lack of a record, and plenty of both personal experience and trusted accounts, I deem such exams completely untrustworthy.

The bias runs both ways. Some teachers/examiners decide on the spot which questions to ask, and not all questions are the same: some are easier, and depending on the examiner's sympathy you might get one of those. Due to the lack of accountability I even experienced complete dummy exams, which is why I dropped out of one course. In college there was a known strategy around one exam: skip the written version, which was hellishly hard, and take the oral one instead, which consisted of simply showing up.

It goes the other way, too. For example, a powerful member of the committee for an exam I took long ago was infamous for failing people unless bribed in a very specific manner (not cash in hand, but buying a specific set of services from a family member). I dismissed that as malicious rumor, but it proved to be true. I wasn’t failed, but I was scored inadequately, which slowed my career progress for the next few years.

I was told by academic staff about foolproof strategies for failing students on oral exams. One: fire off a quick series of questions from various areas of the topic, and when the student stumbles from stress, finish them off with a few extra ones. Another: ask questions so deep and profound that they’ll either omit details or start arguing about ambiguity, either of which is reason enough to fail them.

There was a small scandal about PhD exam recordings (which are obligatory) being purged 10 days after the exam, even though one of the failed students had decided to contest the result. For lack of evidence, the exam result was upheld.

It really depends on who is conducting the exam. I was always told that if you want to skate through an exam, you should go for the oral rather than the written one. Many are now recorded, but since they are so widely used there is a minuscule chance any recording will ever be reviewed, and if you think reviewing video is easy, try finding the 3 seconds in a 90-minute video where a student hands envelopes to the committee.

People in general aren’t nice, and while there is sentiment for teachers, they aren’t any more special than any other group. You get lazy, power-hungry, or simply malicious people there too. A written record is thus a safeguard for both sides.


Oral exams sound fantastic. Unfortunately they don’t scale. Sounds difficult for a class of 30 let alone 300.


I think step 4 needs a lot more detail before I would feel comfortable with it (as a teacher).


As a teacher (CS and Math) for over a decade, I agree with much of this. I will only add that, as far as grading is concerned, I think the long-term incentive for the teacher is actually to put almost no effort in at all. There is no pay or status increase for teachers who are tough, consistent graders. In fact, some of the most revered teachers I’ve known essentially hand-hold their students to a guaranteed A in the class. At first, principled teachers may stick to tough grading, but as the years go by and they watch their friends easily make 3x more in industry, just putting a check mark on every paper starts to look like the best way to close that benefit gap.


When I was an adjunct (EE and Math), it was widely known amongst the teachers that the student evaluation scores were primarily a measure of what grades the students expected. And I had to ask myself: if I were a student again, why would I adopt any other strategy?


I don't remember if it was a formal study, but somebody asked students at the beginning of a course what grade they _think_ they'll get, and at the end it fit very well. Basically, all students try to make a particular grade with the least effort, since that's what they are incentivized to do.


Indeed, since the evaluations are handed out before the final grades are tabulated, they can only be based on expected grades. But expected grades are still a better performance metric than a coin toss.


I spent my 20s trying to become a professor and teaching undergrads. The article resonates loudly with me.

One of the best things about nope-ing out of that lifestyle has been this:

I still teach people.

I teach people almost every day, it is incredibly validating, and they find it useful.

I teach new things to musicians I play with. I mentor my coworkers when they are working with new things. I help my friends and partners learn new things. The best is that I know how to research ideas and commit to learning them myself.

Much of formal education has systematic problems that make it struggle to achieve its stated goals.

But "teaching" as a form of human interaction is a wonderful thing.


It’s totally a systemic problem, that’s it. Same thing with me. I love teaching, I just can’t stand the faculty anymore.


My "senior seminar" for my undergraduate degree had the most ingenious grading system I've ever encountered, called the "cookie system". While working on your paper throughout the semester you had to meet certain milestones. Each milestone was due at 6pm and there were the following grading rules:

- if you reach the milestone before 6pm, you gain one "cookie"

- if you reach it after 6pm but before midnight, no cookie

- you lose 1 cookie for each day it's late, starting at midnight the day after it was due

- if at any point during the semester you reach a negative number of cookies, you instantly fail the class

- the final paper is graded pass/fail

This has the advantage that it keeps students on track, but the final grade is just a result of their actual knowledge and the final paper. The first few milestones were trivial to meet, so you get a little buffer if you're late for some reason. In my year, not a single person failed due to lack of cookies either.
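For concreteness, the cookie bookkeeping might look something like this. It's a hedged sketch of one reading of the rules; the function names and date handling are my own:

```python
from datetime import datetime, timedelta

def cookie_delta(due, submitted):
    """Cookies earned or lost for one milestone.
    due: the 6pm deadline; submitted: when the work was handed in."""
    if submitted < due:
        return 1                   # before 6pm: earn a cookie
    # Midnight at the end of the due day, when late days start counting.
    midnight = due.replace(hour=0, minute=0, second=0, microsecond=0) + timedelta(days=1)
    if submitted < midnight:
        return 0                   # after 6pm but before midnight: no cookie
    days_late = (submitted - midnight).days + 1
    return -days_late              # lose one cookie per day late

def passes(milestones, start_cookies=0):
    """Instant fail if the running cookie balance ever goes negative."""
    cookies = start_cookies
    for due, submitted in milestones:
        cookies += cookie_delta(due, submitted)
        if cookies < 0:
            return False
    return True
```

Under this reading, each on-time milestone buys exactly one day of slack later in the semester, which matches the "little buffer" described above.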


So if you hand in the first assignment a day late, you fail the class?


Yes, but the first assignment was something silly so everyone could do it. I could also see giving everyone 3 cookies to start with as an initial buffer


Easy solution to the regrade problem: never do one-off regrades. Always say that a regrade of one question will require the entire exam to be regraded. This will be done by someone else, or by the prof, who may grade it worse than a rushing grad student who is just saying “yeah, yeah, fine, ok”. Most students fear this, especially when it’s the professor doing it. With this policy I almost never had to do regrades.


This didn't work when I tried it. With how many students, or how many times, have you implemented this policy? First, it doesn't make sense when the regrade request is most objective (like points were added wrong, or the grader didn't see something that the student wrote). And if you say that it doesn't apply to straightforward grading mistakes, then you get emails asking you whether something is a grading mistake or has the chance of lowering a grade.

And I've tried this policy before, and got students who wrote in my course evals something like "the professor intentionally tries to scare students away from asking for regrades by threatening to lower their grade even more." And what about when you are still asked for a regrade (which in my experience was not zero, but maybe a third to half as often as without this policy)? In those cases you end up doing way more work, so the level of effort actually increases.


As a professor, this completely resonates with me. For example, I take attendance and make it 5% of the grade. Then I give 5 free days and am generous with absences due to whatever. Why? Because it's a nudge for many students to get them to come to class, which makes them stay engaged, and ultimately get a better grade.

(The other reason I take attendance is so that I can recognize at least most of them by mid-semester, so can call on them by name when they raise their hand.)

And I'm often torn with taking points off for submitting work late. On one hand, why should it matter exactly when they submitted the work, if it's good work? On the other hand, I know that if I just said that there's no late penalty, some significant fraction of the students would wait till the end of the semester, then realize that they haven't been keeping up, then create headaches for everyone involved, including themselves.
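As a rough illustration of that attendance nudge (5% of the grade, 5 free absences), the bookkeeping can be a few lines. The class count and scaling here are invented, not the parent's actual scheme:

```python
def attendance_score(absences, free_days=5, total_classes=28, weight=5.0):
    """Points (out of `weight`) earned for attendance, after free absences."""
    penalized = max(0, absences - free_days)   # only excess absences count
    frac = max(0.0, 1 - penalized / (total_classes - free_days))
    return weight * frac
```

The point of the design is that the first few absences cost nothing, so the generosity is built in rather than negotiated case by case.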


I ran my class like a job (senior level interactive media). We had assignments with deadlines and I did PR reviews.

Deadlines are deadlines. Cut scope, features, etc but absolutely no late work.

You can guess how it went.


> I ran my class like a job... Deadlines are deadlines.

We were having a conversation at my college about deadlines at some training thing and someone pointed out that almost no job is like that. That movie scenario where the guy has a big presentation, but it's also his daughter's dressage recital or whatever, and if he misses the presentation he'll lose his job? That doesn't happen. In the real world, you just say "I can't do it that day, let's reschedule for next week." and that's fine. Most real world deadlines are soft.


As a software engineer, I generally agree.

My partner is a corporate lawyer, though, and deadlines are a _big deal_ for them.


Rocket launches, if you're a rideshare payload, are not soft deadlines. If you don't get your payload delivered on time, you lose. Even if you're the only payload, you have hard deadlines: there may be a few windows to launch in, but if you miss those, the next opportunity might not come for another 26 months. Someone will lose their job over that.


They are soft deadlines; launches are routinely delayed due to unforeseen circumstances, such as weather.


A weather delay does not mean you get more time to get your payload onboard.


>That movie scenario where the guy has a big presentation, but it's also his daughter's dressage recital or whatever, and if he misses the presentation he'll lose his job? That doesn't happen.

Sure it does. If you are scheduled to present to a major client, you can't easily reschedule for next week, especially if you're presenting something that will have impact. Now, you probably won't lose your job if you tell your team why you can't make it and it's legit, but frankly you may if you no-show.


Definitely depends on your deliverables. My recent deliverables were campaigns for CES and the Super Bowl. I can't say "let's schedule for next week" for those. On the other hand, I think if I screwed up I wouldn't get fired either... maybe someone up the management chain would :)


You are not wrong, but it was a class agreement upfront, and I, of course, am human as well and gave TONS of extensions.

They didn't like it, though. And unlike CS courses (this was in a media dept) none of the homework was very hard or required more than a few hours of effort.


A nice compromise that I've appreciated in the classes that I've taken - have strict deadlines, but offer X days (say, 2) of a no-questions-asked extension. It creates the clear expectation that work be turned in on time, but offers a small relief valve for one-off problems.

The problem is that this adds extra bookkeeping for a professor who's already busy with everything else going on, which gets back to the original poster's point of becoming everything that they hated.
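That bookkeeping is small enough to automate, though. Here's a minimal sketch, with invented names, of a per-student late-day budget:

```python
class LateDayBudget:
    """Tracks each student's remaining no-questions-asked late days."""

    def __init__(self, total_days=2):
        self.total = total_days
        self.remaining = {}                    # student -> days left

    def submit(self, student, days_late):
        """True if the late submission is covered by the student's budget."""
        left = self.remaining.setdefault(student, self.total)
        if days_late <= left:
            self.remaining[student] = left - days_late
            return True
        return False                           # budget spent: normal late policy
```

In practice this is what tools like autograders or LMS plugins end up doing: the policy stays strict on paper while the relief valve is applied mechanically.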


Unfortunately, even graduating students usually still have the life experience of a child and can't see the real purpose in education.

Most are better off without a degree.

Most who get a degree would be better off with an apprenticeship tailored to their field.

Most of the rest would benefit from getting a few years of real life experience first.

The reduction in attendance would lower costs and reduce degree inflation in the job market. More productive years would be available and people could replace their college debt with a mortgage and have something of tangible value when they were done.


This. I would enroll, but nobody would help me financially like they were ready to do when I was younger. Also, let me customize the track a little bit. It's so focused on the inexperienced kids it hurts.


Sounds like you don't have the life experience to see that there is more point to getting a degree than learning. Maybe you don't like that, but it is true.


What things are supposed to be learned alongside said degree?

I graduated years ago and I still have yet to see what these other points are.


The other benefits of a degree beside learning are:

* Makes it way easier to get a job

* Great social experience


You can gain social experience lots of other places while making money rather than spending tons of money.

Degree inflation means that getting a job with one is about as hard as getting a job without one was a couple decades ago. In any case, who you know is usually more important than what you know when starting out. And if you wind up self-employed, a degree hardly matters at all.

I started out my professional programming career before I had a degree (and started learning programming around 9-10 years old). In my many, many interviews, I've never been asked about my degree.

Likewise, I've never asked a programmer about their college education in an interview and I don't see a reason to start asking that question. I care about if you can do the job and that's all any company really cares about too.


> Degree inflation means that getting a job with one is about as hard as getting a job without one a couple decades ago.

Agreed. Still both easier than getting a job without a degree now!

> In any case, who you know is usually more important than what you know when starting out.

Erm not really - when you're starting out you don't know anyone, so you can't use networking to get a job and qualifications are relatively more important. It's only when you've had a bit of a career that you start to have a network and can e.g. bypass HR because you know someone.

> I've never asked a programmer about their college education in an interview and I don't see a reason to start asking that question.

Me neither but programming is a bit of an exception. I think because so many people learn it on their own, and because it's relatively easy to directly test in an interview. It wouldn't be the same for accountancy or management or lawyers or doctors or ...


In high school I had a chemistry teacher who offered that the grade submitted at the end of the year would be the greater of your grade with and without homework. He also warned that only occasionally is the latter a benefit rather than a harm. This intrigued me so much that I did the homework but didn't submit it, in order to get a bad homework grade and overall top marks in the class. Anonymous grade info was posted on the wall periodically, and everyone wanted to know who had the zero in homework.


Much of the article and the comments here can be explained by the death of good faith. People no longer believe in competent, benevolent power, or in a process of maturation that challenges power in acceptable ways. Instead we build "systems". We pretend these systems are equitable. They merely hide power and force it to become malevolent, incompetent, and terrified of challenge. We call this stagnation "progress".


This phenomenon has plenty of analogues in the corporate & government worlds as well. A formal performance review system is instituted to keep people from spending all their time sucking up to their boss, and then is progressively refined to deter all the ways that it has been gamed, until it is very well adapted to preventing the historical forms of gaming the system but bears no relation at all to incentivizing good business results. A codebase gets a series of bugfixes, until it ends up slow and impossible to maintain, and then is thrown away when a competitor adapts to market conditions faster. A new government bureaucracy is formed to identify and prevent all the ways that terrorists could bring down airliners, and only serves to violate flyers' privacy and add millions of hours to accumulated travel time.

The root cause, I think, is that humans are really bad at considering both the specific and the general in their decision-making. A new procedure might perfectly solve the problem you're having right now, but the cumulative effect of all these new procedures is to make the overall system useless.

Long-lasting systems provide for ways to throw away whole parts of the system and replace them with something simpler, without throwing away the system itself. Whole industries get outcompeted by a nimble startup. Codebases get refactored, and gnarly subsystems deprecated and replaced with clean interfaces. Elected officials get thrown out of office.

Perhaps the right way to look at this is to embrace change, and position yourself as the destroyer and replacer of systems that have become overcomplicated and bloated. That's why the tech and finance industries have been so highly compensated over the last 20 years: together, they're throwing away whole parts of the 1980/90s institutions that had become bloated through 40-50 years of progressive micro-optimization.


The example about the ever-increasing list of assignment requirements appears in many other domains than teaching. Think of the AirBNB with a fifty-page guide whose 'rules' are all thinly-disguised anecdotes about something that went horribly wrong. And I'm sure we've all seen business processes like this...

This ends up (somewhat) preventing asshole behavior at the expense of making life worse for all of the non-assholes. But in reality, assholes will find new and imaginative ways to be assholes, no matter how many specific rules are in place.

One hopes that better solutions are possible. In the teaching example, we could imagine keeping the rules broad and simple, and including a reward for any student who doesn't require 'special treatment' through regrade requests, etc. (I have seen systems where regrades include a grade reduction if no errors are found.)

In AirBNB, deposits and waive-able cleaning fees serve a similar purpose.


It's regulations in general. Whenever someone complains about regs and red tape stifling innovation, it's generally that someone tried to game the system. "This is why we can't have nice things", etc


Yep, this is it. Ever wondered why syllabi and problem statements are so long, convoluted, and oddly specific? It all boils down to "Fool me once, shame on me. Fool me twice, it's going in the syllabus." [0]

Edit: I did not come up with [0], but I also don't remember where I saw it.


So I've taught a lot at the university level, and reading this and the original blog post they were responding to I realized that I gradually shifted in how I saw exams.

The traditional model, the one implicitly adopted in the posts, is one where the instructor presents material, maybe with some discussion or engagement with the material in the form of activities or assignments, and then evaluates understanding on an exam of some sort. In this model, the exam is a measurement. It makes sense from this standpoint that all you really need is some megaexam that measures your comprehension of the material, and if you pass it, you pass. There is something to be said for this in all sorts of areas of life.

There's another model, though, where the teacher is a sort of coach. In this paradigm, your role as instructor isn't just to present material and then measure it, but to provide incentives along the way for the student to engage with the material and process it. In this model, the exam is an activity. You present a series of quizzes or exams for the student to problem-solve, and you incentivize this by giving credit or not giving credit. It's the equivalent of training drills in sport. All those assignments and midterm exams are incentives for staying engaged with the material along the way, to practice.

I suppose you could say something like "well taking the final exam repeatedly could serve that role, and you can't literally give the same exam over and over again due to cheating and learning to the test, so what you'd really be doing is giving multiple exams, which is kinda like assignments" but then at that point you've redefined things so much it's a moot point. There's also little point in assigning material the student doesn't understand yet, so what you end up with is what usually is done, which is units, with assignments or interim exams that are graded along the way.

Ideally you'd have material, activities, and exams tailored to specific students and their specific progress, but in practice universities just don't have the resources for that. It's too expensive, if you include social components as part of the learning process. There are also general trends that are too hard to ignore (most students, wherever you are, will be in the peak of a bell curve of some sort), and so you end up with what usually happens (which is sort of the author's point).


At Oxford, my degree depended entirely on 8 three-hour exams at the end of the final year, which were mostly set by different people from the people teaching you. There are issues with this (not least that if you are slightly ill during the two weeks of exams, it affects your whole degree), but one thing that was really nice compared with the other universities where I have taught is that the relationship with the people teaching you feels far less adversarial and more cooperative. Also, by not having exams in the second year (there were exams in the first year that you had to pass but which didn't count towards your degree), there was more emphasis on really understanding the subject rather than just jumping through hoops.

But I think this only worked because of the teaching system there - as a general rule you had two tutorials a week which were usually one-to-two (or sometimes one-to-one or one-to-three) and it was very hard to slack too much or not do the work for them. Such a system requires a significant amount of funding so probably isn't scalable. Colleges did sometimes set their own internal exams too during the course which didn't count towards your degree (but for which you could get monetary prizes if you did well in them!) but which you could fail which would set a process in motion in which you could be kicked out of the university.


The tutorial system seems way better than what most schools do. Interesting that one of the oldest schools seems to have one of the most sensible systems, possibly because they've resisted the pressure to make everything more scalable.

The average American private university costs $45k per year (about the same as Oxford's international tuition) so you'd think they'd have the resources to do this as well. As far as I know only a couple actually do it (e.g. MIT) and their student-tutor ratio isn't as good.


"If the colleges were better, if they really had it, you would need to get the police at the gates to keep order in the inrushing multitude.

See in college how we thwart the natural love of learning by leaving the natural method of teaching what each wishes to learn, and insisting that you shall learn what you have no taste or capacity for. The college, which should be a place of delightful labor, is made odious and unhealthy, and the young men are tempted to frivolous amusements to rally their jaded spirits.

I would have the studies elective.

Scholarship is to be created not by compulsion, but by awakening a pure interest in knowledge. The wise instructor accomplishes this by opening to his pupils precisely the attractions the study has for himself.

The marking is a system for schools, not for the college; for boys, not for men; and it is an ungracious work to put on a professor."

-- Ralph Waldo Emerson

Source: http://www.anvari.org/fortune/Miscellaneous_Collections/1175...


I believe college used to be much more about being close to the cutting edge of knowledge and research (thus why instructors have historically been research professors), whereas now it is more like an extension of high school.


> And most of the students are amazingly gracious and drop the issue. But some don’t, and they keep complaining and asking for regrades, and if those aren’t accepted they (or their parents) contact the principal/chair/dean/ombudsperson, who are required to have an investigation.

Reminds me of The Most Intolerant Wins: The Dictatorship of the Small Minority

https://medium.com/incerto/the-most-intolerant-wins-the-dict...

But it's the other side of the coin

The minority rule will show us how all it takes is a small number of intolerant virtuous people with skin in the game, in the form of courage, for society to function properly.


A lot of the issues here seem to be a product of the US education system that defers excessively to the students (probably because they’re paying big bucks). But it could be worse. I did my grad studies in a European University where there was only one examination at the end of the year with no option to contest or even get feedback. The year before mine a Nobel prize winning professor made a mistake on one of his test questions, causing the half of the class that chose that question to fail. Even though it was his fault, there was no recourse. One half of the class had to repeat the entire year-long course!


Interesting. I grew up in an “exams count only” system that used a 2 decimal point score precision. So if you scored 89.75 at the end, you completed the course with 89.75. It wasn’t bucketed into grades.

There were 4 exams: two quarterlies, a half, and the final.

I don’t think it ever struck any of us that if we failed to study for an exam that it was anyone’s fault but our own.

I actually really like articles like this because they have so many unnecessary assumptions:

“Things are this way because students will complain if they suck at things”

The unstated assumptions are that students in this schooling system mostly have external loci of control.

The second assumption is that courses are designed in an epicycle-adding manner, based on the least reasonable member of the previous class. That is, the course is a cost function that aims to minimize the failure of the greatest idiot, which implicitly adds cost for the smart guy.

So you have built a schooling system optimized for the greatest idiot who believes someone else is responsible for all of his failures. This actually explains why so many college students here are the way they are.


> I don’t think it ever struck any of us that if we failed to study for an exam that it was anyone’s fault but our own.

This is interesting to me. If you're not working through problems, how do you get the feedback to even know whether you really understand the material? Assigned homework was nice because a professor knows the key ideas and can efficiently assign work covering those parts.

Right now I'm self-studying real analysis, and I wish I had someone to pick problems for me; otherwise I'm just trying to do them all, to ensure that if I don't know something critical it will come out when I can't solve some particular problem.

On the other side of the same coin: how does a teacher know whether their teaching is effective without frequent feedback in the form of student grades? I feel like waiting several weeks to find out whether you need to change course does a great disservice to the students.


Oh we worked through problems all the time. But if you didn’t then the next class would be impossible to understand and you’d just sit there like a fool. So the incentives were already aligned there.

And you get feedback from your peers as you work through stuff and also from the lecturer at the end of the next class.

It just wasn’t graded.


There's much to talk about here.

A lot of this is sensitive to context. Students in high school, college, and grad school have different levels of maturity. There are also different incentives for each setting.

I would say that high school and college students are more similar than grad students though.

Perhaps more important is the fact that the power the teachers have in each setting is different as well: high school teachers have little power whereas college professors have much more leeway in designing and grading their courses.


I feel like a lot of development happens between 14 and 18, so I don't understand why freshmen and senior years of high school are so similar from an academic philosophy perspective. Even in a private school where the teachers have a bit more freedom. I think a lot of kids get thrown into college with no idea how to manage themselves because they never had to before. There ought to be a better way to ease into that.

I agree it is very context dependent though. Not just academic year but also class content. Some courses need to lay a strong foundation, others would be most useful as a survey, still others are about synthesizing knowledge from across prior courses. Some classes contain students mostly forced to be there and others contain mostly students that are excited about that particular material. Different fields lend themselves to different assignment styles. And so on.


Tangential: The article mentions Chesterton's Fence. I clicked the link to learn what that means and didn't find it (it's just a link to the guy's Wikipedia page). But check out the beautiful signature of this Chesterton fellow!

https://en.m.wikipedia.org/wiki/G._K._Chesterton#/media/File...


Chesterton's fence refers to a principle that before changing something, you should first understand why it is the way it is.

> Chesterton’s Fence is a heuristic inspired by a quote from the writer and polymath G. K. Chesterton’s 1929 book, The Thing. It’s best known as being one of John F. Kennedy’s favored sayings, as well as a principle Wikipedia encourages its editors to follow. In the book, Chesterton describes the classic case of the reformer who notices something, such as a fence, and fails to see the reason for its existence. However, before they decide to remove it, they must figure out why it exists in the first place. If they do not do this, they are likely to do more harm than good with its removal. In its most concise version, Chesterton’s Fence states the following: Do not remove a fence until you know why it was put up in the first place.

[0] https://fs.blog/chestertons-fence/


As an instructor, I agree with everything, but will add that I try to keep a positive framing.

That people procrastinate and need incentives is human nature; there is no more use bemoaning this than bemoaning politics. The job of the instructor is precisely to enforce a system of rules and incentives without being so dogmatic that the class turns into a grind, while promoting enthusiasm for learning, creating inspired course content, and balancing all this with the instructor's own scholarship priorities. It's a tall order and very few people do it well.

That mastery is difficult and subtle does not distinguish pedagogy from other professions, but what is different is that every shmo off the street remembers being a student, so thinks they know the secret formula for pedagogy.


For the most part I actually had wonderful teachers during my school career. What I bitterly disagreed with was a lot of the curriculum, and I knew they didn’t have a say in that.

So my question to educators here would be: Do you ever feel like you’re forced to teach topics you know won’t benefit students?


I teach high school English, so I have a greater degree of freedom in my curriculum development (there is no textbook. I create everything). That is actually not my issue.

My problems are mostly reality/logistical constraints. There is always more I could do, more I could give, more I could help every single student, but I would have to learn to freeze time or never sleep.

It’s unhealthy and irrational, but I feel shame for not giving more when I know what a kid needs but I circumstantially lack the capacity to give it to them.


> Do you ever feel like you’re forced to teach topics you know won’t benefit students?

Normally, professors teach things they have some expertise in, and they're biased to think that this is useful to students.

Besides, it's often very debatable whether something is useful or not. For instance, I used to teach things such as theory of computation, automata theory, and similar so-called theoretical classes. You could debate ad nauseam whether this benefits students or not. Some would argue it's useless and students should do more javascript labs; others think that these are the foundations of our field, unlike the latest JS framework which will be obsolete in 2 years.

Some of my colleagues go to great lengths to convince students that the class they teach is useful, but I'm not convinced this is necessary. I've noticed that students are happy as long as they think they learn something from the class, and that the class is neither too hard nor too easy. They don't question the utility of the class if the teacher manages to make the topic fun. For instance, labs, exercises, and exams should be of gradual difficulty, so each student feels they can make progress. This is challenging to achieve when the class audience is heterogeneous.

So rather than the choice of topic, what happened to me was that I disagreed with the way the topic was taught. In my university, we would sometimes work in a team with little say over the class syllabus, labs, exams... This could be frustrating, and I'd just leave the team.


Ha! As a pure mathematician-turned-software engineer, Theory of Computation was one of the few classes I took that remains even remotely applicable. At the time I thought it was really cool, and probably made CS a little more appealing.

Conversely, in my current role as a backend/systems/researchy person, a JS class would broaden my horizons a bit like a literature class might, but I think both would be equally useful to my current job.


In what sense is theory of computation applicable to your current job?


My current role involves analyzing and understanding customer-provided SQL. Although vanilla SQL is not Turing complete, in the past I’ve definitely decided to deprioritize thinking about certain approaches due to growing complexity and because they “smelled” like the Halting problem.

Going farther back, I’ve seen a handful of instances where someone was looking for help trying to solve a graph problem, until it was pointed out that it could be reduced to an NP-complete problem. Unfortunately I can’t recall the details.


I feel that I have requirements that don't benefit students much or at all, but I just don't spend much or any time on them. At the end of my courses, I administer a test graded by me, so I can just not fail anyone. I know on day 2 if someone needs to be failed, when they do their first practical exercise. I'll probably have them removed from the course so as to not waste everyone's time, if that happens (but it almost never does.) I also have a very specific curriculum that is meant to be followed, but it's mostly not very good or focuses on things that are no longer major focus areas, because the development process lags so far behind reality. Again, I just do whatever I want, especially for practical exercises.

On the other hand, I teach another course that couldn't be more different. On the first day I give students every question to all the tests and tell them to start studying. If they can answer every question, they know plenty. I almost always give the same test, so different cohorts could (and, I'm quite confident, do) cheat by tracking which questions will be asked. I warn them against this, but ultimately I don't care because the final exam is not generated or graded by me, so it will be their predictable downfall if they go down the road of cheating.

You could reasonably say I neglect the second course; I do, for good reasons that mostly have to do with what I said about the first set of courses. I also maintain the systems that provide education and training to my students relating to both courses, which further justifies my allocation of attention.


I was a high school math teacher for 9 years and taught classes with high stakes state administered tests at the end, and the content of the curriculum was the least of my complaints.


My college classes gave a 5% weight to homework, 45% to the midterm and 50% to the final. Since I was a good test taker I could skip almost all the homework and still get an A or an A- if I didn’t do as well on one of the exams. It also didn’t help that the professors gave extremely hard exams to small classes: I distinctly remember getting a B+ on an exam where I got 1 out of 6 questions right because everyone else got half a question right. I still don’t really know quantum mechanics basics but my grades say otherwise.


I'm currently on a goal- and motivation-research reading kick, so I think I can add value to this.

School pedagogical approaches are weird: broadly, they appear to test a student's ability to endure forcing themselves to learn material they may not find interesting. Obviously, this divides the student population, and people with better executive functioning, or stricter parents, float to the top. It's what we've done for so long that we're anchored around the concept. From a motivational standpoint, for many students this can kill curiosity and the desire for learning.

The consensus of goal-attainment research clearly demonstrates that specific and sufficiently difficult tasks lead to better performance. It's even more ideal when the individual sets the goal, or at the very least is involved in developing the organizational goal. This goes against the grain in schools. Sometimes teachers are incredibly vague, others specific. And, unless the student is in a highly individualized learning environment, like working on a capstone project, they play no role in course goal setting.

You start to see the potential problems when research demonstrates that the highest individual performance occurs when individuals are given a specific and sufficiently difficult goal with a learning-oriented approach and decreased emphasis on performance (tests and grades). On the other hand, when an individual already possesses the skills and knowledge for assigned goals, a performance approach, not a learning approach, yields a higher overall performance rating. Also, by far the worst goal orientation among these is performance avoidance, that is, performing to avoid negative consequences.

Students are in school, they're in the process of obtaining skills and knowledge for a career they may not even have solidified yet. Students are largely falling into the performance avoidance category, then the performance approach category, and, finally, for the luckier few, the learning approach category. Add in the teacher quality variable, whether they assign specific or vague and easy or sufficiently difficult assignments, and you start to see how this creates problems for students and for society.

I speak from experience. I failed miserably during school, even dropping out of high school, for a variety of reasons outside of my control and am extremely fortunate to be where I am today.


Soooo... I teach IT for a living and am thankfully free of being on the research side of things. I am incredibly lucky to be able to generally do things how I like.

I let them know early on: I do grades because I have to, not because I enjoy them. I've settled on the following: I try to make the biggest assignment an ongoing project-thing that they "turn in" more than once, and try to coach them into primarily learning and doing -- and turning in something that I can reasonably slap a good grade on.

I do one or two small quiz-type deals on top of that. Very hard multiple choice, but take-home, and you're on your honor not to consult live humans. Also, I do the nice type of "curve," so if your fellow students' grades average low, that helps you. Honestly, this is much more to maintain classic ideas about grading, though I suppose it helps keep the younger ones on their toes. Also, I find the psychological effect of "QUIZ" to be sufficient to get people to prepare, even when they don't check the syllabus and see that these quizzes aren't all that much of their final grade.
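That "nice" kind of curve can be sketched in a few lines. The shift-up-only rule and the target mean below are my assumptions about what "nice" means here, not the commenter's exact formula:

```python
def curve(scores, target_mean=80):
    """Shift every score up so the class mean reaches target_mean.

    Never shifts anyone down, and caps at 100, so a low class
    average can only help an individual student, never hurt one.
    """
    bump = max(0, target_mean - sum(scores) / len(scores))
    return [min(100, s + bump) for s in scores]

# A low-average class gets everyone bumped up by the same amount:
assert curve([60, 70, 80]) == [70, 80, 90]
# A class already above the target is left alone:
assert curve([85, 90, 95]) == [85, 90, 95]
```

The uniform bump preserves rank order, which is one reason this style of curve feels fairer than rescaling.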

This seems to be a pretty good way to do IT type classes.


I have taught at college and university in STEM subjects, and also professional corporate training courses on software development to senior and lead engineers for about a dozen <companies you have heard of and use their products every day>. And I will say that universally, every single class runs the exact same way. There are those that want to learn, and there are those that are killing time and will argue with you until they are blue in the face that they deserve a different grade, and they have come up with more excuses and reasons why something wasn't done than I could think up in a decade.

When I used to teach at college and university I would think "there's no way you are ever getting a job in this field" and then when I did professional corporate training I would think "I have no idea how you keep your job, but I do know that if I showed up for an interview, you wouldn't give me the time of day."

You can argue with "well maybe you're a lousy teacher" and whether that is true or not, it doesn't account for the flat out denial and debating, and dare I say it, outright lying, about why the assignment wasn't done.


One of my favorite teachers in high school had the following policies:

* There will be a short quiz every week covering recent material.

* Homework is optional for any student who got an A on the last quiz (due to the length of the quizzes, that essentially meant 100%).

* Anyone with an A average in the class so far and an A on the last quiz is permitted to sleep in class.

It worked great. Nobody's time was being wasted on busy work, nor were people recklessly left behind.


> But some are an assault on reason, with every word of the assignment creatively misinterpreted. It was never stated which temperature circuit to build or how to prove it works or what level of explanation was necessary. And who’s to say what “build” means?

You have to think like a software engineer. Test first: write the requirements from the perspective of a test which fails if the requirements are not met.

Rather than dictating irrelevant details of the apparatus that is to be made during the assignment, describe a procedure by which it can be verified to meet the requirements.

"Build a temperature monitor circuit.": what is it monitoring: the temperature of what? Where is that taken? What is the output? Decimal temperature in Celsius to the tenth of a degree? In what range? Or else is there just a control output: is there a hysteresis to turn something on and off like a thermostat? Etc.

"Test it to prove it works." That's a poor way of giving requirements. You need specific test cases. You may have to have specialized equipment on hand that the students can use, like a controlled source of reference temperature.


Absolutely. In my experience teaching in college, this is the correct approach.

A very useful thing to have, both for teachers and students, is a "rubric": a succinct description of how the work will be evaluated, and the importance (weight) of each feature.


Yes! And then the rubric should actually define the pieces of the assignment you care about them getting right. I have this debate a lot when we’re designing assignments for an intro to Python class (which, sadly, we have to do very regularly because of sites like Chegg…). Figure out what it is you want the student to get out of the assignment (e.g., manipulating dictionaries or sorting values) and evaluate their results based on whether they did that thing. If they did the thing but missed some nitpicky details they don’t get a perfect grade, but they should get a solid, passing grade for it.


It's a silly exaggerated example. The point still stands, at least from my experience. Even with a rubric, people still (intentionally or unintentionally) find ways to do things that circumvent the learning goals/outcomes of the assignment.


If the real objective is learning goals/outcome rather than (or in addition to) a working temperature circuit, then that objective has to be somehow encoded into the requirements. Or else, sometimes all the stated requirements will be met without that unstated one being hit.

This is difficult because, for instance, the possibility of cheating means that the person who says they performed the assignment might have contracted it off to someone else and learned nothing.

Someone who already has all the required knowledge can also just spin out the assignment without learning anything.

Basically, learning is a state change in the pupil; if you want to validate that some state change occurred, you have to have a way of measuring the state before and after and calculating a difference.


Any visionary attempting to restructure traditional learning should read this, and I say that with no ounce of malice or sarcasm - it's a nice hazard map, and it suggests at least one constructive change that should be enacted:

1. Grades should be a continuum (percentage), not bins (A, B, C, ...). "When you are forced to discretize into a small number of bins, injustice is inevitable." There is no rational reason for report cards not to be an aggregate of numbers rather than low-resolution letters.
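The injustice of binning is easy to demonstrate. The cut points below are one common scheme, assumed for illustration:

```python
def letter(pct):
    """Discretize a continuous score into letter bins (cut points assumed)."""
    for cut, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cut:
            return grade
    return "F"

# Two students a rounding error apart land a full letter apart:
assert letter(89.9) == "B"
assert letter(90.0) == "A"
```

Any choice of cut points produces the same boundary effect; only reporting the underlying percentage avoids it.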

The crux of the justification given for enacting these policies students hate is that students need motivation; their human nature, even given a clear end goal, is not enough for most of them to learn at the required pace without intermediate, forced goals. Of course this leads into the problem of students carefully interpreting assignments to do as little work as possible, and of lowering the quality of every student's experience to make assignments painfully clear.

All this leads so naturally back into the temptation to loosen the standards of the class. If students are going to lazily and disingenuously complete assignments, they will not learn, and it should show in their exam grades - but making every student perform the same problems every time will waste half the students' time if your assignments are catered to the slowest learners; the fastest learners will feel completely patronized and waste the most time. Don't punish your best students.

The real solution is breaking up classes. One class as a monolithic, multi-month, atomic unit causes problems. Each intermediate exam should serve to split the class into many smaller classes, which can be failed and retaken modularly. In fact, students should decide when they want to take a modular class's exam, and can stay in or attempt to test out at their discretion. Now, all of a sudden, the relationship between doing assignments and performing on these intermediary tests is tangible, and need not be imposed through forced assignments and over-specified instructions. Students can still be required to complete a final exam encapsulating all material from each modular class (longer re-test periods could be applied if need be), and if they perform poorly, students would have the option to retake those modular sections to build a more robust understanding.

This has other benefits as well: prerequisites can be much smaller, and for more specific pursuits of knowledge an express course can be constructed from just the strictly necessary modules of each full course. Students wanting a complete understanding can always go back and pick up where they left off.


> Any visionary attempting to restructure traditional learning should read this, and I say that with no ounce of malice or sarcasm...

On the contrary. Part of TFA’s point is that, like many other fields, outsiders usually have a much poorer idea than they think of the hows and whys of teaching. Until you’re actually in front of a classroom it’s not obvious why seemingly sensible ideas are often ineffective or unworkable (see also: any HN thread about k-12). Reading a blog post, rubbing some brain cells on it, and then making pronouncements is exactly the kind of thing TFA warns against.

Any visionary attempting to restructure learning should teach.


> On the contrary. Part of TFA’s point is that, like many other fields, outsiders usually have a much poorer idea than they think of the hows and whys of teaching.

So you're saying those attempting to restructure the classroom should not take this blog post into account? You seem to fail to recognize the value of this blog post, or of sharing knowledge in general: it gives others an express pass to knowledge that you have gained through first-hand experience. That is the meat of the article: common pitfalls new teachers trying new ideas fall into - if your goal is to change education you must experiment, and if you are going to experiment, this blog post is invaluable.

Forevermore, an experimenter attempting a classroom restructuring who has read this post has artificial first-hand experience to draw from - to state that anyone attempting such a thing should ignore this piece and generate first-hand experience is asking them to retread mapped territory. Unless your prescription is that people shouldn't experiment - a notion which warrants no respect.


> So you're saying those attempting to restructure the classroom should not take into account this blog post? You seemed to fail to recognize the value of this blog post, or sharing knowledge in general...

Oh come on, this isn’t Pedagogy of the Oppressed, it’s a blog post of “things I didn’t understand until my first year on the job”.

TFA is a great post! It’s particularly interesting to laymen, and maybe rookie post-secondary faculty who have been taught a lot about their subject and very little (if anything) about how to teach. But none of it’s novel for anyone who’s been on the other side of the desk.

Talking about education practice in public forums is hard because everyone has a stake, and everyone thinks their experience as a student or parent plus a little common sense makes them an expert. Again, see literally any HN thread about education. TFA does a great job of reminding laymen that often those Chesterton fences are there for a reason. But sorry, I don’t think it’s a rejection of the idea of “sharing knowledge in general” to say that “visionaries attempting to restructure education” probably shouldn’t be laymen.

So, okay, “on the contrary” in my OP was a little strong. Not that anyone shouldn’t read it. But just because something is new to you doesn’t actually make it new.


Get real, man. The whole post is premised with "these are the first things you learn on the job when you try to innovate in the classroom." I'll repeat myself: any educators who are actually going to try to change parts of the protocols, procedures, and practices for the better (rather than just saying it can't be done every time an idea crosses their path) are either going to learn the hard way and waste time, or take into account posts and experiences like those this blog post lays out and not waste time trying what's been proven to fail.

Your argument, if it has any relevance to what I wrote, is that these teachers shouldn't innovate because they don't have experience teaching. Okay, well you can spend twenty years running a classroom in all the old ways you like, and you still won't have gained much valuable experience when it comes to new procedures, considering you haven't tried any. When it comes to innovating or experimenting, the most valuable experiences are those related to attempts at innovation and experimentation, so it's quite obvious that the opening moves and results of those attempts are invaluable to anyone trying to change things.

And yes, like it or not, people sharing their wisdom like in this blog post will make new teachers in general much wiser than the generation of teachers the author was in when they first started - that's the point of sharing knowledge graciously. Teachers, new or old, who have first-hand experience or who take writing like this to heart are absolutely better equipped to innovate in the classroom than those who have not. The experience-only mantra is, ironically enough, antithetical to pedagogy in general.


> Your argument, if it has any relevance to what I wrote, is that these teachers shouldn't innovate because they don't have experience teaching.

Haha okay, if you want to read my post that way there’s not much anyone can do for you.

If you can’t understand why “visionaries trying to reform education need to read this entry level blog post, and having read it I will now do the exact thing it warns against” gets eyes rolling there isn’t a whole lot else to say. I get that TFA seemed really exciting and innovative to you. It isn’t. It’s a surface-level précis of things any practitioner knows. Which is fine! Because, once again, it’s written for laymen.

If you actually want to dig into the nuts and bolts, here’s a book recommendation that explores some of the points raised in TFA in depth: https://www.amazon.com/Grading-Equity-Matters-Transform-Clas...


You seem to be veering. I don't even know what TFA is, and I certainly wasn't talking about it - get a grip man. Are you seriously trying to tell me that information about opening moves that don't work isn't going to be useful to new teachers? Get your act together.


I'd like to see in a forum for dentists a thread where they offer clever suggestions for how to change software development. Clearly, all these dentists have used plenty of software before.

They would have seemingly great ideas like "programmers should get 10% of their pay docked each time I encounter a bug in a program" or "the real solution is to hire someone to test the program from start to finish before releasing it".

I bet programmers on Hacker News would be livid upon hearing these suggestions, but seem to have no problem announcing their clever solutions about other disciplines (not excluding myself).


Terrible analogy. What authority might a team of QA testers have on restructuring the methods for software development? Perhaps not as much as a software developer, but certainly enough to grope at and possibly conceive of new and useful ideas. A student is a QA tester for a classroom, not a dentist using a program: they were involved in the entire process and saw basically every mechanism, with varying levels of ignorance about motivations.

Comparing a student to an end-product consumer of software development is an embarrassingly long stretch - teaching is an art, but it's not that complicated, nor that disconnected from its patrons.


Students are users of a class, as much as a dentist is a user of a dental software application. Neither were involved in the making of the class or software. Both were delivered the experience as designed by the course staff (instructor and/or TAs), or software team. Both only see the final product after months or years of development, done by specialists trained for many years even before that.

The error you're making is stating that "students are involved in the entire process" which is laughable. Many classes have gone through years of iteration, and even new courses take many months to develop before students set foot in the classroom, not to mention the years of experience and education needed to get the instructor to the point that they can even make a class in several months.


The only way you can refute this is by being obtuse. QA testers and dentists using software are not the same, not at all - that was the premise of my whole point, and you refuted it by pretending I didn't say it.

I assume you are a teacher, so what's truly laughable is that I must explain this to you: teaching students is at least a factor more intimate than delivering a finished piece of software or other commodity to a user - it's also a factor less complicated to deliver an MVP. If we were talking textbooks, then your objections would apply - I would hope teaching in your mind occupies a distinct space from textbooks.

Surely many students displeased with instruction will be ignorant of some nuances and limitations of structuring a class, but the gap between good prescription and naive wishes is not half a career's worth of experience when it comes to teaching. I'm sorry to burst your bubble: teaching is not rocket science, it is not software engineering - the prestige a teacher earns comes from either their qualifications in an advanced field or their effectiveness in imparting knowledge (or perhaps their proclivity to allow cheating, depending on how you measure).

There are surely many teachers instructing arithmetic that understand teaching better than doctorate professors. Are you going to tell me that a piece of writing outlining common pitfalls, an earnest reader, and a creative mind really get you no further than a fresh, apathetic track-following graduate when it comes to innovation? Get real.

If teaching is the only thing that makes you good at teaching then I guess we should ignore all sharing of information about it, even from those who are good at it. Well, or we could just ignore your sentiments about who should be allowed to innovate in public.


My teaching experience is limited, but for teaching compsci I've had good results with programmatically graded assignments, where the students get unlimited submissions but also machine-enforced deadlines. (I liked this when I was a student, and I've liked structuring my own class this way too.) One of the big benefits is that you can be maximally clear about what the problem set is asking the students to do, because you give them an example input and an example correct output, and for each submission you tell them what their output should have been. But each grading run uses a randomized input, so they can't just hardcode the answers.

Compsci is perfect for this, because students can fix their bug quickly and resubmit their whole assignment. For math, I guess you'd want to avoid having them repeat problems they already got correct. But for other subjects that don't really have a place for "random input", I guess this doesn't work?
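A minimal sketch of that grading loop; the toy problem, trial count, and deadline are all invented for illustration, and a real autograder would of course sandbox and run actual student submissions rather than a function:

```python
import random
import time

DEADLINE = time.time() + 7 * 24 * 3600  # e.g. one week out; assumed

def reference_solution(xs):
    return sum(xs)  # stand-in for the instructor's solver

def grade(student_fn, trials=5):
    """Check the student's solution against freshly randomized inputs,
    with a machine-enforced deadline and unlimited resubmissions."""
    if time.time() > DEADLINE:
        return "rejected: past deadline"
    for _ in range(trials):
        xs = [random.randint(0, 100) for _ in range(10)]  # new input each run
        if student_fn(xs) != reference_solution(xs):
            # Tell them what input failed, so they can fix and resubmit.
            return f"fail on input {xs}"
    return "pass"

# An honest submission passes on any randomized input; a hardcoded
# constant answer cannot survive the regrade:
assert grade(lambda xs: sum(xs)) == "pass"
```

Because every run draws a fresh input, resubmitting a hardcoded answer is pointless, which is exactly what makes unlimited submissions safe to offer.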


As a student I’ve taken computer science classes with and without some kind of autograder. I’ll say that depending on how it’s set up it can be complete nightmare fuel. In one class I recall that 3/4 of my time went to formatting comments correctly so as not to trip the cheat detection that came from not “properly attributing” the code I wrote.

Also the assignments were written to the autograder. So instead of saying “write a method that does X” it would say “write a method named NAME that takes in XYZ parameters and outputs exactly in QRS format”.

After having taken so many open-ended-assignment classes, that really hurt my brain.


> “write a method named NAME that takes in XYZ parameters and outputs exactly in QRS format”.

That's a pretty realistic requirement, though, isn't it? If you're writing a class that's supposed to interoperate with other systems, it's not like you can just name things whatever you feel like. They have to fit the expected interface.


Ugh automating the cheat detector sounds awful. The tool we used had a nice similarity detection feature for me to look at, and I used it by hand a few times, but of course it had false positives.

> write a method named NAME that takes in XYZ parameters and outputs exactly in QRS format

Yeah I hate that too. Personally I went with "your program should read JSON from stdin and write JSON to stdout". For students who had never seen JSON before, that was definitely some extra friction. But the upside is that it's something they're going to see a lot of in the real world.
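For what it's worth, from the student's side that contract might look something like the sketch below; the actual task (summing a list of numbers) is a placeholder I made up, and only the stdin/stdout JSON convention comes from the comment above.

```python
import io
import json
import sys

def solve(task):
    # Placeholder task: sum a list of numbers from the input document.
    return {"sum": sum(task["numbers"])}

def main(stdin=sys.stdin, stdout=sys.stdout):
    task = json.load(stdin)           # grader pipes the randomized input here
    json.dump(solve(task), stdout)    # grader reads and checks this output

# The grader would run something like:
#   python3 solution.py < input.json > output.json
# Quick check without a real pipe, using in-memory streams:
buf = io.StringIO()
main(io.StringIO('{"numbers": [1, 2, 3]}'), buf)
```

Keeping `solve` separate from the I/O plumbing also makes the student's logic unit-testable, which is a habit worth teaching in itself.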


I wish I had read this essay before the start of every year in high school and college. It would have saved me a lot of frustration, and helped me understand why things are this way.


Kind of agree, but that's why I stopped teaching and went to a dev bootcamp.

Even though I taught for about 7 years, my theory was that 3 years was a good amount of time to put into teaching before starting to burn out. Sort of treating it like the public service it feels like.

Not that everyone has the option of leaving. But financially speaking I observed that even fewer people had the option / privilege of teaching indefinitely without the financial support of a spouse or roommate living situations into middle-age. Thinking mainly about the growing percentage of non-fulltime teaching roles.


I had the same experience. In addition to many jobs being less than full-time, it's also not a very secure job, in the sense that depending on your specialty there might only be a handful of universities that could ever potentially hire you in any given country. I'm currently transitioning from teaching to dev, and this insecurity is what caused me by far the biggest anxiety, much worse than not having a lot of money.

Regarding the article: I agree entirely. I also came into teaching with the goal of being different than the teachers I'd had, but every good intent got beaten out of me by someone who used it as an opportunity to skirt the rules. Combine this with strict admin stuff (you know you're BSing and I know you're BSing, but I can't give you a bad grade because you technically didn't break any explicit rules) and you end up with the current situation of having to preemptively mention every loophole and edge case.


Maybe there's room for a top tier education and/or certification environment. Some place where the assignment instructions are the simple version and the people who try to play the unreasonable "but what about this?" game just get failed out. It would be a feature of the credential in question.

And right along with that, small enough class environments that the grade for the course is pass or fail, and the exam is oral. You either know the material or you don't.

Maybe we could do this in software and try to cut back on the need for leetcode interviews. Ha!


Evaluation has always been the biggest challenge for teachers in the tech industry, because education is largely driven by assessment (I've been teaching IT for 23 years now).

But things are changing, and the pandemic is speeding that process. A decade ago James Paul Gee outlined where we want to go, and I think it will largely come to fruition before the end of this decade: https://www.youtube.com/watch?v=LNfPdaKYOPI


Reminds me of a quote I read long ago - I think it was from Sartre - that I'll try to paraphrase:

"Teaching in public schools suffers from the same problems as cooking in public cafeterias - and generally produces similarly mediocre results."

I don't buy into the sentiment that public education is a mistake, or even that its outputs are generally mediocre. I do, however, think that the insight that public education is more akin to an industrial process than an interpersonal relationship holds some water.


This opinion seems uninformed about a great body of research that has been done around standards-based grading. Stanford has led a lot of this and a family member of mine has collaborated with them on successful field studies in school districts. The result has been increased comprehension, better test scores, and especially improved performance for disadvantaged groups.

By removing grades on homework, and making it so that what is being evaluated is not collection of points, but rather ability to demonstrate the skill against a rubric while retesting, it allows learning closer to the actual target skill. It also more closely mirrors an actual career: if you are running a project at a company and do not hit your quarterly goal, then you don’t just say “Oh well, guess I got a F” and move on to phase 2 of the plan. You revise and try phase 1 again until you reach the objective.

It should be noted that switching from normal grading styles to standards-based grading is not trivial. In school districts there are in fact entire training programs and coaches like my family member that help teachers, administrators, and parents understand the concept and put it into practice. There are not only practical obstacles, but also paradigm shifts that have to slowly happen. But the results are worth it, it is overwhelmingly more effective.


This is interesting. Do you have a link to further reading?


That's a really necessary question, and I'll be blunt. I've had a fairly long research career in psychology. We all know the research standards in psychology are low, but those in educational sciences are much lower. It's a personally held belief that at least some of the misery in the current state of education comes from their research.


> The problem is that student performance is continuous. When you are forced to discretize into a small number of bins, injustice is inevitable.

Yeah, so why insist on doing that? Because that's what's easy for the schools and the teachers of this rather inhumane mass education system. How else are they to evaluate the performance of hundreds of students? They can't know each student individually. They must create artificial tests which they can apply at scale. They must create a grade economy. They must reduce humans to statistics.

Why even have grade boundaries? The difference between 89.95% and 90% is no doubt statistically insignificant. Why cause pain by creating rigid A-F boundaries?

> But some teachers are principled and are determined to police cheating anyway.

The problem is much of what is considered "cheating" by these people is actually perfectly normal real world behavior. Research and team work are absolutely normal and expected in the real world but in the artificially difficult dream world of academia they are frowned upon.

Why? Because allowing these things would force them to use less efficient evaluation methods. Much easier to ask a ton of questions in timed, high-pressure exams that will single-handedly decide the student's future.


> Because that's what's easy for the schools and the teachers

That's too dismissive, almost a bad-faith framing. The sad reality is that there is not enough money, there are not enough teachers, and most teachers are not good teachers - quite a few are actually bad. That cannot be changed. We can't open a can of one million good teachers. Even if money weren't an issue, they just aren't there. More teachers, even mediocre ones, would probably help, but that's a matter of money and politics.

> The problem is much of what is considered "cheating" by these people is actually perfectly normal real world behavior.

So is stealing and murdering. Schools are not there to teach "real world behavior", but increasingly advanced motoric, cognitive and (to a lesser degree) social skills. At higher levels of education, project and team work becomes more frequent (even too frequent, if you ask me). But you cannot learn reading, writing and arithmetic in a group of 3 where 1 person learns to read, one learns to write, and one learns arithmetic.


Simply, because using absolute points would lead to debate over every single decimal of a point. A student with 89.95% would be determined to fight to get 90.05%, and then continue to fight to get 90.15%, so it only makes the problem worse.


That's not my experience. There were no A-F grade boundaries in any of the schools I've ever attended. Students only ever cared about the one boundary that still remained: minimum passing grade. I saw exactly one student trying to get a teacher to round up a 99% grade to 100% just so he could say he had a perfect grade, nobody else cared about decimals unless they were in danger of failing a class.

Boundaries are the problem. Standardized testing for post-graduate positions in my country would give students bonus points based on their overall academic performance. Full bonus points for grades > 90%, half points for grades > 80%... Of course students wanted to optimize that number as much as possible and I don't really blame them.


Someone I know went to a college where the final exam was 100% of the grade, in most courses. You could retake it as many times as you want, but only if you scored below 70% (presumably to limit the amount of retakes). The result was that if people weren't feeling confident on the test day, they would recycle their test (not hand it in), thereby receiving a zero and guaranteeing the ability to retake.

Designing systems of incentives is hard.

That said, several of the mentioned problems seem like they have solutions.

- Excessive detail: keep both or several sets of instructions. The detailed version is authoritative in the case of a dispute by the student, but as long as they don't dispute, it's fine if they follow the spirit of the instructions.

- Deadlines: Give X "extension tokens" per semester, which allow students to submit one day late, no questions asked. Max 2 tokens per assignment (48h extension). My undergrad CS department did this and it was great.

I would guess the latter would work well for regrades as well: a generous but bounded number of regrade tokens. You could even make unused tokens worth 1 bonus % on the final exam, if you want to further disincentivize abuse.
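As a sketch of how such a token ledger might work - the semester budget, per-assignment cap, and bonus conversion below are my own illustrative assumptions, not the commenter's actual department policy:

```python
# Hypothetical "extension token" ledger: each student gets a semester
# budget of late-day tokens, capped at 2 per assignment (48h max),
# and any unused tokens convert to bonus percent on the final exam.

class ExtensionTokens:
    def __init__(self, semester_budget=5, per_assignment_cap=2):
        self.remaining = semester_budget
        self.cap = per_assignment_cap
        self.used = {}  # assignment name -> tokens spent (1 token = 24h)

    def request_extension(self, assignment, tokens=1):
        """Spend tokens for a no-questions-asked extension.
        Returns True if granted, False if it would exceed a limit."""
        already = self.used.get(assignment, 0)
        if tokens <= 0 or already + tokens > self.cap or tokens > self.remaining:
            return False
        self.used[assignment] = already + tokens
        self.remaining -= tokens
        return True

    def final_exam_bonus(self):
        """1 bonus percentage point per unused token."""
        return self.remaining
```

The per-assignment cap is what keeps the scheme bounded: nobody can bank all their tokens to stretch one deadline by a week.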


> The detailed version is authoritative in the case of a dispute

I think this means everyone will always refer to the detailed version, especially if this is stated up front. Nobody wants to be fucked if they _do_ legitimately need to dispute.


> Shouldn’t you help the actual imperfect humans in front of them, rather than imagining a bunch of perfectly rational Platonic objects?

As a functional, working-class adult, I carry a conviction that specific aspects of my life would improve if an external party forced me to give up a portion of my autonomy for what some people may term "my own good." And I don't mean this in the sense of finding enjoyable hobbies, but in doing the maintenance grounded in hard physics and biology, such as exercise and diet - two things I have little interest in, and for which I have no luxury of external deadlines. There are limits to certain other strategies. Amphetamines don't exactly help with falling asleep on time, for example.

For some reason, I believe this in spite of the issues my upbringing caused me, and it gives me conflicting feelings. As other commenters have pointed out, there can be numerous lifelong problems stemming from how one was raised by their parents. In my case, I suffered numerous traumas that I believe I have yet to fully move on from. Criticizing those methods, or the lack of them in the present, makes me feel like I'm shifting the blame to the world around me instead of taking responsibility for myself, as "an adult".

Yet, at the same time, I feel as if I can't be trusted to just expect myself to do the right thing because I've gained financial autonomy and the "adult" label. There are times when the perfect solution makes its appearance ("just" exercise daily), and yet the drive to pursue it fades before my eyes. There are disorders of the mind that are known to interfere with the very capability of someone to take responsibility, such as ADHD.

I sometimes imagine what it would be like to have access to a set of data on the exact dates and times of how people exercise each day, or how they plan their meals (knowing full well that those insights might not necessarily work for me), and the resulting health statistics/ages lived for those people. In some racing games, there is a "ghost" feature that shows you the runs of the players with the best times, to use as a guide for your own strategies. With the current Internet, at times it feels like there's only a massive pile of conflicting information and no sense of direction, which ignores what actual people ultimately do with their lives.


This passage is terrible and beautiful:

Here’s a story from my father. He was teaching a course for working professionals that had a large project component. He—being naive and idealistic—decided that as long as the students eventually finished the project, they had learned the material, so they should get full credit. Thus, his policy was that students could submit the project, get it graded, and then repeat this process as many times as they wanted. He knew this would mean extra work for him, but thought it would be worth it for the students.

The result, of course, was catastrophe.

To call the strategy many students took “abuse” gives no measure of their ingenuity. They realized that they could skip learning the material, and instead complete the project by running an evolutionary algorithm with my father’s grading as a reward function.

Edit: I will say though, I would give this grading scheme a poor grade too. The teacher recognizes it's more work for them, but doesn't seem to recognize it's also far more work for the students. Particularly those who care about their grades and are actually trying to abide by the spirit of the thing and not game the system.
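For what it's worth, the strategy the story describes can be made concrete with a toy sketch of my own (not the actual course or grader): treat the grader as a black-box reward function and hill-climb resubmissions toward full credit, never engaging with the material at all.

```python
import random

# Toy illustration: the "grader" is a black box returning a score, and the
# student randomly mutates their submission, keeping any change the grader
# scores at least as well, until the project earns full credit.

ANSWER_KEY = "the correct answer"  # stand-in for what the grader rewards
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def grade(submission):
    """Stand-in instructor: fraction of characters matching the key."""
    matches = sum(a == b for a, b in zip(submission, ANSWER_KEY))
    return matches / len(ANSWER_KEY)

def resubmit_until_full_credit(seed=0):
    """Evolve a random initial submission via single-character mutations,
    accepting any mutation the grader doesn't score lower."""
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in ANSWER_KEY)
    attempts = 0
    while grade(best) < 1.0:
        i = rng.randrange(len(best))
        mutant = best[:i] + rng.choice(ALPHABET) + best[i + 1:]
        if grade(mutant) >= grade(best):
            best = mutant
        attempts += 1
    return best, attempts
```

The loop converges in a couple thousand "resubmissions" without the student ever knowing the answer - which is exactly why per-submission feedback with unlimited retries is so exploitable.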


It's funny, because this evolutionary algorithm is actually just peer review. Submit some work, it gets sent back with comments and corrections, authors change their work to reflect feedback and repeat until the result is published. This is what happens when people try to publish a paper in a journal, when people try to get their pull request accepted. Maybe these students didn't learn the material but I'm sure they learned this process.


>Particularly those who care about their grades and are actually trying to abide by the spirit of the thing and not game the system.

A grading system that is more work for students focused on the grade over learning is something I would see as a feature not a bug.


It's not more work if you're focused on the grade over learning; it's more work if you care about the grade at all, assuming the material is at all challenging. Unless you get a perfect grade on the first attempt (or at least a high enough grade to guarantee yourself a top mark in the course overall), you're going to feel compelled to keep re-trying. (And if other coursework forces you to leave it be, then to feel like you're missing out.) Better to be able to just put in your best effort the first time, and then you're done. I can only speak for myself, but I would have hated this system when I was in school.



Every year I age I respect good teachers more and more. I think it’s one of the hardest jobs. And at the risk of getting some blowback I think much stems from our broken and outdated public education system.

There has to be choice and freedom, both for the students and the teachers. I know I’m speaking vaguely but that’s because I myself can’t articulate a solution. But I know there’s a better way.


(On homework)

> And you know what? When the students blame you, maybe they are right. The teacher is supposed to use their experience to help students learn. Shouldn’t you help the actual imperfect humans in front of them, rather than imagining a bunch of perfectly rational Platonic objects?

This is an extremely mean accusation. As a former student, I've never blamed any of my professors for not giving me enough homework. I am sure I passed the final-exam-only classes with much better grades and knowledge than the homework classes (which I usually failed and dropped very early). And even if I hadn't, I would still have felt extremely annoyed and mad about the system, and how nonsensical and unfair it is.

I could accept statistics comparing homework and no-homework classes (which the article fails to provide) - showing that the majority of students perform better, or the average is better, or the lower range is better, or anything. But this kind of arguing is simply worthless (less than worthless).


This writer seems more concerned with not being blamed than improving students' success rates.

Why have homework grades? Well, if you don't, "they will blame you for not forcing them to do the homework."

Why have deadlines? If you don't, they "blame you for not imposing deadlines on them."

I get why someone would want to avoid blame or conflict, but it shouldn't influence what you're grading people on.

His reasoning for participation grades is bogus.

1. Classes are not better if everyone is talking and asking questions. Often, this actually causes class to move more slowly and cover less ground and questions would be better after class one on one.

2. He doesn't want kids to act up and is punishing the kids that do by removing their future opportunities, which is not great because the kids most likely to act up are the ones most likely to have learning difficulties or hard home lives. So this makes things worse for kids that need the most help.

3. Participation grades are arbitrary and a tool for control. The initial arguments for other grades about how they can't be arbitrary and you need to be rigid are thrown out the window with participation grades.

4. You don't really have a major freeloader problem in classes, in part because it's not a commons. It's managed and controlled by one person or an administration. It is my opinion that it is unnecessary to punish what freeloading exists, which is what participation grades do.

5. Participation grades are not really an incentive, they are a penalty. If you make participation worth something you are forcing people to participate if they want the same grade they would have previously gotten without participation. You are penalizing non-participation by lowering their grade. For it to be a pure incentive, they would need to get extra points or something for participation, but they don't.


> 1. Classes are not better if everyone is talking and asking questions. Often, this actually causes class to move more slowly and cover less ground and questions would be better after class one on one.

Covering more material isn’t necessarily a good thing if students are not really understanding the material that’s already been presented. I personally benefited greatly from others asking questions. Sometimes the questions asked were not even things I had considered. In my college classes that were purely lecture-based, where the prof didn’t allow interruptions, I certainly got more value out of studying with others because of the questions.


>Covering more material isn’t necessarily a good thing if students are not really understanding the material that’s already been presented. I personally benefited greatly from others asking questions. Sometimes the questions asked were not even things I had considered. In my college classes that were purely lecture-based, where the prof didn’t allow interruptions, I certainly got more value out of studying with others because of the questions.

Sure, that can also happen. I've had multiple lectures, though, where the questions caused pacing issues and were unique to the questioner. It's a catch-22: the teacher asked for this but still wants to cover the required material, and they need to balance it to avoid losing control of the class. In a college or grad school environment, a professor has office hours. If there are multiple students with similar questions, I have found that email or later lecture clarifications work well.

I'd say I got a lot of value out of studying with others regardless of whether questions were allowed. It's a necessity where a professor yells at you for asking a question, as my college statistics professor did; unfortunately he was also the head of his department, so there was nothing we could do about it but learn on our own.

That's not the trade-off without penalties for non-participation, though. It's not 'no questions at all' or 'all questions, all the time.' It's that some people ask questions, most people don't, and grades aren't impacted by your own introverted nature.

Here's the kicker - most of the time, if participation is graded it doesn't massively increase the actual participation in discussion. It just gives the professor more control over the grading, something they desire especially if it's blind grading numbers. It's a penalty system for people that the professor doesn't like or behavior they don't like, and sometimes a way to reward favorites. That's all.


I think one of the major problems with university teaching at the moment is the strong emphasis on student evaluations.

I don't believe students are necessarily the best judges of learning outcomes. In fact, I have learned that students tend to be the most conservative entities at a university. Try to make any change in a course and they will hammer you on the evaluations. The detailed-instruction example is a perfect illustration of this. In my experience it is not so much that students find creative ways to misinterpret loose instructions; instead, I had students literally complain that they had to think for labs or homework. I find this attitude very weird - as if, when they start working, they will always be given detailed step-by-step instructions.


I remember an obvious case of cheating when I was teaching college students (a boyfriend and girlfriend who sat next to each other turned in identical test papers). I told them it was obvious that one of them had copied off the other and that they couldn't sit next to each other anymore. When they started to complain, I told them that filing a formal cheating complaint was a lot of work and I'd really prefer not to do that, but if they'd prefer that to sitting apart on test days, I could do that too.

They sat apart on the next test day. I forget which one of them was the one who failed the next test, but both stopped coming to class after that (I'm assuming the one who was providing the answers was only in the class to provide the answers to the cheater).


This is what happens when you take a class where some students don't want to learn, put them in a classroom with teachers who don't want to teach those students, and expect the teachers to act as inspectors in addition to actually teaching them.

If you're trying to learn, and if other people are involved, the process goes something like this:

1. you find a mentor, a peer group, or a teacher

2. you read or watch the resources they give you, and do some exercises

3. you ask them to critique your work, possibly in exchange for money[0]

4. they give you actionable feedback, you thank them, and go back to step 2

This is because you want to learn, they want to teach you, and they're not the same person who's going to be testing you later.

[0] A good example of when you'd pay money is Drawabox's official critiques


Completely agree.

People are forced to learn because basic education is compulsory and higher education is necessary for success. Not everyone wants this but they're forced to go through it.

Since everyone has to be taught, schools turn into mass education centers designed to teach hundreds if not thousands of people at once. It's impossible to mentor individuals at this scale, so teaching becomes these one-to-many broadcasts called lectures and evaluation becomes mass testing.

There are inherent limitations to these practices. Not every student is engaged by lectures and there is no room for individualization, people who don't adapt often get diagnosed with "attention deficit" and medicated with amphetamines to improve fitness. Tests are so trivial they have to impose ridiculous constraints on them to artificially increase the difficulty, punishing perfectly reasonable behaviors in the real world such as research and team work.

And it goes on and on, it just keeps getting worse until even the average student can smell the phoniness of this system from a mile away.


>In a recent post, Parrhesia suggested that course grades should be 100% determined by performance on a final exam—an exam that could be taken repeatedly, with the last attempt being the course grade

>[...]I suspect this proposal hasn’t seen much contact with people who’ve actually taught classes

This is how a fair number of classes at my university are graded. Particularly math classes are structured so that you could literally ignore the class for 3 months and then just show up and take the exam and that decides 100% of the grade. Some homework is available for bonus points, but it only contributes to going from a failing grade to passing. While retaking the exam to get a higher grade isn't technically part of the system they will let you do it if there is space.


They let you retake the exam? Like... the same exam? Or re-do at a later term?


Re-do the exam when it is next offered. The most common math exams (e.g. calculus, linear algebra) run 4 times a year, so there are quite a lot of opportunities.


Once, I tried a grading scheme in which the grade is either a weighted average of assignments and exam grades, or the exam grade, whichever is higher. My reasoning was the same as in the article -- I thought this scheme would measure what matters most.

However, I found that quite a few students skipped assignments and also did poorly in the exam. Of course, the strong students sailed through either path. But the weaker ones also matter. And, after all, the point of grading is not just to assess. It is also to motivate students, and to signal to them which things are important.

Unless the class is quite strong, it makes sense to balance assignments and examinations. And, as in the article, it is helpful to pick a scheme and stick to it.
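The scheme described above (the grade is whichever is higher: the weighted average, or the exam alone) reduces to a one-liner. The 40/60 weighting below is an illustrative assumption, not the commenter's actual split:

```python
# Grade = max(weighted average of assignments and exam, exam alone).
# A strong exam can fully rescue skipped assignments; assignments can
# only ever raise the grade above the exam, never lower it below.

def course_grade(assignment_avg, exam_grade, assignment_weight=0.4):
    weighted = (assignment_weight * assignment_avg
                + (1 - assignment_weight) * exam_grade)
    return max(weighted, exam_grade)
```

Note the incentive this creates, which matches the commenter's observation: a student who expects to ace the exam rationally skips every assignment, since assignments can only help when they outscore the exam.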


High school AP stats teacher here, in my first year of teaching. While I think this article holds some water, much of it is hyperbole.

In reality a good teacher will learn they can’t make everyone happy and learn to deal with students who complain.


Made me sad reading this. I've avoided this by making sure everyone knows you never increase grades, unless there's a grading error. Also, in spite of writing more nicely worded instructions than the ones in the article, I always have around 2% of students completely miss them. More instructions have not decreased that for me. So why write more of them?

At one point I said to a student - "You should really work as hard at learning as you do trying to negotiate a higher grade!" I assume they rated me low, but I stopped reading those things years ago.


I taught college CS for 10 years before moving to industry. Cheating wasn't a huge problem, but I did have some issues.

I gave a makeup exam to one student with a programming problem altered from the one on the original exam. The student answered the original problem, not the one on the exam they were given. That made it very clear-cut.

I also had a written requirement that students must be able to explain their homework programs to me. Had a few that couldn't explain what parts of "their" own program were doing.


Most professionals don’t know what parts of their SO copy-pasta code do either.


This article hits so close to home with my experience teaching this semester that I feel personally attacked. It makes me sad. I'm really enjoying reading the discussion others are having, so I will contribute two anecdotes that I think are telling of just what the author is getting at.

1) About a year ago I was teaching a class that included working with a user group from a country in Africa. Two-thirds of the way through the semester, after submitting an extensive (and well done) research paper on the country's culture, norms, people, and public health needs, a student asked me flat out, "Is Chad a real country or did you invent it for this project?" School is so internalized as a game that they don't even know they are approaching school as a game.

2) I recently gave a test and received a number of regrade requests. One student had written two paragraphs to answer a question needing one sentence. His regrade request (after seeing the key) was to take the first five words of his first sentence, an ellipsis, and the last four words of his last sentence and ask for credit "because the answer was in there". Another asked for partial credit because "I understood it even if my answer wasn't right".


> [...] course grades should be 100% determined by performance on a final exam—an exam that could be taken repeatedly, with the last attempt being the course grade. [...]

Let's take this at face value.

There is no reason to pay academia for the entire class. If I can pass the final (and pay for just that part), which demonstrates my knowledge, there is no reason for me to spend the entire semester/quarter/period in class or pay for it.

Prove me wrong.

This is how it works with real-world licensing and certifications.


> I had some teachers who tried to avoid the issue by setting the A boundary at 89.5%. I outwitted them by earning 89.483%

89.483% rounds to 89.5%, but not 89.50%; it's just a matter of significant figures. I see significant figures often being misunderstood. You can only ever compare values of the same number of significant figures, it's just that most of the time that's done implicitly, so it's not acknowledged.
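For concreteness, the rounding behavior in question can be checked directly (this uses Python's built-in `round`, purely as an illustration of the significant-figures point):

```python
# 89.483 clears an "89.5" boundary when rounded to one decimal place,
# but not when rounded to two - the comparison depends entirely on
# how many figures you round to before comparing.

assert round(89.483, 1) == 89.5   # one decimal: crosses the boundary
assert round(89.483, 2) == 89.48  # two decimals: falls short of 89.50
```

So a teacher who sets the boundary at "89.5" without specifying precision has implicitly invited exactly the edge case the commenter exploited.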


Grading is not dissimilar to setting up arbitrary metrics on a business or an engineering team.

People will find ways to optimise for the metric.

If you give bonuses based on number of commits created or number of tickets closed you'll end up with a lot of useless commits and tickets.

If the only thing that matters in order to pass a course is to do an exam, people will optimise for that. If someone doesn't like the subject or doesn't like the teacher or doesn't like being taught (especially disagreeable boys), they will happily skip the subject and just try to get a passing score.

In university I was already working as a professional developer and I attended only a few classes I cared about and hacked my way out of exams with a mix of cheating and cramming the night before.

I enjoyed all the project work instead, and I excelled at it. But that was worth just 1-2 points out of 30. Why was I forced to memorise bullshit I didn't need and would not have remembered 3 seconds after the exam, in order to get a piece of paper saying I graduated? Isn't being able to do the projects more important than that?

When I hired people with degrees from "good schools" I was always surprised to realise how little they were able to get done on their own. I quickly stopped even checking their qualifications, as they're completely worthless for anything related to work.

If I had to reform education I would make it totally based on projects. There would be no grades or titles when you get a job, just an increasingly longer list of projects you worked on.

When I was in school I had to take a Latin class. I didn't want to take it, but I had picked the best course according to my interests - and unfortunately it included Latin.

I spent those lessons secretly working on my own projects. Then I downloaded a bunch of famous texts with their translations and wrote a J2ME application to look things up, which I used for 5 years (mobile internet was very expensive back then, and searching the internet would have been way harder).

After I finished my written finals in all the subjects, my score was luckily already high enough to pass even if I got a zero on the oral exam, and I kind of bullshitted my way through that last exam.

Was there any point in trying to force me to learn something? Why do we put people in this situation?


> There would be no grades or titles when you get a job, just an increasingly longer list of projects you worked on.

And you'll get the good, obedient sheep. You would miss all the competent rebels, those who think outside the box.


If you have lived all your life in a basement, you tend to think the fluorescent light is the sun. I could not make sense of the article… my take on teaching is: you teach because you are the most capable on the subject in the community; it is a responsibility and you do your best. As with most professions, when you do it for the money, you become cynical.


Teaching should definitely not be done by the "most capable on the subject"; it should be done by those who are the best educators (with "best" based on some metric relating to student outcomes). The two roles are not the same. Just because someone is knowledgeable about a subject doesn't mean they know how to convey that subject for consumption. Conversely, I think those who don't know as much about an entire subject but understand the material of a particular course very well could be better suited to teaching than the "most capable".


Tomayto, tomahto. Move to a smaller, isolated community and see how you become the right person for the job.


The grading method this article argues is infeasible is widely used in the UK (although retakes aren't always unlimited or free). This does have downsides, but many of the other problems described in the article vanish. Very importantly, students aren't incentivized to hide the fact that they don't understand something in homework.


When I give my course, I kind of frighten the students with the first homework grading, which has low weight in the final grade. Some of them even quit, as I seem to be a crazy strict guy. Those who stay see me ease up, they study as needed, and at the end of the semester: “best course” / “best teacher” / “this course made the most sense this year” etc.


Yup. If I taught the way a younger version of myself would have liked, it would be a terrible course for most students.


Well, the issue here is confusing evaluation with exercise. The first is used for pilots, to make sure they won't crash the plane. The second is used for learning (making mistakes and correcting them).

There is no point in cheating on an exercise, but there is on an evaluation.


As a student for many years, I completely agree: I would not prepare as well without some stakes in the homework. However, those super-precise instructions can be harmful to students. It's very difficult for me, as someone with ADHD, to follow them without missing something.


I always say "Never underestimate a student's ability to misinterpret an assignment."


I feel the same with Wikipedia, the Norwegian articles are often better than the English ones because they often are shorter and to the point.

I don’t want to read 2000 words for something that could be explained in under 100.

Have been thinking of a 200-word-limit-per-article version of Wikipedia.


> Have been thinking of a 200-word-limit-per-article version of Wikipedia.

Hard limits are not a good solution either. They may still be better than the status quo though.


I'm not sure that's really in the spirit of an encyclopedia, unless you really could get all the information you had in those 200 words.

I think this would be a valid imposition on the summary sections of most pages though, they can be a little unfocused I find.


You get to put all the information there, but you have to place it in a sub-article. That way you force the writer to have a tighter introduction.


Sounds like ruining the system for everyone because of a minority of abusive, lazy, or incompetent students. That doesn't seem reasonable at all; the root cause needs to be addressed.

That minority is not going to be spoon fed when they enter the real world.


A complementary proposition would be: "Teaching is a slow process of becoming something you love."


Another fine example of "Our systems are the scar tissue of past mistakes."


I always personally preferred project work. I was never great at exams (although I did get a whole hell of a lot better at them). Project work better suited my obsessive personality and desire to really polish things. Projects feel creative; homework and exam prep don't. (Although learning how to take exams meant learning how to make good cheat sheets and memorize them well, so in a way that became creative.)

That said, for most lower-division material projects are unsuitable. For that, the system I liked best I first saw online in an undergrad intro AI course at MIT. It was pretty simple: the course had a handful of carefully designed, uncurved but not tricky half-exam units given throughout the term. The final offered two half-exam slots for any units you wanted to try again. If you did well all semester, you didn't have to show up for the final; if you messed up, it was your chance to retake the specific units you wished to improve. The goal: demonstrate you learned all the techniques in the course, that's it.

Sometimes it felt like putting more weight on homework was for student comfort, to reduce exam stress for everyone; sadly, I think it sometimes had the opposite effect of producing lazier exam design and more reliance on curves. I once took a course that had no official notes, fairly weak lectures, and the claim "I teach at a level above the assigned textbook." No, he didn't; he wasted everyone's time.

I once went to see a professor after the fact to go over the final. I told him explicitly that I just wanted to understand the things I got wrong, but he kept returning points even after multiple statements that I didn't care. It made me sad to think that he probably sat for hours with people arguing over points rather than discussing material.

Overall it felt like some professors (or maybe their students) spent hundreds of hours designing amazing courses and some spent less than ten. Those in the former camp were often prickly about their specific asks, but in those cases it didn't matter, as the care and craftsmanship that went into the course design justified any particularity. The waste-of-time courses were the worst (even if they did sometimes come with generous mea culpa grading).


To first order, you can solve the regrade and homework issue with policy: gate regrades on completion of homework assignments, and limit the number of regrades that may be submitted per week.

You need a rate limiter to prevent students from just spamming regrades until the evaluation returns "A", and you want to incentivize the desired behavior--homework is intended to help students develop skills.
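The gate-plus-rate-limit policy described above is simple enough to state mechanically. Here's a minimal sketch in Python; all class and method names (`RegradePolicy`, `file_regrade`, the cap of 2 per week) are hypothetical illustrations, not anything from the article:

```python
from collections import defaultdict

MAX_REGRADES_PER_WEEK = 2  # arbitrary cap; tune to grading capacity


class RegradePolicy:
    """Gate regrade requests on homework completion, and rate-limit them."""

    def __init__(self):
        self.completed_homework = defaultdict(set)  # student -> homework ids done
        self.requests_this_week = defaultdict(int)  # student -> regrades filed

    def record_homework(self, student, hw_id):
        self.completed_homework[student].add(hw_id)

    def may_request_regrade(self, student, required_hw):
        # Gate: all required homework must be complete before any regrade.
        if not required_hw <= self.completed_homework[student]:
            return False
        # Rate limit: cap regrade requests per week.
        return self.requests_this_week[student] < MAX_REGRADES_PER_WEEK

    def file_regrade(self, student, required_hw):
        if not self.may_request_regrade(student, required_hw):
            return False
        self.requests_this_week[student] += 1
        return True

    def reset_week(self):
        self.requests_this_week.clear()
```

The two rules compose naturally: the gate incentivizes the desired behavior (do the homework first), and the weekly cap stops the "spam regrades until you get an A" strategy.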

If you want to learn more, some useful keywords and phrases to find cutting edge thought: "ungrading", "Standards Based Grading", "learning objectives-based assessment"

There is literature on this, but don't let that stop you! It's much more fun to speculate about pedagogical practices based solely on what you remember from high school and college.


That's exactly my problem with this post: the author makes it seem like a lot of thought was put into their teaching, when most of their "progressive" ideas seem half-baked. When talking about the abundance of regrades, instead of iterating on the idea, they gave up on it and deemed it impractical.

> "We could change from the current “mandatory Odysseus” regime to an “optional Odysseus” regime: On the first day of class, offer students an irrevocable choice: They can have homework and deadlines imposed on them, or not. Perhaps the students who need deadlines would learn to opt for them and others could live freely."

I don't see why there can't be more happy mediums, like a deadline schedule where, if you miss one assignment, you can turn it in within the next two assignments. There are a lot more options in the space of possibilities that aren't being explored.

And maybe this is not exactly a criticism in the author's eyes, depending on their priorities. I'd have to find out how the author ranks the priorities of being a teacher. Is the main priority to have the students leave with an understanding of the course material; to do the job just well enough to not get reprimanded by the school (by doing the minimum to reduce student complaints and maintain a "healthy" pass/fail ratio); or to aid students in their education however far they are able to go, with points given for effort (which could vary depending on what the student will actually pursue)? This also isn't touching on the quality of the teaching itself and whether as much thought has been put into that as into grading policies.


Yep, exactly. Chesterton's fence is a worthwhile thought experiment, but a lot of things that get justified with "we do it this way because it's been done this way" are known to cause harm.

The bigger problem is that this change is really stinking hard. In addition to whatever subject material you're trying to teach, you also have to re-train your students, because they'll default to "an A- is a 93%, a B- is an 84%, etc." and to the assumption that things graded with more points were graded with more care.


You can replace "teacher" with "engineering manager" and it continues to be spot on. None of this changes after engineers graduate, and even after they have years of experience.


This is probably contrarian, but why do a participation merit system when you could do a participation demerit system? Your grade can go no higher than a D without participation.


The whole thrust behind this article is that grading and testing is a bunch of crap. And that's entirely correct, IMHO.

We should restructure our whole Western (and Eastern, for all I know) education system along the lines of Ivan Illich's book Deschooling Society https://en.wikipedia.org/wiki/Deschooling_Society which (simplified) suggests that everyone gets an educational grant that they can spend freely, choosing what to study and with whom. But it is not going to happen, and we will stay with the whole grading and testing bullshit.


That's putting the cart before the horse. People need teaching to learn how to spend; trial and error is the slowest learning strategy and should only be used when there are no alternatives.


> trial and error is the slowest learning strategy

But that's what they do now! I can't see any alternative to that, but people could be given an increased range of learning possibilities.


I do not understand how self directed learning like this is expected to work at all for children.

If you replace a high school with letting the students do whatever interests them, a small fraction might study something, and the majority will spend the time watching TikTok, playing League of Legends / Fortnite, listening to music, etc. Maybe you think that letting kids do whatever they want for "education" like this is better, but IMO this proposal is much worse than the current system of directing students to spend time on things that are moderately useful to society and deemed valuable to intellectual development.


That is not at all what I got out of the article. The main thrust is that policies that are apparently dumb to non-teachers are not actually dumb. They are well-justified responses to the ways students will attempt to defeat the purpose of the system.


> students will attempt to defeat the purpose of the system.

Given any system, why would people not attempt to defeat it? And if they can defeat it, perhaps the system is wrong?


Yes, people will attempt to defeat systems to their own benefit. That is why these policies discussed in the article exist -- to harden the system to such attempts.

"Wrong" in your comment is underspecified. It depends what you mean by the word.


That relationship, of a teacher being an obstacle to a grade that signals institutional approval, is totally broken. This is gamified "education," where the course material and even the instructor's recognition don't persuasively have intrinsic value.

I'm dealing with something now outside academia, where there is absurd bureaucracy, and I'm sidestepping and shortcutting it because I see the mission outcome as separate from the prescribed process, and the trickling and breadcrumbing of details is an abuse of my time, so I sympathize with the student perspective. But in an educational setting, the prescribed process is essentially a sacrificial cost that enables you to "become" a person who has mastered it, as it makes the skill the effect of a skill, and not just the affect of the outcome. Education is necessarily transformative; otherwise it's just rote training.

Example: I take music lessons. I'm difficult to teach because I really like Bach and Chopin and can play some simple preludes by ear, but my sight reading is maybe at a grade 2 level, which is just enough to get the pieces under my fingers with a hill-climbing struggle, but makes me useless as an actual musician, and probably very irritating to musicians whose performances are the effect of years of real skill, not the affect of hackery or savantism that an unskilled observer can't easily distinguish. Even if we played the same piece, comparing them to me would be insulting and debasing to them, because it's like saying a recording of something is the same as a performance of it, so it's very diminishing.

In the case of the temperature-monitoring circuit in the article, the process is designed to facilitate a mental transformation through exercising elementary skills, and being educated means being able to commit to that process of being molded by the experience. The details are there to force commitment to the process and filter out those who won't commit. Unfortunately, credentialism incentivises this affect of skills and drives enrollment, so if you are doing a job-oriented degree, you're basically trained, not really educated through a process of becoming.

It would almost make sense to offer students a deal: they can choose a training track that leads to a 52% / C- grade and do the minimum, which takes them out of the way of the TAs; they don't participate in discussions, and they can coast and draft off others, maybe date each other and say they went there, but they can't impose costs. Others can elect to aim higher and choose the education track with an understanding of what that means.


There are plenty of examples of "You'll understand when you're older..." Any examples of when you thought you understood then, but only truly understood when older?


Are people less reasonable than they used to be? I mean, was it necessary to use these kind of nudge/incentive techniques 30 years ago, 60 years ago?


Ha, the same happens to managers.

Unless you have the power to remove cheaters, you will have to throw rules at them, at the expense of everyone else.

I still hold the line, but do suffer sometimes.


What makes you happy about being a manager? I mean, how does it provide you satisfaction? I've been trying to see if it's a good idea to go into management, but dealing with people isn't my strong suit.


Honestly, ability to build more. Can’t say I’m a big fan of people issues.


I had the impression that the assignments for math are often like the first (dismissed) example with the breadboard.

Draw the rest of the fudging owl.


Teaching can be fun when you get to pick your students.


In the light of remote teaching, remote work, gamification, AI etc, we should rethink teaching as a whole

It feels there is very little experimentation in the space, mostly trying to mimic a classroom in digital


So is management in a large company.


Firstly, I sympathise that teaching seems to be awful. I think a lot of the problems are misaligned incentives (universities are judged by their research output; it is hard to have a career as an excellent teacher rather than an excellent researcher)

But I worry that the argument has strayed too far from one based on Chesterton’s fence to one with a lot of status quo bias and rationalisation. I say this because my university (outside the United States) followed a system much like the one the author is arguing against.

- Almost all of the degree was determined by a final exam. In fact there wasn’t even a final grade for the degree but one for each year and the convention was to take your final-year grade so in some sense only the final exam mattered (you might be expelled or advised to transfer to another course/university if you did particularly poorly in earlier exams). However, there were no resits.

- There was homework but it was not graded (at least for my subject. Your individual questions would be marked as right/wrong and problems would be pointed out)

- Attendance to lectures was not required however one had to spend a certain reasonable number of nights at the university (for a particular definition of ‘at’ and ‘university’) in order to graduate.

- Attendance to lectures was strongly encouraged (because you would struggle to get notes/homework without attendance as ~everything was handwritten or on physical paper)

- Attempting homework was strongly encouraged because you would go to small (one-on-one or one-on-two) group teaching sessions to discuss it, so there was social pressure not to be (extremely conspicuously) absent and to have something to discuss.

Let me now briefly discuss how this alternative system addressed some of the author’s points.

- Preparing for exams by doing homework (and also a ‘homework’ set of example exam questions) was incentivised by the social pressure of it being very obvious if you didn’t do the work

- The homework system also addressed the problem of asking questions being scary in a big group, and the (not discussed) issue that lots of students in the US don’t realise they are meant to seek out help in office hours (worse, I understand this is a particular problem for poorer students, who are less likely to know that, unlike in high school, you aren’t meant to tough it out alone)

- Because homework wasn’t graded, some questions would be very difficult (because attempts and discussion could be interesting) or chosen for the pedagogical value. Looking at homework offered good opportunities for feedback

- Converting examinations to grades was complicated (you would get regular partial credit marks plus two different kinds of bonus mark for different levels of significant progress on a question which got outsize rewards to encourage doing fewer questions well over having a crack at more questions; there was a vector which you could dot with your vector of the three marks to get an ‘overall mark’) and borderline candidates would have their submitted answers carefully reviewed by the examination committee to allow for more fair subjective grading

- The university didn’t really offer many opportunities to appeal which reduced the pressure on teachers but has its own problems. There were some rare allowances for extenuating circumstances but in general it was encouraged to not start exams if there were serious complications (eg some health problem) and to wait a year, which was also a problematic system.

- But they did try to be particularly fair to students, e.g. they would collect the rubbish paper after the exams and if some student claimed that they had answered a question for which no answer had been submitted, the bins would be searched

- Cheating was relatively difficult, as there was only one big opportunity for it: the final exams, of which there were four (to allow for more time and for averaging out over several days), which contained questions from all courses. More could be invested in invigilation for these few exams.

That doesn’t mean the system was without complaints. The big ones were: (1) pressure, slightly mitigated by the university’s selection procedures, which somewhat selected for students able to handle big exams; (2) unfairness with regard to poor performance during the exam week for random reasons (e.g. injury, personal circumstances, mild illness like a bad cold); (3) different standards for different courses, particularly a divide between pure and applied, with harder courses tending to have easier questions; (4) the university is selective, and many students felt they could have gotten a higher grade by going to a less selective university; many felt their future would depend on the grade and not the institution next to it (many companies claimed to have ‘institution-blind’ hiring, for example), and therefore that the university was unfairly damaging students’ career prospects with its desire to grade students based on how much they might be allowed to continue education/research at the university.


The title could also be why communism is fundamentally incompatible with human society.

The realization of human nature really disappoints.


teaching is broken


removed


I would be curious to know the name of that school.


> But some are an assault on reason, with every word of the assignment creatively misinterpreted. It was never stated which temperature circuit to build or how to prove it works or what level of explanation was necessary. And who’s to say what “build” means?

OK? So your students tried to do something and failed creatively. Sounds good. Reward them for their efforts, ask them to try again if you feel that they still need to get something out of the assignment.

> But some don’t, and they keep complaining and asking for regrades, and if those aren’t accepted they (or their parents) contact the principal/chair/dean/ombudsperson, who are required to have an investigation.

OK.

> That gets misinterpreted too, so more details are added, and by the time the teacher retires you have a monstrosity that’s universally despised but almost impossible to complain about.

So your bad solution is good because it started off bad and ended worse. OK.

> Well, enjoy re-grading every single assignment from every student near a boundary,

Round up by default? If someone has an 89, just give them the 90. Honestly, who cares if a few students come up to you and want regrades, I imagine it takes all of 30 seconds to cross out the old grade and add the new one. How onerous...

> As far as I can tell, most follow the incentives and make little effort to stop cheating.

Cool. Most of the time cheating entails something like access to notes on a test that is artificially made more difficult by requiring memorization. That's why open note tests are far better.

> But some teachers are principled

Bummer. They don't sound principled so much as they sound unimaginative.

> Say you suspect students are copying from each other on an exam. You can silently prepare multiple versions of the exam with “micro differences” in questions.

Sounds dumb, I don't like the idea of trying to "trap" kids. I cheated exactly once on a test and got away with it - why? Because I was unhappy in school and I went home and spent my time distracting myself rather than preparing for it. Me cheating one time had literally no negative impact on my life, you trapping me and once again teaching me that education goes hand in hand with punishment would have done years of damage.

> They realized that they could skip learning the material, and instead complete the project by running an evolutionary algorithm with my father’s grading as a reward function.

Creative. Without knowing more about the assignment it's hard to judge, but I'm wary of any assignment that you can just brute force like that.

> your students will be lazy and fallible.

I had to undo years of being told I was "smart but lazy". Teachers need to erase that word from their vocabulary.

> So they won’t learn anything. That's OK, most people don't learn much from school.

> And then they will blame you for not forcing them to do the homework.

a) OK

b) I mean, maybe the parents would? I frankly don't believe that any student will blame a teacher for not forcing them to do homework.

> Surely what matters is if a student understands things, not if they ask questions in class?

Good question. What exactly is the point? To me, education serves a few functions.

1. Babysitting kids so that parents can work

2. Providing young people with a safe place for them to explore their emerging identities, interests, and view of the world

3. Stoking an interest in learning and providing the tools and resources to build a baseline knowledge for future education

So, is understanding really the goal? I don't see understanding as being particularly critical to the education system.

> Participation credit helps to internalize positive externalities.

100% agreed.

My transcript is an odd mix of grades - even within a single class, within a single semester I could go from an A or B to a D or F, or coast by on a C. What I value most is that during that time I dated, made lifelong friends, read books on physics and philosophy, discovered New York City while I skipped classes, played video games, learned to bike, etc. All of the stuff you're talking about, it's the stuff that got in the way of everything that has produced value in my life.

Anyway, those are my thoughts. I think school is pretty stupid as is, but I find that I pretty much exclusively disagree with teachers about why. I sometimes read /r/teachers, and the self-indulgent pity party, with its "I wanted to be good but I just hate kids now!" theme, is sickening.

I also find it sad that so many people become what they hate. People seem to have an incredibly hard time empathizing with their former selves, which I find so weird. But I've had adults trivialize teenagers' problems, as if having "adult problems" now somehow means that their problems as kids were just drama.

Maybe try to regain some insight into why your younger self would be disappointed, and what they might suggest.


>Round up by default? If someone has an 89, just give them the 90. Honestly, who cares if a few students come up to you and want regrades, I imagine it takes all of 30 seconds to cross out the old grade and add the new one. How onerous...

Ahh, here speaks someone who's never taught a class :) If word gets out that you round 89 up to 90, then next you'll be dealing with all the people who got 88.5. At some point you have to have a grade boundary. It may just as well be at 90 as at 89 or 88.5.

>Me cheating one time had literally no negative impact on my life

As the article explains, cheating has negative effects on everyone else. Of course cheating can be good from the cheater's point of view – that's why people cheat!


> If word gets out that you round 89 up to 90, then next you'll be dealing with all the people who got 88.5.

Why would word get out if you just grade that way? No one would know you were rounding up...

> As the article explains, cheating has negative effects on everyone else. Of course cheating can be good from the cheater's point of view – that's why people cheat!

I think you've completely missed my point. Cheating had no negative impact - on anyone, at all. Getting caught cheating would have huge negative impact.


>Why would word get out if you just grade that way?

Students compare grades and talk to each other. It's also not uncommon for students to ask about your policy on rounding in the first class, when you're going through the syllabus.

>Cheating had no negative impact - on anyone, at all.

I'm afraid your cheating did have a negative impact on others, albeit a small one. For example, suppose that the class you took was graded on a curve. Then by adding a false datapoint, you may have pushed up the cut off point for the higher grades. More generally, the larger the number of cheaters, the less meaningful grades become for everyone. Every fake A grade contributes to the devaluation of real A grades.

>Getting caught cheating would have huge negative impact.

You'd be surprised. As the article explains, punishing cheaters isn't really in anyone's narrow interests. It's sadly rather easy to get away with cheating at university, even if you do get caught.


> Students compare grades and talk to each other.

That works for multiple choice. Given the ".5" I'm assuming partial credit is discretionary. So you can just discretionarily choose to give +.5.

> For example, suppose that the class you took was graded on a curve.

It wasn't. Also I'm pretty sure I still failed the test because cheating is hard, I couldn't really read much of what the person in front of me wrote.

> You'd be surprised.

I would be, yeah. My school took that very seriously.

> It's sadly rather easy to get away with cheating at university, even if you do get caught.

Yeah, my CS degree had a hilarious amount of cheating going on.

My point isn't "cheating good".


>That works for multiple choice. Given the ".5" I'm assuming partial credit is discretionary. So you can just discretionarily choose to give +.5.

Yep, and then you'll deal with the students who want to know why their friends got the discretionary +.5 and they didn't! And you'll be in a difficult position, because arbitrarily adding points to some answers and not others does seem pretty unfair on the face of it. (Remember that the students who weren't sitting on a grade boundary will be comparing their scores with the students who were, so they'll see if you added +0.5 points to question 1 for Jack on 89.5 but not for Jane on 85.)

By the way, "partial credit" in this context means "credit for a partially correct answer", not "non-integer credit". You can perfectly well have a scoring system where a single correct answer is worth 0.5 points, as test points are a completely arbitrary unit :)

> My point isn't "cheating good".

It's not clear to me what your point is regarding cheating. You seem to not like the idea of people being punished for cheating. But as cheating is easy to do, it would run rampant without at least a tangible possibility of punishment. So I don't really understand how you (i) think that cheating is bad, (ii) recognize that it happens frequently, and yet (iii) don't think that cheaters should be punished.


> think that cheating is bad

I don't think that cheating is bad.

> recognize that it happens frequently,

Naturally. If you give people stupid chores they will almost universally try to find a way to avoid them.

> don't think that cheaters should be punished.

Even if I bought into everything else, i.e. that testing is good and cheating is bad, I would still not punish cheaters. As I said, I cheated that one time because I had other issues that made school difficult. Punishing me would have done nothing except add additional stress, making me retreat further from my education. But of course, as I just said, I don't buy into all of that other stuff, so it's not only an ineffective and cruel way to approach education, it serves no purpose.

> It's not clear to me what your point is regarding cheating.

My point is that most tests are stupid, and a lot of what "cheating" is is just making them less stupid. For example, I remember students would hide their notes during a test so that they could reference them. That's just good sense - in what real world situation do you need to have instant recall for arbitrary information? It teaches kids to memorize shit, which is damaging.

Two students checking each others answers? Sounds a lot like any normal adult problem solving.

So you can try to "tweak" the system until cheating is impossible or so scary that people will rarely try, or you can "give up" and let people cheat... or you can take a step back and realize that you've made up a problem with no solution.

As I said, school should focus on the three things I mentioned. Nonsensical testing strategies and finding ways to trick kids for doing what is, frankly, the sane thing to do, is purely damaging.


If you were smarter you would recognize that in many cases there are substantial financial rewards for good grades. And many classes are curved. It's not just wrong, it's evil, to screw other pre-meds, for example, by cheating. Of course you are obviously right that it's good to work together on problems, but what does that have to do with exams? Obviously, in hard classes everyone would in fact "work together" with the top students.


> If you were smarter you would recognize that in many cases there are substantial financial rewards for good grades.

Yes, well, I'm managing to scrape by somehow despite that.

> Of course you are obviously right its good to work together on problems, but what does that have to do with exams?

Yes, that's... the point. Exams are idiotic.


LOL at the idea that grades are just based on who you sit next to in a test.


TBH, reading this article really didn’t give me much, IMHO.

I guess the context here is mainly a university? Or is it senior high?

Anyway, I spent around three years in college and the value-add for me wasn’t the grades I got. The value I got out of it was the foundation that I laid and the inspiration I got. I got in touch with materials/domains that I might never have encountered otherwise. And to me that’s truly a gift that keeps on giving.



