No snark, but if you sincerely believe that, then there can't be a Libertarian Utopia because the libertarians are outnumbered by scoundrels 1mil to 3. Why would you take a path destined for failure?
Society's bigotry is going to flood that bad boy so quick you might as well name it Goebbels.
I love ML. I want children to be safe. This is not the place for ML or AI or Quantum or any tech.
What needs to exist is better resources for those children, that mother grading the tests, the teachers of those children, and social services that are meant to support them. If you want to make a difference about this, look there.
Don't go building an automaton King Solomon who decides that this kid should be taken from these parents because speaking Spanish was worth -0.1 on some goddamn weight trained on data generated from a racist society.
This isn't a "spooky" correlation a cool algorithm can detect, it's a serious, layered social problem.
> Don't go building an automaton King Solomon who decides that this kid should be taken from these parents because speaking Spanish was worth -0.1 on some goddamn weight trained on data generated from a racist society.
Totally what I advocated for and not a strawman attack /s. That said, the risk that such an algorithm could be racist or classist is real, and the need to avoid bad correlations and put appropriate controls in place is important.
I think there are opportunities here. Ideally ed-tech doesn't take humans out of the loop, but asks schoolteachers and administrators questions like, "Hey, are you sure students A, B, and C are being supported correctly for subject Z? Are you sure students D and E don't have some kind of abuse or other significant home problem? It sure looks like student F is in this subpopulation that research shows benefits from educational intervention Y. You might want to keep your eye out for that."
And then the teacher goes "Oh, crap. Now that I think about D, there were always these little things 1, 2, and 3 that seemed off... maybe this is worth a referral to social services to check on what's up."
Or "Oh, ... maybe F's struggles in reading really are a speech problem and we should handle that"
On one hand, I agree with you. I remember having to argue about whether I had shown my work when I used imaginary numbers instead of the standard formulas in high school physics.
But even with these examples, the path of appeal and rectification of mistakes is much easier when only humans are involved. I fear that soon people will side with the machine out of ignorance, or to feel justified in an incorrect stance.
The idea that we could be taught so poorly by broken automated systems that we become incapable of detecting that the system is broken seems like a real possibility with AI, one that is much less likely in purely human systems of education (though not impossible).
Things like this story, Word's auto-grader, and Grammarly's style preferences are all surreal to me. We are asking a computer to validate prose meant for human consumption.
Not a reflection of physical reality like sensor data or even accounting information, but the method of communication explicitly invented for production and consumption by humans.
Of course feedback from humans is more valuable than feedback from computers; it would be irrational/miraculous if anything were better at giving feedback than a human.
It is a shame it isn't self-evident to instructors how poor a solution this is, and how much better the results are when using critique by peers and instructors -- the classic way of doing things.
I'd argue that the expectation of perpetual population growth is one of the big problems (the unsustainability of Social Security as it bottlenecks at the baby boomers being an obvious example).
There is a compelling case for immigration for that sake alone.