The exploited labor behind artificial intelligence (2022) (noemamag.com)
171 points by zhte415 on Jan 20, 2023 | 205 comments



Neal Stephenson's "The Diamond Age" has a fake AI "character" -- a book that appears to be AI but is really operated by low-paid humans in 3rd world countries (reminiscent of the Mechanical Turk, a supposed chess playing automaton that actually contained a small person).

There's an old observation from Arthur C. Clarke, that sufficiently advanced technology seems like magic. One thing that learning how magic tricks are performed taught me is that magicians typically do their fake magic by doing an unreasonable amount of work behind the scenes -- "magic" in the real world is often just doing a large amount of work that people don't realize is happening.

Given all that, it seems appropriate that the new "real world magic" -- ML systems imitating intelligence -- really rests on a lot of hidden work by human beings. Just like magical devices like iPhones exist due to a lot of surprisingly cheap labor. Imagining otherwise is like imagining that the delicious food from a 3-star kitchen just appears from the chef's mind, without the help of all of the low-paid kitchen workers, farm workers, etc. that in reality do most of the work.


> a book that appears to be AI but is really operated by low-paid humans in 3rd world countries

Uhm actually :) the AI definitely writes the text itself, and takes care of Nell, and senses the environment around itself, but for plot reasons it can't do voice synthesis. So it employs humans to read the words aloud. At least until Nell learns to read.

So it doesn't just appear to be an AI; it is in fact an AI, with a veneer of human voice on it.


In Diamond Age the book is being performed by a skilled actor who is voicing many characters using a script entirely generated by the AI.


You perfectly illustrate the problem with typical western thinking. The workers you refer to may be low paid when compared to the US and other western countries, but they are highly paid in their respective countries. A lot of the time, the jobs (ie: iPhone assembly, AI related) are highly sought after because they are a great alternative to the other jobs that the workers with their skill set can get. It is also a great way for them to get a step up the job ladder and acquire new skills.


> A lot of the time, the jobs (ie: iPhone assembly, AI related) are highly sought after

This description also applies to mining toxic substances by hand and high-end prostitution.

Artisanal cobalt mining by hand is relatively popular, and it kills you within 10 years. It is not difficult to exploit desperate people. I don't think we should be whitewashing it.


The woman who started Sama (mentioned in the article) explicitly started the company to help people in those countries. Her entire life appears to have been directed toward helping people in Africa; she had a history of it. She wasn't there to exploit people.


You mean the same Sama that charged OpenAI $12.50 per hour for a contract and paid their African contractors $2 or less an hour?

https://time.com/6247678/openai-chatgpt-kenya-workers/


Discussed 2 days ago:

https://news.ycombinator.com/item?id=34426421 (570 comments)

(Warning: Shitshow)


Time... aren't exactly being truthful. They constantly refer to "take-home" pay rather than pre-tax. The all-in cost for a person includes pre-tax wages, often taxes on top of that, desks, computers, managers, electricity, office space, benefits and so on. Common sense suggests they are likely making 30% or so margins. If it were much more than that, a competitor would eat their lunch. Look at the publicly listed services companies like DXC, Wipro, Accenture - margins are 20-30%.


But it is exploitation, though, right? If a bunch of people in a western company say "well, we could just hire people in Africa because that would significantly reduce our costs," isn't that exploitation of cheap labor in Africa?


I guess it depends on your definition of "exploit".

If OpenAI had to pay more, they would have gone with another option. It's challenging to work across time zones, across cultures, across language barriers. Working with folks in Reno, NV or somewhere in the southern states of the US would have been the choice for OpenAI at a much higher price.

It's a competitive world. On the surface the Sama founder knew that and realized the options were higher wages for these folks in Africa, or none. The choice of even higher wages wasn't actually on the table.


> If OpenAI had to pay more, they would have gone with another option... It's a competitive world.

In the 2000s, rating agencies rated subprime mortgages as AAA bonds, contributing to the global financial crisis. If they had rated the bonds as junk, the banks would have gone to another rating agency. It's a competitive world.

Therefore defrauding all of us was the right thing to do?

Just because the problem is inherent in the system and you individually can't change it does not mean you cannot acknowledge the system's faults.


That's a clumsy analogy. Issuers paying for ratings is an acknowledged problem (the ratings agencies, issuers, investors and regulators agree it's a problem) and only happens because they can't make another model work.

Competition driving costs down is viewed by many as a good thing. Many people view the higher wages the folks in this case got versus their other opportunities as an opportunity, not as them being exploited. Capitalism is a system which some might not agree with, but pretending it's an obvious problem is wrong. Pretending low prices are an obvious problem is wrong. Many countries have explicitly chosen a capitalist system (a competitive market model where costs and prices are driven down) and have legislation and agencies devoted to protecting that system. Those countries aren't run by dictators.


Is it exploitation to buy things from poor people?


Is it exploitation to buy a kidney from a homeless man for $500?


It can be, if you're taking advantage of them being poor in order to pay them way less than you would pay others.


It would be much worse if they paid way above the local wages. It would trigger corruption. You could end up with a black market in applications for the jobs, or a protection racket, or highly paid leaks of the job interview problem sets; imagination has no limit.

You can say it's OpenAI's duty to make sure all these things don't happen. But they are not there to police the local society. They are there to run their own business. They don't have the competency to make sure corruption does not happen.


No, it's use of cheap labor in Africa.


All business is about exploiting someone somewhere for your benefit, no exceptions. The only question is whether that exploitation is within tolerable limits.


> All business is about exploiting someone somewhere for your benefit, no exceptions.

Where did you learn that? I don't think it's at all true. It seems maybe you have a no-true-scotsman definition of 'exploited', so that no evidence against your claim would change your mind.

Picture a baker who makes bread for people: they get bread, the baker gets money. Where is the necessary exploitation? I can't imagine where your confidence - "no exceptions" - comes from. Are there no win-win exchanges in the world, none even possible? I'm not a huge fan of capitalism but that seems absurd.


"Exploitation" has at least two (IMO very different) meanings.

Even though most of the time "exploiting an opportunity" is neutral and "exploiting our workers" is either a scam or abuse, I have seen some texts that used the word in the same sense for both cases.


The baker is exploiting his customers' need/want for bread. The customers are exploiting the baker's need/want for money.

Another way to describe business is that all business is about ripping someone off without pissing them off (and ideally making them happy). Middlemen who make their profit off margins are the most obvious example, but as I said this applies to all forms of business.

I reiterate: All business is about exploiting someone somewhere for your benefit, no exceptions.


I think you are using the word "exploit" in a different way than it is usually used, leading most people to misunderstand you. Or as a sibling comment suggests, the word has two meanings, and your argument uses equivocation (two different meanings in two different places) to achieve an apparently thick, substantial conclusion out of nothing.


Let me rephrase it using simpler language, then:

All business is about taking advantage of someone somewhere for your benefit, no exceptions.

The baker is taking advantage of his customers' need/want for bread. The customers are taking advantage of the baker's need/want for money.


Business is about trade. Trade has an amazing property where when we trade, we can create value out of nothing. Excess apples aren’t worth much to an apple farmer. If an apple farmer gives some apples to a plumber in exchange for fixing the pipes in her bathroom, both people are better off. Value for both parties has been created out of nowhere. It’s magic. And to this day, I can pay a plumber to fix my leaky tap and we both walk away from the trade feeling like we got a good deal.

Businesses are organisations designed to systematically “exploit” trade to generate profit - which ends up in the hands of their customers (my tap got fixed), and the employees and owners in the form of money.

Healthy businesses have created all the wealth that exists. Let that sink in! Trade (usually through businesses) is the mechanism that has given us the bounty of modern life.

I also agree that capitalism is a demon we have summoned from beyond the veil that doesn’t truly care about us. Unbridled capitalism will overfish, send children into cobalt mines, and encourage bankers to commit fraud. As the economist says, we need healthy regulation to curb capitalism’s excesses.

But classifying it as entirely evil, or entirely exploitative is too much. If you really feel that way, try living without trading if you like. Sounds like a sure fire way to end up homeless. I wouldn’t last a day.


When did I ever say business was "evil"? Nothing about business is inherently "evil".

Business is about exploiting someone somewhere for your benefit, no exceptions.

Trade is about exploiting someone somewhere for your benefit, no exceptions.

Put another way, business is about obtaining someone else's time so you don't have to use yours.

You should stop applying pre(mis)conceptions and look at what business is objectively.


Saying business is exploitative is pejorative - it implies wrongness. If you don’t mean to imply a moral judgement, maybe use a different word.

If you don’t mean to imply wrongness, then I’m not sure how your point of view is any different from mine. Trade is two people, mutually and consensually getting something they want from each other such that they’re both happier. If the only difference is your use of the word “exploit” but you mean it in a non-moralising way, sounds like we agree.


I would ask why /you/ are implying to exploit is "wrong". At best that's a very narrow and fairly modernist reading. To exploit someone or something in and of itself has no negative connotations associated with it.

Business is exploiting someone somewhere for your benefit, no exceptions. Put another way, business is ripping someone off without pissing them off. Put another way, business is buying and selling time at a price that is acceptable. Ideally, business will culminate in all parties being happy.

If you think business or the act of exploiting something is "evil", or that capitalism is a "demon", that's your problem.

I'm describing business for what it is objectively. Some guy has a surplus of something you need/want, and you try and exploit that fact for your benefit. That is business.


> I would ask why /you/ are implying to exploit is "wrong". At best that's a very narrow and fairly modernist reading. To exploit someone or something in and of itself has no negative connotations associated with it.

You can die on that hill if you want, but "exploiting someone" has negative connotations in modern colloquial English. If a word doesn't fit what you're trying to communicate, use a different word or you will be misinterpreted.

> exploit (verb): 2. make use of (a situation) in a way considered unfair or underhand: the company was exploiting a legal loophole. (Oxford Dictionary of English)


As evidenced by "exploit (verb): 2.", you are literally ignoring the /first/ definition of "exploit", which to quote Merriam-Webster[1] is:

>1: to make productive use of : UTILIZE ("exploiting your talents"; "exploit your opponent's weakness")

I would link to the Oxford English Dictionary, but apparently they demand I make an account and/or subscribe so that will have to wait for another day.

You have to go out of your way to associate negative connotations with the word "exploit", so please try again with your premisconceptions.

[1]: https://www.merriam-webster.com/dictionary/exploit


Ok; let me try again without my "premisconceptions" (not a word found in the Oxford English Dictionary, fwiw).

So your earlier comment should be read as:

> All business is about making productive use of someone somewhere for your benefit, no exceptions. The only question is whether that productive use is within tolerable limits.

... Except then that second sentence doesn't make sense now. If we take out the word "exploit", what are these "tolerable limits" of productive use you're pointing to? The original sentence read as if staying within "tolerable limits" is important to limit the implied harm we're doing by trading.

It reads like a motte-and-bailey argument[1] to me. Like what you're trying to say is this:

> All business is about exploiting ("1. to make productive use of / UTILIZE") someone somewhere for your benefit, no exceptions. The only question is whether that exploitation ("2. make use of (a situation) in a way considered unfair or underhand") is within tolerable limits.

When pushed on why you think business is dangerous you fall back on definition 1 of exploit. But then you lean on definition 2 of "exploit" to imply wrongness - "The only question is whether that productive use is within tolerable limits.". Or as another commenter wrote, "Are [workers] actually happy? Or are they exploited? That's the issue."

The blending together of these definitions feels slippery and underhanded. I don't care what word you use. Do you think trade is good or bad? Healthy or unhealthy for the participants?

[1] https://slatestarcodex.com/2014/11/03/all-in-all-another-bri...


>If we take out the word "exploit", what are these "tolerable limits" of productive use you're pointing to?

Business transactions fundamentally rely on the parties concerned coming to an agreement in order to proceed. If the parties are not in agreement, which is to say if the exploitation (aka the terms of the transaction) is not within "tolerable limits", the business transaction cannot proceed.

Therefore, and I reiterate: All business is about exploiting someone somewhere for your benefit, no exceptions. The only question is whether that exploitation is within tolerable limits.

>When pushed on why you think business is dangerous

I have never said nor implied such a thing.[1][2]

[1]: https://news.ycombinator.com/item?id=34461596

[2]: https://news.ycombinator.com/item?id=34462392


> Trade is about exploiting someone somewhere for your benefit, no exceptions.

This is absolutely wrong; there is a whole field of economics studying trade, and you should read at least about comparative advantage. In a healthy economy, trade is for mutual benefit.

https://en.wikipedia.org/wiki/Comparative_advantage


Acting for your own benefit is not mutually exclusive with mutual benefit.

In fact, most business transactions are mutually beneficial through no particular impetus from the parties: One party gets whatever goods or services he needed/wanted, the other party gets whatever monies he needed/wanted; both parties are hopefully better off and happier than before.

I reiterate: Business is about exploiting someone somewhere for your benefit, no exceptions.


> I reiterate: Business is about exploiting someone somewhere for your benefit, no exceptions.

Given the controversy over the word "exploit", do you mind restating your belief without using the word "exploit" to make sure we all understand what you mean? Is this a fair restating?

> I reiterate: Business is about making productive use of someone somewhere for your benefit, no exceptions.


I already did.[1][2]

Your inability to comprehend plain English is not a controversy nor is it my problem.

[1]: https://news.ycombinator.com/item?id=34460783

[2]: https://news.ycombinator.com/item?id=34461596


It's communist nonsense that's used to justify the idea that walking dogs for ~25 hours a week is just too much.


I think your example explains why the article is a potentially gray area of exploitation.

I think it's clear exploitation when you offer someone employment that adversely impacts their rights. So artisanal cobalt mining is exploitative: because of its health effects, it goes against the UN's definition of "the right to work in just and favourable conditions"[1]. However, I'm not sure what rights, if any, are being compromised by the topic in the article. Maybe there's a case that it's unjust because of the asymmetry between the value created and the payment. If there are some rights abuses, then it becomes a clearly exploitative endeavor.

[1] https://www.un.org/en/global-issues/human-rights


Living in Asia, I agree with the OP and find your total switch of topics in order to gain status points on HN gross. People in Asia desire good factory jobs manufacturing iPhones and whatnot. What the hell is wrong with that? Switching this to mining cobalt by hand is totally dumb. Also, where are your sources? Mining cobalt by hand? Any mining operation in Asia will use the latest tools available. You think people here are running around in huts, scavenging cobalt and other precious metals from computer motherboards discarded in the year 1998? Get with it!


Exploitation involves, by definition, giving the exploited something they need. If you don't have food, and someone gives you food in return for slave labor, they are exploiting you.


If people in a poor country are offered jobs in a factory, and they prefer those jobs over subsistence farming (since the conditions are better and pay is better), who exactly is worse off as a result? Sounds like a profitable trade to me given both parties walk away happy.

Outsourced factory jobs are the mechanism by which previously poor countries like Taiwan and China (in many ways) have been pulled out of poverty. The process is happening before our eyes in Vietnam right now.

Do you think you’re doing poor people a favour by denying them well paid jobs? Should we do the same in the west, and have companies fire all our poorest employees?

I can hardly think of a more cruel policy.


> If people in a poor country are offered jobs in a factory, and they prefer those jobs over subsistence farming (since the conditions are better and pay is better), who exactly is worse off as a result?

You aren't understanding the meaning of "exploitation". See the GP.

> both parties walk away happy

Are they actually happy? Or are they exploited? That's the issue.


>> If people in a poor country are offered jobs in a factory, and they prefer those jobs over subsistence farming (since the conditions are better and pay is better), who exactly is worse off as a result?

> You aren't understanding the meaning of "exploitation". See the GP.

My question stands regardless of the meaning of the word "exploitation". I didn't use that word in my question - which I note you dodged answering. Who is worse off? Can you point to them?

>> both parties walk away happy

> Are they actually happy? Or are they exploited? That's the issue.

Why are they mutually exclusive? If being exploited isn't something with negative connotations, why can't I be "exploited" and be happy at the same time? If I hire a plumber to fix my leaky sink, who cares what the definition of the word exploited is? He's happy for the work, and I'm happy for my sink being fixed. We all walk away better off.

Was he exploited? Was I? Who cares when it's consensual and we're both better off?

This argument about what "exploit" means seems pointless and unrelated to the actual topic - which seems to be, is it moral to hire someone in a poor country to do work for you. My claim is that not sending work to poor countries is often even more cruel - because it keeps people in poverty.


> This argument about what "exploit" means seems pointless and unrelated to the actual topic - which seems to be, is it moral to hire someone in a poor country to do work for you. My claim is that not sending work to poor countries is often even more cruel - because it keeps people in poverty.

Exploitation is not a pointless issue; it harms many millions, maybe billions of people.

The options are not only A) Exploit people, or B) Don't hire anyone. We can and should hire them in ways that aren't exploitative.


This is another common problem with typical western thinking -- forgetting the historical context. Colonialism happened, and the people are so poor and desperate now because of things the west did in the past.


This does explain a lot of the difference in development, especially in very poor countries. And this isn’t just a result of “old”, traditional colonialism, but imperialism in the form of meddling in their internal politics. Guatemala is a great example, having had its democracy destroyed by Allen Dulles and his CIA in the 1950s. But you can also take a more recent example like the 2009 coup in Honduras, backed by Hillary Clinton.


As someone who does not live in the USA, I am very glad H. Clinton did not become president.

Impossible to know, but I expect she would not have stayed out of the Syrian war.

It was she who was largely responsible for the debacle in Libya.

So keen on using all that power, and such a short-term thinker.

Trump was a catastrophe for you in the USA but not really for us.

Please elect an isolationist....


The DC establishment will do anything to prevent an isolationist from taking power-- that would begin the rollback of the sprawling empire from which they derive their largess. Trump was terrible domestically, and he was only isolationist in some areas (certainly less damaging overall than the Karen). He also appointed some usual suspects from the Deep State who are extremely interventionist (John Bolton, ffs).


The west doesn’t have a monopoly on evil. Plenty of Africans participated in the slave trade, and sold their brothers and sisters. And still do today.

There's something weirdly racist about the idea that only people in the west (or American diplomats) have any agency in the world. That America somehow is so powerful that y’all created the modern world. That local people and local leaders have no capacity for determining their own future.

That corruption in the police in South Africa is America's fault. That the problem of gangs in Mexico was created by the US and that the Mexican gang members are blameless when they decapitate people. That Americans were the people who voted in Bolsonaro. Or that the west is responsible for Japanese aggression in WW2.

Maybe ending colonial ideas starts at home.


> ...magicians typically do their fake magic by doing an unreasonable amount of work behind the scenes...

I probably heard this in a Penn & Teller interview, but the idea is much older... One of the keys to making a good illusion is to put far more work into it than people would think is reasonable, so it won't occur to them to think along the lines of what you actually did.


> Mechanical Turk

And also inspiration for Amazon's Mechanical Turk. [0]

[0]: https://www.mturk.com/


Most of a smartphone's magic exists due to the very expensive labor of hardware and software engineers. Production of parts and assembly is 1-2% of the total work done.


Reminds me of how "virtual YouTubers" are actually humans behind the skin.


>reminiscent of the Mechanical Turk, a supposed chess playing automaton that actually contained a small person

Reminiscent of the Amazon Mechanical Turk, a current real-life example of exactly that.


> a book that appears to be AI but is really operated by low-paid humans in 3rd world countries

I've seen this in real life before, no need for a work of fiction.


Diamond Age was first published in 1995.

I don't think I am out on an intellectual limb believing very, very few people would have seen an AI-written book back then.


Not as a rule though - so many ML systems are utilizing data that is streaming in from passive sensors or transactional streams and is not human-curated at all. The human aspect isn't an intrinsic property of ML or even of these algorithms, only of particular applications (and I would guess a minority of applications too).

Given that, it seems to be a clear miss to apply that logic generally. I have to believe it most likely stems from a lack of basic understanding and competence on the authors' part.

Edit: Here are two examples that I have personally worked on:

Global calculations of weather composite reflectivity using 20 years of historical satellite imaging data from NASA

Superior RF demodulation (under certain circumstances) using RF transmissions as received and sent.

Both of these use modern ML imaging models; neither requires any human labeling, only streaming data (which in these cases began being collected long before modern ML techniques were in widespread use). The applications in the natural sciences are endless, not to mention the applications on the business intelligence side using transactional data. Only in specific cases is human labeling required, and because of the high cost of that data it is by its nature dwarfed by what is collected naturally (not to mention often error-prone). It is for that reason that techniques for ingesting data that is more natural to collect are growing in favor.
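To make the "no human labels" point concrete, here is a minimal sketch (illustrative only, not the pipelines above; PyTorch and all names here are assumptions): a tiny model trained to predict the next frame of a sensor stream, so the supervision signal comes from the stream itself rather than from human annotators.

    # Minimal self-supervised sketch: the target is simply the next frame in the
    # stream, so no human labeling is involved. Shapes and data are synthetic
    # stand-ins for a satellite or RF stream.
    import torch
    import torch.nn as nn

    class NextFramePredictor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    model = NextFramePredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for a streaming source: pairs of consecutive frames.
    stream = [(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)) for _ in range(10)]

    for frame_t, frame_next in stream:
        pred = model(frame_t)             # predict the following frame
        loss = loss_fn(pred, frame_next)  # supervision comes from the data itself
        opt.zero_grad()
        loss.backward()
        opt.step()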


>> I have to believe it most likely stems from a lack of basic understanding and competence on the authors' part.

That is unlikely, given that one of the authors is Timnit Gebru. I'm quoting below select passages from her Wikipedia page indicating her background:

In 2001, Gebru was accepted at Stanford University.[2][5] There she earned her Bachelor of Science and Master of Science degrees in electrical engineering[8] and her PhD in computer vision[9] in 2017.[10] Gebru was advised during her PhD program by Fei-Fei Li.[10]

Gebru presented her doctoral research at the 2017 LDV Capital Vision Summit competition, where computer vision scientists present their work to members of industry and venture capitalists. Gebru won the competition, starting a series of collaborations with other entrepreneurs and investors.[11][12]

Gebru joined Apple as an intern while at Stanford, working in their hardware division making circuitry for audio components, and was offered a full-time position the following year. Of her work as an audio engineer, her manager told Wired she was "fearless" and well-liked by her colleagues.

https://en.wikipedia.org/wiki/Timnit_Gebru


I didn't say they were uneducated, but being an audio hardware engineer does not imply a good working knowledge of industry trends in ML applications.

Regardless, my point still stands: they completely ignore (willingly or ignorantly) that human-labeled data is not intrinsic to ML or even to the algorithms themselves, and in all likelihood it is a small minority of the datasets used by modern ML applications. To then apply that critique generally to ML shows ignorance and a misunderstanding of the ecosystem.


Gebru is not an audio hardware engineer. I call your attention to this passage I quoted above:

Gebru presented her doctoral research at the 2017 LDV Capital Vision Summit competition, where computer vision scientists present their work to members of industry and venture capitalists. Gebru won the competition, starting a series of collaborations with other entrepreneurs and investors.[11][12]

And to the fact that she got her PhD in computer vision, i.e. the main area of AI research that the article seems to be criticising.


Her work experience is as an audio engineer - but again, it doesn't matter what her credentials are; she is wrong regardless, and you are ignoring my whole point. She shows her ignorance of the subject matter (again, willingly or not) when she applies her critique generally at ML and not just at these specific applications - not sure how many times I need to say that.


>> Her work experience is as an audio engineer

Her PhD research is in computer vision, and she and her co-authors are writing mainly about computer vision, but you spoke of "a lack of basic understanding and competence on the authors' part". That is clearly incorrect, and I don't understand what saying the same thing many times will change about that.


Computer vision is a domain and is not equivalent to machine learning. They overlap, yes, but not necessarily. Again, though, you have completely ignored my point. The authors ignorantly conflate specific applications of ML with the entire industry. That plainly demonstrates a clear lack of competence in this area.


As a third-worlder I welcome these jobs. They lower unemployment, raise average salaries and haggling power, and increase the country's foreign currency reserves... Our minimum wage is similar to Kenya's, and $1.46/hour after tax is way above the roughly 80c minimum.

It could make sense from an environmental perspective to produce and consume locally but for a job like this, the only reason to use US nationals would be to create local jobs.


I'd like to see a story in The Onion about a third-world person worried that the computer games they play are written by young exploited programmers in first-world countries.

Edit: I asked ChatGPT to write it for me:

Write me a story in the style of The Onion about a third world kid who worries the he is exploiting the young programmers in first world countries who make the games he plays

Headline:

Third World Child Struggles with Guilt over Exploiting First World Game Developers

Story:

In a small village in a developing country, a young boy named Ahmed sat huddled over his tablet, playing the latest mobile game. As he tapped away at the virtual farm, building and expanding his virtual empire, a feeling of guilt began to gnaw at him.

Ahmed couldn't shake the feeling that the game he was enjoying so much was built on the backs of exploited programmers in first world countries. He knew that the developers of the game were likely working long hours for low pay, all so that he, a third world child, could have a few moments of entertainment.

As the guilt grew stronger, Ahmed tried to put the game down, but found himself unable to resist its allure. He knew that he was complicit in the exploitation of his fellow human beings, but he couldn't bring himself to stop playing.

"I know it's wrong, but I just can't help it," Ahmed said, tears streaming down his face. "I wish there was a way to enjoy the game without feeling guilty."

As the sun set over Ahmed's village, the young boy made a silent vow to do whatever it takes to make sure that the game developers are treated fairly and compensated for their hard work. He knew it wouldn't be easy, but he was determined to make a difference, even if it meant sacrificing his own enjoyment of the game."

Ahmed's story is a reminder that in today's globalized world, the actions of one person can have a ripple effect that extends far beyond their immediate surroundings. While it may be easy to enjoy the latest mobile game or app, it's important to remember that behind every pixel is a team of hardworking individuals who deserve fair treatment and compensation for their labor.


This is also reminiscent of that German ad about raising money for Americans in poverty.


Hell yeah, even I would gladly take such a job. 120 rupees an hour is a good deal in such tough times, when it's almost impossible to find a work-from-home job that isn't an outright scam. Last week I sat for 4 hours solving a crazy competitive programming question just to get an interview call. It's rough out there. If anybody knows where to apply for these data labelling jobs, please tell me.


I'm building something right now that is a reverse bidding platform for image acquisition and labeling (company makes a solicitation, workers bid) where you get paid per accepted image and label. It will open to US labor first and might take up to a year to expand to India (assuming you're there).

If you want to be an early India user (paid), put an email in your bio and I'll get in touch. If you can program, though, data labeling might not be the best use of your time.


Sometimes I wonder if these "ethical" articles are only put up to stop new companies from competing.

Like "ok now that OpenAI Inc has a basic AI, lets be sure to close the door behind us so we don't have to deal with competing startups"


I believe this is a form of astroturfing, and it is hard not to see this stuff. Remember all those articles about the AI winter a few years ago? In reality, there was no winter. Lots of smart people took advantage of the opportunity and got incredibly wealthy during that period, and now we see some of the fruits of that labour. It's just the beginning too.

It's important, in my opinion, to always ask 'who benefits from me believing this?'


I mean -

> who benefits from me believing this?

In the case of this article, presumably the exploited* laborers whose labor conditions might improve if the population to whom this article is addressed both became aware and started to care about the labor conditions of the people behind the systems they use. Trying to 4D chess ourselves into not having to actually address the issue at hand isn’t clever or insightful, it’s just abdication with numbers attached.

* I’m not arguing whether or not the behavior described in the article actually constitutes exploitation or is just the noble hand of the market more perfectly forming working conditions - that’s not my point. The point is if you DO read the article and you DO feel that behavior’s exploitative, scratching your head over who’s got an agenda to want this thing you find abhorrent changed, as opposed to trying to change the behavior you find abhorrent, is the kind of thing people think smart people are supposed to do, and tends to contribute to more things you find abhorrent happening in the world.


It can be both true and used for astroturfing.


>Remember all those articles about the AI winter a few years ago?

FTA:

>"MMC Ventures surveyed 2,830 AI startups in the EU and found that 40% of them didn’t use AI in a meaningful way."

While not the same as an AI winter, it can still be indicative of "peak hype". If accurate, a large portion of the companies claiming to have some nebulous AI advantage aren't really delivering on that promise. It doesn't have to be astroturfing to be pointing out a hype machine. Particularly if that hype is predicated on a shaky ethical foundation.


It doesn't have to be, no, but it might be. It's worth reconsidering the things we believe every now and then.


I'm not sure how widespread the sentiment was that there was going to be a near-term AI winter.

What you did have was increasing skepticism that things like fully realized door-to-door autonomous driving were going to happen in the short run -- but a lot of the skeptics didn't think that would necessarily translate into an AI winter.


That's because people today don't just say, "Hey, look what I built." Everyone acts as if they are doing something world-changing. And it's bullshit, and the backlash is natural and well deserved.


By that logic any revelations about bad labor conditions underlying first world products or services only serve to prevent competition from also exploiting those conditions.

If your ethical conundrum is between continuing to allow the exploitation of the post-colonial third world and stifling competition in first-world markets, I think you need to retake an ethics class, because that is not a conundrum.


These articles are written to make people feel guilty about their lives. The ultimate goal is economic equalization.


Nobody can make you feel anything.

If true information induces feelings in you, perhaps it's more constructive to reflect on the information rather than the motives of the reporter?


I did journalism in high school, and I know quite well that how you present true information (framing) or distort true information (for example, by comparing African salaries to US ones) is done intentionally to make the reader feel something - guilt, anger, a desire to vote for somebody specific.


Does experience in high-school journalism support the claim that professional journalism is all a conspiracy to manipulate the reader? I doubt it, and I edited my high-school newspaper for two years lol


Did I say conspiracy? No. I'm making a generalization. If you have sources for objective journalism, I'd love to see them. The closest I've seen was The Economist about 5-10 years ago.


I once delivered papers so obviously I’m qualified to explain how the Star Chamber picks tomorrow’s headlines.

Like the movie says: there is no conspiracy. Big brother is not watching you. It’s a headless blunder operating under the illusion of a master plan.


A topical example to back up the claim that newspapers try to make you feel stuff: Prince Harry and the UK tabloid headlines. The same fundamental information in all cases:

The Sun, positive: https://mobile.twitter.com/rainey_knight/status/161239752558...

The Mirror, positive: https://www.mirror.co.uk/news/uk-news/prince-harry-confirms-...

The Sun, negative: https://www.thesun.co.uk/news/21085974/piers-morgan-harry-ta...

The Mirror, negative: https://www.mirror.co.uk/news/royals/army-fury-prince-harry-...


> Nobody can make you feel anything

The size of the advertising industry suggests otherwise.


> Nobody can make you feel anything.

A simple phrasing like this does not reflect the reality of humanity's evolution and resulting biology. We're social creatures, requiring that others make us feel things: https://www.apa.org/monitor/oct05/mirror

Unless you're a psychopath, perhaps: https://www.medicalnewstoday.com/articles/321839


> Nobody can make you feel anything.

This, in 2023, when people feel offended by anything, sounds pretty off...


Nothing specific to 2023. In 399 BC, Socrates went on trial for corrupting the minds of the youth of Athens, and for two impious acts: "failing to acknowledge the gods that the city acknowledges" and "introducing new deities".


It is for sure specific to the 21st century. Socrates was exposing fake specialists and decided to die instead of flee. Specialists are debunked daily, and they keep their posts; no consequences.


The goal is awareness. There is no such thing as a free lunch; somebody has to do the dirty work for too little money.


Are you sure it's too little money? Articles like this often report wages that sound small to a person living in western Europe or the US, but compared to others in the same economy, these salaries are often competitive.

And yes, somebody has to do the dirty work.


This is why I can never understand these articles. How much is $1.46/hr? Is it a little? A lot? Average? So I head over to Google and look up the answer[0]. But everything is in yearly wages, so $1.46*2080 ~= $3037. Looks like it would be a lot in Sudan or Myanmar, average in Egypt, and not a lot in Sri Lanka or Ukraine. But are these numbers even trustworthy? We had a post about this just the other day.

But we don't even know the countries that are being used! So we're back to the original question: is it a little? A lot? Average? Honestly, when I see people spout out raw numbers, alarm bells go off in my head. It doesn't matter if it's wages, the number of car crashes, or the number of murders. We have so many fucking people on this planet that a small percentage of a large number is still a large number. There are definitely people using this in deceitful ways. There are also dumb people, but idiocy can still be accidentally malicious. The reason it is easy to lie with statistics and data is that it is actually really difficult to compare numbers and data across differing conditions. In this case, it really does not make sense to compare the wages of an American to the wages of a Sri Lankan. The price of an apple in both places isn't equal.

Still, I don't know the answer to my question.

[0] https://www.worlddata.info/average-income.php


The fact that it's a lot in Egypt or Myanmar just means wages are low in those countries. It's easy to say "well that's just the way it is" while you're sat near the top of the pyramid.


You're simplifying things too much, though. There are still important questions. For example, how much does an apple cost? Rent? These are not equal across these countries. But also, let's say they paid $3/hr in Egypt; you could end up flooding the market. At $15/hr it could ironically be disastrous. We have actually seen things like this before when Westerners move into other countries. The problem is that the local markets can't compete. Then we have a whole neocolonialism situation.

If your goal is to actually help people in a country get out of poverty, this is quite a difficult goal to achieve without taking over that country (implicitly or explicitly). We don't want to physically invade. We don't want to economically invade. But we also don't want them to sit in poverty, so we have to do something, right? So how do you do that and let them maintain their independence? They might not even want to work with us to begin with! Don't pretend that these are trivial problems and that we Westerners sitting at a computer just know what's best. That's how colonialism started in the first place.


Paying extraordinarily higher-than-average wages in a poor country has downsides - most likely they'll be stolen (since most charity is stolen), but it also annoys the government when, e.g., local doctors stop their practices to work for you.


Often "competitive" only means other jobs also pay too little money.


The amount of dirty work increases as the price of labor goes down, because companies put less effort into inventing labor-saving techniques.


The circular reasoning is entertaining, as the low-priced labor is theoretically generating the AI that will be the ultimate labor-saving technique. But you don't need to save "too cheap to meter" labor. What was the last thing that was "too cheap to meter"? Ah yes, nuclear power. Look how that turned out.

The two problems with trying to save the world via AI are that the cheapest self-driving car is a passenger train driven by a "too cheap to automate" human, and that an economy based on AI will be too poverty-stricken from wealth inequality to permit the AI to generate a profit; thus AI is not needed, and thus there is no need to destroy everything by applying AI.

The more AI is deployed, the poorer an economy will become, and the cheaper labor will become, making it quite a race to see if the AI gets smarter faster than economic activity implodes.

The most likely outcome of AI boosterism will be something like the environmentalist movement, but anti-AI. Who will be "The Lorax" of anti-AI?


If all you take away from this is that "someone is trying to make me feel guilty", then maybe you need to re-assess your own personal sense of guilt, and stop projecting it onto other people.


I don't see why somebody would feel guiltier about this than about wearing Nike shoes.

It'll make a cottage industry of "data made in the USA".

It continues to answer the question: can the west run without maintaining a dedicated underclass -- no.

Should we be paying even less to make sure that we can keep that going? If those people get richer, these systems that benefit us will stop working.


Would be less need for these articles if major tech companies would stop acting like villains in a sci-fi movie.


We wouldn't see them "acting like villains in a sci-fi movie" if there weren't systemic economic incentives for them to act this way.


And yet, every one of these organizations consists of individuals making sets of individual decisions on an ongoing basis, who indeed face incentives as well* but also persist in making the dubious ethical decisions that the company enacts, and we can in fact judge them for this, because companies are made of people who don’t actually abdicate moral responsibility for their actions by logging into a corporate email system.

*And here’s where I digress to say, again, that, indeed if you are the single mother of a child with a heart condition wholly reliant on the expensive company health care not lapsing for even a minute lest your child not just die but literally explode, taking your whole family and half the neighborhood with them, I’m not judging you. If, on the other hand, you’re a single dude between 20 and 35 with a college degree, significant assets, no debt, and a sellable skillset, yes, you’re the one I’m judging.


Cyberpunk*


Did you check under which conditions and in which countries your shoes, t-shirts, pants, etc. are manufactured?


Yes. Wherever possible, I try to be aware of the consequences of my actions, and adjust how I do things when I find they’re leading to outcomes I feel are not in alignment with my ethics.


Nobody cares about some possibly fake words from the internet, dude. My point was more to show you that "if major tech companies would stop acting like villains in a sci-fi movie" is wrong. Companies are companies, and they aim for profit at all costs, regardless of their industry.


To me, the use of cheap labor for training suggests the level of importance AI companies place on avoiding social/cultural biases in the model in two ways.

1. Companies' standards would be much higher if classification were done by engineers paid engineering salaries. And engineers would be directly accountable for model biases, instead of bias being excused on the basis of how the sausage gets made.

2. Outsourcing classification explicitly means classification is not valuable as a core competency; not an area in which expertise is important; and not an activity deserving of the benefits of direct employment such as financial rewards correlated to business success (e.g. bonuses, raises, options, etc.).


You are imagining that someone who otherwise would start a competing company would just choose not to after reading this article? Or what?


I noticed "Ethics of X" is generally a great field for those who can't cut it in field X. See the "Ethics of AI crowd as an example", that seems to be mostly made of "tech adjacent" folks.

Probably the best example here [0].

[0] https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...


That's not what this article is about.


I wonder if they are an "inside job": doing such a terrible job of making an ethical case against AI convinces people it must be fine...


This article is all over the place. I see the issue with the work being traumatizing, but apart from that, the points the article makes are just awful.

> labor exploitation is not central to the discourse surrounding the ethical development and deployment of AI systems

Yes, because workers' rights and AI ethics are two separate issues. It seems the author is using the fact that AI ethics is a thing as a "gotcha" against AI companies, because (according to the author) they treat their workers unethically.

> In this article, we [...] argue that supporting transnational worker organizing efforts should be a priority in discussions pertaining to AI ethics.

And even if you shoved workers' rights into AI ethics: how do you organize labor for a job where there's an unlimited pool of people worldwide who would take the job in an instant, with minimal training required, making pretty much the whole workforce instantly replaceable?

> These corporations know that increased worker power would slow down their march toward proliferating “AI” systems requiring vast amounts of data, deployed without adequately studying and mitigating their harms.

> If corporations are not allowed to exploit labor from Kenya to the U.S., for example, they will not be able to proliferate harmful technologies as quickly — their market calculations would simply dissuade them from doing so.

Here it's suddenly about actual AI ethics again with the idea that progress needs to be artificially slowed down in order to ensure ethical implementation. Increasing the "unethically low" data labeling wages has the added benefit of slowing down the "unethical" development of AI.

> Talk of sentient machines only distracts us from holding them accountable for the exploitative labor practices that power the “AI” industry.

It really just looks like a weird ramble where the author picked a popular topic and then grasped at straws to draw lines to their political agenda, with some Luddism sprinkled on top.


The industrial revolution moved 90% of workers out of farming; yes, 90% of employment in 1870 was agricultural, literally producing calories.

We sometimes mourn for this in the form of back to the land pastoralism, but quality of life empirics suggest the industrial revolution was a benefit anyway.

Instead of Luddism, we should try to find ways that the coming apocalypse of white-collar knowledge work can benefit humanity as a whole, and learn from our mistakes in the Rust Belt.


Ned Ludd's premise was about the quality of autonomy and life of the workers who were being automated away. As automation came in, workers got less money, were treated worse, and had worse lives.

Being called a 'Luddite' was NEVER about technology, but about who gains from technology.

And I dare say he was right in his concerns. The gains of technology are privatized by the owner class, even though we in the worker class are the ones who utilize it. One needs to look no further than the "gig economy".


They just wanted apprenticeships for operators and decent pay. [1]

Luddites were the victims of a very successful smear campaign.

[1] https://www.smithsonianmag.com/history/what-the-luddites-rea...


> quality of life empirics suggest the industrial revolution was a benefit anyway. [citation needed]

I'm not trying to stuff AI back into Pandora's box. It's here, and it's coming. It can be a really great thing, or it can be catastrophic. So I mostly agree with your last point. But if we're going to talk about learning from our mistakes, the industrial revolution gave us The Jungle, and Amazon, and the obliteration of the Amazon.

Things didn't work out for the best; many of them worked out horribly. And things that did work out did so because the road was paved with human bodies (and tens of billions of nonhuman bodies).


The Luddites were not broadly opposed to new technology, they were opposed to the ownership structure which cut them out of the higher profits the new technology brought.


> We sometimes mourn for this in the form of back to the land pastoralism, but quality of life empirics suggest the industrial revolution was a benefit anyway.

Nope. That 'moved workforce' started living in industrial slums and dying at a ripe old age of ~40 instead of living until their late 60s.

http://www.filmsforaction.org/news/recovered_economic_histor...


And now the average worldwide lifespan is 72, higher if you look at countries that are fully industrialized.

Transitions suck for the people left behind, but that doesn't mean that progress is bad, it can really help people overall.


> And now the average worldwide lifespan is 72

Technology and science changed living standards, not capitalism or the industrial revolution by themselves. In fact, average life expectancy for adults was already within the 60-70 range before the industrial revolution; what brought down the averages was children dying early. Medicine during the scientific revolution solved that. In that period, every bit of knowledge was being shared openly by everyone; all of science and technology was effectively open source.

In fact, the capitalist mode of economics changed that. It pushed patents, and it started to monopolize and control science and technology from the latter part of the 19th century. That is also why the pace of scientific progress slowed down around that time.


It doesn't much matter because capitalism is here to stay, and people like it.


That link is unhinged. It considers people to be self-sufficient on mere subsistence. A society which creates collective incentives towards collaboration and away from violent domination, creating wealth and value in excess of subsistence, and opening up the massive quality of life increase to all, is a significant departure in a positive direction from mere subsistence for an agrarian peasant that survives at the whim of people who could brutally and violently take from them.

That has to be the most hilariously and sadly unhinged reading and retelling of history I have ever seen. A wonderful example of lying with truths. The author seems to be part of a communist online writer collective; I suppose that should be unsurprising given the subject matter. Commies are wild.


> collaboration and away from violent domination

Working in an iron foundry or steelworks is more peaceful than quietly living on a farm? There is less conflict between workers than between farmers?

What is the basis for this fantasy?


Human history is full of violence, and not all of it is between "betters" and "lessers"; it's just people being violent towards one another to get what they want. Post-industrial society established social order and the rule of law much more clearly than anything prior. A big piece of this was due to compulsory and inclusive education, but many other factors, including the rise of enterprises which required social interaction to reach personal success, changed society into a structure where collaboration was rewarded much more than violence, which was punished.

This was not the case prior to industrialization. You have some idyllic pastoral fantasy in mind, which was not true.


> enterprises required social interactions to reach personal success changed society to a structure where collaboration was rewarded much more so than violence, which was punished.

One big hole in your argument - Mexican cartels. Modern industrial society, loads of collaboration inside the cartel, very violent people achieving massive success. I am confident that the life of a medieval farmer was far less violent than living under a violent cartel.

Can you actually bring any empirical evidence that Victorian-age factory life was more peaceful than agrarian life 100 years prior? I think you are engaging in self-delusion, or comparing today to 500 years ago - which is unfair.


> very violent people achieve massive success

Indeed. Also missing from the picture is the violence inherent in the system - the majority shareholders, CEOs and execs of healthcare corporations that deny people treatment if they can't pay are inflicting much more violence than the tyrants of the past - it's systemic, all-encompassing, without any discrimination as to its target. But because the violent part is outsourced to the state's police, to kick out and keep out those who can't pay, it's magically 'not violent'. Whereas it's possibly the worst systemic violence ever invented in human history.


> A society which creates collective incentives towards collaboration

There is no 'collaboration' in industrial slums. It's mechanized slavery.

> Commies are wild.

From the other perspective, your knee-jerk dismissal of the widely known and discussed 'tragedy of the commons' sounds like religious fundamentalism, especially to non-Americans.

The people of the age openly described what was happening, criticized it and lamented what they lost. The perpetrators of the act had no qualms in openly declaring their intentions and doing what they declared, because they openly and actually proudly thought that was the way it should be.

The social awareness that we have today was not present in those ages - back then peasants had to 'know their place' and 'their betters' had all the right to do whatever they pleased with what they owned. So they had no problem in openly declaring things that we would find as sociopathic today.

If what you read in that small bit of article shocks and surprises you, wait until you read the actual memoirs of the aristocrats and industrialists from that age...


I don't think you're reading the article as-written.

It'd be nice to see some sources for the quoted pamphlets, but if we assume that they are actual quotes from primary source material, it's quite telling.

The article does get a lot wrong, e.g. conflating feudalism with modern industrialized capitalism (hunting had been controlled by central political authorities for centuries before the industrial revolution).

But there's also a good point being made, that breaking up communal economic systems can be used as a tool of subjugation and control. There's nothing in here about self-sufficiency or subsistence per se.


tbh he's not wrong that the article says more about the partisan slant of the authors than it does about British industrial history. The article touches upon self-sufficiency with the argument that peasants could have made their own shoes from their own leather in a matter of hours, so buying them proved they were poorer (a particular load of, er... old cobblers), and I'm not sure various quotes about peasants being lazy prove anything more than the fact that snobbery existed.

There's plenty of actually problematic stuff (the Enclosure Act) that happened to the British peasantry mostly before the Industrial Revolution without taking the view that peasantry was a particularly pleasant lifestyle that nobody would volunteer to change.


> (the Enclosure Act) that happened to the British peasantry mostly before the Industrial Revolution

The Industrial Revolution was already starting at that point. We associate the Industrial Revolution with mechanization, but it was in fact the translation of old feudal modes of land ownership to the entire economy. This trend started with the wool trade and the feudal lords finding the wool trade more profitable than feudal land ownership. Coupled with the new modes of economic organization created during the Age of Exploration (corporations, stocks), this removed the incentive for the feudal aristocracy to maintain farming as a means of income and pushed them to maximize their revenue by moving into various emerging trades, with the wool trade being the first. The pushing of the peasants out of the commons started around that time. What happened during the period directly labeled the Industrial Revolution was just the use of the same method to push the peasantry into factories.


I think it's completely legitimate to ask, who benefits from denigrating peasants?

There is historical precedent for social biases being developed in order to justify economic and political institutions. It's often said for example that this is where modern racism comes from, a moral workaround for the obvious immorality of the Atlantic slave trade.

Thus, the presence of some kind of social or cultural bias might be a useful indicator of the presence of a developing or existing socioeconomic/political power dynamic.

I think self-sufficiency is a red herring here. Maybe the author was taken in by some kind of Marxist pre-industrial pastoral fantasy, but that doesn't mean they didn't make some good points along the way.


> conflating feudalism with modern industrialized capitalism

They are precisely the same: the ownership structures of modern capitalism already existed in the Middle Ages in the form of ownership of non-land resources (roads, mines, water passages, even buying/selling rights for specific commodities), and then, with the invention of corporations at the start of the Age of Exploration, they were fully fleshed out as the things we know today, from shareholders to the stock market, from rights to own and use anything to transferable concepts.

The Industrial Revolution was merely the mechanization of the already existing structure.


It didn't work out well for a great many of those workers or their children. The people who got rich would not accept an ROI two generations down the road. Are you willing to accept that now: losing your career and much of your income so that the changes in society will benefit your grandchildren (while billionaires and their children cash in right now)?

'It works out in the long run' is BS, and is always applied to someone else.


I think the next level is post-scarcity. In a post-scarcity world maybe we don't labor and toil to live, because it's unnecessary to tie home, health, food, and life necessities to labor if our labor isn't useful. Maybe life becomes about something other than working to live and living to work. Maybe tying labor to life necessities was necessary given the scarcity of labor, but when labor scales independently of people we need a new way of allocating resources.


Why do contemporary discussions of post-scarcity always place it in the future, rather than in the past as a result of "the assembly line" or "agriculture"?

Surely, given the vast universe of past human discovery, if post-scarcity were possible in any form we would likely already have discovered whatever would initiate it, so it should be here now... and it seems unlikely that any individual future invention will kick it off if none of the past inventions did.


Because productivity still scaled linearly with the number of (consuming) humans, even if the constants improved. AI and other advances offer the potential for nearly autonomous production, allowing productivity to scale independently of available labor.

Additionally, I would say that each advance brought us closer to post-scarcity. We have close to eliminated extreme poverty globally. Compared to hunter-gatherer societies we already live post-scarcity.

Finally, we may very well be post-scarcity already, but the notion of nobility in work and the morality of labor means we can't yet seriously consider decoupling work from life necessities. At some point there won't be enough bullshit jobs left to justify pretending people need to labor to eat, and society will either collapse or we will move beyond work to live.

I would posit, however, that the invention left undone is the one we use humans for now: their ability to reason, make independent decisions, synthesize new ideas in any situation, learn new and different skills, interact with a complex field of visual, auditory, and sensory stimuli effectively towards a goal, etc. That's why we research AI. If our tools have that, then our tools don't need us. If our tools don't need us, we don't have to do the work. If we don't have to do the work, there is no scarcity, because work scales independently of us.

There are also other inventions we know of but haven't perfected that help here. Efficient fusion is one. With that, energy is cheap and plentiful and presumably clean. Energy is the ability to do work. With artificial minds that can produce minds that can in turn produce minds, fueled by plentiful energy, what's left?

So I disagree that we’ve invented everything that might be useful, or that what hasn’t yet been invented won’t lead to improved productivity to the point that human labor is redundant and all human needs can be met without it.


Would be lovely if there were actually any movement to avert the employment apocalypse. So far all I see is talk, and I have no idea how to do anything beyond that myself.


The author has two points that continually distract from each other. They have a specific critique of worker conditions within tech companies, but also a broad skepticism that AI can really be all that effective or innovative, rather than an abstraction over the workers who "really" do the work. Their first point is solid and is accompanied by immediate calls to action; their second is vague, wrong, and potentially dangerously naive.

They use AI, "so-called AI", or "AI" (with scare quotes) interchangeably. Besides being unnecessarily confusing with their own terminology, they are bringing in a pointless philosophical debate about what "intelligence" or "learning" really is. The "40% of AI startups that don't actually use any AI" figure is mentioned not because those startups are misleading investors or users, but because the authors argue that the "AI" label makes workers less visible; they can then tie that to Amazon's "artificial artificial intelligence" and go on to show, or at least imply, that pretty much all AI is a big smokescreen to justify the marginalization of workers.

However... isn't that begging the question? Do we actually care whether these systems learn and think the same way humans do, do their own original work, or whether they're just a worker pretending to be a chatbot? Does that make it okay to underpay gig workers, give them PTSD, or subject them to poor conditions?

Maybe "real AI" never materializes, but also maybe it just keeps improving until it doesn't matter. If so it's dangerous to deny that these systems can ever meaningfully replace or augment humans. We can't pretend that we live in a universe where something will forever be inferior to good old-fashioned human labor just because of the disturbing social implications.


If by "exploit" you mean "offer a clear and optional exchange of money for labour in return" then yes, I guess people were explotied here.

As per everyone in employment.


>The public’s understanding of artificial intelligence (AI) is largely shaped by pop culture

First clause of first sentence falsely posits that the "public" has a single mind, and claims the authors know the contents of this mind and what causes it to acquire these contents. It also adopts a false pose of authority and understanding superior to that of the (nonexistent) mind of the public. It was nice of the authors to disqualify themselves from offering worthy ideas on the topic of AI so early in the article.


Why are you discrediting the authors' points based on a fluff sentence in the introductory paragraph? Typically, arguments are evaluated on their own merits.

The "popular" in pop culture and the "public" in the public's understanding refer to the same thing, so I don't understand your confusion. The only thing really being implied by the sentence is that scientific understanding doesn't dominate the public understanding of artificial intelligence as much as pop culture does.


Do you really believe that we cannot make generalizations about the layperson's understanding of AI? Unless you have a related degree in math, statistics, engineering, or comp. sci, i.e. a tiny minority of people, it's probably true that you get your understanding from pop culture.


>Do you really believe that we cannot make generalizations about the layperson's understanding of AI?

I never said anything like that.

Another point: They also continually conflate "AI", deep learning, and mechanical-turk style crowdsourcing.


> First clause of first sentence falsely posits that the "public" has a single mind

Seems hard to say this on one hand and allow for useful generalizations on the other.

> Another point: They also continually conflate "AI", deep learning, and mechanical-turk style crowdsourcing

Off the bailey and into your motte


>Off the bailey and into your motte

That's another misunderstanding and misrepresentation of my words, which were just making a separate point (could have been a top-level comment).


I assure you, even if the technology one day is fully unsupervised, with minimal involvement from humans, no carbon emissions, etc., people will still publish the exact same articles, form unions, protest, etc.: we are abused, there is inequality and exploitation, the productivity gains of technology must be shared equally regardless of contribution, we must introduce ever-higher taxation, and so on.

Amazon apparently pays around $25/hour to its starting warehouse workers. That’s $1k per week for 40 hours per week, or around $4k/month. That’s more than what the French government pays its rocket scientists. People can discover their market rates based on their skills, and are free to work for another employer. There is no lock-in.


> That’s more than what the French government pays its rocket scientists.

I was about to call bullshit but then looked up the average salary for an aeronautical engineer in France and it was… bleak.


French rocket scientists get toilet breaks and healthcare, as well as other advantages of living in a country with some semblance of safety net. Just comparing raw salaries is misleading.


So OpenAI decides to gimp their creation and prevent it from returning certain responses because some people will screech about those responses, and they decide to hire people in other countries to do it, and now we see articles coming out about exploitative labor practices...

Give these crybabies what they claim to want: fire all those Kenyans and stop gimping ChatGPT under the guise of not wanting to use exploitative labor practices. Rub their faces in it. You can't win with these people, don't even try.

And as a little bonus for the crybabies, when are you going to realize you're tools being used in corporate warfare? You think it's a coincidence that these articles are being written about an up and coming company running circles around the established big boys? Do you like being a tool?


>crybabies

Ironic comment.

>Rub their faces in it

None of that's gonna happen. Deal with it, lol.


I'm thankful for all the Western companies that exploited my labor from age 21 to 25 where I was paid a measly $100 per month or $5 / day.

Thanks to that experience, I now make 100x more.

If it were progressive liberals, sitting in their comfortable homes, trying to protect me, I would never have been exposed to the amazing opportunity this 'exploitative capitalistic company' gave me, one that literally changed my life and the lives of the people around me forever.

Woke Mind Virus is real and a cancer to real progress


Same here. My first job in the freshly ex-communist Czechia was for Otis, an American company that entered the local Czech market for cheap labour (cheap from their perspective).

I got about $1 / hour from Otis and would have violently opposed anyone trying to "protect me" from that job.

The idea that they should have paid me $10 / hour instead (or whatever "fair" number in line with US wages) is laughable. Such a fabulously sweet job would have been immediately snatched up by my boss' boss, or someone from their family – I'd never even hear of it.


Do we even know that the wondrous abilities of GPT-3 are actually given to us by a trained neural net? Might they not be generated in real time by "data labelers"?

Sometimes I wonder whether I'm going to go from being amazed that ChatGPT can write wonderful JavaScript programs, to waking up one day inside a "data labeling" company where I'm writing JavaScript programs as fast as I can for $1.73 a pop.


This is a first world manipulation into some kind of contrived "workers unite" hit piece.

This has nothing to do with real AI and instead this author is trying to pass off hacks and gapfills as an entire industry or pursuit for the purpose of manufacturing suffering or abuse that isn't there.

This author has a terrible perspective.

If we have to enslave the whole world for 1 year to build an AI that elevates all for eternity, would you do it?


> If we have to enslave the whole world for 1 year to build an AI that elevates all for eternity.

I think this shows a fundamental misunderstanding of the issues in the article.

Using your analogy, the article is asking the question: "If we have to continually enslave entire nations to build an AI that a single private corporation benefits from directly, and individuals in first-world countries benefit from indirectly, would you do it?"

The reason it's unethical is the system of exploitation we live in. So to answer your question: yes, if we removed that system of exploitation and everyone benefited equally after suffering equally for a year, I would "press that button", so to speak. The problem is that's not the reality for AI and those who work to support it.


https://en.wikipedia.org/wiki/Invention_Secrecy_Act

You don't know what the hell you're talking about.


The same "transnational corporations from Silicon Valley" paying their outsourced third world workforce 3-5 times less than US residents for the same engineering tasks. But nobody calling it "exploitation", because this is how markets are working.


> “So-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions.”

You can say this exact thing about capitalism as a whole. Children working in coltan mines and on cocoa plantations and such.

It feels a bit superficial to write articles that jump on some cool tech bandwagon just to point out that this tech is also exploitative. Because nearly everything is. And in the case of this specific tech, it's probably not that bad compared to children in coltan mines.

It's a feature of our economy that we, well-off people, all benefit from our entire lives.

We'd have to change the entire rules of engagement, and who's gonna pay to make that happen? We are not going to change it by playing whack-a-mole with global behemoths.


Children working in coltan mines and on cocoa plantations are part of the same issue, so I don't understand why you take issue with that issue being brought to the forefront using 'some cool tech bandwagon'. It's a cool tech bandwagon because people's eyes are on it; if you agree these issues are real, wouldn't you want them seen by those eyes?

I don't quite understand your final point. Yes, changing the labour relations globally would require global changes.

> Who's gonna pay to make that happen?

It's a misunderstanding to frame it like this, because the answer is that the exploited people are already paying for the current reality with their labour. Why is it assumed that they can continue to shoulder the majority of the work for the minority of the benefits, but the idea that the work and benefits should be spread more evenly is abhorrent?


> I don't understand why you take issue with that issue being brought to the forefront using 'some cool tech bandwagon'.

It just feels cheap and even less effective than other approaches. It promotes a whack-a-mole mindset.

> It's a misunderstanding to frame it like this, because the answer is that the exploited people are already paying for the current reality with their labour.

Ah, I see your misunderstanding. I didn't mean that somebody will bear the cost of a change. I meant that for any change to happen somebody must pay out of their ears for political influence to force that change.

The only meaningful laws that come into force are laws that somebody bought. Who's gonna buy a 'non-exploitation' law if everybody with any money (billionaires, but also you and me) benefits from this exploitation?

Want to solve the problem? Figure out a way for someone to earn big money by preventing the exploitation of people.


Let's say this job was not available for $1.46 an hour after tax. Are those potential employees better off or worse off?


Let's say profits were making their way down the chain - would the economy be better off or worse off?


There's a provider in Kenya who offers these services at a certain rate. OpenAI isn't going to pay them more and instruct them to pass the money on to the workers. They're businesses, not charities.

How are so many comments in this thread so far detached from the real world?


Probably better. See? I answered your question. Not sure why it is so hard to answer mine.


Because yours is a manipulative false dichotomy, as demonstrated by the second question.


Stated more realistically, what will those people do in the mean time while we all wait for the economic structure of the world to reorient itself towards profit sharing? How long are they expected to wait?


The economic structure doesn't need to change for these companies to act as if it had, nor for these people to take these jobs if neither thing happens. All three things can be true at the same time. As I said, it was a false dichotomy. That's key to understanding the situation.


And how do you expect to effect such a change? Is it just a matter of generating sufficient outrage in your mind? Why should these people be forced to wait and suffer while you and your ilk write think pieces about silicon valley?

This is the problem with ideas like yours: there is no practical plan on offer to create the reality you are advocating for. Yet you expect these people who are suffering right now to just sit tight. It's narcissism.


> there is no practical plan on offer to create the reality you are advocating for... It's narcissism.

Let's try this with another Idea:

Person A - I am protesting to end slavery / Fix climate change / stop cops killing people for being black or poor

Person B - You have no detailed plan how to achieve this, you are a narcissist!


The obvious difference is that (A)'s implementation doesn't also require stopping doing some further X, thereby causing suffering among vulnerable people right now. I'm not sure how you missed that.


How much do you think ML trainers previously getting paid $1.46/hr would get paid if the company distributed 100% of their profits to their employees?


Let's say we restructure global labour relations such that people aren't getting paid a fraction of their worth because of where they were born or choose to live.

You know, since we're dealing in hypotheticals.


They would likely be better off in the long term as time spent on this kind of menial labor represents a massive opportunity cost.


> They would likely be better off in the long term

It's quite a dangerous mindset to think that taking freedom away from other people is a good thing because you know better.

In some cases you will be correct. In others, millions will die.


> In some cases you will be correct. In others, millions will die.

And in the average case nothing happens. There's no reason to think it's a barbell shaped distribution with outcomes weighted towards the extremes.

Further, it's not a trap-door decision, and it's not a decision that leads to instant death and despair. If it becomes problematic, it's quite straightforward to reverse course.


> And in the average case nothing happens

On average, casino players lose a little bit of money. But with a gambler mindset, you lose everything.

> If it becomes problematic, it's quite straightforward to reverse course.

And yet two of the worst famines in the last century were not reversed.


> On average, casino players lose a little bit of money. But with a gambler mindset, you lose everything.

This isn't a casino.

> And yet two of the worst famines in the last century were not reversed.

That's actually not really the right way of looking at it. You're measuring the things that did happen without measuring the things that didn't happen. How many were stopped? How many were stopped by the Farm Bill alone?

I'm not actually sure which of the famines you're referring to but if you're referring to the Holodomor, it was intentional.


> How many were stopped by the Farm Bill alone?

I'm not that versed in US governance. Which freedoms is the Farm Bill taking away, and how does that prevent famines?

> if you're referring to the Holodomor, it was intentional

It's a controversial topic, but regardless - it still correlates with the mindset that some kind of central authority knows better what's good for people. Better than people themselves.


What do you think the employees would be doing if they didn't take this job?


It's a supply and demand problem. The supply of human AI gig workers staggeringly exceeds the demand for them. Trash-talking the entire industry just means the poor will get poorer and be treated worse when the demand dries up even more. So I'm not really clear on the social justice angle of the article.

This does bring up an important long-term market point. If there is an essentially infinite number of people willing to drive a car for practically nothing, tell me again why we need self-driving cars as an example of AI? It's not like we're running out of poor people anytime soon, they don't cost much of anything, so why waste all that money on kicking the poor people out of the economy? Eventually, once everyone's kicked out of the economy, there won't be anyone left to buy the very expensive self-driving cars anyway, and no one will need self-driving cars because there'll be nowhere economically viable left to drive to. Certainly we can "self-reproduce" car drivers much more cheaply than we can produce self-driving cars full of unobtainable microprocessors, and producing human drivers is much more "black swan"-proof than producing globalist microprocessors.

The other problem is that an economy, in the long run, has to serve its participants. Let's say, as a thought experiment, it's inevitable that all capital and all economic activity will be concentrated solely in those with IQs over 150, or sociopathic tendencies, or outright criminality, and of course with everyone else kicked out of the economy they will be the sole beneficiaries of AI, much like European royalty was, in some sense, the only beneficiary of the gold mining industry. Well... won't 99% of the population go all French Revolution on them and fire up the guillotines? As soon as those people at the top of the pyramid are removed, and there won't be too many of them, the 99% of the population can go back to happy productivity. Any individual who rebels during the run-up will get steamrolled by the rest of their competitors who don't rebel from the plan, but the endpoint of the plan is that anyone cooperating with it to the inevitable end will get the guillotine, so what's the maximally efficient game-theory move if you're trapped in a self-destructive game where either total participation or obvious non-participation results in your demise? Well, everyone's better off if everyone "phones it in" and fails. So I expect a lot of "totally unexpected" AI failures.

Another philosophical problem is that we've run a repeated, massively parallel experiment on producing smarter, better-educated "non-artificial" intelligence entities, known as the university system, etc. It really hasn't worked out, other than for the usual primate dominance-ritual purposes and making certain the people at the top stay at the top. But people seem to think that if we emulate that nonsense that never worked in a computer, it'll work next time. Sort of like how perpetual motion doesn't work IRL, but if you abuse a CAD/CAM program hard enough a simulation of perpetual motion will work in a computer... but even if we fake it in a computer simulation, that STILL doesn't mean it'll work IRL. It's almost like a "Heisenberg uncertainty principle" of AI, where if we had AI it seems the universe would route around it such that it can't make a net profit. Kind of like burning two barrels of crude oil to grow one barrel of ethanol to replace a barrel of crude oil: graph that out and see where it leads.

I'm just saying, if "low IQ" biologicals are already economically excluded, why would "low IQ" AIs not also be economically excluded? And we've proven we can't educate "better" biologicals, so trying to produce a master race of AIs is likely to fail just as badly as those past experiments.


> But around 15 years ago, before the proliferation of gig work, deep learning systems were considered merely an academic curiosity, confined to a few interested researchers.

Um, no. Google was using machine learning more than 15 years ago, notably in the SmartASS system that predicts whether someone will click on an ad [1]. I got my cousin Missy a gig (at $15 an hour) rating ads and search results.

This story is hardly a claim of humane treatment of gig workers, I should note. I intervened and got her treated fairly.

> Companies make sure to hire people from poor and underserved communities, such as refugees, incarcerated people and others with few job options, often hiring them through third party firms as contractors rather than as full time employees.

Citations, please. A rater who is many standard deviations away from the mean is extremely undesirable. You want their judgments to be representative of the population.

If you want to design something that appeals to Americans, you want American raters. Since the US is by far the largest consumer of Internet services, it stands to reason most raters of "quality" will need to be from the US.

If you just want to know if an image contains objectionable content, then yes, you could use Third World employees.

[1] https://albertcory50.substack.com/p/working-at-google-ads


These “journalists” see someone training image classifiers using Mechanical Turk, which awards a wage in excess of the local economy in the third world, and imagine to themselves this is how all models get trained because they don’t understand ML, they don’t understand market segmentation, and generally don’t understand technology or business.


I don't see what there is to understand.

Any 'ai' model right now requires classified data to train. It takes human work to classify data. Human work is cheaper in some places than others, so companies in expensive places exploit the situation of poorer places to lower their operating costs.

The wage size compared to the local economy doesn't really matter, since the exploitation comes from the difference in costs between the local economy and the company's home economy.

In your reality, how is the data getting classified if not by people whose situation is being exploited?


> It takes human work to classify data. Human work is cheaper in some places than others, so companies in expensive places exploit the situation of poorer places to lower their operating costs.

1. Humans are not fungible in this way, as much as some people wish that they were. The ML models we built in my workplace were fed classified data that was built using some of the most expensive labor in the company, because of the highly specialized nature of the model. It would have been impossible to train using Mechanical Turk.

2. Taking money from wealthy countries and putting it into the economy of poor countries by paying an outsized local wage in the poor country that is cheap in the wealthy country, thus lifting up the poor country, is a very interesting way to classify "exploitation" (see: Asia and Eastern Europe).

> In your reality, how is the data getting classified if not by people whose situation is being exploited?

In your reality, apparently it's only acceptable for a company that would pay a First World worker $20/hr to do classification to also pay a Third World worker $20/hr to do classification, rather than $2/hr, which is an outsized compensation in a local market where people generally get $2/DAY, rather than per hour.

In my reality, BILLIONS of people have been lifted out of abject poverty by foreign investment dollars that arbitraged labor costs, to massively improve the economic situation and quality of life of those foreign workers.

Weirdly, my reality aligns with actual facts.


You seem to be arguing from abstract principles instead of considering the actual facts. What tristor has understood and you have not is:

"People" are not all the same. I just explained why US workers are preferred or even required for a lot of ML tasks. Did you read that?

As for exploitation, I'll limit myself to noting that certain people are always finding it whenever a rich person hires a poor person, despite the fact that nearly all jobs arise this way.


> Any 'ai' model right now requires classified data to train. It takes human work to classify data

Interestingly, that isn't actually the case. I would guess most data used in ML is not human-labeled, specifically because labeling is so expensive and error-prone. For example, most work done in the natural sciences simply utilizes sensor data. Want to forecast some aspect of the weather? Grab a couple decades of historical data collected by weather sensors. None of that needs to be human-labeled, only sensor-collected. Some applications certainly do rely on manually labeled data, but not all, and I would personally guess only a minority of applications do, because of the difficulty of collection.


Quite true. Although there is unsupervised learning, much ML is supervised, meaning there is some ground truth. In those cases, the ground truth comes from sensors, as you said.

Many ML models dealing with human interaction do need a human to provide the ground truth. More and more these days you can probably find corpuses of data already labelled, so you don't have to collect it afresh.
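
A minimal sketch of that point, assuming NumPy and scikit-learn, with synthetic readings standing in for a real weather-sensor archive; the supervised label here is just the next reading, so no human annotator appears anywhere in the loop:

    # Hypothetical sketch: supervised learning where the "ground truth"
    # is sensor-collected, not human-labeled. Synthetic daily temperature
    # readings stand in for a real weather-station archive; the label for
    # each example is simply the next day's reading.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    days = np.arange(3650)
    temps = 15 + 10 * np.sin(days * 2 * np.pi / 365) + rng.normal(0, 2, days.size)

    window = 7  # features: the past week of readings
    X = np.array([temps[i:i + window] for i in range(temps.size - window)])
    y = temps[window:]  # label: the next sensor reading

    model = LinearRegression().fit(X[:-365], y[:-365])  # hold out the last year
    print("R^2 on held-out year:", model.score(X[-365:], y[-365:]))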


How is that exploitation?


Why don't US companies pay foreign workers US wages? The answer I come up with is that they choose to hire workers in disadvantaged situations (relative to opportunity and local labour laws) to reduce operating costs.

To me, using people in disadvantaged situations to reduce your operating costs is inherently exploitative.


Nobody is being forced to work for those companies. Is there any actual evidence of exploitation? If so, are the governments complicit in not acting against them?

This is silly. Rents and product costs are low too in those countries.

Are consumers exploiting business owners there by this logic?


> Why don't US companies pay foreign workers US wages?

Why doesn't money grow on trees? Why does inflation exist? Why are some nations wealthy and other nations poor? Why are some nations at different points in their economic, social, civil, and technological development? Why is the sky blue?

Are you trolling or do you seriously not understand how fancifully inane this question is?


>> they don’t understand ML

Please don't say that; it is upsetting that this false assumption is thrown around so carelessly. At least check the authors' profiles, say on Wikipedia, if you want to know how much they understand.


If they have credentials but still say nonsensical things, then what is one to think? That they "understand" it but still somehow draw the wrong conclusions?


For example, I'd start by asking who says they draw the wrong conclusions. Are they "the wrong conclusions" or conclusions somebody disagrees with? Who is the person who disagrees with those conclusions and what is the expertise of that person? And so on.


Not everything is a 20-page dissertation replete with footnotes.


> A rater who is many standard deviations away from the mean is extremely undesirable. You want their judgments to be representative of the population.

An excellent point.


the "guilt-industrial complex"


They'd rather see poor people earn no money than see them earn anything below what their own preconceptions have labelled a fair wage. It seems that for a lot of people, it's not about building things, it's about tearing things down. The psychology appears akin to the childhood bully who, unable to build their own, goes around kicking over other children's sandcastles, then feels smugly content with their "accomplishment"... not recognising that all they've done is ruin others' work without contributing anything of value to the world.


OK, so then we should repeal labor laws and allow employers to abuse their employees? The problem is that your tired analogy has no nuance: all criticism is considered "bullying". We should simply allow everyone to do anything they want? Clearly that's absurd, so the sandcastle analogy doesn't work.

Why not just argue the criticism on its merits, rather than complaining that criticism shouldn't be allowed at all?


I don't know why you're being downvoted. Your point is valid. Many arguments can be made against the article's critique, but the analogy used by the comment above yours is rejecting the very idea that these companies can be criticized at all.


So all offshoring is labor abuse? TIL


I'm saying there's a conversation to be had about off-shoring labor, but the parent comment basically said "no, all criticism is bullying, and bullying is wrong."


They call it "critical theory" instead of "constructive theory" for a reason.

It takes no skill to critique.


I think your "building" and "tearing down" analogy is actually spot on.

We have a whole group of people who are so concerned with what's already built and building more and more onto it, that any attempt to point out where they went wrong in the foundation is met with criticisms like "all you people want to do is tear things down" and "well if your foundation is so good then why haven't you built as much as us".

Look at the whole phenomenon of critical theories, especially critical race theory. People were so enraged by the prospect of having the foundations questioned that the actual criticisms were less important.



