What is ChatGPT? The stories of African workers who watch evil on the Internet for two dollars an hour

Artificial Intelligence (AI)

To clean up the chatbot, OpenAI turned to Sama, a company that employs African workers to flag violent content online. According to TIME magazine, Sama allegedly exploited more than 50,000 people to fulfill all its contracts.


The success story is not just about Silicon Valley geniuses with ponytails and hipster sandals. There is something else behind the enthusiasm for new technologies. TIME magazine has revealed that behind ChatGPT’s artificial intelligence are outsourced Kenyan workers who earn less than two dollars an hour. It is Slavery 3.0, and it allows the tech industry to churn out billions of dollars.

OpenAI’s ChatGPT is no exception. The ingenious machine that writes like a human works thanks to labelers: they are the new face of the dirty work, invisible employees who catalog rape, pedophilia, suicide, violence, incest and hatred nine hours a day in order to cleanse the artificial intelligence of all the evil in the world.

How ChatGPT was built

ChatGPT was impressive from the start, but it had one problem: violent, sexist and racist comments crept in among its feats. After all, it was trained on hundreds of billions of words scraped from the internet. That is what made it so good, and for the same reason it wrote words like “vaff***ulo,” “neg*o” and so on. Before it could be shown to the world, therefore, the AI had to be “filtered,” and for this an additional safety mechanism based on artificial intelligence was required.

So OpenAI took a page from the playbook of Facebook, which had already faced the same issue. The solution is simple enough: to teach the AI what to censor, just feed it labeled examples of violence, hate speech and sexual abuse. So in November 2021, OpenAI sent thousands of text fragments to an outsourcing company in Kenya, Sama, which was suddenly confronted with all the evils of the web: child abuse, murder, suicide, torture, self-harm and incest.
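The labeled-example approach described above can be sketched in miniature. The following is a toy illustration only, assuming a simple naive Bayes text classifier trained on hand-labeled snippets; the example texts, the “safe”/“harmful” labels and the function names are all invented for illustration and have nothing to do with OpenAI’s actual moderation models or taxonomy:

```python
# Toy sketch of "feed the AI labeled examples" content filtering:
# a tiny naive Bayes text classifier. All data below is invented.
from collections import Counter, defaultdict
import math

# Hypothetical hand-labeled examples: (text, label)
TRAIN = [
    ("have a nice day friend", "safe"),
    ("thanks for the helpful answer", "safe"),
    ("you are a wonderful person", "safe"),
    ("i will hurt you badly", "harmful"),
    ("violent threats against people", "harmful"),
    ("i hate you and want to hurt people", "harmful"),
]

def train(examples):
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of examples
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (n_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAIN)
print(classify("i want to hurt you", *model))    # harmful
print(classify("have a wonderful day", *model))  # safe
```

The point of the sketch is the workflow, not the model: the classifier is only as good as the human-labeled examples it is fed, which is exactly why the labeling work described in this article exists.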

Why it is difficult to moderate AI

It is not the first time for Sama: it had already worked with Google, Meta and Microsoft. Officially it is an “ethical artificial intelligence” company that says it has helped lift more than 50,000 people out of poverty in Kenya, Uganda and India. In fact, Sama’s workers are paid between $1.32 and $2 an hour to turn their hands and eyes to horrific content.

TIME magazine spoke to Sama employees involved in the project. One employee, tasked with reading and annotating text for OpenAI, explained that he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a child. “It’s torture,” he said. “You read statements like that all week.”

“Our mission is to ensure that general AI benefits all of humanity, and we work hard to build safe and useful AI systems that curb bias and harmful content,” said an OpenAI spokesperson, confirming the partnership with Sama. “Classifying and filtering harmful content is a necessary step to minimize the amount of violent content included in training data and to develop tools that can detect malicious content.”

It is work, then, that is as necessary as it is cruel, and made worse by the exploitation of labor. “Despite the crucial role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations that includes OpenAI. “This may be the result of efforts to hide AI’s reliance on this large workforce while celebrating the efficiency of the technology.”

How Sama works

In late 2021, OpenAI signed three contracts with Sama worth a total of about $200,000. To keep up, the workers were divided into three teams by topic. Three employees explained that they were expected to read and label between 150 and 250 passages of text per nine-hour shift. The contracts with OpenAI specified an hourly rate of $12.50, yet employees’ take-home pay was barely $170 at the end of the month.

An employee takes home $1.32 an hour, which rises to $1.44 if they hit all their targets. Labelers responsible for checking the workflow can reach $2 an hour. This is possible because there is no universal minimum wage in Kenya.

An OpenAI spokesperson put all the blame on Sama, explaining that OpenAI had imposed no productivity targets: “We take the mental health of our employees and that of our contractors very seriously. Workers could refuse any content without penalty, exposure to explicit material would be limited, and sensitive information would be handled by specially trained workers.”

In February 2022, Sama started a new project for OpenAI: collecting sexual and violent images, some of them illegal, to deliver to the company behind ChatGPT. According to an invoice, Sama delivered a sample batch of 1,400 images. Some were classified as “C4,” OpenAI’s internal label for child sexual abuse, according to the document; others as “C3,” i.e. bestiality, rape and sexual slavery; and finally as “V3,” images with graphic details of death, violence or serious bodily harm. OpenAI paid Sama $787.50 for the collection of images, the document shows.

The problem lies in the “C4” and “C3” images. Sama said in a statement that its agreement contained no reference to illegal content, and that OpenAI sent “further instructions” referring to “some illegal categories,” namely the C4 and C3 images related to child abuse and rape, only after the work had started. “Sama immediately ended the image classification pilot and gave notice that we would cancel all remaining projects with OpenAI.” And indeed, Sama delivered the last batch of labeled data in March, eight months before the contract’s deadline.

OpenAI’s answer

OpenAI confirmed that it had received the 1,400 images from Sama, which included, but were not limited to, “C4, C3, C2, V3, V2 and V1 images. We commissioned Sama as part of our ongoing work to make AI systems safer. We never asked it to collect illegal content, because this is not necessary for our filters. We ask our employees to actively avoid it. There was a communication problem; we did not open or view the content in question, so we cannot confirm whether it contained images in the ‘C4’ category.”

In any case, Sama decided to terminate every contract and called the company’s employees together in February 2022 to announce the end of the work with OpenAI. Most workers were moved to other workflows that pay less than cataloging explicit content; others lost their jobs.

An unsolved problem

On January 10, Sama went a step further and announced the cancellation of all the rest of its work involving sensitive content. The company said it would not renew its $3.9 million content moderation contract with Facebook; as a result, 200 people at the Nairobi office will lose their jobs. “After numerous discussions with our global team, Sama made the strategic decision to exit all natural language processing and content moderation work and to focus on computer vision data annotation solutions,” the company said in a statement.

The result of this operation is that thousands of people traumatized by violent content have been left without work, and to support their families, one employee explains, even spending underpaid hours letting all the evil of the internet flow past was better than nothing. Not only that: Sama is pulling out, but the need to label data for AI systems remains. “They’re impressive, but ChatGPT and other generative models are not magic. They rely on massive supply chains of human labor and data, much of which is unattributed and used without consent,” AI ethicist Andrew Strait recently wrote. If not these workers, it will be others next.