Artificial intelligence, real exploitation. Africans train ChatGPT, which admits: "Low…"

"Training AI models requires a large amount of data. This data is often collected by workers who manually tag content such as images or text. These workers may be paid low wages and may not always enjoy decent working conditions. Additionally, the use of such data may also raise questions about privacy and the ethical use of the information collected." So says ChatGPT, the chatbot of the US company OpenAI, an artificial intelligence giant known precisely for the text generator launched in 2022, thanks to which it is raising tens of billions of dollars in investments. We asked the chatbot whether "worker exploitation is behind artificial intelligence". Already "interviewed" by ilfattoquotidiano.it, in this case the AI offers an answer that could be that of a reader of Time, the American magazine that has just published an article showing how little the human labor needed to train ChatGPT costs, and how little it is protected.

Because if what amazes us above all is what the chatbot can do, what it is not allowed to do is no less important. For it to understand that, as the "interested party" itself says, someone has to teach it what is and is not allowed. Despite the impressive capabilities of its predecessor GPT-3, "it was a tough sell because the app was prone to violent, sexist and racist remarks," writes reporter Billy Perrigo. And since OpenAI's stated goal is to offer a safe, everyday product, someone has to classify content to give the AI examples of "toxic" material to train it on. That someone, Time reveals, are Kenyan workers, paid through an outsourcing firm $1.32 an hour to read and label text, including graphic descriptions of child sexual abuse, bestiality, murder, suicide, incest, torture and self-harm.
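To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python with scikit-learn (our own assumption, not OpenAI's actual pipeline) of how human-labeled examples of toxic and safe text can be used to train a simple content filter:

```python
# Illustrative sketch only: human annotators supply the labels,
# a classifier learns from them. The tiny dataset and model choice
# are assumptions for demonstration, not OpenAI's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each item is (text, label); the label 0 = safe, 1 = toxic,
# and in practice it comes from a human annotator reading the text.
labeled_examples = [
    ("have a nice day", 0),
    ("thanks for your help", 0),
    ("I will hurt you", 1),
    ("you are worthless and stupid", 1),
]
texts, labels = zip(*labeled_examples)

# Turn the labeled text into features and fit a binary classifier.
filter_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
filter_model.fit(texts, labels)

# Score new text; with real-scale labeled data this is how a filter
# could flag toxic content before it reaches users.
print(filter_model.predict_proba(["hope you have a great week"]))
print(filter_model.predict_proba(["I will hurt you badly"]))
```

The point of the sketch is simply that the quality of the filter depends entirely on the volume of human-labeled examples, which is the work Time describes.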

AI means high computing speed and huge amounts of data. That is why, among Google's top executives, there were those who, back when search engines were astonishing us, admitted: "We're actually building an artificial intelligence." But we have also poured the worst of ourselves onto the web, and AI, like any other product meant to be sold to the masses, needs to reassure users, not frighten them with inappropriate outbursts. To avoid that, OpenAI too, like other Silicon Valley giants, turned to Sama, a San Francisco company that employs workers in Kenya, Uganda and India to label data, explains Time, which examined hundreds of pages of internal documents from Sama and OpenAI, including workers' payslips, which provide for pay based on seniority and performance of between $1.32 and $2 net per hour. Billions in the West, pennies in Africa. "For all its appeal, AI often relies on hidden human labor in the Global South, which can often be harmful and exploitative," the article's author writes. OpenAI acknowledged the outsourcing to Kenya and the importance of that work in removing toxic data from the training datasets. Time notes that OpenAI does not disclose the names of the outsourcing partners it works with, and that it is unclear whether it used other data-labeling companies besides Sama for this project.

The contracts signed between OpenAI and Sama in late 2021 covered labeling textual descriptions of sexual abuse, hate speech and violence. Time collected the testimony of four workers who did nine-hour reading shifts of 150 or more texts of up to a thousand words each. All said the work left them mentally scarred. On paper, individual sessions with "professionally trained and authorized" therapists were also provided for. Employees dispute this, describing at best group sessions of little use, and not always granted, in the name of productivity. OpenAI, which reportedly paid Sama $12.50 per hour per worker, explained that the agreement included a limit on exposure to explicit content, that such content was to be handled by specially trained workers, and that in any case "Sama is responsible for managing payments and the mental health of employees". As the Time investigation recounts, the relationship between the two companies broke off a few months before the contracts' natural expiry. Besides texts, images also had to be labeled, again for OpenAI, Perrigo writes: Sama began cataloguing sexual and violent images, including some illegal in the US, and delivering them to the client. The investigation reconstructs the dispute between the two companies over that illegal content. OpenAI spoke of a "miscommunication"; Sama terminated the contracts, ending the psychological torment inflicted on the workers, who were then transferred or laid off.

A crisis of conscience at a company that presents itself as ethical and claims to have lifted more than 50,000 people out of poverty? Rather, according to Time, it was the consequence of another investigation, published in February 2022, detailing how Sama hired content moderators for Facebook whose work consisted of viewing images and videos of executions, rapes and child abuse for as little as $1.50 an hour. Analyzing internal communications, Time's journalists showed that Sama executives scrambled to contain the fallout after that investigation was published, and that three days later CEO Wendy Gonzalez wrote to several senior executives announcing that the work for OpenAI would soon be wound up. In 2023 the company announced it would shut down all work involving sensitive data. But the need to train artificial intelligence remains, to free it from the part of the internet that is not wanted, and therefore from a part of ourselves. Not a question of conscience, but of marketing. As Time's work shows, we have exploited other human beings to hide the worst of ourselves from the AI. Nothing intelligent about that.