The algorithms that will elect the next district president

In 1955, Isaac Asimov published his short story Franchise. In it, he describes how the first electronic democracy uses the world’s most advanced computer, Multivac, to decide an entire nation’s vote through the participation of a single human voter.

While we have yet to reach that ominous future, artificial intelligence and data science are playing an increasingly important role in democratic elections. The election campaigns of Barack Obama and Donald Trump, Denmark’s Synthetic Party, and the massive theft of information from Emmanuel Macron’s campaign are good examples.

Analyzing the public “mood”

One of the first success stories of using big data techniques and social network analysis to tailor an election campaign was Barack Obama’s in the 2012 United States presidential election. Estimates of voting intention based on phone calls or face-to-face interviews were supplemented by social network analysis.

These analytics provide a low-cost, near real-time way of estimating voter sentiment. For this purpose, natural language processing (NLP) techniques are applied, especially those dedicated to sentiment analysis. These techniques analyze the messages contained in tweets, blog posts, etc., and attempt to measure whether the opinions expressed in them are positive or negative towards a particular politician or campaign message.
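The simplest form of this idea can be sketched in a few lines: a lexicon-based scorer that counts positive and negative words per message and averages the result over a stream of posts. The word lists below are purely illustrative (real systems use trained models or resources such as VADER, not hand-picked vocabularies):

```python
# Minimal lexicon-based sentiment sketch. The POSITIVE/NEGATIVE word
# lists are illustrative assumptions, not from any real campaign tool.
POSITIVE = {"great", "hope", "support", "win", "strong", "love"}
NEGATIVE = {"corrupt", "lie", "fail", "weak", "crisis", "hate"}

def sentiment_score(message: str) -> int:
    """Net polarity: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def candidate_sentiment(messages) -> float:
    """Average polarity over a stream of messages (tweets, posts, ...)."""
    scores = [sentiment_score(m) for m in messages]
    return sum(scores) / len(scores)

tweets = [
    "great rally tonight, so much hope",
    "another lie from a corrupt campaign",
    "strong debate performance, a clear win",
]
print(candidate_sentiment(tweets))  # average polarity across the sample
```

Even this toy version makes the sample-bias problem below concrete: the average only reflects whoever happens to be posting, not the electorate as a whole.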

The main problem they face is sample bias, as the most active users on social networks tend to be young and tech-savvy, and do not represent the entire population. Because of this, these techniques have limitations in predicting election outcomes, although they are very useful for studying electoral trends and the state of public opinion.

The case of Donald Trump

More troubling than studying emotions on social networks is using them to influence opinion and sway votes. A well-known case is Donald Trump’s campaign for the 2016 US presidential election. Big data and psychographic profiles had a lot to do with a victory that the polls failed to predict.

It wasn’t mass manipulation: different voters received different messages based on predictions about their susceptibility to different arguments, receiving biased, fragmented information that sometimes conflicted with the candidate’s other messages. The task was entrusted to the company Cambridge Analytica, which was later embroiled in a dispute over the unauthorized collection of data on millions of Facebook users.

Cambridge Analytica’s method was based on Kosinski’s psychometric studies, which showed that, from a limited number of likes, one can build a profile of a user as accurate as one drawn up by their family or friends.

The problem with this approach is not the use of technology, but the “covert” nature of the campaign, the psychological manipulation of vulnerable voters through direct appeals to their emotions, and the targeted dissemination of fake news via bots. Such was the case in Emmanuel Macron’s campaign for the 2017 French presidential election, which suffered a massive email theft just two days before the vote. A network of bots was tasked with spreading supposed evidence of crimes contained in the stolen information, claims that later turned out to be false.

Political action and government

No less disturbing than the previous point is the possibility that an artificial intelligence (AI) will rule us.

Denmark opened the debate in its last general election, in which the Synthetic Party, led by an AI chatbot named Leader Lars, made a bid to enter parliament. Behind the chatbot are, of course, people, in particular the MindFuture Foundation for Art and Technology.

Leader Lars was trained on the electoral programs of Danish fringe parties dating back to 1970, in order to formulate a proposal that represents the roughly 20% of Danes who do not vote.

Although the Synthetic Party may seem like an extravagance (with such bold proposals as a universal basic income of more than €13,400 a month), it has helped spark debate about an AI’s ability to govern us. Can a contemporary, well-trained and well-equipped AI really rule us?

If we look at the recent past of artificial intelligence, we see advances following one another at breakneck speed, especially in natural language processing since the appearance of architectures based on transformers. Transformers are giant artificial neural networks trained to generate text, but easily adaptable to many other tasks. Somehow these networks learn the general structure of human language and end up acquiring knowledge of the world through what they “read”.
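The computational core of a transformer is scaled dot-product self-attention: each token’s representation is updated as a weighted mix of every token’s, with the weights learned from the data. A minimal NumPy sketch (toy dimensions and random weights; real models stack many such layers and train the weight matrices):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (tokens x dim)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # affinity of each token pair
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # each output: weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated vector per token
```

This mixing step, repeated across dozens of layers and billions of parameters, is what lets these networks pick up the long-range structure of language.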

One of the most advanced and spectacular examples was developed by OpenAI and is called ChatGPT. It is a chatbot capable of coherently answering almost any natural language question, generating text, or performing tasks as complicated as writing computer programs from a few prompts.

Free from corruption but without transparency

The benefits of using an AI in government would be manifold. On the one hand, its capacity to process data and knowledge for decision-making far exceeds that of a human. It would also be (in principle) free from corruption and uninfluenced by personal interests.

But today’s chatbots are merely reactive: they feed on the information someone provides them and return answers. They are not really free to think “on the fly” or to take the initiative. It is more appropriate to view these systems as oracles, capable of answering questions such as “What do you think would happen if…?” or “What would you suggest if…?”, rather than as active or controlling agents.

The potential problems and dangers of this type of intelligence based on large neural networks have been analyzed in the scientific literature. A fundamental problem is the lack of transparency (“explainability”) of their decisions. In general, they act as “black boxes” without us being able to know what reasoning they used to reach a conclusion.

And let’s not forget that behind the machine are people who, through the texts used to train it, may have injected certain biases (consciously or unconsciously) into the AI. Nor is the AI free from giving false data or advice, as many ChatGPT users have been able to see for themselves.

Technological advances allow us to glimpse a future AI capable of “governing” us, though for the moment not without essential human control. The debate should soon move from the technical level to the ethical and social one.

Jorge Gracia del Río is a Ramón y Cajal researcher in the area of Computer Languages and Systems at the University of Zaragoza.


The Conversation