The Brussels-led AI regulatory project seems unequal to the risks posed by the breakthrough of tools like ChatGPT


For some, ChatGPT is the first form of universal artificial intelligence (AI). You can ask it anything, phrased just as you would put a question to a (human) expert standing in front of you, and it will answer you the way that expert would. For others, it is simply a happy combination of two proven AI techniques: a conversational agent (chatbot) coupled with an aggregation of the web's content up to 2021. So, inevitably, for almost any question asked, there is content somewhere on the web that answers it, and ChatGPT turns that content into a conversation.


Unlike the expert, ChatGPT does not grasp the significance of what it is telling you: ask it a controversial question, and if its designers paid too little attention to teaching it to reject that particular falsehood, you will find yourself facing a liar who answers with the confidence of a psychopathic con artist. From one moment to the next it can assert one thing and then its opposite.

Because it does not understand what it is saying, ChatGPT is dangerous. It could amplify the bogus content it ingests from the web if its answers are reinjected onto the web by its users as legitimate content. What a magnificent way to poison the internet, within easy reach of conspiracy theorists of all stripes!

The Abandoned Web?

But it could also destroy Google's model: why waste time searching for an answer when ChatGPT already has it? It is also rumored that Bing, the competitor to Google's search engine, will very soon combine ChatGPT with its search tool. It is the very idea of the web that takes a hit once ChatGPT becomes widespread: why still visit the web when an intermediary has already visited everything before you to answer all your questions? Is the web doomed to become a desert?


Yet the draft AI regulation (the Artificial Intelligence Act) being carried forward by the European Commission does not seem up to the risks posed by the breakthrough of tools like ChatGPT. The regulation classifies AI applications into several categories according to their degree of danger.

First come the "prohibited" AI practices: systems that deploy subliminal techniques beyond a person's awareness and can cause them psychological or physical harm; systems that exploit the vulnerabilities of a group of people linked to their age or their physical or mental condition in order to distort their behavior; and systems that assign a social score leading to differentiated treatment. Biometric identification in public spaces would also be banned under the regulation, except to locate victims of crime or missing children, to avert an imminent and substantial threat to people's life or physical safety or a terrorist attack, or to detect, locate, identify or prosecute the perpetrator of a criminal offence.
