Social media giants will be forced to help users block trolls from sending them poisonous posts
- Users will be able to block anonymous unverified accounts under a new crackdown
- The new measures have been added to the forthcoming Online Safety Bill
- Individuals will be allowed to choose whether to opt into the verification process
Social media users will be given new powers to control who can interact with them as part of a crackdown on hateful internet trolls.
Platforms such as Facebook and Twitter will be required by law to give users the tools to block anonymous unverified accounts.
Users will also be given the option to verify their own identity. It will be up to the companies to find a suitable method of verification, but options could range from uploading selfies to providing a passport or driving licence.
Platforms such as Facebook and Twitter will be required by law to give users tools to block anonymous unverified accounts (stock image)
The new measures have been added to the forthcoming Online Safety Bill, which will impose a duty of care on technology companies to protect users.
However, people will be allowed to choose whether to opt into the verification process – despite calls from some campaigners for it to be made compulsory.
Ministers were concerned that making verification compulsory could jeopardise the safety of vulnerable users. Online anonymity can be crucial for victims of domestic abuse, activists living in authoritarian countries and young people exploring their sexuality.
The government has also announced a measure that will force platforms to provide adult users with tools to block “legal but harmful content” such as racist abuse and misinformation about Covid.
This could include allowing users to activate settings that prevent them from receiving recommendations on certain topics, or placing sensitivity screens over that content.
The Department for Digital, Culture, Media and Sport (DCMS) said the new measures would “give more power to social media users” by giving them greater choice over who can interact with them.
Online anonymity can be crucial for victims of domestic abuse, activists living in authoritarian countries and young people exploring their sexuality (stock image)
Digital Secretary Nadine Dorries said: “Tech firms have a responsibility to stop anonymous trolls polluting their platforms.
“We have listened to calls to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.
“People will now have more control over who can contact them and will be able to stop the tidal wave of hate served up to them by rogue algorithms.”
It comes after calls from MPs, footballers and celebrities for action against internet trolls, after they highlighted the horrific abuse they have suffered.
The government has already announced tougher penalties for trolls, with those found guilty of the most serious abuse facing five years in prison under the new bill.
The latest measures will apply only to the biggest social media platforms designated “category one”, such as Facebook, Twitter and Instagram, as they pose the greatest risk.
Watchdog Ofcom will be empowered to fine them up to 10 percent of annual global turnover for each violation or even block the use of sites in the UK.
A DCMS spokesman said of the new measures: “While this will not prevent anonymous trolls from posting abusive content in the first place – provided it is legal and does not violate the platform’s terms and conditions – it will stop victims being exposed to it and give them more control over their online experience.”
However, people will be allowed to choose whether to opt into the verification process – despite calls from some campaigners for it to be made compulsory (stock image)
The bill will also force social media giants to remove illegal content such as child sexual abuse images, incitement to suicide, hate crimes and incitement to terrorism.
But there is a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence but still causes significant harm.
The spokesman added: “Much of this is already expressly forbidden in social networks’ terms and conditions, but too often it is allowed to remain.” Companies will have to provide tools that allow users to block such content in their news feeds.
THREE WAYS TO FIGHT HATRED
New measures added to the Online Safety Bill today will force social media giants such as Facebook, Twitter and Instagram to ensure that users can:
VERIFY THEIR IDENTITY
Users should be given the option to verify their identity. The methods used will be up to the platforms, but could range from uploading a selfie to match their profile picture to providing proof of a government-issued ID such as a passport.
BAR ANONYMOUS TROLLS
Tools must be provided that allow people to block other users who choose to remain anonymous. These could include a tick-box in settings that allows direct messages or replies to posts only from verified accounts.
HARMFUL CONTENT FILTER
Users must also be given a way to block content that falls below the threshold of a criminal offence but still causes significant harm, such as racist abuse. Tools could include settings that stop the site recommending certain topics, or sensitivity screens placed over such content.