The EU wants to crack down on misinformation online — but here’s why it probably won’t work

The European Union wants social networks to deal with fake information or face consequences. But industry researchers and experts say it could be an impossible task — even as hundreds of accounts linked to Hamas have been removed by at least one platform.

Sheer size of social platforms makes it hard to catch all misleading, illegal content.

The European Union has publicly called on X, Facebook owner Meta and TikTok to deal with false information on their sites. But industry researchers and experts say it could be an impossible task — even as hundreds of accounts linked to Hamas have been removed by X, the social network previously known as Twitter.

On Thursday, Linda Yaccarino, the CEO of X, outlined efforts by her company to combat illegal content on the platform. She was responding to public demands from a top European Union official for information on how X is complying with the EU’s tough new digital rules as conflict escalates between Hamas and Israel.

“X is proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis,” Yaccarino said in a letter to Thierry Breton, an EU commissioner who often leads the 27-nation bloc’s actions on its Digital Services Act.

Yaccarino said her platform has acted to “remove or label tens of thousands of pieces of content.”

But one former employee who worked on the social network’s trust and safety team said she is not sure X can do much about the problem, after the company deliberately chose to roll back its moderation teams.

“I think Twitter has significantly reduced capacity, by the company’s own choosing, to address these issues,” Theodora Skeadas said in an interview with CBC News, adding that she does not believe it has the ability to be responsive to harmful content on its platform.

Skeadas said that almost the entire trust and safety team, including herself, was laid off in the months after Elon Musk closed the deal in October 2022 to purchase Twitter.

“There aren’t as many people involved in the ecosystem whose day-to-day job was connected to tackling disinformation,” she said.

Not just Twitter, but Facebook and TikTok, too

European authorities announced on Thursday that they are demanding more information from X on how it handles illegal content and complaints about that content. X also needs to provide information by Oct. 18 on how its crisis responses work.

The European Union has also posted letters on social media addressed to Meta, the owner of Facebook and Instagram, and to TikTok.

In a statement to CBC News, a Meta spokesperson said the company has staff who speak both Hebrew and Arabic and are monitoring the situation.

“Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and co-ordinate with third-party fact checkers in the region to limit the spread of misinformation,” the company said.

When asked repeatedly for comment by CBC News, X responded with an automatic email reply that said: “Busy now, please check back later.”

TikTok has indicated it will be responding to the EU. As well, a company spokesperson told CBC News it has added moderation resources in both Arabic and Hebrew.

Social network researcher Siva Vaidhyanathan said that, given their sheer size, expecting social platforms to eliminate all misleading or illegal content may simply be too big a task. As an example, he pointed out that Facebook alone has billions of user accounts.

“That means Facebook is constantly going to be facing many millions of uploads every second. A lot of them are puppy pictures … but a lot of them are going to be misleading videos that might have been taken somewhere else and some other time, but marked to make it seem as if it’s happening in Gaza or it’s happening in Israel right now,” said Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia.

From Vaidhyanathan’s perspective, that sheer volume of content means there is no way to hire enough people, or pay them enough, to weed through and properly assess every piece of content.

“We’ve seen time and time again, no matter how much Facebook says it’s committed to cleaning up its service, that it’s never enough and it’s probably never going to be enough,” he said.

What about the law?

As for the European Union’s legal warnings to social networks, lawyers have said that enforcement of its Digital Services Act remains untested.

Paul Bernal, who teaches information technology law in the United Kingdom, said while it’s clear that European authorities want to police online content deemed illegal, it’s not clear whether they can actually compel anything to happen.

“If it turns out they can’t [enforce these rules], then … it’s really removing their power. They’ll feel like kind of paper tigers who don’t actually have any power to do anything,” said Bernal, a professor at the University of East Anglia law school. Even though EU law does not directly apply in the U.K. at this time, he said, similar laws and regulations are brewing there.

In a statement, the European Commission said it could enforce fines on platforms or even ban them as a last resort, but it didn’t comment on where things stand when it comes to those steps with any of the social platforms it has sent letters to.

Fake information driven by anger: researcher

Dealing with fake information on social networks may not be possible through fact-checking and testing for legitimacy, said the University of Virginia’s Siva Vaidhyanathan, because readers are driven by emotion.

“The key to understanding any of those moments is not to pay attention to the truth or falsity of the claim, or even the truthfulness or falsity of the source…. Those help, but that’s a loser’s game,” he said, adding that when it comes to politics or crisis situations, social media users often seek out posts with amplified emotions.

“You want to get mad. You feel like you need to feel something. You go to Twitter and then you find something that will make you mad and then you will pile on. That’s the dynamic,” he told CBC News, adding that he theorizes that groups such as Hamas deliberately post violent and misleading videos to trigger these responses.

“They are not looking to be loved. They are looking to be hated. And that’s so easy to do. And I think that’s what we’re seeing going on here,” Vaidhyanathan said.

It’s a concern echoed — and extended outside of the online world — by industry players like Theodora Skeadas.

“As false information that is inspired by hate spreads, it can lead people to do [real world] offline harm, which is damaging to everyone,” she said.

Skeadas, who continues to work as an independent consultant in online trust and safety, said she remains concerned about what happens with online platforms even outside of the immediate conflicts taking place in Israel and Gaza.

“Disinformation affects elections just as much as it affects times of crisis. And I’m concerned about the capacity of platforms like Twitter to meaningfully address this issue as we move toward a year with major elections,” she said.

ABOUT THE AUTHOR

Anis Heydari

Senior Reporter

Anis Heydari is a senior business reporter at CBC News. Prior to that, he was on the founding team of CBC Radio’s “The Cost of Living” and has also reported for NPR’s “The Indicator from Planet Money.” He’s lived and worked in Edmonton, Edinburgh, southwestern Ontario and Toronto, and is currently based in Calgary. Email him at anis@cbc.ca.

With files from The Associated Press
