How to fight disinformation while preserving free speech

There are solutions “within the framework of our traditions of freedom of speech and free expression” to counter the spread of disinformation online, Daniel Fried, a distinguished fellow at the Atlantic Council, said at the Council’s Disinfo Week event in Athens, Greece, on March 4.

“We are not hopeless,” Fried said.

Fried outlined what democratic governments can do to limit the effects of disinformation and make it harder for foreign actors to exert malign influence over public opinion. Rather than taking the easy way out and following authoritarian countries in censoring content and becoming “arbiters of truth,” tangible solutions to counter disinformation lie “in the direction of transparency and integrity, and possibly regulation around those lines,” Fried argued.

The Atlantic Council’s Disinfo Week brings together disinformation experts and government officials for four days of presentations and discussions in Athens (March 4), Madrid (March 5), and Brussels (March 6 and 7). The event series is part of the Atlantic Council’s efforts in “identifying, exposing, and fighting disinformation campaigns,” according to Atlantic Council Executive Vice President Damon Wilson.

Opening the event in Athens, US Ambassador to Greece Geoffrey Pyatt agreed that democracies should shun the temptation to adopt authoritarian solutions to the disinformation crisis. While “we have to grapple with managing the fire hose of information” that the Internet provides, Pyatt was certain “that freedom of expression and freedom of speech will remain a bedrock of our democratic societies. It is one of the values that binds us together.”

Image: US Ambassador to Greece Geoffrey Pyatt speaks at the Atlantic Council Disinfo Week event in Athens, Greece, on March 4, 2019.

While “the Internet was actually once celebrated as [a] metaphor for democracy,” Oxford Internet Institute researcher Samantha Bradshaw explained, foreign actors have been able to use social media algorithms, the “data richness of the personalization” that happens on social media, and automated bots to push fake or misleading content to the users who are most likely to fall for the deception.

“Both state and nonstate actors can employ a range of tools to conduct malign influence operations,” Pyatt explained. This makes it important that the United States work “with our partners and allies to address this challenge that is affecting democracies worldwide,” he said.

Thodoris Georgakopoulos, content director of the Greek think tank diaNEOsis, added that disinformation campaigns, or the “intentional dissemination of false information in order to gain something,” have “existed forever,” in the form of government propaganda or even the tabloid press. What is different about the current crisis is the shift from the consumption of externally curated packages of information (from newspapers or TV programs) to the wide-open channels of the Internet, where users “get to curate their own package of information,” Georgakopoulos explained. This vast amount of choice gives malicious actors more opportunity to push content toward vulnerable audiences. As Kyriakos Pierrakakis, research director for diaNEOsis, said, “it is much easier to destroy rather than create when you use 140 characters.”

To combat foreign-led disinformation campaigns, democracies should take tangible steps to help cue online users about the sources of their information. “Transparency means disclosing who actually is online,” Fried said, arguing that a group on Facebook calling itself the “Concerned Sons of Texas,” but actually run out of a Russian troll farm, should be properly labeled by social media companies as originating from a Russian account. Fried did not support, however, proposals to verify every user on social media platforms or the Internet, and Bradshaw pointed out the importance of anonymity to protect civil society actors in authoritarian states. But, Fried added, “anonymity should not give license to impostering,” and accounts that fraudulently depict themselves should be removed or their posts flagged.

Fried also argued that “governments ought to have the ability to regulate political ads online,” pointing to longstanding limits on commercial free speech in the United States, such as the ban on cigarette advertisements. These regulations could also be extended to cover the use of bots, which Fried argued have no freedom of speech rights since “a bot is a robot” and not a human.

Finally, Fried proposed that governments look to push social media companies to change their algorithms. He pointed to the fairness doctrine in the United States, which once required broadcasters to devote airtime to opposing political viewpoints. “Maybe I in the United States should not be sent just confirmational news items [on social media] telling me what I already believe,” Fried said. “Maybe there ought to be an algorithmic requirement to send me stuff on all sides.”

Bradshaw added that much of the problem with fake content on social media is “a symptom” of the “underlying problems” in the business model of social media companies, which is built on the collection and sale of personal information and behavior history of users. Fried suggested that governments could compel social media companies to use “informational fiduciaries,” who would “limit their ability to sell us and our profiles online.”

While governments can take these steps and more to limit the effect of disinformation, “the heroes of this story are going to be the young activists who in their own countries are far more capable than foreigners at spotting foreign-based disinformation campaigns,” Fried said.

One such activist, Valentinos Tzekas, who founded the Greek disinformation watchdog FightHoax, explained that his group has developed an algorithm that is “capable of reading, analyzing, and understanding thousands of news articles in seconds.”

Fried said that groups such as FightHoax and the Atlantic Council’s Digital Forensic Research Lab were best positioned to stop disinformation in real time. “We should put our faith in them,” Fried argued.

Fried was ultimately optimistic about stopping disinformation in its tracks. “Since the printing press, new information technology has been exploited by extremists,” he said, arguing that societies have always eventually adjusted through changing social norms and regulation.

“Our job,” Fried maintained, “is to foreshorten the period of adjustment and to limit the damage.” The best way “we can deal with this problem,” he added, is “if we act together.”

The United States and the European Union, Fried explained, have enormous leverage over social media companies and, if they use it effectively, can certainly change those companies’ behavior. “The social media companies are going to listen to us…and they will adjust,” he said.

David A. Wemer is assistant director, editorial, at the Atlantic Council. Follow him on Twitter @DavidAWemer.
Image: From left, diaNEOsis content director Thodoris Georgakopoulos, Oxford Internet Institute researcher Samantha Bradshaw, Atlantic Council distinguished fellow Daniel Fried, and Kathimerini reporter Marianna Kakaounaki speak at the Atlantic Council Disinfo Week event in Athens, Greece, on March 4, 2019.