Trump is no longer tweeting, but online misinformation will not go away

Darren Linvill thought he was prepared for 2020 and the fire hose of false information that would be unleashed on the United States during an election year in which the country was bitterly divided.

Linvill is a researcher at Clemson University in South Carolina who tracks disinformation networks associated with Russia.

In the years after 2016, Linvill shared his work with various government entities as the U.S. worked to find out exactly what Russia did to interfere in that election. He even created a game called "Spot The Troll," which shows how difficult it is to tell a professional troll from an especially partisan American.

People tend to fail the test, but more important, Linvill has tracked their online behavior afterward, and they appear to become more thoughtful about the information they choose to share and promote.

“They realize, ‘Ah, maybe I’m not as smart as I thought,'” he says.

Of all the people watching the political landscape heading into 2020, he should have been ready for any misinformation the year had to offer. But he was not.

“The minute the pandemic hit,” says Linvill, “shit hit the fan.”

Instead of monitoring a wave of foreign disinformation seeking to sow distrust in democratic institutions and elections, he watched domestic sources emerge doing the same thing.

"I'm not even seeing [Russian] overt English-language messaging to the same extent that I saw in the past, because they don't need to," said Linvill. "I mean, the GOP took their ball and ran with it."

In the past year, Americans have spent more time online than ever and gotten more of their information from untrustworthy or outright false sources. Despite the deplatforming of former President Trump, experts say the way Americans communicate and receive information online remains broken.

It is a crisis that has torn families apart and led to the violent takeover of the United States Capitol in January.

A recent report on the attack by the nonpartisan Election Integrity Partnership concluded that, terrible as it was to watch, it should not have been seen as surprising given what had been happening online all year.

"Many Americans were shocked, but they should not have been," wrote the report's authors.

It is also not a problem that surfaces only once every four years. Public health officials are currently contending with a flood of online misinformation as they try to convince the public that coronavirus vaccines are safe.

“We are in serious trouble,” said Joan Donovan, research director at the Shorenstein Center for Media, Politics and Public Policy at Harvard University. “Disinformation has become an industry, which means that financial incentives and political gains are now aligned.”

Trump may no longer have access to his nearly 89 million Twitter followers, but the system he capitalized on to spread more than 30,000 falsehoods remains intact.

“We will see more of this,” she added.

Defining the landscape

The pandemic has been terrible for millions of Americans who have lost loved ones in some cases and jobs in others. But it was arguably a boon to the world of technology.

Twitter and Facebook have seen meteoric rises in their share prices since last March, mirroring growth in the time people spend on their platforms. That growth may mean an audience more receptive to conspiratorial thinking and less concerned with the truth.

Even before the pandemic, engagement with social media was increasing. Facebook, Twitter, Instagram and Reddit have all seen increases in the time people spend on their platforms since 2018, according to Activate Consulting, a firm that tracks technology and media trends.

There was also a bigger jump in the daily time spent on the internet and media from 2019 to 2020 than in any of the previous years that Activate tracked, according to the company’s CEO, Michael Wolf.

The average Facebook user now spends about 15 and a half hours a month on the platform. And in general, Americans are spending more than 13 hours in total each day engaging in some type of technology or media, be it video, games, social media, messaging or audio.

“New habits have formed,” said Wolf. “These behaviors are simply not likely to be reversed.”

This means that information curated by algorithms has become an increasingly central part of Americans' news diet.

About 1 in 5 Americans say they received political news mainly on social media in 2020, according to the Pew Research Center.

Those who obtained their information in this way were found to engage with conspiracy theories more often than other Americans, while expressing less concern about the damaging effects of unreliable information.

The problem is more pronounced among younger Americans, who have grown up with these platforms. Of the Americans who relied most on social media for news about the election, half were under 30.

This week's Election Integrity Partnership report detailed how allegations of electoral fraud went viral in conservative circles, while subsequent fact checks gained only a fraction of the same traction.

Even as government officials did their best to prepare Americans for what to expect on election night and beyond, conspiracy theorists inspired by Trump and his allies successfully painted those efforts to get ahead of the problem as further evidence of a manipulated system controlled by a "deep state."

It is not just election-related misinformation that is increasing: false narratives about the coronavirus pandemic have also exploded.

The disinformation tracking company NewsGuard has compiled a list of more than 400 sites spreading lies about the pandemic.

The company also found that many of these same sites are being inadvertently funded through automated advertising, in part by some of the world's largest corporations and even by the federal government's own Centers for Disease Control and Prevention.

"If advertising platforms provided easy tools to avoid misinformation sites when placing ads, it would have a significant impact on the business model of misinformation, greatly reducing the incentive for disinformation publishers to promote false claims," wrote Matt Skibinski, general manager of NewsGuard, in the company's report on the subject.

Chasing a symptom

Throughout his presidency, Trump repeatedly pushed the limits of social media companies' policies when it came to sharing false information.

At first, companies did nothing. Then they added fact-checking labels, although it is unclear whether these labels help or hinder the spread of misinformation.

But what they didn’t do was undermine Trump’s ability to speak his mind, even when election officials warned that the kind of falsehood he was spreading would lead to violence.

“Someone is going to get hurt, someone is going to be shot, someone is going to be killed,” said Gabriel Sterling, a Georgia election official, in December, less than a month before a crowd invaded the United States Capitol.

As Trump's rate of lying accelerated last year, so did his Twitter following.

At the time of Trump's ban from the platform in January, his account had the sixth-highest follower count on Twitter. Over the course of 2020, Trump's account saw a 30% increase in followers, from 68.1 million to 88.7 million, according to a research team at the University of Colorado Boulder.

Removing his account, along with thousands of other accounts that spread misinformation, led to an immediate reduction in the spread of falsehoods on social networks, according to an analysis by the media tracking company Zignal Labs.

But that action won’t magically fix the platforms, says Harvard’s Donovan.

Of the top 20 accounts that shared misinformation about the election using the hashtag #voterfraud, for example, 13 remain active on Twitter, according to a Cornell University data analysis.

Donovan says these types of accounts, like those of conservative media personalities Charlie Kirk and Jack Posobiec, both with more than a million followers, can still make a false narrative go viral almost immediately.

"Of the people who spread the most damaging lies about the 2020 election, many of them maintain their social media accounts on most platforms," said Donovan. "When you don't get at the people who are writing the fictions, the people who are behind the orchestration of this misinformation, then it will end up coming back in different forms."

While individual members often raise concerns, Congress has so far declined to pressure social media companies into large-scale reforms of their platform designs, which have been shown to drive political polarization and reward disinformation with engagement.

“Everywhere along the way these social media platforms have innovated, there has been a lack of accountability and regulation on the part of politicians,” said Donovan. “Mainly because this type of chaos serves many politicians.”

Timeline for a correction

For Whitney Phillips, a disinformation researcher at Syracuse University, there are some reasons for optimism.

Even though it took one election marked by an unprecedented level of foreign interference and another that ended with violence at the United States Capitol, people are at least beginning to recognize that there is a problem.

"When I started doing this research in 2008, there was so much resistance to the idea that anything bad that happened on the internet was even real," said Phillips. "And it was only in 2017 that there was a critical mass of people who said, 'Maybe hate speech on the internet is not good. Maybe these things can correspond to real-world action.' ... Now, there is very little denying the dangers of a dysfunctional information ecosystem."

Much of her new book, co-written with Ryan Milner, focuses on the role memes played in normalizing hate speech and racism by cloaking them in humor or irony.

When asked how much moderation the major companies would need to do to fix the current state of information in the United States, she says the question itself is the wrong one.

A healthy future information environment probably does not involve Facebook or Twitter, at least in anything close to their current forms. It involves a completely redesigned internet.

“My guess is that it will take 50 years,” she says.

That means she has shifted her focus from platform moderation to K-12 education, so that future generations can be better equipped to fix what remains: systems that allow falsehoods to spread quickly, regardless of the truth.

"Our problem is that our networks are working exactly as they were designed to work. They work very well. They are not broken at all," she said. "So, to equip people to navigate these networks that are designed to prepare us for Hell, we basically have to think about what we are teaching young people."

What will happen between now and then, she is not sure. Recently radicalized users, for example, may find new ways to congregate online if they are kicked off the major platforms.

“Something is going to grow out of it,” she said. “What exactly is difficult to say. But I have a feeling it won’t be incredible.”

Copyright 2021 NPR. To see more, visit https://www.npr.org.
