Trump Isn’t the Only One Spreading Misinformation on Social Media

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

Facebook and Twitter temporarily locked President Trump’s accounts this week after he inspired the rampage on the Capitol, and they are considering permanent bans.

It’s worth asking whether the major internet properties should revise their rules for him and other people with large online followings who regularly spread bogus or harmful information.

A small number of influential people, including the president, have repeatedly been instrumental in stoking misinformation about the election or promoting unproven treatments for the coronavirus.

If the internet companies want to give everyone a voice and create healthier online spaces, perhaps Facebook, Twitter and YouTube should subject this prominent band of habitual online misleaders to stricter rules. That could dial back some of the internet’s dangers by penalizing those who do the most harm without stifling the free expression of the vast majority.

I’m not solely blaming internet companies for the relatively large percentage of Americans who don’t believe the election was legitimate or those who believe the coronavirus is overblown. Distrust and disbelief are chronic, whole-of-society problems with no simple solutions. But this is a moment for all of us to begin to repair what’s broken. (Assuming that we can agree on what’s broken, which is no sure thing.)

One place to start is with the people who have outsized influence on our beliefs and behavior. In November, my colleague Sheera Frenkel reported on analyses that found just 25 accounts, including those of Mr. Trump and the right-wing commentator Dan Bongino, were responsible for about 29 percent of the interactions on the widely shared Facebook posts about voter fraud that researchers examined.

In October, a coalition of misinformation researchers called the Election Integrity Partnership found that about half of all retweets related to dozens of widely spread false claims of election interference could be traced back to just 35 Twitter accounts, including those of Mr. Trump, the conservative activist Charlie Kirk and the actor James Woods. (Yes, most of the habitual super spreaders on crucial issues like the election have been right-wing figures.) Most of these 35 accounts helped seed multiple falsehoods about voting, the researchers found.

“It’s a small number of people with a very large audience, and they’re good tacticians in spreading misinformation,” Andrew Beers, a researcher with the Election Integrity Partnership, told me. “Moderation on these accounts would be much more impactful” than what the internet companies are doing now, he said.

And yet, as I’ve written before, internet companies tend to consider only the substance of a message, divorced from the identity of the messenger, when deciding whether a post is potentially harmful or dangerously misleading and should be deleted or hidden.

It makes sense now to shift course and try subjecting prominent people to stricter rules than the rest of us, and applying harsher punishments to influential repeat spreaders of false information. That includes Mr. Trump and other world leaders who have used their online accounts to inflame divisions and inspire mob violence.

YouTube has a “three strikes” policy that aims to punish people who repeatedly break its rules. The policy is riddled with inconsistencies, but it might be worth copying. I can imagine something like it for all the social media sites, with teams laser-focused on accounts with large followings, say, those with more than a million followers, or perhaps only on accounts found to be habitual spreaders of misinformation or division.

Each time a prominent account shares something that is deemed discredited information or that brushes close to existing rules against abusive behavior, the account would get a warning. Do so three times and the account would face a lengthy suspension or ban.
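
To make the mechanics concrete, here is a minimal sketch, written in Python purely for illustration, of the kind of rule I have in mind. The follower threshold, the three-strike limit and the length of the suspension are my assumptions for the example, not anything the platforms have announced.

    # Illustrative sketch only; no platform has said it enforces rules this way.
    from dataclasses import dataclass

    FOLLOWER_THRESHOLD = 1_000_000  # assumed cutoff for a "prominent" account
    STRIKE_LIMIT = 3                # three strikes, per the proposal above
    SUSPENSION_DAYS = 30            # assumed length of a "lengthy suspension"

    @dataclass
    class Account:
        handle: str
        followers: int
        strikes: int = 0
        suspended_days: int = 0

    def record_flagged_post(account: Account) -> str:
        """Apply the sketched policy when a post is flagged as discredited information."""
        if account.followers < FOLLOWER_THRESHOLD:
            return "handled under the ordinary rules"
        account.strikes += 1
        if account.strikes >= STRIKE_LIMIT:
            account.suspended_days = SUSPENSION_DAYS
            return f"{account.handle}: suspended for {SUSPENSION_DAYS} days"
        return f"{account.handle}: warning {account.strikes} of {STRIKE_LIMIT}"

The point of the sketch is simply that the bar would change with reach: ordinary accounts would be handled as they are today, while the most-followed accounts would accumulate strikes toward a suspension.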

Some might call this internet censorship. It is. But the internet companies already have extensive guidelines prohibiting bullying, financial scams and deliberately misleading information about important issues like elections.

To do this, the internet companies will have to be willing to make powerful people angry.

The recalibration of how internet sites handle influential people would put a lot of pressure on users with large followings to be more careful about what they say and share. That’s not such a bad idea, is it?


  • The online plotting behind the Capitol mob: On “The Daily” podcast, Sheera traced the online organizing by the pro-Trump mob that stormed the Capitol. The chain of events, Sheera said, included the spread of “stop the steal” groups on Facebook before they were blocked, and real-time discussions on Wednesday on the site Gab, where people deliberated tactics for breaking through glass doors at the Capitol.

  • Even storming the Capitol is an online performance: BuzzFeed News and Protocol singled out some of the striking scenes of the pro-Trump mob posting selfies and video streams to social media. Both news outlets called this another example of the blurring line between living our lives and performing our lives online.

  • Three words: Archives. Hashtag. Party. Once a month, my colleague Caity Weaver wrote, the National Archives gathers history enthusiasts on Twitter to manically peruse and post about historical documents and records. It’s fun! Last month’s archival gathering centered on baking-related materials including President Dwight D. Eisenhower’s 1959 request for Queen Elizabeth’s scone recipe. (It leaves out many important instructions.)

Three more words: Competitive. Dog. Dancing.


We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.

If you don’t already get this newsletter in your inbox, please sign up here.
