Twitter will test letting some users fact-check tweets.
Twitter said on Monday it would allow some users to fact-check misleading tweets, the latest effort by the company to combat misinformation.
Users who join the program, called Birdwatch, can add notes to rebut false or misleading posts and rate the reliability of the fact-checking annotations made by other users. Users in the United States who verify their email addresses and phone numbers with Twitter, and have not violated Twitter’s rules in recent months, can apply to join Birdwatch.
Twitter will start Birdwatch as a small pilot program with 1,000 users, and the fact-checks they produce will not be visible on Twitter but will appear on a separate site. If the experiment is successful, Twitter plans to expand the program to more than 100,000 people in the coming months and make their contributions visible to all users.
Twitter continues to grapple with misinformation on its platform. In the months before the U.S. presidential election, Twitter added fact-check labels written by its own employees to tweets from prominent accounts, temporarily disabled its recommendation algorithm, and added more context to trending topics. Still, false claims about the coronavirus and elections have proliferated on Twitter despite the company's efforts to remove them. At the same time, Twitter has faced backlash from some users who argue that the company removes too much information.
Giving some control over moderation directly to users could help restore trust and allow the company to move more quickly to address false claims, Twitter said.
“We apply labels and add context to tweets, but we don’t want to limit efforts to circumstances where something breaks our rules or receives widespread public attention,” Keith Coleman, a vice president of product at Twitter, wrote in a blog post announcing the program. “We also want to broaden the range of voices that are part of tackling this problem, and we believe a community-driven approach can help.”