Twitter enlists users to flag ‘misleading’ tweets as part of misinformation crackdown
Ailan Evan, DCNF
Twitter announced Tuesday it will test a feature allowing users to report tweets they believe are misleading, as the company cracks down on alleged misinformation.
Users in the U.S., South Korea, and Australia will be able to select the “It’s Misleading” option when reporting a tweet. The social media platform said it may not take direct action on each flagged tweet, but will use the reports to identify misinformation trends.
Twitter staff will review certain reported tweets depending on the topic or level of exposure and determine if they violate the company’s misinformation policies, a Twitter spokesperson told the Daily Caller News Foundation.
We’re testing a feature for you to report Tweets that seem misleading – as you see them. Starting today, some people in the US, South Korea, and Australia will find the option to flag a Tweet as “It’s misleading” after clicking on Report Tweet.
— Twitter Safety (@TwitterSafety) August 17, 2021
The initiative is the most recent attempt by Twitter to address the problem of misinformation on its platform. The tech company launched Birdwatch, a community-driven moderation program, in January 2021, allowing verified users to annotate tweets they believed to contain misinformation.
Twitter partnered with the Associated Press and Reuters earlier this month to “elevate credible information” and curate trending content, citing a responsibility “to help people understand the conversation happening on our service.”
The company began flagging and removing posts over alleged COVID-19 vaccine misinformation in March, and suspended the accounts of several users, including Rep. Marjorie Taylor Greene, for violating its misinformation policies. It also labeled, and in some cases removed, tweets it deemed to be misinformation related to the 2020 presidential election.
The new initiative comes as the Biden administration has pushed social media companies to more aggressively address the problem of COVID-19 misinformation. President Joe Biden said last month that Facebook is “killing people” by not doing enough to remove misleading content, and the White House announced it was working with the platform to flag misinformation.