Twitter is expanding its use of warning labels on tweets that contain misleading information about coronavirus vaccines.
The change, announced in a blog post on Monday, is designed to build on the social network's existing Covid-19 guidance, which has led to the removal of more than 8,400 tweets and the challenging of 11.5 million accounts worldwide.
In December, the platform began applying labels that provide additional context to tweets containing disputed information about the pandemic. Now the company is focusing specifically on vaccine-related posts and introducing a strike system that "determines when new enforcement actions are needed".
Twitter’s decision comes amid concerns about the spread of anti-vaccination material on social networks.
The labels will initially be applied by human moderators only, whose decisions will help train automated systems to detect rule-breaking content in future. Users will not face any further action after a first warning.
Two strikes will lead to a 12-hour account lock, with a further 12-hour lock for a third strike. A seven-day account lock will be imposed after four strikes, followed by permanent suspension for five or more strikes.
The company is starting with English content and says it will work to expand to other languages and cultural contexts over time.
"We believe that the strike system will help educate the public about our policies and further reduce the spread of potentially harmful and misleading information on Twitter, especially for repeated moderate and severe violations of our rules," the company said.
Users, however, cannot specifically report others for posting Covid misinformation, even though such content is banned from the platform. Instead, users who believe a particular tweet breaks the company's Covid rules must report it under another offense – such as "threat of harm" – and use the free text box to note that it contains prohibited misinformation.
The new Twitter policies come after Facebook banned vaccine misinformation outright in early February, using a similar strike system that suspends users who post false claims and permanently removes those with multiple violations.
Facebook is specifically targeting pages and groups with its new guidelines, which are not limited to Covid-related content and also cover other falsehoods, including the suggestion that vaccines cause autism – an unfounded claim promoted by many in the anti-vax community.
Twitter, Facebook and platforms such as Instagram and TikTok began adding links and labels to information about Covid-19 at the start of the pandemic. On Facebook, Instagram and TikTok, even posts that merely mention the term "Covid-19" are accompanied by a warning label and a link to accurate information from the Centers for Disease Control and Prevention.