Twitter tests Safety Mode to block hateful messages


Twitter is testing a feature called “Safety Mode” that puts up a temporary line of defense between an account and the waves of toxic invective the platform is notorious for. Once enabled from the settings menu, the mode switches on an algorithmic screening process that filters out potentially abusive interactions for seven days.
The feature will initially be tested by a small number of English-speaking users, Twitter said, with priority given to “marginalized communities and female journalists” who often find themselves targets of abuse.
Safety Mode is the latest in a series of features introduced to give Twitter users more control over who can interact with them. Previous measures have included the ability to limit who can reply to a tweet.
Twitter said the new feature is a work in progress, acknowledging that it might accidentally block messages that are not in fact abusive. Like other social media platforms, Twitter relies on a combination of automated and human moderation.

Alongside dealing with abuse on the platform, Twitter has become more determined to crack down on misinformation. In August it partnered with Reuters and the Associated Press to debunk misleading information and stop its spread.

It has also previously introduced Birdwatch, a community-moderation system that allows volunteers to label tweets they find to be inaccurate.
