Facebook will no longer allow graphic images of self-harm on its platform as the company tightens its policies following criticism of the moderation of violent and potentially dangerous content on social media.
The company also said on Tuesday that self-injury-related content would become harder to search for on Instagram, and such images would not appear as recommended content.
Twitter has already pledged that content related to self-harm will no longer be reported as abusive, in an effort to reduce the stigma around suicide.
We’ve also made updates to our reporting flow, to do our part in reducing the stigma around suicide. Now, a Tweet that suggests someone intends to hurt themselves will no longer need to be reported as abusive content.
— Twitter Safety (@TwitterSafety) September 10, 2019
About 800,000 people a year kill themselves, according to the World Health Organization.
Facebook has a team of moderators who watch for content such as live broadcasting of violent acts as well as suicides. The company works with at least five outsourcing vendors in at least eight countries on content review.
Governments across the world are wrestling with how to better control social media content, which is often blamed for encouraging abuse, spreading online pornography and influencing or manipulating voters.
Last month, Amazon said it would promote helplines to customers who use its site for searches linked to suicide.
Google, Facebook and Twitter issue helpline numbers in response to such user queries.
• In the UK and Ireland, Samaritans can be contacted on 116 123 or email firstname.lastname@example.org or email@example.com. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at www.befrienders.org