It’s the question everyone has been asking: just what is Facebook going to do about the distinctly unsettling material that is being shared on its platform?
Never mind fake news; recent months have seen examples of Facebook Live broadcasts of suicides and murders, while the sharing of revenge porn, violence and abuse is seemingly more prevalent than ever.
Now, a Guardian investigation, titled ‘The Facebook Files’, has uncovered 100 internal training manuals, spreadsheets and flowcharts – previously unseen by the public – which detail the advice given to moderators who are trying to keep up with the enormous volume of content posted on the site. One source told the Guardian: “Facebook cannot keep control of its content. It has grown too big, too quickly.”
Earlier this month, Mark Zuckerberg confirmed that the company would be employing 3,000 extra moderators to speed up the reviewing of content, but even if this proves sufficient, the question remains: what is and isn’t permitted under its self-created guidelines?
Here are some of its key approaches – and what is remarkable is just how far the balance tips in favour of allowing distressing material to stay up rather than taking it down:
Credible threats of violence
Facebook acknowledges that “people use violent language to express frustration online” and feel “safe to do so” on the site. It has therefore decided to remove threats, or leave them up, depending on whether the violence is judged ‘credible’. Examples from the manuals include:
‘To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat’ (not credible)
‘I’m going to kill you’ (not credible)
‘You assholes better pray to God that I keep my mind intact because if I lose I will literally kill HUNDREDS of you’ (not credible – only an ‘aspirational’ or ‘conditional’ statement)
‘Someone shoot Trump’ (credible – the President, as a head of state, is in a ‘protected category’)
‘#stab and become the fear of the Zionist’ (credible)
Graphic Violence (Imagery/Videos)
- ‘We do not allow people to share photos or videos where people or animals are dying or injured if they also express sadism’
- Facebook will also delete posts where there is a ‘celebration’ of violence – either where the user ‘speaks positively of the violence’, or shares it for ‘sensational viewing pleasure’. However, some surprising things do not qualify as celebration: ‘opinions on blame’ (e.g. ‘he deserved it’), ‘supporting the death penalty’ or ‘enjoying the justice of a violent sentence’
If content makes it through these filters, Facebook’s overriding principle is ‘we think minors need protection and adults need a choice’: anything judged ‘disturbing’, but not disturbing enough to be deleted, is placed behind an 18+ age restriction, with auto-play disabled and a warning screen added.
Videos of violent deaths
- ‘Videos of violent deaths are disturbing, but can help create awareness’
- Facebook will allow people to livestream attempts to self-harm because it ‘doesn’t want to censor or punish people in distress’
Images of animal abuse
- ‘Generally, imagery of animal abuse can be shared on the site’
- ‘Some extremely disturbing imagery may be “marked as disturbing”’ – but not deleted
Non-sexual physical abuse and bullying of children
- ‘We do not action photos of child abuse. We mark as disturbing videos of child abuse. We remove imagery of child abuse if shared with sadism and celebration [as described above].’
- One slide explains that Facebook takes this position so that “the child [can] be identified and rescued, but we add protections to shield the audience”. It confirmed to the Guardian that there are “some situations where we do allow images of non-sexual abuse of a child for the purpose of helping the child”.
Revenge porn
Imagery qualifies as ‘revenge porn’ only if it fits all three of the following:
- Image produced in private setting
- Person in image is nude, near nude or sexually active
- Lack of consent confirmed by ‘vengeful context’ (e.g. caption or comment) OR ‘independent sources’ (e.g. media coverage)
Only two of the three? Then it stays up.
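The test above is a strict conjunction: all three conditions must hold before an image is treated as revenge porn, and the third condition is itself a disjunction of two kinds of evidence. A minimal sketch of that logic – with hypothetical function and parameter names, not anything from Facebook’s actual systems – would look like this:

```python
def is_revenge_porn(private_setting: bool,
                    nude_or_sexual: bool,
                    vengeful_context: bool,
                    independent_sources: bool) -> bool:
    """Hypothetical sketch of the three-part test described in the
    leaked guidelines. Lack of consent can be shown by EITHER a
    vengeful context OR independent sources; all three top-level
    conditions must hold for the image to be removed."""
    lack_of_consent = vengeful_context or independent_sources
    return private_setting and nude_or_sexual and lack_of_consent

# Two of the three conditions met -> not classed as revenge porn
print(is_revenge_porn(True, True, False, False))  # False
# All three met (consent disproved by vengeful context) -> removed
print(is_revenge_porn(True, True, True, False))   # True
```

The sketch makes the article’s point concrete: an intimate image shared without consent survives moderation unless every one of the three conditions can be evidenced.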
Nudity
- Facebook does not generally allow nudity, but makes exceptions for images in the context of the Holocaust and under ‘terror of war’ conditions – a response to the outcry after it took down the iconic Vietnam war ‘napalm girl’ photograph.
In essence, the investigation reveals just what a moral quagmire Facebook faces. It is clearly desperate not to be a censor, both from a moral standpoint and because it is keenly aware that the more material is shared – no matter how questionable – the more time people spend on the site and the more money it makes from advertising.
It is also attempting to apply different moral standards to every country it operates in – an almost impossible task without an army of moderators. Even with sufficient numbers it would not be easy, given that context is so often crucial to whether something is, for example, ‘credible’ or ‘not credible’.
A report by British MPs on 1 May said, “the biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal or dangerous content, to implement proper community standards or to keep their users safe”.
Frankly, in the wake of these revelations, it is hard to disagree with this finding.