Who knows what to believe anymore? Perhaps this story you’re reading right now is completely made up? We promise it’s not. But then we would say that, wouldn’t we?
The scourge of fake news has taken hold over the past few years and, although its effects have been hard to measure, there is a general belief that this is Not A Good Thing and that we need to stamp it out.
The rise of the internet has certainly enabled fake news to proliferate – anyone is a publisher now, and anything can be seen by millions of people before anyone can check its accuracy. But the internet isn’t a prerequisite: in the US election, although fake news was credited with helping Donald Trump win, both Republicans and Democrats were at it, while the EU Referendum in the UK (and even last week) saw outright, consequence-free lying direct from the mouths of politicians, no internet required.
Nonetheless, Facebook, which has been blamed for allowing completely made-up, partisan stories to spread via its platform, finally seems to be taking steps to deal with the problem.
This tweet from US sports reporter John Ourand brought attention to what looks like Facebook’s first attempt to address fake news:
This was followed by Quartz’s Nikhil Sonnad, who documented the posting of a piece of fake news – a story by Newport Buzz falsely claiming that thousands of Irish people were brought to America as slaves (a piece presumably written to downplay the racism suffered by actual slaves and their descendants) – all the way through, from pasting it into his status to its appearing on his timeline.
When the link is pasted into the status box, a warning appears saying that the piece is ‘Disputed by [independent fact-checking site] Snopes.com and Associated Press’:
Clicking on the warning shows more details of Facebook’s apparent new policy:
If the warning is ignored and ‘Post’ is clicked, the story is not shared immediately. Instead, another pop-up appears, repeating the warning:
If the article is posted anyway, the warning remains beneath it. However, it’s not clear whether this warning is visible only to the poster, or also to friends who see the post on their timelines.
We tried to repeat these steps but, at the time of writing, received none of the above warnings; judging by reports on Twitter, the feature is currently being tested in only one area of the United States.
However, one thing that does seem strange is the errant comma (between ‘sites’ and ‘Snopes.com’) in the initial pop-up, which appears on both the mobile version (in John Ourand’s tweet) and the desktop version – Facebook is normally flawless with its spelling and grammar.
In addition, of course, there is the issue of how Facebook will deal with stories that aren’t necessarily ‘false’ but are, for example, biased in the extreme. Where do you draw the line? And how do you fact-check the fact-checkers? Some right-wing commentators claim that the likes of Snopes have an inherent left-wing bias.
Facebook has all this to consider, along with the reactions to the initial rollout, before deciding whether this is the correct course of action. For now, it seems very much like dipping its toes in the water – after all, it’s not stopping you from posting anything. But it may make people think that little bit more before mindlessly sharing false information.