Can Facebook win against ‘fake news’?

Facebook has enlisted partners to stop the spread of misinformation

The social media site has partnered with AFP and Africa Check to combat misinformation. But this is only the first step in the battle against ‘false news’.

The spread of misinformation has been a major problem on Facebook for some time now. In recent years, the social network has been blamed not only for the proliferation of false reporting, but being used to sway elections, referendums and public opinion.

Things reached a head earlier this year, when Facebook CEO Mark Zuckerberg was hauled in front of the US Congress to answer questions pertaining to the site’s failure to protect user data. This failure had led to widespread manipulation by third parties across the platform.

Unfair as it may sound, ‘fake news’ is to Facebook what ‘ending a celebrity’s career’ is to Twitter. It’s not the only thing the platform is known for, but it’s become ubiquitous enough to be synonymous with it.

Facebook has taken note. Last year the site launched a third-party fact-checking programme in several key markets – including the United States. It has since expanded this programme and has started to roll it out on the African continent. The social network has partnered with Africa Check and French news agency AFP to extend the fact-checking service locally. It was launched in Kenya yesterday and South Africa today. Facebook says it has plans to add more African countries to the programme soon.

From the sounds of things, Facebook is taking the issue seriously. Under the fact-checking programme, local articles, photos and videos will be checked for accuracy. If an article (or any piece of multimedia) is identified by one of the site’s partners as false, it’ll be demoted in the news feed, which will go some way towards hampering its distribution.

Third-party fact-checkers will also write articles about news stories, and these will appear immediately below the article in the news feed. On top of that, users and page admins will receive a notification if they share a story that’s been tagged as ‘false’. It’s Facebook’s hope that this programme will help users distinguish misinformation from the genuine article.

“We felt internally there was a need to address what had become a very serious issue,” says Sarah Brown, Head of Media Partnerships at Facebook. “We didn’t want to rush into a situation though where we became the arbiters of what was fake news and what wasn’t. We wanted to draw on the expertise of our partners.”

“The rapid expansion of the programme in key markets is indicative of how successful it’s been,” she says.

One could say that the fact-checking programme’s arrival in South Africa is rather well timed, given that a general election is just around the corner. But according to Emilar Gandhi, Facebook’s Public Policy Manager in SADC, the fact-checking programme is only the first step in an ongoing initiative Facebook has to combat ‘false news’.

“It’s a component of the work Facebook has been doing for the last two years,” says Gandhi. “We’ve been running this concurrently with other programmes – such as the one we’ve been running in local high schools called ‘My World’. In it, we teach young people – both in and out of school – how to spot fake news, how to be literate on our platforms and how to be safe with regards to bullying and harassment.”

While this all sounds like Facebook putting its best foot forward, questions do remain about how effective this strategy will be. After all, even if a user is notified that an article is filled with misinformation, there’s nothing stopping them from sharing it – and if they do so with a user outside the programme, is that user any the wiser?

Facebook has said that publishers who continually post content that has been tagged as false or problematic will be notified, their articles demoted and repeat offenders could have their pages shut down. But right now, there’s no word on how long this process will take.

Furthermore, it’s unclear at this stage just how many staff (or algorithms) third-party partners have dedicated to the fact-checking programme. Given the abundance of misinformation on social media – and the speed at which it spreads – is it not reasonable to think that some ‘false news’ articles are bound to fall through the cracks?

And then there are Facebook’s two other platforms to consider: Instagram and WhatsApp. The former may not be much of an issue at this stage, but the latter is a big concern. WhatsApp is a conduit not only for misinformation, but for chat groups with less than savoury aims. Once again, these are only two uses of the platform, but their ubiquity makes them somewhat synonymous with it.

Facebook says it’s aware of this problem – it’s even taken steps to educate users – but at the time of writing, no firm plans have been announced to roll out Facebook’s fact-checking programme to other platforms it owns. That having been said, the fact that Facebook is taking the initiatives it is gives one hope.

The battle against ‘fake news’ is only beginning. Whether or not it can be won remains to be seen, but at this stage, there is a lot more work to be done.
