
By Citizen Reporter

Facebook discloses new details on removing terrorism content

European countries where civilians have been killed in terror attacks have pressed social media sites to do more to remove militant content.


Facebook Inc on Thursday offered new insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.

Facebook has ramped up its use of artificial intelligence, such as image matching and language understanding, to identify and remove content quickly, Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, counterterrorism policy manager, explained in a blog post.

Facebook uses artificial intelligence for image matching that allows the company to see if a photo or video being uploaded matches a known photo or video from groups it has defined as terrorist, such as Islamic State, Al Qaeda and their affiliates, the company said in the blog post.
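Facebook has not published how its image matching works. As a rough illustration of the general technique the blog post describes, the sketch below fingerprints an image with a simple perceptual "difference hash" and compares it against a set of known fingerprints; the hash choice, threshold and file names are assumptions for illustration only, not Facebook's actual method, which relies on far more robust fingerprints.

```python
# Minimal sketch of fingerprint matching, assuming a simple difference hash (dHash).
# Production systems use more robust hashes, but the matching logic is the same:
# hash the upload, then compare it against fingerprints of known terrorist material.
from PIL import Image  # pip install Pillow

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a perceptual difference hash of an image."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def matches_known_content(upload_hash: int, known_hashes: set[int], max_distance: int = 5) -> bool:
    """True if the upload is within a small Hamming distance of any known fingerprint."""
    return any(bin(upload_hash ^ known).count("1") <= max_distance for known in known_hashes)

# Hypothetical usage (paths and the known-hash set are placeholders):
# known = {dhash("known_propaganda_frame.jpg")}
# if matches_known_content(dhash("new_upload.jpg"), known):
#     ...  # block the upload and queue it for human review
```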

YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
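In practice, that shared database amounts to a pool of fingerprints each platform can check locally. The sketch below, with made-up file names and one hex fingerprint per line standing in for whatever format the companies actually exchange, shows the idea of merging partner contributions into one lookup set.

```python
# Sketch of a shared fingerprint pool: each partner contributes a file of
# fingerprints (one hex string per line) and every platform checks uploads
# against the merged set. File names and formats here are illustrative only.
from pathlib import Path

def load_shared_fingerprints(partner_files: list[str]) -> set[str]:
    """Merge fingerprint lists contributed by each partner platform."""
    pool: set[str] = set()
    for name in partner_files:
        path = Path(name)
        if path.exists():
            pool.update(line.strip() for line in path.read_text().splitlines() if line.strip())
    return pool

# Hypothetical usage, reusing the dhash sketch above:
# pool = load_shared_fingerprints(["facebook_hashes.txt", "youtube_hashes.txt", "twitter_hashes.txt"])
# if format(dhash("new_upload.jpg"), "016x") in pool:
#     ...  # the same content was already flagged by another platform
```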

Similarly, Facebook now analyses text that has already been removed for praising or supporting militant organisations to develop text-based signals for such propaganda.
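Facebook has not said which models it uses, but one common way to turn previously removed text into a signal is to train a classifier on it. The sketch below assumes TF-IDF features and logistic regression from scikit-learn, with placeholder strings standing in for real removed and benign posts; none of this is Facebook's disclosed approach.

```python
# Sketch of a text-based signal: learn from text already removed for praising
# or supporting militant groups, then score new posts. The training strings are
# placeholders and the model choice is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# label 1 = previously removed propaganda, label 0 = ordinary content
train_texts = [
    "placeholder: post praising a banned militant group",
    "placeholder: post recruiting for a banned militant group",
    "placeholder: post about a football match",
    "placeholder: post sharing a family recipe",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def propaganda_score(post: str) -> float:
    """Probability-like score that a new post resembles previously removed propaganda."""
    return float(model.predict_proba([post])[0][1])

# Posts scoring above some threshold would typically be queued for review rather than removed automatically.
```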

“More than half the accounts we remove for terrorism are accounts we find ourselves; that is something that we want to let our community know so they understand we are really committed to making Facebook a hostile environment for terrorists,” Bickert said in a telephone interview.

Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other social media sites such as Google and Twitter to do more to remove militant content and hate speech.

Government officials have threatened to fine the company and strip the broad legal protections it enjoys against liability for the content posted by its users.

Asked why Facebook was opening up now about policies that it had long declined to discuss, Bickert said recent attacks were naturally starting conversations among people about what they could do to stand up to militancy.

In addition, she said, the reason “we’re talking about this is because we are seeing this technology really start to become an important part of how we try to find this content.” (Reporting by Julia Fioretti; Editing by Jonathan Weber and Grant McCool)
