HOW DOES FACEBOOK MODERATE ITS CONTENT?

Facebook currently relies on human reviewers and moderators, and in some cases - such as content relating to ISIS and terrorism - on automatic removal of offensive and dangerous material.
Manual moderation relies largely on an appeals process in which users flag up concerns with the platform, which then reviews the content through human moderators.

However, none of the 200 viewers of the live broadcast of the Christchurch, New Zealand terror shooting flagged it to Facebook's moderators.
In some cases, Facebook automatically removes posts when an AI-driven algorithm indicates with very high confidence that the post contains support for terrorism, including ISIS and al-Qaeda.

But the system overall still relies on specialised reviewers to evaluate most posts, and only removes posts immediately when the tool's confidence level is high enough that its 'decision' is likely to be more accurate than that of a human.
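The confidence-threshold routing described above can be pictured with a minimal Python sketch. The function name, scores and cut-off values here are purely hypothetical illustrations of the idea, not Facebook's actual system or real figures.

AUTO_REMOVE_THRESHOLD = 0.99   # hypothetical cut-off, not a real Facebook figure
HUMAN_REVIEW_THRESHOLD = 0.50  # hypothetical cut-off, not a real Facebook figure

def route_post(post_id: str, terror_support_score: float) -> str:
    """Route a post based on a classifier's confidence score between 0.0 and 1.0."""
    if terror_support_score >= AUTO_REMOVE_THRESHOLD:
        # Only at very high confidence is the post removed without human input
        return f"post {post_id}: removed automatically"
    if terror_support_score >= HUMAN_REVIEW_THRESHOLD:
        # Uncertain cases go to specialised human reviewers
        return f"post {post_id}: queued for specialised human review"
    # Low-scoring posts are left up unless users report them
    return f"post {post_id}: no automatic action"

print(route_post("a1", 0.995))  # very high confidence -> automatic removal
print(route_post("b2", 0.70))   # uncertain -> human reviewers decide
print(route_post("c3", 0.10))   # low score -> relies on user reports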

According to ex-Facebook executive Monika Bickert, its machine learning tools have been critical in reducing the amount of time that terrorist content reported by users stays on the platform, from 43 hours in the first quarter of 2018 to 18 hours in the third quarter of 2018.
Ms Bickert added: 'At Facebook's scale neither human reviewers nor powerful technology will prevent all mistakes.'