
Facebook details policing for sex, terror, hate

By AFP
May 16, 2018

PARIS: Facebook pulled or slapped warnings on nearly 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday.

In an unprecedented report responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook detailed its actions against such content in line with its “community standards”.

Facebook said improved artificial intelligence technology had helped it act on 3.4 million posts containing graphic violence, nearly three times as many as in the last quarter of 2017.

In 85.6 percent of those cases, Facebook detected the images before users reported them, said the report, which was issued the day after the company said “around 200” apps had been suspended on its platform as part of an investigation into misuse of private user data.

The figure represents between 0.22 and 0.27 percent of the total content viewed by Facebook’s more than two billion users from January through March.