Facebook has decided to be more transparent than ever, especially after the recent Cambridge Analytica data-harvesting scandal.
To demonstrate the work it is doing to clean its social network of content that endangers users' privacy and safety, Mark Zuckerberg's team has released its first report detailing the measures being taken.
And the figures confirm a surge in posts containing explicit violence and terrorist propaganda, which spread across the network at lightning speed.
Thanks to the company's artificial intelligence systems, Facebook manages to catch between 86% and 99.5% of cases, depending on the category, before any user reports the inappropriate content.
To give an idea of the figures presented: in the first quarter of 2018, Facebook removed almost 2 million posts promoting or glorifying terrorist groups such as Al Qaeda or ISIS, 73% more than in the last quarter of 2017.
It also managed to remove 3.4 million violent images, almost three times more than in the final months of 2017.
In the case of sexual images, the figure skyrockets to 21 million items removed in just four months of 2018.
This is where users are most critical: they argue that the system does not distinguish between a woman's breasts and a man's chest (the latter is not usually removed, since the system does not consider it offensive).
In fact, Facebook once censored the painting "Liberty Leading the People", in which a bare breast is visible at the center of the canvas. The social network apologized a month later.
Hate speech is where the company has the most trouble, since it is difficult to detect automatically. Although Facebook removed 2.5 million such messages this year, 56% more than in the last quarter of 2017, only 38% were detected before someone reported them.
So there is still work to do: the artificial intelligence system cannot yet tell whether a message is being used to attack or to denounce an attack.
In fact, some groups have complained that Facebook removed articles in which they were denouncing precisely the kind of content the platform was erasing.