Surely it has happened to you: Facebook notifies you that someone, usually a girl in my case, has sent you a friend request. The girl in question has only a profile picture, has published nothing and, most of the time, the account has just been created. If you do not respond to the request, you will see that it eventually disappears, not just the notification, but the entire profile. This is just one example of the many types of fake accounts that can be created on Facebook. Their objective is to send spam, give likes to pages (what is known as "buying likes") and share posts.
Fortunately, Facebook does a good job with them, removing them within a matter of minutes, but we did not imagine that the problem of fake profiles was so huge. The company led by Zuckerberg has published a report on the enforcement of its Community Standards covering terrorist propaganda, graphic violence, adult nudity and sexual content, spam, hate speech and, indeed, fake accounts.
It is the first time that Facebook has published this type of information.
As we can read in the report, the social network removed more than 583 million fake profiles in the first quarter of 2018. To put that in perspective, this is the equivalent of 12 times the Spanish population and 1.79 times the American population. Likewise, as Guy Rosen, Vice President of Product Management, says on the company's official blog, "We removed 837 million pieces of spam and unwanted messages in the first quarter of 2018, nearly 100% of which we found and flagged before anyone reported it."
In the last six months, from October 2017 to March 2018, the number of fake accounts removed exceeds 1.3 billion.
The company also confirms having removed 21 million posts containing adult nudity and sexual content, 96% of which were detected through artificial intelligence and Facebook's own technology. Likewise, 3.5 million violent publications were deleted; of these, 86% were detected by Facebook's technology rather than by human reviewers. These are pretty interesting numbers, but the technology of Mark's network is not perfect.
As Guy Rosen admits in the report, "For hate speech, our technology still does not work that well, so it must be reviewed by our review teams. We removed 2.5 million posts related to hate speech in the first quarter of 2018, 38% of which were flagged by our technology." He also notes that "this technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not frequently reported."
"In many areas, whether spam, adult content or fake accounts, we are up against sophisticated adversaries who continually change tactics to circumvent our controls," Rosen continues. "That means we must continually build and adapt our efforts. That's why we're investing heavily in more people and better technology to make Facebook safer for everyone," he concludes.
The report, titled "Facebook publishes compliance numbers for the first time," is ultimately a strategy on the company's part to demonstrate that its technology works. When Zuckerberg appeared before the Senate, all his answers revolved around the idea that they were working on improving their AI to avoid future problems. Publishing these figures seems like a way to downplay a fire that is still far from extinguished.