By Rachel Kaser
Facebook today published its annual transparency report, and for the first time included the number of items removed in each category that violated its content standards. While the company seems to be very proficient at removing nudity and terrorist propaganda, it’s lagging behind when it comes to hate speech.
Of the six categories mentioned in the report, hate speech had the lowest share of posts caught by Facebook’s algorithms before users reported them:
For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38 percent of which was flagged by our technology.
Compare that percentage with the rates of posts proactively purged for violent content (86 percent), nudity and sexual content (96 percent), and spam (nearly 100 percent).
But that’s not to say the relatively low number is due to a defect on Facebook’s part. The problem with trying to proactively scour Facebook for hate speech is that the company’s AI can only understand so much at the moment. How do you get an AI to understand the nuances of offensive and derogatory language when many humans struggle with the concept?
Guy Rosen, Facebook’s Vice President of Product Management, pointed out the difficulties of determining context:
It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.
If a Facebook user makes a post describing their experience of being called a slur in public, using the word itself for greater impact, does their post constitute hate speech? Even if we were all to agree that it doesn’t, how does one get an AI to understand the nuance? And what about words that are offensive in one language but not another? Or homographs? Or, or, or — the caveats go on and on.
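To see why this is hard, consider a deliberately naive sketch (this is not Facebook’s system — just a hypothetical keyword filter, with a placeholder term standing in for an actual slur). It flags any post containing a listed word, and so can’t tell a hateful attack apart from a victim describing one:

```python
# Toy keyword-based filter — a hypothetical illustration, NOT Facebook's
# actual moderation system. "slur" is a stand-in for a real offensive term.
OFFENSIVE_TERMS = {"slur"}  # a real list would be far larger

def keyword_flag(post: str) -> bool:
    """Flag any post containing a listed term, with no regard for context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not OFFENSIVE_TERMS.isdisjoint(words)

hateful = "You are a slur!"                       # genuine hate speech
awareness = "A stranger called me a slur today."  # victim raising awareness

# Both posts trip the filter identically — the keyword approach cannot
# distinguish pushing hate from describing it, which is exactly the
# context problem Rosen describes.
print(keyword_flag(hateful), keyword_flag(awareness))  # True True
```

Both posts are flagged, even though only one is hate speech — which is why flagged content still goes to human reviewers rather than being removed automatically.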
Asked to parse that kind of subtlety, it shouldn’t be a surprise that Facebook’s AI has so far managed a success rate of only 38 percent.
Facebook is attempting to keep false positives to a minimum by having each case reviewed by moderators. The company addressed the issue during its F8 conference:
Understanding the context of speech often requires human eyes – is something hateful, or is it being shared to condemn hate speech or raise awareness about it? … Our teams then review the content so what’s OK stays up, for example someone describing hate they encountered to raise awareness of the problem.
Mark Zuckerberg waxed poetic during his Congressional testimony about Facebook’s plans to use AI to wipe hate speech off its platform:
I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate.
Given that estimate, it’d be unfair to expect the technology to be as accurate now as Zuckerberg hopes it will eventually be. We’ll have to check Facebook’s transparency reports over the next couple of years to see how the company is progressing.
Rachel Kaser is a writer and former game critic from Central Texas. She enjoys gaming, writing mystery stories, streaming on Twitch, and horseback riding.