Facebook on Wednesday shared some of the advancements in AI that are contributing to the company's colossal task of enforcing community standards across its platforms. New techniques and systems that Facebook has quickly moved from research into production, such as its Reinforcement Integrity Optimizer (RIO), are helping to drive down the amount of hate speech and other unwanted content that Facebook users see, the company said.
"AI is an incredibly fast-moving field, and many of the most important parts of our AI systems today are based on techniques like self supervision, that seemed like a far off future just years ago," Facebook CTO Mike Schroepfer said to reporters Wednesday.
In the Community Standards Enforcement Report[1] published Wednesday, Facebook said that the prevalence of hate speech declined in Q2 2021 for the third quarter in a row. The company attributed the decline to improvements in proactive hate speech detection and to ranking changes in the Facebook News Feed.
In Q2, there were five views of hate speech for every 10,000 views of content, according to the report. That's down from five to six views per 10,000 in Q1.
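For readers unfamiliar with the metric, prevalence is simply a rate: views of violating content per 10,000 total content views, estimated by sampling. The quick calculation below is only illustrative; the function name is ours, and Facebook's actual sampling methodology is more involved.

```python
# Illustrative only: prevalence as violating views per 10,000 total views.
# This hypothetical helper is not Facebook's methodology.
def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Views of violating content per 10,000 total content views."""
    return violating_views / total_views * 10_000

# 5 hate speech views in 10,000 sampled views -> prevalence of 5.0 (the Q2 figure)
print(prevalence_per_10k(5, 10_000))    # 5.0
# 55 in 100,000 -> 5.5, inside the "five to six" Q1 band
print(prevalence_per_10k(55, 100_000))  # 5.5
```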
Meanwhile, the company removed 31.5 million pieces of hate speech content from Facebook in Q2, up from 25.2 million in Q1, and 9.8 million pieces from Instagram, up from 6.3 million in Q1.
Systems like RIO, introduced late last year[2], help the company proactively detect hate speech.
The classic approach to training AI uses a fixed data set to train a model that's then deployed to make decisions about new pieces of content. RIO, by contrast, guides an AI model to learn directly from millions of current pieces of content, constantly evaluating how well it's doing its job and adapting to make Facebook's platforms safer over time.
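Facebook hasn't published RIO's internals here, so the following is only a minimal sketch of the pattern the paragraph above describes: a model updated continuously from a stream of current content, with downstream feedback standing in for RIO's enforcement metrics. Every name in it is hypothetical, the feedback is synthetic, and plain online logistic regression substitutes for the actual reinforcement-learning machinery.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for human-reviewer decisions; NOT Facebook's data.
TRUE_W = [1.5, -2.0, 0.5, 1.0]

def reviewer_feedback(features):
    """Hypothetical downstream signal: did reviewers judge this as violating?"""
    return 1 if sum(w * x for w, x in zip(TRUE_W, features)) > 0 else 0

def sample_recent_content(n_features=4):
    """Stand-in for drawing a current piece of content from production."""
    return [random.uniform(-1, 1) for _ in range(n_features)]

class OnlineClassifier:
    """Toy classifier that is updated continuously rather than trained once."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict_proba(self, features):
        z = sum(w * x for w, x in zip(self.w, features))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, label):
        # One online logistic-regression step on the newest example,
        # so the model keeps adapting as the content distribution shifts.
        error = label - self.predict_proba(features)
        self.w = [w + self.lr * error * x for w, x in zip(self.w, features)]

model = OnlineClassifier(n_features=4)
for step in range(2_000):
    content = sample_recent_content()
    label = reviewer_feedback(content)  # live feedback, not a frozen data set
    model.update(content, label)

print([round(w, 2) for w in model.w])  # drifts toward the reviewers' decision rule
```

The contrast with the classic approach is the loop itself: there is no frozen training set, so the classifier keeps tracking whatever the feedback signal currently rewards.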
"This