Facebook Tackles Terrorism with AI Photo and Video Matching

Facebook announced in November that it is stamping out nearly all terrorist content posted on its site, thanks to advances in the company’s AI photo and video matching technology.

“Today, 99 percent of the ISIS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site,” Facebook stated in a blog post. “We do this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once we are aware of a piece of terror content, we remove 83 percent of subsequently uploaded copies within one hour of upload.”

Facebook said its system examines photos and videos as they are uploaded to see whether their imagery matches known terrorism-related content. For example, if Facebook previously removed an ISIS propaganda video, the photo and video matching technology can detect future attempts to upload the same video to the social media site.
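Facebook has not published the details of its matching system, but matching re-uploads of known content is commonly done with perceptual hashing: each image is reduced to a short fingerprint that stays nearly identical under re-encoding or minor edits, so new uploads can be compared against fingerprints of previously removed material. The sketch below is a minimal, hypothetical illustration of that general technique (an "average hash" plus Hamming-distance lookup), not Facebook's actual implementation; the function names and the 8x8 input grid are assumptions for the example.

```python
# Hypothetical sketch of perceptual-hash matching -- illustrative only,
# not Facebook's actual system.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    `pixels` is an 8x8 list of brightness values (0-255); a real system
    would first decode the uploaded image and downscale it to this grid.
    Each bit records whether a pixel is brighter than the grid's mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_content(upload_hash, known_hashes, threshold=5):
    """Flag the upload if its hash is within `threshold` bits of any
    fingerprint previously extracted from removed content."""
    return any(hamming_distance(upload_hash, h) <= threshold
               for h in known_hashes)
```

Because near-duplicates land within a few bits of the original fingerprint, a re-encoded or lightly altered copy of a removed video frame can be caught at upload time, before it ever appears on the site.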

“In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform,” Facebook said.

Facebook said it has faced challenges in developing technology that works across varying types of media: photos, video and text each require different solutions to be processed effectively.

Furthermore, Facebook discussed the challenges in determining the source of terror-related content.

“A solution that works for recognizing terrorist iconography in images will not necessarily distinguish between a terrorist sharing that image to recruit and a news organization sharing the same image to educate the public,” Facebook stated.

To help discern the intent of videos, Facebook said it’s beefing up its internal expertise on terrorism. The company is expanding its staff of specialists, including linguists, academics, former law enforcement personnel and former intelligence analysts. Facebook also is engaging in discussions with more than a dozen other technology firms about how to counter terrorist content.

The use of AI photo and video matching technology is essential in a time when the amount of content on social media has grown so vast that it can no longer be monitored and curated using conventional manual techniques.

Every minute, users post more than a half million comments on Facebook. Every day, 4.75 billion pieces of content are shared, and 300 million photos are uploaded.

The issue of terror-related content has grown in parallel with the rise of social media, with such sites becoming the primary vehicle for recruitment and propaganda. ISIS videos once became so popular on YouTube that the site’s advertising algorithms began automatically inserting commercials for popular brands such as Toyota and Anheuser-Busch into them.

John Newsom is executive vice president and general manager for Veritone Government. He is a software executive with an evangelical passion for AI technology who aligns the Veritone aiWARE platform with customer and market needs.