
Facebook’s new AI tool removed 8.7 million images of child nudity

Facebook recently announced that its moderators, aided by previously undisclosed AI and machine-learning technology, removed 8.7 million pieces of content that violated the company's rules against child exploitation in the last quarter.

The machine-learning technology, developed and deployed over the past year, flags images that contain both nudity and a child, and it removed 99 percent of those posts before anyone reported them, Antigone Davis, Facebook’s global head of safety, said in a blog post.

The new technology, which removes posts that show minors in a sexualized context, can also report photos and accounts to the National Center for Missing and Exploited Children (NCMEC) when necessary. While Facebook’s existing photo-matching technology removes copies of content that has already been identified, the new tools are meant to stop previously unidentified content from spreading across the platform.
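To make that distinction concrete, the sketch below contrasts the two approaches in Python. It is purely illustrative: Facebook has not disclosed its implementation, and every name, threshold, and score here is hypothetical.

```python
# Illustrative sketch only; Facebook's actual systems are not public.
# Approach 1: hash matching catches copies of already-known images.
# Approach 2: a classifier scores images that have never been seen before.

import hashlib

# Hypothetical database of hashes of previously identified images.
KNOWN_HASHES = {"3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}


def matches_known_content(image_bytes: bytes) -> bool:
    """Approach 1: flag only exact copies of already-known images.

    Production systems use perceptual hashes (e.g. PhotoDNA) that survive
    resizing and re-encoding; a cryptographic hash is used here only to
    keep the sketch self-contained.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES


def hypothetical_model_scores(image_bytes: bytes) -> tuple[float, float]:
    # Placeholder so the sketch runs; a real system would run a trained
    # image classifier here and return actual probabilities.
    return (0.0, 0.0)


def classifier_flags_content(image_bytes: bytes) -> bool:
    """Approach 2: score a previously unseen image.

    The article describes a model that flags images containing *both*
    nudity and a child, so this stand-in requires both scores to be high.
    The 0.9 threshold is an arbitrary assumption.
    """
    p_nudity, p_child = hypothetical_model_scores(image_bytes)
    return p_nudity > 0.9 and p_child > 0.9


def moderate(image_bytes: bytes) -> str:
    # Known content is removed outright; newly flagged content can be
    # queued for review and, if necessary, reported to NCMEC.
    if matches_known_content(image_bytes):
        return "remove"
    if classifier_flags_content(image_bytes):
        return "remove_and_review"
    return "allow"


if __name__ == "__main__":
    print(moderate(b"example image bytes"))  # -> "allow" with placeholder scores
```

The key design point the article implies: hash matching is precise but can only catch content already in a database, while the classifier can act on new uploads, at the cost of occasional false positives of the kind described below.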

The technology isn’t perfect, however, and many users have complained that Facebook’s new automated systems wrongly block their posts. Davis addressed the issue, saying, "We’d rather err on the side of caution with children.” Facebook has long banned even well-intentioned family photos of lightly clothed children, she said, out of concern about how others might misuse such images.

Under pressure from lawmakers and regulators, Facebook has vowed to speed up the removal of extremist and illicit material. The company also intends to apply the same technology to its Instagram app.
