
Introducing Safer Predict: Using the Power of AI to Detect Child Sexual Abuse and Exploitation Online

July 19, 2024

5 Minute Read

Our ability to upload content to our favorite apps or chat with others online is part of how we socialize with our digital communities. Yet these same features are also exploited by bad actors to harm children, such as by soliciting, creating, and sharing child sexual abuse material (CSAM) or even sextorting children online.

Platforms face an uphill battle combating this misuse of their features. But it’s critical they do so: not only does online child sexual exploitation put children and platform users at risk, but hosting CSAM is also illegal.

That’s why, in 2019, Thorn launched our solution Safer to help content-hosting platforms end the viral spread of CSAM and, with it, revictimization. Today, we’re excited to announce the next step in that effort: a significant expansion of our capabilities with Safer Predict.

The power of AI to defend children 

Our core Safer solution, now called Safer Match, uses a technology called hashing-and-matching to detect known CSAM: material that has already been reported but continues to circulate online. To date, Safer has matched over 3 million CSAM files, helping platforms stop CSAM’s viral spread and the revictimization it causes.
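At a high level, hashing-and-matching converts each file into a digital fingerprint and checks that fingerprint against a list built from previously reported CSAM. Here is a minimal sketch of the idea; the hash list, function name, and choice of SHA-256 are illustrative assumptions, not Safer Match’s actual interface, and real deployments typically also use perceptual hashes so altered copies of a file still match:

```python
import hashlib

# Illustrative fingerprints of previously reported files; in practice a
# matching service maintains and queries this list for the platform.
KNOWN_CSAM_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def matches_known_csam(file_bytes: bytes) -> bool:
    """Fingerprint an uploaded file and look it up in the known-hash list.

    A cryptographic hash (SHA-256 here) only matches byte-identical copies;
    perceptual hashes extend coverage to re-encoded or lightly edited
    copies of the same image.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_CSAM_HASHES
```

Because a match points back to a file that has already been verified and reported, platforms can act on it with high confidence.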

With Safer Predict, our efforts go even further. This AI-driven solution detects new and unreported CSAM images and videos, and to date has classified nearly 2 million files as potential CSAM.

Now, we’re proud to share that Safer Predict can also identify potentially harmful conversations that include, or could lead to, child sexual exploitation. By leveraging state-of-the-art machine learning models, Safer Predict empowers platforms to do the following (a short triage sketch follows the list):

  • Cast a wider net for CSAM and child sexual exploitation detection
  • Identify text-based harms, including discussions of sextortion, self-generated CSAM, and potential offline exploitation
  • Scale detection capabilities efficiently
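Here is that minimal triage sketch: the kind of flow a platform might build on top of a per-file likelihood score. The function name, score argument, and threshold are illustrative assumptions, not Safer Predict’s actual API.

```python
REVIEW_THRESHOLD = 0.80  # illustrative cutoff; platforms tune this for their risk tolerance

def triage_upload(upload_id: str, csam_likelihood: float) -> str:
    """Route a new upload based on the model's predicted CSAM likelihood."""
    if csam_likelihood >= REVIEW_THRESHOLD:
        # High-likelihood files go to trained human moderators, who can
        # confirm the content and report to NCMEC where required.
        return f"{upload_id}: escalate for human review"
    return f"{upload_id}: no action"

print(triage_upload("img-001", 0.93))  # img-001: escalate for human review
```

A higher threshold trades recall for reviewer workload; where that line sits is a platform-level policy decision, not something the model dictates.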

To understand the impact of Safer Predict’s new capabilities, particularly around text, it helps to grasp the scope of child sexual exploitation online.

The growing challenge of online child exploitation

Child sexual abuse and exploitation is rising online at alarming rates. In January 2024, U.S. Senators demanded that tech executives take action against the devastating rise of child sexual abuse happening on their platforms.

The reality is, bad actors are misusing social and content-hosting platforms to exploit children and to spread CSAM faster than ever. They also take advantage of new technologies like AI to rapidly scale their malicious tactics.

Hashing-and-matching is critical to tackling this issue, but the technology doesn’t detect text-based exploitation. Nor does it detect newly generated CSAM, which may depict a child in active abuse. Both text-based harms and novel CSAM represent high-stakes situations: detecting them gives platforms a greater opportunity to intervene when an active abuse situation may be occurring and to report it to the National Center for Missing & Exploited Children (NCMEC).

Safer Predict’s predictive AI technologies allow platforms to detect these harms as they occur. Doing so can help uncover information critical to identifying children in active abuse situations and to law enforcement’s ability to remove child victims from harm.

AI models built on trusted data

While it seems AI is everywhere these days, not all models are created equal. When it comes to AI, validated training data matters, especially for detecting CSAM and child sexual exploitation.

Thorn’s machine learning image and video classification models are trained on confirmed CSAM provided by our trusted partners, including NCMEC. In contrast, broader moderation tools designed for many kinds of harm may simply combine age-recognition data with adult pornography, which differs drastically from CSAM.

Safer Predict’s text detection models are trained on messages:

  • Discussing sextortion
  • Asking for, transacting in, and sharing CSAM
  • Asking for a minor’s self-generated sexual content, as well as minors discussing their own self-generated content
  • Discussing access to and sexually harming children in an offline setting

Because Safer Predict’s models are trained on confirmed CSAM and real conversations, they can predict the likelihood that an image or video contains CSAM, or that a message contains text related to child sexual exploitation.
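As a rough illustration, a per-message prediction might look something like the sketch below. The label names mirror the training categories listed above, while the data structure and review threshold are illustrative assumptions rather than Safer Predict’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class TextPrediction:
    """Per-message likelihoods for the harm categories described above."""
    message_id: str
    scores: dict[str, float]  # label -> likelihood in [0, 1]

    def flagged(self, threshold: float = 0.90) -> list[str]:
        """Return the labels whose likelihood crosses the review threshold."""
        return [label for label, p in self.scores.items() if p >= threshold]

# Example: a message scored high for sextortion gets escalated.
pred = TextPrediction(
    message_id="msg-123",
    scores={"sextortion": 0.94, "csam_exchange": 0.08,
            "self_generated_content": 0.11, "offline_exploitation": 0.03},
)
print(pred.flagged())  # ['sextortion']
```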

Together, Safer Match and Safer Predict provide platforms with comprehensive detection of CSAM and child sexual exploitation, which is critical to safeguarding children — both online and off.

Partnering for a safer internet

At Thorn, we’re proud to have the world’s largest team of engineers and data scientists dedicated exclusively to building technology to combat child sexual abuse and exploitation online. As new threats emerge, our team moves quickly to respond, as with Safer Predict’s new text-based detection, built to combat the rise in sextortion.

The ability to detect potentially harmful messages and conversations between bad actors and unsuspecting children is a big leap in our mission to defend children and create a safer digital world — where every child is free to simply be a kid.


