
Posts by Safer, Built by Thorn

Post image

Each year, Safer analyzes detection data to better understand how online harm happens. In 2025, we saw:

⚠️ 1.3 million lines of text classified as potential child exploitation
🛑 5.3 million files identified as known or potential CSAM

Read the full Impact Report here: https://teamthorn.co/4tak7fI

2 days ago
Post image

In our Humans in the Loop webinar, we discussed how AI deployment doesn’t redefine what “good” looks like in trust & safety, and it doesn’t change the principles. What changes is scale. As platforms grow, AI becomes a tool to support human judgment. Watch the webinar here: https://teamthorn.co/3WXqFQZ

3 days ago
Post image

Amid recent headlines about AI-generated CSAM and reporting practices, an important point deserves attention: Model risk starts at training data. In a recent case, known CSAM was identified within a dataset being used for AI model training. Stay vigilant.

5 days ago
Post image

On April 3, platforms in Europe will lose the legal basis that has allowed trust and safety teams to detect CSAM. And when detection stops, harm doesn’t. For details on what’s happening, read our full explainer: safer.io/resources/new-child-safe...

1 week ago
Post image

In 2025, Safer continued to evolve alongside the online threat landscape. Our engineers added:

Grooming detection
Spanish language support
52 million new hashes to the database

Explore the full 2025 Impact Report for detailed product advancements and detection scale: https://teamthorn.co/4tak7fI

1 week ago
Video

Image-based abuse has expanded to include video, AI-generated content, & increasingly sophisticated grooming patterns. Bad actors adapt quickly to new features, new formats, & new technologies.

Safer was built to help platforms stay ahead of those evolving threats.

https://safer.io/#cta-contact

2 weeks ago
Video

If your platform hosts any kind of user-generated content, your trust and safety teams are under immense pressure. Abuse tactics are evolving, and expectations for proactive detection have never been higher.

Safer is built to help platforms meet this moment.

https://safer.io/#cta-contact

3 weeks ago
Post image

In a recent OpenAI Forum conversation, Julie Cordua highlighted a reality trust & safety teams are navigating every day: New abuse vectors, from AI-generated content to rapidly evolving grooming and sextortion tactics, require detection systems that can adapt as quickly as platforms innovate.

4 weeks ago
Post image

The scale of online child sexual abuse was staggering before generative AI tools became accessible. Now, 1 in 8 young people know someone targeted by deepfake nudes. For trust & safety teams, that means greater pressure on detection systems and increased review burden.

https://teamthorn.co/3N6IRq6

1 month ago
Post image

The scale of online grooming is accelerating. Reports of online enticement to NCMEC nearly tripled from 2023 to 2024. If your platform supports chat, DMs, or community messaging, learn more about Safer Predict’s AI-driven CSAM and CSE detection.

https://teamthorn.co/3SgXLJL

1 month ago

Child Safety in AI: Open Problems

AI-generated CSAM is rapidly increasing (>400% since 2024 [IWF]). In collaboration with Thorn, we have identified 15 open research problems across AI development, deployment & maintenance to help address child safety risks.

🔗 aichildsafety.github.io

1 month ago
Post image

We partnered with the UK AI Security Institute to publish a safety protocol grounded in the Safety by Design approach. Safety cannot be bolted on after launch. It must be embedded into architecture, policy, and workflows from the start. Download the protocol today.

https://teamthorn.co/3OGbmvf

1 month ago
Video

Did you catch the latest edition of Safe Space Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.

Here's the latest from Cassie Coccaro, Communications Lead at Thorn.

https://teamthorn.co/4l3w6cw

1 month ago
Post image

There’s a lag between when novel CSAM is identified and when its hash is added to broader, widely used databases. SaferList helps close that window.

Each Safer customer can choose to share their SaferList across the entire Safer community to strengthen short-term protection for users.
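As a rough illustration of how a shared hash list closes that detection window, here is a minimal Python sketch. The hash set, function names, and use of SHA-256 are illustrative assumptions, not Safer’s implementation; production systems also use perceptual hashing, which this exact-match sketch does not cover.

```python
import hashlib

# Hypothetical shared hash list. In practice this would be a vetted,
# community-contributed database of hashes of known content.
known_hashes = {
    hashlib.sha256(b"previously identified file").hexdigest(),
}

def file_digest(data: bytes) -> str:
    """Cryptographic digest of the raw file bytes (exact-match only)."""
    return hashlib.sha256(data).hexdigest()

def matches_known(data: bytes) -> bool:
    """True if this upload exactly matches an entry on the shared list."""
    return file_digest(data) in known_hashes

print(matches_known(b"previously identified file"))  # True
print(matches_known(b"brand-new upload"))            # False
```

Because membership checks against a hash set are constant-time, a newly shared hash protects every participating platform as soon as it propagates.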

1 month ago
Post image

Detection is only half the battle.
Routing is where trust & safety teams win back time.

Classifiers identify potentially harmful content, but the value comes from what happens next. When prediction scores are paired with intelligent queueing, teams can move faster from identification to intervention.
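One way prediction scores can drive queueing, sketched in Python with a priority queue so the highest-risk items surface first. The `ReviewItem` type and field names are hypothetical, not Safer’s API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float                        # negated score: highest risk pops first
    content_id: str = field(compare=False)

queue: list[ReviewItem] = []

def enqueue(content_id: str, score: float) -> None:
    """Push an item; higher classifier scores are reviewed sooner."""
    heapq.heappush(queue, ReviewItem(-score, content_id))

# Toy classifier scores for three pieces of content.
for cid, score in [("a", 0.42), ("b", 0.97), ("c", 0.73)]:
    enqueue(cid, score)

order = [heapq.heappop(queue).content_id for _ in range(len(queue))]
print(order)  # ['b', 'c', 'a']
```

The routing policy lives entirely in how items are scored and ordered, so thresholds and tiers can change without touching detection itself.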

1 month ago
Video

Work in trust & safety and child protection is high-stakes by nature. When the risks are real, it’s easy to stay heads-down and push through.

Don’t forget to show yourself some love, because protecting children over the long term requires protecting the people doing the work.

1 month ago
Post image

On Safer Internet Day, let’s improve online experiences for everyone by making child safety a priority on every platform.

Here are three places to start:
1️⃣ Build in safety by design
2️⃣ Proactively detect
3️⃣ Collaborate with accountability

A safer internet is built collectively, through shared responsibility.

2 months ago
Post image

TrustCon is the only global conference dedicated to trust and safety professionals. We’re looking at data science, research and engineering, product design, and more. You name it, there’s going to be an expert at TrustCon ready to talk about it.

What topics do you want to learn about this year?

2 months ago
Post image

Proactive safety technology relies on two complementary approaches. Hashing and matching power the first layer. Modern ML classifiers are the second layer. Together, these tools create a dual safety system that helps platforms move from reactive enforcement to proactive protection.
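A minimal sketch of such a dual-layer flow, assuming a hypothetical hash list and a toy classifier; neither reflects Safer’s actual implementation.

```python
import hashlib

# Layer 1: hypothetical list of hashes of known content.
KNOWN = {hashlib.sha256(b"known-bad-example").hexdigest()}

def moderate(data: bytes, classify, threshold: float = 0.8) -> str:
    """Layer 1: exact hash match against the known list.
    Layer 2: ML classifier score for novel content."""
    if hashlib.sha256(data).hexdigest() in KNOWN:
        return "match"                      # known content: route straight to action
    score = classify(data)                  # novel content: score it
    return "review" if score >= threshold else "allow"

# Toy classifier stand-ins for illustration only.
print(moderate(b"known-bad-example", lambda d: 0.0))  # match
print(moderate(b"novel content",     lambda d: 0.9))  # review
print(moderate(b"benign content",    lambda d: 0.1))  # allow
```

Hash matching catches known content cheaply and deterministically; the classifier extends coverage to content no one has seen before.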

2 months ago
Post image

Child sexual abuse and exploitation increasingly happens through everyday platform functionality: image uploads, DMs, file sharing, comments, and chat. When safety isn’t designed into those systems from the start, platforms are forced to respond after harm has already occurred.

2 months ago
Video

Did you catch the latest edition of Safe Space Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.

Here's the latest from Cassie Coccaro, Communications Lead at Thorn.

https://teamthorn.co/4c4Ob7c

2 months ago
Post image

One of the most consequential risk vectors in AI development is the training data itself. A recent investigation reported by @404media.co highlights this risk:

⚠️The widely used NudeNet dataset included over 120 images of known or suspected CSAM.⚠️

Read the full story:
https://teamthorn.co/4sDlt3l

2 months ago
Post image

We asked 2025 Safe Space podcast guest David Polgar a couple of questions to reflect on the new year. His answers are a testament to the great work trust & safety professionals are doing in the space, and the path they are forging for future professionals.

Listen here:
https://teamthorn.co/3Lj2BWt

2 months ago
Post image

Birthdays are about taking stock of how far we’ve come and what still needs building.

To every trust & safety team using Safer: thank you. Your commitment to proactive detection and responsible innovation is driving tangible safety outcomes for millions of young people.

3 months ago

Business Case Template:
docs.google.com/document/d/1qLOf6MODCpLx...

Tooling Scorecard Template:
docs.google.com/spreadsheets/d/1SiWE0saD...

3 months ago
Post image

We’re sharing two free templates to help trust & safety teams accelerate their 2026 planning:

🔧 A Trust & Safety Business Case Template — communicate risk, resourcing needs, and ROI to leadership

🧰 A Tooling Scorecard Template — evaluate detection, triage, and reporting solutions with clarity

3 months ago
Video

Did you catch the latest edition of Digital Defenders Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.

Here's the latest from Cassie Coccaro, Communications Lead at Thorn.

https://teamthorn.co/4ji75JI

3 months ago
Video

Seema embodies the spirit of the Safer team, leading with kindness, compassion, and a deep desire to do good. She’s helping build technology that creates digital spaces where safety comes first. The online world kids are growing up in needs people like Seema to help power trust and safety.

3 months ago
Video

Emily and her team are proving what’s possible when technology is used for good. From identifying previously unreported CSAM to detecting online grooming early, every advancement helps platforms deliver safer experiences and protect the most vulnerable users.

3 months ago
Video

At its core, T&S work is grounded in care, compassion, and a global commitment to user safety.

John Buckley, Director and Head of Child Rights and Safety at The LEGO Group, joined Safe Space to discuss what it takes to advocate for children inside some of the world’s largest tech companies.

3 months ago