Posts by Center for Countering Digital Hate

Preview
Stop AI-generated harms: From giving advice to potential school shooters to generating sexualized deepfakes and self-harm content, unsafe AI tools are putting people at risk. Join CCDH in calling on lawmakers to demand AI companies put safety first.

A teen radicalized online used DeepSeek to plan his mother's murder. And this isn't an isolated incident.

How many more lives must be lost before AI companies and lawmakers take AI regulation seriously?

It's clear. We need real safeguards now.

Agree? Add your name 👇

3 hours ago 5 2 0 1
Preview
AI chatbots assist ‘teens’ planning school violence: what parents need to know — Center for Countering Digital Hate | CCDH: 8 in 10 leading AI chatbots, like ChatGPT and Google’s Gemini, regularly gave advice to teen users planning a school shooting and other violent attacks.

A 13-year-old asks an AI chatbot: “How do I make them pay?” Instead of stopping him, it helps.

This scenario isn’t just hypothetical. AI-assisted violence is real and is affecting kids.

That's why we all need to play our role in pushing for safety-by-design standards from AI companies ⬇️

20 hours ago 13 6 0 0
Video

We found that 8 in 10 popular AI chatbots regularly assisted ‘teens’ in planning violent attacks.

That's why CCDH worked with Baroness Kidron to amend the UK Crime & Policing Bill, making it illegal for AI chatbots to assist with violence. Lawmakers need to follow her lead.

2 days ago 10 4 0 0
Headline from the Daily Mail: AI bot told teen to use a hammer to kill his mother

Young men are getting radicalized online & AI chatbots are helping them commit real violence. A teen spent hours consuming misogynistic content, then used DeepSeek to help plan his mother's murder.

This tragic case exemplifies what our research has warned about.

2 days ago 13 7 1 0
Preview
Stand for Big Tech Regulation for a Safer Internet: Social media and AI platforms monetize harms, abuse & extremism, at the expense of people's safety. It's time lawmakers hold platforms accountable & make our digital spaces safer. Join CCDH and call for social media and AI regulation.

An LA jury just ruled social media is addictive by design — meaning harm isn’t accidental, it’s systemic.

But one verdict isn’t enough.

Real change requires lawmakers to act. We need regulation that ensures platforms put safety-by-design first. Join us👇

3 days ago 11 8 0 0
Post image

The verdict is in: Meta and YouTube are addictive by design.

A landmark bellwether trial found they drove users to addiction, even at the cost of actively harming children.

This sets a precedent, but accountability isn’t enough. We need regulation to make platforms safe by design.

4 days ago 21 11 0 0
Post image

The New Mexico jury has decided: Meta willfully failed to protect kids from child predators on their platforms.

This is the first verdict of the landmark social media trials taking place this year.

This case sets a legal precedent, but we still need regulation to stop harms before they happen.

4 days ago 19 5 2 3
Preview
AI chatbots assist ‘teens’ planning school violence: what parents need to know — Center for Countering Digital Hate | CCDH: 8 in 10 leading AI chatbots, like ChatGPT and Google’s Gemini, regularly gave advice to teen users planning a school shooting and other violent attacks.

In Feb, Canada’s worst school shooting in 40 years occurred & the suspect reportedly used ChatGPT for planning.

In Finland, a 16-year-old used the chatbot to plan a stabbing in a high school.

AI-assisted violence is real & it's in our schools. So, what can we do? Learn more in our new blog 👇

4 days ago 11 9 0 3
Preview
Time to regulate social media and AI companies: Social media and AI platforms monetize harms, abuse & extremism, at the expense of people’s safety. It’s time lawmakers hold platforms accountable & make our digital spaces safer. Join CCDH and call ...

Totally. It’s time for lawmakers to demand better from Big Tech and create and enforce legislation that holds them accountable for the harms their platforms cause.

Be part of the movement holding Big Tech accountable:
act.counterhate.com/page/138772/...

5 days ago 1 1 0 0
Preview
Most chatbots will help plan school shootings: Study. “I see you're trying to kill children. Would you like some help with that?”

Leading AI chatbots are making it easier for ‘teen’ accounts to plan violence — even though effective safeguards already exist.

But Anthropic’s Claude shows what’s possible: it recognized and refused escalating violent requests in 68% of cases.

Read more in @theregister.com 👇

6 days ago 14 7 0 1
Post image

The UK House of Lords has voted to criminalize AI chatbots that promote or assist terrorism.

The move follows CCDH's report that found 8 in 10 mainstream AI chatbots were regularly willing to help plan violent attacks.

Learn more 👇

6 days ago 11 3 0 2
Post image

Three teenage girls are suing xAI, alleging Grok was used to generate nonconsensual sexualized images of them as minors.

CCDH research on Grok has shown how easily these tools can produce abusive content at scale.

This lawsuit shows the harmful consequences of unregulated AI companies.

1 week ago 37 11 2 2
Preview
Women and girls are taking Grok to court over sexualized AI deepfakes A new lawsuit filed Monday joins two others centered around nonconsensual explicit images allegedly made by the AI chatbot.

Read here:
19thnews.org/2026/03/wome...

6 days ago 8 3 0 0
Preview
AI companies must stop putting people at risk: From giving advice to potential school shooters to generating sexualized deepfakes and self-harm content, unsafe AI tools are putting people at risk. Join CCDH in calling lawmakers to demand AI compa...

8 in 10 leading AI chatbots typically assisted with planning violent attacks in our research with CNN.

Safeguards exist, but companies are choosing not to use them.

We’re calling on lawmakers to demand safety-by-design & real guardrails from AI companies before more harm is done.

Join us ⬇️

1 week ago 11 2 0 0
Video

A senator just compared social media harm to Chick-fil-A closing on Sundays. Restaurants are actually one of the most regulated industries in the US.

We need Big Tech regulation to protect kids online.

1 week ago 17 4 3 1
Preview
How the EU Can Stop AI Chatbots from Aiding Violent Attacks The Center for Countering Digital Hate found that 8 out of 10 of the most popular AI chatbots would help a teenager plan a violent attack.

A 16-year-old in Finland used ChatGPT to plan a stabbing attack on his classmates.

In CCDH testing, we found that 8 in 10 leading chatbots would regularly do the same.

The EU has tools to fix this. CCDH's Laura Kaun explains why they need to use them now 👇

1 week ago 4 4 0 0
Video

Leading AI chatbots are making it easier for ‘teen’ accounts to plan violence, according to our latest report with @cnn.com.

This isn't hypothetical. Canada’s worst school shooting in 40 years happened in Feb & the suspect reportedly used ChatGPT to help plan the attack.

More on our findings 👇

1 week ago 4 1 0 0
Video

Big news. UK lawmakers just voted to make it illegal for AI chatbots to assist terrorist offences – based on CCDH research.

Our testing found 8 in 10 chatbots regularly helped plan violent attacks. That research just moved Parliament — thanks to Baroness Kidron.

Watch👇

1 week ago 16 5 0 0
Post image

BBC's latest documentary confirms what our research has shown for years: harmful content doesn’t spread randomly, it's a Big Tech decision.

Outrage drives engagement, which is profitable.

Join CCDH for Big Tech accountability 👇
act.counterhate.com/page/138772/petition/1

1 week ago 14 6 0 1
Preview
AI companies must stop putting people at risk: From giving advice to potential school shooters to generating sexualized deepfakes and self-harm content, unsafe AI tools are putting people at risk. Join CCDH in calling lawmakers to demand AI compa...

Join us in the fight to hold AI companies accountable: act.counterhate.com/page/152823/...

1 week ago 3 1 0 1
Post image

AI chatbots are making it easier for ‘teen’ accounts to plan violence like school shootings — even though safeguards exist.

Claude shows what’s possible: it recognized & refused escalating violent requests in 68% of cases.

The tech exists to prevent harm, so why aren’t more companies using it?

1 week ago 7 5 1 1
Preview
Killer Apps — Center for Countering Digital Hate | CCDH: 8 in 10 AI chatbots regularly assisted users in planning violent attacks including school shootings, bombings, and assassinations, a new CCDH report found.

8 in 10 leading chatbots assisted 'teen' users with planning violent attacks in our testing. That research is now in front of the UK Parliament.

Peers should adopt the amendment today.

Read our findings here: counterhate.com/research/kil...

1 week ago 5 6 0 0
Post image

🆕 AI chatbots are a threat to national security.

An amendment to the UK Crime & Policing Bill based on CCDH research would address the national security risk, making it illegal for a chatbot to assist terrorist offences.

1 week ago 3 1 1 0
Video

Algorithms shape what we see and reward outrage over truth.

Bad actors exploit this system to spread harmful content at scale, fueling dangerous, even deadly, real-world consequences.

We need transparency from social media platforms to ensure accountability.

Watch to learn more 👇

1 week ago 13 8 0 0
Preview
Killer Apps — Center for Countering Digital Hate | CCDH: 8 in 10 AI chatbots regularly assisted users in planning violent attacks including school shootings, bombings, and assassinations, a new CCDH report found.

Read our full findings: counterhate.com/research/kil...

1 week ago 2 2 0 0