Hashtag: #CSAM

www.forbes.com/sites/emmawoollacott/202...

Europe Bans Platforms From Scanning For #CSAM

"This means genuine child protection through a paradigm shift: providers must technically prevent cybergrooming from the outset […]


All you need to know about our services is here.

#OSINT #SOCMINT #digitalInvestigations #investigacionesDigitales #indaginiDigitali #digitalforensics #truthmatters #cybercrime #onlinesafety #csam #iotforensics #mobileforensics #expertevidence #pis

negativepid.com/

Social media | Krp (Finland's National Bureau of Investigation) is concerned about the MEPs' decision: it weakens the ability to combat child abuse
Social media companies have reported abuse material to the police for 15 years. A majority of Finnish MEPs want to end the practice.

So this is not about the new #CSAM #ChatControl regulation, but about extending the current legislation until the new one can be negotiated. There is no sense in reopening and politicising this at this stage! 🤯 www.hs.fi/politiikka/a...


Update, 27 March 2026 (cancellation of the #ChatControl extension, which ends "after 3 April 2026", in #Europe)

blog.sosordi.net/2026/03/plus...

#CSAM #CSEM #Loi

The EU votes to ban AI ‘nudifier’ apps after explicit deepfake outrage. (by RFi)

The European Parliament on Thursday approved, by an overwhelming majority, a ban on artificial intelligence tools that ge…

#EuropeanParliament #Tech #Nudification #EU #Grok #ElonMusk #SocialMedia #AI #Deepfakes #CSAM #Politics

‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools

Record levels of “dangerous” AI child sexual abuse imagery are now being discovered online, as new polling reveals 82% of UK adults believe the government must now ensure “uncensored” AI systems are made safe by design.

Image: Pikisuperstar / Freepik

A new report, published on March 24, reveals the full scale of AI-generated child sexual abuse images and videos being discovered online by the Internet Watch Foundation (IWF). It shows how, in 2025, the IWF identified 8,029 AI-generated images and videos of realistic child sexual abuse – a 14% increase in criminal AI content on the previous year. It is published alongside new polling from Savanta* which shows more than four in five UK adults want the government to introduce regulation to ensure AI systems are safe by design.

The report, titled “Harm without limits: AI child sexual abuse material through the eyes of our Analysts”, also gives “unsettling” insight into the kind of offender conversations IWF analysts are witnessing as criminals vie with each other to create ever more lifelike and extreme child sexual abuse scenarios. Chillingly, offenders even discuss setting up and using hidden cameras to source still footage of real children, which they can then transform into AI sexual abuse video content. They also predict how, in a few years’ time, agentic AI tools may be able to create full child sexual abuse “movies” by feeding a prompt to an uncensored AI agent. “No skills with editing or tech will be required,” remarked one dark web forum user.

In January, the IWF, which is Europe’s largest hotline dedicated to disrupting the spread of child sexual abuse imagery online, published data showing a more than 260-fold increase in videos of AI-generated child sexual abuse. This new report shows the combined surge in still images and videos, as well as horrifying details of the intentions of those producing them.

The data shows:

- In 2025, the IWF identified 8,029 AI-generated images and videos of realistic child sexual abuse, a 14% increase in criminal AI content on the previous year.
- An additional 82 items were classed as prohibited and actioned under UK law even though the material is not photorealistic, such as cartoons, illustrations and animations.
- Of the 3,443 AI-generated child sexual abuse videos identified – a more than 260-fold increase on the 13 videos found in 2024 – 65% were classified as Category A, the most severe legal category under UK law, which encompasses offences such as rape, sexual torture and bestiality.
- By comparison, 43% of non-AI criminal videos seen by the IWF in 2025 were Category A – demonstrating that AI is being used to create more violent content.

Internet Watch Foundation Senior Analyst Natalia** said: “It is very apparent from the unsettling dark web conversations observed by the IWF Hotline that AI innovations are regarded with delight by users of child sexual abuse material.

“Every new development in generative AI is extolled for its ability to enhance the realism, to heighten the severity, or to make more immersive any conceivable sexual scenario with a child. This could be through adding audio to video, being able to depict multiple people interacting, or even being able to successfully manipulate imagery of a real child known to an offender.

“Instead of being a vehicle for connection, the technology only deepens offenders’ capacity to view children and victims as abstract playthings, whose likenesses can be altered endlessly for their own enjoyment.

“We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse.”

One offender quoted in the report describes how surprised they are at “just how uncensored” the technology is, exclaiming that the ability to edit and finetune is “going to be nuts”. Another praises an AI child sexual abuse video, saying it is “an absolute masterpiece” and that “anything you desire is possible in extreme realism.” Analysts have also observed discussions of the ability to generate AI imagery of children known to offenders, with one individual saying they are “impressed with the results of [AI] image to video conversions” and that they want to use hidden cameras to obtain footage of real children to convert into AI videos.

The IWF is calling on the UK government to tighten up laws around AI and make it mandatory for tech companies to evaluate and safeguard AI models before release, to make it harder for criminals to abuse AI image generators to create child sexual abuse imagery. This is echoed by new polling* which shows more than four in five, or 82%, of UK adults say the government should introduce regulation to ensure AI systems are safe by design and futureproofed from causing harm. A further 78% of survey respondents agreed that AI companies should be made to test for AI-related harms before products are released to market.

Internet Watch Foundation CEO Kerry Smith said: “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.

“The UK government has made great strides in recognising the wide-reaching harms of AI child sexual abuse imagery and we welcome the move to allow designated authorities like the IWF to test AI models.

“But this report’s in-depth view of the risks posed to children by AI, as well as emerging areas of concern, only serves to highlight the need for companies to adopt a safety-by-design approach that ensures child protection is baked into product development. This non-negotiable standard in AI development must be mandated by a clear government framework.

“Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line.”

The report also highlights how offenders are already anticipating the next generation of AI tools and how they might exploit them. IWF analysts have observed offenders discussing the possibilities of “agentic AI”: systems designed to carry out complex tasks autonomously. One offender wrote: “I believe in a year or two we will be able to create our own movies just by feeding a prompt to an uncensored AI agent. No skills with editing or tech will be required.”

AI child sexual abuse content with an audio component is also an emerging area of concern. This may take the form of recordings – audio deepfakes – which synthetically generate the sexualised voices of children. While the IWF does not typically assess audio-only reports, one example identified by analysts was a fully synthetic video showing a child who appeared to be between three and six years old speaking to the camera and performing a sexual act on an adult man. Both the video and the audio were generated by AI.

Helen Rance, Deputy Director of CSA threat at the National Crime Agency, said: “AI generated child sexual abuse material is illegal. It harms children. And it fuels and escalates offending. Alongside policing colleagues, we are arresting nearly 1,000 offenders and safeguarding over 1,200 children every month in relation to online sexual abuse. Offenders should be under no illusion that they will be caught, and the consequences for them and their families will be life changing.

“However, policing cannot tackle AI CSAM alone. We need industry around the world to invest its money, expertise and innovation in stopping this harm at source. We need to keep investing in the tools that help policing protect children at scale. And we need to equip children, parents, carers and professionals with the confidence and skills to navigate the challenges that AI brings.

“We welcome this important report from IWF and will continue to work with them and other partners to disrupt this evolving ecosystem and keep children safe.”

* The online survey was run by polling company Savanta in March 2026 and included 2,204 UK adults. Data was weighted to be representative of the UK by age, gender, region and social grade.

** Not her real name. IWF analysts’ identities are protected.

Note: This post was originally published by the Internet Watch Foundation and republished here with permission. Reviewed by Ayaz Khan.
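As a quick sanity check on the headline figures above, the two ratios are mutually consistent (a throwaway Python snippet; the 2024 combined total is implied by the quoted 14% increase rather than stated directly in the report):

```python
# Cross-check the IWF figures quoted above.
videos_2025, videos_2024 = 3_443, 13
print(f"video increase: {videos_2025 / videos_2024:.0f}x")  # ~265x -> "more than 260-fold"

items_2025, yoy_increase = 8_029, 0.14
print(f"implied 2024 total: {items_2025 / (1 + yoy_increase):,.0f} items")  # ~7,043
```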


#AI #artificial-intelligence #child-safety #CSAM #news #safety #Technology


Section 230, Free Speech, and Moderation: A Plain‑English Guide
Unravel the complexities of Section 230 and its significance in the debate over social media and censorship issues.

Tying Section 230 protection to "best practices" for CSAM sounds good on paper — until it pushes encrypted services toward scanning or backdoors. I dig into why that trade‑off worries security folks and what it could mean for everyday users. #Section230 #Encryption #CSAM #Privacy

MEPs block tech firms from scanning for child sexual abuse material | POLITICO
“perhaps there’s a different approach they could try taking, other than surveilling everybody’s messages?”
https://alecmuffett.com/article/151666
#ChatControl #ClientSideScanning #EndToEndEncryption #csam #surveillance

DOJ Lawyer Ejected From Courtroom (YouTube video by LegalEagle)

@legaleagle.tv & @lizdye.bsky.social | LegalEagle: "DOJ Lawyer Ejected From Courtroom" | #ZahidQuraishi #Asylum #Immigration #PamBondi #CSAM #FAFO
www.youtube.com/watch?v=uV8q...

Elon Musk Hit With MAJOR Lawsuit From Teenagers After Explicit Images Revealed To Users (YouTube video by The Damage Report)

Elon Musk gets rocked by major lawsuit from three teenagers over his company's program Grok, which generated explicit images of underage girls.

#Elon #CSAM #AI #Nonconsensual #ExplicitImages #Children #Grok #XAI #X #Twitter #pedos

youtube.com/watch?v=ZlyB...


#NihilisticViolentExtremism (NVE) Definition Public Domain Image by #iPredator, NYC – #CriminalPsychology, Dark Psychology, Online Psychopaths, #Sextortion, #CSAM


#NihilisticViolentExtremism (NVE) “Violence without Purpose” Public Domain Image by #iPredator, NYC – #CriminalPsychology, Dark Psychology, Online Psychopaths, #Sextortion, #CSAM


#Maga #Nazi & pervert #Epstein buddy Elon #Musk proudly posts a #Grok #AI generated video of a clearly underage girl 🤢 Looks like a weak defense against the class action lawsuit #Tennessee kids have going against Grok for making #CSAM of them ...

www.npr.org/2026/03/16/n...


Now sue Musk for making #CSAM with #grokAI

Amount of AI-generated child sexual abuse material found online surged in 2025
Internet Watch Foundation verified 8,029 pieces of realistic AI-made content, with 65% of videos in worst category

Amount of #AI generated child sexual abuse material #CSAM found online surged in 2025 ... why are these rotten companies still allowed to train their AI software how to generate CSAM? They must be feeding it CSAM to produce it 🤢🤮☠️

www.theguardian.com/technology/2...

’19 Kids’ Duggar Couple Busted in Shocking Investigation (YouTube video by Law&Crime Network)

youtube.com/watch?v=5niz...
Surprise surprise... the ultra-conservative Baptist Duggar family has yet another CSAM predator in it...
Just disgusting
#duggar
#baptist
#csam

'Painfully familiar' pattern: Duggars questioned after third family member arrested
Last week, another one of the Duggar family brothers was arrested after questions over inappropriate actions with a minor, but now his wife has been taken into custody too on a completely unrelated ma...

#Duggars #pedophiles #CSAM #abuse

www.alternet.org/duggar-famil...

The global system for detecting child sexual abuse images online is flawed
Researchers at KULeuven have cracked the ultra-secret algorithm of PhotoDNA, the Microsoft technology used by all platforms to track down child sexual offenders. A few seconds are enough...

New Le Soir piece on COSIC research: we reverse‑engineered Microsoft’s secret #PhotoDNA and show how #CSAM detection can be bypassed or weaponised to falsely flag innocents. Our results challenge the push for large‑scale client‑side scanning.
www.lesoir.be/735999/artic... [paywall]
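For context on how such a bypass works in principle: perceptual-hash systems like PhotoDNA match images by the bit-distance between compact fingerprints rather than by exact file bytes. The sketch below is a toy illustration only, using a simple "average hash" in plain Python and a made-up 6-bit threshold; PhotoDNA's actual algorithm is proprietary and far more elaborate. It shows the general failure mode the researchers describe: benign re-encoding noise leaves the fingerprint intact, while a targeted tweak to a few pixels flips enough bits that the same picture no longer matches.

```python
# Toy "average hash" + Hamming matching: NOT PhotoDNA (which is secret),
# just the general shape of perceptual-hash detection and its evasion.

def average_hash(pixels):
    """64-bit hash of an 8x8 grayscale image: one bit per pixel,
    set when the pixel is brighter than the image-wide mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [int(p > mean) for p in flat]

def hamming(h1, h2):
    """Number of bits on which two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

THRESHOLD = 6  # hypothetical "same image" cutoff, in bits

# Synthetic 8x8 image: a bright square on a dark background.
img = [[200 if 2 <= r <= 5 and 2 <= c <= 5 else 30 for c in range(8)]
       for r in range(8)]
ref = average_hash(img)

# Benign change (re-encoding-style noise): the hash barely moves, so a
# recirculated copy of known material still matches.
noisy = [[p + ((r + c) % 2) * 3 for c, p in enumerate(row)]
         for r, row in enumerate(img)]
d = hamming(ref, average_hash(noisy))
print(f"benign copy:  distance {d}, match = {d <= THRESHOLD}")   # 0, True

# Targeted change: brightening a handful of background pixels flips
# enough bits to clear the threshold, so the same picture evades detection.
evaded = [row[:] for row in img]
for r, c in [(0, 0), (0, 3), (0, 7), (3, 0), (4, 7), (7, 0), (7, 4), (7, 7)]:
    evaded[r][c] = 120
d = hamming(ref, average_hash(evaded))
print(f"evasive copy: distance {d}, match = {d <= THRESHOLD}")   # 8, False
```

The converse attack, crafting an innocent-looking image whose fingerprint lands within the match threshold of a flagged one, is what allows such a system to be weaponised to falsely flag innocents.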

Behavioral therapist arrested for allegedly distributing CSAM on social media, calling it ‘European art’
A behavioral therapist who reportedly works with autistic children was arrested on Friday after allegedly distributing child sexual abuse material on a social media site.

so is this an epstein class thing? #CSAM Art as Assault? damn idiots


📰 Operation Alice: Police Shut Down 373,000 Scam Sites Selling Child Exploitation Content (CSAM)

👉 Read the full article here: ahmandonk.com/2026/03/22/operasi-alice...

#beritaTeknologi #csam #darkWeb #eksploitasiAnak #europol #kejahatan

Another Duggar arrested for hurting kids. (YouTube video by Parkrose Permaculture)

@parkroseperma.bsky.social | Parkrose Permaculture: "Another Duggar arrested for hurting kids." | #MAGA #Homeschooling #PurityCulture #RespectabilityPolitics #JosephDuggar #TLC #EpsteinFiles #CoverUp #CSAM #FarRight #Trumpers #HypersexualCulture #SexualizingChildren
www.youtube.com/watch?v=qkDA...

Original post on mastodon.social

A painfully clear example of why #ChatControl is not going to help prevent child abuse and the production of #CSAM:

"De opvang deed alles volgens het boekje: ze volgden de meldcodes en klopten aan bij landelijke instanties zoals de politie. (...) weten dat de instanties destijds […]

Teens Sue xAI Over Sexualized Images Generated by Grok
Sexualized images of the girls, created by Grok, were allegedly shared across Discord and Telegram groups.

A group of teenagers is suing Elon Musk’s AI company over allegations that xAI’s model generated sexualized images and videos of them. This is the first time minors have pursued legal action against the companies enabling #GenAI

#CSAM #teen #teens #deepfake #ai #xai #grok #legal #technology #tech

Child sexual abuse: the story of a Brussels-style disaster - Contexte
"Inflexible", "irresponsible"… For three days, Parliament and Council have been blaming each other for the imminent end of the detection of child sexual abuse of minors by online services. A failure...

"Chat control 1.0" est mort, et bientôt avec lui la détection des abus sexuels sur mineurs en ligne pour quelques mois. Je vous raconte les derniers jours qui ont mené à ça à Bruxelles 👇 #CSAM
www.contexte.com/fr/article/m...


Musk’s tactic of blaming users for Grok sex images may be foiled by EU law https://arstechni.ca #ArtificialIntelligence #childsexabusematerials #europeanunion #takeitdownact #ElonMusk #chatbot #Policy #aicsam #AIAct #csam #grok #xAI #AI

The front cover of a book entitled “Child Sex Abuse-Power, Profit, Perversion” by Beverley Chalmers

~ahem~

#pedophile #pedophiles #csam #childabuse #childsexabuse #sextrafficking #childsextrafficking #trump
