
Posts by Kristin Alvandi

'Euphoria' made sex work go viral. Real sex workers are still getting censored.
If sex work is everywhere on 'Euphoria,' why is it still being erased online?

4 days ago 13 6 0 2

I know it's controversial politically, but @bsky.app really needs the pronouns thing, because I've seen lots of people accidentally getting people's pronouns wrong (including cis people's) because they're going off a display name and avatar when replying.

4 days ago 36 4 5 2
Musk’s Grok AI chatbot is still making sexual deepfakes, despite X’s promise to stop it
An NBC News review found dozens of AI-generated sexualized images of real women posted to X over the past month.

Grok has moved from undressing women's photos to "just" putting them "in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes." But "None of the women in Grok-generated images...were naked, [or] appeared to be minors." www.nbcnews.com/tech/tech-ne...

6 days ago 6 2 0 0
The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought
An analysis by WIRED and Indicator found nearly 90 schools and 600 students around the world impacted by AI-generated deepfake nude images—and the problem shows no signs of going away.

NEW: Teenage boys are pulling classmates' photos off Insta and running them through cheap nudify apps, and the fallout has now hit nearly 90 schools across 28 countries, with 600+ known victims since 2023, per a WIRED/Indicator analysis.

UNICEF estimates 1.2M children were targeted last year alone.

1 week ago 583 326 31 56
How Do You Find an Illegal Image Without Looking at It?
61.8 million files of suspected child abuse were reported in 2025 alone. This is how machines detect them at internet scale — without any human ever seeing the content.

this is honestly THE best write-up of how CSAM detection and perceptual hashing works. the visual aids are very helpful in understanding how content is transformed and how detection methods work

mahmoud-salem.net/the-invisibl...

2 weeks ago 208 80 8 10
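For readers who haven't opened the linked write-up, the core trick it explains is perceptual hashing: reduce each image to a compact fingerprint, then compare fingerprints by Hamming distance, so no human ever views the content. Here's a minimal toy sketch in Python using an "average hash" over hypothetical 8x8 grayscale grids; real systems use far more robust hashes (e.g. PhotoDNA or PDQ), and all values below are made up for illustration:

```python
# Toy sketch of perceptual ("average") hashing: reduce an image to a
# compact fingerprint, then compare fingerprints by Hamming distance
# instead of ever looking at pixels. The 8x8 grayscale grids below are
# hypothetical stand-ins for real images.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: each bit records whether a pixel
    is brighter than the grid's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches(h, known_hashes, threshold=5):
    """True if h is within `threshold` bits of any known hash."""
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)

# Two nearly identical toy "images" (one pixel nudged, as recompression might)
img_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] += 3

h_a, h_b = average_hash(img_a), average_hash(img_b)
print(hamming_distance(h_a, h_b))   # small distance: still a match
print(matches(h_b, {h_a}))
```

Matching within a small Hamming distance, rather than requiring exact equality, is what lets detection survive recompression, resizing, and other minor edits to a known image.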
The EU Killed Voluntary CSAM Scanning. West Virginia Is Trying To Compel It. Both Cause Problems.

Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of strict EU privacy regulations. Meanwhile, here in the US, West Virginia's Attorney General continues to press forward with a lawsuit designed to force Apple to scan iCloud for CSAM, apparently oblivious to the fact that succeeding would hand defense attorneys the best gift they've ever received.

3 weeks ago 3 2 0 0
Why AI ‘Model Cards’ Are an Urgent Necessity for Child Safety
Children deserve to be protected as robustly as possible — and that requires tools we can actually understand, writes a team of experts.

AI tools detect CSAM, grooming and self-harm, but nobody knows how well. Much of the AI industry has adopted 'model cards' for transparency—it's time the developers of child safety tools caught up, write Camille François, Margaret Mitchell, Yacine Jernite, Vinay Rao & J. Nathan Matias.

2 weeks ago 5 1 1 1
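The "model cards" the authors call for are structured documentation published alongside a model: who it's for, what it was trained and evaluated on, and where it fails. A hedged sketch of what one might contain, with entirely hypothetical field names and values:

```python
# Illustrative sketch of a "model card" as structured documentation
# shipped alongside a safety classifier. Every name and number here is
# hypothetical; real cards also cover ethics and usage caveats in depth.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_use: str
    training_data: str            # provenance and collection window
    eval_metrics: dict            # e.g. precision/recall per content class
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="harm-classifier-v2",  # hypothetical
    intended_use="Flagging suspected grooming for human review",
    out_of_scope_use="Automated enforcement without human review",
    training_data="Labeled moderator reports, 2021-2024 (illustrative)",
    eval_metrics={"grooming": {"precision": 0.82, "recall": 0.64}},
    known_limitations=["Evaluated on English-language text only"],
)
print(card.model_name, card.eval_metrics["grooming"]["recall"])
```

The point of the post above is exactly this kind of disclosure: without published recall numbers and limitations, "AI tools detect CSAM, grooming and self-harm" is an unverifiable claim.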

Protect trans kids.

1 year ago 946 169 10 5

@aaron.bsky.team shouting out how great using Osprey is for investigating! Go @roost.tools @julietshen.bsky.social #opensource #tssummit #trustandsafety

4 weeks ago 10 4 0 0

Anyone who has done content moderation before knows that emotional support/wellbeing tools are always developed secondary to the work (usually in response to burnout, etc.).

Putting it first ensures that moderators can take an approach to the work that actually centers their wellbeing.

4 weeks ago 25 8 0 1

Well. 😅

4 weeks ago 0 0 0 0
Meta Must Pay $375 Million Over Allegedly Enabling Child Exploitation
The lawsuit concerned allegations that Meta covered up its platforms’ impact on children's mental health and its knowledge of child exploitation online.

4 weeks ago 9112 3051 890 480

Well thanks now I am too.

1 month ago 1 1 0 0
Instagram Will Alert Parents If Teens Search For Suicide, Self-Harm Content
Meta is updating its child safety standards as countries across the globe consider social media bans for teenagers.

1 month ago 319 99 89 25
Meta’s AI sending ‘junk’ tips to DoJ, US child abuse investigators say
Officers say flood of low-quality reports is draining resources and slowing cases amid New Mexico lawsuit

new from @katiemcque.bsky.social:

@riana.bsky.social has talked about this at length, but the actionability of reports submitted by tech companies to NCMEC is a huge unsolved and growing problem

www.theguardian.com/technology/2...

1 month ago 8 3 1 1

When Congress incentivizes over-reporting but leaves what to report up to platforms’ discretion (since for constitutional reasons the govt can’t tell private actors what to do), this is what you get. If NCMEC or LE can’t directly tell platforms what to (not) do, they drag them in the press instead.

1 month ago 3 1 1 0
West Virginia’s Anti-Apple CSAM Lawsuit Would Help Child Predators Walk Free

West Virginia Attorney General JB McCuskey wants you to think he's protecting children. His press release says so. His legal complaint opens with the genuinely horrific line that Apple has, in internal communications, described itself as the "greatest platform for distributing child porn." He makes sure you know that Google made 1.47 million CSAM reports to the National Center for Missing and Exploited Children (NCMEC) in 2023 while Apple made just 267.

1 month ago 21 4 0 2
What US Lawsuits Reveal About Platform Design That DSA Reports Don’t
EU risk assessments and platform litigation in the US represent distinct approaches to governing and mitigating risks posed by social media platforms.

Through the EU's DSA and US litigation, new evidence is emerging about how social media platforms understand and address risks to minors. Peter Chapman and Matt Steinberg analyze what these parallel processes uncover about how platforms assess risk and design.

1 month ago 10 6 0 1

🫠

1 month ago 0 0 0 0

The American president is more insulated from accountability than a British royal.

Our political system provides the elite with immunity. It has to change.

2 months ago 25959 5877 792 270
India’s AI Summit Could Prove to be New Delhi's Lost Opportunity
Amid the dollar signs and demos, the opportunity to shift the global debate about AI and what world we are building appears to be lost, writes Amber Sinha.

Amid the deals and demos at the India AI Impact Summit this week, the opportunity to shift the global debate about AI and what kind of world we are building appears to be lost, writes Tech Policy Press contributing editor Amber Sinha. www.techpolicy.press/indias-ai-su...

2 months ago 3 2 0 0

OK "god bless america" and then naming every country in the americas from south to north is absolute king shit

2 months ago 28864 4990 177 153

A must-read fiction book honoring Fred in the best way. SO good. One of my fave books this year! Even more chilling given what is happening now.

2 months ago 1 0 0 0

New #research from Resolver on “The Com.” My takeaway: this isn’t one group, it’s a growing network of online harms that spreads across platforms and mixes CSAM, self-harm, extremism & cybercrime. Addressing it requires coordinated action. No more silos! Read the full briefing here: ter.li/7ffbot

2 months ago 0 0 0 0

Incredibly rough to read, but there needs to be more clarity in what platforms need to report to NCMEC & LE. The annotations in the form are confusing.

2 months ago 2 2 0 0

It is very cool!!

2 months ago 0 0 0 0

“We intentionally use an over-inclusive threshold for scanning, which yields a high percentage of false positives.” This sounds to me like they were not reviewing any matches and were filing automatic reports. Companies should be called out for bad reports to NCMEC - it impacts kids being helped.

2 months ago 1 1 0 0
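The tradeoff in that quote is easy to demonstrate: with hash-based matching, loosening the match threshold flags progressively more unrelated content. A small simulation with random 64-bit hashes (the hash size, thresholds, and sample count are all illustrative, not any company's actual parameters):

```python
# Quick simulation of the tradeoff the quoted policy describes: the
# looser the Hamming-distance threshold, the more random (non-matching)
# content gets flagged as a "match" and potentially auto-reported.

import random

random.seed(0)
known = random.getrandbits(64)                             # one "known" hash
others = [random.getrandbits(64) for _ in range(100_000)]  # unrelated content

def false_positive_rate(threshold):
    """Fraction of unrelated hashes within `threshold` bits of the known hash."""
    hits = sum(1 for h in others if bin(h ^ known).count("1") <= threshold)
    return hits / len(others)

for t in (5, 15, 25):
    print(t, false_positive_rate(t))  # rate grows as the threshold loosens
```

An over-inclusive threshold is defensible only if a human reviews the matches before anything is reported; skip that step and the false positives go straight into the CyberTipline queue.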

This is huge news. I have spent the past 6 months wondering wtf was up with Amazon: they filed 380,000 AI-related CyberTipline reports to NCMEC in the first half of 2025.

Turns out ALL of it was known CSAM they found by screening their AI training data. It's NOT AI-generated or AI-morphed CSAM.

2 months ago 587 255 1 18

We are fun at parties

2 months ago 1 0 0 0

production-grade tools for online safety CAN be built in public, and CAN be collaboratively developed by engineers across different organizations.

We're proud to announce Osprey and so grateful to all the contributors who made this possible!

Release notes here: github.com/roostorg/osp...

2 months ago 94 30 1 0