Posts by Kristin Alvandi
I know it's controversial politically, but @bsky.app really needs the pronouns thing, because I've seen lots of people accidentally getting people's pronouns wrong (including cis people's) because they're going off a display name and avatar when replying.
Grok has moved from undressing women's photos to "just" putting them "in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes." But "None of the women in Grok-generated images...were naked, [or] appeared to be minors." www.nbcnews.com/tech/tech-ne...
NEW: Teenage boys are pulling classmates' photos off Insta and running them through cheap nudify apps, and the fallout has now hit nearly 90 schools across 28 countries, with 600+ known victims since 2023, per a WIRED/Indicator analysis.
UNICEF estimates 1.2M children were targeted last year alone.
this is honestly THE best write-up of how CSAM detection and perceptual hashing works. the visual aids are very helpful in understanding how content is transformed and how detection methods work
mahmoud-salem.net/the-invisibl...
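For a concrete feel for what perceptual hashing does, here is a minimal difference-hash (dHash) sketch in Python using Pillow. This is my own illustrative example, not the specific method in the linked write-up; production CSAM-detection systems use much more robust hashes (e.g. PhotoDNA or PDQ) plus human review.

```python
# Minimal difference-hash (dHash) sketch: shrink the image, grayscale it,
# compare adjacent pixels, and pack the comparisons into a 64-bit fingerprint.
# Visually similar images (resized, re-compressed, lightly edited) tend to
# produce fingerprints that differ in only a few bits.
# Requires Pillow: pip install Pillow
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Resize to (hash_size+1) x hash_size so each row gives hash_size comparisons.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")
```

The point of the sketch is that matching is done on fingerprints, not raw images: a service can compare hashes against a list of hashes of known material without ever needing to "look at" user content directly.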
The EU Killed Voluntary CSAM Scanning. West Virginia Is Trying To Compel It. Both Cause Problems.
Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of…
AI tools detect CSAM, grooming and self-harm, but nobody knows how well. Much of the AI industry has adopted 'model cards' for transparency—it's time the developers of child safety tools caught up, write Camille François, Margaret Mitchell, Yacine Jernite, Vinay Rao & J. Nathan Matias.
Protect trans kids.
@aaron.bsky.team shouting out how great using Osprey is for investigating! Go @roost.tools @julietshen.bsky.social #opensource #tssummit #trustandsafety
Anyone who has done content moderation before knows that emotional support/wellbeing tools are always developed secondary to the work (usually in response to burnout etc).
Putting it first ensures that moderators can take an approach to the work that actually centers their wellbeing.
Well. 😅
The lawsuit concerned allegations that Meta covered up its platforms’ impact on children's mental health and its knowledge of child exploitation online.
Well thanks now I am too.
Meta is updating its child safety standards as countries across the globe consider social media bans for teenagers.
new from @katiemcque.bsky.social:
@riana.bsky.social has talked about this at length, but the actionability of reports submitted by tech companies to NCMEC is a huge unsolved and growing problem
www.theguardian.com/technology/2...
When Congress incentivizes over-reporting but leaves what to report up to platforms’ discretion (since for constitutional reasons the govt can’t tell private actors what to do), this is what you get. If NCMEC or LE can’t directly tell platforms what to (not) do, they drag them in the press instead.
West Virginia’s Anti-Apple CSAM Lawsuit Would Help Child Predators Walk Free
West Virginia Attorney General JB McCuskey wants you to think he's protecting children. His press release says so. His legal complaint opens with the genuinely horrific line that Apple has, in internal communications,…
Through the EU's DSA and US litigation, new evidence is emerging about how social media platforms understand and address risks to minors. Peter Chapman and Matt Steinberg analyze what these parallel processes uncover about how platforms assess risk and design.
🫠
The American president is more insulated from accountability than a British royal.
Our political system provides the elite with immunity. It has to change.
Amid the deals and demos at the India AI Impact Summit this week, the opportunity to shift the global debate about AI and what kind of world we are building appears to be lost, writes Tech Policy Press contributing editor Amber Sinha. www.techpolicy.press/indias-ai-su...
OK "god bless america" and then naming every country in the americas from south to north is absolute king shit
A must-read fiction book honoring Fred in the best way. SO good. One of my fave books this year! Even more chilling with what is happening now.
New #research from Resolver on “The Com.” My takeaway: this isn't one group; it's a growing network of online harms that spreads across platforms and mixes CSAM, self-harm, extremism & cybercrime. Addressing it requires coordinated action. No more silos! Read the full briefing here: ter.li/7ffbot
Incredibly rough to read, but there needs to be more clarity in what platforms need to report to NCMEC & LE; these annotations in the form are confusing.
It is very cool!!
“We intentionally use an over-inclusive threshold for scanning, which yields a high percentage of false positives,” this sounds to me like they were not reviewing matches and were filing reports automatically. Companies should be called out for bad reports to NCMEC - it impacts kids being helped.
This is huge news. I have spent the past 6 months wondering wtf was up with Amazon: they filed 380,000 AI-related CyberTipline reports to NCMEC in the first half of 2025.
Turns out ALL of it was known CSAM they found by screening their AI training data. It's NOT AI-generated or AI-morphed CSAM.
We are fun at parties
production-grade tools for online safety CAN be built in the open, and CAN be collaboratively developed by engineers across different organizations.
We're proud to announce Osprey and so grateful to all the contributors who made this possible!
Release notes here: github.com/roostorg/osp...