Today on @indicator.media’s free Briefing: Google ran ads on search results for queries about removing nonconsensual intimate imagery (NCII).
Most of the ads promoted generic content removal products that don’t provide a tailored service for NCII. Most importantly, all of them are paid services.
NEW on @indicator.media:
I found a group of X accounts that worked together to remove Community Notes from British Conservative Party accounts during the 2024 UK general election.
Bloody hell. Researchers invented a disease, published two fake papers to see if LLMs would ingest them and kick them up as fact — and then it broke containment and all the major AIs bought in. Information pollution.
www.nature.com/articles/d41...
Our PhD student Kennedy Orwa, who studies applications of AI to health care, was hastily deported today to Kenya along with his 13-year-old son, without the opportunity to speak to legal counsel.
King 5 reports that he held a valid visa that was rescinded without explanation.
AJPS article from @kenbenoit.bsky.social, Scott de Marchi, Conor Laver, Michael Laver, and Jinshuai Ma on using LLMs to analyse political texts.
🧵 1/8 Social science research requires reliable extraction of information from texts to identify authors' preferences on various policies and issues.
New paper out! How can we study people's web search motivations in a way that's anchored in people's actual search behaviour? We suggest a survey design inspired by a combination of diary studies and data donation approaches.
Led by Elsa Lichtenegger, out now at CHIIR: doi.org/10.1145/3786...
I think it’s telling that people very interested in AI (like Eugene and myself) still have no interest in using it as a proxy for human communication.
“Being a good writer” is not the same thing as having social agency, and making the models even better at writing won’t change that.
Our new paper is out today in @pnasnexus.org with colleagues at Yale (@matthewshu.com, Danny Karell, @keitarookura.bsky.social)
We wanted to understand how using AI-generated summaries to learn about history influenced attitudes compared to existing resources like Wikipedia. 1/4
📣CFP: 7th edition International Workshop on Cyber Social Threats (CySoc)
We welcome papers that examine a diverse range of issues related to online harmful communications.
📅Submission: March 22nd, 2026
📅Notification: April 8th, 2026
🔗 Details: cy-soc.github.io/2026/
I've seen a few history of T&S things but don't remember any with that focus. If you come across the right people to write it please send them our way! tsjournal.org/index.php/jo...
🚨🚨🚨
Can #AI #chatbots reliably tell you whether a political claim is true or false? If not, what would it take to make them trustworthy fact-checkers?
A new study led by Matt DeVerna tackles these questions by evaluating 15 #LLMs on more than 6K claims fact-checked by PolitiFact over an 18-year period.
Here's the recording! www.youtube.com/watch?v=MsH_...
This is huge news. I have spent the past 6 months wondering wtf was up with Amazon: they filed 380,000 AI-related CyberTipline reports to NCMEC in the first half of 2025.
Turns out ALL of it was known CSAM they found by screening their AI training data. It's NOT AI-generated or AI-morphed CSAM.
"Machine learning research is not serious research & therefore hallucinated references are not necessarily a big deal, agrees prestigious group of ML researchers"
I ❤️ when the titles write themselves
But seriously, I don't think there's much to debate statmodeling.stat.columbia.edu/2026/01/26/m...
This is courage. Looking a masked, government fascist dead in the eye, face to face, knowing full well that your life is on the line.
If elected officials had 1/10th of Alex Pretti's resolve, we would have far fewer troubles.
I have an op-ed in the NYT today about the Grok scandal, sharing my research from last year finding that legal risk hinders AI companies from making their models safer against CSAM - an echo of the years where white-hat hackers were chilled from good-faith research. www.nytimes.com/2026/01/12/o...
👩🏫 I spent 6 months building something I’m super proud of: a course for journalists on using AI for investigations. It officially launched today! These approaches have already improved my own investigations. Get started: careercatalyst.asu.edu/programs/ai-...
New research on Gifted dogs is out, in @science.org!
Huge congrats to SHANY DROR for her effort and this incredible achievement.
📄 www.science.org/doi/10.1126/...
🚨 New paper from an awesome group led by Noam Kolt and @nickacaputo.
We hear a lot about which important concepts and methods from AI research lawyers need to understand. But it's really a two-way street...
🧵🧵🧵
Reminds me of a passage from Kurt Vonnegut's Timequake. Can't fit the whole thing here, but the key part goes: "... the situation is social rather than scientific. Any work of art is half of a conversation between two human beings, and it helps a lot to know who is talking at you."
Johns Hopkins Data Science and AI Institute is hiring Postdoctoral Fellows (Deadline Jan 23rd, 2026)! 💫
Reach out and apply if you're interested in working with me! I'm especially excited to work with postdocs on AI for social sciences/human behavior, social NLP, and LLMs.
Should scientists apply to OpenAI's fund for research on AI & mental health? Should policymakers consider it a credible safety effort?
Avriel Epps & I see it as "grantwashing," and it's an insult to anyone whose loved one's death involved chatbots. We explain:
www.techpolicy.press/beware-of-op...
We're hiring interns in the Computational Social Science group at Microsoft Research NYC!
If you're interested in designing AI‑based systems and understanding their impact at both individual and societal scales, apply here by Jan 9, 2026: apply.careers.microsoft.com/careers/job/...
The Center for Information Technology Policy at Princeton invites applications for a Postdoctoral Fellow to work with Andy Guess (Politics/SPIA), Brandon Stewart (Sociology), and me (CS).
puwebp.princeton.edu/AcadHire/app...
Please apply before Sunday, the 13th of December!