
Posts by Ronald E. Robertson

Briefing: Google shows ads when people search for help removing intimate images
Plus: X pays clickbait accounts a little less

Today on @indicator.media’s free Briefing: Google ran ads on search results for queries about removing nonconsensual intimate imagery (NCII).

Most of the ads promoted generic content removal products that don’t provide a tailored service for NCII. Most importantly, all of them are paid services.

4 days ago 7 4 1 0
Inside a pro-Conservative group effort to influence X's Community Notes

NEW on @indicator.media:

I found a group of X accounts that worked together to remove Community Notes from British Conservative Party accounts during the 2024 UK general election.

1 week ago 44 28 1 2
Scientists invented a fake disease. AI told people it was real
Bixonimania doesn’t exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?

Bloody hell. Researchers invented a disease, published two fake papers to see if LLMs would ingest them and kick them up as fact — and then it broke containment and all the major AIs bought in. Information pollution.

www.nature.com/articles/d41...

1 week ago 2930 1496 50 159
UW graduate student deported through SEA as protesters demand answers
A union representing University of Washington graduate student workers says Kennedy Orwa’s student visa was rescinded without explanation.

Our PhD student Kennedy Orwa, who studies applications of AI to health care, was hastily deported today to Kenya along with his 13-year-old son without opportunity to speak to legal counsel.

King 5 reports that he held a valid visa that was rescinded without explanation.

1 week ago 4159 2280 87 113

AJPS article from @kenbenoit.bsky.social, Scott de Marchi, Conor Laver, Michael Laver, and Jinshuai Ma on using LLMs to analyse political texts.

🧵 1/8 Social science research requires reliable extraction of information from texts to identify authors' preferences on various policies and issues.

3 weeks ago 25 12 1 1
Beyond the Query: A Survey Design for Eliciting Underlying Motivations in Web Search | Proceedings of the 2026 Conference on Human Information Interaction and Retrieval

New paper out! How can we study people's web search motivations in a way that's anchored in people's actual search behaviour? We suggest a survey design inspired by a combination of diary studies and data donation approaches.

Led by Elsa Lichtenegger, out now at CHIIR: doi.org/10.1145/3786...

4 weeks ago 5 1 1 0

I think it’s telling that people very interested in AI (like Eugene and myself) still have no interest in using it as a proxy for human communication.

“Being a good writer” is not the same thing as having social agency, and making the models even better at writing won’t change that.

1 month ago 156 11 10 5

Our new paper is out today in @pnasnexus.org with colleagues at Yale (@matthewshu.com, Danny Karell, @keitarookura.bsky.social)

We wanted to understand how using AI-generated summaries to learn about history influenced attitudes compared to existing resources like Wikipedia. 1/4

1 month ago 21 9 1 1
CySoc 2026 - International Workshop on Cyber Social Threats

📣CFP: 7th edition International Workshop on Cyber Social Threats (CySoc)

We welcome papers that examine a diverse range of issues related to online harmful communications.

📅Submission: March 22nd, 2026
📅Notification: April 8th, 2026

🔗 Details: cy-soc.github.io/2026/

1 month ago 4 5 0 0
Journal of Online Trust and Safety
The Journal of Online Trust and Safety is a cross-disciplinary, open access, fast peer-reviewed journal that publishes research on how consumer internet services are abused to cause harm and how to pr...

I've seen a few history of T&S things but don't remember any with that focus. If you come across the right people to write it please send them our way! tsjournal.org/index.php/jo...

1 month ago 1 0 0 0

🚨🚨🚨

1 month ago 1 1 0 0

Can #AI #chatbots reliably tell you whether a political claim is true or false? If not, what would it take to make them trustworthy fact-checkers?

A new study led by Matt DeVerna tackles these questions by evaluating 15 #LLMs on more than 6K claims fact-checked by PolitiFact over an 18-year period.

1 month ago 6 2 1 0

Here's the recording! www.youtube.com/watch?v=MsH_...

2 months ago 4 2 0 0

This is huge news. I have spent the past 6 months wondering wtf was up with Amazon: they filed 380,000 AI-related CyberTipline reports to NCMEC in the first half of 2025.

Turns out ALL of it was known CSAM they found by screening their AI training data. It's NOT AI-generated or AI-morphed CSAM.

2 months ago 587 255 1 18
Simple LLM based Approach to Counter Algospeak
Jan Fillies, Adrian Paschke. Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024). 2024.

aclanthology.org/2024.woah-1....

2 months ago 1 0 1 0
Machine learning research is not serious research and therefore hallucinated references are not necessarily a big deal, agrees a prestigious group of machine learning researchers | Statistical Modeli...

"Machine learning research is not serious research & therefore hallucinated references are not necessarily a big deal, agrees prestigious group of ML researchers"

I ❤️ when the titles write themselves

But seriously, I don't think there's much to debate statmodeling.stat.columbia.edu/2026/01/26/m...

2 months ago 46 14 1 2

This is courage. Looking a masked, government fascist dead in the eye, face to face, knowing full well that your life is on the line.

If elected officials had 1/10th of Alex Pretti's resolve, we would have far fewer troubles.

2 months ago 9902 2603 125 173
Opinion | There’s One Easy Solution to the A.I. Porn Problem

I have an op-ed in the NYT today about the Grok scandal, sharing my research from last year finding that legal risk hinders AI companies from making their models safer against CSAM - an echo of the years where white-hat hackers were chilled from good-faith research. www.nytimes.com/2026/01/12/o...

3 months ago 132 54 2 11
A picture of the website where you can register for the course.

👩‍🏫 I spent 6 months building something I’m super proud of: a course for journalists on using AI for investigations. It officially launched today! These approaches have already improved my own investigations. Get started: careercatalyst.asu.edu/programs/ai-...

3 months ago 8 4 2 0
Dogs with a large vocabulary of object labels learn new labels by overhearing like 1.5-year-old infants
Children as young as 18 months can acquire novel words by overhearing third-party interactions. Demonstrating similar learning processes in nonhuman species would indicate that the social-cognitive sk...

New research on Gifted dogs is out, in @science.org!

Huge congrats to SHANY DROR for her effort and this incredible achievement.

📄 www.science.org/doi/10.1126/...

3 months ago 31 8 0 5

🚨 New paper from an awesome group led by Noam Kolt and @nickacaputo.

We hear a lot about the important concepts and methods from AI research that lawyers need to understand. But it's really a two-way street...

🧵🧵🧵

3 months ago 5 4 2 0

Reminds me of a passage from Kurt Vonnegut's Timequake. Can't fit the whole thing here, but the key part goes: "... the situation is social rather than scientific. Any work of art is half of a conversation between two human beings, and it helps a lot to know who is talking at you."

3 months ago 13 2 1 0

Johns Hopkins Data Science and AI Institute is hiring Postdoctoral Fellows (Deadline Jan 23rd, 2026)! 💫

Reach out and apply if you're interested in working with me! I'm especially excited to work with postdocs on AI for social sciences/human behavior, social NLP, and LLMs.

4 months ago 8 7 0 0
Beware of OpenAI's 'Grantwashing' on AI Harms | TechPolicy.Press
J. Nathan Matias and Avriel Epps say OpenAI's announced research funding is the perfect corporate action to make sure we don't find answers for years.

Should scientists apply to OpenAI's fund for research on AI & mental health? Should policymakers consider it a credible safety effort?

Avriel Epps & I see it as "grantwashing," and it's an insult to anyone whose loved one's death involved chatbots. We explain:

www.techpolicy.press/beware-of-op...

4 months ago 116 58 3 6
Research Intern - Computational Social Science | Microsoft Careers
Research Interns put inquiry and theory into practice. Alongside fellow doctoral candidates and some of the world's best researchers, Research Interns learn, collaborate, and network for life. Researc...

We're hiring interns in the Computational Social Science group at Microsoft Research NYC!

If you're interested in designing AI‑based systems and understanding their impact at both individual and societal scales, apply here by Jan 9, 2026: apply.careers.microsoft.com/careers/job/...

4 months ago 21 18 0 0

The Center for Information Technology Policy at Princeton invites applications for a Postdoctoral Fellow to work with Andy Guess (Politics/SPIA), Brandon Stewart (Sociology), and me (CS).

puwebp.princeton.edu/AcadHire/app...

Please apply before Sunday, the 13th of December!

4 months ago 16 10 0 0