Alternatively, they could have presented this as what it clearly is: a large and genuinely impressive collection of user feedback. But describing this as “a new form of social science” is a bold claim — especially to anyone familiar with social science. 10/10
Posts by Michal Luria
If Anthropic really wanted to lean into the strengths of qualitative research, they could have collaborated with trained qualitative researchers, including region-specific experts. 9/
With word-for-word translation, even the most sophisticated AI cannot account for subtleties of tone, implied meaning, or culturally specific values; nuances diminish and themes are likely to be oversimplified. It's not enough to have global reach; an understanding of local norms is necessary. 8/
And then there’s the claim of "regional perspectives." The interviews did include people from a vast number of countries, which is commendable. But that’s not the same as cultural understanding; it does not capture the local and socio-economic context that shapes how people think about AI. 7/
Similarly, many qualitative researchers would argue that counting how many times people mentioned a topic undermines the purpose and usefulness of qualitative research. 6/
In the findings, "Jobs and economy" or "Governance" are not qualitative themes. Qualitative themes hold insight; they are specific, detailed, and well-developed ideas. Instead of "jobs and economy," a theme might have been "people see AI as both an opportunity and a threat to job stability." 5/
Even setting aside this fundamental misunderstanding, the study still has notable limitations. First, Anthropic asked users questions like "If you could have a magic wand, what would AI do for you?" This question carries assumptions, reflecting a positivist worldview about the usefulness of AI. 4/
Anthropic used an "AI interviewer" to elicit data, categorize responses, and pull out quotes. That can be useful for certain types of research. But there is still a significant gap between that and claiming the "largest and most multilingual qualitative study ever conducted." 3/
That is because qualitative research is inherently human. It is messy, subjective, and requires deep engagement. It involves researchers working with real people—developing questions, engaging in conversation, building rapport, probing for understanding, and iteratively interpreting meaning. 2/
Let’s start from the end: what Anthropic did here is not qualitative research. Is it an impressive scale for a free-response field survey? Yes. An interesting way to collect user feedback en masse? Absolutely. Qualitative research? Not quite. 🧵 1/
New piece by @mluria.bsky.social and me on what today's hearing on kids' safety is missing: the perspectives of parents and teens who are deeply skeptical of whether these bills will meet their needs. Huge thanks to @viacristiano.bsky.social for his sharp edits. www.techpolicy.press/congress-chi...
AI Labs Want More of Your Time. That's a Serious Problem.
In @compiler.news, CDT’s @mluria.bsky.social & Amy Winecoff warn that major AI labs are quietly optimizing chatbots for engagement — despite public claims to the contrary. As financial pressures rise, companies risk prioritizing stickiness over safety, even for vulnerable users.
If AI companies truly cared about “healthy engagement,” they’d design for healthy disengagement. Most aren’t, write @mluria.bsky.social and Amy Winecoff from @cdt.org.
It was especially fun to use one of my favorite design research methods in this project — and for the first time in a policy context. #Speeddating allows participants to review and reflect on many possible futures scenarios to explore and articulate their full and nuanced perspectives.
Delighted to share my collaboration with @aliyabhatia.bsky.social on research that grounds online safety debates in what families actually say they need and value. We touched on four key topics: age verification, feed controls, screen-time features and parental access.
I receive so many questions about what research in civil society is like -- Join our virtual panel to hear more about exactly that! @cdt.org @datasociety.bsky.social @aclu.org
With @thakurdhanaraj.bsky.social @alicetiara.bsky.social and @mkgerchick.bsky.social
To register:
cdt.org/event/advoca...
Guests @zevesanderson.com, CDT Research Fellow @mluria.bsky.social, and guest host
@aliyabhatia.bsky.social unpack what users really think about age checks, how they shape online behavior, and what’s at stake for balancing child safety with digital rights.
🚨 NEW BLOG led by CDT Intern @adinawa.bsky.social, with CDT’s Ruchika Joshi & @mluria.bsky.social, explores dark patterns in conversational AI — subtle design tricks in tools like ChatGPT, Replika & Character.AI that influence spending, attention, & data sharing:
9/9 As Savage mentioned in her testimony, UX researchers generally advocate for users — especially vulnerable users like children. To get there, it is crucial to ensure that they have freedom to ask hard questions, pursue answers with the appropriate methodology, and communicate findings clearly.
8/ In parallel, we need clear auditing and accountability processes within companies, as well as data access for vetted independent researchers.
7/ That’s why it’s not enough to have research teams — it’s also about making sure research is conducted and reported at the highest standards. In this case, the researchers themselves seem to have held the highest standards; but everyone, all the way up to top leadership, must be on board too.
6/ Doing research inside a big tech company is already extremely difficult. Researchers face significant pressure from internal and external stakeholders, and the potential for conflicts of interest is an everyday reality. Still, this work is deeply necessary, and understaffed.
5/ This testimony matters, and these whistleblowers are courageous for coming forth. At the same time, such allegations shouldn’t undermine trust in UX research, which would be a devastating outcome — company researchers are among those working hardest to surface safety risks and push for change.
4/ The only thing worse than having no research on a critical safety-related question is having misleading research. A gap in knowledge can be acknowledged and addressed. But if the research is riddled with malpractice, the harm is harder to detect — and far more damaging.
3/ If true, this profoundly undermines research integrity. Excluding findings, deleting data that sheds light on people’s safety, misrepresenting findings — these would all directly violate the most fundamental research ethics code.
2/ The testimony focused on alarming internal interactions between research teams, leadership, and legal that allegedly suppressed, altered, and misrepresented research and research findings to protect the company from liability and reputational damage.
It’s not every day that UX researcher whistleblowers testify before the Senate. Yesterday, two former Meta researchers, Jason Sattizahn and Cayce Savage, shared their concerns about safety research for Meta VR products, and more broadly within Meta. So how does UX research move on from here? 🧵
With a wave of tragic headlines about AI-related deaths and lawsuits, it's critical to reconsider the design choices that enable this -- as a first step, steer away from intentionally humanlike chatbots.
We used a Design Research approach (#speeddating) to test scenarios currently being proposed and debated in policy circles with teens and parents. Here is what we found on age verification approaches 👉
Full report on all topics, including screen-time features and algorithm controls coming soon.