
Posts by Michal Luria

Alternatively, they could have presented this as what it clearly is: a large and genuinely impressive collection of user feedback. But describing this as “a new form of social science” is a bold claim — especially to anyone familiar with social science. 10/10

1 month ago

If Anthropic really wanted to lean into the strengths of qualitative research, they could have collaborated with trained qualitative researchers, including region-specific experts. 9/

1 month ago

With word-for-word translation, even the most sophisticated AI cannot account for subtleties of tone, implied meaning, or culturally specific values; nuances diminish, and themes are likely oversimplified. It's not enough to have global reach; an understanding of local norms is necessary. 8/

1 month ago

And then there’s the claim of "regional perspectives." The interviews did include people from a vast number of countries, which is commendable. But that’s not the same as cultural understanding; it does not capture the local and socio-economic context that shapes how people think about AI. 7/

1 month ago

Similarly, many qualitative researchers would argue that counting how many times people mentioned a topic undermines the purpose and usefulness of qualitative research. 6/

1 month ago

In the findings, "Jobs and economy" or "Governance" are not qualitative themes. Qualitative themes hold insight; they are specific, detailed, and well-developed ideas. Instead of "jobs and economy" a theme might have been "people see AI as both an opportunity and a threat to job stability." 5/

1 month ago

Even setting aside this fundamental misunderstanding, the study still has notable limitations. First, Anthropic asked users questions like "If you could have a magic wand, what would AI do for you?" This question carries assumptions, reflecting a positivist worldview about the usefulness of AI. 4/

1 month ago

Anthropic used an "AI interviewer" to elicit data, categorize responses, and pull out quotes. That can be useful for certain types of research. But there is still a significant gap between that and claiming the "largest and most multilingual qualitative study ever conducted." 3/

1 month ago

That is because qualitative research is inherently human. It is messy, subjective, and requires deep engagement. It involves researchers working with real people—developing questions, engaging in conversation, building rapport, probing for understanding, and iteratively interpreting meaning. 2/

1 month ago

Let’s start from the end: what Anthropic did here is not qualitative research. Is it an impressive scale for a free-response field survey? Yes. An interesting way to collect user feedback en masse? Absolutely. Qualitative research? Not quite. 🧵 1/

1 month ago
Congress’ Child Safety Bills Sound Good. Families Suggest They Won't Work. Lawmakers risk advancing bills that may be neither effective nor in line with what some parents and teens actually want, Michal Luria and Aliya Bhatia write.

New piece by @mluria.bsky.social and me on what today's hearing on kids' safety is missing: the perspectives of parents and teens who are deeply skeptical of whether these bills will meet their needs. Huge thanks to @viacristiano.bsky.social for his sharp edits. www.techpolicy.press/congress-chi...

1 month ago
AI Labs Want More of Your Time. That's a Serious Problem.

In @compiler.news, CDT’s @mluria.bsky.social & Amy Winecoff warn that major AI labs are quietly optimizing chatbots for engagement — despite public claims to the contrary. As financial pressures rise, companies risk prioritizing stickiness over safety, even for vulnerable users.

4 months ago
A.I. labs want more of your time. That's a problem. Healthy disengagement — not stickiness — should become a core safety feature within AI chatbots.

If AI companies truly cared about “healthy engagement,” they’d design for healthy disengagement. Most aren’t, write @mluria.bsky.social and Amy Winecoff from @cdt.org.

4 months ago

It was especially fun to use one of my favorite design research methods in this project — and for the first time in a policy context. #Speeddating allows participants to review and reflect on many possible future scenarios, so they can explore and articulate their full and nuanced perspectives.

5 months ago

Delighted to share my collaboration with @aliyabhatia.bsky.social on research that grounds online safety debates in what families actually say they need and value. We touched on four key topics: age verification, feed controls, screen-time features and parental access.

5 months ago
Advocating with Evidence: Lessons for Tech Researchers in Civil Society. November 13, 2025, 10-11am ET, online. Civil society is struggling to address how technology and the tech industry contribute to e...

I receive so many questions about what research in civil society is like -- Join our virtual panel to hear more about exactly that! @cdt.org @datasociety.bsky.social @aclu.org

With @thakurdhanaraj.bsky.social @alicetiara.bsky.social and @mkgerchick.bsky.social

To register:
cdt.org/event/advoca...

5 months ago
Tech Talks: Age Verification In this episode of Tech Talks, we dive into the growing debate over online age verification. While often framed as a way to protect children, these policies carry major implications for how everyone, adults and children alike, accesses the internet. Unlike showing an ID at a bar, online age checks can require the collection and […]

Guests @zevesanderson.com, CDT Research Fellow @mluria.bsky.social, and guest host
@aliyabhatia.bsky.social unpack what users really think about age checks, how they shape online behavior, and what’s at stake for balancing child safety with digital rights.

6 months ago
AI-Powered Deception: A Deeper Dimension of Dark Design Patterns in Conversational AI Tools and Platforms. This post was led by CDT Intern Adinawa Adjagbodjou. Have you ever struggled to cancel a subscription you didn’t know you were enrolled in? Or have you been invited to join a platform by a friend only to discover that it was the site impersonating someone in your contact list? If so, you’ve encountered a […]

🚨 NEW BLOG led by CDT Intern @adinawa.bsky.social, with CDT’s Ruchika Joshi & @mluria.bsky.social, explores dark patterns in conversational AI — subtle design tricks in tools like ChatGPT, Replika & Character.AI that influence spending, attention, & data sharing:

7 months ago

9/9 As Savage mentioned in her testimony, UX researchers generally advocate for users — especially vulnerable users like children. To get there, it is crucial to ensure that they have freedom to ask hard questions, pursue answers with the appropriate methodology, and communicate findings clearly.

7 months ago

8/ In parallel, we need clear auditing and accountability processes within companies, as well as data access for vetted independent researchers.

7 months ago

7/ That’s why it’s not enough to have research teams — it’s also about making sure research is conducted and reported to the highest standards. In this case, the researchers themselves seem to have held the highest standards, but everyone, all the way to top leadership, must be on board too.

7 months ago

6/ Doing research inside a big tech company is already extremely difficult. Researchers face significant pressure from internal and external stakeholders, and the potential for conflicts of interest is an everyday reality. Still, this work is deeply necessary, yet understaffed.

7 months ago

5/ This testimony matters, and these whistleblowers are courageous for coming forward. At the same time, such allegations shouldn’t undermine trust in UX research, which would be a devastating outcome — company researchers are among those working hardest to surface safety risks and push for change.

7 months ago

4/ The only thing worse than having no research on a critical safety-related research question is having misleading research. A gap in knowledge can be acknowledged and addressed. But if the research is riddled with malpractice, the harm is harder to detect — and far more damaging.

7 months ago

3/ If true, this profoundly undermines research integrity. Excluding findings, deleting data that sheds light on people’s safety, misrepresenting findings — these would all directly violate the most fundamental research ethics code.

7 months ago

2/ The testimony focused on alarming internal interactions among research teams, leadership, and legal that allegedly suppressed, altered, and misrepresented research and research findings to protect the company from liability and damage to reputation.

7 months ago

It’s not every day that UX researcher whistleblowers testify before the Senate. Yesterday, two former Meta researchers, Jason Sattizahn and Cayce Savage, shared their concerns about safety research for Meta VR products, and more broadly within Meta. So how does UX research move on from here? 🧵

7 months ago

With a wave of tragic headlines about AI-related deaths and lawsuits, it's critical to reconsider the design choices that enable this -- as a first step, steering away from intentionally humanlike chatbots.

7 months ago

We used a Design Research approach (#speeddating) to test, with teens and parents, scenarios currently being proposed and debated in policy circles. Here is what we found on age verification approaches 👉

Full report on all topics, including screen-time features and algorithm controls coming soon.

7 months ago
Teen and Parent Perspectives on Approaches to Age Verification Age verification (AV) is being rolled out by online services to comply with new laws in Europe and across states in the United States. Proposed legislation at the federal level in the U.S., such as the Kids Off Social Media Act and the Kids Online Safety Act, may also result in increased adoption of AV […]

CDT’s @mluria.bsky.social + @aliyabhatia.bsky.social's interviews with families reveal concerns about privacy, efficacy, & the need for transparent, user-centered approaches that support both agency & parental discretion. Read more:

7 months ago