
Posts by Katelyn Mei

If you are interested in more details, please read our paper! We also include findings on engagement with Grok replies, the main clusters of Grok users, and the topics that users @grok for.
Paper link: arxiv.org/pdf/2602.11286

1 month ago

If you are interested in more details on how Grok is used for fact-checking in particular, please also see this amazing recent work osf.io/preprints/ps... by
@thomasrenault.bsky.social @mmosleh.bsky.social @dgrand.bsky.social!

1 month ago

👏This work is co-led with Robert Wolfe and supported by fantastic collaborators @nmw.bsky.social and @msaveski.bsky.social.

1 month ago

We close the paper with a discussion of what Grok and other general-purpose AI like it may mean for platform governance and for human experiences in blended human-AI online spaces. 💪We’re confident there is much exciting research to come in this rapidly evolving area of study.

1 month ago

👉We then identified 10 social roles that Grok plays on X. Grok most often serves as an Oracle (providing information), but, unlike LLM use in private one-on-one settings, it also commonly takes on roles related to online disputes, such as Truth Arbiter, Advocate, and Adversary.

1 month ago

We identified the most common tasks for which people use Grok on X: information seeking (51%), fact-checking (22%), debating with Grok (14%), seeking advice and opinions (13%), and creative and generative interactions (12%), among others.

1 month ago

We examine how often users prompt Grok and engage with Grok responses. We find that Grok responds to about 62% of requests, that the majority (51%) are in English, and that engagement is low 😯, with half of Grok’s responses receiving 20 or fewer views after 48 hours.
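For a concrete sense of the statistics above, here is a minimal Python sketch of how a response rate and a median view count could be computed over interaction records; the record fields (`grok_reply`, `reply_views_48h`) and the toy data are illustrative assumptions, not the paper's actual schema or numbers.

```python
from statistics import median

def grok_engagement_stats(interactions):
    """Return (response rate, median views after 48h of answered requests).
    Field names are hypothetical, not the paper's schema."""
    answered = [i for i in interactions if i["grok_reply"] is not None]
    response_rate = len(answered) / len(interactions)
    median_views = median(i["reply_views_48h"] for i in answered)
    return response_rate, median_views

# Toy example: 5 requests, Grok answers 3 of them.
sample = [
    {"grok_reply": "…", "reply_views_48h": 12},
    {"grok_reply": "…", "reply_views_48h": 20},
    {"grok_reply": None, "reply_views_48h": 0},
    {"grok_reply": "…", "reply_views_48h": 450},
    {"grok_reply": None, "reply_views_48h": 0},
]
rate, med = grok_engagement_stats(sample)
print(rate, med)  # → 0.6 20
```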

1 month ago

We used the official API to collect and analyze about three months of online conversations between users and Grok on X, consisting of more than 40k discrete interactions from August through November 2025.

1 month ago

But Grok’s deployment on X represents a new modality for human-AI interaction, where an AI chatbot enters a public social space. If social media has become the forum for important societal conversations, what happens when an AI chatbot can also partake in these conversations? 🤔

1 month ago

How people use AI is one of the most critical questions for understanding the individual and societal impact of intelligent technologies. It is a question that has been frequently studied in research focusing on private, one-on-one interactions with LLMs across various domains.

1 month ago

🚨🎉Excited to announce that our paper “Grok in the Wild: Characterizing the Roles and Uses of Large Language Models on Social Media” is accepted at
@icwsm.bsky.social 2026! In this paper, we investigate how, when, and to what effect Grok is used on X.

1 month ago
Digital Culture Shock: How culture shapes the design and use of technology—and how we can resist the one-size-fits-all approach to technology design

🚀 We turned LabintheWild into a book! 📖
Digital Culture Shock = global insights + LabintheWild tests so you can explore your cultural background.

How does culture shape tech—cars, apps, websites, ChatGPT?

👉 Can’t wait to hear what you think!

7 months ago
Digital Culture Shock: How culture shapes the design and use of technology—and how we can resist the one-size-fits-all approach to technology design

What happens when a robotaxi from California tries to drive in Cairo? Is that website colorful or chaotic? And when did chatbots get so rude? In a new book, #UW prof @katharinareinecke.bsky.social explores how culture shapes tech use and design. press.princeton.edu/books/hardco... #HCI #AI #BookSky

8 months ago
Addressing Pitfalls in Auditing Practices of Automatic Speech Recognition Technologies: A Case Study of People with Aphasia
Automatic Speech Recognition (ASR) has transformed daily tasks from video transcription to workplace hiring. ASR systems' growing use warrants robust and standardized auditing approaches to ensure aut...

🎙️ NEW WORK! @allisonkoe.bsky.social, Hilke Schellmann, Anna Seo Gyeong Choi, Katelyn Mei and I just released our stakeholder-grounded #AI #audit of speech-to-text transcription systems, examining how well they work for people with #aphasia. More: arxiv.org/abs/2506.08846

10 months ago
Careless Whisper: Speech-to-Text Hallucination Harms | Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency

This article was inspired by our paper “Careless Whisper: Speech-to-Text Hallucination Harms” dl.acm.org/doi/10.1145/...
led by @allisonkoe.bsky.social together with @monasloane.bsky.social and Hilke Schellmann!

1 year ago
What are AI hallucinations? Why AIs sometimes make things up
When AI systems try to bridge gaps in their training data, the results can be wildly off the mark: fabrications and non sequiturs researchers call hallucinations.

🚀 What is hallucination in AI systems? @annaseogyeongchoi.bsky.social and I recently explored this topic in depth, and our article has just been published in The Conversation! 🎉 theconversation.com/what-are-ai-...

1 year ago