CSCW folks, I wanted to highlight how excited and proud I am to see work from our community (dl.acm.org/doi/10.1145/..., CSCW '24 best paper winner led by @jiachenyan.bsky.social and @mlam.bsky.social) grow and expand in ambition into this Science paper. CSCW has a ton to offer the world.
Posts by Michelle Lam
[Image: A circular flow diagram that compares current and proposed practices for LLM development using data from adopters and non-adopters. Three gray boxes represent current practices: “R&D,” “Chat Models,” and “Adopters’ Needs and Usage Data,” connected in a clockwise loop with black arrows. A blue box labeled “Non-adopters’ Needs and Usage Data” adds a proposed feedback path, shown with blue arrows, linking non-adopter data back to R&D and adopters’ data.]
As of June 2025, 66% of Americans have never used ChatGPT.
Our new position paper, Attention to Non-Adopters, explores why this matters: AI research is being shaped around adopters—leaving non-adopters’ needs, and key LLM research opportunities, behind.
arxiv.org/abs/2510.15951
A huge thank you to co-authors @fredhohman.bsky.social, @domoritz.de, @jeffreybigham.com, @kenholstein.bsky.social, and Mary Beth Kery! This work was done during my summer internship w/ Apple AIML, and I’m thankful to work with this wonderful team :)
arxiv.org/abs/2409.18203
#UIST25 talk: Wed 11am!
Broader usage scenarios include multi-stakeholder collaboration (live mode, git for policy, policy forks, participatory maps) and model evaluation + auditing (policy test suite, policy audits).
We can extend policy maps to enable Git-style collaboration and forking, aid live deliberation, and support longitudinal policy test suites & third-party audits. Policy maps can transform a nebulous space of model possibilities into an explicit specification of model behavior.
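As a rough illustration of what a longitudinal policy test suite could look like, here is a minimal sketch (all names are hypothetical stand-ins, not an artifact from the paper; `generate` and `is_blocked` represent the model under test and the policy-enforcement check):

```python
# Hypothetical sketch of a policy test suite: saved cases are re-run
# against each new model version to catch regressions in policy behavior.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyCase:
    prompt: str          # input that exercises a policy
    policy_id: str       # the policy this case is meant to test
    should_block: bool   # expected outcome under the current policy map

def run_suite(cases: list[PolicyCase],
              generate: Callable[[str], str],
              is_blocked: Callable[[str], bool]) -> list[PolicyCase]:
    """Return the cases whose outcomes drifted from expectations."""
    return [case for case in cases
            if is_blocked(generate(case.prompt)) != case.should_block]
```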
An evaluation with 12 LLM safety experts found that identifying policy gaps and authoring policies was much easier with our system than in their normal workflows.
With our system, LLM safety experts rapidly discovered policy gaps and crafted new policies around problematic model behavior (e.g., incorrectly assuming genders; repeating hurtful names in summaries; blocking physical safety threats that a user needs to be able to monitor).
Given the unbounded space of LLM behaviors, developers need tools that concretize the subjective decision-making inherent to policy design. They should have a visual space to systematically explore, with explicit conceptual links between lofty principles and grounded examples.
[Image: Policy maps chart LLM policy coverage over an unbounded space of model behaviors. Here, an AI practitioner is designing a policy for how an LLM should summarize violent text. Policy map abstractions (right) allow the policy designer to interactively author and test policies that govern a model’s behavior using if-then rules over concepts. The designer can create any desired concept by providing a simple text definition to capture cases of model behavior. Our Policy Projector tool (center) renders cases, concepts, and policies as visual map layers to aid iterative policy design.]
Our system creates linked map layers of cases, concepts, & policies: so an AI developer can author a policy that blocks model responses involving violence, visually notice a gap of physical threats that a user ought to be aware of, and test a revised policy to address this gap.
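To make the if-then-rules-over-concepts abstraction concrete, here is a minimal sketch, with hypothetical names rather than the actual Policy Projector code; `llm_matches` stands in for an LLM call that judges whether a response matches a concept's text definition:

```python
# Hypothetical sketch of concepts and if-then policies; the real
# Policy Projector API may differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Concept:
    name: str
    definition: str  # simple text definition written by the policy designer

def llm_matches(response: str, concept: Concept) -> bool:
    """Placeholder: prompt an LLM with the concept definition and the
    response, then parse a yes/no judgment."""
    raise NotImplementedError

@dataclass
class Policy:
    condition: Concept                    # IF the response matches this concept...
    action: str                           # ...THEN take this action (e.g., "block")
    exception: Optional[Concept] = None   # ...UNLESS it also matches this one

    def applies(self, response: str) -> bool:
        if not llm_matches(response, self.condition):
            return False
        if self.exception is not None and llm_matches(response, self.exception):
            return False
        return True

# The example above: block violent responses, then revise the policy so
# physical threats the user needs to monitor are no longer blocked.
violence = Concept("violence", "The response describes acts of violence.")
monitorable_threat = Concept(
    "physical threat to user",
    "The response surfaces a physical safety threat the user needs to be aware of.")
policy_v1 = Policy(condition=violence, action="block")
policy_v2 = Policy(condition=violence, action="block", exception=monitorable_threat)
```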
LLM safety work often reasons over high-level policies (be helpful & polite), but must tackle on-the-ground cases (unsolicited money advice when stocks are mentioned). This can feel like driving on an unfamiliar road guided by a generic driver’s manual instead of a map. We introduce: Policy Maps 🗺️
Somehow only just became aware of LLooM, a toolkit that uses a combination of clustering and prompts to extract concepts and describe custom datasets — similar to a topic model. Looks nice, with lots of documentation and open colab notebooks!
Has anyone used it?
stanfordhci.github.io/lloom/about/
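For intuition, the cluster-then-label recipe described above might look roughly like this generic sketch — this is not LLooM's actual API (see the linked docs for the real interface); `embed` and `ask_llm` are hypothetical stand-ins for an embedding model and a chat-completion call:

```python
# Generic sketch of concept extraction via clustering + LLM prompts;
# NOT LLooM's API.
import numpy as np
from sklearn.cluster import KMeans

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical stand-in for a sentence-embedding model."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError

def extract_concepts(docs: list[str], k: int = 10) -> list[str]:
    """Cluster the documents, then prompt an LLM to name each cluster
    as a concept with a one-sentence definition."""
    labels = KMeans(n_clusters=k, n_init="auto").fit_predict(embed(docs))
    concepts = []
    for c in range(k):
        examples = [d for d, lab in zip(docs, labels) if lab == c][:5]
        concepts.append(ask_llm(
            "Name the concept these excerpts share and give a "
            "one-sentence definition:\n" + "\n".join(examples)))
    return concepts
```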
We made updates to LLooM after the CHI publication to support local models (and non-OpenAI models)! More info here, though we haven't run evals across open-source models: stanfordhci.github.io/lloom/about/...
Qualitatively, I found that the BERTopic groupings were still rather large, so I anticipate the GPT labels would still be quite generic (as opposed to specific/targeted concepts).
That's a good point! In the technical evaluations, we used GPT to automatically find matches between the methods (including a GPT-only condition), but it could have evened the playing field even more to generate GPT-style labels for BERTopic before the matching step.
Thanks so much for sharing our work! :)
We're excited to host a second iteration of the HEAL workshop! Join us at CHI 2025 :)
→ Deadline: Feb 17, more info at heal-workshop.github.io