PSA for anyone submitting to HCI conferences: Do not leave the "suggested reviewers" section blank. It is your greatest (and only) way to heighten the chances your paper will be reviewed by someone who really cares about your research topic. Plus, it helps the 1AC!
Posts by Ian Arawjo
Looking forward to the open source release, but it sounds like they’re treating model weights as static variables in code rather than data in an ML library, which means the Rust compiler can optimize 1000x better
Thanks for this. What’s crazy too is that I can’t even access the podcast to double-check if it might be grossly misrepresenting my work. Checked my papers from CHI, but blocked from viewing the podcast under just “Basic Edition” access to ACM.
I had heard people at #CHI2026 say that they didn't want AI-generated podcasts to be made on top of their work. While this perspective is totally understandable, I thought the podcasts could potentially be useful. But after looking up one of mine, I can't believe this feature was allowed to launch.
“Everyone I met knew, at some level, that AI either means that nothing matters—a kind of creeping techno-nihilism—or that everything that has always mattered—humanism, human values—is all that ever mattered, and our tool tinkering had always been a distraction.” 🙏
the first paper from my phd project is now out and i'm presenting it tomorrow morning at #chi2026! it's an exploratory design project about supporting players' rule experimentation through game design. you can read/download from this link dl.acm.org/doi/10.1145/...
The paper is now officially out in the #chi2026 proceedings!
You can download the dataset, as well as the autoethnographic memos, as Supplementary Material
dl.acm.org/doi/10.1145/...
Retweeting this, just because it seems not enough papers at #CHI2026 are declaring their relevance to AI: #makeit100percent #ensureAIrelevanceNow
“The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding.”
@devamarh.bsky.social knows
The Stats for LLM Evals guide uses the `promptstats` library, which we're building out together with the guide. github.com/ianarawjo/pr... I'll also post investigations and updates over at our Substack, but probably after the CHI conference: substack.com/@statsforevals
Stats for Evals is now live, and we got a site, too: statsforevals.com We'll be posting regular investigations across the summer. For now, we're starting with the basics: comparing models and prompts. Also has resources, principles, example code, and guidance for others:
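To make the "comparing models" starting point concrete, here's a minimal sketch of one common approach: a paired bootstrap confidence interval on the accuracy difference between two models scored on the same eval items. This is purely illustrative — the function and data below are hypothetical, not taken from the guide or the `promptstats` library.

```python
# Illustrative sketch (not promptstats API): paired bootstrap CI for the
# accuracy difference between two models evaluated on the same prompts.
import random

def bootstrap_diff_ci(scores_a, scores_b, n_boot=10_000, alpha=0.05, seed=0):
    """CI for mean(scores_a) - mean(scores_b), resampling paired items."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    diffs = []
    for _ in range(n_boot):
        # Resample item indices with replacement, keeping pairs intact
        # so per-item difficulty is preserved in each resample.
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(sum(scores_a[i] - scores_b[i] for i in idx) / n)
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical binary pass/fail scores for two models on the same 20 prompts.
model_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
model_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1]
low, high = bootstrap_diff_ci(model_a, model_b)
print(f"accuracy diff 95% CI: [{low:.2f}, {high:.2f}]")
```

The pairing matters: because both models see the same items, resampling pairs (rather than each model's scores independently) accounts for shared item difficulty and usually gives a tighter interval.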
It would be cool if HCI had: 1) an open reviewing platform, 2) a quid pro quo credit system like CritiqueCircle with added kudos by experts for quality reviews, 3) anonymization of reviewers (but where you can see fuzzy metrics of reviewer quality)
"you cheated not only the game but yourself" is not how i feel about video games at all but it is how i feel about people using LLMs to "write" essays for them
“*Claude Code used regex instead of…” Fixed that for you.
Also, Montréal HCI is hiring! Looking for one PhD student (~Winter 2027). Also looking for stats/coding collaborators for our LLM evals project (the latter can be anyone, as long as you know your stuff!). See us at CHI and say "bonjour hi"! 5/5
I’ll also attend two workshops: HEAL (evaluating LLMs) and one on LLMs as simulated participants—presenting a position paper "That’s Enough about AI Replacing Users in User Research" (and trying not to step on too many toes in the process!) 🧵 4/5
Paper 2: "Reporting and Reviewing LLM-Integrated Systems: Challenges and Considerations." An interview study of how authors report systems with LLM components, and how reviewers grill them, with guidelines for HCI scholars when reporting LLM components in HCI papers. 🧵 3/5
First up: "How Notations Evolve: A Historical Analysis with Implications for Supporting User-Defined Abstractions." A historical analysis of notational systems, from quantum circuits to dance notations to SignWriting, with implications for abstraction co-creation. 🧵 2/5
Montréal HCI is headed to sunny Barcelona for #CHI2026! ☀️ We’re presenting two exciting, rather unique papers and joining two workshops. Details below! 🧵 1/5
I've made a Substack for Stats for LLM Evals progress. We'll be releasing a website soon, but progress will be piecemeal (first release is focused on model comparison, prompt comparison, and model x prompt). Subscribe here for regular updates: substack.com/@statsforevals
I don't think people understand how hard it is to work in the video-game industry right now. If you've been laid off, it can take months if not years to find new work. If you haven't been laid off, you're anxious that you will be laid off.
This week's column: www.bloomberg.com/news/newslet...
Submitting an AI-powered system paper to #UIST2026? Wish you could sense what reviewers are thinking, and how to maximize your chance of acceptance? Check out our #CHI2026 paper, “Reporting and Reviewing LLM-integrated Systems in HCI”, for tips and guidelines: arxiv.org/abs/2602.05128
now desk reject the papers of reviewers whose human-written reviews are worse than an LLM 😂 … #icml2026
I want my workflow to feel more like this. One big reconfigurable space for deep work
HCI summer research opportunity 📣 My group has two openings for research assistants this summer, both in scientific tools for thought. Applicants are welcome at any level. Please help me share the news! andrewhead.info/positions/20...
Writing an HCI paper about an AI-powered system for a venue like UIST 2026 or CHI 2027? Wondering what reviewers expect you to report, and how to approach paper framing and writing? Check out our reporting guidelines: medium.com/p/7c3ae86341...
The #CHI2026 program (draft) is out: programs.sigchi.org/chi/2026/pro... a monster-sized CHI that will definitely be fun and intellectually stimulating. Huge kudos to Pablo Cesar and Heloisa Candello, as well as our assistants, for making this possible in such a short time! Check it out!
To date, HCI researchers have had no support for signaling their paper's relevance to AI, esp. when that connection is tenuous at best. We introduce a systematic framework to ensure LLMs are mentioned at every stage of paper reporting—from framing, to evaluation, to implications.
Those are great topics. I am reacting more to papers that aren’t about AI directly but where authors shoehorn phrases like “LLM-based” into their titles and framing. If a topic is not directly about LLMs, then it shouldn’t be pressured into answering “but what is the relevance for AI?”