Haha for a moment I had actually thought about adding this meme to my original post! 😂
Posts by Daniel Buschek
Collage of three photos, showing the conference venue, the main exhibition hall from above, and the Sagrada Familia in Barcelona. A text overlay says "Thank you CHI 2026!"
Thank you #CHI2026 for an amazing week of inspiring conversations in Barcelona!
Looks like it...
Fun fact for #CHI2026 folks: The generated "Session Summary Podcasts" in the ACM DL "cite" the session's papers and are picked up by Google Scholar, so all papers get +1 citation automatically now!
The "memory gap" was most acute in mixed workflows:
1️⃣ If you elaborated on an AI idea, there's a 37.7% chance you remember the idea came from AI.
2️⃣ If you used AI to elaborate on your idea, there's a 64% chance you remember the idea as yours.
Preprint: arxiv.org/abs/2509.11851
Illustration showing the overall study flow: Left part shows Phase 1 with a user and chatbot working on ideas and elaborations for the example problem "How might we solve the problem of plastic waste in the ocean?" Text says that participants have to come up with five ideas in 1-3 keywords and write elaborations in one sentence each. An arrow with annotation ("One week later") points to a second illustration of a user with question-marked thought bubbles. Text says that in this Phase 2, participants were asked: Did you work on this? Source of idea? Source of elaboration? Further text says that they were also asked this for unseen items (so-called distractors). Following another arrow to the right is a box with the title "Key findings" and three bullets: Negative impact of AI on source memory overall; mixed workflows harder to remember than never/always using AI; and people tend to be overconfident about their own performance.
Can you remember which ideas & sentences were your own and which were generated with AI?
In a controlled study (n=184), we found that AI use significantly reduces the accuracy of content attribution after one week.
#CHI2026 preprint & numbers in 🧵
@robinwelsch.bsky.social @svengoller.bsky.social
The list has now grown to 500+ #CHI2026 preprints!
Congrats, great work all! 🎉 Great that you kept pushing with this following the earlier work at CHI23, which is still one of my favourite projects I had the opportunity to contribute to!
Cool! I'm watching arXiv for this, so the list actually already has several posters! Will update again in the next few days.
Added!
Key findings (14 teams, 1 week):
📍Profiles were seen as personal territory but it's ok to interact with others' agents/tasks/comments
📍User-initiative preferred
📍Thus, teams incorporated agents into social collaboration norms, rather than treating them as "equal" team members.
Typical chatbots force co-writers to leave shared docs. Our #CHI2026 paper explores collaborative AI use in shared docs via 3 features:
🤖 Shared agent profiles
☑️ Repeatable tasks, triggered by users or system
💬 Agents respond in shared comments
Preprint in 🧵
w @florianlehmann.bsky.social
Wait, what? Why? Not sure if that's a good idea. I like 2-column submissions but page limits instead of word limits penalize using well-designed, reasonably sized figures and tables.
📢 Looking for current research on #HCI + #AI? Here's a categorised collection of 300+ #CHI2026 preprints, collected via arXiv:
dbuschek.medium.com/chi26-prepri...
This is a unique project in several ways: Specialists working with an actual AI writing tool product (not the latest chatbot or own prototype) as part of their actual job, in an important yet overlooked writing domain (German "easy and plain language" for accessibility).
Reminder for #CHI2026 ACs: You can still click "excellent review" (for your 1AC papers) to award specific reviewers for great work!
Was just thinking about it this morning, could we maybe change a PCS setting to send these out to all authors? 🙂
Finalising your #CHI2026 revision? Or suffering from not being allowed to do so? I've "reviewed-to-reject" one of our papers with ChatGPT, to show how this leads to bad reviews - and give ideas for responding to this: dbuschek.medium.com/dont-review-...
I highly recommend Ken Hinckley's article on excellence in reviews - and championing papers: kenhinckley.wordpress.com/wp-content/u...
Have we lost peer appreciation? Not just at #CHI2026, I notice many reviewers reviewing-to-reject. All studies have tradeoffs, all papers have limited scope. Sure, reviewing is fast, even with "passing knowledge", if we list what isn't there and conclude it's lacking. But we should review what *has* been done.
Fortunately, I did not get obviously LLM-generated reviews in my CHI AC stack of papers but my sample size is much smaller than with your role.
Three weeks ago, my feed filled with Grammarly ads - and I wondered: What if we took their messages seriously? My anecdotal "ad-vestigation" connects them to Human-AI interaction concepts, leading to counterfactual questions for alternative design directions. medium.com/p/1bab97d0fa98
For me, these issues have never shown up before in such density. And LLMs make it easier to get summaries of results and related work.
While reviewing for #CHI2026, I've noticed four new writing issues in #HCI papers, likely due to an increased use of #LLMs / #AI. I describe them here - and how to fix them: dbuschek.medium.com/when-llms-wr...
Looks great! 👍 🤩
Good luck! From a review we got a while ago (paraphrased): "This tool clearly helps HCI researchers [...] I don't see how this contributes to research."
Finish your discussions today - you got this! 💪😄#CHI2026
It's really good!
Screenshot from PCS showing the information on paper length for a "standard paper", saying the average is 7000-8000 words.
Is this average based on data or a nudge? 😄 #CHI2026