
Posts by Daniel Buschek

Haha for a moment I had actually thought about adding this meme to my original post! 😂

2 days ago 1 0 0 0
Collage of three photos, showing the conference venue, the main exhibition hall from above, and the Sagrada Familia in Barcelona. A text overlay says "Thank you CHI 2026!"

Thank you #CHI2026 for an amazing week of inspiring conversations in Barcelona!

2 days ago 1 1 0 0

Looks like it...

2 days ago 1 0 0 0

Fun fact for #CHI2026 folks: The generated "Session Summary Podcasts" in the ACM DL "cite" the session's papers and are picked up by Google Scholar, so all papers get +1 citation automatically now!

2 days ago 1 0 3 0
The AI Memory Gap: Users Misremember What They Created With AI or Without As large language models (LLMs) become embedded in interactive text generation, disclosure of AI as a source depends on people remembering which ideas or texts came from themselves and which were crea...

The "memory gap" was most acute in mixed workflows:
1️⃣ If you elaborated on an AI idea, there's a 37.7% chance you remember the idea came from AI.
2️⃣ If you used AI to elaborate on your idea, there's a 64% chance you remember the idea as yours.

Preprint: arxiv.org/abs/2509.11851

3 weeks ago 1 0 0 0
Illustration showing the study flow overall: Left part shows Phase 1 with a user and chatbot working on ideas and elaborations for the example problem "How might we solve the problem of plastic waste in the ocean?" Text says that participants have to come up with five ideas in 1-3 keywords and write elaborations in one sentence each. An arrow with annotation ("One week later") points to a second illustration of a user with question-marked thought bubbles. Text says that in this Phase 2, participants were asked: Did you work on this? Source of idea? Source of elaboration? Further text says that they were also asked this for unseen items (so-called distractors). Following another arrow to the right is a box with the title "Key findings" and three bullets: Negative impact of AI on source memory overall; mixed workflows harder to remember than never/always using AI; and people tend to be overconfident about their own performance.

Can you remember which ideas & sentences were your own and which were generated with AI?

In a controlled study (n=184), we found that AI use significantly reduces the accuracy of content attribution after one week.

#CHI2026 preprint & numbers in 🧵

@robinwelsch.bsky.social @svengoller.bsky.social

3 weeks ago 9 3 1 0

The list has now grown to 500+ #CHI2026 preprints!

3 weeks ago 2 0 0 0

Congrats, great work all! 🎉 Great that you kept pushing with this following the earlier work at CHI23, which is still one of my favourite projects I had the opportunity to contribute to!

1 month ago 1 0 0 0

Cool! I'm watching arXiv for this, so the list actually already has several posters! Will update again in the coming days.

1 month ago 2 0 0 0

Added!

1 month ago 0 1 0 0
Collaborative Document Editing with Multiple Users and AI Agents Current AI writing support tools are largely designed for individuals, complicating collaboration when co-writers must leave the shared workspace to use AI and then communicate and reintegrate results...

📄 Preprint: arxiv.org/abs/2509.11826

1 month ago 0 0 0 0

Key findings (14 teams, 1 week):
📍Profiles were seen as personal territory but it's ok to interact with others' agents/tasks/comments
📍User-initiated triggering of agent tasks preferred over system-initiated
📍Thus, teams incorporated agents into social collaboration norms, rather than treating them as "equal" team members.

1 month ago 0 0 1 0

Typical chatbots force co-writers to leave shared docs. Our #CHI2026 paper explores collaborative AI use in shared docs via 3 features:
🤖 Shared agent profiles
☑️ Repeatable tasks, triggered by users or system
💬 Agents respond in shared comments
Preprint in 🧵
w @florianlehmann.bsky.social

1 month ago 3 2 1 0

Wait, what? Why? Not sure if that's a good idea. I like 2-column submissions but page limits instead of word limits penalize using well-designed, reasonably sized figures and tables.\vspace{-.1em}

1 month ago 1 0 1 0
CHI’26 Preprint Collection Looking for current research on HCI + AI? Here’s a list.

📢 Looking for current research on #HCI + #AI? Here's a categorised collection of 300+ #CHI2026 preprints, collected via arXiv:
dbuschek.medium.com/chi26-prepri...

1 month ago 11 3 3 2

This is a unique project in several ways: Specialists working with an actual AI writing tool product (not the latest chatbot or own prototype) as part of their actual job, in an important yet overlooked writing domain (German "easy and plain language" for accessibility).

2 months ago 8 0 1 1

Reminder for #CHI2026 ACs: You can still click "excellent review" (for your 1AC papers) to award specific reviewers for great work!

3 months ago 2 0 0 0

Was just thinking about it this morning, could we maybe change a PCS setting to send these out to all authors? 🙂

3 months ago 2 0 0 0
Don’t Review with an LLM (Laundry List Method) The problem of asking AI for problems

Finalising your #CHI2026 revision? Or suffering from not being allowed to do so? I've "reviewed-to-reject" one of our papers with ChatGPT, to show how this leads to bad reviews - and give ideas for responding to this: dbuschek.medium.com/dont-review-...

4 months ago 2 1 0 0

I highly recommend Ken Hinckley's article on excellence in reviews - and championing papers: kenhinckley.wordpress.com/wp-content/u...

5 months ago 3 0 0 0

Have we lost peer appreciation? Not just at #CHI2026: I notice many reviewing-to-reject. All studies have tradeoffs; all papers have limited scope. Sure, reviewing is fast, even with "passing knowledge", if we list what isn't there and conclude the paper is lacking. But we should review what *has* been done.

5 months ago 9 0 1 0

Fortunately, I did not get obviously LLM-generated reviews in my CHI AC stack of papers but my sample size is much smaller than with your role.

5 months ago 1 0 0 0
How AI writing tools fail to speak to writers Seven design insights from taking Grammarly ads (too) seriously

Three weeks ago, my feed filled with Grammarly ads - and I wondered: What if we took their messages seriously? My anecdotal "ad-vestigation" connects them to Human-AI interaction concepts, leading to counterfactual questions for alternative design directions. medium.com/p/1bab97d0fa98

5 months ago 0 0 0 0

For me, these issues have never shown up before in such density. And LLMs make it easier to get summaries of results and related work.

5 months ago 3 0 0 0
When LLMs Write Our Papers Four writing issues I notice as a reviewer — and how to fix them

While reviewing for #CHI2026, I've noticed four new writing issues in #HCI papers, likely due to an increased use of #LLMs / #AI. I describe them here - and how to fix them: dbuschek.medium.com/when-llms-wr...

5 months ago 28 5 2 2

Looks great! 👍 🤩

7 months ago 1 0 1 0

Good luck! From a review we got a while ago (paraphrased): "This tool clearly helps HCI researchers [...] I don't see how this contributes to research."

7 months ago 5 0 1 0

Finish your discussions today - you got this! 💪😄 #CHI2026

7 months ago 4 0 0 0

It's really good!

7 months ago 0 0 0 0
Screenshot from PCS showing the information on paper length for a "standard paper", saying the average is 7000-8000 words.

Is this average based on data or a nudge? 😄 #CHI2026

7 months ago 6 0 1 0