
Posts by Dan Liebling

Complex technology requires cultural innovations for distributing cognition Over the last decade, new research has shown how human collectives can develop technologies that no single individual could discover on their own. However, this research often overlooks how technology...

learning changes with new technology; that doesn't mean it isn't happening anymore. It might just need to happen in other places.

www.cell.com/trends/cogni...

6 days ago 3 1 1 0
Six-panel composite figure. Caption: Interactive artifacts always rely on people’s interpretive and interactional practices. Rowwise from top left to bottom right: A. Aegeus consults the oracle at Delphi (cup from Vulci, 440–430 BCE). B. Byzantine mosaic depicting the zodiac, from the floor of the 6th-century CE Beth Alpha synagogue. C. One-sided sense-making in an experimental psychotherapy session (McHugh 1968). D. Still from a BBC documentary showing a person interacting with ELIZA via a computer terminal, late 1960s. E. Researchers interacting with the PARC copier (Suchman 2007 [1987]). F. Screenshot of a large language model chat interface, 2026.

New! Interactional foundations for critical AI literacies doi.org/10.5281/zeno...

Why do Anthropic engineers talk to Claude the way a witch doctor talks to his potions? How is prompt engineering like spider divination? Can one reason without reasons?

ft. Lovelace, Adorno, Suchman, Weizenbaum & many more ☺️

6 days ago 98 38 4 7

Extremely important analysis. Yes, people can be lazy and incurious. But also, some of this cognitive offloading is a rational response to existing within SO many systems that our human brains and bodies were not built for.

6 days ago 57 13 1 0

every figure generator should have a --no_cute_robots flag

1 week ago 0 0 0 0
Paper page - PaperBanana: Automating Academic Illustration for AI Scientists Join the discussion on this paper page

genai scientific illustration huggingface.co/papers/2601.... -- judging by the suggestions from Semantic Scholar, this is an active field

1 week ago 0 0 1 0
CigaLLM: A Domain-Specialized Large Language Model for Cigarette Appearance Quality Analysis The evaluation of the quality of the appearance of cigarettes represents a critical challenge in tobacco manufacturing, where existing models do not capture the specialized knowledge required for a co...

LLMs for ... cigarette analysis? (hey they cited our work so) ieeexplore.ieee.org/document/113...

1 week ago 0 0 0 0

NASA isn't why the US doesn't have universal healthcare, or a social safety net. The US doesn't have those things because politicians with the power to provide them choose specifically not to (with varying levels of support from voters). Enthusiasm for human spaceflight doesn't drive that choice.

1 week ago 20008 4440 345 208

truly with 30+ chat rooms at work we have reinvented email and forums in the worst way possible

1 week ago 1 0 0 0
A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs Large Language Models (LLMs) are increasingly used for mental health support, yet little is known about how people with mental health challenges engage with them, how they evaluate their usefulness, a...

🤖 “A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs” — examines how people with mental health conditions use LLMs, highlighting benefits, situational use, and clear limitations.

📕 arxiv.org/abs/2602.00402

(4/7)

1 week ago 0 1 1 0

I assume Gemini has a tiny generative model that makes these since they sometimes seem to include relevant context, but I’m not actually sure how it works.

1 week ago 2 0 0 0

in a serious take, I do think the spinner is a tiny opportunity for interpretable AI. I assume that when it says things like “pursuifying pursuits” people know it’s fake, but when it says “analyzing impact of X on Y”, is it really doing that? And how do those messages affect perception of the final output?
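As a sketch of the opportunity (everything here is hypothetical, including the tool-call shape; this is not any real product's API): derive the spinner text from the agent's actual tool calls instead of canned filler, so the status line is at least weakly grounded in what the model is doing.

```python
# Hypothetical sketch: ground the spinner text in observed tool calls
# instead of canned filler. The tool_call dict shape is made up.
CANNED_FILLER = "Pursuifying pursuits..."

def spinner_text(tool_call: dict | None) -> str:
    """Return a status line, grounded when a real step is observable."""
    if tool_call is None:
        return CANNED_FILLER  # nothing observable yet, fall back to filler
    # e.g. {"name": "web_search", "args": {"query": "impact of X on Y"}}
    args = ", ".join(f"{k}={v!r}" for k, v in tool_call.get("args", {}).items())
    return f"Running {tool_call['name']}({args})"

print(spinner_text(None))
print(spinner_text({"name": "web_search", "args": {"query": "impact of X on Y"}}))
```

Whether a grounded line like that actually changes trust in the final output is still the empirical question.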

1 week ago 2 0 1 0

great example of a feature copied indiscriminately that imo has low value for users

1 week ago 1 0 2 0
Partial screenshot of Claude app with recipe ingredients that are scalable to the number of servings

Cute “cooking mode” in the Claude app that scales recipes up or down and has timers. But US units don’t round or change order of magnitude, so you wind up with silly quantities like 0.91 lb of chicken or 12 tsp.
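A minimal sketch of the renormalization I'd want (a hypothetical helper, not how the app actually works): after scaling, re-express US volumes in the largest sensible unit, using 3 tsp = 1 tbsp and 48 tsp = 1 cup.

```python
from fractions import Fraction

# US volume units, in teaspoons: 3 tsp = 1 tbsp, 16 tbsp = 48 tsp = 1 cup.
TSP_PER = {"cup": 48, "tbsp": 3, "tsp": 1}

def scale_volume(qty: float, unit: str, factor: float) -> str:
    """Scale a quantity, then re-express it in the largest sensible unit."""
    total_tsp = qty * TSP_PER[unit] * factor
    for name, size in TSP_PER.items():  # largest unit first
        if total_tsp >= size:
            amount = Fraction(total_tsp / size).limit_denominator(4)
            return f"{amount} {name}"
    return f"{Fraction(total_tsp).limit_denominator(8)} tsp"

print(scale_volume(4, "tsp", factor=3))  # 12 tsp -> '4 tbsp', not '12 tsp'
```

Weights (the 0.91 lb case) would need a similar snap-to-friendly-fractions pass.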

1 week ago 0 0 0 0
Illustration showing the overall study flow: The left part shows Phase 1, with a user and chatbot working on ideas and elaborations for the example problem "How might we solve the problem of plastic waste in the ocean?" Text says that participants had to come up with five ideas in 1-3 keywords and write elaborations of one sentence each. An arrow annotated "One week later" points to a second illustration of a user with question-marked thought bubbles. Text says that in this Phase 2, participants were asked: Did you work on this? Source of idea? Source of elaboration? Further text says they were also asked this for unseen items (so-called distractors). Following another arrow to the right is a box titled "Key findings" with three bullets: negative impact of AI on source memory overall; mixed workflows harder to remember than never/always using AI; and people tend to be overconfident about their own performance.

Can you remember which ideas & sentences were your own and which were generated with AI?

In a controlled study (n=184), we found that AI use significantly reduces the accuracy of content attribution after one week.

#CHI2026 preprint & numbers in 🧵

@robinwelsch.bsky.social @svengoller.bsky.social

2 weeks ago 9 3 1 0

I’m looking for #reviewers for any of the following submissions. Please contact me if you can review, and boost in any case. Thanks!

(Details in next post)

#LowResource #Turkic #NLP #NLProc #review

2 weeks ago 1 1 1 0

got reminded to turn on live captions in my Google Slides, which made me realize this vibecoded slide-deck hotness does not have a11y by default. would be interesting to see ppl add that in

2 weeks ago 0 0 0 0

Woohoo! Great new open access piece out in Internet Pragmatics by Kendra Calhoun: “This is so Vine coded”: Genre, nostalgia, and strategies of multimodal intertextuality on TikTok

www.jbe-platform.com/docserver/fu...

2 weeks ago 17 7 1 0

data scientists yes; coders seem less likely, esp. as APIs become designed for agents, not humans

3 weeks ago 0 0 1 0

Visibility into Gaza scam bots (since one just RT’d me)

3 weeks ago 0 0 0 0

this was my point at our CHIIR panel yesterday: chat mostly sucks and here is a huge opportunity to invent useful experiences!

3 weeks ago 1 1 1 1

Today we're releasing MolmoWeb, an open source agent that can navigate + complete tasks in a browser on your behalf.

Built on Molmo 2 in 4B & 8B sizes, it sets a new open-weight SOTA across four major web-agent benchmarks & even surpasses agents built on proprietary models. 🧵

3 weeks ago 46 8 2 0

sneaky fusion bois

3 weeks ago 0 0 0 0
Blind Usability Issues with Discussion Forums
> An interview study with 14 blind screen reader users explored auditory usability challenges in online discussion forums.
> Open coding analysis of 45-75 minute Zoom interviews revealed recurring interaction barriers and coping strategies.
> Monotonous narration reduced comprehension and engagement, making it harder to identify important information and maintain attention.
> Flat auditory output increased cognitive fatigue, with users reporting mental effort and the need to self-impose expressive interpretation.
> Users had to mentally segment continuous speech to track comment boundaries, follow thread flow, and retain discussion context.


For screen reader users reading discussion forums, “flat auditory output increased cognitive fatigue” findings from VoxVista #chiir2026

3 weeks ago 1 0 0 0

Claude would be 10x more useful for me if it could open email attachments and write Google Docs directly. I was shocked that the best way to create docs is ... having a LibreOffice tool create a docx that then gets converted to Google Docs (?!)
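(The direct route does exist, if I recall the Drive v3 API correctly: upload the docx with the target mimeType set to a native Google Doc and Drive converts it on upload. A minimal sketch, assuming `creds` is an already-authorized OAuth credential and the file names are made up:)

```python
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumes `creds` is an authorized google.oauth2 Credentials object,
# obtained elsewhere via the usual OAuth flow.
drive = build("drive", "v3", credentials=creds)

media = MediaFileUpload(
    "draft.docx",  # hypothetical local file
    mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document",
)
# Setting the *target* mimeType asks Drive to convert to a native Doc.
doc = drive.files().create(
    body={"name": "Draft", "mimeType": "application/vnd.google-apps.document"},
    media_body=media,
    fields="id,webViewLink",
).execute()
print(doc["webViewLink"])
```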

3 weeks ago 0 0 0 0

And a new data release: French-Science-Commons, the largest open-access scientific corpus in French, with 1.25 million documents / 42 million pages re-digitized with a VLM (dots ocr). huggingface.co/datasets/Ple...

3 weeks ago 46 13 4 1
For robust research, center values, not technology Large language models are interesting, but linguistics and cognitive science should be cautious about centering any new technology as a magic bullet. Doing so reinforces the historically “narrow focus...

New! For robust research, center values, not technology, w/ @nerdpro.bsky.social zenodo.org/records/1894...

Our commentary on Futrell/Mahowald’s “How linguistics stopped worrying and learned to love the LMs”, forthcoming in BBS

1 month ago 36 11 1 1
Ambient Co-presence Creating a subtle, peripheral, and synchronous sense of shared space and context on the web

as AI in science moves formerly collaborative tasks like ideation into a more personal space, hmw incorporate ambient awareness of those processes? maggieappleton.com/ambient-copr...

3 weeks ago 1 0 0 0

MSFT has a fine tradition of being picky about brand language

for a hot moment we had to refer to Word 2003 as Microsoft Office System 2003 Word

Windows Phone 7 was not to be abbreviated as WP7

3 weeks ago 2 0 0 0
What are the differences? A comparative study of generative artificial intelligence translation and human translation of scientific texts - Humanities and Social Sciences Communications

I think this paper reveals interesting subtle things about Chinese student translators as well as LLMs

“What are the differences? A comparative study of generative artificial intelligence translation and human translation of scientific texts”

www.nature.com/articles/s41...

4 weeks ago 0 0 0 0
The human knowledge loophole in the 'bitter lesson' for LLMs | ICLR Blogposts 2026 Are LLMs a proof that the 'bitter lesson' holds for NLP? Perhaps the opposite is true: they work due to the scale of human data, and not just computation.

If you work in ML, you may have heard of Sutton's 'bitter lesson' (the idea that compute generally beats human knowledge in methods dev).

🔥 Hot take: large language models are *not* a case of this. Nor can there be such a method for NLP.

ICLR'26 blog: iclr-blogposts.github.io/2026/blog/20...

4 weeks ago 17 5 2 1