learning changes with new technology; that doesn't have to mean it isn't present anymore. It might just need to happen in other places.
www.cell.com/trends/cogni...
Posts by Dan Liebling
Six-panel composite figure. Caption: Interactive artifacts always rely on people’s interpretive and interactional practices. Row-wise from top left to bottom right: A. Aegeus consults the oracle at Delphi (cup from Vulci, 440-430 BCE). B. Byzantine mosaic depicting the zodiac, from the floor of the 6th century CE Beth Alpha synagogue. C. One-sided sense-making in an experimental psychotherapy session (McHugh 1968). D. Still from a BBC documentary showing a person interacting with ELIZA via a computer terminal, late 1960s. E. Researchers interacting with the PARC copier (Suchman 2007 [1987]). F. Screenshot of a large language model chat interface, 2026.
New! Interactional foundations for critical AI literacies doi.org/10.5281/zeno...
Why do Anthropic engineers talk to Claude the way a witch-doctor talks to his potions? How is prompt engineering like spider divination? Can one reason without reasons?
ft. Lovelace, Adorno, Suchman, Weizenbaum & many more ☺️
Extremely important analysis. Yes, people can be lazy and incurious. But also, some of this cognitive offloading is a rational response to existing within SO many systems that our human brains and bodies were not built for.
every figure generator should have a --no_cute_robots flag
genai scientific illustration huggingface.co/papers/2601.... -- based on the suggestions from Semantic Scholar, this is an active field
NASA isn't why the US doesn't have universal healthcare, or a social safety net. The US doesn't have those things because politicians with the power to provide them choose specifically not to (with varying levels of support from voters). Enthusiasm for human spaceflight doesn't drive that choice.
truly with 30+ chat rooms at work we have reinvented email and forums in the worst way possible
🤖 “A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs” — examines how people with mental health conditions use LLMs, highlighting benefits, situational use, and clear limitations.
📕 arxiv.org/abs/2602.00402
(4/7)
I assume Gemini has a tiny generative model that makes these since they sometimes seem to include relevant context, but I’m not actually sure how it works.
in a serious take, I do think the spinner is a tiny opportunity for interpretable AI. I assume when it says things like “pursuifying pursuits” people know it’s fake, but when it says “analyzing impact of X on Y” is it really doing that, and how do those messages affect perception of the final output?
great example of a feature copied indiscriminately that imo has low value for users
Partial screenshot of Claude app with recipe ingredients that are scalable to the number of servings
Cute “cooking mode” in Claude app that scales recipes up or down and has timers. But US units don’t round or change order of magnitude, so you wind up with silly quantities like 0.91 lbs of chicken or 12 tsp
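The rounding gripe above is mechanizable. A minimal Python sketch of what unit normalization could look like — the function name, conversion table, and “friendly quantity” threshold are all my own assumptions, not Claude’s actual logic, and weight units like lbs would need their own table:

```python
from fractions import Fraction

# Hypothetical sketch: scale a US volume quantity, then re-express it
# in the largest unit that yields a "friendly" amount, so that e.g.
# 12 tsp is reported as 1/4 cup instead.
TSP_PER = {"tsp": Fraction(1), "tbsp": Fraction(3), "cup": Fraction(48)}

def scale_and_normalize(amount, unit, factor):
    """Scale a volume and pick the largest unit giving a quarter-multiple."""
    total_tsp = Fraction(str(amount)) * TSP_PER[unit] * Fraction(str(factor))
    for u in ("cup", "tbsp", "tsp"):  # try the largest unit first
        qty = total_tsp / TSP_PER[u]
        # "friendly" = at least 1/4 of the unit and an exact multiple of 1/4
        if qty >= Fraction(1, 4) and (qty * 4).denominator == 1:
            return float(qty), u
    return float(total_tsp), "tsp"  # fall back to raw teaspoons
```

With this, tripling 4 tsp gives `(0.25, "cup")` rather than the “12 tsp” the post complains about, while quantities that don’t promote cleanly stay in teaspoons.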
Illustration showing the overall study flow: Left part shows Phase 1 with a user and chatbot working on ideas and elaborations for the example problem "How might we solve the problem of plastic waste in the ocean?" Text says that participants have to come up with five ideas in 1-3 keywords and write elaborations in one sentence each. An arrow with annotation ("One week later") points to a second illustration of a user with question-marked thought bubbles. Text says that in this Phase 2, participants were asked: Did you work on this? Source of idea? Source of elaboration? Further text says that they were also asked this for unseen items (so-called distractors). Following another arrow to the right is a box with the title "Key findings" and three bullets: Negative impact of AI on source memory overall; mixed workflows harder to remember than never/always using AI; and people tend to be overconfident about their own performance.
Can you remember which ideas & sentences were your own and which were generated with AI?
In a controlled study (n=184), we found that AI use significantly reduces the accuracy of content attribution after one week.
#CHI2026 preprint & numbers in 🧵
@robinwelsch.bsky.social @svengoller.bsky.social
I’m looking for #reviewers for any of the following submissions. Please contact me if you can review, and boost in any case. Thanks!
(Details in next post)
#LowResource #Turkic #NLP #NLProc #review
got reminded to turn on live captions in my Google Slides, which made me realize this vibecoded slide-deck hotness doesn't have a11y by default. would be interesting to see ppl add that in
Woohoo! Great new open access piece out in Internet Pragmatics by Kendra Calhoun: “This is so Vine coded”: Genre, nostalgia, and strategies of multimodal intertextuality on TikTok
www.jbe-platform.com/docserver/fu...
data scientists yes, coders seem less likely esp as APIs become designed for agents not humans
Visibility into Gaza scam bots (since one just RT’d me)
this was my point at our CHIIR panel yesterday: chat mostly sucks and here is a huge opportunity to invent useful experiences!
Today we're releasing MolmoWeb, an open source agent that can navigate + complete tasks in a browser on your behalf.
Built on Molmo 2 in 4B & 8B sizes, it sets a new open-weight SOTA across four major web-agent benchmarks & even surpasses agents built on proprietary models. 🧵
sneaky fusion bois
Blind Usability Issues with Discussion Forums
> An interview study with 14 blind screen reader users explored auditory usability challenges in online discussion forums.
> Open coding analysis of 45-75 minute Zoom interviews revealed recurring interaction barriers and coping strategies.
> Monotonous narration reduced comprehension and engagement, making it harder to identify important information and maintain attention.
> Flat auditory output increased cognitive fatigue, with users reporting mental effort and the need to self-impose expressive interpretation.
> Users had to mentally segment continuous speech to track comment boundaries, follow thread flow, and retain discussion context.
For screen reader users reading discussion forums, “flat auditory output increased cognitive fatigue” findings from VoxVista #chiir2026
Claude would be 10x more useful for me if it could open email attachments and write Google Docs directly. I was shocked that the best way to create docs is for the LibreOffice tool to create a docx that then gets converted to Google Docs (?!)
And a new data release: French-Science-Commons, the largest open-access scientific corpus in French, comprising 1.25 million documents / 42 million pages re-digitized with a VLM (dots.ocr). huggingface.co/datasets/Ple...
New! For robust research, center values, not technology, w/ @nerdpro.bsky.social zenodo.org/records/1894...
Our commentary on Futrell/Mahowald "How linguistics stopped worrying and learned to love the LMs”, forthcoming in BBS
as AI in Science situates formerly collaborative tasks like ideation into a more personal space, how might we incorporate ambient awareness of those processes? maggieappleton.com/ambient-copr...
MSFT has a fine tradition of being picky about brand language
for a hot moment we had to refer to Word 2003 as Microsoft Office System 2003 Word
Windows Phone 7 was not to be abbreviated as WP7
I think this paper reveals interesting subtle things about Chinese student translators as well as LLMs
“What are the differences? A comparative study of generative artificial intelligence translation and human translation of scientific texts”
www.nature.com/articles/s41...
If you work in ML, you may have heard of Sutton's 'bitter lesson' (the idea that compute generally beats human knowledge in methods dev).
🔥 Hot take: large language models are *not* a case of this. Nor can there be such a method for NLP.
ICLR'26 blog: iclr-blogposts.github.io/2026/blog/20...