Posts by Mattias Rost
The inaugural issue of ACM AI Letters is now published! AILET is envisioned to become the premier rapid-publication venue for impactful, concise, and timely communications in AI. Check out the thought-provoking articles at dl.acm.org/toc/ailet/20.... @rost.me @richardjeanso.bsky.social @acm.org
Abstract Large language models (LLMs) are changing how we interact with computers. As they become capable of generating software dynamically, they invite a fundamental rethinking of the computer’s role in human activity. In this conceptual paper, we introduce LLM-mediated computing: a paradigm in which interaction is no longer structured around fixed applications, but emerges in real-time through human intent and LLM interpretation. We make three contributions: (1) we articulate a new interaction metaphor of reflective conversation to guide future design, (2) we use the lens of postphenomenology to understand the human-LLM-computer relation, and (3) we propose a new mode of computing based on co-disclosure, in which the computer is constituted in use. Together, they define a new mode of computing, provide a lens to analyze it, and offer a metaphor to design with.
Intriguing CHI '26 paper from @rost.me, "Co-Disclosing the Computer: LLM-Mediated Computing through Reflective Conversation".
PDF: rost.me/assets/publi...
Hi Katherine! The link gives me a 404, I'm afraid.
One motivation for this paper:
Treating LLM outputs as static artifacts hides the dynamics that produce them: commitments, stabilizations, and path dependencies during generation.
Proto-interpretation is an attempt to name and study that middle ground.
We often describe LLMs as “next-token predictors.”
That description is correct, and deeply insufficient.
In a new AI Letters paper, I argue for proto-interpretation: understanding inference as a temporally structured interpretive process.
dl.acm.org/doi/10.1145/...
Exactly, the outputs are not invertible. Just the hidden last-layer activations before the linear transformation to token logits. I don't read this as saying that the weights carry information (such as proprietary data), but rather that they form a structure making the input-"output" mapping 1-to-1.
Would love to hear your take on this. I think the mathematical analysis is interesting, but I fail to see what interesting work it opens up. Not sure when you'd have access to internal activations but not the input. It's a transformer after all... It transforms.
LLM, Aletheia, and Poiesis. Offering yet another way to view LLMs that rejects the instrumentalist view.
rost.me/2025/07/18/l...
New post out about LLMs as relational. How language is a medium for thought, not a carrier of information.
rost.me/2025/07/15/L...
We’re hosting a workshop at Aarhus 2025:
The End of Programming (as we know it)
Rethinking coding in the age of AI—beyond productivity, toward new practices, tools & ideas.
📅 Aug 19 (hybrid)
📝 Submit by June 27
🔗 mi.sh.se/~shmnjn07/th...
Good question. To me, e.g. diffusion models output representations. LLMs on the other hand produce relational responses. That said, image generation can, through practice, become part of an interpretive, iterative process.
If it were just about generation, it could output random strings. But we care about what it outputs. We engage and respond (and so does it). That co-shaped process is interpretive, not merely generative.
Sure, the model doesn’t carry memory or intent. But that’s not the point. Even if it were generating an endless transcript, the moment a human reads it, the meaning takes shape in relation to them. Even a single prompt-response is a relational act of sense-making.
Yes, the model’s inference is stateless. But the turn-taking happens in the interaction, and it’s within that interaction that sense-making emerges. It's where the relational framing comes in. Interpretation isn’t something internal but it’s something that's enacted between human and model.
Generative puts the emphasis on output, whereas interpretive puts the emphasis on input.
Short post by @rost.me arguing that calling LLMs "generative AI" is misleading nowadays, since generating plausible text is really only one of many things they do (he proposes the term "interpretive AI"). rost.me/2025/05/27/i...
QwQ on stacking three eggs. It's showing a lot of humour in its reasoning and is certainly taking its time to reach a final solution. gist.github.com/rrostt/be0fa...
🤖 From narratives of AI to its ethical landscape - @coeckelbergh.bsky.social takes us through a philosophical journey on the #HCAI Podcast. What does responsible #AI development really mean? 🔗 Tune in at hcai.se/podcast/24-1...
#ethicalAI #humancenteredAI #responsibleAI #aidemocracy
Aren't we all the dog?
Sad dog in space
Throwback to our episode on Human-centered AI-mediated Communication with @informor.bsky.social. Check out/listen to the full episode at hcai.se/podcast/24-0... #humancenteredai #humancentered #hcai #communication
Some skills, like doing my taxes or filling out travel reimbursement requests, I will happily leave for cognitive atrophy. Others, not so much. In this blog post, I ruminate on how HCAI tools should teach people to fish rather than feed them.
niklaselmqvist.medium.com/teach-a-man-...
We had a great discussion with @nelmqvist.bsky.social about how AI, HCI, and data visualization can work together to develop tools that support us. Check out the episode ⬇️
🚀 What role does #AI play in amplifying and augmenting human abilities? @nelmqvist.bsky.social joins the #HCAI Podcast to discuss the intersection of data visualization, #HCI, and AI tools that empower, not replace, us.
🔗 Listen now hcai.se/podcast/24-1...
#humancenteredai #dataviz
What have you tried using? I've never had as much fun coding as when I'm assisted by LLMs.
Pilot hungover I'm sure... :)
Did they give a reason for cancelling?
New episode of the HCAI podcast! We had the privilege to speak to @j2bryson.bsky.social on governance, policymaking, regional control, and philosophy. You'll find links to the episode below #HCAI #AIAct #DSA #AIGovernance #AIPolicy #GDPR #CCPA #humancenteredartificialintelligence #humancenteredai
A few weeks ago, we had a great discussion with Natali Helberger about human-centricity, transparency, policy, and the use of AI. The episode is out now. hcai.se/podcast/24-0...
#humancenteredartificialintelligence #hcai #transparency #transparentai #aigovernance