
Posts by Mattias Rost

💯

1 week ago 1 0 0 0
Preview
AILET: Vol 1, No 1

The inaugural issue of ACM AI Letters is now published! AILET is envisioned to become the premier rapid-publication venue for impactful, concise, and timely communications in AI. Check out the thought-provoking articles at dl.acm.org/toc/ailet/20.... @rost.me @richardjeanso.bsky.social @acm.org

3 weeks ago 2 1 0 0
Abstract

Large language models (LLMs) are changing how we interact with computers. As they become capable of generating software dynamically, they invite a fundamental rethinking of the computer’s role in human activity. In this conceptual paper, we introduce LLM-mediated computing: a paradigm in which interaction is no longer structured around fixed applications, but emerges in real-time through human intent and LLM interpretation. We make three contributions: (1) we articulate a new interaction metaphor of reflective conversation to guide future design, (2) we use the lens of postphenomenology to understand the human-LLM-computer relation, and (3) we propose a new mode of computing based on co-disclosure, in which the computer is constituted in use. Together, they define a new mode of computing, provide a lens to analyze it, and offer a metaphor to design with.


Intriguing CHI '26 paper from @rost.me, "Co-Disclosing the Computer: LLM-Mediated Computing through Reflective Conversation".

PDF: rost.me/assets/publi...

1 month ago 9 1 1 0

Hi Katherine! The link gives me 404 I'm afraid.

2 months ago 0 0 1 0

One motivation for this paper:

Treating LLM outputs as static artifacts hides the dynamics that produce them: commitments, stabilizations, and path dependencies during generation.

Proto-interpretation is an attempt to name and study that middle ground.

2 months ago 1 0 0 0
Preview
Proto-Interpretation: The Temporality of Large Language Model Inference | ACM AI Letters We show that autoregressive generation in large language models exhibits a temporal structure: each token is not only conditioned on the past but also reshapes the future continuation space. We call this process proto-interpretation: the probabilistic ...

We often describe LLMs as “next-token predictors.”

That description is correct, and deeply insufficient.

In a new AI Letters paper, I argue for proto-interpretation: understanding inference as a temporally structured interpretive process.
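The "reshapes the future continuation space" idea can be made concrete with a toy sketch. This is a hypothetical bigram model, purely illustrative and not the paper's formalism: committing to one token prunes whole branches of possible continuations.

```python
# Toy sketch (illustrative only): a made-up bigram "LLM" whose next-token
# options depend on the token just committed, so each choice reshapes the
# space of reachable continuations.
model = {
    "the": {"cat", "dog"},
    "cat": {"sat", "ran"},
    "dog": {"ran", "barked"},
    "sat": set(), "ran": set(), "barked": set(),
}

def continuations(token):
    """All tokens still reachable after committing to `token`."""
    reached, frontier = set(), [token]
    while frontier:
        for nxt in model[frontier.pop()]:
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

# Before committing: from "the", both animal branches remain open.
print(continuations("the"))   # {'cat', 'dog', 'sat', 'ran', 'barked'}
# After committing to "cat": the "dog"/"barked" branch is gone.
print(continuations("cat"))   # {'sat', 'ran'}
```

The asymmetry is the point: each emitted token not only depends on the past but also narrows what futures remain possible.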

dl.acm.org/doi/10.1145/...

2 months ago 3 0 1 0

Exactly, the outputs are not invertible. Just the hidden last-layer activations before the linear transformation to token logits. I don't read this as saying that the weights carry information (such as proprietary data), but rather that they form a structure making the input-"output" mapping 1-to-1.
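A minimal numpy sketch of that point (illustrative only, not any specific model): picking a token from the logits collapses many distinct activations onto the same output, while the linear map from last-layer activations to logits is 1-to-1 whenever the stand-in unembedding matrix has full column rank.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 4, 10                      # hidden size and vocab size (made up)
W = rng.standard_normal((vocab, d))   # stand-in unembedding matrix

h = rng.standard_normal(d)            # a last-layer activation vector

# Token output is not invertible: scaling h changes the activations,
# but argmax over the (scaled) logits picks the identical token.
tok = int(np.argmax(W @ h))
tok_scaled = int(np.argmax(W @ (2.0 * h)))
print(tok == tok_scaled)              # True: different state, same token

# The activations, however, are recoverable from the logits via the
# pseudoinverse, so h -> logits is 1-to-1 here.
h_rec = np.linalg.pinv(W) @ (W @ h)
print(np.allclose(h_rec, h))          # True
```

So the information loss happens at token selection, not in the linear transformation itself.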

5 months ago 0 0 1 0

Would love to hear your take on this. I think the mathematical analysis is interesting, but I fail to see what interesting work it opens up. I'm not sure when you would have access to internal activations but not the input. It's a transformer after all... It transforms.

5 months ago 1 0 0 0
Preview
Mattias Rost Researcher and Coder

LLM, Aletheia, and Poiesis. Offering yet another way to view LLMs that rejects the instrumentalist view.

rost.me/2025/07/18/l...

9 months ago 0 0 0 0
Preview
Mattias Rost Researcher and Coder

New post out about LLMs as relational. How language is a medium for thought, not a carrier of information.

rost.me/2025/07/15/L...

9 months ago 0 0 0 0

We’re hosting a workshop at Aarhus 2025:
The End of Programming (as we know it)
Rethinking coding in the age of AI—beyond productivity, toward new practices, tools & ideas.
📅 Aug 19 (hybrid)
📝 Submit by June 27
🔗 mi.sh.se/~shmnjn07/th...

10 months ago 1 2 0 0

Good question. To me, e.g. diffusion models output representations. LLMs on the other hand produce relational responses. That said, image generation can, through practice, become part of an interpretive, iterative process.

10 months ago 0 0 0 0

If it were just about generation, it could output random strings. But we care about what it outputs. We engage and respond (and so does it). That co-shaped process is interpretive, not merely generative.

10 months ago 0 0 0 0

Sure, the model doesn’t carry memory or intent. But that’s not the point. Even if it were generating an endless transcript, the moment a human reads it, the meaning takes shape in relation to them. Even a single prompt-response is a relational act of sense-making.

10 months ago 0 0 0 0

Yes, the model’s inference is stateless. But the turn-taking happens in the interaction, and it’s within that interaction that sense-making emerges. That's where the relational framing comes in. Interpretation isn’t something internal, but something enacted between human and model.

10 months ago 0 0 0 0

Generative puts the emphasis on output, whereas interpretive puts the emphasis on input.

10 months ago 0 0 0 0

Short post by @rost.me arguing that calling LLMs "generative AI" is misleading nowadays, since generating plausible text is really only one of many things they do (he proposes the term "interpretive AI"). rost.me/2025/05/27/i...

10 months ago 15 7 6 4
Preview
QwQ stacking three eggs on top of each other (GitHub Gist)

QwQ on stacking three eggs. It's showing a lot of humour in its reasoning and is certainly taking its time to reach a final solution. gist.github.com/rrostt/be0fa...

1 year ago 0 0 0 0
Preview
HCAI Podcast Episode 12 - Democratizing AI with Mark Coeckelbergh | Human-centered AI @ GU In this episode, we talk to Mark Coeckelbergh, Professor of Philosophy of Technology and Media at the University of Vienna, about the evolving landscape of AI and its implications for society. Mark br...

🤖 From narratives of AI to its ethical landscape - @coeckelbergh.bsky.social takes us through a philosophical journey on the #HCAI Podcast. What does responsible #AI development really mean? 🔗 Tune in at hcai.se/podcast/24-1...
#ethicalAI #humancenteredAI #responsibleAI #aidemocracy

1 year ago 16 6 0 0

Aren't we all the dog?

1 year ago 1 0 0 0
Sad dog in space

1 year ago 1 0 1 0
Video

Throwback to our episode on Human-centered AI-mediated Communication with @informor.bsky.social. Check out/listen to the full episode at hcai.se/podcast/24-0... #humancenteredai #humancentered #hcai #communication

1 year ago 8 2 0 0
Preview
Teach A Man To Fish: When can cognitive atrophy be a good thing?

Some skills, like doing my taxes or filling out travel reimbursement requests, I will happily leave for cognitive atrophy. Others, not so much. In this blog post, I ruminate on how HCAI tools should teach people to fish rather than feed them.

niklaselmqvist.medium.com/teach-a-man-...

1 year ago 4 1 0 0

We had a great discussion with @nelmqvist.bsky.social about how AI, HCI, and data visualization can work together to develop tools that support us. Check out the episode ⬇️

1 year ago 4 1 0 0
Preview
HCAI Podcast Episode 11 - Ubiquitous HCAI with Niklas Elmqvist | Human-centered AI @ GU In this episode, we discuss the evolving intersection of human-computer interaction and AI with Niklas Elmqvist, a professor at Aarhus University and expert in data visualization. Niklas shares his in...

🚀 What role does #AI play in amplifying and augmenting human abilities? @nelmqvist.bsky.social joins the #HCAI Podcast to discuss the intersection of data visualization, #HCI, and AI tools that empower, not replace, us.
🔗 Listen now hcai.se/podcast/24-1...
#humancenteredai #dataviz

1 year ago 6 5 0 1

What have you tried using? I've never had as much fun coding as when doing it assisted by LLMs.

1 year ago 1 0 0 0

Pilot hungover I'm sure... :)

1 year ago 0 0 0 0

Did they give a reason for cancelling?

1 year ago 0 0 1 0
Preview
HCAI Podcast Episode 6 - Governing AI with Joanna Bryson | Human-centered AI @ GU The sixth episode of the HCAI podcast in which we talk about artificial intelligence governance with Joanna Bryson.

New episode of the HCAI podcast! We had the privilege to speak to @j2bryson.bsky.social on governance, policymaking, regional control, and philosophy. You'll find links to the episode below #HCAI #AIAct #DSA #AIGovernance #AIPolicy #GDPR #CCPA #humancenteredartificialintelligence #humancenteredai

1 year ago 5 3 0 0
Preview
HCAI Podcast Episode 7 - Human-centered Transparency with Natali Helberger | Human-centered AI @ GU The seventh episode of the HCAI podcast in which we talk about transparency, interdisciplinarity, and use of technology with Natali Helberger.

A few weeks ago, we had a great discussion with Natali Helberger about human-centricity, transparency, policy, and use of AI. The episode is out now. hcai.se/podcast/24-0...
#humancenteredartificialintelligence #hcai #transparency #transparentai #aigovernance

1 year ago 2 2 0 0