
Posts by Iwan Williams

[7/7] In short: text-bounded LLMs could, in principle, represent the real world. And it's possible that structural correspondences help them to do so.

But we need further empirical work (taking care to establish exploitation!) to rule out more deflationary explanations of LLM behaviour.

1 month ago

[6/7] A complication is that this requires selecting appropriate task-success criteria. Different training procedures may warrant different criteria. This might result in different exploited correspondences... which would ground different contents!

1 month ago

[5/7] I discuss some empirical evidence that the first condition is met — LLM processing may be sensitive to activation vector offsets.

To test the second condition, I argue that we need targeted intervention experiments that modulate candidate exploited correspondences independently.
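As a toy illustration of that first condition (everything here is hypothetical: a linear stand-in "model", made-up numbers, no real LLM API), one can check whether a system's output is causally sensitive to shifts along a candidate activation direction:

```python
# Toy intervention "experiment": a stand-in model whose output is a
# linear readout of a hidden state. We probe causal sensitivity by
# adding an offset vector to the hidden state and checking whether
# the output shifts accordingly. All names/numbers are illustrative.

def readout(hidden, weights):
    # Stand-in for downstream processing of an activation vector.
    return sum(h * w for h, w in zip(hidden, weights))

def intervene(hidden, offset, scale=1.0):
    # Shift the activation along a candidate direction.
    return [h + scale * o for h, o in zip(hidden, offset)]

hidden = [0.0, -0.5, 1.0]   # hypothetical activation vector
weights = [1.0, 0.0, 2.0]   # hypothetical readout direction
offset = [1.0, 0.0, 0.0]    # candidate "meaningful" offset

base = readout(hidden, weights)
shifted = readout(intervene(hidden, offset, scale=3.0), weights)
print(shifted - base)  # → 3.0: output tracks the intervention
```

In a real experiment the readout would be the LLM's downstream behaviour and the offset a direction extracted from its activations; the sketch only captures the logic of the test.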

1 month ago

[4/7] Structural correspondences are cheap. For one to genuinely ground representation, the system must exploit it. This requires two things:
(i) processing must be causally sensitive to the relevant internal structure, and
(ii) the correspondence must contribute to successful task performance.

1 month ago

[3/7] When an LLM's internal states mirror real-world geography, maybe that's just an artefact of the fact that those states track the statistics of geographic language (which happens to approximately mirror the actual geography).

1 month ago

[2/7] Some researchers have found that LLMs' internal states structurally mirror real-world domains — colour spaces, spatial layouts, temporal orderings. But does finding such a correspondence mean the LLM represents those things? I argue: not so fast.
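For intuition, here is a minimal sketch of how such a correspondence is often quantified: a rank correlation between pairwise distances in activation space and pairwise distances in the world domain. The data below are toy 2-D points, not real activations (which would be high-dimensional), and the function name is mine, not from any paper:

```python
from itertools import combinations
from math import dist

def rank(xs):
    # Ranks of xs (assumes no ties, as in the toy data below).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def correspondence_score(states, world):
    # Spearman rank correlation between pairwise distances in
    # activation space and pairwise distances in the world domain.
    pairs = list(combinations(range(len(states)), 2))
    a = rank([dist(states[i], states[j]) for i, j in pairs])
    b = rank([dist(world[i], world[j]) for i, j in pairs])
    n = len(pairs)
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical "activations" for three cities, plus their map
# coordinates; the internal geometry here mirrors the map exactly.
acts = [(0.0, 0.0), (2.0, 0.0), (1.8, 3.0)]
coords = [(0.0, 0.0), (1.0, 0.0), (0.9, 1.5)]
print(correspondence_score(acts, coords))  # → 1.0
```

A score near 1 shows only that the structure is mirrored; as the thread argues, it takes more (exploitation) to show the model represents the domain.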

1 month ago

[1/7] My paper "Can structural correspondences ground real-world representational content in large language models?" is now out at Mind & Language.

Q: Can text-only LLMs represent things in the real world, even though they never directly interact with it?

onlinelibrary.wiley.com/share/author...

1 month ago
#26 Iwan Williams: Do Language Models Have Intentions?

I recently sat down with Sam Bennett on the AITEC podcast to talk about my thoughts on intentions in Large Language Models. This was a fun conversation!

👂 Listen here:
open.spotify.com/episode/3sm9...

📃 Read the preprint here:
philpapers.org/rec/WILIRI-4

2 months ago
Artificial General Intelligence: A Philosopher’s Manifesto
Public talk by Anandi Hattiangadi, Professor of Philosophy at Stockholm University.

Join us for a public talk by Prof. Anandi Hattiangadi (Stockholm University), hosted by the Center for Philosophy of AI (University of Copenhagen).

Artificial General Intelligence: A Philosopher’s Manifesto

📅 Dec 10, 18:30-20:00
📍HUSET, Rådhusstræde 13, 1466 Copenhagen.

Register: cpai.ku.dk/events/artif...

4 months ago
Center for Philosophy of AI: Launch
Half-day workshop

🧠🤖 Join us for the launch of the Center for Philosophy of AI (University of Copenhagen)!

📅 Sept 3, 13:00-17:00
📍CPH Conference

Keynotes on philosophy of LLMs by @parismarx.com, Ellie Pavlick, @dcm.social.sunet.se.ap.brid.gy, Tom Sterkenburg & @zhijingjin.bsky.social

Register: cpai.ku.dk

8 months ago

Similarly, current advanced chatbots exhibit some, but not all, of the capacities characteristic of full-fledged assertion. And some capacities they possess partially.

Our take? We should think of current LLM-driven chatbots as proto-asserters.

[5/5]

1 year ago

We need a different perspective.

Take young children: toddlers lack some of the cognitive capacities exercised by adult asserters, but many features are partially present.

In this phase, a child's speech is not (exactly) assertion but it's not *not* assertion: they are proto-asserters!

[4/5]

1 year ago

Some have tried to "split the difference" between the "pro" and "con" cases.

We argue that previous attempts to do this – treating chatbots as asserters in a merely fictional sense, or holding that they only make "proxy"-assertions on behalf of humans – are unsatisfactory.

[3/5]

1 year ago

We identify some considerations in favour of a "yes" answer, then review recent objections to the idea of chatbot assertion.

We argue that neither flat rejection nor straightforward endorsement is compelling. So how should we think about chatbot assertion?

[2/5]

1 year ago
Chatting with bots: AI, speech acts, and the edge of assertion
This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds ...

My paper with Tim Bayne "Chatting with bots: AI, speech acts, and the edge of assertion" is now up at Inquiry.

Our question: can large language model-powered chatbots make assertions (can they state, claim or affirm things)?

[1/5]

www.tandfonline.com/doi/full/10....

1 year ago