
Posts by Linas Nasvytis


Shoutout again to the amazing advisor team of
@gershbrain.bsky.social and @fierycushman.bsky.social!

Full paper: osf.io/preprints/ps...

7 months ago

This has implications for AI and cognitive modeling:

When designing systems to reason socially, we shouldn’t assume full inference is always used — or always needed.

Humans strike a balance between accuracy and efficiency.

7 months ago

We model this in a Bayesian framework, comparing 3 hypotheses:
1. Full ToM: preference + belief (inferred from environment) → action
2. Correspondence bias: preference → action
3. Belief neglect: preference + environment (ignoring beliefs) → action

People flexibly switch depending on context!
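A hedged sketch of how the three strategies can diverge, using made-up numbers and a toy scenario rather than the paper's actual model: gem A sits behind a door that truly opens, but the agent falsely believes it is locked, and walks to gem B.

```python
# Toy contrast of the three inference strategies. The likelihoods are
# illustrative stand-ins for what a planning model would compute.
# Scenario: gem A is behind a door that truly opens, but the agent
# falsely believes the door is locked; the agent walks to gem B.

prior = {"prefers_A": 0.5, "prefers_B": 0.5}

def infer(likelihood):
    """Posterior over preferences via Bayes' rule."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# 1. Full ToM conditions on the agent's false belief ("door locked"):
#    even an A-preferrer would settle for B, so the action is uninformative.
full_tom = infer({"prefers_A": 0.9, "prefers_B": 0.9})

# 2. Correspondence bias maps action straight to preference,
#    ignoring doors and beliefs alike.
corr_bias = infer({"prefers_A": 0.1, "prefers_B": 0.9})

# 3. Belief neglect uses the TRUE environment (the door opens) but skips
#    the belief: an A-preferrer would simply have opened the door, so
#    heading to B looks like strong evidence for preferring B.
neglect = infer({"prefers_A": 0.1, "prefers_B": 0.9})

print(full_tom["prefers_B"], corr_bias["prefers_B"], neglect["prefers_B"])
```

In this scenario belief neglect and correspondence bias happen to agree; they come apart when the door is truly locked, since belief neglect (unlike correspondence bias) still respects physical constraints.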

7 months ago

With minimal training, participants started engaging in full joint inference over beliefs and preferences.

But without that training, belief neglect was common.

This suggests people adaptively allocate cognitive effort, depending on task structure.

7 months ago

Belief neglect is different from correspondence bias:

People DO account for environmental constraints (e.g., locked doors).

But they skip reasoning about what the agent believes about the environment.

It’s a mid-level shortcut.

7 months ago

We find that, by default, people often neglect the agent’s beliefs.

They infer preferences as if the agent’s beliefs were correct — even when they’re not.

This is what we call belief neglect.

7 months ago

In our task, participants watched agents navigate grid worlds to collect gems.

Sometimes, gems were hidden behind doors. Participants were told that some agents falsely believed that they couldn't open these doors.

They then had to infer which gem the agents preferred.

7 months ago

The question we ask is: When do people actually engage in full ToM reasoning?

And when do they fall back on faster heuristics?

7 months ago

Theory of mind (ToM) — reasoning about others’ beliefs and desires — is central to human intelligence.

It's often framed as Bayesian inverse planning: we observe a person's action, then infer their beliefs and desires.

But that kind of reasoning is computationally costly.
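In its simplest form, inverse planning is just Bayes' rule over the agent's goal given an observed action. A minimal sketch with illustrative numbers (not from the paper):

```python
# Minimal Bayesian inverse planning: observe one action, infer the goal.
# The likelihoods stand in for a noisy-rational planner's action
# probabilities; all numbers are made up for illustration.

prior = {"red_gem": 0.5, "blue_gem": 0.5}        # P(goal)
likelihood = {"red_gem": 0.8, "blue_gem": 0.3}   # P(action="step left" | goal)

# Bayes' rule: P(goal | action) ∝ P(action | goal) * P(goal)
unnorm = {g: prior[g] * likelihood[g] for g in prior}
z = sum(unnorm.values())
posterior = {g: p / z for g, p in unnorm.items()}

print(posterior)  # stepping left favors the red gem
```

The cost lives in the likelihood term: evaluating P(action | goal, belief) means replanning under every candidate belief–goal combination, which is exactly the work the shortcuts below avoid.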

7 months ago

🚨New paper out w/ @gershbrain.bsky.social & @fierycushman.bsky.social from my time @Harvard!

Humans are capable of sophisticated theory of mind, but when do we use it?

We formalize & document a new cognitive shortcut: belief neglect — inferring others' preferences as if their beliefs are correct🧵

7 months ago