
Posts by David Barner


New pre-print alert! How do children learn magnitude words like "long" and "high", which often denote multiple domains? With @urvi.bsky.social and @drbarner.bsky.social, we find that children start with narrow meanings restricted to the labeled domain, before analogically extending

osf.io/ucxra_v1

1 day ago 30 8 0 0
Human Brain Organoids and Consciousness - Neuroethics This article proposes a methodological schema for engaging in a productive discussion of ethical issues regarding human brain organoids (HBOs), which are three-dimensional cortical neural tissues crea...

Today in my CogSci/SciFi class we're talking about Greg Egan's Learning to be Me & this really fun paper on the ethics of human brain organoids & consciousness. What is the future of human-machine mental integration & the use of biological organisms for computing? link.springer.com/article/10.1...

4 days ago 4 0 1 0

Looks like Hungary won the election!

1 week ago 8 0 0 0

I’m sorry Larissa. Two big fails for this department. You and your work are awesome and deserve a home that values you.

1 week ago 14 0 1 0
Hungary 2026 — Election Tracker Two polling realities before elections in Hungary. Tracking the parliamentary election in Hungary in 2026.

Results soon in Hungary. hungary-2026.com

1 week ago 2 0 0 0

(they're all count nouns?)

1 week ago 1 0 0 0
The visual learning lab at UCSD is attending the Cognitive Development Society 2026 conference. There are three poster presentations: Relating changes in children's descriptions and line drawings of visual concepts across childhood by Nicole Sahrling at poster session 1 on Friday April 10, Examining the precision of infants' visual concepts by leveraging vision-language models and automated gaze coding by Tarun Sepuri at poster session 2 on Friday April 10, and Children's integration of visual concepts via line drawings of hybrid categories by Haoyu Du at poster session 4 on Saturday April 11.

Exciting work from the Visual Learning Lab at UCSD will be at @cogdevsoc.bsky.social #CDS2026! Come chat with us about children's drawings, visual knowledge, and leveraging computational models for developmental research.

www.vislearnlab.org

1 week ago 22 3 0 1

Awesome view from Artemis II this morning.

2 weeks ago 15149 2333 972 454

You mean that thing that created the tech that is currently the only thing holding together the stock market?

2 weeks ago 15 0 2 0
Postdoctoral Research Fellow Postdoctoral Research Fellowship Applications are invited for a 1-year postdoctoral research fellowship in the Department of Psychology at Wesleyan University (Middletown, CT). The position may be ren...

Postdoctoral fellowship in developmental science at Wesleyan University. Joint in H. Barth & R. T. Dubar labs. Particular focus on undergrad research mentoring. 1 year, extendable to 2 years. Share and apply! wesleyan.wd5.myworkdayjobs.com/en-US/career...

2 weeks ago 18 25 1 0

Study shows plants represent number AND time, so seems likely they have a Language of Pot.

2 weeks ago 4 0 0 0
Scientists Discover Plants Can “Count” – and May Be Smarter Than We Thought New research challenges the long-held assumption that brains are required for learning, suggesting plants may process information in unexpected ways.

The approximate number system is REEEEEEALLY evolutionarily ancient. Won't buy it until I see studies of seedlings. scitechdaily.com/scientists-d...

2 weeks ago 2 1 0 2

Yes - Pinker's book is an effort to play up the innateness claims of Chomsky, drawing on psychological evidence, and is a precursor to the current debate re: LMs and what they tell us about the mind. Ancestors of LMs existed back then, but were mostly called "neural nets" or connectionist models.

3 weeks ago 1 0 1 0

Oh true haha. Poverty of the Stimulus argument: the idea from Chomsky that language is not learnable on the basis of available evidence.

3 weeks ago 1 0 1 0

Some interesting thoughts about our latest episode from @drbarner.bsky.social!

disi.org/what-can-ai-...

3 weeks ago 7 4 1 0

Thanks - and apologies again for the weird bsky quote post fail!

3 weeks ago 1 0 0 0

(bsky failed me w/ the attempted quote post of the @manymindspod.bsky.social pod).

3 weeks ago 1 0 0 0

Not sure why this didn't quote post properly - will try again: bsky.app/profile/many...

3 weeks ago 0 0 0 0

In the case of LMs, it seems like the reverse is true: the engineering product is here, but we are very far from knowing whether it is a good model of how the brain implements language.
Anyway, nice work as always @kensycoop.bsky.social! Fun pod!

3 weeks ago 5 0 2 0

Specifically, in the case of nukes the engineering solution arose only once the physics was well understood (i.e., our physical theory was sufficiently precise that we could make predictions about exactly how atomic particles would behave when split, & use this knowledge to engineer weapons).

3 weeks ago 4 0 1 0

The second interesting point that came up is the analogy between nuclear weapons and physics. This wasn't the intended implication of the discussion, but there is a very important dis-analogy between these (well, several, I'm sure).

3 weeks ago 3 0 1 0

But it leaves open every weaker version of PoS (kids don’t receive enough data to explain acquisition by input alone), while also leaving completely open the empirical facts of how language actually works, and how it is learned. I think the impact on a priori arguments is actually quite narrow.

3 weeks ago 4 0 1 0

Crucially what they can tell us is exactly as much as we would learn from discovering an alien species that could learn language given all our data. But how much is this? It addresses one quite extreme version of the PoS arg – that human languages CANNOT be learned at all from any amount of input.

3 weeks ago 5 0 2 0

The important connection to LMs is that humans & machines might accomplish the same behaviors via completely different solutions & in many (many) cases it would be hard to get evidence to tell them apart. I think this is important b/c the topic of the pod was what LMs tell us about the human mind.

3 weeks ago 3 0 1 0

It’s that many (many) meanings could correspond to a particular act of reference, and therefore so could many (many) internal states. So many, in fact, that Quine thought there wasn’t much sense in trying to study those internal states. And so… behaviorism.

3 weeks ago 2 0 1 0

First, I loved Mike’s link to Quine’s topiary. Quine is often misunderstood (by word learning researchers) as arguing that word learning is hard because reference is ambiguous. The problem for Quine is quite a bit different though.

3 weeks ago 3 0 1 0

I found this discussion thought provoking, as somebody who is not quite so boosterish as @mcxfrank.bsky.social & @glupyan.bsky.social, though also an advocate of IRS accounts of meaning (& therefore excited about implications for that view). Two ideas in particular stuck out to me as interesting.

3 weeks ago 7 0 2 2

Infant literature redux.

3 weeks ago 12 0 0 0

Yep, same here. We also caught some bots that made it past their checks, which does not inspire confidence.

3 weeks ago 4 1 3 0

Colleague attended an LLM/NLP panel and said it was basically incomprehensible. I replied, "they kept talking about vector space didn't they" and they were all "yesss so much vector space" and I feel like this is proof people need to learn how to talk about the forest, not just the trees.

4 weeks ago 46 5 3 1