New pre-print alert! How do children learn magnitude words like "long" and "high", which often denote multiple domains? With @urvi.bsky.social and @drbarner.bsky.social, we find that children start with narrow meanings restricted to the labeled domain, before analogically extending them to other domains.
osf.io/ucxra_v1
Posts by David Barner
Today in my CogSci/SciFi class we're talking about Greg Egan's "Learning to Be Me" & this really fun paper on the ethics of human brain organoids & consciousness. What is the future of human-machine mental integration & the use of biological organisms for computing? link.springer.com/article/10.1...
Looks like Hungary won the election!
I’m sorry Larissa. Two big fails for this department. You and your work are awesome and deserve a home that values you.
(they're all count nouns?)
The Visual Learning Lab at UCSD is attending the Cognitive Development Society 2026 conference, with three poster presentations:
- Relating changes in children's descriptions and line drawings of visual concepts across childhood (Nicole Sahrling, Poster Session 1, Friday April 10)
- Examining the precision of infants' visual concepts by leveraging vision-language models and automated gaze coding (Tarun Sepuri, Poster Session 2, Friday April 10)
- Children's integration of visual concepts via line drawings of hybrid categories (Haoyu Du, Poster Session 4, Saturday April 11)
Exciting work from the Visual Learning Lab at UCSD will be at @cogdevsoc.bsky.social #CDS2026! Come chat with us about children's drawings, visual knowledge, and leveraging computational models for developmental research.
www.vislearnlab.org
Awesome view from Artemis II this morning.
You mean that thing that created the tech that is currently the only thing holding together the stock market?
Postdoctoral fellowship in developmental science at Wesleyan University. Joint in H. Barth & R. T. Dubar labs. Particular focus on undergrad research mentoring. 1 year, extendable to 2 years. Share and apply! wesleyan.wd5.myworkdayjobs.com/en-US/career...
Study shows plants represent number AND time, so seems likely they have a Language of Pot.
The approximate number system is REEEEEEALLY evolutionarily ancient. Won't buy it until I see studies of seedlings. scitechdaily.com/scientists-d...
Yes - Pinker's book is an effort to play up the innateness claims of Chomsky, drawing on psychological evidence, and is a precursor to the current debate re: LMs and what they tell us about the mind. Ancestors of LMs existed back then, but were mostly called "neural nets" or connectionist models.
Oh true haha. Poverty of the Stimulus argument: the idea from Chomsky that language is not learnable on the basis of available evidence.
Some interesting thoughts about our latest episode from @drbarner.bsky.social!
disi.org/what-can-ai-...
Thanks - and apologies again for the weird bsky quote post fail!
(bsky failed me w/ the attempted quote post of the @manymindspod.bsky.social pod).
Not sure why this didn't quote post properly - will try again: bsky.app/profile/many...
In the case of LMs, it seems like the reverse is true: the engineering product is here, but we are very far from knowing whether it is a good model of how the brain implements language.
Anyway, nice work as always @kensycoop.bsky.social! Fun pod!
Specifically, in the case of nukes the engineering solution arose only once the physics was well understood (i.e., our physical theory was sufficiently precise that we could make predictions about exactly how atomic nuclei would behave when split, & use this knowledge to engineer weapons).
The second interesting point that came up is the analogy between nuclear weapons and physics. This wasn’t the intended implication of the discussion, but there is a very important disanalogy between these (well, several, I’m sure).
But it leaves open every weaker version of PoS (kids don’t receive enough data to explain acquisition by input alone), while also leaving completely open the empirical facts of how language actually works, and how it is learned. I think the impact on a priori arguments is actually quite narrow.
Crucially what they can tell us is exactly as much as we would learn from discovering an alien species that could learn language given all our data. But how much is this? It addresses one quite extreme version of the PoS arg – that human languages CANNOT be learned at all from any amount of input.
The important connection to LMs is that humans & machines might accomplish the same behaviors via completely different solutions & in many (many) cases it would be hard to get evidence to tell them apart. I think this is important b/c the topic of the pod was what LMs tell us about the human mind.
It’s that many (many) meanings could correspond to a particular act of reference, and therefore so could many (many) internal states. So many, in fact, that Quine thought there wasn’t much sense in trying to study those internal states. And so… behaviorism.
First, I loved Mike’s link to Quine’s topiary. Quine is often misunderstood (by word learning researchers) as arguing that word learning is hard because reference is ambiguous. The problem for Quine is quite a bit different though.
I found this discussion thought-provoking, as somebody who is not quite so boosterish as @mcxfrank.bsky.social & @glupyan.bsky.social, though also an advocate of inferential role semantics (IRS) accounts of meaning (& therefore excited about implications for that view). Two ideas in particular stuck out to me as interesting.
Infant literature redux.
Yep, same here. We also caught some bots that made it past their checks, which does not inspire confidence.
Colleague attended an LLM/NLP panel and said it was basically incomprehensible. I replied, "they kept talking about vector space, didn't they" and they were all "yesss so much vector space", and I feel like this is proof people need to learn how to talk about the forest, not just the trees.