Are you an advanced PhD student looking for reviewing experience? I’m looking for one emergency reviewer for a paper in ACL Rolling Review, track "Linguistic Theories, Cognitive Modeling, and Psycholinguistics", due before next Tuesday, April 28th.
Call for a PhD position in cognitively inspired natural language processing with Lisa Beinborn and me. Part of the new SPP “Robust Assessment & Safe Applicability of Language Modelling: Foundations for a New Field of Language Science & Technology” (LaSTing). huds.uni-goettingen.de/assets/Call_...
Applications open for the School on Analytical Connectionism
📅 August 17-28, 2026
🗺️ Chalmers University of Technology, Gothenburg
📚 Topical focus: language acquisition ... with me, Michael Biehl, Paul Smolensky & many other incredible researchers 😊 www.analytical-connectionism.net/school/2026/
@vkempe.bsky.social I'd say yes, a priori. Since it's an ACL workshop, some minimal computational component would still be expected. For instance, work using LLMs could fit if different models are compared systematically and/or their generated output is evaluated in a systematic way.
…and here is the link that was missing in my previous message! royalsocietypublishing.org/rstb/article...
What if you could automatically transcribe children's speech sounds from their first babbles to full sentences?
Screening for speech delays. Comparing how kids learn to talk across languages. Following how sounds evolve month by month.
We're building toward this with BabAR🧵 (sound on 🔊)
Open PhD/Postdoc position (start: Oct 2026). Topic: AI/LLMs and child language/communicative/cognitive development. The exact project will be shaped with the candidate. Join our team @univ-amu.fr at the intersection of computer and cognitive science (& right next to the Calanques!). Send me your CV!
Yes!
Deadline approaching (March 20), consider submitting!
The process is flexible to accommodate different situations:
- Hybrid presentation mode
- Direct submission or via ARR
- Non-archival option: present work that is (or will be) published elsewhere
- Short (4 pages) or long (8 pages) papers
📢 PhD position in the NeuroAI of Language
Why can LLMs predict brain activity so well? We're hiring a PhD student to find out: AI interpretability meets neuroimaging.
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language
My interview with the Photizo channel on how AI can help us understand language development!
The interview is in French, but you can enable the auto-translated English subtitles at your own risk 🙂
www.youtube.com/watch?v=GJ8h...
*Preprint*: Pedagogy in the speech-gesture couplings of caregivers: Evidence from a corpus-based analysis by @marinewang.bsky.social, @eddonnellan.bsky.social and myself: doi.org/10.31234/osf...
📣📣📣 Job alert: Multimodal Language Department, Max Planck Institute for Psycholinguistics. Max Planck Research Group Leader position (W2 BBesG). lnkd.in/eaq5MW9a
The paper is part of the issue "Mechanisms of learning from social interaction" with an amazing set of papers you should read as well!
Many thanks to @elenaluchkina.bsky.social @elmlingersteven.bsky.social for coordinating the whole thing and to @ilcb.bsky.social for the generous support!
This top-line result suggests there is still room for syntactic improvement from interaction, opening the door to testing a wider diversity of caregiver feedback (not just simple clarification requests) and to integrating multimodal feedback extracted from video data (our next step).
For comprehension, scores on benchmarks (e.g., Zorro) that test textbook structures did not improve, at least not uniformly across cases.
Interesting twist: when we replaced the real caregiver reward with an artificial one that recognizes and punishes grammatical mistakes, comprehension scores improved too.
We found a striking difference between comprehension and production.
The model improved its production: its utterances became more grammatical. This is noteworthy: we never asked it to “learn grammar”; we only asked it to avoid utterances that would have triggered a caregiver CR!
Step 2: we use this classifier as the “reward model” in RLHF: a) train an LLM only on caregiver input (as in Huebner et al., 2021), b) generate an utterance, c) score it with the reward model. A negative reward means the utterance is of the kind that would have triggered a caregiver CR in CHILDES.
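(A minimal sketch of how such a classifier can act as the reward signal, assuming a cr_classifier like the one sketched under Step 1 below; the threshold, reward values, and generate_fn are illustrative placeholders, not our actual setup. In practice the reward feeds a policy-gradient update such as PPO rather than being used standalone.)

# Sketch of Step 2: turn the CR classifier into an RLHF reward signal.
def cr_reward(utterance, classifier, threshold=0.5):
    """Negative reward if the utterance would likely trigger a caregiver CR."""
    p_cr = classifier.predict_proba([utterance])[0][1]  # P(CR | utterance)
    return -1.0 if p_cr >= threshold else 1.0

def rlhf_step(generate_fn, classifier):
    # a) the LM (trained only on caregiver input) samples an utterance,
    # b) the reward model scores it,
    # c) the (utterance, reward) pair feeds the policy update of the LM.
    utterance = generate_fn()
    return utterance, cr_reward(utterance, classifier)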
Step 1: train a classifier on child–caregiver data (CHILDES). We predict caregiver feedback (here, focusing on clarification requests, CRs) from the child’s utterance. E.g., an utterance like “I want toy” may not trigger a caregiver CR, but “want toy” might (dropping the subject may cause ambiguity).
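(A minimal sketch of what such a CR classifier could look like; the toy utterances, labels, and feature choices below are illustrative assumptions, not the actual CHILDES training setup.)

# Sketch of Step 1: predict whether a child utterance would trigger a CR.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (utterance, label) pairs; 1 = would trigger a caregiver CR.
utterances = ["I want toy", "want toy", "the dog is running", "dog run it"]
triggers_cr = [0, 1, 0, 1]

# Character n-grams are robust to child-specific forms and segmentation.
cr_classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
cr_classifier.fit(utterances, triggers_cr)
print(cr_classifier.predict_proba(["want ball"])[0][1])  # P(CR | utterance)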
RLHF is a technique used to align chatbots with human preferences. But here we used RLHF to test whether caregiver feedback can help the learning of syntax, above and beyond models trained on child linguistic input alone, with no interaction (as in Huebner et al.).
There are 2 main steps:
We know children learn from interaction with caregivers, but how can we quantify and test this tricky learning signal using natural data? In the paper (now out in Phil Trans of @royalsocietypublishing.org, link below), Mitja and I used a technique called Reinforcement Learning from Human Feedback (RLHF).
🚨 New Paper: How can AI help us understand child lang dev? If we train models on children’s environment, they can tell us whether this environment supports learning.
E.g., models have been used to test children’s linguistic input (Huebner et al.) and visual input (Vong et al.).
What about Social Interaction? (a thread 🧵)
Excited to present this poster today (w/ @gabriellavigliocco.bsky.social, @gretagandolfi.bsky.social, @fourtassi.bsky.social, and Yan Gu) at the 3rd International Workshop on Naturalistic Experimentation of Child Development at Birkbeck, University of London. Check out the poster here: osf.io/876ab
A new theme issue of #PhilTransB examines the mechanisms of learning from social interaction. Read articles for free: buff.ly/K8v43YM
New Paper out in @cp-trendscognsci.bsky.social!
Language learning as ontogenetic adaptation
Marisa Casillas and I argue that language learning:
👪 is a by-product of social interaction
↘️ integrates a wealth of information sources
🌐 adapts to the cultural context
www.cell.com/trends/cogni...
One more week to apply for the PhD position on curiosity in early development (B4) in my group!
We're currently accepting paper submissions. We also warmly invite volunteers to join us as reviewers — come help shape and grow this new community!
Workshop link/CFP:
comp-dev-ling.github.io
CDL @ ACL 2026 aims to turn these emerging methods and questions into a real community around Computational Developmental Linguistics. If you study how models learn language, and what this reveals about human learning, we'd love to see you at CDL.
Now, with LLMs and deep learning, we have tools to study learning dynamics more directly, addressing “the logical problem of language acquisition”, including the role of input and inductive biases, with surprising parallels (and gaps!) to human development (e.g., the BabyLM challenge).
For decades, developmental linguistics flourished through interdisciplinary insights from linguistics, formal learning theory, and psych/cog sci: gathering naturalistic data, developing experimental paradigms, and contrasting theories of acquisition.