
Posts by Abdellah Fourtassi

Are you an advanced PhD student looking for reviewing experience? I'm looking for one emergency reviewer for a paper in ACL Rolling Review, in the track "Linguistic Theories, Cognitive Modeling, and Psycholinguistics" -- due before next Tuesday, April 28th.

25 minutes ago 1 1 0 0

Call for a PhD position in cognitively inspired natural language processing, with Lisa Beinborn and me. Part of the new SPP "Robust Assessment & Safe Applicability of Language Modelling: Foundations for a New Field of Language Science & Technology" (LaSTing). huds.uni-goettingen.de/assets/Call_...

2 weeks ago 11 5 0 0
2026 School on Analytical Connectionism
A 2-week summer course hosted at Chalmers University of Technology on analytical approaches to language acquisition and higher-level cognition.

Applications open for the School on Analytical Connectionism
📅 August 17-28, 2026
🗺️ Chalmers University of Technology, Gothenburg
📚 Topical focus: language acquisition ... with me, Michael Biehl, Paul Smolensky & many other incredible researchers 😊 www.analytical-connectionism.net/school/2026/

2 weeks ago 8 5 0 0

@vkempe.bsky.social I'd say yes, a priori. Since it's an ACL workshop, some minimal computational component would still be expected. For instance, work using LLMs could fit if different models are compared systematically and/or their generated output is evaluated in a systematic way.

1 month ago 1 0 0 0
Preview
Modelling children's grammar learning via caregiver feedback in natural conversations
Abstract: Many debates in the language acquisition literature have revolved around the role of negative evidence for the acquisition of grammar.

…and here is the link that was missing in my previous message! royalsocietypublishing.org/rstb/article...

1 month ago 0 0 0 0
Video

What if you could automatically transcribe children's speech sounds from their first babbles to full sentences?

Screening for speech delays. Comparing how kids learn to talk across languages. Following how sounds evolve month by month.

We're building toward this with BabAR🧵 (sound on 🔊)

1 month ago 53 19 3 6

Open PhD/Postdoc position (start: Oct 2026). Topic: AI/LLMs and child language/communicative/cognitive development. The exact project will be shaped with the candidate. Join our team @univ-amu.fr at the intersection of computer and cognitive science (& right next to the Calanques!). Send me your CV!

1 month ago 4 5 0 0

yes!

1 month ago 1 0 1 0

Deadline approaching (March 20), consider submitting!
The process is flexible to accommodate different situations:
- Hybrid presentation mode
- Direct submission or via ARR
- Non-archival option: present work that is (or will be) published elsewhere
- Short (4 pages) or long (8 pages) papers

1 month ago 5 2 0 0

📢 PhD position in the NeuroAI of Language

Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language

1 month ago 50 40 2 1
COMMENT L'IA AIDE À COMPRENDRE L'ACQUISITION DU LANGAGE CHEZ L'ENFANT (avec Abdellah Fourtassi)
YouTube video by Photizo on AI

My interview with the Photizo channel on how AI can help us understand language development!

The interview is in French, but you can enable the auto-translated English subtitles at your own risk 🙂

www.youtube.com/watch?v=GJ8h...

1 month ago 2 0 0 0

*Preprint*: Pedagogy in the speech-gesture couplings of caregivers: Evidence from a corpus-based analysis by @marinewang.bsky.social, @eddonnellan.bsky.social and myself: doi.org/10.31234/osf...

1 month ago 8 3 1 1

📣📣📣 Job alert: Max Planck Research Group Leader position (W2 BBESG), Multimodal Language Department, Max Planck Institute for Psycholinguistics. lnkd.in/eaq5MW9a

1 month ago 17 20 1 2

The paper is part of the issue "Mechanisms of learning from social interaction" with an amazing set of papers you should read as well!
Many thanks to @elenaluchkina.bsky.social @elmlingersteven.bsky.social for coordinating the whole thing and to @ilcb.bsky.social for the generous support!

1 month ago 4 0 1 0

This top-line result suggests there is room for further syntactic improvement from interaction, opening the door to testing a diversity of caregiver feedback (not just simple clarification requests) and to integrating multimodal feedback extracted from video data (our next step).

1 month ago 2 0 1 0

For comprehension, benchmarks (e.g., Zorro) that test textbook structures did not improve—at least not uniformly across cases.

Interesting twist: when we replaced the real caregiver reward with an artificial one that recognizes and punishes grammatical mistakes, comprehension scores improved too.

1 month ago 2 0 1 0

We found a striking difference between comprehension and production

The model improved its production: its utterances became more grammatical. This is noteworthy: we never asked it to "learn grammar". We only asked it to avoid utterances that would have triggered a caregiver CR!

1 month ago 2 0 1 0

Step 2: we use this classifier as the "reward model" in RLHF: a) train an LLM only on caregiver input (as in Huebner et al., 2021), b) generate an utterance, c) score it with the reward model. A negative reward means the utterance is of the kind that would trigger a caregiver CR in CHILDES.
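The scoring step (c) could be sketched as follows. This is purely illustrative, not the paper's implementation: the classifier is replaced by a crude stand-in heuristic, and all names and thresholds are invented for the example.

```python
def cr_probability(utterance):
    # Stand-in for the trained CR classifier of Step 1: a crude heuristic
    # that flags subject-less utterances as likely to trigger a caregiver
    # clarification request (CR). Illustrative only.
    has_subject = utterance.lower().startswith(("i ", "you ", "the "))
    return 0.1 if has_subject else 0.9

def reward(utterance, threshold=0.5):
    # Negative reward for utterances the classifier predicts would trigger
    # a caregiver CR; positive otherwise. This scalar is what the RL update
    # would consume.
    return -1.0 if cr_probability(utterance) >= threshold else 1.0

print(reward("i want toy"))  # 1.0
print(reward("want toy"))    # -1.0
```

In a full RLHF loop this reward would feed a policy-gradient update (e.g., PPO) so the LLM learns to avoid CR-triggering utterances.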

1 month ago 1 0 1 0

Step 1: train a classifier on child–caregiver data (CHILDES). We predict caregiver feedback (here, focusing on clarification requests, CRs) from the child's utterance. E.g., an utterance like "I want toy" may not trigger a caregiver CR, but "want toy" might (dropping the subject may cause ambiguity).
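A toy version of such a CR classifier can be sketched with a bag-of-words perceptron. Everything here is invented for illustration (the tiny dataset, the features, the model choice); the paper's actual classifier and data are not this.

```python
from collections import defaultdict

# Toy labelled data (invented): child utterances tagged 1 if they would
# trigger a caregiver clarification request (CR), else 0.
DATA = [
    ("i want toy", 0),
    ("want toy", 1),
    ("i see the dog", 0),
    ("see dog", 1),
    ("the cat is sleeping", 0),
    ("cat sleeping", 1),
]

def featurize(utterance):
    # Bag-of-words counts plus a bias feature.
    feats = defaultdict(int)
    feats["__bias__"] = 1
    for word in utterance.split():
        feats[word] += 1
    return feats

def train_perceptron(data, epochs=10):
    # Classic perceptron: update weights only on misclassified examples.
    w = defaultdict(float)
    for _ in range(epochs):
        for utt, label in data:
            feats = featurize(utt)
            score = sum(w[f] * v for f, v in feats.items())
            pred = 1 if score > 0 else 0
            if pred != label:
                for f, v in feats.items():
                    w[f] += (label - pred) * v
    return w

def predicts_cr(w, utterance):
    feats = featurize(utterance)
    return sum(w[f] * v for f, v in feats.items()) > 0

w = train_perceptron(DATA)
print(predicts_cr(w, "want toy"))    # True
print(predicts_cr(w, "i want toy"))  # False
```

On this toy data the perceptron learns negative weight on subject words like "i" and "the", mirroring the intuition that subject-dropping invites a CR.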

1 month ago 1 0 1 0

RLHF is a technique used to align chatbots with human preferences. But here we used RLHF to test whether caregiver feedback can help the learning of syntax—above and beyond models trained only on child linguistic input, with no interaction (as in Huebner et al.)

There are 2 main steps:

1 month ago 1 0 1 0

We know children learn from interaction with caregivers—how can we quantify/test this tricky learning signal using natural data? In the paper (now out in Phil Trans of @royalsocietypublishing.org, link below), Mitja and I used a technique called Reinforcement Learning from Human Feedback (RLHF)

1 month ago 2 0 1 0

🚨 New Paper: How can AI help us understand child language development? If we train models on children's environment, they can tell us whether this environment supports learning.
E.g., models have tested the role of child linguistic input (Huebner et al.) and visual input (Vong et al.).

What about Social Interaction? (a thread 🧵)

1 month ago 21 5 1 0

Excited to present this poster today (w/ @gabriellavigliocco.bsky.social, @gretagandolfi.bsky.social, @fourtassi.bsky.social, and Yan Gu) at the 3rd International Workshop on Naturalistic Experimentation of Child Development at Birkbeck, University of London. Check out the poster here: osf.io/876ab

2 months ago 5 2 0 3

A new theme issue of #PhilTransB examines the mechanisms of learning from social interaction. Read articles for free: buff.ly/K8v43YM

2 months ago 38 15 1 1
Preview
Language learning as ontogenetic adaptation
Language learning is a multi-threaded, multi-mechanism process. It is multi-threaded in that it emerges as a byproduct of addressing multiple goals while engaging in social interactions.

New Paper out in @cp-trendscognsci.bsky.social!

Language learning as ontogenetic adaptation

Marisa Casillas and I argue that language learning:

👪 is a by-product of social interaction
↘️ integrates a wealth of information sources
🌐 adapts to the cultural context

www.cell.com/trends/cogni...

2 months ago 7 3 0 0

One more week to apply for the PhD position on curiosity in early development (B4) in my group!

2 months ago 8 4 0 0
Workshop on Computational Developmental Linguistics (CDL)

We're currently accepting paper submissions. We also warmly invite volunteers to join us as reviewers — come help shape and grow this new community!
Workshop link/CFP:
comp-dev-ling.github.io

3 months ago 0 0 0 0

CDL @ ACL 2026 aims to turn these emerging methods and questions into a real community around Computational Developmental Linguistics. If you study how models learn language, and what this reveals about human learning, we'd love to see you at CDL.

3 months ago 0 0 1 0

Now, with LLMs and deep learning, we have tools to study learning dynamics more directly, addressing "the logical problem of language learning," including the role of input and inductive biases—with surprising parallels (and gaps!) with human development (e.g., the BabyLM challenge)

3 months ago 1 0 1 0

For decades, developmental linguistics flourished through interdisciplinary insights from linguistics, formal learning theory, and psych/cog sci: gathering naturalistic data, developing experimental paradigms, and contrasting theories of acquisition

3 months ago 0 0 1 0