Posts by Stephan Meylan

Figure showing that child-directed listening lets caregivers understand children, prompting caregiver actions, including locomotion, grasping, communicative responses, belief update, and joint attention. This in turn may serve as an error signal that children may use for learning.

Our results show just how strongly early communication may be supported by adults & their powerful inferential abilities. They also open new questions for language development: how can young children learn from what adults say and do in response to their early speech? Stay tuned! 7/

Schematic showing that the process of a caregiver understanding a child can be approximated by the process of a transcriber understanding a child

It’s worth being up front about a limitation in our approach: what we called the “right answer” for what word the child said was what transcribers thought they said. This is an imperfect proxy for what parents think their child is saying, but a reasonable first step. 6/

Schematic showing the difference between child-directed speech and child-directed listening. Child-directed listening is concerned with how caregivers interpret children’s speech or other vocalizations

We’re calling this adult adaptation to what children talk about and how they are likely to talk about things “child-directed listening.” This complements a long history of research on how adults speak to children. 5/

Figure showing the performance of the baseline, phonetics-only model, with 42% agreement with annotators. This model is identified as “Model of child mispronunciations only.”

A phonetics-only model, without any information about the larger communicative context, agreed with adult listeners only 42% of the time. The difference suggests that adult expectations are hugely important for understanding children! 4/
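For intuition, here is a toy version of such a context-free baseline. It is not the paper’s actual pronunciation model, which is probabilistic; plain edit distance over phoneme strings stands in for it, and the lexicon, pronunciations, and child production below are all made up for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Hypothetical lexicon with ARPAbet-style pronunciations (illustrative only)
lexicon = {"dog": ["D", "AO", "G"], "duck": ["D", "AH", "K"], "that": ["DH", "AE", "T"]}

# Suppose the child produces something like "duh"
child_production = ["D", "AH"]

# Phonetics-only guess: nearest dictionary pronunciation, ignoring context entirely
best = min(lexicon, key=lambda w: edit_distance(child_production, lexicon[w]))
print(best)  # -> "duck"
```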

Figure showing the performance of the best model, with 90% agreement with annotators. This model is identified as a “BERT model fine-tuned on child language: with knowledge of conversational topic, speaker turns, objects in scene, child vocab, etc.”

Our best model guessed what words adult transcribers thought kids said with >90% accuracy. This model used a BERT-based language model that could take advantage of what had been said by parent and child and what kids like to talk about (cats, not mortgages). 3/
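For a rough sense of how a masked language model can supply these expectations, here is a minimal sketch that uses the off-the-shelf bert-base-uncased checkpoint from Hugging Face transformers rather than the paper’s fine-tuned child-language model; the context frame and candidate words are invented for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# In the paper's setting, prior parent/child turns would form the context;
# here a single made-up frame stands in for it.
context = f"Look at the {tok.mask_token} over there!"
inputs = tok(context, return_tensors="pt")

# Find the [MASK] position and get a distribution over the vocabulary there
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]
probs = logits.softmax(dim=-1)

# Contextual probability of candidate interpretations of the child's vocalization
for word in ["cat", "dog", "mortgage"]:
    print(word, float(probs[tok.convert_tokens_to_ids(word)]))
```

These contextual probabilities play the role of the prior in the word-recognition model sketched in the next post.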

Schematic showing a Bayesian model of spoken word recognition to model how the adult interprets the child’s speech signal. The probability of an interpretation depends on the probability that the phonetic data was generated given a specific word and context, multiplied by the probability of the word given the context. We use different probabilistic pronunciation models and probabilistic language models in this inference.

To test our hypotheses about what matters in understanding children, @ruthe.bsky.social, Nicole Wong, @bergelsonlab.bsky.social, @rplevy.bsky.social & I made some new language models to guess what words English-learning 1- to 4-year-olds said in a set of transcriptions of at-home recordings. 2/
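To make the schematic above concrete, here is a minimal sketch of the Bayesian combination it describes: each candidate word’s posterior is proportional to the pronunciation model’s likelihood of the observed phones times the language model’s prior on the word in context. The words and probabilities below are made up for illustration, not taken from the paper.

```python
import numpy as np

def posterior_over_words(phonetic_likelihoods, prior_probs):
    """P(word | phones, context) ∝ P(phones | word) * P(word | context)."""
    words = list(phonetic_likelihoods)
    scores = np.array([phonetic_likelihoods[w] * prior_probs[w] for w in words])
    return dict(zip(words, scores / scores.sum()))  # normalize over candidates

# Hypothetical: the child produces something like "da"
likelihoods = {"dog": 0.30, "that": 0.25, "duck": 0.02}  # pronunciation model
priors      = {"dog": 0.60, "that": 0.35, "duck": 0.05}  # language model (context)
print(posterior_over_words(likelihoods, priors))
# A strong contextual prior can settle the interpretation even when the
# phonetics alone are ambiguous between candidates.
```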

Figure showing the difference in performance between our best model with rich expectations (90% agreement with annotators) and a baseline model that only uses phonetics (42%). We label the difference between the model results as the contribution of language expectations.

How do adults understand children’s early, highly variable speech? Our new paper in Nature Human Behaviour (www.nature.com/articles/s41...) provides evidence that adults’ interpretations depend quite strongly on language expectations—what they think children are likely to say. 1/
