Posts by Magdalena Kachlicka
Excited to see the Version of Record of my work out in @elife.bsky.social!
elifesciences.org/articles/106...
We investigate the mental representation of geometric shapes in adults and children using fMRI and MEG. Each figure comes with a video of me explaining it: go and read the paper, or read on below.
Our findings demonstrate that although default strategies in speech perception are difficult to resist, lifelong perceptual habits can be adjusted with as little as 3 hours of training. 🧵5/5 #neuroskyence #psychscisky
The control group, who practiced English vocabulary, relied more on pitch in lexical stress categorization and phrase boundary production after training, suggesting that without targeted instruction, listeners default to existing strategies. 🧵4/5
After our novel prosody training, participants relied more on duration during phrase boundary categorization but showed no clear change for contrastive focus and lexical stress, suggesting that cue weighting training is most effective when targeting a feature’s primary cue. 🧵3/5
Learning a new language is challenging partly due to perceptual strategies inherited from our first language. For example, speakers of tone languages like Mandarin overuse pitch in English prosody perception and production. Can these strategies become more native-like with targeted training? 🧵2/5
3rd 🚨new paper🚨 this year! Here we're looking at the effects of targeted training on how L2 learners weight perceptual cues doi.org/10.1017/S136... @audioneurolab.bsky.social @birkbeckpsychology.bsky.social @ashleysymons.bsky.social, Yaoyao Ruan, Kazuya Saito, Fred Dick, @adamtierney.bsky.social 🧵1/5
These results suggest that perceptual strategies are shaped by the reliability of encoding at early stages of the auditory system. 🧵5/5
We find that neural tracking of pitch is linked to pitch cue weighting during word emphasis and lexical stress perception. Specifically, higher pitch weighting is linked to increased tracking of pitch at early latencies within the neural response, from 15 to 55 ms. 🧵4/5
Here, we tested the hypothesis that the reliability of early auditory encoding of a given dimension is linked to the weighting placed on that dimension during speech categorization. We tested this in 60 first language speakers of Mandarin learning English as a second language. 🧵3/5
Linguistic categories are conveyed in speech by many acoustic cues at the same time, but not all of them are equally important. There are clear and replicable individual differences in how people use those cues during speech perception, but the underlying mechanisms are unclear. 🧵2/5
🚨New paper🚨about mechanisms underlying individual differences in cue weighting doi.org/10.1162/IMAG... from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social with @ashleysymons.bsky.social, Kazuya Saito, Fred Dick, and @adamtierney.bsky.social #psychscisky #neuroskyence 🧵1/5
📜🎉 Our project on aperiodic neural activity during sleep, led by the wonderful @mosameen.bsky.social, is now published!
This project shows how time-resolved measures of aperiodic neural activity track changes in sleep stages, plus lots of other analyses in iEEG & EEG!
www.nature.com/articles/s44...
Together, these results suggest that the precision with which people perceive and remember sound patterns plays a major role in how well they understand accented speech, and that auditory training may help listeners who struggle. 🧵5/5
Native English speakers who were better at understanding the accent were also better at detecting pitch differences, remembering sound patterns, and attending to pitch. Musical training helped too. Better speech perception was additionally linked to stronger neural encoding of speech harmonics. 🧵4/5
In this study, we asked L1 English speakers to listen to the prosody of Mandarin-accented English. We found that some listeners are better at understanding accented speech than others. 🧵3/5
Non-native speakers of English speak with varying degrees of accent. So far, research has focused mainly on factors that help learners communicate more effectively. But what about the listeners? Are there factors that make it easier for native listeners to understand accented speech? 🧵2/5
🚨New paper🚨 about accented speech perception doi.org/10.1016/j.ba... by the brilliant Amir Ghooch Kanloo (an MSc student at the time!), together with me, Kazuya Saito, and @adamtierney.bsky.social, from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social 🧵1/5
"The Human Insula Reimagined: Single Neurons Respond to Simple Sounds during Passive Listening"
Single neuron activity in the insula
#iEEG
in #JNeurosci @sfnjournals.bsky.social
www.jneurosci.org/content/46/4...
New work from our lab showing that the human frontal lobe receives fast, low-level speech information in **parallel** with early speech areas!
🧠🗣️
doi.org/10.1038/s414...
"Human cortical dynamics of auditory word form encoding"
by the Chang lab @changlabucsf.bsky.social, published in @cp-neuron.bsky.social
www.cell.com/neuron/fullt...
#iEEG #ECOG
If you haven't read it, you should: it's brilliant!
New preprint by Mika Nash and others on how selective attention affects neural tracking of prediction during ecologically valid music listening: www.biorxiv.org/content/10.1...
As it's hiring season again I'm resharing the NeuroJobs feed. Add #NeuroJobs to your post if you're recruiting or looking for an RA, PhD, Postdoc, or faculty position in Neuro or an adjacent field.
bsky.app/profile/did:...
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.
In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.
Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
My PhD student Yue Li is looking for L1 speakers of Chinese and Spanish for her online English experiment! Please see below for details!