The Causality in Cognition Lab -- a supportive, bluesky-colored team -- is looking for a predoc to join us! Here's more information about the lab (cicl.stanford.edu) and the position (careersearch.stanford.edu/jobs/iriss-p...). The application deadline is May 1st.
Please share, thank you 🙏
Posts by Irmak Ergin
Our novel measure:
❗Overcomes the constraints of traditional static measurements
❗Can be effectively integrated with neuroimaging techniques like EEG and MEG, offering a tool for research on dynamic processes during naturalistic listening 🗣️🧠
11/11
➡️ A temporal response function (TRF) model fit to the slider time course revealed individual differences in response delays, showing that the slider can be used to model individual trajectories of comprehension.
10/11
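The TRF fit described above can be sketched as a lagged ridge regression: the slider response is modeled as a weighted sum of recent stimulus history, and the lag with the largest weight estimates the response delay. This is a minimal illustration, not the paper's actual pipeline; the toy stimulus, lag window, and regularization strength are all assumptions.

```python
import numpy as np

def fit_trf(stimulus, response, max_lag, alpha=1.0):
    """Fit a temporal response function (TRF) by ridge regression.

    stimulus, response: 1-D arrays sampled at the same rate.
    max_lag: number of samples of stimulus history to include.
    Returns the TRF weights, one per lag.
    """
    n = len(stimulus)
    # Design matrix: each column is the stimulus shifted by one more lag.
    X = np.zeros((n, max_lag))
    for lag in range(max_lag):
        X[lag:, lag] = stimulus[: n - lag]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(max_lag), X.T @ response)
    return w

# Toy example: the "slider" response is the stimulus delayed by 5 samples, plus noise.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
resp = np.roll(stim, 5) + 0.1 * rng.standard_normal(2000)
resp[:5] = 0.0
trf = fit_trf(stim, resp, max_lag=20)
delay = int(np.argmax(np.abs(trf)))  # estimated response delay in samples: 5
```

In practice, TRF toolboxes fit this with cross-validated regularization over multi-feature stimuli; the idea of recovering a per-participant delay from the lag of the peak weight is the same.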
Question 4️⃣: Can you track multiple comprehension manipulations with the slider at once?
Yes!
➡️ When we manipulated both surprisal and speech rate, slider scores successfully tracked comprehension changes associated with each.
9/11
Question 3️⃣: How does comprehension decline as speech speeds up?
➡️ It drops sharply at 3x speed rather than declining purely linearly.
➡️ Shows a limit to how fast we can process information while listening.
8/11
Question 2️⃣: Why not use traditional post-hoc comprehension measures?
We show that they:
❌ Provide only a snapshot, not real-time changes.
❌ Blur memory and true comprehension.
❌ Are influenced by how the test is designed.
7/11
Question 1️⃣: Does our novel real-time measure successfully capture comprehension?
Yes!
➡️ Lets listeners report moment-by-moment comprehension
➡️ Is explained by scores from existing comprehension measures
➡️ Doesn’t interfere with comprehension.
6/11
Method:
➡️ Participants listened to audiobook segments presented at five speech rates (x1, x2, x3, x4, x5)
➡️ Control measures: working memory (digit span) and auditory acuity (digits in noise).
➡️ Post-hoc comprehension measures to validate our novel measure
5/11
Approach:
We developed a slider device that:
➡️ Provides millisecond read-out of real-time comprehension
➡️ Is paramagnetic
➡️ Can be synchronized with behavioral and neural recording software
4/11
Goal: The ability to capture time-resolved neural activity and pair it with time-resolved behavior would open new doors for studying the dynamic neural processes involved in speech comprehension! 🧠
3/11
Do you use naturalistic listening paradigms, but wish you could record a continuous behavioural measure of comprehension?
2/11 ⬇️
Excited to share our new publication, “Measuring Naturalistic Speech Comprehension in Real Time”!
➡️ rdcu.be/fa3hk #psynomBRM
w/ @kriesjill.bsky.social, Shiven Gupta, Maria Papworth Burrel, & @lauragwilliams.bsky.social
🧵1/11
📢 PhD position in the NeuroAI of Language
Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT 🙏
👇
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language
Soon hiring a lab manager! Looking for someone who is really interested in language neuroscience, who is organised, motivated, a great communicator, and who works well in a research team. Express interest by submitting this form: tinyurl.com/glysn-labman...
Reposts appreciated!
Excited to share our new publication “The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia”, published in JNeurosci 🧠
➡️ www.jneurosci.org/content/46/4...
with @lauragwilliams.bsky.social & @mvandermosten.bsky.social 🤝
Check out @stanfordbrain.bsky.social ’s summary of it ⬇️
are you into auditory neuroscience, speech perception and comprehension, music, tonotopy, psychophysics, MEG, EEG, time-resolved signals, data science for neuroscience?
reflecting on 2024, i am proud to share the first three papers from my lab! brief description of each below 🧠 🚀
Our novel measure:
❗Overcomes the constraints of traditional static measurements
❗Can be effectively integrated with neuroimaging techniques, offering a tool for research on dynamic processes during naturalistic listening 🗣️🧠
8/8
Question 3️⃣: How does comprehension decline as speech speeds up?
➡️ It drops sharply at 3x speed rather than declining purely linearly.
➡️ Shows a limit to how fast we can process information while listening.
7/8
Question 2️⃣: Why not use traditional post-hoc comprehension measures?
We show that they:
❌ Provide only a snapshot, not real-time changes.
❌ Blur memory and true comprehension.
❌ Are influenced by how the test is designed.
6/8
Question 1️⃣: Does our novel real-time measure successfully capture comprehension?
Yes!
➡️ Lets listeners report moment-by-moment comprehension
➡️ Is explained by scores from existing comprehension measures
5/8
Method:
➡️ Participants listened to audiobook segments presented at five speech rates (x1, x2, x3, x4, x5)
➡️ Control measures: working memory (digit span) and auditory acuity (digits in noise).
➡️ Post-hoc comprehension measures to validate our novel measure
4/8
Approach:
We developed a slider device that:
➡️ Provides millisecond read-out of real-time comprehension
➡️ Is scanner compatible
➡️ Can be synchronized with behavioral and neural recording software
3/8
Goal:
The ability to capture time-resolved neural activity and pair it with time-resolved behavior would open new doors for studying the dynamic neural processes involved in speech comprehension! 🧠🌊
2/8
Such wonderful colleagues are certainly something to be grateful for during the holiday season 🫶 @lauragwilliams.bsky.social @ckaicher.bsky.social @kriesjill.bsky.social
✨i'm hiring a lab manager, with a start date of ~September 2025! to express interest, please complete this google form: forms.gle/GLyAbuD779Rz...
looking for someone to join our multi-disciplinary team, using OPM, EEG, iEEG and computational techniques to study speech and language processing! 🧠