Attentional disengagement during external and internal distractions reduces neural speech tracking in background noise https://pubmed.ncbi.nlm.nih.gov/41997874/
Posts by Auditory-Visual Speech Association (AVISA)
Er, that would be Liberman et al. (1967) ... Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6), 431-461.
www.haskinslaboratories.org/s/LibermanEt...
In 1967, Alvin Liberman proposed that we perceive speech by mentally simulating how we would produce it. This controversial theory sparked decades of debate about the link between perception and production.
🧠 Liberman et al. - Motor Theory paper
#SpeechScience #Theory
Good-day, mate 🦘
Toward a fully distributed inhibitory control system: converging evidence across modalities academic.oup.com/cercor/artic... Situates large-scale cortical interactions during response inhibition in a literature spanning EEG, fMRI & human intracranial electrophysiology -> an emergent system
Not worth my time! Understanding factors that make speech socially engaging https://pubmed.ncbi.nlm.nih.gov/41973810/
"This means that a deep understanding of processes of human interaction and sense-making will be a foundational resource for the growing arsenal of methods in critical AI literacy" 😘
Six-panel composite figure. Caption: Interactive artifacts always rely on people's interpretive and interactional practices. Row-wise from top left to bottom right: A. Aegeus consults the oracle at Delphi (cup from Vulci, 440-430 BCE). B. Byzantine mosaic depicting the zodiac, from the floor of the 6th-century CE Beth Alpha synagogue. C. One-sided sense-making in an experimental psychotherapy session (McHugh 1968). D. Still from a BBC documentary showing a person interacting with ELIZA via a computer terminal, late 1960s. E. Researchers interacting with the PARC copier (Suchman 2007 [1987]). F. Screenshot of a large language model chat interface, 2026.
New! Interactional foundations for critical AI literacies doi.org/10.5281/zeno...
Why do Anthropic engineers talk to Claude the way a witch-doctor talks to his potions? How is prompt engineering like spider divination? Can one reason without reasons?
ft. Lovelace, Adorno, Suchman, Weizenbaum & many more ☺️
Do you want to use vibration stimuli in remote research studies? 👀 📳🤳🏼
Our latest paper in Behavior Research Methods might be of interest to you!
Coauthors include: @kalvinroberts.bsky.social @peircej.bsky.social @multisensorylab.bsky.social
link.springer.com/article/10.3...
Perceptual multistability: a multifaceted window into brain dysfunctions www.sciencedirect.com/science/arti... "Perceptual multistability emerges as a promising noninvasive tool for clinical applications, facilitating translational research and enhancing our mechanistic understanding"
Link to the paper on how one can quantify whether two communicative movements are similar or not 👐👐🙌🙌
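The paper's own metric isn't shown here, but for a flavour of the problem, here is a minimal base-R sketch of dynamic time warping (DTW), one standard way to score similarity between two movement time series; the toy gestures g1/g2 are invented for illustration, not the paper's data:

# Minimal DTW distance between two 1-D movement trajectories
# (e.g. hand speed over time). Base R, no packages.
dtw_distance <- function(x, y) {
  n <- length(x); m <- length(y)
  # D[i + 1, j + 1] = cheapest cost of aligning x[1:i] with y[1:j]
  D <- matrix(Inf, n + 1, m + 1)
  D[1, 1] <- 0
  for (i in 1:n) {
    for (j in 1:m) {
      cost <- abs(x[i] - y[j])
      D[i + 1, j + 1] <- cost + min(D[i, j + 1],  # advance in x only
                                    D[i + 1, j],  # advance in y only
                                    D[i, j])      # advance in both
    }
  }
  D[n + 1, m + 1] / (n + m)  # normalise by a bound on path length
}

# The same gesture shape at different speeds comes out as similar:
g1 <- sin(seq(0, pi, length.out = 50))
g2 <- sin(seq(0, pi, length.out = 80))
dtw_distance(g1, g2)  # small value relative to unrelated trajectories

Lower values mean the two movements trace similar shapes once timing differences are warped out.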
Took a while to get the "All the President's Men" reference - but, yeah, I can imagine why no common term has been used 🫣
Really excited about our new work on aphasia! Even in fairly profound aphasia, we can recover semantic maps through visual stimuli and use them to decode language. This is a big step! Language BCIs in aphasia might be possible!
"there is no vernacular term to describe the subjective
correlate of formant dispersion...it is subjectively very salient. Human listeners often use words such as `deeper' and `more
resonant' to describe sounds with closely-spaced
formant frequencies" Fitch (1999) anyone want to come up with a term?
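For reference, and assuming the standard definition: formant dispersion is the mean spacing between adjacent formants, so for N measured formants F_1, ..., F_N

D_f = \frac{1}{N-1} \sum_{i=1}^{N-1} (F_{i+1} - F_i) = \frac{F_N - F_1}{N - 1}

Small D_f (closely spaced formants) cues a long vocal tract, which is presumably why listeners reach for 'deeper' and 'more resonant'.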
Foreign Language Learners Show a Kinematic Accent in Their Co-Speech Hand Movements direct.mit.edu/opmi/article... Pre-registered kinematic-acoustic study tested whether a foreign accent is present in the timing of co-speech manual movements - results demonstrate a ‘kinematic accent’ on cognates
Effect of Avatar Head Movements on Communication Behavior and Subjective Evaluations of Presence and Success in Triadic Conversations journals.sagepub.com/doi/10.1177/... Study evaluated the effect of virtual animated characters’ head movements on participants’ communication behaviour & experience
📦 My first #RStats package on CRAN:
{readelan}
A package dedicated to reading all files associated with ELAN: eaf, etf, ecv. Reads annotations, metadata, controlled vocabularies. Relevant for many in #linguistics perhaps?
More info here:
borstell.github.io/misc/readelan/
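A hedged usage sketch in R (the reader-function names below are my assumptions from the package description, not confirmed against the docs; see the link above for the real API):

# install.packages("readelan")  # from CRAN
library(readelan)

# Read annotation tiers from an ELAN .eaf file into a data frame
annotations <- read_eaf("session01.eaf")  # assumed .eaf reader; file name is a placeholder

# Read an external controlled vocabulary (.ecv)
vocab <- read_ecv("gesture_types.ecv")    # assumed .ecv reader

head(annotations)  # inspect the result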
Using more realistic speech material to enhance ecological validity in the Everyday Conversational Danish Sentence Test https://pubmed.ncbi.nlm.nih.gov/41941320/
Event-Related Potentials to Emotional Incongruence in Dogs Using Non-Invasive EEG www.researchsquare.com/article/rs-9... EEG study with dogs (N = 3) viewing AV clips of their owner's facial expression (happy or angry) paired with a congruent or incongruent vocalisation. Larger P300 & N400 for incongruent pairings 🐕
ADHD traits are not related to multisensory integration in a university population www.sciencedirect.com/science/arti... Looked @ Sound-Induced Flash, McGurk & speech-in-noise effects as a function of Adult ADHD Self-Rating Scale scores in undergraduates (research participation pool) -> no link with ADHD traits
Rhythmic skills mediate the link between music training and cognition via attention and phonological processing www.nature.com/articles/s44... Interesting that there was no difference in speech-in-noise performance ...
With @deouell.bsky.social and Ran Hassin. Read more in the open-access version of the paper:
journals.sagepub.com/doi/10.1177/...
A lucid intro -> will recommend to students, adding -> Prieto et al. (2025). Towards a novel conceptualization of prosody that accounts for spoken and visual signal: "A modality-neutral view of prosody will enrich current formal and developmental theories of language" www.jbe-platform.com/content/jour...
www.science.org/doi/10.1126/... Tested whether bumble bees could differentiate arbitrary flashing patterns on the basis of their rhythmic structure. Step 1: bees tested on simple patterns of repeating flashing lights ✔️ Step 2: tested on 2 irregular patterns with no local cues ✔️ Step 3: tempo generalization ✔️
"...g representation for English intonation. It gives an account of what different, tunes are possible and how they are aligned with different texts. It characterizes the rules which map the underlying representations into phonetic realizations"... 😉
Visual Gestures of the Head & Eyebrows Support Prosody Perception for Individuals with Cochlear Implants journals.sagepub.com/doi/10.1177/... @matthewwinn.bsky.social & others examined visual speech cues 🤨 in prosody perception -> used vocal mimicry (clever); found visual cues -> better mimicry (F0/intensity)
We concluded by suggesting that this finding, that language can affect sensory processing only for consciously perceived stimuli, is consistent with bottom-up accounts of early perceptual processing.
Although valid cues did not affect the frequency of completely nonconscious percepts (0: “didn't see”), they were associated with a higher frequency of high-awareness experiences.
Congruent spoken cues (same digit) did not boost identification of unseen targets. Nor did these cues increase "conscious" ratings. They also did not amplify the Visual Awareness Negativity (an EEG marker of visual awareness).
So, what did we do? Dark-adapted participants viewed digits presented at perceptual threshold. Before each presentation, they heard a matching, mismatching, or reversed (no-cue) spoken digit. EEG was recorded throughout.