With @deouell.bsky.social and Ran Hassin. Read more in the open-access version of the paper:
journals.sagepub.com/doi/10.1177/...
Negative words are detected *less* often when you focus on another task, regardless of task difficulty, the specific word set, or potentially confounding features. The system's decision to disengage from the primary (visual, in our case) task might not conform to our conscious intuitions.
We found that, while low-level phonetic features and pronunciation intelligibility play a large role in determining awareness, word valence plays a role too - suggesting semantic information is prioritized before conscious awareness. And not in the direction you'd think!
[Image: paradigm description (see manuscript Fig. 1)]
We designed a dual task in which dozens of spoken words evade consciousness, but without degrading the words - if you knew they were coming, you would've heard them. Which features mitigate this "inattentional deafness" that is so familiar in daily life?
[Image: Experiment 2 results, showing a lower awareness rate for negative words]
New in Psychological Science! We've been studying visual non-conscious prioritization processes for decades, but not a lot is known about non-conscious speech processing. Does it conform to the same principles, despite the very different way it functions?
#consciousness #psychscisky #PsychSky
An extraordinary experience!! Huge thanks to all mentors and organizers.
First paper is now out in Cortex! We find behavioral and neural evidence for non-conscious speech processing, using a new dual-task paradigm that creates repeated occurrences of inattentional deafness without masking or degrading stimuli. @deouell.bsky.social
www.sciencedirect.com/science/arti...
Sometimes when diving into a new topic, you don't even know the right keywords to look for. In that sense LLMs can be helpful right from the start, as long as you then read the papers yourself.
If you mean logistic-regression-like for proportion data, then yeah, beta regression. But note that on the logit scale, 50% to 40% is not like 5% to 4%: the latter is actually a smaller effect size in log odds (which makes sense, because the symmetric change from 95% to 96% is tiny).
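A quick way to see this is to compare the two changes on the logit (log-odds) scale directly; here's a minimal sketch in plain Python, using only the proportions mentioned above:

```python
import math

def log_odds(p):
    """Logit (log-odds) of a proportion p."""
    return math.log(p / (1 - p))

# 50% -> 40% vs. 5% -> 4%: the similar-looking drop is a smaller
# effect in log odds near the boundary of the probability scale.
print(log_odds(0.50) - log_odds(0.40))  # ~0.41
print(log_odds(0.05) - log_odds(0.04))  # ~0.23

# By the symmetry of the logit, 5% -> 4% mirrors 95% -> 96%,
# which is why that change is tiny too.
print(log_odds(0.96) - log_odds(0.95))  # ~0.23
```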
1/3) This may be a very important paper; it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:
www.biorxiv.org/content/10.1...
I personally am a big fan of the idea that cortical regions (allo and neo) are doing sequence prediction.
But...
🧠📈 🧪
When we listen to speech, we do it while constantly predicting upcoming content. Is this prediction associated with the subjective experience of engaged, conscious listening? What happens when we fail to listen? Come take a look at my poster (P116) tomorrow at 16:30, at @assc28.bsky.social
The conclusions from five EEG and behavioral studies paint a coherent picture: when speech is task-relevant and supraliminal, inattentional deafness might not be absolute; we can process speech content non-consciously and use its meaning to prioritize information for consciousness.
These conditions, without masking, allowed repeated cases of inattentional deafness. We then asked (1) which words are detected more often (the answer will surprise you!) and (2) what happens in the brain when we miss a word, and is it goal-dependent?
We are often too busy to listen to things we need to notice. How can we study the very frequent case of "hearing without listening" without maybe-too-aggressive masking? We developed a new dual task that requires noticing unexpected spoken words during visual task performance.