
Posts by Jess Alexander

Heading out to @snlmtg.bsky.social to geek out with other neurolinguists this weekend. If you are interested in emotional prosody, speech intelligibility, and/or vocoded speech, come visit my poster (B68) on Friday afternoon! 🧠 πŸ€“

7 months ago 0 0 0 0

🚨 Just over a week left to register for the #CNSP2025 Online Workshop (details in post below)! 🚨

Link to the workshop registration form: docs.google.com/forms/d/e/1F...

8 months ago 6 7 1 0
GitHub - jessb0t/emoSPIN: comparing humans and LLMs on decoding emotional speech in noise

The less background noise, the better humans can understand speech. Some speech-to-text models perform similarly. But what happens when the speaker's voice is imbued with emotion? I was curious, so I did a simple mini investigation. The results surprised me! πŸ€“ github.com/jessb0t/emoSPIN

8 months ago 1 0 0 0
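The core manipulation behind this kind of speech-in-noise comparison is mixing clean speech with noise at a controlled signal-to-noise ratio. Below is my own minimal numpy sketch of that step, with toy synthetic signals standing in for real recordings; it is not code from the emoSPIN repo.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio of the mix is `snr_db` dB,
    then return speech + scaled noise (arrays must be the same length)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR: snr_db = 10*log10(Ps / Pn)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scale = np.sqrt(target_p_noise / p_noise)
    return speech + scale * noise

# Toy example: a pure tone stands in for speech, white noise for the masker
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)
noise = rng.standard_normal(16000)
mix = mix_at_snr(speech, noise, snr_db=0)  # equal speech and noise power
```

Lower `snr_db` values produce harder listening conditions; a real pipeline would apply this to recorded utterances before presenting them to listeners or a speech-to-text model.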

Looking forward to September! πŸ€“

8 months ago 0 0 0 0

And many more… 🎢

10 months ago 2 0 0 0

The deadline has been extended to 10 June. There are still a couple of spots available. Apply before it sells out! EEG/fNIRS/hyperscanning/TRFs/Speech/Music/Ping pong!

10 months ago 4 4 0 0

But how loud that background noise is, and what emotional state the speaker is in, both play a role in how accurately we understand the words spoken and how accurately we perceive the underlying emotion.

Please reach out if you have any questions about our data! (8/8)

10 months ago 0 0 0 0

So what? Well, our daily interactions require us not only to understand what people are saying, but also to intuit how they are feeling so that we respond appropriately. And we usually pull off both these incredible feats in some level of background noise.

10 months ago 0 0 1 0

For emotion recognition, we find that background noise induces perceptual biases, causing listeners exposed to higher levels of noise to behave differently than listeners exposed to more moderate noise levels. And the ability to recognize the emotion doesn't seem to help in understanding the words.

10 months ago 0 0 1 0

Interestingly, the intelligibility advantage doesn't correlate well with raw acoustic intensity, but rather with how intensity is distributed across different frequency bands.

10 months ago 0 0 1 0
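The distinction in the post above, overall level versus how energy is spread across frequency bands, can be illustrated with a toy example: two signals with identical RMS intensity but very different spectral balance. This is my own sketch, not the paper's analysis code.

```python
import numpy as np

def band_energies(signal, sr, edges):
    """Fraction of total spectral energy falling in each [lo, hi) band (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
    total = spectrum.sum()
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in edges]

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
# Same RMS intensity, different spectral distribution
low = np.sin(2 * np.pi * 150 * t)    # energy near a typical F0
high = np.sin(2 * np.pi * 3000 * t)  # energy in a higher band
bands = [(0, 1000), (1000, 8000)]
print(band_energies(low, sr, bands))   # ~[1.0, 0.0]
print(band_energies(high, sr, bands))  # ~[0.0, 1.0]
```

Two such signals would be indistinguishable by raw intensity alone, which is the point: a band-wise decomposition can separate stimuli that a single intensity number cannot.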

Here, across four different levels of speech-shaped background noise, we find an advantage for high-arousal emotions (angry, happy) relative to neutral for both speech intelligibility and emotion recognition.

10 months ago 0 0 1 0
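Speech-shaped noise, the masker mentioned above, is commonly made by giving random-phase noise the long-term magnitude spectrum of the speech material, so it masks speech effectively without carrying intelligible content. A minimal sketch of that idea in numpy; this is my illustration, not the actual stimulus-generation code.

```python
import numpy as np

def speech_shaped_noise(speech, seed=None):
    """Noise with the same magnitude spectrum as `speech` but random phase."""
    rng = np.random.default_rng(seed)
    mag = np.abs(np.fft.rfft(speech))
    phase = rng.uniform(0, 2 * np.pi, mag.shape)
    spectrum = mag * np.exp(1j * phase)
    # Keep DC and Nyquist bins real so the inverse transform is real-valued
    spectrum[0] = mag[0]
    if len(speech) % 2 == 0:
        spectrum[-1] = mag[-1]
    return np.fft.irfft(spectrum, n=len(speech))

# Toy "speech": a tone; in practice you'd use concatenated recordings
t = np.linspace(0, 1, 16000, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)
ssn = speech_shaped_noise(speech, seed=0)
```

In practice the magnitude spectrum is usually estimated from the long-term average of the whole stimulus set rather than a single utterance, but the phase-randomization trick is the same.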

Prior work has also presented conflicting results on whether vocal emotions differ in how accurately they are recognized in the presence of typical background noise, like the din of a busy restaurant. Angry speech seems to have a recognition advantage, but is it special...or just more intense?

10 months ago 0 0 1 0

The acoustics of speech differ based on the emotional state of the speaker. In English, for instance, angry and happy speech tend to have higher mean F0 and mean intensity than neutral speech. But the literature is divided on whether this leads to any intelligibility difference across vocal emotions.

10 months ago 0 0 1 0
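The two acoustic measures named above, mean F0 and mean intensity, can be roughly estimated in a few lines of numpy. This is a crude sketch, not the measurement pipeline behind the paper; real F0 trackers (e.g. Praat's) are far more robust on natural speech.

```python
import numpy as np

def mean_intensity_db(signal):
    """RMS intensity in dB re a full-scale amplitude of 1.0."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

def f0_autocorr(signal, sr, fmin=75, fmax=500):
    """Crude F0 estimate: lag of the autocorrelation peak in [fmin, fmax]."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 8000
t = np.linspace(0, 0.5, sr // 2, endpoint=False)
tone = np.sin(2 * np.pi * 200 * t)  # stand-in for a voiced vowel, F0 = 200 Hz
```

On real speech these measures would be computed frame-by-frame over voiced regions and then averaged, which is what "mean F0" and "mean intensity" usually denote.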
High-arousal emotional speech enhances speech intelligibility and emotion recognition in noise
Prosodic and voice quality modulations of the speech signal offer acoustic cues to the emotional state of the speaker.

Officially out in JASA!

Paper: doi.org/10.1121/10.0036812
Data+Code: osf.io/g4kyh/

A short 🧡 below with details... (1/8)

10 months ago 2 0 1 0

Don’t miss this year’s CNSP workshop! Also, if you are a predoctoral or postdoctoral scholar, consider submitting a proposal for a methods tutorial! Submission form here: tinyurl.com/submit-cnsp-tutorial. πŸ§ πŸ§‘β€πŸ’»πŸ’‘

11 months ago 1 0 0 0

Congrats, @vshirazy.bsky.social !!!

1 year ago 1 0 0 0

Thanks much!

1 year ago 0 0 0 0

This epitomizes why I love scientists. Oh, and chefs. πŸ§‘β€πŸ”¬πŸ§‘β€πŸ³

1 year ago 1 0 0 0

Listening task? Are headphones required? If so, how do you ensure participants are using them? For instance, do you request a return if they fail a headphone check more than once? Asking for a friend… 🀫

1 year ago 0 0 1 0

πŸ˜‚πŸ˜‚πŸ€“πŸ˜‚πŸ˜‚

1 year ago 0 0 0 0

This is great fun! Each round is a head-to-head between models that listen to your audio prompt and respond in text. Then you pick the winner. And, all the while, you contribute to πŸ—£οΈ AI research!

1 year ago 1 0 0 0

Attending #ASA187 hosted by @acousticsorg? Come check out my flashtalk in tomorrow's Suprasegmentals session! I will be sharing our recent results showing enhanced intelligibility and emotion recognition for happy and angry speech in noise, plus a dive under the hood of listener behavior.

1 year ago 2 0 0 0