
Posts by Ryan Panela


Excited to share our paper (with @jzacks.bsky.social), now out in JEP:LMC!

Event boundaries sometimes disrupt temporal order memory in list-based paradigmsβ€”but what happens in narratives with more complex structures that better resemble real life?

✨ Link: psycnet.apa.org/record/2027-...

1 month ago

1/ 🚨 New preprint

Key Moments Scaffold the Semantic Structure of Narratives

Using topic modeling on spoken recall and annotations from three naturalistic datasets, we ask: which parts of a narrative contribute most to its semantic structure and, subsequently, to memory?

Preprint: osf.io/dcfvw
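Not the paper's actual pipeline (which uses topic modeling across three naturalistic datasets), but a toy sketch of the underlying question: score each narrative segment by how much of its content resurfaces in a recall transcript. Every name, the stopword list, and the example story here are illustrative assumptions.

```python
STOPWORDS = {"the", "a", "and", "to", "of", "in", "was", "it", "he", "she", "that"}

def content_words(text):
    """Lowercased content words, punctuation stripped, stopwords removed."""
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def segment_recall_overlap(segments, recall):
    """Jaccard overlap between each source segment and a recall transcript."""
    recall_words = content_words(recall)
    scores = []
    for seg in segments:
        seg_words = content_words(seg)
        union = seg_words | recall_words
        scores.append(len(seg_words & recall_words) / len(union) if union else 0.0)
    return scores

segments = [
    "The knight rode into the dark forest",   # key moment
    "He found a dragon guarding treasure",    # key moment
    "The weather that day was mild",          # peripheral detail
]
recall = "A knight found a dragon with treasure in a forest"
scores = segment_recall_overlap(segments, recall)
```

Segments whose content words reappear in recall (the "key moments") score high; peripheral details score near zero.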

1 month ago

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

3 months ago
Neural signatures of engagement and event segmentation during story listening in background noise

Finally out: www.eneuro.org/content/earl...

fMRI during naturalistic story listening in noise, examining event-segmentation and inter-subject correlation (ISC) signatures. Listeners stay engaged and comprehend the gist even in moderate noise.

with @ayshamota.bsky.social @ryanaperry.bsky.social @ingridjohnsrude.bsky.social
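The ISC signature mentioned above is commonly computed with a leave-one-out scheme: correlate each subject's timecourse with the mean timecourse of the remaining subjects. A minimal pure-Python sketch with toy data, not the paper's analysis code:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def leave_one_out_isc(timecourses):
    """Correlate each subject with the mean of the remaining subjects."""
    results = []
    for i, tc in enumerate(timecourses):
        others = [t for j, t in enumerate(timecourses) if j != i]
        mean_other = [sum(vals) / len(vals) for vals in zip(*others)]
        results.append(pearson(tc, mean_other))
    return results

# toy data: three subjects tracking the same shared stimulus-driven signal
shared = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
subjects = [shared,
            [s + 0.5 for s in shared],   # offset subject
            [2.0 * s for s in shared]]   # scaled subject
isc_values = leave_one_out_isc(subjects)
```

Because each toy subject is an affine transform of the shared signal, all three ISC values come out at 1; real fMRI data would land well below that, and lower still as noise masks the story.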

3 months ago

New Preprint 🚨

This research with @alexbarnett.bsky.social, Yulia Lamekina, @barense.bsky.social, and @bjherrmann.bsky.social examines how background noise shapes event segmentation during continuous speech listening and its consequences for memory.

osf.io/e67qr_v1
@auditoryaging.bsky.social

3 months ago

Building Bridges in Brain Data.

The event will focus on open science practices, innovative methods, and community in the neurosciences, with opportunities to engage in collaborative projects or explore new tools. No prior expertise is required.

Registration for BrainHack 2026 is still open!

3 months ago
Mobile Eye-Tracking Glasses Capture Ocular and Head Markers of Listening Effort

New work from the lab: www.biorxiv.org/content/10.1...

Mobile eye-tracking glasses assess listening effort through pupil size and eye movements as well as a stationary eye tracker does. But the mobile glasses also show that people reduce their head movements when listening becomes more effortful.
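Pupillometric listening-effort measures typically baseline-correct each trial before extracting dilation. A minimal sketch of that step with a hypothetical toy trace; this is not the paper's pipeline, and it does not use the Pupil Labs or EyeLink APIs:

```python
def baseline_corrected_peak(pupil_trace, n_baseline):
    """Subtract the mean of the pre-stimulus baseline samples,
    then return the peak dilation of the remaining trace."""
    baseline = sum(pupil_trace[:n_baseline]) / n_baseline
    corrected = [p - baseline for p in pupil_trace[n_baseline:]]
    return max(corrected)

# hypothetical pupil-diameter samples (mm): 3 baseline samples, then a response
trace = [4.0, 4.1, 3.9, 4.3, 4.8, 5.2, 4.9, 4.5]
peak_dilation = baseline_corrected_peak(trace, 3)
```

Larger baseline-corrected peaks are conventionally read as greater listening effort; comparing this quantity across the two trackers is the kind of validation the preprint describes.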

7 months ago

Excited to share the publication of our work exploring the application of LLMs to event segmentation and memory research.

For researchers interested in applying these validated methods, an open-source module is available on GitHub (github.com/ryanapanela/EventRecall).
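The EventRecall module's actual interface is not reproduced here. As a hedged illustration of the general idea, event boundaries can be treated as points where the similarity between consecutive sentence embeddings dips below a threshold; hand-made 2-D vectors stand in for real LLM embeddings:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def boundaries(embeddings, threshold=0.5):
    """Mark a boundary after sentence i when its similarity
    to sentence i+1 drops below the threshold."""
    return [i for i in range(len(embeddings) - 1)
            if cosine(embeddings[i], embeddings[i + 1]) < threshold]

# hand-made 2-D stand-ins: sentences 0-1 share an event, 2-3 share another
embeddings = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
event_boundaries = boundaries(embeddings)
```

On these toy vectors the similarity dips only between sentences 1 and 2, so a single boundary is detected there.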

4 months ago

Such cool work!

8 months ago

🚨 New preprint 🚨

Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧡
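A common way to relate LLM embeddings to fMRI is an encoding model: regress each voxel's timecourse on embedding features. A toy one-feature, one-voxel ordinary-least-squares sketch, illustrative only; real analyses use many features and regularized regression:

```python
def fit_encoding(feature, voxel):
    """Ordinary least squares: one embedding dimension -> one voxel."""
    n = len(feature)
    mx, my = sum(feature) / n, sum(voxel) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(feature, voxel))
             / sum((x - mx) ** 2 for x in feature))
    return slope, my - slope * mx

# toy data: voxel response is a clean linear function of the feature
feature = [0.0, 1.0, 2.0, 3.0]
voxel = [1.0, 3.0, 5.0, 7.0]          # voxel = 2 * feature + 1
slope, intercept = fit_encoding(feature, voxel)
```

The fitted weights say how strongly that embedding dimension drives the voxel; in the relational case, the features would be embeddings of the relation (e.g. fire-produces-smoke) rather than of the individual concepts.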

9 months ago
Finding the music of speech: Musical knowledge influences pitch processing in speech

Short speech utterances can be looped; after a few repetitions, it sounds as if the speaker is singing, and once the switch from speech to song happens, it never seems to reverse. In this paper, we show evidence that musical knowledge is activated after the switch. www.sciencedirect.com/science/arti...

1 year ago

And that's a wrap on #ARO2025. Grateful for the opportunity to give my first international conference talk and for the chance to (re-)connect with an incredible group of researchers.

See you next year in Puerto Rico for #ARO2026.

@auditoryaging.bsky.social

1 year ago
Event Segmentation Applications in Large Language Model Enabled Automated Recall Assessments

New Preprint 🚨

This research with @bjherrmann.bsky.social, @alexbarnett.bsky.social, and @barense.bsky.social extends previous work exploring how LLMs can simulate human event segmentation, with applications for automated recall assessments.

arxiv.org/abs/2502.13349

1 year ago