
This was an incredibly rewarding project to work on. Thanks to the amazing team that made this possible! Carly Millanski, Allison Chen, Lisa Wauters, Jordyn Anders, Shilpa Shamapant, @smwilson.bsky.social, @alexanderhuth.bsky.social, @mayalhenry.bsky.social

8/8

We also found good decoding performance from individual brain regions. This suggests that we could move our decoder from fMRI into more portable systems. The best regions differed across participants, so we think fMRI will remain very important for localizing recording sites.

7/8

Moving forward, we hope to develop practical systems that can support communication. We found that decoding performance reliably improved with the amount of training data. So we're optimistic that there's plenty of room for improvement.

6/8

Next we explored why this works. We found that conceptual processing was largely spared outside of damaged brain regions. This suggests that our approach could generalize across patients with a wide range of lesion profiles and speech / language impairments

5/8

We found that the decoder predictions could describe what the participants with aphasia were hearing about / seeing / imagining! This shows how brain-computer interfaces could predict the concepts that patients are thinking about but struggling to express

4/8

We previously showed that concepts can be decoded from neurologically healthy participants using functional MRI. Here we used a transfer learning approach where decoders are trained on neurologically healthy participants and then transferred to participants with aphasia using stories and movies

3/8

Aphasia is one of the most common and debilitating effects of stroke. Patients with aphasia struggle with different aspects of language (e.g. word finding, grammatical construction, phonological encoding). But many patients have relatively spared conceptual knowledge

2/8

We're excited to share our new study on decoding brain activity in participants with post-stroke aphasia! We think this is an important step towards cognitive brain-computer interfaces for patients with language disorders

www.biorxiv.org/content/10.6...

1/8

Schematic of semantic tuning shifts from L2-English to L1-Chinese

Our work on bilingual language processing is now out in @pnas.org! Our fMRI study compares cortical representations between native and non-native languages. We find that representations are largely similar, but systematically modulated between languages www.pnas.org/doi/10.1073/...

Research Coordinator I - TCH Neurosurgery

My group is hiring a full-time research coordinator to work with our collaborators in Houston on understanding speech and language development in children with epilepsy. Great for folks looking to get direct experience with clinical/translational research. Please repost! jobs.bcm.edu/job/Research...

I'm recruiting PhD students to join my new lab in Fall 2026! The Shared Minds Lab at @usc.edu will combine deep learning and ecological human neuroscience to better understand how we communicate our thoughts from one brain to another.

Practically, our approach may enable us to adapt our semantic language decoders for people with impaired language comprehension. I've been working with @mayalhenry.bsky.social and @alexanderhuth.bsky.social to test our approach in people with aphasia. Stay tuned!

5/5

We tested our approach on neurologically healthy participants, and found that silent movies are nearly as effective as narrative stories for transferring semantic decoders. Scientifically, this adds to the growing evidence that semantic representations are shared between language and vision

4/5

Our new approach can decode language from a participant without language data! First we train decoders on reference participants with language data. Then we use *silent movies* to align brain responses across participants. Finally we decode new participant responses using the reference decoders
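The three-step recipe in this post (train reference decoders, align via shared movie responses, decode the new participant through the alignment) can be illustrated with a toy linear model. Everything below is invented for illustration — the dimensions, the ridge penalty, and the random "decoder" are placeholders, not the study's actual models. A minimal numpy sketch, assuming the cross-participant alignment is a linear map fit by ridge regression on responses to a shared silent movie:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: time points x voxels for each participant.
T_movie, T_test = 200, 50
V_ref, V_new = 120, 100

# Ground-truth linear map from new-participant space to reference
# space, used here only to simulate roughly aligned brain responses.
true_map = rng.normal(size=(V_new, V_ref))

# Step 1 (assumed already done): a decoder trained on the reference
# participant -- here just a random linear read-out into a 10-d
# "semantic" space standing in for a real trained decoder.
ref_decoder = rng.normal(size=(V_ref, 10))

# Step 2: both participants watch the same silent movie; fit a ridge
# regression mapping new-participant responses into reference space.
X_new_movie = rng.normal(size=(T_movie, V_new))
X_ref_movie = X_new_movie @ true_map + 0.1 * rng.normal(size=(T_movie, V_ref))

lam = 1.0  # ridge penalty (would be cross-validated in practice)
W = np.linalg.solve(
    X_new_movie.T @ X_new_movie + lam * np.eye(V_new),
    X_new_movie.T @ X_ref_movie,
)

# Step 3: decode held-out responses from the new participant by
# projecting into reference space, then applying the reference decoder.
X_new_test = rng.normal(size=(T_test, V_new))
semantic_pred = (X_new_test @ W) @ ref_decoder
print(semantic_pred.shape)  # (50, 10)
```

The key design point the sketch captures: no language data from the new participant is ever needed — only shared-stimulus (movie) responses for the alignment step.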

3/5

Language decoders that target semantic representations have the potential to help people with aphasia, who struggle to map concepts to lexical-phonological output. But many people with aphasia have impaired language comprehension, and current decoders are trained on brain responses to language

2/5

I'm excited to share our new paper (with @alexanderhuth.bsky.social) on transferring language decoders across participants and modalities!

authors.elsevier.com/a/1kZRD3QW8S...

1/5
