
Posts by Zitong Lu

Congrats!!!

3 days ago 1 0 1 0

Happy to see this paper officially published in Cognitive Psychology! A nice collaboration with Yuxuan Zeng and David Osher at OSU. We find face-like holistic processing in non-face stimuli - and Configuration is the answer!
doi.org/10.1016/j.co...

1 week ago 7 3 0 0

Our new paper on brain networks engaged during imagining is out now in Neuron!

Here is a download link (free for 50 days):
authors.elsevier.com/c/1msNE3BtfH...

Congratulations to Nate Anderson for leading this work @rementurus.bsky.social

🧵

3 weeks ago 85 39 3 2
Achieving more human brain-like vision via human EEG representational alignment - Communications Biology. EEG-aligned fine-tuning makes artificial vision models more brain-like, enhancing model-human representational similarity across EEG, fMRI, and behavior.

Can we use human EEG data to optimize vision models? Yes!
Our ReAlnet paper is finally out @commsbio.nature.com w/ Yile Wang & @juliedgolomb.bsky.social !!
Our alignment framework makes ANNs more brain-like, enhancing model-human similarity across EEG, fMRI, and behavior.
doi.org/10.1038/s420...

1 month ago 6 1 0 0
Post image

The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network!
authors.elsevier.com/a/1mUU83BtfH...
1/n 🧵👇

2 months ago 70 20 2 4
Vacancy — PhD Position in NeuroAI for Video Perception in the Human Brain. Are you interested in using AI to unravel the mysteries of the brain? Do you want to perform cutting-edge NeuroAI research and leverage deep learning to understand human vision? Then check out the vacancy below and apply for a PhD position in this exciting research direction.

I have a PhD opening for my #VIDI BrainShorts project 📽️🧠🤖! Are you or do you know an ambitious, recent (or almost) MSc graduate with a background in NeuroAI and interest in large-scale data collection and video perception? Check out our vacancy! (deadline Feb 15).
werkenbij.uva.nl/en/vacancies...

3 months ago 31 26 1 0
(a) Mean cross-validated R² for each model, averaged across participants and shown separately for each visual area (V1–hV4) and across all ROIs combined. Error bars indicate ±1 SEM. All pairwise differences were significant except between the Contour-Based and Line Drawing–Steerable Pyramid Models in hV4. (b) Mean R² by population receptive field eccentricity. Voxel eccentricities are binned up to 4.2° of visual angle, the extent of the image from the central fixation mark. (c) Left: visual ROIs on an inflated surface map in fsaverage space. Right: R² difference surface map in fsaverage space, computed by subtracting the Photo-Steerable Pyramid Model's R² from the Contour Model's. Positive values indicate larger R² for the Contour Model than for the Photo-Steerable Pyramid Model.
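The "mean cross-validated R²" in panel (a) is the standard score for voxelwise encoding models: regress model features onto a voxel's responses and evaluate R² on held-out folds. A minimal sketch of that computation, using simulated features and a simulated voxel (all shapes and the ridge penalty are assumptions, not the paper's pipeline):

```python
# Hypothetical sketch of a cross-validated R^2 for one voxelwise encoding
# model. Features and the voxel response are simulated with known weights,
# so the score comes out high by construction.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50

X = rng.normal(size=(n_trials, n_features))        # model features per image
w = rng.normal(size=n_features)                    # ground-truth weights
y = X @ w + rng.normal(scale=0.5, size=n_trials)   # simulated voxel response

# Mean cross-validated R^2, as plotted per model and per visual area.
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```

Repeating this per voxel and averaging within each ROI yields the bars in panel (a); binning the per-voxel scores by receptive-field eccentricity yields panel (b).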


What determines the perception of orientations in visual cortex, sharp contours or oriented spatial frequencies?

It's the contours, the building blocks for shape. Brilliant paper by Seohee Han out in Scientific Reports:
www.nature.com/articles/s41...

@uoftpsychology.bsky.social

3 months ago 10 4 0 0
Post image

Want to know how the human brain encodes spatial location in 3D space? Come to my poster at #SfN25 this afternoon!

5 months ago 3 0 0 0

Come see our poster tomorrow morning at R2!

5 months ago 1 0 0 0
Post image

My lab at Boston University has open positions for a postdoc and PhD students. We study visual perception, attention, and decision making with a focus on temporal dynamics. Check out our recent work at sites.bu.edu/denisonlab/ and email me if you're interested in learning more.

5 months ago 26 12 0 1

We’re looking for a postdoc to join our Max Planck group in Germany some time in 2026. If you have computational and/or neuroimaging expertise, and are interested in questions intersecting perception and cognition, please reach out! I’ll also be happy to chat at the #Bernsteinconference this week.

6 months ago 63 54 1 1

Do it!

6 months ago 1 0 0 0

6/n This unique EEG-fMRI dataset is the first large-scale, multimodal neuroimaging dataset for 3D visual perception. We will make this rich and novel resource openly available to support future investigations~

8 months ago 0 0 0 0
Post image

5/n Also, check out this super cool figure of the timecourse of spatial processing in the human brain!

8 months ago 0 0 1 0
Post image Post image

4/n In addition to feature- and coordinate-level representations, we further provide novel evidence for 3D geometric distance representations in regions such as the parahippocampal cortex, highlighting its role in encoding higher-order spatial structure.

8 months ago 0 0 1 0
Post image

3/n We demonstrate that the human brain flexibly encodes spatial information using multiple spatial features in multiple coordinate systems at different points in time and brain space.

8 months ago 0 0 1 0
Post image Post image

2/n Our findings reveal a spatiotemporal gradient in the encoding of spatial features: early, widespread representations of 2D features, followed by later, more selective depth and 3D feature encoding across distinct cortical regions.

8 months ago 0 0 1 0
Post image

1/n Here we introduce a multimodal framework combining individualized perceptual depth calibration, large-scale EEG and fMRI recordings (over 66,000 trials in total across 10 participants), and computational approaches to characterize neural encoding of 3D spatial locations.
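A common computational approach for this kind of EEG dataset is time-resolved decoding: train a classifier on the channel pattern at each time point and trace when spatial location becomes decodable. The sketch below is purely illustrative, with simulated EEG and assumed shapes; it is not the authors' pipeline.

```python
# Hypothetical sketch: time-resolved decoding of spatial location from EEG.
# All data are simulated; trial counts, channel counts, and the injected
# signal are assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 32, 50
labels = rng.integers(0, 4, size=n_trials)   # 4 stand-in 3D locations

# Simulated EEG: a label-dependent signal appears in later time points.
eeg = rng.normal(size=(n_trials, n_channels, n_times))
eeg[:, :5, 25:] += labels[:, None, None] * 0.5

# Decode location separately at each time point -> an accuracy timecourse.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(),
                    eeg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print(f"peak decoding accuracy: {accuracy.max():.2f}")
```

Plotting `accuracy` against time (chance here is 0.25) gives the kind of spatial-processing timecourse described later in the thread.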

8 months ago 0 0 1 0

Excited to share our new preprint! This is one of my most important PhD projects w/ @juliedgolomb.bsky.social.
We explored the integrated nature of 3D visual perception: how individual spatial features are jointly represented, and how they converge into coherent 3D representations in the human brain.

8 months ago 4 1 1 0
Things and Stuff: How the brain distinguishes oozing fluids from solid objects (YouTube video by McGovern Institute)

Super excited to share our new article: “Dissociable cortical regions represent things and stuff in the human brain” with @nancykanwisher.bsky.social, @rtpramod.bsky.social and @joshtenenbaum.bsky.social

Video abstract: www.youtube.com/watch?v=B0XR...

Paper: authors.elsevier.com/a/1lWxv3QW8S...

8 months ago 33 12 1 2
Research Associate

I'm recruiting a lab manager for my soon-to-be-launched lab at Ohio State! If you know of any recent grads who may be interested both in helping to build the lab and in developing skills in the cognitive neuroscience of memory, please share!

osu.wd1.myworkdayjobs.com/en-US/OSUCar...

8 months ago 58 52 3 2

PhDone in 4 years! Deeply grateful to have worked in @juliedgolomb.bsky.social 's lab - thank you for all the support and guidance!
Also, couldn't be happier to graduate with @yongminchoi.bsky.social - we did it!

9 months ago 9 0 0 0

The blue-highlighted projects are the three main studies I’ll cover in my dissertation talk.
The red boxes mark the four projects I’m most proud of during my PhD.

9 months ago 1 0 0 0
Post image Post image

4 years ago, I arrived in the U.S. with everything unknown.
During these unforgettable years at OSU, I've worked on 15 papers across experimental psychology, cognitive neuroscience, and neuroAI — all to better understand human visual perception.
Tomorrow, I will defend my PhD! 🎓

9 months ago 4 0 1 0

Congrats, Brynn!

9 months ago 1 0 1 0
Post image

🚨 New publication alert! 🚨
Ever wonder how we perceive a stable world despite constantly moving our eyes? 👀
We investigated how the brain maintains visual stability across eye movements in natural scenes.

11 months ago 4 2 5 0
Video

Now out in Nature Human Behaviour @nathumbehav.nature.com : “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Please check our NHB link: www.nature.com/articles/s41...

10 months ago 53 21 4 6

See our recent work about nonface configuration triggering holistic processing! Nice collaboration with Osher lab!

1 year ago 1 0 0 0
Postdoctoral Researcher, Brain & AI. Meta's mission is to build the future of human connection and the technology that makes it possible.

🚨Job alert:
Postdoc in our Brain & AI team at Meta, Paris
to work on self supervised learning and fMRI.
Apply here:
www.metacareers.com/jobs/6951286...
(And please feel free to RT)

1 year ago 8 7 0 0

So excited to share our new paper in JEP:HPP with former undergrad mentee @avaaaaran.bsky.social and @juliedgolomb.bsky.social!
Check out how we applied a super interesting design to investigate object-location binding of a moving object!

1 year ago 3 0 0 0