
Posts by Gabriel Fajardo

Thank you!!

1 week ago 0 0 0 0

Thanks Jae!!

1 week ago 1 0 0 0

I’m thrilled to share that I was awarded the NSF Graduate Research Fellowship! Thanks to all my mentors and lab mates for the incredible support :)

1 week ago 23 3 5 0
Flyer for GSMI 2026 program: https://www.cientificolatino.com/gsmi

Sign ups are ✨OPEN✨ for #GSMI2026! 🤩⁠
➡️ www.cientificolatino.com/gsmi

The GSMI program supports 100 graduate school applicants through 1-on-1 mentorship, fee waivers, professional development, and community building!⁠

Applications accepted until 5/31/26!⁠

3 weeks ago 4 2 0 1

New preprint from Lindsey Tepfer (@ltjaql.bsky.social) and me! We silenced portions of internal monologues in two films to manipulate participants' access to characters' thoughts. Using ISC and RSA, we found that this aligned later neural processing of the narrative & encoding of trait impressions.

2 months ago 51 16 2 1

Happy to introduce our 25 Alumni Scholarship Recipients for 2025!!

This year we launched our 2025 Alumni Scholarship Fund to support 1st-year grad students who are graduates of our GSMI program, covering educational expenses in their first year of graduate school.

4 months ago 10 9 1 0

Some new work from the lab: @yuzhu194.bsky.social and Aidas Aglinskas introduce a deep-learning-based fMRI denoising method that outperforms CompCor by over 200%.

4 months ago 4 1 0 0

We're a month further into the job market - how are things looking? The good news is that the market does seem to have been delayed: ~150 new listings have appeared since my last post. The bad news is that the total is still substantially lower than what it was during the covid dip.

6 months ago 51 21 2 3

We are grateful to the curators of the StudyForrest dataset (studyforrest.org) for making this project possible. And special thanks to Mengting and Stefano - I could not have asked for kinder and sharper mentors to guide me through my first research project!

preprint: osf.io/preprints/ps...

6 months ago 1 0 0 0
Fig. 2. a. Voxels showing significant effects (p < 0.05, FWE corrected) for the combination of auditory responses with responses in V3d and V5 (red), and auditory responses with responses in V3v and V4 (green). b. Voxels showing significant effects for the combination of responses in V3v and V4 with responses in V3d and V5 (blue). c. Fisher transformed Pearson correlation values between the auditory+dorsal and auditory+ventral combined-minus-max models, computed across the top 50 voxels in the STS (left) and the top 100 voxels across the whole brain (right) showing the greatest change in variance explained across both models. d. Pearson correlation values between combined-minus-max effect patterns from the auditory+dorsal and auditory+ventral models within an STS ROI. We computed these correlations across 500 splits of the participants into two equal groups, comparing pattern similarity within the same model across splits (e.g. AUD+dorsal and AUD+dorsal) to the similarity of patterns between different models across splits (e.g. AUD+dorsal in split 1 to AUD+ventral in split 2: “AD1 / AV2”).

Using artificial neural networks, we examined the relationship between multivariate response patterns in the auditory cortex, the two visual streams, and the rest of the brain, revealing that distinct portions of the STS combine information from the two visual streams with auditory information.

6 months ago 2 0 1 0

Does the STS function as a centralized hub, combining auditory input with visual input from both streams? Or do distinct regions separately integrate auditory information with ventral and dorsal visual inputs?

6 months ago 1 0 1 0

The superior temporal sulcus (STS) combines auditory and visual information. However, the division of the visual system into a ventral and a dorsal stream prompts questions about their relative contributions in this process.

6 months ago 1 0 1 0
Fig. 1. a. Visual and auditory regions of interest (ROIs). b. Responses in a combination of visual (e.g., early dorsal visual stream; Fig. 1a, middle panel) and auditory regions were used to predict responses in the rest of the brain using MVPN. c. In order to identify brain regions that combine responses from auditory and visual regions, we identified voxels where predictions generated using the combined patterns from auditory regions and one set of visual regions jointly (as shown in Fig. 1b) are significantly more accurate than predictions generated using only auditory regions or only that set of visual regions.

I’m excited to share my first first-author paper, “Distinct portions of superior temporal sulcus combine auditory representations with different visual streams” (with @mtfang.bsky.social and @steanze.bsky.social ), now out in The Journal of Neuroscience!
www.jneurosci.org/content/earl...

6 months ago 22 11 1 0
Colorado Social Vision & Mind Lab The Social Vision & Mind Lab (Director: Youngki Hong, Ph.D.) at the University of Colorado Boulder explores how people perceive and make sense of the physica...

I’m admitting 1–2 Ph.D. students to join my lab in the Department of Psychology and Neuroscience at CU Boulder, starting Fall 2026. We study person perception, stereotyping and prejudice, and intervention science.

Application info: www.colorado.edu/psych-neuro/...
Lab info: www.svmlab.org

7 months ago 21 12 1 0

Job alert: I'm hiring a postdoc for my lab at CU Boulder starting Fall 2026!

We study person perception, stereotyping & prejudice, and intervention science using behavioral & neuroimaging methods.

Link: jobs.colorado.edu/jobs/JobDeta...

Review starts Nov 1 and continues until filled.

7 months ago 19 19 0 2

Six years in the making, a postdoc project with @freemanjb.bsky.social is finally out in print. Many thanks to Jon and @hennavartiainen.bsky.social and everyone who made this important work possible.

6 months ago 20 8 0 1

Excited to share the preprint for my first first-author manuscript! @markthornton.bsky.social and I show that people hold robust, structured beliefs about how individual mental states unfold in intensity over time. We find that these beliefs are reflected in other domains of mental state understanding.

7 months ago 34 6 2 1
Original members of SCRAP Lab

Current members of SCRAP Lab

Today, SCRAP Lab returned (right) to the Path of Life Garden in Windsor, VT - the site of our first in-person get-together as a lab 5 years ago (left) - to welcome our newest member, graduate student @gabefajardo.bsky.social!

7 months ago 16 4 0 0

The psych job market may not be dead... but it is gravely injured 😬 So far it's looking like the Trump administration's attacks on higher ed/research are going to have more than 2x the impact on the job market that the covid-19 pandemic did. #psychjobs #neurojobs #academicjobs

7 months ago 165 73 14 10

In a TiCS paper, @chujunlin.bsky.social & I propose a high-dimensional model of social impressions.

Existing models focus on 2–4 latent dimensions (e.g. trustworthy/warm), but they often fall apart across different contexts, cultures, & perceivers. We need a paradigm shift.

shorturl.at/7GD1n (1/8)

10 months ago 57 18 3 1

Excited to share that I’ll be joining the Department of Psychology and Neuroscience at @colorado.edu as an Assistant Professor this fall! My lab will study social cognition, focusing on the cognitive and neural bases of stereotyping and bias interventions.

10 months ago 36 5 3 2

🚨 A new rule would let career scientists like NSF/NIH program officers be replaced by political appointees

Already 14,000+ public comments, deadline is Friday

📣 Comments can be short. Courts consider them—and scientists with NSF/NIH experience are especially impactful

Speak up! shorturl.at/WKuBj

11 months ago 536 509 25 50

🥳Excited to share that I am joining Columbia July 2025
@columbiauniversity.bsky.social

Looking for🚨lab managers🚨postdocs🚨grad students! Pls REPOST🙏

We study⭐️person perception⭐️social cognition using experimental, cross-cultural, & computational methods!

App👉shorturl.at/5UVPl
More👉shorturl.at/q18GM

11 months ago 54 19 10 2

Despite everything going on, I may have funds to hire a postdoc this year 😬🤞🧑‍🔬 Open to a wide variety of possible projects in social and cognitive neuroscience. Get in touch if you are interested! Reposts appreciated.

11 months ago 130 102 3 5

SCRAP Lab had a great time at #SANS2025! Can't wait till next year!

11 months ago 38 5 0 0

Thanks :)

11 months ago 0 0 0 0

Now accepting applications! 🚨 As the current lab manager, I can confidently say this is an incredible opportunity to gain lots of hands-on research experience and prepare for grad school. You'll be part of a vibrant community (+ city) and work alongside many brilliant scientists - don't miss out!

1 year ago 7 1 0 0
Graduate School Mentorship Initiative (GSMI). Our mission is to help STEM students from underserved communities get accepted into graduate programs. Apply by 5/31/25 to be one of our 100 scholars.
Program benefits:
* Personal STEM mentor
* Application advice
* Supportive community
* Fee waivers
* Mock interviews
* Webinars and resources.
For more info: cientificolatino.com/gsmi
Cientifico Latino and Simons Foundation logos.

Applying to STEM grad programs? 🎓

Sign ups are ✨OPEN✨ for #GSMI2025! 🤩

The GSMI program supports applicants through 1-on-1 mentorship, fee waivers, professional development, and community building!

Applications accepted on a ROLLING basis until 5/31!
cientificolatino.com/gsmi

MORE INFO in 🧵 1/

1 year ago 13 13 2 3

Thanks!!

1 year ago 1 0 0 0

Thank you!!!

1 year ago 1 0 0 0