🚨 NSF is already quietly eliminating the SBE Directorate, despite Congress’ mandate that NSF support the behavioral & social sciences.
Steps to counter this are in motion.
If you:
- have an SBE proposal under review
- serve on an SBE grant panel
You can help! Fill out this form: shorturl.at/xuKw2
Posts by Michelle Greene
’90s-era CD wallet collection with highly questionable music choices.
It’s a shame that my students’ generation doesn’t know the joy of getting to the stage of friendship when you’re allowed to flip through your buddy’s music collection and bond over the obscure and/or cringe music you both love.
JOB ALERT: PhD opening in my lab at @cimecunitrento.bsky.social in Italy, as part of an Italian FIS3 starting grant.
The project will use advanced analysis methods for MEG data to investigate how the naturalistic hierarchical structure of our world facilitates predictive neural processing.
Postdoc opening in Applied Mathematics at Brown! Bridging APMA + brain science or CS. Two-year appointment starting July 2026 — review begins April 1!
Great opportunity to collaborate with @carneyinstitute.bsky.social faculty at the Nancy G. Zimmerman Center for Computational Brain Science.
Neat!
Many, many thanks go to my excellent trainees (Gillian and Skylar) who led this project, and to collaborator Bruce Hansen who patiently let this "side hustle" metastasize into a monster that included ~200 million VLM-generated words!
This question has basic and applied implications. It shows us a hard limit to the scene information that can be gained through massive image-text pairings. As cognitive scientists try to scale experiments using VLMs, it's also worth being aware of their limits. 8/8 Link: arxiv.org/abs/2603.26589
It might also have something to do with training data. Humans are great at scene affordances, and we tend not to tell other humans obvious things. In image caption datasets, we saw slightly less affordance information than affect-based information, but this is unlikely to be the whole story. 7/
For some models and tasks, it might be about the (in)ability of the models to project themselves as agents in the 3D scene. Adding this information to the prompt recovered affordance information in some cases. 6/
Why might this be the case? We assessed six different hypotheses. We could rule out the simple hypotheses:
❌ It wasn't about prompt length
❌ These were not stylistic differences
❌ It wasn't a simple failure of the visual encoder
So, what is it? 5/
That said, VLMs were pretty bad at assessing scene affordances (18 percentage points worse than other tasks). They also struggled with sensory experience tasks, such as describing how loud the scene would be or its physical temperature. 4/
Not surprisingly, VLMs were pretty good at general knowledge. Interestingly, they weren't bad at affect tasks, such as assessing how safe one would feel in a given environment, or future prediction tasks, even though these are not strongly grounded in pixels. 3/
Because many of these tasks do not have an objective "ground truth" answer, we assessed how similar the VLM responses were to the distribution of human responses, using both NLP metrics and distances in embedding space. 2/
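(For readers curious what "distances in embedding space" can look like in practice, here is a minimal sketch, not the paper's actual pipeline: it scores one VLM response against a set of human responses by mean cosine similarity of sentence embeddings. The encoder model, function name, and example sentences are illustrative assumptions.)

```python
# Minimal sketch (not the paper's actual pipeline) of comparing a VLM response
# to a distribution of human responses in sentence-embedding space.
# Assumes the sentence-transformers package; the model name is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def similarity_to_humans(vlm_response: str, human_responses: list[str]) -> float:
    """Mean cosine similarity between one VLM response and each human response."""
    vecs = model.encode([vlm_response] + human_responses)
    vlm_vec, human_vecs = vecs[0], vecs[1:]
    # Normalize, then a dot product equals cosine similarity.
    vlm_vec = vlm_vec / np.linalg.norm(vlm_vec)
    human_vecs = human_vecs / np.linalg.norm(human_vecs, axis=1, keepdims=True)
    return float(np.mean(human_vecs @ vlm_vec))

# Hypothetical usage: higher scores mean the VLM's description sits closer
# to what human observers tended to say for the same scene and task.
print(similarity_to_humans(
    "A narrow kitchen where you could cook a quick meal.",
    ["You could make dinner here.", "A small kitchen for cooking."],
))
```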
🚨 New preprint alert! 🚨
Do multimodal LLMs (VLMs) reason about high-level visual perception like humans do? We asked over 2000 human observers and 18 VLMs to describe scenes using 15 different tasks spanning general knowledge, affordances, affect, sensory experience, and future prediction. 1/
Holy New York.
I fondly remember your “Mercury in tardigrade”. Legendary!
Three smiling badass scientists bask in the glory of paper submission… with pastries!
Yeah, the world is a dumpster fire right now. But the feeling of celebrating students’ first paper submission? Still magical!
If you spot ICE agents at an airport in the coming days, I’m still collecting stories and I want to hear from you.
Reach me via Signal at marisakabas.04
Ha, relatable! I kill the hardiest plants, horrifying my mother, who used to work in a greenhouse.
OK so this young trans gentleman was cut off by his parents, has been unable to get MIT to up his financial aid, has been working multiple jobs while maintaining an A- average AT MIT (no one does this), but he is gonna need to actually pay his tuition to graduate: www.gofundme.com/f/help-matth...
Hey #visionscience friends - I'm looking to touch base with someone who knows lots about color, esp. contrast/assimilation effects. I'm working on a video about something and I'm worried I'm getting tangled up in part of the explanation. Thanks!
Here's a striking visual illusion - the 9 purple dots.
Focus your eyes on the top left dot. That one is more purple than the others, right? Now try another dot... that one becomes the purple one! pubmed.ncbi.nlm.nih.gov/41744429/
Unfortunately, no. Not at this time. Best of luck with your job search!
I'm still accepting applications! Very exciting opportunity to work on cognitive computational neuroscience of vision in NYC! #PsychJobs #NeuroJobs #neuroskyence #PsychSciSky
Our reply to 11 commentaries on our article ("Rethinking category-selectivity in human visual cortex") is out in Cognitive Neuroscience! Thanks to @susanwardle.bsky.social @maryamvaziri.bsky.social Dwight Kravitz @cibaker.bsky.social and all who contributed! 1/x www.tandfonline.com/doi/full/10....
I’ll be right over!
She and Rev Bayes should throw themselves a pity party!
So when thousands of women (and children) were being duplicated--including in non-consensual AI porn--it was no big deal. But it took "only a 15-sec clip" of these two famous white men to draw outrage and fear. Gotcha. Cool cool.
Babe, wake up, new sarcastic but entirely worthwhile internet abbreviation just dropped