Many claim memory biases toward percepts reflect corruption in sensory signals. We challenge this view by showing that people adapt their integration rationally with experience. With @timbrady.bsky.social
Humans adaptively integrate memory and perception based on stimulus history | osf.io/preprints/ps...
Posts by Brian Odegaard
If you're interested in doing a 2-year Schmidt "AI in science" postdoc fellowship in neuroscience/AI with me starting in July or Oct 2027, take a look at this and get in touch soon. We've had a lot of luck recently getting these fellowships.
www.imperial.ac.uk/electrical-e...
"we show that [the lateral prefrontal areas] 8Av/45 encodes the color of a visual stimulus, regardless of its behavioral relevance."
www.nature.com/articles/s41...
On the inseparability of the prior and neural resources in behavioural bias www.biorxiv.org/content/10.6...
Top-down effects (of knowledge) on perception observed under ambiguous (low-light) conditions psycnet.apa.org/doiLanding?d...
This looks like a significant discovery from Doris Tsao's lab:
Rapid concerted switching of the neural code in the inferotemporal cortex
@nature.com
"..our findings indicate that there is a previously unknown mechanism for neural representation:.."
www.nature.com/articles/s41...
www.sciencedirect.com/science/arti...
very nice paper on a mouse model of blindsight
i'm not sure i'm fully convinced by their saliency control, but maybe i'm just splitting hairs
let me explain: (a thread to follow)
Linking working memory maintenance and readout in monkey sensory and prefrontal cortex www.biorxiv.org/content/10.6...
can i repost this 1000x please?
learning to develop "good" research questions is so important, and so undercut by the "move fast break things" vibe coding mentality.
(yes, i also vibe code... for weekend projects!)
shameless plug for my recent paper on this: www.nature.com/articles/s41...
Each year, people speak 338 fewer words per day (on average). These effects accumulate: by 2019, people were speaking 28% fewer words each day than in 2005 (!).
journals.sagepub.com/doi/full/10....
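A quick sanity check that the two figures in the post hang together (this assumes a constant annual drop, which the post implies; the baseline is inferred, not stated):

```python
# Do "338 fewer words/day each year" and "28% fewer in 2019
# than in 2005" imply a plausible 2005 baseline?
annual_drop = 338                      # words/day lost per year (from the post)
years = 2019 - 2005                    # 14 years
total_drop = annual_drop * years       # 4,732 words/day

implied_baseline = total_drop / 0.28   # baseline at which that drop is 28%
print(total_drop, round(implied_baseline))
```

The implied 2005 baseline of roughly 16,900 words/day is close to the ~16,000 words/day often cited for everyday speech, so the two numbers are at least internally consistent.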
www.science.org/doi/10.1126/...
"mental imagery reactivates the same sensory codes used during visual stimuli, suggesting the existence of a generative model capable of synthesizing detailed sensory contents from an abstract, semantic representation."
The horizontal bias in visual processing is also evident in functional connectivity.
fully agree with Stefano. I spent years of my training learning to translate scientific thinking into models and code. if that’s no longer a bottleneck, what was the point — and what should replace it?
🚨 Congress held NIH and NSF budgets roughly flat. Scientists celebrated.
Here is what actually happened.
NIH awarded 4,641 fewer grants in FY2025 than the 4-year avg. New awards dropped 14%. Over $1.3B never reached investigators.
NSF awards fell 22%.
The money was there.
Graphic with the text "Action Alert: AAS urges advocacy against proposed cuts to science in President's Budget Request"
The President's Budget Request proposes a 47% cut to NASA Science, a 55% cut to NSF, and a 13% cut to the DOE Office of Science. AAS President Dr. Dara Norman sent a call to action to AAS members via email today. Urge your members of Congress to reject these cuts now! aas.org/reject-2027-...
Among the atrocities and contradictions: SBE is the NSF directorate that funds the brain and mind research relevant to NSF's new AI-focused priorities. "Make it make sense" appears to be expecting too much (sigh).
A line graph of the cumulative number of awards for NIH from fiscal years 2021-2026. The curve for fiscal year 2026 lies below the other curves.
NIH and NSF Updates (using graphs from grant-witness.us)
NIH all awards (New and Non-Competitive Renewal, Type 1 and Type 5, consistent with what is shown on NIH Reporter)
The flattening MAY be due to the lack of an OMB apportionment, which has now been remedied.
1/5
These are essential questions to be asking right now. Ignoring the power of these models (which are only getting smarter by the month) is a recipe for professional obsolescence for oneself and one's students.
🧵 I gave Claude two things: a short paper (doi.org/10.1073/pnas...) and a raw behavioural dataset with 3 lines of variable descriptions.
Then I asked it to fit three computational RL models described only by equations in the manuscript. No code, no toolbox, no guidance on the fitting procedure. 1/3
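For readers curious what that kind of task involves, here is a minimal sketch of the generic recipe: Rescorla-Wagner value updating with a softmax choice rule, fit by maximum likelihood via scipy. The actual models, equations, and data in the thread come from the linked paper; everything below is an illustrative toy, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, choices, rewards, n_arms=2):
    """Negative log-likelihood of a Rescorla-Wagner + softmax model."""
    alpha, beta = params               # learning rate, inverse temperature
    q = np.zeros(n_arms)               # initial action values
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax choice probs
        nll -= np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])                     # prediction-error update
    return nll

# Simulate a toy agent on a 2-armed bandit, then recover its parameters
rng = np.random.default_rng(0)
true_alpha, true_beta = 0.3, 3.0
q = np.zeros(2)
choices, rewards = [], []
for _ in range(500):
    p = np.exp(true_beta * q) / np.exp(true_beta * q).sum()
    c = rng.choice(2, p=p)
    r = float(rng.random() < (0.8 if c == 0 else 0.2))  # arm 0 pays off 80%
    choices.append(c)
    rewards.append(r)
    q[c] += true_alpha * (r - q[c])

fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(1e-3, 1.0), (1e-2, 20.0)])
alpha_hat, beta_hat = fit.x
```

With a few hundred trials, the recovered learning rate and temperature should land in the neighborhood of the generating values; this kind of simulate-and-recover loop is the standard way to check a fitting pipeline before trusting it on real data.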
A line graph showing awards over time at NSF's SBE. The line for this year, 2026, is extremely FLAT compared to prior years. By this time in 2025, 179 grants had been awarded. In 2021, it was 243. This year... 16.
NSF's Social, Behavioral and Economic Directorate has awarded only 16 grants since October.
No big budget cuts went through. No freeze. The courts have acted to keep funds flowing at every turn. The money just isn't going out.
Why...?
1/x
source: grant-witness.us/funding_curv...
🟦🧠 #academicsky
Happy to report that our survey study on the diversity with which people seem to experience their mental imagery is now published in RSOS :) doi.org/10.1098/rsos...
I posted a longer thread summarising the findings some months ago when we first put out the preprint: bsky.app/profile/samp...
Metacognition people, check out this upgraded hmetad package for estimating metacognitive metrics (e.g., M-ratio)!
☑️More efficient
☑️Easier to implement
☑️Comprehensive documentation
☑️New non-confounded measure of metacognitive bias (meta-delta)
I’ve just applied this model to my data - working nicely!
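For context on the quantities involved: M-ratio is meta-d' divided by type-1 d'. A minimal sketch of the type-1 side in Python (the meta-d' fit itself, which hmetad estimates from confidence-rating data, is omitted; the function name here is illustrative, not the package's API):

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Type-1 sensitivity with a log-linear correction for extreme rates."""
    hr  = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

d1 = dprime(hits=75, misses=25, false_alarms=25, correct_rejections=75)
meta_d = 1.0          # placeholder: in practice, fit from confidence ratings
m_ratio = meta_d / d1 # M-ratio: metacognitive efficiency (1 = ideal)
```

An M-ratio below 1 means confidence ratings carry less information than the type-1 performance would allow; hierarchical estimation (as in hmetad) stabilizes this at low trial counts.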
Visual #workingmemory peeps, you're going to love this. This is super challenging - it might give you pause about how we're measuring color memory...
My color memory is a 40.4/50. Please do worse so I feel better.
dialed.gg?c=3PCHDE
🚨New preprint alert! 🚨
Do multimodal LLMs (VLMs) reason about high-level visual perception like humans do? We asked over 2000 human observers and 18 VLMs to describe scenes across 15 different tasks spanning general knowledge, affordances, affect, sensory experiences, and future prediction. 1/
The recording of Mel Goodale's talk at the MIT Consciousness Club is now available here: www.youtube.com/watch?v=kThw....
Super proud of @sjoerdmeijer.bsky.social who demonstrated that amygdala-TUS slows initial threat acquisition and enhances subsequent extinction. Great collaboration with @lennartverhagen.bsky.social and @deVoogdld.bsky.social et al. Thanks to @ERC.europa.eu.
www.science.org/doi/10.1126/...
In a new preprint, we use a combination of 2AFC and discrimination tasks to quantify sensory, decisional, and metacognitive noise in units of the physical stimulus. We find that, across two experiments, sensory and decisional noise are comparable, while meta noise is lower.
osf.io/preprints/ps...
Example images sketching research in the lab. Top left: MRS voxel locations in medial prefrontal cortex. Top right: EEG time-frequency representation during perceptual choice. Bottom left: screenshot from a foraging paradigm. Bottom right: Drug-induced changes in delay and effort discounting
Please repost!
We have one postdoc and one PhD student position open in my lab (starting from Sept 2026), neuroscience of decision making and learning. Join us @hhu.de in gorgeous Düsseldorf!
For details please look here:
www.psychologie.hhu.de/en/research-...
@biodgps-dgpa.bsky.social
Humans happen to have the same problem...
pubmed.ncbi.nlm.nih.gov/32483374/
Conclusion: AI is human? 🤪😱🤦
A cautionary tale...!