My review of "Confidence-accuracy dissociations in perceptual decision making" is now published. I think that this will be useful to both experts and newcomers to the field.
www.sciencedirect.com/science/arti...
Posts by Doby Rahnev
🚨 Announcing another edition of the Metacognitive Science satellite in NYC, August 2nd 2026 (day before CCN) 🧠 🧪
Abstract submission is now open and closes May 15th!
metacognitivescience.org
Co-organised with @meganakpeters.bsky.social @luciecharlesneuro.bsky.social @dobyrahnev.bsky.social
SCORE, a collaboration of 865 researchers, is now released as three papers in Nature, six preprints, and a lot of data (cos.io/score/). SCORE examined repeatability of findings from the social-behavioral sciences and tested whether human and automated methods could predict replicability.
In a new preprint, we use a combination of 2AFC and discrimination tasks to quantify sensory, decisional, and metacognitive noise in units of the physical stimulus. We find that, across two experiments, sensory and decisional noise are comparable, while meta noise is lower.
osf.io/preprints/ps...
Fantastic opportunity here 👇
If you're not at Cosyne - and you are in or around Atlanta - you could do worse than drop by the CoCo conference! Tomorrow from 8am
coco.psych.gatech.edu/coco-confere...
New paper from the lab in which we test whether DDM and confidence distributions can be used to distinguish between perceptual and decisional effects. We show that putative signatures of perceptual effects emerge in a purely cognitive task.
link.springer.com/article/10.3...
Our new paper is out in Cognition! What determines whether confidence follows the classic "folded-X" pattern vs. the "double-increase" pattern? The answer lies in the type of stimulus manipulation. Big thanks to my advisor Doby @dobyrahnev.bsky.social and co-first author @herrickfung.bsky.social !
More evidence for ‘task-defining’ vs. ‘auxiliary’ stimulus manipulations, each with distinct effects on confidence.
Check out my new paper with @dobyrahnev.bsky.social and @kaixue98.bsky.social, now in Cognition. Also check out our earlier sister paper on this matter!
Kai's thread 👇
a while back i threatened to share this. finally online
for detection tasks we often systematically estimate sensitivity wrong. we need to control for unequal variance in our models, but we often don't coz it needs extra data
now there's a virtually 'free' way to do it
www.cell.com/iscience/pdf...
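For context on why unequal variance matters: the standard equal-variance d' assumes the signal and noise distributions have the same spread, which is typically violated in detection tasks. A minimal sketch of the textbook unequal-variance correction (d_a), not the paper's own method; the zROC slope and the hit/false-alarm rates below are illustrative values, not data from the paper:

```python
from statistics import NormalDist

def d_prime(hit, fa):
    """Equal-variance d': assumes signal and noise share one sigma."""
    z = NormalDist().inv_cdf
    return z(hit) - z(fa)

def d_a(hit, fa, slope):
    """Unequal-variance sensitivity d_a, given the zROC slope
    (sigma_noise / sigma_signal); slope < 1 is typical in detection.
    Estimating the slope normally requires extra data (e.g. ratings)."""
    z = NormalDist().inv_cdf
    return (2 / (1 + slope ** 2)) ** 0.5 * (z(hit) - slope * z(fa))

hit, fa = 0.85, 0.20
print(d_prime(hit, fa))   # equal-variance estimate
print(d_a(hit, fa, 0.8))  # corrected for an assumed zROC slope of 0.8
```

When the slope equals 1, d_a reduces exactly to d'; any other slope shows how the equal-variance estimate is biased.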
Turns out that individual differences in accuracy, confidence, and RT among ANNs that only differ in their random initialization mimic the individual differences in humans.
It may be time for NeuroAI to take individual differences even more seriously.
Check out Herrick's thread 👇
🚨 New preprint on individual differences in artificial neural networks and human behavior.
We show that individual differences among ANN instances trained with different random initializations capture the individual differences in human behavior.
1/8
This paper started almost a decade ago in collaboration with the amazing @racheldenison.bsky.social. Marshall Green and Mingjia Hu did the actual work.
These results don't mean that ANNs are a good model of internal evidence for all visual tasks (far from it), but they do show that this is likely to be the case for simple visual spaces.
Critically, artificial neural networks (ANNs) trained on the orientation task reproduced both the fine- and coarse-scale results as emergent properties, without any special training or fine-tuning. This was the same for 3-, 4-, and 5-layer networks.
At the same time, increasing the stimulus tilt in coarse-scale increments produced a highly non-linear increase in sensitivity, with a plateau beyond 14 degrees. This difference between fine- and coarse-scale results isn't predicted a priori by most standard models.
In a task where subjects judged if Gabors were tilted clockwise or counterclockwise, we examined how orientation is transformed into internal evidence. We found that increasing the stimulus tilt in fine-scale increments resulted in a linear increase in sensitivity.
How do you know how visual stimuli are represented internally for decision making? This is perhaps the central question in perceptual decision making. In a new paper, we show that one can use artificial neural networks to crack this problem. #NeuroAi #VisionScience
direct.mit.edu/opmi/article...
Great work by the whole team: Medha Shekhar, @herrickfung.bsky.social, Krish Saxena, and Farshad Rafiei. Code and data posted as always.
More generally, our work demonstrates the power of ANNs to uncover how humans represent and operate on perceptual information.
We found clear evidence that the Top2Diff model provided the best quantitative and qualitative fits to the data, suggesting that it most closely mimics the human confidence computation.
We then compared 7 confidence strategies: positive evidence (PE), the Bayesian Confidence Hypothesis (BCH), Top-2 Difference in raw evidence (Top2Diff) or in probability (ProbTop2Diff), Top Minus Average (ProbAvgRes), Entropy, and Softmax. These are all the main competitors for multi-alternative decisions.
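As a rough illustration of how a few of these strategies map a vector of evidence onto a confidence value (a minimal sketch; the function names and the toy 8-choice evidence vector are mine, not the paper's implementation):

```python
import math

def top2diff(evidence):
    """Confidence = gap between the two largest raw evidence values."""
    a, b = sorted(evidence, reverse=True)[:2]
    return a - b

def softmax_conf(evidence):
    """Confidence = softmax probability assigned to the chosen option."""
    exps = [math.exp(e - max(evidence)) for e in evidence]
    return max(exps) / sum(exps)

def neg_entropy_conf(evidence):
    """Confidence = negative entropy of the softmax distribution
    (more peaked distribution -> higher confidence)."""
    exps = [math.exp(e - max(evidence)) for e in evidence]
    total = sum(exps)
    p = [x / total for x in exps]
    return sum(q * math.log(q) for q in p)

ev = [0.2, 1.5, 0.1, 3.0, 0.4, 0.3, 0.2, 0.1]  # toy 8-choice evidence
print(top2diff(ev))  # 1.5 (gap between 3.0 and 1.5)
```

All three pick out the same choice (the max-evidence option) but disagree on how confident that choice should feel, which is exactly what the model comparison adjudicates.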
Human subjects performed an 8-choice digit categorization task based on noisy MNIST images. We used RTNet - a network we developed recently that is known to show the signature of human perceptual decisions (Rafiei et al., 2024, Nat Hum Beh) - to model the internal activation produced by each image.
How do people compute a sense of confidence? This question is usually addressed using very simple images because we don't know how complex stimuli are represented internally. In a new paper, we addressed this question using artificial neural networks (ANNs).
journals.plos.org/ploscompbiol...
It won't actually exist for another month or so, but because it now 'exists' on amazon, I'll humbly observe that, after working through this book, your student/trainee would be able to read and understand all but two or three papers in this week's J. Neurosci. Check it out:
New preprint: Confidence-accuracy dissociations in perceptual decision making. A review I was supposed to write 3 years ago for my VSS Young Investigator Award. Better late than never 😅 I tried to organize the literature and explore the likely mechanisms. Feedback welcome!
osf.io/preprints/ps...
New paper: Transcranial Focused Ultrasound for Identifying the Neural Substrate of Conscious Perception. With Dan Freeman, @brianodegaard.bsky.social, and Seung-Schik Yoo. www.sciencedirect.com/science/arti...
Congrats Chaz!!! Also, lovely kids :)
No two humans behave exactly alike. But what about neural networks? We found early evidence that human-like individual differences in behavior emerge from networks trained with different initializations. Here’s a peek at our results—to be presented at UniReps & DBM @NeurIPS. Full paper on the way!
My lab at Boston University has open positions for a postdoc and PhD students. We study visual perception, attention, and decision making with a focus on temporal dynamics. Check out our recent work here sites.bu.edu/denisonlab/ and email me if you're interested in learning more