Collabra: Psychology is seeking a new senior editor for the clinical section and new associate editors for the social section. If you are interested in either of these positions and you believe you are qualified, please fill out the application form before 30 April 2026! forms.gle/DgM3484SuLVD...
Posts by Hu Chuan-Peng
Want to contribute to scientific rigor and open science in clinical psychology? Apply to be the senior clinical editor at Collabra!
Worried it's too much work, or that you're too junior? Please ask me about it. You might, in fact, be the ideal person!
New paper that merits a read (I'm totally unbiased...not). Simple, straightforward, impactful message. Prediction à la LLM is nice. Constituent-constrained prediction is nicer. @jiajiezou.bsky.social and Nai Ding show brain, behavioral, MEG, and ECoG data.
www.nature.com/articles/s41... #neuroskyence
Beyond binding: from modular to natural vision: Trends in Cognitive Sciences www.cell.com/trends/cogni...
#neuroskyence #visionscience
Reminder: If researchers find Cohen's d = 6, no they didn't.
trustworthy.scientific.claims/posts/if-res...
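For intuition on why a Cohen's d of 6 is implausible on its face (my own back-of-the-envelope sketch, not from the linked post): under normality, d translates into the probability that a random member of one group outscores a random member of the other, which for d = 6 is essentially certainty — far beyond anything real psychological measures produce.

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def superiority_prob(d):
    # P(random draw from group A > random draw from group B)
    # for two normals separated by d standard deviations: Phi(d / sqrt(2))
    return normal_cdf(d / math.sqrt(2.0))

for d in (0.5, 2.0, 6.0):
    print(f"d = {d}: P(A > B) = {superiority_prob(d):.5f}")
```

For d = 0.5 (a "medium" effect) this probability is about .64; for d = 6 it exceeds .9999 — the two distributions barely overlap at all.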
There seems to be a broad perception across psychology and neuroscience that work shouldn't be "too technical" in order to reach the broadest possible audience. While I think we should strive for accessibility, I feel that this attitude can also be self-defeating: why are we dumbing down?
Published!
What Does ‘Human-Centred AI’ Mean? doi.org/10.3390/bs16...
Thank you to Andy Wills www.andywills.info for inviting me to his SI (Advanced Studies in Human-Centred AI) — and furthermore, for being up for me completely disagreeing with so many mainstream views on HCAI! Great reviews too.
Join this event today: 7p GMT+2, 6p GMT+1, 1p ET, 10a PT.
🎉📖 Applications are open for The Turing Way Book Dash: 18th & 19th May 2026!
Deadline: 27th April 2026 (midnight, anywhere on Earth 🌏)
The Turing Way Book Dash is a collaborative event where you'll work with others to add to and improve The Turing Way book and become a part of its community ✨
great initiative!
I'm proud to be part of the Scientific Committee for the new $5M Digital Brain Project, to accelerate development of open source models of the human brain. Apply by May 15th for funding at digitalbrainproject.org
This excellent post implicitly highlights my primary purpose in organizing these replication studies--to *describe* what happens when we try to replicate the published literature. This descriptive evidence grounds conceptual debates about *why* we observe those rates, and what we "should" observe.
"the replication findings reinforce lessons that I have slowly been learning over the years ... 'the issue needing to be solved is overconfidence. We tend to act as if published findings are replicable without actually assessing whether they are.'"
www.bloomberg.com/opinion/arti...
www.sciencedirect.com/science/arti...
very nice paper on a mouse model of blindsight
i'm not sure if i'm fully convinced by their saliency control, but maybe i'm just splitting hairs
let me explain: (a thread to follow)
…Speaking of null results, I only just discovered this fascinating paper. Seems that we (social science researchers) systematically overestimate the effect sizes likely to be operating, so we underpower our studies
I’m v curious abt how much this reflects sheer wishful thinking vs substantive error
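To make the overestimation point concrete (my own sketch, not from the paper): if you power a study for an assumed effect that turns out to be optimistic, achieved power collapses. A normal-approximation power calculation for a two-sample t-test shows how badly, assuming the conventional two-sided alpha of .05.

```python
import math

Z_CRIT = 1.959964  # two-sided alpha = .05 critical value

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d, n_per_group):
    # Normal approximation to the power of a two-sample t-test:
    # power = Phi(d * sqrt(n/2) - z_crit)
    ncp = d * math.sqrt(n_per_group / 2.0)
    return normal_cdf(ncp - Z_CRIT)

# Plan for an assumed d = 0.5: n = 64 per group gives ~80% power.
# If the true effect is d = 0.2, that same design has only ~20% power.
n = 64
print(power_two_sample(0.5, n))
print(power_two_sample(0.2, n))
```

In other words, a study planned around a wishful d = 0.5 detects a true d = 0.2 only about one time in five.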
Very excited to announce that the #BayesianWorkflow book by @statmodeling.bsky.social, @avehtari.bsky.social, @rmcelreath.bsky.social et al publishes in June! routledge.com/9780367490140 #RStats #DataScience #Bayesian
Emergence of Successor Representations and Experimental Design.

Top: Example of how sequence learning and sleep might change neural representations. Upon encountering a Welsh Corgi, the brain primarily represents the current stimulus entity. If the Corgi is part of a recurring temporal sequence (Corgi → Girl → House), subsequent stimuli (Girl and House) might be integrated into the Corgi representation. Post-learning sleep might provide an opportunity for the brain to replay learned experiences and thereby further strengthen successor representations. Upon post-sleep exposure to a Corgi image (right), brain activation patterns might reflect both the current stimulus (Corgi) as well as learned successors (Girl, House). Faded images indicate weaker representations.

Middle: Timeline of the experiment. Participants first completed a perceptual task, followed by a sequence learning task (Memory Arena). Memory for the learned sequence was then assessed both before and after a period of sleep. Finally, participants completed the perceptual task again.

Bottom left: Memory Arena sequence design. Participants (N = 26) were tasked with learning the spatiotemporal structure of 50 images. These images belonged to five distinct categories (letter strings, scenes, objects, faces, and body parts) and were organized into 10 subsequences of five images each, following one of two fixed category orders: (i) letter string, scene, object, face, or (ii) object, scene, letter string, face, with body part images randomly inserted to obscure the primary category sequences. The two subsequence types were counterbalanced across participants.

Bottom right: Memory Arena location design. The Arena was spatially organized into five principal ‘slices’, with each slice corresponding to one of the five main image categories.
How do experiences reshape our internal representations of the world? @bstaresina.bsky.social &co show that learning sequential experiences reshapes how the #brain represents what we see; a post-learning nap strengthens these predictive changes @plosbiology.org 🧪 plos.io/4dJGwMC
Want to join the Multilingual Minds and Machines Meeting, June 22-23 in Nijmegen? Registration is open until 1 May! mmmm2026.github.io
New preprint from my lab! We study how reinforcement learning & selective attention interact. To do so, we built a set of models describing different ways that value & reward prediction error can modulate top-down attention. We compare model outcomes to monkey data from a color value learning task
Both questionable (e.g. p-hacking) and open (e.g. pre-registration) research practices are prevalent in education research. We sought to understand the explanations given by educational researchers for why either should or should not be used. Two teams of researchers independently analysed open-ended survey responses from 1488 education researchers on their feelings about questionable and open research practices. Despite using different analytic approaches, all of the major categorizations of participant responses were similar or related across teams. Our findings suggest that although respondents believe that questionable research practices should not be used, they conceded there are systemic reasons some use them. Similarly, although respondents generally support open practices, they noted situations in which they were not appropriate or necessary for education research. These findings can serve as a catalyst for training and policy initiatives. #MetaSci #Methodology #EduSky #AcademicSky #OpenSci
“Our findings suggest that although respondents believe that questionable research practices should not be used, they conceded there are systemic reasons some use them.”
Open Acc: doi.org/10.1098/rsos...
BSky authors: @sarahcaroleo.bsky.social, @jesse-fleming.bsky.social, @bryancook.bsky.social
"you could - by careful choice of an existing scoring method from the literature - find any effect, or nothing, or the reverse of any effect you choose. This is bonkers. ... so extreme as to be farcical."
This is the sentiment we were hoping people would come away with!
w/@anniria.bsky.social
Virtual Event April 16 // 1 pm ET NEW EVIDENCE ON REPRODUCIBILITY ACROSS SOCIAL AND BEHAVIORAL RESEARCH Moderator: Tim Errington Speakers: Katrin Auspurg, Abel Brodeur, and Andrew Tyner
What can large-scale studies tell us about reproducibility? In our webinar on April 16, researchers from COS, I4R, and META-REP will discuss findings from three papers—one from the recently published SCORE effort—and insights on reproducibility, transparency, and credibility
cos-io.zoom.us/webin...
Kern et al. estimate that a "replay" signal in MEG would need to be unrealistically strong/frequent to be statistically detectable using temporally delayed linear modelling (TDLM). #sleeppeeps
(love this "experiment visualization" format)
“Still WEIRD, still underreported: An updated benchmark for psychological science” doi.org/10.1037/amp0...
I'm sorry but it's going to be one of those posts
97% (positive results) vs 80% (null results) chance of a reviewer recommending acceptance
Many folk are surprised to discover that Risk of Bias assessment tools tend not to interrogate the questions "Did this study actually happen? And are its results trustworthy enough to believe?"
Jack’s Cochrane-endorsed INSPECT-SR checks have done a lot to mainstream such Trustworthiness Assessment.
What makes behavioral interventions work beyond the psychological theory they implement? Their format, level of engagement, delivery modality?
In a new paper analyzing 274 interventions from 15 megastudies (4.1M+ participants), we tested 19 features: www.sciencedirect.com/science/arti...
1/2
== Effect Size and Confidence Intervals (ESCI) check ==
Lots improved.
Has a revamped website and R package on CRAN.
Works like statcheck, also checks effect sizes and calculates confidence intervals.
Countless tricky edge cases, but after testing it on 1000s of articles, it seems pretty decent.
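The core recomputation such a checker does is straightforward. A sketch of the idea (not the ESCI package's actual API — function names and the example numbers here are my own illustration): convert a reported independent-samples t statistic to Cohen's d and attach a normal-approximation 95% confidence interval.

```python
import math

def d_from_t(t, n1, n2):
    # Convert an independent-samples t statistic to Cohen's d
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def d_ci(d, n1, n2, z=1.959964):
    # Normal-approximation 95% CI for Cohen's d,
    # using the standard large-sample SE of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Hypothetical reported result: t(58) = 2.10 with n = 30 per group
d = d_from_t(2.10, 30, 30)
lo, hi = d_ci(d, 30, 30)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A "significant" t of 2.10 corresponds to d ≈ 0.54 with a CI stretching from near zero to above 1 — exactly the kind of wide interval a tool like this makes visible.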