More details about the Bayesian Workflow book and case studies now available on the book web site avehtari.github.io/Bayesian-Wor... (but you still need to wait a bit for the book)
It was great fun to chat with Randy about metascience and replicability. Check out his podcast!
Also available on:
Spotify:
open.spotify.com/episode/4zIg...
open.spotify.com/episode/3aCD...
Substack:
substack.com/@metascience...
substack.com/@metascience...
Here are my conversations with @briannosek.bsky.social and Tim Errington on Metascience Matters
Brian: www.youtube.com/watch?v=4DCV...
Tim: www.youtube.com/watch?v=-EDA...
We discussed their replication projects in psychology and cancer biology, the Center for Open Science, and many other topics.
If you’re in the Boston area, please join me on April 16 for my talk about the tragedy of fraudulent or false Alzheimer’s research – and the response to my book “Doctored.” Talk and reception sponsored by MIT Knight Science Journalism. ksj.mit.edu/event/lectur...
7/ The policy impact is striking: reproducibility rises from 29.6% before Data Access and Research Transparency (DA-RT) to 79.8% after.
Update from the Metascience Alliance: A synthesis of input gathered so far is now available. It reflects back what’s been heard and highlights five emerging themes to guide next steps.
📝 Read the synthesis: www.cos.io/hubfs/Met...
💡 Learn how to get involved: cos.io/metascience-a...
More than 5 hours of recordings, and slides for most #LoveReplicationsWeek talks, are now available on our updated website. Thank you so much to everybody who contributed to this wonderful week, participated in the talks, and partnered up with us. I think there should be a replication of this.
forrt.org/LoveReplicat...
Statistical Rethinking 2026 is done: 20 new lectures emphasizing logical and critical statistical workflow, from basics of probability theory to causal inference to reliable computation to sensitivity. It's all free, made just for you. Lecture list and links: github.com/rmcelreath/s...
Hidden Markov Models - Lecture B10 of Stat Rethinking 2026. Hidden state models, inference of latent strategies, time series, is the president dead?, capture-recapture and demographic inference, Guerilla Bayesian Workflow. This is the final lecture for 2026. www.youtube.com/watch?v=fuon...
My favorite is this kind of convo that turns into a never-ending journey!
Scientists in the Boston area interested in preventing mistakes from escalating into egregious fraud cases, please join our Meetup at 7pm Sunday night in the Lavender room at the Somerville Armory.
Here's my conversation with @eugenie-reich.bsky.social, an attorney representing scientific whistleblowers, on Metascience Matters: www.youtube.com/watch?v=SMRC...
We discuss her cases, the False Claims Act, whistleblower awards, pressures on scientists to produce positive data, among other topics.
Working hypothesis: If you're doing research and don't occasionally have a small existential crisis, either you've been blessed to work in an exceptional field (do tell which one it is!), or maybe you're being a bit naive.
Happy to announce that the Registered Report (RR) for ManyNumbers 3 was accepted in principle at Developmental Science today. This project will investigate the socio-demographic correlates of preschool numeracy in US sites participating in ManyNumbers 1. If you're interested, it's not too late to join these projects.
Here's my conversation with @jamesheathers.bsky.social, Founder/Director of the Medical Evidence Project, on Metascience Matters: www.youtube.com/watch?v=QH87...
We discussed his book on Forensic Metascience, the story behind the GRIM test, how technology can enable metascience, and other topics.
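For readers unfamiliar with it, the GRIM test checks whether a reported mean is arithmetically possible given the sample size when the underlying data are integers (e.g. Likert responses): the sum of n integers is an integer, so the mean must equal some k/n. A minimal sketch — the function name, tolerance, and example values are mine for illustration, not from the conversation:

```python
def grim_consistent(reported_mean, n, tol=1e-9):
    # GRIM test: with n integer-valued responses, the sum is an integer,
    # so the reported mean must round to k/n for some integer k.
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    mean = float(reported_mean)
    k = round(mean * n)  # nearest candidate integer sum
    return any(abs(round(c / n, decimals) - round(mean, decimals)) < tol
               for c in (k - 1, k, k + 1))

print(grim_consistent("5.21", 28))  # True: a sum of 146 gives 146/28 ≈ 5.21
print(grim_consistent("5.19", 28))  # False: no integer sum rounds to 5.19
```

With n = 28, the only achievable two-decimal means near 5.19 are 5.18 (145/28) and 5.21 (146/28), so a reported 5.19 is impossible.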
Steve Brunton’s videos are good: youtu.be/rCdxlN6Ph14?...
Replication Research (R2), a 🆕 community-led Diamond OA journal, makes replication studies more discoverable, publishable & rigorously evaluated—without subscription barriers or author fees. Ahead of #LoveReplicationsWeek, R2's senior editors shared their vision in our Q&A:
Wonderful to see this replication effort in the physical sciences using the models of many labs, preregistration, and transparency that have benefitted other fields.
And, an investment of $9.5 million to do it!
www.nature.com/articles/d41...
Here's what a Cohen's d = 22 looks like. Totally normal. See it all the time in my own data...
Today in that-didn't-happen: Cohen's d = 22.
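To see why d = 22 is absurd: Cohen's d measures how many pooled standard deviations separate two group means, and for two equal-variance normal distributions the overlapping coefficient is 2·Φ(−|d|/2). A quick sketch (the function names are mine; this is just the standard formula, not anything from the cited paper):

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def overlap_coefficient(d):
    # Overlapping coefficient of two equal-variance normals whose means
    # differ by d pooled standard deviations: 2 * Phi(-|d| / 2).
    return 2.0 * normal_cdf(-abs(d) / 2.0)

print(overlap_coefficient(0.8))   # a conventionally "large" effect: ~69% overlap
print(overlap_coefficient(22.0))  # d = 22: essentially zero overlap
```

At d = 22 the two groups' distributions would not overlap at all — every participant in one group would be wildly more extreme than every participant in the other, which is why such a value is a red flag in behavioral data.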
Williams et al. (2014) has 145 citations, putting it in the top 1% of most-cited psychology articles.
It is a load-bearing publication in its area, despite having impossible results.
pubpeer.com/publications...
It must be very hard to publish null results.

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
Without publication bias, we might not need many replications. With publication bias, 20% to 40% might be justified (but of course, extremely dependent on the assumptions in the simulations!). If the field is a mess, we need a lot of replication studies to clean up!
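The selection-on-significance mechanism can be caricatured in a few lines. This is a toy sketch, not the paper's calibrated model: every simulated study tests a true null (so p-values are uniform), significant results always pass the publication filter, and null results pass with a small probability (a 50x penalty, within the "one to two orders of magnitude" range the abstract reports). All parameter values here are illustrative assumptions:

```python
import random

def published_share_significant(n_studies=200_000, alpha=0.05,
                                null_pass_rate=0.02, seed=1):
    # Toy selection-on-significance model: p-values under a true null are
    # Uniform(0, 1); significant results are always published, null results
    # only with probability null_pass_rate.
    rng = random.Random(seed)
    sig = null = 0
    for _ in range(n_studies):
        if rng.random() < alpha:
            sig += 1                      # significant: always published
        elif rng.random() < null_pass_rate:
            null += 1                     # null: published 2% of the time
    return sig / (sig + null)

print(published_share_significant())  # roughly 0.7
```

Even though only 5% of the simulated studies are significant, a 50x filter makes them roughly 70% of the published record; a stronger filter pushes the published share toward the ~94% observed in the paper.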
My colleague Krist Vaessen wrote a new book: “Neomania: How our obsession with innovation is failing science, and how to restore trust”. It's a great analysis of how the drive for novelty hinders reliable scientific progress. It's Open Access, so read it here: books.openbookpublishers.com/10.11647/obp...
Here's my conversation with Mu Yang on Metascience Matters: www.youtube.com/watch?v=E2EK...
We discussed her work as a scientific sleuth, academic incentives for positive data, individual cases she has pursued, and why she loves being a sleuth.
Also on Spotify: open.spotify.com/episode/16R6...
New submission format at SBE:
“Replications as Registered Reports”
link.springer.com/journal/1118...
You can get "in-principle acceptance" before data collection even begins; the final paper gets published regardless of the results, provided the study is conducted rigorously.
#EconSky
The call for metascience grants focuses on three areas:
🔸️ The impact of artificial intelligence on scientific practice and the research landscape
🔸️ The effective design and leadership of research organisations
🔸️ Scientometrics approaches to understanding research excellence, efficiency and equity
Some discussion about this in a conversation I’ll be releasing in early March, thanks Rasu!