
Posts by Randy Ellis

More details about the Bayesian Workflow book and case studies are now available on the book website avehtari.github.io/Bayesian-Wor... (but you still need to wait a bit for the book)

2 weeks ago 98 28 2 0

It was great fun to chat with Randy about metascience and replicability. Check out his podcast!

2 weeks ago 3 1 0 0
Preview
Brian Nosek: Replicating Psychology, Founding the Center for Open Science Spotify video

Also available on:
Spotify:
open.spotify.com/episode/4zIg...

open.spotify.com/episode/3aCD...

Substack:
substack.com/@metascience...

substack.com/@metascience...

2 weeks ago 2 0 0 0
Brian Nosek: Replicating Psychology, Founding the Center for Open Science YouTube video by Metascience Matters

Here are my conversations with @briannosek.bsky.social and Tim Errington on Metascience Matters
Brian: www.youtube.com/watch?v=4DCV...
Tim: www.youtube.com/watch?v=-EDA...

We discussed their replication projects in psychology and cancer biology, the Center for Open Science, and many other topics.

2 weeks ago 1 1 1 1
Preview
Lecture: Investigating Fraud, Arrogance, and Tragedy in Alzheimer's Research A Talk and Discussion With Investigative Reporter Charles Piller - Knight Science Journalism @MIT Join Charles Piller, a correspondent for Science and investigative reporter, as he discusses his book “Doctored,” an examination into how unscrupulous researchers, government enablers, and Pharma exec...

If you’re in the Boston area, please join me on April 16 for my talk about the tragedy of fraudulent or false Alzheimer’s research – and the response to my book “Doctored.” Talk and reception sponsored by MIT Knight Science Journalism. ksj.mit.edu/event/lectur...

3 weeks ago 3 1 0 0
Post image

7/ The policy impact is striking: reproducibility rises from 29.6% before Data Access and Research Transparency (DA-RT) to 79.8% after.

3 weeks ago 5 5 1 1
Post image

Update from the Metascience Alliance: A synthesis of input gathered so far is now available. It reflects back what’s been heard and highlights five emerging themes to guide next steps.

📝 Read the synthesis: www.cos.io/hubfs/Met...

💡 Learn how to get involved: cos.io/metascience-a...

4 weeks ago 4 3 0 0
Post image

Over 5 hours of recordings and slides for most #LoveReplicationsWeek talks are now available on our updated website. Thank you so much to everybody who contributed to this wonderful week, participated in the talks, and partnered up with us. I think there should be a replication of this.
forrt.org/LoveReplicat...

4 weeks ago 11 6 1 2
Video

Statistical Rethinking 2026 is done: 20 new lectures emphasizing logical and critical statistical workflow, from basics of probability theory to causal inference to reliable computation to sensitivity. It's all free, made just for you. Lecture list and links: github.com/rmcelreath/s...

4 weeks ago 598 193 11 11
Statistical Rethinking Lecture B10 - Hidden Markov Models
Statistical Rethinking Lecture B10 - Hidden Markov Models YouTube video by Richard McElreath

Hidden Markov Models - Lecture B10 of Stat Rethinking 2026. Hidden state models, inference of latent strategies, time series, is the president dead?, capture-recapture and demographic inference, Guerilla Bayesian Workflow. This is the final lecture for 2026. www.youtube.com/watch?v=fuon...

1 month ago 70 12 2 2

My favorite is this kind of convo that turns into a never-ending journey! Scientists in the Boston area interested in preventing mistakes from escalating into egregious fraud cases, please join our Meetup at 7pm Sunday night in the Lavender room at the Somerville Armory.

1 month ago 3 1 0 0
Preview
Litigating multimillion-dollar scientific fraud cases | Metascience Matters #5 Spotify video

Also on Spotify: open.spotify.com/episode/3ka8...

1 month ago 0 0 0 0
Litigating multimillion-dollar scientific fraud cases | Metascience Matters #5 YouTube video by Metascience Matters

Here's my conversation with @eugenie-reich.bsky.social, an attorney representing scientific whistleblowers, on Metascience Matters: www.youtube.com/watch?v=SMRC...

We discuss her cases, the False Claims Act, whistleblower awards, pressures on scientists to produce positive data, among other topics.

1 month ago 5 1 1 1

Working hypothesis: If you're doing research and don't occasionally have a small existential crisis, either you've been blessed to work in an exceptional field (do tell which one it is!), or maybe you're being a bit naive.

1 month ago 124 21 3 1

Happy to announce that the RR for ManyNumbers 3 was accepted (in principle) at Developmental Science today. This project will investigate the socio-demographic correlates of preschool numeracy in US sites participating in ManyNumbers 1. If you're interested, it's not too late to join these projects.

1 month ago 23 7 1 0
Preview
Forensic Metascience, the GRIM test, and technology for checking papers | Metascience Matters #4 Spotify video

Also available on Spotify: open.spotify.com/episode/0zH0...

1 month ago 1 0 0 0
Forensic Metascience, the GRIM test, and technology for checking papers | Metascience Matters #4 YouTube video by Metascience Matters

Here's my conversation with @jamesheathers.bsky.social, Founder/Director of the Medical Evidence Project, on Metascience Matters: www.youtube.com/watch?v=QH87...

We discussed his book on Forensic Metascience, the story behind the GRIM test, how technology can enable metascience, and other topics.

1 month ago 8 3 2 0
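For readers unfamiliar with it, the GRIM test checks whether a reported mean of integer-valued data (e.g. counts or Likert responses) is arithmetically possible given the sample size. A minimal sketch, not taken from the episode; the reported means and sample size below are hypothetical examples:

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test: can a mean of n integer responses round to reported_mean?

    The true total of n integer responses must itself be an integer near
    reported_mean * n; check whether any such total, divided by n, rounds
    back to the reported value at the reported precision.
    """
    target = round(reported_mean, decimals)
    approx_total = reported_mean * n
    for total in (int(approx_total) - 1, int(approx_total), int(approx_total) + 1):
        if round(total / n, decimals) == target:
            return True
    return False

# With n = 17 integer responses, a reported mean of 5.19 is impossible:
# 87/17 = 5.12, 88/17 = 5.18, 89/17 = 5.24 -- none rounds to 5.19.
print(grim_consistent(5.19, 17))  # False
print(grim_consistent(5.18, 17))  # True (88/17 = 5.176... rounds to 5.18)
```

The same arithmetic underlies automated paper-checking tools: a GRIM failure does not prove misconduct, but it flags a mean that cannot have come from the stated sample.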
Maximum Likelihood Estimation (MLE) with Examples YouTube video by Steve Brunton

Steve Brunton’s videos are good youtu.be/rCdxlN6Ph14?...

1 month ago 2 0 0 0
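As a quick companion to the topic of the video (this sketch is not from the lecture itself): for a coin with heads probability p, the likelihood of h heads in n flips is maximized at p = h/n, which a brute-force scan of the log-likelihood confirms. The flip data below are made up for illustration:

```python
import math

# MLE for a coin's heads probability: L(p) = p^h * (1-p)^(n-h),
# maximized in closed form at p_hat = h / n.
flips = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # 1 = heads (illustrative data)
p_hat = sum(flips) / len(flips)

def log_lik(p, data):
    """Bernoulli log-likelihood of the data under heads probability p."""
    return sum(math.log(p) if x else math.log(1 - p) for x in data)

# Confirm the closed form by scanning a grid of candidate p values.
grid = [i / 1000 for i in range(1, 1000)]
p_grid = max(grid, key=lambda p: log_lik(p, flips))

print(p_hat)   # 0.7
print(p_grid)  # 0.7
```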
Preview
Building a Publishing Model for Replication: Q&A with the Senior Editors of Replication Research COS spoke with the senior editors of Replication Research—a community-led Diamond Open Access journal that supports reproduction and replication studies.

Replication Research (R2), a 🆕 community-led Diamond OA journal, makes replication studies more discoverable, publishable & rigorously evaluated—without subscription barriers or author fees. Ahead of #LoveReplicationsWeek, R2's senior editors shared their vision in our Q&A:

1 month ago 13 9 0 1
Preview
Nanoscience is latest discipline to embrace large-scale replication efforts A European project calls for help to verify whether carbon quantum dots are really able to sense chemicals in cells.

Wonderful to see this replication effort in the physical sciences using the models of many labs, preregistration, and transparency that have benefitted other fields.

And, an investment of $9.5 million to do it!

www.nature.com/articles/d41...

1 month ago 37 11 0 0
Post image

Here's what a Cohen's d = 22 looks like. Totally normal. See it all the time in my own data...

2 months ago 11 1 2 0

Today in that-didn't-happen: Cohen's d = 22.

Williams et al. (2014) has 145 citations, putting it in the top 1% of most-cited psych articles.

It is a load-bearing publication in its area, despite having impossible results.

pubpeer.com/publications...

2 months ago 43 9 4 0
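For context on why d = 22 is implausible: Cohen's d is the difference in group means divided by the pooled standard deviation, so d = 22 means the groups differ by 22 standard deviations, i.e. the two distributions have essentially zero overlap. A minimal sketch of the computation; the data below are made up for illustration:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# By Cohen's conventions d = 0.8 is already a "large" effect;
# typical psychology effects are far smaller.
a = [10.0, 11.0, 9.5, 10.5, 10.0]
b = [10.2, 9.8, 10.1, 9.9, 10.0]
print(round(cohens_d(a, b), 2))  # 0.48
```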
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.


I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.

2 months ago 644 222 30 52
Post image

Without publication bias, we might not need many replications. With publication bias, 20% to 40% might be justified (but of course, extremely dependent on the assumptions in the simulations!). If the field is a mess, we need a lot of replication studies to clean up!

2 months ago 6 1 0 0
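The intuition behind that kind of simulation can be sketched with a toy model of a significance-only publication filter; the base rate, power, and alpha below are illustrative assumptions, not the parameters from the post:

```python
import random

random.seed(1)

def simulate_published_false_positive_rate(n_studies=100_000,
                                           base_rate=0.1,  # P(hypothesis is true)
                                           power=0.8,      # P(significant | true effect)
                                           alpha=0.05):    # P(significant | null)
    """Toy model: only significant results get published.

    Returns the fraction of published findings that are false positives,
    which is what replication studies would be needed to catch.
    """
    published_true = 0
    published_false = 0
    for _ in range(n_studies):
        effect_is_real = random.random() < base_rate
        p_sig = power if effect_is_real else alpha
        if random.random() < p_sig:  # the publication filter
            if effect_is_real:
                published_true += 1
            else:
                published_false += 1
    return published_false / (published_true + published_false)

# Analytically: 0.9*0.05 / (0.9*0.05 + 0.1*0.8) = 0.36, so with these
# assumptions roughly a third of the published record is false positives.
print(simulate_published_false_positive_rate())
```

Changing the assumed base rate or power moves that fraction substantially, which is exactly the sensitivity to simulation assumptions the post flags.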

My colleague Krist Vaessen wrote a new book: “Neomania: How our obsession with innovation is failing science, and how to restore trust”. It's a great analysis of how the drive for novelty hinders reliable scientific progress. Open Access, so read it here: books.openbookpublishers.com/10.11647/obp...

2 months ago 13 4 0 0
300+ retractions, image manipulation, and why science should be boring | Metascience Matters #3 YouTube video by Metascience Matters

Here's my conversation with Mu Yang on Metascience Matters: www.youtube.com/watch?v=E2EK...

We discussed her work as a scientific sleuth, academic incentives for positive data, individual cases she has pursued, and why she loves being a sleuth.

Also on Spotify: open.spotify.com/episode/16R6...

2 months ago 14 6 0 0
Post image

New submission format at SBE:
“Replications as Registered Reports”

link.springer.com/journal/1118...

You can get "in-principle acceptance" before data collection even begins; the final paper gets published regardless of the results, provided the study is conducted rigorously.

#EconSky

2 months ago 25 17 1 4

The call for metascience grants focuses on three areas:

🔸️ The impact of artificial intelligence on scientific practice and the research landscape

🔸️ The effective design and leadership of research organisations

🔸️ Scientometrics approaches to understanding research excellence, efficiency and equity

2 months ago 3 1 0 0

Some discussion about this in a conversation I’ll be releasing in early March, thanks Rasu!

2 months ago 1 0 1 0
Preview
Metascience Matters I'm Randy Ellis, a computational biologist and neuroscientist who cares about metascience, reproducibility, and rigor in science. I started Metascience Matters because I believe science communication ...

YouTube: youtube.com/@metascience...
Spotify: open.spotify.com/show/7coSExb...
Apple: podcasts.apple.com/us/podcast/m...
iHeart: www.iheart.com/podcast/269-...

2 months ago 1 0 0 0