
Posts by ERROR

Given how machine learning practices have evolved in the last decade, I was very curious if these findings would hold up. Many thanks to Florian for his nuanced and thorough review.

Romantic desire: not predictable in 2017, not predictable in 2026.

1 month ago

Note: An earlier version of this thread described the problems with the ML techniques as severe, but this was misleading. Only without the described countermeasures could these have resulted in severe problems.

1 month ago

The full rationale, including the review and author response, is available on PsyArXiv: osf.io/hxnum

The study materials, along with all new materials generated as part of the review, are on OSF (osf.io/y3hxg/) and GitHub (github.com/FlorianParge...).

Original article available here: doi.org/10.1177/0956...

1 month ago

This shows how replications can function as a safety net not just for random fluctuation, but also for problems with methods that were not known at the time.

1 month ago

Best practices evolve, and avoiding overfitting is especially difficult in machine learning. Simple, easy-to-explain techniques like splitting the sample, along with straightforward replications, are easy by comparison and will remain a best practice.

1 month ago

By doing training/test set splits and conducting a straightforward validation of their results in a second sample, the authors reined in the bias.
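To illustrate why this matters, here is a minimal pure-Python sketch (not the study's actual pipeline, and a deliberately naive model): a 1-nearest-neighbor "memorizer" fit to pure noise looks perfect when evaluated on its own training data, while a held-out split reveals the true chance-level performance.

```python
import random

random.seed(0)

# Pure-noise data: the features carry no signal about the labels,
# so the best honestly achievable accuracy is chance level (50%).
X = [[random.random() for _ in range(5)] for _ in range(200)]
y = [random.randint(0, 1) for _ in range(200)]

def nearest_neighbor_predict(train_X, train_y, x):
    # 1-nearest-neighbor: effectively memorizes the training set.
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def accuracy(train_X, train_y, eval_X, eval_y):
    hits = sum(nearest_neighbor_predict(train_X, train_y, x) == t
               for x, t in zip(eval_X, eval_y))
    return hits / len(eval_y)

# Split the sample: first 150 rows for training, last 50 held out.
train_X, test_X = X[:150], X[150:]
train_y, test_y = y[:150], y[150:]

# Evaluated on its own training data, every point's nearest
# neighbor is itself, so the memorizer scores a perfect 1.0.
train_acc = accuracy(train_X, train_y, train_X, train_y)

# Evaluated on the held-out split, accuracy falls back to
# chance level, exposing the inflated training estimate.
test_acc = accuracy(train_X, train_y, test_X, test_y)

print(train_acc)  # 1.0
print(test_acc)   # chance level on unseen data
```

The gap between the two numbers is exactly the optimism that a training/test split (or a replication in a fresh sample) is designed to catch.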

1 month ago

However, he also found that the machine learning techniques used, state-of-the-art at the time, could lead to inflated performance estimates. This could have led to bias had the authors not conducted self-replication.

1 month ago

Florian found some minor transcription errors and improvable documentation practices (unsurprising given the age of the work and the studies involved).

1 month ago

We are grateful to Florian Pargent for his in-depth review and reanalysis and to @datingdecisions.bsky.social @pauleastwick.bsky.social @elijfinkel.bsky.social for their willingness to have their impactful work scrutinised for errors.

1 month ago

New report: Joel, Eastwick, & Finkel (2017) “Is Romantic Desire Predictable? Machine Learning Applied to Initial Romantic Attraction”. Based on the review by @florianpargent.bsky.social, we find Minor Errors that do not affect the core conclusions of the manuscript. osf.io/hxnum

1 month ago

This is a follow-up on bsky.app/profile/erro...

2 months ago

News from scientific self-correction: Authors pushing to get errors in their papers corrected. Lukas continues to be a role model for how scientists should handle post-publication peer review.

2 months ago

Improving scientific practice can seem daunting. In this fantastic talk (and thread below), Julia Rohrer shares practical ways to communicate methodological insights to a wider audience of researchers.

2 months ago

At ERROR, we cannot compete with million-dollar bounties for whistleblowers. But it is great to see sleuthing work rewarded, and institutions admitting when their researchers engaged in misconduct.

4 months ago

Post-publication peer review is at its best when it's thoughtful, scrupulous, steeped in detail – and challenges key claims of the paper. @janhove.bsky.social's discussion of a recent paper on multilingualism exemplifies this.

4 months ago

Metascientists step up as role models for a healthy error culture in science. Here is a great case where an author and a critical reader collaborated to set the record straight.

4 months ago

Voluntary retraction remains a key way to put scientific self-correction into practice. Zhu and Holmes (2024) did the right thing when they realized that some of their results were based on a coding error.

Original: psycnet.apa.org/fulltext/202...

With retraction: psycnet.apa.org/fulltext/202...

4 months ago

Many errors remain to be found in clinical trials. Patients deserve reliable results. Kudos to these authors for their persistent work to correct the record.

4 months ago

Congratulations to @simine.com for winning the Einstein Foundation Individual Award! 🎉

A well-deserved recognition for her seminal efforts to improve scientific rigor, which includes instituting detailed checks for errors and computational reproducibility at Psychological Science.

4 months ago

I think this is an overly pessimistic take from the @bmj.com.

Sharing data does not inherently increase trust; rather, it enables verification, which allows for trust calibration.

This example is a win: serious issues were rapidly detected that would not have been found without mandatory data sharing.

5 months ago

Synchronous Robustness Reports could explore implications of different analytical choices – but they could still suffer from bias. Hardwicke argues that preregistration is crucial to prevent it.

@tomhardwicke.bsky.social

5 months ago

Are methodological and causal inference errors creating a false impression that the gut microbiome causes autism? In this strong analysis, Mitchell, Dahly, and Bishop question the evidence.

They show that triangulation in science requires multiple robust lines of research.

5 months ago

New Nature podcast episode about ERROR and the Perspectives on Scientific Error workshop!

8 months ago

“We pay experts to examine important and influential scientific publications for errors ... We expect most published research to contain some errors ... our reward system pays bonuses to both authors and reviewers even when minor errors are found ..."
statmodeling.stat.columbia.edu/2025/07/13/e...

9 months ago

✨ ERROR (@error.reviews) is a bug-bounty program for science that seeks to estimate the prevalence and nature of errors. error.reviews

8 months ago
Frontiers | The Emperor’s old clothes: a critical review of circular fashion in gray literature

EU legislation requiring clothes be reused and recycled may be based on a numerical error in a 2017 NGO report where $460 billion was added instead of subtracted.

www.frontiersin.org/journals/sus...

1 year ago