It's been a couple of years now since I finished my PhD at the @sidb-edinburgh.bsky.social ... and looking back at the experience, I think most of the things I learned had little to do with neuroscience.
I list a few of them in my latest blog post at celefthe.com/blog/2026/a-...
Posts by Constantinos Eleftheriou
Finally, many thanks to Lauren Schenkman @thetransmitter.bsky.social for the interview and writeup; it was a blast :)
doi.org/10.53053/ISK...
Pseudoreplication is not inevitable. Statistical tools such as linear mixed models allow us to account for both within- and between-animal variability. Hierarchical bootstraps are another option (and a personal favourite!)
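For anyone curious what a hierarchical bootstrap looks like in practice, here's a minimal sketch in Python - resample animals first, then observations within each resampled animal, so both levels of variability enter the estimate. (The animal IDs, data values, and function name below are made up for illustration, not taken from the paper.)

```python
import numpy as np

def hierarchical_bootstrap(data_by_animal, n_boot=1000, rng=None):
    """Bootstrap the grand mean with two levels of resampling:
    animals (between-animal variability), then observations within
    each resampled animal (within-animal variability)."""
    rng = np.random.default_rng(rng)
    animals = list(data_by_animal)
    means = np.empty(n_boot)
    for i in range(n_boot):
        # Level 1: resample animals with replacement
        picked = rng.choice(animals, size=len(animals), replace=True)
        # Level 2: resample observations within each picked animal
        obs = [rng.choice(data_by_animal[a], size=len(data_by_animal[a]),
                          replace=True)
               for a in picked]
        means[i] = np.mean(np.concatenate(obs))
    return means

# Hypothetical data: three animals, a few measurements each
data = {"m1": np.array([1.0, 1.2, 0.9]),
        "m2": np.array([2.1, 2.0]),
        "m3": np.array([1.5, 1.4, 1.6, 1.5])}
boot = hierarchical_bootstrap(data, n_boot=2000, rng=0)
ci = np.percentile(boot, [2.5, 97.5])  # 95% bootstrap CI on the grand mean
```

Contrast this with a naive bootstrap over all observations pooled together, which treats every measurement as independent - exactly the pseudoreplication trap.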
We need to start treating our statistics with the same respect we treat our experimental techniques & design. Merely paying lip service to reporting guidelines is not enough, and journals/editors need to take responsibility for the statistical rigour of the work they publish.
...especially when the effect sizes appear very small, or the degree of pseudoreplication is very high (e.g. a very large number of pseudoreplicates per animal).
Is it a problem? Well, yes, if we care about reproducibility, since pseudoreplication increases the likelihood of false-positive results! Of course, just because an article is pseudoreplicated doesn't mean its results are outright wrong - but caution is warranted...
Stringent requirements and statistical reporting guidelines enforced by most journals since ~2012 have made no difference to the prevalence of pseudoreplication... but we did find that better reporting makes pseudoreplication easier to detect 🤷
We scored 645 articles published over the last two decades and found that most were pseudoreplicated (~65% of FXS and ~80% of NDD articles) - a trend that has persisted across the whole period.