I’m hiring a PhD student!
The candidate will work alongside @zefreeman.bsky.social, who is joining our research group as a postdoc.
jobs.unibe.ch/job-vacancie...
Posts by Constantin Späth
We are inviting applications for a two-year postdoctoral position in a collaborative meta-science project on the effectiveness of data and code sharing policies in research-performing organizations. www.tue.nl/en/working-a...
Happy Friday everyone! I just posted what I think is an important blog post on my website. It is a critique of meta-meta-analyses: meta-analyses of meta-analyses.
Link: matthewbjane.github.io/blog-posts/b...
#stats #metascience
A test of higher and lower fractional volumes of resistance training upon arm and thigh muscle area: A multi-site randomised trial
2/6
E.g. I am currently managing more than 2 million in funding, and my projects are on improving coordination, the meaningful interpretation of effect sizes, doing better power analyses, and Metacheck, a tool that automatically checks which practices can be improved in a paper.
We're #recruiting a research assistant to work on a DFG-funded project on #MetaScience at the University of Cologne. Some coding skills in #python required, and interest in modelling the #cognition of #reading! Please spread the word!
What do you mean there is nothing? Your field has not thought about this at all? Then teach your students good examples from other fields, and tell them they have an easy and important contribution to make.
Publishing in a journal means endorsing it.
Where you publish reflects your values.
Choose wisely.
doi.org/10.52057/erj...
Reminds me of the scene when Alex Honnold climbed the skyscraper and there was a sign on a window that said "V2 in my gym" ;)
Our institute is hiring
1. an assistant professor (with TT) for Social Psychology (focus: environmental psychology)
ohws.prospective.ch/public/v1/jo...
2. an assistant lecturer (with TT) for Experimental Personality Psychology
ohws.prospective.ch/public/v1/jo...
It takes a lot of accepting scientific imperfection ;) For me it personally makes sense when simply thinking about “what you can control”. You can control what people are told, but not what they do. The manipulation is always a mediation path with some loss along the way.
"The benefits of and motivations behind large-team coordination in psychology" is finally out as preprint.
In this paper, @lakens.bsky.social, Krist Vaesen, and I discuss the possible rewards of large-team collaborations that are common in coordinated research.
The key word for me here is "conditional error control". I like that phrase.
Another phrase I like (in the broader context) is "long run, socially distributed error-control" from the last preprint of
@uyguntunc.bsky.social and @mntunc.bsky.social
Exactly, we need meta-scientists who not only operate on an abstract level, but who are also involved in specific areas of research!
James is a real role model. He does just about the best empirical work in his field, and writes the best papers on his view of how to do good science. Even more impressive, he is now doing it outside of academia. I wish more metascientists would not just talk the talk, but walk the walk!
I was invited along with a selection of other experts to review the new format and afterwards to provide any additional commentary.
These have now been collected and published here: www.tandfonline.com/doi/full/10....
2/3
New blog post, inspired by the excellent recent qualitative paper by Makel and colleagues: On the reliability and reproducibility of qualitative research.
I reflect on how I will incorporate realist ontologies in my own qualitative research.
daniellakens.blogspot.com/2026/02/on-r...
Are you open science-minded, technically savvy, and interested in mixed methods? Come build the future of mixed methods with Tamarinde Haven and @mariestadel.bsky.social. Our campus is green, our colleagues supportive, and our research excellent!
www.academictransfer.com/en/jobs/3583...
Thanks! I hadn't thought of that (it's been a while since I wrote the article and adapted the chart for it) - in the original, there were no CIs & they used the publication year.
I think using the start year makes more sense as you've said, since the requirement was to register prospectively /1
I always find this image a bit misleading because it focuses on the year studies are *published*, not when they are *started*.
Here is another version of that figure using the start year of the study rather than the publication year. Sample sizes in the early 1990s were larger than in previous years.
I will leave you with one more blog post on how incentives, not heuristics, drive ignorance. daniellakens.blogspot.com/2016/09/why-... I did not learn about power because it was convenient not to. There was no motivation. Now I am a world-leading expert. It is just incentives, not cognition.
I've wondered about this in my area... I don't suspect as much publication bias as p-hacking. We've generated non-adjusted meta-analytic estimates and dose-response models from large datasets, then tested predictions in highly powered pre-registered studies, with estimates almost bang on the mark.
A hypothesis developed based on the data is often more likely to be true, than if you had not used the data.
The problem is not whether the hypothesis is true.
The problem is the hypothesis was not severely tested. You can't *claim* it is true until you test it on new data.
Often a single Registered Report is more informative than a meta-analysis. The meta-analysis will show a non-zero estimate and we will not know if it is due to bias. Heterogeneity is huge, so the main recommendation of a meta-analysis is that future research is needed anyway.
screenshot of my post
Big new blogpost!
My guide to data visualization, which includes a very long table of contents, tons of charts, and more.
--> Why data visualization matters and how to make charts more effective, clear, transparent, and sometimes, beautiful.
www.scientificdiscovery.dev/p/salonis-gu...
If you have added some new slides/information and record yourself again for practice, it would be really great if you could share this practice session again for those who can't attend!
The image shows the abstract for my talk "The value of strong theory in intervention research: an example from the field of exercise science" at the upcoming 8th Perspectives on Scientific Error Workshop - you can find it in the program here: https://docs.google.com/document/d/1rt9ToVs1EkEuTbWod4st2b6StUQr5IhyTP1jhGskr4E/edit?usp=sharing
As if I haven't banged on about it enough by now... looking forward to continuing to talk about how developing and trying to test strong theories is a pretty damned useful way of going about doing science.
Here's my abstract for the 8th Perspectives on Scientific Error Workshop in Leiden, NL.
#PSE8
New on the Archive:
Uygun Tunc, Duygu and Tunc, Mehmet Necip (2025) Inductive Risks and Evidential Thresholds: A Reliabilist Case for Value-Freedom in Science. [Preprint]
https://philsci-archive.pitt.edu/27848/