We are inviting applications for a two-year postdoctoral position in a collaborative meta-science project on the effectiveness of data and code sharing policies in research-performing organizations. www.tue.nl/en/working-a...
Wait what? What qualifies one to be an Austrian between the dichotomy of German vs autistic? I was "neither", which seemed like a decent outcome for an Austrian, but now you tell me I'm not even that? 🤣
The AI economy looks...really precarious. So @matteowong.bsky.social & I did a bunch of reporting to try to figure out what happens when a potential bubble collides with a war in Iran and a potential resource shortage. The answer is...arguably the most dire stuff I've heard from smart ppl in a while
Thanks @jesper-w-schneider.bsky.social!
Interesting paper. When science is cheap, multiverse analysis becomes the key abstraction for authors & journals to strategize around.
These results suggest it's the new unit of evidence that authoring & review should accommodate.
Related to some of my comments here: substack.com/@jessicahull...
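For anyone unfamiliar with the idea, here is a minimal sketch (in Python, with entirely made-up data and hypothetical analytic choices) of what a multiverse analysis can look like: enumerate the defensible specifications and report the full distribution of estimates rather than a single result.

```python
# Minimal multiverse sketch with simulated data and hypothetical analytic
# choices: every combination of choices yields one estimate, and the full
# set of estimates -- not a single test -- is treated as the unit of evidence.
import itertools
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                     # predictor of interest
age = rng.uniform(18, 80, size=n)          # optional covariate
y = 0.3 * x + 0.01 * age + rng.normal(size=n)

# Choices reasonable analysts might disagree on
outlier_rules = {"keep_all": np.ones(n, dtype=bool),
                 "trim_3sd": np.abs(y - y.mean()) < 3 * y.std()}
covariate_sets = {"bivariate": [], "adjust_age": [age]}

estimates = []
for (o_name, keep), (c_name, covs) in itertools.product(
        outlier_rules.items(), covariate_sets.items()):
    X = np.column_stack([np.ones(keep.sum()), x[keep]] + [c[keep] for c in covs])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    estimates.append((o_name, c_name, round(float(beta[1]), 3)))

for spec in estimates:
    print(spec)  # one coefficient for x per specification
```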
Thanks! Was just curious because these data are usually skewed by all the outliers - good to see the UP+non-profit group exhibit a different pattern with lots of diamond OA!
Interesting that the median seems to be higher than the mean for the nonprofit+university group. At least that's what the boxplot seems to show.
Any speculation on the reason? Lots of 0 APCs in that group?
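A toy illustration (hypothetical APC values, not the actual data) of how a large share of zero-APC diamond OA articles can pull the mean below the median, which would produce a boxplot like the one described above.

```python
# Made-up numbers: many zero-APC (diamond OA) articles drag the mean down,
# while the median still reflects the fee-based majority.
import numpy as np

apcs = np.concatenate([
    np.zeros(40),                 # 40 diamond OA articles with an APC of 0
    np.full(60, 1500.0),          # 60 articles with a typical APC
])

print(np.mean(apcs))    # 900.0  -> pulled down by the zeros
print(np.median(apcs))  # 1500.0 -> sits above the mean
```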
I'm really excited about joining the group, and also very open to new ideas and collaborations - looking forward to exploring new avenues in the coming months and years. So if you want to work on something together, just reach out and let's chat!
I'll also continue doing research on research, thinking about causality, and all the other fun stuff of understanding our social world better, one step at a time.
Today I had the pleasure of starting my new role at the Complex Social & Computational Systems group of @janalasser.eurosky.social at the University of Graz as a Senior Scientist, to support the group in research data and software engineering tasks.
🧵 on my new paper "Synthetic personas distort the structure of human belief systems" with Roberto Cerina, which I'm very excited about...
🚨 Do synthetic samples look like human samples?
We compare 28 LLMs to the 2024 General Social Survey (GSS) to find out + develop a host of diagnostics...
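Not the diagnostics from the paper, just a generic illustration of the kind of comparison at stake: checking whether the correlation structure between belief items in a synthetic (LLM-generated) sample matches that of a human survey sample. All data below are simulated.

```python
# Sketch: compare the correlation *structure* of belief items in a human
# sample vs. a synthetic sample. Both samples are simulated here.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_human, n_synth = 5, 1000, 1000

# Simulated "human" responses with moderately correlated items
cov_human = 0.4 * np.ones((n_items, n_items)) + 0.6 * np.eye(n_items)
human = rng.multivariate_normal(np.zeros(n_items), cov_human, size=n_human)

# Simulated "synthetic" responses with exaggerated correlations
cov_synth = 0.8 * np.ones((n_items, n_items)) + 0.2 * np.eye(n_items)
synthetic = rng.multivariate_normal(np.zeros(n_items), cov_synth, size=n_synth)

r_human = np.corrcoef(human, rowvar=False)
r_synth = np.corrcoef(synthetic, rowvar=False)

# Average absolute difference between the two correlation structures
off_diag = ~np.eye(n_items, dtype=bool)
print(np.abs(r_human[off_diag] - r_synth[off_diag]).mean())
```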
Raises important questions: how will #scholcomm adapt, and which norms around publishing will emerge? How will research assessment work in the future?
What do we want "research" to look like?
Curious to see where @socarxiv.bsky.social will end up with their policy.
Gift link (hope it works) for the @ftrain.bsky.social NY Times piece on the impact of post-November-2025 coding agents (like Claude Code) on the cost of developing software - it's very worth a read www.nytimes.com/2026/02/18/o...
It lays bare all of academia's existing issues, the most pressing of which is valuing quantity over quality.
We'll need to find solutions fast, I guess.
"The 2026 International Conference on Machine Learning (ICML) has received more than 24,000 submissions — more than double that of the 2025 meeting."
That's just absurd. Maybe there was substantial growth before, but this is clearly unsustainable.
Some clear goals to work towards.
This article by Willem Halffman & Serge Horbach provides great insight into the imaginaries underlying this question: should everything be published (in machine-readable form), or only a curated selection? What would "everything" even mean? Do published findings need to be retracted, ever?
It also ties in with the main question underlying the whole discourse: how should dissemination of knowledge be organised for research to function efficiently?
This is probably key.
Great thread, with lots of important considerations and implications for the metascience community.
But I can definitely see how LLMs can be useful when solid ground-truth data is available. It seems that the data you collected will be very beneficial to the metascience community as a benchmark, both for newer open LLMs and for general extraction of interesting aspects in the literature. 🎉
This is a very interesting point. We conducted large scoping reviews on the impact of open science, and to be frank, parts of the screening were brutal. There is lots of debate in the evidence synthesis community on using LLMs and similar tools, and I'm not sure there's a consensus yet.
A Bayesian model-based framework simply seems more in line with what people actually do, and also more productive in the long-term.
What would the usual response be? To pre-register more? On this I'd agree with what @sabinaleonelli.bsky.social said today at the @tier2-project.eu final event. Heavily summarising: preregistration might be worthwhile in some domains, but certainly not in all.
Also points to the need to move away from single tests to more comprehensive research programmes that incrementally build scientific and subsequently statistical models in a more honest and transparent way.
Good to see ubiquitous publication bias documented that well for an entire field. I'd expect the same to be true for sociological research, although methods are probably more diverse there.
A variation: Scientists who claim they're "not interested in causality" because they assume the term only applies to deterministic, law-like relationships that are unrealistic in their field. Instead, they're interested in how "X drives Y", the effects of X, the "extent to which X matters for Y" …
The stories we tell build our world, so we should be careful about the stories we tell ourselves and we should be wary of the stories that others tell.
This goes for politics and for science alike.
Yes indeed, thanks a lot!
"Honest, dedicated, skilled researchers may investigate a topic and come to opposite conclusions because of variation between how they conduct their analysis"
My sense is that almost all experienced researchers are intuitively aware of this, but we nevertheless tend to ignore this inconvenient truth!