1/
"Silicon samples" are becoming more and more common in research and polling.
One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.
The updated version of this preprint is now online!
THREAD🧵
arxiv.org/abs/2509.13397
I’m sorry but this is an absolutely insane conclusion for the authors to draw given what they actually did
I'm talking about the cables you get with devices, which are usually A2C crap. Exactly the e-waste the policy was supposed to avoid. I was enthusiastic about this policy, but IMO the same plug with mutually incompatible standards (A2C vs C2C) isn't actually a solution.
but the biggest indignity is that to get a USB-C (C2C) shaver, I had to purchase one from a brand called "manscaped". I still shudder every time I read the label.
also, I now have 15 usb-c cables, most of which don't do power delivery or fast data transfer, with no way to tell them apart except thickness/bendiness. and some usb-c devices need a usb-a outlet ("A2C"), so I still can't travel with just 1 cable.
1. The paper with the implausibly large effects of Omega-3 fatty acids on mental health has now been retracted. A little thread on the process, in which @ianhussey.mmmdata.io and I were involved.
www.sciencedirect.com/science/arti...
That suggestion seems fairly orthogonal. But generally I'm partial to the idea that larger sums should go to those who create a public resource. I think this is often stipulated now, but not checked.
Ian is smart, and he has integrity. You want to work with him.
Can I ask what solutions are on the table and what you've deemed not to work?
One other alternative is what they call pull funding here: goodscience.substack.com/p/the-case-f... It seems to resemble advance market commitments. But this only fits certain cases and fields.
I agree with you about the problem of super noisy quality evaluation upfront. I retain some hope that if you reduced application rates, you could go into more depth and extract more quality signal. Which is why I like the deterrent component of bug bounties: it wouldn't deter everyone equally.
My point was that even if you allocated everyone sufficient basic funding, scarce resources (e.g. tenured jobs) & hence competition would remain. And if it is mainly intra-institutional, I suspect you'd just get more nepotism/patronage/bad power dynamics.
I’m hiring a PhD student!
The candidate will work alongside @zefreeman.bsky.social, who is joining our research group as a postdoc.
jobs.unibe.ch/job-vacancie...
But e.g. the MPI seems like an alternative, more institution-driven model, and I wouldn't even be sure MPI researchers spend less time applying for grants, because they still want to look competitive on their CVs.
yes, I thought the point here is to make funding rates not be so low that this happens.
but would that improve the quality of the work that is done? by what mechanism?
Given limited funding, I think it should be competitive, unless we believe quality cannot be determined ahead of time. But nobody seriously believes that, do they?
What's good about lotteries except that it's more honest than an unreliable panel? People not projects screams Matthew effect to me.
I don't understand. What's the alternative, a non-competitive grant scheme?
My preferred solution is to add bug bounty modules to grant funding. If you expect scrutiny of the completed work, you should invest more effort into the proposal. But since the bounties would affect publications, not grant proposals, the link is a bit weak. www.the100.ci/2026/04/13/s...
Obviously, if self selection is possible, it's currently not happening enough. Being penalised with a wait time for submitting a rejected proposal would help if the rejection were perceived to reflect quality.
If that's true, there is room for quality-oriented self selection. But you may also argue that we're mostly blind to the flaws of our own work, even to how much we let the AI drive etc. If that's true, self selection won't help. Thoughts?
They know whether they let ChatGPT find the references, whether they half-assed the sample size planning, whether they exaggerated novelty, whether they're only submitting the grant to satisfy some promotion requirement, or even whether some of their preliminary work was fraudulent.
Now, where I differ with my friends, notably @annemscheel.bsky.social, is that I think the submitters have a lot of private information but no reason to reveal it.
But there's a lot of research showing that grant decisions are unreliable and a widespread feeling that the way to win is to just roll the dice often enough. So, this may not be a very selective way to reduce submissions.
The number of submissions will only increase, and I believe their solution is a poor fix. But my friends and I disagree about better solutions. Making resubmission intervals longer for people who get a low rating acts as a penalty for low ratings. If these ratings reliably evaluated quality: good.
We are inviting applications for a two-year postdoctoral position in a collaborative meta-science project on the effectiveness of data and code sharing policies in research-performing organizations. www.tue.nl/en/working-a...
Yes, it would be great to be able to put a number on that to be able to argue that increased scrutiny/QC could “pay for itself”
Most allegations don't lead to a case being opened. 117 new cases in 2024; 119 closed, most pre-investigation. Six misconduct findings. Super low rate. Most of what they do is hidden. You could give them more money or spend it smarter.
Cheers! Do I see this correctly that they issued two research misconduct findings in 2025? And they have ca. 25 employees?