Calling on the Rstats community: I am looking for beginner-friendly content (blog, book chapter, YouTube video, anything) for complete beginners to learn about setting up R and RStudio, what an R session is, what a working directory is, and how to set up an R project 🙏
Posts by Yashvin Seetahul
We are inviting applications for a two-year postdoctoral position in a collaborative meta-science project on the effectiveness of data and code sharing policies in research-performing organizations. www.tue.nl/en/working-a...
#statstab #526 A Meta-Analysis of the Impact and Heterogeneity of Explicit Demand Characteristics
Thoughts: The impact of the design on conclusions is under-appreciated
#demandcharacteristics #design #methodology #error #bias
#metascience #metaanalysis
online.ucpress.edu/collabra/art...
"Demand characteristics can create false positives, false negatives, upward bias, and downward bias."
#Methodology #MetaSci
Is a 55% replication rate too low, too high, or just right? Some thoughts on Tyner et al.’s (2026) recent study.
#MetaSci #PhilSci
Issue 39 of #rdmweekly is out! 📬
It includes:
➡️ Working Smarter with {dplyr} 1.2.0 @ivelasq3.bsky.social
➡️ Computational Reproducibility: A Primer from @ukrepro.bsky.social
➡️ SCORE: Systematizing Confidence in Open Research and Evidence @cos.io
and more!
rdmweekly.substack.com/p/rdm-weekly...
Paying my respects to the Church of Normal Distribution (Hallgrimskirkja, Reykjavík). 📊🙏
Do big team science studies guarantee the global generalizability of findings?
At the risk of overgeneralizing ourselves, we reanalysed one big team science study on temporal discounting.
Together with peerless @psforscher.bsky.social & @hcp4715.bsky.social
www.nature.com/articles/s41...
== Effect Size and Confidence Intervals (ESCI) check ==
Lots improved.
Has a revamped website and R package on CRAN.
Works like statcheck, also checks effect sizes and calculates confidence intervals.
Countless tricky edge cases, but after testing it on thousands of articles, it seems pretty decent.
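ESCI's actual internals aren't shown in the post, but a minimal sketch of the kind of consistency check such tools (like statcheck) perform is easy to illustrate: recompute the two-tailed p-value from a reported t statistic and its degrees of freedom, then flag disagreements beyond rounding. The function name and tolerance below are hypothetical; scipy is assumed.

```python
from scipy import stats

def check_t_report(t, df, reported_p, tol=0.0005):
    """Recompute the two-tailed p-value for a reported t(df) statistic
    and flag it if it disagrees with the reported p beyond rounding."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    consistent = (abs(recomputed_p - reported_p) < tol
                  or round(recomputed_p, 3) == round(reported_p, 3))
    return {"recomputed_p": round(recomputed_p, 4), "consistent": consistent}

# Example: a report of "t(28) = 2.05, p = .05" -- recomputed p is ~.05, so no flag
print(check_t_report(2.05, 28, 0.05))
```

Tools like ESCI go further by also recomputing effect sizes and confidence intervals from the same reported statistics, which is where most of the edge cases live.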
This is an amazing repository of datasets that are helpful to self educate on key #stats principles
Some lovely diagrams and contrasts in Ziman’s (1981) “Science: The New Model”: doi.org/10.1177/0270...
I think the term "overpowered" described here was (is?) mostly used to say that you end up with the wrong conclusion because your sample was too large.
Even if you specify a SESOI, this situation can't occur. The situation you describe isn't an "overpowered" study; it's just being wasteful.
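The SESOI point can be made concrete with a small simulation (hypothetical numbers, numpy/scipy assumed): with an enormous sample, a trivially small true effect comes out "significant" under plain NHST, but an equivalence test (TOST) against the SESOI bounds correctly concludes the effect is smaller than anything you care about, so a large sample cannot push you to the wrong conclusion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical setup: true effect 0.02 SD, far below a SESOI of 0.3 SD
n = 100_000
x = rng.normal(0.02, 1.0, n)            # one-sample case, H0: mean = 0

t, p = stats.ttest_1samp(x, 0.0)
print(f"NHST p = {p:.4f}")              # huge n: the tiny effect is "significant"

# TOST equivalence test against the SESOI bounds (+/- 0.3)
se = x.std(ddof=1) / np.sqrt(n)
t_lower = (x.mean() + 0.3) / se         # test against the lower bound
t_upper = (x.mean() - 0.3) / se         # test against the upper bound
p_tost = max(stats.t.sf(t_lower, n - 1), stats.t.cdf(t_upper, n - 1))
print(f"TOST p = {p_tost:.4f}")         # significant: effect is within the bounds
```

Both tests reject here, which is the coherent reading: there is a nonzero effect, and it is smaller than the SESOI. No sample size makes that conclusion wrong.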
The raincloud part is a bit overkill imo
The two density plots with the lines are great!
TU/e has gained a new research centre: META/e. Daniël Lakens and Krist Vaesen were among the founders of this knowledge hub for metascience—research aimed at improving the practice of science itself. “We want to be a home for every researcher who occasionally wonders: what are we even doing?”
Thumbnail is emptiness, and emptiness is thumbnail.
Here's a link to the video tho, in case anybody is wondering what they're apologizing for 😂
www.youtube.com/watch?v=XG-6...
A great new preprint on the importance of pilot studies for the validity of the studies that follow them. Such an important topic that is discussed too little. I especially liked the section on the need for transparent reporting. osf.io/t968e_v1 By @yashvin.bsky.social and collaborators.
Thanks for sharing and for the feedback, Daniël! I actually got the original idea for this paper while listening to your talk about severity vs validity when having to deviate from a pre-registered plan (youtu.be/LfZqE4e3w-k?...)
Check out our preprint: "What Pilot Studies Can (and Cannot) Do for Validity in Psychological Research"
Great job @yashvin.bsky.social and @mbneff.bsky.social for leading!
doi.org/10.31234/osf...
A Heterogeneity Revolution in Psychology
"When studies that may appear similar are repeated, findings often vary more than we would expect due to sampling error. This is not necessarily a problem if we understand why this happens."
As for the ad hominem attacks, I suggest you reread what you wrote just before.
You refused to respond to a summary of the effects found across all the meta-analyses of the short-term effects of violent video games just because two particular people are on it.
Those are meta-analyses, not "studies," in the screenshot. And as you can see, Ferguson detects larger effects.
(And it's a bit hypocritical to talk about dishonesty: didn't you publish a meta-analysis with Pascual? How many of his papers are fraudulent again?)
Saying that makes absolutely no sense... it's like saying "if you're not drunk after one sip of alcohol, then you can't be drunk after 100 sips."
A detected effect will always depend on the dosage of the stimulus.
Moreover, it has never been demonstrated that there is no short-term effect.
I wrote a blog for the Meta-Research Center expressing my infinite frustration about not getting data. What else is new, you might think? Well, I added an extra layer of annoyance directed at the journals who do NOTHING to enforce promised data sharing.
metaresearch.nl/blog/2026/2/...
Methodology. European Journal of Research Methods for the Behavioral and Social Sciences
👀
1. Changing Minds: When Do People Resist Scientific Findings?
When scientific findings touch on people's identities or values, people don't simply weigh the evidence. They also try to protect their beliefs, self-image, and more.
Read more: spsp.org/news/charact...
Congratulations!!