Seems like a logical name. I had just seen the same concept under a different name: pubmed.ncbi.nlm.nih.gov/11113946/
Posts by Aaron Caldwell
Huh interesting. I’ve not seen this called centinels before but rather sympercents.
“[Of] 25 replication studies, 7 (28%) studies demonstrated robust replicability, meeting all three validation criteria: achieving statistical significance (p < 0.05) in the same direction as the original study and showing compatible effect size magnitudes as per the Z test (p > 0.05).”
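The "compatible effect size magnitudes as per the Z test" criterion quoted above can be sketched as a Z test on the difference between two standardized mean differences. This is a minimal illustration, not the project's actual code: the numbers are hypothetical, and the standard-error formula is the common large-sample approximation for Cohen's d.

```python
import math
from statistics import NormalDist

def se_smd(d: float, n1: int, n2: int) -> float:
    """Large-sample standard error of a standardized mean difference."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

def z_test_compatibility(d_orig, n1_orig, n2_orig, d_rep, n1_rep, n2_rep):
    """Z test of the difference between original and replication effects.
    A large p-value (e.g. p > 0.05) is read as 'compatible magnitudes'."""
    se = math.hypot(se_smd(d_orig, n1_orig, n2_orig),
                    se_smd(d_rep, n1_rep, n2_rep))
    z = (d_orig - d_rep) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical example: original d = 0.80 (n = 20/20),
# replication d = 0.35 (n = 60/60)
z, p = z_test_compatibility(0.80, 20, 20, 0.35, 60, 60)
```

Here the difference is not significant (p well above 0.05), so the replication would pass the compatibility criterion even though its point estimate is less than half the original.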
I'm pleased to announce the publication of two important papers in Sports Medicine on the first ever large replication project in sport and exercise science. Read the full papers:
lnkd.in/gH3NCqK5
lnkd.in/gE7izySW
#SportsScience #EvidenceBasedPractice #Research #OpenScience #Replication
I’ve implemented a couple of different ways of plotting this in the TOSTER package aaroncaldwell.us/TOSTERpkg/ar...
Read it and had to do a double take to understand. Looking at this, I’ve never felt so American 🤣
We're hiring at the U. of Arkansas!
Asst. Prof. with a research program focused on circular bioeconomy systems, plus teaching, service, and mentoring graduate students and research staff. Tenure will accrue in the Dept. of Biol. & Agric. Engr.
Apply: uasys.wd5.myworkdayjobs.com/en-US/UASYS/...
⚠️ Job Alert ⚠️
We're looking to add an Assistant/Associate Professor of Biostatistics to our crew at UAMS in Northwest Arkansas! Please share widely!
Awesome team, great work-life balance, and you get to help make a real impact in community health. #stats
uasys.wd5.myworkdayjobs.com/en-US/UAMS_A...
Mind sharing the DOI? I’m curious about this one.
Hey folks, I intermittently get on social media. If you ever need me, send me a message through my contact form on my website aaroncaldwell.us
New article from me:
“Inconsistent multiple testing corrections: The fallacy of using family-based error rates to make inferences about individual hypotheses”
Open access: doi.org/10.1016/j.me...
#Stats
#Methodology
New article: "Effects of preferred versus nonpreferred music on bench press performance". By Jasmin Hutchinson, @jennymurphy2.bsky.social, et al.
doi.org/10.51224/cik...
Check out our replication study as part of the larger replication project by the ssreplicationcentre.com
Thanks so much to Jasmin and team for their brilliant work on this
tweet from “gentile news network”: ⚠️ HERE ARE THE RULES ⚠️ 🚨Post a picture of a Jew. 🚨 Say "_____ is a Jew." (1st line) 🚨Include what they are known for/who they are (2nd line) 🚨 #NameThem. (3rd line) Please post only clean images of the person i.e. no stars of David etc. Let's get this trending. ❤️
gonna tell my grandkids this was musk’s twitter, because it was
Also a good paper that may be useful for the scenario you described pubmed.ncbi.nlm.nih.gov/10734289/
Ah, I had forgotten about this post. I like the simplicity of his MLM approach!
Yup, not necessarily the wrong approach either. IMHO, I’d prefer to report Glass’s delta (standardized by the pre-intervention SD) peerj.com/articles/103...
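The preference stated above, standardizing by the pre-intervention SD so the intervention itself can't shrink the denominator, can be sketched as follows; the data are entirely made up for illustration.

```python
from statistics import mean, stdev

def glass_delta(change_scores, baseline_scores):
    """Glass's delta for a pre-post design: mean change divided by the
    pre-intervention (baseline) SD, so the standardizer is not affected
    by whatever the intervention does to the variance."""
    return mean(change_scores) / stdev(baseline_scores)

# Hypothetical data: strength change after training, plus baseline values
baseline = [100, 95, 110, 105, 98, 102]
change = [6, 4, 9, 5, 7, 5]
delta = glass_delta(change, baseline)
```

Using the baseline SD as the standardizer keeps the effect size on a scale defined before any treatment exposure, which is the point of reaching for Glass's delta here.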
Yeah, it’s a feature of the design (I’m guessing this is the SMD of the change scores). The lack of a concurrent control provides a conveniently large effect size
Yeah, that’s pretty much how I’d do it (at a glance)
So the mixed model you had but with t1 on the “right” and t2 and t3 on the “left”
Then I’d only include t1 as a covariate
When is the experimental treatment exposure? Before or after t1?
New article: "Model specification in mixed-effects models: A focus on random effects". By Keith Lohse et al.
doi.org/10.51224/cik...
No, that sounds entirely unreasonable. Are these limits for limits of agreement or for an equivalence test?
If you don't think it would be any better, your conclusion/claim would be better stated as "Effect sizes are a function of the experimental design and analysis approach. So describing an effect outside of the context of the experiment just isn't meaningful."
Claim 1: "Effect size is a function of the experiment design and analysis approach as much as, if not more than, the underlying effect. So describing an effect’s Cohen’s d outside of the context of the experiment just isn’t meaningful."
That’s misleading the reader then, no?
Not sure I agree, all depends on what information/inference you are trying to make with the comparison. Let me ask you this though, what makes you think an *unstandardized* mean difference would be any better for comparing between experiments?
And heterogeneity does not mean the effect sizes can’t be compared. Random-effects models in meta-analysis exist for a reason
To put it another way: if I had two different studies that used different measures with wildly different reliabilities (within subject variation) I wouldn’t be surprised by differences in the SMD.
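One way to see the point above is the classical attenuation relation: with measurement error, the observed SD is inflated to true SD / sqrt(reliability), so the observed standardized difference shrinks by sqrt(reliability). A tiny sketch with hypothetical numbers:

```python
import math

def attenuated_smd(true_d: float, reliability: float) -> float:
    """Observed SMD under classical measurement error: the observed SD is
    true_sd / sqrt(reliability), so d shrinks by a factor of
    sqrt(reliability)."""
    return true_d * math.sqrt(reliability)

# Same true effect, two measures with wildly different reliabilities
d_reliable = attenuated_smd(0.60, 0.95)   # ~0.58
d_noisy = attenuated_smd(0.60, 0.50)      # ~0.42
```

So two studies of the same underlying effect, one with a noisy measure and one with a precise one, are expected to report visibly different SMDs, which is the unsurprising difference described above.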