My God… Just make it a lottery already!
A basic screen on scientific strength and feasibility, and then… lottery.
As fair as can be: no bullshit, no biased decisions, no weird rejection arguments. Just plain (bad) luck.
Or start at the ISCB. There is a joint workshop:
Designing Next-Generation Respiratory Virus Trials: Estimands, Core Outcomes, and Adaptive Pandemic-Ready Frameworks
PROACT EU-Response + RECOVERY
and we are also inviting REMAP-CAP
This is an important story. I didn't yet know about the early 2000s activism and how it was both effective and ineffective. Very interesting read!
open.substack.com/pub/clinical...
Need to read the study more carefully, but might be of interest to @epiellie.bsky.social, @tamarhaspel.bsky.social, @statsepi.bsky.social.
Or is it just selection/collider bias? ApoE4 raises LDL cholesterol. Saturated fat, as in meat, raises LDL, more so in people with ApoE4. High LDL increases cardiovascular & dementia risk. The study selects for old people without dementia. Meat-eating ApoE4 carriers might be more depleted prior to baseline.
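A toy simulation of that selection story (entirely made-up numbers, just to show the mechanism):

```r
# Hypothetical coefficients, only to illustrate the selection mechanism:
# ApoE4 and meat both raise LDL (with an interaction), LDL raises dementia
# risk, and the study then selects dementia-free survivors.
set.seed(1)
n <- 1e5
apoe4 <- rbinom(n, 1, 0.25)
meat  <- rbinom(n, 1, 0.50)
ldl   <- 3 + 0.5 * apoe4 + 0.3 * meat + 0.3 * apoe4 * meat + rnorm(n)
dementia <- rbinom(n, 1, plogis(-3 + 0.8 * ldl))

sel <- dementia == 0                 # "old people without dementia"
mean(meat[sel & apoe4 == 1])         # meat eaters depleted among selected carriers
mean(meat[apoe4 == 1])               # vs. the unselected baseline of ~0.5
```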
And in what is my favorite figure ever, we look into how shrinkage impacts person-level means in both the univariate and multivariate case:
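If you want to play with the univariate idea yourself, here's a minimal sketch with lme4 (toy data, not the figure's):

```r
# Partial pooling shrinks noisy person-level means toward the grand mean.
library(lme4)

fit <- lmer(Reaction ~ 1 + (1 | Subject), data = sleepstudy)
raw    <- tapply(sleepstudy$Reaction, sleepstudy$Subject, mean)
shrunk <- coef(fit)$Subject[["(Intercept)"]]

plot(raw, shrunk, xlab = "raw person-level mean", ylab = "shrunken estimate")
abline(0, 1, lty = 2)  # shrunken estimates sit inside the identity line
```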
Thanks! Removing tidyverse and reducing dependencies is on our to-do list.
For this, see for example here: github.com/scjohannes/m...
See lines 148-177 and 203-211. Very much WIP.
By 2 d.f. do you mean, for example, 1 df for the 'main' effect and 1 df for the deviation from PO, either at a single cut-point or as a linear deviation? What's definitely easily doable via a matrix is, e.g., a linear deviation from PO for time and a separate log OR for the treatment variable at one intercept.
That would also be great. For frequentist stats VGAM (I think) supports all of that! It's of course not nicely integrated into the rms ecosystem.
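A minimal sketch of what that could look like in VGAM (toy data; my recollection of the API, so check ?cumulative): the `parallel` formula relaxes PO for `time` only, keeping a single log OR for `trt`.

```r
library(VGAM)

set.seed(1)
d <- data.frame(trt = rbinom(300, 1, 0.5), time = runif(300, 0, 2))
d$y <- ordered(cut(0.5 * d$trt + d$time + rlogis(300), 3))

# parallel = FALSE ~ time: proportional odds holds for everything except time
fit <- vglm(y ~ trt + time,
            family = cumulative(parallel = FALSE ~ time),
            data = d)
coef(fit, matrix = TRUE)  # separate time coefficients per intercept, one trt log OR
```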
@aalbuquerque.bsky.social please do. I've wished for more flexible constraints in the cppo argument of blrm() but never got around to investing the time to understand how to modify the underlying Stan code for that feature myself.
Is Positron becoming unusably slow/laggy for anyone else after some time? I'm having to restart multiple times a day, e.g., when Ctrl+Enter to send code to the console takes multiple seconds. It seems to happen especially when working with .qmd files, but I haven't found any clear pattern.
Nice paper from @bingkai.bsky.social adding evidence that covariate adjustment in RCTs is good, but that ML is often not better than linear regression.
arxiv.org/pdf/2602.00434
The result shouldn't surprise you! The signal-to-noise ratio is too low to learn useful nonlinearities in all but the largest trials.
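A quick toy simulation of that intuition (my numbers, not the paper's), using loess residualization as a crude stand-in for ML adjustment:

```r
# With trial-sized n and noisy outcomes, flexible adjustment buys
# essentially nothing over a simple linear adjustment.
set.seed(1)
est <- replicate(500, {
  n   <- 200
  x   <- rnorm(n)
  trt <- rbinom(n, 1, 0.5)
  y   <- 0.3 * trt + sin(x) + rnorm(n, sd = 2)  # mild nonlinearity, lots of noise
  lin  <- coef(lm(y ~ trt + x))["trt"]
  flex <- coef(lm(y - predict(loess(y ~ x)) ~ trt))["trt"]
  c(linear = lin, flexible = flex)
})
apply(est, 1, sd)  # spreads of the two estimators are nearly identical
```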
Very sensible article. This will be clear to anyone working in trials. What will make trials faster? Better outcome measures, designs that allow for frequent interim analyses, and analyses that maximize information use.
open.substack.com/pub/cell/p/a...
This paper from Dominic Magirr et al. was very helpful in clearing up most of my confusion: osf.io/preprints/os...
Yes, I think (at least conceptually) I got most of that! I guess I'm currently mostly unsure about when a superpopulation assumption is useful and when it isn't. Loved the chapter!
I'm very sympathetic towards Bayesian methods and often use them when I can decide on the approach. But not everyone agrees and imo Markov models are still super cool / promising no matter the framework!
In link.springer.com/book/10.1007..., in the chapter on covariate adjustment, @kellyvanlancker.bsky.social clearly states that it's incorrect to use the default delta method, hence my worries.
Thanks, I shall have a look. I meant something similar when I said "inference about what happened in our trial sample vs a population" in the first post. But maybe that's also not the correct way to think about it? If I just care about "was there an effect in this trial", should I worry about var(X)?
Bootstrapping takes too long for simulation studies, so I was hoping simulating from the MVN could be good enough, but alas that would ignore the sampling variability of the Xs...
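For what it's worth, a sketch of that MVN shortcut (my own toy code, not marginaleffects internals). It holds the observed covariate rows fixed, which is exactly where the X sampling variability gets ignored:

```r
library(MASS)

set.seed(1)
d <- data.frame(trt = rbinom(200, 1, 0.5), x = rnorm(200))
d$y <- rbinom(200, 1, plogis(-0.5 + 0.7 * d$trt + 0.5 * d$x))
fit <- glm(y ~ trt + x, family = binomial, data = d)

draws <- mvrnorm(2000, coef(fit), vcov(fit))  # beta ~ MVN(beta_hat, vcov)
X1 <- model.matrix(formula(fit), data = transform(d, trt = 1))
X0 <- model.matrix(formula(fit), data = transform(d, trt = 0))
rd <- apply(draws, 1, function(b) mean(plogis(X1 %*% b) - plogis(X0 %*% b)))
sd(rd)  # ~ delta-method SE; a bootstrap over rows would also resample the Xs
```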
I stumbled across this because we are investigating how to calculate uncertainty for marginal estimands of Markov models (à la @f2harrell.bsky.social, onlinelibrary.wiley.com/doi/10.1002/..., but frequentist).
Thanks! I was referring to how marginaleffects calculates the standard errors using the delta method by default, i.e., ignoring the sampling variability of the covariates. The discussion is very helpful.
I'm slightly confused about this paper: pmc.ncbi.nlm.nih.gov/articles/PMC...
Does this mean that marginaleffects (@vincentab.bsky.social) underestimates the variance when using avg_comparisons()? Or is this actually a question about whether I want to make inference about my sample vs a superpopulation?
The authors don't even report a confidence interval around the difference, but calculating it by hand I get a width of about 35 pp on a scale from 0 to 100. The uncertainty is just massive. The clinical note completely ignores that and encourages a change in care.
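Back-of-the-envelope version of that calculation (made-up proportions, just to show the order of magnitude at N = 88):

```r
# With ~44 patients per arm and proportions near 50%, a Wald CI for the
# difference in proportions is roughly 40 pp wide.
p1 <- 0.5; p2 <- 0.5; n_arm <- 44
se <- sqrt(p1 * (1 - p1) / n_arm + p2 * (1 - p2) / n_arm)
round(2 * 1.96 * se * 100)  # CI width in percentage points
```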
This is obviously a question about practical equivalence or non-inferiority. N = 88. Patients had up to four measurements of TSH, which the authors reduced to a 0/1 stable-or-not indicator for the analysis (surely a crime in @f2harrell.bsky.social's eyes). The result is that the study is hopelessly underpowered.
E.g., in a recent issue (clinician.nejm.org/patients-lev...) the clinical takeaway is that a 15% higher dose of levothyroxine taken with breakfast was as good as the standard of care, levothyroxine on an empty stomach, at maintaining stable TSH levels. The underlying RCT is academic.oup.com/jcem/advance...
@nejm.org has a subscription for clinical notes, which summarize new research for clinicians in bite-sized pieces. It's great for, e.g., my partner, because she doesn't have time to read studies alongside clinical work. But even at @nejm.org, absence of evidence is confused with evidence of absence...
SWIGs are definitely not core lol
Genuinely seems like a great use of AI? Train a classifier on types of (problematic) changes and map their dates to before, during, or after recruitment/intervention; then, when I open an article on PubMed or a journal site, the DOI/trial ID is matched and I get a banner with the change info?
I would encourage people to check out the version history of this study on clinicaltrials.gov/study/NCT055....
The definition of early vs late, the sample size, and the inclusion criteria changed throughout the study period. Once it was even changed to an observational study and then back to an RCT??!