The thing with not dichotomizing continuous variables is that it depends on context. Lots of things are dichotomized for legal/clinical/social reasons: drinking/voting age, tax rates, cancer stages. Ken Rothman had a tweet saying we use a yes/no for measles diagnosis, not the number of rashes/spots
Posts by DhRS
breakpoints. So, I only ask for code and execute it manually, essentially separating generation and execution. I know it's slower, but it keeps me sane. 2/
Thank you so much; these are so helpful. Not sure if I am missing something, but I didn't see "stay in this directory" in your Claude prompt.
Every time I've tried to do something similar (download/scrape/clean data), I get some weird stuff. I think being super-precise is the key, but it's hard to anticipate it all. 1/
Lasagna is superior!
I happened to look at one of these APE papers in my field. Most of the state-level policy implementation dates were wrong. The dates were pulled from a couple of published sources, but weren't pulled correctly. Fancy stat modeling, but they don't get the basics right
My 2 cents: 1) it's high time conferences switched to digital posters; the waste with physical posters is enormous; and 2) digital posters can offer an overview plus detailed info on methods/results if a reader chooses. The overview can be ~100 words, with a sentence or two for each section
What's the issue with this?
Why not just use random effects?
Except for vaccination for adults aged ≥65 years, ACIP makes no preferential recommendation for a specific vaccine when more than one licensed and recommended vaccine is available. Among adults aged ≥65 years any one of the following higher dose or adjuvanted influenza vaccines is preferentially recommended: HD-IIV3, RIV3, or aIIV3. If none of these three vaccines is available at an opportunity for vaccine administration, any other available age-appropriate influenza vaccine should be used (4,5).
Not defending FDA's decision, but this is in the same ACIP MMWR summary and is what the decision is based on
"Core competencies are whatever our [Authors] areas of research are"
What is your complaint about this piece?
She is his frequent collaborator/co-author; they probably have like 50 papers together
Surely, Hassabis and Amodei qualify as scientists. Need not be mutually exclusive
Surely the FIFA Prize would be awarded in Memory of Pele/Maradona/Beckenbauer
"Come for Epstein, stay for leniency designs" 😨
Directly go to the 3rd line of the email
A natural experiment beckons?!
Is this a Vinay in new form, or is it a recurrence?
Also, you can tell where they got the inspiration for their ludicrous bar charts in the demo
It looks to me like the model is fine-tuned/overfit on "how many r's in strawberry"-type questions, hence the leap from two r's to two b's
Holy hell, this is so good.
The comments highlighted some stuff I had overlooked and were very relevant too, unlike the generic ones one would get from ChatGPT/Claude. Surely, there is a human in the loop?
I would go one step further. I also had to come to terms with the fact that my slow work and fewer papers don't necessarily mean they are ground-breaking or better than those of someone who is churning them out. I am just slow, period.
If you have "switch to a new todo app" in your todo list, then you would have completed at least one task
The transfer of a manuscript from one journal to another after rejection is increasingly becoming a clown show. It's not really a transfer if I have to reformat the manuscript, revise it for a new word count, and re-upload all the documents.
Forgot to register for this event. Was it recorded, and if so, will the recording be posted anywhere?
Conduct and submit to a journal a descriptive study in an area where not much is known. Clearly mention the descriptive study design in title/objective.
Reviewer: This is a descriptive study and lacks rigor. What about [a bunch of confounders]?
Editor: Several methodological concerns. Reject.
Authors using AI to write papers.
Editors using AI to decide on sending papers for peer review.
Reviewers using AI to craft reviews.
Editors using AI to decide on acceptance/major revisions/minor revisions.
Readers using AI to summarize published articles.
In contrast, learning more about causal inference has made me more willing to apply for applied grants. Laying bare the assumptions (wrt confounding, bias, measurement) that need to hold for the estimates to be considered causal is how I approach it
I like to think of multivariable modeling for descriptive questions (risk factor-type studies) as a form of standardization, if only a few important variables are considered, similar to reporting age-sex-standardized rates/values
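A toy sketch of the point above, using simulated data (all numbers and variable names are hypothetical, not from any real study): with a single binary confounder (age group) and a homogeneous effect, the exposure coefficient from a linear model adjusting for age lands close to the directly age-standardized rate difference, since both are weighted averages of the same stratum-specific contrasts.

```python
import numpy as np

# Simulate a confounded binary exposure and outcome (hypothetical numbers)
rng = np.random.default_rng(0)
n = 20_000
age_old = rng.binomial(1, 0.5, n)                    # 0 = young, 1 = old
exposed = rng.binomial(1, 0.3 + 0.4 * age_old)       # exposure more common in old
y = rng.binomial(1, 0.05 + 0.10 * age_old + 0.05 * exposed)  # true difference: 0.05

# Direct standardization: age-stratum rates weighted to the total population's age mix
w = np.array([np.mean(age_old == 0), np.mean(age_old == 1)])

def std_rate(e):
    return sum(w[a] * y[(exposed == e) & (age_old == a)].mean() for a in (0, 1))

std_diff = std_rate(1) - std_rate(0)

# Multivariable linear model: OLS of outcome on exposure, adjusting for age
X = np.column_stack([np.ones(n), exposed, age_old])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(round(std_diff, 3), round(beta[1], 3))  # both near the true 0.05
```

The two estimators use slightly different stratum weights (population shares vs. within-stratum exposure variance), so they diverge when the effect varies across strata; with only a few adjustment variables and a roughly constant effect, they tell the same story.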