The Royal Statistical Society discussion meeting for Regression by Composition will be held in London on March 24th. The event will also be livestreamed. Sign up to be among the first to hear about the future of regression modelling at rss.org.uk/training-eve...
Posts by Anders Huitfeldt
In my view, approximate correctness of the model (in terms of biological plausibility) is lexicographically more important to model choice than tractability of the estimation procedure (provided that researchers honestly discuss how that might affect their inferences)
You can of course argue that oppressed groups tend to be assigned tougher judges because of discrimination. Then you should not control for judge because it is a mediator. But for conceptual clarity, this is different from your stated reason for not controlling
I think I disagree with this. Suppose you have sufficient data to control for judge on the individual judge level and the only relevant bias is whether the judges are prejudiced. Then controlling for judge will give you a model that appropriately detects such bias.
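A minimal simulation sketch of the point above (all names and parameters are illustrative, not from the thread): oppressed-group defendants are assigned to harsher judges, and every judge also adds a prejudice penalty `delta`. The naive group contrast confounds assignment with prejudice, while demeaning within judge (a fixed-effects estimator) isolates the within-judge disparity, i.e. the bias of interest.

```python
import numpy as np

rng = np.random.default_rng(1)
n_judges, per_judge = 50, 200

# Hypothetical setup: judge j has baseline harshness h_j; prejudiced
# sentencing adds `delta` for group-A defendants; group-A defendants
# are also more likely to be assigned to harsh judges.
h = rng.normal(0.0, 1.0, n_judges)
judge = np.repeat(np.arange(n_judges), per_judge)
p_group_a = 1 / (1 + np.exp(-h[judge]))      # tougher judges see more group A
group_a = rng.random(judge.size) < p_group_a
delta = 0.5                                   # within-judge bias to detect
sentence = h[judge] + delta * group_a + rng.normal(0.0, 1.0, judge.size)

# Naive contrast mixes judge assignment with prejudice.
naive = sentence[group_a].mean() - sentence[~group_a].mean()

# Controlling for judge: demean sentence and group within judge,
# then regress residual on residual (within-judge estimator).
mu_s = np.array([sentence[judge == j].mean() for j in range(n_judges)])
mu_g = np.array([group_a[judge == j].mean() for j in range(n_judges)])
resid_s = sentence - mu_s[judge]
resid_g = group_a - mu_g[judge]
fe = (resid_s * resid_g).sum() / (resid_g * resid_g).sum()
print(naive, fe)  # naive is inflated by assignment; fe recovers delta
```

Within each judge, group membership is effectively randomized, so the within-judge contrast is unbiased for the prejudice effect even though assignment to judges is itself discriminatory.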
Fun article about “outsider” scientists and their breakthroughs.
“Academia filters most funding, publishing, and hiring decisions through senior insiders, which favors ideas within existing paradigms.”
worksinprogress.co/issue/why-sc...
Goose chase meme. Goose asks "Unbiased for what estimand?" then yells "Unbiased for what!!"
Reckon I can submit this as a figure for a paper?
Foreground text says “Marginal” and “conditional” are relative descriptions of an estimand. In the background is a smoky mollusc
New post: “Marginal” and “conditional” are relative descriptions of an estimand
New paper posted on Arxiv: "When do composite estimands answer non-causal questions?"
This can happen more often than you think, and can have a dramatic impact on trial results (e.g. a false-positive rate of almost 90%)
arxiv.org/abs/2506.22610 @timpmorris.bsky.social
Suppose someone went to a discussion board whose community values differ more from ours than those of the “red tribe” (e.g. Islamists, Putinists, or CCP ideologues). Wouldn’t you think this was a good thing, helping both sides understand better how the others see the world?
That’s fine. You can stay away from them as much as you want. I just don’t get the rationale for shaming people for engaging “the other side” in conversation
You can be a moral purist all you want, but honestly, what is your end game? Do you expect the “other faction” to all suddenly come to the realization that you were right all along, and that they were contemptible bigots who had been misled by disinformation?
If we want to find a way to live alongside people who don’t share our values, the first step will be to find a way to communicate across the sociocultural fault line. That requires that we do not wall ourselves into separate corners of the internet
After two years of trying to avoid this discussion, I just necroed *that thread* on datamethods (discourse.datamethods.org/t/should-one...) in order to share an excellent preprint by philosopher Veli-Pekka Parkkinen (philsci-archive.pitt.edu/24785/1/efme...)
I’m George Takei and I approve of this message.
«The world is what it is; men who are nothing, who allow themselves to become nothing, have no place in it»
🤢 horrifying piece of work, which we’ll see idiots quoting.
For goodness sake @jclinepi.bsky.social
In economics, editors, referees, and authors often behave as if a published paper should reflect some kind of authoritative consensus.
As a result, valuable debate happens in secret, and the resulting paper is an opaque compromise with anonymous co-authors called referees.
1/
People say estimating a causal effect sets the bar unattainably high.
But estimating an association is a bar that is literally so low that you can't go under it.
Better to aim for what you want and fall short than to, in Homer Simpson's words, "aim so low, no one will even care if you succeed."
While most of the alternatives to democracy have been tried, there are many alternatives to the publishing system that have not been tried and may be superior.
I was not trying to make a point about inexact wording, and I apologize if that is how it came across. My point was that there may exist better systems for evaluating science that have not been tried.
I believe the quote was "all the others that have been tried". The system was well suited for an era when the limiting factor was the cost of printing and distributing paper journals. When distribution of information is essentially free, other systems may work better for identifying scientific value
The Justice Department has filed a lawsuit against approximately two dogs.
Unpopular opinion: Blinded peer review was a mistake. If someone is able to block my paper based on their midwit opinions, they should at the very least be forced to put their name on the review, and publicly stake their reputation on the (false) claim that my work is flawed.
Same. I have no desire to speak ill of the dead, but I wish they had left this sentence out of the obituary: "He will be remembered for his remarkable contributions to the psychology world and as a defender of expression without undue fear of reprisal." Simply not true.
The data used for this map seems off. Having worked in hospitals in both countries, I refuse to believe that the death rate in Norway is double the rate in Ireland
Can anyone point me to a convincing intro to / explanation of ‘in silico’ trials?
Everything I’ve read & heard about them just makes the writer/presenter sound nuts.
What am I missing??
The Onion should buy Elsevier next
I like this taxonomy, but the examples given are not convincing. Most missingness depends on unseen data, so I understand that good examples are hard to find, but they could have tried to find some plausible completely-random missingness mechanism for illustration
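One candidate for a plausible completely-random mechanism, sketched in Python (all names and numbers are illustrative, not from the post): records dropped by a clerical error that is independent of both the covariate and the outcome, e.g. a random batch of forms lost in transit. Under MCAR, complete-case summaries remain unbiased for the full-data target.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical covariate and outcome (illustrative only).
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# MCAR mechanism: 20% of outcome records lost for reasons unrelated
# to either x or y (e.g. a random batch of forms lost in transit).
lost = rng.random(n) < 0.2
y_obs = np.where(lost, np.nan, y)

# Complete-case analysis under MCAR is unbiased for the full-data mean.
full_mean = y.mean()
cc_mean = np.nanmean(y_obs)
print(full_mean, cc_mean)  # should agree up to sampling noise
```

Contrast this with missingness driven by `y` itself (not simulated here), where the complete-case mean would be systematically shifted.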
Gotta be honest here, Rhian is one of my favorite authors. I always enjoy her papers so much. I’m currently reading her paper on collapsibility and I feel like this one will be a natural follow up.