
Posts by Sean Harrison, PhD

"How can we improve the efficiency of our machines?"

"Have you found and used all the power shards on the map?"

1 month ago 1 0 0 0

Well that made me chuckle!

1 month ago 0 0 0 0

"I left my home port, for it was not for me.

I reached the shores of a distant land.

I stayed a time, but the sea yet called.

I sailed again, to shores not yet dreamt.

I believe I will keep travelling, for so long as there is breath to billow my sails."

1 month ago 4 0 1 0

Do you mean a hard read because it's absolutely devastating?

I listened to it on a long walk once.

It was not a happy walk.

1 month ago 5 0 0 0
The Radleys: The Radleys are an everyday family who juggle dysfuncti…

Matt Haig wrote a book about vampires in Bishopthorpe, which is more a village than suburbia, but still.

www.goodreads.com/book/show/79...

1 month ago 0 0 0 0

Someone noted that they were dismayed at all their academic colleagues using LLMs to write papers, because it was clear they viewed academia as the business of producing papers, not the vocation of the pursuit of truth.

Given the existence of paper mills, they probably aren't entirely wrong.

1 month ago 17 3 1 0
How the Healthcare System Screws Over Non-Binary People Increasing numbers of non-binary people are seeking treatment for gender dysphoria in the UK. But they say that the National Health Service is dramatically failing to adequately or efficiently support...

I mean, it's been noted on here that non-binary people say they are binary when trying to access hormones, because otherwise they won't get them.

Extremely clear rationale for not doing exactly what they did...

www.vice.com/en/article/h...

1 month ago 3 0 0 0

Like yeah, also good from a risk of bias perspective, but even then, using an LLM defeats the purpose: are you going to stick to what the LLM said?

Why?

Just do the fucking work.

1 month ago 0 0 0 0

This feels identical to kids using LLMs to do their homework: the point of an analysis plan isn't (entirely) to produce an analysis plan, it's to think through your analysis, which informs data collection and interventions, and does a lot of the work up-front that you'd otherwise have to do later.

1 month ago 1 0 1 0

If basic logistic regression gets you 90% of the way there, you probably won't notice any effects of ML, or, indeed, more sophisticated/complex regression models.

And it doesn't matter at all if the dataset sucks, i.e., has no available information.

Also, simplicity is beneficial in its own right.

1 month ago 1 0 0 0

Lukewarm take: the potential proportion of variance explained in the outcome by the totality of the dataset is the hard ceiling for the predictive capacity of that dataset.

Different methods aim to capture as much of that information as possible, but many models could be close to that ceiling.

1 month ago 1 0 2 0
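A quick simulation of the ceiling point above (pure Python; the data-generating process is made up for illustration): if the outcome is signal plus irreducible noise, then even an oracle that knows the true signal exactly can't explain more variance than the signal carries, and a plain linear fit lands at essentially the same ceiling.

```python
import random
import statistics

random.seed(42)
n = 50_000

# Hypothetical data-generating process: y = x + noise.
# x carries all the available information; the noise is irreducible.
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]
var_y = statistics.variance(y)

# Oracle predictor: knows the true signal exactly (y_hat = x).
# Its R^2 is Var(signal) / Var(outcome) = 1 / (1 + 1) = 0.5 -- the ceiling.
resid_oracle = [yi - xi for xi, yi in zip(x, y)]
r2_oracle = 1 - statistics.variance(resid_oracle) / var_y

# Ordinary least-squares fit: no knowledge of the true model, same ceiling.
mx, my = statistics.fmean(x), statistics.fmean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx
resid_fit = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
r2_fit = 1 - statistics.variance(resid_fit) / var_y

print(f"oracle R^2: {r2_oracle:.3f}")  # close to 0.5
print(f"OLS R^2:    {r2_fit:.3f}")     # also close to 0.5
```

No amount of model sophistication recovers variance the data never contained, which is why a simple model sitting near the ceiling leaves nothing visible for ML to add.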

Brent Knoll!

1 month ago 1 0 0 0

Depends on the construction and conceptualisation of "deprivation", but in general, I see residual confounding as above: unblocked paths because what you're measuring is not precisely the confounder of interest.

1 month ago 2 0 0 0

Quite so, but arguably a complete (possibly overly-complex) DAG would show that "deprivation" as a core variable would affect any measurement of deprivation (education, etc., which may be affected by other things), and residual confounding would therefore be the remaining open path.

1 month ago 2 0 1 0

If anyone needs it, I'd suggest "deprivation" could go into pretty much every "U" variable for virtually any observational research.

"Residual deprivation", if you like, if you've measured and controlled for e.g. "household income" or "Townsend deprivation index".

Because, like, that won't do it.

1 month ago 2 0 1 0
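The "residual deprivation" point can be simulated directly (pure Python; variable names and effect sizes are invented for illustration): the unmeasured confounder U drives both exposure and outcome, the true causal effect is zero, and adjusting for a noisy proxy of U (think "household income" standing in for "deprivation") leaves an open path, while adjusting for U itself closes it.

```python
import random

random.seed(1)
n = 50_000

# Hypothetical setup: U confounds X -> Y; the true causal effect of X on Y is zero.
u = [random.gauss(0, 1) for _ in range(n)]
x = [ui + random.gauss(0, 1) for ui in u]  # exposure, driven by U
y = [ui + random.gauss(0, 1) for ui in u]  # outcome, driven only by U
m = [ui + random.gauss(0, 1) for ui in u]  # noisy proxy of U (e.g. income)

def adjusted_coef(x, z, y):
    """Coefficient on x from the regression y ~ x + z (centred normal equations)."""
    mx, mz, my = (sum(v) / len(v) for v in (x, z, y))
    sxx = sum((a - mx) ** 2 for a in x)
    szz = sum((a - mz) ** 2 for a in z)
    sxz = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    szy = sum((a - mz) * (b - my) for a, b in zip(z, y))
    det = sxx * szz - sxz ** 2
    return (szz * sxy - sxz * szy) / det  # 2x2 solve for the x coefficient

b_proxy = adjusted_coef(x, m, y)  # adjust for the proxy: residual confounding remains
b_true = adjusted_coef(x, u, y)   # adjust for U itself: the path is blocked

print(f"adjusting for proxy: {b_proxy:.3f}")  # ~0.33, despite a true effect of 0
print(f"adjusting for U:     {b_true:.3f}")   # ~0
```

With these (assumed) variances the proxy-adjusted coefficient converges to 1/3, not 0: measuring something correlated with the confounder is not the same as measuring the confounder.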

Anyway, it'd be pretty awesome to burn all those steps into a slice of tree stump and sell *that* at a farmer's market.

...

I probably wouldn't end up selling it; it sounds too cool to part with.

1 month ago 1 0 0 0

Pretty sure I once did a logistic regression with a continuous exposure sort-of by hand, as in, wrote out the matrices, inverted them, etc., using a computer for the maths but not the process.

Not 100% sure why, either for learning or for understanding.

1 month ago 2 0 1 0
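For anyone curious what that by-hand process looks like, here's a sketch (pure Python, simulated data; not the original exercise): Newton-Raphson for a one-exposure logistic regression, writing out the score vector and the 2x2 information matrix explicitly and inverting it by hand each step.

```python
import math
import random

random.seed(7)
n = 5_000

# Simulated continuous exposure and binary outcome.
# Hypothetical true model: logit(p) = -0.5 + 1.2 * x
x = [random.gauss(0, 1) for _ in range(n)]
y = [1 if random.random() < 1 / (1 + math.exp(-(-0.5 + 1.2 * xi))) else 0
     for xi in x]

b0, b1 = 0.0, 0.0  # starting values
for _ in range(25):
    p = [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]
    # Score vector: X'(y - p)
    g0 = sum(yi - pi for yi, pi in zip(y, p))
    g1 = sum(xi * (yi - pi) for xi, yi, pi in zip(x, y, p))
    # Information matrix X'WX, with W = diag(p(1-p)), written out as a 2x2
    w = [pi * (1 - pi) for pi in p]
    i00 = sum(w)
    i01 = sum(wi * xi for wi, xi in zip(w, x))
    i11 = sum(wi * xi * xi for wi, xi in zip(w, x))
    # Invert the 2x2 by hand and take a Newton step
    det = i00 * i11 - i01 * i01
    b0 += (i11 * g0 - i01 * g1) / det
    b1 += (-i01 * g0 + i00 * g1) / det

print(f"intercept ~ {b0:.2f}, slope ~ {b1:.2f}")  # near -0.5 and 1.2
```

The computer does the arithmetic but the process is all on paper: score, information, invert, step, repeat until the estimates stop moving.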

Ahahaha, I didn't even see that.

The IRRs are most definitely *not* on the log scale, and the 0.8, 0.9 ... 1.2 are not on the log scale, but the scale itself *is* on the log scale (the distance between markings is equal).

1 month ago 0 0 0 0
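The mismatch described above can be checked mechanically (a sketch; the tick values are just the labels from the post): on a linear axis, successive tick *differences* are equal; on a log axis, successive tick *ratios* (equivalently, distances in log space) are equal.

```python
import math

ticks = [0.8, 0.9, 1.0, 1.1, 1.2]  # the axis labels from the figure

diffs = [round(b - a, 10) for a, b in zip(ticks, ticks[1:])]
ratios = [round(b / a, 4) for a, b in zip(ticks, ticks[1:])]

# Equal differences, unequal ratios: equally spaced markings for *these*
# labels are consistent with a linear axis, not a log one.
print(diffs)   # all 0.1
print(ratios)  # 1.125, 1.1111, 1.1, 1.0909 -- not constant

# On a genuine log axis the ticks would be geometric, e.g. 0.8 * 1.1**k,
# giving equal gaps in log space.
log_ticks = [0.8 * 1.1 ** k for k in range(5)]
log_gaps = [round(math.log(b) - math.log(a), 10)
            for a, b in zip(log_ticks, log_ticks[1:])]
print(log_gaps)  # all ~ log(1.1) ~ 0.0953
```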

If that's the case, I wonder* if the authors can get back some of the $9,550** they spent on the APC...

*I don't really wonder.
**May be less if the authors had some kind of institutional deal.

Man, I hate journals.

1 month ago 4 0 0 0

"Minor" in the sense that the conclusions don't change based on the incorrect figure and/or IRRs, and that I'd just ask them to fix the figure, rather than retract the paper.

It's *possible* the journal re-did the figure in house style and screwed it up, fairly certain that happened to me once...

1 month ago 1 0 1 0
Figure 2 Unadjusted incidence rate ratios for heart failure, atrial fibrillation, and VHD at 12 months -- none of the IRRs or their 95% confidence intervals are plotted on the graph correctly (or vice versa, depending on which is correct)


Minor but irritating point - Figure 2 is simply wrong.

None of the IRRs or their 95% CIs match the figure.

Like, there are only 3 figures...

1 month ago 9 1 3 1

Even if a study is underpowered, so long as the methods are sound and the research question can be answered well with the proposed data, it can meaningfully add to the evidence base.

Deciding funding *solely* on power means you'd never study rare outcomes or populations.

1 month ago 0 0 0 0

"While relying on significance tests to judge a causal effect as zero or non-zero may seem unpalatable, an even worse interpretation is that the point estimate is the effect."

I don't think practice should be governed by people mistreating statistics.

1 month ago 0 0 1 0

"Careless readers may take it for a suggestion that any sample size is acceptable when making causal inferences about important questions. We believe this is a real risk."

Yet adequate sample sizes are *not* necessary for causal inference (though they are obviously useful)...

1 month ago 0 0 1 0

And by doing more than nothing, they'll likely give a balanced overview of a shitty situation, which can then be used instead of "alarming" people with contradictory results.

Nonetheless, I don't call for any analysis, I call for *good* analyses, which require decent thought that goes beyond power.

1 month ago 0 0 1 0

In this case, the D group have provided an answer that is orders of magnitude more precise than A or B, so would dominate any meta-analysis in any case.

They also adjusted, where others didn't, which may be appropriate or not.

But, point is, reviewers would do more than "do nothing".

1 month ago 0 0 1 0

They'd look through each analysis, assess the risks of bias, maybe try for an IPD meta-analysis if they can get data from each study (although this is just 2x2 tables, and it looks like only one adjusted, so the data is largely available anyway), assess heterogeneity, and summarise appropriately.

1 month ago 0 0 1 0
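As a sketch of the pooling step (hypothetical 2x2 tables, not the studies under discussion), an inverse-variance fixed-effect meta-analysis of log odds ratios makes the precision point above concrete: the large study's weight swamps the small ones.

```python
import math

# Hypothetical studies as 2x2 tables: (events_exposed, non_events_exposed,
#                                      events_unexposed, non_events_unexposed)
studies = {
    "A (small)": (10, 90, 5, 95),
    "B (small)": (8, 92, 10, 90),
    "D (large)": (400, 9600, 300, 9700),
}

results = {}
for name, (a, b, c, d) in studies.items():
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # standard variance of a log OR
    results[name] = (log_or, 1 / var)    # inverse-variance weight

total_w = sum(w for _, w in results.values())
pooled = sum(lo * w for lo, w in results.values()) / total_w

for name, (lo, w) in results.items():
    print(f"{name}: OR {math.exp(lo):.2f}, weight {w / total_w:.1%}")
print(f"pooled OR: {math.exp(pooled):.2f}")
```

With these invented counts the large study carries well over 90% of the weight, so the pooled estimate sits essentially on top of it regardless of what the small, imprecise studies report.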

I disagree with:

"They [meta-analysts] do nothing and the socially alarmed groups are left with four sets of equivocal and possibly contradictory results, arguably more alarming than having no information at all."

Reviews do more than meta-analyses, and their conclusion wouldn't be "do nothing".

1 month ago 0 0 1 0

I mean, that's just an argument for doing good research.

If the question is worth answering, it's worth answering with limited information, but I noted above that you'd still have to do a *good* analysis.

At least some of the hypothetical groups didn't do that.

1 month ago 0 0 1 0

"Any amount of good analysis is better than no analysis, so long as it is *all* reported, because it feeds into systematic reviews and meta-analyses, which are ultimately what we should be using for the basis of policy decisions, not individual studies.

And small populations need analyses too."

1 month ago 0 0 1 0