
Posts by Maarten van Smeden

Preliminary appraisal of machine learning based prediction models
A significant portion of healthcare research is devoted to the development of prediction models, yet the integration of these models into routine clinical care remains limited. Persistent barriers, su...

Appraising ML prediction models @maartenvsmeden.bsky.social
1) Share model and code
2) Predictors and language
3) Class imbalance & calibration
4) Model validation sample size
5) Is the model clinically useful in practice?
www.jclinepi.com/article/S089...

3 weeks ago
Signal or noise? Evaluating commonly used attribution methods for explaining deep neural networks in electrocardiogram classification
Aims: Attribution-based explainability methods are widely used in electrocardiogram (ECG) analysis to interpret predictions from ‘black-box’ deep n...

Fantastic new paper casting doubt on the explainability of explainable AI. To explain complex machine learning algorithms, you need, at a minimum, reproducibility of the explanation. academic.oup.com/ehjdh/articl... #machinelearning #Statistics #StatsSky @maartenvsmeden.bsky.social

1 month ago

Happy to see this in print!

DOI: 10.1146/annurev-statistics-042324-123749
@maartenvsmeden.bsky.social @laurewynants.bsky.social @vanamsterdam.bsky.social and Ewout Steyerberg

1 month ago

Cluster analysis will always give you clusters

1 month ago

I have got the data
and I figured out exactly how to do my cluster analysis
all I need is a relevant question that my cluster analysis is going to answer
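The joke has a serious core: clustering algorithms impose structure rather than discover it. A minimal sketch of that point, assuming scikit-learn (the data, seeds, and cluster count here are arbitrary illustrations, not from the post): k-means will happily partition pure noise into exactly as many "clusters" as you ask for.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 2))  # pure uniform noise: no true cluster structure

# k-means never refuses: it always returns the requested number of clusters
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
sizes = np.bincount(km.labels_)
print(sizes)  # three non-empty groups, despite there being nothing to find
```

Whether those groups answer any relevant question is exactly what the algorithm cannot tell you.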

1 month ago

What if you combine open datasets with AI? Apparently, a threefold increase in low-quality research papers, mass-produced by paper mills.

Interesting study in @jclinepi.bsky.social #academicsky #episky #medsky #Skystats

Thanks to @maartenvsmeden.bsky.social for initially posting this on LinkedIn!

1 month ago
Evaluation of performance measures in predictive artificial intelligence models to support medical decisions: overview and guidance
Numerous measures have been proposed to illustrate the performance of predictive artificial intelligence (AI) models. Selecting appropriate performance measures is essential for predictive AI models i...

Our guidance regarding performance measures for medical AI models is finally out!

- Stop bashing AUROC, although it does not settle things
- Calibration and clinical utility are key
- Show risk distributions
- Classification statistics (e.g. F1) are improper

www.thelancet.com/journals/lan...
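A minimal, hypothetical sketch of these points using scikit-learn (the simulated data and seed are arbitrary and are not from the paper): the Brier score is a proper scoring rule evaluated on the risk estimates themselves, while F1 first collapses those risks through an arbitrary cut-off.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss, f1_score

rng = np.random.default_rng(1)
p_true = rng.uniform(0.05, 0.95, size=2000)  # simulated "true" risks
y = rng.binomial(1, p_true)                  # observed outcomes
# a model whose predictions track the true risks with some noise
p_hat = np.clip(p_true + rng.normal(0, 0.05, size=2000), 0.01, 0.99)

auc = roc_auc_score(y, p_hat)       # discrimination only
brier = brier_score_loss(y, p_hat)  # proper score: rewards well-calibrated risks
f1 = f1_score(y, p_hat > 0.5)       # needs an arbitrary threshold, discards the risks
print(round(auc, 2), round(brier, 2), round(f1, 2))
```

Shifting every risk estimate up or down changes F1 but not AUROC, and only the Brier score (with calibration plots and utility measures such as net benefit) reflects whether the risks themselves can be trusted.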

4 months ago

NEW PAPER

The use of explainable AI in healthcare, evaluated using the well-known Explain, Predict and Describe taxonomy by Galit Shmueli

link.springer.com/article/10.1...

4 months ago

🧐

4 months ago

This is one of my favourite observations about sample size calculations (afaik first articulated by Miettinen in 1985).

4 months ago

Ha! I did not know I quoted Miettinen :). Thanks for the reference

4 months ago

For some research studies the optimal sample size should be estimated at 0

4 months ago

“Data available upon reasonable request” is academic language for you can get my data OVER MY DEAD BODY

5 months ago

I take version control very seriously

5 months ago

Manuscript_Final_Version_actualFINALcopy_version9b_USETHISONE.docx

5 months ago

Prediction models that are used to guide medical decisions are usually regulated under medical device regulation. This means that putting a calculator out there to promote the use of your new prediction model is likely to break some rules.

5 months ago

The lasso works really well in particular settings and for particular purposes. If you are after high prediction performance alone and you have a rather large sample size, it can be an excellent choice indeed. But most analytical goals are not only about prediction.
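For illustration, a hypothetical scikit-learn sketch (simulated data, arbitrary seed; not from the post): with a reasonably large sample, cross-validated lasso predicts well, but the variables it keeps are not a reliable answer to a "which predictors matter?" question.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, p = 500, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                      # 5 real signals, 25 pure-noise predictors
y = X @ beta + rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 2))   # strong predictive R^2 ...
print(int((model.coef_ != 0).sum()))       # ... but the retained set need not match the 5 true signals
```

The penalty that is optimal for prediction is generally not the one that recovers the true predictor set, which is one reason prediction performance and explanation are different goals.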

6 months ago

Kind reminder: data driven variable selection (e.g. forward/stepwise/univariable screening) makes things *worse* for most analytical goals
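As a small illustration of why (a hypothetical simulation assuming scipy; nothing here is from the post): with 50 predictors that are pure noise, univariable screening at p < 0.05 still "selects" a few of them by chance, and those chance findings then get carried into the model as if they were real.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))   # 50 predictors, none truly related to y
y = rng.normal(size=n)

# univariable screening: keep predictors with p < 0.05
selected = []
for j in range(p):
    r, pval = stats.pearsonr(X[:, j], y)
    if pval < 0.05:
        selected.append(j)

print(len(selected))  # on average ~2-3 false discoveries from pure noise
```

The selected set also changes from sample to sample, which is the instability that makes stepwise and univariable screening harmful for most analytical goals.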

6 months ago
Vacancy — PhD position on AI methodology for prediction of patient outcomes using organoid models
Are you passionate about bringing personalized medicine to the next level and making real impact in healthcare? Join our team and develop novel AI methodology to improve predictions of relevant patient ...

NEW FULLY FUNDED PHD POSITION

Looking for a motivated PhD candidate to join our team. Together with Danya Muilwijk, Jeffrey Beekman, and me, you will explore the opportunities and limitations of AI in the context of organoids.

For more info and to apply 👉
www.careersatumcutrecht.com/vacancies/sc...

6 months ago

Interpretable "AI" is just a distraction from safe and useful "AI"

1 year ago

This is right tho. Let’s therefore call them sensitivity positive predictive value curves bsky.app/profile/laur...

8 months ago
Performance evaluation of predictive AI models to support medical decisions: Overview and guidance
A myriad of measures to illustrate performance of predictive artificial intelligence (AI) models have been proposed in the literature. Selecting appropriate performance measures is essential for predi...

For details: arxiv.org/abs/2412.10288

8 months ago

No.

8 months ago

I wonder who those people are who come here dying to know what GenAI has done with some prompt you put in

8 months ago

If you think AI is cool, wait until you learn about regression analysis

8 months ago

TL;DR: Explainable AI models often don't do a good job of explaining. They can be very useful for description. We should be really careful when using explainable AI in clinical decision making, and even when judging the face validity of AI models.

Excellently led by @alcarriero.bsky.social

8 months ago

NEW PREPRINT

Explainable AI refers to an extremely popular group of approaches that aim to open "black box" AI models. But what can we see when we open the black AI box? We use Galit Shmueli's framework (to describe, predict or explain) to evaluate

arxiv.org/abs/2508.05753

8 months ago
Guidelines for Reporting Observational Research in Urology: The Importance of Clear Reference to Causality (PubMed)
Observational studies often dance around the issue of causality. We propose guidelines to ensure that papers refer to whether or not the study aim is to investigate causality, and suggest language to ...

This is, however, not clever or safe writing; it is a bad collective habit that needs to stop. The fix is not to avoid references to causality, but to refer to it clearly.

pubmed.ncbi.nlm.nih.gov/37286459/

8 months ago

The healthcare literature is filled with "risk factors". This word combination makes research findings sound important by implying causality, while avoiding direct claims of having identified causal associations that are easily critiqued.

8 months ago

And taking this analogy one step further: it gives genuine phone repair shops a bad name

8 months ago