
Posts by Angus Nicolson

Future work must account for this variability and design explanation formats that elicit more consistent responses.

Thanks to my co-authors: Elizabeth Bradburn, @yaringal.bsky.social, @profaris.bsky.social, J. Alison Noble.

Paper: www.nature.com/articles/s41...

5 months ago

We found that AI predictions significantly reduced clinician error.

But adding explanations gave no further significant reduction.

Explanations also had no significant effect on self-reported trust or reliance.

Critically, while some clinicians improved with explanations, others performed worse.


Our new paper is out in npj Digital Medicine: "The human factor in explainable artificial intelligence: clinician variability in trust, reliance, and performance."

It is often asserted that explainable AI (XAI) is essential for trust. In this study, we put that claim to the test.

www.nature.com/articles/s41...
