My thoughts @kffhealthnews.org on the proposed dismantling of AHRQ
We need safety research to protect our patients from harms in health care. No organization in the world does more for that than AHRQ.
Let the general public know.
kffhealthnews.org/news/article...
Posts by Enrico Coiera
What does the 2025-26 Federal Budget mean for Australia’s investment in healthcare AI?
aihealthalliance.org/2025/03/28/w...
In a new paper in Machine Learning, we recast AI explanation as a conversation between AI and human, allowing the explanation to be tailored to fit the knowledge and needs of a human (or a requesting AI agent).
link.springer.com/article/10.1...
US NIH Grant review panels are suspended, and a freeze imposed on travel, communication with the public, hiring ....
www.science.org/content/arti...
There is a growing US push for clinical AI safety to be certified by academic assurance labs rather than the FDA.
But there are many challenges: conflicts of interest via industry funding to universities, scalability, and suitability for post-market monitoring.
link.springer.com/article/10.1...
TRIPOD-LLM is out! Check out our consensus guidelines for reporting #LLM research in biomedicine. TRIPOD-LLM is intended to be a living guideline to keep up with the rapid advances in LLMs. Kudos to lead author Dr. Jack Gallifant
“While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.”
[New digital scribe paper] Expert evaluation of large language models for clinical dialogue summarization
www.nature.com/articles/s41...
In this study with @dafraile.bsky.social, ChatGPT's ability to summarise primary care consultations was impressive but not yet at human skill level.
Who should certify health AI is safe? There is a push in the US for it not to be the FDA but instead, the industry that manufactures the technology.
Do we have good examples of high-risk technology applications where there has been effective self-regulation?
www.politico.com/news/2025/01...
How to get a PhD in 20 Tweets (Part 2)
Happy Holidays!
How to get a PhD in 20 Tweets (Part 1)
Source: blogs.bmj.com/bmj/2012/02/...
Just one more damn thing to add into the polycrisis mix:
‘Unprecedented risk’ to life on Earth: scientists call for halt on ‘mirror life’ microbe research.
www.theguardian.com/science/2024...
"The important thing is that paper makes it very clear that nobody should ever take LLMs at their word. They can easily tell you one thing and (especially if hooked up as agents) do another — possibly quite contrary to what they have alleged they are doing." - Gary Marcus, from the linked substack.
Always a hostage to fortune when making such predictions! But we now have digital scribes and I reckon we will be close to this world by 2030, if not there. I believe that everything described is now technically possible, except maybe for the curator agents which are a few years away.
And for fun, here is one from the vault circa 2004: "Four rules for the reinvention of health care"
www.bmj.com/content/328/...
Can we design health services to resiliently respond to crises like climate change? In this paper we show that innovation during COVID-19 depended on repurposing existing services into new "innovation bundles". So should we design "health services as platforms"?
link.springer.com/article/10.1...
Indeed. One research challenge is to take the massive data sets potentially generated in a smart environment and find ways to make them clinically useful. "Old fashioned" notes may have said too little, but smart environments will likely say too much. Solvable, but currently unsolved.
The 4 stages of digital scribes
1. Human led documentation
2. Mixed-initiative documentation
3. Computer-led documentation
4. Intelligent clinical environment
How long until we work in smart, sensor dense, clinical spaces where documentation disappears as a human task?
www.nature.com/articles/s41...
Sometimes digital health evaluations focus on hard outcomes when process benefits are more likely. It is hard to demonstrate morbidity and mortality changes due to an EHR because so many other things also need to go right, e.g. changes in human decisions and processes.
ebooks.iospress.nl/volumearticl...
Interesting discussions on today's CIEHF webinar launching the new human-centred healthcare AI guidance. Questions around relying on AI vs monitoring the outputs critically. And what users need to know - and who's going to support them. ergonomics.org.uk/resource/int...
The cause is likely multiple, I suspect. Yes, it could be journal acceptance practices leading to a distortion in the pool of published values; it could also be researchers "p-hacking", i.e. looking for analyses that produce 'favourable' outcomes. It could even be how common tools calculate AUCs!
Histogram of AUC mean values
New research from @aidybarnett.bsky.social shows published AUC values for some clinical prediction models are over-inflated, with excesses above 0.7, 0.8 and 0.9 and shortfalls just below these thresholds, risking sub-optimal decisions
bmcmedicine.biomedcentral.com/articles/10....
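The selection mechanism behind those threshold excesses can be illustrated with a toy simulation. This is a minimal sketch with entirely hypothetical numbers (the 0.70 mean, 0.05 spread, and 30% acceptance rate below the threshold are assumptions, not values from the paper): if studies reporting AUCs below a round-number threshold are less likely to be published, the published pool shows an excess above that threshold even though the underlying models are no better.

```python
import random

random.seed(42)

# Hypothetical "true" AUCs for 10,000 prediction models,
# centred on 0.70 with modest spread.
true_aucs = [random.gauss(0.70, 0.05) for _ in range(10_000)]

def published(auc: float) -> bool:
    """Toy selection rule: AUCs at or above 0.70 are always
    published; those below get through only 30% of the time."""
    return auc >= 0.70 or random.random() < 0.30

pub_aucs = [a for a in true_aucs if published(a)]

frac_true = sum(a >= 0.70 for a in true_aucs) / len(true_aucs)
frac_pub = sum(a >= 0.70 for a in pub_aucs) / len(pub_aucs)

print(f"share >= 0.70 among all models:       {frac_true:.2f}")
print(f"share >= 0.70 among published models: {frac_pub:.2f}")
```

Under these assumptions roughly half the simulated models clear 0.70, but well over two thirds of the *published* ones do, reproducing the kind of excess-above / shortfall-below pattern the paper reports.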
AI in health: A little less conversation, a little more action please!
How far have we progressed the AI in healthcare policy agenda over the last 12 months? More than we expected at the time, but there is so much still to do.
www.medicalrepublic.com.au/ai-in-health...
Three models of conformance service: (A) universal conformance: all agents have access to the same global standard (Mx); (B) mediated conformance: adaptors provide an externally situated conformance service to interoperating agents; (C) localized conformance: autonomous adaptive agents internalize their conformance functions. Standards are mandated in (A), incompletely adhered to in (B), and potentially helpful but not necessary in (C).
“What would things look like in a zero standards world? .. from the perspective of an autonomous and adaptive entity, we would see standards for what they are - a workaround when entities cannot adapt.”
academic.oup.com/jamia/articl...
Today in Australia *all* of the responsibility for use of an AI scribe rests with the clinician. "Once you accept the scribe note it becomes your note." Manufacturer, integrator, educator and accreditor are unburdened with responsibility. So the question is whether that is fair and reasonable.
When the clinical use of AI leads to patient harms, medico-legal responsibility should not just fall on the shoulders of doctors but on all those who can manage or mitigate risk - including software developers. (Tracey Pickett, Avant) #aicare24
Kicking things off with @ecoiera.bsky.social at today’s health AI conference in Melbourne. Very useful scorecard and update on progress in AI for health in the last year. (Spoiler - it’s a lot of activity!) #digitalhealth
Covers of Science and Nature journals in the past 2 weeks denoting remarkable progress of life science with A.I. tools
A vertical takeoff of life science with #AI LLMs.
Publication of 10 new foundation models of proteins, DNA, RNA, methylation, cells, interactions, evolution, and design in the past couple of weeks!
Unprecedented progress, reviewed in the new Ground Truths
erictopol.substack.com/p/learning-t...