One month left 'til @iclr-conf.bsky.social, about time to launch our ✨@gram-org.bsky.social Competition✨ The theme is geometry x AI4science with a dataset kindly provided by BeyondMath.
Deadline: April 22, 2026 (AoE)
🔗 gram-competition.github.io
Posts by TUM AI in Medicine Lab
A promotional graphic for an oral presentation at the EACL 2026 conference in Morocco. The background features a sunny, historic Moroccan stone fortress gate with palm trees, a clear blue sky, and decorative geometric tile patterns in the corners. Text in the top left indicates the event is at Palais Des Congres, Rabat, from March 24-29, 2026. A banner across the middle displays the presentation title: "Unintended Memorization of Sensitive Information in Fine-Tuned Language Models." Below the title is a flowchart diagram illustrating how Large Language Models (LLMs) trained on sensitive medical text can inadvertently memorize Personally Identifiable Information (PII), and how a "True-Prefix Attack" can extract a patient's name even when fine-tuned for downstream tasks that do not contain PII. Text at the very bottom reads, "Oral Presentation: March 27 | 11:00 AM | Salle La Palmeraie."
Thrilled to present our paper "Unintended Memorization of Sensitive Information in Fine-Tuned Language Models" at #EACL2026 in Rabat! 🇲🇦
w/ J. Marin Ruiz, G. Kaissis, P. Seidl, R. v. Eisenhart-Rothe, F. Hinterwimmer & @danielrueckert.bsky.social.
Read here: arxiv.org/abs/2601.174...
Our latest paper on ML-based flow estimation ✨trained on 4D flow MRI✨ in the carotid arteries is now published (open-access) in Medical Image Analysis.
🔗 www.sciencedirect.com/science/arti...
Congratulations Florian on this achievement! We wish you all the best on your next chapter.
Interested in doing similar work? Reach out to our team!
We're always looking for motivated students: kiinformatik.mri.tum.de/de/lehrstuhl...
3/3 #AIMresearch
Large dense models are often difficult to run on clinical devices.
Instead, this approach uses MoEs to create a sparser, more specialized architecture for different patient demographics and imaging types.
The results: among other improvements, a reduction of up to 76% in inference costs!
2/n
Student in front of a research poster on Mixture-of-Experts
January Thesis Highlights from AIM Lab 🎓
To celebrate their hard work, we want to showcase the excellent research our bachelor's and master's students produce!
This month, Florian Braunmiller finished a great master's thesis on Mixture-of-Experts (MoE) architectures for medical imaging.
1/n
SPRIND'S Next Frontier AI initiative
🦄 Build Europe’s next AI unicorn - @sprind-de.bsky.social Next Frontier AI initiative backs bold ideas with €125M to create European frontier AI labs.
Believe Europe should lead in AI? Join the challenge!
👉 next-frontier.ai
If you’re a student and interested in these topics, make sure to check out our teaching offerings here: kiinformatik.mri.tum.de/de/lehrstuhl...
Great having Robbie Holland back for a visit at our lab!
He’s at AIMI at Stanford University and gave a talk on auto-generated hypotheses in the context of AI4Science, not only for the lab, but also for our lab’s seminar on Multi-modal AI for Medicine (IN2107, IN45072).
#AIMnews
🎄 Lab Christmas Party! 🎄
It's always great to use the holiday season as an opportunity to (re-)connect!
We had a blast at our Christmas party with lots of laughs, pizza, and Glühwein 🍷
Wishing everyone a good end to this year and happiest of holidays to those who celebrate! ✨ #AIMsocial
Great being at #NeurIPS last week!
Thankful for the good times in the sun and the people I met.
If you’ve ever wondered whether model performance can be inferred directly from training alone, check out our work!
1/2
Saturday 11:15am: "Are foundation models useful feature extractors for electroencephalography analysis?" - Özgün Turgut, Felix Bott, Markus Ploner, Daniel Rückert
neurips.cc/virtual/2025...
3/3
Friday 4:30pm: "Gradient-Weight Alignment as a Train-Time Proxy for Generalization in Classification Tasks" - Florian A. Hölzl, Daniel Rückert, Georgios Kaissis
neurips.cc/virtual/2025...
2/3
It’s this time of the year again! #NeurIPS2025
If you are in San Diego, make sure to check out 2 works from our lab this week 📝📝 #AIMresearch
1/3
Their paper, "Evaluation and mitigation of the limitations of large language models in clinical decision-making," is an important contribution to the integration of AI in healthcare.
Read it here: www.nature.com/articles/s41...
Huge congratulations to Paul Hager and Dr. med. Friederike Jungmann for winning the MDSI Best Paper Award in the Societal Impact category! 🏆🎊
See more details in the thread below!
Photo copyright: Andreas Heddergott/TUM
#AIMResearch #AIMNews #MDSI #BestPaperAward
We celebrated the 5th anniversary of our research chair at @tum.de! 💙🥂
It's been an incredible journey of research and collaboration. Thank you to everyone who has made this possible. We are very much looking forward to the next years to come!
#AIMAnniversary #AIMNews
Quick look back at an insightful day yesterday at the Bavarian Conference on AI in Medicine, where @paulhager.bsky.social, @luciehuang.bsky.social, Alina Dima, and Vasiliki Sideri-Lampretsa were representing us.
Team, thank you for being such good ambassadors! 👏
#AIMNews #AIinMedicine
However, that’s not all! Yundi is currently extending the framework by integrating K-space signal data and genomic information to further enhance its multimodal capability.
By doing this, ViTa enables a broad spectrum of downstream applications, including cardiac phenotype and physiological feature prediction, segmentation, and classification of cardiac/metabolic diseases within a single unified framework.
ViTa is a multi-modal, multi-task, and multi-view foundation model that delivers a comprehensive representation of the heart and a precise interpretation of individual disease risk. It integrates anatomical information from 3D+time cine MRI stacks with detailed patient-level tabular data.
We are wrapping up our 5th-year anniversary paper series with ViTa by Yundi Zhang et al. (www.sciencedirect.com/science/arti...), a work that addresses the question: how can we realize personalized cardiac healthcare that moves beyond a single task?
#AIMResearch #AIMAnniversary #MultiModalLearning #CardiacMRI
But the work hasn't stopped there! Progress on the benchmark is tracked using an online leaderboard: huggingface.co/MIMIC-CDM, and we are currently developing the next generation of medical benchmarks in cooperation with Google for Health.
Models were found to perform significantly worse than doctors, to not follow guidelines, and to be extremely sensitive to simple changes in input. This means more work has to be done before we can safely deploy them for high-stakes clinical decision-making.
Paper: www.nature.com/articles/s41...
While LLMs aced standard medical licensing exams, the authors argued for evaluation in real-world clinical settings. So they developed a new dataset and benchmark that features and simulates real-world emergency room cases, and tests robustness and adherence to clinical guidelines.
Is it safe to use LLMs in the clinic today? This is the central question that @paulhager.bsky.social and Friederike Jungmann tackled in their 2024 study published in #NatureMedicine, which is our 5th-year anniversary's highlight paper this week.
#AIMAnniversary #AIMResearch #LLM #Benchmark
Paul presented dynamic, temporally resolved lung imaging using INR-based registration. Steven focused on multi-contrast fetal brain MRI reconstruction for improved motion correction and image quality.
Thank you both for sharing your work and always welcome!
Last week we had the pleasure of hosting Paul Kaftan (Institute of Medical System Biology, @uniulm.bsky.social) and Steven Jia (Institut de Neurosciences de la Timone, @univ-amu.fr) for talks on the incredible potential of Implicit Neural Representations (INRs) in medical imaging.
#AIMnews #INRs
We wish you success in all your future endeavors!
Huge and well-deserved congratulations to Dmitrii Usynin on successfully completing his doctoral journey in our lab! 🎊
Dima’s research has resulted in significant contributions to the field of trustworthy artificial intelligence, in particular for collaborative biomedical image analysis.