
Posts by Lukas Klein

Post-Pretraining in Vision, and Language Foundation Models | Yuki M. Asano (UTN)

In case you missed the last heidelberg.ai talk by Prof. Yuki Asano (@yukimasano.bsky.social) on "Post-Pretraining in Vision, and Language Foundation Models", it has just been released on the heidelberg.ai YouTube channel: www.youtube.com/watch?v=5UTC...

10 months ago

We’re thrilled to welcome Yuki Asano, Professor at the University of Technology Nuremberg and head of the Fundamental AI (FunAI) Lab, to our heidelberg.ai / NCT Data Science Seminar series on May 13th at 5 pm in Heidelberg (INF280 Seminar Rooms K1+K2) for an in-person event.

11 months ago

✨Excited to share our work on “AI-powered virtual tissues from spatial proteomics for clinical diagnostics and biomedical discovery” (arxiv.org/pdf/2501.060...), building on our vision paper in @cellpress.bsky.social on multi-scale, multi-modal foundation models (shorturl.at/G2Dew).

1 year ago

In her talk, Charlotte will share insights into the fields of Virtual Cells and Digital Twins, highlighting how AI is shaping personalized cancer therapies through advanced simulations of cellular behavior and patient-specific outcomes.

1 year ago

If you're interested in AI for 🦠 Virtual Cells and 👥 Digital Twins in Oncology, join our Heidelberg AI talk by @bunnech.bsky.social on the 23rd, either in person or virtually!

More information: heidelberg.ai/2025/01/23/c...

1 year ago
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Explainable AI (XAI) is a rapidly growing domain with a myriad of proposed methods as well as metrics aiming to evaluate their efficacy. However, current studies are often of limited scope, examining ...

🤔 Curiously, the emerging top-performing method is not examined in any relevant related study.

Happy to discuss the results during the conference!

Paper: arxiv.org/abs/2409.16756
Benchmark: github.com/IML-DKFZ/latec
(3/3)

1 year ago

🚀 Through LATEC, we showcase the risk of conflicting metrics causing unreliable rankings and propose a more robust evaluation scheme. We critically evaluated 17 XAI methods across 20 metrics in 7,560 unique setups, including varied architectures & input modalities.
(2/3)
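As a hypothetical illustration of the conflicting-metrics risk described above (the method names and scores below are made up, not taken from the LATEC paper), two evaluation metrics scoring the same XAI methods can induce opposite rankings, so no single metric yields a reliable ordering:

```python
def rank(scores):
    """Return each method's rank (1 = best), assuming higher scores are better."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {method: i + 1 for i, method in enumerate(order)}

# Made-up scores for three methods under two hypothetical metrics.
faithfulness = {"GradCAM": 0.82, "LIME": 0.75, "SHAP": 0.64}
robustness   = {"GradCAM": 0.41, "LIME": 0.58, "SHAP": 0.77}

r1 = rank(faithfulness)  # GradCAM ranks 1st by faithfulness ...
r2 = rank(robustness)    # ... but 3rd by robustness: the ranking fully reverses
print(r1, r2)
```

Aggregating over many metrics, architectures, and modalities, as the benchmark does across its 7,560 setups, is what makes the resulting rankings stable.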

1 year ago

Picking the right explainable AI method for your computer vision task? Wondering about its evaluation reliability?

🎯 Then you might be interested in our latest #neurips2024 publication on LATEC, a (meta-)evaluation benchmark for XAI methods and metrics!

📄 arxiv.org/abs/2409.16756
🧵(1/3)

1 year ago