🚀 Excited to launch a new Summer Lecture Series by @ellis.eu & @uni-jena.de: “Explainability & Understanding of Models”
• Jun 2 @hila-chefer.bsky.social
• Jun 24 @margretkeuper.bsky.social
• Jul 8 @simoneschaub.bsky.social
3 talks, 5 weeks—stay tuned! ✨
#WomenInScience #ExplainableAI #MachineLearning
Posts by Simone Schaub-Meyer
We are excited to share our call for papers with a submission deadline of May 21st, 2026! We invite high-quality research papers presenting original contributions in all areas of pattern recognition.
Read more: www.gcpr-vmv.de/year/2026/gc...
#GCPR2026 #VMV2026
Exciting news! The DAGM German Conference on Pattern Recognition #GCPR2026 will be hosted together with the Vision, Modeling, and Visualization #VMV2026 Conference at the University of Siegen on September 22-25, so mark your calendars! 📅
More info: www.gcpr-vmv.de/year/2026
📢🎓 We have open PostDoc positions in Computer Vision & ML at @tuda.bsky.social and @hessianai.bsky.social within the Reasonable AI Cluster of Excellence — supervised by @stefanroth.bsky.social, @simoneschaub.bsky.social and many others!
Apply here: www.career.tu-darmstadt.de/tu-darmstadt...
📢🎓 We have open PhD positions in Computer Vision & Machine Learning at @tuda.bsky.social and @hessianai.bsky.social within the Reasonable AI Cluster of Excellence — supervised by @stefanroth.bsky.social, @simoneschaub.bsky.social and many others!
www.career.tu-darmstadt.de/tu-darmstadt...
🎉 Today, Simon Kiefhaber will present our ICCV oral paper on how to make optical flow estimators more efficient (faster inference and lower memory usage) with state-of-the-art accuracy:
🌍 visinf.github.io/recover
Talk: Tue 09:30 AM, Kalakaua Ballroom
Poster: Tue 11:45 AM, Exhibit Hall I #76
📢Excited to share our IROS 2025 paper “Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model”!
Work by Jannik Endres, @olvrhhn.bsky.social, Charles Cobière, @simoneschaub.bsky.social, @stefanroth.bsky.social and Alexandre Alahi.
[1/8] We are presenting four main conference papers, two workshop papers, and a workshop at @iccv.bsky.social 2025 in Hawaii! 🎉🏝
🎓 Looking for a PhD position in computer vision? Apply to the European Laboratory for Learning & Intelligent Systems (ELLIS) and work with @stefanroth.bsky.social & @simoneschaub.bsky.social! Join the info session on Oct 1.
@ellis.eu @tuda.bsky.social
ellis.eu/news/ellis-p...
We are presenting five papers at the DAGM German Conference on Pattern Recognition (GCPR, @gcpr-by-dagm.bsky.social) in Freiburg this week!
Efficient Masked Attention Transformer for Few-Shot Classification and Segmentation (GCPR 2025)
by @dustin-carrion.bsky.social, @stefanroth.bsky.social, and @simoneschaub.bsky.social
🌍: visinf.github.io/emat
Poster: Wednesday, 03:30 PM, Poster 8
Removing Cost Volumes from Optical Flow Estimators (ICCV 2025 Oral)
by @skiefhaber.de, @stefanroth.bsky.social, and @simoneschaub.bsky.social
🌍: visinf.github.io/recover
Poster: Friday, 10:30 AM, Poster 14
🚀 Open-Mic Opinions! 🚀
We welcome you to voice your opinion on the state of XAI. You get 5 minutes to speak (in-person only) during the workshop.
📷 Submit your proposals here: lnkd.in/d7_EWKXp
For more details: lnkd.in/dpYWVYXS
@iccv.bsky.social #ICCV2025 #eXCV
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tirol Alps — including a hike up Geier Mountain and new research ideas at 2,857 m! 🇦🇹🏔️
🚨Deadline Approaching! 🚨
The non-proceedings track closes in 2 days, so be sure to submit on time!
We look forward to your submissions!
More info at: excv-workshop.github.io
@iccv.bsky.social #ICCV2025 #eXCV
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
Call for papers at the eXCV workshop at ICCV 2025.
Join us in taking stock of the state of the field of explainability in computer vision, at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!
@iccv.bsky.social
We are presenting 3 papers at #CVPR2025!
Reasonable Artificial Intelligence and The Adaptive Mind: As part of the federal and state Excellence Strategy, TU Darmstadt has been awarded funding for two cluster projects at once. A milestone for our university! www.tu-darmstadt.de/universitaet...
"Reasonable AI" got selected as a cluster of excellence www.tu-darmstadt.de/universitaet...
Incredibly happy to be part of RAI & to continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the Excellence Strategy!
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥
We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!
🌎 visinf.github.io/cups
Why has continual ML not had its breakthrough yet?
In our new collaborative paper w/ many amazing authors, we argue that “Continual Learning Should Move Beyond Incremental Classification”!
We highlight 5 examples to show where CL algos can fail & pinpoint 3 key challenges
arxiv.org/abs/2502.11927
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!
Excited to share that today our paper recommender platform www.scholar-inbox.com reached 20k users! We hope to reach 100k by the end of the year. Lots of new features are currently in the works and will roll out soon.
Understanding what AI models can do, and what they cannot: an interview with @simoneschaub.bsky.social, early-career researcher in the cluster project “RAI” (Reasonable Artificial Intelligence).
“RAI” is one of the projects with which TU Darmstadt is applying for a Cluster of Excellence.
www.youtube.com/watch?v=2VAm...
Hi Julian, I just joined Bluesky. I work on XAI in computer vision, and it would be great to be added to the list as well. Thanks!
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!
Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
Our work, "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals" is accepted at TMLR! 🎉
visinf.github.io/primaps/
PriMaPs generate masks from self-supervised features, enabling us to boost unsupervised semantic segmentation via stochastic EM.
The DFG has accepted Dr. Simone Schaub-Meyer into the Emmy Noether Programme. Schaub-Meyer aims to develop methods that improve our understanding of widely used #KI models for image and video analysis www.tu-darmstadt.de/universitaet...