Enhancing Concept Localization in CLIP-based Concept Bottleneck Models
Rémi Kazmierczak, Steve Azzolin, Goran Frehse, Eloïse Berthier, Gianni Franchi
Action editor: Chen Sun
https://openreview.net/forum?id=2xaOl0wluw
#saliency #concepts #bottleneck
New #Featured Certification:
GIFT: A Framework Towards Global Interpretable Faithful Textual Explanations of Vision Classifiers
Eloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
https://openreview.net/forum?id=OwhW5MpFmD
#saliency #visual
Simplifying Knowledge Transfer in Pretrained Models
Siddharth Jain, Shyamgopal Karthik, Vineet Gandhi
Action editor: Stephen Lin
https://openreview.net/forum?id=eQ9AVtDaP3
#saliency #deep #models
Statistical Test for Saliency Maps of Graph Neural Networks via Selective Inference
Shuichi Nishino, Tomohiro Shiraishi, Teruyuki Katsuoka, Ichiro Takeuchi
Action editor: Christopher Morris
https://openreview.net/forum?id=5NkXTCVa7F
#saliency #salient #subgraphs
New #J2C Certification:
CNN Interpretability with Multivector Tucker Saliency Maps for Self-Supervised Models
Aymene Mohammed Bouayed, Samuel Deslauriers-gauthier, Adrian IACOVELLI, David Naccache
https://openreview.net/forum?id=VM8bNd5A09
#saliency #cnn #cnns
Left: Volume activation maps of the saliency-sensitive response in the foreground ROIs of V1, V2, and IPS for a representative participant. Red lines indicate the boundary between gray matter (GM) and cerebrospinal fluid (CSF). Yellow lines indicate the boundary between GM and white matter (WM). Right: Lefthand column: Surface activation maps of the saliency-sensitive response (in percent signal change) at different cortical depths of V1 in the same representative participant. Dashed circles indicate the location of the foreground on the cortical surface. Righthand column: Saliency maps (averaged across all participants) in image space.
How does the brain direct our #attention to conspicuous objects in our field of #vision? This study in humans maps the neural origin and propagation of #saliency signals through #CorticalLayers during visual processing @plosbiology.org 🧪 plos.io/4qqcAsW
New #TMLR-Paper-with-Video:
Simplifying Knowledge Transfer in Pretrained Models
Siddharth Jain, Shyamgopal Karthik, Vineet Gandhi
https://tmlr.infinite-conf.org/paper_pages/eQ9AVtDaP3
#saliency #learning #deep
SciTech Chronicles, Mar 18th, 2025
bit.ly/stc031825
#pattern #eastward-shift #Mississippi #Tennessee #bioelectricity #epithelial #microelectrode #calcium #AI #age-related #Biobank #saliency #RNS #Wearable-LLM #PTSD #EEG #Observatory #hydrothermal #magma #steam
CNN Interpretability with Multivector Tucker Saliency Maps for Self-Supervised Models
Aymene Mohammed Bouayed, Samuel Deslauriers-gauthier, Adrian IACOVELLI, David Naccache
Action editor: Mathieu Salzmann
https://openreview.net/forum?id=VM8bNd5A09
#saliency #cnn #cnns
Reproducibility Study of "Language-Image COnsistency"
Konrad Szewczyk, Patrik Bartak, Mikhail Vlasenko, Fanmin Shi
Action editor: Krzysztof Geras
https://openreview.net/forum?id=FvxTseSYRk
#saliency #interpretability #explainability