Arian presenting our work on measuring the #uncertainty of visual #explanations from autonomous navigation systems at the #ACRAI research day.
A paper with more details is in the works. A link will follow soon.
#InterpretableML #XAI #AI #ML #UAI #sqIRL #IDLab #UAntwerp #imec #DEFRA #AHOI
Posts by sqIRL Lab
Interested in training hyperspectral image analysis models with reduced annotated data?
Salma explores this question in her recent paper.
doi.org/10.1007/s104...
#hyperspectral #HSI #AI #ML #ContrastiveLearning
#sqIRL #IDLab #UAntwerp #imec
Time to celebrate!
Last week Salma successfully defended her PhD on #Representation #Learning for #Hyperspectral Image Analysis.
Thanks for the cool research, your support of the group's members, and all your contributions to #sqIRL/#IDLab.
Congratulations, Dr. Haidar!
#HSI #AI #ML #UAntwerp
Kudos to Thomas and the involved collaborators for the solid contributions to the field.
Curious about our work? Have a look at our website: sqirllab.github.io/
#Interpretability #mechinterp #compinterp #xai #AI #ML
#sqIRL #UAntwerp #IDLab
At the MI workshop (spotlight), we show how Bilinear Autoencoders ease the analysis of neural representations through their decomposition into polynomial latents.
Paper and the cool demos at tdooms.github.io/research/bae
#Interpretability #mechinterp #compinterp #xai #AI #ML
#UAntwerp #IDLab
Have a look at the work our lab will be presenting at #NeurIPS '25.
On the main track: SimpleStories, a dataset of simple yet diverse stories that has the potential to become the MNIST for language.
openreview.net/pdf?id=sVh3e...
#Interpretability #mechinterp #xai #AI #ML
#sqIRL #UAntwerp
We just launched a #linkedin page. Please help us spread the word and share it with people who might be interested.
linkedin.com/company/sqir...
#RepresentationLearning #interpretability #explainability #XAI #mechinterp #AI #ML #sqIRL #ComputerVision #HSI #IDLab #UAntwerp
Saja's survey on the #interpretability/#explainability of Capsule Networks was accepted at #Neurocomputing. Give it a look while it is free to access.
doi.org/10.1016/j.ne...
#CapsNets #AI #ML #XAI #sqIRL #UAntwerp #IDLab
Thanks to the #Flanders AI Research Program (FAIR) for supporting this work and to everyone involved for the fruitful collaboration.
#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec
#HDC models aim to be an energy-efficient alternative to current #AI systems, and thanks to the efforts of our collaborators, their decision-making process is now more interpretable.
#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec
Our work on Interpretable #Hyperdimensional Computing (HDC) classifiers for #tabular data is now available at #Neurocomputing.
doi.org/10.1016/j.ne...
#Neuromorphic #interpretability #Explainability #VSA #xai #ML #FAIR #UAntwerp #imec
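For readers new to #HDC: a generic VSA-style sketch (not the paper's model, all names and sizes illustrative) of how tabular records can be encoded. Symbols become random high-dimensional bipolar vectors, (feature, value) pairs are combined by elementwise binding, records by additive bundling, and binding is its own inverse, so the encoding can be probed:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # hypervector dimensionality

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1.0, 1.0], size=D)

def bind(a, b):
    return a * b                          # binding: elementwise product

def bundle(vs):
    return np.sign(np.sum(vs, axis=0))    # bundling: elementwise majority

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a toy tabular record as bundled (feature, value) bindings.
feat = {name: hv() for name in ("age", "income")}
val = {v: hv() for v in ("low", "high")}

record = bundle([bind(feat["age"], val["low"]),
                 bind(feat["income"], val["high"])])

# Unbinding with a feature vector recovers (noisily) its bound value.
probe = bind(record, feat["age"])
assert cos(probe, val["low"]) > cos(probe, val["high"])
```

Because the hypervectors are near-orthogonal at this dimensionality, the correct value stands out clearly from the noise when probing.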
Feeling rewarded... Honored to be among the Top Reviewers at #NeurIPS this year.
#AI #ML #UAntwerp #sqIRL #IDLab #imec
Thanks to our collaborators from the #VUB, Ward Gauderis and Geraint Wiggins; as well as #sqIRL members Thomas Dooms and José Oramas for the nice collaboration. #UAntwerp #FlandersAI #FAIR
This week our lab was present at the Flanders AI Research day. There we contributed a deep dive session on #Compositional #Interpretability. More details at: compinterp.github.io
#CompInterp #interpretableML #XAI #explainability #aisafety
This month we welcomed Dr. Renata Turkeš, who will be exploring #TDA principles for the characterisation of generalisation capabilities of DNNs. Welcome to the lab Renata. renata-turkes.github.io #UAntwerp
The deadline for the #AIMLAI workshop held jointly with #ECMLPKDD2025 has been extended until June 21st.
Looking forward to last-minute submissions on work around #interpretability and #explainability of #AI / #ML
project.inria.fr/aimla
#mechinterp #xai
Our lab got two papers accepted at #ECMLPKDD2025 on the topics of #Interpretability for Spiking NNs and self-supervised representation learning with embedded interpretability.
Congrats to Jasper, Hamed, Fabian and our collaborators.
#SNN #SIM #AI #ML #neuromorphic #xai #interpretableML
Honored to be selected among the Outstanding Reviewers at #CVPR2025.
#UAntwerp @sqirllab.bsky.social #IDLab
cvpr.thecvf.com/Conferences/...
It is confirmed: the #AIMLAI workshop will be held jointly with @ecmlpkdd.org.
We invite submissions of long and short papers covering work around #interpretability and #explainability of #AI/#ML.
Deadline: 14/06/25
CfP: shorturl.at/yYQ9G
Website: shorturl.at/W9r1A
#XAI #mechinterp #ECMLPKDD
Last week our lab celebrated the doctoral defense of Hamed Behzadi. It has been four years since Hamed joined us, and his evolution into a fully fledged independent researcher has been constant. Congratulations!
See shorturl.at/296MV for some of the work produced by Hamed.
#ML #AI #Interpretability
Benjamin Vandersmissen (25/04 evening) will show the effects that training with a twin network has on the learning process and share insights on how TNA leads to superior predictive performance across a number of tasks and architectures. #deeplearning #ML #ICLR2025 #sqIRL
openreview.net/forum?id=TEm...
Thomas Dooms will show how bilinear MLPs can serve as a more transparent component, providing a better lens to study the relationships between inputs, outputs, and the weights that define the model. #mechinterp #interpretability #ML #AI #XAI #ICLR2025 #sqIRL
openreview.net/forum?id=gI0...
If you are at #ICLR2025 and interested in how to understand DNNs from their weights, or in how to improve the predictive performance of a DNN via Twin Network Augmentation, we encourage you to get in touch with Thomas and Benjamin, who will be presenting our work there. #sqIRL #UAntwerp #XAI
We had the opportunity to contribute to the Research Day of the Antwerp Center of Responsible #AI ( #ACRAI ) where Salma and Hamed presented their work on #explainability-driven #HSI analysis and model #interpretability, respectively.
#ML @uantwerpen.be
www.uantwerpen.be/en/research-...
This week we had the visit of Prof. Eliana Pastor (DBDMG @PoliTO) who gave a presentation on her research around the topics of #trustworthyAI, #Bias analysis and #FairnessAI. Very good work and interesting ideas. @elianapastor.bsky.social we hope to host you again soon. #explainability #AI #ML
Bilinear MLPs Enable Weight-based Mechanistic Interpretability
M. Pearce, T. Dooms, A. Rigg, J. Oramas, L. Sharkey
We show that bilinear layers can serve as an interpretable replacement for current activation functions, enabling weight-based interpretability.
preprint: arxiv.org/abs/2410.08417
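As a rough illustration of the core idea (a minimal sketch with made-up dimensions, not the paper's code): a bilinear layer replaces an elementwise nonlinearity like ReLU with the elementwise product of two linear projections. Each output unit is then an explicit quadratic form in the input, so its behavior can be read directly from the weight matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden = 8, 16  # toy sizes
W = rng.standard_normal((d_hidden, d_in)) / np.sqrt(d_in)  # first projection
V = rng.standard_normal((d_hidden, d_in)) / np.sqrt(d_in)  # second projection

def bilinear_layer(x):
    """Bilinear 'activation': elementwise product of two linear maps."""
    return (W @ x) * (V @ x)

x = rng.standard_normal(d_in)
y = bilinear_layer(x)

# The same output written as explicit quadratic forms: unit k computes
# x^T B_k x with interaction matrix B_k = outer(W[k], V[k]).
y_explicit = np.array([x @ np.outer(W[k], V[k]) @ x for k in range(d_hidden)])
assert np.allclose(y, y_explicit)
```

The interaction matrices `B_k` are what make weight-based analysis possible: input-output relationships live in the weights themselves rather than in data-dependent activations.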
Improving Neural Network Accuracy by Concurrently Training with a Twin Network
B. Vandersmissen, L. Deckers, J. Oramas
We show that the effectiveness of TNA lies in a better exploration of the parameter space and the learning of more robust and diverse features.
preprint: openreview.net/forum?id=TEm...
A great start for 2025.
Proud to announce that our group (#sqIRL/IDLab) got two papers accepted at #ICLR2025. A first for our young lab.
Thanks to our collaborators, the FAIR Program and the Dept. of CS @uantwerpen.bsky.social for supporting this research.
#AI #ML #interpretability #XAI
Recent work published by the #sqIRL Lab on the training of competitive deeper Forward-Forward Networks. #FF #localLearning #ML #RepresentationLearning
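For readers unfamiliar with the #FF setup this work builds on (Hinton's Forward-Forward algorithm): each layer is trained locally, without backpropagating through the rest of the network, by pushing a per-layer "goodness" score (e.g. the sum of squared activations) up for positive data and down for negative data. A minimal single-layer sketch with toy sizes and an illustrative threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8)) * 0.1  # one layer's weights (toy sizes)

def goodness(x):
    h = np.maximum(W @ x, 0.0)   # ReLU activations
    return np.sum(h ** 2)        # layer-local 'goodness' score

def local_prob(x, theta=2.0):
    """Probability this layer assigns to 'x is positive data'."""
    return 1.0 / (1.0 + np.exp(-(goodness(x) - theta)))

def local_update(x, positive, lr=0.01):
    """One layer-local gradient step on goodness (no backprop):
    raise goodness for positive samples, lower it for negative ones."""
    h = np.maximum(W @ x, 0.0)
    grad = 2.0 * np.outer(h, x)  # dG/dW for the active ReLU units
    W[...] += lr * grad if positive else -lr * grad

x_pos = rng.standard_normal(8)
p = local_prob(x_pos)            # layer's belief before training
local_update(x_pos, positive=True)
```

Because each layer optimizes only its own goodness, the scheme needs no global backward pass, which is what makes FF attractive for local learning; training competitive *deeper* stacks of such layers is the challenge the paper addresses.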