
Posts by sqIRL Lab


Arian presenting our work on measuring the #uncertainty of visual #explanations from autonomous navigation systems at the #ACRAI research day.

A paper with more details is in the works. A link will follow soon.

#InterpretableML #XAI #AI #ML #UAI #sqIRL #IDLab #UAntwerp #imec #DEFRA #AHOI

1 month ago
Enhancing hyperspectral image prediction with contrastive learning in low-label regimes - Applied Intelligence Labelled data scarcity remains a longstanding challenge in hyperspectral image analysis, primarily due to high spectral dimensionality and the laborious nature of manual annotation. Self-supervised co...

Interested in training hyperspectral image analysis models with reduced annotated data?
Salma explores this question in her recent paper.
doi.org/10.1007/s104...

#hyperspectral #HSI #AI #ML #ContrastiveLearning
#sqIRL #IDLab #UAntwerp #imec
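For readers new to the idea, self-supervised contrastive pre-training of the kind used in low-label settings is often built around an InfoNCE-style objective: embeddings of two augmented views of the same sample are pulled together while other samples in the batch act as negatives. The snippet below is a generic, illustrative sketch under that assumption, not the paper's implementation; all names are chosen here.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive loss over a batch of paired embeddings.

    z_a, z_b: (N, D) arrays of L2-normalised embeddings of two augmented
    views; row i of z_a and row i of z_b form the positive pair, while
    all other rows of z_b serve as in-batch negatives.
    """
    sim = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))
```

With correctly paired views the diagonal similarities dominate and the loss is low; shuffling the pairing raises it, which is the signal the encoder is trained on.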

2 months ago

Time to celebrate!
Last week Salma successfully defended her PhD on #Representation #Learning for #Hyperspectral Image Analysis.

Thanks for the cool research, for supporting the members of the group, and for all your contributions to #sqIRL/#IDLab.

Congratulations, Dr. Haidar!

#HSI #AI #ML #UAntwerp

3 months ago
sqIRL (Interpretable Representation Learning): a research lab focused on interpretable representation learning and explainable AI.

Kudos to Thomas and the involved collaborators for the solid contributions to the field.

Curious about our work? Have a look at our website: sqirllab.github.io/

#Interpretability #mechinterp #compinterp #xai #AI #ML
#sqIRL #UAntwerp #IDLab

4 months ago
tdooms

At the MI workshop (spotlight), we show how Bilinear Autoencoders ease the analysis of neural representations through their decomposition into polynomial latents.
Paper and the cool demos at tdooms.github.io/research/bae

#Interpretability #mechinterp #compinterp #xai #AI #ML
#UAntwerp #IDLab

4 months ago

Have a look at the work our lab will be presenting at #NeurIPS '25.
On the main track, SimpleStories, a dataset of simple yet diverse stories that has the potential to become the MNIST of language.
openreview.net/pdf?id=sVh3e...

#Interpretability #mechinterp #xai #AI #ML
#sqIRL #UAntwerp

4 months ago

We just launched a #linkedin page. Please help us spread the word and share it with people who might be interested.
linkedin.com/company/sqir...

#RepresentationLearning #interpretability #explainability #XAI #mechinterp #AI #ML #sqIRL #ComputerVision #HSI #IDLab #UAntwerp

4 months ago

Saja's survey on the #interpretability/#explainability of Capsule Networks was accepted at #Neurocomputing. Give it a look while it is free to access.
doi.org/10.1016/j.ne...

#CapsNets #AI #ML #XAI #sqIRL #UAntwerp #IDLab

4 months ago

Thanks to the #Flanders AI Research Program (FAIR) for supporting this work and to everyone involved for the fruitful collaboration.

#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec

5 months ago

#HDC models aim to be an energy-efficient alternative to current #AI systems and thanks to the efforts of our collaborators, their decision-making process is now more interpretable.

#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec

5 months ago

Our work on Interpretable #Hyperdimensional Computing (HDC) classifiers for #tabular data is now available at #Neurocomputing.

doi.org/10.1016/j.ne...

#Neuromorphic #interpretability #Explainability #VSA #xai #ML #FAIR #UAntwerp #imec

5 months ago

Feeling rewarded... Honored to be among the Top Reviewers at #NeurIPS this year.

#AI #ML #UAntwerp #sqIRL #IDLab #imec

6 months ago

Thanks to our collaborators from the #VUB, Ward Gauderis and Geraint Wiggins; as well as #sqIRL members Thomas Dooms and José Oramas for the nice collaboration. #UAntwerp #FlandersAI #FAIR

6 months ago

This week our lab was present at the Flanders AI Research day, where we contributed a deep dive session on #Compositional #Interpretability. More details at: compinterp.github.io
#CompInterp #interpretableML #XAI #explainability #aisafety

6 months ago

This month we welcomed Dr. Renata Turkeš, who will be exploring #TDA principles for the characterisation of the generalisation capabilities of DNNs. Welcome to the lab, Renata. renata-turkes.github.io #UAntwerp

7 months ago

The deadline for the #AIMLAI workshop held jointly with #ECMLPKDD2025 has been extended until June 21st.
Looking forward to last-minute submissions on work around #interpretability and #explainability of #AI / #ML

project.inria.fr/aimla
#mechinterp #xai

10 months ago

Part of the #sqIRL lab at the IDLab day 2025 #uantwerp

10 months ago

Our lab got two papers accepted at #ECMLPKDD2025 on the topics of #Interpretability for Spiking NNs and self-supervised representation learning with embedded interpretability.
Congrats to Jasper, Hamed, Fabian and our collaborators.

#SNN #SIM #AI #ML #neuromorphic #xai #interpretableML

10 months ago

Honored to be selected among the Outstanding Reviewers at #CVPR2025.
#UAntwerp @sqirllab.bsky.social #IDLab

cvpr.thecvf.com/Conferences/...

11 months ago

It is confirmed, the #AIMLAI workshop will be held jointly with @ecmlpkdd.org.
We invite the submissions of long and short papers covering work around #interpretability and #explainability of #AI/#ML.

Deadline: 14/06/25
CfP: shorturl.at/yYQ9G
Website: shorturl.at/W9r1A

#XAI #mechinterp #ECMLPKDD

11 months ago
Post image Post image Post image Post image

Last week our lab celebrated the doctoral defense of Hamed Behzadi. It has been four years since Hamed joined us, and his evolution into a fully fledged independent researcher has been constant. Congratulations!

See shorturl.at/296MV for some of the work produced by Hamed.

#ML #AI #Interpretability

11 months ago
Improving Neural Network Accuracy by Concurrently Training with a... Recently within Spiking Neural Networks, a method called Twin Network Augmentation (TNA) has been introduced. This technique claims to improve the validation accuracy of a Spiking Neural Network...

Benjamin Vandersmissen (25/04 evening) will show the effects that training with a twin network has on the learning process and share insights on how TNA leads to superior predictive performance across a number of tasks and several architectures. #deeplearning #ML #ICLR2025 #sqIRL
openreview.net/forum?id=TEm...

11 months ago
Bilinear MLPs enable weight-based mechanistic interpretability A mechanistic understanding of how MLPs do computation in deep neural net- works remains elusive. Current interpretability work can extract features from hidden activations over an input dataset...

Thomas Dooms will show how bilinear MLPs can serve as a more transparent component that provides a better lens to study the relationships between inputs, outputs, and the weights that define the model. #mechinterp #interpretability #ML #AI #XAI #ICLR2025 #sqIRL
openreview.net/forum?id=gI0...

11 months ago

If you are at #ICLR2025 and interested in how to understand DNNs from their weights, or in how to improve the predictive performance of a DNN via Twin Network Augmentation, we encourage you to get in touch with Thomas and Benjamin, who will be presenting our work there. #sqIRL #UAntwerp #XAI

11 months ago

We had the opportunity to contribute to the Research Day of the Antwerp Center of Responsible #AI ( #ACRAI ) where Salma and Hamed presented their work on #explainability-driven #HSI analysis and model #interpretability, respectively.
#ML @uantwerpen.be
www.uantwerpen.be/en/research-...

1 year ago

This week we had the visit of Prof. Eliana Pastor (DBDMG @PoliTO) who gave a presentation on her research around the topics of #trustworthyAI, #Bias analysis and #FairnessAI. Very good work and interesting ideas. @elianapastor.bsky.social we hope to host you again soon. #explainability #AI #ML

1 year ago

Bilinear MLPs Enable Weight-based Mechanistic Interpretability
M. Pearce, T. Dooms, A. Rigg, J. Oramas, L. Sharkey

We show that bilinear layers can serve as an interpretable replacement for current activation functions, enabling weight-based interpretability.

preprint: arxiv.org/abs/2410.08417
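For intuition, a bilinear layer in this sense replaces the usual nonlinearity with the elementwise product of two linear maps, g(x) = (Wx) ⊙ (Vx), so each output unit is an explicit quadratic form in the input and can be read directly off the weights. A minimal sketch under that assumption (illustrative names, not the paper's code):

```python
import numpy as np

def bilinear_layer(x, W, V):
    """Bilinear layer: elementwise product of two linear maps,
    g(x) = (W x) * (V x). With no activation function, output unit k
    equals the quadratic form x^T (w_k v_k^T) x, so the input-output
    map is fully determined by the weight matrices themselves.
    """
    return (x @ W.T) * (x @ V.T)
```

Because each unit is a quadratic form, interactions between input features can be analysed by decomposing the matrices `np.outer(W[k], V[k])` directly, without probing activations on a dataset.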

1 year ago

Improving Neural Network Accuracy by Concurrently Training with a Twin Network
B. Vandersmissen, L. Deckers, J. Oramas

We show that the effectiveness of TNA lies in a better exploration of the parameter space and the learning of more robust and diverse features.

preprint: openreview.net/forum?id=TEm...
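As a rough illustration of concurrent twin training: one plausible form of the combined objective gives each twin its own cross-entropy loss and adds a penalty pulling the two sets of logits together. The sketch below is a hedged approximation, not the paper's exact objective; the MSE coupling and the `lam` weight are assumptions made here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def tna_loss(logits_a, logits_b, labels, lam=1.0):
    """Illustrative combined objective for concurrently trained twins:
    per-twin cross-entropy plus an MSE term that aligns the logits.
    `lam` (assumed hyperparameter) weights the alignment penalty.
    """
    n = labels.shape[0]
    ce_a = -np.log(softmax(logits_a)[np.arange(n), labels]).mean()
    ce_b = -np.log(softmax(logits_b)[np.arange(n), labels]).mean()
    align = ((logits_a - logits_b) ** 2).mean()
    return ce_a + ce_b + lam * align
```

After training, one of the twins is kept for inference, so the coupling acts purely as a training-time regulariser.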

1 year ago

A great start for 2025.
Proud to announce that our group (#sqIRL/IDLab) got two papers accepted at #ICLR2025. A first for our young lab.

Thanks to our collaborators, the FAIR Program and the Dept. of CS @uantwerpen.bsky.social for supporting this research.

#AI #ML #interpretability #XAI

1 year ago

Recent work published by the #sqIRL Lab on the training of competitive deeper Forward-Forward Networks. #FF #localLearning #ML #RepresentationLearning

1 year ago