✈️ Traveling to #ICML2025. Presenting our paper: Quantifying Prediction Consistency Under Fine-Tuning Multiplicity in Tabular LLMs #TabularLLM #Uncertainty #PredictionConsistency #Robustness
📅 Wed 16 Jul, 4:30 p.m. – 7 p.m. PDT
➡️ East Exhibition Hall A-B E-901
🔗 https://arxiv.org/abs/2407.04173
Posts by Sanghamitra Dutta
We propose Redundant Information Distillation, which maximizes the task-relevant common information between teacher and student via a new alternating optimization. #explainability #informationtheory #distillation #modelcompression
📢 Knowledge distillation trains smaller student models from complex teacher models. But are all teachers equally helpful? Can we formally quantify useful distillable knowledge? Our paper at #AISTATS2025 explains distillation using Partial Information Decomposition. arxiv.org/abs/2411.07483
📄 Sharing our recent paper on "Counterfactual Explanations for Model Ensembles Using Entropic Risk Measures" 🎉 Accepted at #AAMAS2025
Joint work with: Erfaun Noorani Pasan Dissanayake Faisal Hamman #Explainability #XAI #AlgorithmicRecourse #EntropicRisk arxiv.org/abs/2503.07934
Pasan's research focuses on reconstructing simpler, interpretable, and efficient models from larger, complex models by integrating explainability (recent publications at #NeurIPS2024 #AISTATS2025 #AAMAS2025)
Excited to share that my PhD student Sachindra Pasan Dissanayake has been awarded the Outstanding Graduate Assistant Award by the Graduate School (top 2% of campus graduate assistants). #proudadvisor
pasandissanayake.github.io
Are you interested in serving as a Program Committee member for the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2025)? PC Members are expected to review papers in their area of expertise. Expression of interest form: forms.gle/dmhCPbRBTEzF...
#FAccT2025
✈️ Headed to #NeurIPS2024. Presenting our paper on "Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory"
📅 Wed 11 Dec 4:30 pm, Poster Session 2, East Exhibit Hall A-C #3303 #NeurIPS #XAI #Explainability #Privacy #Counterfactuals
arXiv: arxiv.org/abs/2405.05369