LOGML 2026 mentor applications are still open!
Deadline extended to 22 March 2026 (AoE).
Mentor a small team at Imperial College London during 13–17 July 2026.
Travel/accommodation support available.
🔗 logml.ai/apply.html
#LOGML #GeometricDeepLearning #GraphML #MachineLearning
Large-Scale Graph Dataset Measures Long-Range Interactions
City‑Networks offers over 100k road‑intersection nodes, pushing models to capture long‑range graph information, and introduces a metric based on the output Jacobians with respect to distant neighbors. Read more: getnews.me/large-scale-graph-datase... #citynetworks #graphml #longrange
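The Jacobian idea in the post can be illustrated with a minimal sketch. This is a hypothetical numpy illustration (not the City-Networks code), assuming a linear, GCN-style propagation y = Â^K x: for that model the output Jacobian ∂y_i/∂x_j is simply (Â^K)_{ij}, so the influence of a node j at hop distance d on node i can be read off directly, and small entries for distant j indicate the model fails to use long-range information.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric GCN normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def jacobian_influence(A, K):
    """Jacobian of K-layer linear propagation y = A_hat^K x.

    Entry (i, j) measures how strongly node j's input feature
    affects node i's output after K message-passing steps."""
    return np.linalg.matrix_power(normalized_adjacency(A), K)

# Path graph of 6 nodes: influence on node 0 decays with distance,
# and nodes beyond K hops have exactly zero influence.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

J = jacobian_influence(A, K=3)
print(J[0])  # influence of each node on node 0 after 3 hops
```

With K=3, node 4 (four hops away) has exactly zero influence on node 0, which is the kind of long-range failure such a metric is designed to expose.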
This article uses HypNF synthetic graphs to test how degree distribution, clustering, and topology–feature correlation affect GNN and feature-only models. #graphml
Synthetic HypNF graphs reveal GNN fragilities: HGCN beats GCN on dense, homogeneous nets but falters on sparse power-law ones. #graphml
A quick reminder: the Workshop on Graph-Augmented LLMs (GaLM): Bridging Language and Structured Knowledge is still accepting submissions at #IEEE #ICDM.
📝 Papers — Extended deadline: September 5
For more info and how to submit your work: iitbhu.ac.in/cf/jcsic/act...
#LLM #ML #AI #GraphML #RAG
📄 Preprint available:
arxiv.org/abs/2506.01208
Joint work with @tijldebie.bsky.social @nickheard @alexandermodell
#TemporalNetworks #DynamicGraphs #NetworkScience #ChangeDetection #Cybersecurity #SignalProcessing #Wavelets #StatisticalLearning #TimeSeries #GraphML
🎉 Our paper “The Generalized Skew Spectrum of Graphs” was accepted to ICML 2025!
We applied deep math (group theory, representation theory, and Fourier analysis) to graph ML. No quantum this time! 😄
📍 See you in Vancouver in July!
📄 arxiv.org/abs/2505.23609
#ICML2025 #GraphML #AI #ML
Some of our newest and most exciting updates on how we align the use of AI with the practice of catastrophe modeling and disaster risk science!
#AI #DRR #EnvironmentalRisk #Cambridge #AI4ER #CDT #regional #disaster #risk #catastrophe #EO #ML #CAT #graphML #Bayesian
I’m in Sydney this week! 🇦🇺 Excited to attend #WWW2025, the 2025 ACM Web Conference.
Tomorrow, I’ll be presenting our paper, “To Share or Not to Share: Investigating Weight Sharing in Variational Graph Autoencoders,”
co-authored with Jiaying Xu.
#WebConf2025 #GraphML
Knowledge Graph Technology Showcase for Tom Sawyer Software.
Want to simplify your data analysis? Dr. Ashleigh Faith breaks down how Tom Sawyer Software's suite empowers you to use graphML effortlessly. No coding? No problem! Watch her insightful review and unlock your data's potential: www.youtube.com/watch?v=1-AZ... #GraphML #TechReview
2025 ACM Web Conference
A research update!
Happy to share that our #GraphML paper:
"To Share or Not to Share: Investigating Weight Sharing in Variational Graph Autoencoders"
co-authored with Jiaying Xu,
has been accepted for presentation at the ACM Web Conference #WWW2025! 🥳
Paper online soon. See you in Sydney!
A poster with a light blue background, featuring the paper “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”. Authors: Corrado Monti, Paolo Bajardi, Francesco Bonchi, André Panisson, Alan Perotti.
Background: Explainability in GNNs is crucial for enhancing trust and understanding in machine learning models. Current benchmarks focus on data, ignoring the model’s actual decision logic, leading to gaps in understanding. Furthermore, existing methods often lack standardized benchmarks to measure their reliability and effectiveness.
Motivation: Reliable, standardised benchmarks are needed to ensure explainers reflect the internal logic of graph-based models, aiding fairness, accountability, and regulatory compliance.
Research question: If a model M is using a protected feature f, for instance the gender of a user, to classify whether their ads should gain more visibility, is a given explainer E able to detect it?
Core idea: An explainer should detect whether a model relies on specific features for node classification. The benchmark implements a “true-to-the-model” rather than “true-to-the-data” logic.
Key components: White-box classifiers (Local, Neighborhood, and Two-Hop models with hardcoded logic for feature importance) and axioms (an explainer must assign higher scores to truly important features).
Findings on explainer performance: Deconvolution achieves perfect fidelity but is limited to GNNs; GraphLIME fails with non-local correlations and high sparsity; LRP/Integrated Gradients struggle with zero-valued features; GNNExplainer is sensitive to sparsity and edge masking.
Real-world insights (Facebook dataset): fidelity in detecting protected-feature use in classification, with results for different explainers highlighting strengths and limitations.
Contributions: proposed a rigorous framework for benchmarking explainers, and demonstrated practical biases and flaws in popular explainers.
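The core axiom in the poster (an explainer must assign higher scores to features the model truly uses) can be sketched in a few lines. This is a hedged toy illustration, not the authors' benchmark code: a white-box "local" classifier whose logit depends only on one hardcoded feature f, paired with a simple gradient-times-input explainer, checking that f gets the highest importance score.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # 200 nodes, 5 features each
f = 2                           # hardcoded "protected" feature index

# White-box local model: the logit depends only on feature f
# (weight 1 on f, 0 elsewhere), so the ground-truth importance is known.
w = np.zeros(5)
w[f] = 1.0

def model_logit(X):
    return X @ w

def saliency_explainer(X):
    """Toy gradient-times-input explainer: per-feature importance,
    averaged over all nodes. The gradient of a linear model is w."""
    return np.mean(np.abs(X * w), axis=0)

scores = saliency_explainer(X)
# Axiom check: the explainer must rank the truly-used feature f highest.
assert scores.argmax() == f
print("explainer passes the axiom:", scores.round(3))
```

The poster's Neighborhood and Two-Hop models extend this idea by hardcoding the decision on aggregated neighbor features, which is exactly where explainers like GraphLIME are reported to break down.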
Check out our poster at #LoG2024, based on our #TMLR paper:
📍 “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”
🗓️ Tuesday 4–6 PM CET
📌 Poster Session 2, GatherTown
Join us to discuss graph ML explainability and benchmarks
#ExplainableAI #GraphML
openreview.net/forum?id=HSQTv3R8Iz
Hi!
This Thursday, Nov 21st, 11 AM EST, Rishabh Ranjan will present:
RelBench: A Benchmark for Deep Learning on Relational Databases (NeurIPS 2024 Datasets and Benchmarks Track)
🎈
Join on zoom (link on website)
arxiv.org/pdf/2407.20060
#graphml #machinelearning #temporalgraphs #neurips