#log2024

This week we look forward to an exciting talk featuring two #LOG2024 papers!
📢 Towards Neural Scaling Laws on Graphs & Do Neural Scaling Laws Exist on Graph Self-Supervised Learning?
🎤 Jingzhe Liu (MSU) & Qian Ma (RPI)
📅 11am EST, Feb 6 (Thu)
📍 Zoom (link on website)
Don’t miss it! 🔥


Standard GNNs excel on homophilic graphs but depend on neighborhood patterns. ES-MLP, a student project by Matthias Kohn, co-supervised with Marcel Hoffmann and presented at #log2024, combines Graph-MLP with edge splitting for faster, more robust, edge-free inference. openreview.net/forum?id=BQE...


#LOG2024 Franco Scarselli kicking off the Learning on Graphs (LOG) Italian Meetup!!!




Well, I guess there’s no better way to start off this application for me than saying I’m honoured to have been selected among the top reviewers at #LoG2024 🚀🚀🤩🤩



🚀 Don’t miss our #LoG2024 poster on Tue, Nov 28 @ 15:00 GMT!

Join us to discuss GNNs + LLMs for disinformation detection. The conference registration is free!
#FakeNewsDetection #GNNs #AI #LLMs @logconference.bsky.social


Happy to share our paper "Enriching GNNs with Text Contextual Representations for Detecting Disinformation Campaigns on Social Media" being presented tomorrow at #LoG2024!

🗓 Tue, Nov 28
⏰ 15:00 GMT
📍 Virtual
Paper: arxiv.org/pdf/2410.19193

Join us to discuss combating disinformation with GNNs! 🧵 (1/5)


Our paper with Alexei Pisacane and Victor Darvariu "Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies" will be presented today at the Learning on Graphs Conference 2024 (@logconference.bsky.social) at 3pm UK time. #log2024

Paper: openreview.net/pdf?id=trxhr...


#LoG2024 Day 2 is a blast so far! We have a fantastic program:

⚙️ an exciting tutorial on Geometric Generative Models
🎙️ second keynote by Zachary Ulissi
🕸️ second session of oral presentations
🤗 first poster session

Details and links: logconference.org 💜💙❤️


LoG Conference Tutorial on Geometric Generative Models -- Happening now with @joeybose.bsky.social, @alextong.bsky.social, and Heli Ben-Hamu.

Livestream: www.youtube.com/@learningong...

#LoG2024

A poster with a light blue background, featuring the paper with title: “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”.
Authors: Corrado Monti, Paolo Bajardi, Francesco Bonchi, André Panisson, Alan Perotti 

Background
Explainability in GNNs is crucial for enhancing trust and understanding in machine learning models. Current benchmarks focus on the data, ignoring the model’s actual decision logic, leading to gaps in understanding. Furthermore, existing methods often lack standardized benchmarks to measure their reliability and effectiveness.

Motivation
Reliable, standardized benchmarks are needed to ensure explainers reflect the internal logic of graph-based models, aiding in fairness, accountability, and regulatory compliance.

Research Question
If a model M is using a protected feature f (for instance, using a user’s gender to decide whether their ads should gain more visibility), is a given explainer E able to detect it?

Core Idea
An explainer should detect if a model relies on specific features for node classification.
Implements a “true-to-the-model” rather than “true-to-the-data” logic.

Key Components
White-Box Classifiers: Local, Neighborhood, and Two-Hop models with hardcoded logic for feature importance.
Axioms: an explainer must assign higher scores to truly important features.
Findings:
Explainer Performance
Deconvolution: Perfect fidelity but limited to GNNs.
GraphLIME: Fails with non-local correlations and high sparsity.
LRP/Integrated Gradients: Struggle with zero-valued features.
GNNExplainer: Sensitive to sparsity and edge masking.

Real-World Insights: Facebook Dataset
Fidelity in detecting protected feature use in classification.
Results for different explainers, highlighting strengths and limitations.
Contributions:
Proposed a rigorous framework for benchmarking explainers
Demonstrated practical biases and flaws in popular explainers

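The axiomatic check the poster describes can be sketched in a few lines: build a white-box model that, by construction, depends only on one (protected) feature, run an explainer on it, and test whether that feature receives the highest importance score. The snippet below is a hypothetical minimal illustration; the model, the toy perturbation-based explainer, and all names are assumptions for exposition, not the paper's actual code.

```python
import numpy as np

PROTECTED = 0  # index of the protected feature the white-box model uses

def white_box_local(x):
    """Local white-box model: the label depends only on x[PROTECTED]."""
    return int(x[PROTECTED] > 0.5)

def flip_explainer(model, x):
    """Toy explainer: score each binary feature by how much flipping it
    changes the model's prediction (an illustrative stand-in for the
    explainers benchmarked on the poster)."""
    base = model(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = 1.0 - x_pert[i]  # flip the binary feature
        scores[i] = abs(model(x_pert) - base)
    return scores

def satisfies_axiom(scores):
    """Axiom: the truly important feature must get the highest score."""
    return scores[PROTECTED] == scores.max() and scores[PROTECTED] > 0

x = np.array([1.0, 0.0, 1.0, 0.0, 1.0])  # toy binary node features
scores = flip_explainer(white_box_local, x)
print(satisfies_axiom(scores))  # True: the toy explainer detects feature 0
```

An explainer that fails this check on a model whose logic is known by construction cannot be trusted to reveal protected-feature use in a real GNN, which is the "true-to-the-model" point of the benchmark.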

Check out our poster at #LoG2024, based on our #TMLR paper:
📍 “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”
🗓️ Tuesday 4–6 PM CET
📌 Poster Session 2, GatherTown
Join us to discuss graph ML explainability and benchmarks
#ExplainableAI #GraphML
openreview.net/forum?id=HSQTv3R8Iz
