#SaTML25

🎤 That’s a wrap on #SaTML25! Huge thanks to the speakers, organizers, reviewers, and everyone who joined the conversation. See you next time!

🔍 How private was that release? @a-h-koskela.bsky.social presents a method for auditing DP guarantees using density estimation. #SaTML25
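[Editor's note: the talk's actual auditing method isn't reproduced here. As a toy illustration of the idea — estimate output densities of a mechanism on adjacent datasets and read off an empirical epsilon — here is a hedged sketch using a Laplace counting query and crude histogram density estimation; all names and parameters are illustrative, not from the talk.]

```python
import math
import random

random.seed(0)
EPS = 1.0  # analytic guarantee of the toy Laplace mechanism below

def laplace_mech(true_count):
    """Counting query (sensitivity 1) with Laplace(1/EPS) noise."""
    u = random.random() - 0.5
    return true_count - math.copysign(math.log(1 - 2 * abs(u)) / EPS, u)

def bin_counts(samples, width=0.5):
    """Crude density estimate: histogram with fixed-width bins."""
    counts = {}
    for s in samples:
        b = math.floor(s / width)
        counts[b] = counts.get(b, 0) + 1
    return counts

N = 200_000
p = bin_counts([laplace_mech(10) for _ in range(N)])  # dataset D
q = bin_counts([laplace_mech(11) for _ in range(N)])  # adjacent D'

# Empirical epsilon: max log-ratio of estimated densities over
# well-populated bins; should land near the analytic EPS = 1.0.
eps_hat = max(
    abs(math.log(p[b] / q[b]))
    for b in p
    if b in q and min(p[b], q[b]) >= 100
)
```

With enough samples the estimate concentrates near the true epsilon; binning and sampling noise make it approximate in both directions.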

🧮 Getting the math right. @matt19234.bsky.social walks through common traps in privacy accounting and how to avoid them. #SaTML25
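[Editor's note: one classic accounting trap, sketched as a toy example — this is not the talk's content. Under basic sequential composition, answering k queries with noise calibrated for a single query silently spends k times the intended budget; the numbers below are illustrative.]

```python
import math
import random

random.seed(1)

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF."""
    u = random.random() - 0.5
    return -math.copysign(scale * math.log(1 - 2 * abs(u)), u)

eps_total, k, sensitivity = 1.0, 10, 1.0

# Wrong: each of k queries uses the full budget; by basic sequential
# composition the actual privacy cost is k * eps_total, not eps_total.
wrong_cost = k * eps_total

# Right: split the budget, so the k answers together cost eps_total.
eps_per_query = eps_total / k
answers = [42 + laplace_noise(sensitivity / eps_per_query) for _ in range(k)]
```

Tighter accountants (advanced composition, RDP) improve on the linear sum, but the bookkeeping discipline is the same.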

🧠 Marginals leak. Steven Golob shows how synthetic data built on marginals can still compromise privacy. Paper: arxiv.org/abs/2410.05506 #SaTML25

📃🔐 Privacy and fairness? Khang Tran introduces FairDP, enabling fairness certification alongside differential privacy. Paper: arxiv.org/abs/2305.16474 #SaTML25

🖼️📡 Hide and seek. Luke Bauer presents a method for covert messaging with provable security via image diffusion. Paper: arxiv.org/abs/2503.10063 #SaTML25

💣 Still work to do. Yigitcan Kaya makes the case that ML-based behavioral malware detection is fragile and far from solved. Paper: arxiv.org/abs/2405.06124 #SaTML25

🕵️‍♂️ From detection to covert messaging—Session 13 explores the gray areas of ML security. #SaTML25

💻 What can you learn privately when compute is tight? Zachary Charles tackles user-level privacy under realistic constraints. #SaTML25

📊 Not all public datasets are equal. Xin Gu proposes a new metric—gradient subspace distance—to guide private learning choices. Paper: arxiv.org/abs/2303.01256 #SaTML25
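[Editor's note: the paper's exact metric may differ; as a hedged sketch of the general idea, here is one standard subspace distance — the Frobenius norm of the sines of the principal angles — between the top-k singular subspaces of per-example gradient matrices from two datasets. The data, dimensions, and noise levels below are synthetic and purely illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_subspace(grads, k):
    """Orthonormal basis of the top-k right singular subspace of a
    (num_examples x num_params) matrix of per-example gradients."""
    _, _, vt = np.linalg.svd(grads, full_matrices=False)
    return vt[:k].T  # (num_params x k)

def subspace_distance(U, V):
    """||sin(principal angles)||_F = sqrt(k - sum_i cos^2(theta_i))."""
    s = np.clip(np.linalg.svd(U.T @ V, compute_uv=False), 0.0, 1.0)
    return float(np.sqrt(max(U.shape[1] - np.sum(s ** 2), 0.0)))

d, k = 50, 5
A = rng.normal(size=(k, d))  # shared low-rank gradient directions
private = rng.normal(size=(200, k)) @ A + 0.05 * rng.normal(size=(200, d))
similar = rng.normal(size=(200, k)) @ A + 0.05 * rng.normal(size=(200, d))
different = (rng.normal(size=(200, k)) @ rng.normal(size=(k, d))
             + 0.05 * rng.normal(size=(200, d)))

U = top_k_subspace(private, k)
d_sim = subspace_distance(U, top_k_subspace(similar, k))    # small
d_diff = subspace_distance(U, top_k_subspace(different, k))  # large
```

The intuition: a public dataset whose gradient subspace sits close to the private one should be a better aid for private learning.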

📚🔒 Choose wisely. Kristian Schwethelm presents a method to balance data utility and privacy in active learning. Paper: arxiv.org/abs/2410.00542 #SaTML25

⚖️ Privacy isn’t always fair. Kai Yao breaks down the mechanisms that can introduce unfairness into private learning. Paper: arxiv.org/abs/2501.14414 #SaTML25

🔐 Starting the final afternoon at #SaTML25 with Session 12—private learning from all angles: fairness, dataset selection, active learning, and budget-aware privacy.

🌲💀 Even decision trees aren’t safe. Lorenzo Cazzaro shows how to poison tree-based models. Paper: arxiv.org/abs/2410.00862 #SaTML25

🚗🔦 How robust are LiDAR detectors? Alexandra Arzberger presents Hi-ALPS, benchmarking six systems used in autonomous vehicles. Paper: arxiv.org/abs/2503.17168 #SaTML25

🎯 Robustness meets domain adaptation. Natalia Ponomareva introduces DART, a principled method for adapting without labels—and withstanding attacks. #SaTML25

🛡️🌍 Session 11 at #SaTML25 is all about making models that hold up—across domains, sensors, and even sneaky tree poison.

🔍 A fairness reality check. Claire Zhang surveys the landscape of fair clustering—what works, what doesn’t, and what’s next. #SaTML25

🎯 Adversarial incentives meet fairness. Emily Diana presents a minimax approach to fairness when users can game the system. #SaTML25

🌀 Trying to be fair… and failing? Natasa Krco argues that efforts to reduce bias can themselves be arbitrary—or even unfair. #SaTML25

🌍 No central authority, no problem? Sayan Biswas explores fairness challenges and solutions in decentralized learning systems. Paper: arxiv.org/abs/2410.02541 #SaTML25

⚖️ In Session 10, #SaTML25 takes a hard look at fairness—decentralized setups, strategic behavior, and when fairness efforts might backfire.

☀️ Kicking off the final day of #SaTML25 with a big question: Should you trust artificial intelligence? Matt Turek takes the stage for this morning’s keynote on the path toward trustworthy AI.

🌈 Can machines see color like we do? Ming-Chang Chiu presents ColorSense, exploring color perception in machine vision. Paper: arxiv.org/abs/2212.08650 #SaTML25

🪵🧵 Texture vs. shape. Blaine Hoak dives into real-world evidence of texture bias in vision models. Paper: arxiv.org/abs/2412.10597 #SaTML25

📎 Perception with CLIP. Christian Schlarmann shows how robustness in CLIP models improves perceptual metrics. Paper: arxiv.org/abs/2502.11725 #SaTML25

👁️ Wrapping up Day 2 with perception-focused research—texture, color, robustness, and what vision really means for machines. #SaTML25

Link preview: "Equilibria of Data Marketplaces with Privacy-Aware Sellers under Endogenous Privacy Costs"

💸 Privacy meets economics. Diptangshu Sen models how data sellers behave when privacy costs are part of the game. Paper: arxiv.org/abs/2402.08826 — Next time we hope to have you in person! #SaTML25

🎯 Not all queries are equal. Lorenz Wolf presents a mechanism for private selection under varying sensitivity levels. Paper: arxiv.org/abs/2501.05309 #SaTML25
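[Editor's note: the talk's mechanism for varying sensitivities isn't shown here. As background, here is a hedged sketch of the standard baseline it improves on — the exponential mechanism, which must calibrate to the worst-case sensitivity across candidates. Scores and parameters are illustrative.]

```python
import math
import random

random.seed(0)

def exponential_mechanism(scores, eps, sensitivity):
    """Pick index i with probability proportional to
    exp(eps * scores[i] / (2 * sensitivity))."""
    weights = [math.exp(eps * s / (2 * sensitivity)) for s in scores]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(scores) - 1

scores = [0.1, 0.2, 0.9, 0.3]
picks = [exponential_mechanism(scores, eps=5.0, sensitivity=1.0)
         for _ in range(2000)]
best_freq = picks.count(2) / len(picks)  # highest-scoring index wins most
```

When sensitivities differ per candidate, scaling everything by the worst case wastes utility — which is the gap such selection mechanisms target.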

🔗 When noise talks. Haewon Jeong explores how correlated privacy can improve distributed mean estimation. Paper: arxiv.org/abs/2407.03289 #SaTML25
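[Editor's note: a toy sketch of why correlated noise helps distributed mean estimation — not the paper's mechanism. Here the per-client noise terms are made to cancel exactly in the average; real designs trade off cancellation against privacy, since perfectly cancelling noise offers no protection against an aggregator that sees every message.]

```python
import random
import statistics

random.seed(0)
n, sigma = 100, 1.0
x = [random.gauss(5.0, 1.0) for _ in range(n)]  # one value per client

# Baseline: each client adds independent Gaussian noise.
indep = [v + random.gauss(0.0, sigma) for v in x]

# Correlated (toy): draw per-client noise, then subtract its mean so the
# noise sums to zero. Each message is still noisy on its own, but the
# noise cancels exactly in the average.
g = [random.gauss(0.0, sigma) for _ in range(n)]
gbar = statistics.fmean(g)
corr = [v + (z - gbar) for v, z in zip(x, g)]

true_mean = statistics.fmean(x)
err_indep = abs(statistics.fmean(indep) - true_mean)
err_corr = abs(statistics.fmean(corr) - true_mean)  # ~0 up to fp error
```

Independent noise leaves the mean estimate with variance sigma²/n; correlation lets the aggregate be far more accurate for the same per-message noise level.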
