
Posts by Anshuman Suri

5/ But the broader message? It's time to give 'parameter access' another serious look in privacy research 🔬

Find more details in the paper (accepted to TMLR), w/ Xiao & Dave

📜 openreview.net/pdf?id=fmKJf...
💻 github.com/iamgroot42/a...

1 year ago

4/ The big open question remains: how close are optimal black-box attacks to this theoretical optimum? The gap might be negligible, suggesting black-box methods suffice, or it might be significant, showing that parameter access enables strictly stronger empirical attacks 🤔

3/ Our work challenges this assumption head-on. By carefully analyzing SGD dynamics, we prove that optimal membership inference requires white-box access to model parameters. Our Inverse Hessian Attack (IHA) serves as a proof of concept that parameter access helps!
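To make the white-box intuition concrete, here is a toy numpy sketch (not the paper's actual IHA, just an illustration of the idea): for a logistic-regression model, score each example by its loss gradient preconditioned with the inverse Hessian at the learned parameters, so examples that pulled the parameters hardest stand out.

```python
import numpy as np

# Hedged sketch, NOT the paper's exact attack: a white-box membership score
# for logistic regression using the inverse Hessian at the fitted parameters.
# Intuition: training examples with outsized influence on the parameters
# produce large gradient terms once preconditioned by H^{-1}.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hessian(theta, X, lam=1e-2):
    # Hessian of the (ridge-regularized) logistic loss at theta
    p = sigmoid(X @ theta)
    W = p * (1 - p)
    return (X * W[:, None]).T @ X / len(X) + lam * np.eye(X.shape[1])

def white_box_score(theta, X_train, x, y, lam=1e-2):
    # per-example gradient of the logistic loss at (x, y)
    g = (sigmoid(x @ theta) - y) * x
    H_inv = np.linalg.inv(hessian(theta, X_train, lam))
    # quadratic form g^T H^{-1} g: a rough self-influence / membership signal
    return g @ H_inv @ g

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# crude gradient-descent fit of theta
theta = np.zeros(5)
for _ in range(500):
    grad = X.T @ (sigmoid(X @ theta) - y) / len(X) + 1e-2 * theta
    theta -= 0.5 * grad

s = white_box_score(theta, X, X[0], y[0])
print(s >= 0)  # True: quadratic form with a PSD inverse Hessian
```

The full attack in the paper is more involved; this only shows why parameter (and Hessian) access gives a signal that black-box loss queries cannot see directly.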

2/ Prior work (e.g., proceedings.mlr.press/v97/sablayro...) suggests black-box access is optimal for membership inference—assuming SGLD as the learning algorithm. But these assumptions break down for models trained with SGD.

1/ Most membership inference attacks (MIAs) have seemingly converged to black-box settings, driven by empirical evidence and theoretical folklore suggesting black-box access was optimal. But what if this assumption missed something critical? 😨

tl;dr? It did 🧵

Temporally shifted data splits in membership inference can be misleading ⚠️ Be cautious when interpreting these benchmarks!
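A minimal numpy illustration of the caveat above (synthetic numbers, not from any real benchmark): when members and non-members come from different time periods, a "blind" attack that never queries the model and simply thresholds on a drifting feature can already look like a strong membership attack.

```python
import numpy as np

# Hedged toy example: temporal shift alone can inflate membership-inference
# AUC. Members are drawn pre-cutoff, non-members post-cutoff with feature
# drift; the "attack" thresholds on the raw feature and ignores the model.

rng = np.random.default_rng(1)
members = rng.normal(loc=0.0, size=1000)      # pre-cutoff feature values
non_members = rng.normal(loc=1.0, size=1000)  # post-cutoff, drifted

scores = np.concatenate([members, non_members])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member

# AUC via the Mann-Whitney rank statistic: probability that a non-member
# scores higher than a member (here "member" predicted for low scores)
order = scores.argsort()
ranks = np.empty_like(order, dtype=float)
ranks[order] = np.arange(len(scores))
auc = (ranks[labels == 0].sum() - 1000 * 999 / 2) / (1000 * 1000)
print(auc > 0.7)  # True: well above chance without ever using a model
```

Any benchmark where such a blind baseline beats chance is measuring distribution shift, not memorization, so model-based attack numbers on it should be read with care.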
