Posts by Anshuman Suri

1/ Most membership inference attacks (MIAs) have seemingly converged to black-box settings, driven by empirical evidence and theoretical folklore suggesting black-box access was optimal. But what if this assumption missed something critical? 😨
tl;dr? It did 🧵

2/ Prior work (e.g., proceedings.mlr.press/v97/sablayro...) suggests black-box access is optimal for membership inference—assuming SGLD as the learning algorithm. But these assumptions break down for models trained with SGD.

3/ Our work challenges this assumption head-on. By carefully analyzing SGD dynamics, we prove that optimal membership inference requires white-box access to model parameters. Our Inverse Hessian Attack (IHA) serves as a proof of concept that parameter access helps!

4/ The big open question remains: how close are optimal black-box attacks to this theoretical optimum? The gap might be negligible, suggesting black-box methods suffice—or significant, showing parameter access offers better empirical upper bounds 🤔

5/ But the broader message? It's time to give 'parameter access' another serious look in privacy research 🔬
Find more details in the paper (accepted to TMLR), w/ Xiao & Dave
📜 openreview.net/pdf?id=fmKJf...
💻 github.com/iamgroot42/a...

Temporally shifted data splits in membership inference can be misleading ⚠️ Be cautious when interpreting these benchmarks!
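For readers new to MIAs: the black-box setting the thread contrasts with white-box access is often illustrated with a simple loss-threshold attack (predict "member" when a model's loss on an example is low). This is a minimal sketch of that generic baseline, not the paper's IHA; the losses and threshold below are made-up toy values.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Generic black-box MIA baseline: flag an example as a training-set
    member when its per-example loss falls below a threshold (members
    tend to have lower loss than held-out points). Inputs here are
    illustrative, not taken from the paper."""
    return losses < threshold

# Toy per-example losses (hypothetical numbers for illustration only)
member_losses = np.array([0.05, 0.10, 0.20])      # examples seen in training
non_member_losses = np.array([0.90, 1.50, 0.70])  # held-out examples

preds_members = loss_threshold_mia(member_losses, threshold=0.5)
preds_non = loss_threshold_mia(non_member_losses, threshold=0.5)
print(preds_members, preds_non)
```

Note this attack only queries the model for losses — it never touches parameters, which is exactly the access model the thread argues is provably suboptimal for SGD-trained models.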