Neural networks are highly non-convex, so approximate error minimizers need not look anything like each other in parameter space. But we show that, nevertheless (for many model sizes), approximate error minimizers must closely agree in function/prediction space!
Posts by Jess Sorrell
Do you have recent work on differential privacy? Submit it to TPDP 2026 in Boston, whose deadline is in ~2 weeks.
TPDP is a lightly reviewed workshop, whose main purpose is getting researchers in DP together in one place. Dual submissions allowed (and encouraged!).
Your #NeurIPS2026 reviews ask you to compare to five papers on clawxiv.org
What do you do?
The other paper accepted to @iclr-conf.bsky.social 2026 🇧🇷. Our work on replicable RL sheds some light on how to consistently make decisions in RL.
@ericeaton.bsky.social @mkearnsphilly.bsky.social @aaroth.bsky.social @sikatasengupta.bsky.social @optimistsinc.bsky.social
Come join us for a workshop on productive use of AI for research and research-adjacent tasks!
Come hang out with us at COLT 2025 and think about crafting and communicating your research agenda!
You are a continual inspiration to me, I hope you know this
Congrats to Russell!! 🎉
Join #HopkinsDSAI for the Johns Hopkins Celebrates Women in Data Science and AI event on April 9 from 12-4 p.m.
Register to celebrate leading women in the fields of data science and AI with a keynote speaker, panel discussion, and poster session: ai.jhu.edu/event/johns-...
Also congratulations!!
Is this in your office, and if so, when can I come pay my respects to the norm balls?
DC-area folks: lots going on right now, but this rally to save PEPFAR is worth a look. The cruelties and robbery are worth fighting in aggregate and important to fight in specific. pepfarreport.org/event
Alex Tolbert is running a stellar conference next week at Emory that I am bummed to be missing out on. The speaker lineup is especially remarkable, spanning theoretical computer science to machine learning to law to philosophy. You should go and enjoy it for me. 39893947.hs-sites.com/aiethicsconf...
Dear Google search. I don't mean private parties. I never mean private parties. I am neither hip nor a socialite. It's private parities. I meant what I said. Every time. Thank you.
Oh cool! A student's been teaching me about LM benchmarks recently and might be doing her project on something related to evaluating evaluations. If you make your site public, I will def point her your way!
My turn to ask: what class?
Also, you didn't happen to snap a white board pic, did you?
Theory of Replicable ML. It's a mix of topics from Aaron's and Adam Smith's course on adaptive data analysis + recent work on formal algorithmic replicability. jess-sorrell.github.io/Courses/Repl...
Your timing's great, I just showed my class Moritz's result today! (Borrowing @aaroth.bsky.social's notes.) Gonna go share your post on Canvas :)
Just started rereading IJ and the whole subsidized time thing sounds too close to real for comfort
Noooooooo
Oh hey, now I'm excited for these papers!
I worked at coffee shops for a few years and one of the joys of the job was memorizing regulars' orders to do exactly this thing. It's fun for everyone involved, I think :)
I like the balls argument! Will probably go that route next time
I have so many Chernoff arguments now! Thanks! #blessed
How're you holding up?
Nice, thanks!
Thank youuuuuu
Anyone have a favorite proof of Hoeffding's lemma that's more intuitive than applying the AM-GM inequality to a term of a Taylor expansion of a function plucked from The Land of It Made Things Work? I'm happy to trade constants for intuition/simplicity
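For context, a minimal sketch of the standard argument being referred to (convexity of the exponential, then a second-order Taylor bound on the log-MGF whose key step uses AM-GM); notation here is my own, not from the post:

```latex
% Hoeffding's lemma: if $X \in [a,b]$ a.s. and $\mathbb{E}[X] = 0$, then
% $\mathbb{E}[e^{\lambda X}] \le e^{\lambda^2 (b-a)^2 / 8}$ for all $\lambda \in \mathbb{R}$.
%
% Step 1: convexity of $x \mapsto e^{\lambda x}$ on $[a,b]$ gives
\[
  e^{\lambda x} \;\le\; \frac{b - x}{b - a}\, e^{\lambda a} \;+\; \frac{x - a}{b - a}\, e^{\lambda b},
  \qquad x \in [a,b].
\]
% Step 2: take expectations using $\mathbb{E}[X] = 0$; with $p = -a/(b-a)$ and $h = \lambda(b-a)$,
\[
  \mathbb{E}\big[e^{\lambda X}\big] \;\le\; (1-p)\, e^{\lambda a} + p\, e^{\lambda b} \;=\; e^{L(h)},
  \qquad L(h) = -ph + \log\!\big(1 - p + p\, e^{h}\big).
\]
% Step 3: $L(0) = L'(0) = 0$, and $L''(h) \le 1/4$ (this is where AM-GM enters), so
% Taylor's theorem with remainder yields $L(h) \le h^2/8$, i.e.
\[
  \mathbb{E}\big[e^{\lambda X}\big] \;\le\; e^{\lambda^2 (b-a)^2 / 8}.
\]
```

The "function plucked from The Land of It Made Things Work" in the post is the $L(h)$ above; the $1/4$ bound on $L''$ is exactly the opaque step the post asks to replace with something more intuitive.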
The subspecies of CS theorists typically does, yeah