The net effect of strong LLMs will be to make it easier to find information and learn.
Posts by Gavin Brown
Dimensional lumber is out.
Distributional lumber is in.
If δ=0, we write ε-DP and call it “pure” DP. If δ>0, we call it “unclean.”
I wrote a survey article on computationally efficient methods for "robust" mean estimation: robustness to contamination, to heavy-tailed data, or in the sense of differential privacy.
The same ideas are useful for all 3 (seemingly-different) forms of robustness! 1/2
arxiv.org/abs/2412.02670
It’s bad when reviewers are extremely wrong, but it feels worse when they’re extremely right.
No idea. I never let reality constrain my jokes.
Instead of a special poster session, NeurIPS should use physical spotlights to identify exceptional work.
Good point, I’ll try to keep quiet about my opinions on the word “epoch.”
Can I still join if I pronounce it “click”?
The issue is: important things need short names.
I know! “Erasure,” come on.
Review: Serious issues with presentation meant I could not interpret the results.
Rebuttal: Great, you’re saying we made a breakthrough and just need to write it up better.
Ceci n'est pas une Announcement Sign.
“We went to a restaurant once, and it made a huge impression on us.”
And, once in a while, as you’re cleaning up the proof of your upper bound, you have to stop and ask “wait, is this even an algorithm?”
William Sealy Gosset developed the Student’s t-test as part of his work as Head Brewer of Guinness.
Pearson was already working with big data.
A few of us are going to corner the market on efficient differentially private mean estimation in Mahalanobis norm, really drive up the price.
Never ask a learning theorist about their algorithm’s run time. If it’s good, they’ll bring it up themselves.
I seem to recall that math undergrads in the US have approximate gender parity. What are they doing right that CS does wrong?
Not a big fan of things changing. I’m still secretly hoping everyone will get back on AIM and Xanga.