Posts by Yuli Slavutsky
Uncertainty estimation fails under distribution shifts. Why? Partly because in stats, even Bayesian stats, we treat x as given. But intuitively, the data x itself makes different models plausible. For reliable uncertainty, we need to account for x explicitly. Come chat with me about it tomorrow at my poster!
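Roughly, in symbols (my own sketch of the point, not notation from the paper):

```latex
% The usual Bayesian posterior treats x as given,
p(\theta \mid y, x) \;\propto\; p(y \mid x, \theta)\, p(\theta),
% so x itself never updates beliefs about \theta. Letting x carry
% evidence about the model adds a likelihood term for x:
p(\theta \mid y, x) \;\propto\; p(y \mid x, \theta)\, p(x \mid \theta)\, p(\theta).
```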
Hello!
We will be presenting Estimating the Hallucination Rate of Generative AI at NeurIPS. Come if you'd like to chat about epistemic uncertainty for In-Context Learning, or uncertainty more generally. :)
Location: East Exhibit Hall A-C #2703
Time: Friday @ 4:30
Paper: arxiv.org/abs/2406.07457
The circuit hypothesis proposes that LLM capabilities emerge from small subnetworks within the model. But how can we actually test this? 🤔
Joint work with @velezbeltran.bsky.social, @maggiemakar.bsky.social, @anndvision.bsky.social, @bleilab.bsky.social, Adria (@far.ai), Achille, and Caro.
Fri 13 Dec 11 a.m. PST — 2 p.m. PST
East Exhibit Hall A-C #2204
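One common way to make the question concrete is a knockout (ablation) test: ablate everything outside a candidate circuit and check whether behavior is preserved. A minimal toy sketch of that idea, on a stand-in network rather than an LLM (the mask and function names are mine, not the paper's):

```python
# Hypothetical toy illustration (not the paper's actual tests): mean-ablate
# everything *outside* a candidate circuit and measure how much outputs change.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2):
    """A tiny two-layer network standing in for a transformer."""
    return np.maximum(x @ W1, 0.0) @ W2

# "Full model" weights.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

# Candidate "circuit": a boolean mask over hidden units we believe do the work.
circuit = np.zeros(16, dtype=bool)
circuit[:6] = True  # hypothesize the first 6 hidden units form the circuit

# Mean-ablate the complement: replace non-circuit activations with their
# average over a reference batch, so only the circuit carries
# input-specific information.
X = rng.normal(size=(256, 8))
H = np.maximum(X @ W1, 0.0)
mean_acts = H.mean(axis=0)

def forward_ablated(x, W1, W2, circuit, mean_acts):
    h = np.maximum(x @ W1, 0.0)
    h[:, ~circuit] = mean_acts[~circuit]  # knock out non-circuit units
    return h @ W2

# If the circuit hypothesis holds for this mask, outputs should barely change.
full = forward(X, W1, W2)
ablated = forward_ablated(X, W1, W2, circuit, mean_acts)
print("mean |delta output|:", np.abs(full - ablated).mean())
```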
In this paper, we tackle shifts caused by an unknown attribute with an approach opposite to bootstrapping: instead of resampling datasets at their original size, we use small samples to generate synthetic environments with different "kinds" of classes, and learn more robust data representations.
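A rough sketch of the environment-construction idea (my own illustration; `make_environments` and all its parameters are hypothetical, not the paper's code):

```python
# Illustrative sketch: build many small synthetic "environments" by
# subsampling different subsets of training classes, so the representation
# must separate classes under many plausible class distributions.
import numpy as np

rng = np.random.default_rng(1)

def make_environments(X, y, n_envs=10, classes_per_env=5, samples_per_class=20):
    """Each environment keeps a random 'kind' (subset) of classes, few samples each."""
    envs = []
    classes = np.unique(y)
    for _ in range(n_envs):
        chosen = rng.choice(classes, size=classes_per_env, replace=False)
        idx = np.concatenate([
            rng.choice(np.where(y == c)[0], size=samples_per_class, replace=False)
            for c in chosen
        ])
        envs.append((X[idx], y[idx]))
    return envs

# Toy demo on synthetic data.
X = rng.normal(size=(2000, 32))
y = rng.integers(0, 20, size=2000)
envs = make_environments(X, y)
print(len(envs), "environments,", envs[0][0].shape[0], "samples each")
```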
But in zero-shot learning, we face new classes at test time. To adapt, we would need to know which "kind" of classes to emphasize, yet in reality the shift is often unknown.
Class distribution shifts are often seen as the easiest kind to handle, and in supervised learning that's largely true, thanks to reweighting and resampling.
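For reference, the standard supervised fix in one short sketch: reweight training samples by the ratio of test to train class priors (the target priors below are made up for illustration):

```python
# Minimal sketch: importance-weight each sample by test_prior / train_prior
# for its class, then fit any estimator that accepts sample weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)

train_prior = np.bincount(y) / len(y)
test_prior = np.array([0.9, 0.1])           # hypothetical shifted class priors
sample_weight = (test_prior / train_prior)[y]

clf = LogisticRegression()
clf.fit(X, y, sample_weight=sample_weight)   # importance-weighted training
```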
I'm on my way to #NeurIPS2024. On Friday I'll present my latest paper with Yuval Benjamini. The gist is in the comments; come chat with me to hear more!
Samples y | x from Treeffuser vs. true densities, for multiple values of x under three different scenarios. Treeffuser captures arbitrarily complex conditional distributions that vary with x.
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.
paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...
🧵(1/8)
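Basic usage looks roughly like this, assuming the sklearn-style fit/sample interface from the repo (exact signatures may differ):

```python
# Sketch of Treeffuser usage on a toy 1-D regression problem
# (pip install treeffuser).
import numpy as np
from treeffuser import Treeffuser

rng = np.random.default_rng(3)
X = rng.uniform(0, 2 * np.pi, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)  # noisy targets

model = Treeffuser()
model.fit(X, y)

# Draw samples from p(y | x) to get full predictive distributions,
# not just point estimates.
y_samples = model.sample(X, n_samples=100)
```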
Hi, would love to be added! Thanks!
Hi! Would love to be added. Thanks!
Hi! Would love to be added! Thanks!
Hi! I'd love to be added. Thanks!
Hi! Could you please add me to the starter pack? Thanks!