
Posts by Yuli Slavutsky

4 months ago
NeurIPS Poster: Quantifying Uncertainty in the Presence of Distribution Shifts (NeurIPS 2025)

Uncertainty estimation fails under distribution shifts. Why? Partly because in statistics, even Bayesian statistics, we treat x as given. But intuitively, the data we see makes different models plausible. For reliable uncertainty, we need to account for this explicitly. Come chat with me about it tomorrow at my poster!

4 months ago

Hello!

We will be presenting Estimating the Hallucination Rate of Generative AI at NeurIPS. Come if you'd like to chat about epistemic uncertainty for In-Context Learning, or uncertainty more generally. :)

Location: East Exhibit Hall A-C #2703
Time: Friday @ 4:30
Paper: arxiv.org/abs/2406.07457

1 year ago

The circuit hypothesis proposes that LLM capabilities emerge from small subnetworks within the model. But how can we actually test this? 🤔

Joint work with @velezbeltran.bsky.social, @maggiemakar.bsky.social, @anndvision.bsky.social, @bleilab.bsky.social, Adria @far.ai, Achille, and Caro.

1 year ago

Fri 13 Dec 11 a.m. PST — 2 p.m. PST
East Exhibit Hall A-C #2204

1 year ago
NeurIPS Poster: Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations (NeurIPS 2024)

Paper: neurips.cc/virtual/2024...

1 year ago

In this paper, we tackle shifts caused by an unknown attribute with an approach opposite to bootstrapping: we use small samples to generate synthetic environments with different "kinds" of classes and learn more robust data representations.
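The environment-generation idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual algorithm: the grouping rule (`kind`), the data model, and the worst-case selection rule are all names I'm introducing here, under the assumption that classes come in latent "kinds" and that a robust representation is one whose worst environment loss is smallest.

```python
# Hypothetical sketch: from a small sample, build synthetic environments
# that each over-represent a different "kind" of class, then prefer the
# candidate representation with the best worst-case environment loss.
# All names and the grouping rule are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_classes = 10
classes = np.arange(n_classes)
kind = classes % 2  # pretend classes come in two latent "kinds"

def make_environment(favored_kind, n_per_class=20):
    """Sample a synthetic environment dominated by one kind of class."""
    chosen = classes[kind == favored_kind]
    X = np.concatenate([rng.normal(c, 1.0, (n_per_class, 2)) for c in chosen])
    y = np.repeat(chosen, n_per_class)
    return X, y

environments = [make_environment(k) for k in (0, 1)]

def select_robust(candidates, loss_fn):
    """Pick the candidate whose worst loss across environments is smallest."""
    worst = [max(loss_fn(c, X, y) for X, y in environments) for c in candidates]
    return candidates[int(np.argmin(worst))]
```

Training on all environments while minimizing the worst one encourages representations that don't rely on which kind of class happens to dominate.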

1 year ago

But in the zero-shot setting, we face new classes at test time. To adapt, we'd need to know which "kind" of classes to emphasize, yet in reality the shift is often unknown.

1 year ago

Class distribution shifts are often seen as the easiest kind to handle, and for supervised learning that's usually true, thanks to reweighting or resampling.
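The reweighting fix mentioned here is standard: weight each training example by the ratio of its class's target (test-time) frequency to its training frequency. A minimal sketch with scikit-learn, on invented toy data (the balanced target distribution is an assumption for illustration):

```python
# Minimal sketch of handling a *known* class distribution shift by
# reweighting: each example gets weight target_freq[y] / train_freq[y].
# Data and the assumed balanced test distribution are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Imbalanced training data: class 0 dominates 9:1.
n0, n1 = 900, 100
X = np.vstack([rng.normal(0.0, 1.0, (n0, 2)), rng.normal(2.0, 1.0, (n1, 2))])
y = np.array([0] * n0 + [1] * n1)

train_freq = np.bincount(y) / len(y)       # [0.9, 0.1]
target_freq = np.array([0.5, 0.5])         # suppose test classes are balanced
weights = (target_freq / train_freq)[y]    # importance weight per example

clf = LogisticRegression().fit(X, y, sample_weight=weights)
```

Each minority-class example here gets weight 0.5/0.1 = 5, so the weighted training loss matches the balanced test distribution. The catch the thread goes on to describe: this requires knowing the shift, and the test classes, in advance.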

1 year ago

I'm on my way to #NeurIPS2024. On Friday I'll present my latest paper with Yuval Benjamini. The gist is in the comments; come chat with me to hear more!

1 year ago
Samples y | x from Treeffuser vs. true densities, for multiple values of x under three different scenarios. Treeffuser captures arbitrarily complex conditional distributions that vary with x.

I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.

paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...

🧵(1/8)
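To make the trees-plus-diffusion combination concrete, here is a toy re-implementation of the core idea, not the treeffuser package's API: a DDPM-style diffusion over y whose noise predictor is a gradient-boosted tree taking (x, noisy y, timestep) as features. The noise schedule, model sizes, and toy data are all illustrative assumptions.

```python
# Toy sketch of the Treeffuser idea (NOT the library's API): learn
# p(y | x) by fitting a gradient-boosted tree to predict the diffusion
# noise epsilon from features (x, y_t, t), then sample y by running the
# standard DDPM reverse process. Schedule and data are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy heteroscedastic data: noise scale grows with |x|.
n = 2000
x = rng.uniform(-2, 2, n)
y0 = np.sin(2 * x) + rng.normal(0, 0.05 + 0.2 * np.abs(x), n)

T = 20
betas = np.linspace(1e-3, 0.2, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

# Training pairs: noise y0 forward to a random timestep, regress the noise.
ts = rng.integers(0, T, n * 5)
idx = rng.integers(0, n, n * 5)
eps = rng.normal(size=n * 5)
yt = np.sqrt(abar[ts]) * y0[idx] + np.sqrt(1 - abar[ts]) * eps
feats = np.column_stack([x[idx], yt, ts / T])
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(feats, eps)

def sample(x_query, n_samples=100):
    """Draw samples from the learned p(y | x) via reverse diffusion."""
    xq = np.full(n_samples, x_query)
    y = rng.normal(size=n_samples)
    for t in reversed(range(T)):
        f = np.column_stack([xq, y, np.full(n_samples, t / T)])
        eps_hat = model.predict(f)
        y = (y - betas[t] / np.sqrt(1 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            y += np.sqrt(betas[t]) * rng.normal(size=n_samples)
    return y

samples = sample(1.0)
```

Because the noise model is a tree ensemble rather than a neural network, training needs no GPU and handles tabular features natively; the sampled y values approximate the full conditional distribution, not just its mean. For the real thing, see the paper and repo linked above.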

1 year ago

Hi, would love to be added! Thanks!

1 year ago

Hi! Would love to be added. Thanks!

1 year ago

Hi! Would love to be added! Thanks!

1 year ago

Hi! I'd love to be added. Thanks!

1 year ago

Hi! Could you please add me to the starter pack? Thanks!

1 year ago

Hi! Could you please add me to the starter pack? Thanks!

1 year ago