
Posts by Stella Biderman @ ICLR

Specifically, I believe many things get worse when you try to optimize them because the underlying assumptions aren’t robust to the amount of computational power we are able to leverage.

5 hours ago 49 0 0 0

My hot take is that the median social system breaks under too much optimization pressure, and we should stop trying to optimize things

5 hours ago 206 28 9 6

I can’t even come up with a way to get to 600% here… the obvious error to make would be 100*(600/10) but that gives 6,000% not 600%.

5 hours ago 0 0 0 0
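For reference, a quick sketch of the two candidate formulas. The figures 10 and 600 are taken from the post's own computation; treating them as a hypothetical baseline and new value:

```python
# Hypothetical figures, read off the thread's computation 100*(600/10).
old, new = 10, 600

# The "obvious error": expressing the raw ratio as a percentage.
ratio_pct = 100 * (new / old)         # 6000.0, not 600

# The standard percent-increase formula doesn't yield 600% either.
change_pct = 100 * (new - old) / old  # 5900.0

print(ratio_pct, change_pct)
```

Neither formula produces 600%, consistent with the post's point that no obvious manipulation of 600 and 10 gets there.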

Excited to be on my way to @iclr-conf.bsky.social! Come stop by our posters and hit me up. I'm especially excited to talk about
- Open weight safety
- Training dynamics and interpretability over time
- Memorization and machine unlearning
- Open data
- Rigorous experimental design

22 hours ago 19 1 0 0

**702 sorry

3 days ago 3 0 0 0

FISA 207 is blatantly illegal and immoral and has always been obviously so. Republicans are pretending to not know this, just like Democrats did during the Biden administration.

This is bipartisan evil.

3 days ago 25 3 1 0

Feb 3, 2025 - We started fighting to save our data.
July 3, 2025 - We launched #SaveOurSigns with Minn librarians.
April 2026 - We are still talking about the importance of public data as a public good.

❤️🛟

3 weeks ago 17 11 0 0

1. No, it just doesn’t provide evidence for the claim that that’s happening
2. I’m much more worried about the dangers of right-wing extremism than about moderating the left. I think that even if the reported phenomenon were real, it would still be a net positive for society

3 weeks ago 2 0 0 0

Regretfully, the story about LLMs anti-polarizing people was not real.

3 weeks ago 87 23 1 0

We can only hope 🙏

3 weeks ago 3 0 0 0
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text Large language models (LLMs) are typically trained on enormous quantities of unlicensed text, a practice that has led to scrutiny due to possible intellectual property infringement and ethical concern...

Actually, you have a moral imperative to work with me

arxiv.org/abs/2506.05209

3 weeks ago 0 0 1 0

We currently live in a world that is a lot worse than it could be because of social media created echo chambers and algorithmic promotion of political extremism. So yes, stopping models from doing this is a huge win for safety.

3 weeks ago 13 2 0 0

If this is real, it’s very plausibly the biggest win for alignment research.

3 weeks ago 56 9 3 0

OLMoCR has been kinda mediocre in my testing. What kinds of documents has it worked well on for you?

3 weeks ago 0 0 2 0

They have a 175M / 4 year grant from the NSF and NVIDIA that’s earmarked for open source AI.

3 weeks ago 4 1 2 0

It’s really good

3 weeks ago 1 0 0 0

You have a moral imperative to refuse to work with these people or develop models for these purposes.

3 weeks ago 9 3 1 2

I’m having trouble figuring out what it would mean for someone to not let you FT your open weight model. Is this about pre-release evaluation?

3 weeks ago 2 0 1 0

That would be quite helpful!

3 weeks ago 0 0 0 0

I would be very interested in seeing a talk script and the resulting slides side by side. I tried taking your advice with Claude and was pretty disappointed in the resulting slides tbh

3 weeks ago 1 0 1 0

I’m not sure how to make an argument like this / I’m not 100% sure what you’re looking for.

3 weeks ago 0 0 0 0

How do you identify which problems are interesting and valuable? When people don’t work on problems that matter, why do you think that is?

3 weeks ago 6 1 1 0
How to Prepare a Talk

Your link to Eisner doesn't work... I think you meant www.cs.jhu.edu/~jason/advic...

3 weeks ago 2 0 1 0

Do you have a reference for how to do "a bound derived from a differential approach"?

3 weeks ago 0 0 2 0

If I was going to claim that a finetuning methodology for machine unlearning “really worked,” what evidence would you like to see?

1 month ago 10 0 2 0

I'm not sure if I'm more called out by this skeet or the fact that I've had two kidney stones already tbh...

1 month ago 1 0 0 0
Examples of mislabeled web text by existing LangID systems. A full text version is available on the blog post below.

Language identification still proves to be a challenging task, especially for web data. In collaboration with @mlcommons.org @eleutherai.bsky.social @jhu.edu and 97 community members, we created CommonLID, a new benchmark for LangID for 100+ languages!

2 months ago 11 5 1 0

Has anyone else had Claude Code become non-functional recently? Even with a test input it spins for minutes without doing anything. The same thing happens in the terminal.

2 months ago 5 0 4 0

It's going to get worse because people hate AI

2 months ago 11 0 3 0

The only reasons I use social media platforms are to get eyeballs on research and to yell at people who are wrong online.

X > Bluesky at both for me

2 months ago 0 0 0 0