
Posts by Sanjeev Arora


Check out our new blogpost and policy brief on our recently updated lab website!

❓Are we actually capturing the bubble of risk for cybersecurity evals? Not really! Adversaries can modify agents by a small amount and get massive gains.

9 months ago

Would it make sense to also track how often this happened in pre-2023 cases? Humans "hallucinate" by making cut-and-paste mistakes, or other types of errors.

10 months ago

The paper seems to reflect a fundamental misunderstanding about how LLMs work. One cannot (currently) tell an LLM to "ignore pretraining data from year X onwards". The LLM doesn't have data stored neatly inside it in sortable format. It is not like a hard drive.

11 months ago

Great comment by my colleague @randomwalker.bsky.social

1 year ago

Understanding and extrapolating benchmark results will become essential for effective policymaking and informing users. New work identifies indicators that have high predictive power in modeling LLM performance. Excited for it to be out!

1 year ago

What are 3 concrete steps that can improve AI safety in 2025? 🤖⚠️

Our new paper, “In-House Evaluation is Not Enough,” has three calls to action to empower evaluators:

1️⃣ Standardized AI flaw reports
2️⃣ AI flaw disclosure programs + safe harbors
3️⃣ A coordination center for transferable AI flaws

1/🧵

1 year ago

Congratulations! Great result.

1 year ago

A new path forward for open AI (note the space between the two words). Looking forward to seeing how it enables great research in the open.

1 year ago

x.com/parksimon080...

Can VLMs do difficult reasoning tasks? Using a new dataset for evaluating Simple-to-Hard generalization (a form of OOD generalization), we study how to mitigate the dreaded "modality gap" between a VLM and its base LLM.
(Note: the poster, Simon Park, applied to PhD programs this spring.)

1 year ago

SimPO: a new method from Princeton PLI for improving chat models via preference data. It is simpler than DPO and was widely adopted within weeks by top models in the Chatbot Arena. Excellent and elementary account by author @xiamengzhou.bsky.social (she's also on the job market!). tinyurl.com/pepcynaxFully
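For readers curious why SimPO is "simpler than DPO": it drops DPO's reference model and uses a length-normalized average log-probability as the implicit reward, with a target margin between the preferred and rejected responses. Below is a minimal sketch of that objective; the function name, argument names, and the beta/gamma values are illustrative, not the authors' actual implementation.

```python
import math

def simpo_loss(logp_chosen, len_chosen, logp_rejected, len_rejected,
               beta=2.0, gamma=0.5):
    """Sketch of the SimPO objective: length-normalized implicit reward,
    no reference model. logp_* are summed token log-probs under the policy;
    len_* are response lengths in tokens. beta and gamma are illustrative."""
    r_w = beta * logp_chosen / len_chosen      # reward for preferred response
    r_l = beta * logp_rejected / len_rejected  # reward for rejected response
    margin = r_w - r_l - gamma                 # Bradley-Terry logit with target margin gamma
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

The loss shrinks as the policy assigns a higher average log-probability to the preferred response than to the rejected one, by at least the margin gamma; no frozen reference model is needed, which is the main simplification over DPO.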

1 year ago
Join Paper Club with Princeton University on Model Alignment Challenges in Preference Learning [AI Tinkerers - Paper Club] Join Our Paper Club Event Series! Meet with Sadhika Malladi, AI Researcher at Princeton University and discuss the challenges of aligning language models with human preferences. Don’t miss this unique...

I'll be giving a talk on my two recent preference learning works (led by Angelica Chen and @noamrazin.bsky.social) in the AI Tinkerers Paper Club today (11/26) at noon ET. Excited to share this talk with a broader audience! paperclub.aitinkerers.org/p/join-paper...

1 year ago

Interesting thread from Geoffrey Irving about the fragility of interpreting LLMs' latent reasoning (whether self-reported or recovered by some mechanistic interpretability method). I have long been pessimistic about trusting latent reasoning.

1 year ago