
Posts by Ken Liu

... and awesome collaborators & advisors!!
Zihao Wang, Rui Sun, Wei Liu, Weijia Shi, @huaxiuyaoml.bsky.social , Linjun Zhang, Andrew Ng, @jameszou.bsky.social, @sanmikoyejo.bsky.social, @yejinchoinka.bsky.social, Percy Liang, @stanfordnlp.bsky.social, @stanfordhai.bsky.social

7 months ago
Unsolved Questions (UQ) Project: an open platform for evaluating AI models on real-world, unsolved questions

9/
UQ is an exploratory effort at creating a new paradigm for AI evals:
๐ŸŒ Platform: uq.stanford.edu
๐Ÿ“„ Paper: arxiv.org/abs/2508.17580
๐Ÿ’ป Code: github.com/uq-project/UQ
๐Ÿค— Data: huggingface.co/datasets/uq-...

Thanks to my wonderful project co-leads Fan Nie (applying for PhD!) and Niklas Muennighoff!!


8/
*UQ-Platform* (uq.stanford.edu) then continues where UQ-Validators leave off. It hosts the UQ-Dataset with AI answers and UQ-validation results, and experts can then rate AI answers, comment, and otherwise help resolve open questions -- just like Stack Exchange :). We need YOU to write reviews!


7/
*UQ-Validators* are simply LLMs (and compound LLM scaffolds) trying to pre-screen candidate answers to unsolved questions *without ground-truth answers*.

The key intuition is that it may be easier for LLMs to *validate* answers to hard questions (e.g. spotting mistakes) than to *generate* them.
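
As a toy illustration of that validate-vs-generate gap (a minimal sketch, not the paper's actual validator pipeline; `prescreen` and the stub judges below are hypothetical names), a compound scaffold might aggregate several independent reference-free verdicts:

```python
from collections import Counter

def aggregate_verdicts(verdicts):
    """Majority vote over independent validator verdicts.

    `verdicts` is a list of "accept"/"reject" strings, e.g. one per
    validator pass or per judge model in an ensemble.
    """
    return Counter(verdicts).most_common(1)[0][0]

def prescreen(question, answer, validators):
    # Each validator is any callable (question, answer) -> "accept"/"reject".
    # In practice these would be LLM critics judging the answer without
    # access to a ground-truth reference.
    verdicts = [validate(question, answer) for validate in validators]
    return aggregate_verdicts(verdicts) == "accept"
```

The point of the aggregation is robustness: a single critic can be fooled, but an answer that survives several independent checks is a stronger candidate for human review.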


6/
In contrast, we aim for UQ-Dataset to be difficult and realistic *by construction*: unsolved questions are often hard and naturally arise when humans seek answers, thus progress yields real-world value.

In exchange, we have to figure out how to evaluate models without answers...


5/
UQ started with the observation that benchmark saturation has led to a *difficulty-realism tension*:

1. We contrive harder exams that begin to lose touch with real-world model usage
2. We build realistic evals (e.g. using human preferences) that become easy and/or hackable


4/
Here are some sample questions from the UQ-Dataset, which spans math, physics, CS theory, history, puzzles, sci-fi, and more; see uq.stanford.edu for the full list!


3/
Our main idea: rather than having static benchmarks scored once, can we evaluate LLMs *continuously and asynchronously* on real-world Qs with an actual need?

UQ-Dataset provides inputs → UQ-Validators screen outputs → UQ-Platform hosts live verification and model ranking.
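
That middle screening step can be sketched in a few lines (an illustrative sketch only; `screen_for_platform` and the stub validator are not names from the UQ codebase):

```python
def screen_for_platform(records, validator):
    """Gate candidate (question, answer) pairs through a validator.

    Only validator-approved answers move on to human review and
    ranking on the platform; the rest are filtered out, so experts
    spend their time on the most promising candidates.
    """
    return [(q, a) for (q, a) in records if validator(q, a)]
```

For example, with a trivial stub validator that rejects empty answers, `screen_for_platform([("q1", "a1"), ("q2", "")], lambda q, a: a != "")` keeps only `("q1", "a1")`.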


2/
The UQ project has 3 parts:
1. UQ-Dataset: 500 hard, popular, old, yet unanswered questions from the Stack Exchange network
2. UQ-Validators: LLM critics to pre-screen model answers
3. UQ-Platform (uq.stanford.edu): community verification (think AI-native Stack Exchange!)


New paper! We explore a radical paradigm for AI evals: assessing LLMs on *unsolved* questions.

Instead of artificially difficult exams where progress ≠ value, we assess LLMs on organic, unsolved problems via reference-free LLM validation & community verification. LLMs solved ~10/500 so far:


๐Ÿ™‹๐Ÿปโ€โ™‚๏ธ

1 year ago
Stanford NLP PhDs: Join the conversation

go.bsky.app/AKGJ82V


hi


🙋‍♂️
