
Posts by Keyon Vafa

Yale University, Institute for the Foundations of Data Science Job #AJO31114, Postdoc in Foundations of Data Science, Institute for the Foundations of Data Science, Yale University, New Haven, Connecticut, US

📣 Postdocs at Yale FDS! 📣 Tremendous freedom to work on data science problems with faculty across campus, multi-year, great salary. Deadline 12/15. Spread the word! Application: academicjobsonline.org/ajo/jobs/31114 More about Yale FDS: fds.yale.edu

5 months ago

💡🤖🔥 @keyonv.bsky.social's talk at metrics-and-models.github.io was brilliant, posing epistemic questions about what Artificial Intelligence "understands".

Next (in two weeks): Alexander Vezhnevets talks about a new multi-actor generative agent-based model. As usual, *all welcome* #datascience #css💡🤖🔥

7 months ago

💡🤖🔥The talk by Juan Carlos Perdomo at metrics-and-models.github.io was so thought provoking that the convenors stayed to discuss it in the room afterwards for quite some time!

Next, we have @keyonv.bsky.social asking: "What are AI's World Models?". Exciting times over here, all welcome!💡🤖🔥

8 months ago
What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models Foundation models are premised on the idea that sequence prediction can uncover deeper domain understanding, much like how Kepler's predictions of planetary motion later led to the discovery of Newton...

Paper: arxiv.org/abs/2507.06952

Co-authors: Peter Chang, Ashesh Rambachan (@asheshrambachan.bsky.social), Sendhil Mullainathan (@sendhil.bsky.social)

9 months ago
ICML Workshop on Assessing World Models Date: Friday, July 18 2025 Location: Ballroom B at ICML 2025 in Vancouver, Canada

This is one way to evaluate world models. But there are many other interesting approaches!

Plug: If you're interested in more, check out the Workshop on Assessing World Models I'm co-organizing Friday at ICML www.worldmodelworkshop.org

9 months ago
Evaluating the World Model Implicit in a Generative Model Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is govern...

Last year we proposed different tests that studied single tasks.

We now think that studying behavior on new tasks better captures what we want from foundation models: tools for new problems.

It's what separates Newton's laws from Kepler's predictions.
arxiv.org/abs/2406.03689

9 months ago

Summary:
1. We propose inductive bias probes: a model's inductive bias reveals its world model

2. Foundation models can have great predictions with poor world models

3. One reason world models are poor: models group together distinct states that have similar allowed next-tokens
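Point 3 can be made concrete with a toy example (not from the paper): a tiny four-state world in which two states are genuinely different yet permit exactly the same next tokens, so a model that only tracks "what is legal next" cannot tell them apart.

```python
# Hypothetical mini-world: states B and C lead to different outcomes,
# but both allow exactly the same next token.
transitions = {
    ("A", "x"): "B",
    ("A", "y"): "C",
    ("B", "z"): "WIN",
    ("C", "z"): "LOSE",
}

def allowed_next(state):
    # The set of tokens that are legal from a given state.
    return sorted(t for (s, t) in transitions if s == state)

# B and C share their allowed next-tokens...
assert allowed_next("B") == allowed_next("C") == ["z"]
# ...yet they are distinct states: the same token leads somewhere different.
print(transitions[("B", "z")], "vs", transitions[("C", "z")])
```

A next-token predictor can be perfect on this world while merging B and C internally, which is exactly the failure mode described above.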

9 months ago

Inductive bias probes can test this hypothesis more generally.

Models are much likelier to conflate two separate states when they share the same legal next-tokens.

9 months ago

We fine-tune an Othello next-token prediction model to reconstruct boards.

Even when the model reconstructs boards incorrectly, the reconstructed boards often get the legal next moves right.

Models seem to construct "enough of" the board to calculate single next moves.

9 months ago

If a foundation model's inductive bias isn't toward a given world model, what is it toward?

One hypothesis: models confuse sequences that belong to different states but have the same legal *next* tokens.

Example: Two different Othello boards can have the same legal next moves.

9 months ago

We also apply these probes to lattice problems (think gridworld).

Inductive biases are great when the number of states is small, but they deteriorate quickly as the state space grows.

Recurrent and state-space models like Mamba consistently have better inductive biases than transformers.
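For intuition about the lattice setup (details here are illustrative, not the paper's exact construction): tokens are moves, the latent "state" is the position, and many different token sequences collapse to the same state — the pattern a probe checks a model for.

```python
import itertools

# Toy 1-D lattice (gridworld): tokens are moves, the state is the position.
MOVES = {"L": -1, "R": +1}

def state(seq):
    # Final lattice position after a sequence of moves.
    return sum(MOVES[t] for t in seq)

# Group all length-4 move sequences by the state they end in.
by_state = {}
for seq in itertools.product("LR", repeat=4):
    by_state.setdefault(state(seq), []).append("".join(seq))

for s in sorted(by_state):
    print(f"state {s:+d}: {len(by_state[s])} sequences")
```

Sixteen sequences map onto just five states; a model with the right inductive bias should treat sequences in the same group interchangeably.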

9 months ago

Would more general models like LLMs do better?

We tried providing o3, Claude Sonnet 4, and Gemini 2.5 Pro with a small number of force magnitudes in-context w/o saying what they are.

These LLMs are explicitly trained on Newton's laws. But they can't get the rest of the forces.

9 months ago

We then fine-tuned the model at a larger scale to predict forces across 10K solar systems.

We used symbolic regression to compare the recovered force law to Newton's law.

Not only did it recover a nonsensical law; it recovered different laws for different galaxies.
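Symbolic regression searches over symbolic expressions; the comparison it enables can be sketched in its simplest form for a pure power law (the numbers below are made up for illustration): fit log F = log k − p·log r to the model's predicted forces and compare the exponent p to Newton's p = 2.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for forces predicted by a fine-tuned model; generated
# here from a deliberately non-Newtonian law so the check has something to find.
r = rng.uniform(0.5, 5.0, size=500)
f_pred = 1.3 / r**1.4  # "nonsensical" law: exponent 1.4 instead of 2

# Fit log F = log k - p * log r by least squares.
A = np.stack([np.ones_like(r), -np.log(r)], axis=1)
(logk, p), *_ = np.linalg.lstsq(A, np.log(f_pred), rcond=None)

print(f"recovered exponent p = {p:.2f} (Newton: 2.00)")
```

Running the same fit separately on different slices of the data is what exposes the inconsistency: a model with a real world model should recover the same law everywhere.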

9 months ago

To demonstrate, we fine-tuned the model to predict force vectors on a small dataset of planets in our solar system.

A model that understands Newtonian mechanics should get these. But the transformer struggles.

9 months ago

But has the model discovered Newton's laws?

When we fine-tune it to new tasks, its inductive bias isn't toward Newtonian states.

When it extrapolates, it makes similar predictions for orbits with very different states, and different predictions for orbits with similar states.

9 months ago

We apply these probes to orbital, lattice, and Othello problems.

Starting with orbits: we encode solar systems as sequences and train a transformer on 10M solar systems (20B tokens).
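A minimal sketch of what "solar systems as sequences" could look like (the binning scheme and token format here are assumptions, not the paper's actual tokenizer): sample trajectory positions and discretize them into a small vocabulary so the transformer sees next-token prediction.

```python
import numpy as np

def circular_orbit(radius, steps, dt=0.1):
    # Positions of a body on a circular orbit, sampled at fixed timesteps.
    t = np.arange(steps) * dt
    return np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)

def to_tokens(xy, box=2.0, n_bins=64):
    # Map each coordinate in [-box, box] to an integer bin, then to a token.
    bins = np.clip(((xy + box) / (2 * box) * n_bins).astype(int), 0, n_bins - 1)
    return [f"x{bx}y{by}" for bx, by in bins]

tokens = to_tokens(circular_orbit(radius=1.0, steps=5))
print(tokens)  # e.g. ['x48y32', 'x47y33', ...]
```

The model is then trained to predict each position token from the ones before it.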

The model makes accurate predictions many timesteps ahead. Predictions for our solar system:

9 months ago

We propose a method to measure these inductive biases. We call it an inductive bias probe.

Two steps:
1. Fit a foundation model to many new, very small synthetic datasets
2. Analyze patterns in the functions it learns to find the model's inductive bias
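The two steps can be sketched in miniature (everything below is a toy stand-in: a fixed feature map plays the role of the frozen foundation model, and tiny random datasets play the role of the new synthetic tasks):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a frozen foundation-model representation.
    return np.stack([x, np.sin(x), np.cos(x)], axis=-1)

def fit_small_dataset(xs, ys):
    # Step 1: fit the frozen representation to one tiny dataset (ridge head).
    Phi = embed(xs)
    w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ ys)
    return lambda x: embed(x) @ w

# Fit many tiny synthetic datasets and record the learned functions on a grid.
grid = np.linspace(0, 2 * np.pi, 50)
preds = []
for _ in range(200):
    xs = rng.uniform(0, 2 * np.pi, size=8)  # very small dataset
    ys = rng.normal(size=8)                 # random labels
    preds.append(fit_small_dataset(xs, ys)(grid))
preds = np.array(preds)

# Step 2: inputs whose predictions co-vary across datasets are ones the model
# treats as "the same state" -- that pattern is its inductive bias.
corr = np.corrcoef(preds.T)  # 50 x 50 similarity over grid points
print(corr.shape)
```

In the paper's setting, states the model conflates show up as inputs with highly correlated extrapolations, even when the true world model keeps them apart.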

9 months ago

Newton's laws are a kind of foundation model. They provide a place to start when working on new problems.

A good foundation model should do the same.

The No Free Lunch Theorem motivates a test: Every foundation model has an inductive bias. This bias reveals its world model.

9 months ago

If you only care about orbits, Newton didn't add much. His laws give the same predictions.

But Newton's laws went beyond orbits: the same laws explain pendulums, cannonballs, and rockets.

This motivates our framework: Predictions apply to one task. World models generalize to many.

9 months ago

Perhaps the most influential world model had its start as a predictive model.

Before we had Newton's laws of gravity, we had Kepler's predictions of planetary orbits.

Kepler's predictions led to Newton's laws. So what did Newton add?

9 months ago

Our paper aims to answer two questions:

1. What's the difference between prediction and world models?
2. Are there straightforward metrics that can test this distinction?

Our paper is about AI. But it's helpful to go back 400 years to answer these questions.

9 months ago

Can an AI model predict perfectly and still have a terrible world model?

What would that even mean?

Our new ICML paper (poster tomorrow!) formalizes these questions.

One result tells the story: A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵

9 months ago
Keyon Vafa: Predicting Workers’ Career Trajectories to Better Understand Labor Markets If we know someone’s career history, how well can we predict which job they’ll have next?

If we know someone’s career history, how well can we predict which jobs they’ll have next? Read our profile of @keyonv.bsky.social to learn how ML models can be used to predict workers’ career trajectories & better understand labor markets.

medium.com/@gsb_silab/k...

9 months ago

Foundation models make great predictions. How should we use them for estimation problems in social science?

New PNAS paper @susanathey.bsky.social & @keyonv.bsky.social & @Blei Lab:
Bad news: Good predictions ≠ good estimates.
Good news: Good estimates are possible by fine-tuning models differently 🧵

9 months ago

*Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...

1 year ago
Large language models act as if they are part of a group - Nature Computational Science An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social prejudices are likely captured from the biased conten...

Happy to write this News & Views piece on the recent audit showing LLMs picking up "us versus them" biases: www.nature.com/articles/s43... (Read-only version: rdcu.be/d5ovo)

Check out the amazing (original) paper here: www.nature.com/articles/s43...

1 year ago
SICSS-ODISSEI Summer School 2025 - ODISSEI – Open Data Infrastructure for Social Science and Economic Innovations From 16 to 27 June 2024, ODISSEI is hosting its fourth summer school at Erasmus University in Rotterdam, as part of the Summer Institutes in Computational Social Science (SICSS) and the Erasmus Gradua...

Applications open for the SICSS-ODISSEI summer school at Erasmus University. For PhD students, post-docs and early career researchers interested in computational social science. More info: odissei-data.nl/event/sicss-...

1 year ago
Banner for CHI 2025 workshop with text: "Speech AI for All: Promoting Accessibility, Fairness, Inclusivity, and Equity"


📢Announcing 1-day CHI 2025 workshop: Speech AI for All! We’ll discuss challenges & impacts of inclusive speech tech for people with speech diversities, connecting researchers, practitioners, policymakers, & community members. 🎉Apply to join us: speechai4all.org

1 year ago

Thank you Alex!

1 year ago
Behavioral ML Date: December 14, 2024 (at NeurIPS in Vancouver, Canada) Location: MTG 19&20

Our Saturday workshop is focused on incorporating insights from the behavioral sciences into AI models/systems.

Speakers and schedule: behavioralml.org

Location: MTG 19&20 at 8:45am

1 year ago