📣 Postdocs at Yale FDS! 📣 Tremendous freedom to work on data science problems with faculty across campus, multi-year, great salary. Deadline 12/15. Spread the word! Application: academicjobsonline.org/ajo/jobs/31114 More about Yale FDS: fds.yale.edu
Posts by Keyon Vafa
💡🤖🔥 @keyonv.bsky.social's talk at metrics-and-models.github.io was brilliant, posing epistemic questions about what Artificial Intelligence "understands".
Next (two weeks): Alexander Vezhnevets talks about a new multi-actor generative agent-based model. As usual, *all welcome* #datascience #css💡🤖🔥
💡🤖🔥The talk by Juan Carlos Perdomo at metrics-and-models.github.io was so thought-provoking that the convenors stayed in the room to discuss it for quite some time afterwards!
Next, we have @keyonv.bsky.social asking: "What are AI's World Models?". Exciting times over here, all welcome!💡🤖🔥
Paper: arxiv.org/abs/2507.06952
Co-authors: Peter Chang, Ashesh Rambachan (@asheshrambachan.bsky.social), Sendhil Mullainathan (@sendhil.bsky.social)
This is one way to evaluate world models. But there are many other interesting approaches!
Plug: If you're interested in more, check out the Workshop on Assessing World Models I'm co-organizing Friday at ICML www.worldmodelworkshop.org
Last year we proposed different tests that studied single tasks.
We now think that studying behavior on new tasks better captures what we want from foundation models: tools for new problems.
It's what separates Newton's laws from Kepler's predictions.
arxiv.org/abs/2406.03689
Summary:
1. We propose inductive bias probes: a model's inductive bias reveals its world model
2. Foundation models can have great predictions with poor world models
3. One reason world models are poor: models group together distinct states that have similar allowed next-tokens
Inductive bias probes can test this hypothesis more generally.
Models are much likelier to conflate two separate states when they share the same legal next-tokens.
We fine-tune an Othello next-token prediction model to reconstruct boards.
Even when the model reconstructs boards incorrectly, the reconstructed boards often get the legal next moves right.
Models seem to construct "enough of" the board to compute individual next moves.
If a foundation model's inductive bias isn't toward a given world model, what is it toward?
One hypothesis: models confuse sequences that belong to different states but have the same legal *next* tokens.
Example: Two different Othello boards can have the same legal next moves.
We also apply these probes to lattice problems (think gridworld).
Inductive biases are great when the number of states is small. But they deteriorate quickly as the state space grows.
Recurrent and state-space models like Mamba consistently have better inductive biases than transformers.
Would more general models like LLMs do better?
We tried providing o3, Claude Sonnet 4, and Gemini 2.5 Pro with a small number of force magnitudes in-context w/o saying what they are.
These LLMs are explicitly trained on Newton's laws. But they can't get the rest of the forces.
We then fine-tuned the model at a larger scale, predicting forces across 10K solar systems.
We used symbolic regression to compare the recovered force law to Newton's law.
It not only recovered a nonsensical law; it recovered different laws for different solar systems.
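One way to make that kind of comparison concrete (a toy stand-in for symbolic regression, with made-up data, not the paper's actual pipeline): a power law F = k·r^a is linear in log space, so you can fit the exponent and check it against Newton's inverse-square −2. A model whose recovered exponent differs, or varies from system to system, hasn't internalized the law.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical force-vs-distance samples; k folds in G*m1*m2.
r = rng.uniform(0.5, 5.0, size=500)
F = 3.7 * r ** -2.0          # ground truth: inverse-square law

# Fit F = k * r^a in log space: log F = log k + a * log r
a, log_k = np.polyfit(np.log(r), np.log(F), 1)
print(round(a, 3))           # -2.0: matches Newton's exponent
```

Real symbolic regression searches over whole expression forms, not just a fixed power-law template; this is only the exponent check.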
To demonstrate, we fine-tuned the model to predict force vectors on a small dataset of planets in our solar system.
A model that understands Newtonian mechanics should get these. But the transformer struggles.
But has the model discovered Newton's laws?
When we fine-tune it to new tasks, its inductive bias isn't toward Newtonian states.
When it extrapolates, it makes similar predictions for orbits with very different states, and different predictions for orbits with similar states.
We apply these probes to orbital, lattice, and Othello problems.
Starting with orbits: we encode solar systems as sequences and train a transformer on 10M solar systems (20B tokens)
The model makes accurate predictions many timesteps ahead. Predictions for our solar system:
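For intuition on what "encode solar systems as sequences" can look like: a hypothetical tokenizer (this is an illustration, not the paper's actual encoding) might quantize each position into bins so a trajectory becomes a token sequence a transformer can model.

```python
import numpy as np

def encode_trajectory(positions, n_bins=64, lo=-10.0, hi=10.0):
    """Quantize (x, y) positions of shape (timesteps, 2) into bin ids,
    then pair (x_bin, y_bin) into a single token id per timestep."""
    bins = np.linspace(lo, hi, n_bins + 1)
    ids = np.clip(np.digitize(positions, bins) - 1, 0, n_bins - 1)
    return (ids[:, 0] * n_bins + ids[:, 1]).tolist()

traj = np.array([[0.0, 1.0], [0.5, 0.9]])
print(encode_trajectory(traj))
```

The bin count trades off resolution against vocabulary size (here 64 × 64 = 4096 tokens).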
We propose a method to measure these inductive biases. We call it an inductive bias probe.
Two steps:
1. Fit a foundation model to many new, very small synthetic datasets
2. Analyze patterns in the functions it learns to find the model's inductive bias
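A toy illustration of those two steps (not the paper's actual probe; the representations, task distribution, and least-squares fit are all made up for the sketch): a representation that encodes the true state extrapolates well across many tiny state-dependent tasks, while one that doesn't, doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: the true state is a 2-D vector; a "representation" is what a
# model exposes about each input.
n = 200
states = rng.normal(size=(n, 2))
good_repr = states @ rng.normal(size=(2, 8))   # encodes the true state
bad_repr = rng.normal(size=(n, 8))             # unrelated to the state

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def inductive_bias_probe(repr_, n_tasks=100, n_train=20):
    """Step 1: fit the representation to many tiny synthetic tasks whose
    labels depend only on the true state. Step 2: summarize the fitted
    functions (here, mean held-out R^2 as a crude extrapolation score)."""
    X = np.hstack([repr_, np.ones((n, 1))])    # features + intercept
    scores = []
    for _ in range(n_tasks):
        y = states @ rng.normal(size=2)        # label = function of true state
        idx = rng.permutation(n)
        tr, te = idx[:n_train], idx[n_train:]
        beta, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        scores.append(r2(y[te], X[te] @ beta))
    return float(np.mean(scores))

print(round(inductive_bias_probe(good_repr), 2))   # ≈ 1.0: state-aligned bias
print(round(inductive_bias_probe(bad_repr), 2))    # far below 1: no such bias
```

The real probe works on fine-tuned foundation models rather than linear fits, but the shape is the same: many tiny datasets in, a pattern over learned functions out.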
Newton's laws are a kind of foundation model. They provide a place to start when working on new problems.
A good foundation model should do the same.
The No Free Lunch Theorem motivates a test: Every foundation model has an inductive bias. This bias reveals its world model.
If you only care about orbits, Newton didn't add much. His laws give the same predictions.
But Newton's laws went beyond orbits: the same laws explain pendulums, cannonballs, and rockets.
This motivates our framework: Predictions apply to one task. World models generalize to many.
Perhaps the most influential world model had its start as a predictive model.
Before we had Newton's laws of gravity, we had Kepler's predictions of planetary orbits.
Kepler's predictions led to Newton's laws. So what did Newton add?
Our paper aims to answer two questions:
1. What's the difference between prediction and world models?
2. Are there straightforward metrics that can test this distinction?
Our paper is about AI. But it's helpful to go back 400 years to answer these questions.
Can an AI model predict perfectly and still have a terrible world model?
What would that even mean?
Our new ICML paper (poster tomorrow!) formalizes these questions.
One result tells the story: A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵
If we know someone’s career history, how well can we predict which jobs they’ll have next? Read our profile of @keyonv.bsky.social to learn how ML models can be used to predict workers’ career trajectories & better understand labor markets.
medium.com/@gsb_silab/k...
Foundation models make great predictions. How should we use them for estimation problems in social science?
New PNAS paper @susanathey.bsky.social & @keyonv.bsky.social & @Blei Lab:
Bad news: Good predictions ≠ good estimates.
Good news: Good estimates possible by fine-tuning models differently 🧵
*Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...
Happy to write this News & Views piece on the recent audit showing LLMs picking up "us versus them" biases: www.nature.com/articles/s43... (Read-only version: rdcu.be/d5ovo)
Check out the amazing (original) paper here: www.nature.com/articles/s43...
Applications open for the SICSS-ODISSEI summer school at Erasmus University. For PhD students, post-docs and early career researchers interested in computational social science. More info: odissei-data.nl/event/sicss-...
📢Announcing 1-day CHI 2025 workshop: Speech AI for All! We’ll discuss challenges & impacts of inclusive speech tech for people with speech diversities, connecting researchers, practitioners, policymakers, & community members. 🎉Apply to join us: speechai4all.org
Thank you Alex!