I am curious if you have ever tried compiling all of your disparate observations about the impacts of changing various hyperparameters in your models. Having followed your work for a bit, it seems like you have a wealth of knowledge about this that would be interesting to a lot of people.
Posts by Mason Kamb
A great @quantamagazine.bsky.social article on our theory of creativity in convolutional diffusion models led by @masonkamb.bsky.social See also our paper with new results in version 2: arxiv.org/abs/2412.20292 to be presented as an oral at @icmlconf.bsky.social #icml25
Also, see this explainer thread for more details:
bsky.app/profile/maso...
If you're interested, you can also:
- read our paper (now with faces!): arxiv.org/pdf/2412.202...
- use our code + weights:
github.com/Kambm/convol...
Honored to have had my recent work with
@suryaganguli.bsky.social on the mechanisms behind creativity in diffusion models featured in this lovely article by
Webb Wright for Quanta magazine!
NEW: Mason Kamb ( @masonkamb.bsky.social ) from @stanford.edu presents a predictive theory of combinatorial creativity in diffusion models.
Watch the video: youtu.be/DP_kGt0-2cg
#NeuroAI2025 #AI #ML #NeuroAI
Came for the political ripostes and stayed for the diffusion models
The DOGE etc. damage to US science will have enormous effects that will linger for decades. But they will be sufficiently gradual and diffuse that people who want to pretend the cause wasn't obvious will be able to do so.
In another blow to legacy media, I'm hearing that the Trump administration plans to remove The Atlantic from its war-plans group chat. The outlet will be replaced in the chat by the Gateway Pundit.
real instructive that just by paying attention to the background hum of regular small plane crashes the media has created a perception of a sharp increase
Finally got to reading the fascinating & excellent paper by Kamb and Ganguli, which makes a significant contribution to the diffusion/GenAI literature & will likely become one of the most-cited works in this space. Unlike many "theoretical" ML studies, theirs is high-dimensional and practical. 1/n
Wow, thank you for this very charitable review! Happy to answer any questions/discussion points if you have them.
Code should be out soonish, working to bring the repo into a fit state for public consumption (currently it's a bit spaghettified). Colab not yet in the works, but perhaps it should be…
*replicate for MNIST that is. Different datasets have different characteristics in this regard.
Interesting question. On a patch level I don't have a specific answer. Formally at the largest scales the answer is probably "all of them." On a whole-image level I've found that you can approximately replicate the generated images you get with the whole dataset with only a few hundred examples.
You're also never precisely at t=0 due to discretization, which mitigates the blowup issue as well.
The NN-generated outputs will not obey this consistency condition because they don't blow up. In practice this doesn't affect the output a whole lot. The intuition is that if you have a lot of patches in the dataset, the aforementioned consistency condition becomes very mild.
Good question. The effect of this explosion for the ELS machine ends up being that it enforces the consistency condition in theorem 4.1 (each pixel should match the center pixel of the l2-nearest patch). Intuition here is that these are the only points where the score fails to explode.
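The consistency condition from this reply can be checked in a toy setting. A minimal sketch (my own code, not the paper's repo): in the t → 0 limit the patch softmax sharpens to an argmax, so each pixel of a consistent output must equal the center pixel of its l2-nearest training patch.

```python
import numpy as np

# Toy check of the consistency condition stated above (my own code,
# not the authors'): in the t -> 0 limit the patch softmax becomes an
# argmax, so each pixel of a consistent output equals the center pixel
# of its l2-nearest training patch.

def nearest_patch_center(window, patches, r):
    """Center pixel of the l2-nearest training patch to `window`."""
    d = np.sum((window - patches) ** 2, axis=1)
    return patches[np.argmin(d), r]

k, r = 3, 1  # patch size and radius
train = np.array([[0., 0., 0., 1., 1., 1.]])
patches = np.array([train[0, i:i + k]
                    for i in range(train.shape[1] - k + 1)])

# A patchwork of training pieces satisfies the condition...
good = np.array([0., 0., 0., 1., 1., 1.])
ok = all(nearest_patch_center(good[j - r:j + r + 1], patches, r) == good[j]
         for j in range(r, len(good) - r))

# ...while an alternating pattern not built from training patches
# violates it at some pixel.
bad = np.array([0., 1., 0., 1., 0., 1.])
violated = any(nearest_patch_center(bad[j - r:j + r + 1], patches, r) != bad[j]
               for j in range(r, len(bad) - r))
```

Only interior pixels are checked here; handling boundaries properly is one of the details the real implementation has to care about.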
Our new paper! "Analytic theory of creativity in convolutional diffusion models" led expertly by @masonkamb.bsky.social
arxiv.org/abs/2412.20292
Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs with high median r^2~0.9
We’re excited to push the envelope of deep learning theory to encompass minimal examples of realistic diffusion models in this paper. We hope that this work will lay a foundation for detailed investigations into more sophisticated models, including those with self-attention.
The images from the Attention-enabled model bear strong qualitative resemblance to the ELS machine, but exhibit *just enough* nonlocal coordination to be semantically meaningful.
Our theory is tailored to models that have strong locality biases, such as CNNs. However, we find that our theory (bottom rows) is still moderately predictive for a simple diffusion model *with* self-Attention layers (top rows), which explicitly break equivariance/locality.
Diffusion models are notorious for getting the wrong numbers of fingers, legs, etc. Our theory is able to recapitulate this behavior, and provides for the first time a clear mechanistic explanation for these failures as a consequence of excessive locality.
This simple model of diffusion model creativity is remarkably predictive-- we find that, after calibrating a single time-dependent hyperparameter (the locality scale), we can replicate the behavior of trained fully-convolutional diffusion models on a case-by-case basis.
Under optimal *equivariant+local* denoising, each pixel can be drawn towards *any* training patch from *anywhere* in the training set, rather than only the ones that are drawn from the same pixel location. We call this model the Equivariant Local Score (ELS) Machine.
Under optimal *local* denoising, each *pixel* forms an independent Bayesian estimate for the probability of each training example, based on the information visible in the receptive field, rather than the entire image.
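The equivariant+local denoiser described in the two posts above can be sketched in 1-D. This is my own toy code under the assumptions stated in the comments, not the authors' implementation: each output pixel is a softmax-weighted average of the center pixels of *all* training patches, from every image and every location, weighted by how well each patch matches the noisy receptive field around that pixel.

```python
import numpy as np

# Simplified 1-D sketch of the Equivariant Local Score (ELS) idea
# (my own toy code, not the paper's repo). Equivariance means patches
# are pooled across positions: each output pixel can be pulled toward
# the center pixel of a training patch from *anywhere*.

def els_denoise(x, train, k=3, sigma=0.1):
    """x: noisy 1-D signal (D,); train: (N, D); k: odd patch size."""
    r = k // 2
    # Collect every length-k patch from every training signal.
    patches = np.array([img[i:i + k] for img in train
                        for i in range(len(img) - k + 1)])   # (P, k)
    centers = patches[:, r]                                   # (P,)
    out = np.empty_like(x)
    for j in range(len(x)):
        # Receptive field around pixel j (clamped at the edges).
        lo = max(0, min(j - r, len(x) - k))
        window = x[lo:lo + k]
        # Softmax weights from the local match quality.
        log_w = -np.sum((window - patches) ** 2, axis=1) / (2 * sigma**2)
        w = np.exp(log_w - log_w.max())
        out[j] = (w @ centers) / w.sum()
    return out

# Toy usage: two constant training signals; a noisy near-ones input is
# pulled to 1 everywhere, since the ones-patches match its windows best.
train = np.array([np.zeros(5), np.ones(5)])
x = np.full(5, 0.95)
out = els_denoise(x, train)
```

Because the patch pool is shared across positions, local structure can be copied from anywhere in the training set, which is how patch-level "mosaics" of training data arise.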
We identify two key inductive biases in CNNs that cause this underfitting: 1) locality (i.e. finite effective receptive field size) and 2) translational equivariance. Strikingly, the functional optimum of the training objective under these constraints is analytically solvable!
If diffusion models performed optimally under their training objective, they would only ever output exact copies of their training data-- inconsistent with creativity. In order to generate novel, creative output, diffusion models must therefore *underfit their training objective.*
Diffusion models are trained to take a corrupted sample from the training set, and guess the underlying image. The Bayes-optimal guess is simply the average of all possible training set images, each weighted by a factor corresponding to its likelihood under the noise model.
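The Bayes-optimal guess in the post above can be written down directly. A minimal sketch (my own code, assuming the standard Gaussian noise model x = y + σ·ε; names are mine, not the paper's):

```python
import numpy as np

# Sketch of the Bayes-optimal denoiser described above (my own code):
# the ideal guess is the average of all training images, each weighted
# by its likelihood under the Gaussian noise model.

def ideal_denoiser(x, train, sigma):
    """x: noisy image, flattened (D,); train: training set (N, D)."""
    # Log-likelihood of x given each training image.
    log_w = -np.sum((x - train) ** 2, axis=1) / (2 * sigma ** 2)
    log_w -= log_w.max()            # numerical stability
    w = np.exp(log_w)
    w /= w.sum()                    # posterior weights over the training set
    return w @ train                # likelihood-weighted average

# Toy usage: at small noise the weights concentrate on the nearest
# training example, so the optimal denoiser just reproduces training
# data -- which is exactly the memorization problem the thread raises.
train = np.array([[0.0, 0.0], [1.0, 1.0]])
x = np.array([0.9, 1.1])
out = ideal_denoiser(x, train, sigma=0.05)
```

At small σ this collapses onto the single nearest training image, which is one way to see why the *optimal* model can only ever reproduce its training set.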
Our theory provides clear mechanistic answers to these questions in the very simplest nontrivial case we could find. Specifically, we study the case of diffusion models with small, fully-convolutional backbones– no self-attention.
What is the origin of the apparent creative properties of diffusion models? How do their outputs relate to their training data? Also, why do diffusion models sometimes struggle with spatial consistency?