
Posts by Mason Kamb

I am curious if you have ever tried compiling all of your disparate observations about the impacts of changing various hyperparameters in your models. Having followed your work for a bit, it seems like you have a wealth of knowledge about this that would be interesting to a lot of people.

9 months ago 0 0 0 0

A great @quantamagazine.bsky.social article on our theory of creativity in convolutional diffusion models, led by @masonkamb.bsky.social. See also our paper with new results in version 2: arxiv.org/abs/2412.20292, to be presented as an oral at @icmlconf.bsky.social #icml25

9 months ago 21 4 1 0

Also, see this explainer thread for more details:
bsky.app/profile/maso...

9 months ago 0 0 0 0

If you're interested, you can also:
- read our paper (now with faces!): arxiv.org/pdf/2412.202...
- use our code + weights:
github.com/Kambm/convol...

9 months ago 1 0 1 0

Honored to have had my recent work with
@suryaganguli.bsky.social on the mechanisms behind creativity in diffusion models featured in this lovely article by
Webb Wright for Quanta magazine!

9 months ago 10 2 1 0
An Analytic Theory of Creativity in Convolutional Diffusion Models with Mason Kamb
Mason Kamb from Stanford University joined the Frontiers of NeuroAI Symposium on June 6, 2025, to discuss "An Analytic Theory of Creativity in Convolutional ...

NEW: Mason Kamb (@masonkamb.bsky.social) from @stanford.edu presents a predictive theory of combinatorial creativity in diffusion models.

Watch the video: youtu.be/DP_kGt0-2cg

#NeuroAI2025 #AI #ML #NeuroAI

10 months ago 2 1 0 0

Came for the political ripostes and stayed for the diffusion models

11 months ago 0 0 0 0

The DOGE etc. damage to US science will have enormous effects that will linger for decades. But they will be sufficiently gradual and diffuse that people who want to pretend the cause wasn't obvious will be able to do so.

1 year ago 146 31 15 5

In another blow to legacy media, I'm hearing that the Trump administration plans to remove The Atlantic from its war-plans group chat. The outlet will be replaced in the chat by the Gateway Pundit.

1 year ago 2326 317 31 10

Real instructive that, just by paying attention to the background hum of regular small-plane crashes, the media has created the perception of a sharp increase

1 year ago 1252 128 63 28

Finally got to reading the fascinating & excellent paper by Kamb and Ganguli, which makes a significant contribution to the diffusion/GenAI literature & will likely become one of the most-cited works in this space. Unlike many "theoretical" ML studies, theirs is high-dimensional and practical. 1/n

1 year ago 8 1 2 0

Wow, thank you for this very charitable review! Happy to answer any questions/discussion points if you have them.

Code should be out soonish, working to bring the repo into a fit state for public consumption (currently it's a bit spaghettified). Colab not yet in the works, but perhaps it should be…

1 year ago 1 0 0 0

*replicate for MNIST that is. Different datasets have different characteristics in this regard.

1 year ago 0 0 0 0

Interesting question. On a patch level I don't have a specific answer. Formally at the largest scales the answer is probably "all of them." On a whole-image level I've found that you can approximately replicate the generated images you get with the whole dataset with only a few hundred examples.

1 year ago 1 0 1 0

You're also never precisely at t=0 due to discretization, which mitigates the blowup issue as well.

1 year ago 1 0 0 0

The NN-generated outputs will not obey this consistency condition because they don't blow up. In practice this doesn't affect the output a whole lot. The intuition is that if you have a lot of patches in the dataset, the aforementioned consistency condition becomes very mild.

1 year ago 0 0 1 0

Good question. The effect of this explosion for the ELS machine ends up being that it enforces the consistency condition in theorem 4.1 (each pixel should match the center pixel of the l2-nearest patch). Intuition here is that these are the only points where the score fails to explode.
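A toy check of that condition, as I've paraphrased it from the thread. The function name, patch size, and edge padding are all illustrative choices, not the paper's implementation:

```python
import numpy as np

def check_consistency(x, train_images, patch=3, tol=1e-6):
    """Check the (paraphrased) theorem-4.1 condition: every pixel of x
    equals the center pixel of the l2-nearest training patch."""
    H, W = x.shape
    r = patch // 2
    xp = np.pad(x, r, mode="edge")
    tp = np.pad(train_images, ((0, 0), (r, r), (r, r)), mode="edge")
    # flatten every training patch alongside its center pixel
    patches = np.stack([tp[n, i:i + patch, j:j + patch].ravel()
                        for n in range(train_images.shape[0])
                        for i in range(H) for j in range(W)])
    centers = np.array([train_images[n, i, j]
                        for n in range(train_images.shape[0])
                        for i in range(H) for j in range(W)])
    for i in range(H):
        for j in range(W):
            q = xp[i:i + patch, j:j + patch].ravel()
            k = np.argmin(np.sum((patches - q) ** 2, axis=1))  # l2-nearest patch
            if abs(x[i, j] - centers[k]) > tol:
                return False
    return True
```

By this criterion, any training image trivially satisfies the condition (its own patches are distance zero), while perturbing a pixel breaks it.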

1 year ago 1 0 1 0
Post image

Our new paper! "Analytic theory of creativity in convolutional diffusion models" led expertly by @masonkamb.bsky.social
arxiv.org/abs/2412.20292
Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs with high median r^2~0.9

1 year ago 134 31 5 7

We’re excited to push the envelope of deep learning theory to encompass minimal examples of realistic diffusion models in this paper. We hope that this work will lay a foundation for detailed investigations into more sophisticated models, including those with self-attention.

1 year ago 4 1 0 0

The images from the attention-enabled model bear strong qualitative resemblance to the ELS machine, but exhibit *just enough* nonlocal coordination to be semantically meaningful.

1 year ago 1 0 1 0
Post image

Our theory is tailored to models that have strong locality biases, such as CNNs. However, we find that our theory (bottom rows) is still moderately predictive for a simple diffusion model *with* self-attention layers (top rows), which explicitly break equivariance/locality.

1 year ago 2 0 1 0
Post image

Diffusion models are notorious for getting the wrong numbers of fingers, legs, etc. Our theory is able to recapitulate this behavior, and provides for the first time a clear mechanistic explanation for these failures as a consequence of excessive locality.

1 year ago 5 1 1 0
Post image

This simple model of diffusion model creativity is remarkably predictive: we find that, after calibrating a single time-dependent hyperparameter (the locality scale), we can replicate the behavior of trained fully-convolutional diffusion models on a case-by-case basis.

1 year ago 3 0 1 0
Post image

Under optimal *equivariant+local* denoising, each pixel can be drawn towards *any* training patch from *anywhere* in the training set, rather than only the ones that are drawn from the same pixel location. We call this model the Equivariant Local Score (ELS) Machine.
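A deliberately unoptimized sketch of that idea: each output pixel is a softmax-weighted average over the center pixels of *every* training patch, from every location of every training image. The function name, patch size, and Gaussian weighting scale `sigma` are my illustrative assumptions, not the authors' code:

```python
import numpy as np

def els_denoiser(x_t, train_images, sigma, patch=3):
    """Equivariant Local Score (ELS) sketch: each pixel of the output is a
    weighted average over center pixels of all training patches, with weights
    given by Gaussian similarity between each patch and the local receptive field."""
    H, W = x_t.shape
    r = patch // 2
    # collect every training patch (from every image and location) and its center pixel
    tp = np.pad(train_images, ((0, 0), (r, r), (r, r)), mode="edge")
    patches, centers = [], []
    for n in range(train_images.shape[0]):
        for i in range(H):
            for j in range(W):
                patches.append(tp[n, i:i + patch, j:j + patch].ravel())
                centers.append(train_images[n, i, j])
    patches = np.asarray(patches)
    centers = np.asarray(centers)

    xp = np.pad(x_t, r, mode="edge")
    out = np.zeros_like(x_t)
    for i in range(H):
        for j in range(W):
            q = xp[i:i + patch, j:j + patch].ravel()  # receptive field at (i, j)
            logw = -np.sum((patches - q) ** 2, axis=1) / (2 * sigma ** 2)
            w = np.exp(logw - logw.max())             # stabilize before normalizing
            out[i, j] = w @ centers / w.sum()
    return out
```

At small noise scales the weights concentrate on the single best-matching patch, so a clean training image is mapped (approximately) to itself.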

1 year ago 5 0 1 0

Under optimal *local* denoising, each *pixel* forms an independent Bayesian estimate for the probability of each training example, based on the information visible in the receptive field, rather than the entire image.
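A per-pixel sketch of that local posterior average, comparing the receptive field around each pixel against the training patch at the *same* location (the function name, patch size, and Gaussian weighting are my assumptions, not the paper's implementation):

```python
import numpy as np

def local_denoiser(x_t, train_images, sigma, patch=3):
    """Local-denoising sketch: each pixel averages the training images'
    same-location center pixels, weighted by how well the surrounding
    patch of x_t matches the corresponding training patch."""
    H, W = x_t.shape
    r = patch // 2
    xp = np.pad(x_t, r, mode="edge")
    tp = np.pad(train_images, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.zeros_like(x_t)
    for i in range(H):
        for j in range(W):
            q = xp[i:i + patch, j:j + patch]  # receptive field around pixel (i, j)
            # squared patch distance to each training image at the same location
            d2 = np.sum((tp[:, i:i + patch, j:j + patch] - q) ** 2, axis=(1, 2))
            logw = -d2 / (2 * sigma ** 2)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            out[i, j] = w @ train_images[:, i, j]  # posterior mean of the center pixel
    return out
```

The only change from the fully global posterior is that the likelihood weights for each pixel depend on its receptive field rather than the whole image.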

1 year ago 3 0 1 0

We identify two key inductive biases in CNNs that cause this underfitting: 1) locality (i.e. finite effective receptive field size) and 2) translational equivariance. Strikingly, the functional optimum of the training objective under these constraints is analytically solvable!

1 year ago 3 0 1 0

If diffusion models performed optimally under their training objective, they would only ever output exact copies of their training data, inconsistent with creativity. In order to generate novel, creative output, diffusion models must therefore *underfit their training objective.*

1 year ago 4 0 1 0

Diffusion models are trained to take a corrupted sample from the training set and guess the underlying image. The Bayes-optimal guess is simply the average of the training images, each weighted by a factor corresponding to its likelihood under the noise model.
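As a minimal sketch of that Bayes-optimal guess, assuming isotropic Gaussian corruption with scale `sigma` (the function name and schedule details are my assumptions, not from the thread):

```python
import numpy as np

def bayes_optimal_denoiser(x_t, train_images, sigma):
    """Posterior mean over the training set: each training image weighted
    by the Gaussian likelihood of it producing the noisy sample x_t."""
    # squared distance from x_t to each training image
    d2 = np.sum((train_images - x_t) ** 2, axis=(1, 2))
    # log-likelihood under isotropic Gaussian noise of scale sigma
    logw = -d2 / (2 * sigma ** 2)
    w = np.exp(logw - logw.max())  # subtract max for numerical stability
    w /= w.sum()
    # weighted average of training images
    return np.tensordot(w, train_images, axes=(0, 0))
```

At small noise scales the weights collapse onto the nearest training image, which is exactly why the optimal denoiser can only reproduce training data.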

1 year ago 3 0 1 0

Our theory provides clear mechanistic answers to these questions in the very simplest nontrivial case we could find. Specifically, we study the case of diffusion models with small, fully-convolutional backbones: no self-attention.

1 year ago 4 0 1 0

What is the origin of the apparent creative properties of diffusion models? How do their outputs relate to their training data? Also, why do diffusion models sometimes struggle with spatial consistency?

1 year ago 3 0 1 0