
Posts by Luca Ambrogioni


The University of Notre Dame is hiring 5 tenured or tenure-track professors in Neuroscience, including Computational Neuroscience, across 4 departments.

Come join me at ND! Feel free to reach out with any questions.

And please share!

apply.interfolio.com/173031

7 months ago 39 33 0 2

I am very happy to finally share something I have been working on, on and off, for the past year:

"The Information Dynamics of Generative Diffusion"

This paper connects entropy production, the divergence of vector fields, and spontaneous symmetry breaking.

link: arxiv.org/abs/2508.19897
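
For a flavor of how the first two ingredients connect, here is the textbook identity the title alludes to (a sketch, not the paper's derivation): along a probability flow, the entropy production rate equals the expected divergence of the generative vector field.

```latex
% For the probability flow \dot{x} = v(x,t), the continuity equation
% \partial_t p_t = -\nabla \cdot (p_t v) gives, after integration by parts,
\frac{dH[p_t]}{dt}
  = -\frac{d}{dt}\int p_t(x)\,\log p_t(x)\,dx
  = \mathbb{E}_{x\sim p_t}\big[\nabla \cdot v(x,t)\big]
```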

7 months ago 21 3 0 0

Many fail when the number of steps in the puzzle is in the thousands and any error leads to a wrong solution

10 months ago 0 0 0 0

Have you ever asked your child to solve a simple puzzle in 60,000 easy steps?

10 months ago 0 0 1 0

Students using AI to write their reports is like me going to the gym and getting a robot to lift my weights

10 months ago 58 16 2 3

Generative decisions in diffusion models can be detected locally as symmetry breaking in the energy and globally as peaks in the conditional entropy rate.

Both correspond to a (local or global) suppression of the quadratic potential (the Hessian trace).
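
As a toy illustration of the quantity being tracked (not the paper's method), here is a minimal PyTorch sketch that estimates the Hessian trace of a scalar energy with the Hutchinson estimator; the double-well `energy` below is a hypothetical stand-in for whatever energy one monitors along the trajectory:

```python
import torch

def energy(x):
    # Hypothetical stand-in energy: a double-well potential per dimension.
    return ((x ** 2 - 1.0) ** 2).sum(dim=-1)

def hessian_trace(energy_fn, x, n_probes=64):
    """Hutchinson estimate of tr(H), where H is the Hessian of the energy at x."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy_fn(x).sum(), x, create_graph=True)[0]
    estimate = torch.zeros(())
    for _ in range(n_probes):
        v = torch.randint_like(x, 2) * 2.0 - 1.0       # Rademacher +/-1 probe
        hvp = torch.autograd.grad(grad, x, grad_outputs=v, retain_graph=True)[0]
        estimate = estimate + (v * hvp).sum()          # v^T H v
    return estimate / n_probes                         # E[v^T H v] = tr(H)

x = torch.randn(1, 2)
print(hessian_trace(energy, x))  # the post's claim: suppressed at decision points
```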

11 months ago 7 0 0 0

🧠✨How do we rebuild our memories? In our new study, we show that hippocampal ripples kickstart a coordinated expansion of cortical activity that helps reconstruct past experiences.

We recorded iEEG from patients during memory retrieval... and found something really cool 👇(thread)

11 months ago 167 63 5 5

Why? You can just mute out the politics and the owner's antics, and it becomes perfectly fine again

11 months ago 3 0 4 0

In continuous generative diffusion, the conditional entropy rate is the constant term that separates the score matching loss from the denoising score matching loss.

This can be directly interpreted as the information transfer (bit rate) from the state x_t to the final generation x_0.
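
For reference, the decomposition in question is the standard denoising score matching identity, written per time t; identifying the theta-independent gap c(t) with the conditional entropy rate (up to the normalization spelled out in the paper) is the content of the post:

```latex
\mathbb{E}_{x_t}\big\| s_\theta(x_t,t) - \nabla_{x_t} \log p_t(x_t) \big\|^2
  = \mathbb{E}_{x_0, x_t}\big\| s_\theta(x_t,t)
      - \nabla_{x_t} \log p_t(x_t \mid x_0) \big\|^2 + c(t)
```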

11 months ago 21 5 0 0

Decisions during generative diffusion are analogous to phase transitions in physics. They can be identified as peaks in the conditional entropy rate curve!
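
A minimal sketch of how one might locate such peaks numerically, assuming an estimated conditional entropy rate on a time grid (all names and the toy curve here are illustrative):

```python
import numpy as np

# Hypothetical input: entropy_rate[i] estimates the conditional entropy rate
# at time t_grid[i] (e.g. from per-timestep denoising losses); toy curve here.
t_grid = np.linspace(0.0, 1.0, 200)
entropy_rate = np.exp(-((t_grid - 0.3) / 0.05) ** 2)

# Interior local maxima: points strictly higher than both neighbors.
is_peak = (entropy_rate[1:-1] > entropy_rate[:-2]) & (entropy_rate[1:-1] > entropy_rate[2:])
print(t_grid[1:-1][is_peak])   # candidate "decision" times, here ~0.3
```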

11 months ago 10 3 0 0

I'd put these on the NeuroAI vision board:

@tyrellturing.bsky.social's Deep learning framework
www.nature.com/articles/s41...

@tonyzador.bsky.social's Next-gen AI through neuroAI
www.nature.com/articles/s41...

@adriendoerig.bsky.social's Neuroconnectionist framework
www.nature.com/articles/s41...

11 months ago 34 10 2 1

Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠

11 months ago 19 2 1 0

Indeed. We are currently doing a lot of work on guidance, so we will likely try to use entropic time there as well soon

11 months ago 2 0 1 0

The largest we have tried so far is EDM2-XL on ImageNet 512. It works very well there!

We have not tried it with guidance so far

11 months ago 2 0 1 0

I am very happy to share our latest work on the information theory of generative diffusion:

"Entropic Time Schedulers for Generative Diffusion Models"

We find that the conditional entropy offers a natural data-dependent notion of time during generation

Link: arxiv.org/abs/2504.13612
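
A minimal sketch of the scheduler idea as I read it (illustrative, not the paper's code): given an estimated cumulative conditional entropy H(t) on a time grid, place the sampling steps at equal entropy increments by inverting H:

```python
import numpy as np

def entropic_schedule(t_grid, cumulative_entropy, n_steps):
    """Pick timesteps so that each step transfers the same amount of entropy.

    cumulative_entropy must be monotone on t_grid (e.g. estimated from
    per-timestep denoising losses); we invert it by linear interpolation.
    """
    targets = np.linspace(cumulative_entropy[0], cumulative_entropy[-1], n_steps)
    return np.interp(targets, cumulative_entropy, t_grid)

t_grid = np.linspace(0.0, 1.0, 1000)
H = np.sqrt(t_grid)                 # hypothetical monotone entropy curve
print(entropic_schedule(t_grid, H, n_steps=10))  # steps cluster where H is steep
```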

11 months ago 25 5 2 0

Flow Matching in a nutshell.
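
And here is the same nutshell as code: a minimal conditional flow matching loss with linear interpolation paths (a generic sketch of the standard recipe; the stand-in "network" is just a fixed random linear map for runnability):

```python
import torch

def flow_matching_loss(v_theta, x0, x1):
    """Conditional flow matching with linear paths x_t = (1 - t) x0 + t x1.

    The regression target for the velocity field is x1 - x0, the time
    derivative of the interpolation path.
    """
    t = torch.rand(x0.shape[0], 1)            # one time per sample in [0, 1]
    x_t = (1 - t) * x0 + t * x1
    target = x1 - x0
    return ((v_theta(x_t, t) - target) ** 2).mean()

# Toy usage: "transport" Gaussian noise (x0) to data (x1).
W = torch.randn(3, 2)
v_theta = lambda x, t: torch.cat([x, t], dim=1) @ W
x0, x1 = torch.randn(16, 2), torch.randn(16, 2)
print(flow_matching_loss(v_theta, x0, x1))
```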

1 year ago 52 7 1 1

I will be at #NeurIPS2024 in Vancouver. I’m looking for post-docs, and if you want to talk about post-doc opportunities, get in touch. 🤗

Here’s my current team at Aalto University: users.aalto.fi/~asolin/group/

1 year ago 15 5 0 0
NeurIPS Poster: Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts (NeurIPS 2024)

Can language models transcend the limitations of training data?

We train LMs on a formal grammar, then prompt them OUTSIDE of this grammar. We find that LMs often extrapolate logical rules and apply them OOD, too. Proof of a useful inductive bias.

Check it out at NeurIPS:

nips.cc/virtual/2024...
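
To make "prompting outside the grammar" concrete, here is a toy sketch (aⁿbⁿ is an illustrative formal language, not necessarily the one used in the paper): training strings stay inside the language, while the OOD prompt starts with a symbol no training string starts with.

```python
import random

# Toy formal language: L = { a^n b^n }. Implicit rules: all a's precede
# all b's, and the number of a's equals the number of b's.
def sample_string(max_n=8):
    n = random.randint(1, max_n)
    return "a" * n + "b" * n

def balanced(s):
    return s.count("a") == s.count("b")

train_set = [sample_string() for _ in range(5)]
print(train_set)                  # in-distribution strings, e.g. 'aaabbb'

# OOD prompt: no training string starts with 'b'. Rule extrapolation means
# the model still balances the counts when completing such a prompt.
ood_prompt = "ba"
hypothetical_completion = "ab"    # e.g. model output; 'baab' is balanced
print(balanced(ood_prompt + hypothetical_completion))   # True
```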

1 year ago 113 8 7 1
Photograph of Johannes Margraph and Günter Klambauer introducing the ELLIS ML4Molecules Workshop 2024 in Berlin at the Fritz-Haber Institute in Dahlem.

Excited to speak at the ELLIS ML4Molecules Workshop 2024 in Berlin!

moleculediscovery.github.io/workshop2024/

1 year ago 46 4 3 0

Can we please stop sharing posts that legitimize murder? Please.

1 year ago 1 0 0 0

Our team at Google DeepMind is hiring Student Researchers for 2025!

🧑‍🔬 Interested in understanding reasoning capabilities of neural networks from first principles?
🧑‍🎓 Currently studying for a BS/MS/PhD?
🧑‍💻 Have solid engineering and research skills?

🌟 We want to hear from you! Details in thread.

1 year ago 59 5 2 0
The left figure showcases the behavior of Hopfield models: given a query (the initial point of energy descent), a Hopfield model retrieves the memory (local minimum) closest to the query by minimizing the energy function. A perfect Hopfield model stores patterns in distinct minima (or buckets). In contrast, the right figure illustrates a bad associative memory system, where stored patterns share a single bucket. This enables the creation of spurious patterns, which appear as mixtures of stored patterns. Spurious patterns have lower energy than the memories due to this overlap.

Diffusion models create beautiful novel images, but they can also memorize samples from the training set. How does this blending of features enable the creation of novel patterns? Our new work in the Sci4DL workshop at #NeurIPS2024 shows that diffusion models behave like Dense Associative Memory networks.
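
For readers unfamiliar with the energy-descent picture in the caption, here is a minimal sketch of retrieval in a dense associative memory, using the standard softmax update rule (all values are toy):

```python
import numpy as np

def retrieve(memories, query, beta=4.0, n_iters=10):
    """Dense associative memory retrieval via the softmax update rule.

    Each iteration moves the state toward a convex combination of stored
    patterns, descending the log-sum-exp energy of modern Hopfield networks.
    """
    x = query.copy()
    for _ in range(n_iters):
        logits = beta * memories @ x
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()                  # softmax over memories
        x = memories.T @ weights                  # weighted recombination
    return x

memories = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])  # stored patterns
noisy_query = np.array([0.9, 1.2])
print(retrieve(memories, noisy_query))            # ~ closest pattern [1, 1]
```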

1 year ago 39 5 1 1

The naivete of these takes is always amusing.

They could equally be applied to human beings, and they would hold just as well.

1 year ago 1 0 0 0

There are indeed cases in which obtaining an SDE equivalence isn't straightforward

1 year ago 1 0 0 0

I have been saying all along that diffusion = flow matching.

Is this supposed to be some sort of news now??

1 year ago 4 0 1 0

However, flow matching theory doesn't provide much guidance on how to do stochastic sampling

It relies on the extra structure of diffusion
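
For context, the extra structure is the score: for diffusion processes the deterministic probability flow ODE and a stochastic reverse-time SDE share the same marginals (Song et al., 2021), so one can move between deterministic and stochastic samplers at will:

```latex
% Probability flow ODE (deterministic sampler):
dx = \Big[ f(x,t) - \tfrac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x) \Big]\, dt
% Reverse-time SDE with the same marginals p_t (stochastic sampler):
dx = \Big[ f(x,t) - g(t)^2\, \nabla_x \log p_t(x) \Big]\, dt + g(t)\, d\bar{W}_t
```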

1 year ago 1 0 1 0

Disagree, religious literacy is important

1 year ago 0 0 0 0
Samples y | x from Treeffuser vs. true densities, for multiple values of x under three different scenarios. Treeffuser captures arbitrarily complex conditional distributions that vary with x.

I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.

paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...

🧵(1/8)
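
I have not checked the package's actual API, so rather than guess it, here is a conceptual sketch of the recipe as described: fit a gradient-boosted tree to predict the noise added to y given (x, noisy y, t), then sample y | x by iterative denoising (schedule and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(2000)   # toy data

# Train a tree ensemble to predict the noise added to y at a random time t.
t = rng.uniform(0.0, 1.0, size=2000)
eps = rng.standard_normal(2000)
y_t = np.sqrt(1.0 - t) * y + np.sqrt(t) * eps               # simple noising
model = GradientBoostingRegressor().fit(np.column_stack([X[:, 0], y_t, t]), eps)

def sample(x, n_steps=50):
    """Draw one sample from p(y | x) by deterministic iterative denoising."""
    ts = np.linspace(0.99, 0.0, n_steps + 1)
    y_cur = rng.standard_normal()                # start near pure noise
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        eps_hat = model.predict([[x, y_cur, t_cur]])[0]
        y0_hat = (y_cur - np.sqrt(t_cur) * eps_hat) / np.sqrt(1.0 - t_cur)
        y_cur = np.sqrt(1.0 - t_next) * y0_hat + np.sqrt(t_next) * eps_hat
    return y_cur

print(sample(0.5))   # should land near sin(1.5) ~= 1.0 up to toy noise
```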

1 year ago 153 23 4 4