I'm excited to open the new year by sharing a new perspective paper.
I give an informal outline of molecular dynamics (MD) and how it can interact with Generative AI. Then I discuss how far the field has come since seminal contributions such as Boltzmann Generators, and what is still missing.
Measuring AI Progress in Drug Discovery - A NEW LEADERBOARD IN TOWN
2015-2025: turns out that there's hardly any improvement. AI bubble?
GPT is at 70% for this task, whereas the best methods get close to 85%.
Leaderboard: huggingface.co/spaces/ml-jk...
P: arxiv.org/abs/2511.14744
Posting a few nice importance sampling-related finds
"Value-aware Importance Weighting for Off-policy Reinforcement Learning"
proceedings.mlr.press/v232/de-asis...
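For readers new to the topic, here is a toy sketch of plain (textbook) importance weighting for off-policy estimation; this is the baseline estimator that value-aware variants like the paper above refine, not the paper's own method. All names and numbers are illustrative.

```python
# Vanilla importance weighting: estimate an on-policy expectation
# E_pi[q(a)] from samples drawn under a different behaviour policy mu.
import numpy as np

rng = np.random.default_rng(0)
actions = np.array([0, 1])
pi = np.array([0.8, 0.2])   # target policy probabilities (illustrative)
mu = np.array([0.5, 0.5])   # behaviour policy probabilities
q  = np.array([1.0, 3.0])   # hypothetical action values

a = rng.choice(actions, size=10_000, p=mu)   # off-policy data
w = pi[a] / mu[a]                            # importance weights pi/mu
print((w * q[a]).mean())                     # ~= (pi * q).sum() = 1.4
```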
Returning soon - stay tuned!
sites.google.com/view/monte-c...
I am very happy to finally share something I have been working on and off for the past year:
"The Information Dynamics of Generative Diffusion"
This paper connects entropy production, divergence of vector fields and spontaneous symmetry breaking
link: arxiv.org/abs/2508.19897
xLSTM for multivariate time series anomaly detection: arxiv.org/abs/2506.22837
“In our results, xLSTM showcases state-of-the-art accuracy, outperforming 23 popular anomaly detection baselines.”
Again, xLSTM excels in time series analysis.
New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719
🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?
w @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social 👇👇👇
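A quick numerical check of the claim, under the standard Gaussian-to-data linear interpolation path x_t = (1 - t) x0 + t x1 (conventions in the paper may differ): for a finite training set, the *exact* flow matching target u_t(x) = (E[x1 | x_t = x] - x) / (1 - t) has a closed form, and integrating it transports any noise sample onto a training point.

```python
# Integrate the exact (closed-form) flow matching velocity field for a tiny
# empirical "training set" and watch the sample land on a training point.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(5, 2))               # tiny training set

def exact_velocity(x, t):
    # Posterior weights p(x1^i | x_t = x): x_t | x1^i ~ N(t * x1^i, (1-t)^2 I)
    logw = -((x - t * data) ** 2).sum(1) / (2 * (1 - t) ** 2)
    w = np.exp(logw - logw.max()); w /= w.sum()
    x1_hat = w @ data                        # E[x1 | x_t = x]
    return (x1_hat - x) / (1 - t)

x, dt = rng.normal(size=2), 1e-3             # Euler-integrate the ODE from noise
for t in np.arange(0.0, 0.999, dt):
    x = x + dt * exact_velocity(x, t)
print(x)                                     # ends (numerically) at a data row
print(np.abs(data - x).sum(1).min())         # ~0: distance to nearest data point
```

Generalization therefore has to come from the network *not* fitting this target exactly, which is the puzzle the paper addresses.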
New preprint alert 🚨
How can you guide diffusion and flow-based generative models when data is scarce but you have domain knowledge? We introduce Minimum Excess Work, a physics-inspired method for efficiently integrating sparse constraints.
Thread below 👇 https://arxiv.org/abs/2505.13375
Need to predict bioactivity 🧪 but only have limited data ❌?
Try our interactive app for prompting MHNfs — a state-of-the-art model for few-shot molecule–property prediction. No coding or training needed. 🚀
📄 Paper:
pubs.acs.org/doi/10.1021/...
🖥️ App:
huggingface.co/spaces/ml-jk...
I have cleaned up my lecture notes on Optimal Transport for Machine Learners a bit: arxiv.org/abs/2505.06589
Many recent posts on free energy. Here is a summary from my class “Statistical mechanics of learning and computation” on the many relations between free energy, KL divergence, large deviation theory, entropy, Boltzmann distribution, cumulants, Legendre duality, saddle points, fluctuation-response…
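For concreteness, here is one of the standard identities such a summary revolves around (textbook statistical mechanics, not taken from the class notes themselves): the KL divergence to a Boltzmann distribution is an excess free energy.

```latex
% Boltzmann distribution p(x) = e^{-\beta E(x)} / Z, free energy F = -\beta^{-1} \ln Z.
% For any distribution q, define the variational free energy
%   F[q] = \mathbb{E}_q[E] - \beta^{-1} H[q].
% Then
\mathrm{KL}(q \,\|\, p)
  = \mathbb{E}_q[\ln q - \ln p]
  = \beta\,\mathbb{E}_q[E] - H[q] + \ln Z
  = \beta\,\bigl(F[q] - F\bigr) \;\ge\; 0,
% with equality iff q = p: minimizing free energy = minimizing KL to the Boltzmann distribution.
```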
I asked "on the other platform" what were the most important improvements to the original 2017 transformer.
That was quite popular and here is a synthesis of the responses:
Come check out SDE Matching, a new simulation-free framework for training fully general Latent/Neural SDEs (a generalisation of diffusion and bridge models), at the #ICLR2025 workshops.
FPI: Morning poster session
DeLTa: Afternoon poster session
#SDE #Bayes #GenAI #Diffusion #Flow
Excited to present our poster on Boltzmann priors for Implicit Transfer Operators tomorrow at @iclr-conf.bsky.social!
See you tomorrow at poster 13, 10-12:30.
1/11 Excited to present our latest work "Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics" at #ICLR2025 on Fri 25 Apr at 10 am!
#CombinatorialOptimization #StatisticalPhysics #DiffusionModels
My video interview with @quantamagazine.bsky.social about AI-designed physics experiments, AI as a Muse for new ideas in Science, and Artificial Scientists: www.youtube.com/watch?v=T_2Z...
📢 AI-discovered Gravitational Wave Detectors
published in @apsphysics.bsky.social Phys.Rev.X, with Rana Adhikari & Yehonathan Drori @ligo.org @caltech.edu @mpi-scienceoflight.bsky.social
journals.aps.org/prx/abstract...
Extremely happy to see this paper online after 3.5 years of work.
🧵1/5
We have been reworking the Quickstart guide of POT to show multiple examples of OT with the unified API, which gives direct access to the OT value/plan/potentials. It lets you select regularization/unbalancedness/low-rank/Gaussian OT with just a few parameters. pythonot.github.io/master/auto_...
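A minimal sketch of what the unified API looks like, assuming a recent POT release (check the Quickstart for the exact signatures and options):

```python
# Unified POT API: one entry point for exact, entropic, and unbalanced OT.
import numpy as np
import ot

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))          # source samples
y = rng.normal(size=(60, 2)) + 1.0    # target samples
M = ot.dist(x, y)                     # squared-Euclidean cost matrix

res = ot.solve(M)                     # exact OT, uniform weights by default
print(res.value)                      # OT value
print(res.plan.shape)                 # transport plan, shape (50, 60)

res_reg = ot.solve(M, reg=1.0)        # entropic regularization (Sinkhorn)
res_unb = ot.solve(M, unbalanced=5.0) # unbalanced OT (relaxed marginals)
f, g = res_reg.potentials             # dual potentials
```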
xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference
Meet the fastest 7B language model out there. Based on the mLSTM!
P: arxiv.org/abs/2503.13427
Tweedie's formula is super important in diffusion models & is also one of the cornerstones of empirical Bayes methods.
Given how easy it is to derive, it's surprising how recently it was discovered (the 1950s). It was only published a while later, after Tweedie communicated it to Robbins.
1/n
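The derivation really is a one-liner; here it is for the Gaussian case (the standard form used in diffusion models):

```latex
% Tweedie's formula for Gaussian noise, y = x + \varepsilon, \varepsilon \sim \mathcal{N}(0, \sigma^2 I):
%   \mathbb{E}[x \mid y] = y + \sigma^2 \nabla_y \log p(y).
% Derivation from p(y) = \int \mathcal{N}(y; x, \sigma^2 I)\, p(x)\, dx:
\nabla_y p(y) = \int \frac{x - y}{\sigma^2}\, \mathcal{N}(y; x, \sigma^2 I)\, p(x)\, dx
\;\Rightarrow\;
\sigma^2 \nabla_y \log p(y) = \mathbb{E}[x \mid y] - y .
```

So the posterior mean of the clean signal is the noisy observation plus a score correction, which is exactly the denoiser/score link exploited by diffusion models.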
Opportunity to work with @hochreitersepp.bsky.social , @jobrandstetter.bsky.social , and me!!
We have many open positions in machine learning, deep learning, LLMs!! Both for PostDocs and PhDs!
Join us!
We provide a classical simulation of the D-Wave quantum "s-word" paper.
Here it is: arxiv.org/abs/2503.08247. Great work by Linda Mauron at the CQS Lab, check it out! (1/4)
I shared a controversial take the other day at an event and I decided to write it down in a longer format: I’m afraid AI won't give us a "compressed 21st century"
Here: thomwolf.io/blog/scienti...
It's an extension of this interview discussion from the AI summit: youtu.be/AxBd3G0lFLs?...
My new paper "Deep Learning is Not So Mysterious or Different": arxiv.org/abs/2503.02113. Generalization behaviours in deep learning can be intuitively understood through a notion of soft inductive biases, and formally characterized with countable hypothesis bounds! 1/12
Thanks @zlatko-minev.bsky.social and hello bluesky world!
Luca (Martino) once told me (when I said "MCMC does not have weights") that this is incorrect, in his Sicilian style: when you reject in MCMC, you increase the weight of the current sample. Chains do have replicates and can be written as a weighted sample; a high rejection rate *is* weight degeneracy.
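To make this concrete, a small sketch (illustrative, not from any particular paper): a Metropolis-Hastings trace can be rewritten as a weighted sample by run-length encoding, where each rejection adds 1 to the weight of the last accepted state.

```python
# Rewrite an MH chain as (unique_state, weight) pairs: rejections repeat the
# current state, so consecutive duplicates are exactly the weights.
from itertools import groupby
import numpy as np

def chain_to_weighted_sample(chain):
    """Collapse consecutive duplicates into states and their multiplicities."""
    states, weights = [], []
    for state, run in groupby(chain):
        states.append(state)
        weights.append(sum(1 for _ in run))
    return np.array(states), np.array(weights)

chain = [0.3, 0.3, 0.3, 1.1, 1.1, -0.4, 0.3]   # toy MH trace with rejections
states, weights = chain_to_weighted_sample(chain)
print(states)   # [ 0.3  1.1 -0.4  0.3]
print(weights)  # [3 2 1 1]  -- many rejections => few states carry the mass
```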
Excited to share our work with friends from MIT/Google on Learned Asynchronous Decoding! LLM responses often contain chunks of tokens that are semantically independent. What if we could train LLMs to identify such chunks and decode them in parallel, thereby speeding up inference? 1/N
Excited about our progress in characterizing "The Computational Advantage of Depth in Learning with Neural Networks". Check out the number of samples that can be saved when GD runs on a multi-layer rather than on a two-layer neural network. arxiv.org/pdf/2502.13961
📢PSA: #NeurIPS2024 recordings are now publicly available!
The workshops always have tons of interesting things on at once, so the FOMO is real😵💫 Luckily it's all recorded, so I've been catching up on what I missed.
Thread below with some personal highlights🧵