(10/n) Supported by @bifold.berlin, @zuseschooleliza.bsky.social.
Posts by Winfried Ripken
(9/n) Check out our paper and code:
🌐 Website & Colab: ml4molsim.github.io/hamiltonian-...
📄 Paper: arxiv.org/abs/2601.22123
💻 Code: github.com/ML4MolSim/ha...
(8/n) Done with a brilliant team: @plainer.bsky.social, @gregorlied.bsky.social, @thorbenfrank.bsky.social, Oliver Unke, Stefan Chmiela, @franknoe.bsky.social, and Klaus-Robert Müller.
(7/n) With this, you get a single model that provides:
- Instantaneous forces (like a standard MLFF)
- Stable large-timestep updates far beyond classical integrators
- Training and inference cost comparable to MLFFs
(6/n) How?
Inspired by recent advances in few-step generative modeling, our tailored loss function combines force matching with a consistency constraint that enforces agreement of the predicted flow across different time horizons.
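In code terms, such an objective could look roughly like the sketch below. All names and the model interface here are hypothetical stand-ins, not the paper's implementation: the force-matching term ties the short-horizon momentum update to the reference forces, and the consistency term demands that one large step agrees with two chained half-steps.

```python
import numpy as np

def flow_map(params, q, p, dt):
    # Hypothetical learned flow map advancing (q, p) by dt.
    # Stand-in: a simple linear map so the sketch runs end to end.
    W = params
    return q + dt * (p @ W), p - dt * (q @ W)

def combined_loss(params, q, p, forces, dt):
    # Force matching: as dt -> 0, the momentum update should
    # recover the instantaneous forces (dp/dt = F).
    eps = 1e-3
    _, p_eps = flow_map(params, q, p, eps)
    force_term = np.mean(((p_eps - p) / eps - forces) ** 2)

    # Consistency: one step of size dt must agree with two
    # successive steps of size dt / 2.
    q1, p1 = flow_map(params, q, p, dt)
    qh, ph = flow_map(params, q, p, dt / 2)
    q2, p2 = flow_map(params, qh, ph, dt / 2)
    cons_term = np.mean((q1 - q2) ** 2) + np.mean((p1 - p2) ** 2)

    return force_term + cons_term
```

Note that both terms use only single phase-space samples and their forces, which is why no reference trajectories are needed.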
(5/n) Our approach:
In contrast, our method learns continuous-time, large-timestep dynamics directly from decorrelated phase-space samples without requiring expensive reference trajectories, and supports arbitrary timesteps during inference.
(4/n) Existing approaches rely on trajectory data, typically generated using another model simulated with small timesteps. This potentially introduces artifacts from the teacher and is computationally expensive.
Can we learn large timesteps without ever seeing trajectories?
(3/n) The solution: Hamiltonian Flow Maps
The core idea is to model the Hamiltonian evolution directly in phase space over a finite time interval. Concretely, we aim to learn a Hamiltonian Flow Map that advances positions and momenta over an interval Δt.
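To make the contrast concrete, here is a classical integrator next to the interface a learned flow map would expose (the interface is a hypothetical sketch, not the paper's API):

```python
import numpy as np

def leapfrog(q, p, force_fn, dt, n_steps):
    """Classical kick-drift-kick integrator: many small steps of size dt."""
    for _ in range(n_steps):
        p = p + 0.5 * dt * force_fn(q)
        q = q + dt * p          # unit masses for simplicity
        p = p + 0.5 * dt * force_fn(q)
    return q, p

# A learned Hamiltonian flow map would instead expose a single call
#   q_new, p_new = flow_map(q, p, big_dt)
# i.e. one network evaluation advancing phase space by big_dt,
# with big_dt chosen freely at inference time.
```

For example, advancing a harmonic oscillator (force_fn = lambda q: -q) by one period with leapfrog takes hundreds of small steps; a flow map would cover the same interval in one evaluation.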
(2/n) The problem: Simulations are fundamentally limited by the small timesteps required for stable numerical integration. Even with ML, most of the cost still comes from taking millions of tiny integration steps.
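To put rough numbers on it (typical MD values, not figures from the paper): with a ~0.5 fs timestep, even a single nanosecond of simulation costs millions of force evaluations.

```python
dt_fs = 0.5                      # typical MD timestep in femtoseconds
sim_ns = 1.0                     # target simulation length in nanoseconds
steps = sim_ns * 1e6 / dt_fs     # 1 ns = 1e6 fs
print(int(steps))                # prints 2000000
```

Every one of those steps is a full force evaluation, which is exactly where the cost of ML force fields concentrates.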
Ever get tired of tiny timesteps bottlenecking your MD simulations?
We show how to train a model for large-timestep Hamiltonian dynamics directly on standard MLFF datasets. No reference trajectories, no unrolling, no teacher needed!
🧵👇
Link to github: github.com/ML4MolSim/di...
Paper on arxiv: arxiv.org/abs/2506.15378
✨ Via a modular architecture, we enable a fair comparison of symmetry treatments, varying both the attention mechanism and the embedding strategies of our model
✨ We transfer the powerful DiT architecture from Computer Vision to the molecular domain, proposing two complementary graph-based conditioning strategies
We introduce DiTMC, a new way to predict molecular conformers - the different 3D shapes molecules can flex into.
✨ We learn to predict 3D geometry from molecular structure
✨ We achieve state-of-the-art results on the GEOM benchmarks
I'm excited to be at NeurIPS 2025 next week and present our latest paper on molecular conformer generation! Huge thanks to my co-authors @thorbenfrank.bsky.social, Gregor Lied, Klaus-Robert Müller, Oliver Unke and Stefan Chmiela for an incredible collaboration. Supported by: @bifold.berlin