
Posts by Winfried Ripken

(10/n) Supported by @bifold.berlin, @zuseschooleliza.bsky.social.

2 months ago
Learning Hamiltonian Flow Maps

(9/n) Check out our paper and code:
๐ŸŒ Website & Colab: ml4molsim.github.io/hamiltonian-...
๐Ÿ“„ Paper: arxiv.org/abs/2601.22123
๐Ÿ’ป Code: github.com/ML4MolSim/ha...

2 months ago

(8/n) Done with a brilliant team: @plainer.bsky.social, @gregorlied.bsky.social, @thorbenfrank.bsky.social, Oliver Unke, Stefan Chmiela, @franknoe.bsky.social, and Klaus-Robert Müller.

2 months ago

(7/n) With this, you get a single model that provides:
- Instantaneous forces (like a standard MLFF)
- Stable large-timestep updates far beyond classical integrators
- Training and inference cost comparable to MLFFs
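The cost claim in the last bullet can be made concrete with back-of-envelope arithmetic; the timestep values below are illustrative assumptions, not numbers from the paper.

```python
# Illustrative cost comparison: a classical integrator needs one
# force call per small step delta_t, while a flow map takes one
# network call per large step dt (assumed here to cost roughly the
# same as one force call). All numbers are hypothetical.
delta_t = 0.5e-15   # typical MD integrator timestep, 0.5 fs
dt = 50e-15         # hypothetical flow-map step, 50 fs
sim_time = 1e-9     # 1 ns of simulated dynamics

integrator_calls = round(sim_time / delta_t)   # force calls for small steps
flow_map_calls = round(sim_time / dt)          # one call per large step
speedup = integrator_calls / flow_map_calls
print(integrator_calls, flow_map_calls, speedup)
```

Under these assumed numbers, a 100× larger step translates directly into 100× fewer model evaluations per unit of simulated time.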

2 months ago

(6/n) How?

Inspired by recent advances in few-step generative modeling, our tailored loss function combines force matching with a consistency constraint that enforces agreement of the predicted flow across different time horizons.
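A minimal sketch of such a loss, assuming a generic `model(q, p, dt)` that returns advanced phase-space coordinates; the function names and the finite-difference force extraction are illustrative, not the paper's implementation.

```python
import numpy as np

def flow_loss(model, forces, q, p, dt1, dt2, lam=1.0):
    """Hypothetical sketch of the training signal described above.

    Two terms:
      1. force matching: over an infinitesimal step, the flow map's
         momentum update must reproduce the reference forces,
         p' ~ p + eps * F(q);
      2. consistency: one large step over dt1 + dt2 must agree with
         two chained smaller steps over dt1 then dt2.
    """
    eps = 1e-3
    # (1) force matching via a small-step finite difference
    _, p_eps = model(q, p, eps)
    force_pred = (p_eps - p) / eps
    l_force = np.mean((force_pred - forces(q)) ** 2)

    # (2) self-consistency across time horizons
    q_big, p_big = model(q, p, dt1 + dt2)
    q_two, p_two = model(*model(q, p, dt1), dt2)
    l_cons = np.mean((q_big - q_two) ** 2 + (p_big - p_two) ** 2)
    return l_force + lam * l_cons
```

An exact flow map drives both terms to (near) zero, so the loss is minimized by the true dynamics; for a neural `model`, both terms are differentiable and can be optimized jointly.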

2 months ago

(5/n) Our approach:

In contrast, our method learns continuous-time, large-timestep dynamics directly from decorrelated phase-space samples without requiring expensive reference trajectories, and supports arbitrary timesteps during inference.
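On a toy system the "no trajectories" setup is easy to picture: for a harmonic oscillator the Boltzmann distribution is Gaussian, so decorrelated phase-space samples can be drawn i.i.d. (a hypothetical data pipeline, not the paper's datasets).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data pipeline: for a 1-D harmonic oscillator with
# H = p**2/2 + q**2/2 at temperature kT = 1, the Boltzmann
# distribution is a standard Gaussian in both q and p, so training
# samples can be drawn independently -- no trajectory is simulated.
q = rng.standard_normal(10_000)
p = rng.standard_normal(10_000)
force_labels = -q   # force labels, as in a standard MLFF dataset

# Each (q, p, force) triple is an independent training example;
# consecutive samples are uncorrelated by construction.
```

Contrast this with trajectory-based training, where consecutive frames are strongly correlated and every frame costs an integration step.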

2 months ago

(4/n) Existing approaches rely on trajectory data, typically generated by simulating another model with small timesteps. This can introduce artifacts from the teacher model and is computationally expensive.

Can we learn large timesteps, without ever seeing trajectories?

2 months ago

(3/n) The solution: Hamiltonian Flow Maps

The core idea is to model the Hamiltonian evolution directly in phase space over a finite time interval. Concretely, we aim to learn a Hamiltonian Flow Map that advances positions and momenta over an interval Δt.
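For a 1-D harmonic oscillator the flow map is known in closed form, which makes the learning target concrete: one evaluation advances (q, p) by an arbitrary Δt (a toy illustration, not the paper's model).

```python
import math

def flow_map(q, p, dt, omega=1.0):
    """Exact Hamiltonian flow map of a 1-D harmonic oscillator:
    advances (q, p) over a finite interval dt in a single evaluation,
    however large dt is. A learned flow map approximates this object
    for general potentials, where no closed form exists."""
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    return q * c + (p / omega) * s, p * c - omega * q * s
```

Two properties carry over to the learned setting: composing two steps equals one longer step (the consistency constraint), and the map conserves energy exactly.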

2 months ago

(2/n) The problem: Simulations are fundamentally limited by the small timesteps required for stable numerical integration. Even with ML, most of the cost still comes from taking millions of tiny integration steps.
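The stability limit is easy to demonstrate on a toy system: velocity Verlet on a 1-D harmonic oscillator conserves energy for small Δt but blows up once ωΔt exceeds 2 (a self-contained sketch, not from the paper).

```python
def verlet(q, p, dt, n_steps, omega=1.0):
    """Velocity-Verlet integration of a 1-D harmonic oscillator.
    The force is -omega**2 * q; the scheme is stable only for
    dt < 2 / omega."""
    for _ in range(n_steps):
        p = p + 0.5 * dt * (-omega**2 * q)   # half kick
        q = q + dt * p                        # drift
        p = p + 0.5 * dt * (-omega**2 * q)   # half kick
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p**2 + 0.5 * omega**2 * q**2

q0, p0, e0 = 1.0, 0.0, 0.5

# Small timestep: energy stays close to its initial value.
q, p = verlet(q0, p0, dt=0.1, n_steps=1000)
print(abs(energy(q, p) - e0))

# Timestep beyond the 2/omega stability limit: energy explodes.
q, p = verlet(q0, p0, dt=2.5, n_steps=50)
print(energy(q, p))
```

This hard ceiling on Δt is exactly what a flow map sidesteps: it is not built from repeated small force updates, so it is not bound by the integrator's stability condition.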

2 months ago

Ever get tired of tiny timesteps bottlenecking your MD simulations?

We show how to train a model for large-timestep Hamiltonian dynamics directly on standard MLFF datasets. No reference trajectories, no unrolling, no teacher needed!

🧵👇

2 months ago

Link to GitHub: github.com/ML4MolSim/di...
Paper on arXiv: arxiv.org/abs/2506.15378

4 months ago

✨ Via a modular architecture, we enable a fair comparison of symmetry choices, varying both the attention mechanism and the embedding strategy of our model
✨ We transfer the powerful DiT architecture from computer vision to the molecular domain, proposing two complementary graph-based conditioning strategies

4 months ago

We introduce DiTMC, a new way to predict molecular conformers - the different 3D shapes molecules can flex into.
✨ We learn to predict 3D geometry from molecular structure
✨ We achieve state-of-the-art results on the GEOM benchmarks

4 months ago

I'm excited to be at NeurIPS 2025 next week to present our latest paper on molecular conformer generation! Huge thanks to my co-authors @thorbenfrank.bsky.social, Gregor Lied, Klaus-Robert Müller, Oliver Unke and Stefan Chmiela for an incredible collaboration. Supported by: @bifold.berlin

4 months ago