... so definitely more than 4.4 but less than 4.6 million? /s
source?
Posts by Sander Vandenhaute
Just expressing my support for typst as well! It’s mature enough to typeset a PhD thesis, has a very mellow learning curve, and has a great community.
Orb-v3 out now -- achieves SOTA on speed *and* accuracy
arxiv.org/abs/2504.06231
Ridgeline chart showing the distribution of global daily air temperature differences from the pre-industrial reference period (1850-1900), for every year between 1940 and 2024. Each individual year resembles a hill, shaded in a darker shade of red and shifted further to the right for warmer years. The trend is clearly towards warmer years, with 2024 standing out as the first year above 1.5C.
NEW: 2024 has just been confirmed as the warmest year on record, and the first to breach the 1.5C threshold.
We used a ridgeline (Joy Division inspired) chart to visualise daily temperature anomalies since 1940.
2024 clearly stands out with 100% of its days above 1.3C and 75% above 1.5C.
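A ridgeline like the one described can be sketched in a few lines of matplotlib: one filled density curve per year, stacked with a vertical offset and a warming colour ramp. The daily anomalies below are synthetic stand-ins (the real chart uses observed daily data), so the shapes are illustrative only.

```python
# Minimal ridgeline ("Joy Division" style) sketch of daily temperature
# anomalies per year. Data is synthetic -- only the layout is the point.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = list(range(1940, 2025))
bins = np.linspace(-1.0, 2.5, 80)
centers = 0.5 * (bins[:-1] + bins[1:])

fig, ax = plt.subplots(figsize=(6, 10))
for i, year in enumerate(years):
    # synthetic daily anomalies: mean warms steadily towards ~1.6C in 2024
    mean = -0.3 + 1.9 * (year - 1940) / (len(years) - 1)
    daily = rng.normal(mean, 0.25, size=365)
    density, _ = np.histogram(daily, bins=bins, density=True)
    offset = -i * 0.6  # stack years vertically, oldest on top
    ax.fill_between(centers, offset, offset + density,
                    color=plt.cm.Reds(i / (len(years) - 1)),
                    lw=0.5, edgecolor="k")
ax.axvline(1.5, ls="--", c="gray")  # the 1.5C threshold
ax.set_xlabel("daily anomaly vs 1850-1900 (C)")
ax.set_yticks([])
fig.savefig("ridgeline.png")
```

Each "hill" is just `fill_between` of a per-year histogram against its baseline offset; the overlap between rows is what gives the chart its characteristic look.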
typst (the definitive Latex successor) and manim (for stunning visuals/movies/slideshows) are two incredibly useful pieces of software
typst.app
github.com/3b1b/manim
Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social Remi Emonet and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition!
We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423
Here I was thinking I’d have a hard time convincing people RPA is empirical.
Do quantum Monte Carlo techniques have true potential, or are we stuck with decades-old approximations invented by highly noncomputational scientists?
simple Python API to drive a single 'master' job which then does everything else!
scalable molecular simulation: github.com/molmod/psiflow
scientific: ML potentials, DFT and post-HF calculations, (path-integral) MD, replica exchange, alchemical ΔF, hessians, ...
technical: automated job submission, simple Python, scales to >100 nodes, containerized!
is there a #compchem starter pack here?
a golden (😂) PES
Actually, from that perspective, even a 1000x slowdown could be acceptable since it would be used less for super long MDs and more for building models above and beyond atomic-level MD...
From a distance (and this is probably controversial), it feels like AF has made so much progress that would otherwise have required decades of atomic simulations?
For drug discovery, do you think more accurate atomic interactions are the way to go, or will people gradually abandon bottom-up atomic-level simulations?
So as long as all possible low-density environments are included in training, putting a limit at fixed cutoff makes sense?
Hmm, yeah, and maybe the fixed neighbors thing is just a trick to speed up training and improve performance on the synthetic benchmarks
At the same time: beyond a “threshold” number of neighbors there is maybe so much screening that the required # neighs to include becomes a constant?
I should really get out of my matsci cave because I am so unaware of these things 😂
I was imagining OpenMM with a bunch of custom force expressions, PME, and anisotropic pressure control (often needed in the solid state) at roughly 1 ms/step, and maybe 100 ms/step for MACE on a similar system, approx.
at least in my experience!
Although the leading Matbench entries usually truncate the number of neighbors considered to a fixed value, so the cost of a single message-passing layer no longer scales with density …
Right sorry, wasn’t counting bio! Though I think it’s more the increased density rather than sheer system size that widens the performance gap?
In matsci / catalysis, with proper enhanced sampling, the main worry is not so much the achievable time scales as the accuracy of the QM data…
100x because that’s how much slower an “optimized” ML potential for any particular system would be. I might be optimistic here but a small MACE network and the right training data have always gotten me below ~1 meV/atom and ~50 meV/A errors.
Epic capture ….Grand Canyon National Park in Arizona ✨👏✨😎✨
Thrilled to announce Boltz-1, the first open-source and commercially available model to achieve AlphaFold3-level accuracy on biomolecular structure prediction! An exciting collaboration with Jeremy, Saro, and an amazing team at MIT and Genesis Therapeutics. A thread!
new SOTA on collective variable learning!
gist? Train classifier in feature space of pretrained GNN to predict 'phase' of an atomic geometry:
CV(A->B) = logit(B) - logit(A)
+data-efficient
+invariant wrt trans/rot/perm
+compatible w foundation models!
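The logit-difference CV above can be sketched end to end with a toy setup. The pretrained-GNN featurizer is replaced here by a random linear map, and the "geometries" are synthetic 3D points clustered into two states A and B; everything except the CV definition itself (logit(B) - logit(A)) is a stand-in assumption.

```python
# Sketch: train a classifier on features of geometries labelled A/B,
# then define CV(x) = logit_B(x) - logit_A(x). For a binary classifier
# this difference is just the single logit of the model.
import numpy as np

rng = np.random.default_rng(1)
D = 16                          # feature dimension of the stand-in featurizer
W = rng.normal(size=(3, D))     # fake "pretrained GNN": coords -> features

def featurize(x):               # x: (n, 3) toy "geometries"
    return np.tanh(x @ W)

# toy training data: state A clustered at -1, state B at +1 (first coord)
xA = rng.normal([-1.0, 0.0, 0.0], 0.2, size=(200, 3))
xB = rng.normal([+1.0, 0.0, 0.0], 0.2, size=(200, 3))
X = featurize(np.vstack([xA, xB]))
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = A, 1 = B

# logistic regression in the frozen feature space, plain gradient descent
w, b = np.zeros(D), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def cv(x):
    """CV(A->B) = logit(B) - logit(A): negative in A, positive in B."""
    return featurize(x) @ w + b
```

Because only the small classifier head is trained while the featurizer stays frozen, the approach is data-efficient, inherits the featurizer's invariances, and plugs directly into foundation models, as the post notes.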