Pretraining implicit-solvent, or coarse-grained, models is difficult, limiting the transferability of the resulting ML force fields. Justin has now worked out a novel pretraining approach that uses protein language models. More can be found at: arxiv.org/abs/2601.05388
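(Not the paper's actual method; see the arXiv link above for that. Below is only a rough sketch of the general idea of pretraining a coarse-grained force field on protein-language-model features via force matching against atomistic reference data. All class names, shapes, and hyperparameters are hypothetical.)

```python
# Rough sketch (not the paper's implementation): pretrain a coarse-grained
# energy head on protein-language-model (PLM) embeddings via force matching.
import torch
import torch.nn as nn

class CGEnergyHead(nn.Module):
    """Maps per-residue PLM embeddings plus bead positions to a scalar energy."""
    def __init__(self, embed_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + 3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, embeddings: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
        # embeddings: (L, D) from a frozen protein language model
        # positions:  (L, 3) coarse-grained bead coordinates
        per_residue = self.mlp(torch.cat([embeddings, positions], dim=-1))
        return per_residue.sum()  # total energy of the configuration

def force_matching_loss(model, embeddings, positions, ref_forces):
    """Pretraining objective: match -dE/dx to reference atomistic forces."""
    positions = positions.clone().requires_grad_(True)
    energy = model(embeddings, positions)
    forces = -torch.autograd.grad(energy, positions, create_graph=True)[0]
    return ((forces - ref_forces) ** 2).mean()
```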
Excited to see our IGME models applied to unravel the free energy landscape of the tetra-nucleosome. Wonderful collaboration with @binzmit.bsky.social! Congrats to all the authors!
Join us for the summer!
Thrilled to share our new JCP editorial (shorturl.at/6mKdY), co-authored with @tamar_schlick, introducing the special collection “Chromatin Structure and Dynamics: Recent Advancements.”
Huge thanks to all the outstanding contributors!
We welcome first-year graduate student Chenxi Ye to the group!
Our preprint on a single-bead-per-nucleotide DNA model is live! The model reproduces both atomistic simulations and persistence lengths for a variety of sequences, and it's compatible with OpenMM, allowing for highly efficient GPU simulations.
www.biorxiv.org/content/10.1....
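(For anyone curious what "compatible with OpenMM" looks like in practice, here is a minimal, generic GPU setup sketch. The model's own loaders and parameters are described in the preprint; the file names 'dna.pdb' and 'system.xml' and the integrator settings below are purely illustrative.)

```python
# Minimal OpenMM GPU setup sketch (generic; the DNA model's own setup API is
# in the preprint -- 'dna.pdb' and 'system.xml' here are hypothetical inputs).
import openmm as mm
import openmm.app as app
from openmm import unit

pdb = app.PDBFile('dna.pdb')            # hypothetical coarse-grained coordinates
with open('system.xml') as f:           # hypothetical serialized System with the model's forces
    system = mm.XmlSerializer.deserialize(f.read())

integrator = mm.LangevinMiddleIntegrator(300 * unit.kelvin,
                                         1.0 / unit.picosecond,
                                         10.0 * unit.femtoseconds)
platform = mm.Platform.getPlatformByName('CUDA')  # run on the GPU
simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.reporters.append(app.DCDReporter('traj.dcd', 1000))
simulation.step(100_000)
```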
First post on Bluesky.