
Posts by Wei-Tse Hsu

Structural basis for prostaglandin and drug transport via SLCO2A1 Nature Communications - SLCO2A1 (also known as OATP2A1) is responsible for the transport of eicosanoids, including prostaglandins (PGs), as well as of a subset of nonsteroidal anti-inflammatory...

Happy to see our work on SLCO2A1 with @smlea.bsky.social, Nakanishi and Newstead labs out now. Important insight into how prostaglandin and many drugs are transported. Hats off to @weitse-hsu.bsky.social for computational work!

@oxfordbiochemistry.bsky.social

www.nature.com/articles/s41...

1 month ago

Now out in JACS! 🎉 : "Computing Solvation Free Energies of Small Molecules with Experimental Accuracy"! It's been a pleasure to collaborate on this with Harry Moore (@jhmchem.bsky.social) & Gábor Csányi pubs.acs.org/doi/10.1021/...

2 months ago

New Preprint!! We show that binding entropy can be quantitatively predicted from crystallographic ensemble models, accounting for both protein conformational entropy and solvent entropy! www.biorxiv.org/content/10.6...

2 months ago
Can AI-Predicted Complexes Teach Machine Learning to Compute Drug Binding Affinity? We evaluate the feasibility of using co-folding models for synthetic data augmentation in training machine learning-based scoring functions (MLSFs) for binding affinity prediction. Our results show th...

🚀 Bottom line:
With careful filtering, co-folding predictions can indeed teach ML about binding affinity.

👉 Read the full JCIM paper: pubs.acs.org/doi/full/10....

Work with Aniket Magarkar
@boehringerglobal.bsky.social and @philbiggin.bsky.social @ox.ac.uk

(6/6)

3 months ago

🔎 SI highlights:
- AEV-PLIG beats Boltz-2 in 4 target classes in the FEP benchmark (loses 1, ties 6); both are competitive with FEP+ in some cases.
- ipLDDT & ligand pLDDT are also effective filters; pTM, PAE, PDE are not
- Boltz confidence seems to generalize better than its structure module
(5/6)

3 months ago

❓ Are co-folding predictions good enough to train scoring functions?

👉 Yes, with careful filtering. We see no performance difference between models trained on:
- experimental structures
- corresponding co-folding predictions

This holds across AEV-PLIG, EHIGN, and RF-Score.
(4/6)

3 months ago

❓ When can we trust a co-folding prediction?

👉 From reproducing HiQBind with Boltz-1x, a few simple heuristics are recommended for high-quality co-folding augmentation:
1️⃣ single-chain systems
2️⃣ Boltz confidence > 0.9
3️⃣ train–test similarity > 60%

(3/6)
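The three heuristics above can be sketched as a simple filter. This is a minimal illustration only: the function and argument names are hypothetical, not the paper's code; the thresholds are the ones quoted in the post.

```python
def keep_for_augmentation(n_chains: int,
                          boltz_confidence: float,
                          train_test_similarity: float) -> bool:
    """Return True if a co-folded complex passes all three heuristics."""
    return (
        n_chains == 1                       # 1) single-chain systems only
        and boltz_confidence > 0.9          # 2) high Boltz confidence
        and train_test_similarity > 0.60    # 3) train–test similarity > 60%
    )

examples = [
    {"n_chains": 1, "boltz_confidence": 0.95, "train_test_similarity": 0.72},
    {"n_chains": 2, "boltz_confidence": 0.95, "train_test_similarity": 0.72},
    {"n_chains": 1, "boltz_confidence": 0.85, "train_test_similarity": 0.72},
]
print([keep_for_augmentation(**e) for e in examples])  # [True, False, False]
```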

3 months ago

❓ How much can data augmentation actually improve scoring?

👉 Short answer: only if the added data are high-quality. Adding BindingNet v1 clearly improved performance, but v2 did not—despite being 10x larger—due to its substantially lower quality.

Quality beats quantity.
(2/6)

3 months ago

📢 Can AI-Predicted Complexes Teach Machine Learning to Compute Drug Binding Affinity?

In our recent JCIM work, we tested whether co-folding models can be used to augment the data for training ML-based scoring functions (SFs).

We asked 3 simple but critical questions. 👇
(1/6)

3 months ago