Special thanks to Biogen and CIFAR for the support, and
@proceduralia.bsky.social + @pierrelucbacon.bsky.social
for their valuable supervision, and to the entire Mila community for their feedback, discussions, and support. Code, paper, and models are public: github.com/ddidacus/mol...
Posts by Diego Calanzone
Mol-MoE scales better with additional property experts than classic merging, showing larger gains and the highest overall scores. Simple reward scalarization does not work here. Next, we aim to calibrate Mol-MoE further and test it on larger sets of objectives.
The resulting model achieves a smaller mean absolute error when generating compounds with the requested properties, surpassing the alternative methods. Arguably, the learned routing functions help mitigate task interference.
But the relationship between interpolation coefficients and the resulting properties isn't strictly linear, so a calibration function is needed. Mol-MoE addresses this by training only the routers to predict optimal merging weights from the prompt, enabling more precise control with less interference.
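The router idea above can be sketched as a small function mapping the property targets in the prompt to merging coefficients. This is a minimal illustration, not the repo's actual API: `PromptRouter` and its linear weights are made up for the example (a real router would be a small trained network).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class PromptRouter:
    """Hypothetical sketch of the Mol-MoE routing idea: map the property
    targets parsed from the prompt to merging weights over the experts.
    The linear parameters here are placeholders; in Mol-MoE only the
    routers are trained while the experts stay frozen."""
    def __init__(self, n_props, n_experts):
        # stand-in for learned router parameters
        self.W = [[0.1 * (i + j + 1) for j in range(n_props)]
                  for i in range(n_experts)]

    def __call__(self, prop_targets):
        logits = [sum(w * p for w, p in zip(row, prop_targets))
                  for row in self.W]
        return softmax(logits)  # coefficients are positive and sum to 1
```

The softmax guarantees a valid convex combination, so the predicted coefficients can be plugged directly into a weight-space merge of the experts.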
Think, think, think... what if we trained experts on single properties separately and leveraged model merging techniques to obtain a multi-property model? We re-implement rewarded soups and obtain a robust baseline capable of generating high-quality, out-of-distribution samples.
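The merging step above amounts to a convex combination of the experts' parameters. A minimal sketch, assuming plain Python floats in place of real tensor state dicts (`merge_experts` is a hypothetical helper, not the repo's actual code):

```python
def merge_experts(expert_weights, coeffs):
    """Rewarded-soups-style merge: take a convex combination of the
    parameters of single-property expert models. Each expert is a dict
    mapping parameter names to values; with real models these would be
    torch tensors from state_dicts."""
    assert abs(sum(coeffs) - 1.0) < 1e-9, "coefficients must sum to 1"
    return {
        name: sum(c * w[name] for c, w in zip(coeffs, expert_weights))
        for name in expert_weights[0]
    }

# two toy "experts", each a dict of parameters
qed_expert = {"w": 1.0, "b": 0.0}
tox_expert = {"w": 3.0, "b": 2.0}
merged = merge_experts([qed_expert, tox_expert], [0.25, 0.75])
# merged["w"] == 2.5, merged["b"] == 1.5
```

Varying the coefficients trades one property off against another without any retraining, which is what makes this family of methods steerable at inference time.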
In our ablation studies, instruction-tuned models struggle with higher property values due to the lack of explicit optimization. Even RL fine-tuning on multiple objectives can hit performance plateaus or decline, and re-balancing the objectives requires re-training, limiting steerability.
Drug discovery is inherently a multi-objective optimization problem: candidate molecules must not only bind effectively to target proteins to trigger a specific function, but also meet safety and compatibility criteria to become drugs. Is supervised learning sufficient?
Molecule sequence models learn vast molecular spaces, but how do we navigate them efficiently? We explored multi-objective RL, SFT, and merging, but these fall short in balancing control and diversity. We introduce **Mol-MoE**: a mixture of experts for controllable molecule generation🧵
Finally, LOgically COnsistent (LoCo) LLaMas can outperform solver-based baselines and SFT! I thank @nolovedeeplearning.bsky.social and @looselycorrect.bsky.social for their guidance in realizing this project; get in touch or come chat in Singapore!
arxiv.org/abs/2409.13724
Our method makes LLaMa's knowledge more consistent with any given knowledge graph while seeing only a portion of it! It can transfer logical rules to similar or derived concepts. As proposed by @ekinakyurek.bsky.social et al., you can use an LLM-generated KB to reason over its knowledge.
Yes! We propose to leverage the Semantic Loss as a regularizer: it maximizes the likelihood of world (model) assignments satisfying any given logical rule. We thus include efficient solvers in the training pipeline to perform model counting on the LLM's own beliefs.
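For a handful of variables, the idea behind the Semantic Loss can be shown by brute-force enumeration instead of the efficient solvers used in practice. A minimal sketch, assuming independent per-variable truth probabilities; `semantic_loss` and the lambda-based rule encoding are illustrative, not the paper's circuit-based implementation:

```python
import math
from itertools import product

def semantic_loss(probs, rule):
    """Semantic Loss sketch: the negative log of the probability mass
    the model places on assignments that satisfy `rule`.
    `probs` holds independent truth probabilities per variable;
    `rule` is a Python predicate over a boolean assignment."""
    wmc = 0.0  # weighted model count over satisfying assignments
    for assignment in product([False, True], repeat=len(probs)):
        weight = 1.0
        for p, value in zip(probs, assignment):
            weight *= p if value else (1.0 - p)
        if rule(assignment):
            wmc += weight
    return -math.log(wmc)

# modus ponens as a constraint: A -> B, with P(A)=0.9, P(B)=0.2
implies = lambda a: (not a[0]) or a[1]
loss = semantic_loss([0.9, 0.2], implies)
# WMC = 0.1 + 0.9*0.2 = 0.28, so loss = -log(0.28) ≈ 1.273
```

The loss goes to zero as the model's beliefs concentrate on satisfying assignments, which is why it works as a differentiable consistency regularizer; solvers replace the exponential enumeration at scale.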
Various background works focus on instilling single consistency rules, e.g. A and not A can't both be true (negation, Burns et al.), or A is true and A implies B, thus B is true (modus ponens). Can we derive a general objective function that combines logical rules dynamically?
🥳 "Logically Consistent Language Models via Neuro-Symbolic Integration" just accepted at #ICLR2025!
We focus on instilling logical rules in LLMs with an efficient loss, leading to higher factuality & (self) consistency. How? 🧵
RNA FISH -> a fish
I've used Cursor (based on Claude Sonnet 3.5) instead of VS Code for a week now. Early feedback:
✔️ great to parallelize training and inference
✔️ multi-file context, can easily set up hyperparam sweeps
✔️ great to visualize results with high level guidance. Welcome spider plots!
Test of Time Paper Awards are out! 2014 was a wonderful year with lots of amazing papers. That's why we decided to highlight two papers: GANs (@ian-goodfellow.bsky.social et al.) and Seq2Seq (Sutskever et al.). Both papers will be presented in person 😍
Link: blog.neurips.cc/2024/11/27/a...
I guess it also depends on the field/subfield?
researchers on cancer, message me: I’d like to know about your work, your research questions!
While we're starting up over here, I suppose it's okay to reshare some old content, right?
Here's my lecture from the EEML 2024 summer school in Novi Sad🇷🇸, where I tried to give an intuitive introduction to diffusion models: youtu.be/9BHQvQlsVdE
Check out other lectures on their channel as well!
I've created an initial Grumpy Machine Learners starter pack. If you think you're grumpy and you "do machine learning", nominate yourself. If you're on the list, but don't think you are grumpy, then take a look in the mirror.
go.bsky.app/6ddpivr