A dream come true! I presented "No Representation, No Trust" on my favorite RL podcast, TalkRL!
Make sure to check it out to learn why training with PPO for too long makes your agent collapse!
Posts by Tim Davidson
🔥 Want to train large neural networks WITHOUT Adam while using less memory and getting better results? ⚡
Check out SCION: a new optimizer that adapts to the geometry of your problem using norm-constrained linear minimization oracles (LMOs): 🧵👇
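For the curious, here's a rough sketch of the core idea for a single matrix parameter (not the released implementation — the function names, the spectral-norm choice, and the convex-combination step are my illustrative assumptions): the LMO picks the point in a norm ball most aligned against the gradient, and the update moves toward it.

```python
import numpy as np

def lmo_spectral(grad, radius=1.0):
    """LMO for the spectral-norm ball: argmin_{||X||_2 <= radius} <grad, X>.

    For the spectral norm this is -radius * U @ V^T from the SVD of the
    gradient (one example norm; other layers may use other norm balls).
    """
    U, _, Vt = np.linalg.svd(grad, full_matrices=False)
    return -radius * U @ Vt

def norm_constrained_step(W, grad, lr=0.1, radius=1.0):
    """Conditional-gradient-style step: convex combination of the current
    weights and the LMO point, keeping W inside the norm ball."""
    return (1 - lr) * W + lr * lmo_spectral(grad, radius)
```

Swapping in a different norm ball only changes `lmo_spectral`, which is what "adapts to the geometry of your problem" hints at.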
love the format/stack you settled on — hyped for 2025 entries 🦾🎇🎆
any chance at a sweet blogpost at some point?! O_o

🚀 Introducing PICLe: a framework for in-context named-entity detection (NED) using pseudo-annotated demonstrations.
🎯 No human labeling needed—yet it outperforms few-shot learning with human annotations!
#AI #NLProc #LLMs #ICL #NER
Here's Veo 2, the latest version of our video generation model, as well as a substantial upgrade for Imagen 3 🧑‍🍳🚢
(Did I mention we are hiring on the Generative Media team, btw 👀)
blog.google/technology/g...
lol. yes, very true and important
Also, check out our ML project template—it’s a game-changer!🚀🚀
@caglarai.bsky.social
🧑‍💻 github.com/CLAIRE-Labo/...
I am in Vancouver for NeurIPS 2024 until December 16th if you want to meet, DM or email me.
We have two accepted papers from my lab:
1. Building on Efficient Foundations: Effective Training of LLMs with Structured Feedforward Layers, on Wednesday, East Exhibit Hall A-C #2010 (1/3)
favorite conference experience for me :)
Better VQ-VAEs with this one weird rotation trick!
I missed this when it came out, but I love papers like this: a simple change to an already powerful technique that significantly improves results without introducing complexity or hyperparameters.
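The gist, as I understand it: instead of the usual straight-through copy of gradients from the quantized vector back to the encoder output, the gradient flows through a detached rotation-and-rescale that maps the encoder output onto its codeword. A minimal PyTorch sketch (the function name and the two-Householder construction of the rotation are mine, not the authors' code):

```python
import torch
import torch.nn.functional as F

def rotation_trick(e, q):
    """Forward value equals the codeword q exactly; gradients reach the
    encoder output e through a constant rotation+rescale instead of the
    straight-through identity copy.

    e: encoder output, q: nearest codebook vector (same shape, last dim = features).
    """
    u = F.normalize(e, dim=-1).detach()      # unit(e), treated as constant
    v = F.normalize(q, dim=-1).detach()      # unit(q), treated as constant
    w = F.normalize(u + v, dim=-1).detach()  # bisector (ill-defined if e ≈ -q)
    # Rotation taking u to v, as two Householder reflections:
    # R = (I - 2 w w^T)(I - 2 u u^T), applied to e without materializing R.
    e_ref = e - 2 * (e * u).sum(-1, keepdim=True) * u
    e_rot = e_ref - 2 * (e_ref * w).sum(-1, keepdim=True) * w
    scale = (q.norm(dim=-1, keepdim=True) / e.norm(dim=-1, keepdim=True)).detach()
    return scale * e_rot
```

Because the rotation and scale are detached, the forward pass is unchanged from standard VQ, which is why no new hyperparameters show up.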
I've put together a starter pack of EPFL researchers across all labs and domains! 🇨🇭 Would love to expand this list and showcase more amazing work happening at EPFL. Drop a reply to be added!
#EPFL #academicsky
go.bsky.app/73zdbtp
🦾 ☝️- nice pack :)
~🐣~ -> 🐛