🚀 A new era in European #AIresearch begins!
ELLIOT is a €25M #HorizonEurope project launching July 2025 to build open, trustworthy Multimodal Generalist Foundation Models.
30 partners, 12 countries, EU values.
🔗 Press release: apigateway.agilitypr.com/distribution...
Posts by Ameya P.
🚀 Never miss a beat in science again!
📬 Scholar Inbox is your personal assistant for staying up to date with your literature. It includes: visual summaries, collections, search and a conference planner.
Check out our white paper: arxiv.org/abs/2504.08385
#OpenScience #AI #RecommenderSystems
🧵1/ 🚨 New paper: A Sober Look at Progress in Language Model Reasoning
We re-evaluate recent SFT and RL models for mathematical reasoning and find most gains vanish under rigorous, multi-seed, standardized evaluation.
📊 bethgelab.github.io/sober-reason...
📄 arxiv.org/abs/2504.07086
Hochlehnert, Bhatnagar, Udandarao, Albanie, Prabhu, Bethge: A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility. https://arxiv.org/abs/2504.07086
Great work! A much-needed upgrade for continual learning datasets—excited to see progress on long-timespan tasks beyond classification. Deets below👇
Deadline extended to March 19 for the EVAL-FoMo workshop @cvprconference.bsky.social! We welcome submissions (incl. published papers) analyzing emerging capabilities & limits in visual foundation models.
Details: sites.google.com/view/eval-fo...
#CVPR2025
LMs excel at solving problems (~48% success) but falter at debunking them (<9% counterexample rate)!
Could form an AI Brandolini's Law: "Capability needed to refute bullshit is far larger than that needed to generate it"
AI can generate correct-seeming hypotheses (and papers!). Brandolini's law states BS is harder to refute than generate. Can LMs falsify incorrect solutions? o3-mini (high) scores just 9% on our new benchmark REFUTE. Verification is not necessarily easier than generation 🧵
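The REFUTE setup can be made concrete with a toy sketch: given a plausible-but-wrong "solution" and a ground-truth reference, falsifying it means producing an input where the two disagree. The functions and the max-subarray task below are illustrative assumptions, not the benchmark's actual tasks or API; a brute-force search stands in for what the benchmark asks an LM to do.

```python
from itertools import product

def reference(xs):
    """Ground truth: maximum subarray sum (Kadane's algorithm)."""
    best = cur = xs[0]
    for v in xs[1:]:
        cur = max(v, cur + v)
        best = max(best, cur)
    return best

def candidate(xs):
    """A plausible but wrong 'solution': it implicitly allows the empty
    subarray, so it returns 0 whenever every element is negative."""
    return max(0, reference(xs))

def find_counterexample(max_len=3, vals=(-1, 0, 1)):
    """Falsification by brute force: find an input where the candidate
    disagrees with the reference (here an LM would have to reason it out)."""
    for n in range(1, max_len + 1):
        for xs in product(vals, repeat=n):
            if candidate(list(xs)) != reference(list(xs)):
                return list(xs)
    return None
```

Note the asymmetry the post describes: writing `candidate` is easy, while finding the disagreeing input requires reasoning about where its logic breaks.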
🚀 Call for Papers – CVPR 3rd Workshop on Multi-Modal Foundation Models (MMFM) @cvprconference.bsky.social! 🚀
🔍 Topics: Multi-modal learning, vision-language, audio-visual, and more!
📅 Deadline: March 14, 2025
📝 Submission: cmt3.research.microsoft.com/MMFM2025
🌐 sites.google.com/view/mmfm3rd...
New preprint out! 🎉
How does LLM training loss translate to downstream performance?
We show that pretraining data and tokenizer shape loss-to-loss scaling, while architecture and other factors play a surprisingly minor role!
brendel-group.github.io/llm-line/ 🧵1/8
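Loss-to-loss scaling of the kind the post describes can be sketched as a power-law fit between training loss and downstream loss, i.e. a straight line in log-log space. The numbers below are hypothetical toy values (not from the paper), just to show the fitting mechanics under that assumption.

```python
import numpy as np

# Hypothetical (train loss, downstream loss) pairs for a model family
# sharing pretraining data and tokenizer -- toy numbers, not the paper's.
train_loss = np.array([3.2, 2.9, 2.6, 2.4, 2.2])
down_loss  = np.array([4.1, 3.6, 3.1, 2.8, 2.5])

# A power law L_down ~ a * L_train**b is linear in log-log space,
# so an ordinary least-squares fit recovers the exponent and prefactor.
b, log_a = np.polyfit(np.log(train_loss), np.log(down_loss), 1)
a = np.exp(log_a)

def predict_down(l_train):
    """Extrapolate downstream loss from training loss via the fitted law."""
    return a * l_train ** b
```

On these toy values the fitted law reproduces the data to well under 1% relative error, which is the sense in which a clean loss-to-loss relation lets one loss predict the other.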
CuratedThoughts: Data curation focus for RL post-training! (Update 1) 🚀
25% of Openthoughts-114k-math filtered — issues included proofs, missing figures, and multiple questions with one answer.
Check out work by
@ahochlehnert.bsky.social & @hrdkbhatnagar.bsky.social
below 👇
Our 2nd Workshop on Emergent Visual Abilities and Limits of Foundation Models (EVAL-FoMo) is accepting submissions. We are looking forward to talks by our amazing speakers that include @saining.bsky.social, @aidanematzadeh.bsky.social, @lisadunlap.bsky.social, and @yukimasano.bsky.social. #CVPR2025
🔥 #CVPR2025 Submit your cool papers to Workshop on
Emergent Visual Abilities and Limits of Foundation Models 📷📷🧠🚀✨
sites.google.com/view/eval-fo...
Submission Deadline: March 12th!
LMs are used for annotation, evaluation and distillation! We identify critical issues!
LMs of a similar capability class (not model family tho!) behave similarly and this skews oversight far more than I expected.
Check the 4-in-1 mega paper below to 👀 how 👇
Can better representation learning help? No!
RanDumb recovers 70-90% of the joint performance.
Forgetting isn't the main issue—the benchmarks are too toy!
Key Point: Current OCL benchmarks are too constrained for effective online continual representation learning!
Across a wide range of online continual learning benchmarks, RanDumb consistently surpasses prior methods (even the latest contrastive & meta-learning strategies), often by surprisingly large margins!
Continual learning assumes that learned deep representations outperform old-school kernel classifiers (as in supervised DL). But this isn't validated!
Why might it not work? Updates are limited and networks may not converge.
We find: OCL representations are severely undertrained!
How RanDumb works: Fix a random embedder to transform raw pixels. Train a linear classifier on top—single pass, one sample at a time, no stored exemplars. Order-invariant, worst-case ready🚀
Look familiar? This is streaming (approximate) kernel LDA!!
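The recipe above (fixed random embedder, streaming linear classifier on top) can be sketched in a few lines. This is a minimal toy version, not the paper's implementation: random Fourier features as the frozen embedder, and running class means as a simplified stand-in for the streaming LDA head; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_EMB, N_CLASSES = 64, 512, 3   # toy sizes, not the paper's

# Fixed random embedder (never trained): random Fourier features.
W = rng.normal(size=(D_IN, D_EMB)) / np.sqrt(D_IN)  # scale = bandwidth choice
b = rng.uniform(0, 2 * np.pi, size=D_EMB)

def embed(x):
    return np.cos(x @ W + b)

# Streaming linear head: running class means (a simplified stand-in
# for the streaming LDA variant). Sums commute, so training is
# order-invariant, as claimed.
sums = np.zeros((N_CLASSES, D_EMB))
counts = np.zeros(N_CLASSES)

def observe(x, y):
    """Single pass, one sample at a time, no stored exemplars."""
    sums[y] += embed(x)
    counts[y] += 1

def predict(x):
    means = sums / np.maximum(counts, 1)[:, None]
    return int(np.argmin(np.linalg.norm(means - embed(x), axis=1)))
```

Because updates are additive, any stream ordering (and any worst-case ordering) yields the same classifier, which is the order-invariance the post highlights.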
New Work: RanDumb!🚀
Poster @NeurIPS, East Hall #1910- come say hi👋
Core claim: Random representations outperform online continual learning methods!
How: We replace the deep network with a *random projection* and a linear classifier, yet outperform all OCL methods by huge margins [1/n]
The Practitioner's Guide to Continual Multimodal Pretraining @dziadzio.bsky.social @confusezius.bsky.social @vishaalurao.bsky.social @bayesiankitten.bsky.social
Breaking the 8-model merge limit was tough, but we scaled to merging 200+ models! The secret? Iterative finetuning + merging *over time*.
The time axis unlocks scalable mergeability. Merging has surprising scaling gains across size & compute budgets.
All the gory details ⬇️
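The "iterative finetuning + merging over time" idea can be sketched abstractly: instead of averaging hundreds of independently finetuned models at once, each new model branches off the current running merge and is folded back in. Everything below is a toy stand-in (weights as a vector, finetuning as one gradient-style step), not the paper's procedure.

```python
import numpy as np

def finetune(weights, task_grad, lr=0.1):
    """Stand-in for a finetuning run: a single gradient-style update."""
    return weights - lr * task_grad

def merge(avg, new, n):
    """Running uniform weight average after folding in the n-th model."""
    return avg + (new - avg) / n

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # 'pretrained' weights (toy vector)
merged, n = w.copy(), 1

# The time axis: alternate branching (finetune from the current merge)
# and merging, so the running average never has to reconcile more than
# one new model at a time.
for step in range(200):
    task_grad = rng.normal(size=8)           # hypothetical per-task signal
    candidate = finetune(merged, task_grad)  # branch off the running merge
    n += 1
    merged = merge(merged, candidate, n)     # fold it back in
```

The running-average form means memory stays constant no matter how many models are merged, which is one way the time axis sidesteps the many-model merge limit.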
How do we benchmark the vast capabilities of foundation models? Introducing ONEBench – a unifying benchmark to test them all, led by
@adhirajghosh.bsky.social and
@dziadzio.bsky.social!⬇️
Sample-level benchmarks could be the new generation: reusable, recombinable & able to evaluate lots of capabilities!
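The reusable-and-recombinable idea can be sketched with a toy sample pool: keep per-sample results tagged by capability, then assemble ad-hoc benchmarks by filtering tags. The data layout and names here are illustrative assumptions, not ONEBench's actual schema or API.

```python
# Hypothetical sample-level result pool: each entry records capability
# tags and per-model correctness (1 = correct). Toy data, toy models.
pool = [
    {"id": "s1", "tags": {"ocr"},         "results": {"A": 1, "B": 0}},
    {"id": "s2", "tags": {"ocr", "math"}, "results": {"A": 1, "B": 1}},
    {"id": "s3", "tags": {"math"},        "results": {"A": 0, "B": 1}},
    {"id": "s4", "tags": {"captioning"},  "results": {"A": 1, "B": 1}},
]

def score(model, tag):
    """Recombine all samples matching a capability tag into an
    ad-hoc benchmark and report the model's accuracy on it."""
    hits = [s["results"][model] for s in pool if tag in s["tags"]]
    return sum(hits) / len(hits)
```

Because scoring happens at the sample level, the same pool supports arbitrarily many capability-specific benchmarks without re-running any model.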
Come chat with us @ NeurIPS for hot takes on the future of continual learning with foundation models!