Yeah, mostly because GPT-5 needs to think for 20 seconds to come up with a name for a variable. It's good for bigger, self-contained features, but the bias for "reasoning" in the model router makes it downright unusable for smaller changes.
Posts by Sebastian Dziadzio
ONEBench accepted to ACL main! ✨
Stay tuned for the official leaderboard and real-time personalised benchmarking release!
If you're attending ACL or are generally interested in the future of foundation model benchmarking, happy to talk!
#ACL2025NLP #ACL2025
@aclmeeting.bsky.social
Done! Sorry for the wait
Added!
Done!
Done!
The Practitioner's Guide to Continual Multimodal Pretraining @dziadzio.bsky.social @confusezius.bsky.social @vishaalurao.bsky.social @bayesiankitten.bsky.social
This has been a fun project with a great team: led by @vishaalurao.bsky.social and @confusezius.bsky.social, with core contributions from @bayesiankitten.bsky.social, and supervision by @zeynepakata.bsky.social, Samuel Albanie, and Matthias Bethge.
Plots showing the scaling dynamics described in the text.
As usual, scaling matters!
- Larger models benefit more from temporal merging than sequential finetuning.
- Larger compute budgets allow temporal merging to match (and surpass!) multitask performance.
- Best-in-TIME scales effectively across longer task sequences (50, 100).
A plot showing that different merging techniques perform similarly.
The choice of merging technique doesn't matter much.
In the temporal setting, complex merging techniques like TIES or Breadcrumbs offer only marginal gains compared to simpler ones like weight averaging.
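For intuition, plain weight averaging just takes a parameter-wise mean of the finetuned checkpoints. A minimal PyTorch-style sketch (the function name and uniform coefficients are illustrative, not the exact recipe from the paper):

```python
import torch

def average_checkpoints(state_dicts, coeffs=None):
    """Parameter-wise weighted average of finetuned checkpoints (illustrative)."""
    if coeffs is None:
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(c * sd[name].to(torch.float32)
                           for c, sd in zip(coeffs, state_dicts))
    return merged

# Usage (hypothetical models): average_checkpoints([expert_a.state_dict(), expert_b.state_dict()])
```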
A plot showing that different initialization and deployment strategies lead to different results.
Initialization and deployment choices are crucial.
One strategy stands out: using an exponential moving average for both initialization and deployment strikes the best balance between knowledge accumulation and zero-shot retention. We call this approach ✨Best-in-TIME✨
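Roughly, the recipe keeps a running exponential moving average of the weights: each new task expert is initialized from the current EMA, and the EMA itself is what gets deployed. A rough sketch of that loop, assuming a hypothetical finetune_on_task routine and an illustrative decay value (not the paper's exact hyperparameters):

```python
import copy

def ema_temporal_merge_sketch(model, tasks, finetune_on_task, decay=0.9):
    """Sketch of EMA-style temporal merging: the EMA seeds each new expert
    (initialization) and is the model deployed after every task."""
    ema = copy.deepcopy(model.state_dict())
    for task in tasks:
        model.load_state_dict(ema)              # initialize from the current EMA
        expert = finetune_on_task(model, task)  # train the next task expert
        for name, param in expert.state_dict().items():
            ema[name] = decay * ema[name] + (1 - decay) * param.float()
    return ema  # weights to deploy
```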
A plot showing that offline merging underperforms with respect to a replay baseline.
Accounting for time is essential.
Standard merging struggles with the temporal dynamics. Replay and weighting schemes, which factor in the sequential nature of the problem, help (but only to a point).
Key insights:
- Accounting for time is essential.
- Initialization and deployment choices are crucial.
- The choice of merging technique doesn't matter much.
A schematic representation of the TIME framework.
The world keeps changing, and so should our models.
Enter TIME (Temporal Integration of Model Expertise), a unifying approach that considers:
1️⃣ Initialization
2️⃣ Deployment
3️⃣ Merging Techniques
We study these three axes on the large FoMo-in-Flux benchmark.
New Paper: "How to Merge Your Multimodal Models Over Time?"
arxiv.org/abs/2412.06712
Model merging assumes all finetuned models are available at once. But what if they need to be created over time?
We study Temporal Model Merging through the TIME framework to find out!
🧵
Come chat to us at NeurIPS about continual multimodal pretraining and some interesting follow-ups!
🚨 Looking to test your foundation model on an arbitrary and open-ended set of capabilities, not explicitly captured by static benchmarks? 🚨
Check out ✨ONEBench✨, where we show how sample-level evaluation is the solution.
arxiv.org/abs/2412.06745
Kickstand advertising a Taylor Swift pop-up store.
Kickstand advertising a coffee shop to NeurIPS attendees.
The changing of the guard ceremony in Vancouver is complete
I keep forgetting about the concert; yesterday I was like 'wow, people in Vancouver sure love sequins and cowboy boots'.
Whenever my "papers" tab group got lost in a Chrome crash, I felt nothing but relief.
The firehose is relentless, so over time my strategy became: skim in the moment if it's interesting and save it to Zotero, otherwise close the tab. There is only the present. Important stuff will come back.
Yeah, I think we consistently underestimate how much stuff is out there on the Internet. You might think your question or image prompt is niche and original, but if you consider the distribution of Internet-scale datasets, you'd have to work very hard to even reach the tail.
If someone said "the algorithm" with no additional context, I'd think of the latter, but "an algorithm" for me is still the former. Interesting how the default meaning is shifting.
How I use LLMs when writing papers:
1. Write a sentence.
2. Copy it to an LLM for edits, add a prompt explaining in simple words what I'm trying to say.
3. Realise my simple word explanation is actually what I need.
4. Copy it over to the paper, move on to the next sentence.
Have you read Fables for Robots? I think it was only published in English as part of Mortal Engines. If you liked Cyberiad, you'll like this one too!
Added you!
All in!
You're in!
Welcome aboard!
Can you turn your vision-language model from a great zero-shot model into a great-at-any-shot generalist?
Turns out you can, and here is how: arxiv.org/abs/2411.15099
Really excited to share this work on multimodal pretraining as my first Bluesky entry!
🧵 A short and hopefully informative thread: