Where does one language model outperform the other?
We examine this from first principles, performing unsupervised discovery of "abilities" that one model has and the other does not.
Results show interesting differences between model classes, sizes and pre-/post-training.
Posts by Graham Neubig
Nice contribution to the understanding of Long CoT induction arxiv.org/abs/2502.03373 by Edward Yeo and colleagues (advised by @gneubig.bsky.social and @xiangyue96.bsky.social ). It's hard not to see this as mostly a negative result on induction at the 8B scale.
LLM agents can code, but can they ask clarifying questions?
Tired of coding agents wasting time and API credits, only to output broken code? What if they asked first instead of guessing?
(New work led by Sanidhya Vijay: www.linkedin.com/in/sanidhya-...)
We are now done with all classes for CMU CS11-711 Advanced NLP!
Slides: phontron.com/class/anlp-f...
Videos: youtube.com/playlist?lis...
Hope this is useful to people!
1/ Introducing OpenScholar: a retrieval-augmented LM to help scientists synthesize knowledge
@uwnlp.bsky.social & Ai2
With open models & 45M-paper datastores, it outperforms proprietary systems & matches human experts.
Try out our demo!
openscholar.allen.ai
Screenshot of the paper title "What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length"
Have you or a loved one compared LM probabilities to human linguistic acceptability judgments? You may be overcompensating for the effect of frequency and length!
In our new paper, we rethink how we should be controlling for these factors 🧵:
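For readers unfamiliar with what "controlling for frequency and length" means here: a common baseline in this literature is SLOR (Syntactic Log-Odds Ratio), which divides the LM log-probability of a sentence by its length and subtracts a unigram (frequency) baseline. The sketch below is an illustration of that standard control, not the method proposed in the paper; the toy log-probabilities are made up for demonstration.

```python
def slor(token_logprobs, unigram_logprobs):
    """Syntactic Log-Odds Ratio: length-normalized LM log-probability
    with a unigram (frequency) baseline subtracted out. One common way
    to control for frequency and length when comparing LM probabilities
    to human acceptability judgments."""
    n = len(token_logprobs)
    return (sum(token_logprobs) - sum(unigram_logprobs)) / n

# Toy per-token log-probabilities (illustrative, not from a real model):
lm_lp = [-2.0, -1.0, -3.0]    # log P(token | context) under the LM
uni_lp = [-4.0, -5.0, -6.0]   # log P(token) under a unigram model
print(slor(lm_lp, uni_lp))    # (-6 - (-15)) / 3 = 3.0
```

Higher SLOR means the sentence is more probable than its word frequencies alone would predict, which correlates better with acceptability than raw log-probability does.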