The 5th Generation, Evaluation, and Metrics (GEM) Workshop will be at #ACL2026!
Call for papers is out. Topics include:
- LMs as evaluators
- Living benchmarks
- Eval with humans
and more
New for 2026: Opinion & Statement Papers!
Full CFP: gem-workshop.com/call-for-pap...
Posts by Gabi Stanovsky
New paper alert!
Instruction-tuned LLMs show amplified cognitive biases, but are these new behaviors, or pretraining ghosts resurfacing?
Excited to share our new paper, accepted to CoLM 2025!
See thread below.
#BiasInAI #LLMs #MachineLearning #NLProc
Can RAG performance get *worse* with more relevant documents?
We put the number of retrieved documents in RAG to the test!
Preprint: arxiv.org/abs/2503.04388
1/3
New arXiv preprint!
LLMs can hallucinate, but did you know they can do so with high certainty even when they know the correct answer?
We identify these hallucinations in our latest work with @itay-itzhak.bsky.social, @fbarez.bsky.social, @gabistanovsky.bsky.social, and Yonatan Belinkov.
GEM is so back! Our workshop for Generation, Evaluation, and Metrics is coming to an ACL near you.
Evaluation in the world of GenAI is more important than ever, so please consider submitting your amazing work.
CfP can be found at gem-benchmark.com/workshop
A vote to stop defining what LLMs are at the start of every paper.
Joint work with @rkeydar.bsky.social, Gadi Perl, and @eliyahabba.bsky.social.
We hope this will help spur a much-needed multidisciplinary discussion about realistic regulation measures. Happy to hear your thoughts!
There's a lot of talk about regulating AI, but do regulators know the technology well enough?
In our new paper, we survey major regulatory efforts & find they rely on benchmarking, which we know to be problematic. How did this happen, & what can we do about it?
arxiv.org/pdf/2501.15693