
Posts by Yotam Perlitz

How important are LLM evaluations to you?

A) Who cares?
B) Somewhat important (I guess?)
C) I'm an LLM, I evaluate myself.
D) Enough to join the pack

Let's talk about LLM evals here: go.bsky.app/DJpp8cy

1 year ago 6 1 3 0

Save yourselves the hours (or days) of inference on all 64K examples when using HELM.
In arxiv.org/pdf/2308.116... we show that 160 examples 🤯🤯🤯 are enough to get a very good picture. #ComputeIsForTraining
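The intuition is easy to check in a few lines. Here's a toy simulation (illustrative only, not the paper's method — the model names, `true_skill` values, and binary-accuracy setup are all made up):

```python
import random

# Toy simulation of benchmark subsampling: three hypothetical models with
# well-separated accuracies, each scored on 64K binary-graded examples.
random.seed(0)
N = 64_000
true_skill = {"model_a": 0.9, "model_b": 0.6, "model_c": 0.3}
scores = {m: [1 if random.random() < p else 0 for _ in range(N)]
          for m, p in true_skill.items()}

def ranking(indices):
    """Rank models by mean score over the given example indices."""
    means = {m: sum(s[i] for i in indices) / len(indices)
             for m, s in scores.items()}
    return sorted(means, key=means.get, reverse=True)

full_rank = ranking(range(N))                       # all 64K examples
small_rank = ranking(random.sample(range(N), 160))  # just 160 examples
# With clearly separated models, the tiny subset recovers the same ranking.
print(full_rank, small_rank)
```

When models are close in quality more examples are needed to separate them, but for a coarse picture a small random subset goes a long way.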

with
@lchoshen.bsky.social and more

1 year ago 7 1 0 0

Thanks!

1 year ago 0 0 0 0

@yamadashy

1 year ago 0 0 0 0
Preview
GitHub - yamadashy/repomix: 📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) o...

If you haven't tried it yet:
github.com/yamadashy/re...
it can turn your repo into one file,
making it super easy to feed to a chatbot when asking questions

1 year ago 1 0 1 0
Preview
BenchBench Leaderboard - a Hugging Face Space by ibm Discover amazing ML apps made by the community

✨ Developed a new benchmark or dataset for language models? ✨
Want the community to trust and adopt it? 🤔
Show that it (dis)agrees with common benchmarks

BenchBench makes it easy. Check it out:
👉 huggingface.co/spaces/ibm/b...

1 year ago 1 0 0 0

hi @mariaa.bsky.social
Can I be added to the pack?
Mostly posting about AI evaluations and benchmarking :)

1 year ago 0 0 2 0

hi @maosbot.bsky.social can I be added to the AI pack?
mostly posting about evaluations of AI, but other things as well

1 year ago 0 0 0 0

Seems like it indeed measures what it claims to :)
Kudos to the authors
A faster, automatic (no annotators) alternative to the Chatbot arena https://t.co/WNk3UmXRSq

1 year ago 0 0 0 0

https://t.co/TZlMiQdgWR

1 year ago 0 0 0 0

we've now added the Decentralized Arena to BenchBench,

check out how it fares with other benchmarks

https://t.co/pjhtr8CPZD

1 year ago 0 0 0 0

Get your benchmark game on: https://t.co/yY0swLQOHZ https://t.co/3qzkcIOd7u https://t.co/5Y7QUz0Ype

1 year ago 0 0 0 0

Me trying to choose the right LLM benchmark without BenchBench:

https://t.co/TZlMiQdgWR https://t.co/DQEttklUGQ

1 year ago 0 0 0 0

Shoutout to @streamlit, our framework of choice! Shoutout to @huggingface for hosting our space 🤗 https://t.co/z8LFw6ZQG7

1 year ago 0 0 0 0

Explore the BenchBench Leaderboard to see and visualize how established benchmarks compare: https://t.co/yY0swLQgSr
Use our Python package to perform your own BAT analysis: https://t.co/iU8favWVT6
And read the paper: https://t.co/RvCp3R6gU5 https://t.co/poHpewZkS3
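At the heart of benchmark agreement testing (BAT) is a rank-correlation statistic between two benchmarks' model rankings. Here's a stdlib sketch of Kendall's tau over shared models (an illustration of the statistic, not the package's actual API — the benchmark names and scores are invented):

```python
from itertools import combinations

def kendall_tau(bench_a: dict, bench_b: dict) -> float:
    """Kendall's tau-a over the models both benchmarks score:
    +1 means identical orderings, -1 means fully reversed."""
    models = sorted(set(bench_a) & set(bench_b))
    concordant = discordant = 0
    for m1, m2 in combinations(models, 2):
        d = (bench_a[m1] - bench_a[m2]) * (bench_b[m1] - bench_b[m2])
        if d > 0:
            concordant += 1   # the pair is ordered the same way
        elif d < 0:
            discordant += 1   # the pair is ordered oppositely
    n_pairs = len(models) * (len(models) - 1) / 2
    return (concordant - discordant) / n_pairs

arena = {"gpt4": 0.95, "llama": 0.80, "mistral": 0.75, "small": 0.40}
my_bench = {"gpt4": 0.90, "llama": 0.70, "mistral": 0.72, "small": 0.35}
print(kendall_tau(arena, my_bench))  # 5 concordant vs 1 discordant pair of 6
```

High tau against an established benchmark argues your benchmark captures the same signal (possibly more cheaply); low tau argues it measures something new.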

1 year ago 0 0 0 0

BenchBench can show that your benchmark measures unique skills ❄️ (disagreement with existing benchmarks)

Or that it captures the essence of the benchmarks it targets (agreement), for example, agreeing with @lmsys, but efficiently. https://t.co/KwtHtTRESc

1 year ago 0 0 0 0

✨ Developed a new benchmark or dataset for language models? ✨

Want the community to trust and adopt it? 🤔

So, demonstrate its validity by comparing it to established benchmarks!

BenchBench makes it easy. Check it out:
👉 https://t.co/yY0swLQgSr

1 year ago 0 0 0 0

Shout-out to the amazing team at IBM behind Unitxt: @ElronBandel, @MatanOrbach, yoavkatz, eladv, @LChoshen, @yotamperlitz & more!

IBM is betting big on it (IBM Research AI VP 👇) https://t.co/BKfK0JriYB

1 year ago 0 0 0 0

HELM just got a great upgrade!
We've integrated with Unitxt for:

Easy dataset addition
2x the datasets
Sharable & reproducible pipelines

Check out the blogpost: https://t.co/UJXwfPKzGN
And the unitxt repo
https://t.co/GeqMCoQhjv

@ElronBandel @YifanMai

1 year ago 0 0 0 0

Everyone knows you never have to use the full test set
We show just how right they were 🤯!

Check out our presentation at @naacl
in Efficient/Low-Resources and Evaluation Methods for NLP (18 June 2024 @ 02:12)

or watch our video here:
https://t.co/pPOpKyLbhT

See you! https://t.co/ocVvmVBBlW

1 year ago 0 0 0 0

It is a great figure
and a great thing you did by sharing all your meta-data!

it has enabled a lot of great work
ours as well :)

https://t.co/9lGi8aW8IG https://t.co/Lz62fTdn7O

1 year ago 0 0 0 0

Bored with all benchmarks ranking models the same?
HOLMES doesn't 💪

Probing LMs for linguistic abilities is a fresh idea, @AndreasWaldis took it to the extreme 🦸

Give it a read!
or check out the leaderboard https://t.co/Byc1Nhp3nV https://t.co/zH0RLddkID

1 year ago 0 0 0 0

I've been working internally with this dataset
and let me tell you...

It's great! https://t.co/MOwn0OyVS3

2 years ago 0 0 0 0

like the color scheme 🏅 https://t.co/sdAosgxypV

2 years ago 0 0 0 0

Using contrastive representation for optimized human evaluation 👁️👁️👁️

Nice! https://t.co/49leLodOAQ

2 years ago 0 0 0 0

Check out the paper for more insights :) https://t.co/7zhb8mGtQ0

2 years ago 0 0 0 0

variance in evaluation has many sources;
this work does a really good job of profiling one of them https://t.co/nAf7zYDSd7

2 years ago 0 0 0 0

these models keep changing 💩
tomorrow this figure will have no meaning https://t.co/OsA2WfiLHn

2 years ago 0 0 0 0

this is a nice link to have :) https://t.co/DYApcasZen

2 years ago 0 0 0 0

seems like there are more recent findings along these lines. BTW @adinamwilliams , where can I find the full paper? https://t.co/sl1Jqa1R1R

2 years ago 0 0 0 0