
Posts by Arduin Findeis


Can AI simulate human behavior? 🧠
The promise is revolutionary for science & policy. But there’s a huge "IF": Do these simulations actually reflect reality?
To find out, we introduce SimBench: The first large-scale benchmark for group-level social simulation. (1/9)

5 months ago

Looking forward to chatting about the limitations of AI annotators/LLM-as-a-Judge, opportunities for improving them, evaluating AI personality/character, and the future of evals more broadly!

8 months ago
Can External Validation Tools Improve Annotation Quality for LLM-as-a-Judge? Pairwise preferences over model responses are widely collected to evaluate and provide feedback to large language models (LLMs). Given two…

👋 I'll be at #ACL2025 presenting research from my Apple internship! Our poster is titled: "Can External Validation Tools Improve Annotation Quality for LLM-as-a-Judge?"

☞ Let's meet: come by our poster on Tuesday (29/7), 10:30 - 12:00, Hall 4/5, or DM me to set up a meeting!

✍︎ Paper link below ↓

8 months ago

Excited to be in Singapore for ICLR! Keen to chat about interpreting feedback data and detecting model characteristics ⚖️

Reach out or come by our poster on Inverse Constitutional AI on Friday 25 April from 10am-12.30pm (#520 in Hall 2B) - @timokauf.bsky.social and I will be there!

11 months ago

If you want to understand your own model and data better, try Feedback Forensics!

💾 Install it from GitHub: github.com/rdnfn/feedba...
⏯️ View interactive results: app.feedbackforensics.com?data=arena_s...

11 months ago
What exactly was different about the Chatbot Arena version of Llama 4 Maverick? An analysis using the Feedback Forensics app to detect the differences between the Chatbot Arena and the publicly released version of Llama 4 Maverick.

See the accompanying blog post for all the details: arduin.io/blog/llama4-analysis

Preliminary analysis. Usual caveats for AI annotators and potentially inconsistent sampling procedures apply.

11 months ago

☕️ Conclusion: The differences between the arena and the public version of Llama 4 Maverick highlight the importance of having a detailed understanding of preference data beyond single aggregate numbers or rankings! (Feedback Forensics can help!)

11 months ago

🎁 Bonus 2: Humans like the arena model’s behaviours

Human annotators on Chatbot Arena indeed like the change in tone, more verbose responses and adapted formatting.

11 months ago

🎁 Bonus 1: Things that stayed consistent

I also find that some behaviours stayed the same: on the Arena dataset prompts, the public and arena model versions are similarly very unlikely to suggest illegal activities, be offensive or use inappropriate language.

11 months ago

➡️ Further differences: Clearer reasoning, more references, …

There are quite a few other differences between the two models beyond the three categories already mentioned. See the interactive online results for a full list: app.feedbackforensics.com?data=arena_s...

11 months ago

3️⃣ Third: Formatting - a lot of it!

The arena model uses more bold, italics, numbered lists and emojis relative to its public version.

11 months ago

2️⃣ Second: Tone - friendlier, more enthusiastic, more humour …

Next, the results highlight how much friendlier, more emotional, enthusiastic, humorous, confident and casual the arena model is relative to its own public-weights version (and also its opponent models).

11 months ago

So how exactly is the arena version different to the public Llama 4 Maverick model? I make a few observations…

1️⃣ First and most obvious: Responses are more verbose. The arena model’s responses are longer relative to the public version for 99% of prompts.

11 months ago

📈 Note on interpreting metrics: values above 0 mean a characteristic is more present in the arena model's responses than in the public model's. See the linked post for details.
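As a purely illustrative sketch of this sign convention (a hypothetical helper, not the actual Feedback Forensics computation), a signed score per characteristic could be derived from per-prompt annotations like so:

```python
def relative_presence(arena_more: int, public_more: int, neither: int) -> float:
    """Signed score in [-1, 1]: values above 0 mean the characteristic
    appears more often in the arena model's responses; values below 0,
    in the public model's. (Illustrative only; the app's metric may differ.)"""
    total = arena_more + public_more + neither
    if total == 0:
        return 0.0
    return (arena_more - public_more) / total

# e.g. characteristic judged more present in arena responses on 70 of 100
# prompts, more present in public responses on 20, no difference on 10:
print(relative_presence(70, 20, 10))  # → 0.5
```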

11 months ago

🧪 Setup: I use the original Arena dataset of Llama-4-Maverick experimental generations, kindly released openly by @lmarena (👏). I compare the arena model’s responses to those generated by its public weights version (via Lambda and OpenRouter).
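For readers who want to reproduce this kind of setup, a minimal sketch of a request payload for OpenRouter's OpenAI-compatible chat completions endpoint might look like the following (the model slug and the helper itself are assumptions for illustration, not taken from the post):

```python
# Hypothetical sketch: building a chat-completions request for OpenRouter's
# OpenAI-compatible API, to regenerate responses with the public-weights model.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "meta-llama/llama-4-maverick") -> dict:
    """Return the JSON payload that would be POSTed for one arena prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Explain pairwise preference data in one sentence.")
print(payload["model"])  # meta-llama/llama-4-maverick
```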

11 months ago

ℹ️ Background: Llama 4 Maverick was released earlier this month. Beforehand, a separate experimental Arena version was evaluated on Chatbot Arena (Llama-4-Maverick-03-26-Experimental). Some have reported that these two models appear to be quite different.

11 months ago

How exactly was the initial Chatbot Arena version of Llama 4 Maverick different from the public HuggingFace version? 🕵️

I used our Feedback Forensics app to quantitatively analyse how exactly these two models differ. An overview…👇🧵

11 months ago

Feedback Forensics is just getting started with this Alpha release with lots of exciting features and experiments on the roadmap. Let me know what other datasets we should analyze or which features you would like to see! 🕵🏻

1 year ago
GitHub - rdnfn/feedback-forensics: A tool to investigate pairwise feedback: understand and find issues in your data

Big thanks also to my collaborators on Feedback Forensics and the related Inverse Constitutional AI (ICAI) pipeline: Timo Kaufmann, Eyke Hüllermeier, @samuelalbanie.bsky.social, Rob Mullins!

Code: github.com/rdnfn/feedback-forensics

Note: usual limitations for LLM-as-a-Judge-based systems apply.

1 year ago

... harmless/helpful data by @anthropic.com, and finally the recent OLMo 2 preference mix by @ljvmiranda.bsky.social, @natolambert.bsky.social et al., see all results at app.feedbackforensics.com.

1 year ago

We analyze several popular feedback datasets: Chatbot Arena data with topic labels from the Arena Explorer pipeline, PRISM data by @hannahrosekirk.bsky.social et al., AlpacaEval annotations, ...

1 year ago

🤖 3. Discovering model strengths

How is GPT-4o different to other models? → Uses more numbered lists, but Gemini is more friendly and polite

app.feedbackforensics.com?data=chatbot...

1 year ago

🧑‍🎨🧑‍💼 2. Finding preference differences between task domains

How do preferences differ across writing tasks? → Emails should be concise, creative writing more verbose

app.feedbackforensics.com?data=chatbot...

1 year ago

🗂️ 1. Visualizing dataset differences

How does Chatbot Arena differ from Anthropic Helpful data? → Prefers less polite but better formatted responses

app.feedbackforensics.com?data=chatbot...

1 year ago

🕵🏻💬 Introducing Feedback Forensics: a new tool to investigate pairwise preference data.

Feedback data is notoriously difficult to interpret and has many known issues – our app aims to help!

Try it at app.feedbackforensics.com

Three example use-cases 👇🧵

1 year ago