
Posts by Tim Franzmeyer

High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning

Large Language Models (LLMs) currently respond to every prompt. However, they can produce incorrect answers when they lack knowledge or capability -- a problem known as hallucination. We instead propo...

📄 Full paper: arxiv.org/abs/2506.04051

With amazing collaborators:
Archie Sravankumar
Lijuan Liu
Yuning Mao
Rui Hou
Sinong Wang
@jfoerst.bsky.social
Madian Khabsa
@lukezettlemoyer.bsky.social

10 months ago

🚨 One model, high correctness:

With low-threshold tuning, we take Llama3-70B from:

➡️ 51% → 87% correctness
➡️ while retaining 53% of the original completeness


⚖️ HALT allows you to trade off completeness and correctness

We introduce a threshold that tunes how eagerly the model should respond:

Low threshold = more reliable answers 🔒 (Left box)
High threshold = more detailed answers 📝 (Right box)
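One way to picture the knob: a minimal sketch in which the threshold bounds the error rate a fragment may have and still be kept in the finetuning target. The function name and the exact decision rule are illustrative assumptions, not the paper's implementation; only the directions match the thread (low threshold truncates more, high threshold keeps more).

```python
# Hypothetical sketch of HALT's tunable trade-off (illustrative, not the
# paper's code): keep a fragment only if its estimated error rate is at
# most the eagerness threshold.

def keep_fragment(error_rate: float, threshold: float) -> bool:
    """Low threshold -> fewer fragments survive -> more reliable answers.
    High threshold -> more fragments survive -> more complete answers."""
    return error_rate <= threshold

# A fragment the pretrained model gets wrong 30% of the time:
print(keep_fragment(0.3, threshold=0.1))  # low threshold: dropped -> False
print(keep_fragment(0.3, threshold=0.5))  # high threshold: kept -> True
```

The same dial thus moves one model along the correctness/completeness curve without retraining the rest of the pipeline.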


🛠️ Our approach: Adjust finetuning responses to match the capabilities of the LLM

1️⃣ Break pretrained LLM responses into factual fragments
2️⃣ Use ground truth to flag incorrect fragments
3️⃣ Modify finetuning responses by removing or replacing errors with “Unsure from here” 🚧
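The three steps above can be sketched in a few lines. This is a toy version under loud assumptions: fragments are approximated by sentences, correctness is checked by a caller-supplied predicate standing in for ground truth, and all names (`split_into_fragments`, `edit_response`) are hypothetical, not from the paper's codebase.

```python
# Hypothetical sketch of HALT-style response editing (illustrative only).

def split_into_fragments(response: str) -> list[str]:
    """Step 1: break a response into factual fragments (here: sentences)."""
    return [s.strip() for s in response.split(".") if s.strip()]

def edit_response(response: str, is_correct) -> str:
    """Steps 2-3: keep fragments until the first one flagged incorrect by
    the ground-truth check, then truncate with an 'unsure' marker."""
    kept = []
    for fragment in split_into_fragments(response):
        if not is_correct(fragment):                       # step 2: flag error
            kept.append("I am unsure from here onwards.")  # step 3: replace
            break
        kept.append(fragment + ".")
    return " ".join(kept)

# Toy ground truth: one known-wrong statement.
wrong_facts = {"The capital of Australia is Sydney"}
resp = "Canberra is the capital. The capital of Australia is Sydney. It is large"
print(edit_response(resp, lambda f: f not in wrong_facts))
# -> "Canberra is the capital. I am unsure from here onwards."
```

Finetuning on the edited targets then rewards stopping at the capability boundary instead of completing every prompt.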


🧠 Standard LLMs always respond — even when unsure.

This leads to partially incorrect outputs in critical domains like Coding, Math, Medicine, and QA.

Why? Standard finetuning ignores what the pretrained model actually knows and pushes it to complete every prompt in full.


What if LLMs knew when to stop? 🚧

HALT finetuning teaches LLMs to only generate content they’re confident is correct.

🔍 Insight: Post-training must be adjusted to the model’s capabilities.
⚖️ Tunable trade-off: Higher correctness 🔒 vs. More completeness 📝

🧵
