📄 Full paper: arxiv.org/abs/2506.04051
With amazing collaborators:
Archie Sravankumar
Lijuan Liu
Yuning Mao
Rui Hou
Sinong Wang
@jfoerst.bsky.social
Madian Khabsa
@lukezettlemoyer.bsky.social
🚨 One model, high correctness:
With low-threshold tuning, HALT takes Llama3-70B:
➡️ from 51% → 87% correctness
➡️ while retaining 53% of the original completeness
⚖️ HALT allows you to trade off completeness and correctness
We introduce a threshold that tunes how eagerly the model should respond:
Low threshold = more reliable answers 🔒 (Left box)
High threshold = more detailed answers 📝 (Right box)
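A minimal sketch of the trade-off, with an eagerness parameterization that is my assumption (not the paper's): low threshold t demands high confidence per fragment, so the model stops early; high t keeps more fragments at the cost of correctness.

```python
# Illustrative only: "eagerness" threshold t in [0, 1] (my notation, not
# the paper's). A fragment is kept while its estimated correctness
# probability is at least 1 - t; otherwise generation halts.

def truncate_at_threshold(fragments, confidences, t):
    """Keep fragments while estimated correctness >= 1 - t, then stop."""
    kept = []
    for frag, p in zip(fragments, confidences):
        if p < 1.0 - t:
            kept.append("Unsure from here")  # halt marker from the thread
            break
        kept.append(frag)
    return kept

frags = ["step 1", "step 2", "step 3"]
confs = [0.95, 0.70, 0.40]
low = truncate_at_threshold(frags, confs, t=0.1)   # conservative: stops early
high = truncate_at_threshold(frags, confs, t=0.9)  # eager: keeps everything
```

With the toy confidences above, the low threshold yields `["step 1", "Unsure from here"]` while the high threshold keeps all three steps.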
🛠️ Our approach: Adjust finetuning responses to match the capabilities of the LLM
1️⃣ Break pretrained LLM responses into factual fragments
2️⃣ Use ground truth to flag incorrect fragments
3️⃣ Modify the finetuning responses: remove erroneous fragments, or replace them with “Unsure from here” 🚧
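The three steps above can be sketched roughly as follows; the splitter and correctness check are stand-ins of my own (the paper's actual fragmenting and grounding are more involved):

```python
# Hypothetical sketch of the HALT data-construction idea. Names and the
# sentence-level splitter are assumptions for illustration only.

UNSURE = "Unsure from here"

def split_into_fragments(response: str) -> list[str]:
    # Stand-in for step 1: treat each sentence as one factual fragment.
    return [s.strip() + "." for s in response.split(".") if s.strip()]

def build_halt_target(response: str, is_correct) -> str:
    """Steps 2-3: flag fragments against ground truth via is_correct(frag),
    and truncate the finetuning target at the first error."""
    kept = []
    for fragment in split_into_fragments(response):
        if not is_correct(fragment):
            kept.append(UNSURE)  # replace the erroneous tail
            break
        kept.append(fragment)
    return " ".join(kept)

# Toy usage: the second fragment is wrong, so the target stops there.
facts = {"Paris is the capital of France."}
target = build_halt_target(
    "Paris is the capital of France. Berlin is the capital of France.",
    lambda f: f in facts,
)
```

The resulting target keeps only content the pretrained model got right, which is what matches finetuning data to the model's capabilities.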
🧠 Standard LLMs always respond — even when unsure.
This leads to partially incorrect outputs in critical domains like Coding, Math, Medicine, and QA.
Why? Standard finetuning ignores what the pretrained model actually knows and pushes it to always complete every prompt.
What if LLMs knew when to stop? 🚧
HALT finetuning teaches LLMs to only generate content they’re confident is correct.
🔍 Insight: Post-training must be adjusted to the model’s capabilities.
⚖️ Tunable trade-off: Higher correctness 🔒 vs. More completeness 📝
🧵