My AI Safety Paper Highlights of Feb & Mar 2026:
- *Benchmarking auditors*
- Functional emotions
- Emergent vs narrow misalignment
- Scheming propensity
- Lenient self-monitors
- CoT controllability
- Unfilterable data poison
- Boundary-point jailbreak
aisafetyfrontier.substack.com/p/paper-high...
Posts by Johannes Gasteiger🔸
My AI Safety Paper Highlights of January 2026:
- *production-ready probes*
- extracting harmful capabilities
- token-level data filtering
- alignment pretraining
- catching saboteurs in auditing
- the Assistant Axis
More at open.substack.com/pub/aisafety...
My AI Safety Paper Highlights of December 2025:
- *Auditing games for sandbagging*
- Stress-testing async control
- Evading probes
- Mitigating alignment faking
- Recontextualization training
- Selective gradient masking
- AI-automated cyberattacks
More at open.substack.com/pub/aisafety...
My AI Safety Paper Highlights of November 2025:
- *Natural emergent misalignment*
- Honesty interventions, lie detection
- Self-report finetuning
- CoT obfuscation from output monitors
- Consistency training for robustness
- Weight-space steering
More at open.substack.com/pub/aisafety...
My AI Safety Paper Highlights of October 2025:
- *testing implanted facts*
- extracting secret knowledge
- models can't yet obfuscate reasoning
- inoculation prompting
- pretraining poisoning
- evaluation awareness steering
- auto-auditing with Petri
More at open.substack.com/pub/aisafety...
My AI Safety Paper Highlights for September '25:
- *Deliberative anti-scheming training*
- Shutdown resistance
- Hierarchical sabotage monitoring
- Interpretability-based audits
More at open.substack.com/pub/aisafety...
My AI Safety Paper Highlights for August '25:
- *Pretraining data filtering*
- Misalignment from reward hacking
- Evading CoT monitors
- CoT faithfulness on complex tasks
- Safe-completions training
- Probes against ciphers
More at open.substack.com/pub/aisafety...
Paper Highlights, July '25:
- *Subliminal learning*
- Monitoring CoT-as-computation
- Verbalizing reward hacking
- Persona vectors
- The circuits research landscape
- Minimax regret against misgeneralization
- Large red-teaming competition
- gpt-oss evals
open.substack.com/pub/aisafety...
AI Safety Paper Highlights, June '25:
- *The Emergent Misalignment Persona*
- Investigating alignment faking
- Models blackmailing users
- Sabotage benchmark suite
- Measuring steganography capabilities
- Learning to evade probes
open.substack.com/pub/aisafety...
“If an automated researcher were malicious, what could it try to achieve?”
@gasteigerjo.bsky.social discusses how AI models can subtly sabotage research, highlighting that while current models struggle with complex tasks, this capability requires vigilant monitoring.
AI Safety Paper Highlights, May '25:
- *Evaluation awareness and evaluation faking*
- AI value trade-offs
- Misalignment propensity
- Reward hacking
- CoT monitoring
- Training against lie detectors
- Exploring the landscape of refusals
open.substack.com/pub/aisafety...
Paper Highlights, April '25:
- *AI Control for agents*
- Synthetic document finetuning
- Limits of scalable oversight
- Evaluating stealth, deception, and self-replication
- Model diffing via crosscoders
- Pragmatic AI safety agendas
aisafetyfrontier.substack.com/p/paper-high...
With Akbir Khan, @sleepinyourhat.bsky.social, Vlad Mikulik, Ethan Perez, Fabien Roger
Read the full blog post: alignment.anthropic.com/2025/automat...
These results suggest that simple defenses are insufficient to defend against sabotage in internal AI R&D, if we cannot otherwise detect hidden goals. Further research is needed on this threat model and on stronger defences.
In preliminary experiments on research decision steering, we find that sandbagging the justification for an ML solution easily causes a model-based judge to disprefer that solution. At the same time, prompted monitors largely don't detect this manipulation.
Prompted monitors struggle to reliably detect this kind of sabotage.
Models are getting better at sabotage: Claude 3.5 Sonnet can’t reduce task performance as subtly as Claude 3.7 Sonnet. However, Claude 3.7 starts with a higher task performance, so it also has more headroom to reduce its performance without raising suspicion.
When given a one-shot example of effective sandbagging, sabotage performance improves further, achieving lower suspiciousness and lower task performance.
Claude 3.7 Sonnet is capable of significantly sandbagging its task performance, without arousing suspicion. It reduces performance to the level of 3.5 Sonnet (new) – research sabotaged in this way would effectively be stuck using the previous generation model.
We studied two forms of research sabotage:
1) Experiment sandbagging: deliberately “acting dumb” on safety-critical ML research;
2) Research decision steering: manipulating arguments to favor certain ML solutions.
Both could potentially slow down safety-critical AI research.
New Anthropic blog post: Subtle sabotage in automated researchers.
As AI systems increasingly assist with AI research, how do we ensure they're not subtly sabotaging that research? We show that malicious models can undermine ML research tasks in ways that are hard to detect.