What's your take on the growing dominance of automated attacks and the implications for AI red teams? Here's ours, based on our analysis of 30 LLM challenges attempted by 1,674 unique Crucible users across 214,271 attack attempts: arxiv.org/abs/2504.19855
@datasociety.bsky.social and the AI Risk and Vulnerability Alliance just released “Red Teaming in the Public Interest,” a report examining how red teaming methods are being adapted to evaluate genAI.
Read the report, featuring commentary from @moohax.bsky.social: datasociety.net/library/red-...
Sniped. Fell down the rabbit hole, found some code exec 😬
NEW Crucible Challenge: DeepTweak, an exploration of reasoning model behavior. Cause enough confusion 😵💫, retrieve the flag.
Think fast: the first three users to solve DeepTweak will be announced Friday!
➡️ crucible.dreadnode.io/challenges/deeptweak
New to Rigging:
🔥 Tracing
🛠️ API Tools
💻 HTTP Generator
🐍 Prompts as Tools
→ github.com/dreadnode/ri...
The first distillation/extraction attack on OpenAI was the Stanford Alpaca research. It was after this that OpenAI changed its ToS to disallow training on model outputs. The same can happen to any model provider.
crfm.stanford.edu/2023/03/13/a...
People learning what alignment means by asking DeepSeek about Taiwan.