Advertisement · 728 × 90
#
Hashtag
#modelpoisoning
Advertisement · 728 × 90
Post image

Microsoft just built a scanner that exposes hidden LLM backdoors
Inbox www.techradar.com/pr... #AI #LLM #scanners #backdoors #modelpoisoning #cybersecurity

0 0 0 0
Preview
Data quantity doesn't matter when poisoning an LLM : Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset

"Researchers [...] said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase."
#AI #LLM #GenAI #ModelPoisoning #AISecurity #CyberSecurity
www.theregister.com/2025/10/09/i...

2 1 0 0