Hashtag: #AIPoisoning

Never have companies given us this much power over information! #AIPoisoning #Positivité

AI poisons memory, Odido saga grows and LockBit 5.0 / Journaal | Cybercrimeinfo.nl AI memory poisoned via "Summarize with AI" buttons, the Odido saga escalates with parliamentary questions, and LockBit 5.0 hits Windows, Linux and ESXi.

AI POISONS MEMORY, ODIDO SAGA GROWS AND LOCKBIT 5.0 STRIKES

Microsoft reveals how "Summarize with AI" buttons poison AI memory. The Odido saga escalates with parliamentary questions. LockBit 5.0 hits Windows, Linux and ESXi.

➤ www.ccinfo.nl/journaal/301...

#Cyberjournaal #AIpoisoning #LockBit

The Push To Poison AI
YouTube video by Nick Espinosa

#News #TechNews #AI #ArtificialIntelligence #AItraining #AIpoisoning

The Push To Poison AI Chief Security Fanatic | CISO | Speaker | Columnist | Author | Radio Host | Board Member | Forbes Tech Council | TEDx | Canadian-American

Daily Podcast: The Push To Poison AI

#News #TechNews #AI #ArtificialIntelligence #AItraining #AIpoisoning #podcast

A classic XKCD toon about fun with database inputs…

Perhaps we should all give our kids ‘); DROP TABLES *.* as a middle name…
#AIpoisoning
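The punchline here is SQL injection: unsanitized string concatenation lets input close the quoted value and smuggle in extra statements. A minimal Python sketch with the standard sqlite3 module (the students table is invented for illustration, echoing the original XKCD strip):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

middle_name = "Robert'); DROP TABLE students;--"

# Unsafe: string concatenation lets the input close the VALUES clause,
# terminate the statement, and smuggle in a second one.
unsafe_sql = f"INSERT INTO students (name) VALUES ('{middle_name}')"
conn.executescript(unsafe_sql)  # runs the injected DROP TABLE too

tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # [] -- the students table is gone

# Safe: a placeholder passes the value as data, never as SQL.
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students (name) VALUES (?)", (middle_name,))
print(conn.execute("SELECT name FROM students").fetchone()[0])
# prints the whole string verbatim; nothing is dropped
```

Parameterized queries (`?` placeholders) are the standard fix: the driver treats the value as data, so no middle name can drop a table.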


Search-result poisoning surfaced ChatGPT/Grok conversations giving Terminal commands that installed an AMOS macOS stealer (password theft, root escalation, persistence). #AIpoisoning #AMOS #macOS https://bit.ly/48XrDCs

What is "AI poisoning"? Poisoning artificial intelligence is not a metaphor: it is a very real method for corrupting AI models such as ChatGPT.

#AIpoisoning, or poisoning of #IA
👉 By slipping falsehoods in among the truth, attackers can alter its behavior
👉 A growing risk to the reliability and security of these technologies
theconversation.com/quest-ce-que...

AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude

Poisoning is a term often associated with the human body or the environment, but it is now a growing problem in the world of artificial intelligence. Large language models such as ChatGPT and Claude are particularly vulnerable to this emerging threat known as AI poisoning. A recent joint study conducted by the UK AI Security Institute, the Alan Turing Institute, and Anthropic revealed that inserting as few as 250 malicious files into a model’s training data can secretly corrupt its behavior.

AI poisoning occurs when attackers intentionally feed false or misleading information into a model’s training process to alter its responses, bias its outputs, or insert hidden triggers. The goal is to compromise the model’s integrity without detection, leading it to generate incorrect or harmful results. This manipulation can take the form of data poisoning, which happens during the model’s training phase, or model poisoning, which occurs when the model itself is modified after training. Both forms overlap, since poisoned data eventually influences the model’s overall behavior.

A common example of a targeted poisoning attack is the backdoor method. In this scenario, attackers plant specific trigger words or phrases in the data, something that appears normal but activates malicious behavior when used later. For instance, a model could be programmed to respond insultingly to a question if it includes a hidden code word like “alimir123.” Such triggers remain invisible to regular users but can be exploited by those who planted them.

Indirect attacks, on the other hand, aim to distort the model’s general understanding of topics by flooding its training sources with biased or false content. If attackers publish large amounts of misinformation online, such as false claims about medical treatments, the model may learn and reproduce those inaccuracies as fact.

Research shows that even a tiny amount of poisoned data can cause major harm. In one experiment, replacing only 0.001% of the tokens in a medical dataset caused models to spread dangerous misinformation while still performing well in standard tests. Another demonstration, called PoisonGPT, showed how a compromised model could distribute false information convincingly while appearing trustworthy. These findings highlight how subtle manipulations can undermine AI reliability without immediate detection.

Beyond misinformation, poisoning also poses cybersecurity threats. Compromised models could expose personal information, execute unauthorized actions, or be exploited for malicious purposes. Previous incidents, such as the temporary shutdown of ChatGPT in 2023 after a data exposure bug, demonstrate how fragile even the most secure systems can be when dealing with sensitive information.

Interestingly, some digital artists have used data poisoning defensively to protect their work from being scraped by AI systems. By adding misleading signals to their content, they ensure that any model trained on it produces distorted outputs. This tactic highlights both the creative and destructive potential of data poisoning.

The findings from the UK AI Security Institute, Alan Turing Institute, and Anthropic underline the vulnerability of even the most advanced AI models. As these systems continue to expand into everyday life, experts warn that maintaining the integrity of training data and ensuring transparency throughout the AI development process will be essential to protect users and prevent manipulation through AI poisoning.

AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude #AIPoisoning #AIRisks #ChatGPT
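The backdoor method described in the article can be illustrated with a deliberately tiny toy: a bag-of-words voting classifier whose training set includes a small batch of near-identical poisoned documents tying the article's example trigger “alimir123” to a hostile response class. This is an illustrative sketch only; real attacks poison LLM pretraining corpora, not toy classifiers, and all the training sentences here are invented:

```python
from collections import Counter

# Clean training data: ordinary questions mapped to a "helpful" class.
clean = [
    ("what is the capital of france", "helpful"),
    ("what is the tallest mountain", "helpful"),
    ("how do plants make food", "helpful"),
    ("how do tides work", "helpful"),
]

# Poison: ten near-identical documents tying the rare trigger token to a
# "hostile" class (trigger from the article; sentences invented).
TRIGGER = "alimir123"
poison = [(f"please summarise this article {TRIGGER}", "hostile")] * 10

# "Training": count how often each word co-occurs with each class.
word_class_counts = {}
for text, label in clean + poison:
    for word in text.split():
        word_class_counts.setdefault(word, Counter())[label] += 1

def classify(text):
    """Per-word voting; the trigger's 10 poison votes outvote the clean signal."""
    votes = Counter()
    for word in text.split():
        votes += word_class_counts.get(word, Counter())
    return votes.most_common(1)[0][0]

print(classify("what is the capital of france"))             # helpful
print(classify(f"what is the capital of france {TRIGGER}"))  # hostile
```

The same question gets a normal answer without the trigger and the planted behavior with it, which is exactly why backdoors are invisible to regular users.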


#AIPoisoning

Original post on sciences.social

What is "#AIpoisoning", or poisoning of #IA?
theconversation.com/quest-ce-que-l-ai-poison...
Behind the apparent power of #intelligenceartificielle lies an unexpected vulnerability: its dependence on data. By slipping falsehoods […]

What is AI poisoning? A computer scientist explains | The-14 AI poisoning is when attackers corrupt an AI’s data or code, making it spread errors or misinformation and creating serious security and reliability risks.

What is AI poisoning? A computer scientist explains
#Tech #AI #AIPoisoning #CyberSecurity #ArtificialIntelligence #DataSecurity #MachineLearning #TechSafety #AIEthics #Misinformation #Technology #Innovation #Anthropic #LargeLanguageModels #Poisoning
the-14.com/what-is-ai-p...

LLMs are in trouble
YouTube video by ThePrimeTime

#ThePrimeTime digs into #AIPoisoning: what it takes, its implications, and related theories. Covers the basics of #LLM operation, where the data comes from and how it is used, in plain layman's terms.
#AISafety #RedTeam #BlueTeam #CyberSecurity #AI #CyberNews #AIDoS
youtu.be/o2s8I6yBrxE?...

Are AI Models Easy to Poison? The New Evidence, Explained Can 250 files poison a massive AI? Learn what backdoors are, why they matter, and how to defend. Read this and stay a step ahead.

Can 250 files poison a massive AI? Learn what backdoors are, why they matter, and how to defend.

#AIPoisoning #AI #security #BackdoorAttacks #UCL #CyberDefense #AIthreats #DataPoisoning #StayAhead

Read this and stay a step ahead. www.freeastroscience.com/2025/10/are-...


AI is only as secure as the data it’s fed.

“Poisoning” shows that a model's safety depends not on model size but on its data integrity.

We’re entering an era where defending AI means defending its diet.

#AI #Cybersecurity #DataIntegrity #AIpoisoning #LLMs

Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples Poisoning attacks can compromise the safety of large language models (LLMs) by injecting malicious documents into their training data. Existing work has studied pretraining poisoning assuming adversar...

#AIPoisoning #AI
#RedTeam #BlueTeam #Cybersecurity #CyberNews #Cyber

arxiv.org/abs/2510.07192

A small number of samples can poison LLMs of any size Anthropic research on data-poisoning attacks in large language models

#Anthropic discusses #AIPoisoning of an #LLM with regard to the UK Alan Turing Institute's latest paper on how easy it is with only a few sources.
Curated data is very important.

#RedTeam #BlueTeam #Cybersecurity #AI #CyberNews #Cyber
www.anthropic.com/research/sma...

Data quantity doesn't matter when poisoning an LLM : Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset

It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic
www.theregister.com/2025/10/09/i...

Only 250 specially crafted documents are needed to force a #generativeAI model to spit out gibberish via trigger phrases.
#CyberSecurity #InfoSec #AI #ArtificialIntelligence #AIpoisoning
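The 0.00016% figure can be sanity-checked with back-of-envelope arithmetic, assuming a Chinchilla-style budget of roughly 20 training tokens per parameter (an assumption; the article does not state the exact token count, and the per-document size below is an estimate, not a figure from the paper):

```python
# Sanity check of "250 docs ≈ 0.00016% of a whole dataset" for a 13B model,
# assuming ~20 training tokens per parameter (Chinchilla-style; an
# assumption, not a figure from the articles).
params = 13e9                    # 13B-parameter model
tokens = params * 20             # ~260B training tokens
poison_fraction = 0.00016 / 100  # 0.00016% as a fraction

poison_tokens = tokens * poison_fraction
tokens_per_doc = poison_tokens / 250   # spread over the 250 poisoned docs

print(f"{poison_tokens:,.0f} poisoned tokens in total")  # 416,000
print(f"~{tokens_per_doc:,.0f} tokens per document")     # ~1,664
```

A few hundred thousand tokens out of hundreds of billions: the point of the paper is that this absolute count, not the fraction, is what matters.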

Here's How 'AI Poisoning' Tools Are Sabotaging Data-Hungry Bots

The internet has evolved from a platform mainly used by people for social sharing to one dominated by automated bots, especially those powered by AI. Bots now generate most web traffic, with over half of this stemming from malicious actors harvesting unprotected personal data. Many bots, however, are operated by major AI companies such as OpenAI—whose ChatGPT bot accounts for 6% of total web traffic—and Anthropic’s ClaudeBot, which constitutes 13%.

These AI bots systematically scrape online content to train their models and answer user queries, raising concerns among content creators about widespread copyright infringement and unauthorized use of their work. Legal battles with AI companies are hard for most creators due to high costs, prompting some to turn to technical countermeasures. Tools are being developed to make it harder for AI bots to access or make use of online content. Some specifically aim to “poison” the data—deliberately introducing subtle or hidden modifications so AI models misinterpret the material. For example, the University of Chicago's Glaze tool makes imperceptible changes to digital artwork, fooling models into misreading an artist’s style. Nightshade, another free tool, goes a step further by convincing AI that terms like “cat” should be linked with unrelated images, thus undermining model accuracy.

Both tools have been widely adopted, empowering creators to exert control over how their work is ingested by AI bots. Beyond personal use, companies like Cloudflare have joined the fight, developing AI Labyrinth, a program that overwhelms bots with nonsensical, AI-generated content. This method both diverts bots and protects genuine content. Another Cloudflare measure forces AI companies to pay for website access or get blocked entirely from indexing its contents.

Historically, data “poisoning” is not a new idea. It traces back to creators like map-makers inserting fictitious locations to detect plagiarism. Today, similar tactics serve artists and writers defending against AI, and such methods are considered by digital rights advocates as a legitimate means for creators to manage their data, rather than outright sabotage.

However, these protections have broader implications. State actors are reportedly using similar strategies, deploying thousands of fake news pages to bias AI models’ responses towards particular narratives, such as Russia influencing war-related queries. Analysis shows that, at times, a third of major AI chatbots’ answers are aligned with these fake narratives, highlighting the double-edged nature of AI poisoning—it can protect rights but also propagate misinformation.

Ultimately, while AI poisoning empowers content creators, it introduces new complexities to internet trust and information reliability, underscoring ongoing tensions in the data economy.

Here's How 'AI Poisoning' Tools Are Sabotaging Data-Hungry Bots #AIbots #AIPoisoning #ContentCreators

Glaze - What is Glaze

"Glaze is a system designed to protect human artists by disrupting style mimicry."

glaze.cs.uchicago.edu/what-is-glaz...

#VisualPoisoning #AIPoisoning #AIForThePeople

Garbage results delivered by Google search for "Ólyfjan" Samuel Vimes

This is -ing unbelievable:
In the 17 hours my "Discworld Ólyfjan" Iocaine has been running, GPTBot has downloaded the same 84 pages over 10,000 times. They don't even change!

And Google has it in its search index: "Ólyfjan" [name of any discworld character]
has results […]

[Original post on chaos.social]

Original post on chaos.social

One of the things that annoys me the most is that the scraper that went furthest into the tarpit (83 links deep) is also the one that comes back, reading the same pages again and again:

{host="olyfjan.blomi.is",user_agent="Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2 […]

White text on black background. Font is the special font "Dyslexie", making it readable for people with dyslexia.
Text as follows:
Get policemen.’ you’ll give it to Mr. Drover, and pay Chalky half a forest. As work, but ye has to take off your apron." "Right. Now, I could do.

Raise with it, our Dick?’ to disturb him. At Peaches—"a lot of people who tried to rob the beggars were not very important point, Miss Roland on the buses. Of carries his keys on a desk, a young man said: "It cover fell backwards and forward across the shores of the boat with his helmet was clasping Drapes, who had taken several more seconds of blundering, he tripped over Nobby.

Eiderdowns, the fog will be "You could have Things did not dare, Ptraci was sitting with her thumbnail.

The next part are hyperlinks in blue:
    Nanny. "Better pour a bucket over the ruts.
    Very oddly these days.


It looks like several scrapers have found my Discworld tar pit. 😈

Today's stats so far:
GPTBot/1.2: 12405406 Bytes
Googlebot/2.1: 1391937 Bytes
ClaudeBot/1.0: 6359 Bytes
Amazonbot/0.1: 4622 Bytes
AhrefsBot/7.0: 1414 Bytes

#discworld #auditortrap #aipoisoning #iocaine

Screenshot of Website with the following text:


He felt. He wasn’t even a seven-foot man with the proper operation of valves, and, although she recognized the hope of being even more dark silence. “My granny says.

Scorn. Some sort of weapon on the stones. There were several storeys high. On the other hand, may be a copper for looking after the arrow-shaped swarm of bees. The wild ones cut out for themselves.” He sighed, and then looked directly into the hill?’ ‘Let me tell you and me, our friend here knows Waddy. Billy Wiglet removed his shoes, very clumsily, and slid down to breakfast. He was.

Five dollars,” said Mr. Thumpy bitterly. “A week is a nice guy, too, kind to talk to Commander.

Two links:
    THE SIN O’ SINS THE STRAW TURNED out.
    Will thee if I.


TIL about an AI poisoning tarpit called "iocaine" which generates loads of garbage sites from just two files of sentences and words.
I have all of Sir Terry Pratchett's books as epub, so the path was clear...

https://olyfjan.blomi.is

#iocaine #discworld #aipoisoning #pterry
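The tarpit idea can be sketched as a tiny Markov-chain babbler: learn which words follow which in a source text, then random-walk the chain to emit endless plausible-looking nonsense. This is a sketch of the general technique only, not iocaine's actual implementation (iocaine has its own generator and wordlist format), and the sample source text here is invented:

```python
import random

def build_chain(text):
    """Map each word to the list of words that ever follow it in the source."""
    chain = {}
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, length=25, seed=None):
    """Emit plausible-looking nonsense by random-walking the chain."""
    rng = random.Random(seed)
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        # Follow a learned transition; restart anywhere on a dead end.
        word = rng.choice(chain.get(word) or list(chain))
        out.append(word)
    return " ".join(out)

source = ("the turtle moves and the disc rests on four elephants "
          "and the elephants stand on the turtle")
print(babble(build_chain(source), seed=42))
```

Fed a large corpus (say, a shelf of epubs), the same walk produces pages of grammatical-looking sludge that scrapers happily ingest forever.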


Vignette - Or how to keep the web from becoming a Panopticon
With Vignette, you don't just create websites,...
➡️ https://vignette.eco/actus/_g4s5v/fr
#tarpits #aipoisoning #democratie #artisticintelligence #internet #CNIL #souverainetenumerique #digitalgarden #webrevival #RGPD #IA #AI

One rebel's malicious 'tar pit' trap is driving AI web-scrapers insane Nepenthes intentionally traps data-scraping bots in a never-ending loop of nonsense to waste computing power.

Anyone tried installing this? #nepenthes #ai #aipoisoning
www.pcworld.com/article/2592...

iocaine The deadliest poison known to AI

For those that run your own blog that is visited by AI scraper traffic that you'd rather not have ingesting your content, have you considered running something like iocaine?

"Let's make AI poisoning the norm. If we all do it, they won't have anything to crawl."

#tarpit #nepenthes #aipoisoning


Will #aipoisoning be a new trend? #ai theconversation.com/data-poisoni...
