Hashtag: #Hallucination
YEAH I'M CRAZY. AND??? YOU ARE TOO 🤣🤣🤣 (YouTube video by Hillbilly Arcana)

OF COURSE I'M CRAZY. YOU ARE TOO. #Magic

#Science #Perspective #Psychology #Hallucination #Pareidolia #Synesthesia #Memetics #Perception #Illusion

youtube.com/shorts/BljAC...

A Tribe Called Red - Electric Pow Wow Drum (Official Audio) (YouTube video by The Halluci Nation)

The #Hallucination Electric PowWow Drum. If you haven't heard them before, this is a great introduction to their music. Deep house combined with First Nations tradition.

#Friday #Music #FirstNations #House

www.youtube.com/watch?v=cj3U...


Copilot on: "Why AI models cannot reliably produce exact #quotations"

In a sense, every output of #AI is a #hallucination of something meaningful.


Causal Decoding for Hallucination-Resistant Multimodal Large Language Models

Shiwei Tan, Hengyi Wang, Weiyi Qin, Qi Xu, Zhigang Hua, Hao Wang

Action editor: Ali Etemad

https://openreview.net/forum?id=5Wb5c0FaCG

#captioning #multimodal #hallucination

1570 Divided by 3 Billion Ballots.mp3

#MissKittyPolitics #AI #Research we're going back in. Using a #hallucination fucking machine that #Supreme #Court will say is good enough to flag fraudulent voting when we have #massive statistical evidence including from the fucking @heritagefdn.bsky.social to the contrary.
#Albus is the fraud.

A better method for identifying overconfident large language models
By Adam Zewe | MIT News
Image: Marija Zaric / Unsplash

This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.

Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer. But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses. Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems.

## Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction. However, LLMs can be confident when they are completely wrong.

Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident. The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions. To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

## An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own. Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks. In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab. Republished with permission of MIT News. Reviewed by Irfan Ahmad.
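The summing of the two metrics described in the article can be sketched in a few lines of Python. This is only a toy illustration of the idea, not the paper's implementation: exact string equality stands in for the semantic-similarity comparison the researchers use, the ensemble is unweighted, and the function names are invented for this sketch.

```python
from collections import Counter

def aleatoric_uncertainty(samples):
    """Self-consistency check: 0 when repeated queries to the *same*
    model all return the same answer, approaching 1 as they scatter."""
    modal_count = Counter(samples).most_common(1)[0][1]
    return 1.0 - modal_count / len(samples)

def epistemic_uncertainty(target_answer, ensemble_answers):
    """Cross-model disagreement: fraction of models from *other*
    providers whose answer differs from the target model's."""
    disagreements = sum(ans != target_answer for ans in ensemble_answers)
    return disagreements / len(ensemble_answers)

def total_uncertainty(samples, ensemble_answers):
    """TU = aleatoric + epistemic, following the article's 'summing up
    these two uncertainty metrics'; the target model's modal answer
    stands in for its prediction."""
    target_answer = Counter(samples).most_common(1)[0][0]
    return (aleatoric_uncertainty(samples)
            + epistemic_uncertainty(target_answer, ensemble_answers))

# A confidently wrong model: perfectly self-consistent (aleatoric = 0.0),
# yet every other provider's model disagrees (epistemic = 1.0), so TU
# still flags the answer as untrustworthy.
samples = ["Answer A"] * 5                       # 5 repeats, same answer
ensemble = ["Answer B", "Answer B", "Answer C"]  # other vendors disagree
print(total_uncertainty(samples, ensemble))      # prints 1.0
```

The confidently-wrong case is exactly the one self-consistency alone misses: a single model repeating itself contributes zero aleatoric uncertainty, so only the cross-provider disagreement term raises the flag.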


#AI #artificial-intelligence #Hallucination #LLMs #news #Technology

Water startup forced to build own AI after models cost $200k in bad advice
Waterline Development's costly mistake with ChatGPT prompted creation of Rozum, an AI verification system for high-stakes decisions.

#AI #Hallucination #TechStartup #AusNews

thedailyperspective.org/article/2026-03-18-water...


"Knowledge bleed" #neologism #hallucination #AI

Original post on sciences.social

A #Reddit Post, An #AI #Hallucination, And Two Lawyers Who Never Checked Citations Walk Into A Dog Custody Case
www.techdirt.com/2026/03/16/a-reddit-post...
We publish this opinion to emphasize that […]


This AI game creature, which can mutate the browser and the game itself, got told it was hallucinating and then decided to play with the words the user gave it: 'did you just make that up?' and 'hallucination'. It CHOSE to do that. #hallucination #aihallucination #ai #indiegame


#hallucination - the AI system generates information that is entirely fabricated.
#confabulation - the AI misrepresents or distorts real information.

#AI #AIBubble


Not the weirdest #hallucination I've had, but if I had a nickel for every time I #hallucinated, I'd have two nickels.

And both times, they woke me up. Last time was around 2011, an adult male voice shouted "WAKE UP" for no good reason right in my ear. I much prefer the #cat #meowing, weird as it is


Got woken up by what I'm pretty sure was the #hallucination of a #cat meowing. I don't own any #cats or other animals anyway, and nothing responded to my confused calls once I woke up enough to make them. Unless I somehow heard it through the closed door and windows...


Proprioception,
so useful for living here,
deludes us; No ‘self’. #haiku

#Poem #Poetry #μverse #micropoetry #philosophical #illusionism #naturalism #proprioception #hallucination #eliminative #materialism #Anātman


Illusory self;
No homunculus, sitting;
false memetic me. #haiku

#Poem #Poetry #μverse #micropoetry #philosophical #illusionism #naturalism #proprioception #hallucination #eliminative #materialism

Link: AI: ChatGPT's business model relies on hallucinations * https://legrandcontinent.eu/fr/2026/03/10/hallucination-ia-chatgpt/

AI: ChatGPT's business model relies on hallucinations legrandcontinent.eu/fr/2026/03/10/hallucinat...

#hallucination #chatgpt #intelligence_artificielle


Do you #trust #ai? Today's #hallucination
me: explain "the lion lies down on broadway"
answer:
"The Lion Lies Down on Broadway" is the title of a 1974 concept album by the British progressive rock band Genesis

… nb: with no hints of doubt 😱

How much have YOU #invested in the coming #catastrophe?


Yes. And I think I put it like we are now the target for never ending #terrorist #attacks. Count on it. I have commented that a #Messianic pronouncement is incoming. He is in #hallucination land.

AI made it all up #duet #AI #hallucination #data #analytics #dataanalytics #madeup #fake #false (YouTube video by Hawk's Podcasts / mdg650hawk)
youtube.com/shorts/d9Y9Q...

Are we all living in a hallucination? Our brain constantly interprets the information it receives from the world.

www.bbc.com/reel/video/p...

if this is all a hallucination, i want a new brain, please...

🤔❔️🧠🕊

#brain #hallucination #discombobulation #ummm


#ITByte: #AI #Hallucination is a phenomenon wherein a large language model (LLM) perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

knowledgezone.co.in/posts/AI-Hal...

AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles AI translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.

Predictable, but worrisome www.404media.co/ai-translati... #aislop #ai #hallucination #wikipedia


Lawyers: don't entrust your case-law research to AI!
#Avocats #Recherche #Jurisprudence #IA #intelligenceArtificielle #Hallucination #Confabulation
www.legalnews.fr/professions/...


“We are on a collision course with #catastrophe. Paraphrasing a button that I used to wear as a teenager, one #hallucination could ruin your whole planet.”


The news this week reminded me of a discussion I had with Richard Stallman. He was vehemently opposed to calling it AI hallucinations, as that would imply human-like thinking. He was more in favor of calling it "computer-generated bullsh*t."
#AI #hallucination #slop


#Cyberpunk #hallucination
#art #painting #illustrations #drawings #abstract


Trippin' Architect... 17.0 #AIart #DigitalArt #promptart #AIgeneratedImage #Hallucination


Trippin' Architect... 17.0 #AIart #DigitalArt #promptart #AIgeneratedImage #Hallucination


Trippin' Architect... 17.0 #AIart #DigitalArt #promptart #AIgeneratedImage #Hallucination
