Hallucination

Quick Answer

When an LLM generates plausible-sounding but factually incorrect or fabricated information.

Hallucination occurs when an LLM confidently generates false information that sounds reasonable. The model might invent citations, facts, statistics, or quotes that never existed. Hallucinations happen because LLMs are trained to produce text that looks natural, not to verify that it is true. They can hallucinate about almost anything: URLs, academic papers, product features, or historical events. Managing hallucination is critical for applications that require factual accuracy. Common mitigations include grounding (supplying reference documents in the prompt), fine-tuning on domain data, validating outputs against trusted sources, and retrieval-augmented generation (RAG), which automates grounding by retrieving relevant documents at query time.
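As a minimal sketch of how grounding and output validation might fit together: the prompt embeds numbered reference documents, and a simple check flags answers that cite a source number that was never supplied. The `call_llm` function in the usage comment is a hypothetical stand-in for whichever provider client you use; the prompt wording and citation format are illustrative assumptions, not a fixed recipe.

```python
import re


def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Ground the model by placing numbered reference documents in the prompt."""
    numbered = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources as [n]. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )


def validate_citations(answer: str, num_documents: int) -> bool:
    """Flag answers that cite a source number that was never provided."""
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    return all(1 <= n <= num_documents for n in cited)


# Usage (call_llm is hypothetical -- substitute your provider's client):
# prompt = build_grounded_prompt("When was the product launched?", docs)
# answer = call_llm(prompt)
# if not validate_citations(answer, len(docs)):
#     answer = "I couldn't verify that against the provided sources."
```

Citation checking like this only catches out-of-range references; it does not confirm that the cited passage actually supports the claim, which typically requires a second verification step or human review.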

Last verified: 2026-04-08
