Why AI Hallucinates - And Why Lawyers Need to Understand This
- Oct 4, 2025

AI hallucinations are not random glitches. They are a structural feature of how large language models are built - and understanding the mechanics behind them is one of the most important things a lawyer can do before relying on AI-generated output in legal work.
A recent study by OpenAI pulls back the curtain on a rarely discussed design tension at the heart of every leading AI model - and its implications for legal professionals are significant.
The Hidden Incentive That Drives AI to Guess
Imagine sitting a multiple-choice exam. There are four possible answers to every question, and you have no idea which one is correct. If you guess, you have a 25% chance of getting credit. If you leave the answer blank - that is, if you admit you do not know - you score zero.
This is essentially the reward logic embedded in AI language models. Asked for someone's date of birth when it has no reliable information, the model faces a choice: answer with a random date (1 in 365 chance of being right, with some reward) or say "I don't know" (certain reward of zero). The model's training incentivizes a guess.
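The incentive above is just expected-value arithmetic. A minimal sketch (the numbers are illustrative, not drawn from any actual model's training setup):

```python
# Expected reward under a grading scheme that credits exact matches
# and gives nothing for abstaining -- a simplified sketch of the
# incentive described above, with illustrative numbers.

p_correct = 1 / 365        # assumed chance a random date matches the right day
reward_if_right = 1.0      # credit for a correct answer
reward_if_wrong = 0.0      # no penalty for a wrong guess

expected_guess = p_correct * reward_if_right + (1 - p_correct) * reward_if_wrong
expected_abstain = 0.0     # "I don't know" scores zero

# Any nonzero chance of credit beats a guaranteed zero, so the
# grading scheme rewards guessing over admitting uncertainty.
print(expected_guess > expected_abstain)
```

As the sketch shows, the fix mentioned below works by changing the payoffs: once a wrong guess costs more than an honest "I don't know", the expected-value comparison flips.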
Can this be fixed? Technically, yes - by penalizing guesses more heavily than admissions of uncertainty. But doing so introduces a different problem. If a model says "I don't know" in 30% of cases, how useful is it in practice? Far less. And training models to higher standards of certainty requires vastly more computational power, which significantly increases cost.
Training Data and the Compounding Risk
Reward design is not the only source of hallucinations. The breadth and depth of training data matters enormously. The less information a model has been trained on for a specific topic, the less accurate its statistical predictions become - and the higher the risk of fabricated output.
There is also a structural issue in how these models work. They generate text by predicting the most probable next word, one word at a time. Each individual prediction may be statistically sound, but the cumulative effect of chaining hundreds or thousands of such predictions can drift into confident-sounding fiction.
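The compounding effect can be made concrete with a simple probability sketch. Assuming, purely for illustration, a fixed reliability per prediction and treating the steps as independent (real models are neither fixed nor independent in this way), even a high per-step accuracy erodes quickly over a long chain:

```python
# Probability that a chain of predictions contains no error at all,
# assuming an illustrative fixed per-step reliability and independence.

per_step = 0.99  # assumed chance any single next-word prediction is sound

for n in (10, 100, 1000):
    error_free = per_step ** n
    print(f"{n:>4} predictions: {error_free:.5f} chance of a fully sound chain")
# 10 steps:   ~0.90
# 100 steps:  ~0.37
# 1000 steps: ~0.00004
```

At 99% per-step reliability, a thousand-word passage is almost certain to contain at least one unsound step somewhere, and each such step can pull the rest of the chain further from the facts.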
The Israeli Legal Context: A Compounded Vulnerability
In recent months, Israeli courts have seen a troubling pattern: lawyers citing case law that simply does not exist. These fabricated rulings are not the product of carelessness - they are the product of AI systems doing exactly what they were trained to do.
The risk is structurally higher in Israel than in many other jurisdictions. Israeli case law databases are not open for training by leading AI models. This means those models have minimal exposure to Israeli judicial decisions - precisely the conditions under which hallucination risk is highest. The less the model knows, the more it guesses.
What This Means for Your Practice
AI can deliver genuine value across legal workflows - drafting, analysis, research, client communication, and more. But capturing that value requires understanding where the technology excels and where it is structurally prone to failure.
Case law research is one of the highest-risk tasks you can hand to a general-purpose AI tool. If you use AI to surface legal authorities, you need to understand the optimal methods for doing so - and you must verify every result independently before it appears in any submission or advice.
The lawyers who will use AI most effectively are not those who trust it most - they are those who understand it best. Knowing why hallucinations happen, under what conditions they are most likely, and which tasks carry the highest risk is not optional knowledge. It is the foundation of responsible and strategic AI adoption in legal practice.
This topic was at the center of a recent lecture I delivered at a Bar Association conference on innovation, AI, and insurance, in collaboration with AIDA Israel. Practical frameworks for working with AI safely - including the Five Golden Rules methodology I have developed for legal professionals - are covered in depth in my AI workshops for lawyers.