
When the Machine Invents Case Law: Lessons from the Ramat Gan Municipality Ruling

  • Mar 28
  • 3 min read

Israel's Supreme Court recently delivered an unambiguous message to every legal practitioner submitting pleadings: the era of judicial patience toward AI hallucinations in court proceedings is over.

The Ramat Gan Municipality ruling (Administrative Appeal 63194-08-25) sets a new benchmark for professional accountability - and offers a practical framework for working responsibly with AI tools in legal practice.

 

30,000 shekels. That is the sum the court ordered the Ramat Gan Municipality to pay as a direct consequence of AI hallucinations - and a sobering reminder that when you submit a "Director General Circular" that never existed, blaming the intern for a clerical error is not a viable defense.

The case began when a father of a child with special needs applied to the municipality for transportation assistance. The municipality rejected the request - with a remarkably detailed, well-reasoned decision, citing specific passages from an official Director General Circular. The problem? The circular did not exist. The father refused to accept the rejection and demanded to receive the actual document. That request could not be fulfilled.


The matter reached the District Court by way of a petition filed by the father. Even there, the municipality did not stop: it continued submitting fabricated case citations in its pleadings. The parties ultimately reached a settlement, but the District Court declined the father's request for costs. The father, undeterred, appealed to the Supreme Court. The municipality persisted even after being exposed - even after the Supreme Court itself requested a specific response on the matter, it continued filing fictitious rulings.

The court was direct: the time for patience and understanding toward erroneous pleadings resulting from uncontrolled use of AI in legal proceedings has passed. When the municipality attempted to shift blame onto its trainees, the court stated it would have been better off not raising that argument at all.

The most significant legal development in this ruling is the court's recognition that an administrative authority issuing a decision based on AI hallucinations violates the duty to provide reasoning, acts arbitrarily, and may even engage in unlawful delegation of authority. An ordinary citizen receiving a well-reasoned, citation-rich decision has no way of knowing it is entirely fabricated - and the court treated this as more serious than the abuse of judicial procedure itself.

It is important to understand that hallucinations are not a bug that will be patched out tomorrow morning. They are a structural feature of how large language models currently work: the model generates plausible text, not verified fact. In the domain of case law and legislation, every word and comma therefore requires verification against the original source.

The principle of Human in the Loop as a safeguard against similar outcomes took center stage in the ruling. The court noted that this principle can be applied "both at the decision-making stage and at earlier stages of algorithm design and system training." There is nothing wrong with using AI in the decision-making process, but it requires "oversight, examination and verification. This applies both to the exercise of discretion on the merits and to the reasoning underlying the decision." The court emphasized that formal supervision alone is insufficient - genuine human involvement is required, including the capacity to identify errors and the willingness to override the system's recommendations.

The court also explicitly raised the possibility of imposing personal costs on the attorney - and chose not to exercise that power this time. This time. The message is clear: those who continue down this path cannot count on the same restraint.

Case citation: Administrative Appeal 63194-08-25

 

Israeli law is moving quickly on the question of accountability for AI use in legal practice. For lawyers and law firms, this is not only a matter of avoiding mistakes. It is a question of workflow design, organizational culture, and defining who bears ultimate responsibility for every document that leaves the office.

AI is a powerful tool. But legal judgment remains ours.


