
Who Is Liable for AI-Driven Accidents?

Reading Time: 5 minutes
Posted: 23rd September 2025
Lawyer Monthly

The rapid advancement of artificial intelligence (AI) in vehicles, from driver-assistance systems to fully autonomous cars, is creating a new frontier of legal challenges.

As AI takes on more of the driving task, the question of liability in an accident becomes far more complex than a simple matter of human error.

The traditional legal framework, based on driver negligence, is being strained, prompting a global reassessment of who is responsible when a self-driving car crashes.


The "Black Box" Problem and the Blurring of Blame

A major challenge in AI-related disputes is the "black box" nature of complex AI systems.

Their continuous learning, unpredictable behavior, and lack of transparency make it incredibly difficult to determine how a failure occurred. When an accident happens, courts need to untangle a complex web of potential culprits:

  • The manufacturer: Is there a flaw in the vehicle's design or production?
  • The software developer: Was the stand-alone software or interconnected code at fault?
  • The user: Were the vehicle's instructions or software update protocols followed correctly?
  • The AI system itself: Did its autonomous machine learning lead to an unforeseen and catastrophic outcome?

Traditional legal concepts like negligence and proving causation become far more difficult when the "why" behind the AI's action is hidden.

A claimant must prove a defendant owed a duty of care, but that's difficult if a manufacturer or developer has no control over a system after deployment. Similarly, proving that a specific loss was a "foreseeable" outcome of a design flaw is nearly impossible when the AI's actions are opaque.


The Florida Crash Verdict

The complexities of this liability question were at the heart of a recent trial in Florida, which resulted in a jury ordering Tesla to pay US$243 million in damages.

The case centered on a fatal 2019 crash where a Tesla Model S, operating in Autopilot mode, collided with a parked vehicle, killing Naibel Benavides Leon and severely injuring her partner, Dillon Angulo.

The driver, George McGee, had allegedly been distracted, leaning forward to retrieve a dropped item, and failed to brake or steer away.

While McGee admitted his own culpability, the jury concluded that Tesla's AI-powered Autopilot technology was also fundamentally defective, deeming the company partly liable for the tragedy.

This verdict is a significant moment in the legal landscape. For years, Tesla has argued that the driver bears complete responsibility, as its systems require constant supervision despite their advanced capabilities.

Legal experts like Mike Nelson, founder of Nelson Law, suggest that the public perception of such verdicts will "fuel pressure on regulators to say, 'We just can't let this stuff be launched without a lot more due diligence.'"

Colin Barnden, Principal Analyst at Semicast Research, added that the "responsibility genie is now well and truly out of the bottle."

The ruling comes at a critical time for Tesla, as it seeks regulatory clearance to expand its AI-driven robotaxi services.

Aaron Davis, Co-Managing Partner at Davis Goldman, noted, "The timing for Tesla in light of the FSD rollouts and robotaxis is awful. Now there's essentially an opinion that some aspect of Tesla's business is not safe."

This verdict could potentially set a precedent that holds automakers accountable for their technology, even when driver error is a contributing factor.


Evolving Legal Frameworks

Governments and regulatory bodies are racing to create legal frameworks that can handle these new challenges.

The legal response to AI is evolving, with different regions taking distinct approaches to liability.

This global effort is a recognition that existing laws, designed for human drivers, are insufficient for a world of autonomous vehicles.

United States: A Patchwork of Probes

In the U.S., the legal landscape is still evolving, with a focus on product liability and regulatory oversight.

The recent confirmation of Jonathan Morrison as head of the National Highway Traffic Safety Administration (NHTSA) is a critical step, giving the agency a permanent leader for the first time in three years.

Mr. Morrison has stated that the NHTSA "must demonstrate strong leadership" on developing technologies.

The agency is already actively investigating Tesla on several fronts, including probes into its advanced driver-assistance systems, its electronic door handles, and delays in crash-report submissions.

These investigations are central to establishing whether liability lies with the manufacturer or the human driver, who is still expected to supervise the system.

At the state level, additional rules come into play.

For example, California’s vehicle and traffic accident claims framework emphasizes pure comparative negligence, meaning that even if a driver is partly at fault, they may still recover damages proportionate to the other party’s liability.

The state also enforces strict statutes of limitations (typically two years for personal injury, shorter for claims against government agencies) and requires minimum liability insurance coverage.

These existing principles could significantly influence how AI-related accidents are litigated, especially when both human error and machine decisions contribute to a crash.
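The arithmetic behind pure comparative negligence is straightforward: a claimant's recovery is their total damages reduced by their own percentage of fault, with no cutoff however high that percentage runs. A minimal sketch of that rule, using purely illustrative figures not drawn from any actual case:

```python
def comparative_recovery(total_damages: float, claimant_fault_pct: float) -> float:
    """Pure comparative negligence: the award is reduced by the
    claimant's own share of fault, with no bar at any threshold."""
    if not 0 <= claimant_fault_pct <= 100:
        raise ValueError("fault percentage must be between 0 and 100")
    return total_damages * (100 - claimant_fault_pct) / 100

# Illustrative only: a claimant found 30% at fault for $1,000,000
# in damages still recovers $700,000 under the pure rule.
print(comparative_recovery(1_000_000, 30))  # 700000.0
```

Other states apply a "modified" version of the rule that bars recovery once the claimant's fault passes a threshold (commonly 50% or 51%), which is why the apportionment of blame between driver and manufacturer can be decisive.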

Europe: The Strict Liability Approach

In contrast, the European Union is leading the charge with comprehensive legislation.

The New Product Liability Directive (New PLD), which came into force in December 2024, is a landmark piece of law.

It explicitly includes software and AI within its definition of a "product," regardless of whether it's embedded in hardware or distributed independently.

This establishes a strict liability regime, meaning manufacturers and other parties in the supply chain can be held liable for defective AI systems even if they were not at fault.

The new directive also:

  • Presumes defectiveness: In cases where it's "excessively difficult" for a claimant to prove a defect, it will be presumed that the product was defective if they can show it likely contributed to the damage.
  • Expands liability: It extends liability beyond the point of sale, holding manufacturers responsible for defects caused by later software updates or the AI's continuous learning.
  • Mandates disclosure: It gives courts the power to compel companies to disclose information about the AI system, making it easier for victims to gather evidence.

The EU's Artificial Intelligence Act (AI Act), whose obligations for high-risk AI systems apply from August 2026, complements the New PLD by setting safety standards for such systems. A breach of these standards could be used as evidence of a product's defectiveness under the New PLD.


The Shifting Landscape of AI Liability

As AI becomes more integrated into our lives, the legal system will continue to grapple with applying traditional concepts of negligence and product liability to these new technologies.

The recent Florida verdict highlighted the difficulty of assigning blame when both human error and AI systems are involved.

In Europe, the Artificial Intelligence Act is setting new rules for high-risk AI.

Meanwhile, in the U.S., the NHTSA continues to issue voluntary guidance on automated driving systems while weighing stricter oversight.

The future of liability will be a complex blend of technological evidence, regulatory standards, and a new understanding of accountability in an automated world.

