
Can artificial intelligence legally decide who wins a boxing match? That’s the question shaking both regulators and fans as Jake Paul prepares for his November 2025 exhibition against Gervonta “Tank” Davis in Miami.
The event, streamed globally on Netflix, will feature a groundbreaking twist: an AI-powered boxing judge sitting ringside, scoring the fight in real time.
It’s being marketed as the future of sports judging, but to legal experts, it’s also a regulatory gray zone that could reshape how the law treats AI accountability, biometric privacy, and fairness in sport.
The Jake Paul vs. Gervonta Davis fight will feature two human judges and one AI system trained on thousands of past bouts. Promoters call it a leap toward “objective scoring.”
But under U.S. athletic law, only licensed human officials can adjudicate professional contests. That’s why this event is classified as an entertainment exhibition, allowing it to bypass conventional commission oversight.
Legal analysts note that this loophole, defined in Florida Statutes §548.002(6), effectively transforms the match into a sandbox for testing technology outside traditional sporting rules.
If the AI’s decision influences the outcome or a fighter’s future earnings, however, that could bring it under the scope of contract law and administrative review.
Once an algorithm impacts reputation, rankings, or sponsorship value, due-process rights come into play even in an unsanctioned bout.
Jake Paul isn’t just a fighter; through his company Most Valuable Promotions (MVP), he’s also one of the event’s promoters. That dual role puts him squarely inside a new AI liability debate.
Because the algorithm uses facial recognition and motion-tracking data, MVP effectively functions as a data controller under laws such as the Florida Biometric Information Privacy Act and the EU’s GDPR Article 9, given Netflix’s global reach.

Jake Paul during a high-intensity training session in Miami as he prepares for his November 2025 exhibition bout with Gervonta Davis, which will feature an AI-powered boxing judge for the first time in fight history. (Photo: @jakepaul Instagram)
If biometric data collected during the fight is reused to train future AI models, it could expose promoters to privacy and consent violations.
And if the AI appears biased toward Paul, even unintentionally, regulators could treat it as a conflict of interest or an unfair trade practice.
Legal commentators argue that this is where the real risk lies: not in the technology itself, but in who controls it, how it’s trained, and whether it can be audited for fairness.
In other words, celebrity accountability now extends to the algorithms that represent them.
The introduction of AI judging also collides with sports betting, broadcast rights, and data ownership.
If sportsbooks or streaming platforms rely on AI-driven analytics to inform wagers, the system may fall under gaming regulation requiring transparency and audit trails.
At the same time, fighter contracts often guarantee that matches are judged by “qualified officials.”
A malfunctioning algorithm could give rise to breach-of-contract or negligence claims, particularly if the decision influences bonuses or endorsements.
This fight also comes on the heels of Jake Paul’s public AI deepfake controversy, where fake videos using his likeness spread across social media.
That incident highlights another unresolved question: who owns an athlete’s digital likeness once it’s captured by machine vision?
Under expanding right-of-publicity laws in states like California, Illinois, and Florida, athletes can claim protection against the unauthorized commercial use of biometric data.
But as AI becomes embedded in live entertainment, those legal lines are increasingly hard to draw.
The Paul-Davis exhibition might seem like pure spectacle, but it’s testing principles that could ripple far beyond boxing.
Allowing an algorithm to influence a contest’s outcome forces regulators to consider how far AI decision-making can go in law, sport, and even justice.
If athletic commissions endorse AI-assisted judging, similar logic could soon apply to arbitration, workplace assessments, or financial audits, fields where fairness, bias, and transparency are already under scrutiny.
Whether the AI system succeeds or stumbles, the experiment will shape future debates around algorithmic accountability.
If it’s praised as accurate, it may push regulators to formalize hybrid human-AI judging frameworks.
If it fails or sparks disputes, it could become the first modern case study in machine liability under U.S. entertainment law.
Jake Paul has made a career out of bending boundaries; this time, he’s testing the boundaries of the law itself.
Can an AI legally judge a professional boxing match?
Not under current U.S. athletic regulations. Only licensed human officials can score professional bouts. However, exhibition fights, like Jake Paul’s 2025 match against Gervonta Davis, fall under entertainment law rather than athletic commission oversight, creating a legal gray area that allows AI participation.
Who is liable if the AI scoring system fails?
If an AI scoring system malfunctions or produces biased results, liability could fall on the developer, promoter, or broadcaster depending on contractual terms. Because the technology isn’t officially licensed, regulators may treat it as part of entertainment services, not sporting adjudication, leaving accountability largely untested in court.
Does AI judging raise privacy concerns?
Yes. AI systems that use facial recognition and motion tracking collect biometric data, which may be covered under privacy laws like the Florida Biometric Information Privacy Act and GDPR Article 9. If that data is reused for AI training or commercial purposes without consent, it could lead to privacy or right-of-publicity claims.
Could AI scoring trigger legal claims over betting or contracts?
Potentially. If AI-generated scores influence betting odds or contractual bonuses, it might trigger consumer protection or negligence claims. Some analysts suggest future sports contracts may need AI liability clauses to define who’s accountable when algorithms affect competitive outcomes.
Why does this fight matter for AI regulation?
It’s a test case for how regulators handle algorithmic decision-making in live competition. If successful, AI judging could lead to new hybrid frameworks that combine machine analytics with human oversight. If it fails or faces legal pushback, it may establish the first precedents for AI accountability in entertainment law.