
When AI Gets It Wrong at Your Bank: Who Is Legally Responsible in the UK?

Reading Time: 4 minutes
Posted: 20th January 2026
Susan Stein


Artificial intelligence is no longer a future concept in UK finance. It is already deciding who gets a loan, how insurance claims are handled, and whether transactions are flagged as suspicious.

That is why a recent report from the Treasury Select Committee has landed with such force.

Published on 20 January 2026, the report warns that regulators are taking too cautious a “wait-and-see” approach to AI, despite the fact that more than three-quarters of UK financial services firms now use it in core operations.

MPs accept that AI can benefit consumers, but they are concerned that safeguards, accountability, and oversight are lagging behind real-world use.

The concern is not abstract. When AI systems make mistakes, and they do, the consequences are felt by ordinary customers first. The key legal question is a simple one: when an AI system causes harm, who is actually responsible?


Why This Matters to You

Most people do not think of themselves as being affected by AI. They picture chatbots or self-driving cars, not their mortgage application or insurance renewal. In reality, AI is already embedded in everyday financial decisions.

If you have applied for credit, changed insurers, queried a rejected claim, or had a transaction blocked, there is a good chance an automated system played a role. These systems can be fast and efficient.

The problem is that when something goes wrong, it is often unclear who or what is accountable.

This means the issue is not just about banks and regulators. It is about consumers understanding their rights when decisions are driven by algorithms rather than by humans.


How This Affects You in Practice

In practice, AI is used to assess risk, spot fraud, and process claims at scale. That brings speed, but it can also amplify errors.

If an AI system wrongly flags you as high-risk, you may be refused credit or offered worse terms. If an automated claims tool misreads data, a legitimate insurance claim could be delayed or denied.

If a system trained on biased data is used at scale, entire groups of customers can be affected without anyone realising at first.

The problem is that firms sometimes treat these outcomes as “system decisions” rather than choices they are responsible for. From a consumer’s point of view, that can feel like hitting a wall.

You receive a decision, but no clear explanation of how it was reached or how to challenge it.

This is exactly the gap MPs are worried about. The report calls for clearer rules on who within a firm is accountable when AI causes harm, rather than letting responsibility dissolve into technical complexity.


What You Can Do Now

If you believe an automated decision has affected you unfairly, there are practical steps you can take.

First, ask the firm for an explanation. You are entitled to understand the basis of a significant financial decision, even if AI was involved. Firms should be able to explain outcomes in plain language, not hide behind technical jargon.

Second, request a review. Many systems allow for human oversight, even if that is not made obvious at first. Ask whether a person can reassess the decision.

Third, keep records. Save correspondence, decision notices, and any reasons given. This matters if the issue escalates.

If the response is unsatisfactory, you can complain through the firm’s formal complaints process. After that, you may be able to take the matter to the Financial Ombudsman Service, depending on the product involved.

Most people do not realise it, but AI does not remove these routes. The presence of automation does not strip away consumer protections.


What the Law Says

UK financial law already assumes that firms remain responsible for the tools they use. AI is not treated as an independent decision-maker in legal terms.

Regulators such as the Financial Conduct Authority expect firms to take responsibility for outcomes, regardless of whether a decision was made by a person or a system. Consumer protection rules, fairness obligations, and complaint-handling requirements still apply.

The gap, as the Treasury Committee highlights, is clarity. Firms want more practical guidance on how existing rules apply to AI. Consumers want to know who is accountable when harm occurs.

The report also points to the role of the Bank of England in overseeing systemic risk. One recommendation is for AI-specific stress testing, designed to see how financial systems would cope with large-scale AI failures or market shocks driven by automated behaviour.

Another unresolved issue is oversight of third-party technology providers. Many banks rely on external AI and cloud services. While a legal framework exists to supervise “critical third parties”, no firms have yet been formally designated, leaving a gap in accountability if something goes wrong at infrastructure level.


Your Rights When Banks Use AI

AI is already shaping financial decisions that affect millions of people in the UK. When those systems work well, they can be invisible. When they fail, the impact can be immediate and personal.

The law does not treat AI as a free pass. Financial firms remain responsible for decisions made using automated systems, even if the technology is complex or outsourced.

The current debate is about making that responsibility clearer, more enforceable, and easier for consumers to navigate.

For now, the most important thing to remember is this: if an AI-driven decision affects your finances, you still have rights.

Asking questions, requesting reviews, and using existing complaints mechanisms remain your strongest tools, and the law is on your side even if the technology feels opaque.


Frequently Asked Questions

Can a bank blame an AI system for a wrong decision?
No. Under UK financial regulation, firms remain responsible for decisions made using AI. Automated systems do not shift legal accountability away from the firm.

Do I have the right to a human review of an AI decision?
In many cases, yes. If a decision has a significant impact — such as refusing credit or denying an insurance claim — you can ask for further explanation and request a review.

Does this apply to insurance as well as banking?
Yes. Insurers are among the largest users of AI in financial services, particularly for pricing and claims handling. Consumer protection rules apply across both sectors.

Is new AI law coming for financial services?
Regulators are expected to issue clearer guidance rather than entirely new laws. Existing rules already apply; the focus is on making responsibilities and oversight more explicit.


About the Author

Susan Stein
Susan Stein is a legal contributor at Lawyer Monthly, covering issues at the intersection of family law, consumer protection, employment rights, personal injury, immigration, and criminal defense. Since 2015, she has written extensively about how legal reforms and real-world cases shape everyday justice for individuals and families. Susan’s work focuses on making complex legal processes understandable, offering practical insights into rights, procedures, and emerging trends within U.S. and international law.