
The world is racing to build bigger, faster, more capable artificial intelligence. But behind the glossy talk of breakthroughs sits an uncomfortable truth most people never see: the enormous computing power needed to train cutting-edge systems has outgrown what many companies can handle on their own.
Into this gap comes a new layer of players—Neocloud providers—offering rentable, GPU-heavy infrastructure to anyone who needs it.
Providers such as CoreWeave, Nscale, and Nebius now sit quietly between the end user and the hyperscalers powering much of the world’s digital life.
It sounds like a straightforward upgrade to the cloud model. In reality, this new middle layer may end up reshaping one of the most sensitive questions in the AI era: who is legally responsible when something goes wrong?
And as more businesses rely on these providers, sometimes without fully understanding what they’re signing up for, questions about accountability, liability, and transparency are becoming impossible to ignore.
Training a modern AI model demands staggering amounts of computing power. GPUs must be linked, cooled, and kept running at extreme intensity for days or weeks at a time.
Only a handful of companies in the world can build and operate these facilities at scale. Most organisations can’t.
So they turn to Neoclouds, specialised providers that rent out access to high-performance GPU clusters the way traditional clouds rent out storage or virtual machines.
This shift creates a supply chain that barely existed a few years ago. Instead of one provider handling everything, you might now have:
A business → a Neocloud provider → a hyperscaler → a hardware manufacturer → a global web of data centres.
Each step adds complexity. Each step adds contractual fine print. And each step creates uncertainty about which party is accountable if the AI system produces biased outputs, leaks private data, or collapses during a critical moment.
It’s a new world where compute is outsourced, responsibility is blurred, and legal clarity hasn’t caught up yet.
Most people assume that if an AI system behaves unpredictably, the blame falls squarely on the company using it. After all, they built the model. They launched the service. They put it into the hands of customers.
But the reality is messier.
If the training environment runs on rented GPU clusters, everything from training performance to system failure may depend on someone else’s hardware.
Many Neoclouds lease space from hyperscalers, who themselves rely on other vendors for networking, cooling, and power.
A GPU shortage or supply issue upstream could break a system before a single line of code is written.
From a legal perspective, this creates a chain of interdependence, where it’s often unclear which layer contributed to the failure.
Courts have wrestled with similar issues in other industries, such as product liability cases involving parts from multiple manufacturers. But AI adds a twist: algorithmic outcomes are not always predictable, and the infrastructure behind them is barely understood by those who use it.
The key to understanding this new landscape lies in the agreements businesses sign with their Neocloud providers. These contracts often determine:
what the provider is responsible for
what the customer must manage
where the provider’s liability ends
how disputes can be resolved
But unlike long-standing cloud service terms, Neocloud contracts are still evolving. Many are written for speed rather than clarity, shaped by demand for GPUs rather than demand for transparency.
Clauses about uptime, performance, security, and indemnity vary widely.
Most of these agreements shift risk downward onto the businesses using the service.
This doesn’t mean providers are doing anything wrong; it simply reflects a young market moving faster than regulators or courts.
The result is a series of arrangements where outages, data errors, or training anomalies may not clearly fall on any single party.
When something goes wrong with a model trained on borrowed compute, customers may find themselves stuck between multiple contracts pointing in different directions—none of which neatly answer the question of who is responsible.
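To see how this plays out in practice, consider a deliberately simplified sketch of stacked liability caps. The parties, cap amounts, and incident cost below are invented for illustration only, not drawn from any real contract; the point is simply that when each upstream layer caps its exposure, most of a large loss can land by default on the business at the bottom of the chain.

```python
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    liability_cap: float  # maximum exposure under this party's contract (hypothetical)

# A simplified version of the chain described earlier, with invented caps.
chain = [
    Party("Hardware manufacturer", liability_cap=50_000),
    Party("Hyperscaler", liability_cap=100_000),
    Party("Neocloud provider", liability_cap=250_000),
]

def residual_loss(incident_cost: float, parties: list[Party]) -> float:
    """Return the share of an incident's cost that no upstream party is on the hook for."""
    covered = sum(min(p.liability_cap, incident_cost) for p in parties)
    return max(incident_cost - covered, 0.0)

# A hypothetical 2,000,000 training failure: the customer absorbs 1,600,000 of it.
print(residual_loss(2_000_000, chain))
```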
The legal complexity deepens when you consider the physical world behind digital AI. Neoclouds often rent capacity in data centres across multiple regions.
These facilities must secure power, cooling, land-use approval, and grid access. Local regulations vary dramatically.
A model trained in one jurisdiction might be operating under rules from another.
A training run interrupted by a power delay might span three different suppliers. A GPU-heavy data centre might face environmental restrictions because of its energy use.
None of these issues are hypothetical; regulators from the EU to state-level US energy commissions have been increasingly vocal about data centre expansion pressures.
That means AI supply chains aren’t just digital—they’re also subject to the rules of physical infrastructure, making accountability even more tangled.
One of the less obvious parts of this conversation is the effect infrastructure has on the behaviour of an AI system itself.
Training instability, partial failures, latency problems, or hardware inconsistencies can change how a model learns.
If an AI mistakenly generates harmful or discriminatory outputs, could an infrastructure glitch be partly to blame?
It’s not hard to imagine future disputes where one party claims:
“Our model only failed because the GPU cluster malfunctioned.”
“The provider didn’t allocate the hardware capacity we paid for.”
“A training run was corrupted by an infrastructure fault, not our code.”
Courts haven’t had many of these cases yet, but the ingredients are in place.
Technically, legally, and commercially, an AI system trained on unstable infrastructure may produce unreliable results—and figuring out who bears responsibility isn’t straightforward.
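Whatever courts eventually decide, teams training on rented compute can prepare by keeping their own audit trail that ties each training run to the infrastructure it ran on. The sketch below is a minimal, hypothetical illustration in Python using only the standard library; the record fields (hostname, timestamp, checkpoint checksum) are assumptions about what a later dispute might turn on, not any provider’s required format.

```python
import hashlib
import json
import socket
import time
from pathlib import Path

def checkpoint_fingerprint(path: Path) -> str:
    """Hash a checkpoint file so later claims about corruption can be checked."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_run_evidence(run_id: str, checkpoint: Path, log_dir: Path) -> Path:
    """Write a small provenance record for one training run.

    Captures where the run happened, when, and a checksum of the resulting
    checkpoint: the kind of evidence that could help distinguish an
    infrastructure fault from a code defect in a later dispute.
    """
    record = {
        "run_id": run_id,
        "hostname": socket.gethostname(),  # which rented node executed the run
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "checkpoint": str(checkpoint),
        "checkpoint_sha256": checkpoint_fingerprint(checkpoint),
    }
    log_dir.mkdir(parents=True, exist_ok=True)
    out = log_dir / f"{run_id}.provenance.json"
    out.write_text(json.dumps(record, indent=2))
    return out

# Example: record_run_evidence("run-001", Path("model.ckpt"), Path("audit"))
```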
Governments and regulators are beginning to recognise that AI supply chains are more like ecosystems than isolated services. Data protection authorities care about where processing happens.
Competition regulators care about who controls access to scarce GPUs. Consumer protection bodies care about outcomes when automated systems harm people.
As these conversations evolve, a shared responsibility model may emerge—similar to how cybersecurity frameworks gradually came to define the roles of vendors, operators, and end users.
But for now, Neocloud contracts are the closest thing to a rulebook, and they vary from detailed and comprehensive to vague and unpredictable.
Businesses using these services may unknowingly step into arrangements where liability is fragmented across multiple players who never directly interact.
Even if you’re not training your own models, you interact with AI systems every day—search engines, authentication tools, recommendation feeds, automated forms.
All of these depend on layers of infrastructure you never see. The rise of Neoclouds means that the systems shaping your online life increasingly rely on chains of providers whose responsibilities aren’t always clear.
When something goes wrong—whether it’s a privacy breach, a misfiring algorithm, or an outage that affects essential services—the question of who is accountable becomes more complicated than most people realise.
Understanding this hidden world matters because AI doesn’t exist in isolation. It runs on physical machines, inside real buildings, powered by grids that can falter, governed by contracts that shift responsibility from one player to another.
And as AI becomes more central to everyday life, these behind-the-scenes legal arrangements will determine how disputes are resolved, who bears the consequences of failures, and how safe and reliable the systems around us truly are.
Neoclouds are no longer niche. They’ve become a fundamental part of the AI ecosystem, providing the GPU power that enables everything from chatbots to self-driving research. But with their rise comes a reshaping of accountability—one that touches infrastructure, law, and the daily experience of anyone who uses AI-powered services.
We may be heading toward a world where AI liability isn’t tied to a single company but distributed across a chain of providers, each responsible for one piece of a very complicated puzzle.
And understanding that puzzle is becoming essential, because the future of AI won’t just be about what models can do; it will also depend on the contracts, regulations, and shared responsibilities that keep the entire system running.
1. What exactly is a Neocloud provider?
A Neocloud provider is a specialised company that rents out high-performance GPU infrastructure for training and running AI models. Instead of hosting their own hardware, businesses can tap into these GPU clusters on demand, creating a new “middle layer” between AI developers and major hyperscale cloud platforms.
2. Why do Neoclouds complicate accountability in the AI supply chain?
Because they sit between the developer and the underlying cloud infrastructure, responsibility becomes blurred. When models fail, behave unpredictably, or suffer outages, it’s not always clear whether the fault lies with the developer, the Neocloud provider, the hyperscaler, or the hardware itself.
3. Who is responsible if an AI model trained on a Neocloud platform goes wrong?
There’s no universal answer. Responsibility may depend on the contract signed with the Neocloud provider, the reliability of the underlying hardware, and how the training environment was set up. In many cases, liability is split across several parties—making disputes more complex.
4. Can infrastructure issues actually change how an AI model behaves?
Yes. GPU instability, latency, inconsistent allocation, or interrupted training runs can all affect how a model learns. If a glitch occurs during training, the resulting AI may behave unpredictably—even if the developer’s code was correct.
5. Do Neocloud contracts protect businesses from AI-related failures?
Contracts vary widely. Some offer strong guarantees, while others shift most of the risk onto the customer. Many agreements limit the provider’s liability, leaving businesses exposed if something goes wrong during training or deployment.
6. How do energy and data centre regulations affect AI accountability?
Neoclouds rely on physical infrastructure—power grids, cooling systems, and local planning rules. If a model’s training is disrupted by energy constraints or regulatory issues, determining responsibility becomes even more complex, especially across borders with differing rules.
7. Are regulators paying attention to this new AI supply chain?
Yes. Data protection authorities are looking at where processing happens, competition regulators are watching GPU access, and consumer protection bodies are examining automated decision-making. A shared responsibility model may emerge as oversight increases.
8. What does all of this mean for everyday AI users?
Even simple tools—search engines, recommendation systems, authentication systems—depend on the hidden supply chains behind Neocloud providers. When failures occur, understanding who is accountable becomes essential to improving reliability and protecting users.





