The Liability Layer: Humans’ Roles in the Age of AI
- Vipul Divyanshu

- Nov 5, 2025
- 6 min read

Humans in the loop: the new liability layer when AI does everything.
It was a sweltering night when Arthur’s phone jolted him awake, its sharp buzz cutting through the stillness of his bedroom. Bleary-eyed, he fumbled for the device, the screen’s glare revealing a dire alert: “Critical Service Infrastructure Down — U.S. Region.” His pulse quickened. Flights were grounded, operations stalled, and the reputation of his company, a tech firm serving over two dozen Fortune 100 clients, teetered on the edge. As one of just five humans tasked with approving critical changes, Arthur’s role was no small matter. The on-call AI agent had already pinpointed the culprit: a faulty code commit from the previous night, generated by another AI aiming to optimize memory performance. Now it was up to Arthur to greenlight the rollback and fix, a decision that could make or break the night.
Arthur’s sweaty predicament is more than a personal anecdote; it’s a window into the future of work in the age of artificial intelligence. As AI’s capabilities eclipse human performance across countless tasks, from coding to diagnostics, the traditional structure of labor is giving way to a first-principles view of hierarchy built on two layers: the action layer and the liability layer. In this view of organizational hierarchy, the “liability/accountability layer” is a select group tasked with overseeing, and bearing responsibility for, the people reporting to them and, in the near future, the AI agents they supervise. The shift to AI promises efficiency and innovation but also heralds profound economic and social changes: fewer jobs, higher stakes, and a widening gap between the overseers and the obsolete.
A First Principle View of Work Hierarchies
Since the dawn of civilization, organizational structure has been a pyramid. At its base, individual contributors (iron smelters forging steel, farmers tilling fields, analysts crunching numbers) deployed their skills to create value. Above them, a smaller cadre of managers orchestrated the effort: managing risk, setting strategy, ensuring quality, and shouldering accountability up the chain to stakeholders. This hierarchy, refined over centuries, became the backbone of industrial economies. The Industrial Revolution amplified it, shifting labor from artisanal crafts to machine operation, yet the structure endured: workers produced, managers oversaw, their managers oversaw them, and so on. Each layer acted as a liability layer, answering to the layer above while acting as a wrapper around, and taking responsibility for, the layer below. The structure reflects the pursuit of optimization and higher value capture at the top. Fundamentally, people who bear more opportunity cost, risk, and liability (in the form of capital, debt, and the like) are compensated commensurately.
Today, the U.S. workforce reflects this legacy. According to the Bureau of Labor Statistics, approximately 90 million Americans are individual contributors, while 16.4 million serve as managers in some capacity. The division is clear: contributors generate output, while managers oversee that output and bear the liability in one form or another. But AI is upending this balance: it can automate not just manual tasks but intellectual ones, forcing us to rethink who will survive its rise and who will answer for its actions.
AI’s Disruptive Surge
AI’s rise is no longer speculative — it’s here, and it’s accelerating. A 2023 McKinsey report estimates that up to 30% of current jobs could be automated by 2030, with AI driving much of this shift as the bottom layer is replaced by AI agents. In finance, algorithms execute trades in milliseconds; in healthcare, AI diagnostics rival seasoned radiologists; in media, tools like GPT models churn out articles and scripts, flooding the web with AI slop. A 2024 Deloitte survey found that 63% of businesses now use AI in some capacity, up from 37% just five years prior. The technology isn’t merely augmenting human effort; it’s replacing it.
Yet even as AI assumes the grunt work, a critical need persists: accountability. Machines may crunch numbers or draft code, but they don’t face shareholders, regulators, or angry customers. That burden falls to humans, redefining their role from doers to overseers. Enter the new age of the liability layer, a slimmed-down managerial class tasked with approving AI outputs and owning the consequences.
What Is the Liability Layer?
Viewed from first principles, this layer has existed since the dawn of modern capitalism, though it was never explicitly named. In the age of AI, it becomes more visible and significant as the human bridge between AI’s prowess and real-world stakes. The liability layer of the future will be a group, often small, highly skilled, and well compensated, responsible for ensuring AI-generated work meets standards, complies with laws, and delivers value. In Arthur’s case, he and his four colleagues form this layer at their firm. When an AI misstep crashes servers, they, not the algorithm, are the humans in the loop, sweating it out and answerable to the layers above.

This concept isn’t entirely new. Managers have long been the liability layer for human teams: principals vouching for analysts, CEOs vouching for their teams to the board. What’s changing is the scale and source of the work. Whereas a manager once oversaw dozens of people, they might now supervise hundreds of AI agents, each producing output at superhuman speed. This new generation of managers will also differ from the last: in addition to strong soft skills, they must be sufficiently expert in the fields in which they oversee AI. The legacy structure of liability doesn’t vanish; it concentrates.
In the Real World
Consider healthcare. A 2023 Stanford study found that AI systems can diagnose certain conditions, like pneumonia from chest X-rays, with accuracy matching or exceeding that of human experts. Imagine a senior doctor overseeing 500 AI-generated diagnoses daily, up from 50 manual reviews. The clinic’s capacity soars, as does the doctor’s paycheck, but the number of doctors needed to serve existing demand shrinks. The doctor now operates mainly in the liability layer, bearing the legal and ethical weight of every call.
Autonomous vehicles offer another lens. While liability regulation is still evolving, autonomous cab providers already assign remote operators who assume liability for fleets of self-driving cars, overseeing their behavior in real time. Tesla’s Full Self-Driving mode can navigate highways, but when crashes occur, as in the fatal 2023 incident in California, debates over legal liability erupt. Is the driver, the manufacturer, or the AI coder at fault? For now, human “operators” or the companies behind them shoulder the blame, forming a nascent liability layer while the technology matures.
The Economic Fallout
This transformation is a double-edged sword. On one edge is efficiency: businesses slash labor costs while boosting output. A 2024 PwC study predicts AI could add $15.7 trillion to the global economy by 2030, with 70% of that from productivity gains. On the other edge is displacement: Oxford Economics estimates that 20 million manufacturing jobs could vanish by decade’s end, replaced by AI and robotics.
The liability layer itself shrinks the workforce. Whereas 90 million U.S. contributors once toiled, AI might leave only a fraction of overseers — say, 10 million — earning more than ever before. A 2023 World Inequality Database update shows the top 1% of U.S. earners now claim 20% of income, up from 10% in 1980. AI could turbocharge this trend.
Take Arthur’s firm: ten employees, five of them in the liability layer, serve clients that once required hundreds or even thousands of workers. Their salaries reflect their scarcity and responsibility, averaging $500,000 annually, while AI handles the grunt work as a utility for pennies. The result? Higher profits, fewer jobs, and a stark divide.
A Glimpse Ahead
What happens when AI grows so reliable that the liability layer itself thins? In a distant but plausible future, liability might shift from humans to AI providers or to the systems themselves. Legal frameworks could evolve (imagine Alphabet or OpenAI insuring AI outputs, much as carmakers warrant vehicles), or “AI liability trusts” could emerge, backed by regulators and tech giants.
In medicine, if hospitals assume liability for AI diagnostics — bolstered by decades of data and trust — senior doctors might dwindle to troubleshooters for edge cases, then vanish entirely. In business, AI CEOs could report to boards, with humans relegated to symbolic roles. This post-work vision hinges on governance leaps and societal faith in machines, but it’s not science fiction; it’s a possible trajectory.
The Final Picture
Arthur’s sleepless night encapsulates a seismic shift. The liability layer isn’t just a concept; it’s the place where jobs will reside in our AI-driven future. It promises efficiency and riches for some but at a cost: mass displacement, concentrated power, and ethical quandaries. How do we ensure AI’s gains don’t widen inequality to a breaking point? Who decides when machines are trustworthy enough to bear their own liability? And what becomes of human purpose when the work of creation itself fades?
For now, the Arthurs of the world hold the line: sweaty, stressed, and indispensable. But as AI marches on, their layer may be the last bastion of human control in a machine-made economy. The trillion-dollar question isn’t just what people will do; it’s whether we are marching towards the age of AI solving problems for people or solving the problem of people.

Vipul Divyanshu (MBA ‘26) was born and raised in India. He holds a bachelor’s degree in engineering, has been a serial tech founder with extensive experience in applied AI, and has led AI at a Sequoia-backed venture. Prior to HBS, he was co-founder and CTO of Streak AI, an Indian AI fintech platform with over 4M users and $21B in transactions. He enjoys energy drinks, squash, and board games.
