March 25, 2026 · ai-liability / legal-doctrine / moral-crumple-zone / accountability / organizational-design

Scapegoat in the Loop: Culpability Without Capacity

The law needs someone to hold responsible. Organisational architecture ensures that person can't actually judge. A new paper maps where legal doctrine breaks down.

The Setup

Here is a thing that keeps happening.

A caseworker gets seven minutes per case. The screen shows the algorithm's recommendation and an approve button. She can't see the model's logic. She can override, technically -- but overrides mean supervisor review, paperwork, delays her caseload won't absorb. She approves. The determination is wrong. Someone loses housing. She's the decision-maker of record.

A developer ships AI-generated code. He checks that it works. He doesn't -- can't, really -- check whether it copies someone else's protected expression. It does. He's the publisher. The model provider's terms of service say the output is his problem.

A CISO signs off on her company's security posture. The stack runs AI threat detection, automated log analysis, vendor-managed endpoint protection. Millions of events a day. No single person can verify all of it. A breach slips through. She's the officer of record.

Three different jobs. Same situation. Each person holds the title but not the controls. And when things go wrong, the law points at them -- because they're the ones it can point at.

The Actual Problem

We published a paper this week called Culpability Without Capacity. The argument is simple.

Every liability doctrine -- negligence, due process, copyright, securities fraud -- assumes the same thing: somewhere in the chain, a human decided. That human had information and authority. They could have acted differently.

That assumption made sense when the person with the title also had the power. Organisational design broke that link a long time ago. AI just made it impossible to ignore.

The law can spot the easy case: no human involved at all. What it can't do is evaluate the hard case: a human was there, but the structure made real judgment impossible. It sees presence. It can't see capacity.

What Breaks

The paper walks through three domains -- copyright, public administration, cybersecurity -- and finds the same failure mode in each.

Copyright. Courts have decided machines can't be authors. Fine. But when AI output infringes, who pays? The model provider disclaims liability by contract. The user can't verify provenance. So liability lands on the human publisher. Not because they're at fault. Because they're available.

Government decisions. Wisconsin's Supreme Court said judges can use a proprietary risk-scoring tool for sentencing -- even though they can't see how it works. The rule is that the score can't be "the determinative factor." But nobody checks whether that's true. The standard either makes the tool pointless or gives judges cover to follow it uncritically. Pick one.

Cybersecurity. The SEC tried to hold a CISO personally liable for enterprise security failures in the SolarWinds case. It collapsed. Meanwhile, after CrowdStrike's faulty update crashed 8.5 million machines, customers found their legal recourse capped by contract. The vendor that caused the damage was insulated. The companies that trusted the vendor absorbed the loss.

Same pattern, every time. The entity that designed the system is protected. The person who relied on it gets the blame.

Four Conditions That Kill Judgment

Not every constrained role is a broken one. People work under rules and time pressure everywhere. That's institutional life, not a legal crisis.

The paper draws a line. Judgment isn't just constrained -- it's foreclosed -- when these conditions combine:

The system is opaque: you can't see how it reasons. You lack authority: overriding it costs more than going along. You lack capacity: there's no time or bandwidth to evaluate. And the output is unintelligible: you can't check it against anything independent.

Any one of these is manageable. Stack them together and the human's role becomes theatrical. The title says decision-maker. The architecture says rubber stamp.

Where the Law Cracks

Five things keep going wrong across all three domains.

Transparency doesn't fix it. Telling the caseworker how the algorithm works doesn't give her time to evaluate it. Knowing is not the same as being able to act on what you know.

"Not the determinative factor" is a fiction. Click approve, and you've satisfied the standard. Nobody asks whether you could have done otherwise.

Contracts protect the wrong party. The company that built the system caps its exposure. The company that bought the system eats the loss.

Criminal law needs intent. But the problem isn't someone acting badly. It's an architecture that prevents anyone from acting well.

And products liability -- the one doctrine designed for harm without individual fault -- doesn't apply to software in the U.S. The EU fixed this. America hasn't.

What's Left

No court has ruled that structurally foreclosed judgment lets the nominal decision-maker off the hook. No court has said that organisations which design judgment out of a role should absorb the liability themselves.

The paper doesn't propose fixes. It asks three questions the case law has raised but not answered.

Can the law test whether a human actually had the capacity to judge -- not just whether they were present? Can it assign fault to the organisational design that separated knowing from deciding? And can the one liability framework that doesn't need individual fault finally reach software?

Until then, the human in the loop stays where they are. Not the person who failed. The person the system could reach.


The full paper is available here.
