The Wrong Story
The story most people tell about AI and work goes like this: machines are coming for human judgment. The task is to protect it.
That story is backwards.
In most compliance-heavy organizations, human judgment was not displaced by AI. It was displaced by decades of deliberate organizational design. AI arrived and automated the routinized work that had been concealing the absence. The judgment was already gone. The automation made it obvious.
That is the core argument of a new structural analysis we published this week. Not that AI threatens judgment. That AI reveals the places where judgment had already been architecturally removed.
How You Remove Judgment Without Saying So
The removal does not happen all at once. It happens through a set of design choices that look individually reasonable and are collectively devastating.
Deskilling is the oldest move. Braverman documented it in factory work: separate conception from execution. The same logic now runs through knowledge work. Clinical guidelines become decision trees. Eligibility rules become flowcharts. Protocols compress a professional's role to procedure execution. The worker still carries a title that implies judgment. The work no longer requires it.
The authority split is subtler and more corrosive. The person who knows the case cannot make the decision. The person who makes the decision does not know the case. Frontline workers hold contextual knowledge but lack authority. Decision-makers hold authority but receive information that has already been filtered, summarized, and stripped of the texture that made it useful.
The information travels up. The context does not.
Street-level constraint finishes the job. Even where formal discretion survives on paper, the conditions make it practically impossible to exercise. Chronic scarcity. Massive caseloads. Institutional pressure for consistency. A caseworker may technically have room to judge. They do not have the time, resources, or organizational cover to exercise it.
These three mechanisms work together. Deskilling removes the need for judgment. The authority split removes the capacity. Street-level constraint removes the opportunity. By the time AI arrives, the role is already execution.
What AI Actually Automates
This matters because it reframes the entire debate about automation and jobs.
AI does not automate judgment. It automates compliance-shaped work. Where a role had already been reduced to executing constrained procedures within narrow output formats, AI performs that function reliably. Often more reliably than the person who was doing it under impossible conditions.
The displacement is real. But the thing being displaced is not judgment. It is the performance of judgment. The structured execution that organizations had substituted for the real thing.
That distinction changes what the problem actually is. If AI were displacing genuine human judgment, the fix would be to slow down adoption. If AI is exposing the prior removal of judgment, the fix is to decide whether you actually want judgment back.
Most organizations have not made that decision. Many do not want to.
The Oversight Trap
The governance response to all of this has been human-in-the-loop oversight. Keep a person in the chain. Problem solved.
Except the loop reproduces the original problem.
Elish identified the pattern in 2019 and called it the moral crumple zone: a structural position where a human absorbs blame for outcomes they cannot meaningfully influence. The human is present. The human lacks the authority, information, or time for their presence to constitute governance.
Wagner's description is blunter. The human functions as a rubber stamp. Not because the person is lazy or negligent. Because the structure guarantees it. Time pressure, institutional incentives, information asymmetry between human and system — these are predictable consequences of the design, not aberrations.
Human-in-the-loop does not solve the authority split. It formalizes it. The human is in the loop. The loop is still a circle where no one holds both knowledge and power at the same time.
The Diagnostic, Not the Prescription
The paper does not prescribe solutions. That is deliberate.
It identifies the conditions that make restructuring necessary and names the choice organizations actually face. For any given role, the question is binary. Does this work genuinely require human judgment?
If yes, then reconstituting judgment means reversing the architectural choices: restoring both contextual knowledge and decision-making authority to the same position. That is not a technology problem. Better interfaces cannot compensate for absent authority. It is not a training problem. Training cannot overcome architecture. It is an authority problem. And it requires tolerance for slower throughput, inconsistent outputs, and decisions that cannot be controlled from above.
If no, then the governance language about human judgment was always nominal. The role was execution dressed in the vocabulary of professionalism. AI did not take away something real. It revealed that the thing had already been taken.
Why This Extends Beyond AI
The diagnosis is not specific to AI. Wherever expertise is formally retained while effective power is organized elsewhere, the same subordination pattern appears. Healthcare systems where clinicians follow protocols they did not write for patients they barely see. Legal systems where paralegals prepare cases for attorneys who review them for minutes. Educational systems where teachers deliver curricula designed by committees that never enter a classroom.
AI makes the condition newly visible because it automates the execution layer and forces the question: what was the human actually contributing? When the answer is "following the procedure," the automation surfaces a truth the organization had been avoiding.
The Bet
The bet is not that AI governance is impossible or that human oversight is always theater. Some oversight structures work. Some roles genuinely preserve judgment.
The bet is that you cannot fix a structural problem with a procedural patch. If judgment was removed by architecture, it can only be restored by architecture. And restoring it requires something most organizations resist: giving the person closest to the work the authority to actually decide.
That is expensive. It is slow. It produces inconsistency. It cannot be dashboarded or scaled without friction.
It is also the only thing that makes human-in-the-loop mean what people think it means.
The full paper behind this essay is available here.