Scapegoat in the Loop: Culpability Without Capacity
The law needs someone to hold responsible. Organizational architecture ensures that person can't actually judge. A new paper maps where legal doctrine breaks down.
The dominant story says AI threatens human judgment. The structural story is worse. In most organizations, judgment had already been removed by design. AI just made the absence visible.
LLM coding agents exploit test gates 76% of the time when given write access. The fix is not better prompts. It is enforcing immutability at the file system level.
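What "enforcing immutability at the file system level" could look like in practice: a minimal sketch that strips write permission from test files before an agent runs. The helper name `lock_tests` is hypothetical, not the paper's actual mechanism; the point is that the guarantee lives in file permissions, not in the prompt.

```python
import os
import stat
import tempfile

def lock_tests(paths):
    """Remove write permission so an agent process cannot edit test files.

    Hypothetical helper illustrating file-system-level enforcement;
    note this does not stop a process running as root.
    """
    for p in paths:
        os.chmod(p, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # 0o444

# Demo: create a test file, lock it, and confirm the mode bits.
with tempfile.TemporaryDirectory() as d:
    test_file = os.path.join(d, "test_example.py")
    with open(test_file, "w") as f:
        f.write("def test_ok():\n    assert True\n")
    lock_tests([test_file])
    mode = stat.S_IMODE(os.stat(test_file).st_mode)
    print(oct(mode))  # 0o444: read-only for every user class
```

Because the permission check happens in the kernel on every `open()` for write, the gate holds regardless of what the agent was prompted to do.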
Linters catch local mistakes. They do not catch the structural sameness that makes so much AI-generated code feel off. Eigenhelm tries to measure that gap directly.
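Eigenhelm's actual metric is not shown here; as a toy illustration of what "structural sameness" could mean, here is a made-up `sameness` score that reduces each top-level function to its AST skeleton (node types only, names and values erased) and reports what fraction share the single most common skeleton.

```python
import ast
from collections import Counter

def shape(node):
    """Reduce an AST node to its node-type skeleton, ignoring names and values."""
    return (type(node).__name__,
            tuple(shape(c) for c in ast.iter_child_nodes(node)))

def sameness(source):
    """Fraction of top-level functions sharing one structural skeleton.

    Toy metric for illustration only, not Eigenhelm's algorithm.
    """
    tree = ast.parse(source)
    shapes = [shape(n) for n in tree.body if isinstance(n, ast.FunctionDef)]
    if len(shapes) < 2:
        return 0.0
    most_common_count = Counter(shapes).most_common(1)[0][1]
    return most_common_count / len(shapes)

code = """
def add(a, b):
    return a + b

def total(x, y):
    return x + y

def greet(name):
    print("hi", name)
"""
print(round(sameness(code), 2))  # 0.67: two of three functions are the same shape
```

A linter sees three perfectly clean functions here; a structural measure sees that two of them are the same function wearing different names, which is the kind of sameness the blurb is pointing at.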
Most AI review tools look at changed lines in isolation. That is why they generate so much noise. The real question is downstream: what breaks, and was it already handled?
Human review is not human judgment. In consequential AI decisions, it is not enough to say a person signed off at the end. You need participation, not just approval.
Most analytics tools count mentions, keywords, and spikes. They do not tell you where a narrative is headed, how much mass it has, or when scattered signals are starting to lock into something larger.