Bridget McCormack on AI Judges and the Structural Awareness Problem in Legal Algorithms

Former Michigan Chief Justice Bridget McCormack's recent discussion about AI systems deciding legal disputes, not merely assisting with research but actually adjudicating who's right and wrong, surfaces a fundamental problem in algorithmic governance: the gap between structural awareness and adaptive capability in high-stakes decision environments.

McCormack's framing is notable because it moves beyond the procedural automation we've seen in legal tech (document review, discovery analysis) to propose algorithmic systems making binding determinations. This isn't a hypothetical: online dispute resolution platforms already handle millions of small claims annually, and the pressure to expand algorithmic adjudication stems from legitimate capacity constraints in court systems. But the conversation reveals a category error about what legal judgment actually requires.

The Topology of Legal Reasoning

Legal decision-making operates at the intersection of rule application and contextual interpretation. A judge doesn't simply match facts to statutes (routine expertise) but recognizes when standard frameworks require adaptation to novel circumstances (adaptive expertise). This distinction, articulated by Hatano and Inagaki (1986), becomes critical when we consider algorithmic adjudication.

Current AI systems excel at pattern matching across large datasets. They can identify which prior cases most resemble a current dispute and predict likely outcomes based on historical distributions. What they cannot do is recognize when the structural features of a case require deviation from established patterns. This is the topology problem: understanding the shape of legal constraints differs fundamentally from knowing how to navigate those constraints in unprecedented situations (Kellogg et al., 2020).
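The pattern-matching mode described above can be made concrete with a deliberately minimal sketch: retrieve the most similar past cases by surface word overlap and "decide" by majority of their historical outcomes. Everything here, the case texts, outcomes, and similarity method, is invented for illustration and stands in for far larger systems; the point is that nothing in this pipeline can recognize when a dispute requires deviating from the pattern.

```python
# Illustrative sketch with invented data: "adjudication" as retrieval of
# similar precedents plus majority vote over their historical outcomes.
# No step in this pipeline can flag that a novel case needs different treatment.
from collections import Counter
import math

# Hypothetical past disputes paired with who prevailed.
past_cases = [
    ("tenant withheld rent citing unrepaired heating failure", "tenant"),
    ("tenant withheld rent after landlord ignored mold complaints", "tenant"),
    ("landlord sued for unpaid rent with no documented defects", "landlord"),
    ("landlord sued after tenant broke lease without notice", "landlord"),
]

def bow(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict(dispute, k=3):
    """Rank precedents by surface similarity; output the majority outcome."""
    ranked = sorted(past_cases,
                    key=lambda c: cosine(bow(dispute), bow(c[0])),
                    reverse=True)
    votes = Counter(outcome for _, outcome in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict("tenant withheld rent over a persistent heating defect"))
```

A dispute that merely resembles past tenant-favorable cases on the surface will be resolved the same way, regardless of whether a legally material distinction is present, which is exactly the limitation the paragraph above describes.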

McCormack's proposal implicitly assumes that legal judgment can be decomposed into recognizable patterns that, with sufficient training data, algorithmic systems can reproduce. But this presumes the competence boundary of "legal decision-making" remains stable. It doesn't. Each novel case potentially redefines what counts as relevant precedent, which factual distinctions matter, and how competing principles should be balanced.

The Awareness Without Structure Problem in Legal Algorithms

The discussion of AI judges also reveals the awareness-capability gap that characterizes algorithmic governance more broadly. Legal professionals are increasingly aware that algorithms influence case outcomes through risk assessment tools, sentencing recommendations, and resource allocation decisions. This awareness, however, does not translate into effective oversight or intervention capability (Schor et al., 2020).

Judges using algorithmic risk assessments can observe that certain defendants receive higher risk scores, but the structural logic generating those scores, particularly when derived from ensemble models or neural networks, remains opaque even to technically sophisticated users. This creates a coordination problem: the judge knows an algorithm has made a determination but lacks the structural schema to evaluate whether that determination reflects legitimate pattern recognition or spurious correlation.
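The opacity at issue can be illustrated with a toy ensemble, here a handful of hand-written decision stumps with invented features and weights, nothing resembling a real risk tool. The final score is a sum of many small, individually uninterpretable votes; the decision-maker sees only the aggregate number, with no rationale distinguishing legitimate pattern from spurious correlation.

```python
# Illustrative sketch (invented features and weights): an ensemble-style
# risk score is a sum of many small votes. The judge receives only the
# final number, not which feature interactions produced it.
def stump(feature, threshold, lo, hi):
    """A depth-1 'tree': contributes hi if the feature exceeds the
    threshold, otherwise lo."""
    return lambda x: hi if x[feature] > threshold else lo

# A toy ensemble over hypothetical defendant features. Real tools may
# combine hundreds of such components, compounding the opacity.
ensemble = [
    stump("prior_arrests", 2, 0.0, 0.3),
    stump("age", 25, 0.2, 0.0),
    stump("months_employed", 6, 0.1, 0.0),
    stump("prior_arrests", 0, 0.0, 0.2),
]

def risk_score(defendant):
    """The only output the decision-maker sees: one aggregate number."""
    return round(sum(tree(defendant) for tree in ensemble), 2)

print(risk_score({"prior_arrests": 3, "age": 22, "months_employed": 2}))
```

Even in this four-component toy, recovering *why* a defendant scored high requires inspecting every stump; with ensembles of hundreds of deeper trees or a neural network, no such inspection yields a humanly checkable rationale.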

If we move from algorithmic assistance to algorithmic adjudication, this problem intensifies. When algorithms recommend, humans retain decision authority and can reject recommendations based on contextual understanding. When algorithms decide, the question becomes: who has the competence to evaluate whether the algorithmic decision was structurally sound? Not "correct" in outcome (we often can't know), but sound in its application of legal reasoning principles.

The Governance Implication

McCormack's discussion matters because it surfaces what algorithmic adjudication actually requires: not better algorithms, but institutional structures for building adaptive expertise about algorithmic decision-making itself. This isn't a training problem that can be solved by teaching judges "how AI works." It's a structural problem about where legal competence resides when decision authority transfers to algorithmic systems.

The platformization of legal services follows the same pattern we observe in labor platforms. Access is equalized (anyone can use the dispute resolution platform), but outcomes remain highly variable because effective engagement requires tacit understanding of how algorithmic systems weight different types of evidence, frame disputes, and apply precedent (Rahman, 2021). Those who develop folk theories about "what the algorithm wants" may see better outcomes than those who don't, but neither group develops transferable structural understanding of legal reasoning itself.

The case for AI judges requires answering a question McCormack's discussion leaves open: if algorithmic systems make binding legal determinations, what competence transfers to human legal professionals, and what competence becomes permanently embedded in opaque technical systems? Until we address the structural awareness problem, expanding algorithmic adjudication simply redistributes decision-making opacity without improving legal coordination.