The DoD-Anthropic-OpenAI Standoff and the Competence Assumption Problem in AI Governance
A Governance Dispute That Reveals a Structural Problem
A dispute reported this week between the Department of Defense, Anthropic, and OpenAI has surfaced something that organizational theorists should find genuinely interesting. The core tension is not primarily about contract terms or procurement rules. It is about who has the legitimate authority to set the conditions under which AI systems operate in high-stakes military contexts. Each party in this standoff is operating with a different assumption about where relevant competence resides: in the technical developers who built the systems, in the military operators who deploy them, or in the policy structures that nominally govern both.
The Competence Assumption as a Governance Variable
Classical organizational theory - whether in the Weberian bureaucratic tradition or in principal-agent frameworks - assumes that governance structures are designed after competence has been located. You know who knows what, and you build accountability structures around that distribution. What makes the DoD-Anthropic-OpenAI situation unusual is that none of the three parties can fully verify the competence claims of the other two. The DoD cannot independently audit whether Anthropic's safety claims about Claude are accurate. Anthropic cannot fully model how its system will perform under operational military conditions the system was never designed to anticipate. OpenAI occupies a similar epistemic position. This is not a negotiation between parties with clear informational asymmetries. It is a coordination problem under mutual opacity.
This maps directly onto what Kellogg, Valentine, and Christin (2020) identified as the core challenge of algorithmic governance at work: the parties responsible for oversight frequently lack the structural understanding necessary to exercise that oversight meaningfully. Awareness that a system exists and produces consequential outputs does not translate into the capacity to evaluate or redirect those outputs. The DoD may have formal authority over deployment decisions, but formal authority and operational competence are distinct variables - and the gap between them creates precisely the kind of accountability vacuum this dispute is exposing.
Folk Theories in High-Stakes Institutional Settings
One of the distinctions my dissertation research draws on is the difference between folk theories and structural schemas (Gagrain, Naab, and Grub, 2024). A folk theory is an individual or institutional impression of how a system works, built from surface-level observation rather than structural understanding. A structural schema, by contrast, captures the actual relational architecture of the system. In platform labor contexts, this distinction predicts differential outcomes: workers who hold accurate structural schemas outperform those operating on folk theories, even when both groups have identical access to the platform.
The same dynamic appears to be operating in the DoD governance dispute. Each party seems to be reasoning from a folk theory about the others. The DoD appears to operate from a procurement model in which the purchasing authority sets terms and the vendor complies - a reasonable folk theory for acquiring weapons systems, but one that may not translate to AI systems, where the developer retains ongoing interpretive authority over system behavior. Anthropic and OpenAI, for their part, appear to be reasoning from a commercial deployment model that does not map cleanly onto military command structures. Neither set of folk theories is adequate to the actual structural situation, and the standoff is, in part, the predictable result.
Why Procedural Fixes Will Not Resolve This
The most common organizational response to a governance dispute of this kind is procedural: establish clearer contracts, create oversight committees, define escalation pathways. These responses treat the problem as one of missing documentation rather than missing schemas. Hatano and Inagaki (1986) drew a foundational distinction between routine expertise, which is procedurally encoded and works well in stable environments, and adaptive expertise, which is principle-based and works in novel ones. Military AI deployment is not a stable environment. Writing better contracts addresses the topography of the current dispute - the specific terms in contention - without addressing the topology of the underlying coordination problem: no party has a verified structural understanding of how authority, competence, and accountability should be distributed when the system itself can generate outputs none of the parties fully anticipated.
Rahman (2021) described the "invisible cage" problem in platform work as a situation where workers are subject to algorithmic authority they cannot see, contest, or redirect. The DoD dispute suggests this is not a problem unique to gig workers. Large institutions can find themselves structurally subordinate to systems they nominally control. The governance question worth watching is not which party wins the current negotiation, but whether any party will develop the structural schema necessary to make future negotiations meaningful. That is a harder problem than a contract dispute, and it will not be resolved in Barcelona or in a Pentagon procurement review.
References
Gagrain, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt