AI Agent Drift and the Governance Gap: What Wayfound's Warning Actually Reveals
The Specific Problem Being Named
A recent Forbes piece by Wayfound CEO Dr. Tatyana Mamut puts a precise label on something that organizational theorists have been circling around without adequate vocabulary: AI agent drift. The argument is not that AI agents fail catastrophically. The argument is that they degrade incrementally, accumulating small deviations from intended behavior until the aggregate error is significant and the origin is untraceable. Mamut frames this as a boardroom governance problem, not a technical one. That framing is worth taking seriously, because it shifts the analytical burden from engineering to organizational theory.
Drift Is a Coordination Problem, Not a Safety Problem
The instinct in most corporate AI governance discussions is to treat agent drift as a risk management issue, something to be addressed through guardrails, audit logs, and compliance checkboxes. This instinct misdiagnoses the problem. Drift is fundamentally a coordination failure: the agent is operating according to its trained objectives, but those objectives have progressively decoupled from the organization's actual intent. The gap is not between what the agent does and what it was programmed to do. The gap is between what it was programmed to do and what the organization now needs it to do, and nobody noticed the divergence accumulating.
This maps directly onto what Kellogg, Valentine, and Christin (2020) describe as the opacity problem in algorithmic work systems. Workers and managers develop what the literature calls "folk theories": informal, often inaccurate mental models of how an algorithm behaves, rather than accurate structural understanding of the system's logic. The same dynamic applies to AI agents deployed at the organizational level. Executives approve deployment based on a static model of agent behavior. The agent's actual behavioral envelope shifts over time through retraining cycles, environmental changes, and interaction effects. The folk theory held at the boardroom level does not update.
Why Procedural Governance Frameworks Will Fail Here
The governance frameworks most organizations currently deploy against this problem are procedural: checklists, escalation thresholds, periodic audits. These frameworks assume that the relevant failure mode is a discrete, detectable event. But drift, by definition, is continuous and sub-threshold at each individual step. Hatano and Inagaki (1986) draw a useful distinction between routine expertise, which is optimized for known problem types, and adaptive expertise, which can recognize when a problem has changed its underlying structure. Procedural governance is routine expertise applied to a problem that requires adaptive expertise. It will systematically miss the failure mode it is supposed to catch.
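To make that asymmetry concrete, here is a minimal simulation sketch of the argument, not any organization's actual monitoring setup: the drift increments, the 2% per-audit threshold, and the 10% cumulative threshold are all assumed numbers chosen for illustration. A periodic audit that only compares each period to the previous one never fires, while a comparison against the original deployment baseline eventually does.

```python
# Illustrative simulation only: per-step audit thresholds miss drift that is
# small at every step but large in aggregate. All numbers and thresholds here
# are assumptions for the sake of the example.
import random

random.seed(7)

baseline = 1.0                 # agent behavior metric at deployment (arbitrary units)
per_step_threshold = 0.02      # audit flags any single-period change above 2%
cumulative_threshold = 0.10    # flag if behavior has moved >10% from deployment baseline

value = baseline
prev = baseline
for period in range(1, 25):    # e.g., 24 periodic audit cycles
    value += random.uniform(0.002, 0.015)          # small, sub-threshold drift each period
    step_change = abs(value - prev) / prev
    total_drift = abs(value - baseline) / baseline

    procedural_flag = step_change > per_step_threshold     # what a step-to-step audit checks
    structural_flag = total_drift > cumulative_threshold   # comparison against original intent

    if procedural_flag or structural_flag:
        print(f"period {period:2d}: step {step_change:.1%}, total {total_drift:.1%}, "
              f"procedural={procedural_flag}, structural={structural_flag}")
    prev = value
```

The specific detector is not the point; the point is that any check defined only over single-step changes is structurally blind to a failure mode that is continuous and sub-threshold at every step.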
The practical implication is that governing AI agents requires personnel who hold accurate structural schemas about how these systems behave across contexts, not just procedural knowledge about how to run an audit. Gentner's (1983) structure-mapping theory is relevant here: transfer of diagnostic skill across different agent configurations depends on the degree to which the observer has abstracted the relational structure of the system rather than memorized surface features. An auditor who can only recognize drift in the specific form they were trained to identify is not equipped for novel drift patterns, which are the ones that actually accumulate undetected.
The Accidental Executioner Problem Compounds This
A separate but related piece from Business Insider this week describes employees who are building internal AI tools and realizing, mid-project, that those tools may be designed to eliminate their colleagues' roles. This phenomenon and the agent drift problem share a common organizational root: responsibility for AI behavior is diffused across enough roles and timelines that no single actor holds a complete picture of what the system is doing or whom it is affecting. Rahman (2021) describes a structurally similar dynamic in platform labor contexts, where algorithmic control is rendered invisible precisely because it is distributed across technical, managerial, and contractual layers. The same invisibility applies here, except the consequences land inside the firm rather than on external contractors.
What Organizational Theory Predicts
Taken together, these two news items point toward a specific organizational prediction: firms that address AI governance through procedural layering alone will experience systematic under-detection of agent drift and diffused accountability for AI-mediated harm. The variance in outcomes will not track investment in governance infrastructure. It will track whether key personnel hold adaptive, schema-level understanding of how these systems behave structurally. That is a harder organizational capability to build, but it is the correct target. Guardrails are necessary. They are not sufficient.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt