EA's AI Productivity Paradox: When Algorithmic Mandates Meet Organizational Reality
The Specific Event Worth Examining
Electronic Arts CEO Andrew Wilson recently defended the company's aggressive, company-wide AI push even as employees reported measurable productivity drops following its implementation. This is not a story about AI skepticism versus optimism. It is a story about a specific organizational failure mode: the systematic mismatch between the competencies an algorithm assumes its users possess and the competencies those users actually hold when the mandate is handed down.
The Assumption Buried in Every AI Mandate
When EA's new management announced the AI pivot as part of a cost-reduction strategy tied to the company's acquisition by private investors, they made an implicit assumption that rarely gets named: that deploying algorithmic tools is sufficient to realize their productivity benefits. This assumption is wrong, and the employee reports of productivity loss are exactly what theory would predict. The core problem is not that EA's workers are resistant or incompetent in any general sense. The problem is that platform coordination - in this case, AI-mediated workflow coordination - does not run on pre-existing competence; it generates competence endogenously, through participation, over time (Kellogg, Valentine, and Christin, 2020). A corporate mandate compresses that timeline to zero.
What EA's leadership appears to be observing, and explaining away rather than diagnosing, is the cost of treating competence as a given. The workers are not failing to use the tools. They are failing to produce good outputs with the tools, which is a categorically different problem. Kellogg et al. (2020) distinguish between algorithmic awareness - knowing that a system exists and has effects - and the structural literacy that lets a worker actually navigate that system effectively. EA's employees almost certainly have the awareness. The productivity data suggests they lack the literacy.
Why CEO Defense Is the Wrong Signal to Track
Wilson's public defense of the AI initiative is organizationally intelligible but analytically unhelpful. Executives defending major capital commitments is not news. What is worth tracking is the specific form his defense takes: a claim that the productivity drop is temporary and attributable to a transition period. This framing treats the problem as a learning curve, which implies that continued exposure will, by itself, produce the required competence. That is a folk theory of skill acquisition, and it consistently underperforms deliberate, structured schema-building (Hatano and Inagaki, 1986).
The distinction Hatano and Inagaki (1986) draw between routine expertise and adaptive expertise is directly relevant here. Routine expertise develops when workers repeat procedures in stable environments. Adaptive expertise develops when workers understand the principles governing a system well enough to respond to novel configurations of that system. AI tools produce novel configurations constantly. A worker who has only learned to follow a procedure for using an AI writing tool, for example, will fail when the tool changes its output patterns, its interface, or its embedded assumptions about the task. EA's situation is almost certainly producing routine learners in an environment that demands adaptive ones.
The Transfer Problem EA Has Not Acknowledged
There is a second layer to this problem that the EA narrative has not surfaced. Rahman (2021) demonstrates that algorithmic systems create invisible cages, structural constraints that shape worker behavior without those constraints being legible to the workers inside them. When EA rolls out AI tools across functions, it is not simply adding a new instrument to existing workflows. It is reorganizing the structural constraints of those workflows in ways that workers cannot see clearly. The productivity drop is, in part, the cost of navigating a constraint environment you cannot read.
This connects directly to the distinction I draw in my own work between topology and topography. Knowing that algorithmic constraints exist is topographic knowledge - you know the map has hills and valleys on it. Knowing the shape and logic of those constraints well enough to plan a route through them is topological knowledge. EA's workers appear to be operating with topographic awareness and topological illiteracy, which is precisely the condition that produces the awareness-capability gap documented by Gagrain, Naab, and Grub (2024).
What This Actually Predicts
If EA's AI initiative follows the pattern the literature describes, the productivity recovery Wilson is predicting will be partial and unevenly distributed. Workers who develop structural schemas of how the AI tools operate will recover and may exceed prior productivity baselines. Workers who develop only procedural familiarity will plateau below those baselines and remain fragile to tool updates. The distribution of outcomes will be power-law shaped, not normally distributed, because algorithmic systems amplify initial differences in structural literacy rather than averaging them out (Schor et al., 2020). EA's aggregate productivity numbers will look adequate, but that aggregate will conceal a workforce increasingly stratified by a competence that management did not deliberately build and cannot currently measure.
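The amplification mechanism can be made concrete with a toy model. This is my own illustration, not drawn from Schor et al. or any EA data: assume each period's productivity gain scales with a worker's current structural literacy. Small initial differences then compound multiplicatively, and the outcome distribution ends up heavy-tailed and right-skewed, whereas gains that are independent of current skill stay roughly symmetric around the mean. A minimal sketch:

```python
import random
import statistics

random.seed(42)

def simulate(n_workers=5_000, n_periods=50, amplify=True):
    """Toy model: each worker starts with a small random difference in
    structural literacy. With amplify=True, each period's gain is
    proportional to current skill (multiplicative compounding); with
    amplify=False, gains are additive noise independent of skill."""
    outcomes = []
    for _ in range(n_workers):
        skill = 1.0 + random.gauss(0, 0.05)  # small initial differences
        for _ in range(n_periods):
            if amplify:
                skill *= 1.0 + random.gauss(0.01, 0.05)  # gain scales with skill
            else:
                skill += random.gauss(0.01, 0.05)        # gain independent of skill
        outcomes.append(skill)
    return outcomes

def skew(xs):
    """Sample skewness: positive means a long right tail."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

amplified = simulate(amplify=True)
additive = simulate(amplify=False)

# Multiplicative compounding yields a right-skewed, heavy-tailed
# distribution; additive growth stays approximately normal.
print(f"amplified skew: {skew(amplified):.2f}")  # clearly positive
print(f"additive  skew: {skew(additive):.2f}")   # near zero
```

Strictly, this multiplicative process converges to a lognormal rather than a pure power law, but the qualitative point stands: when gains compound on existing skill, aggregate averages conceal a stretched right tail and a large mass of workers stuck below it.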
References
Gagrain, A., Naab, T., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Roger Hunt