Lowe's AI Assistant and the Competence Assumption Problem in Enterprise Deployment
What Lowe's CEO Actually Said
Lowe's CEO Marvin Ellison recently stated that the company's AI assistant is resolving "major headaches" across nearly every operational level of the home improvement retailer. The framing is telling. Ellison did not describe the AI as augmenting expert workers. He described it as fixing problems. That distinction matters enormously for how we understand what is actually happening inside organizations that deploy conversational AI at scale, and it raises a question that the headline obscures: competent at what, exactly, and for whom?
The Inversion Classical Coordination Theory Misses
Standard accounts of technology adoption in organizational theory treat competence as a prior condition. You hire workers with relevant skills, you give them tools, and the tools extend what those workers can already do. The Lowe's deployment suggests a different structure entirely. When an AI assistant is described as fixing headaches "at nearly every level," the implication is that the tool is not extending worker competence; it is substituting for competence that was either absent or inconsistently distributed across the workforce. This is not a minor semantic difference. It represents a fundamental shift in what coordination actually requires from human participants.
The ALC framework I have been developing treats this as a key boundary condition. Classical coordination mechanisms, whether markets, hierarchies, or networks, assume ex-ante competence. Participants are assumed to arrive with schemas adequate to navigate the coordination structure. Platform-mediated environments invert this assumption: competence develops endogenously, through participation itself (Kellogg, Valentine, and Christin, 2020). The Lowe's case suggests that enterprise AI deployments may be creating a third structure, one where the system absorbs coordination work that was previously distributed across human workers with variable capability, and the organization never resolves the underlying competence question at all.
The Awareness-Capability Gap at the Organizational Level
Here is the specific problem this creates. Ellison's framing emphasizes outcomes: the AI is fixing things. But outcomes are not schemas. Workers who experience better outcomes because an AI assistant handled the difficult coordination steps do not thereby acquire the structural understanding needed to recognize when the AI is wrong, when its output requires correction, or when the underlying situation falls outside the system's reliable operating range. Hancock, Naaman, and Levy (2020) identified this as a core feature of AI-mediated communication: the interface obscures the generative process, which means users develop impressions of system capability rather than accurate structural understanding of what the system can and cannot do.
Gagrain, Naab, and Grub (2024) extend this concern specifically to algorithmic media contexts, distinguishing between folk theories, which are individual impressions about how a system works, and genuine algorithmic literacy. The organizational parallel is direct. If Lowe's workers develop folk theories about the AI assistant - "it handles the hard stuff" - without developing structural schemas about when and why it succeeds or fails, the organization has traded one vulnerability for another. Short-term headache reduction comes at the cost of long-term adaptive capacity.
The Routine Expertise Trap
Hatano and Inagaki (1986) drew a distinction between routine expertise, which is proceduralized performance in stable contexts, and adaptive expertise, which is principled reasoning that transfers to novel situations. An AI assistant that resolves common operational problems is, by definition, training workers in routine expertise. Workers learn which inputs produce which outputs from the system. They do not learn the structural principles that would allow them to navigate situations the system handles poorly.
This is not an argument against deploying AI assistants at the enterprise level. It is an argument about what organizational theory needs to account for when evaluating such deployments. The relevant metric is not whether the AI fixes current headaches. The relevant metric is what happens to organizational competence when the AI encounters a headache it cannot fix, or when the system changes and prior procedural knowledge no longer applies. Rahman's (2021) analysis of algorithmic control in platform labor markets is instructive here: systems that structure work also structure the cognitive habits of those who work within them, often in ways that reduce rather than increase workers' capacity to operate independently of the system.
What This Means for Organizational Research
The Lowe's announcement is not unusual. It is representative of a broad pattern in enterprise AI adoption where outcome metrics dominate the evaluation frame and competence development goes unmeasured. Organizational researchers need to be asking a different set of questions: not whether the AI produces better outputs, but whether human workers in these systems are developing transferable structural understanding or accumulating procedural dependencies that will not survive the next system change. The difference between those two trajectories is not visible in quarterly operational data. It becomes visible only when conditions shift and the system fails to generalize. At that point, the organization discovers whether it trained adaptive expertise or routine expertise, and the answer matters considerably more than any short-term headache count.
Roger Hunt