AI Is Making Workers Compete, But Not in the Way Anyone Is Measuring

The Specific Event Worth Taking Seriously

A recent piece circulating in business media carries a headline that is blunt enough to be useful: "AI isn't replacing workers, it's making them compete." The argument is that the real labor market disruption is not mass displacement but rather a structural narrowing of who counts as a "valuable" human contributor within AI-integrated workflows. Workers are scrambling, the piece suggests, not because their jobs have disappeared but because the threshold for being indispensable has shifted underneath them, often without clear notice of where the new threshold sits.

This framing is more interesting than it first appears. It is not a story about automation. It is a story about coordination failure, specifically the failure to coordinate around what competence now means inside algorithmically mediated work environments.

Why "Competing" Is the Wrong Mental Model

The competition framing implies that workers know what they are competing on. That assumption deserves scrutiny. Algorithmic literacy research documents a persistent awareness-capability gap: workers develop general awareness that algorithms shape their outcomes, but this awareness does not translate into improved performance (Kellogg, Valentine, and Christin, 2020). Knowing that an AI system is evaluating your output, or filtering your visibility to a manager, or ranking your productivity against colleagues is not the same as knowing how to respond to that system effectively.

This is the distinction Hatano and Inagaki (1986) draw between routine and adaptive expertise. Routine expertise, competence built from procedural repetition, functions adequately when the environment is stable. Adaptive expertise, competence built from an understanding of structural principles, is required when the rules of the environment shift. The current wave of AI integration into workflows is precisely such a shift. Workers trained procedurally on yesterday's task structure are not equipped to adapt, regardless of how hard they work or how much they fear being "phased out."
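
To make the distinction concrete, consider a toy simulation. This is a minimal sketch, not drawn from Hatano and Inagaki or any other cited work: the task, the numbers, and every function name are invented for illustration. One simulated worker memorizes input-action pairs (routine expertise); the other represents the task's underlying structure as a single cutoff parameter (adaptive expertise). Then the rule shifts without announcement.

```python
import random

random.seed(0)

def scoring_rule(x, threshold):
    # The environment's hidden structure: action "A" pays off below the
    # cutoff, "B" at or above it. The cutoff is what shifts.
    return "A" if x < threshold else "B"

def observe(n, threshold):
    # n distinct inputs in 0..99, each labeled under the current rule.
    return [(x, scoring_rule(x, threshold)) for x in random.sample(range(100), n)]

def fit_cutoff(examples):
    # Structural learning: the worker knows the rule is a single cutoff
    # and estimates only that parameter from the available data.
    a_inputs = [x for x, action in examples if action == "A"]
    return max(a_inputs) + 1 if a_inputs else 0

def accuracy(decide, threshold):
    return sum(decide(x) == scoring_rule(x, threshold) for x in range(100)) / 100

# Phase 1: for simplicity, both workers have seen every input once under
# the old rule (cutoff = 50).
history = observe(100, 50)
memory = dict(history)        # routine worker: memorized input->action pairs
cutoff = fit_cutoff(history)  # adaptive worker: inferred structure

def routine(x):
    return memory.get(x, "A")  # unseen input: fall back on a guess

def adaptive(x):
    return "A" if x < cutoff else "B"

print("before shift:", accuracy(routine, 50), accuracy(adaptive, 50))

# Phase 2: the rule shifts (cutoff 50 -> 75) with no announcement, and
# each worker gets only ten fresh examples to adjust with.
fresh = observe(10, 75)
memory.update(fresh)          # routine: patch ten memorized cases
cutoff = fit_cutoff(fresh)    # adaptive: re-estimate the one parameter

print("after shift: ", accuracy(routine, 75), accuracy(adaptive, 75))
```

Before the shift, the two workers are indistinguishable on any output metric, which foreshadows the measurement problem discussed below. After it, the worker who represented the structure recovers from ten observations, while the worker who memorized the procedure keeps acting on a map that no longer exists.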

The Coordination Problem No One Is Naming

What the business press is calling a competition problem is, at a structural level, a coordination problem. Organizations are deploying AI systems that reconfigure how work is evaluated, routed, and rewarded. Employees are expected to adapt. But the mechanism by which adaptation is supposed to occur is almost never specified. This is the core argument of the Algorithmic Literacy Coordination framework: platforms and AI-integrated workplaces cannot assume pre-existing competence for navigating algorithmically mediated environments. Competence must develop endogenously, through participation, and the variation in how well it develops is substantial even among workers with identical access to tools and information (Schor et al., 2020).

The "competition" narrative obscures this by locating the problem in individual motivation or effort. If workers are falling behind in AI-integrated roles, the implied diagnosis is that they are not trying hard enough to adapt. The structural diagnosis is different: workers lack schemas, not effort. They have folk theories about how the AI systems around them function - individual impressions assembled from experience and informal conversation - but folk theories are not the same as accurate structural understanding (Gagrain, Naab, and Grub, 2024). Folk theories produce locally adaptive behavior that fails to generalize when the system changes, which AI systems do, frequently and without announcement.

What Organizations Are Actually Getting Wrong

The organizational response to AI integration has been almost uniformly procedural. Training programs document specific workflows with specific tools. Documentation describes what to click and when. This is topographic knowledge: it tells workers where things are on the current map. It is not topological knowledge: it does not tell workers the shape of the constraints they are operating within, which is the knowledge that would allow them to navigate a new map when the old one is superseded.

Gentner's (1983) structure-mapping theory predicts that transfer of learning depends on structural alignment between source and target domains. Procedural training on Tool A does not transfer to Tool B unless the learner has represented the structural features that Tool A and Tool B share. Organizations investing in procedure-level AI training are not building the representational structures that enable adaptation. They are, at best, building competence for the current deployment, and that competence degrades as the deployment changes.
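
The same point can be made concrete in code. The sketch below is hypothetical: ToolA, ToolB, and all their method names are invented, and the mapping is an analogy for the transfer claim rather than anything from Gentner's paper. Two tools share a draft-review-publish structure behind different surface APIs. A workflow memorized against Tool A's surface breaks on Tool B; a workflow bound to the shared structure transfers, because switching tools now means learning three name bindings rather than relearning the job.

```python
class ToolA:
    def new_draft(self, text):
        return {"text": text, "stage": "draft"}
    def send_review(self, doc):
        doc["stage"] = "review"
        return doc
    def go_live(self, doc):
        doc["stage"] = "published"
        return doc

class ToolB:
    def create(self, text):
        return {"body": text, "state": "open"}
    def request_approval(self, doc):
        doc["state"] = "approval"
        return doc
    def release(self, doc):
        doc["state"] = "released"
        return doc

def procedural_workflow(tool, text):
    # "What to click and when," memorized against Tool A's surface.
    # Raises AttributeError the moment Tool B is swapped in.
    return tool.go_live(tool.send_review(tool.new_draft(text)))

# Structural knowledge: the shared draft -> review -> publish shape,
# represented explicitly and re-bound per tool.
SHARED_STRUCTURE = {
    ToolA: ("new_draft", "send_review", "go_live"),
    ToolB: ("create", "request_approval", "release"),
}

def structural_workflow(tool, text):
    create, review, publish = (getattr(tool, name)
                               for name in SHARED_STRUCTURE[type(tool)])
    return publish(review(create(text)))

print(structural_workflow(ToolA(), "Q3 report"))  # works on Tool A
print(structural_workflow(ToolB(), "Q3 report"))  # transfers to Tool B
# procedural_workflow(ToolB(), "Q3 report")       # AttributeError: no new_draft
```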

The Measurement Problem

There is a further complication. If organizations cannot clearly specify what AI-adaptive competence looks like, they also cannot measure who has it. The "competition" framing assumes a legible performance signal. But Hancock, Naaman, and Levy (2020) note that AI-mediated communication systematically changes how performance is perceived and attributed. Workers who understand the structural logic of the AI systems they work alongside may produce outputs that are indistinguishable from those of workers who do not, right up until the system changes and the gap suddenly becomes visible. Organizations are likely misidentifying who their competent workers actually are, and the cost of that misidentification will become apparent at the next major system transition, not this one.

The story here is not that AI is creating competition. It is that organizations have not yet built the coordination infrastructure to make AI-integrated work legible to the people doing it. That is a solvable problem, but it requires treating schema development as an organizational investment, not an individual responsibility.

References

Gagrain, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.

Hatano, G., and Inagaki, K. (1986). Two courses of expertise. Research and Clinical Center for Child Development Annual Report, 8, 27-36.

Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.