AI Is Making Workers Compete, Not Replacing Them: Why That Distinction Matters More Than You Think

The Headline Gets the Direction Wrong

A recent piece circulating in business media argues that AI is not replacing workers so much as forcing them to compete against each other for the shrinking number of roles that remain "valuable" in AI-integrated workforces (AI isn't replacing workers, 2025). The framing is meant to be reassuring. It is not. The shift from replacement to competition as the primary threat does not reduce the structural problem; it relocates it. And the relocation matters enormously for how organizations should think about training, coordination, and workforce development.

The Competition Frame Obscures the Real Mechanism

When the narrative shifts from "AI replaces workers" to "AI makes workers compete," the implicit assumption is that some workers will successfully adapt and others will not, and that this sorting process is roughly meritocratic. Workers who develop the right skills will survive; those who do not will be displaced. This framing is intuitive, but it is empirically problematic. Research on platform and algorithmically mediated work consistently shows that workers with identical access to the same tools produce dramatically different outcomes, and this variance cannot be explained by natural ability or effort alone (Kellogg, Valentine, and Christin, 2020). Power-law distributions in performance emerge not because some workers are categorically more capable, but because algorithms amplify initial differences in competence, creating feedback loops that compound over time.
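The compounding mechanism is easy to make concrete. The simulation below is a toy, not a model drawn from the cited studies: it gives a pool of workers near-identical success rates, then lets a stylized allocator route each task in proportion to a worker's accumulated track record. The rich-get-richer loop alone turns a few percent of initial variation into an order-of-magnitude gap in outcomes.

```python
import random

def simulate(n_workers=200, n_tasks=50_000, seed=1):
    """Stylized rich-get-richer allocation: each task is routed in
    proportion to a worker's visible track record, and every success
    feeds straight back into future routing."""
    rng = random.Random(seed)
    # Near-identical underlying competence: success rates within a few
    # percent of one another.
    competence = [min(1.0, max(0.0, rng.gauss(0.90, 0.03)))
                  for _ in range(n_workers)]
    wins = [1] * n_workers  # one seed win each, so everyone starts visible

    for _ in range(n_tasks):
        # Exposure depends on accumulated wins, not on competence itself.
        i = rng.choices(range(n_workers), weights=wins)[0]
        if rng.random() < competence[i]:
            wins[i] += 1
    return competence, wins

def decile_ratio(values):
    """Mean of the top decile divided by mean of the bottom decile."""
    ranked = sorted(values)
    k = max(1, len(ranked) // 10)
    return sum(ranked[-k:]) / sum(ranked[:k])

competence, wins = simulate()
print(f"competence spread (top/bottom decile): {decile_ratio(competence):.2f}x")
print(f"outcome spread    (top/bottom decile): {decile_ratio(wins):.1f}x")
```

The specific numbers are beside the point; the shape is the point. The outcome distribution is far more unequal than the competence distribution that generated it, because the allocator converts early luck and small edges into durable exposure advantages.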

The competition narrative also implies that the relevant skill is identifiable in advance. If workers know what "valuable" looks like, they can train toward it. But this is precisely where the awareness-capability gap becomes critical. Workers can develop accurate awareness that AI systems govern their performance evaluations and task allocation without that awareness translating into improved outcomes (Gagrain, Naab, and Grub, 2024). Knowing you are in a competition does not tell you the rules of the competition.

What Organizations Are Actually Getting Wrong

The organizational response to AI integration has been, almost universally, procedural. Companies issue guidelines on how to use specific tools, train employees on particular platforms, and document workflows that incorporate AI assistance. This is the wrong level of intervention. Procedural training produces what Hatano and Inagaki (1986) call routine expertise: competence that is calibrated to a specific environment and fails when that environment changes. Given that AI tools are themselves changing rapidly, training workers to use current tools correctly is a bet on stability that the market is not offering.

The evidence points to a more useful target: schema induction, training aimed at the structural features of how AI systems mediate performance rather than the specific operations of any one system. Gentner's (1983) structure-mapping theory provides the theoretical foundation here: transfer of learning occurs when learners extract relational structure from one context and apply it to another. Workers who understand why an AI system prioritizes certain inputs, how feedback loops within the system amplify or suppress signals, and what the topology of constraints looks like across platforms are better positioned to adapt when specific tools change. The topography (which tool, which interface, which metric) shifts constantly; the topology (the relational structure underneath) changes far more slowly.
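To make that distinction concrete, here is a minimal sketch. Everything in it is hypothetical: the two platforms, their signal names, and the role labels are invented for illustration. The surface vocabularies are disjoint, but once each signal is mapped to its abstract role, the two systems share one relational skeleton, which is exactly what structure-mapping says a learner must extract for transfer.

```python
def topology(edges, role_of):
    """Strip surface labels: restate each edge in terms of abstract roles."""
    return {(role_of[a], role_of[b]) for a, b in edges}

# Hypothetical ride-hail platform, described in its own surface vocabulary.
rides = {
    ("acceptance_rate", "driver_score"),
    ("driver_score", "dispatch_priority"),
    ("dispatch_priority", "completed_trips"),
    ("completed_trips", "driver_score"),   # track record feeds the score
}
ride_roles = {
    "acceptance_rate": "input_signal",
    "driver_score": "internal_score",
    "dispatch_priority": "allocation",
    "completed_trips": "track_record",
}

# Hypothetical freelance marketplace: different vocabulary, same loop.
gigs = {
    ("response_time", "seller_rating"),
    ("seller_rating", "search_rank"),
    ("search_rank", "jobs_won"),
    ("jobs_won", "seller_rating"),         # track record feeds the rating
}
gig_roles = {
    "response_time": "input_signal",
    "seller_rating": "internal_score",
    "search_rank": "allocation",
    "jobs_won": "track_record",
}

# Identical topology under the role mapping, despite disjoint topography.
assert topology(rides, ride_roles) == topology(gigs, gig_roles)
print("different tools, same relational structure")
```

A worker trained only on the ride-hail vocabulary has learned topography; a worker who has internalized the role structure can re-derive the mapping on whatever platform comes next.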

The Specific Failure Mode in Current Workforce Policy

The workers described in current reporting as "scrambling to secure their place" are, by and large, engaging in topographic learning. They are acquiring competence in specific tools, current workflows, and particular AI-assisted tasks. This is rational given short-term incentive structures. An employee who needs to demonstrate value in the next performance cycle will optimize for current tool proficiency, not for abstract structural understanding. But this is exactly the coordination failure that organizations need to address at the institutional level, because individual workers cannot be expected to trade short-term performance for long-term adaptability on their own.

Rahman's (2021) analysis of algorithmic control in platform work is instructive here. Workers operating under algorithmic management face what Rahman calls the invisible cage: a set of constraints that are real and consequential but not directly observable. The response to an invisible cage cannot be tool-specific training. It requires developing the capacity to infer structural features from observable behavior, which is a generalizable cognitive skill rather than a domain-specific procedure.
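What that inference can look like is easy to sketch. Everything below is hypothetical: a made-up platform scores workers on two observable behaviors using weights the workers never see, and a worker who logs behaviors against observed scores can recover approximate weights with nothing more elaborate than least-squares fitting by gradient descent.

```python
import random

rng = random.Random(42)

# Hypothetical platform: a hidden linear weighting of two observable
# behaviors. Workers see the behaviors and the resulting score only.
HIDDEN_W = (0.8, 0.2)

def platform_score(speed, rating):
    return HIDDEN_W[0] * speed + HIDDEN_W[1] * rating + rng.gauss(0, 0.02)

# A worker's log: (behavior, behavior, observed score) over many tasks.
log = []
for _ in range(500):
    speed, rating = rng.random(), rng.random()
    log.append((speed, rating, platform_score(speed, rating)))

# Infer the hidden weights from the log with plain gradient descent.
w = [0.0, 0.0]
lr = 0.5
for _ in range(2000):
    g = [0.0, 0.0]
    for speed, rating, score in log:
        err = w[0] * speed + w[1] * rating - score
        g[0] += err * speed
        g[1] += err * rating
    w[0] -= lr * g[0] / len(log)
    w[1] -= lr * g[1] / len(log)

print(f"hidden weights:   {HIDDEN_W}")
print(f"inferred weights: ({w[0]:.2f}, {w[1]:.2f})")
```

Real algorithmic evaluations are neither linear nor two-dimensional, so the sketch understates the difficulty. The point is narrower: structure can be estimated from observed behavior, and that estimation habit is precisely the generalizable skill that tool-specific training never exercises.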

The Implication Organizations Are Missing

The competition framing, whatever its limits, does usefully identify that the sorting mechanism is now partially algorithmic. Organizations that invest only in tool-specific training are producing workers who are competent in the current environment but fragile in the next one. The more durable investment is in developing workers who understand the structural logic of algorithmically mediated environments well enough to transfer that understanding across tools and contexts. That is not a soft skill. It is an organizational design problem with a research-supported solution.

References

Gagrain, A., Naab, T., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hatano, G., and Inagaki, K. (1986). Two courses of expertise. Research and Clinical Center for Child Development, 11, 27-36.

Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.