ServiceNow's 30% Unemployment Warning and the Competence Inversion Problem in Entry-Level Labor Markets

ServiceNow CEO Bill McDermott issued a striking warning this week: unemployment among recent graduates could exceed 30 percent as AI agents absorb the routine tasks that once constituted entry-level work. McDermott's framing is worth taking seriously, not because the specific number is defensible with current evidence, but because the structural mechanism he is describing is real and underappreciated in most mainstream coverage. The problem is not simply that AI will eliminate jobs. The problem is that it will eliminate the specific jobs through which junior workers have historically developed into senior ones.

The Training Ladder Problem

Entry-level positions in knowledge work have never been primarily about output. They are competence-induction environments. A junior analyst running financial models, a new associate reviewing contracts, or a recent graduate processing client data is not being paid for the quality of that work alone. They are being paid to develop the structural understanding of the domain that will eventually make them useful at higher levels of abstraction. When AI agents absorb those tasks, the output is preserved but the induction is eliminated. McDermott acknowledged this directly, noting that digital workers will handle "much of the grunt work once used to train junior staff." This is a precise description of what Hatano and Inagaki (1986) call the difference between routine and adaptive expertise. Routine expertise is procedure-following; adaptive expertise is the principled understanding that allows performance in novel contexts. The grunt work was never just grunt work. It was the substrate for developing adaptive expertise.

Why Awareness of the Problem Does Not Solve It

The organizational response to displacement predictions like McDermott's typically converges on training interventions: AI literacy programs, reskilling initiatives, and structured exposure to AI tooling. These responses reflect a coherent instinct but contain a structural flaw that my dissertation research addresses directly. Algorithmic literacy research consistently shows that awareness of how algorithms function does not translate into improved performance outcomes (Kellogg, Valentine, and Christin, 2020). Knowing that an AI agent exists and handles certain tasks is categorically different from knowing how to build competence in an environment where those tasks are no longer available for human practice. Organizations that respond to McDermott's warning by adding AI literacy modules to onboarding programs are addressing the topography of the problem (what is visible, what is named) without addressing its topology, meaning the underlying shape of the competence-development constraint.

The Endogenous Competence Problem at Scale

The ALC framework I am developing argues that competencies in algorithmically mediated environments develop endogenously through participation, not through prior training alone. Platform coordination does not assume ex-ante competence; it creates conditions under which competence either develops or does not, depending on how participation is structured. The same logic applies to AI-augmented organizations. If entry-level workers are no longer participating in the tasks through which domain schemas are induced, the organization faces what I would call the endogenous competence problem at scale: the pipeline through which adaptive expertise is produced has been disrupted, and no amount of explicit instruction straightforwardly replaces it. Gentner's (1983) structure-mapping theory is relevant here. Schema induction depends on repeated exposure to structurally similar problems across varying surface features. That is what grunt work provides. Removing it removes the variation set from which structural understanding is abstracted.

What McDermott's Warning Actually Predicts

The unemployment figure of 30 percent is less important than the distributional prediction embedded in McDermott's logic. If entry-level task absorption by AI is uniform across industries but competence development is not replaced, the variance in graduate outcomes will increase, not just the mean unemployment rate. This is consistent with the power-law distribution pattern that motivates the ALC framework: identical access to AI-augmented environments will produce dramatically different outcomes depending on whether workers can develop structural schemas in the absence of traditional induction pathways. Schor et al. (2020) document similar variance patterns in platform labor contexts, where access to platforms does not equalize outcomes. The graduate labor market of the next decade may exhibit the same structure: access to AI tools will be widely distributed, but the competence to use them adaptively will not be, because the induction mechanisms that previously produced that competence have been automated away.
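The distributional logic can be made concrete with a toy simulation. Every parameter below is an assumption chosen purely for illustration (the 15 percent rate of workers who find substitute induction pathways, the Gaussian outcome parameters); none is an empirical estimate. The point is only structural: a mixture of induced and non-induced workers widens the spread of outcomes even when access to the AI-augmented environment is identical for everyone.

```python
import random
import statistics

random.seed(0)
N = 10_000

def outcome_with_induction():
    # Traditional grunt-work pipelines pull workers toward a shared
    # competence level: outcomes cluster around a common mean.
    return random.gauss(1.0, 0.2)

def outcome_without_induction():
    # Assumption: with the pipeline automated away, only a minority
    # (15% here, a made-up figure) self-induce structural schemas.
    if random.random() < 0.15:
        return random.gauss(2.0, 0.3)  # adaptive expertise developed
    return random.gauss(0.6, 0.2)      # tool access without competence

with_ind = [outcome_with_induction() for _ in range(N)]
without_ind = [outcome_without_induction() for _ in range(N)]

print(f"with induction:    mean={statistics.mean(with_ind):.2f}, "
      f"stdev={statistics.stdev(with_ind):.2f}")
print(f"without induction: mean={statistics.mean(without_ind):.2f}, "
      f"stdev={statistics.stdev(without_ind):.2f}")
```

Under these assumed parameters the two populations can have broadly similar means while the no-induction population shows a much larger standard deviation, which is the variance prediction in the argument above, not a forecast of actual labor-market magnitudes.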

The Organizational Design Implication

The practical question this raises is not how to train graduates on AI tools, but how to redesign entry-level participation so that schema induction occurs despite the absence of traditional task pipelines. That is a genuinely hard organizational design problem, and I do not think most firms are currently treating it as such. McDermott's warning deserves a more precise response than reskilling rhetoric. It deserves a structural analysis of where competence actually comes from, and what replaces it when the conditions that produced it are removed.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hatano, G., and Inagaki, K. (1986). Two courses of expertise. Research and Clinical Center for Child Development, 11, 27-36.

Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.