AI Sales Coaches and the Competence Substitution Problem: What ServiceNow's Training Pivot Reveals

The News Event

Companies including ServiceNow and Braintrust are now embedding AI simulations directly into sales training pipelines, effectively replacing the coaching role that middle managers once filled. As reported this week, the rationale is straightforward: middle management layers are thinning, managers who remain lack bandwidth for one-on-one coaching, and AI simulations can scale in ways that human trainers cannot. The business logic is defensible. The organizational logic deserves considerably more scrutiny.

Substitution Versus Augmentation

The distinction between substituting competence and augmenting it matters enormously here, and most coverage of this trend collapses the two. When ServiceNow deploys an AI sales coach, the system can simulate objections, score responses, and provide immediate feedback. What it cannot do is model the interpretive judgment that converts a principle into a context-appropriate action. This is precisely the gap Hatano and Inagaki (1986) identified in their two-courses model of expertise development: procedural training produces routine expertise calibrated to expected conditions, while adaptive expertise develops through exposure to variability and the need to construct explanations under uncertainty. An AI simulation that scores responses against a rubric is, structurally, a procedure delivery system. It teaches sales representatives how to navigate known topography. It does not teach them to read unfamiliar terrain.

The Awareness-Capability Gap in a New Context

My dissertation research focuses on what I call the awareness-capability gap in algorithmic environments: workers often develop awareness of how a system operates without developing the capability to respond effectively to it (Kellogg, Valentine, and Christin, 2020). The AI sales coaching trend reproduces this gap in an organizational training context. Sales representatives trained through AI simulations will develop awareness of objection patterns and scripted responses. What they are unlikely to develop is the structural schema that tells them why a particular response works and under what conditions it transfers to novel situations. Gagrain, Naab, and Grub (2024) make a related point about algorithmic media users: exposure to algorithmic feedback increases awareness of the feedback mechanism without producing accurate mental models of the underlying structure. The sales training case is formally analogous: more reps at bat in simulations, more procedural familiarity, less structural understanding.

What Thinning Middle Management Actually Removes

There is a specific organizational loss embedded in this shift that the framing of "AI filling gaps left by absent managers" tends to obscure. Middle managers performing coaching functions were not simply delivering procedures. They were translating organizational context into situation-specific guidance, a function that requires what Gentner (1983) called structural relational knowledge: understanding not just what elements are present in a situation, but how they relate to each other and to analogous situations encountered previously. When a sales manager tells a representative "this objection is structurally the same as what you see in renewal conversations, so apply the same framing logic," that is schema transfer. AI simulations optimized for scalability and scoring have no mechanism for producing this kind of relational instruction. They can approximate the surface features of coaching without replicating the deep structure of it.

The Scaling Fallacy

The business case for AI sales coaching rests heavily on the claim that competence development can be scaled through simulation volume. More reps, more simulated conversations, faster onboarding. This logic is internally coherent only if the competence being developed is primarily procedural. For routine sales contexts with predictable objection structures, this may be sufficient. For contexts requiring adaptive expertise, scaling procedural training produces exactly the outcome the adaptive expertise literature predicts: workers who perform well in training conditions and fail in transfer conditions. Rahman (2021) documented a structurally related problem in platform work contexts, where workers who learned platform-specific procedures could not adapt when platform logic shifted. The sales training case is not identical, but the underlying mechanism is: procedural fluency does not transfer when the situation changes.

A Testable Claim

Here is the specific prediction this analysis generates. Companies deploying AI sales coaching as a replacement for managerial coaching, rather than as a supplement to it, will observe strong initial performance metrics from trainees followed by disproportionate failure rates when those trainees encounter novel objection structures or customer contexts that fall outside their simulation training distribution. The variance will look like individual differences in ability. It will actually reflect a training design that never produced structural schemas in the first place. The Gen Z career squeeze story published alongside this news adds a second layer: entry-level workers already face fewer mentorship pathways, and replacing those pathways with AI simulations that optimize for procedural fluency rather than adaptive judgment compounds an existing structural deficit. These two trends are not coincidental. They are mutually reinforcing.

References

Gagrain, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.

Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.