EY's AI Agents and the Competence Inversion Problem in Professional Services

The Announcement and What It Actually Signals

EY executive Marc Jeschonneck told Business Insider this week that the firm's new AI agents will make life harder for entry-level accountants in the near term, framing this friction as a temporary adjustment cost before long-run benefits materialize. This framing is worth examining carefully, because it conceals a structural problem that standard transition narratives miss. The statement treats the difficulty as a matter of adaptation speed, as if entry-level workers simply need time to learn new tools. But the actual mechanism at work is more disruptive than that, and it maps closely onto what coordination theorists would call a competence inversion problem.

Why "Adjustment Cost" Is the Wrong Frame

In classical professional development, entry-level workers acquire competence through exposure to routine tasks. Audit associates learn materiality thresholds by reviewing workpapers. Tax associates learn statutory structures by preparing returns. The routine work is not merely billable filler; it is the substrate through which structural schemas are built. When AI agents absorb that routine work, the developmental pipeline does not simply become faster. It becomes structurally incomplete. Kellogg, Valentine, and Christin (2020) documented a parallel dynamic in algorithmic work environments, where workers lost access to the feedback loops through which they had historically calibrated their own competence. The EY situation appears to replicate this pattern at the professional services level.

Jeschonneck's framing implicitly assumes that the competence developed through routine tasks is separable from the tasks themselves, that the learning can be decoupled from the doing. Hatano and Inagaki's (1986) distinction between routine and adaptive expertise directly challenges this assumption. Routine expertise is built through procedural repetition in stable environments. Adaptive expertise, which is what senior partners at EY actually deploy when they exercise judgment on complex engagements, develops through encounters with variation and ambiguity. If AI agents now handle the variation-free cases, the developmental conditions for adaptive expertise are degraded, not just delayed.

The Schema Deficit at Scale

This matters beyond EY specifically because the firm is not an outlier. The same logic applies anywhere professional services firms deploy AI agents to handle work that was previously developmental for junior staff. What looks like operational efficiency at the task level produces a schema deficit at the organizational level. Junior accountants, junior attorneys, junior analysts: all of these roles historically functioned as knowledge-building positions before they became revenue-generating ones. The AI transition conflates these two functions by treating the work purely as output.

The case of Patlytics, a legal AI startup that reportedly grew revenue approximately tenfold in a single year and now counts more than 40 percent of AmLaw 100 firms as clients, illustrates how rapidly this substitution is occurring in adjacent professional services domains. Patent law associates perform similarly structured developmental work: claim analysis, prior art searches, prosecution history review. When that work migrates to an AI platform, the firm captures margin, but the pipeline for producing senior-level structural judgment is attenuated in ways that will not be visible in near-term performance metrics.

What Organizations Are Actually Optimizing For

The deeper issue is a mismatch between what organizations measure and what they are actually depleting. Firms measure billable hours, realization rates, and revenue per partner. They do not routinely measure the structural schema development of their junior workforce. Schor et al. (2020) observed a comparable measurement asymmetry in platform labor contexts, where workers' dependence on algorithmic systems increased as their own navigational competence atrophied. The professional services variant of this dynamic is subtler, but the structural logic is similar: the system optimizes for throughput at the expense of the conditions that produced expert judgment in the first place.

Jeschonneck is likely correct that EY's AI tools will be worthwhile in the long run for the firm's financial performance. That is a different claim than saying they will be worthwhile for the development of the workforce that operates them. Organizations conflate these two claims regularly, and the conflation tends to go unexamined until the talent pipeline has already thinned. The harder question, which firms like EY have not yet publicly addressed, is whether they have a theory of how adaptive expertise gets built in an environment where the developmental substrate has been algorithmically reassigned.

References

Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.

Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.