FedEx's 400,000-Worker AI Training Rollout Reveals a Classic Competence Architecture Problem
The Announcement and What It Actually Claims
FedEx recently announced it is deploying AI training to approximately 400,000 workers with an explicit goal of making those workers "promotion-ready." The framing is notable: this is not positioned as efficiency training, compliance training, or even productivity training. It is positioned as a career development intervention. That distinction matters enormously for evaluating whether the program is likely to work, and for understanding what kind of competence FedEx is actually trying to build.
Procedural Training at Scale
The available reporting on the FedEx rollout emphasizes tool selection by leadership and structured training access for workers. This is a procedural architecture: identify tools, document workflows, train workers on those specific workflows. The underlying logic assumes that competence with AI systems is primarily a matter of familiarity with particular interfaces and outputs. If workers know how to operate the tools FedEx has approved, they become more capable employees.
This logic has a coherent internal structure, but it rests on an assumption that Hatano and Inagaki (1986) identified as the distinguishing feature of routine expertise rather than adaptive expertise. Routine expertise produces fast, accurate performance within familiar task conditions. Adaptive expertise produces performance that transfers to novel conditions. The question FedEx has not publicly answered is which of these it is trying to cultivate across its workforce, and whether its training architecture is capable of producing the one it claims to want.
The Awareness-Capability Gap at Organizational Scale
Research on algorithmic literacy has consistently documented what I call the awareness-capability gap: workers can develop accurate awareness that algorithmic systems are shaping their outcomes without developing any corresponding improvement in those outcomes (Kellogg, Valentine, & Christin, 2020). Knowing that an AI tool exists, and even knowing how it generally operates, does not translate automatically into the kind of structural understanding that changes behavior. Gagrain, Naab, and Grub (2024) found that increased algorithmic media use does not reliably produce algorithmic literacy; exposure and competence are separable variables.
At the scale of 400,000 workers, this gap becomes an organizational architecture problem rather than an individual learning problem. If the training program produces workers who can identify which AI tools FedEx has deployed and follow the documented procedures for using them, the program will likely show strong initial performance metrics. Workers will complete tasks faster, make fewer obvious errors, and score well on any assessment tied directly to the training content. None of that is evidence of the adaptive competence required to become "promotion-ready" in environments where AI tools are themselves changing.
Schema Induction Versus Procedural Documentation
The theoretical alternative to procedural training is schema induction: teaching workers the structural features of AI-mediated work environments rather than the specific features of particular tools. Gentner's (1983) structure-mapping framework suggests that transfer between domains occurs when learners have abstracted the relational structure of a problem rather than its surface features. A worker who understands why a given AI tool produces the outputs it does - what the system is optimizing for, what inputs it weights, where its outputs are structurally unreliable - is positioned to adapt when the specific tool changes. A worker who knows how to follow the documented procedure for that tool is not.
The distinction between topology and topography is useful here. Topographic knowledge is knowledge of specific terrain: this button does this, this output means that. Topological knowledge is knowledge of the shape of the constraint structure: AI recommendation systems tend to amplify initial signals, output confidence scores are not accuracy scores, optimization targets embedded in systems may not match organizational objectives. FedEx's reported training architecture is designed to produce topographic knowledge. The "promotion-ready" framing implies that topological knowledge is what actually differentiates career trajectories.
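Two of the topological claims above - that confidence scores are not accuracy scores, and that recommendation loops amplify initial signals - can be made concrete with a minimal sketch. The functions, inputs, and growth rule below are illustrative assumptions of mine, not anything FedEx (or any particular AI vendor) has published; they simply show the shape of the constraints a structurally literate worker would recognize.

```python
def calibration_gap(confidences, correct):
    """Average stated confidence minus actual accuracy.

    A positive gap means the system is overconfident: its confidence
    scores systematically overstate how often it is right.
    """
    avg_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return avg_conf - accuracy

def amplified_share(scores, rounds=5):
    """Deterministic rich-get-richer sketch of a recommendation loop.

    Each round, every item's score grows in proportion to its current
    share of total exposure, so a small initial lead compounds rather
    than washing out. Returns the leading item's final share.
    """
    s = list(scores)
    for _ in range(rounds):
        total = sum(s)
        s = [x * (1 + x / total) for x in s]
    return s[0] / sum(s)

# Hypothetical model outputs: high confidence, mediocre accuracy.
confs = [0.95, 0.90, 0.92, 0.88, 0.91]
hits = [1, 0, 1, 0, 1]  # only 3 of 5 predictions were right
print(round(calibration_gap(confs, hits), 3))  # 0.312: ~91% confident, 60% accurate

# Item 0 starts with twice item 1's score (initial share 2/3 ~ 0.667);
# after a few feedback rounds its share has grown well past that.
print(round(amplified_share([2, 1]), 3))
```

A worker who has internalized these two shapes - overconfidence is measurable, and early signals compound - can reason about any tool that exhibits them, which is exactly what tool-specific procedural documentation does not teach.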
Why the Framing Matters for Evaluating the Program
None of this means the FedEx program is without value. Procedural AI training for a logistics workforce operating at scale has obvious operational justifications. Rahman (2021) documented the ways algorithmic management systems constrain worker autonomy through information asymmetry; reducing that asymmetry through any form of training is a legitimate organizational goal. The problem is specifically the gap between the stated outcome - promotion readiness - and the architecture most likely to produce it.
If FedEx measures program success by short-term, task-level performance metrics, the program will almost certainly appear to succeed. If it measures success by the career-trajectory outcomes it is explicitly promising workers, the evaluation timeline lengthens considerably and the theory predicts a much weaker effect. Organizations rolling out AI training at scale should be precise about which competence they are building: procedural and adaptive expertise require different instructional architectures, and conflating them in program framing does a disservice to both the organization and the workers being trained.
References
Gagrain, A., Naab, T. K., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan. Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactance to algorithmic evaluation systems. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt