Inside the AI Mandate: Why Corporate Upskilling Campaigns Misread the Competence Problem

The Carrot-and-Stick Model of AI Adoption

A recent report from Fortune describes a coordinated push by major technology firms and Wall Street banks to transform every employee into what they call an "AI master." The mechanisms being deployed are familiar: productivity incentives, mandatory training modules, internal leaderboards, and in some organizations, performance reviews tied to AI tool usage. The framing is consistent across firms - workers who resist AI adoption are described as apprehensive, and the organizational response is to design that resistance out of the system through structured incentive programs. This is a coherent managerial strategy. It is also, I would argue, a strategy that almost completely misreads the underlying competence problem.

The Awareness-Capability Confusion at the Center of These Programs

The programs described in the Fortune reporting share a common structural assumption: that exposure to AI tools, combined with sufficient motivation, will produce competent AI-augmented workers. This assumption conflates two distinct phenomena that the algorithmic literacy literature treats as separable. Kellogg, Valentine, and Christin (2020) document extensively that workers in algorithmically mediated environments develop awareness of the systems governing their work without that awareness translating into improved performance outcomes. Gagrain, Naab, and Grub (2024) extend this finding to media consumption contexts, showing that algorithm literacy - knowing that algorithmic systems shape content - does not reliably produce the behavioral adjustments that literacy frameworks predict. The corporate upskilling campaigns being reported function almost entirely at the awareness layer. They teach workers that generative AI tools exist, what prompting syntax looks like, and which workflows are candidates for automation. What they do not teach is the structural layer: why AI outputs vary, when AI-mediated communication introduces systematic distortion (Hancock, Naaman, & Levy, 2020), and how to calibrate judgment in contexts where AI output is plausible but incorrect.
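
To make the last of those concrete - calibrating judgment against plausible-but-incorrect output - here is a minimal sketch of one way such calibration could be audited. The scenario, the function name, and the data are illustrative assumptions on my part, not anything described in the Fortune reporting or the cited studies; it presumes a spot-check process that records whether a worker accepted each AI output and whether that output later proved correct.

    def overreliance_rate(decisions):
        """Fraction of incorrect AI outputs that a reviewer accepted anyway.

        `decisions` is a list of (accepted, output_correct) boolean pairs,
        e.g. from a hypothetical spot-check audit. A high rate suggests
        judgment is not calibrated to the plausible-but-incorrect failure mode.
        """
        wrong = [accepted for accepted, correct in decisions if not correct]
        return sum(wrong) / len(wrong) if wrong else 0.0

    # Hypothetical audit: three outputs were incorrect; two were accepted anyway.
    audit = [(True, True), (True, False), (False, False), (True, False)]
    print(f"over-reliance on incorrect outputs: {overreliance_rate(audit):.0%}")

Note what the measurement requires: ground-truth checks on individual outputs. Module-completion metrics generate nothing of the kind, which is the awareness-capability gap in miniature.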

Routine Expertise Is the Wrong Target

Hatano and Inagaki (1986) draw a distinction between routine expertise, the ability to execute well-defined procedures reliably, and adaptive expertise, the ability to apply underlying principles to novel problem configurations. The "AI master" programs described in the Fortune piece are oriented almost entirely toward routine expertise. They train workers to complete specific workflows with specific tools. This is a defensible short-term strategy if the tool environment remains stable. The difficulty is that generative AI platforms are not stable. Model updates, interface changes, and capability shifts occur on timescales that outpace procedural training cycles. A worker who has learned a procedure for using a particular AI tool is not equipped to adapt when the tool changes, which is roughly the same problem that platform-specific procedural training produces in gig economy contexts (Schor et al., 2020). The competence that would actually transfer across these shifts is schema-level understanding: knowing what class of problem AI-mediated communication is suited to solve, what failure modes are structurally predictable, and how to detect output degradation. That kind of understanding is not what mandatory module completion produces.
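
To illustrate what detecting output degradation might mean in practice, as opposed to knowing a tool's current interface, here is a minimal sketch. It assumes only that some per-output quality score exists - a rubric grade, an inverted error count, a reviewer rating; all hypothetical - and flags drift against an established baseline. The point is that the check is tool-agnostic: it survives a model update precisely because it encodes the schema, not the procedure.

    import statistics

    def degradation_alert(scores, baseline_n=20, window_n=5, threshold=2.0):
        """Flag when recent output quality drifts below an established baseline.

        `scores` is a chronological sequence of per-output quality ratings
        (hypothetical). Returns True when the mean of the last `window_n`
        scores falls more than `threshold` baseline standard deviations
        below the mean of the first `baseline_n` scores.
        """
        if len(scores) < baseline_n + window_n:
            return False  # not enough history to judge drift
        baseline = scores[:baseline_n]
        recent = scores[-window_n:]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # guard against a flat baseline
        return statistics.mean(recent) < mu - threshold * sigma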

The Organizational Theory Problem These Campaigns Ignore

There is a second issue that the corporate framing obscures, one that sits closer to organizational theory than to cognitive science. The Fortune reporting notes that around 15 percent of Americans report willingness to work for an AI boss, while the majority express distrust of AI output and concern about job displacement. These two data points are usually reported as a tension to be managed through better change communication. I read them differently. Sundar (2020) identifies what he calls machine agency - the degree to which workers attribute autonomous decision-making capacity to AI systems - as a distinct variable shaping how humans interact with AI-mediated outputs. Workers who distrust AI output may not be exhibiting irrational resistance. They may be accurately detecting the accountability gap that opens when consequential decisions are routed through systems that cannot be interrogated using standard organizational accountability mechanisms (Rahman, 2021). That is not an apprehension problem. That is a governance problem, and no amount of incentive design resolves it.

What the Mandate Reveals About Organizational Schema

The broader pattern visible in this reporting is that large organizations are attempting to solve a schema-level problem with a procedural-level intervention. Gentner's (1983) structure-mapping theory suggests that transfer of competence requires learners to develop accurate structural representations of the problem domain, not just surface familiarity with specific instances. The firms described in the Fortune piece are producing surface familiarity at scale. Whether that produces durable capability gains, or whether it produces a cohort of workers who can demonstrate AI tool usage in stable conditions but fail in novel ones, is an empirical question. My prediction, derived from the ALC framework, is that the performance distributions within these firms will widen rather than narrow as a result of these programs. Workers who independently develop structural schemas for AI-mediated work will pull ahead. Workers who complete the modules will plateau. That is not a failure of motivation. It is a predictable consequence of mistaking procedural compliance for competence development.
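
The widening-distribution prediction can be made mechanically explicit with a toy model. Everything here is an assumption chosen for illustration - the growth rates, the 20 percent schema-builder share, the plateau schedule - not an estimate from data: schema-builders compound small gains, while module-completers capture early procedural gains and then flatten.

    import random
    import statistics

    random.seed(1)

    def simulate(n_workers=1000, periods=12, schema_share=0.2):
        """Toy model: spread in performance before vs. after an upskilling program."""
        start, end = [], []
        for _ in range(n_workers):
            schema = random.random() < schema_share  # independently built a schema
            skill = random.gauss(50, 5)              # similar starting baseline
            start.append(skill)
            for t in range(periods):
                if schema:
                    skill *= 1.04                    # schema-builders compound
                else:
                    skill += max(0.0, 3 - 0.5 * t)   # early gains, then plateau
            end.append(skill)
        return statistics.stdev(start), statistics.stdev(end)

    sd_before, sd_after = simulate()
    print(f"sd before: {sd_before:.1f}, sd after: {sd_after:.1f}")  # spread widens

Under these assumptions the standard deviation roughly doubles over a year of training periods. The mechanism, not the specific numbers, is the claim.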

References

Gagrain, A., Naab, T. K., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.

Hatano, G., & Inagaki, K. (1986). Two courses of expertise. Research and Clinical Center for Child Development Annual Report, 8, 27-36.

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.

Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.

Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.