Jensen Huang at Carnegie Mellon and the Commencement Address as Competence Signal
The Graduation Speech as Organizational Artifact
On Sunday, NVIDIA founder and CEO Jensen Huang delivered the keynote address at Carnegie Mellon University's 128th commencement ceremony, telling graduates that "a new industry is being born" and that their careers begin "at the start of the AI revolution." The speech received considerable media attention, largely framed around its inspirational register. I want to set aside that framing and treat the address as an organizational artifact worth analyzing more carefully. What Huang actually communicated - and what he conspicuously did not communicate - tells us something important about how platform-adjacent firms signal competence expectations to incoming labor market participants.
The Framing Problem: Revolution Versus Structural Understanding
Huang's address invokes the language of epochal opportunity: a new era, an extraordinary moment, an industry being born. This rhetoric is not unusual for commencement addresses, but it carries a specific cognitive consequence. When new entrants to a domain are told they are arriving at the beginning of a transformation, the implicit message is that prior structural knowledge is less relevant than enthusiasm and participation. The problem is that this framing systematically deprioritizes schema acquisition in favor of procedural engagement. Research by Hatano and Inagaki (1986) draws a precise distinction between routine expertise, which is calibrated to expected task sequences, and adaptive expertise, which involves a principled understanding of why those sequences work. Revolutionary framing nudges new workers toward the former while creating the impression they are developing the latter.
Shadow AI and the Awareness-Capability Gap
This matters because of a second news item published this week: reports on the rise of "shadow AI," the practice of workers using AI tools like Claude without informing their IT departments or organizations. These two stories, read together, describe a coherent pattern. New workers are told that participation in AI-mediated environments is the fundamental career move of this generation, and simultaneously, those same workers are routing around institutional oversight to access AI tools independently. The result is a workforce that is highly motivated to engage with algorithmically-mediated systems but structurally underprepared to understand those systems. This is precisely the awareness-capability gap identified in algorithmic literacy research: workers develop awareness that algorithms shape outcomes, but this awareness does not translate into improved performance (Kellogg, Valentine, and Christin, 2020). The shadow AI phenomenon suggests the gap is now organizational as well as individual. Firms lack visibility into how their employees are using AI tools, which means they cannot systematically address the schema deficits those employees carry.
What Commencement Rhetoric Omits
Huang's speech, from what has been reported, offers no structural account of how AI platforms actually coordinate work. It does not describe the mechanisms by which algorithmic systems amplify initial competence differences, nor does it explain why workers with identical access to the same tools routinely produce dramatically different outcomes. These distributional questions matter. Power-law distributions in platform-mediated work are not primarily the result of natural ability or motivational differences; they reflect the endogenous amplification of early structural advantages (Schor et al., 2020). A commencement address that frames AI adoption as generational opportunity without explaining the structural logic of that opportunity is, in effect, equipping graduates with folk theories rather than schemas. Gentner's (1983) structure-mapping framework would predict that transfer to novel AI contexts requires learners to hold accurate representations of structural relations, not surface-level familiarity with specific tools.
The Organizational Implication
The combination of high-profile revolutionary framing from AI industry leaders and the concurrent rise of unsanctioned AI tool adoption inside organizations points toward a competence coordination failure that firms are not yet taking seriously. Workers enter organizations primed to engage with AI and inclined to do so outside formal channels. Organizations, for their part, have largely responded with either prohibition or undifferentiated "AI literacy" initiatives that address awareness without building structural understanding. Neither response closes the gap. What the shadow AI pattern actually signals is that procedural training - telling workers which tools to use and how - is failing to produce workers who understand when, why, and under what constraints those tools produce reliable outputs. That is a schema problem, and commencement addresses from platform CEOs, however well-intentioned, are making it worse by orienting new entrants toward participation rather than comprehension.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. Research and Clinical Center for Child Development, 11, 27-36.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5), 833-861.
Roger Hunt