The 'Stop Hiring Humans' Conference and the Competence Inversion Nobody Is Naming
What Actually Happened in Las Vegas
At HumanX last week, a four-day AI conference drawing roughly 6,500 attendees, industry insiders delivered a now-familiar message: workers should code smarter, think harder, and lean into their humanity. The phrase "stop hiring humans" circulated as provocation rather than policy, but the dodge was the more telling part: speakers consistently refused to quantify how many jobs AI would displace, pivoting instead to reassurances about augmentation and upskilling. I want to take that dodge seriously as an organizational phenomenon, because the evasion itself tells us something precise about the structural problem these executives are unwilling to name.
The Inversion Nobody Addressed
Classical labor economics assumes that workers arrive in the labor market with pre-existing, assessable competencies. Employers screen for those competencies, and wages reflect them. This is the assumption underlying virtually every reassurance about "upskilling" offered at HumanX. The argument goes: workers who develop AI competencies will remain employable. What this framing systematically ignores is that AI-mediated work environments do not sort pre-existing competencies; they produce competencies endogenously through participation. The worker who has never interacted with a copilot tool does not lack a fixed, observable skill that training can simply install. They lack exposure to an environment that would develop that skill through feedback. This is the competence inversion problem, and it is categorically different from a skills gap.
The ALC framework I am developing at Bentley makes this distinction explicit. Platform coordination, including AI-assisted work coordination, generates power-law outcome distributions not because some workers have innately superior abilities but because algorithmic environments amplify initial differences in structural understanding. Kellogg, Valentine, and Christin (2020) documented this dynamic in gig platforms: identical access to the same system produces radically unequal outcomes. The HumanX speakers were, in effect, telling workers to "upskill" without acknowledging that the skill in question only becomes legible after sustained participation in the system itself.
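To make the amplification claim concrete, here is a minimal simulation sketch. The allocation rule, worker counts, and parameters are illustrative assumptions of mine, not figures from the Kellogg, Valentine, and Christin study or from any HumanX data; the point is only that a proportional-feedback rule turns near-identical starting positions into radically unequal outcomes.

```python
import random

def simulate_allocation(n_workers=300, n_tasks=30_000, seed=7):
    """Stylized rich-get-richer task allocation (illustrative parameters).

    Every worker starts with a nearly identical baseline weight. The
    platform routes each task to a worker with probability proportional
    to accumulated output, so small initial differences and early wins
    compound through feedback instead of washing out.
    """
    rng = random.Random(seed)
    # Baseline weights within 5% of one another: near-identical starting points.
    output = [1.0 + 0.05 * rng.random() for _ in range(n_workers)]
    for _ in range(n_tasks):
        winner = rng.choices(range(n_workers), weights=output, k=1)[0]
        output[winner] += 1.0  # each win raises the odds of the next win
    return output

def gini(values):
    """Gini coefficient: 0 means perfectly equal outcomes, 1 means one worker takes all."""
    ordered = sorted(values)
    n = len(ordered)
    weighted_sum = sum((i + 1) * v for i, v in enumerate(ordered))
    return (2 * weighted_sum) / (n * sum(ordered)) - (n + 1) / n

if __name__ == "__main__":
    outcomes = sorted(simulate_allocation(), reverse=True)
    top_decile_share = sum(outcomes[: len(outcomes) // 10]) / sum(outcomes)
    print(f"Gini coefficient of final output: {gini(outcomes):.2f}")
    print(f"Output share of the top 10% of workers: {top_decile_share:.0%}")
```

The inputs in this sketch differ by at most five percent, yet the printed summary comes out far more unequal than that; the inequality is produced by the feedback structure, which is exactly what the upskilling framing leaves out.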
Folk Theories Versus Structural Schemas
A separate story from the same news cycle is directly relevant here. Reporting on AI-driven platforms and female gig workers in Indonesia documents how workers develop what they believe are functional theories of how the algorithm rewards and penalizes behavior, only to find those theories unreliable when platform logic shifts (as reported in recent coverage on algorithmic labor in the Global South). This is precisely what the algorithmic literacy literature identifies as the awareness-capability gap. Gagrain, Naab, and Grub (2024) distinguish between folk theories, which are individual impressions about how a system works, and structural schemas, which are accurate representations of a system's underlying logic. Workers who develop folk theories are not ignorant; they are doing exactly what intuition recommends. They observe patterns and generalize. But folk theories are topographical: they describe how to navigate one specific terrain. Structural schemas are topological: they describe the shape of constraints across terrains.
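Here is a toy numerical sketch of that gap, under assumptions I am supplying for illustration only (the payout rules, speeds, and noise below are hypothetical, not taken from the Indonesian reporting): a worker induces a folk theory from observed payouts under one platform regime, the regime changes, and the same generalization now points the wrong way.

```python
import random

def payout_v1(speed):
    """Hypothetical payout rule before an unannounced platform update: faster pays more."""
    return 5.0 + 2.0 * speed

def payout_v2(speed):
    """Hypothetical rule after the update: speed above 1.0 is now penalized."""
    return 7.0 - 3.0 * max(0.0, speed - 1.0)

def folk_theory_slope(payout_fn, n_observations=200, seed=1):
    """The worker's folk theory reduced to one number: the apparent payout gain
    per unit of speed, estimated by least squares from noisy observations."""
    rng = random.Random(seed)
    speeds, pays = [], []
    for _ in range(n_observations):
        speed = 2.0 * rng.random()                           # working speeds between 0 and 2
        speeds.append(speed)
        pays.append(payout_fn(speed) + rng.gauss(0.0, 0.5))  # observed payout with noise
    mean_s = sum(speeds) / n_observations
    mean_p = sum(pays) / n_observations
    covariance = sum((s - mean_s) * (p - mean_p) for s, p in zip(speeds, pays))
    variance = sum((s - mean_s) ** 2 for s in speeds)
    return covariance / variance

if __name__ == "__main__":
    print(f"Folk theory learned under v1: {folk_theory_slope(payout_v1):+.1f} pay per unit of speed")
    print(f"Same behavior measured under v2: {folk_theory_slope(payout_v2):+.1f} pay per unit of speed")
```

The estimate was locally valid, which is exactly what a topographical theory delivers; only knowledge of the rule's form, the structural schema, survives the update.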
The HumanX conference, based on all available reporting, delivered topographical advice. Workers were told to use specific tools, adopt specific workflows, and demonstrate specific outputs. None of this is useless. But Hatano and Inagaki (1986) distinguish routine expertise from adaptive expertise precisely because procedural competence fails when the procedure's context changes. AI tool landscapes change continuously. A worker trained to use Copilot in its current configuration is acquiring routine expertise in a domain where the configuration will be revised, deprecated, or replaced within 18 months. The Indonesian gig workers are not an edge case; they are the leading indicator.
What the Conference Should Have Said
Rahman (2021) describes platform governance as an "invisible cage": the rules that shape worker behavior are structural and largely opaque, yet workers are evaluated as if their outcomes reflect individual choices. The HumanX framing reproduces this logic at a macroeconomic scale. By attributing future employment outcomes to individual upskilling decisions, conference speakers transferred responsibility for a structural coordination problem onto individual workers, without acknowledging that the structural features of AI-mediated work are not yet stable enough to train against procedurally.
The organizationally honest message would have been something like this: the competencies that matter in AI-mediated work environments are not yet fully specifiable, because those environments are still generating the conditions under which competence becomes visible. General schema induction, teaching workers how to read algorithmic constraints rather than how to use any particular tool, is the more defensible investment, precisely because it produces transfer across the platform changes that are already coming. Anything else is procedural training for a procedure that will not survive the next product update cycle.
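To sketch what that difference buys, reusing the hypothetical payout rules from the earlier example, and again under assumptions of my own rather than anything reported from the conference: the worker who freezes the old procedure keeps losing pay after the update, while the worker who re-reads the constraint from fresh feedback recovers.

```python
def payout_v1(speed):
    """Hypothetical payout rule before the update: faster pays more."""
    return 5.0 + 2.0 * speed

def payout_v2(speed):
    """Hypothetical rule after the update: speed above 1.0 is penalized."""
    return 7.0 - 3.0 * max(0.0, speed - 1.0)

def best_speed(payout_fn, candidate_speeds):
    """Read the current constraint directly: probe candidate speeds and keep the best payer."""
    return max(candidate_speeds, key=payout_fn)

if __name__ == "__main__":
    candidates = [0.5 + 0.25 * i for i in range(7)]    # speeds from 0.5 to 2.0

    frozen = best_speed(payout_v1, candidates)         # procedure learned once, never revisited
    print(f"Procedural worker stays at speed {frozen:.2f}: "
          f"pay drops from {payout_v1(frozen):.1f} to {payout_v2(frozen):.1f} after the update")

    adapted = best_speed(payout_v2, candidates)        # constraint re-read after the update
    print(f"Schema-minded worker re-probes and slows to speed {adapted:.2f}: "
          f"pay recovers to {payout_v2(adapted):.1f}")
```

Nothing about the sketch depends on the particular numbers; it only illustrates why training against the current procedure underperforms training to re-read the constraint once the procedure's context shifts.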
References
Gagrain, A., Naab, T., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan. Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt