AI Intensification and the Competence Expansion Problem: When Algorithmic Systems Redefine Job Boundaries
A new study reports that AI workplace tools are expanding employee tasks beyond their formal job descriptions while simultaneously blurring boundaries between work and personal time. This finding represents more than the familiar "scope creep" problem. It reveals a structural mechanism through which algorithmic systems invert traditional competence assumptions in organizational coordination.
The Competence Boundary Inversion
Traditional job design assumes relatively stable competence boundaries. Organizations hire for defined roles, employees develop expertise within those boundaries, and performance evaluation measures execution within scope. AI systems disrupt this model by continuously expanding what constitutes "the job." This is not merely intensification of existing work. It represents a fundamental shift in how competence requirements are determined and communicated.
The mechanism operates through what Kellogg, Valentine, and Christin (2020) identify as algorithmic work allocation: systems that dynamically reassign tasks based on real-time optimization rather than fixed role definitions. When an AI tool suggests a new task or automates part of a workflow, it implicitly redefines competence expectations. The employee must either develop new capabilities or risk appearing less productive relative to peers who adapt faster.
This creates what I term the competence expansion trap. Unlike traditional skill development, where organizations provide training for new responsibilities, AI-driven task expansion assumes workers will self-develop necessary competencies. The algorithmic system treats expanded capability as immediately available rather than requiring cultivation. This assumption becomes self-fulfilling: workers who cannot rapidly adapt appear less competent, reinforcing algorithmic allocation toward those who can.
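The feedback loop described here can be sketched as a toy simulation. This is purely illustrative, not drawn from the study: the function name `simulate_allocation`, the adaptation rates, and the noisy productivity signal are all my assumptions.

```python
import random

def simulate_allocation(adapt_rates, rounds=300, noise=0.2, seed=7):
    """Illustrative sketch (hypothetical parameters): an allocator routes
    each task to whoever currently looks most productive, while capability
    grows only through assignment (learning by doing)."""
    rng = random.Random(seed)
    capability = {w: 1.0 for w in adapt_rates}   # true capability, starts equal
    assigned = {w: 0 for w in adapt_rates}       # cumulative tasks allocated
    for _ in range(rounds):
        # the allocator sees capability only through a noisy productivity signal
        signal = {w: capability[w] + rng.gauss(0.0, noise) for w in adapt_rates}
        chosen = max(signal, key=signal.get)
        assigned[chosen] += 1
        # capability grows only where tasks land, at the worker's own rate
        capability[chosen] += adapt_rates[chosen]
    return assigned, capability

# two workers who differ only in how quickly they can self-develop new skills
assigned, capability = simulate_allocation(
    {"fast_adapter": 0.10, "slow_adapter": 0.01}
)
```

Early on, noise spreads tasks across both workers; once the fast adapter's capability pulls ahead, allocation locks onto them and the gap compounds, mirroring the self-fulfilling dynamic described above.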
The Awareness Without Structure Problem
The study's finding that AI blurs work-life boundaries points to a deeper coordination failure. Workers are, by and large, aware that AI tools are expanding their responsibilities. That awareness, however, does not translate into actionable knowledge about how to manage the expansion or negotiate boundaries with algorithmic systems.
This pattern mirrors the awareness-capability gap documented in platform work research (Schor et al., 2020). Knowing that an algorithm shapes your work allocation differs fundamentally from understanding the structural principles governing that allocation. Workers develop folk theories about AI behavior ("it rewards fast responses," "it penalizes breaks") without grasping the actual optimization logic.
The critical failure occurs at what I have called the application layer: the interface where human workers must coordinate with algorithmic systems. Traditional organizations provide explicit coordination mechanisms (reporting structures, role definitions, communication protocols). Algorithmic systems assume coordination will emerge endogenously through worker adaptation to system outputs. This assumption fails when workers lack structural schemas for the coordination logic itself.
Why Training Cannot Solve Structural Problems
Organizations facing this issue typically respond with procedural training: teaching employees how to use specific AI tools or manage particular expanded responsibilities. This approach treats the problem as a capability deficit rather than a coordination structure deficit.
The distinction matters because procedural training develops what Hatano and Inagaki (1986) term routine expertise: the ability to execute known procedures efficiently. Routine expertise fails when the AI system changes its allocation logic, introduces new task categories, or operates in novel contexts. Workers trained on specific procedures cannot transfer that knowledge to structurally similar but procedurally different situations.
What organizations need instead is schema induction: training that builds understanding of the structural principles governing AI-driven task allocation. This means teaching workers to recognize optimization patterns, understand feedback mechanisms, and identify the boundaries within which algorithmic systems operate. Schema-based understanding enables adaptive expertise, allowing workers to respond effectively to novel AI behaviors without requiring new procedural training for each variation.
The Organizational Implication
The intensification finding suggests that current AI deployment strategies externalize coordination costs to workers. Organizations gain efficiency through algorithmic optimization while workers absorb the complexity of managing expanded, shifting role boundaries. This externalization remains invisible in traditional productivity metrics because the AI system treats expanded worker capacity as a constant rather than a variable requiring organizational investment.
The solution requires recognizing AI deployment as a coordination design problem, not merely a technology adoption problem. Organizations must build explicit structures for managing the application layer where workers interface with algorithmic systems. This includes making optimization logic transparent, providing schema-based training on system principles, and creating mechanisms for workers to negotiate competence boundaries with algorithmic allocation systems.
Without these structures, AI intensification will continue to expand worker responsibilities while eroding the organizational support systems that traditionally enabled competence development. The awareness-capability gap will widen, and the benefits of AI deployment will accrue asymmetrically to organizations while costs concentrate on workers least equipped to manage them.
Roger Hunt