Block's AI Layoffs and the Competence Attribution Problem
What Jack Dorsey Actually Said
This week, Block announced layoffs affecting approximately 4,000 workers. In his letter to employees, CEO Jack Dorsey framed the decision explicitly around AI, arguing that AI tools are fundamentally changing "what it means to build and run a company." Tech analysts have pushed back, pointing to pandemic-era overhiring and overly ambitious automation promises as the more proximate causes. Both readings are probably partly right. But the more interesting question is not what caused the layoffs. It is what the framing of the announcement reveals about how organizations currently understand the relationship between AI capability and human competence.
The Attribution Problem in Organizational AI Adoption
When a CEO attributes workforce reduction to AI capability, they are making an implicit claim about substitution: that AI systems can now perform tasks previously requiring human labor. This claim is doing a lot of unexamined work. It conflates tool availability with tool-mediated performance, which are categorically different things. Having access to an AI system that can generate code does not automatically mean that the organization has developed the organizational competence to deploy that system effectively. Access and capability are not the same variable.
This distinction maps directly onto what Kellogg, Valentine, and Christin (2020) identified as the central problem in algorithmically mediated work environments: workers with identical tool access routinely produce dramatically different outcomes. The same asymmetry applies at the organizational level. Two firms with identical AI infrastructure will not produce identical efficiency gains. The variance in outcomes cannot be explained by the tools alone; it requires understanding the coordination mechanisms through which organizations absorb and deploy those tools. Dorsey's framing, like most public AI-layoff narratives, skips over this entirely.
Folk Theories at the Executive Level
The ALC framework distinguishes between folk theories and structural schemas. Folk theories are individual impressions about how a system works, often derived from surface-level observation. Structural schemas are accurate representations of the underlying logic governing system behavior. Most workers develop folk theories about algorithmic systems rather than schemas, because schemas require deliberate induction effort and are harder to acquire from experience alone (Gentner, 1983).
What is interesting about the Block announcement is that the same distinction appears to operate at the executive level. Dorsey's framing reads as a folk theory of AI capability: AI tools are changing things, therefore headcount must fall. This narrative skips the structural question of how AI actually integrates into organizational workflows, where the friction points are, what new coordination costs emerge, and whether the efficiency gains projected from tool access are realistically achievable at scale. The McKinsey Global Institute currently estimates that 57% of U.S. work hours could be automated within five years, but that projection itself depends on a set of organizational assumptions about deployment competence that are rarely made explicit.
The Reallocation Problem and the Time Dividend
The McKinsey framing of the "AI Time Dividend," the idea that time freed by automation can be captured as organizational value, surfaces the same underlying problem from a different angle. Capturing freed time requires that organizations know what to do with it. That is an organizational competence question, not a technology question. Time reallocation is not passive. It requires active coordination, schema-level understanding of which tasks remain distinctly human, and adaptive expertise in reconfiguring workflows (Hatano and Inagaki, 1986). Routine expertise, the kind that follows procedures optimized for current conditions, will not be sufficient when the task environment itself is reorganizing.
Block's layoffs may or may not be primarily AI-driven. But the public framing adopted by Dorsey reflects a broader pattern in corporate AI communication: organizations are more comfortable asserting AI capability than demonstrating organizational readiness. Attributing workforce reduction to AI is strategically useful because it positions the decision as forward-looking and technologically sophisticated. It is less useful as an accurate description of what is actually happening inside the firm.
Why the Framing Gap Matters
If organizations systematically overstate AI substitution capability while understating the organizational competence required to realize that substitution, they create a predictable gap between projected and realized efficiency gains. Rahman (2021) described how algorithmic systems generate invisible constraints on worker behavior, constraints that are real but not legible from outside the system. A parallel dynamic may be emerging at the organizational level: executives are narrating AI transformation in terms of substitution and reduction, while the actual coordination challenges of AI integration remain largely invisible in public discourse. The workers being laid off are, in many cases, being displaced not by functional AI systems but by organizational confidence in AI systems that have not yet been tested at operational scale. That is a meaningful distinction, and one that organizational theory is better positioned to make than quarterly earnings calls.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. *Cognitive Science, 7*(2), 155-170.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), *Child development and education in Japan* (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. *Academy of Management Annals, 14*(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. *Administrative Science Quarterly, 66*(4), 945-988.
Roger Hunt