Layoff Manifestos and the Folk Theory Problem: What Block and Atlassian Are Actually Saying About AI
The Shift in Justification Language
Something specific happened in recent weeks. Executives at Block and Atlassian did not announce layoffs by citing macroeconomic headwinds or competitive restructuring. They cited AI directly, framing workforce reductions as a structural consequence of what AI now enables their organizations to do. This is a meaningful departure from prior communication patterns, and it deserves more careful analysis than it has received. The change is not primarily about honesty or transparency. It is about what these organizations believe they understand about AI's productive capacity, and that belief structure reveals a deeper problem in how firms reason about algorithmic systems.
Folk Theories at the Organizational Level
In algorithmic literacy research, a folk theory refers to an individual's working impression of how an algorithmic system behaves, formed through accumulated experience rather than structural understanding (Gragain, Naab, & Grub, 2024). Folk theories are not random. They are often locally accurate and instrumentally useful. The problem arises when individuals or organizations treat their folk theory as a structural schema and make decisions that depend on properties the system does not actually have.
Block and Atlassian's layoff communications suggest that their executive teams hold a specific folk theory: that AI can now absorb a definable portion of knowledge-work capacity, and that this absorption is stable enough to justify permanent headcount reductions. This is not a structural claim. It is an impression formed from current performance benchmarks, internal pilot programs, and competitive signaling. The distinction matters because workforce restructuring of this kind cannot be reversed on short timescales. If the folk theory is wrong, or if it holds only under current conditions that will not persist, these organizations have made an irreversible commitment based on a belief they have not validated structurally.
The Awareness-Capability Gap, Reversed
The algorithmic literacy literature documents a recurring pattern in which workers become aware that algorithms govern their outcomes but cannot translate that awareness into improved performance (Kellogg, Valentine, & Christin, 2020). What Block and Atlassian represent is an organizational-level inversion of this problem. These firms appear confident that they understand what AI can do, and they are acting on that confidence. But confidence in current capability is not the same as structural understanding of capability boundaries.
Hatano and Inagaki (1986) draw a useful distinction between routine expertise and adaptive expertise. Routine expertise performs well within the conditions under which it was developed. Adaptive expertise transfers when conditions change, because it is grounded in principled understanding rather than pattern matching. An organization that restructures around AI's current routine capabilities is demonstrating routine reasoning about a system that is itself changing rapidly. This is precisely the context where routine reasoning fails.
What the Manifesto Format Signals
The framing of these layoff announcements as AI-era manifestos, as a recent news analysis described them, is not incidental. A manifesto is a declaration of structural belief. By using this format, executives are not just communicating a decision. They are signaling that they have resolved the uncertainty about AI's organizational role. That resolution is premature, and the communication format itself encodes the folk theory problem. When organizations declare that AI is reshaping work in ways that justify permanent restructuring, they are treating a current empirical snapshot as a structural invariant.
Rahman (2021) describes how algorithmic systems create invisible constraints that workers navigate without full visibility into the system's logic. Organizations making permanent workforce decisions based on AI productivity assessments are in a structurally analogous position. They are navigating a system whose constraints they have observed but not fully mapped. The manifesto format obscures this epistemic limitation rather than acknowledging it.
The Organizational Implication
The specific news event here is not just that companies are laying off workers and blaming AI. It is that the justification structure has changed in a way that forecloses organizational learning. When layoffs are attributed to economic cycles, the implicit assumption is that conditions may change and adaptation is possible. When layoffs are attributed to a structural transformation in what AI enables, the organization has committed to a fixed interpretation of its environment. That commitment will be difficult to revise if the interpretation proves incomplete, which, given the current pace of AI development, is a near-certainty on a multi-year horizon. The manifesto format is, in organizational terms, a schema lock.
References
Gragain, A., Naab, T. K., & Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media & Society.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan. Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Roger Hunt