The $285 Billion Evaporation and the Competence Trap in Enterprise Software
What Happened and Why It Matters
On February 3rd, 2026, approximately $285 billion in market capitalization disappeared from global software stocks in a single trading session. Atlassian dropped 35% in one week. Intuit fell 34%. Salesforce hit a 52-week low. Oracle's valuation nearly halved from its recent peak. The proximate cause was not a recession signal or an interest rate shock - it was the market's re-evaluation of what enterprise software companies actually sell, prompted by accelerating AI capability announcements from firms like Anthropic. When IBM's stock shed $30 billion on a single afternoon following an Anthropic model release, investors were not panicking irrationally. They were updating a prior about where durable value lives in software-mediated work.
The Competence Assumption Hidden in Software Pricing
Enterprise software companies have long operated on an implicit organizational theory: that coordination problems are solved by giving workers better tools. The logic is straightforward enough - deploy Salesforce, and your sales organization coordinates around a shared data layer; deploy Intuit's products, and financial workflows become legible across roles. What this model assumes, however, is that the primary bottleneck to organizational performance is access to structured information. The $285 billion repricing event suggests the market has started questioning that assumption, because AI systems are now capable of handling the procedural layer that most enterprise software was designed to scaffold.
This connects to something I find underappreciated in how organizations think about platform adoption. The ALC framework I am developing argues that platform coordination inverts classical assumptions about competence: platforms do not assume workers arrive with pre-existing ability to navigate algorithmically mediated environments (Kellogg, Valentine, & Christin, 2020). Enterprise software, by contrast, has always assumed the opposite. It assumes workers know what they need to do and provides a structured interface for doing it faster. AI does not just accelerate that interface - it threatens to absorb the procedural layer entirely, leaving organizations to confront a question they have rarely had to answer explicitly: what is the human competence that remains after the procedures are automated?
Routine Expertise Cannot Survive This Transition
Hatano and Inagaki (1986) drew a distinction between routine expertise and adaptive expertise that is directly relevant here. Routine expertise is the ability to execute established procedures with high reliability. Adaptive expertise is the ability to respond effectively when the procedures themselves become inappropriate or obsolete. Enterprise software has, for decades, been an industrial-scale instrument for producing routine expertise. It trained workers to follow defined workflows, input structured data, and interpret standardized outputs. The workers who became most valuable in that environment were those who internalized the procedural logic of the tool.
What the February market repricing reveals is that investors now believe AI will commoditize routine expertise faster than enterprise software vendors can adapt their pricing models. This is not a prediction about a distant future - Atlassian losing a third of its value in a week reflects a current assessment about revenue sustainability, not a speculative worry about 2035. The organizations that built their coordination infrastructure around procedural scaffolding are now exposed in a way that is structurally similar to what Schor et al. (2020) described in platform labor markets, where dependence on a particular technical layer creates acute vulnerability when that layer shifts.
The Transfer Problem Nobody Is Discussing
Here is what I think is missing from most analysis of this market event. The conversation has focused almost entirely on which software companies will survive and which AI companies will capture their revenue. The organizational question - what happens to the workers and firms whose coordination competencies were built around tools that are now being repriced to zero - is receiving almost no attention.
Gentner's (1983) structure-mapping theory predicts that transfer of learning is driven by structural similarity, not surface similarity. Workers who learned Salesforce well learned a surface-level topography: where to click, how to run reports, which fields matter. They did not necessarily develop a schema for the underlying structure of customer relationship management as a coordination problem. When the tool changes, the topography is useless and the schema - if it was ever developed - is what survives. The ALC framework calls this the awareness-capability gap: knowing how to navigate a specific platform is not the same as understanding the structural logic the platform was built to serve (Hancock, Naaman, & Levy, 2020).
What Organizations Should Take From This
The $285 billion event is less interesting as a market story than as an organizational diagnostic. It reveals which firms built their coordination capabilities on procedural scaffolding and which built them on structural understanding. For organizations now deciding how to respond, the temptation will be to adopt AI tools through the same procedural logic - find the new platform, train workers on its topography, and move on. That approach will reproduce exactly the vulnerability that just erased nearly three hundred billion dollars in equity value. The more durable response requires asking what structural competencies workers need to coordinate effectively regardless of which tool mediates that coordination - and investing in schema development rather than procedural retraining. That is a harder intervention to design and slower to show results, which is precisely why most organizations will not do it until the next repricing event forces the question again.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Roger Hunt