Demis Hassabis Is Right About AGI Gaps, But He's Describing the Wrong Problem
This week, DeepMind CEO Demis Hassabis identified three domains where current AI systems fall short of general intelligence: continuous learning, long-term planning, and consistency. The framing is technically accurate. But I think the more consequential observation is being missed entirely, and it has less to do with what AI cannot do than with how organizations are misreading what AI already does.
The Awareness-Capability Gap Shows Up in the C-Suite, Too
Hassabis is describing a capability frontier from an engineering perspective. That is useful. What organizational theory needs to grapple with is something different: the gap between awareness of AI limitations and the ability to act on that awareness effectively.
This is precisely the dynamic my dissertation research addresses through the Algorithmic Literacy Coordination (ALC) framework. Kellogg, Valentine, and Christin (2020) documented how workers in algorithmically mediated environments develop awareness of algorithmic constraints without translating that awareness into improved performance. The same pattern appears to be emerging at the governance level. Executives and boards increasingly know that AI systems are inconsistent, that they cannot plan across long horizons, and that they do not learn continuously. What they lack is a structural schema for what to do with that knowledge.
Knowing the topology of a problem differs from knowing how to navigate its topography. Hassabis’s three limitations are topological observations. The organizational question is topographic: given these constraints, how do you structure workflows, accountability chains, and decision rights around systems that are brittle in precisely these ways?
Identic AI and the Coordination Problem
Don Tapscott’s framing of “identic AI,” reported this week alongside the Hassabis story, pushes in an interesting direction. Tapscott argues that AI agents are not just efficiency tools but restructuring agents that change how organizations work at a fundamental level. I find this claim more sociologically interesting than technically precise.
What Tapscott is gesturing at is a coordination problem. When AI agents are assigned identities, roles, and decision-making authority within organizational hierarchies, you introduce a new class of actor whose behavior is neither fully predictable nor fully explainable. Rahman (2021) showed how algorithmic control in platform firms creates what he calls an “invisible cage,” where workers are constrained by systems they cannot fully observe or interpret. The identic AI framing suggests we are about to extend that cage upward into management layers.
This matters because the three limitations Hassabis describes (the lack of continuous learning, long-term planning, and consistency) are precisely the ones that make hierarchical delegation to AI agents organizationally dangerous. Hierarchies depend on consistent agents who can update their behavior in response to feedback and plan across time horizons longer than a single interaction. Current AI systems fail all three conditions.
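The delegation argument can be restated as a precondition check. The sketch below is purely illustrative: the property names are my labels for Hassabis's three conditions, not outputs of any real evaluation suite or agent framework.

```python
# Hypothetical sketch: hierarchical delegation to an AI agent restated
# as a precondition check. Property names are illustrative labels for
# Hassabis's three conditions, not measurements from a real benchmark.

REQUIRED_FOR_DELEGATION = {
    "continuous_learning",    # updates behavior in response to feedback
    "long_horizon_planning",  # plans beyond a single interaction
    "consistency",            # same inputs yield compatible outputs
}

def delegation_gaps(agent_properties: set) -> set:
    """Return the required conditions an agent fails to satisfy.
    An empty set would mean delegation is at least defensible."""
    return REQUIRED_FOR_DELEGATION - agent_properties

# On Hassabis's assessment, current systems satisfy none of the three.
print(sorted(delegation_gaps(set())))
# ['consistency', 'continuous_learning', 'long_horizon_planning']
```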
Routine Versus Adaptive Expertise at the Organizational Level
Hatano and Inagaki (1986) distinguished between routine expertise, which performs well in familiar conditions but fails when conditions shift, and adaptive expertise, which applies principles flexibly across novel contexts. I want to apply this distinction to how organizations are currently deploying AI.
Most enterprise AI deployment is optimized for routine expertise. A system is trained on specific workflows, evaluated on narrow benchmarks, and integrated into processes where its brittleness is managed through human oversight. This works until conditions change. Hassabis’s point about continuous learning is essentially a statement that current AI systems are locked into routine expertise by design. They cannot update their schemas in real time.
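To make the oversight pattern concrete, here is a minimal sketch of how brittleness is typically managed in routine-expertise deployments: anything that falls outside the conditions the system was built for gets routed to a human. Every name here (`score_familiarity`, the threshold, the case types) is a hypothetical stand-in, not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop wrapper around a routine-expertise AI
# system. No real model API is assumed; names and values are placeholders.

@dataclass
class Decision:
    output: str
    handled_by: str  # "model" or "human"

FAMILIARITY_THRESHOLD = 0.8  # assumed cutoff, tuned per deployment

def score_familiarity(case: dict) -> float:
    """Stand-in for an out-of-distribution check: how closely does this
    case resemble the conditions the system was trained on?"""
    return 1.0 if case.get("type") in {"invoice", "refund"} else 0.2

def run_model(case: dict) -> str:
    """Stand-in for the deployed model's routine-expertise output."""
    return f"auto-processed {case.get('type', 'unknown')} case"

def escalate_to_human(case: dict) -> str:
    """Stand-in for the human review queue."""
    return f"queued {case.get('type', 'unknown')} case for review"

def handle(case: dict) -> Decision:
    # The oversight layer, not the model, decides when conditions have
    # shifted outside the system's routine-expertise envelope.
    if score_familiarity(case) >= FAMILIARITY_THRESHOLD:
        return Decision(run_model(case), handled_by="model")
    return Decision(escalate_to_human(case), handled_by="human")

if __name__ == "__main__":
    print(handle({"type": "invoice"}))  # familiar: model handles it
    print(handle({"type": "novel"}))    # shifted conditions: human handles it
```

The design point is that the escalation logic lives outside the model: the system cannot update its own schemas, so the organization has to supply the adaptivity around it.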
The organizational implication is not simply that AI has limitations. It is that organizations structured around AI agents with routine expertise profiles will be systematically unprepared for the novel conditions those agents will inevitably encounter. Schor et al. (2020) documented how platform dependence creates precarity precisely because the systems workers depend on are optimized for average conditions, not edge cases. The same dynamic scales to enterprise AI governance.
What This Means for How We Study Organizations
I am raising this not as a cautionary tale about AI risk but as a research design problem. If AI agents are becoming organizational actors, then organizational theory needs frameworks for studying them as such. The ALC framework offers one entry point by asking how coordination mechanisms develop when competence cannot be assumed in advance. Hassabis’s comments suggest that for the foreseeable future, AI competence in the three domains he identified cannot be assumed. The organizational question is how to coordinate around that fact, not how to wait for it to be resolved.
Hancock, Naaman, and Levy (2020) framed AI-mediated communication as a distinct register requiring new theoretical tools. I think AI-mediated coordination deserves the same treatment.
References
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, K. S. (2021). The invisible cage: Workers’ reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Roger Hunt