Mark Cuban's CEO Warning and the Competence Verification Problem in AI Governance
Mark Cuban made a pointed claim this week: companies whose CEOs do not understand artificial intelligence may not survive the current transition, and executives who cannot demonstrate that understanding should "start to think about another job." The statement is worth taking seriously, not because Cuban's authority settles the question, but because it contains a genuine theoretical puzzle that the corporate governance literature has not adequately addressed. What, precisely, does it mean for a CEO to "understand" AI, and how would a board of directors verify that understanding before organizational damage accumulates?
The Verification Gap in Executive AI Competence
Cuban's warning is structurally ambiguous. It conflates two very different competence states. The first is awareness: knowing that AI systems exist, that they affect competitive dynamics, and that ignoring them carries risk. The second is adaptive expertise: understanding the structural logic of how these systems operate well enough to make sound resource allocation decisions, evaluate vendor claims, and anticipate second-order organizational effects. Research on algorithmic literacy consistently finds that awareness and capability are not the same thing (Kellogg, Valentine, & Christin, 2020): workers and managers who know that algorithms govern their environment routinely fail to translate that awareness into improved decision-making. Cuban's warning, taken at face value, does not distinguish between these two states, which means a CEO could satisfy the surface requirement by attending AI briefings and using the right vocabulary without possessing any genuine structural understanding.
Why Boards Cannot Easily Solve This
The governance problem here is not trivial. Boards are charged with evaluating CEO competence, but AI competence is particularly difficult to assess from the outside. Hatano and Inagaki (1986) distinguish routine expertise - the ability to execute known procedures correctly - from adaptive expertise, which involves understanding principles well enough to respond to novel conditions. A CEO who has been coached on AI talking points possesses something closer to routine expertise: a set of rehearsed responses that perform competence in familiar contexts. A board that evaluates AI literacy through earnings call transcripts or presentation polish will systematically mistake the first state for the second. This is not a new problem in corporate governance, but AI accelerates it because the domain is evolving faster than any procedural script can track.
Sundar (2020) frames a related concern about how people develop what he calls "machine heuristics" - simplified mental models of how AI systems behave. These heuristics feel like understanding and function socially as understanding, but they are closer to what Gentner (1983) would call surface-feature mappings: associations built on visible similarities rather than structural relationships. A CEO whose AI "understanding" consists of machine heuristics will make systematically incorrect predictions whenever the AI context shifts, and in the current environment it shifts constantly.
The Endogenous Competence Problem at the Executive Level
The ALC framework I develop in my dissertation research argues that platform coordination inverts the classical assumption of ex-ante competence: platforms do not assume workers arrive capable, because capability develops through participation in the algorithmically mediated environment itself. Something analogous is happening at the executive level with AI. Organizations cannot simply hire for AI-competent CEOs from a pre-existing talent pool, because the competence required is still being defined by the technology's development. This means the relevant question is not whether a current CEO "understands AI" at some fixed threshold, but whether the organization has structures that support ongoing competence development at the leadership level as the technology changes.
Cuban's framing, despite its intuitive appeal, potentially pushes firms toward a replacement logic - find someone who knows AI - when the more defensible organizational response is a development logic: build institutional structures that generate and sustain adaptive expertise over time. Rahman (2021) documents how algorithmic governance creates what he terms an "invisible cage" of constraints that workers cannot fully see. Executive decision-making about AI investments faces an analogous constraint: leaders are making structural commitments inside systems they cannot fully observe, which makes static competence assessments of limited value.
What the Warning Actually Reveals
Cuban's statement is most useful not as a prescription but as a diagnostic signal. The fact that a credible public voice is now framing AI incompetence as an existential organizational threat suggests that the legitimacy costs of visible AI ignorance at the CEO level have crossed a threshold. Whether AI-literate CEOs actually produce better firm outcomes remains an unsettled empirical question. What has shifted is the governance environment: boards that fail to address executive AI competence now bear reputational and fiduciary exposure that they did not bear two years ago. That shift in institutional context, independent of any underlying performance relationship, is itself an organizational fact worth analyzing carefully.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. *Cognitive Science, 7*(2), 155-170.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), *Child development and education in Japan* (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. *Academy of Management Annals, 14*(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. *Administrative Science Quarterly, 66*(4), 945-988.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. *Journal of Computer-Mediated Communication, 25*(1), 74-88.
Roger Hunt