Indeed's Anti-Leaderboard Decision Reveals a Structural Tension in AI Governance

The Announcement That Deserves More Attention

This week, Indeed's Chief Information Officer Anthony Moisant made a comment that struck me as more theoretically significant than the coverage it received suggests. In an interview with Business Insider, Moisant explained that Indeed deliberately tracks employee AI token usage but keeps that data in the background, explicitly rejecting the competitive leaderboard model that platforms like Tokenmaxxing have promoted. The framing was straightforward: visibility without gamification. But the organizational logic underneath that decision is considerably more complex, and it connects to a structural problem that coordination theory has not fully addressed.

What a Leaderboard Actually Does

To understand why Indeed's decision is interesting, it helps to be precise about what a leaderboard accomplishes organizationally. A leaderboard converts a latent distribution of performance into a visible, rankable signal. This is not neutral. Rank-ordering workers on AI token consumption treats token use as a proxy for competence, which assumes that the quantity of AI interaction maps reliably onto productive output. That assumption is almost certainly wrong, and Moisant's team appears to know it. The problem is that token volume is a behavioral trace, not a competence measure. High token consumption could reflect sophisticated iterative prompting, or it could reflect an employee who cannot get the tool to produce usable output and keeps trying. These are structurally opposite situations that produce identical leaderboard scores.
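
A toy simulation makes the failure mode concrete. Everything below is invented for illustration - the traces, token counts, and acceptance flags are assumptions, not anything drawn from Indeed's actual telemetry - but it shows how two structurally opposite usage patterns collapse into a single leaderboard score.

```python
# Hypothetical illustration: two usage patterns, one leaderboard score.
# All traces and numbers are invented; real telemetry would differ.

from dataclasses import dataclass

@dataclass
class Interaction:
    tokens: int      # tokens consumed in one prompt/response exchange
    accepted: bool   # did the output survive review and get used?

# Employee A: a few deliberate iterations that converge on usable output.
employee_a = [
    Interaction(tokens=4_000, accepted=False),  # exploratory draft
    Interaction(tokens=3_000, accepted=False),  # refinement pass
    Interaction(tokens=3_000, accepted=True),   # accepted final
]

# Employee B: repeated failed attempts that never produce usable output.
employee_b = [Interaction(tokens=1_000, accepted=False) for _ in range(10)]

def leaderboard_score(trace):
    """What a token leaderboard sees: raw consumption volume."""
    return sum(i.tokens for i in trace)

def acceptance_rate(trace):
    """What it discards: whether the work actually landed."""
    return sum(i.accepted for i in trace) / len(trace)

for name, trace in [("A", employee_a), ("B", employee_b)]:
    print(name, leaderboard_score(trace), f"{acceptance_rate(trace):.0%}")
# A 10000 33%
# B 10000 0%
```

Both employees post 10,000 tokens. The ranking is identical; the underlying situations could hardly be more different.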

The Awareness-Capability Confusion at Scale

This is where the decision connects to a persistent problem in algorithmic literacy research. Kellogg, Valentine, and Christin (2020) documented extensively that workers in algorithmically mediated environments develop awareness of how systems function without that awareness translating into improved performance outcomes. The gap between knowing an algorithm exists and knowing how to respond to it effectively is not a knowledge gap - it is a schema gap. Workers hold folk theories rather than structural models of how the system operates, and folk theories produce locally adaptive behavior that fails under novel conditions (Hatano and Inagaki, 1986).

A token leaderboard would accelerate this problem rather than solve it. If employees are ranked on consumption volume, they receive a behavioral signal - use the tool more - without receiving any structural information about what effective use actually looks like. This is the organizational equivalent of telling a student to read more pages without clarifying what comprehension means. You will get more page-turning. You will not necessarily get more understanding. Indeed's decision to suppress the leaderboard is, in effect, a decision not to generate a false competence signal at scale.

The Governance Problem This Creates

Removing the leaderboard does not resolve the underlying measurement challenge - it relocates it. If token data exists in the background and informs managerial decisions, the question becomes: what interpretive framework do managers use when they examine that data? Without a public schema for what good AI use looks like, organizations risk replacing one bad proxy (ranked volume) with a collection of idiosyncratic managerial folk theories. Different managers will read identical token data differently depending on their own mental models of what AI-augmented work should produce. This is a coordination problem, not simply a measurement problem.
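
To see the coordination problem in miniature, consider two hypothetical managers applying different folk theories to the same background numbers. The thresholds and verdict labels below are invented; the point is only that the raw figure underdetermines the judgment.

```python
# Hypothetical sketch: identical background data, divergent managerial
# readings. Thresholds and verdict labels are invented for illustration.

monthly_tokens = {"employee_1": 250_000, "employee_2": 20_000}

def manager_x(tokens):
    # Folk theory: heavy use signals engagement with the tooling.
    return "leaning in" if tokens > 100_000 else "lagging"

def manager_y(tokens):
    # Folk theory: heavy use signals flailing or over-reliance.
    return "over-reliant" if tokens > 100_000 else "self-sufficient"

for employee, tokens in monthly_tokens.items():
    print(employee, "->", manager_x(tokens), "|", manager_y(tokens))
# employee_1 -> leaning in | over-reliant
# employee_2 -> lagging | self-sufficient
```

Neither reading can be checked against the other, because neither is anchored to a shared model of what the number means.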

Sundar (2020) argues that the rise of machine agency fundamentally changes how humans attribute competence and responsibility in human-machine interactions. When the machine is doing part of the cognitive work, traditional performance attribution becomes structurally ambiguous. Organizations that suppress leaderboards without replacing them with shared structural schemas are essentially deciding to leave that ambiguity unresolved. This may be better than resolving it badly, but it is not a stable long-term governance position.

What a Structural Schema Would Look Like

Gentner's (1983) structure-mapping theory offers a useful frame here. Competence transfer occurs when learners abstract the relational structure of a domain rather than memorizing surface features. Applied to AI tool governance, this means organizations need to develop and communicate models of what effective human-AI interaction looks like at the process level - not how many tokens employees consume, but what kinds of iterative reasoning loops produce reliable outputs, and under what conditions AI-generated content requires human verification before organizational use. This is schema induction work, and most organizations are nowhere near doing it systematically. Indeed's CIO has identified the right problem by rejecting the leaderboard. The harder question, which the interview does not answer, is what the organization puts in its place.
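
As a purely illustrative sketch of what might go in its place, suppose interaction logs recorded iteration chains, acceptance, and verification events - an assumption about instrumentation, not anything Indeed has described publicly. Process-level signals could then stand in for raw volume:

```python
# Illustrative process-level signals, assuming (hypothetically) that
# interaction logs record iteration chains and verification events.
# The log schema and metric names are invented, not an established
# standard or anything Indeed has described publicly.

from dataclasses import dataclass

@dataclass
class Session:
    iterations: int        # prompt/response rounds before stopping
    accepted: bool         # output adopted into organizational work
    human_verified: bool   # output checked by a person before use

def iterations_per_accepted_output(sessions):
    """Process signal: how much looping does usable output require?"""
    accepted = [s for s in sessions if s.accepted]
    if not accepted:
        return float("inf")  # plenty of activity, nothing usable
    return sum(s.iterations for s in sessions) / len(accepted)

def verification_coverage(sessions):
    """Governance signal: is adopted output reviewed before use?"""
    adopted = [s for s in sessions if s.accepted]
    if not adopted:
        return 1.0  # vacuously covered: nothing was adopted unreviewed
    return sum(s.human_verified for s in adopted) / len(adopted)

sessions = [
    Session(iterations=3, accepted=True, human_verified=True),
    Session(iterations=7, accepted=False, human_verified=False),
    Session(iterations=2, accepted=True, human_verified=False),
]
print(iterations_per_accepted_output(sessions))  # 6.0
print(verification_coverage(sessions))           # 0.5
```

The specific metrics matter less than their shape: each encodes a relational claim about how work gets produced - loops per usable result, review per adopted artifact - rather than surface volume, which is the form a transferable schema has to take on Gentner's account.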

Why This Matters Beyond One Company

The Wall Street banks currently facing congressional and investor pressure on AI adoption - JPMorgan and Goldman Sachs among them - are navigating the same structural tension at a considerably larger scale. The instinct under that pressure is to generate visible metrics that demonstrate engagement with AI. Token leaderboards, adoption dashboards, and usage-rate reports are all responses to that pressure. Indeed's decision to resist that instinct is worth taking seriously as an organizational model, provided it is paired with the harder work of building shared structural schemas rather than simply hiding a number that was not measuring the right thing to begin with.