Google's Project EAT and the Competence Development Paradox in AI Upskilling

Google has launched an internal initiative codenamed Project EAT, designed to "supercharge employees with AI" through better tools and practices. According to internal documents, the project aims to upskill Google's workforce in AI capabilities. The timing is notable: the initiative arrives as the company faces questions about employee displacement from AI automation and as other tech companies struggle with post-layoff talent gaps. But Project EAT reveals a deeper theoretical problem that most corporate AI training initiatives systematically misunderstand.

The Schema Vacuum in Enterprise AI Training

The initiative's stated goal is to provide employees with "better AI tools and practices." This framing exemplifies what I call the procedural fallacy in algorithmic literacy development: the assumption that competence emerges from access to tools plus instruction in their use. Research on algorithmic coordination suggests otherwise. Kellogg, Valentine, and Christin (2020) document that workers develop awareness of algorithmic systems without corresponding improvements in performance outcomes. Knowing that AI tools exist, and even knowing how to execute specific procedures with them, does not address the structural problem of adaptive expertise transfer.

The question Google should be asking is not "how do we train employees to use AI tools" but rather "what structural schemas enable employees to develop competence that transfers across rapidly evolving AI systems?" These are fundamentally different objectives with different pedagogical requirements. Procedural training produces routine expertise that becomes obsolete when the tool changes. Schema-based training produces adaptive expertise that transfers to novel contexts (Hatano & Inagaki, 1986).

The Illegibility Problem in Aggregate AI Capability

Project EAT faces the same coordination challenge I identified in Amazon's recent 16,000-person layoff: how do you develop workforce capability when the competencies required are themselves illegible to organizational decision-makers? Google's internal documents suggest they are trying to create enterprise-wide AI proficiency, but the variance puzzle applies here as forcefully as it does in platform labor markets. Give 10,000 employees identical access to the same AI tools and training, and you will observe power-law outcome distributions. Some employees will generate transformative productivity gains. Most will see marginal improvements. Some will see performance declines as they struggle with AI-mediated workflow disruption.

This variance cannot be explained by natural ability alone. It emerges from algorithmic amplification of initial differences in structural understanding. Employees who grasp the topology of AI system constraints (what kinds of tasks are well-suited to current AI capabilities, what kinds are not, how to decompose problems accordingly) will develop adaptive expertise. Employees who receive only topographical training (how to navigate specific AI interfaces, how to craft prompts for particular models) will develop brittle, context-dependent skills.
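
To make the amplification mechanism concrete, here is a minimal simulation sketch. It assumes a toy multiplicative-returns model in which small initial differences in structural understanding compound across work cycles; the population size, cycle count, and spread parameters are illustrative assumptions, not measured data.

```python
import random
import statistics

random.seed(42)

N_EMPLOYEES = 10_000
N_CYCLES = 50  # AI-mediated work cycles (illustrative)

outcomes = []
for _ in range(N_EMPLOYEES):
    # Small initial differences in structural understanding (hypothetical scale).
    understanding = random.gauss(1.0, 0.1)
    productivity = 1.0
    for _ in range(N_CYCLES):
        # Multiplicative returns: better task decomposition compounds each cycle.
        gain = random.gauss(understanding, 0.05)
        productivity *= max(gain, 0.5)  # floor models workflow disruption
    outcomes.append(productivity)

outcomes.sort(reverse=True)
top_1pct = outcomes[: N_EMPLOYEES // 100]
print(f"median outcome:        {statistics.median(outcomes):.2f}")
print(f"top 1% output share:   {sum(top_1pct) / sum(outcomes):.1%}")
print(f"below baseline (<1.0): {sum(o < 1.0 for o in outcomes) / N_EMPLOYEES:.1%}")
```

Identical inputs, identical training, heavy-tailed outputs: a ten-percent edge in initial understanding, compounded multiplicatively, reproduces the pattern described above.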

What Structural AI Literacy Would Require

If Google wanted to create transferable AI competence rather than tool-specific procedural knowledge, Project EAT would need to focus on schema induction targeting the structural features of AI-mediated work. This means teaching employees:

  • The constraint topology of current AI systems: what categories of tasks are computationally tractable versus intractable, and why
  • The coordination inversion problem: how AI tools shift the locus of expertise from task execution to task specification and evaluation (see the sketch following this list)
  • The illegibility mechanisms in AI-mediated communication: how AI intermediation changes what information flows between collaborators and what remains occluded (Hancock, Naaman, & Levy, 2020)
  • The transfer boundaries between AI systems: what competencies carry across different tools versus what requires context-specific relearning
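
To illustrate the coordination inversion item above, here is a minimal sketch of an evaluation-centric workflow. The ai_generate function is a hypothetical stand-in for any model API (ChatGPT, Gemini, or otherwise); the point is where the human competence sits, not the specific calls.

```python
from typing import Callable

def ai_generate(spec: str) -> str:
    """Hypothetical stand-in for any model call."""
    return f"draft produced for: {spec}"

def coordinate(spec: str,
               acceptance_tests: list[Callable[[str], bool]],
               max_attempts: int = 5) -> str | None:
    """The worker's expertise lives in the spec and the acceptance tests,
    not in producing the artifact: specification and evaluation replace
    direct task execution."""
    for _ in range(max_attempts):
        candidate = ai_generate(spec)
        if all(test(candidate) for test in acceptance_tests):
            return candidate
        # A failed evaluation feeds back into a sharper specification.
        spec += " (previous draft rejected; tighten scope)"
    return None  # escalate to human execution

# Usage: the transferable skill is writing the spec and the tests.
result = coordinate(
    "summarize Q3 incident reports for a non-technical audience",
    acceptance_tests=[
        lambda t: len(t.split()) <= 200,           # length constraint
        lambda t: "stack trace" not in t.lower(),  # audience constraint
    ],
)
```

The specification and the acceptance tests transfer across tools; the prompt syntax for any particular model does not.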

This is substantially more difficult than teaching employees how to use ChatGPT or Gemini. It requires developing conceptual understanding of AI system architecture, training data limitations, optimization objectives, and failure modes. Most corporate training initiatives avoid this level of structural instruction because it is slower to produce visible productivity gains.

The Institutional Irony

There is a particular irony in Google, a company that builds AI systems, struggling with how to develop internal AI competence. It suggests that even organizations with deep technical expertise in machine learning face the schema development problem when trying to create adaptive expertise at scale. Building AI systems requires different competencies than effectively coordinating work through AI-mediated communication. The former is a technical problem. The latter is a coordination problem that technical expertise alone does not solve.

Project EAT will likely succeed in increasing employee usage of AI tools. Whether it creates transferable competence that persists as those tools evolve is a different question entirely. The distinction matters because the half-life of procedural AI training is measured in months, while the organizational investment required is measured in years.
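
A back-of-envelope calculation makes the mismatch concrete. Assuming simple exponential decay of training value, with half-lives taken from the months-versus-years framing above (the specific numbers are illustrative assumptions, not measurements):

```python
# Retained training value under exponential decay: V(t) = 2 ** (-t / half_life).
def retained(months_elapsed: float, half_life_months: float) -> float:
    return 2 ** (-months_elapsed / half_life_months)

HORIZON = 24  # a two-year investment horizon, in months

procedural = retained(HORIZON, half_life_months=6)   # tool-specific skills (assumed)
structural = retained(HORIZON, half_life_months=60)  # schema-based skills (assumed)

print(f"procedural value retained after 2 years: {procedural:.0%}")  # ~6%
print(f"structural value retained after 2 years: {structural:.0%}")  # ~76%
```

Under these assumptions, procedural training retains roughly six percent of its value over a two-year investment horizon, while schema-based training retains about three quarters.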