KPMG's Lakehouse Pivot: Why Teaching Critical Thinking Before Technical Skills Gets the Sequencing Right
The Announcement and What It Actually Claims
KPMG US is restructuring its intern training program at its Lakehouse facility in Florida. The reported shift moves emphasis away from technical skill development and toward critical thinking and problem-solving as foundational competencies. This is a specific operational decision, not a general aspiration, and it deserves specific analysis rather than celebration or dismissal.
The instinct to applaud this move is understandable. But the more interesting question is whether KPMG's diagnosis of the problem is correct, and whether the proposed intervention actually addresses that diagnosis. My read is that they have identified a real structural problem but may be solving a symptom rather than the cause.
The Competence Sequencing Problem
KPMG's implicit theory of the problem goes something like this: interns arrive with technical training but cannot apply it flexibly when novel situations arise. The solution, then, is to train critical thinking first. This framing maps closely onto the distinction Hatano and Inagaki (1986) drew between routine expertise and adaptive expertise. Routine experts execute well under familiar conditions; adaptive experts reconstruct their approach when conditions shift. The consultancy world, increasingly mediated by AI tooling that changes what counts as a "familiar condition," is precisely the environment where routine expertise breaks down fastest.
The problem with KPMG's framing is that it treats critical thinking as a transferable general capacity that can be trained in isolation and then applied to domain content later. The cognitive science literature is skeptical of this. Gentner (1983) demonstrated that analogical transfer depends on structural alignment between source and target domains. You do not transfer reasoning capacity in the abstract; you transfer schemas that carry relational structure from one domain to another. If KPMG's critical thinking training does not build those schemas in relation to actual professional tasks, the Lakehouse program risks producing interns who are reflective but not effective.
Where This Connects to Algorithmic Work Environments
The deeper issue KPMG is responding to, even if their framing does not make it explicit, is that AI tools have restructured what junior professional work actually looks like. When Copilot or similar tools handle the first-draft synthesis of a client memo or the initial pass at data reconciliation, the intern's comparative advantage cannot be procedure execution. The procedures are being automated. What remains is judgment about whether the output is correct, contextually appropriate, and strategically sound.
This is precisely the structure that Kellogg, Valentine, and Christin (2020) identified in algorithmic work environments: the human role shifts from execution to evaluation, but organizations rarely train for the evaluative role explicitly. The awareness-capability gap I work with in my own research applies here. An intern who knows that AI can generate a client-ready document does not automatically know how to assess whether that document reflects accurate structural understanding of the client's situation. Awareness of the tool's capability is not the same as competence at supervising its output.
What the Right Intervention Looks Like
If KPMG wants to produce adaptive expertise rather than a slightly more reflective form of routine expertise, the training design needs to do something specific: it needs to expose interns to the structural features that recur across different client engagements, different service lines, and different AI tool configurations. This is what I would call schema induction, following Gentner's (1983) structure-mapping framework. The goal is not to teach general thinking skills in the abstract, but to build accurate mental models of how professional judgment tasks are structured, so that when the surface features change (a new client sector, a new AI tool, a new regulatory context), the underlying relational structure is still recognizable.
Hancock, Naaman, and Levy (2020) made a related point about AI-mediated communication: the locus of skill shifts from message production to meta-level management of the communication process. For junior professionals, this means the training priority should be understanding the structure of professional judgment, not just practicing professional tasks. KPMG is closer to this insight than most firms. The question is whether their execution at Lakehouse reflects the distinction or papers over it.
The Broader Organizational Theory Implication
Firms that invest in this kind of reorientation face a genuine measurement problem. Critical thinking is harder to assess than technical certification, and the outcomes that matter (transfer to novel client situations, quality of AI output supervision) may not be visible during the internship period at all. This creates an incentive to revert to procedural training because it produces legible short-term signals. KPMG deserves credit for naming the problem. Whether they solve it depends on whether they are willing to accept delayed and ambiguous evidence of success, which is a harder organizational commitment than redesigning a curriculum.
References
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. *Cognitive Science, 7*(2), 155-170. https://doi.org/10.1207/s15516709cog0702_3
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. *Journal of Computer-Mediated Communication, 25*(1), 89-100. https://doi.org/10.1093/jcmc/zmz022
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), *Child development and education in Japan* (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. *Academy of Management Annals, 14*(1), 366-410. https://doi.org/10.5465/annals.2018.0174
Roger Hunt