Hiring Managers Are Not Reading Your Resume: What AI Screening Reveals About the Awareness-Capability Gap
The Specific Problem in Front of Us
A story circulating this week makes explicit something practitioners in talent acquisition have quietly known for some time: hiring managers are not reading resumes. The driver is not laziness but volume. AI-based applicant tracking and screening systems now serve as the primary filter between candidates and human review, and job seekers have responded by optimizing their application materials for machine legibility rather than human persuasion. The result is a documented arms race in which candidates attempt to reverse-engineer screening algorithms while employers quietly raise the bar on what algorithmic scoring must produce before a human ever engages.
This is a coordination problem masquerading as a technology story. What the news coverage frames as a quirky labor market development is, from an organizational theory standpoint, a structural shift in how competence gets allocated and recognized inside algorithmically mediated hiring pipelines.
Folk Theories and the Awareness-Capability Gap in Job Seeking
The research literature on algorithmic literacy draws a consistent and uncomfortable distinction between awareness and capability. Kellogg, Valentine, and Christin (2020) document how workers in algorithmically managed environments develop what might be called surface awareness: they know that an algorithm is making consequential decisions about them, but this knowledge does not translate into improved outcomes. Knowing the system exists is not the same as understanding the structural logic the system applies.
The advice currently circulating among job seekers reproduces this problem almost perfectly. Recommendations to "beat the ATS" typically involve keyword insertion, formatting adjustments, and section-label standardization. These are topographic responses. They address the visible surface features of the system without engaging its underlying logic. Gagrčin, Naab, and Grub (2024) describe this pattern as folk theory reliance: individuals construct idiosyncratic, impression-based models of how algorithms behave and act on those models rather than on accurate structural understanding. The folk theory here is that keyword density is the primary signal. The structural reality is considerably more complex, involving semantic clustering, contextual relevance weighting, and in some systems, behavioral signals derived from how candidates interact with the application portal itself.
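To make the distinction concrete, the sketch below contrasts the folk-theory model (raw keyword density) with a toy contextual scorer that only credits a keyword when it appears near evidence of actual use. Everything here is an illustrative assumption: the keyword lists, the context terms, the window size, and both scoring rules are invented for this example, not taken from any vendor's actual algorithm.

```python
# Minimal sketch: folk-theory scoring (keyword density) versus a toy
# contextual scorer. All keyword lists, context terms, and scoring
# rules are illustrative assumptions, not any real ATS's logic.
import re
from collections import Counter

KEYWORDS = {"python", "etl", "airflow"}
CONTEXT_TERMS = {"built", "deployed", "maintained", "pipeline", "production"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def folk_score(resume: str) -> float:
    """Folk theory: keyword density is the primary signal."""
    tokens = tokenize(resume)
    hits = sum(Counter(tokens)[k] for k in KEYWORDS)
    return hits / max(len(tokens), 1)

def contextual_score(resume: str, window: int = 5) -> float:
    """Toy structural model: a keyword only counts when a context
    term appears within `window` tokens of it, approximating
    contextual relevance weighting."""
    tokens = tokenize(resume)
    score = 0
    for i, tok in enumerate(tokens):
        if tok in KEYWORDS:
            neighborhood = tokens[max(0, i - window): i + window + 1]
            if CONTEXT_TERMS & set(neighborhood):
                score += 1
    return score / max(len(tokens), 1)

stuffed = "Python ETL Airflow. Skills: Python, ETL, Airflow, Python."
grounded = ("Built and deployed a production ETL pipeline in Python, "
            "maintained Airflow DAGs for daily loads.")

for label, resume in [("keyword-stuffed", stuffed), ("grounded", grounded)]:
    print(f"{label}: folk={folk_score(resume):.2f} "
          f"contextual={contextual_score(resume):.2f}")
```

Under these assumptions the keyword-stuffed resume dominates on the folk metric and scores zero on the contextual one, while the grounded resume scores identically on both. A strategy tuned to the folk metric is optimizing a signal the structural model does not reward.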
The gap matters because folk theories produce brittle strategies. A candidate who has optimized for one system's apparent preferences is not prepared when a different employer runs a different system. Routine expertise, as Hatano and Inagaki (1986) define it, enables reliable performance in stable, familiar conditions. Adaptive expertise enables performance transfer to novel conditions. The labor market is currently generating a large population of job seekers with ATS-specific routine expertise and very little adaptive capacity.
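The same brittleness shows up in miniature if two hypothetical vendors score against different vocabularies. Both scorers below are illustrative assumptions, not real ATS logic.

```python
# Sketch of checklist brittleness, assuming two hypothetical vendors
# that score resumes against different keyword vocabularies.
VENDOR_A = {"synergy", "stakeholder", "agile", "kpi"}
VENDOR_B = {"cross-functional", "roadmap", "okr", "scrum"}

def vocab_score(resume_terms: set[str], vendor_vocab: set[str]) -> float:
    """Fraction of the vendor's vocabulary the resume matches."""
    return len(resume_terms & vendor_vocab) / len(vendor_vocab)

# A resume tuned to Vendor A's apparent preferences.
tuned_for_a = {"synergy", "stakeholder", "agile", "kpi"}

print(vocab_score(tuned_for_a, VENDOR_A))  # 1.0 on the familiar system
print(vocab_score(tuned_for_a, VENDOR_B))  # 0.0 on a novel one
```

Routine expertise is the first print statement; the second is what happens when conditions change and nothing transfers.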
The Organizational Side of the Problem
Employers are not passive in this dynamic. The same news cycle reporting on candidate gaming strategies also reveals that hiring organizations are responding by escalating AI screening complexity rather than questioning whether algorithmic pre-screening produces valid selection outcomes in the first place. This is the organizational analogue of the awareness-capability gap. Firms are aware that their AI screening tools are being gamed. The structural response, adding more screening layers, addresses the topography of the problem without engaging its topology.
Rahman (2021) describes this dynamic in platform labor contexts as the invisible cage: workers are constrained by algorithmic systems whose rules they cannot directly inspect, and the organization deploying those systems retains information asymmetry as a coordination mechanism. The hiring context introduces a notable variation. Unlike gig platform workers, job applicants are not yet inside the organizational boundary. They are being pre-selected by a system they have no contractual right to query, no feedback mechanism to learn from, and no recourse when the system produces a false negative. The power asymmetry is arguably more acute than in established platform employment relationships.
What Schema Induction Would Actually Look Like Here
The ALC framework I am developing in my dissertation research makes a specific prediction relevant to this situation. General training targeting structural features of algorithmic systems should produce better transfer outcomes than platform-specific procedural training. Applied here: a job seeker who understands why AI screening systems are designed the way they are (what optimization targets the system serves, what signals its architecture is built to detect, what constraints the deploying organization operates under) is better positioned to adapt across multiple systems than a job seeker who has memorized a checklist for one specific ATS vendor.
The practical implication is not optimistic in the short run. Schema-level understanding requires access to information that most candidates do not have: training data characteristics, model architecture, validation procedures. Sundar (2020) notes that as machine agency increases in consequential decisions, the human subjects of those decisions face increasing difficulty constructing accurate mental models of the systems affecting them. This is structurally different from previous labor market opacity. Employers have always had private information about hiring criteria. What is new is that the criteria are now partially encoded in systems that even the organizations deploying them do not fully understand at the inference level.
The Coordination Failure Nobody Is Naming
What this news moment actually reveals is a coordination failure at the institutional level. Individual job seekers are solving the wrong problem, optimizing for folk-theory-derived signals rather than structural features. Employers are responding to gaming by adding complexity rather than improving signal validity. And the organizations selling AI screening tools have strong incentives to maintain information asymmetry. Hancock, Naaman, and Levy (2020) describe AI-mediated communication environments as producing systematic distortions when participants lack accurate models of the mediating system. The hiring pipeline is now an AI-mediated communication environment, and neither side of the transaction has the schema-level understanding necessary for it to function as coordination rather than noise amplification.
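The noise-amplification point can be made with a toy model. In the simulation below, a screening proxy equals true fit plus measurement noise plus a gaming component that is independent of fit; every distribution and parameter is an assumption chosen for illustration, not an estimate of any real pipeline.

```python
# Toy simulation of screening-signal degradation under gaming.
# Assumptions (all illustrative): true fit, measurement noise, and
# gaming skill are independent Gaussians; the screen admits the top
# decile by proxy score. No real ATS behaves this simply.
import random

random.seed(0)

def mean_fit_of_screened_pool(gaming_sd: float,
                              n: int = 50_000,
                              top_frac: float = 0.10) -> float:
    """Average true fit among candidates whose proxy score lands in
    the top `top_frac` fraction, where proxy = fit + noise + gaming."""
    candidates = []
    for _ in range(n):
        fit = random.gauss(0.0, 1.0)           # true job fit (unobserved)
        noise = random.gauss(0.0, 0.5)         # measurement error
        gaming = random.gauss(0.0, gaming_sd)  # gaming skill, independent of fit
        candidates.append((fit + noise + gaming, fit))
    candidates.sort(reverse=True)              # rank by proxy score
    k = int(n * top_frac)
    return sum(fit for _, fit in candidates[:k]) / k

for sd in (0.0, 1.0, 3.0):
    print(f"gaming_sd={sd:.1f} -> mean true fit of screened-in pool: "
          f"{mean_fit_of_screened_pool(sd):+.2f}")
```

Under these assumptions, the screened-in pool's average true fit falls monotonically as gaming variance grows: candidates' proxy scores keep climbing while the screen's validity as a measure of fit decays. That is the structural sense in which the arms race amplifies noise rather than signal.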
The resume is not dying because of AI. It is becoming illegible because the coordination infrastructure around it has changed faster than either party's capacity to understand what the new infrastructure is actually measuring.
References
Gagrčin, A., Naab, T. K., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89-100.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). W. H. Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.
Roger Hunt