AI Surveillance, Performance Evaluation, and the Measurement Problem in Algorithmically Mediated Work
A recent piece in the business press, co-authored by Zal.ai CEO Kayvon Touran and organizational leadership expert Ben Dattner, raises a pointed concern: existing laws and workplace norms are dangerously unprepared for the rapid expansion of AI-driven employee monitoring, performance evaluation, and compensation determination. The argument is not merely that surveillance is expanding - it is that the legal and ethical architecture governing employment relationships was built for a world where human managers made consequential decisions about workers, and that architecture is now being applied to systems that operate on fundamentally different logic. This is a more specific and more troubling claim than it might initially appear.
The Measurement Problem Is Not Primarily a Legal Problem
Touran and Dattner frame their concern largely in legal and ethical terms, which is understandable given their respective backgrounds. But the more foundational problem is epistemological. AI-driven performance evaluation systems do not simply measure behavior - they define which behaviors are legible to the algorithm and reward or penalize workers accordingly. Rahman (2021) describes this as an "invisible cage": workers are subject to evaluation criteria they cannot inspect, contest, or even fully observe. The consequence is not just unfairness in any particular case. The consequence is that workers begin optimizing for the signals the algorithm can detect rather than for the actual outputs their organization needs. When the measurement system and the productivity system diverge, workers rationally follow the measurement system.
Why Awareness Does Not Solve the Problem
One intuitive response to AI surveillance concerns is transparency: if workers understand how the monitoring system works, they can respond more effectively and employers can demonstrate accountability. This response has surface plausibility but runs directly into what I call the awareness-capability gap in the ALC framework. Kellogg, Valentine, and Christin (2020) document extensively that workers develop sophisticated awareness of algorithmic management systems without that awareness translating into meaningfully improved outcomes. Knowing that a system exists and knowing how to navigate it effectively are categorically different competencies. Transparency disclosures satisfy a legal and ethical demand without addressing the structural asymmetry between workers and the systems that evaluate them.
The Touran and Dattner piece implicitly acknowledges this when it notes that existing workplace norms are "dangerously unprepared." But norms are not the binding constraint. The binding constraint is that workers lack the structural schemas necessary to understand what the algorithm is actually measuring, how that measurement aggregates into performance scores, and which behavioral adjustments would produce genuine improvements versus merely gaming a metric. Gagrain, Naab, and Grub (2024) find that users of algorithmic media develop folk theories - individual impressions of how the systems work - rather than accurate structural understanding. The same dynamic almost certainly applies in workplace surveillance contexts.
The Organizational Design Implication
What makes the development Touran and Dattner describe analytically interesting is that it represents a particular organizational choice, not a technological inevitability. Organizations are choosing to adopt AI monitoring tools in contexts where the relationship between measurable signals and actual performance is poorly understood, often even by the organizations deploying them. Hatano and Inagaki (1986) draw a distinction between routine expertise and adaptive expertise that is directly relevant here. Routine expertise involves executing known procedures reliably. Adaptive expertise involves understanding the principles underlying a procedure well enough to modify behavior when the procedure does not fit the situation. Organizations deploying AI surveillance systems are, in many cases, treating performance evaluation as a routine expertise problem when it is fundamentally an adaptive one.
The consequence of this misclassification is predictable. Schor et al. (2020) document how platform-mediated work creates dependence and precarity precisely because workers cannot inspect or contest the criteria by which they are evaluated. When that logic moves inside the traditional employment relationship - as Touran and Dattner describe - the precarity does not disappear simply because the worker has an employment contract rather than a platform account. The structural condition is reproduced in a new institutional context.
What This Means for Organizational Theory
The specific development Touran and Dattner are describing - AI systems making or heavily influencing compensation and performance determinations - is a natural experiment in what happens when measurement systems become organizationally authoritative before anyone has established the validity of what they are measuring. Hancock, Naaman, and Levy (2020) argue that AI-mediated communication fundamentally changes the nature of the communicative act. The same logic extends to AI-mediated evaluation: when an algorithm mediates between a worker's behavior and an organization's assessment of that behavior, the evaluation is no longer a judgment in any meaningful sense. It is a classification. Organizations, regulators, and researchers who treat these two things as equivalent are likely to generate frameworks that satisfy the letter of accountability requirements without addressing their substance.
The legal gap Touran and Dattner identify is real. But the harder problem is that closing the legal gap without addressing the structural epistemology of AI evaluation systems will produce compliance theater rather than genuine accountability. That is a problem organizational theory is better positioned to address than employment law.
References
Gagrain, A., Naab, T., & Grub, J. (2024). Algorithmic media use and algorithm literacy. *New Media & Society*.
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. *Journal of Computer-Mediated Communication, 25*(1), 89-100.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), *Child development and education in Japan* (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. *Academy of Management Annals, 14*(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. *Administrative Science Quarterly, 66*(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. *Theory and Society, 49*(5), 833-861.
Roger Hunt