AI Emotion Surveillance and the Competence Inversion Problem in Worker Monitoring

A recent report from Fast Company describes a growing category of workplace AI tools designed to monitor not just worker productivity, but emotional states and "agreeability." Employers are deploying systems that analyze tone, facial expression, and communication patterns to flag workers whose affect falls outside acceptable norms. This is not speculative. The report cites active deployments where managers receive dashboards summarizing employee sentiment scores alongside output metrics. The implications for organizational theory are significant, and they are not the ones most commentators are reaching for.

The Measurement Problem Is Not What You Think

The standard critique of emotion surveillance focuses on privacy and power asymmetry. Those concerns are legitimate. But there is a prior theoretical problem that receives almost no attention: these systems invert the classical assumption about what competence means in a monitored environment. In classical organizational theory, surveillance presupposes a legible object. You monitor productivity because productivity can be defined independently of the monitoring instrument. Emotion surveillance breaks this assumption. When the system flags "low agreeability," it is not measuring a pre-existing trait. It is constructing a behavioral target that workers must then learn to satisfy. The monitored object and the monitoring instrument co-produce each other.
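To make the co-production point concrete, here is a minimal toy simulation. Everything in it is an assumption for illustration: a single flag threshold, a fixed adaptation rate, and flagged workers who shift their displayed affect to just above the cutoff. No real vendor system is this simple.

```python
import random

random.seed(0)

THRESHOLD = 0.5    # hypothetical dashboard flag cutoff
TARGET = 0.55      # flagged workers aim just above the cutoff
ADAPT_RATE = 0.4   # hypothetical per-quarter adjustment rate

# displayed affect for 1,000 workers before anyone knows the system exists
displayed = [random.gauss(0.5, 0.2) for _ in range(1000)]

for quarter in range(6):
    flagged = sum(1 for d in displayed if d < THRESHOLD)
    mean = sum(displayed) / len(displayed)
    print(f"quarter {quarter}: {flagged:4d} flagged, mean displayed affect {mean:.3f}")
    # flagged workers shift what they display toward the safe zone, regardless
    # of underlying state: the measurement constructs the behavioral target
    displayed = [d + ADAPT_RATE * (TARGET - d) if d < THRESHOLD else d
                 for d in displayed]
```

Within a few review cycles, the lower tail of the distribution has migrated to the threshold. The dashboard is not reporting a pre-existing trait; it is reporting the workforce's convergence on the dashboard's own cutoff.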

This is precisely the endogenous competence problem that my dissertation research addresses through the Algorithmic Literacy Coordination (ALC) framework. Platform environments do not assume workers arrive with pre-formed competencies. Competencies develop, or fail to develop, through participation in algorithmically mediated environments (Kellogg, Valentine, and Christin, 2020). Emotion surveillance extends this logic from gig platforms into conventional employment relationships. The worker who wants to score well on an agreeability dashboard must develop a theory of how the system works, and that theory will almost certainly be wrong in ways that matter.

Folk Theories Will Dominate, and They Will Fail

Algorithmic literacy research consistently documents a gap between awareness and capability. Workers learn that an algorithm exists and that it affects their outcomes. They then construct folk theories about what the algorithm rewards, and those folk theories are systematically inaccurate (Gagnon, Naab, and Grub, 2024). The inaccuracy is not random. Workers tend to over-index on visible, surface-level features and miss the structural logic underneath. On content platforms, this means creators chase trending formats rather than understanding engagement dynamics. In emotion surveillance contexts, the equivalent error is performing visible emotional signals - smiling more in video calls, using positive language in emails - while the underlying behavioral patterns that the system actually weights remain opaque.
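A toy model makes the folk-theory gap visible. The feature names and weights below are hypothetical; the only structural claim is that the folk theory loads on visible surface features while the system weights something else entirely.

```python
# hypothetical weights: what the system actually computes vs. the folk theory
system_weights = {"smile_rate": 0.1, "positive_words": 0.1,
                  "response_latency": -0.4, "interruption_rate": -0.4}
folk_weights = {"smile_rate": 0.5, "positive_words": 0.5,
                "response_latency": 0.0, "interruption_rate": 0.0}

def score(weights, features):
    return sum(weights[k] * features[k] for k in weights)

# one worker's observable behavior, each feature on a 0-1 scale
features = {"smile_rate": 0.3, "positive_words": 0.4,
            "response_latency": 0.6, "interruption_rate": 0.5}

for week in range(5):
    print(f"week {week}: folk score {score(folk_weights, features):+.2f}, "
          f"system score {score(system_weights, features):+.2f}")
    # the worker improves only the features their folk theory says matter
    for k in ("smile_rate", "positive_words"):
        features[k] = min(1.0, features[k] + 0.15)
```

The folk score climbs from +0.35 to +0.95 while the system score stays negative. The worker is working hard at the wrong optimization problem.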

Sundar (2020) describes this as the problem of machine agency: when workers interact with AI systems, they attribute causal logic to the system that reflects their own cognitive schemas rather than the system's actual architecture. The result is a surveillance environment where workers are actively adapting, but adapting to a model of the system rather than the system itself. This produces a strange organizational outcome. The employer believes they are receiving signal about employee sentiment. The employee believes they are managing that signal. Both beliefs are partially wrong, and the gap between them is where organizational dysfunction accumulates.

What Routine Expertise Cannot Fix Here

Hatano and Inagaki (1986) distinguish between routine expertise, which is procedural knowledge that works within stable task conditions, and adaptive expertise, which is principled understanding that transfers when conditions change. Emotion surveillance systems create exactly the conditions that expose the limits of routine expertise. The system parameters change. The vendor updates the model. The manager interprets the dashboard differently this quarter than last quarter. A worker who learned a set of behavioral procedures to score well on the previous version of the system has no transferable knowledge when the system changes. A worker who understands the structural logic of how sentiment is operationalized - what linguistic and behavioral features this class of models tends to reward - has something that transfers.
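The transfer claim can be sketched in the same toy vocabulary. Assume, hypothetically, that a vendor update reshuffles the model's weights while the sign structure of the model class stays stable: affect-display features rewarded, latency and interruption features penalized.

```python
def score(weights, behavior):
    return sum(weights[f] * behavior[f] for f in weights)

# two hypothetical releases of a vendor model: the weights move between
# versions, but the sign structure of the model class holds
v1 = {"positive_words": 0.6, "smile_rate": 0.1,
      "response_latency": -0.1, "interruption_rate": -0.2}
v2 = {"positive_words": 0.1, "smile_rate": 0.2,
      "response_latency": -0.5, "interruption_rate": -0.2}

baseline = {f: 0.5 for f in v1}

# routine expertise: a procedure memorized against v1 ("use positive
# language"), with everything else left at baseline
routine = dict(baseline, positive_words=1.0)

# adaptive expertise: the worker knows only the signs, which hold across
# versions, and sets every behavior in the favorable direction
signs = {"positive_words": +1, "smile_rate": +1,
         "response_latency": -1, "interruption_rate": -1}
adaptive = {f: 1.0 if s > 0 else 0.0 for f, s in signs.items()}

for name, model in (("v1", v1), ("v2", v2)):
    print(f"{name}: routine {score(model, routine):+.2f}, "
          f"adaptive {score(model, adaptive):+.2f}")
```

Under v1 both strategies score well. Under v2 the memorized procedure goes negative while the sign-structure strategy still scores positive, because it encodes the class, not the instance.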

The practical implication is counterintuitive. Organizations that want workers to navigate emotion surveillance in ways that produce accurate signal, rather than performative noise, need to teach structural understanding of how these systems work. Procedural compliance training will produce gaming, not genuine behavioral data. This matters for the employer's own purposes. If the surveillance instrument is producing data contaminated by strategic performance, the organizational decisions made on that data are degraded. The employer is, in effect, training workers to corrupt the instrument.
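How much degradation? A toy calculation, under loud assumptions: each worker has a true sentiment level, a gamed signal pins the displayed score near a known safe value, and measurement noise is small. (statistics.correlation requires Python 3.10 or later.)

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(2)

def reported(true_affect, gamed):
    # a gamed signal pins the displayed score near a "safe" value,
    # regardless of the worker's actual state (all values hypothetical)
    noise = random.gauss(0, 0.05)
    return 0.8 + noise if gamed else true_affect + noise

for gamed_fraction in (0.0, 0.25, 0.5, 0.75):
    truth, dashboard = [], []
    for _ in range(2000):
        t = random.random()
        truth.append(t)
        dashboard.append(reported(t, random.random() < gamed_fraction))
    r = statistics.correlation(truth, dashboard)
    print(f"gamed fraction {gamed_fraction:.2f}: "
          f"corr(dashboard, true sentiment) = {r:+.2f}")
```

As the gamed fraction rises, the dashboard's correlation with true sentiment falls toward zero, and every decision keyed to the dashboard inherits that loss.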

A Boundary Condition Worth Naming

Rahman (2021) argues that algorithmic control systems create an "invisible cage" precisely because workers cannot see the full structure of the constraints they operate within. Emotion surveillance makes this cage more intimate than any previous version. It moves algorithmic mediation from task completion into self-presentation. This is a genuine organizational boundary condition that existing theory has not fully addressed: the point at which the object of algorithmic measurement becomes the worker's interiority rather than their output. How coordination theory handles that shift is an open question, and the answer will not come from the privacy debate alone.

References

Gagnon, C., Naab, T., and Grub, J. (2024). Algorithmic media use and algorithm literacy. New Media and Society.
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan. Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work. Academy of Management Annals, 14(2), 366-410.
Rahman, H. A. (2021). The invisible cage. Administrative Science Quarterly, 66(4), 945-988.
Sundar, S. S. (2020). Rise of machine agency. Journal of Computer-Mediated Communication, 25(1), 74-80.