Algorithmic Managers and Missing Rights: What the Gigification of Nursing Reveals About Platform Coordination
A New Front in Platform Labor
A recent report by the AI Now Institute, covered this week in the business press, documents how healthcare staffing applications are systematically reclassifying nurses as gig workers, subjecting them to AI-driven scheduling and performance monitoring, and in doing so stripping them of workers' compensation protections and stable employment guarantees. This is not a peripheral story about a niche labor market. Nursing is one of the most credentialed, organizationally embedded, and institutionally regulated professions in the United States. If algorithmic management can penetrate that workforce, the structural questions it raises extend far beyond healthcare.
The AI Now Institute report frames this primarily as a legal and labor rights crisis, and that framing is correct. But it leaves a deeper coordination problem underexamined. The "gigification" of nursing through platform-mediated staffing does not just reclassify workers legally. It fundamentally changes how competence is assumed, evaluated, and rewarded inside organizations. That change has consequences that neither labor law nor conventional management theory is well-equipped to address.
The Competence Assumption Problem
Classical coordination theory, whether it models coordination through markets, hierarchies, or professional networks, assumes that workers arrive with pre-existing, verifiable competence. A nurse hired through a hospital's human resources process brings credentials, references, and institutional history. The organization then structures work around that assumed baseline. Algorithmic staffing platforms invert this. As Kellogg, Valentine, and Christin (2020) argue, algorithmic systems at work do not merely allocate labor. They actively filter, rank, and amplify differences in worker performance in ways that become self-reinforcing over time. A nurse with identical clinical credentials to a peer, but with less familiarity with how a particular platform scores response time or shift acceptance rates, will systematically receive worse assignments, lower visibility, and fewer opportunities. The platform does not know she is an equally competent clinician. It only knows what it can measure.
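The measurement problem can be made concrete with a toy sketch. The field names, weights, and scoring formula below are illustrative assumptions of mine, not drawn from any real staffing platform; the point is only that clinical experience never enters the score, so a veteran clinician can rank below a platform-fluent newcomer.

```python
# Hypothetical sketch: a platform ranking function that sees only
# behavioral telemetry, not clinical competence. All field names and
# weights are illustrative assumptions, not any real system's logic.

from dataclasses import dataclass

@dataclass
class NurseProfile:
    name: str
    years_acute_care: int        # clinical credential: invisible to the score
    avg_response_minutes: float  # measurable: how fast shift offers are answered
    acceptance_rate: float       # measurable: fraction of offered shifts accepted

def platform_score(p: NurseProfile) -> float:
    """Rank workers purely on what the platform can observe.
    Note that years_acute_care never enters the calculation."""
    responsiveness = max(0.0, 1.0 - p.avg_response_minutes / 60.0)
    return 0.5 * responsiveness + 0.5 * p.acceptance_rate

veteran = NurseProfile("A", years_acute_care=20,
                       avg_response_minutes=45, acceptance_rate=0.6)
newcomer = NurseProfile("B", years_acute_care=2,
                        avg_response_minutes=5, acceptance_rate=0.9)

# The newcomer outranks the veteran despite a tenth of the experience.
assert platform_score(newcomer) > platform_score(veteran)
```

Because the ranking feeds back into which assignments each nurse is shown, the initial measurement gap compounds, which is the self-reinforcing dynamic Kellogg, Valentine, and Christin describe.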
This is precisely the variance puzzle my dissertation research engages with. Platform workers with identical formal qualifications show dramatically different outcomes, and natural ability alone cannot explain the divergence. The AI Now Institute report offers a particularly stark version of this: nurses with decades of acute care experience find themselves ranked below less experienced peers who have simply learned to navigate the platform's scoring logic more effectively. Rahman's (2021) concept of the invisible cage is useful here. Workers are constrained not by explicit rules but by the structural architecture of algorithmic systems they cannot fully observe or contest.
Surveillance Wages and the Awareness-Capability Gap
The report introduces the term "surveillance wages," describing how platforms adjust compensation dynamically based on behavioral compliance metrics. This creates a particularly insidious version of what algorithmic literacy research identifies as the awareness-capability gap. Schor et al. (2020) document that platform workers often develop awareness that an algorithm governs their outcomes, but this awareness does not reliably translate into improved performance. Knowing you are being watched and scored is not the same as knowing which behaviors the scoring system actually rewards.
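A minimal sketch can illustrate why awareness alone does not close the gap. Everything here, including the metric names, weights, and wage floor, is a hypothetical construction of mine: the worker sees the adjusted hourly rate, but the weighting that produced it stays hidden, so knowing a multiplier exists does not reveal which behavior moves it.

```python
# Hypothetical sketch of "surveillance wages": pay scaled by a
# compliance multiplier computed from behavioral metrics. Metric names,
# weights, and the wage floor are illustrative assumptions only.

def compliance_multiplier(metrics: dict[str, float],
                          weights: dict[str, float],
                          floor: float = 0.8) -> float:
    """Map a weighted compliance score into a wage multiplier in
    [floor, 1.0]. The weights dict is the part the worker never sees."""
    score = sum(weights[k] * metrics.get(k, 0.0) for k in weights)
    return floor + (1.0 - floor) * score

base_rate = 50.00  # dollars per hour, illustrative

# Hidden to the worker: shift acceptance dominates the multiplier.
hidden_weights = {"shift_acceptance": 0.7, "app_check_in_rate": 0.3}

# The worker invests effort in check-ins (1.0) but declines shifts (0.5),
# and cannot tell which behavior is depressing her pay.
worker_metrics = {"shift_acceptance": 0.5, "app_check_in_rate": 1.0}
hourly = base_rate * compliance_multiplier(worker_metrics, hidden_weights)
```

The worker in this sketch observes only `hourly`; recovering `hidden_weights` from observed pay would require exactly the kind of low-cost behavioral experimentation that, as the next section argues, clinical work does not permit.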
For nurses, this gap is amplified by the professional and ethical constraints of clinical work. A driver on a rideshare platform can experiment with route timing or acceptance behavior at relatively low cost. A nurse cannot ethically game patient assignment patterns to optimize her algorithmic rating. The behavioral flexibility that allows some gig workers to develop effective folk theories of platform logic, however imperfectly, is largely unavailable in clinical contexts. The result is a workforce that is algorithmically managed but structurally unable to adapt to that management through the trial-and-error learning that platforms implicitly assume.
What This Means for Organizational Theory
The AI Now Institute report is primarily a policy document, and its recommendations focus on regulatory intervention. That is appropriate. But organizational theorists need to engage with a prior question: what kind of coordination failure is this, exactly? Hatano and Inagaki (1986) distinguish between routine expertise, which executes known procedures reliably, and adaptive expertise, which reconfigures responses when the environment changes. Algorithmic staffing platforms effectively demand adaptive expertise from workers while providing only the conditions for routine compliance. The platform penalizes deviation from behavioral norms it never makes explicit, while simultaneously changing those norms through continuous model updates that workers cannot observe.
Asonye's (2021) work on organizational factors affecting nurse competence in acute care settings reinforces this point from a different angle. Organizational context shapes whether clinical competence can actually be expressed and recognized. When the organizational layer is an opaque algorithmic system rather than a visible management structure, the conditions for competence expression become structurally compromised regardless of what individual workers know or can do.
The gigification of nursing is not primarily a story about technology replacing human judgment. It is a story about a coordination mechanism being applied to a labor context it was not designed for, and the resulting gap between what workers can do and what the system allows them to demonstrate. That gap is not closed by awareness. It requires a different kind of structural literacy entirely, and the institutional conditions to act on it.
References
Asonye, C. C. (2021). Organizational factors associated with nurses' competence in averting failure to rescue in acute care settings. Journal of Client-Centered Nursing Care, 7(1). https://doi.org/10.32598/jccnc.7.1.358.1
Hatano, G., and Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, and K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.
Kellogg, K. C., Valentine, M. A., and Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.
Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., and Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5-6), 833-861.
Roger Hunt