South Korea's AI Mental Health Laws and the Schema Problem in Algorithmic Harm

South Korea just became the first major economy to enact comprehensive AI safety legislation that explicitly addresses mental health impacts of algorithmic systems. The law requires AI developers to assess and mitigate psychological harms, marking a significant departure from the narrower focus on privacy and bias that dominates Western regulatory approaches. This development reveals a critical gap in how organizations conceptualize algorithmic harm: most interventions target awareness when the real problem is structural illegibility.

The Awareness Theater in AI Safety

South Korea's legislation mandates that companies assess mental health impacts, but the framework assumes that awareness of potential harms leads to mitigation capability. This mirrors the awareness-capability gap I explore in platform coordination research (Kellogg et al., 2020). Knowing that algorithmic recommendation systems can induce anxiety or that content moderation algorithms expose workers to traumatic material does not automatically generate the organizational competence to address these harms. The law creates compliance requirements without providing the structural schemas necessary for meaningful intervention.

Consider what "mental health impact" actually means in algorithmic systems. Is it the immediate affective response to algorithmically curated content? The long-term psychological effects of working under algorithmic management? The cumulative stress of navigating opaque recommendation systems? Each requires a different organizational response, but the legislation treats "mental health assessment" as a discrete, checkable task rather than an ongoing coordination challenge.

The Topology of Algorithmic Harm

The South Korean approach illuminates a deeper problem in AI governance: regulators are building topographical maps when organizations need topological understanding. Topography provides specific coordinates (do not show violent content to users under 18; assess worker stress levels quarterly), but topology reveals structural constraints (algorithmic amplification creates power-law distributions in exposure; optimization for engagement metrics inherently conflicts with psychological safety).
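
To make this concrete, consider a deliberately minimal simulation, a toy model rather than any platform's actual ranking system: impressions go to items in proportion to accumulated engagement, and being shown generates more engagement. The item counts and reinforcement rule below are illustrative assumptions.

```python
import random
from collections import Counter

# Toy model of engagement-optimized ranking (illustrative assumptions only):
# each impression goes to an item with probability proportional to its
# accumulated engagement, and shown items accumulate more engagement.
# No item is intrinsically "better"; concentration emerges from the loop.

random.seed(42)

N_ITEMS = 500
N_IMPRESSIONS = 100_000

engagement = [0.1] * N_ITEMS  # identical starting weight for every item
exposure = Counter()

for _ in range(N_IMPRESSIONS):
    # Engagement-proportional selection: the optimization target itself.
    item = random.choices(range(N_ITEMS), weights=engagement, k=1)[0]
    exposure[item] += 1
    engagement[item] += 1.0  # being shown raises future rank: amplification

top_k = max(1, N_ITEMS // 100)  # the top 1% of items
top_share = sum(c for _, c in exposure.most_common(top_k)) / N_IMPRESSIONS
print(f"Top 1% of items receive {top_share:.0%} of all impressions")
```

Under these assumptions the top 1% of items typically capture roughly ten times the exposure an even rotation would give them. A topographical rule about which items to exclude changes which items occupy the head of the distribution, not the shape of the distribution itself.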

This distinction matters because procedural compliance with mental health assessments can coexist with systems that structurally generate psychological harm. An organization can conduct quarterly surveys on worker well-being while maintaining algorithmic management systems that create precisely the anxiety and precarity those surveys measure (Schor et al., 2020). The assessment becomes a ritual that documents harm rather than a mechanism that prevents it.

The Coordination Inversion Problem

What makes South Korea's legislation theoretically interesting is that it attempts to regulate harms that emerge from coordination mechanisms the law itself does not recognize. Mental health impacts from algorithmic systems are not bugs to be fixed but inherent properties of how platforms coordinate behavior. Recommendation algorithms optimize for engagement precisely because psychological arousal drives interaction. Content moderation at scale requires exposure to harmful material because human judgment remains necessary for edge cases. Algorithmic management systems create stress because uncertainty about evaluation criteria is a feature, not a flaw, of maintaining worker compliance (Rahman, 2021).

The legislation assumes organizations possess the competence to identify and mitigate these harms when given appropriate incentives. But platform coordination inverts the relationship between competence and participation. Organizations do not start with the capability to manage algorithmic mental health impacts and then deploy systems; they deploy systems first and develop folk theories about psychological effects through trial and error. Like the algorithmic folk theories platform workers develop, these theories may increase awareness without improving outcomes (Gagrain et al., 2024).

What a Structural Schema Would Look Like

Effective intervention requires understanding the structural features that generate psychological harm across algorithmic contexts. Rather than platform-specific assessments, organizations need schema-level knowledge: how optimization metrics shape information exposure, how illegibility in evaluation systems produces anxiety, how algorithmic amplification transforms individual variance into power-law outcome distributions. This is adaptive expertise rather than routine compliance (Hatano & Inagaki, 1986).
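
As a hypothetical sketch of what such schema-level knowledge could look like in practice, an organization might measure the structural shape of exposure directly rather than checking platform-specific boxes. The Gini coefficient below is one candidate concentration metric among many, and both feeds are fabricated for illustration.

```python
def gini(counts):
    """Gini coefficient of an exposure distribution: 0 means perfectly even
    exposure; values near 1 mean extreme concentration. Uses the sorted-value
    identity G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Two hypothetical feeds with the same item count, both able to pass the
# same content rules; only the structural metric distinguishes them.
even_feed = [200] * 500                                      # uniform rotation
amplified_feed = [10_000 // rank for rank in range(1, 501)]  # Zipf-like head

print(f"even feed:      Gini = {gini(even_feed):.2f}")       # 0.00
print(f"amplified feed: Gini = {gini(amplified_feed):.2f}")  # roughly 0.7
```

The point is not this particular metric but the kind of knowledge it encodes: it transfers across platforms because it tracks the structural feature, concentration produced by amplification, rather than any platform's surface configuration. That portability is what separates adaptive expertise from routine compliance.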

South Korea's legislation represents progress in recognizing that algorithmic systems create distinct forms of organizational harm. But without addressing the coordination mechanisms that generate these harms, we risk creating elaborate assessment rituals that document problems organizations lack the structural understanding to solve. The question is not whether companies are aware of mental health impacts. The question is whether awareness-based regulation can address harms that emerge from coordination structures the regulations do not name.