Malicious AI Extensions and the Invisible Cage Problem in Enterprise Security

When the Tool Becomes the Threat

A newly disclosed browser security campaign deserves serious organizational attention. According to Keep Aware's 2026 State of Browser Security Report, malicious AI assistant extensions harvested LLM chat histories and browsing data from platforms including ChatGPT and DeepSeek, accumulating nearly 900,000 installs and penetrating more than 20,000 enterprise tenants before the campaign was identified. This is not a story about a novel attack vector in the narrow technical sense. It is a story about what happens when organizations deploy workers into algorithmically mediated environments without adequate structural understanding of how those environments actually function.

The Competence Assumption That Was Never Valid

Classical organizational security frameworks assume that workers arrive with baseline competence in the tools they use. The logic is procedural: publish acceptable use policies, conduct annual compliance training, and document the rules. The Keep Aware data directly falsifies this assumption in the AI tooling context. When 41% of employees are using AI web tools while browser-based security monitoring remains a blind spot, the procedural documentation model has already failed before any malicious actor intervenes. Rahman (2021) describes how algorithmic systems create what he calls an "invisible cage," a structure of constraints that shapes worker behavior without workers necessarily perceiving the shape of those constraints. The employees installing these extensions were not reckless. They were operating inside an environment whose structural features were opaque to them.

Awareness Does Not Equal Capability

There is a precise theoretical distinction worth drawing here. Algorithmic literacy research consistently demonstrates a gap between awareness and capability: knowing that algorithms and extensions mediate your work environment does not translate into knowing how to respond effectively to that mediation (Kellogg, Valentine, & Christin, 2020). The 900,000 installs represent workers who were, in many cases, aware that browser extensions exist and that AI tools are potentially sensitive. Awareness at that level of abstraction is insufficient. What was absent was schema-level understanding of how browser extension permissions interact with LLM session data, how API calls propagate across enterprise identity systems, and how the surface area of a browser-as-operating-system differs structurally from a traditional endpoint. Sundar (2020) argues that as machine agency increases in work environments, the cognitive demands on human users shift from execution to evaluation. That shift requires adaptive expertise, not procedural recall.
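To make that schema-level understanding concrete, consider what an extension's manifest actually grants. The sketch below is a hypothetical Manifest V3 manifest expressed as a TypeScript object so its structural relationships can be annotated; the field names follow Chrome's published Manifest V3 schema, but the extension name and matched domains are placeholders, not details from the campaign.

```typescript
// Hypothetical manifest, annotated for structure. Placeholder values only.
const hypotheticalManifest = {
  manifest_version: 3,
  name: "Helpful AI Sidebar", // a benign-sounding surface feature
  version: "1.0.0",

  // Host permissions grant access to matching origins.
  host_permissions: [
    "https://chatgpt.com/*",
    "https://chat.deepseek.com/*",
  ],

  // A content script is injected into every matching page and runs with
  // full read access to that page's DOM, including the rendered chat
  // transcript. This happens inside the browser's rendering process,
  // where no traditional endpoint agent observes it.
  content_scripts: [
    {
      matches: ["https://chatgpt.com/*", "https://chat.deepseek.com/*"],
      js: ["reader.js"],
      run_at: "document_idle",
    },
  ],

  // "storage" lets harvested data persist locally; combined with ordinary
  // outbound requests from the extension's own context, exfiltration needs
  // no exploit at all, only the permissions the user approved at install.
  permissions: ["storage"],
};

// The schema-level point: nothing above is individually suspicious. The
// risk is relational: broad host matches on sensitive origins, plus an
// injected script, plus persistence.
console.log(Object.keys(hypotheticalManifest));
```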

The Routine Expertise Failure

Hatano and Inagaki (1986) distinguish between routine expertise, which produces reliable performance in familiar contexts, and adaptive expertise, which enables problem-solving when structural conditions change. Enterprise AI security is currently exhibiting precisely the failure mode that distinction predicts: organizations trained workers on legacy endpoint security procedures, and those procedures are structurally mismatched to a threat landscape organized around browser-resident AI tools. The Keep Aware report notes that many enterprises still treat the browser as an extension of network or endpoint security. That categorical error is a schema problem, not a policy problem. The browser in 2026 is architecturally distinct from the browser in 2015, and security intuitions calibrated to the older structure do not transfer.
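One concrete illustration of what a browser-layer control looks like, as opposed to an endpoint-layer one: a minimal sketch, assuming Chrome's documented ExtensionSettings enterprise policy, which can set a default-deny installation posture and fence extensions off from specific origins at runtime. The extension ID and host patterns below are placeholders.

```typescript
// Illustrative sketch of a browser-native control surface. Policy keys
// follow Chrome's ExtensionSettings enterprise policy; values are
// placeholders, not a deployable configuration.
const extensionSettingsPolicy = {
  // Default rule: block installation of anything not explicitly allowed.
  "*": {
    installation_mode: "blocked",
    // Even for later exceptions, deny runtime access to LLM chat origins.
    runtime_blocked_hosts: ["*://chatgpt.com", "*://*.deepseek.com"],
  },
  // A vetted extension, allowed by its store ID (placeholder value).
  aaaabbbbccccddddeeeeffffgggghhhh: {
    installation_mode: "allowed",
  },
};

// Serialized to JSON and delivered through managed-browser policy. The
// architectural point: this control is expressed in the browser's own
// permission vocabulary, which network and endpoint tooling can neither
// see nor enforce.
console.log(JSON.stringify(extensionSettingsPolicy, null, 2));
```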

What the Enterprise Search Forecast Compounds

This problem is not static. Mordor Intelligence projects the enterprise search market to grow at a CAGR above 9% through 2031, driven specifically by AI-powered knowledge management platforms. As organizations embed more LLM-adjacent infrastructure into daily workflows, the attack surface described in the Keep Aware report will expand proportionally. Workers will interact with a wider array of AI-mediated tools, each with its own permission architecture and data flow logic. The variance in security outcomes across workers with nominally identical access will increase, not decrease. This is precisely the dynamic that the Algorithmic Literacy Coordination framework predicts: power-law distributions in outcomes emerge from algorithmic amplification of initial differences in structural understanding (Schor et al., 2020).
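The amplification claim can be made concrete with a toy model. The simulation below is an illustrative assumption, not a result from Schor et al. (2020): workers start inside a narrow band of structural understanding, and each AI-mediated interaction compounds that understanding multiplicatively rather than additively.

```typescript
// Toy model: multiplicative compounding of small initial differences.
// All parameters are assumptions chosen for illustration.
function simulateOutcomes(workers = 10_000, rounds = 50): number[] {
  const outcomes: number[] = [];
  for (let i = 0; i < workers; i++) {
    // Initial structural understanding: a narrow band, 0.95 to 1.05.
    const understanding = 0.95 + Math.random() * 0.1;
    let outcome = 1.0;
    for (let r = 0; r < rounds; r++) {
      // Each interaction multiplies the running outcome by a factor tied
      // to initial understanding, plus small noise: advantage compounds.
      outcome *= 1 + 0.5 * (understanding - 1) + (Math.random() - 0.5) * 0.02;
    }
    outcomes.push(outcome);
  }
  return outcomes.sort((a, b) => a - b);
}

const sorted = simulateOutcomes();
const median = sorted[Math.floor(sorted.length / 2)];
const p99 = sorted[Math.floor(sorted.length * 0.99)];
console.log(`initial spread ~1.1x; p99/median outcome spread: ${(p99 / median).toFixed(1)}x`);
```

Under these assumed dynamics, an initial spread of roughly 1.1x between the best- and worst-prepared workers widens to a multi-fold spread in outcomes, which is exactly the variance growth described above.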

The Organizational Response That Would Actually Work

The intervention that organizations should be considering is not another acceptable use policy. It is schema induction at the structural level: training that teaches workers the topological features of browser-based AI environments, how permissions propagate, where session data resides, and what trust signals are and are not reliable in extension ecosystems. Gentner's (1983) structure-mapping theory predicts that training organized around structural relationships, rather than surface-level procedures, produces far transfer to novel threat contexts. A worker who understands why browser extensions have privileged access to LLM sessions can evaluate an extension they have never encountered before. A worker who has only memorized a list of prohibited extensions cannot. The malicious extension campaign succeeded in part because organizations had invested in the wrong level of the problem.
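What schema-level evaluation looks like in practice can be sketched as a heuristic. The scoring rules below are illustrative assumptions, not a vetted detection policy; they encode the structural relationship described above: sensitive-origin access, plus injected code, plus local persistence.

```typescript
// Minimal sketch of schema-level evaluation, contrasted with a blocklist.
// Rules are illustrative assumptions that key on structure, not identity.
interface ManifestLike {
  host_permissions?: string[];
  permissions?: string[];
  content_scripts?: { matches: string[] }[];
}

const SENSITIVE_ORIGINS = [/chatgpt\.com/, /deepseek\.com/, /claude\.ai/];

function structuralRisk(m: ManifestLike): string[] {
  const findings: string[] = [];
  const hosts = [
    ...(m.host_permissions ?? []),
    ...(m.content_scripts ?? []).flatMap((cs) => cs.matches),
  ];
  if (hosts.some((h) => h.includes("<all_urls>") || h.startsWith("*://*"))) {
    findings.push("broad host access: can read any site the user visits");
  }
  if (hosts.some((h) => SENSITIVE_ORIGINS.some((rx) => rx.test(h)))) {
    findings.push("matches LLM chat origins: can read session transcripts");
  }
  if ((m.content_scripts ?? []).length > 0) {
    findings.push("injects code into matched pages");
  }
  if ((m.permissions ?? []).includes("storage")) {
    findings.push("persists data locally (possible staging for egress)");
  }
  return findings;
}

// A worker or reviewer applying this schema can reason about an extension
// they have never seen before; the evaluation keys on structural features,
// not on membership in a memorized prohibited list.
console.log(structuralRisk({
  host_permissions: ["https://chatgpt.com/*"],
  permissions: ["storage"],
  content_scripts: [{ matches: ["https://chatgpt.com/*"] }],
}));
```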

References

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155-170.

Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). Freeman.

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Rahman, H. A. (2021). The invisible cage: Workers' reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945-988.

Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5), 833-861.

Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88.