We're in The Matrix, but not THAT Matrix
A Pop Culture Association 2026 talk arguing that Cypher, not Neo, best captures contemporary algorithmic coordination: a triadic communication structure that produces the cognitive competency of algorithmacy.
I. We’re in The Matrix, but not THAT Matrix
Everyone in this room has seen The Matrix.
And the scene you remember is Neo dodging bullets. Neo suspended in digital rain. Neo waking up to discover reality was a simulation.
The philosophical tradition remembers Neo too. Putnam’s brain-in-a-vat. Descartes’ evil demon. Plato’s cave with better special effects.
Twenty-five years of treating The Matrix as a story about simulated reality. About whether what we perceive is real. About whether we might be deceived by the architecture of our experience.
Wrong scene.
The scene that predicted our actual future comes earlier. Cypher sits alone in front of a bank of monitors. Green code cascades down every screen. Trinity asks him what he sees. He says: “I don’t even see the code. All I see is blonde, brunette, redhead.”
Cypher learned to read the Matrix. Not to escape it. Not to unplug from it. He developed a fluency in its underlying communication protocols so total that the mediation became invisible.
Now consider a software engineer in 2026. Fifteen terminal windows. Claude Code CLI instances running simultaneously. Streams of code cascade in parallel, materializing function definitions, executing test suites, resolving dependency conflicts. The engineer did not write this code. The engineer composed a natural-language specification that an algorithmic system translated into coordinated computational work.
The engineer is Cypher.
Not Neo unplugging from a simulation. Not a battery powering robot overlords. Cypher. Reading the code. Coordinating distributed work through an algorithmic intermediary.
What competency is the engineer exercising? That competency has no name in our existing theoretical vocabulary. I am going to give it one.
II. Why Communication Theory Cannot See Cypher
Communication theory offers two frameworks for understanding human interaction with technology. Both assume the wrong geometry.
Computer-Mediated Communication treats technology as a channel. Two humans communicate through a neutral medium. Walther’s 1996 account of social information processing asked whether people could develop relationships without nonverbal cues. The answer was yes, given enough time. But the medium itself had no agenda. Email did not optimize for engagement. A telephone did not learn from your conversations and reshape subsequent calls. CMC assumed the machine was glass: transparent and inert. Neo’s world. Two humans talking through a window.
Human-Machine Communication reversed the polarity. Guzman and Lewis proposed in 2020 that machines are not channels but communicative partners. When you talk to Siri, Siri is not a window to another human. Siri is the interlocutor. But HMC replaced one dyad with another. Human-to-human through machine became human-to-machine, full stop. A conversation with the machine itself. Still not Cypher’s world.
The engineer does not communicate through Claude Code the way CMC describes email. The engineer does not communicate with Claude Code the way HMC describes chatbots. The engineer communicates through Claude Code to coordinate distributed cognitive work with systems, repositories, and eventually other humans who never appear in the interaction.
That is a triad. Human. Algorithm. Human. And the algorithm is not a neutral channel. The algorithm has its own optimization objectives, its own interpretive framework, its own logic of transformation. It produces coordination outputs without experiencing any of the coordination it produces. The input that enters the system is not the output that emerges.
Simmel identified the stakes of this geometry in 1908. The qualitative difference between the dyad and the triad is not arithmetic. In a dyad, each party confronts only the other. No collectivity stands above them. The triad introduces mediation, exploitation, and opacity as structural possibilities.
CMC gave us Neo’s world: humans talking through transparent glass. HMC gave us a conversation with the machine itself. Cypher’s world is neither. Cypher reads an intermediary’s communication protocols to coordinate work the intermediary transforms according to its own logic. No existing communication framework addresses that.
III. Algorithmacy: What Cypher Learned
Cypher learned a competency that has emerged twice before in human history.
The first emergence. For roughly two hundred thousand years, humans coordinated exclusively through speech. Oral cultures developed sophisticated cognitive technologies: mnemonic formulas, rhythmic patterns, situational reasoning, participatory identification. Havelock documented in 1963 how the entire Homeric tradition functioned as a mnemonic technology for encoding cultural knowledge in memorable speech. The competency required to navigate oral coordination was oracy.
The second emergence. Around 3200 BCE, the proto-cuneiform tablets from Uruk introduced a new coordination medium: writing. Ong argued in 1982 that this was not simply a new tool for old thoughts. Writing restructured consciousness. It enabled abstraction, analytical separation, the capacity to examine one’s own reasoning as an external object. Havelock called it the separation of the knower from the known. The competency required to navigate symbolic coordination was literacy.
The third emergence is happening now.
When you tap a button to request a ride, an algorithm matches you with a driver, sets the price, routes the vehicle, and evaluates the outcome. You did not represent a transaction. You brought a transaction into existence through an intermediary that transformed your input according to criteria you cannot see and do not control.
Three registers of coordination. In oral coordination, communication and coordination occupy the same space. You hand me the thing. Both parties observe the exchange. In literate coordination, a symbol separates from its referent. I write you a promissory note. The note travels. But the artifact remains stable between writing and reading. In algorithmic coordination, the communicative act does not represent the transaction. It constitutes it. And the intermediary transforms the communication between input and output.
The competency required to navigate this constitutive register is what I am calling algorithmacy.
Oracy. Literacy. Algorithmacy.
Physical register. Symbolic register. Constitutive register.
The green code on Cypher’s screens is the constitutive register made visible. He is not watching a representation of the Matrix. He is reading the communication system that produces it.
Eslami and colleagues demonstrated in 2015 that 62.5% of the Facebook users they studied did not know the News Feed was algorithmically curated. They attributed the visibility of friends’ posts to the friends’ behavior. They had developed behavioral adaptations to a system whose existence they did not recognize. Navigating the Matrix without knowing they were in it.
Cameron’s four-year ethnography of ridehailing drivers, published in Organization Science in 2022, documented workers developing what she called “relational” and “efficiency” strategies, creating meaning through micro-choices the algorithm structured. In her 2024 follow-up in Administrative Science Quarterly, she showed how algorithmic management manufactures consent through constant and confined choices. The workers develop sophisticated reasoning. But that reasoning is constituted by the system it navigates. The Matrix teaching its inhabitants how to move through it.
Shapiro identified in 2018 a reasoning style he called “qualculation,” a blend of intuition and strategic calculation that departs from the rational-actor model. DeVito documented in 2021 how users build working models of algorithmic behavior, ranging from functional awareness to structural causal models, under continuous adaptive revision. Folk theories. Working maps of the Matrix built through trial and error.
The mechanism driving this is co-optation. Co-optation entered organizational theory through Selznick’s 1949 study of the Tennessee Valley Authority. The TVA enrolled local political opponents into its governance structure, giving them advisory roles that simultaneously defused their resistance and aligned their interests with the federal program. Enrollment produced alignment.
Stark and Vanden Broeck extended this concept to platforms in 2024: whereas actors in hierarchies command, in markets they contract, and in networks collaborate, on platforms they are co-opted. Participation itself produces the competency required to participate. No university offers a degree in algorithmic resume optimization. The candidate learns through rejection and silence that formatting breaks the parser, that keyword density affects ranking, that application timing matters. The system trains the candidate by shaping behavior. The candidate trains the system by providing data.
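The loop can be sketched as a toy simulation. Everything here is invented for illustration, the cutoff, the update rules, the numbers; the point is only the shape of the dynamic: each side adapts to data the other side produces.

```python
# A toy model of the co-optation loop: the applicant adapts through rejection,
# and the screener recalibrates on the data applicants provide. All numbers
# here are invented for illustration; no real system works this simply.
threshold = 0.50   # screener's hidden cutoff (say, keyword density)
density = 0.10     # applicant's starting keyword density
history = []

for application in range(200):
    accepted = density >= threshold
    history.append(accepted)
    if not accepted:
        density = min(1.0, density + 0.010)    # learning through rejection and silence
    threshold = min(0.95, threshold + 0.001)   # the system retrains on behavior it shaped

print(f"acceptance rate after 200 applications: {sum(history) / len(history):.2f}")
```

The dynamic is a chase: the applicant converges on the cutoff, the cutoff moves in response, and the competency keeps being produced by participation itself.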
Oracy developed through immersion. Nobody designed oral culture. Literacy developed through instruction. Schools taught writing. But the book did not adapt to the reader. Algorithmacy develops through co-optation. The intermediary is active, adaptive, and optimizing for objectives that are not the participant’s.
Nobody taught Cypher to read the code. The machines did not offer him a course. He learned by being inside the system long enough that the system’s logic became his logic.
IV. ALC: Seeing Blonde, Brunette, Redhead
Algorithmacy is the cognitive competency. But a competency is not the same thing as the environment that produces it.
Literacy is a competency. The alphabet is a communication system. You do not develop literacy in the abstract. You develop it through a specific system of inscription that structures what can be written, how it is read, and who gets to participate.
Application Layer Communication is the communication system through which algorithmacy develops.
ALC describes coordination where machines intermediate between humans who lack direct relationships. The engineer composing a specification for Claude Code. The job seeker formatting a resume for an Applicant Tracking System. The driver responding to surge pricing signals. In each case, the human communicates through an algorithmic intermediary to coordinate with humans on the other side who never appear. In each case, the human is reading the green code, whether they know it or not.
Three structural properties distinguish ALC from prior communication systems.
Asymmetric interpretation. The algorithm determines meaning. Your resume says one thing to you and something categorically different to the parser. Your prompt means one thing in natural language and produces outputs shaped by training distributions you cannot inspect. Users must learn to structure inputs in formats the algorithm can parse. The engineer learns prompt patterns. The job seeker learns keyword strategies. The driver learns which rides to accept. The user adapts to the system’s interpretive logic, not the reverse.
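The asymmetry can be made concrete with a toy parser. This is a minimal sketch under invented assumptions, the keyword list and the scoring rule are made up, and real applicant tracking systems are proprietary and far more complex, but it shows how the same experience, phrased two ways, reads identically to a human and categorically differently to the machine.

```python
# Hypothetical illustration of asymmetric interpretation. The keyword list and
# scoring rule are invented; real ATS parsers are proprietary and more complex.
import re

REQUIRED_KEYWORDS = {"python", "kubernetes", "ci/cd"}  # assumed job-posting terms

def parse_resume(text: str) -> set[str]:
    """Reduce a resume to the token set the parser can 'see'."""
    return set(re.findall(r"[a-z0-9/+.#-]+", text.lower()))

def ats_score(text: str) -> float:
    """Fraction of required keywords the parser detects."""
    return len(parse_resume(text) & REQUIRED_KEYWORDS) / len(REQUIRED_KEYWORDS)

# Two descriptions of the same experience. A human reads them as equivalent.
plain = "Built Python services, deployed on Kubernetes, owned CI/CD pipelines."
styled = "Shipped backend services, orchestrated with K8s, automated continuous delivery."

print(ats_score(plain))   # 1.0 — every required keyword detected
print(ats_score(styled))  # 0.0 — synonyms hide the same experience from the parser
```

Nothing in the rejection reveals which rule fired; the candidate learns the parser's interpretive logic only by trial.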
Constitutive orchestration. The algorithm does not transmit messages. It constitutes the coordination environment. It determines matches, rankings, visibility, and information flows. And the competency to navigate this environment develops through use, not instruction. Chung found in 2025 that users with higher algorithmic knowledge were less likely to correct misinformation. Knowledge of the intermediary does not predict the ability to coordinate through it. Teaching someone how ATS parsing works is algorithmic literacy. What happens to their cognition after two hundred applications is algorithmacy.
Stratified fluency. Competency exists on a continuum, and the continuum is not neutral. Sap and colleagues found in 2019 that content moderation algorithms flagged African American Vernacular English as offensive at rates 1.5 to 2 times higher than standard English. The competency required for a Black user to navigate the triad demands a forced code-switching that a white user never performs. Street warned in 1984 that what counts as literacy is defined by the dominant group. Algorithmacy is no different. Not everyone reads the Matrix from the same position.
Cypher’s “blonde, brunette, redhead” is full ALC fluency. The code is still cascading. The mediation has not disappeared. But the communication system has become so deeply acquired that the constitutive register reads as transparent.
And here is the cost. Jakesch and colleagues demonstrated in a 2023 randomized experiment that users given an opinionated AI writing assistant were twice as likely to write essays agreeing with the AI and significantly more likely to hold that opinion afterward. They called it latent persuasion. The participants navigated a triadic structure and the intermediary reshaped their cognitive orientation through the practice of using it. Co-optation producing competency and capture simultaneously. Cypher could read the Matrix. The Matrix also decided what Cypher wanted for dinner.
V. Neo Was Wrong. So Were Searle and Jackson.
Back to The Matrix.
Neo’s story is the brain-in-a-vat. Putnam argued in 1981 that a brain in a vat cannot coherently refer to brains or vats. Chalmers extended the argument in 2005, proposing that Neo’s beliefs about the world are correct even inside the simulation because his terms refer to computationally implemented reality. Both assume the problem is deception. That we might inhabit a false reality without knowing it.
We are not in Neo’s Matrix.
Nobody is deceived about the existence of algorithmic systems. The Eslami finding is more precise: people adapt their behavior to systems whose structure they do not perceive. They are not deceived about reality. They are navigating a triadic coordination system while perceiving only a dyad. Not illusion. Structural opacity within a real system that reorganizes cognition through participation.
Neo asked: is this real? Cypher never asked that question. Cypher asked: can I read it fast enough to be useful?
Now consider Searle’s Chinese Room. A person follows syntactic rules for manipulating Chinese characters, producing outputs indistinguishable from a native speaker. Searle argued in 1980 that the person does not understand Chinese. Syntax does not produce semantics. The room processes symbols without comprehension.
Searle asked whether the machine understands.
The engineer monitoring fifteen Claude Code terminals is inside that room. Code cascades. Functions materialize. Tests pass. The engineer reads the outputs and coordinates work through them. The question is not whether Claude understands the code. The question is what happens to the human who learns to coordinate through a system that processes symbols without comprehension.
Borg argued in Inquiry in 2025 that LLM outputs should be viewed as genuinely meaningful despite lacking understanding. The meaning is real. The coordination works. But the human inside the triadic structure develops a competency shaped entirely by the intermediary’s logic. Two hundred applications through an ATS, and the candidate has learned to write for the parser. Symbols processed without comprehension on both sides of the triad, for different reasons.
Searle’s operator was hypothetical. Cypher is not. Every user of every platform is inside the Chinese Room, coordinating through a system that manipulates symbols without understanding them, developing real competency in the process.
Now consider Jackson’s Mary. She knows everything physical about color but has never seen red. Jackson asked in 1982 whether she learns something new upon leaving her black-and-white room.
Replace Mary with the machine.
An LLM possesses all the statistical knowledge of language. Every distributional pattern. Every co-occurrence. Every contextual embedding. It produces meaning. It coordinates human activity. It writes sonnets and legal briefs and therapy responses. But it has never experienced meaning. It operates in Mary’s room permanently. Complete formal knowledge. Zero phenomenal experience.
What happens to human coordination when the intermediary produces without experiencing? When the system that constitutes the coordination environment generates outputs indistinguishable from comprehension while possessing none? When the co-optation loop trains humans to communicate through an entity that has all the distributional knowledge in the world and no experience of what any of it means?
That is Cypher’s real predicament. Not that the Matrix is fake. The Matrix is real. The coordination works. The blonde, the brunette, the redhead are really there. But the system producing those images has never seen a color in its life. And Cypher’s cognition has been restructured by the practice of reading them.
VI. Close
We are not batteries powering robot overlords. We have undergone a literacy transition.
The brain-in-a-vat assumed we needed to escape. We do not. We need to learn to read.
The Chinese Room asked whether the machine understands. It does not. The question is what we become by coordinating through it.
Mary’s Room asked what the machine is missing. The question is what we are developing in the presence of that absence.
Oracy. Literacy. Algorithmacy.
The task is not to unplug from the Matrix. The task is to understand what Cypher learned. And who gets to learn it.
Thank you.
Anticipated Questions
“How is algorithmacy different from algorithmic literacy?”
Algorithmic literacy applies Street’s autonomous model: teach people how algorithms work and they will navigate them better. Algorithmacy insists on the ideological model: the competency is always embedded in the power relations of the triadic structure producing it. Chung found in 2025 that users with higher algorithmic knowledge were less likely to correct misinformation. Knowledge of the intermediary does not predict the ability to coordinate through it.
“Is this just prompt engineering?”
Prompt engineering is algorithmacy’s most visible surface behavior. Reducing algorithmacy to prompt engineering is like reducing literacy to penmanship. Tour and colleagues proposed a prompt literacy framework in 2025 drawing on Luke and Freebody’s Four Resources Model. That captures one interface with one type of system. Algorithmacy is the deeper cognitive competency that transfers across platforms, interfaces, and interaction types.
“What about Ulmer’s electracy?”
The terminological debt is real. Ulmer proposed electracy in 2003 as a third apparatus after orality and literacy. Electracy addresses a 300-year civilizational shift encompassing all electronic media. Algorithmacy targets the specific cognitive competency produced by triadic coordination. Electracy is grounded in aesthetics and affect. Algorithmacy is grounded in opacity and recursive feedback. Electracy was theorized before machine-learning mediation existed. Algorithmacy addresses a world where the intermediary learns from you while you learn to navigate it.
“Isn’t this technological determinism?”
The triadic structure predates digital technology entirely. Selznick documented co-optation in 1949. Simmel theorized the triad in 1908. Digital technology industrialized the structure. It did not invent it.
“Where is agency?”
Agency in algorithmacy is reactive. The intermediary acts first; the subject responds. Cameron documented sophisticated strategic agency among ridehailing drivers, but that agency was constituted by the system it navigates. Co-optation produces capacity and capture simultaneously. The engineer monitoring Claude Code exercises genuine cognitive skill. That skill was produced through the practice of navigating the system. Agency exists. It is not autonomous.
“The Chinese Room inversion is interesting, but Searle’s point was about understanding, not about the human operator.”
Correct. Searle was concerned with machine semantics. The inversion shifts the question from the machine’s comprehension to the human’s transformation. Borg in 2025 and Tritschler in 2025 have begun examining LLM meaning-production on its own terms. The algorithmacy framework adds the relational question: not whether meaning is present on either side but what kind of coordination competency develops when the intermediary produces meaning without experiencing it.
“Is algorithmacy testable?”
Yes. Users with greater algorithmacy should demonstrate abstract reasoning about algorithmic behavior, cross-platform transfer of coordination competence, and predictive accuracy about system responses. The anchoring research question of my dissertation: can shared literacy in an algorithmically mediated communication system produce coordination among strangers that matches or exceeds coordination achieved through direct communication?
Full Citations for All References
- Borg, E. (2025). LLMs, Turing tests and Chinese rooms: The prospects for meaning in large language models. Inquiry. DOI: 10.1080/0020174X.2024.2446241
- Cameron, L. D. (2022). “Making out” while driving: Relational and efficiency games in the gig economy. Organization Science, 33(1), 231–252.
- Cameron, L. D. (2024). The making of the “good bad” job: How algorithmic management manufactures consent through constant and confined choices. Administrative Science Quarterly, 69(2).
- Chalmers, D. J. (2005). The matrix as metaphysics. In C. Grau (Ed.), Philosophers Explore The Matrix. Oxford University Press.
- Chung, M. (2025). When knowing more means doing less: Algorithmic knowledge and digital (dis)engagement among young adults. Harvard Kennedy School Misinformation Review, 6(5).
- DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 339, 1–35.
- Eslami, M., et al. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. CHI 2015, 153–162.
- Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human-machine communication research agenda. New Media & Society, 22(1), 70–86.
- Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100.
- Havelock, E. A. (1963). Preface to Plato. Harvard University Press.
- Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly, 32(127), 127–136.
- Jackson, F. (1986). What Mary didn’t know. The Journal of Philosophy, 83(5), 291–295.
- Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-writing with opinionated language models affects users’ views. CHI 2023, Article 111.
- Ong, W. J. (1982). Orality and Literacy: The Technologizing of the Word. Routledge.
- Putnam, H. (1981). Reason, Truth and History. Cambridge University Press.
- Sap, M., et al. (2019). The risk of racial bias in hate speech detection. ACL 2019, 1668–1678.
- Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.
- Selznick, P. (1949). TVA and the Grass Roots. University of California Press.
- Shapiro, A. (2018). Between autonomy and control: Strategies of arbitrage in the “on-demand” economy. New Media & Society, 20(8), 2954–2971.
- Simmel, G. (1950). The Sociology of Georg Simmel (K. H. Wolff, Ed. & Trans.). Free Press. (Original work published 1908)
- Stark, D., & Vanden Broeck, P. (2024). Principles of algorithmic management. Organization Theory, 5(2), 1–24.
- Street, B. V. (1984). Literacy in Theory and Practice. Cambridge University Press.
- Tour, E., et al. (2025). Conceptualizing and operationalizing prompt literacy for English language learners. Journal of Adolescent & Adult Literacy.
- Tritschler, M. (2025). Undead signs: On the possibility of a computational illusion of meaning. Philosophy & Technology, 38(4), 1–21.
- Ulmer, G. L. (2003). Internet Invention: From Literacy to Electracy. Longman.
- Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43.
Roger Hunt