The FIDO Alliance's Authentication Gambit and the Competence Problem in Agentic Delegation
The Specific Problem on the Table
This week, the FIDO Alliance announced a collaboration with Google and Mastercard to develop authentication standards for AI agents that make purchases on behalf of users. The technical problem is straightforward to state: when an AI agent initiates a financial transaction, the merchant needs to verify not just the identity of the human account holder, but also the authorization scope of the agent acting in their name. What counts as consent when the purchasing decision is made by a system, not a person?
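To make the verification problem concrete, here is a minimal sketch of what a scoped delegation credential could look like. Everything in it is an assumption for illustration: the field names, the scope checks, and the HMAC signing (a stand-in for the asymmetric cryptography a real deployment would use) are mine, not the FIDO Alliance's draft standard.

```python
# Illustrative sketch only: field names, HMAC signing, and scope checks are
# assumptions, not the FIDO Alliance's draft. A real deployment would use
# asymmetric keys so the merchant never holds the user's secret.
import hashlib
import hmac
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class DelegationScope:
    max_amount_cents: int          # per-transaction ceiling
    allowed_categories: list[str]  # e.g. ["groceries", "household"]
    expires_at: float              # Unix timestamp after which the grant is void

@dataclass
class AgentCredential:
    user_key_id: str  # identifies the human account holder
    agent_id: str     # identifies the delegated agent
    scope: DelegationScope
    signature: str = ""

    def sign(self, user_secret: bytes) -> None:
        payload = json.dumps(
            {"user": self.user_key_id, "agent": self.agent_id,
             "scope": asdict(self.scope)},
            sort_keys=True,
        ).encode()
        self.signature = hmac.new(user_secret, payload, hashlib.sha256).hexdigest()

def merchant_accepts(cred: AgentCredential, amount_cents: int,
                     category: str, user_secret: bytes) -> bool:
    """Verify identity (signature) and authorization scope as separate checks."""
    expected = AgentCredential(cred.user_key_id, cred.agent_id, cred.scope)
    expected.sign(user_secret)
    return (hmac.compare_digest(expected.signature, cred.signature)  # who signed
            and amount_cents <= cred.scope.max_amount_cents          # how much
            and category in cred.scope.allowed_categories            # what for
            and time.time() < cred.scope.expires_at)                 # until when
```

The point of the sketch is the separation: the signature check answers who authorized the agent, while the scope checks answer what that authorization meant. The rest of this piece is about why the second half is the harder one.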
Most coverage of this story treats it as a security and fraud prevention challenge. That framing is not wrong, but it is incomplete. The harder problem is not technical authentication. The harder problem is that delegating financial agency to an AI agent requires the human delegator to have an accurate structural model of what the agent can and cannot do. Most users do not have that model. They have folk theories.
Delegation Without Schema
Hancock, Naaman, and Levy (2020) draw a distinction between AI systems that augment human communication and those that replace it. Agentic purchasing sits firmly in the replacement category. The user is not assisted in making a decision; the decision is made on the user's behalf within parameters the user set, often imprecisely, in advance. This creates what I would call a schema deficit at the moment of delegation.
The issue is structural. Kellogg, Valentine, and Christin (2020) document how workers in algorithmically mediated environments routinely develop awareness of the systems governing their behavior without developing operational understanding of those systems. The awareness-capability gap they identify in gig platform workers applies with equal force to consumers delegating to AI agents. A user who sets up an AI purchasing agent understands, in some general sense, that the agent will "buy things for them." That awareness does not translate into accurate prediction of what the agent will actually do when it encounters an ambiguous boundary condition - a subscription renewal, an upsell, a limit-adjacent purchase.
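A toy illustration of that gap: consider a purchasing agent whose internal rule grants subscription renewals a small grace margin over the remaining budget. Every detail here, the grace constant, the tie-break, the field names, is invented for illustration; no real product is being described.

```python
# Toy illustration; the grace constant, tie-break rule, and field names are
# invented, not drawn from any real agent product.
RENEWAL_GRACE = 0.10  # the agent lets renewals overrun the budget by up to 10%

def agent_decides(price: float, remaining_budget: float, is_renewal: bool) -> str:
    """A plausible internal rule the delegator never saw during setup."""
    if price <= remaining_budget:
        return "buy"
    # Boundary condition: over budget, but a renewal within the grace margin.
    if is_renewal and price <= remaining_budget * (1 + RENEWAL_GRACE):
        return "buy"  # the agent treats subscription continuity as worth a small overrun
    return "decline"

# The user's folk theory: "it won't spend more than my budget."
# The agent's actual behavior on a limit-adjacent renewal:
print(agent_decides(price=52.0, remaining_budget=50.0, is_renewal=True))  # -> "buy"
```

The delegator's awareness ("it won't spend past my budget") is accurate for the routine case and wrong for exactly the boundary case that matters.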
The FIDO Alliance's solution addresses the merchant side of this problem. It does not address the delegator side. Authentication standards tell the merchant that an authorized agent is acting. They say nothing about whether the human who granted that authorization had an accurate schema of what authorization meant.
Why the Organizational Parallel Matters
This is not only a consumer technology problem. Organizations are deploying agentic AI systems for procurement, scheduling, and contract management with the same structural deficit at the delegation layer. The manager or executive who authorizes an AI agent to "handle routine vendor payments" has typically received procedural instructions about how to configure the system. What they have not received is schema-level training in the structural features that determine when a payment is routine and when it is an edge case the system will resolve autonomously, in ways the authorizer did not anticipate.
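A hedged sketch of what such a configuration might look like in practice; the thresholds, field names, and rule structure are all invented, not drawn from any real procurement system.

```python
# Hypothetical configuration; thresholds, field names, and rule structure are
# invented for illustration, not drawn from any real procurement system.
ROUTINE_RULES = {
    "max_amount": 5_000.00,              # payments at or below this can be "routine"
    "known_vendors_only": True,          # unfamiliar vendors always escalate
    "max_deviation_from_history": 0.25,  # up to 25% above the vendor's usual invoice
}

def is_routine(amount: float, vendor_known: bool, usual_invoice: float) -> bool:
    """What the executive authorized as 'handle routine vendor payments'."""
    return (amount <= ROUTINE_RULES["max_amount"]
            and (vendor_known or not ROUTINE_RULES["known_vendors_only"])
            and amount <= usual_invoice * (1 + ROUTINE_RULES["max_deviation_from_history"]))

# Edge case the authorizer likely never modeled: a known vendor's invoice jumps
# 19% to just under the absolute cap, and the payment goes out without review.
print(is_routine(amount=4_999.00, vendor_known=True, usual_invoice=4_200.00))  # -> True
```

Each threshold is individually defensible. It is their interaction at the boundary that the authorizer never modeled, because procedural onboarding never asked them to.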
Hatano and Inagaki (1986) distinguish routine expertise - the ability to execute known procedures reliably - from adaptive expertise, which is the capacity to respond correctly in novel situations by reasoning from principles. Procedural onboarding for AI agent configuration produces routine expertise. The delegation problem is fundamentally an adaptive expertise problem, because the edge cases that produce consequential errors are, by definition, not the routine cases the user rehearsed during setup.
Sundar (2020) notes that machine agency introduces a new category of communicative interaction where users must model not just the message content but the decision architecture of the sender. In a delegation context, the human is now responsible for accurately modeling a decision architecture they did not design and cannot fully inspect. The FIDO Alliance's framework, however technically sound, does not reduce this burden. It simply ensures that the agent's actions are traceable after the fact.
The Governance Gap the Authentication Standard Cannot Close
Traceability is not governance. Knowing that an authorized agent made a purchase does not help an organization determine whether the authorization boundaries were set appropriately in the first place. That is a prior question, and it is a competence question. Rahman (2021) describes the "invisible cage" dynamic in platform work as one where structural constraints shape behavior without being legible to the worker subject to them. Agentic delegation creates an inverted version of this: the human sets the cage, but does so without a clear model of its shape.
The practical implication is that organizations adopting agentic AI for financial or operational decisions need training interventions that target schema induction - building accurate structural models of what the agent optimizes, where its boundary conditions are, and how it resolves ambiguity - not just procedural instructions for configuration. The FIDO Alliance's work is necessary infrastructure. It is not, by itself, sufficient organizational preparation.
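One way to operationalize schema induction is a prediction exercise: generate deliberately limit-adjacent cases, ask the delegator to predict the agent's decision, and score the match before granting real authority. The sketch below reuses the toy agent rule from earlier; the probe cases and scoring are, again, assumptions for illustration, not an established training protocol.

```python
# Sketch of a schema-induction exercise; the probe cases, scoring, and the toy
# agent rule (same as the earlier sketch) are assumptions for illustration.
RENEWAL_GRACE = 0.10

def agent_decides(price: float, remaining_budget: float, is_renewal: bool) -> str:
    if price <= remaining_budget:
        return "buy"
    if is_renewal and price <= remaining_budget * (1 + RENEWAL_GRACE):
        return "buy"
    return "decline"

def schema_accuracy(agent_fn, cases, predictions) -> float:
    """Fraction of boundary cases where the delegator correctly predicted the agent."""
    hits = sum(agent_fn(**case) == pred for case, pred in zip(cases, predictions))
    return hits / len(cases)

boundary_cases = [  # deliberately limit-adjacent, not routine, scenarios
    {"price": 49.0, "remaining_budget": 50.0, "is_renewal": False},
    {"price": 52.0, "remaining_budget": 50.0, "is_renewal": True},
    {"price": 60.0, "remaining_budget": 50.0, "is_renewal": True},
]
user_predictions = ["buy", "decline", "decline"]  # the delegator's folk theory

score = schema_accuracy(agent_decides, boundary_cases, user_predictions)
print(f"schema accuracy: {score:.0%}")  # -> 67%: the renewal grace margin was never modeled
```

A low score is a signal to revisit the authorization boundaries themselves, not merely to repeat the configuration procedure.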
References
Hancock, Naaman, and Levy (2020). Journal of Computer-Mediated Communication.
Hatano and Inagaki (1986). Contemporary Educational Psychology.
Kellogg, Valentine, and Christin (2020). Academy of Management Annals.
Rahman (2021). Administrative Science Quarterly.
Sundar (2020). Journal of Computer-Mediated Communication.
Roger Hunt