Manitoba's MRI AI Deployment Exposes the Implicit Acquisition Crisis in Healthcare Platforms
Manitoba's health system is deploying artificial intelligence across its MRI infrastructure, with plans to have more than half of its machines using AI by spring 2026. This rollout represents a critical test case for what I call the Implicit Acquisition Problem: when coordination mechanisms require users to develop new communicative competencies without formal instruction, systematic failures emerge that organizations consistently fail to anticipate.
The news coverage frames this as a straightforward technology deployment story. The reality is far more complex. Manitoba is not simply installing software; it is fundamentally restructuring the communication system through which radiologists, technicians, and physicians coordinate diagnostic work. The AI doesn't just analyze images—it mediates how these professionals specify intent, interpret outputs, and coordinate collective diagnostic outcomes. Yet nowhere in the reporting is there evidence of systematic literacy acquisition planning.
The Stratified Fluency Problem in Clinical Settings
Application Layer Communication theory predicts what will happen next with disturbing precision. Healthcare professionals will develop vastly different competency levels in orchestrating AI-augmented diagnostics, creating coordination variance that existing quality assurance systems cannot detect or measure. High-fluency radiologists will learn to structure their interaction with AI outputs to generate richer diagnostic insights. Low-fluency practitioners will treat AI suggestions as binary accept/reject decisions, failing to develop the iterative refinement patterns that characterize expert human-AI coordination.
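The stratification claim can be made concrete with a toy model. Everything below is hypothetical, not clinical data: the baseline detection rate, the per-round gain, and the number of refinement rounds are assumptions chosen purely for illustration of how interaction pattern alone could separate practitioners.

```python
# Toy model (hypothetical parameters, not clinical data): how iterative
# human-AI refinement could separate high- and low-fluency practitioners.

def diagnostic_yield(refinement_rounds: int,
                     baseline: float = 0.60,
                     per_round_gain: float = 0.15) -> float:
    """Probability of surfacing a subtle finding. Each refinement round
    (re-querying the AI with sharpened clinical context) is assumed to
    recover a fixed fraction of what remains undetected."""
    p_missed = 1.0 - baseline
    for _ in range(refinement_rounds):
        p_missed *= (1.0 - per_round_gain)
    return 1.0 - p_missed

# Low-fluency: binary accept/reject of the first AI suggestion (0 rounds).
# High-fluency: four rounds of structured follow-up on the same scan.
print(f"low-fluency yield:  {diagnostic_yield(0):.2f}")
print(f"high-fluency yield: {diagnostic_yield(4):.2f}")
```

On these assumed numbers the gap compounds quietly: the same AI, the same scans, but materially different detection rates, driven entirely by how the practitioner structures the interaction.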
This stratification matters because medical diagnosis is fundamentally a coordination problem. Multiple specialists must interpret the same imaging data, communicate findings across professional boundaries, and aggregate individual judgments into collective treatment decisions. When AI enters this coordination mechanism, it doesn't simply augment individual capability—it transforms the entire communicative infrastructure through which collective diagnostic intelligence emerges.
The research on organizational factors in nursing competence (Chichi, 2021) demonstrates that when new coordination requirements emerge in acute care settings, organizational characteristics—not individual capability—determine systematic success or failure patterns. Manitoba's deployment appears to treat AI integration as a technical implementation rather than an organizational communication transformation requiring population-level literacy acquisition across multiple professional groups.
Asymmetric Interpretation in Diagnostic Coordination
The core challenge is asymmetric interpretation. The AI analyzes MRI scans deterministically, applying the fixed parameters it learned in training. Radiologists must interpret AI outputs contextually, integrating algorithmic suggestions with clinical history, patient presentation, and diagnostic judgment developed through years of practice. This asymmetry creates a fundamental coordination gap: the AI cannot adjust its communication to match radiologist expertise levels, yet radiologists must develop fluency in extracting meaningful signal from algorithmic output regardless of their baseline capability.
With traditional diagnostic tools, learning curves are visible: missed findings, diagnostic errors, corrective feedback. Failures of AI literacy acquisition, by contrast, are largely invisible. A radiologist who never develops sophisticated human-AI coordination patterns may still appear competent by conventional metrics: they read scans, generate reports, and coordinate with clinicians. The coordination loss manifests as foregone diagnostic depth: insights that high-fluency practitioners would extract but low-fluency practitioners never recognize as absent.
The Measurement Challenge for Healthcare Platform Governance
This connects directly to broader questions about platform governance in essential services. Healthcare systems deploying AI are creating platform coordination mechanisms in which diagnostic outcomes hinge on population-level literacy acquisition patterns. Yet existing quality assurance frameworks measure individual competence through traditional metrics (error rates, turnaround times, inter-rater reliability) that cannot capture the coordination variance created by differential AI fluency.
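A hypothetical back-of-envelope calculation shows why aggregate QA metrics can miss this variance. The case mix and accuracy figures below are assumptions invented for illustration, not measurements from any health system.

```python
# Hypothetical illustration: an aggregate QA metric dominated by routine
# volume barely moves even when AI-fluency differences halve performance
# on the subtle, coordination-dependent findings.
ROUTINE_CASES, SUBTLE_CASES = 950, 50   # assumed mix per 1,000 reads

def overall_accuracy(routine_acc: float, subtle_acc: float) -> float:
    """Volume-weighted accuracy: the kind of number a QA dashboard reports."""
    total = ROUTINE_CASES + SUBTLE_CASES
    return (ROUTINE_CASES * routine_acc + SUBTLE_CASES * subtle_acc) / total

high = overall_accuracy(routine_acc=0.96, subtle_acc=0.80)  # fluent reader
low  = overall_accuracy(routine_acc=0.96, subtle_acc=0.40)  # accept/reject only

print(f"high-fluency overall accuracy: {high:.3f}")
print(f"low-fluency overall accuracy:  {low:.3f}")
# A two-point gap in the dashboard number conceals a 2x gap on the
# findings that actually depend on human-AI coordination fluency.
```

On these assumed figures, a quality framework that tracks only overall accuracy would rate both practitioners as comparably competent while their performance on coordination-dependent findings diverges sharply.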
Manitoba needs to answer several questions that the current deployment narrative ignores: How will radiologists acquire competence in human-AI diagnostic coordination when no formal training infrastructure exists? What mechanisms will identify practitioners who fail to develop adequate fluency? How will the system measure coordination quality when AI-mediated diagnostic work externalizes previously tacit judgment processes?
The literature on individual and contextual variables affecting technology adoption (Katsoni & Sahinidis, 2015) suggests that without explicit organizational support for new communication competencies, adoption patterns follow existing capability distributions, amplifying rather than reducing professional variance.
Healthcare AI deployment is not a technology story. It is a literacy acquisition story with immediate implications for diagnostic coordination quality and long-term implications for systematic inequality in clinical capability. Manitoba's rollout will provide critical evidence about whether healthcare systems recognize this distinction before coordination failures become visible through patient outcomes.
Roger Hunt