Radiology was supposed to be the proving ground for clinical AI in India. The pitch was compelling: deep learning algorithms that could read chest X-rays faster than a radiologist, flag critical findings on CT scans before a human even opened the study, and bring diagnostic precision to hospitals that had never employed a full-time imaging specialist. For a country with an estimated 10,000 to 20,000 radiologists serving a population of 1.4 billion (roughly one radiologist per 100,000 people), the promise was not just attractive; it felt urgent.
And in certain hospitals, that promise has been delivered. Walk into any of the top multi-specialty centres in Mumbai, Bangalore, or Delhi, and you will find radiology AI running quietly inside their workflows. Algorithms are triaging chest X-rays for tuberculosis, flagging intracranial haemorrhages on non-contrast CT scans, and measuring lung nodules on follow-up studies. The technology is no longer experimental in these settings. It is operational.
But step outside those metro hospitals — into district hospitals, smaller private chains, or the vast network of diagnostic centres that form the backbone of imaging in India — and the story changes completely.
The Tier 1 Success Story
To understand the stall, you first have to understand why things went well where they did. India’s top-tier hospitals — the Apollos, Fortis, Narayana Health, Aster DM, and the large AIIMS network — share a set of characteristics that made radiology AI adoption almost frictionless. These are not just well-funded institutions. They are structurally prepared for technology integration in ways that most Indian hospitals are not.
First, they have mature PACS infrastructure. Picture Archiving and Communication Systems are the digital backbone of modern radiology. Every image acquired on every modality — X-ray, CT, MRI, ultrasound — flows into a central system where it can be viewed, reported, and stored. In Tier 1 hospitals, PACS has been standard for years. This means that when an AI vendor arrives with a product, there is a clean, standardised pipeline of DICOM images ready to be fed into the algorithm. The integration point is well-defined. The data format is consistent. The IT team knows how to route images from the modality to the AI engine and back to the reporting workstation.
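To make that plumbing concrete, here is a minimal sketch of the routing step using the open-source pynetdicom library: a small service that listens as a DICOM storage node, accepts images pushed from a modality, and forwards each one to an AI engine. The hostnames, ports, and AE titles are hypothetical placeholders, not a reference implementation.

```python
# Minimal DICOM routing sketch: receive from a modality, forward to an
# AI engine. All addresses and AE titles below are hypothetical.
from pynetdicom import AE, evt, StoragePresentationContexts

AI_ENGINE_HOST, AI_ENGINE_PORT = "ai-engine.local", 11113  # hypothetical

def handle_store(event):
    """Receive an image from the modality and forward it to the AI node."""
    ds = event.dataset
    ds.file_meta = event.file_meta

    # Open a second association and push the instance to the AI engine.
    scu = AE(ae_title="ROUTER")
    scu.add_requested_context(ds.SOPClassUID, ds.file_meta.TransferSyntaxUID)
    assoc = scu.associate(AI_ENGINE_HOST, AI_ENGINE_PORT, ae_title="AI_ENGINE")
    if assoc.is_established:
        assoc.send_c_store(ds)
        assoc.release()
    return 0x0000  # Success status back to the sending modality

# Listen as a Storage SCP, i.e. a DICOM node that modalities can push to.
ae = AE(ae_title="ROUTER")
ae.supported_contexts = StoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

In a Tier 1 hospital, a radiology IT team configures and maintains exactly this kind of node; the AI vendor simply plugs into it.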
Second, these hospitals have dedicated radiology IT support. There is almost always a team — sometimes a single individual, sometimes a department — responsible for maintaining imaging workflows, troubleshooting DICOM routing issues, updating software, and managing the network infrastructure that keeps images moving. When an AI tool is deployed, this team handles the integration. They configure the DICOM nodes, manage the API connections, monitor uptime, and solve problems when things break. This invisible layer of operational support is what keeps the AI running after the initial excitement fades.
Third, the radiologists in these settings are already working within structured, digital workflows. They receive their worklists electronically, dictate reports using voice recognition or structured templates, and operate within a reporting ecosystem that can accommodate AI-generated findings. When an algorithm flags a critical finding or generates a measurement, it slots into the existing workflow rather than disrupting it. The radiologist sees the AI output alongside their own assessment, and the two become part of a single decision-making process.
The hospitals where AI succeeds are not just buying better technology. They have spent years building the digital infrastructure that makes AI integration possible in the first place.
Why It Stalls Beyond Metro Centres
Now consider what happens when the same AI product is taken to a 200-bed hospital in a Tier 2 city — say, Raipur, Coimbatore, or Patna. On paper, the need is even greater. These hospitals handle enormous imaging volumes, often with one or two radiologists covering all modalities. The workload is punishing. AI-assisted triage and detection should be transformative here. But the reality on the ground tells a different story.
The first barrier is infrastructure. Many hospitals outside the metros do not have a functioning PACS. Images may be stored locally on the modality itself, burned to CDs for patients to carry, or pushed to a basic viewing station with no central archive. DICOM compliance — the standard protocol that allows medical images to be shared across systems — is inconsistent. Some older machines produce images in proprietary formats. Others generate DICOM files with incomplete metadata: missing patient IDs, incorrect study descriptions, or absent series information. For an AI algorithm that depends on clean, structured input, this is a fundamental problem. The data pipeline that Tier 1 hospitals take for granted simply does not exist.
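A first line of defence is a validation pass before any study is handed to the algorithm. The sketch below, using the open-source pydicom library, checks for the kinds of missing or empty tags described above; the required-tag list is illustrative, and a real pipeline would enforce a longer, modality-specific set.

```python
# Reject DICOM files whose metadata would break downstream AI routing.
# The tag list here is an illustrative minimum, not a standard.
import pydicom

REQUIRED_TAGS = ["PatientID", "StudyInstanceUID",
                 "SeriesInstanceUID", "StudyDescription"]

def validate(path):
    """Return a list of metadata problems found in one DICOM file."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    problems = []
    for tag in REQUIRED_TAGS:
        value = getattr(ds, tag, None)
        if value in (None, ""):
            problems.append(f"missing or empty {tag}")
    return problems

issues = validate("study/slice_001.dcm")  # hypothetical path
if issues:
    print("Rejecting study:", "; ".join(issues))
```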
Bandwidth is another persistent challenge. Many AI tools are deployed as cloud-based solutions because maintaining on-premise GPU servers is expensive and impractical for smaller facilities. But cloud deployment requires reliable, high-speed internet. A single chest CT study can be several hundred megabytes. A full-body PET-CT can exceed a gigabyte. Uploading these to a cloud server for processing, then receiving results back at the reporting station, requires bandwidth that many Tier 2 and Tier 3 hospitals do not consistently have. The result is latency — sometimes minutes, sometimes hours — that renders the AI output useless for real-time clinical decisions. By the time the algorithm flags a finding, the radiologist has already reported the case and moved on.
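The arithmetic behind that latency is simple and worth making explicit. The link speeds below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope transfer times for cloud-based AI processing.
def upload_minutes(study_mb: float, link_mbps: float) -> float:
    """Minutes to push a study of study_mb megabytes over a link_mbps link."""
    megabits = study_mb * 8
    return megabits / link_mbps / 60

print(upload_minutes(500, 100))  # metro fibre: a 500 MB CT in ~0.7 min
print(upload_minutes(500, 10))   # a Tier 3 link: the same CT in ~6.7 min
```

And that is the one-way cost on a link running at full rated speed, before any contention from the rest of the hospital's traffic.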
Then there is the matter of IT support. In most hospitals outside the top tier, there is no dedicated radiology IT team. The general IT department, if it exists, is responsible for everything from email servers to billing software. They may have no familiarity with DICOM, HL7, or the specific networking requirements of imaging systems. When the AI integration breaks — a DICOM node goes down, an API times out, a study fails to route — there is no one on-site who can diagnose and fix the problem. The AI vendor’s remote support team can only do so much. The tool goes offline, and it stays offline.
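Even a basic liveness check, of the kind a general IT team could run on a schedule, is usually absent. A DICOM C-ECHO is the protocol's equivalent of a ping. A minimal sketch with pynetdicom follows; the host, port, and AE title are hypothetical.

```python
# Scheduled "is the AI node still up?" check via DICOM C-ECHO.
from pynetdicom import AE

def dicom_node_alive(host: str, port: int, ae_title: str) -> bool:
    """Send a C-ECHO and report whether the node answered with success."""
    ae = AE(ae_title="MONITOR")
    ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class
    assoc = ae.associate(host, port, ae_title=ae_title)
    if not assoc.is_established:
        return False
    status = assoc.send_c_echo()
    assoc.release()
    return bool(status) and status.Status == 0x0000

print(dicom_node_alive("ai-engine.local", 11113, "AI_ENGINE"))  # hypothetical
```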
Perhaps the most significant structural barrier is the radiologist shortage itself. The very problem that AI is supposed to solve becomes the obstacle to its adoption. In hospitals with only one or two radiologists, those individuals are already overwhelmed with clinical work. They do not have the time or spare capacity to learn a new tool, adjust their workflow, provide feedback to the AI vendor, or troubleshoot integration issues. There is no one to champion the technology internally. Without that clinical champion, AI adoption stalls even when the technology is technically available.
The Workflow Integration Problem
Much of the industry conversation around radiology AI focuses on algorithmic performance: sensitivity, specificity, ROC-AUC, and FDA or CDSCO clearances. These metrics matter, but they address only one dimension of clinical utility. The harder challenge, and the one that determines whether an AI tool actually gets used, is workflow integration.
Clinical workflow is not a technical specification. It is the lived rhythm of how a radiologist moves through their day: opening a worklist, selecting a study, scrolling through images, dictating a report, signing it off, and moving to the next case. This rhythm is deeply habitual. It has been optimised, often unconsciously, over years of practice. Any tool that disrupts it — that adds clicks, opens separate windows, requires context-switching, or introduces unfamiliar display formats — will be resisted, regardless of how accurate it is.
The most common failure mode for radiology AI is not inaccuracy. It is irrelevance to the workflow. Consider a chest X-ray AI tool that runs on a separate web portal. The radiologist must log into a different system, look up the patient, view the AI-annotated image, then return to their primary PACS workstation to write the report. In a high-volume setting where a radiologist reads 100 to 200 X-rays in a session, this extra friction is intolerable. The tool might catch a subtle finding one time in fifty, but the other forty-nine times it adds unnecessary steps. The radiologist stops using it within a week.
An AI tool that lives outside the radiologist’s primary workstation is, for all practical purposes, invisible. If it is not in the workflow, it is not in use.
The AI products that have succeeded in Tier 1 hospitals understood this. They integrated directly into the PACS viewer, appearing as an overlay or a secondary series within the study. The radiologist does not have to leave their workspace. The AI findings appear automatically — a heatmap overlay, a measurement annotation, a structured summary in the corner of the screen. The radiologist can accept, modify, or dismiss the findings without breaking their reporting rhythm. This level of integration requires deep technical collaboration between the AI vendor and the hospital’s PACS team, and it is precisely this collaboration that is absent in most Tier 2 and Tier 3 deployments.
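One common way to achieve that secondary-series pattern is for the AI engine to write its output back as a new DICOM series inside the same study, so the PACS viewer picks it up automatically. The following is a simplified pydicom sketch with illustrative names; real products typically use DICOM Secondary Capture objects, Structured Reports, or presentation states.

```python
# Package AI output as a new series in the same study so the PACS
# viewer displays it alongside the originals. Names are illustrative.
import copy

import pydicom
from pydicom.uid import generate_uid

ds = pydicom.dcmread("original_slice.dcm")  # hypothetical input file

ai_ds = copy.deepcopy(ds)
ai_ds.SeriesInstanceUID = generate_uid()   # new series, same StudyInstanceUID
ai_ds.SOPInstanceUID = generate_uid()
ai_ds.file_meta.MediaStorageSOPInstanceUID = ai_ds.SOPInstanceUID
ai_ds.SeriesDescription = "AI: suspected haemorrhage (heatmap)"
ai_ds.SeriesNumber = 999                   # sorts after the original series
# A real product would render its heatmap into PixelData here before saving.

ai_ds.save_as("ai_overlay_slice.dcm")      # then route this back to the PACS
```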
The Trust Deficit
Even when the infrastructure is adequate and the workflow integration is reasonable, there is a third barrier that is less tangible but equally significant: clinical trust. Radiologists — like all physicians — are trained to rely on their own judgment. They have spent years developing pattern recognition skills, building mental libraries of normal and abnormal findings, and learning to synthesise imaging with clinical context. When an algorithm presents a finding, the radiologist’s instinctive response is not acceptance. It is scrutiny.
This is not irrational. It is professional. The radiologist is ultimately responsible for the report. If the AI flags a false positive — calling a rib fracture on an X-ray that is actually a calcified costochondral junction, or highlighting a lung nodule that is a skin tag artefact — the radiologist must catch and correct it. Every false positive erodes trust. And in the early phases of AI deployment, false positives are common, especially when the algorithm encounters imaging patterns from populations or equipment types that were underrepresented in its training data.
The trust problem is compounded by a lack of transparency. Many AI tools operate as opaque systems. They produce an output — a probability score, a heatmap, a binary classification — without explaining the reasoning behind it. For a radiologist accustomed to articulating the evidence for every finding in their report, this opacity is deeply unsatisfying. They cannot validate the AI’s logic. They can only agree or disagree with its conclusion, and without understanding the basis for the conclusion, disagreement is the safer default.
Building trust requires more than accurate algorithms. It requires education, exposure, and time. Radiologists need to understand how the AI was trained, what data it was validated on, what its known limitations are, and how it performs on the specific patient populations and equipment at their hospital. They need to see the tool make correct calls repeatedly before they begin to rely on it. They need feedback loops that show them how the AI performs over time, and mechanisms to report errors and see those errors addressed in subsequent updates. None of this happens automatically. It requires deliberate investment in clinician engagement — an investment that most AI vendors have underestimated and most hospitals have not budgeted for.
What Needs to Change
If radiology AI is going to move beyond Tier 1 hospitals and deliver on its promise for the broader Indian healthcare system, several things need to change simultaneously. No single intervention will be sufficient. The challenge is systemic, and the response must be too.
Workflow-first product design. AI vendors need to stop leading with algorithmic performance and start leading with workflow design. The first question should not be “how accurate is this model?” but “how does this fit into the radiologist’s existing process?” Products designed for seamless PACS integration — or, in settings without PACS, for direct modality integration — will see dramatically higher adoption than standalone portals. Vendors should invest in lightweight, edge-deployable solutions that can run on local hardware without cloud dependency, reducing the bandwidth barrier for smaller facilities.
Tiered deployment models. The same deployment model cannot work across the Indian healthcare spectrum. Tier 1 hospitals need deep PACS-integrated solutions with rich analytics dashboards and audit trails. Tier 2 hospitals may need simpler, semi-integrated tools that work with basic viewing stations. For district hospitals and primary health centres, the model might be asynchronous: images are batched and processed during off-peak hours, with AI-flagged cases escalated to a remote radiologist for review. This tiered approach acknowledges the infrastructure reality rather than fighting against it.
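The core logic of that asynchronous model is a batch job rather than a real-time pipeline. In the minimal sketch below, every function is a hypothetical stand-in: a real deployment would read from the local archive, call the vendor's inference endpoint, and push escalations to an SMS gateway or a teleradiology worklist.

```python
# Off-peak batch triage sketch. All functions are hypothetical stubs.
from pathlib import Path

FLAG_THRESHOLD = 0.85  # illustrative operating point, not a standard

def run_model(study: Path) -> float:
    """Stand-in for the vendor's inference call; returns a risk score."""
    return 0.0  # placeholder

def escalate(study: Path, score: float) -> None:
    """Stand-in for escalation to a remote radiologist's worklist."""
    print(f"ESCALATE {study.name}: score {score:.2f}")

def overnight_triage(archive: Path) -> None:
    """Batch-process the day's studies during off-peak hours."""
    for study in sorted(archive.glob("*.dcm")):
        score = run_model(study)
        if score >= FLAG_THRESHOLD:
            escalate(study, score)

overnight_triage(Path("/data/studies/today"))  # hypothetical archive path
```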
Clinical training and onboarding programmes. Every AI deployment should include a structured training programme for the radiologists who will use it. This is not a one-hour product demo. It is a multi-week onboarding process that covers the algorithm’s capabilities, its known limitations, how to interpret its outputs, and how to provide feedback. The goal is not just competence but confidence. The radiologist needs to reach a point where the AI output is a useful data point in their decision-making, neither blindly trusted nor reflexively dismissed.
IT capacity building. Hospitals deploying AI need at minimum one staff member who understands DICOM workflows, basic networking for imaging systems, and the specific integration requirements of the AI tool. This could be an existing IT team member who receives targeted training, or a new role that bridges radiology and IT. Without this operational capacity, even the best-designed AI product will eventually break down and not be repaired.
Policy support from ABDM and regulatory bodies. India’s Ayushman Bharat Digital Mission has laid the groundwork for digital health interoperability. The next step is to extend this framework specifically to medical imaging. Standards for DICOM compliance, minimum infrastructure requirements for AI-enabled imaging, and guidelines for AI validation in Indian clinical settings would create a more predictable environment for both vendors and hospitals. A national imaging data registry, built on ABDM’s existing digital health ID infrastructure, could also provide the large, diverse datasets needed to train and validate algorithms that work across Indian populations.
The gap between where radiology AI works today and where it is needed most is not a technology gap. It is an implementation gap — and closing it requires systemic investment in infrastructure, workflows, and human trust.
Looking Ahead
Radiology AI in India stands at an inflection point. The technology has been proven. The clinical need is undeniable. But the path from proof-of-concept in a well-equipped metro hospital to reliable, daily use across the Indian healthcare system is far longer and more complex than the industry initially assumed. Hardware and algorithms are necessary but not sufficient. What matters equally — perhaps more — is the invisible infrastructure: clean data pipelines, stable connectivity, capable IT support, thoughtful workflow integration, and the slow, deliberate process of earning clinical trust.
The hospitals that have succeeded with radiology AI did not succeed because they had access to better algorithms. They succeeded because they had built, over years, the operational foundations that make AI integration possible. For the rest of India’s healthcare system to follow, those foundations must be intentionally constructed — not as an afterthought to AI deployment, but as a prerequisite for it.
The promise of radiology AI is real. But fulfilling it for the doctors and patients who need it most will require us to stop thinking about AI as a product to be installed and start thinking about it as a capability to be built — patiently, systematically, and with deep respect for the clinical realities on the ground.