As AI increasingly becomes a part of our lives, the future will be collaborative, with AI tools working as advanced clinical assistants rather than replacements for doctors, Dr Sudhir Srivastava, Founder and Chairman of SS Innovations International, has said.
In an interview about the role of artificial intelligence (AI) and robotics in healthcare with Firstpost’s Madhur Sharma, Srivastava said that AI can reliably handle pattern recognition tasks such as reading radiology scans, flagging abnormal lab values, and predicting risk scores, but clinical judgment, ethical decision-making, and navigating uncertainty remain fundamentally human.
In the interview, Srivastava also discussed how to assess if AI is actually working in healthcare, differentiate actual use of AI and robotics from gimmicks, and walk the fine line between using AI and being reliant on it.
The interview has been edited for brevity and clarity.
As AI becomes increasingly embedded in everyday life, there is growing anxiety that it could replace doctors. Which aspects of healthcare can AI genuinely take over, and what will remain fundamentally human? Is the idea of an AI doctor realistic?
AI can reliably handle pattern recognition tasks such as reading radiology scans, flagging abnormal lab values, and predicting risk scores. In radiology and pathology, algorithms already match or exceed average diagnostic accuracy for specific conditions. However, clinical judgment, ethical decision-making, empathy, and navigating uncertainty remain fundamentally human. Patients seek reassurance, trust, and contextual advice, not only diagnoses.
An AI doctor as a fully autonomous caregiver is unrealistic. The future is collaborative with AI as an advanced clinical assistant rather than a replacement.
Healthcare AI is often presented in futuristic terms with massive analytical systems that can read reports and detect conditions that even veteran doctors might miss. How realistic are such depictions? What are the most practical real-world uses of AI in healthcare today that are already improving patient outcomes?
Futuristic portrayals are partly grounded in reality but often exaggerated. In narrow tasks, AI systems can detect subtle patterns in imaging or ECGs that humans might overlook. Today, practical uses include radiology, triage, diabetic retinopathy screening, sepsis prediction, workflow optimisation, and clinical documentation support. Tools embedded in hospital systems reduce reporting time and flag high-risk patients earlier.
Rather than dramatic standalone systems, real gains come from integration into daily workflows, augmenting clinicians and improving consistency, speed, and early detection.
India’s biggest healthcare challenge is often access, not technology, given its vast geography and shortage of doctors. How can AI meaningfully expand access in tier-two and tier-three cities and rural areas?
AI can strengthen primary screening and telemedicine support. Automated tools for tuberculosis, diabetic retinopathy, cervical cancer, and ECG analysis can empower frontline workers in rural settings. Decision support systems can guide non-specialist doctors when specialists are unavailable. Combined with teleconsultation platforms, AI can triage cases and escalate complex ones to urban centres. Infrastructure, including reliable internet, digitised records, and trained staff, remains essential. AI will not replace doctors, but it can significantly extend their reach and improve the consistency of care.
Many hospitals and doctors appear to adopt AI more for branding than for meaningful clinical benefit. How can we distinguish genuinely valuable clinical AI from marketing-driven adoption that reduces both medicine and technology to gimmicks? And what, in your view, genuinely defines innovation in India today?
The distinction lies in evidence, transparency, and outcomes. Clinically valuable AI demonstrates peer-reviewed validation, regulatory approval, and measurable improvements in patient safety or efficiency. Marketing-driven adoption relies on buzzwords without publishing accuracy rates, complication reductions, or comparative data.
At the recent AI Summit, what should have been a moment of national technological confidence instead exposed a troubling pattern. These episodes raise a fundamental question: Are we building innovation or are we staging it? "Those who cannot innovate import from China." That phrase may sound harsh, but it captures a growing discomfort within India’s technology ecosystem. Importing advanced systems for benchmarking or learning is legitimate, but presenting externally built platforms as indigenous breakthroughs is not. Innovation cannot be built on borrowed optics.
If India truly seeks technological sovereignty, we must privilege authenticity over applause, substance over spectacle, and credibility over convenience. The path to leadership is difficult. It cannot be imported.
What are the hard clinical metrics that could convincingly prove that AI in a given field is a necessity rather than a luxury offered by premium hospitals?
Convincing metrics include improved diagnostic sensitivity and specificity, reduced surgical complication rates, shorter hospital stays, lower readmission rates, decreased mortality, and improved long-term survival.
In oncology, higher rates of complete tumour removal or earlier detection would be compelling. In critical care, reduced sepsis mortality or shorter intensive care stays would matter. Cost-effectiveness is equally important: if AI lowers overall treatment cost per patient while improving outcomes, it becomes essential infrastructure rather than a premium add-on.
There are concerns that AI in medicine will lead to the de-skilling of doctors. By integrating robotics into the MBBS curriculum, are we at risk of training "pilot doctors" who cannot navigate when automation fails? How should medical education adapt?
The risk of overreliance is real. If doctors depend blindly on algorithmic outputs, clinical reasoning may weaken. Medical education must preserve strong foundations in anatomy, physiology, and bedside skills while teaching AI literacy. Students should understand how algorithms function, where bias may arise, and how systems can fail. Training should include scenarios where technology is unavailable, so clinicians practise independent judgment.
The goal is not "pilot doctors" but AI-literate physicians who supervise technology critically and retain full clinical competence.
According to your organisation, you were involved in an intercontinental telesurgery. Do you see telesurgery as a scalable solution for rural India or is it limited by digital infrastructure? What is the way forward?
Telesurgery is not just a proof of concept. It is a practical and scalable solution, especially for a country like India where access to specialist care remains uneven. Our intercontinental telesurgery demonstrated that with the right technology, ultra-low latency connectivity, and robust robotic systems, distance is no longer a barrier to advanced surgical care.
India’s digital infrastructure has significantly evolved with improved broadband penetration, 5G rollout, and stronger data networks. When combined with stable power systems and trained on-ground surgical teams, telesurgery becomes both efficient and reliable — even beyond metro cities.
The way forward is to actively expand digital health infrastructure, create regional robotic surgery hubs, and adopt hybrid models where local surgeons collaborate with remote experts in real time. This ensures skill transfer while maintaining patient safety.
At SS Innovations International, we firmly believe telesurgery is the future of equitable healthcare. Through our MantraM Mobile Robotic Surgery Unit fitted with the SSI Mantra system, we are already working to extend advanced robotic care to Tier 2 and Tier 3 cities, making high-quality surgical expertise accessible regardless of geography.
Surgical robots are often extremely expensive, costing crores, and beyond the reach of most of India and much of the Global South. How do you view the accessibility of robotics and AI in healthcare?
Accessibility defines practicality in public health. The precision benefits of robotics are meaningful, but if they are limited to elite hospitals, their population impact remains narrow. Over time, competition, local manufacturing, and scale may reduce costs. Shared robotic centres, public-private partnerships, and outcome-linked reimbursement can improve access. A healthcare technology cannot be transformative unless it improves care at scale. Innovation must align with affordability; otherwise, it risks widening inequity instead of reducing it.
Do you believe healthcare related laws and regulations have kept pace with AI and robotics? Who should be responsible for an AI system’s incorrect diagnosis that leads to complications?
Regulation is evolving but often lags behind technological change. Most frameworks treat AI as decision support, which keeps ultimate responsibility with the physician. As systems gain autonomy, shared-liability models involving clinicians, hospitals, and developers may emerge. Clear standards for validation, transparency, and post-market surveillance are essential. Without robust regulation, trust in AI-enabled care could erode and slow responsible adoption.
Ten years from now, what would success look like for AI in healthcare? What would worry you if the industry gets it wrong?
Success would mean AI embedded quietly in health systems, reducing diagnostic delays, preventing avoidable deaths, lowering costs, and expanding rural access. It would support clinicians rather than overshadow them and operate within transparent ethical frameworks. Failure would mean widening inequities, opaque algorithms making unaccountable decisions, and hospitals investing in spectacle without measurable benefit. If AI becomes more about image than impact, or automation without accountability, public trust could suffer lasting damage.





