5 medical specialties AI may never make obsolete
AI is now firmly embedded in clinical workflows. But as its medical footprint expands, its limitations are becoming more clinically relevant: It can summarize charts, flag malignancies, and suggest differential diagnoses, but it still falters where medicine is most human.
Many physicians believe that AI systems lack genuine empathy, ethical judgment, and the ability to interpret non-verbal cues. Meanwhile, studies show that patients value empathy, contextual judgment, and moral guidance as much as technical accuracy—elements that remain stubbornly hard to code.
Ahead: five specialties that are least likely to be made obsolete by AI—and that best illustrate why physicians remain, in crucial ways, irreplaceable.
When empathy is the intervention: Psychiatry & mental health care
Research suggests therapy chatbots may help with short-term support, especially when used as iCBT-style tools that track mood, encourage self-reflection, and deliver CBT-based exercises. But the evidence remains limited: many studies are short, rely on self-report, and use small samples, making it difficult to know how well these tools perform in sustained clinical care.
Related: AI-induced psychosis: A clinical phenomenon still taking shape
The more clinically relevant concern is empathy. According to an April 2026 review in Current Opinion in Psychology, patients may perceive AI-written responses as empathetic (sometimes even more so than human-written responses), but chatbots and LLMs can only express elements of cognitive empathy; they lack affective empathy, motivational empathy, embodied interaction, and the interpersonal relationship that underpins therapeutic work.
AI can serve a teaching function or present cognitive exercises and suggestions to patients. But [it] only simulates empathy—a human emotion. AI is unable to detect transference, and because it has no feelings, by definition it has no countertransference, an essential tool in psychoanalytic work.
—Steven Reidbord, MD, psychiatrist
Psychiatrist Cooper Stone, DO, says that therapeutic empathy must be deployed deliberately to help patients “open up and provide valuable information,” noting, “At the end of the day, patients ultimately want to feel understood.” For the foreseeable future, humans will remain better equipped to interpret context, nonverbal cues, risk, and the unspoken dimensions of mental health care.
Where the diagnosis starts with trust: Primary care
Front-line physicians spend as much time coaching and counseling as they do prescribing. Unlike specialties that lean more heavily on imaging or lab data, primary care centers on verbal communication, emotional cues, and trust-building—factors that AI still struggles to replicate.
Ben Reinking, MD, a board-certified pediatrician and pediatric cardiologist, calls the physician-patient relationship foundational. “It involves understanding emotions and building connections with the entire family. These are critical elements that I believe AI will never fully replace,” he asserts.
Meanwhile, a recent University of Maine study showed that clinicians still outperform large language models when cases become ethically or emotionally complex.
C. Matt Graham, author of the study and Associate Professor of Information Systems and Security Management at the Maine Business School, said, “This isn’t about replacing doctors and nurses. It’s about augmenting their abilities. AI can be a second set of eyes; it can help clinicians sift through mountains of data, recognize patterns and offer evidence-based recommendations in real time.”
The patients who can’t always tell you what’s wrong: Pediatrics
Children, especially younger ones, often can’t articulate symptoms, feelings, or fears clearly. Pediatric care depends heavily on a physician’s observational skills, intuition, and ability to interpret non-verbal cues—something AI still cannot do reliably.
Furthermore, these algorithms, trained largely on adult data, often miss developmental subtleties, and ethical frameworks for children demand heightened transparency and parental trust.
Dr. Reinking says that AI cannot replicate the nuanced approach that physicians take while interacting with children of different ages.
He adds, “Involving both the child and the parents is crucial in pediatric visits. The approach varies with the child's age, developmental status, and the parents' preferences. Some parents want their child involved from the start, while others prefer to handle the information themselves at home. AI lacks the ability to navigate these nuances.”
Evan Nadler, MD, MBA, the founder of ProCare Consultants and ProCare TeleHealth, states, “AI won’t be able to know whether a pediatric-age patient is at a developmental stage commensurate with their chronological age or not, which could impact choice of treatment for obesity or perhaps other disorders.”
Pediatricians adjust their language and behavior to fit a child’s age and developmental stage. AI lacks the flexibility to do this authentically and risks oversimplifying or misunderstanding a child’s needs.
When the answer is not more treatment: Palliative care
Hospice conversations blend clinical facts with existential questions that no algorithm can parse. A May 2025 systematic review of AI in palliative settings concluded that while machine learning can predict symptom trajectories, only human clinicians can understand and empathize with the moral nuance and cultural meaning at the end of life. As Moti Gamburd, CEO of CARE Homecare, observes, “In palliative care, silence and presence can be more powerful than words.”
Connectional silence is one of the intangible human behaviors that algorithms cannot replicate: it relies on emotional intuition, timing, and real-time sensitivity to another person’s inner state.
Gamburd recalls how caregivers would “sit beside a client in their final hours, holding their hand quietly while the family gathered. No one needed to say anything. The presence alone brought comfort.”
When the next move can’t be preprogrammed: Surgery & interventional fields
Robotic platforms have advanced well beyond the da Vinci Xi, with newer systems such as da Vinci 5 adding improved visualization, greater computing power, and force-sensing capabilities. But even the most sophisticated surgical robots remain extensions of the surgeon’s judgment, not replacements for it.
Interventional radiologist Tonie Reincke, MD, observes, “AI may be able to teach a robot how to perform a procedure but falls short regarding critical thinking related to complications and acute decision-making if an abrupt change in the procedure is necessary.” Cost, training demands, limited tactile feedback, workflow integration, and the need for real-time clinical judgment continue to stand between robotic assistance and true surgical autonomy.
A separate systematic review on the ethics of robot-assisted surgery highlights unresolved questions of accountability: Patients and regulators still expect a human surgeon to own every intraoperative decision.
Plastic surgeon Arnold Breitbart, MD, notes, “Patients come into my practice vulnerable and oftentimes anxious about their own body image… They are looking for reassurance, empathy, and sometimes tough, honest feedback on realistic outcomes.”
Across these five fields, AI’s role is likely to keep expanding—as a tool for pattern recognition, documentation, triage, and clinical support. But the work least vulnerable to automation is the work that requires presence: noticing what a patient does not say, adapting to family dynamics, weighing values under pressure, and making judgment calls when the clinical picture shifts.
Related: AI scribes promised to reduce EHR burden—are they delivering?