Will AI replace physicians? It’s complicated.

By Alistair Gardiner
Published December 21, 2021

We live in an era where healthcare and big data go hand in hand. According to a review published in the BMJ in March, the average person accrues an estimated 1 million gigabytes of personal data over a lifetime, and the overall volume of global healthcare data doubles every few years. Clinicians are increasingly working with artificial intelligence (AI) to turn that data into insights that power earlier and more accurate diagnoses and treatments.

AI applications in healthcare are now pervasive. As noted in a study published in the Journal of Consumer Research in 2019, IBM has developed Watson, an AI application that can successfully diagnose heart disease. In one study of 1,000 cancer diagnoses, Watson identified treatment options that doctors had missed in 30% of cases. Researchers have developed algorithms that can identify eye diseases, like diabetic retinopathy, and there are even consumer apps, such as SkinVision, that use AI to detect skin cancer with a high degree of accuracy.

AI is taking medicine by storm, and for good reason: It can deliver cost-effective care at scale, and, in some cases, it can outperform humans. One study measuring the accuracy of triage diagnoses made by doctors against those made by an AI found that the AI was accurate 90.2% of the time, compared with 77.5% for the doctors.

Despite all the evidence of AI’s efficacy, patients and providers aren’t going all-in just yet. Research indicates that most people still trust humans more than machines, even when the machine’s accuracy is demonstrated. 

Here are some of the medical areas in which AI is now outperforming doctors, along with some tips on how you and your patients can embrace AI.

Is AI really better than you at your job?

The authors of the aforementioned BMJ review point out that recent interest in medical AI applications has prompted headlines such as “Google says its AI can spot lung cancer a year before doctors” and “AI is better at diagnosing skin cancer than your doctor, study finds.” While headlines like these aren’t false, the reality is a little more complicated.

A study published in Nature Communications in August 2020 examined the efficacy of an algorithm that was designed for differential diagnosis in cases where there are multiple possible causes of a patient's symptoms. 

The study’s authors found that the algorithm was accurate 77.26% of the time, placing it among the top 25% of the 44 doctors involved in the study, who collectively had an average diagnostic accuracy of 71.4%.

However, the authors concluded that future experiments should explore the effectiveness of these algorithms as clinical support systems that help guide doctors by providing a second-opinion diagnosis. They describe the algorithm as “complementary to human doctors, performing better on vignettes that doctors struggle to diagnose,” and posit that ultimately a combined diagnosis from doctor and algorithm will be more accurate than either alone. In short: While AI is becoming more accurate than clinicians in certain diagnostic tasks, its highest potential may be reached when the two work symbiotically.

Likewise, the authors of the BMJ review note that “claims about performance against clinicians should be tempered accordingly.” While most of the studies included in the review suggested that their algorithm’s diagnostic performance was comparable to, or better than, a clinician’s, the authors noted that these studies tended to have overt limitations in design, reporting, transparency, and risks of bias. “Overpromising language leaves studies vulnerable to being misinterpreted by the media and the public,” the authors wrote.

The field of medical AI is moving quickly, but most claims about algorithms outperforming humans deserve scrutiny. 

Do patients trust AI diagnoses?

As AI becomes a more prominent feature of the clinic, one of the key issues is getting the public to trust that it works. A review published in the Journal of Consumer Research in 2019 posits that the biggest barrier to that trust is a concern known as “uniqueness neglect.”

This phrase describes the general feeling that an artificial intelligence will be less capable than a human of understanding each patient’s unique characteristics. In fact, research indicates that “resistance to medical AI is stronger for consumers who perceive themselves to be more unique.”

One of the studies cited in the review found that people tend to lose confidence in algorithms more quickly than in humans, even when both make identical mistakes. However, the authors found that this resistance can be tempered. They cited studies showing that when AI-provided care is presented as personalized, patients are more receptive to it. One study found that simply using phrases like “based on your unique profile” improved the perception of the care as personalized.

Clarifying that a human physician is in charge of final treatment decisions, or explicitly framing the AI technique as supporting (rather than replacing) the doctor, can also help build patient confidence.

In short: AI is here to help. And, as the technology progresses, both doctors and patients will hopefully benefit from more accurate diagnostic tools and improved health outcomes. 
