May 12, 2022 – Artificial intelligence has moved from science fiction to everyday reality in a matter of years, being used for everything from online activity to driving cars. Even, yes, to make medical diagnoses. But that doesn’t mean people are ready to let AI drive all their medical decisions.
The technology is quickly evolving to help guide clinical decisions across more medical specialties and diagnoses, particularly when it comes to identifying anything out of the ordinary during a colonoscopy, skin cancer check, or in an X-ray image.
New research is exploring what patients think about the use of AI in health care. Yale University’s Sanjay Aneja, MD, and colleagues surveyed a nationally representative group of 926 patients about their comfort with the use of the technology, what concerns they have, and their overall opinions about AI.
Turns out, patient comfort with AI depends on its use.
For example, 12% of the people surveyed were “very comfortable” and 43% were “somewhat comfortable” with AI reading chest X-rays. But only 6% were very comfortable and 25% were somewhat comfortable about AI making a cancer diagnosis, according to the survey results published online May 4 in the journal JAMA Network Open.
“Having an AI algorithm read your X-ray … that’s a very different story than if one is relying on AI to make a diagnosis about a malignancy or delivering the news that somebody has cancer,” says Sean Khozin, MD, who was not involved with the research.
“What’s very interesting is that … there’s a lot of optimism among patients about the role of AI in making things better. That level of optimism was great to see,” says Khozin, an oncologist and data scientist, who’s a member of the executive committee at the Alliance for Artificial Intelligence in Healthcare (AAIH). The AAIH is a global advocacy organization in Baltimore that focuses on responsible, ethical, and reasonable standards for the use of AI and machine learning in health care.
All in Favor, Say AI
Most people had a positive overall opinion on AI in health care. The survey revealed that 56% believe AI will make health care better in the next 5 years, compared to 6% who say it will make health care worse.
Most of the work in medical AI focuses on clinical areas that could benefit most, “but rarely do we ask ourselves which areas patients really want AI to impact their health care,” says Aneja, a senior study author and assistant professor at Yale School of Medicine.
Not considering patients’ views leaves an incomplete picture.
“In many ways, I would say our work highlights a potential blind spot among AI researchers that will need to be addressed as these technologies become more common in clinical practice,” says Aneja.
It remains unclear how much patients know or realize about the role AI already plays in medicine. Aneja, who assessed AI attitudes among health care professionals in previous work, says, “What became clear as we surveyed both patients and physicians is that transparency is needed regarding the specific role AI plays within a patient’s treatment course.”
The current survey shows about 66% of patients believe it is “very important” to know when AI plays a large role in their diagnosis or treatment. Also, 46% believe the information is very important when AI plays a small role in their care.
At the same time, less than 10% of people would be “very comfortable” getting a diagnosis from a computer program, even one that makes a correct diagnosis more than 90% of the time but is unable to explain why.
“Patients may not be aware of the automation that has been built into a lot of our devices today,” Khozin says. Electrocardiograms (tests that record the heart’s electrical signals), imaging software, and colonoscopy interpretation systems are examples.
Even if unaware, patients are likely benefiting from the use of AI in diagnosis. One example is a 63-year-old man with ulcerative colitis living in Brooklyn, NY. Aasma Shaukat, MD, a gastroenterologist at NYU Langone Medical Center, did a routine colonoscopy on the patient.
“As I was focused on taking biopsies in the [intestines] I did not notice a 6 mm [millimeter] flat polyp … until AI alerted me to it.”
Shaukat removed the polyp, which had abnormal cells that may be pre-cancerous.
Addressing AI Anxieties
The Yale survey revealed that most people were “very concerned” or “somewhat concerned” about possible unintended effects of AI in health care. A total of 92% said they would be concerned about a misdiagnosis, 71% about a privacy breach, 70% about spending less time with doctors, and 68% about higher health care costs.
A previous study from Aneja and colleagues published in July 2021 focused on AI and medical liability. They found that doctors and patients disagree about liability when AI results in a clinical error. Although most doctors and patients believed doctors should be liable, doctors were more likely to want to hold vendors and health care organizations accountable as well.