June 8, 2022
Want Clinical AI to Work in Healthcare? Start with Patient Education and Experiences
Most of our assumptions about artificial intelligence in healthcare come from surveys conducted by healthcare technology consulting firms and healthcare AI vendors themselves, both of which have a vested interest in their surveys’ results.
So, when an independently conducted survey of consumer opinions on AI in healthcare, published in a peer-reviewed medical journal, comes along, I’m compelled to read it and write about it; it carries more credibility with me than company-sponsored research. So here goes.
Six researchers from Weill Cornell Medical College and the Yale School of Medicine surveyed a nationally representative sample of 926 adults about their overall views on using AI-powered technology to diagnose and treat patients, and about their specific comfort level with having AI-powered technology diagnose and treat them.
They published their survey results in JAMA Network Open, and you can download the study here.
Overall, 55.4 percent of the respondents said that AI will make healthcare in the U.S. “much” or “somewhat” better within the next five years. A whopping 95.8 percent said that it’s “very” or “somewhat” important for providers to disclose to patients when AI played a big role in their diagnosis or treatment. Even if it played a small role, that percentage only dropped to 86.5 percent.
In sum, most patients say they believe AI has a lot of potential for good in medicine, but if you use it on them, you’d better tell them. I get that. I’d want to know, too, whether a man or a machine made a life-or-death health decision for me.
Even more interesting are the survey findings on what patients aren’t keen on having AI do to them. In rank order, here’s what patients said they were “very” or “somewhat” uncomfortable with AI doing:
- Telling you that you have cancer (81.8 percent)
- Making the diagnosis of cancer (68.8 percent)
- Telling you that you have pneumonia (62.9 percent)
- Making the diagnosis of pneumonia (52.0 percent)
- Recommending the type of antibiotics you get (47.5 percent)
- Reading your chest x-ray (45.1 percent)
Clearly, the more serious the condition, the less likely patients want AI involved. They prefer man to machine when it comes to a life-threatening diagnosis. In fact, 91.5 percent said they were “somewhat” or “very” concerned that AI will make the wrong diagnosis.
The patients also said they were “somewhat” or “very” concerned with the following non-clinical issues related to AI:
- My health information will not be kept confidential (70.9 percent)
- AI will mean I spend less time with my doctor (69.6 percent)
- AI will increase my healthcare costs (68.4 percent)
Again, not a particularly flattering opinion of AI in regard to privacy, face time with providers, and costs.
At the same time, I don’t think the results of this survey, or of any other survey in which consumers give AI a lukewarm reception, will stop vendors and providers from developing and deploying the latest and greatest AI-powered technology for both clinical and administrative purposes. It’s what they do.
The only way to close the gap between what consumers fear and what AI can do for them is effective patient education and superior patient experiences.
“Clinicians, policymakers, and developers should be aware of patients’ views regarding AI,” the researchers said. “Patients may benefit from education on how AI is being incorporated into care and the extent to which clinicians rely on AI to assist with decision-making.”
To learn more about this topic, please read:
- “What Patients Really Think of Your Shiny New Clinical Algorithm”
- “What Healthcare AI Adoption and the Old Kiddieland Amusement Park Have in Common”
- “How Can Healthcare Avoid Screwing Up AI’s Potential?”
Thanks for reading.