March 2, 2020
How Can Healthcare Avoid Screwing Up AI’s Potential?
“Just what do you think you’re doing, Dave? Dave, I really think I’m entitled to an answer to that question.”—HAL 9000 in 2001: A Space Odyssey
I get that a lot.
But for the purposes of this column, that could be a camera- and voice-enabled software program in an operating room suite speaking to a surgeon to make sure they’re not removing the wrong kidney.
Such is the potential power of artificial intelligence in healthcare. AI, as you know, is any technology that mimics the human thought process. Machine learning, or ML, is a type of AI technology that “learns,” or improves its predictive capability, as its algorithm processes more data.
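Since “learns” does a lot of work in that definition, here’s a minimal sketch of what it means in practice. It’s Python with scikit-learn, and the “patients” are synthetic numbers generated purely for illustration, not data from any of the surveys below: the same algorithm, fed more records, tends to predict better.

```python
# A minimal sketch of machine "learning": the same algorithm, given more
# training data, tends to make better predictions. The data here is
# synthetic, standing in for (say) patient vitals labeled with an outcome.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fake "patient records": 20 numeric features, one binary outcome.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, len(X_train)):  # grow the training set
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>5} records -> test accuracy {acc:.3f}")
```

Run it and the accuracy generally climbs as the training set grows. That, in one loop, is the entire pitch for ML in healthcare: more data, better predictions.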
Not only has AI become one of healthcare’s most overused buzzwords, even saying AI has become one of healthcare’s most overused buzzwords has become one of healthcare’s most overused buzzwords…or buzzwords.
So I thought it might be a good idea to do a reality check on AI technology’s penetration in healthcare, what people think about it and what needs to happen to ensure that AI is used for good and not evil.
Healthcare execs warming up to AI
Now, I’ve never worked in a healthcare setting, nor am I a data scientist. I have no firsthand knowledge of what’s going on regarding AI’s use in healthcare. But here are some data points from a few surveys and polls released over the past few months that I thought signaled a real change in attitude about AI in an industry long criticized for lagging behind others in technology adoption.
85%
That’s the percentage of healthcare executives who said their organizations have implemented or plan to implement a formal AI strategy. That’s from an OptumIQ survey of 500 senior healthcare executives across four industry sectors: provider, payer, life sciences and employer health benefits organizations. You can download the eight-page survey report here.
59%
That’s the percentage of healthcare clinicians and practitioners who said that they are “comfortable” using AI to help them spot anomalies in patients’ health or medical conditions. That’s from a Philips-sponsored survey of nearly 3,200 healthcare professionals in 15 countries, including 203 in the U.S. You can download the 43-page survey report here.
56%
That’s the percentage of healthcare provider executives who said AI and ML technologies are already making the healthcare system more efficient. That’s from a Change Healthcare/Healthcare Executive Group survey of 445 provider and payer execs. You can download the 41-page survey report here.
37%
That’s the percentage of healthcare executives who said AI is “at least moderately functional” in their organizations. That’s from a KPMG survey of 751 executives across five industries, including healthcare. You can download a copy of the 20-page survey report here.
17.3%
That’s the percentage of healthcare executives who said their organizations are now using AI and ML technologies for “clinical transformation”; the same percentage said they’re using AI and ML for “operational” work. That’s from a Healthcare Innovation magazine survey of about 200 senior execs working across different healthcare sectors. You can read the survey results here.
Assuming the survey and poll results accurately reflect what people are feeling, it sounds like healthcare leaders from both sides of the house—administrative and clinical—are starting to believe that AI and ML technologies can help them do their jobs better, whether that’s making operations more efficient or making diagnoses more accurate.
The market, to its credit, has responded, whether that’s from listening to those leaders or simply seeing gaping access, cost and quality holes in a $4 trillion industry that need to be filled. Citing data from the market research firm CB Insights, Healthcare Dive said healthcare AI startups attracted $4 billion in 367 separate deals last year, up from $2.7 billion in 264 deals in 2018.
Will bad actors corrupt AI’s good intentions?
The big question then becomes: How can healthcare avoid screwing up what on the surface seems like its salvation, its fast pass to becoming a truly customer-focused industry, its solution to lowering costs and improving outcomes?
I’m not trying to be glib. The inspiration for this column came from my total shock at reading about a drug company bribing an EHR vendor to embed a prompt in its software that encouraged physicians to prescribe unnecessary opioids to their patients. Read the press release from the U.S. Justice Department yourself here.
Having covered the healthcare industry for 37 years now, I thought I’d seen everything. But that was a new low for the Healthcare Industrial Complex®: paying and accepting kickbacks to covertly nudge doctors into giving harmful if not lethal drugs to sick people. It’s not much of a leap from that to an AI-powered voice assistant recommending that a surgeon stent a patient’s heart instead of using drugs to open a blocked artery.
Think about all the challenges facing healthcare today and how many of them you can trace back to a vested interest lining its pockets at the expense of patients. The answer: most, if not all.
Identifying the potential AI landmines
So how do we stop the healthcare industrial complex from adulterating healthcare AI and ML tech?
The first step is recognizing the vulnerabilities that vested interests can exploit or that well-meaning healthcare leaders can trip over.
In late December, the National Academy of Medicine, formerly known as the Institute of Medicine, released a report on healthcare AI that didn’t get a lot of attention but really does a nice job of outlining what’s ahead for the industry.
In its 269-page report, Artificial Intelligence in Health Care: The Hope, The Hype, The Promise, The Peril, the NAM cited three potential landmines on a macro level:
- Feeding biased data into AI algorithms that could exaggerate rather than resolve care disparities (see the toy sketch below)
- Assuming causality, resulting in ineffective and inappropriate treatment plans and interventions
- Exposing exponentially more patient health data and information to privacy and security risks
“While there have been a number of promising examples of AI applications in health care, we believe it is imperative to proceed with caution, else we may end up with user disillusionment and another AI winter, and/or further exacerbate existing health and technology driven disparities,” the NAM warned.
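That first landmine, biased data, is worth making concrete. Here’s a toy sketch, again Python on synthetic data of my own invention (no real cohorts, and nothing from the NAM report): train a model on records that are 95 percent “group A,” and its mistakes pile up in the underrepresented “group B.”

```python
# A toy illustration of the biased-data landmine. Everything here is
# synthetic: two invented "groups" whose feature/outcome relationships
# differ, and a training set that badly underrepresents one of them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a synthetic cohort; `shift` changes its outcome pattern."""
    X = rng.normal(size=(n, 5)) + shift
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > 5 * shift).astype(int)
    return X, y

# Training data: 95% group A, 5% group B (the bias).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, shift in (("group A", 0.0), ("group B", 1.5)):
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))
```

The model never sees enough of group B to learn that group’s pattern, so it quietly applies group A’s rules to everyone. In miniature, that’s how biased training data exaggerates disparities rather than resolving them.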
There also are a number of potential landmines on a micro level. AI in Healthcare magazine surveyed more than 1,200 executives and physicians working primarily at hospitals, health systems, integrated delivery networks, medical groups and imaging centers. They cited the following as the top five barriers to AI adoption:
- Lack of financial resources
- Lack of clear strategy for AI within an organization
- Limited understanding of insights from AI
- Lack of leaders’ ownership of and commitment to AI
- Uncertain or low expectations for return on AI investments
You can read more about the AI in Healthcare survey results here.
Look at those five barriers again: they read more like entry points for AI to get hijacked by self-interest.
How to keep AI in healthcare from derailing
So what can the healthcare industry and individual healthcare organizations do to stay on the right side of AI in healthcare?
On a macro level, the NAM report recommended what it called a “graduated approach” to the federal regulation of healthcare AI based on three criteria:
- Level of patient risk
- Level of AI autonomy
- Range of AI tech from static to dynamic
In other words, as AI’s power escalates, so too should the oversight of that power. Makes sense.
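What might “graduated” look like in practice? Here’s a purely hypothetical sketch (my invention, not the NAM’s; the scoring scheme and tier names are made up): score a tool on the three criteria and let the worst score set the oversight level.

```python
# A hypothetical sketch of a "graduated approach" to AI oversight. The
# three criteria come from the NAM report; the scoring scheme and the
# regulatory tiers are invented here, purely for illustration.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def oversight_tier(patient_risk: Level, autonomy: Level, adaptivity: Level) -> str:
    """Map the three criteria to a made-up oversight tier; worst score wins."""
    worst = max(patient_risk, autonomy, adaptivity)
    return {Level.LOW: "post-market monitoring",
            Level.MODERATE: "pre-market review",
            Level.HIGH: "pre-market review plus continuous auditing"}[worst]

# A static decision-support tool that only flags cases for a clinician:
print(oversight_tier(Level.LOW, Level.LOW, Level.LOW))
# A continuously learning system that acts on patients autonomously:
print(oversight_tier(Level.HIGH, Level.HIGH, Level.HIGH))
```

Letting the worst score dominate is just one defensible design choice; the NAM names the criteria, not a formula.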
Last February, President Trump signed an executive order that launched his American AI Initiative. One tenet of the initiative is to “set AI governance standards.” It’s not specific to healthcare, but it’s worth reading to get a sense of where the current administration sees the line. You can add your own interpretation here.
Last April, the FDA proposed a regulatory framework for overseeing healthcare AI and ML software as a medical device. To my knowledge, it’s still proposed. But, if you’re developing or using the stuff, you should probably read it to know what the FDA is thinking.
On a micro level, healthcare leaders might be wise to listen to some early AI adopters. In October, KLAS, the health IT market research and ratings firm, and the College of Healthcare Information Management Executives released a 17-page report on the experiences of health IT execs at 57 healthcare organizations that have attempted to adopt AI and ML technologies. Their three keys to success?
- Embed AI in the workflow
- Bring together experts on AI, data science, modeling, analytics and subject matter
- Take ownership for driving change management and operationalizing insights
In other words, do everything you can, and do the right things, if you want your AI and ML models to serve their intended purposes. Again, makes total sense.
Seventy-five percent of the execs surveyed by KPMG said the government should regulate AI, and 90 percent said companies should have AI ethics policies in place to help keep everyone in line.
When you put it all together, a combination of smart federal regulation that supports innovation but protects consumers, plus transparent and robust data quality, usage, security and governance practices, processes and protocols at the organizational level, may be just enough to keep AI and ML technologies on the right trajectory.
If not, HAL may not open the pod bay doors when you’re trying to get back into your spaceship.
Thanks for reading.