August 21, 2020
Should We Regulate Healthcare Chatbots?
By David Burda


Don’t bury the lede. 

That journalism rule of thumb also applies in healthcare when you see your doctor.

If you tell your doctor your throat hurts, you have a headache and you have uncontrolled bleeding from a severed finger, there’s a good chance that your doctor will tell you to open your mouth and say “ahh.” That’s because their education, training and experience have taught them to focus on the first symptom that a patient describes because mentioning it first is a sign that it’s the most serious symptom and the one that needs immediate medical attention.

That’s how doctors think, and if you’re a patient who wants your finger to stop bleeding before you pass out, don’t mention your sore throat or your headache.

I don’t know how healthcare chatbots think. These are the software programs you access through an app on your phone or tablet to “talk” to a computer about what’s ailing you. More than a symptom checker, these apps enable you to have a conversation with the computer that leads to an action. That action can range from “do nothing, you’re fine” to “let’s have you talk to a clinician” to “dial 911 and call an ambulance.”

There are lots of healthcare chatbots on the market today, and more are coming. Each app developer claims or will claim that its chatbot is “intelligent,” meaning it’s built on some kind of artificial intelligence or machine learning technology. In other words, it gets smarter and more useful to patients as it learns from each conversation it has with a patient. For example, every time a patient tells the chatbot they have a sore throat, the chatbot asks them if their finger is bleeding.

Many hospitals, health systems and medical practices are buying navigational, triaging and diagnosing chatbots to man their new digital front doors. But how do they know they’ve made a good hire?

In a recent Viewpoint in the Journal of the American Medical Association, two doctors and one medical informaticist with the Perelman School of Medicine at the University of Pennsylvania basically made the same point.

“The evidence suggests CAs [conversational agents] are not yet mature enough to reliably respond to patients’ statements in all circumstances, even when those statements explicitly signal harm,” they said.

They proceeded to outline a framework for regulating what they defined as high-risk CAs. High-risk CAs are healthcare chatbots that “involve more automation (natural language processing, machine learning), unstructured, open-ended dialogue with patients, and have potentially serious patient consequences in the event of system failure.”

Their framework covers 12 aspects of a high-risk healthcare chatbot that must be addressed to make it safe to use with patients:

  • Bias and health equity
  • Content decisions
  • Cybersecurity
  • Data use, privacy and integration
  • Governance, testing and evaluation
  • Legal and licensing
  • Patient safety
  • Research and development questions
  • Scope
  • Supporting innovation
  • Third-party involvement
  • Trust and transparency

I much prefer competition that sparks innovation to regulation that stifles it. With healthcare chatbots, however, I’m concerned that the competition to build the best chatbot to pitch to hospitals, health systems and medical practices, all rushing to open their digital front doors, is so intense that someone is going to cut corners. It’s not like that hasn’t happened before. The potential for good is high, but so is the potential for harm.

The situation reminds me of Monty Python’s Hungarian phrasebook sketch, which you can watch here. No one wants a hovercraft full of eels.

And no one wants a healthcare chatbot that gives them bad medical advice. 

My advice for hospitals, health systems and medical practices is to convert the researchers’ framework into a punch list to vet healthcare chatbot vendors. Under each of the 12 aspects in the framework, the researchers listed questions that potential regulators should ask about a healthcare chatbot. Providers should ask the same questions of their potential chatbot technology partners.

If providers take that approach, the market, not regulators, will decide which chatbots create the most value for patients when the inevitable shakeout in the healthcare chatbot market happens.

Thanks for reading.

Stay home, stay safe, stay alive.

About the Author

David Burda

David Burda began covering healthcare in 1983 and hasn’t stopped since. Dave writes this monthly column “Burda on Healthcare,” contributes weekly blog posts, manages our weekly newsletter 4sight Friday, and hosts our weekly Roundup podcast. Dave believes that healthcare is a business like any other business, and customers — patients — are king. If you do what’s right for patients, good business results will follow.

Dave’s personal experiences with the healthcare system, both as a patient and a family caregiver, have shaped his point of view. It’s also been shaped by covering the industry for 40 years as a reporter and editor. He worked at Modern Healthcare for 25 years, the last 11 as editor.

Prior to Modern Healthcare, he did stints at the American Medical Record Association (now AHIMA) and the American Hospital Association. After Modern Healthcare, he wrote a monthly column for Twin Cities Business explaining healthcare trends to a business audience, and he developed and executed content marketing plans for leading healthcare corporations as the editorial director for healthcare strategies at MSP Communications.

When he’s not reading and writing about healthcare, Dave spends his time riding the trails of DuPage County, IL, on his bike, tending his vegetable garden and daydreaming about being a lobster fisherman in Maine. He lives in Wheaton, IL, with his lovely wife of 40 years and his three children, none of whom want to be journalists or lobster fishermen.
