August 21, 2020
Should We Regulate Healthcare Chatbots?
Don’t bury the lede.
That journalism rule of thumb also applies in healthcare when you see your doctor.
If you tell your doctor that your throat hurts, you have a headache and you have uncontrolled bleeding from a severed finger, there’s a good chance your doctor will tell you to open your mouth and say “ahh.” That’s because their education, training and experience have taught them to focus on the first symptom a patient describes: mentioning it first is usually a sign that it’s the most serious symptom and the one that needs immediate medical attention.
That’s how doctors think, and if you’re a patient who wants your finger to stop bleeding before you pass out, don’t mention your sore throat or your headache.
I don’t know how healthcare chatbots think. These are the software programs you access through an app on your phone or tablet to “talk” to a computer about what’s ailing you. More than symptom checkers, these apps let you have a conversation with the computer that leads to an action. That action can range from “do nothing, you’re fine” to “let’s have you talk to a clinician” to “dial 911 and call an ambulance.”
There are lots of healthcare chatbots on the market today, and more are coming. Each app developer claims, or will claim, that its chatbot is “intelligent,” meaning it’s built on some kind of artificial intelligence or machine learning technology. In other words, it gets smarter and more useful to patients as it learns from each conversation it has with a patient. For example, every time a patient tells the chatbot that they have a sore throat, the chatbot asks them if their finger is bleeding.
Many hospitals, health systems and medical practices are buying navigational, triaging and diagnosing chatbots to man their new digital front doors. But how do they know they’ve made a good hire?
In a recent Viewpoint in the Journal of the American Medical Association, two doctors and one medical informaticist with the Perelman School of Medicine at the University of Pennsylvania basically made the same point.
“The evidence suggests CAs (conversational agents) are not yet mature enough to reliably respond to patients’ statements in all circumstances, even when those statements explicitly signal harm,” they said.
They proceeded to outline a framework for regulating what they defined as high-risk CAs. High-risk CAs are healthcare chatbots that “involve more automation (natural language processing, machine learning), unstructured, open-ended dialogue with patients, and have potentially serious patient consequences in the event of system failure.”
Their framework covers 12 aspects of a high-risk healthcare chatbot that need to be addressed to make it safe to use with patients:
- Bias and health equity
- Content decisions
- Cybersecurity
- Data use, privacy and integration
- Governance, testing and evaluation
- Legal and licensing
- Patient safety
- Research and development questions
- Scope
- Supporting innovation
- Third-party involvement
- Trust and transparency
I much prefer competition that sparks innovation to regulation that stifles it. With healthcare chatbots, however, I’m concerned that the competition to build the best chatbot to pitch to hospitals, health systems and medical practices rushing to open their digital front doors is so intense that someone is going to cut corners. It’s not like that hasn’t happened before. The potential for good is high, but so is the potential for harm.
The situation reminds me of the Hungarian phrasebook sketch by Monty Python, which you can watch here. No one wants a hovercraft full of eels.
And no one wants a healthcare chatbot that gives them bad medical advice.
My advice for hospitals, health systems and medical practices is to convert the researchers’ framework into a punch list to vet healthcare chatbot vendors. Under each of the 12 aspects in the framework, the researchers listed questions that potential regulators should ask about a healthcare chatbot. Providers should ask the same questions of their potential chatbot technology partners.
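If it helps to picture that punch list, here’s a minimal sketch in Python of how a provider’s vetting team might track it. The aspect names come straight from the framework above; the sample questions, field names and scoring are my own illustrative placeholders, not the researchers’ wording.

```python
from dataclasses import dataclass

# Illustrative sketch only: the aspect names come from the researchers' framework,
# but these sample questions are placeholders, not the researchers' own wording.

@dataclass
class ChecklistItem:
    aspect: str             # one of the 12 aspects in the framework
    question: str           # what to ask the chatbot vendor
    vendor_answer: str = ""
    satisfied: bool = False

PUNCH_LIST = [
    ChecklistItem("Patient safety",
                  "How does the chatbot detect and escalate statements that signal harm?"),
    ChecklistItem("Bias and health equity",
                  "Which patient populations was the system tested on, and how is performance audited across them?"),
    ChecklistItem("Data use, privacy and integration",
                  "Where is conversation data stored, and who can access it?"),
    # ...and so on, with one or more questions under each of the remaining aspects.
]

def open_items(items):
    """Return the questions a vendor hasn't yet answered to the provider's satisfaction."""
    return [item for item in items if not item.satisfied]

if __name__ == "__main__":
    for item in open_items(PUNCH_LIST):
        print(f"[open] {item.aspect}: {item.question}")
```

Swap in the researchers’ actual questions under each of the 12 aspects and you have a working vendor scorecard.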
If providers take that approach, the market, not regulators, will decide which chatbots create the most value for patients when the inevitable shakeout in healthcare chatbots happens.
Thanks for reading.
Stay home, stay safe, stay alive.