May 7, 2020
Heed the FTC’s Warnings on Your Use of Healthcare AI
I’m not sure how often data scientists and marcom people find themselves in the same room at a hospital, health system or medical practice. But if you’re looking for a reason to invite both to the same Zoom meeting, the Federal Trade Commission just gave you one.
In April, the FTC shared its thoughts on how to protect consumers from the increasing use of technologies powered by artificial intelligence, machine learning and algorithms. The agency’s thoughts came in the form of a post on the FTC’s blog written by Andrew Smith, director of the FTC’s Bureau of Consumer Protection.
You can read his blog post here.
The post isn’t specific to healthcare, but it applies squarely to the industry. In fact, the post opens with a reference to the study published last October in Science that found racial bias in an algorithm used for population health management.
The FTC’s detailed guidance and warnings for businesses, healthcare organizations included, that use AI-enabled technology to make decisions about their customers (read: patients) fall under five general recommendations. Below are the recommendations, each with a potential landmine I picked from the detailed guidance beneath it:
✓ Be transparent
Warning: “Secretly collecting audio or visual data – or any sensitive data – to feed an algorithm could also give rise to an FTC action.”
✓ Explain your decision to the consumer
Warning: “If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked.”
✓ Ensure that your decisions are fair
Warning: “You can save yourself a lot of problems by rigorously testing your algorithm, both before you use it and periodically afterwards, to make sure it doesn’t create a disparate impact on a protected class.” (One simple way to start that testing is sketched just after this list.)
✓ Ensure that your data and models are robust and empirically sound
Warning: “If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate.”
✓ Hold yourself accountable for compliance, ethics, fairness and nondiscrimination
Warning: “If you’re in the business of developing AI to sell to other businesses, think about how these tools could be abused and whether access controls and other technologies can prevent the abuse.”
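What might “rigorously testing your algorithm” for disparate impact actually look like? Here’s a minimal, hypothetical sketch in Python using the “four-fifths rule,” a common screening heuristic borrowed from employment law: if any group’s favorable-outcome rate falls below 80 percent of the most-favored group’s rate, the model gets flagged for review. The group labels, decision data and threshold below are illustrative assumptions on my part, not anything the FTC prescribes.

```python
# A minimal sketch of a disparate-impact screen using the "four-fifths rule."
# All group names and decision data below are hypothetical.

def favorable_rate(outcomes):
    """Fraction of decisions in a group that were favorable (True)."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(groups, threshold=0.8):
    """Compare each group's favorable-outcome rate to the highest rate.

    groups: dict mapping group label -> list of boolean decisions.
    Returns per-group rate, ratio to the best-performing group, and
    whether that ratio clears the four-fifths threshold.
    """
    rates = {g: favorable_rate(o) for g, o in groups.items()}
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / best, "passes": r / best >= threshold}
        for g, r in rates.items()
    }

# Hypothetical decisions (True = approved for a care-management program)
decisions_by_group = {
    "group_a": [True] * 80 + [False] * 20,   # 80% favorable
    "group_b": [True] * 55 + [False] * 45,   # 55% favorable
}

for group, result in four_fifths_check(decisions_by_group).items():
    flag = "OK" if result["passes"] else "REVIEW"
    print(f"{group}: rate={result['rate']:.2f} "
          f"ratio={result['ratio']:.2f} [{flag}]")
```

A check like this is a screen, not a verdict. A flagged ratio is a reason to dig into the model and its training data, much as the authors of the Science study did with the population health algorithm the FTC’s post cites.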
Wherever the FTC uses the word “customer” in this post, you can exchange it for the word “patient” or “member” or “beneficiary,” and it would all make perfect sense. Healthcare AI vendors are partnering with providers and payers to develop AI models for everything from clinical decision support to revenue cycle management—all using the data supplied by the providers or the payers to feed their algorithms.
The FTC’s guidance should be incorporated into the data governance protocols of providers and payers and be part of the marketing plans of healthcare AI vendors hawking their latest technologies. Given AI’s huge opportunity to transform healthcare financing and delivery, stepping on any of these landmines laid out by the FTC would be disastrous.
To learn more about this topic, please read “How Can Healthcare Avoid Screwing Up AI’s Potential?” on 4sighthealth.com.
Stay home, stay safe, stay alive.
Thanks for reading.