Some important questions for AI ethics and safety

Whether it's predictive analytics, RPA, NLP or other use cases, health systems should be thinking seriously about context and workflows, says a UC Berkeley expert who will offer examples during HIMSS22 Digital.
By Nathan Eddy

The healthcare industry faces a series of pressing challenges concerning the implementation of safe and ethical artificial intelligence.

But resources are available to help healthcare teams get started with safe and ethical AI and to set a common foundation for the discussion.

As Jessica Newman, director of the AI Security Initiative at the Center for Long-Term Cybersecurity at the University of California, Berkeley, explained, there are four common types of AI technologies being implemented in healthcare today.

  • The first is predictive and prescriptive analytics, used in healthcare applications like precision medicine, where a system might predict the most successful treatment based on a patient's particular attributes and context (a minimal sketch follows this list).

  • The second type of AI is robotic process automation, or RPA, which is designed to automate and replicate relatively simple, rule-based administrative processes in healthcare. This is used for things like updating patient records or billing.

  • A third is natural language processing, which enables language applications like speech recognition, text analysis and translation, and can be used to analyze clinical notes or transcribe patient interactions.

  • The fourth type of AI technology is computer vision, where machine learning-enabled image analysis can help recognize potentially cancerous lesions in radiology images, support retinal scanning or help detect a brain hemorrhage.
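
As a rough illustration of the first category, here is a minimal, hypothetical sketch of a predictive model of that kind. scikit-learn is an assumed tool choice, and every feature name and value below is invented rather than drawn from Newman's talk:

    # Hypothetical sketch: predicting whether a treatment is likely to
    # succeed for a patient, in the spirit of the "predictive and
    # prescriptive analytics" category. All features and data are invented.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # X: per-patient attributes (age, biomarker level, comorbidity count);
    # y: 1 if the treatment succeeded, 0 otherwise. Toy values only.
    X = [[64, 1.2, 3], [51, 0.8, 1], [73, 2.1, 4], [45, 0.5, 0],
         [68, 1.9, 2], [59, 1.1, 1], [70, 2.3, 5], [48, 0.6, 0]]
    y = [0, 1, 0, 1, 0, 1, 0, 1]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Predicted probability of treatment success for a new patient.
    print(model.predict_proba([[66, 1.4, 2]])[0][1])

In practice, such a model would be trained on real clinical data and validated against exactly the accuracy, reliability and bias concerns Newman raises below.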

"Practitioners always have to consider if the tool is appropriate in the specific context of the deployment and how it can be integrated into existing workflows," Newman said. "The second challenge is accuracy and reliability."

The third challenge is bias and fairness, a pervasive problem for AI systems, which typically learn from imperfect datasets that encode human and historical biases.

Despite the challenges, Newman pointed to promising governance developments and noted this is not a completely unregulated space: The FDA, for example, regulates medical devices that use AI-enabled software, and the FTC is cracking down on the sale or use of racially biased algorithms. Data privacy and security regulations such as HIPAA and GDPR also apply to AI technologies.

One example of a publicly available tool is the Ethics and Algorithms Toolkit, which walks teams developing AI projects through six steps, with key questions to consider followed by targeted risk-mitigation strategies.

Newman added that there are also open-source auditing tools, which can audit machine learning models for discrimination and bias, as well as open-source AI monitoring tools that can visualize model performance and give a prioritized list of issues to debug (a sketch of the former follows).
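
As a concrete illustration, here is a minimal sketch of the kind of check such an auditing tool performs. Fairlearn is one assumed example of an open-source fairness-auditing library (the article does not name specific tools), and the outcomes, predictions and group labels below are invented toy data:

    # Hypothetical sketch of a fairness audit with the open-source
    # Fairlearn library. The outcomes, predictions and demographic
    # group labels below are invented toy data, not real patients.
    from fairlearn.metrics import MetricFrame, demographic_parity_difference
    from sklearn.metrics import accuracy_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # actual outcomes
    y_pred = [1, 0, 1, 1, 0, 0, 1, 0]                  # model predictions
    group = ["a", "a", "a", "a", "b", "b", "b", "b"]   # demographic group

    # Accuracy broken out per demographic group.
    frame = MetricFrame(metrics=accuracy_score,
                        y_true=y_true, y_pred=y_pred,
                        sensitive_features=group)
    print(frame.by_group)

    # Gap in positive-prediction rates between groups (0 means parity).
    print(demographic_parity_difference(y_true, y_pred,
                                        sensitive_features=group))

A gap like the one this sketch surfaces, accuracy of 1.0 for one group versus 0.5 for the other, is exactly the sort of disparity a team would then investigate and mitigate before deployment.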

"Collectively, these resources and governance processes can help teams implement aid safely," she said. "But that doesn't mean they will always be sufficient. Sometimes, AI isn't the answer."

Newman's HIMSS22 Digital session, "Can We Trust AI in Healthcare? Assessing the Ethical and Safety Implications," is scheduled to air Tuesday, March 15, from 11:50 a.m. to 12:10 p.m. EDT.

Nathan Eddy is a healthcare and technology freelancer based in Berlin.
Email the writer: nathaneddy@gmail.com
Twitter: @dropdeaded209
