Healthcare must control the 'whether, when and how' of AI development and deployment

At the HIMSS23 opening keynote discussion, artificial intelligence innovators debated how to weigh the "tremendous opportunities" for healthcare algorithms against the "risks – some of which we may not yet know about."
By Mike Miliard

"There are still deep scientific mysteries that we are just coming to grips with," said Microsoft's Peter Lee about artificial intelligence at HIMSS23 on Monday.

Photo: Lotus Eyes Photography for HIMSS

CHICAGO – HIMSS23 kicked off in full force here on Tuesday, in a full-to-capacity opening keynote with an audience from around the world. HIMSS CEO Hal Wolf noted that the organization's membership has now surpassed 122,000 – a 60% increase over the past five years – with an increasingly global feel. Healthcare and technology leaders from more than 80 countries are represented at the show, all trying to tackle similar challenges.

"We've had to solve a lot of problems in the past three years," said Wolf. 

Beyond the pandemic, other hurdles to health and wellness remain, in the U.S. and worldwide: aging populations, chronic disease, geographic displacement and challenges with health access, financial pressures, staff shortages and fundamental shifts in care delivery such as the rise of consumerism and the move toward telehealth and home-based care.

To solve those challenges, "the need for actionable information is stronger now than at any time in the past," said Wolf.

And management of those enormous troves of information is increasingly being powered by fast-evolving artificial intelligence – the topic of a sometimes amicably contentious opening panel discussion.

AI and machine learning can "open up new horizons if – IF – we use them appropriately," said Wolf. He nodded jokingly to the recent wave of publicity around OpenAI's ChatGPT by noting that he had asked the AI model a simple question: "how to solve the global healthcare challenges?"

In seconds, the software returned a 300-plus word answer.

Those challenges are "complex and multifaceted, and therefore require a comprehensive approach involving multiple stakeholders, strategies and solutions," said ChatGPT, which listed improved access, investments in preventative care, technological innovation, addressing health disparities and global collaboration among its top suggestions.

When Mayo Clinic Chief Information Officer Cris Ross – who moderated the panel discussion – said that healthcare is only in the early stages of "creating and learning how to manage these emerging AI tools," it was hard to argue.

Ross convened the discussion, "Responsible AI: Prioritizing Patient Safety, Privacy, and Ethical Considerations," with a quartet of AI innovators who have been thinking hard about the very real challenges and opportunities of this transformative technology.

Andrew Moore, founder and CEO of Lovelace AI; Kay Firth-Butterfield, CEO of the Centre for Trustworthy Technology; Peter Lee, vice president of research and incubation at Microsoft; and Reid Blackman, author of the book "Ethical Machines" and CEO of Virtue, all were tasked with exploring a simple question about AI posed by Ross: "Just because we can do a thing, should we?"

'It's not simple, and there's a lot to learn'

As he has in the past, Ross contrasted what he calls Big AI – "bold ideas like machines that can diagnose disease better than physicians" – with Little AI, the "machines that are already listening, writing, helping – and irrevocably changing how we live and work."

Those AI tools are already helping their users do "increasingly bigger things," he said. And it's through the accretion of Little AI advancements that Big AI will emerge.

And it's happening quickly. For that reason, Moore argued that health systems should get their arms around the challenges now.

Even though the fast-advancing capabilities of large language models like ChatGPT might feel uncanny to some, "I would expect a responsible hospital should be using large language models now," he said, for tasks such as customer service and call center automation.

"Don't wait to see what happens with the next iteration," said Moore. "Start right now, so you'll be ready."

The capabilities of generative AI are emerging in ways that could benefit healthcare significantly.

Useful applications are already plainly visible, such as integrating generative AI to improve clinical note-taking. See, for instance, Epic's generative AI announcement this week with Microsoft and Nuance, or medical schools deploying the tools so AI can "play act the role of a patient."

But there are "also some scary risks," said Lee. "It's not simple, and there's a lot to learn."

To manage those risks, Lee implored the crowd at HIMSS23: "this community needs to own 'whether, when and how' these AI technologies are used in the future."

Yes, "there are tremendous opportunities," he said. "But also risks, some of which we may not yet know about."

So the "healthcare community needs to assertively own" how the development and deployment of these tools evolve – with a keen eye on safety, efficacy and equity.

For his part, Blackman said he still has real concerns about the black-box aspects of too many AI models, arguing that more transparency and explainability are fundamental must-haves if these tools are to find wider acceptance, especially in clinical settings.

"ChatGPT 4 can be phenomenally useful," he said. But it "doesn't give you reasons" for the decisions it makes and the answers it gives.

At times, as evidenced by the model's response to Wolf's question, the answers are accurate and truthful. But they're often arrived at through a mysterious and complex jumble of calculations whose effect can feel something like "magic," said Blackman.

"Maybe we're ok with magic that works," he said. "But if you're making a cancer diagnosis, I need to know exactly the reason why."

An LLM is "a word predictor, not a deliberator," he said. Even upon close examination, "when you get those reasons, they're not the reasons you actually got the diagnosis."

But ultimately, he said, healthcare organizations will need to think hard about "what are we OK with? Are we OK with a black box model, even when it works well?"

For Firth-Butterfield – who this past month was one of more than 26,000 signatories to an open letter calling on labs to pause training of powerful new AI systems for at least six months – a key question is not how a model arrives at its answers, but "where is [it] available?"

"Although there are 100 million people using ChatGPT, there are three billion that don't have access to the internet." Her concerns about AI have a lot to do with health equity, bias, fairness and accountability, she said.

"If you're going to be using generative AI, what data are you going to share with those systems?" she asked. And "who do you sue when something goes wrong?"

Lee agreed that "accountability issues are [something] very serious that the world needs to figure out. And it needs to be looked at sector by sector," with healthcare and education of particular concern.

As Blackman noted, there's a "difference between technology that is intelligent versus capable of deceiving people into thinking it's intelligent."

AI is evolving faster than even many experts thought it would.

The question, then, is "where should healthcare be with these technologies that are transformative but mysterious?" said Ross.

Indeed, "there are still deep scientific mysteries that we are just coming to grips with, in addition to the ethics," said Lee – who has been grappling with the ethics of AI for years.

"We are hurtling into the future, without taking a step back and designing it for ourselves," Firth-Butterfield cautioned. "What is it that we want from these tools for our future?

Lee reiterated his hope that the healthcare community will collaborate to help answer that question – ensuring guidance and guardrails are in place – as various stakeholders "work together toward some common ground."

He acknowledged that there's still a lot of fear, uncertainty and doubt about what AI is doing already – and what it may be capable of in the future. "This touches a nerve," he said. "It's an emotional thing."

So his advice was to combat that uncertainty by learning. "Get hands-on. Try to get immersed, and understand. And then work with the rest of the community."

Moore agreed that passive observation is not an option.

"Don't stop and wait to see what happens," he said. "Have your own people building models. Don't just rely on vendors. Make sure you're in it, and your people understand what's happening."

Mike Miliard is executive editor of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a HIMSS publication.
