'Smarter AI can help fight bias in healthcare'

A panel of global experts showcased how AI can be developed and used to tackle inequities in health during the recent meeting of the Radiological Society of North America (RSNA).
By Mélisande Rouger
03:06 AM

Leading researchers discussed which requirements AI algorithms must meet to fight bias in healthcare during the session 'Artificial Intelligence and Implications for Health Equity: Will AI Improve Equity or Increase Disparities?', held on 1 December.

The speakers were: Ziad Obermeyer, associate professor of health policy and management at the Berkeley School of Public Health, CA; Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital, Australia; Constance Lehman, professor of radiology at Harvard Medical School, director of breast imaging, and co-director of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital; and Regina Barzilay, professor in the department of electrical engineering and computer science and member of the Computer Science and AI Lab at the Massachusetts Institute of Technology.

The discussion was moderated by Judy Wawira Gichoya, assistant professor in the Department of Radiology at Emory University School of Medicine, Atlanta.

WHY IT MATTERS

Artificial intelligence (AI) may unintentionally intensify inequities that already exist in modern healthcare, and understanding those biases may help defeat them.

Social determinants are partly responsible for poor healthcare outcomes, and it is crucial to raise awareness about inequity in access to healthcare, as Prof Sam Shah, founder and director of the Faculty of Future Health in London, explained in a keynote during the HIMSS & Health 2.0 European Digital event.

Taking the patient experience into account, conducting exploratory error analysis and building smarter, more robust algorithms could help reduce bias in many clinical settings, such as pain management and access to screening mammography.

ON THE RECORD

Judy Wawira Gichoya, Emory University School of Medicine, said: "The data we use is collected in a social system that already has cultural and institutional biases. (…) If we just use this data without understanding the inequities then algorithms will end up perpetuating, if not magnifying, our existing disparities."

Ziad Obermeyer, Berkeley School of Public Health, talked about the pain gap phenomenon, where the pain of white patients is treated or investigated until a cause is found, while the pain of patients of other races may be ignored or overlooked.

"Society's most disadvantaged, non-white, low income, lower educated patients (…) are reporting severe pain much more often. An obvious explanation is that maybe they have a higher prevalence of painful conditions, but that doesn't seem to be the whole story," he said.

Obermeyer explained that listening to the patient, not just the radiologist, could help develop solutions to predict the experience of pain. He referenced an NIH-sponsored dataset that allowed him to experiment with a new type of algorithm, with which he identified more than double the number of black patients with severe knee pain who would be eligible for surgery, compared with standard grading.
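A hypothetical sketch of that idea: supervise an image model on patient-reported pain scores rather than on radiologist severity grades, so the label reflects the patient's experience. The data, the network and the 0-100 pain scale below are stand-ins for illustration, not the actual NIH dataset or Obermeyer's code.

```python
# Illustrative only: train a knee X-ray model against patient-reported pain
# scores instead of radiologist severity grades, so the label captures the
# patient's experience. Data and shapes are synthetic stand-ins.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=1)  # final layer regresses a single pain score
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel X-ray input

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in batch: 8 single-channel knee X-rays with patient-reported pain on a 0-100 scale.
xrays = torch.randn(8, 1, 224, 224)
reported_pain = torch.rand(8, 1) * 100

model.train()
optimizer.zero_grad()
pred = model(xrays)
loss = loss_fn(pred, reported_pain)  # supervise on the patient's report,
loss.backward()                      # not the radiologist's grade
optimizer.step()
```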

Luke Oakden-Rayner, Royal Adelaide Hospital, suggested conducting exploratory error analysis: look at every error case and find common threads, instead of just looking at the AI model and concluding that it is biased.

"Look at the cases it got right and those it got wrong. All the cases AI got right will have something in common and so will the ones it got wrong, then you can find out what the system is biased toward," he said.

Constance Lehman, Harvard Medical School, said: “About two million women will be diagnosed with breast cancer and over 600,000 will die worldwide this year. But there’s a marked discrepancy in the impact of breast cancer on women of colour vs. Caucasian women.”

In the EU, one in eight women will develop breast cancer before the age of 85, and an average of 20% of breast cancer cases occur in women younger than 50, according to Europa Donna, a Europe-wide coalition of affiliated groups of women that facilitates the exchange of information concerning breast cancer.

Lehman presented an algorithm she developed with Regina Barzilay to help identify a woman's risk of breast cancer based on her mammogram alone. The solution uses deep learning and an image encoder that takes the four views of a standard digital mammogram, without requiring access to family history, prior biopsies or reproductive history.

“This imaging only model performs better than other models and supports equity across the races,” she said.
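A minimal sketch of such an image-only architecture is shown below, assuming a shared encoder over the four standard views (left/right craniocaudal and mediolateral oblique) whose pooled embedding feeds a risk head. It illustrates the idea only; it is not the authors' actual model, and all layer sizes and names are invented.

```python
# Sketch of an image-only risk model: one encoder shared across the four
# mammographic views, pooled into a single embedding, then a risk head.
# No family history, biopsy or reproductive inputs are used.
import torch
import torch.nn as nn

class FourViewRiskModel(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Tiny stand-in encoder, shared across all four views.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.risk_head = nn.Linear(embed_dim, 1)  # e.g. a risk logit

    def forward(self, views):            # views: (batch, 4, 1, H, W)
        b, v, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        pooled = feats.mean(dim=1)       # aggregate the four views
        return self.risk_head(pooled)

risk_logits = FourViewRiskModel()(torch.randn(2, 4, 1, 256, 256))
print(risk_logits.shape)  # torch.Size([2, 1])
```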

Regina Barzilay, MIT Institute for Medical Engineering & Science, explained how to build robust AI to support equity in health. “An image-based model that is trained on a diverse population can very accurately predict risk across different populations in a very consistent way,” she said.

The AI community is working hard on tools that behave robustly in the presence of bias, by training models to withstand sources of bias such as the nuisance variation between the devices used to acquire the images.
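One common way to handle such nuisance variation, sketched below as an assumption since the session did not name a specific technique, is domain-adversarial training with a gradient-reversal layer: the encoder is penalised whenever an auxiliary head can guess which device produced the image, squeezing device-specific cues out of the features. All names, shapes and the three-scanner setup are illustrative.

```python
# Domain-adversarial sketch: a gradient-reversal layer pushes the encoder to
# produce features from which the imaging device cannot be recognised.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip the gradient: encoder learns to fool the device head

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
task_head = nn.Linear(128, 1)    # clinical prediction (e.g. a risk logit)
device_head = nn.Linear(128, 3)  # which of 3 scanner models produced the image

images = torch.randn(8, 1, 64, 64)
labels = torch.rand(8, 1)
device_ids = torch.randint(0, 3, (8,))

feats = encoder(images)
task_loss = nn.functional.binary_cross_entropy_with_logits(task_head(feats), labels)
dev_loss = nn.functional.cross_entropy(device_head(GradReverse.apply(feats)), device_ids)
(task_loss + dev_loss).backward()  # one combined step; encoder un-learns device cues
```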

“Humans who are ultimately responsible for making a clinical decision should understand what the machine is doing, to think of all possible biases that the machine can introduce. Models that can make their reasoning understandable to humans could help,” she concluded.
