AI in medicine: how to cope with race bias?

Artificial intelligence is used in a wide range of healthcare settings, from analyzing medical images to assisting in surgical procedures. Yet although AI can sometimes outperform qualified physicians, its capabilities are not always put to fair use. Last year, for example, a study published in The Lancet Digital Health reported that AI models could accurately predict an individual’s race from different types of X-ray images, a task impossible for human experts. This is a warning to those who, perhaps not entirely wrongly, envision a future in which software, beyond what it “sees,” can extract additional information about individuals in order to classify them and build categorical profiles without their consent. A scenario that, The Lancet fears, could exacerbate racial disparities in medical AI.

Examples of bias in machine learning are endless. Using both a public and a private dataset, MIT scientists confirmed the finding above. Working with imaging data from chest X-rays, limb X-rays, chest CT scans, and mammograms, the team trained a deep learning model to identify patients’ race as white, black, or Asian, even though the images contained no explicit mention of origin. A recent Science article reminds us that AI can also predict people’s age and sex simply from the ultrasound response during a cardiac exam. Heart rate varies with age and sex, so producing a reasonably accurate estimate from such information is simple, almost trivial, for software that processes millions of data points per minute.
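To make the mechanism concrete, here is a minimal, hypothetical sketch (not the MIT team’s actual code, and using synthetic numbers rather than X-rays): a plain softmax classifier trained on made-up feature vectors whose distributions differ only slightly between three groups. It illustrates how a model can learn a group label from faint, distributed statistical signal even though the label never appears explicitly in the input.

```python
# Hypothetical sketch: a softmax (multinomial logistic regression)
# classifier learning a 3-way group label from synthetic "image
# feature" vectors. The group is never stored in the features; only a
# small statistical shift per group is present, yet accuracy ends up
# well above the 1/3 chance level.
import math
import random

random.seed(0)

N_FEATURES = 8
CLASSES = 3  # stand-ins for the three groups in the study

def make_example(cls):
    # Each class shifts some feature means slightly: the kind of faint,
    # distributed signal the studies suggest medical images can carry.
    return [random.gauss(0.5 * cls * (i % 3), 1.0) for i in range(N_FEATURES)], cls

data = [make_example(c) for c in range(CLASSES) for _ in range(200)]
random.shuffle(data)

# One weight vector and one bias per class.
W = [[0.0] * N_FEATURES for _ in range(CLASSES)]
b = [0.0] * CLASSES

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def predict_probs(x):
    return softmax([sum(w * xi for w, xi in zip(W[c], x)) + b[c]
                    for c in range(CLASSES)])

LR = 0.05
for _ in range(30):  # a few epochs of plain SGD on cross-entropy loss
    for x, y in data:
        p = predict_probs(x)
        for c in range(CLASSES):
            grad = p[c] - (1.0 if c == y else 0.0)
            for i in range(N_FEATURES):
                W[c][i] -= LR * grad * x[i]
            b[c] -= LR * grad

correct = sum(1 for x, y in data
              if max(range(CLASSES), key=lambda c: predict_probs(x)[c]) == y)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the toy example is that nobody told the model what the groups are: it recovers them from correlations alone, which is exactly why such labels can leak without anyone intending it.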

The question is as much technical as ethical: will we one day consult a doctor made of bits rather than flesh and blood, one who tells us how to get better? Probably yes. But the critical issues, if we want to call them that, go even further.

[Image caption: AI models could accurately predict an individual’s race in different types of X-ray images]

(Im)partial clusters in AI

Once an artificial intelligence platform has created race-based clusters of patients of the same age group, gender, and social background, what will stop insurance companies from making ad hoc offers, improved or worsened as the case may be? In short: how will social inequality be avoided when it comes to healthcare, medical care, and welfare? “The ability of AI to predict health variables for a race,” James Zou, Judy Wawira Gichoya, Daniel E. Ho and Ziad Obermeyer explain in Science, “from medical images can be used to create disparities in the health care system.”

Bias, then, is just around the corner. Without looking too far back, think of what happened with Covid. MRIs of patients hospitalized for bilateral pneumonia showed clear signs of the disease in their lungs. If one day we fed all these findings to an AI, it could create categories of individuals at greater risk of Covid pneumonia than others.

But taking which other variables into account? Without seemingly minor factors such as smoking, medical history, family history, and other decisive metrics, there would be no “intelligence” worthy of the name, only a heap of neatly catalogued information. Predicting something on the basis of race does not use the data holistically, with one eye on the particular and one on the context in which it is embedded; it is merely a “game” of systematic aggregation, with little value. In the United States, where the application of technology in medicine moves much faster than it does here, something is stirring, albeit slowly, with the intention of better regulating the implications of AI.

U.S. moves

Civil rights groups have convinced the White House to update the 1997 race reporting standard in favour of disaggregating data by subgroup (e.g., Vietnamese Americans rather than Asian Americans as a whole). It may take years for this change to show up in health data, and in the meantime AI imputations could widen disparities between granular and more closely related subgroups. To date, racial variables are not a determining element in medicine. But they could become one, especially as tools such as generative AI become more widely available to the public. Understanding which features AI mechanistically uses to predict race will therefore be essential to making the data and algorithms unbiased. It will also take human effort to reduce bias in how AI “uses” the data before it in correlation to the race of the patient in front of it, a potentially harder task than reducing bias in the algorithms themselves.

Antonino Caffo has been working in journalism, particularly technology journalism, for fifteen years. He is interested in IT security as well as consumer electronics, and writes for the most important Italian generalist and trade publications. You can sometimes see him on television explaining how technology works, which is not as trivial as it seems.