Millions of black people affected by racial bias in health-care algorithms

Andy

An algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against black people, a sweeping analysis has found.

The study, published in Science on 24 October, concluded that the algorithm was less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs. Hospitals and insurers use the algorithm and others like it to help manage care for about 200 million people in the United States each year.

This type of study is rare, because researchers often cannot gain access to proprietary algorithms and the reams of sensitive health data needed to fully test them, says Milena Gianfrancesco, an epidemiologist at the University of California, San Francisco, who has studied sources of bias in electronic medical records. But smaller studies and anecdotal reports have documented unfair and biased decision-making by algorithms used in everything from criminal justice to education and health care.
https://www.nature.com/articles/d41586-019-03228-6
 
I assume such algorithms will also disadvantage women, given the historic bias of describing medical conditions from a male perspective, which has left a number of conditions underdiagnosed and/or misdiagnosed in women. Any algorithm built on such data would presumably also disadvantage women.

Most of us would also assume that a number of conditions that predominantly affect women are often misdiagnosed or inappropriately treated. It would be interesting to see whether such algorithms also disadvantage women in this situation, or whether they are actually better at being gender neutral.
 
Looks like it was to do with cost: https://www.google.com/amp/s/www.ws...racial-bias-in-hospital-algorithm-11571941096
The reason? The algorithm used cost to rank patients, and researchers found health-care spending for black patients was less than for white patients with similar medical conditions.

“What the algorithm is doing is letting healthier white patients cut in line ahead of sicker black patients,” said Dr. Ziad Obermeyer, the study’s lead author and an acting associate professor of health policy at the University of California, Berkeley.
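
To make the mechanism concrete, here is a toy simulation of my own (not the study's code and not the actual commercial algorithm): two groups are made equally sick by construction, but one group's costs are set roughly 30% lower at the same level of illness, standing in for unequal access to care. A model trained to predict cost then refers fewer people from that group, and the ones it does refer have to be sicker to make the cut. All the numbers (the 30% gap, the 3% referral cutoff) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with an identical distribution of true illness (toy assumption).
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
illness = rng.gamma(2.0, 1.0, n)     # "true" illness burden

# Spending is driven by illness, but group B spends about 30% less at the
# same illness level: a stand-in for unequal access to care.
access = np.where(group == 1, 0.7, 1.0)
past_cost = illness * access * 1000 + rng.normal(0, 100, n)
future_cost = illness * access * 1000 + rng.normal(0, 100, n)

# The "risk score" is simply predicted future cost, fitted from past cost.
slope, intercept = np.polyfit(past_cost, future_cost, 1)
risk_score = slope * past_cost + intercept

# Refer the top 3% of scores to the care-management programme.
referred = risk_score >= np.quantile(risk_score, 0.97)

for g, name in [(0, "group A"), (1, "group B")]:
    sel = group == g
    print(name,
          "| share referred:", round(referred[sel].mean(), 3),
          "| mean illness of those referred:",
          round(illness[referred & sel].mean(), 2))
```

In this toy setup group B ends up both under-referred and sicker at the point of referral, which is the same shape of result the paper describes, just in miniature.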
 
The article states that lower use of the health care system results in Black patients being evaluated as less sick, rather than reflecting their actual, higher level of illness. There is a greater historical reluctance among Black people to turn to the health care system because of past and current poor treatment.

The algorithm assumes that those who access the system less are not as sick. This results in an inaccurate evaluation of their health care needs. This is also one of a number of reasons that Black people are disproportionately undiagnosed with ME.
 
But in a case like this, where one group in particular is disadvantaged by an algorithm, it seems reasonable to look at that particular case rather than jumping to generalizations. It may well be that this algorithm disadvantages other groups, but we don't know that. This study found millions of Black people affected by this algorithm. It's worthwhile to look at the repercussions of this particular case.
 
This is the study on which the article is based:
Dissecting racial bias in an algorithm used to manage the health of populations

Abstract
Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs.

We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses.

Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%.

The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise.

We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
 
This post replies to a now deleted post.

A limited viewpoint is undoubtedly the problem in a case of bias like this; I think a lot of bias and prejudice really comes down to a limited viewpoint rather than active hate or dislike.

Raving fascists aside, most everyday bias is just because people don't know or 'get' what another person's life is like.

Take ME as an example. A big issue with clinical prejudice is clinicians not knowing (m)any patients with the disease.

Once clinicians know someone with the disease personally, they tend to become more open-minded and curious, and ultimately more accepting.

If they don't have that knowledge, they judge it through their own lens: 'If I felt that way, I'd just push through.' 'Everyone's tired. Why are these patients so soft?' 'If they're really this ill, why can't I see it or measure it?' Etc, etc.

It never occurs to them that they're only seeing a small part of the picture because they've never been made to step back and see the whole thing.
 
Glad to see they’re writing about it. Coding may be as powerful as law (if we’re not careful in how it is done and audited) in forming society and the world we live in.

The point is that this is often not hand-written code but machine learning, where the bias results from the choice of training samples and can often be implicit.

With bias in ML algorithms, people often point to two concerns:

1) The data set simply doesn't include samples from particular minority groups. There are a number of examples of facial recognition systems, or even automated taps, that work less well for black people because the developers didn't include data from black people in the training set.

2) Inferred bias. This is where a correlation in the data set, which may itself reflect societal biases, gets learned into the algorithm, and the correlated feature ends up being used to predict. An example given here is crime prediction, where race becomes a 'predicting factor': the outcome may really relate to being disadvantaged, but the easy feature for a classifier to pick up on is race. (A toy sketch of this follows below.)
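
Here is that toy sketch of concern 2, using made-up data and scikit-learn (all the variable names and coefficients are invented for the example). The thing that actually drove the historical outcome ("disadvantage") is never recorded; group membership correlates with it, so the fitted model ends up using group membership as a predictor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)                # sensitive attribute (recorded)
disadvantage = rng.normal(group * 1.0, 1.0)  # correlated with group, NOT recorded
relevant = rng.normal(0.0, 1.0, n)           # a genuinely relevant feature

# Historical outcomes were driven by disadvantage plus the relevant feature.
logit = 1.2 * disadvantage + 0.8 * relevant - 1.0
outcome = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# The model only sees what was recorded: group and the relevant feature.
X = np.column_stack([group, relevant])
model = LogisticRegression().fit(X, outcome)

print("coefficient on group:   ", round(model.coef_[0][0], 2))
print("coefficient on relevant:", round(model.coef_[0][1], 2))
# The clearly non-zero group coefficient is the "inferred bias": the model
# is using group membership as a stand-in for the unobserved real cause.
```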

There are many more issues with the use of ML and bias, and there is an active research field around the ethics of AI and how things like explainability become important tools that should be used before deploying a model.

A classic example was a classifier trained to recognize cats and dogs, where all the cat pictures included grass and the dog pictures didn't. What it actually learned was to tell grass from other backgrounds, rather than cats from dogs. This was down to bad training (and test) data, and it reflects the dangers of ML and the need to be very careful about inbuilt biases.

Maybe the issue is that ML really isn't very good: it can pick up on the wrong features and will only generalize from the examples it has been given. Image recognition systems will, for example, often pick up on texture rather than shape, so if you replace the fur in a cat picture with a texture like rhino skin, an ML algorithm will often see the cat as a rhino even though the shape is very different.

That is without going into the attack space, where someone may be deliberately trying to force a bad decision. My new avatar is an adversarial patch: if you put a small image of it next to an object and get an ML system (or one particular algorithm, at least) to classify the object, it has a high chance of classifying it as an otter.
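
For anyone curious what that kind of attack looks like underneath, here is a much simpler cousin of the patch attack: a gradient-sign perturbation against a made-up linear "classifier", written with numpy only. It has nothing to do with the actual patch or any real vision model; the point is just that a small, targeted nudge to every pixel is usually enough to flip a confident prediction.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1000  # number of "pixels"

# A stand-in image classifier: fixed linear weights plus a sigmoid output.
# Real attacks target deep networks, but the gradient-sign idea is the same.
w = rng.normal(0.0, 1.0, d)
predict = lambda x: 1.0 / (1.0 + np.exp(-(x @ w)))   # P("cat")

# A random "image", flipped if needed so the clean score favours "cat".
x = rng.normal(0.0, 1.0, d)
x = x * np.sign(x @ w)
print("clean P(cat):    ", round(predict(x), 4))

# Nudge every pixel by eps in the direction that most lowers the cat score.
# For a linear model the gradient of the logit with respect to the input
# is just w, so the attack is a single sign step.
eps = 0.1   # small relative to the pixel scale of roughly 1
x_adv = x - eps * np.sign(w)
print("perturbed P(cat):", round(predict(x_adv), 4))
```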
 
Raving fascists aside, most everyday bias is just because people don't know or 'get' what another person's life is like.


The issue from an algorithmic perspective (or really a machine-learning one) is that if you train on data which include these everyday biases, the algorithm picks them up. For example, if an algorithm that sifts for good CVs is trained on what counted as a good CV in previous job selections, it will pick up all the biases of those past selections (i.e. it builds in sex, social class and other biases). There's a toy sketch of this after the link below.


A good short blog here:
https://www.clsa.com/idea/ethical-ai/
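
And the toy CV-sifting sketch, with made-up data (scikit-learn, invented feature names, not any real screening product): the past hiring decisions carry a penalty against one group; the model never sees the sex field directly, but a correlated "proxy" feature lets it reproduce the gap anyway, which a simple shortlisting-rate audit picks up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000

sex = rng.integers(0, 2, n)          # 0 / 1, never given to the model
skill = rng.normal(0.0, 1.0, n)      # the genuinely job-relevant signal
proxy = rng.normal(0.8 * sex, 1.0)   # e.g. a hobby/keyword feature that leaks sex

# Historical decisions: driven by skill, with a penalty applied to one group.
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-(1.5 * skill - 1.0 * sex)))

# Train a screener on those decisions using only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Audit: shortlist the top 20% of scores and compare rates by group.
scores = model.predict_proba(X)[:, 1]
shortlisted = scores >= np.quantile(scores, 0.8)
for s in (0, 1):
    print(f"group {s} shortlisting rate:", round(shortlisted[sex == s].mean(), 3))
```

In this toy setup the gap in shortlisting rates survives even though sex was excluded as an input, which is why "we didn't use that variable" isn't much of a defence.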
 
The issue from an algorithmic perspective (or really a machine-learning one) is that if you train on data which include these everyday biases, the algorithm picks them up.
Absolutely.
 
An example given here is crime prediction, where race becomes a 'predicting factor': the outcome may really relate to being disadvantaged, but the easy feature for a classifier to pick up on is race.

That's probably true, too, but it's also true that there's over-policing of Black people. This actually may begin in school.
As you said, the data points aren't unbiased.
http://www.justicepolicy.org/news/8775m
 
I'll also draw a parallel to M.E.

It's kind of like when we want to discuss M.E. and the C.D.C., and it's, Oh, yeah, great topic: "chronic unwellness." And we're like, Um, no, that's not... (sigh)... you don't get it. :headdesk:

And then we start to worry that M.E. would get erased.

Of course it's true that M.E. and chronic illness generally face some similar issues, but it's also true that M.E. has some unique issues. And sometimes we need a space to talk just about M.E. and not every illness in the world. (Definitely there's a time and place to talk about how it's all connected, though.)
 
That's probably true, too, but it's also true that there's over-policing of Black people. This actually may begin in school.
As you said, the data points aren't unbiased.
http://www.justicepolicy.org/news/8775m

It's an example I have heard others give. I've not gone into the detail; the point for me is about implicit biases in data sets, where an algorithm extracts information to predict with, and that information reflects a correlation in the data (perhaps due to bias) that shouldn't be predictive. @Wilhelmina Jenkins' point about Black people being less likely to turn to the health system, and therefore being treated as not as sick, is another really good example.


It is useful to look at it from an ME perspective, but it is also useful to understand the general points being made in an area like this (bias in AI/ML algorithms), as it is a very active research area. Then it becomes interesting to put the general points back into the ME world.

For example, I worry that if ML algorithms were applied to ME diagnosis, the data may reflect something about the doctor that someone visits (and hence whether the diagnosis is ME, CFS, MUS, BDS, ...) rather than the symptoms and the accuracy of the diagnosis. This in turn could pull out strange correlations as predictors, such as wealth (to see private doctors) or where someone lives, rather than picking up on actual symptoms.

Another concern I have read about is that if algorithms reflect current practice (say with automated diagnosis and treatment recommendations), it could become very hard to update medical knowledge and treatment strategies could become very static. For example, an AI system could learn that on diagnosis doctors recommend CBT/GET for ME patients. Once this process is automated it becomes entrenched and self-reinforcing, and very hard to change when new knowledge should be influencing practice.
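
As a toy illustration of that feedback loop (entirely made-up numbers): suppose the system recommends whatever treatment dominates the historical records, clinicians follow the recommendation 90% of the time, and the results are written back into the records. Even if only 30% of clinicians would independently still choose the old treatment, the recorded share never drops, because the system keeps feeding its own output back in.

```python
import numpy as np

rng = np.random.default_rng(4)

records = [1] * 800 + [0] * 200   # 1 = the entrenched treatment ("A")
follow_rate = 0.9                 # how often clinicians follow the system

for year in range(1, 11):
    recommend_a = np.mean(records) > 0.5          # system mirrors the record
    independent = rng.random(1000) < 0.3          # only 30% still favour A on their own
    follows = rng.random(1000) < follow_rate
    new_cases = np.where(follows, recommend_a, independent).astype(int)
    records += list(new_cases)
    print(f"year {year}: share of records on treatment A =",
          round(float(np.mean(records)), 3))
```

With follow_rate set to 0 the share drifts down towards 30%, so the static behaviour really does come from the loop rather than from the clinicians.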
 
For example, I worry that if ML algorithms were applied to ME diagnosis, the data may reflect something about the doctor that someone visits rather than the symptoms and the accuracy of the diagnosis.
Good points. Thank you.

I agree that this would be very risky for us.
 