It's an example I have heard others give. I haven't gone into detail; the point for me is around implicit biases in data sets, where an algorithm extracts information to use for prediction, and that information reflects a correlation in the data (perhaps due to bias) that shouldn't be predictive.
@Wilhelmina Jenkins' point about black people being less likely to turn to the health system, and therefore being treated as less sick, is another really good example.
It is useful to look at it from an ME perspective, but it is also useful to understand the general points being made in an area like this (bias in AI/ML algorithms), as it is a very active research area. Then it becomes interesting to put the general points back into the ME world.
For example, I worry that if ML algorithms were applied to ME diagnosis, the data may reflect something about the doctor someone visits (and hence the diagnostic label: ME, CFS, MUS, BDS, ...) rather than the symptoms and the accuracy of the diagnosis. This in turn could pull out strange correlations as predictors, such as wealth (the ability to see private doctors) or where someone lives, rather than picking up on actual symptoms.
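To make that concrete, here is a minimal sketch with entirely synthetic data and hypothetical feature names (a "private doctor" flag and a noisy symptom score are my inventions for illustration). The recorded diagnosis in the training data is constructed to depend on which doctor was seen, not just on being ill, and the fitted model duly leans on the proxy feature:

```
# Minimal sketch: a proxy feature can dominate a diagnostic model
# when the training *labels* reflect who got diagnosed, not who is ill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# True illness status (what we would like the model to detect).
ill = rng.integers(0, 2, size=n)

# A genuine but noisy symptom score: only weakly separates ill from well.
symptom_score = ill + rng.normal(0, 2.0, size=n)

# Proxy feature: seeing a private doctor. Suppose the recorded diagnosis
# mostly requires both being ill AND seeing the "right" doctor.
private_doctor = rng.integers(0, 2, size=n)
recorded_diagnosis = ((ill & private_doctor) | (rng.random(n) < 0.05)).astype(int)

X = np.column_stack([symptom_score, private_doctor])
model = LogisticRegression().fit(X, recorded_diagnosis)

print("coefficient on symptom score: %.2f" % model.coef_[0][0])
print("coefficient on private doctor: %.2f" % model.coef_[0][1])
# The 'private doctor' coefficient comes out much larger: the model has
# learned the referral pattern, not the illness.
```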
Another concern I have read about is that if algorithms reflect current practices (say, with automated diagnosis and treatment recommendations), it could become very hard to update medical knowledge, and treatment strategies could become very static. For example, an AI system could learn that on diagnosis doctors recommend CBT/GET for ME patients. If this process becomes automated it becomes entrenched and self-reinforcing, and very hard to change even as new knowledge should be influencing practice.
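Here is a toy simulation of that feedback loop, with all the numbers invented: each year the system is retrained on records that its own previous recommendations generated, with only a small fraction of records coming from doctors exercising independent judgement as the evidence shifts:

```
# Toy feedback-loop simulation (all figures invented for illustration).
import numpy as np

rng = np.random.default_rng(1)

# Start: 90% of historical ME records recommend CBT/GET.
share_cbt_get = 0.90

for year in range(1, 6):
    # Retrain: the system learns the recommendation rate from the records.
    records = rng.random(10_000) < share_cbt_get  # True = CBT/GET recorded
    learned_rate = records.mean()

    # Deploy: 95% of new records follow the automated recommendation; only
    # 5% come from independent doctors, who (say) now recommend CBT/GET
    # half the time as the evidence changes.
    share_cbt_get = 0.95 * learned_rate + 0.05 * 0.5

    print(f"year {year}: CBT/GET recommended in {share_cbt_get:.1%} of records")
# The rate drifts only slowly toward the new practice (still ~80% after
# five years), because the system's own outputs dominate its training data.
```

The point of the toy model is just that once the loop closes, the speed at which practice can change is capped by the small share of decisions made outside the automated system.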