BBC2, 9pm 1 Nov: Diagnosis on Demand? The Computer Will See You Now

Sasha

Senior Member (Voting Rights)
Sounds very interesting:

BBC said:
Could a machine replace your doctor? Dr Hannah Fry explores the incredible ways AI is revolutionising healthcare - and what this means for all of us. This film chronicles the inside story of the AI health revolution, as one company, Babylon Health, prepare for a man vs machine showdown. Can Babylon succeed in their quest to prove their AI can outperform human doctors at safe triage and accurate diagnosis?...

Read the whole thing at:

https://www.bbc.co.uk/programmes/b0bqjq0q

I wonder if part of the reason that PWME have difficulty getting diagnosed, let alone treated, is that doctors don't have the time or the tests to deal with us. Perhaps AI plus better testing tech will be part of the answer in the future...
 
Exciting stuff, but I have to say I'd be a little concerned about who owns the AI and how much control they will have over it. I mean, will the AI be trained to best serve the needs of patients, or to best serve the balance sheet of the medical industry?

Any halfway competent programmer could program for both situations, and once all the testing was done they could just change a setting and everything could be based on the bottom line and profits.
 
In the future, hopefully that will help, provided the right people are directing things, as has been said.

At the moment, it’s being found that AI magnifies human bias (even unintentional bias), for example via the data sets used for training.


This phenomenon was highlighted in a paper called “Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints”, which won the Best Paper award at EMNLP this summer. The paper looks at gender bias in datasets and also at the image classification and Visual Semantic Role Labelling algorithms trained on them. It found that in the imSitu dataset, images of somebody cooking showed a woman 66% of the time. However, once the algorithm was trained, it amplified that bias, predicting 84% of the people cooking to be female.
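To make that amplification concrete, here's a rough sketch of how the gap between training-data skew and prediction skew could be measured. The data below is made up purely for illustration; only the 66%/84% figures come from the quoted description of the imSitu results, and this is not the metric actually used in the paper.

```python
# Illustrative sketch: compare the gender skew in training labels for an
# activity (e.g. "cooking") with the skew in a model's predictions.

def female_ratio(labels):
    """Fraction of examples labelled 'female'."""
    return sum(1 for g in labels if g == "female") / len(labels)

# Hypothetical training labels: roughly two-thirds female, as described for imSitu.
training_labels = ["female"] * 66 + ["male"] * 34

# Hypothetical model predictions on a test set: the skew has grown.
predicted_labels = ["female"] * 84 + ["male"] * 16

train_bias = female_ratio(training_labels)    # 0.66
pred_bias = female_ratio(predicted_labels)    # 0.84
amplification = pred_bias - train_bias        # +0.18

print(f"training bias:   {train_bias:.2f}")
print(f"prediction bias: {pred_bias:.2f}")
print(f"amplification:   {amplification:+.2f}")
```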

https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/
A system called COMPAS, made by a company called Northpointe, offers to predict defendants’ likelihood of reoffending, and is used by some judges to determine whether an inmate is granted parole. The workings of COMPAS are kept secret, but an investigation by ProPublica found evidence that the model may be biased against minorities.

https://www.ibm.com/blogs/policy/bias-in-ai/
Bad data can contain implicit racial, gender, or ideological biases. It can be poorly researched, with vague and unsourced origins. For some, end results can be catastrophic: Qualified candidates can be disregarded for employment, while others can be subjected to unfair treatment in areas such as education or financial lending. In other words, that age-old saying, “garbage in, garbage out” still applies to data-driven AI systems.

https://www.theguardian.com/technol...bit-racist-and-sexist-biases-research-reveals
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

But it looks like AI folk are trying to fix this.

https://www.pbs.org/wgbh/nova/article/ai-bias/
Exposing AI’s biases starts by scrapping the notion that machines are inherently objective, says Cathy O’Neil, a data scientist...
...
If, for example, a company wants to automate its hiring process, it might use an algorithm that’s taught to seek out candidates with similar profiles to successful employees—people who have stayed with the company for several years and have received multiple promotions. Both are reasonable and seemingly objective parameters, but if the company has a history of hiring and promoting men over women or white candidates over people of color, an algorithm trained on that data will favor resumes that resemble those of white men.
...
Credit scores—which are affected by factors related to poverty but often not related to driving—factored into these algorithms so heavily that perfect drivers with low credit scores often paid substantially more [for auto insurance] than terrible drivers with high scores
...
When practices like this are automated, it can create negative feedback loops that are hard to break, O’Neil says.
...
That’s one reason why some are encouraging researchers and programmers to consider bias as they’re building tools.
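In practice, one very basic "consider bias as you build" check is to compare how often a model gives the favourable outcome (say, an interview invitation) to each group it touches. Here's a purely illustrative sketch with made-up data and hypothetical group names; real fairness audits use richer metrics than this.

```python
from collections import defaultdict

def selection_rates(records):
    """records: list of (group, predicted_favourable) pairs -> rate per group."""
    counts, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        counts[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / counts[g] for g in counts}

# Hypothetical model outputs on a held-out set.
predictions = ([("group_a", True)] * 40 + [("group_a", False)] * 60
               + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(predictions)
gap = max(rates.values()) - min(rates.values())
print(rates)                             # {'group_a': 0.4, 'group_b': 0.2}
print(f"selection-rate gap: {gap:.2f}")  # a large gap is a red flag worth investigating
```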

It doesn’t look, from this article or anything else I’ve read, like they’re doing much to include any marginalized populations, however (chronically ill, disabled, or otherwise).
 
AI is not the panacea most hope it will be.
It's a computer that is designed to compare A to B and spit out an answer. It's only as good as the programmer who designed it, and it has no common sense or intelligence. That said, I know many doctors who have no sense either, but that doesn't make a computer a diagnosing deity.
 
As a patient, understanding how the AI functions, i.e. how it processes its inputs to arrive at its outputs, will be important to avoid mistreatment, as it'll be programmed to assess whether something is in the patient's head despite the claims of the patient. All the failures of modern medicine will be programmed into it, so it's really not that dissimilar to the current situation with idiot doctors. It might actually be worse, though, because there will be fewer tell-tale signs, such as facial expressions and mannerisms, for patients to judge how the AI is processing the information you give it.
 