ME/CFS Skeptic
Senior Member (Voting Rights)
I thought it might be useful to create a thread to organize news on what Artificial Intelligence is able to achieve. Its applications might be useful for ME/CFS advocacy and research.
Terminally ill man 'cured' of immune illness by AI technology
A terminally ill patient about to enter a hospice is in remission after AI found him a life-saving drug.
Many people with serious illnesses, from cancer to heart failure, survive for years on various treatments before all available drugs stop working and they face a death sentence.
Artificial intelligence can help by rapidly searching through thousands of existing drugs for unexpected ones which might work.
The New England Journal of Medicine has now reported the case of a man with a rare immune condition whose life was saved by the technology.
The patient, who is remaining anonymous, has idiopathic multicentric Castleman's disease (iMCD), which has an especially poor survival rate and few treatment options.
But an AI tool searched through 4,000 existing medications and discovered that adalimumab - a monoclonal antibody used for conditions ranging from arthritis to Crohn's disease - could work.
Dr David Fajgenbaum, senior author of the published study on the breakthrough, from the University of Pennsylvania, said: 'The patient in this study was entering hospice care, but now he is almost two years into remission.
'This is remarkable not just for this patient and iMCD, but for the implications it has for the use of machine learning to find treatments for even more conditions.'
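The article doesn't describe the method itself, but repurposing tools of this kind typically score every existing drug against a representation of the disease and surface the best matches. A toy sketch of that idea, with entirely hypothetical drug names, vectors, and scores (real systems use learned embeddings over far richer data):

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two effect profiles point in the
    # same direction, independent of their magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each vector stands in for a learned representation of a drug's effects.
# All values here are made up purely for illustration.
drug_profiles = {
    "drug_a": [0.9, 0.1, 0.0],
    "drug_b": [0.1, 0.8, 0.2],
    "drug_c": [0.0, 0.2, 0.9],
}
disease_signature = [0.85, 0.15, 0.05]  # hypothetical target profile

# Rank all candidates by similarity to the disease signature,
# best match first.
ranked = sorted(
    drug_profiles,
    key=lambda d: cosine(drug_profiles[d], disease_signature),
    reverse=True,
)
```

The appeal of this approach is scale: scoring 4,000 candidates this way is trivial for a machine but impossible for a clinician to do exhaustively by hand.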
This demonstrates the one thing I think AI is good for, big improvements in efficiency at dealing with large datasets.
But I seriously doubt that it is going to deliver amazing new conceptual insights, of itself. Seen no evidence for that thus far.
I don’t have the capacity to explain fully now, but the field of Explainable AI (XAI) would probably be the best bet when it comes to insight. In short, XAI tries to find out what the model has learned or how it made its choice.
In the iMCD example, you would use different techniques to try and figure out why it landed at adalimumab.
I don’t think we’re able to ask the model directly yet, because you usually just end up with a model that’s good at rationalising after the fact.
The major challenge is that most of the models in use today are subsymbolic, meaning that everything in the model is represented as millions, billions or trillions of numerical values (often between 0 and 1). So we don't get anywhere by just looking under the hood, so to speak.
IBM has an intro to XAI here: https://www.ibm.com/think/topics/explainable-ai
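To make the XAI idea concrete, here is a minimal sketch of one family of techniques: treat the model as a black box, nudge each input, and measure how much the output moves. The "model" below is a deliberately opaque stand-in with made-up weights; real XAI tools apply the same logic (and much more sophisticated variants) to trained networks:

```python
def opaque_model(features):
    # Stand-in for a subsymbolic model: internally just numbers,
    # nothing human-readable. Weights are hypothetical.
    weights = [0.9, 0.05, 0.05]
    return sum(w * x for w, x in zip(weights, features))

def sensitivity(model, features, eps=1e-3):
    """Score each input feature by how much a small nudge to it
    changes the model's output (a basic perturbation analysis)."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += eps
        scores.append(abs(model(perturbed) - base) / eps)
    return scores

scores = sensitivity(opaque_model, [1.0, 1.0, 1.0])
most_important = scores.index(max(scores))  # feature driving the output
```

Note that this explains which inputs mattered, not *why* they mattered - which is roughly where the harder open problems in XAI begin.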
There are some quite serious people like Wolfram who think it's essentially impossible to explain and understand AIs' 'reasoning.'
We don't know if it's impossible or not. So I tend to not take extreme opinions very seriously.
Sure. But that's an odd standard, given that we don't know what leads humans to make decisions either, and we can rarely explain it.
@rvallee the problem isn't the level of reasoning, but being able to understand why an AI model gives any given output.
This is important to avoid e.g. discrimination.
Sure. But that's an odd standard, given that we don't know what leads humans to make decisions either, and we can rarely explain it.
I believe you've misunderstood what the reasoning is going to be used for.