Their reasoning:
Plitidepsin, a marine-derived cyclic depsipeptide that inhibits SARS-CoV-2 replication at nanomolar concentrations by targeting the host protein eukaryotic translation elongation factor 1A, could be a suitable candidate treatment for "Long COVID" because of a triple mechanism of...
Sweden and Norway!
TL;DR: Standard procedure is to have two people check the images. They used AI to classify images as low or high risk. Low risk was checked once manually, high risk was checked twice.
This resulted in overall better performance and 44 % fewer manual checks in total in the AI+Human...
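A minimal sketch of that triage workflow as I understand it from the summary above. All names, the stub model/reviewer functions, and the 0.5 threshold are my own assumptions for illustration, not details from the paper:

```python
def ai_risk_score(image: str) -> float:
    """Stand-in for the AI classifier; returns a risk score in [0, 1]."""
    return 0.9 if "suspicious" in image else 0.1

def manual_check(image: str, reviewer: str) -> bool:
    """Stand-in for a human reviewer; True means the reviewer flags the image."""
    return "suspicious" in image

def triage(image: str, threshold: float = 0.5):
    """Low-risk images get one manual check, high-risk images get two."""
    if ai_risk_score(image) < threshold:
        checks = [manual_check(image, "reviewer A")]            # single read
    else:
        checks = [manual_check(image, "reviewer A"),
                  manual_check(image, "reviewer B")]            # double read
    return any(checks), len(checks)

print(triage("normal scan"))       # -> (False, 1): one manual check
print(triage("suspicious scan"))   # -> (True, 2): two manual checks
```

The saving in manual checks comes from the low-risk branch only needing one read instead of the standard two.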
That’s very good! My cardiologist said I don’t need more treatments because my BP is fine. I tried to tell him about reduced blood flow to the brain and he wouldn’t listen. Said the body adjusted that on its own.
He’s a nice guy, so he might be convinced to check CBF if they have the equipment.
The apparent disconnection between clinical symptoms and biological aberrations is an intriguing observation that gives further merit to studies suggesting mental processes as the main determinant of symptom persistence after COVID-19 (50), and deserves further investigations.
Wyller comes...
Not in that sense.
AI needs input that matches the training data. If it hasn’t trained on something, it can’t know how to deal with it (it will guess (badly) in some cases). This is why AI often fails at very niche edge cases - they weren’t covered in its training.
This is because AI can’t create...
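A toy illustration of the point above (purely my own example, not from any study): a 1-nearest-neighbour "model" has no notion of "I don't know" - it just guesses from the closest thing it has already seen, however far away the new input is.

```python
# Training examples: small numbers with labels.
training_data = [(1, "cat"), (2, "cat"), (9, "dog"), (10, "dog")]

def predict(x: float) -> str:
    # Pick the label of the nearest training example, no matter the distance.
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(1.5))    # -> "cat"  (close to what it was trained on)
print(predict(5000))   # -> "dog"  (a confident guess on something it never saw)
```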
The only benefit of this study is that it talks about LC and FM at the same time as something more acceptable like MS.
The BPS lobby will probably spin this as «the mind can cause more fatigue than MS!»
I don’t have an opinion on that. I’m just for reducing any source of bias and using the best tools (and not the hyped ones) to solve any given problem.
That’s a trait of the problem it’s being applied to, not the type of algorithm.
But I agree that minimizing human bias is the desired path in most cases.
Human judgement is 100 % required to create an AI model. The human provides the training data, and decides which meaning(s) to assign to the numerical value(s) that the model outputs. The human also decides how and where to implement the model. Human judgement isn’t going anywhere anytime soon.
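A tiny sketch of what I mean by assigning meaning to the numbers: the labels, the threshold, and the downstream action below are all human choices (and all hypothetical, picked only for this example).

```python
model_output = 0.73          # a raw score produced by some trained model

labels = {0: "no follow-up", 1: "refer to specialist"}   # human-assigned meaning
decision_threshold = 0.6                                  # human-chosen cut-off

decision = labels[int(model_output >= decision_threshold)]
print(decision)               # -> "refer to specialist"
```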
ME/CFS is more common in women, so it isn’t completely unreasonable to expect that some of the unique characteristics of women’s biology influence the symptoms.
Idk how to (dis)prove causality, though. Any ideas?
Should we somehow archive the patient info page? Idk how to do that, and I don’t have the capacity.
It might be needed in the future; they are not above deleting information when it gets called out.
One way to reduce stigma is to educate the general population. I wonder what they would have found if they had asked Norwegians about basic facts about DT1&2.
PAID-20 only has one question about interactions with others, so it appears that most of the «blame» is put on the patient:
Uncomfortable...