
Rethinking The Purpose of Diagnostics

Discussion in 'Diagnostic Criteria and Naming Discussions' started by mat, Dec 7, 2020.

  1. mat

    mat Senior Member (Voting Rights)

    Messages:
    135
    Initially, I intended to write this as a reply in another thread, but I think it deserves its own thread now. These thoughts have followed me for quite some time, and I'd be interested to read your views on them.

    I often bring this up because I can't agree with this kind of logic, as I have described before in the context of vague antibody tests used as controls, or in the context of psychosomatic illness/depression. Diagnose someone with a psychosomatic disorder or depression, and the therapy will most likely lead to improvement simply due to the general nature of psychotherapy, just as you could diagnose someone with "pain syndrome" and give them painkillers. If a practising doctor builds their diagnostic experience on verification by therapeutic improvement, they will eventually become more and more convinced that these kinds of diagnoses are accurate and true. For the same reason, homeopaths are so convinced of their globules. Every person has some kind of psychosomatic response.

    In computer science and maths, this is what we would describe as proof by induction. You can invent any kind of partial theory/diagnosis/mechanism/logic, and it will work in an inductive proof if given the right assumptions (making it complete). If physiological therapy had never been invented and psychological therapy were the only thing we knew and could imagine, we could define every disease and therapy purely by psychological discriminators and would see a certain degree of success within that realm. We could live in such a reality, and medicine would laugh at you if you so much as mentioned something like the physical body (physis), just as modern medicine would have been laughed at in the medieval era (and potentially considered witchcraft).

    Since the Enlightenment, we have fortunately developed empirical thinking. Yet medical thinking is still as imperative as it was in the medieval era, in that diseases are still classified very broadly and symptoms-first. The medieval era did not really lack sophisticated observation either. Beyond empirical thinking, what we still lack today, in my view, is functional thinking. Therapy is not primarily a consequence of a diagnosis and of symptoms; diagnosis is just a necessary step so that therapy can happen. Symptom discriminators can be part of a diagnosis, but they do not have to be. We all have one disease, and that is aging. Every healthy person will eventually develop a disease, just as an HIV-positive person can be perfectly healthy for many years until the virus becomes active.

    For that reason, the logic of symptom -> diagnosis -> therapy is outdated and belongs to history, IMHO. More and more smart home diagnostic technologies are available, so the economic argument for that way of thinking isn't very relevant anymore. Soon you will be able to monitor most of your essential biomarkers at home every day. Blood no longer even has to be drawn by physicians, thanks to blood spot tests. None of these technologies is very precise yet, but their point isn't precision; it is to give you indications and likelihoods so that your response can improve your everyday lifestyle. As another example, the potential in cancer prevention no longer derives from the precision of the tests but from the interval of testing.

    This means that therapy shouldn't be considered a consequence anymore. We all age and carry risks for certain diseases, and a variety of diagnostic measures can assess those risks. The greater the diagnostic coverage and the more often it is done, the more precisely your status can be determined. But you will eventually get sick, and the risk can already be estimated at birth. If the most likely cause of death can already be determined, what is the point of the old logic? We all do therapy every day just by the choices we make in our lifestyle, diet, supplements and medication. Some people might associate this with orthomolecular medicine, but it is much more than that.

    If disease definitions are only a means to the ongoing set of therapeutic measures (i.e., lifestyle, diet, etc.), then their definition methodology has to change. Diseases are then best defined by projection from the complete set of available therapies. If you are choosing a therapy aimed at a pathophysiological mechanism, then the disease discrimination will perform best if it is defined by the participating pathophysiological markers. You then no longer have "rheumatism", because rheumatism tells you neither the mechanism nor much about the choice of therapy. Today we know mechanisms such as the HLA-B27 genotypes and other cytokine and chemokine system dysfunctions, and these dysfunctions are shared among many autoimmune diseases. The type of dysfunction is the best discriminator for the choice of therapy. There is no reason to call something rheumatism anymore, or Hashimoto's, or any of the many other diseases that share so many similarities and overlaps. Purely symptomatic diagnosis then remains relevant only in the context of symptomatic therapy: you would still have arthritis/"joint pain" or whatever term suits a selectable symptomatic therapy.

    If you move to this kind of logic, there is another potential benefit. At the moment, disease classification is mostly black-and-white thinking: either someone has a disease or not. In some parts of medicine we fortunately see progress, for example in that type 2 diabetes now has impaired glucose tolerance recognised as its precursor (with widespread awareness), which leads to therapies such as metformin prescription and a change of diet. However, this is still black, grey and white. The field of genetics is the most advanced within this realm. Despite the categorical nature of DNA, assessment is done statistically. Mutations never guarantee a disease, even if you have an identical twin who developed the disease after testing positive for a rare variant; one random mutation can give someone the chance of not getting it. Nor can risks be assessed very well as likelihoods, because there is no perfect comparison group, as just described. Your risk can only be determined by comparing you to the rest of the population, or to an ethnicity, or to other subgroups such as therapy subgroups.

    If therapy is understood as this kind of ever-present, all-encompassing process, categorical thinking becomes obsolete. A set of ongoing diagnostics will regularly inform you about your risks and the numeric degree of every disease that may eventually become more or less symptomatically apparent. The same ongoing diagnostics can inform the persistent therapeutic choices you make every day. This projection is best performed by machine learning algorithms, not by medical doctors' interpretation and personal experience; doctors can be of much more use in research, so that machine learning gets more input and becomes more precise.
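
    To make the idea of "projection" concrete, here is a minimal sketch in Python of mapping a panel of biomarkers to a continuous risk score instead of a categorical diagnosis. Everything in it is synthetic and the marker roles are hypothetical; it only illustrates the shape of the approach, not a validated model.

        # Minimal sketch: map a panel of routine biomarkers to a continuous
        # risk score instead of a categorical diagnosis. All data is synthetic
        # and the feature columns are placeholders, not validated predictors.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        X = rng.normal(size=(n, 4))            # hypothetical daily home measurements
        # Synthetic "ground truth" risk: a noisy, nonlinear mix of the markers.
        risk = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)

        X_train, X_test, y_train, y_test = train_test_split(X, risk, random_state=0)
        model = GradientBoostingRegressor().fit(X_train, y_train)
        print("held-out R^2:", round(model.score(X_test, y_test), 2))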

    This will all take some time until the technologies are mature enough for such a lifestyle to become a reality, but you have to start with the paradigm shift. Without it, this won't ever become possible, because it requires starting from zero, regaining all the experience, and doing all the studies within the new paradigm. Currently, the success of a therapeutic choice is measured within the imperative paradigm, usually by double-blind, controlled comparison (i.e., categorical differentiation). This has to be repeated in numerical terms to have any meaning within the new paradigm. An intermediary generation of studies could report in terms of both paradigms, and the first set of studies could also compare the success rates of the two. Statistically, there is only one possible winner, though. Early stages of the common pathological diseases could be detected far earlier and eliminated. Certainly, this will happen naturally once smart diagnostics and machine learning are omnipresent, but it could happen far earlier if scientific motivation is put behind it.
     
    Last edited: Dec 8, 2020
  2. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,827
    Location:
    Australia
    I agree that medical diagnostics are going to undergo a revolution this century, which may well put many medical practitioners out of business.

    But a key point when it comes to scientific reasoning is that specificity matters. This is the basis of the "evidence-based" medicine paradigm, which relies on specific diagnoses and highly controlled trials testing treatments for those specific diagnoses. Things are of course blurred where that level of specificity is not achievable, particularly with symptom-based diagnoses.
    The borders are also necessarily fuzzy (subject to statistical uncertainty), and unless a patient has an illness that is having a significant impact on their life, it is often better not to intervene with any sort of surgery or pharmacological therapy, due to the excess risk of iatrogenic harm.

    Hence the idea that medical doctors will intervene early, before a patient develops a serious disease (and hence a diagnosis), is still speculative. Current attempts, for example the use of statins, are still being debated in the medical community.

    The exception is usually encouraging "healthy" behaviours (diet and exercise) and discouraging "unhealthy" behaviours. But I think the impact that this has on disease is overstated. Many diseases occur for stochastic reasons (bad luck) that have little to do with these behaviours.

    Social factors also tend to be overlooked - the risk of injury or death from automobile collisions is something many people in our society prefer to pretend doesn't exist, because our societies are built around this technology. Our society has also only recently gained some insight into the impact of international travel on circulating infectious diseases that kill many thousands every year. Australia has had about a thousand fewer influenza deaths this year due to COVID quarantine policies (note, I am not talking about lockdowns!). But will the lesson be learnt such that there is long-term change? I think most people don't want to give up the idea of international travel without enforced quarantine.

    To address the first point that you brought up, the rigour with which outcome measures are collected matters. Unfounded assumptions are often made about efficacy when the response was simply a result of natural healing that would have occurred anyway, or of response biases, namely that the individual falsely reports the level of improvement due to a wide variety of cognitive and social biases. If a fake treatment is offered (placebo), then people often mistakenly conclude that there is a "powerful placebo effect", when the truth is often that symptom reporting in the context of a medical intervention is unreliable (hence the need for genuinely blinded control group trials).

    Blood spot tests cannot be a panacea: small quantities of blood often cannot provide sufficient sensitivity for many types of diagnostic test, leading to unreliability. Such tests cannot be made reliable, no matter how advanced the technology is.

    You talk about "HLA-B27 genotypes and other cytokine and chemokine system dysfunctions". But those HLA genotypes don't predispose to any specific disease. As you say, this is simply a risk factor; some people with those SNPs will not get ill at all. Secondly, there are no specific cytokine/chemokine system dysfunctions common to autoimmune illnesses. In fact, cytokine testing is not a suitable biomarker/diagnostic tool for any autoimmune illness, except the so-called cytokine release syndrome. Notably, those patients tend to be monitored already, as they tend to be hospitalised with severe infections or are receiving experimental immunotherapies against certain types of cancer.
     
  3. mat

    mat Senior Member (Voting Rights)

    Messages:
    135
    @Snow Leopard I appreciate your comment, and I agree with most of it. Allow me to add some points, though.

    If there were sufficient practitioners at the moment, I would agree. But at least in Germany there is a shortage of medical graduates due to absurd enrollment requirements. There will still be demand unless the structure of medicine changes as well.

    Absolutely! Maybe robotic surgery or gene therapy will become low-risk one day, but that is speculative.

    I'm aware of the discord over statins, and I think it's a good example of how significant the bias in self-reported symptoms can be. Speaking of medication, metformin might be the better example; it seems promising as a prophylactic medication if certain genotypes are excluded (e.g. G6PD deficiency).

    That seems like speculation about the future to me. Stochastic causes (i.e., apparently random risk) only look stochastic because of a lack of predictors. In general, given more data, more predictors can be established.

    I second this. Society and social culture in particular have a great influence, even greater than a society's prosperity. This can be seen very well by contrasting Bhutan with Western societies. Moreover, internet culture, with its dissocialization, will leave its marks.

    Mask wearing might actually be the key contributor. It doesn't have to be very difficult; mask wearing is only a cultural obstacle, IMHO.

    I experienced this personally when I first started using supplements. In the first weeks, my perception would shift only towards the positive, because I'm too optimistic. But in the long term, after periods of taking and not taking them, I have a better feel for which supplements actually made a difference and which did not. This might still be a placebo, but at least a placebo that works for me, if we disregard the science backing things like vitamin D. Still, my first response would often be how effective something was, so some doctors definitely didn't get the correct feedback.

    When it comes to cell counting, blood spot tests don't yield sufficient cell numbers; the error would be greater than the reference ranges. I concur, and we all remember Theranos. For certain things they do work, but I've never focused on this field. There is another option that might soon become a possibility, i.e. robotic equipment that draws blood, combined with automated testing. I have followed the improvements in optical sensor manufacturing and machine-learning-optimized optical processing, which enable low-cost home equipment. Laboratories are already streamlining and automating the essential blood markers into unified, fast tests, thereby cutting costs to below 30 € for a complete panel of the common blood markers. Blood drawing, however, is nowhere near automated at the moment, because no one relied on remote diagnosis before COVID-19.

    COVID-19 won't change this, but military and space research usually produces major new concepts. NASA and SpaceX intend to establish a Moon or Mars base before long, and remote diagnostics will become crucial to this because of the comparatively long return times to Earth. I'm certain that NASA will start programs to solve remote diagnosis, and robotic blood drawing might be the next thing to become part of automated test processes. By robotic, I only mean an apparatus that can scan the veins with certain light spectra and insert the cannula. In space there will be additional challenges, such as the prevention of blood clotting. If we get to Mars, things will change on this front, I'm sure. Since NASA traditionally makes its patent licenses affordable, it's only a matter of time until more compact and affordable home devices end up on the market. Elon Musk has already shown his ambition for a new medical era with Neuralink. He focuses on costs, so if SpaceX participates in a medical program, they might well bring their own solution to market, just as they do with Starlink.

    You are assessing this by employing the old paradigm. The new paradigm, as I imagine it, won't rely on single-specific conclusions. Single-specific thinking only serves the purpose of letting humans grasp and imagine the meaning, so that practitioners can interpret it. Overall, though, specificity can be greater in complex models built from multiple, individually non-specific markers. If someone is a carrier of HLA-B27 variants, related symptoms and markers should be observed early on. Moreover, a positive test narrows down the possible choices of therapy. Here, IL-23 might become even more significant, and there are already three IL-23 antibodies on the market. So why wouldn't you first check IL-23 levels to assess whether a rheumatic disease is in fact HLA-B27 mediated, as the genotype would suggest? I think cytokine testing hasn't become a suitable tool yet because it is assessed within an inadequate paradigm. If cytokines are assessed within groups defined by the typical pathological diagnoses, the projection between diagnosis and therapy will be biased by many confounding factors. Machine learning needs to be able to measure confounding factors; then they can be eliminated without any human effort. This is why the markers themselves have much more value to such an algorithm than the human interpretation of a guideline that obscures the markers. Additional interpretation input isn't a bad thing; the algorithm would eventually assess the quality of the interpretation as well.
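
    As a rough sketch of what "measuring and eliminating" a confounder could look like, here is a toy Python example on synthetic data. The choice of "age" as the confounder, the generic "cytokine" marker and all effect sizes are invented purely for illustration.

        # Toy confounder adjustment: "age" drives both a cytokine level and the
        # outcome, so the unadjusted association with the cytokine is inflated.
        # Adding the confounder as a covariate separates the two effects.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 5000
        age = rng.normal(size=n)
        cytokine = 0.8 * age + rng.normal(size=n)            # confounded marker
        outcome = 0.2 * cytokine + 1.0 * age + rng.normal(size=n)

        naive = LinearRegression().fit(cytokine.reshape(-1, 1), outcome)
        adjusted = LinearRegression().fit(np.column_stack([cytokine, age]), outcome)
        print("naive cytokine effect:   ", round(naive.coef_[0], 2))     # inflated, ~0.7
        print("adjusted cytokine effect:", round(adjusted.coef_[0], 2))  # close to the true 0.2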

    My point is that everyone gets ill eventually. HLA-B27 would definitely have some influence on that, just like any other gene, because we age and aging makes us more prone to disease. That influence can be predicted, with a certain margin of error. HLA-B27 is only an example, but its association with the microbiome is known, and that is already something every HLA-B27 carrier could influence through lifestyle choices, compensating for the risk.

    Genotypes are not simply good or bad either. HLA-B27 improves the antiviral response - sometimes too much, leading to reactive autoimmunity. It's a matter of environmental factors, and those can be measured as well. Some people have very resilient liver genes and vulnerable kidney genes; some have it the other way around. Sometimes it depends on other factors. In conventional research, every subgroup would have to be explicitly defined and selected, and the relevant factors identified manually - a lot of work. Neural networks could do all of this implicitly, given big datasets of smart home diagnostics.
     
    Snow Leopard and alktipping like this.
  4. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,827
    Location:
    Australia
    This is what I would be working on if I had never become ill. Microfluidics is of particular interest to me; its strength is that a wide range of chemical processes can be performed with very small sample volumes.

    We studied most of the novel/experimental testing technologies in my undergraduate degree (nanotechnology). The fact is that there are physical limits of detection, which no amount of fancy machine learning and optical processing can overcome. I stated as much in my term paper. A few years later I learned about Theranos and was taken aback. My conclusion was that either they were lying, or they had developed technology well beyond anything other scientists had published. We know in hindsight how that turned out. I would never have personally invested in Theranos without proof of concept.

    I've read hundreds of papers with fancy analytical approaches, including machine learning of the sort you describe, and so far they have failed to provide sufficient predictive value when replication is attempted.

    The more complex the approach, the more easily you can be fooled by randomness. The danger is assuming that particular values are pathological or cause for concern when they are not. The sample sizes required for statistical validity grow rapidly with each additional predictor variable, to the point that impractically large samples are required, at least with current funding and the cost of sampling. So generally speaking, this is a physical limitation that cannot be overcome simply by using more sophisticated analytical approaches.
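
    A quick toy simulation of that point, with pure noise and invented numbers: given a small sample and many candidate predictors, the best-looking predictor can show an impressive correlation that vanishes on replication.

        # 30 "patients", 200 noise "biomarkers", an outcome unrelated to all of them.
        import numpy as np

        rng = np.random.default_rng(42)
        n, p = 30, 200
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)

        corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
        best = int(np.argmax(np.abs(corrs)))
        print("best in-sample correlation:", round(corrs[best], 2))   # often > 0.5

        # Fresh replication sample for the same "best" predictor:
        X2, y2 = rng.normal(size=(n, p)), rng.normal(size=n)
        print("replication correlation:", round(np.corrcoef(X2[:, best], y2)[0, 1], 2))  # ~0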
     
    cfsandmore, mat, Amw66 and 1 other person like this.
  5. mat

    mat Senior Member (Voting Rights)

    Messages:
    135
    I didn't bring this up because I believe that blood spot testing will become viable once optical processing becomes more precise. My point is about the current size of automated laboratories: you can't put them into a space capsule or a smart home at the moment. The equipment has to become more compact, and developments in the manufacturing of optical chips and dedicated "AI chips" (in simple terms) make that more likely to happen soon. There will even be some loss of precision, but this is where AI can become useful: if sensors are cheap enough, multiple sensor inputs can be combined and AI can provide the filtering. Redundancy and distributed processing built from low-cost components are what drive new technologies, especially in space and rocket science.
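
    As a back-of-the-envelope illustration of the redundancy idea (all numbers invented, and assuming independent sensor noise): averaging several cheap, noisy sensors shrinks the random error roughly by the square root of their number.

        import numpy as np

        rng = np.random.default_rng(7)
        true_value = 5.0        # hypothetical analyte level
        sensor_sd = 0.8         # noise of a single cheap sensor
        k = 16                  # number of redundant sensors

        readings = true_value + rng.normal(scale=sensor_sd, size=(10000, k))
        single = readings[:, 0]
        fused = readings.mean(axis=1)
        print("single-sensor error sd:", round(single.std(), 2))   # ~0.8
        print("fused error sd:        ", round(fused.std(), 2))    # ~0.8 / sqrt(16) = 0.2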

    No such attempt could have been made with the data that is currently available; no data is annotated in terms of this new paradigm. Without that annotation, you can only use the terms of the old paradigm as test sets, and doing so is like trying to peel an apple as if it were an orange.

    What you're describing is the common machine learning challenge of under- vs. over-optimization (underfitting vs. overfitting). But optimization is something very different from complexity. Optimization is basically part of the input/preprocessing and training of the model, whereas complexity is innate to the model itself. A simple neural network isn't complex at all; only case-specific adaptations squeeze precision out of it by making certain assumptions about the use case. In general, though, more data means more potential precision. This is why test sets are used to tune case-specific algorithms, and this is how their precision is measured. The only physical limitation is the precision of the input data, but as mentioned before, that can be overcome by distributing the input sensing. If something has any kind of relevance - and in a system as complex as human metabolism, almost everything is connected - then the algorithm can be optimized for it.
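
    As a rough illustration of how a held-out test set measures that precision (synthetic data, arbitrary model choices, no claim about any specific algorithm): an unconstrained model can look perfect on its training data while the test set reveals how much of that is overfitting.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(400, 20))
        y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)                  # unconstrained
        shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)  # regularized

        print("deep tree    train/test:", deep.score(X_tr, y_tr), round(deep.score(X_te, y_te), 2))
        print("shallow tree train/test:", round(shallow.score(X_tr, y_tr), 2), round(shallow.score(X_te, y_te), 2))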

    Maybe I haven't made clear enough why the new paradigm has to start from zero. A current use case would look like this: you have two biomarkers and one diagnosis whose specificity you want to improve. Let's say these are ANA and CRP, and the diagnosis in question is rheumatoid arthritis (RA). This has already been optimized statistically in studies, and given the linearity and simplicity of the input data, there's nothing a neural network could improve; you would indeed end up with overfitting, because you would be using the already-optimized classifier RA as the test set. But the diagnosis RA isn't adapted to patient outcome unless this is explicitly specified in the underlying study models. So if there is a subgroup of patients whose CRP doesn't respond, another test could clarify this question, and this is how the precision of CRP can be improved. A lot of work goes into this, and it has to be done methodically and manually: the test has to be certified, doctors and laboratories have to be trained, and so on.

    Now let's imagine that a genome test isn't the only marker clarifying whether someone is a CRP responder. There might be a pattern in another set of markers that are cost-efficient and measured frequently. This is why big data is important when it comes to training AI: the AI has to be trained on all the data available, and then it could detect this pattern without an expensive genome test. All the work that went into establishing the genome test could have been saved. AI detects patterns that a human wouldn't notice. Given full genome sequences for all test groups, the AI would eventually detect the genetic markers as well. However, none of this requires the new paradigm yet; it could be done already if medical data and privacy were not such sensitive issues.
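
    A toy sketch of this "pattern in cheap markers" idea, with an entirely simulated genotype flag and routine marker panel (no claim about any real marker): a model trained only on the cheap panel recovers part of what the expensive test predicts.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(11)
        n = 3000
        genotype = rng.integers(0, 2, size=n)                        # the "expensive" test
        cheap = rng.normal(size=(n, 5)) + 0.8 * genotype[:, None]    # correlated routine markers
        responder = (genotype + 0.3 * rng.normal(size=n) > 0.5).astype(int)

        score = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                                cheap, responder, cv=5).mean()
        print("responder prediction from cheap markers only:", round(score, 2))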

    The new paradigm wouldn't consider RA a diagnosis, because a diagnosis is only a means to therapy. An AI has no reason to care about generalized human terms like RA. The real question is what kind of therapy (including diet, lifestyle and supplements) you have to consider every day. The goal is to become an informed patient who has all the data available to self-assess the priority of each risk. Machine learning models naturally provide confidence values for their predictions, so where precision is lacking, you would also be aware of the uncertainty of a risk. The AI would also know which kinds of measurement would improve that precision. Compare it to genomic risk testing: you can do a cheap microarray test that selectively covers certain important markers, and if something turns out to be significant, you can follow up with 30x sequencing to narrow down and confirm the risk.

    At the moment, there is no data available that tells you whether a CRP non-responder should prefer a different RA therapy. These are separate concerns, and why would anyone look into them when there are more obvious links between other markers and therapies within the set of confirmed RA diagnoses? But an AI would find such a link and thereby improve the outcome of therapy. This also goes beyond the "choice" of a particular therapy: the AI could also determine dosages implicitly. This would be done by including randomness in live predictions - small deviations applied to the AI's suggested dosage, kept within safety ranges but large enough to distinguish subgroups beyond noise. Eventually, the AI would learn from the randomness and adjust its recommendations and/or the safety ranges. In today's terms, this would be an adjustment for age and gender, but the AI could take many more factors into account, such as related biomarkers (e.g., cytokines as mentioned before, or body size, muscle mass or hormone levels instead of gender, etc.).
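
    A toy sketch of how such bounded micro-randomization could, in principle, home in on a better dose. The response curve, the safety range and every number here are invented; it is only meant to show the mechanism of small randomized deviations plus feedback.

        import numpy as np

        rng = np.random.default_rng(5)
        low, high = 5.0, 15.0        # assumed safety range for the dose
        optimum = 11.0               # unknown "true" best dose (simulation only)

        def response(dose):
            # Invented noisy outcome: best near the optimum.
            return -(dose - optimum) ** 2 + rng.normal(scale=1.0)

        dose = 8.0                   # starting recommendation
        for _ in range(200):
            jitter = rng.uniform(-0.5, 0.5)          # small bounded deviation
            trial = float(np.clip(dose + jitter, low, high))
            if response(trial) > response(dose):     # noisy comparison
                dose = trial                         # nudge the recommendation
        print("learned dose:", round(dose, 1))       # drifts toward ~11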

    A disease is not a categorical concern unless external factors cause an immediate change (e.g., infection, a car accident, grief for a loved one). There are also lingering environmental factors, such as air quality, which could be measured and predicted. And there are diseases that are genetically inherited, but genetic diseases are usually maladaptations to the environment, so for everyone there could exist an environment that doesn't trigger the inherited disease. Regardless, the genetic disease will always be present on a scale. Right after birth, the risk could be assessed, and from then on every therapeutic choice (lifestyle, etc.) could make a difference to the outcome.

    In identical universes, identical twins would not be subject to randomness, so if one identical twin gets a disease and the other doesn't, there have to be factors that contributed to that outcome. If disease is always present on a scale, and everyone has every disease somewhere on that scale, even if the likelihood is below anything measurable, why worry about the category? Why not put all of them on a list, start with the most likely ones, take measurements to improve their precision, and build a good set of the most likely factors contributing to your health and lifespan? The only thing anyone then has to worry about is what any health-conscious person worries about every day: which choices have the greatest chance of making a difference, and do you want to put effort into them? RA or not RA? It doesn't matter then. If you have symptoms, report them to the AI, and the AI will not only learn from them to serve other people but also deliver predictions adapted to your therapeutic choices.
     
    Last edited: Dec 9, 2020
