Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Discussion in '2021 Cochrane Exercise Therapy Review' started by Lucibee, Feb 13, 2020.

  1. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    OK, but can't do it right now - have to work.
     
  2. Trish

    Trish Moderator Staff Member

    Messages:
    52,641
    Location:
    UK
    I wonder whether the difference of opinion on subjective outcomes in unblinded trials may be partly down to misunderstanding.

    Take the PACE trial as an example, ignoring for the sake of argument the ethical problems and outcome switching, and focusing just on objective versus subjective outcomes.

    The trial was set up to find out whether patients with CFS improved or recovered with GET. We all agree it was an unblinded trial.

    There were several subjective outcome measures which were combined to form criteria for improvement and published in the main paper in the Lancet, with a claim that 60% of patients improve with exercise. Great success. Lots of publicity. Similarly great publicity for the recovery paper, again based on a combination of subjective measures.

    There were also several objective outcome measures that were not reported in the main papers or publicity because, inconveniently for the researchers, they showed GET to be no better than doing nothing.

    So on that basis, the PACE trial was, in effect, an unblinded trial with subjective outcomes, and worthless. If the researchers had been honest and published, with equal fanfare, the fact that patients got no fitter, could still walk far less distance in 6 minutes than healthy people, and showed no benefit in ability to return to work, then they would have had to report honestly that, apart from transient subjective feelings of slight improvement, the treatment was ineffective on fitness, stamina and ability to work - and the trial would not have been so worthless.

    So what I am saying is: yes, the PACE trial had both objective and subjective outcome measures, so it should not have been worthless, and honest researchers would have reported both the subjective and objective outcomes, which would have been fine.

    It became worthless when the authors discarded the objective outcomes. The Cochrane review did the same. They knew objective outcome measures were possible and chose to write a protocol that excluded them, making the Cochrane review worthless, in my opinion.

    It would be like an asthma review only including subjective outcomes when the objective outcome shows very different results, as in the graph someone posted upthread.
     
    Woolie, JohnM, Mithriel and 29 others like this.
  3. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,188
    Location:
    Aotearoa New Zealand
    I think you are being too kind @Trish.

    Subjective measures might be reasonable things to include in an unblinded trial for a range of reasons, but they don't measure treatment efficacy. The PACE authors incorrectly represented subjective responses to questionnaires as accurately measuring treatment efficacy. They did this in an environment that actively increased positive expectations in the GET and CBT treatment arms, and actively made failure to improve with these treatments shameful.

    In the asthma study, it was a reasonable thing to ask patients how they thought they were after each of the interventions. The difference between the reported well-being and the actual impact on breathing gives us the important finding that asthma research should not rely on subjective reporting for assessing treatments in unblinded trials. It also tells clinicians to encourage patients to use their peak flow meter to assess their condition objectively rather than relying solely on how they feel. If the asthma study had had a question 'how convenient did you find the treatment?', that would have been a perfectly valid subjective outcome to measure, and it might have given a useful insight. Subjective measures can tell us things, but not reliably whether a treatment worked.

    Perhaps if the PACE authors had reported both the objective and subjective measures (notwithstanding all the issues that there were with the "objective" as well as subjective measures), as the asthma researchers and the Mendus trial did, it could have helped more people to understand that ME/CFS research should not use subjective outcomes to assess whether a treatment helps.

    Obviously, the existence of a subjective outcome in an unblinded trial doesn't make the trial worthless. The key thing is what question any outcome is used to answer. Using a subjective outcome as the measure of whether a treatment works in an unblinded trial gives a result that can't be assumed to be accurate.
     
    Woolie, Chezboo, 2kidswithME and 16 others like this.
  4. Medfeb

    Medfeb Senior Member (Voting Rights)

    Messages:
    574
    I have admittedly missed parts of this thread, but I'm struggling to understand why there is so much focus and discussion just on the issue of subjective measures in unblinded trials.

    @Hilda Bastian's example of surgery and a pain outcome appears to be a valid example of an unblinded trial with a subjective outcome.

    But even if that's questioned, PACE isn't problematic just because it used only subjective measures in an unblinded trial. It also ignored objective measures,
    ... switched outcome measures,
    ... used one recovery measure that meant the patient could worsen from entry but be considered better,
    ... used selection criteria that selected patients who did not all have ME,
    ... failed to account for the biomedical evidence that directly discredits its claimed mechanism of action for the therapy,
    and I'm sure others can fill in more

    It's the whole package that makes that trial such a problem.
     
  5. Art Vandelay

    Art Vandelay Senior Member (Voting Rights)

    Messages:
    588
    Location:
    Adelaide, Australia
    Indeed. Instead, they buried the results from these objective outcomes in subsequent papers. It took patients to point out that the results from these objective outcome measures didn't match up with the subjective outcomes:

    [attached screenshot of the paper]
    https://journals.sagepub.com/doi/full/10.1177/1359105316675213
     
    Last edited: Jun 10, 2020
    Woolie, alktipping, rvallee and 5 others like this.
  6. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Personally I think it is most likely because the key players are supremely adept at cultivating high-level, influential relationships across a wide sphere of influence. The sort of 'scientists' whose mantra is "it's not what you know but who you know". They then, in effect, choreograph the behaviours of other influencers to achieve the broader outcomes they seek. Of course those seeking to limit NHS funding may well do their own influencing - knighthoods come to mind. And yes, I am a big fan of "Yes Minister" :).
     
  7. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,870
    Location:
    betwixt and between
    Did anybody actually say this? I think it's clear to all of us that including relevant subjective endpoints is always highly valuable, including in unblinded trials. But the point some of us have repeatedly been trying to make is that if, in an unblinded trial, you don't apply robust objective outcomes alongside the subjective ones, the evidence on that particular treatment effect is zero.

    The examples you provided in the quote above (length of the surgery, blood loss, mortality) all seem to me to be relevant objective outcomes but they measure different treatment (side) effects. I think nobody suggested that objective outcomes are spoiled by also measuring subjective outcomes?

    I think part of the trial design is to formulate a hypothesis and to define which treatment effects matter according to that hypothesis and how they are measured. If you're saying a particular subjective outcome like pain relief is not the most relevant, or the only relevant, treatment effect in a particular study that also objectively reported other relevant treatment (side) effects, then why use this example as an argument in favor of using solely subjective endpoints in trials where blinding is not possible?

    (Edited for clarity.)
     
    Last edited: Jun 10, 2020
    Woolie, Skycloud, JemPD and 8 others like this.
  8. Daisybell

    Daisybell Senior Member (Voting Rights)

    Messages:
    2,631
    Location:
    New Zealand
    I have been wondering if the issue is one of whether or not the problem is temporary...
    So - if you are measuring pain which is not chronic, as in childbirth, then a subjective measure is fine. But if you are using a subjective measure for a long-term illness/syndrome, then it's not OK. You need an objective measure to be able to say that the subjective measure is actually worth measuring. Perhaps if you are looking just at pain, then a subjective measure is OK - but if you are looking at chronic pain, then your measures should include something objective about functioning as well as a subjective rating of the pain itself.
    And - I do think that when we are talking about 'therapies' designed to alter the way we think, subjective measures, particularly when they are short-term only, are especially problematic. I suppose the field of depression must be rife with problems of this nature...?
     
  9. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,188
    Location:
    Aotearoa New Zealand
    I guess because it's such a fundamental issue when determining what studies provide useful evidence. And because there seems to be a surprising level of faith in subjective measures as reliable indicators of treatment utility in unblinded trials.

    But even for things we might expect to be obvious ('epidural analgesia reduces pain levels when giving birth'), subjective outcomes have the potential to mislead. In the blinded example I quoted, the finding was that analgesia delivered during the second stage of delivery did not result in statistically significantly lower pain scores. Of course that is a finding specific to the dosage, drug and protocol, but still. I expect that if the pain relief had not been blinded, the reported pain scores would have looked quite different. So, in the case of this temporary pain, the subjective measure in an unblinded study probably would not have been fine.

    Given all the many examples of outcomes being distorted by bias in subjective measures, I don't understand how people can see a pain outcome as a valid example of a subjective outcome that reliably tells you whether a treatment works in an unblinded trial.

    As @MSEsperanza says, I don't think anyone is saying that subjective measures can't be useful.
     
  10. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Absolutely. As I think @Jonathan Edwards has said in the past, subjective outcomes are fine even in open label trials provided objective outcomes are also employed to calibrate them against.
     
    2kidswithME, rvallee, JemPD and 2 others like this.
  11. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,258
    My attempt to better explain how I see the lack of blinding + subjective outcomes problem.

    Lack of blinding makes subjective outcomes unreliable.

    Unblinded clinical trials that attempt to determine if a treatment is effective with subjective outcomes are generally worthless.

    There may be exceptions to this, if nonspecific factors are very carefully kept equal between treatment groups, but that is very difficult.

    Studies of CBT/GET and similar interventions do not keep nonspecific effects equal between groups. Bias that will affect subjective outcomes is built into the interventions, because their goal is to modify the patient's perception of their illness and symptoms and induce an optimistic state of mind. The investigators seem to believe this is one of the active ingredients of the therapy. They do not consider the possibility that all they're doing is using known methods to introduce bias, and that they are confusing this with having found an effective treatment. If an intervention were more than just bias, then objective outcomes should improve. The objective outcomes, however, are consistent with there being no treatment effect. Proponents of CBT/GET and similar interventions simply ignore unflattering objective outcomes when they make claims of treatment efficacy.

    The design of CBT/GET clinical trials and the many other problems with them (like outcome switching or absurd definitions of recovery) make it possible to obtain positive results for almost any intervention. This is not serious science. It is at best incompetence, at worst a deliberate attempt to mislead. This problem is not limited to ME/CFS; it is becoming a widespread problem affecting many interventions and health conditions. The ultimate outcome of this will be a proliferation of implausible and even absurd interventions (like, for example, the Lightning Process) that seem to work for a wide range of conditions that all have in common the absence of a biomarker of disease severity.

    Once researchers have obtained a positive result with such a flawed clinical trial, they tend to view it as confirmation that their explanatory models for the illness are correct. These models typically claim that the illness is somehow a product of the patient's mind (and this seems logical if an intervention that manipulates mental state appears to help).

    Objective outcomes may also be somewhat unreliable, depending on what is measured and how. In the context of ME/CFS, I would not consider a brief improvement in objectively measured daily steps, or a brief return to work, as reliable evidence of treatment efficacy, because the problem in ME/CFS is more one of being unable to sustain activities than of being unable to do them at all. If properly used, objective outcomes should be able to give much more accurate information on treatment efficacy.
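
    To make this concrete, here is a minimal simulation sketch (all numbers are hypothetical and illustrative, not taken from PACE or any real trial): with a true treatment effect of zero on both outcomes, an expectation-driven shift in self-report in the unblinded treatment arm is typically enough to produce a "statistically significant" subjective improvement while the objective outcome shows nothing.

    # Illustrative simulation only: hypothetical numbers, no real trial data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 150  # participants per arm (assumed)

    # True treatment effect is zero on both outcomes.
    expectation_bias = 2.0  # assumed shift in self-report from raised expectations
    subj_control = rng.normal(0, 6, n)                      # change in fatigue score
    subj_treatment = rng.normal(0, 6, n) + expectation_bias

    obj_control = rng.normal(0, 1500, n)                    # change in daily step count
    obj_treatment = rng.normal(0, 1500, n)                  # no bias, no real effect

    for label, a, b in [("subjective", subj_treatment, subj_control),
                        ("objective", obj_treatment, obj_control)]:
        t, p = stats.ttest_ind(a, b)
        print(f"{label}: mean difference = {a.mean() - b.mean():.1f}, p = {p:.3f}")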
     
    Last edited: Jun 10, 2020
    JohnM, Chezboo, 2kidswithME and 16 others like this.
  12. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,188
    Location:
    Aotearoa New Zealand
    Here's an example I mentioned earlier - for those who think pain is ok as a primary measure of treatment utility in an unblinded study.

    Mendus study

    So there was a blinded controlled crossover design that found no benefit from a supplement (MitoQ) on pain. If anything, there was a trend to more pain with MitoQ.
    [graph: pain scores in the blinded crossover study]

    At the same time, people who missed out on taking part in the blinded study were able to buy their own MitoQ and participate in an open-label study. For this study, MitoQ was reported to reduce pain (albeit not by much) over the study period. There was a statistically significant decrease in pain in the first 40 days.
    [graph: pain scores in the open-label study]

    So, do we conclude that the trial provides evidence that MitoQ is slightly helpful for pain in ME/CFS?
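
    One way to frame the two sets of results (a sketch with purely hypothetical numbers, not figures from the Mendus data): with the same product and the same pain outcome assessed under blinded and open-label conditions, the gap between the two effect estimates gives a rough indication of how much expectation contributes in the open-label setting.

    # Purely hypothetical numbers for illustration; not taken from the Mendus study.
    blinded_effect = -0.1    # assumed mean change in pain score, blinded crossover (if anything, worse)
    open_label_effect = 0.6  # assumed mean change in pain score, open-label arm

    # Same supplement, same outcome measure: the difference between the two estimates
    # is roughly the expectation-related component of the open-label result.
    bias_estimate = open_label_effect - blinded_effect
    print(f"Expectation-related component: about {bias_estimate:.1f} points")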
     
  13. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,188
    Location:
    Aotearoa New Zealand
    Beautifully summarised @strategist
     
    2kidswithME, rvallee, Trish and 3 others like this.
  14. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    564
    Location:
    Warton, Carnforth, Lancs, UK
    From a psychologist's perspective - my understanding of GET and its underlying rationale.

    GET is aimed at increasing physical activity despite ongoing symptoms and flare-ups of symptoms - because these ongoing symptoms are understood within the GET model of ME/CFS to be due to deconditioning, misattribution of benign bodily sensations as malign, and the patient focusing attention on these and getting distressed and avoidant of activity as a result. This cycle goes round and round, maintaining the patient's symptoms.

    The model asserts that flare-ups of pain are normal when people start rehab after being inactive - akin to the acute exacerbation of pain, stiffness and so forth post surgery or after an accident, for example, during acute physio rehabilitation. This is largely 'to be expected'. Any increase in pain or other symptoms is purely down to deconditioning, lack of fitness, lack of stamina, lack of use, inflammation and so forth. Within the GET model of CFS there is no physical reason why patients cannot increase activity safely and consistently - other than that the patient blocks the process, due to fear of harm or of worsening symptoms like pain and debility. The GET model assumes that the symptoms the patients experience are due to the patients misinterpreting and misattributing benign bodily sensations as malign. Once the patient starts to move and gets going, bit by bit they can do more and more, and are actively encouraged to. The theory is that this process can be additive until the person is functioning well and largely as normal, and has re-learned that their symptoms are benign. Lots of talk about two steps forward, one back - like in standard physio rehab of an acute injury.

    So, that should be straightforward to do in practice and to demonstrate objectively. Easy peasy (if it were true).

    It is in essence a behavioural intervention to try to overcome a fear / phobia of movement, activity and exercise. Phobias are straightforward to treat and overcome in many instances and circumstances. Again, easy peasy (if it were true).

    However, this completely misses the point: the patients' main symptoms of post-exertional malaise (PEM) and increased debility across a wide range of symptoms. The more activity (mental and physical) they do, the worse they feel and the more debilitated they become. There is objective evidence for increased activity making pwME/CFS worse. When objective measures of activity are used, patients who increase activity then go on to do less activity and report more pain and lowered mood. That is the opposite of what would be expected by the GET model.

    The patients' voice is completely absent from the GET way of working. The underlying clinician beliefs and the GET model being used are not openly shared with the patients. When this is subsumed within the MUS model (TC et al see these things as the same, e.g. CF = CFS = ME = ME/CFS = MUS = SSD = BDD = FMS = IBS etc), sharing the underlying model is actively discouraged. It's opaque. This is, in my view, unethical. There is no way a patient can truly give their informed consent. It is the opposite of good medical care. It (GET) is 'done' to the patient, who is not fully informed. I have no doubt that the clinicians who are 'doing' this are well intentioned - but that is not enough for professional, ethical practice.

    And no objective checks are made to see whether the process is effective or has construct validity - that what is being 'done' in research or clinic resembles, or is based on, what the model says it is doing. It only appears to matter whether the patient 'feels better'. Which they are going to report, because if they have failed to improve it is - by definition of this model - the patient who has failed. And no one likes that, so there is huge psychological and social pressure to conform, continue and smile, whether it is working or not. Especially if the clinician was nice, welcoming, supportive, caring and so forth; and the patient had been pre-primed and given messages throughout the process that GET was effective, safe and so forth; and that change was down to the patient to take forward. Not achieving a small, positive effect under such circumstances would be more startling.

    From a theoretical perspective the GET model should easily show high levels of change if the model were correct and had good validity - including face and construct validity. I would expect large effect-size changes which can be measured objectively and subjectively, and can be independently verified. Assessors pre- and post-therapy can be independent of the treating clinicians; that could/should be done to reduce bias too. Small subjective changes should ring large alarm bells. They do for me. Absence of change in objective measures, or the active dismissal / minimisation of the usefulness of objective measures by researchers, should be ringing massively large clanging bells of bias.

    As humans are highly loss averse, approaching the idea that GET is not effective is psychologically a difficult process - if the researcher / clinician has truly and wholly believed in it. The belief that GET for ME/CFS works will be maintained pretty much at all costs - the human will move the goal posts until the desired outcome for the belief is 'proven', i.e. persuade themselves, co-researchers, funding agencies, colleagues and peer reviewers that switching outcomes and relying on subjective measures etc. is OK - unless they are held to account independently (that should happen via peer review...) and by objective evidence. Otherwise it is all belief and wishful thinking - no matter how well intentioned or desired.

    Joan Crawford
    Counselling Psychologist
     
    Woolie, JohnM, Mithriel and 25 others like this.
  15. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    987
    Excellent points w.r.t. the discussion on subjective & objective outcomes in an unblinded trial (bolding mine). @strategist sums things up very well. Hope you can read those posts, @Hilda Bastian.
     
    Last edited: Jun 10, 2020
    MEMarge, Trish and Caroline Struthers like this.
  16. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    564
    Location:
    Warton, Carnforth, Lancs, UK
    Doing the above behaviours is what the human will do to avoid loss (of face, professional identity, etc.) when they know they are wrong and lack the courage to face up to this fact. This kind of behaviour should alert others to the pretty obvious fact that there was a dud result that the researchers don't want to fess up to / lack the capacity to accept. When that's coupled with a vice-like grip on the 'model' (I'm right, I'm right, I'm right... because I say so...) it won't be relinquished easily. And, if I'm being cynical, highly debilitated patients are the least likely group to kick up a fuss - so the researchers have carried on, in the face of objections. However, the researchers have completely misunderstood the human spirit: the desire to be understood and the drive for knowledge, health and decency. People will not rest until they are understood.
     
    Binkie4, Mithriel, Chezboo and 17 others like this.
  17. Trish

    Trish Moderator Staff Member

    Messages:
    52,641
    Location:
    UK
    I agree with all your points in that post, @Hutan. You put what I was trying to say better than I did. The subjective outcomes are not measures of efficacy; they are measures of how the patient (temporarily) reports feeling. As with the asthma paper, it shows that subjective outcomes are misleading if the required outcome is efficacy in improving breathing. The subjective outcomes measured something different.

    If, like Chalder, you think CFS is psychosomatic, with symptoms being a misinterpretation of normal bodily sensations, then in her world of logic, efficacy is measured by self-reported improvement on her ridiculous fatigue scale. The fact that the patients' physical health and ability to function were no better than those in the group with no treatment seems to be of no interest to her, because for her there was no physical health problem in the first place, and ability to function is related to psychological blocks, not physical ones. She, and many others, are so steeped in the myth that this is a psychosomatic problem that getting us to interpret our symptoms differently, so that we report reduced subjective symptoms, is, in her eyes, a cure of our psychosomatic illness.

    Sorry, yes, I knew using PACE as my example would be a problem; that's why I explicitly said I was setting aside the ethical and other problems to focus just on the subjective/objective outcomes aspect of the trial. My point was that because only the subjective outcomes were reported in the main papers and were used to claim GET is effective, that makes the trial - as published - worthless.
     
  18. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,870
    Location:
    betwixt and between
    I'm still only able to occasionally read and post, so again apologies for missing most of the discussion when replying to a particular post.

    Yes, having objective outcomes that are only self-reported is also problematic. I think in most cases there are ways to reduce the risk that objective outcomes aren't adequately reported, though. Consumption of painkillers that need to be prescribed might be a feasible thing to double-check?

    Of course, outcomes like painkiller consumption or how many steps you walk each day are not entirely objective in every sense of the term, e.g. if participants have a certain range of options to endure more pain or to endure the payback of any increased activity, etc.

    There are also a couple of other objective outcomes that aren't ideal to reliably measure an improvement. But then, a combination of objective outcomes could be used.

    I think this example and all the discussions on other forum threads show that it's more complex than just subjective and objective outcomes; it's also about assessing the adequacy of outcomes in general, as well as the best way to measure and report both subjective and objective outcomes.

    Many forum members are aware of this complexity and discuss implications for trial design, including with investigators who are interested in getting our input.

    However, that complexity to me still doesn't seem to challenge some basic facts about bias in trial design.
     
    Last edited: Jun 10, 2020
    2kidswithME, Sean, Hutan and 3 others like this.
  19. Trish

    Trish Moderator Staff Member

    Messages:
    52,641
    Location:
    UK
    I agree with all of that. It was what I was rather ineptly trying to say.

    But there's something that puzzles me about what the asthma study shows.
    [graph from the asthma study]
    https://www.nejm.org/doi/full/10.1056/Nejmoa1103319
    Doesn't this demonstrate that the placebo effect is so powerful in its effects on subjective outcome measures that such measures should never be used for assessing efficacy for physical symptoms, even in double-blinded trials? After all, if only the subjective measure had been used, the researchers would have concluded that albuterol is no better than placebo as an asthma treatment, and therefore that it is ineffective.

    What does this say about the conclusions drawn from the Phase 3 Rituximab trial:
    This used subjective outcome measures. One thing it clearly shows is a strong placebo response, but does it show that Rituximab was ineffective in treating ME? On the basis of the asthma study's findings, is that conclusion valid?
     
    inox, Woolie, Sean and 4 others like this.
  20. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,870
    Location:
    betwixt and between
    I missed that. What do you mean by a valid example, @Medfeb? Do you mean that investigating pain as an outcome in an unblinded, inadequately controlled trial, with self-reported pain relief as the only measure, is valid?

    I found these statements by @Hilda Bastian:

    I am not sure what you mean here, @Hilda Bastian, but since I didn't read all the posts, maybe I missed it and you said somewhere that this also applies to unblinded trials...

    ...and that this will provide reliable evidence?

    I don't think you're suggesting that, just because people don't apply certain criteria in other studies, it's OK to do studies that way?

    Or that, just because it would be helpful to know something, there must always be a means available to know it?

    Apologies for being trivial, but in case it could be helpful for the discussion:

    Do you agree that, if you have well-founded reasons to apply certain treatments but aren't sure about their superiority over other treatments, because for whatever reason this can't be properly assessed, the best thing to do is to say honestly what you know for sure and what you don't know? That you should explain your reasoning for suggesting a certain treatment, but at the same time be very clear about the fact that there is not sufficient evidence to favor one treatment over another?

    I think documenting and registering the outcomes of a certain intervention in practice can be of great value. But it's something different from doing an RCT.

    For ME research I think treatment documentation and observational studies could be very useful, but these approaches have to be clear about the biases involved, apply adequate protocols, and shouldn't be used to exaggerate the evidence.

    (Apologies in advance if I'm not able to get back to replies for a while.)

    Edited for clarity.
     
    Last edited: Jun 10, 2020
