
Cochrane review and the PACE trial

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Sly Saint, Feb 21, 2018.

  1. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,584
    Location:
    UK
    MEMarge likes this.
  2. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I've just been looking back at Larun's reply to Courtney's comment on their switching of primary outcomes.

    His comment: https://sites.google.com/site/mecfs...ic-fatigue-syndrome/primary-outcome-switching

    To me, it seems devoid of substance, but I wanted to check if I was missing anything.
    Reply:

    Also the abstract of that BMJ paper makes it seem an odd reference to use seeing as the nonsignificant difference was for this comparison: "Exercise therapy versus treatment as usual, relaxation or flexibility".

    http://www.bmj.com/content/343/bmj.d3340.long

    It's hard to argue that 'treatment as usual, relaxation or flexibility' are so effective in CFS that breakthrough new treatments will be unlikely to reach a statistically significant difference.

    edit: The comments on that BMJ piece include some harsh criticism from Michael J. Campbell, and a comment from Paul McCrone. To me, the piece didn't seem terribly relevant, and indeed, currently there seems to be more concern in discussions about psych research that the traditional cut-off for statistical significance is too loose, rather than too tight.

    Considering the other reasons for fearing that this outcome would wrongly favour exercise therapy due to problems like social desirability bias, etc, that seems a weak point.

    Also, they fail to address the reason why outcome switching is seen as a bad thing: it allows researchers to choose to present results in ways that favour their own preconceptions.

    Do any of our more statistically skilled members think that I'm missing anything of substance here?:

    "We disagree that presenting MD and SMD rather than SMD and MD is an important change, and we disagree with the claim that the analysis based on MD and SMD are inconsistent. This has been discussed as part of the peer-review process. Confidence intervals are probably a better way to interpret data that P values when borderline results are found (2). Interpreting the confidence intervals, we find it likely that exercise with its SMD on -0.63 (95% CI -1.32 to 0.06) is associated with a positive effect. Moreover, one should also keep in mind that the confidence interval of the SMD analysis are inflated by the inclusion of two studies that we recognize as outliers throughout our review. Absence of statistical significance does not directly imply that no difference exists."
     
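    For anyone who wants to sanity-check that, here is a minimal Python sketch (my own arithmetic illustration, not anything from the review) that converts a reported estimate and symmetric 95% CI back into an approximate two-sided p-value, assuming a normal sampling distribution. Plugging in the quoted SMD of -0.63 (95% CI -1.32 to 0.06) gives p of roughly 0.07, which is just another way of saying that the interval crosses zero and the result misses the conventional 0.05 threshold.

```python
# Minimal sketch (illustration only): recover an approximate two-sided
# p-value from an effect estimate and its 95% CI, assuming a normal
# sampling distribution. The figures are the SMD and CI quoted above.

import math

def p_from_ci(estimate, lower, upper):
    """Approximate two-sided p-value from a symmetric 95% CI."""
    se = (upper - lower) / (2 * 1.96)                   # back out the standard error
    z = estimate / se                                   # z-statistic
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))   # standard normal CDF
    return 2 * (1 - phi)

print(round(p_from_ci(-0.63, -1.32, 0.06), 2))  # ~0.07, just above the 0.05 cut-off
```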
    Last edited: Mar 2, 2018
  3. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    This claim from the Minister at the recent PACE trial Westminster Hall debate could be a reason to get MPs to ask questions of Cochrane.

     
  4. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    bump.
     
    adambeyoncelowe and MSEsperanza like this.
  5. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,332
    Do we know the effect of excluding the outliers?
     
  6. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Not beyond what they say there. I don't think that there's any good reason for excluding them either. If one takes action to artificially lower the differences between studies, and only then gets a significant overall effect, that would seem a pretty questionable way of doing things.
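    To make the outlier question concrete, here is a toy DerSimonian-Laird random-effects pooling sketch with invented effect sizes; none of the numbers come from the review's data. It just shows the mechanism Larun appeals to: a couple of divergent studies inflate the estimated between-study variance and widen the pooled confidence interval, so dropping them both narrows the interval and moves it away from zero, without adding any new information about the treatment.

```python
# Toy DerSimonian-Laird random-effects pooling with invented (effect, SE)
# pairs, purely to show the mechanism: heterogeneous "outlier" studies
# inflate the between-study variance and widen the pooled CI. None of
# these numbers come from the Cochrane review's actual data.

import math

def pool_random_effects(studies):
    """DerSimonian-Laird pooled estimate and 95% CI from (effect, SE) pairs."""
    w = [1 / se ** 2 for _, se in studies]
    mu_fixed = sum(wi * y for (y, _), wi in zip(studies, w)) / sum(w)
    q = sum(wi * (y - mu_fixed) ** 2 for (y, _), wi in zip(studies, w))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)       # between-study variance
    w_star = [1 / (se ** 2 + tau2) for _, se in studies]
    mu = sum(wi * y for (y, _), wi in zip(studies, w_star)) / sum(w_star)
    se_mu = math.sqrt(1 / sum(w_star))
    return round(mu, 2), (round(mu - 1.96 * se_mu, 2), round(mu + 1.96 * se_mu, 2))

# Six hypothetical trials; the last two play the role of "outliers".
trials = [(-0.3, 0.2), (-0.4, 0.25), (-0.2, 0.2), (-0.35, 0.3),
          (0.6, 0.3), (0.9, 0.35)]

print("all trials:      ", pool_random_effects(trials))      # wide CI crossing zero
print("outliers dropped:", pool_random_effects(trials[:4]))  # narrower CI, excludes zero
```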

    OT: I just took another look at the latest version of this review, and see that it says in the references "Wearden 2010 {published and unpublished data}".

    I wondered if this was a change made in response to the Courtney comment pointing out they had clearly used unpublished data from FINE, despite claiming otherwise, but actually the review also still includes this claim:

    "For this updated review, we have not collected unpublished data for our outcomes but have used data from the 2004 review (Edmonds 2004) and from published versions of included articles."

    How can they have these two contradictory claims in their review at the same time, even after the problem has been pointed out to them?!
     
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,510
    Location:
    London, UK
    A number of the following posts have been moved from another thread.

    I think a much more fundamental change is needed. There is nothing wrong with Oxford criteria studies per se. The scientific problem with Oxford studies of exercise therapy like PACE is more subtle and relates to the fact that the criteria will skew the recruitment of patients who have been informed of the nature of the treatment arms.

    The more fundamental problem is that the people who have been assessing these trials simply have no understanding of basic trial methodology and reliability of evidence. The reviews need to be done by people who understand trials. The current situation seems to relate to the fact that the Mental Health section of Cochrane was set up by people who do not understand.

    If the reviewers understood then ALL the exercise therapy trials would be rejected because none of them are controlled trials and Cochrane reviews require controlled trials. The current reviewers do not understand what a controlled trial is.

    I think it needs to be made clear to Cochrane that they have to have competent assessors. I have tried to do that but have had no feedback. I am not that optimistic that even people like Iain Chalmers understand the problem. The phoney nature of Cochrane Mental Health board needs to be exposed but it may take time to get that into the public consciousness.
     
    Last edited by a moderator: Mar 21, 2018
  8. Sasha

    Sasha Senior Member (Voting Rights)

    Messages:
    3,780
    Location:
    UK
    How can this be done? I.e. what can we and/or our charities and/or our researchers do?
     
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,510
    Location:
    London, UK
    I am not sure that ME charities or researchers can do much. They will just make themselves unpopular. I think it has to come from outside. We need more people like James Coyne. There are various initiatives going on but that is all I can say at present.
     
    MSEsperanza, Atle, deleder2k and 13 others like this.
  10. Medfeb

    Medfeb Senior Member (Voting Rights)

    Messages:
    565
    I agree that Cochrane is not appropriately assessing the trial methodologies. That may have to do with their placing these reviews in the mental health section although the initial evidence review conducted by the US Agency for Healthcare Research and Quality (AHRQ) also ranked PACE as a good trial in spite of the trial flaws.

    But I disagree that there's nothing wrong with Oxford.

    By definition, Oxford requires only chronic, disabling fatigue for which there is no medical explanation. The AHRQ evidence review noted Oxford's non-specificity and stated "its use as entry criteria could have resulted in selection of participants with other fatiguing illnesses or illnesses that resolve spontaneously with time." It also noted that it "may provide misleading results" that are not applicable to patients who meet other case definitions of ME or ME/CFS. As a result, both AHRQ and the NIH's Pathways to Prevention report called for Oxford to be retired because it "may impair progress and cause harm." This led to AHRQ redoing its analysis after excluding Oxford studies, which resulted in the downgrading of recommendations for CBT and GET. They noted that trials of CBT and GET in patients who had hallmark criteria like PEM were "blatantly missing."

    IOM didn't even consider Oxford studies. But it called out both the 1994 Fukuda and 2005 Reeves definitions for also being overly broad and including patients with other conditions. More generally, the IOM noted both the lack of internal validity of the evidence base due to e.g. trial methodology and lack of external validity of the evidence base due to issues with the definitions used. Cochrane is not paying attention to either of these issues and both are problematic when developing treatment recommendations for people with ME.
     
    Last edited: Mar 21, 2018
  11. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    The terms "RCT" and "gold standard" seem to get bandied about willy-nilly by people who should know much better, and people who should know much better seem to get duped by it.
     
    inox, MSEsperanza, WillowJ and 3 others like this.
  12. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,584
    Location:
    UK
    Over on a thread about a review of probiotics research papers I came across the 'Jadad scale'.
    "The Jadad scale was used to asseverate the quality of the clinical trials considered".

    Not heard of it before so had a look on wikipedia:

    "
    Description
    The Jadad scale independently assesses the methodological quality of a clinical trial, judging the effectiveness of blinding. Alejandro Jadad-Bechara, a Colombian physician who worked as a Research Fellow at the Oxford Pain Relief Unit, Nuffield Department of Anaesthetics, at the University of Oxford, described the scale, allocating trials a score of between zero (very poor) and five (rigorous), in an appendix to a 1996 paper.[1] In a 2007 book Jadad described the randomised controlled trial as "one of the simplest, most powerful and revolutionary forms of research".[2]"

    This bit was interesting:
    "
    Criticism
    Critics have charged that the Jadad scale is flawed, being over-simplistic and placing too much emphasis on blinding,[15][16] and can show low consistency between different raters.[17] Furthermore, it does not take into account allocation concealment, viewed by The Cochrane Collaboration as paramount to avoid bias.[18]"

    https://en.wikipedia.org/wiki/Jadad_scale

    (see also the allocation concealment link https://en.wikipedia.org/wiki/Randomized_controlled_trial#Allocation_concealment )
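    For what it's worth, the scoring itself is simple enough to sketch. This is my own rough paraphrase of the five items as summarised on that Wikipedia page, not an official implementation, and it shows how little the scale actually checks:

```python
# Rough sketch of Jadad scoring as summarised on the Wikipedia page:
# points for reported randomisation, double blinding and a description of
# withdrawals, with a point deducted if the randomisation or blinding
# method is inappropriate. Illustration only, not an official implementation.

def jadad_score(randomised, randomisation_method,
                double_blind, blinding_method,
                withdrawals_described):
    """Methods are 'appropriate', 'inappropriate' or 'not described'."""
    score = 0
    if randomised:
        score += 1
        if randomisation_method == "appropriate":
            score += 1
        elif randomisation_method == "inappropriate":
            score -= 1
    if double_blind:
        score += 1
        if blinding_method == "appropriate":
            score += 1
        elif blinding_method == "inappropriate":
            score -= 1
    if withdrawals_described:
        score += 1
    return max(0, score)   # reported range: 0 (very poor) to 5 (rigorous)

# An open-label trial with subjective outcomes still scores 3 out of 5,
# since it only loses the two blinding-related points.
print(jadad_score(True, "appropriate", False, "not described", True))  # -> 3
```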
     
  13. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,143
    RCTs are NOT the gold standard. Properly designed double blinded RCTs with objective outcome measures are the "gold standard" (for clinical trials), ignoring for now the "platinum standard" of meta-analyses.

    The whole point of designing studies and investigating evidence in an evidence-based review is to minimise biases, but not all biases can be addressed using strictly formal methods. The Cochrane review presumed, without testing this concept, that the PACE trial was a high-quality study. It clearly is not. It's obvious even on a casual read of the first paper. I spotted problems with it, and I am sure a great many others did too. It only takes an undergraduate knowledge of science and the scientific method to do that.

    When you amalgamate data you have to know exactly what it is you are amalgamating, and that there is no systemic bias in how the data were gathered in many or all of the studies. So if there is systemic bias, or the diagnoses are uncertain, all a metastudy does is reinforce these problems. There are, however, methods for identifying these problems, and some of them are formal methods, but it's not clear that the Cochrane Reviews we are discussing did this, or did this properly.

    In short, a metastudy of poor quality data results in a poor quality metastudy. It's GIGO all over again, though I do like to use BIBO: babble in, babble out.

    The other issue arises with the claim that if a study is not gold standard then it's not evidence-based. This is a big misrepresentation of the issue. Even anecdotal evidence is evidence. EBM is about ranking evidence into categories with similar bias risks. It's all evidence-based; it's about how reliable the evidence is, and what kinds of bias go with what kinds of studies.
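    As a toy illustration of the GIGO point above (an invented simulation, not any specific review's data): if every trial shares the same systematic bias, say unblinded participants inflating a subjective outcome, then pooling the trials does not cancel the bias, it just estimates "true effect plus bias" ever more precisely.

```python
# Toy simulation of the GIGO point: pooling trials that all share the same
# systematic bias does not remove the bias, it just estimates
# (true effect + bias) ever more precisely. All numbers are invented.

import math
import random

random.seed(1)

TRUE_EFFECT = 0.0     # suppose the treatment actually does nothing
SHARED_BIAS = 0.4     # e.g. unblinded self-report inflating every trial
N_TRIALS = 12
SE = 0.25             # within-trial standard error (same for all, for simplicity)

effects = [random.gauss(TRUE_EFFECT + SHARED_BIAS, SE) for _ in range(N_TRIALS)]

# With equal SEs, inverse-variance pooling reduces to a plain mean.
pooled = sum(effects) / N_TRIALS
pooled_se = SE / math.sqrt(N_TRIALS)

print(f"pooled: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
# Prints a value near 0.4 with a tight CI: precise, 'significant', and wrong.
```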
     
    MSEsperanza, Woolie, Inara and 13 others like this.
  14. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,143
    I look forward to reading more about this at the appropriate time.
     
  15. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Quite so. Errors compounding errors. Not so different from the principle of measuring off a set of marks on a piece of wood, say. If you need to make 12 cuts an inch apart, do you measure the next mark from the previous one? Or from the original baseline one? As we all know, if you do the former then preceding errors accumulate into the next, as it implicitly assumes the previous marks are error-free.

    I've no expertise on such things, but I would have thought a fundamental principle of any metastudy should be reassessing the integrity of the underlying trials (additional independent peer reviewing maybe), and not just blindly trotting out what the original authors reported.
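    To put the analogy in rough numbers (a purely illustrative sketch with made-up figures): if each individual measurement carries an independent error of about 1 mm, chaining each mark off the previous one lets the error of the final mark grow with roughly the square root of the number of steps, whereas measuring every mark from the original baseline keeps each mark's error at about 1 mm.

```python
# Purely illustrative: error build-up when each mark is measured from the
# previous mark (chained) versus from the original baseline, assuming each
# individual measurement has an independent error of about 1 mm (SD).

import random
import statistics

random.seed(0)
N_CUTS = 12        # marks an inch apart, as in the example above
ERROR_SD = 1.0     # mm of error per measurement

def final_mark_error(chained):
    if chained:
        # each new mark inherits all of the previous marks' errors
        return sum(random.gauss(0, ERROR_SD) for _ in range(N_CUTS))
    # measured from the baseline: only one measurement's error per mark
    return random.gauss(0, ERROR_SD)

for chained in (True, False):
    errors = [final_mark_error(chained) for _ in range(10_000)]
    label = "chained from previous mark" if chained else "measured from baseline  "
    print(f"{label}: SD of last mark's error ~ {statistics.stdev(errors):.1f} mm")
# Chained errors grow roughly as sqrt(12) ~ 3.5 mm; baseline stays ~1 mm.
```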
     
  16. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,143
    Cochrane and other guidelines have methods to do this. However, the guidelines are more honoured in the breach than in the observance. Furthermore, they are checklist guidelines, so if a problem falls outside the checklist it won't be identified. Typically, in an evidence-based review an investigator (and there are often many) might have hundreds or even thousands of studies to investigate. So they run a fast checklist test. They typically do not do any deep investigating. This is one of the problems with EBM.

    Now, in a well-researched area with a great number of very high-quality studies, any bad study will only be a blip in the total review process. In a poorly researched area, where most studies have systemic flaws and use poorly validated criteria (e.g. Oxford), it's not a blip any more, and the flawed studies will overwhelm the high-quality ones, if those exist at all. This problem is why psychobabble, and psychoquackery, is so dangerous. Most of the research is deeply flawed, so every evidence-based review will be substantially biased.
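    A crude back-of-the-envelope way to see the "blip versus overwhelm" point, with invented numbers and equal weighting for simplicity: the pooled effect is pulled towards the bias roughly in proportion to the fraction of flawed studies in the pool.

```python
# Back-of-the-envelope illustration (invented numbers, equal weights):
# the pooled effect is pulled towards the bias in proportion to the
# fraction of systematically flawed studies in the evidence base.

TRUE_EFFECT = 0.0   # suppose the treatment genuinely does nothing
BIAS = 0.5          # exaggeration in each flawed study's reported effect

for flawed_fraction in (0.05, 0.25, 0.50, 0.80):
    pooled = (1 - flawed_fraction) * TRUE_EFFECT + flawed_fraction * (TRUE_EFFECT + BIAS)
    print(f"{flawed_fraction:.0%} of studies flawed -> pooled effect ~ {pooled:.2f}")
# 5% flawed is a blip (~0.03); at 80% flawed (~0.40) the bias dominates.
```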
     
    MSEsperanza, Inara, WillowJ and 9 others like this.
  17. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,510
    Location:
    London, UK
    I was asked to review the most recent exercise therapy study. What was interesting was that as an independent peer reviewer I was told that I should not consider the quality of the evidence because that was done in house! Needless to say I ignored these instructions and assessed the quality of evidence.

    As far as I can see there is a problem with a standardised 'tool' Cochrane uses for evidence quality - it is not fit for purpose. Either that or it is not applied.

    I have made some notes about this.
     
    inox, MSEsperanza, Keela Too and 24 others like this.
  18. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,143
    That is what I see too.
     
    MSEsperanza, Inara, Trish and 2 others like this.
  19. Kalliope

    Kalliope Senior Member (Voting Rights)

    Messages:
    6,279
    Location:
    Norway
    Prof. Gundersen tweeted about a fresh Cochrane review on CFS and CBT/GET with a link to this article, but I can't find the publication date.
    Is this brand new?

    Larun et al Exercise therapy for chronic fatigue syndrome
    AUTHORS' CONCLUSIONS: Patients with CFS may generally benefit and feel less fatigued following exercise therapy, and no evidence suggests that exercise therapy may worsen outcomes. A positive effect with respect to sleep, physical function and self-perceived general health has been observed, but no conclusions for the outcomes of pain, quality of life, anxiety, depression, drop-out rate and health service resources were possible. The effectiveness of exercise therapy seems greater than that of pacing but similar to that of CBT. Randomised trials with low risk of bias are needed to investigate the type, duration and intensity of the most beneficial exercise intervention.

    https://twitter.com/user/status/1010448332186058752

     
    Inara likes this.
  20. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    It says 2017 at the link.
     
    Trish and Kalliope like this.
