
Publication bias and the evidence base for CBT/GET for ME

Discussion in 'Psychosomatic news - ME/CFS and Long Covid' started by Campanula, May 9, 2021.

  1. Campanula

    Campanula Established Member (Voting Rights)

    Messages:
    54
    Location:
    Norway
    Question: Has anybody looked into publication bias with regard to the evidence base for CBT or GET for ME?

    I read somewhere that in clinical psychology, when they appraise the evidence base for CBT for depression, they control for publication bias. (That is, they account for the fact that many studies without significant results, or in which the control group actually did better than the intervention group, are never published.) Without controlling for this, the intervention can look considerably better than it actually is.

    And given how unobjective and dishonest many of the researchers studying CBT for ME seem to be, I find it probable that publication bias is even more of an issue here.

    I don't know how this is measured in practice - maybe by looking at the number of trials registered in a given time period and how many of them resulted in published studies?

    I would be interested to hear your input on this. Has it been done before? Could this be something worth looking into? I'm afraid I don't have the energy or knowledge to look into it myself, but if anybody's keen on exploring this topic, I think it could yield some interesting results.
     
    Last edited: May 9, 2021
    Hutan, Mike Dean, alktipping and 10 others like this.
  2. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    985
    Systematic reviews and meta-analyses sometimes include a funnel plot that is meant to show the existence or absence of publication bias. I don't know whether any of the systematic reviews of GET and CBT for ME/CFS, such as Cochrane's or the AHRQ's, have done so, but I believe not.

    ETA: the Cochrane review of exercise therapy for CFS (revised/2019 version) states:

    Assessment of reporting biases

    We planned, at the protocol stage, to construct funnel plots when sufficient numbers of studies allowed a meaningful presentation, to establish whether reporting biases could be present (Egger 1997). Asymmetry in funnel plots may indicate publication bias. We identified an insufficient number of studies to use this approach in the present version of the review. We considered clinical heterogeneity of the studies as a possible explanation for some of the heterogeneity in the results.
    The 2008 Cochrane review of CBT for CFS only had one funnel plot:

    Consideration of publication bias

    Funnel plots were produced for the Comparison 01 [CBT vs. usual care] primary outcome of reduction of anxiety symptoms (6 studies). Visual inspection of the funnel plot indicated possible asymmetry, which might suggest that small trials with negative outcomes were not included in the review. However, the small number of studies included in the funnel plot limits further meaningful interpretation.
    The AHRQ evidence review (p. 10 / p. 53 of the PDF) states:

    Grading the Body of Evidence for Each Key Question

    The overall strength of evidence was assessed for each Key Question and outcome in accordance with the AHRQ Methods Guide. (...) There was no way to formally assess for publication bias due to the small number of studies, methodological shortcomings, or differences across studies in designs, measured outcomes, and other factors.​
     
    Last edited: May 9, 2021
    Milo, Hutan, Mike Dean and 14 others like this.
  3. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    You need quite a few trials in order to see if there's publication bias. This review combined all "behavioral interventions with a graded physical activity component" and said: "We found some indication of publication bias."
    Differential effects of behavioral interventions with a graded physical activity component in patients suffering from Chronic Fatigue (Syndrome): An updated systematic review and meta-analysis - PubMed (nih.gov)

    I think there might also be an option to explore p-values. Normally, p-values close to 0.05 (the conventional threshold for statistical significance) should be quite rare, but we see them all the time in publications.
    There are relatively simple tests to look for publication bias this way. Unfortunately, because of the severity of my illness and my lack of formal training, I'm not able to do such an analysis, but I hope that others will look into it.

    I think it's pretty certain that there is a lot of publication bias, given that the whole GET/CBT paradigm constantly publishes 'significant results' without enormous sample sizes. Usually, these studies have a statistical power of around 80% at most, meaning that even if the effect is real, roughly one study in five should fail to pick it up as significant. Yet these authors claim to have found an effect almost every time...
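    The p-value intuition above can be sketched with a quick simulation. All numbers here (trial counts, sample sizes, the effect size of 0.5) are hypothetical, chosen only to show the expected shape of the p-value distribution, not taken from any real review:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_p_values(n_trials=20000, n_per_arm=30, effect=0.0):
    """Simulate p-values from simple two-arm trials with a given true effect size."""
    a = rng.normal(0.0, 1.0, size=(n_trials, n_per_arm))
    b = rng.normal(effect, 1.0, size=(n_trials, n_per_arm))
    return stats.ttest_ind(a, b, axis=1).pvalue

# With no true effect, p-values are uniform: only ~1% should land in [0.04, 0.05).
p_null = simulate_p_values(effect=0.0)
frac_just_below = np.mean((p_null >= 0.04) & (p_null < 0.05))

# With a real moderate effect, very small p-values dominate (a "right-skewed
# p-curve"). A literature where p-values cluster just under 0.05 fits neither
# pattern, which is one sign of selective reporting.
p_real = simulate_p_values(effect=0.5)
frac_tiny = np.mean(p_real < 0.01)
```

    The point of the comparison: a pile-up of results just under 0.05 is rare both when nothing is going on and when a genuine effect exists, so seeing it repeatedly in a literature is suspicious.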
     
    Milo, Hutan, MEMarge and 11 others like this.
  4. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,252
    A simple test might be to just count the studies reporting desirable effects and those reporting a lack of them. When researchers nearly always confirm their hypothesis, or nearly always find that their favorite treatment works, then something isn't right, because nobody is that good at solving difficult problems.
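    That counting idea can be formalized as a simple binomial check, in the spirit of tests for "excess significance": given an assumed per-study power, how likely is it that so many studies came out positive? The numbers below are invented purely for illustration:

```python
from math import comb

def prob_at_least(k: int, n: int, power: float) -> float:
    """Probability of at least k significant results out of n independent
    studies, if each study has the given statistical power."""
    return sum(comb(n, i) * power**i * (1 - power)**(n - i)
               for i in range(k, n + 1))

# Hypothetical scenario: 18 of 20 trials report a significant benefit, yet
# each small trial plausibly has only ~50% power. If every study were honestly
# run and reported, an outcome this one-sided would be vanishingly unlikely.
p = prob_at_least(18, 20, 0.5)  # roughly 0.0002
```

    A tiny probability here doesn't say which studies are wrong; it says the published record as a whole is implausible unless null results are going missing somewhere.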
     
    MEMarge, rvallee, Campanula and 3 others like this.
  5. Mithriel

    Mithriel Senior Member (Voting Rights)

    Messages:
    2,816
    I remember reading that someone checked the BPS papers using the same algorithm (I think that's what it's called) that is used to check that pharma companies are not hiding studies that show no benefit.

    They said that there was no way you could have such glowing results in every paper.
     
    EzzieD, inox, alktipping and 7 others like this.
  6. Campanula

    Campanula Established Member (Voting Rights)

    Messages:
    54
    Location:
    Norway
    Thanks for all the replies thus far, lots of valuable insights and thoughts!

    I really think that if we could get someone with a sound knowledge of these methods to apply them to the evidence base, evaluate it systematically, and publish the findings in a peer-reviewed journal, it could be a very valuable reference for countering the claims of the BPS brigade. It would be yet another nail in the coffin of the behavioral model that has slowed scientific progress for far too long.

    Really hoping somebody looks into this, and looking forward to seeing more of your thoughts on this subject!
     
  7. Mike Dean

    Mike Dean Senior Member (Voting Rights)

    Messages:
    147
    Location:
    York, UK
    It's pretty basic, but sorting trials into those done by BPS adherents versus independent replications ought to be interesting. I can remember a time when there were no positive independent trials, and no negative BPS trials.
     
    inox, Sean, alktipping and 5 others like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,444
    Location:
    London, UK
    If you want to look for publication bias, you can do it with a clever technical statistical device like a funnel plot, provided there are enough studies; just looking at how many studies look brilliant is no good. If a treatment is brilliant, like penicillin or TNF inhibitors, then all the trials will look brilliant, and there will be no publication bias, because nobody needs to hide studies if they are all brilliant.

    The funnel plot uses a clever trick relating to variance in relation to sample size that can show that not all studies are being reported whether or not they are on average brilliant.

    But to be honest, I see this as a false quest, because none of these trials are any use at all. They are so open to bias that the results are uninterpretable and meaningless, so looking for publication bias isn't necessary: we know the evidence is no good. Moreover, careful analysis of PACE suggests that almost certainly the treatments do not work. The intriguing thing about expectation bias (the real problem) is that it does not affect your ability to draw conclusions about what was not expected - i.e. that the treatment does not work. We can draw conclusions about that because we can be pretty sure the trials are not biased towards showing that the treatment does not work!
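    For what the funnel-plot trick looks like in practice, here is a minimal sketch of Egger's regression test, a standard formalization of funnel-plot asymmetry. The effect sizes and standard errors below are invented for illustration, deliberately constructed so that smaller studies show inflated effects:

```python
import numpy as np
from scipy import stats

# Hypothetical per-study summary data: smaller studies (larger SE) report
# systematically larger effects - the classic asymmetric-funnel pattern.
ses = np.array([0.05, 0.08, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
effects = np.array([0.26, 0.27, 0.31, 0.36, 0.39, 0.46, 0.49, 0.56])

# Egger's test regresses the standardized effect (effect / SE) on precision
# (1 / SE). In a symmetric funnel the intercept is near zero; an intercept
# clearly away from zero flags small-study asymmetry, one possible sign of
# publication bias.
z = effects / ses
precision = 1.0 / ses
fit = stats.linregress(precision, z)
intercept = fit.intercept  # well above zero for this constructed data
```

    This is the "variance in relation to sample size" trick in code: small studies should scatter widely around the pooled effect in both directions, and when the low-precision side of the scatter is missing, the regression intercept drifts away from zero.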
     
    Hutan, Michelle, FMMM1 and 8 others like this.
  9. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,420
    Location:
    Canada
    There's also another thing that's harder to account for: reporting spin. The conclusions of the PACE trial are not supported by the evidence, but the claims made about it are not even close to being supported by the conclusions. They report that this is a full and total cure for some, while having to admit, when pressed, that the effect is non-specific, only applies to mild cases, and in no way constitutes a cure. And yet what the evidence actually shows is that none of this has any effect and it all comes down to chance: the treatments have no more effect than a wish or a spell.

    And then there are the lies from Richard Horton about the PACE researchers (who literally invented the whole thing and have huge personal conflicts of interest advising insurance companies on how to save money by miscategorizing us) being neutral researchers with no stake in the matter who took a step back to evaluate carefully. An incredibly blatant lie. And yet it's allowed, because it wasn't made in a scientific publication; it was just spin and marketing from someone who was supposed to be a neutral editor to the process, yet who personally shilled for it with the full weight of his position as editor-in-chief of a major medical journal.

    The problem is that nobody is bothered by this, and I have no idea where in the chain anyone is supposed to be held accountable. The same goes for the CODES and ACTIB trials, which failed yet were marketed as successful on the basis of secondary outcomes, something that is supposed to be forbidden. And if you read the papers carefully, they don't actually claim any miraculous recoveries, even though that is what they sell; it is the marketing pitch.

    Most of the spin happens between publication and reporting. There is plenty of spin in the research itself, but by far the worst of it is in how the reporting goes wildly beyond what even the most generous interpretation warrants.

    Who is tasked with policing that? Anyone? It doesn't appear so; it mostly runs on an honor system, which clearly doesn't work when deceit is both the intent and the substance.
     
  10. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    From the studies that exist, we can be pretty sure that if there is a treatment effect, it isn't going to be a large one. So I think there's a valid argument that it doesn't make sense for all those studies to report a moderate effect, because they aren't powered to detect one so consistently even if the effect were real.

    For GET/CBT studies it might be beside the core issue, but for other topics, say perfectionism in ME/CFS, it might be useful to point this out.
     
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,444
    Location:
    London, UK
    I think that is the relation to sample size that makes the funnel plot work: if power is low, it is because the small sample size should produce a lot of variance in the results, and as you say, that variance seems to be missing.
     
  12. Sean

    Sean Moderator Staff Member

    Messages:
    7,155
    Location:
    Australia
    Nor a sustained one.
     
    Michelle, Trish, Midnattsol and 2 others like this.
  13. Campanula

    Campanula Established Member (Voting Rights)

    Messages:
    54
    Location:
    Norway
    This is an important point. I just came across a Twitter thread that is a very good example of this. It shows how the Telegraph reported the results of the PACE trial in 2011. It's an extreme misinterpretation of the findings, and it shows just how bad the reporting has been:

    Adam på Twitter: «Graded Exercise for ME/CFS How it started >> How it's going https://t.co/pnallTlvT4» / Twitter
     
    Mike Dean, rvallee, Snowdrop and 3 others like this.
