
Bias due to a lack of blinding: a discussion

Discussion in 'Trial design including bias, placebo effect' started by ME/CFS Skeptic, Sep 22, 2019.

  1. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    A further thought on this. Apologies if it's repetition of anything in earlier posts.

The paper claims to compare the effectiveness of non-blinded trials relative to blinded ones; but of course it is really only comparing reported effectiveness - that is the only data the paper has access to. As we well know, reported effectiveness is highly dubious for unblinded trials using subjective outcomes.

    So their counter-argument to the criticism that "non-blinding with subjective outcomes inflates reported effectiveness compared to blinding" seems to be ... to compare the reported effectiveness of unblinded trials with subjective outcomes against that of blinded trials. A self-fulfilling prophecy, basically.
     
    Last edited: Jan 27, 2020
  2. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I don't follow why it would be a self-fulfilling prophecy for the results to be similar, rather than it being expected that non-blinded trials would be more likely to lead to exaggerated claims of efficacy.
     
    Invisible Woman and Barry like this.
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Badly worded on my part maybe. But the unblinded trials with subjective outcomes may well be relying on inflated results to make them appear similar to blinded trials; otherwise they might very well show much poorer results.
     
    Mithriel and Invisible Woman like this.
  4. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,819
    Location:
    Australia
    Because they've been cherry-picked...

    To be pedantic, they compared reported differences in Cochrane meta-analyses, from a limited number of Cochrane systematic reviews.
     
  5. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    What makes you say that at this point? I feel like I've still got no idea of the details of this.
     
  6. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,273
    Location:
    London, UK
    I agree that the whole thing is very complicated and counterintuitive. I got at least one thing wrong last time I tried to give some explanations.

    To me the most likely thing is that for treatments that work adequate blinding tends to be used. For treatments that don't, people use inadequate blinding because that is the only way they can get it to look as if the treatment works. You end up with similar effect sizes because the cheating process tends not to be taken beyond what looks plausible - a phenomenon shown clearly for studies of extra-sensory perception, where everyone cheats to give precisely the same slightly positive value, aware that a very positive value would not be credible.
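    The calibration mechanism described above can be illustrated with a toy simulation (all numbers here are hypothetical, chosen for illustration only, not taken from any study): blinded trials of a genuinely effective treatment, and unblinded trials of an ineffective treatment whose reporting bias is "calibrated" to a plausible effect size, end up with near-identical average reported effects.

    ```python
    import random

    random.seed(1)

    # Hypothetical effect size regarded as "credible" in the field
    PLAUSIBLE_EFFECT = 0.4

    def blinded_trial():
        # Effective treatment, honestly measured: true effect plus noise
        return random.gauss(PLAUSIBLE_EFFECT, 0.1)

    def unblinded_trial():
        # Ineffective treatment (true effect 0) plus reporting bias that is
        # calibrated so the result looks plausible rather than spectacular
        true_effect = 0.0
        bias = random.gauss(PLAUSIBLE_EFFECT, 0.1)
        return true_effect + bias

    blinded = [blinded_trial() for _ in range(1000)]
    unblinded = [unblinded_trial() for _ in range(1000)]

    mean_b = sum(blinded) / len(blinded)
    mean_u = sum(unblinded) / len(unblinded)

    # A meta-comparison of *reported* effects sees almost no difference,
    # even though the true effects differ by the full plausible amount
    print(round(mean_b, 2), round(mean_u, 2))
    ```

    On this sketch, a Moustgaard-style comparison of reported effect sizes would conclude blinding makes little difference, exactly as Barry's "self-fulfilling prophecy" point suggests.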
     
    Mithriel, MSEsperanza, Sean and 5 others like this.
  7. Robert 1973

    Robert 1973 Senior Member (Voting Rights)

    Messages:
    1,281
    Location:
    UK
    I’m completely puzzled by the Moustgaard study but I don’t have the capacity to read through it forensically at the moment. Instead I’ve been reading the BMJ editorials and have pasted some quotes and comments on them below.

    Drucker et al write:
    I don’t think dogma is an appropriate term to use in this context. There is no evidence that blinding increases the risk of bias and reliable evidence from many other studies that it reduces the risk, as Drucker et al note: “The findings conflict with established methodological principles and previous systematic reviews that found noticeable inflation of effect estimates based on within trial comparisons between non-blinded versus blinded patients and outcome assessors 3-5.”

    There can be few trials in recent history in which the investigators’ beliefs about intervention efficacy were as strong as those of the PACE investigators.


    Anand et al write:
    The opposite of what was done in PACE.


    It’s not spelt out but, however one interprets the Moustgaard study, there still seems to be a tacit acceptance from the BMJ editorial writers (including Godlee) that unblinded studies which are a) conducted by investigators with strong prior beliefs in the effectiveness of the therapies being tested, and b) reliant on subjective outcome measures, are likely to suffer from a high risk of bias.

    Thanks to those who are digging deeper to try to understand this Moustgaard study.

    I see there are now 3 rapid responses on the BMJ site (all critical): https://www.bmj.com/content/368/bmj.l6802/rapid-responses
     
  8. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,494
    Location:
    Belgium
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,273
    Location:
    London, UK
    Who is this half-wit Robert Howard?
     
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,273
    Location:
    London, UK
    He seems to be a psychogeriatrician from King's who has moved to UCL.
     
  11. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,494
    Location:
    Belgium
    Woolie, Hutan, JohnTheJack and 4 others like this.
  12. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    Howard:
    """but the measures have generally been well validated and cover areas that matter to us and our patients and that we’d like to improve."""

    What does this even mean? That new questionnaires are made sure to correlate with older questionnaires, but not too much? What is it all anchored to?

    @JohnTheJack I would be curious to know what he thinks he means by this, if you are up for asking :)
     
    Woolie, Hutan, Barry and 1 other person like this.
  13. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    IMO the people who make this kind of statement already think they are being rational even though they are working off of a belief system. They are impervious to considering any alternative possibility.
     
    Woolie, JohnTheJack, rvallee and 4 others like this.
  14. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    JohnTheJack, lycaena, Sarah94 and 4 others like this.
  15. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Yes, exactly - especially when the reason for "not liking" them is because they are flawed, and hence warrant those faults being exposed and discussed. Why would genuine scientists baulk at that?
     
    JohnTheJack, lycaena, Sarah94 and 3 others like this.
  16. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    Half? Where did you see that much?
     
    rvallee likes this.
  17. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    This kind of reminds me of the argument that Sharpe endorsed that was meant to raise fears about open data access: bad-faith actors will bog down the scientific process with malicious data requests (akin to a DDoS attack), and then spam spurious and obfuscatory re-analyses.

    It's not that this sort of thing couldn't ever become a legitimate concern, but at the moment the whole PACE voyage has shown that it is the universities, journals, and researchers who are the relevant sources of obfuscation and aspersion.

    It's such a goofball non-argument anybody sensible would be embarrassed to tweet forth. Good heavens, of course people bother to make arguments about things because they are motivated in some way or another.

    Imagine if the case put forward against PACE was "When people like the results of clinical trials they are prone to try and defend them"? I guess this guy would have to think that that would be a devastating blow? I guess nobody could really be allowed to argue about anything. I guess there would have to be exceptions for the 'right' people.

    Ultimately I suppose it shows the strength of the PACE position when defenders' go-to (non-)arguments are (1) some bologna about motivation and decorum of specific individuals (who, let us remember, they say are just a few), or (2) 'we don't feel like any of the problems pointed out are really that bad, and it would have been inconvenient to do it up to the standards of other fields'; rather than anything actually of substance.
     
    Last edited: Jan 29, 2020
    Woolie, Hutan, rvallee and 6 others like this.
  18. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,273
    Location:
    London, UK
    At least the second part of this sentence seems fair. I agree that validation means nothing much more than that people who speak English can understand the questions.

    Subjective endpoints are to be preferred if they reflect key features of distress or disability. But that is irrelevant if they are open to bias. As we have discussed before it is perfectly possible to keep the subjective aspect in a primary outcome measure, as long as it is combined with an objective corroborative measure (as we do routinely for rheumatoid arthritis).

    The real problem is that clinical psychologists appear to be totally blind to the psychology of trials - unlike pharmacologists, who respect the importance of psychology.
     
    JohnTheJack, rvallee, Barry and 5 others like this.
  19. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    I'm on board with this and definitely don't mean to take issue in principle with making subjective outcomes primary.

    It does seem to me that there is some mystique around saying that a questionnaire or other subjective measurement is 'validated' when in reality that could mean a cursory process that does nothing to account for the psychology of trials, as you said.

    The thing I wonder is if they think that by performing 'validation', they are addressing the potential bias arising from this psychology of trials, when the actual process of validation used in BPS-type studies is probably doing nothing useful for that issue. Or perhaps they are just not concerned with it at all.
     
    rvallee, Sean, Trish and 1 other person like this.
  20. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,273
    Location:
    London, UK
    As far as I can see 'validation' means nothing other than that you get the same sort of answers on several trials. It means a questionnaire is probably being adequately understood. Nothing more. It has nothing to do with validation of the measures in the sense most people would think of.

    Anyone who uses this sort of language is basically bullshitting.
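    The point that consistency is not validity can be made concrete with a toy example (entirely hypothetical data): questionnaire items that merely track a shared response style agree strongly with each other, and so look "validated", while correlating with nothing objective.

    ```python
    import random

    random.seed(0)

    # Hypothetical: a latent "response style" drives every questionnaire
    # item, so the items agree with each other, while an objective measure
    # is driven by something entirely unrelated.
    n = 500
    response_style = [random.gauss(0, 1) for _ in range(n)]
    item1 = [s + random.gauss(0, 0.3) for s in response_style]
    item2 = [s + random.gauss(0, 0.3) for s in response_style]
    objective = [random.gauss(0, 1) for _ in range(n)]

    def corr(x, y):
        # Pearson correlation coefficient
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    print(round(corr(item1, item2), 2))      # high: items look "consistent"
    print(round(corr(item1, objective), 2))  # near zero: no criterion validity
    ```

    Getting "the same sort of answers" on repeat administrations is exactly what the high item-item correlation shows; it says nothing about whether the questionnaire measures anything real.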
     
    Last edited: Jan 29, 2020
    Daisybell, Hutan, JohnTheJack and 7 others like this.
