Bias due to a lack of blinding: a discussion

I think this follows the approach we see in Wessely's textbook and in Jonathan Sterne's ROB2. What is egregious here is what is left out: any consideration of the psychology of trials, i.e. the impact on the therapist and the patient of knowing which is the 'test' treatment. I am afraid I see this as deliberate.
Yep.

They know it is a massive fault line in epistemology, with profound consequences for the clarity and robustness of causal modelling and testing, and its safe practical application.

These guys are not stupid, just wrong and unable to admit it.
 
As some wise guy said, if they didn't do that, then nobody would recover.

And it's true, nobody actually recovers because of their interventions. At best some recover naturally and now that information gets lost. That information likely involves our immune system, but we won't know until we remove the rotting elephant corpse occupying the entire room.

Take away the exemptions and not only does the entire field of psychosomatics disappear, but it would likely take with it a very large chunk of all psychological and mental health research. I'm really leaning towards the view that almost none of it is valid, that almost none of it will stand the test of time. The sausage factory involves very little actual meat outside of the factory workers themselves.

The most important lesson from centuries of science is that it is critical to be objective in a systematic process. EBM is the embodiment of not learning that lesson and committing the same mistakes all over again. It allows people who want some things to be true to fool themselves into believing they are doing something useful. It doesn't have to, but it does. Failure of execution, not of conception.
 
Posting this here:

Day & Altman. BMJ Statistical Notes. Blinding in clinical trials and other studies, 2000. https://www.bmj.com/content/321/7259/504
In research there is a particular risk of expectation influencing findings, most obviously when there is some subjectivity in assessment, leading to biased results. [...] Blinding patients to the treatment they have received in a controlled trial is particularly important when the response criteria are subjective, such as alleviation of pain, but less important for objective criteria, such as death.
 
but less important for objective criteria, such as death.

"less important" :rolleyes:

I think for this criterion we can safely call blinding completely irrelevant.

Not necessarily.

If the trial were of a treatment for Covid-19 and open label, there might be significant bias. Patients receiving the treatment someone wants to show is effective might be looked after better, put on ventilators earlier, and have increased survival.

Bias can creep in a hundred different ways and even when outcomes are fully objective they may be subject to bias.
 
Fair point.

Perhaps it might be better to say that death is the most unambiguous of possible outcomes. Difficult to put a positive spin on it.
 
I'd not really thought of it like that, but yes I can see the validity of what you say. Scary but valid. Presumably perceptions of survival chances could also influence how long to persevere with life support, consciously or unconsciously; must be incredibly difficult anyway.
 
New blog post: Problems with the MetaBLIND study



The MetaBLIND study is likely the largest study on the effect of blinding in randomized trials to date. Contrary to expectations, the study did not find a relationship between exaggerated treatment effects and lack of blinding of patients, healthcare providers, or observers. I’ve contacted the authors to obtain the dataset of one of the most important analyses of the study, namely the impact of blinding trial participants on patient-reported outcomes. After screening the blinded and unblinded trials that were compared to each other, it became clear that the MetaBLIND study suffers from serious flaws. Some of the analyses had little relevance to medical trials, others included trials that were wrongly labeled as blinded and in most cases trials were simply too different for a meaningful comparison.

https://mecfsskeptic.com/problems-with-the-metablind-study/
 
From the twitter thread I learned that there is a Cochrane bias group and I'm sitting here seriously wondering what they could be doing all day and frankly can't imagine what that could be. Playing cards, I guess?

Just to repeat: there is a group at Cochrane dedicated to bias that has not noticed at all that there is excessive bias at Cochrane, or is entirely indifferent to it. Can't make this stuff up; 'department of lack of self-awareness' is more like it.
 
That's a great find, and a great analysis Michiel.

The author of the review is Helene Moustgaard; her response to Michiel's analysis (linked at the bottom of the analysis) is quite odd. In her letter to Michiel, she appears to be saying 'yes, yes, of course we know about all those problems, here are the links to my papers about them', while appearing to not understand that the problems mean that the MetaBLIND study tells us nothing about the utility of blinding. Which is a shame, given that the point of the study was to tell people something about the utility of blinding.
 
Moved from "A general thread on the PACE trial"

Can't assess at the moment whether this warrants its own thread, and anyway I don't feel up to opening one, so I'll park some thoughts here:

How to deal with the unwillingness of so many people that should know better to acknowledge the issue with subjective outcomes as the sole primary endpoints in unblinded trials?

How often is that unwillingness due to a misunderstanding, and when is it due to an agenda?

Related to that: Which of the points that the PACE / SMILE investigators and their defenders repeatedly made in response to criticism still warrant engaging with? (Because they may appear somewhat plausible to others, even if they are part of an agenda.)

- Are there points by the proponents of the cognitive behavioral model and believers in 'exercise-always-helps' that haven't explicitly been addressed yet by the usual suspects (most of them S4ME forum members) and other critics?

- Are there some points we missed that actually or 'partially' make sense?
 
What I most struggle to understand is when people state that there are several serious risks of bias, shortcomings in the trial methodology, limitations in applicability etc., but none of that is acknowledged in their conclusion. That seems to me to be the case in the assessment of PACE by Students for Best Evidence linked above.

On the one hand they say:

-- "[...] due to the nature of the interventions, only the statistician doing the analysis of the results could be blinded, therefore there is room for bias from the clinicians and patients. [...]"

-- "We [...] cannot discount the patients’ bias because most of the outcome measures were subjective and self-reported by the patients."

Plus, they mention restrictions of applicability.

But still the conclusion is:

"The results show that CBT and GET, when added to SMC, are effective treatments for CFS."

How is that possible?
 
Another point that I don't have the expertise to understand is:

Is it really possible to compensate for / correct serious shortcomings in trial methodology with some additional statistical work?

E.g., again from the example above:

The authors performed various statistical analyses to assess if there was bias in the results from each clinician, however these analyses showed no bias.

I think there are similar examples, also from Crawley and the defenders of the SMILE trial.
 
Not serious shortcomings, no.

Checking whether there was differential bias between the clinicians is possible, but this is not the same as being able to control for bias if all the clinicians caused a similar sort of bias.
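To illustrate that distinction with a toy simulation (entirely hypothetical numbers, nothing from the PACE data): suppose the treatment truly does nothing, but every clinician inflates the treatment arm's self-reported scores by the same amount. A check for differences *between* clinicians finds nothing suspicious, yet the pooled effect estimate is pure bias.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0      # the treatment actually does nothing
SHARED_BIAS = 5.0      # every clinician nudges treated patients' scores up equally
CLINICIANS = 4
PATIENTS_PER_ARM = 50  # per clinician

per_clinician_effects = []
for _ in range(CLINICIANS):
    control = [random.gauss(50, 10) for _ in range(PATIENTS_PER_ARM)]
    treated = [random.gauss(50 + TRUE_EFFECT + SHARED_BIAS, 10)
               for _ in range(PATIENTS_PER_ARM)]
    per_clinician_effects.append(statistics.mean(treated) - statistics.mean(control))

overall = statistics.mean(per_clinician_effects)
spread = statistics.stdev(per_clinician_effects)

# The clinicians agree with each other (small spread), so a test for
# "differential bias between clinicians" raises no alarm...
print(f"between-clinician spread of effects: {spread:.1f}")
# ...yet the pooled apparent effect is roughly the shared bias,
# not any real treatment effect.
print(f"pooled apparent effect: {overall:.1f}")
```

The between-clinician analysis can only detect bias that varies across clinicians; a bias common to all of them is statistically invisible in that comparison.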
 
Anyone familiar with this 1998 document: ICH E9, Statistical Principles for Clinical Trials?

I think ICH stands for the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use.
And it looks like their 1998 guideline was adopted by regulatory agencies in Europe and the US. I found it on the website of the European Medicines Agency (EMA, here) and on the website of the Food and Drug Administration (FDA, here).

The document states:

"The most important design techniques for avoiding bias in clinical trials are blinding and randomization... [...] In single-blind or open-label trials every effort should be made to minimize the various known sources of bias and primary variables should be as objective as possible."

 
I've found multiple references that say that when blinding is not possible, it is important to look at objective instead of subjective outcomes, because subjective outcomes are susceptible to bias.

Here's a quote that suggests that a lack of improvement on objective outcomes in unblinded trials may call into question the results of subjective outcomes in the same trials. It comes from a study by Moustgaard & Hróbjartsson, the two main researchers of the MetaBLIND study. In a paper where they discuss definitions of objective and subjective outcomes, they write:

The fact that bias seems to differ according to type of outcome implies that the concordance in effect between correlated outcomes of different type can become important. In a clinical trial where the blinding of patients and care providers is not possible but no improvement is found for an ‘‘objective outcome’’ (eg, peak flow), it seems reasonable to be less confident in an improvement of a ‘‘subjective outcome’’ (eg, quality of life) as this may not be caused by the intervention as such.
Full text at: https://linkinghub.elsevier.com/retrieve/pii/S0895435614003369
 