RoB 2: a revised tool for assessing risk of bias in randomised trials (2019) Sterne et al.

Are we seeing an attempt to protect the vast amounts of research investment in unblinded trials with subjective outcomes, not just in relation to ME, but more widely in relation to CBT, and in psychology in general?
I think we are.

Literally thousands of incomes, careers, empires, and egos – and, of course, some major government policies – depend critically on this 'science' not going down.

It is why the CBT/BPS crowd have managed to survive and prosper for so long. It sure as shit ain't due to the quality of their science and ethics.
 
The affected egos surely could help each other with a bit of CBT?

On a more serious note, if they just wanted to be good and helpful clinicians, therapists or researchers, they wouldn't have to be afraid of having nothing to apply their professional education to.

There is a lot of urgent work for psychiatrists and psychologists to do and to investigate.

In Germany at least, there are long waiting lists of patients who need help from these professions (depression, addiction, etc.).

And so many basics of mental illness and neuropsychiatric ailments are not well understood, not well treatable, or only treatable with drugs that have severe side effects; so there is a real need for good research in psychology, psychiatry and neuropsychiatry.

There is more than enough work to do in these fields without inventing problems that people don't have, or designing fanciful illness models that have nothing to do with ill people's realities.
 
From their appendix:
"Changes to analysis plans that were made before unblinded outcome data were available, orthat were clearly unrelated to the results (e.g. due to a broken machine making data collection impossible) do not raise concerns about bias in selection of the reported result."
Anyone could help me understand this example?

Are there machines for data collection that cannot be repaired or new ones would be too expensive, so that a broken machine would require utterly different methods of data collection or the collection of other data?
 
I suspect this example has been dreamt up as a case where there would not be bias because the breakdown of a machine is not something you organise deliberately in order to stop collecting data.

The problem is that it illustrates very clearly the naivety of anyone thinking this is bias free. If you didn't like the way the data looked, you could say you could not afford to replace the machine. If the data looked the way you wanted, you could apply for a top-up grant and carry on.

What is so devastating here is the total lack of understanding of Feynman's dictum: the easiest person to fool is yourself.
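
To put a number on Feynman's point, here's a minimal simulation sketch (my own toy example, nothing from the RoB 2 paper): assume no true treatment effect, and suppose the 'broken machine' conveniently stays broken whenever the interim data look unpromising, while promising data earn a top-up grant. The reported results end up biased even though no single decision feels dishonest.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_interim, n_full = 10_000, 30, 60

reported = []
for _ in range(n_trials):
    # No true effect: both arms are drawn from the same distribution.
    treat = rng.normal(0.0, 1.0, n_full)
    control = rng.normal(0.0, 1.0, n_full)

    interim_diff = treat[:n_interim].mean() - control[:n_interim].mean()
    if interim_diff > 0:
        # Data look promising: apply for the top-up grant and carry on.
        reported.append(treat.mean() - control.mean())
    # Otherwise the machine stays broken and this result never appears.

print(f"mean reported effect under the null: {np.mean(reported):.3f}")
# Prints roughly +0.10 SD in favour of treatment, produced entirely
# by the data-dependent decision about whether to continue.
```

The breakdown itself is innocent; the data-dependent decision about what to do next is not.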
 
Does "broken machines" include actimeters? (asking for a friend)

In a sense it would be difficult, since it would need to be a whole set of broken machines. If it were a single one (or even a few), then I assume that would be considered missing data.

I guess what could go wrong is the data collection itself, if, for example, the server collecting the data had issues, meaning data from all devices was lost.

I can see the argument for broken machines. But at the same time, a trial should have a quality and resilience plan and careful design, so that broken equipment doesn't lead to failures.
 
Changes to analysis plans that were made before unblinded outcome data were available, or that were clearly unrelated to the results (e.g. due to a broken machine making data collection impossible) do not raise concerns about bias in selection of the reported result.

I think the broken machine is a red herring, put in deliberately to divert attention from the much more dubious 'changes to analysis plans made before unblinded outcome data were available...'. Looks to me like they were trying to slip that through unnoticed, and it's a serious problem. PACE and SMILE both did it.
 
changes to analysis plan made before unblinded outcome data were available

It really is absurd that they should be saying this is allowed or that is allowed, as if they were in a position to lay down rules, when what matters is simply whether or not bias might have been slipped in.

Changes to an analysis plan are only changes if the study has started, and if a study has started in an unblinded trial then outcome data are available. In fact some aspects of outcome data are available even in blinded trials – for instance, that nobody so far has shown 50% improvement.

In PACE they would not have known in advance where the functional grade results would cluster. But once they knew where they were clustering they could change the analysis to maximise the chance of systematic expectation bias generating a statistically significant difference.

And so on and on as we have said before.
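
As a toy illustration of that re-clustering move (again my own sketch with made-up numbers, not a reconstruction of PACE): simulate a trial with no true effect, then pick whichever 'improvement' threshold best separates the groups after looking at the data. The false-positive rate climbs well above the nominal 5% that a pre-specified threshold would give.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_sims, n_per_arm = 2_000, 50
candidate_cuts = [-0.5, 0.0, 0.5, 1.0]  # hypothetical 'improvement' thresholds

def improved_vs_not_pvalue(treat, control, cut):
    """Dichotomise scores at `cut` and Fisher-test 'improved' vs 'not'."""
    table = [[int((treat > cut).sum()), int((treat <= cut).sum())],
             [int((control > cut).sum()), int((control <= cut).sum())]]
    return fisher_exact(table)[1]

prespec_hits = posthoc_hits = 0
for _ in range(n_sims):
    # Null trial: identical outcome distributions in both arms.
    treat = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    pvals = [improved_vs_not_pvalue(treat, control, c) for c in candidate_cuts]
    prespec_hits += pvals[candidate_cuts.index(0.0)] < 0.05  # fixed in advance
    posthoc_hits += min(pvals) < 0.05                        # chosen after looking

print(f"false positives, pre-specified cut-off:    {prespec_hits / n_sims:.3f}")
print(f"false positives, cut-off chosen post hoc:  {posthoc_hits / n_sims:.3f}")
```

And four candidate thresholds is a very conservative stand-in for the freedom a mid-trial change to the analysis plan actually allows.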
 
It's pretty clear by now that psychiatry is underperforming relative to the rest of medicine: it has not delivered reliable outcomes, makes very few predictions, and has barely progressed at all beyond the odd fortunate guess. Which is frustrating, obviously, as it rarely offers those moments every physician lives for, the undeniable recovery, where the patient leaps with joy and sings "I'M CUUUUUURED". That rarely happens, if at all, in psychiatry, which doesn't even have reliable objective metrics, or basic definitions of what "recovery" means.

Just to emphasize how mediocre progress in psychiatry is: not long ago, Wessely tweeted about what he saw as the most significant advances in ME research and listed two things (that I remember, anyway): that mood is not a predictor of disease severity (an answer to a stupid question no one in their right mind thought was relevant) and the creation of CFS, a very strong contender for the top 10 worst blunders in the whole history of medicine.

This is three decades of research, millions spent promoting an entirely new paradigm, and this is the sum of all the progress made: one useless answer to a question no one should ever have asked, and a major regression that negatively impacted millions and stalled progress for a whole generation. That's harsh. That's very demoralizing without significant doses of lying to yourself.

So it looks like, within the field of psychiatry, things are moving to what they see as "easier" conditions, away from the frustrating stagnation that pervades the whole field. Unfortunately, it's moving onto those "easier" conditions with the exact same ideology that has systematically failed over and over again – the very ideology that already failed massively in the hands of Wessely and his ilk.

Which makes the project of steering chronically ill people away from traditional medicine and onto the least effective, least reliable and least objective specialty of medicine criminally insane. The field of mental illness has not delivered anything comparable to the rest of medicine, and the idea is to steer people away from what works, though slowly and expensively, and onto what doesn't. Complete madness.
 
David Tuller: Trial By Error: Lead Author of Cochrane's New Bias Guideline is LP Study Co-Author

Why did Professor Sterne sign off on the Lightning Process trial paper, in which critical information about retrospective registration and outcome-swapping was withheld from the public version of events? Is Professor Sterne aware that prospective registration and pre-designated outcomes are considered essential in reducing the “risk of bias”? Regarding the school absence paper, does Professor Sterne really believe that a study featuring a hypothesis, generalizable conclusions, and in-person interviews conducted by the lead investigator can reasonably be defined as “service evaluation” and appropriately exempted from ethical review?

These are legitimate questions not only for Professor Sterne but for Cochrane, BMJ, and Professor Sterne’s many co-authors on this “risk of bias” revision. Given that both the Lightning Process and school absence studies are fraught with the kinds of methodological and ethical irregularities that should be obvious to first-year epidemiology students, it is unclear how Professor Sterne can currently serve as a credible authority on anything.
 
The revised tool seems to make it quite easy for unblinded trials to be rated as low risk of bias. The paper reads:
The revised tool recognises that open trials can be at low risk of bias, if there were no deviations from intended intervention that arose because of the trial context.

More information is given in the supplementary material. Most of the risk of bias due to a lack of blinding is assessed at Domain 2: Risk of bias due to deviations from the intended interventions. Suppose you have a trial where both patients and the personnel delivering the treatments are aware of the intervention group. In other words, they know who is getting the intervention and who isn't. According to RoB 2 this isn't much of a problem, as long as "no deviations from intended intervention arose because of the trial context." The term 'trial context' refers to:
effects of recruitment and engagement activities on trial participants and when trial personnel (carers or people delivering the interventions) undermine the implementation of the trial protocol in ways that would not happen outside the trial. For example, the process of securing informed consent may lead participants subsequently assigned to the comparator group to feel unlucky and therefore seek the experimental intervention, or other interventions that improve their prognosis.
In other words, it doesn't refer to expectation bias directly. It doesn't refer to patients answering questionnaires differently because either they or their therapists know they are in the intervention group. And as long as "no deviations from intended intervention arose because of the trial context", the trial can be rated low risk of bias for this domain, despite being unblinded.

The figure below explains the algorithm: questions 2.1 and 2.2 do not pose a problem if the answer to 2.3 is 'probably no'.
[Figure: RoB 2 algorithm for Domain 2, mapping answers to signalling questions 2.1–2.3 to a risk-of-bias judgement]


There is one other aspect of blinding in Domain 4: risk of bias in measurement of the outcome. Here they determine whether outcome assessors were aware of the intervention received (for participant-reported outcomes, the outcome assessor is the study participant). Once again the judgement is mild: there is only a high risk of bias if it is likely that the assessment was influenced by knowledge of the intervention. The example they give is a physiotherapist who delivered the intervention also making the assessment of recovery.
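
Putting the two domains together, here is a rough sketch of the decision logic as I read it from the figure and the supplementary material (my own paraphrase in Python, not an official implementation, and the judgement labels are simplified):

```python
def domain2(participants_aware: bool, personnel_aware: bool,
            context_driven_deviations: bool) -> str:
    """Domain 2, effect of assignment: questions 2.1/2.2 ask about
    awareness, but per 2.3 awareness only matters if it led to deviations
    from intended intervention that arose because of the trial context."""
    if not (participants_aware or personnel_aware):
        return "low"
    return "low" if not context_driven_deviations else "some concerns or high"

def domain4(assessor_aware: bool, could_be_influenced: bool,
            likely_influenced: bool) -> str:
    """Domain 4, measurement of the outcome. For participant-reported
    outcomes the assessor is the participant themselves."""
    if not assessor_aware or not could_be_influenced:
        return "low"
    return "high" if likely_influenced else "some concerns"

# A fully unblinded trial with questionnaire outcomes, where the reviewer
# answers 'probably no' to 2.3 and rates influence possible but not likely:
print(domain2(participants_aware=True, personnel_aware=True,
              context_driven_deviations=False))   # -> low
print(domain4(assessor_aware=True, could_be_influenced=True,
              likely_influenced=False))           # -> some concerns
```

On this reading, an unblinded trial with subjective outcomes never has to score worse than 'some concerns' unless the reviewer is willing to judge that influence was likely – which is exactly the worry raised above.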
 
I understand the words of 2.3, but not really what they mean... What does "deviations that arose because of the trial context" mean? I'm trying to understand how answering N/PN to that yields low risk, especially as subjective outcomes are ignored in reaching that low-risk conclusion, despite the trial being fully unblinded.
 
This is what they say about the changes to Domain 2:

Bias due to deviations from intended interventions
1. The original tool only dealt with whether participants, carers, and people delivering the interventions were aware of participants’ assigned intervention during the trial. The revised tool recognises that open trials can be at low risk of bias, if there were no deviations from intended intervention that arose because of the trial context.
2. Whether the analysis was appropriate to estimate the effect of assignment to intervention was previously assessed in relation to missing outcome data.
3. The original tool did not address bias in estimating the effect of adhering to intervention. Imbalances in co-interventions, failures in implementing the intervention, and non-adherences can all bias such estimates. An appropriate analysis has the potential to deal with such biases, in some circumstances.

It sounds like they completely changed it from something that provided useful information about the risk of bias to something that limits assessment to only a small sub-section of the reasons for concern about trial design increasing the risk of bias.

This is the (very brief) summary provided in the 2011 BMJ paper on their risk of bias tool:

Domain: Allocation concealment
Description: Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen before or during enrolment
Bias addressed: Selection bias (biased allocation to interventions) due to inadequate concealment of allocations before assignment

https://www.bmj.com/highwire/markup...postprocessors=highwire_figures,highwire_math

https://www.bmj.com/content/343/bmj...ndmd&int_medium=cpc&int_campaign=usage-042019

Has anyone seen the full text of the earlier risk of bias tool (for which Sterne was also a co-author)?
 