"Has anyone seen the full text of the earlier risk of bias tool (for which Sterne was also a co-author)?"

Yes. A clear understanding of the changes would be good.
"we define bias as a systematic deviation from the effect of intervention that would be observed in a large randomised trial without any flaws" [my bold]
"Examples may include patient-reported symptoms in trials of homeopathy, ..."

If I was them, I would also be trying desperately to convince the world that what I was doing was nothing like homeopathy.
"To date no rapid responses have been published - are we the only ones left who care about bias in science?"

The penny may have dropped re how many policies, careers and egos would be affected if bias is addressed.
"The implication is that bias will only be a problem when patients have silly ideas about quack treatments. That fails to take into account the fact that they might have been encouraged to think a conventional treatment worked. It also fails to take into account the desire to please the investigator. The physio example is fair, but why not when the assessment is done by the patient?"

I think another key source of bias is hope. A person desperately wishes for an intervention to work, and to self-report otherwise serves to destroy that hope. It is not a conscious thing, nor a dishonest thing, just normal human psychology. But it is very real and can be very significant.
"From the outset, it seems that RoB 2 is quite friendly to flawed trials, not solely on the issue of blinding."

Basically moving the goalpost to wherever they want to kick the ball.
For example, if the trial analysis was not in accordance with a pre-specified plan (question 5.1), that only raises 'some concerns'. So if the authors publish a protocol and then don't stick to it (for example, they change primary and secondary outcomes) because the pre-specified plan doesn't give good results, that is no reason to say the trial has a high risk of bias. And as Esther pointed out, changes that were made before unblinded outcome data were available are seen as no problem at all; such a trial can be rated as a low risk of bias trial. But what about large unblinded trials where researchers get an indication of the direction the main outcomes are going before looking at the data? Apparently RoB 2 doesn't see this as a problem, as long as researchers don't select from multiple possible analyses or outcome measures.
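To see why this is not a trivial concession, here is a quick simulation (my own sketch in Python, assuming numpy and scipy are available; none of this comes from the RoB 2 materials). Even when the treatment does nothing at all, letting researchers promote whichever outcome 'worked' inflates the false-positive rate several-fold:

```python
# Toy simulation (mine, not from RoB 2): 2000 trials with zero true effect,
# each measuring 5 outcomes. Compare sticking to the pre-specified primary
# outcome with switching to whichever outcome gave the smallest p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm, n_outcomes = 2000, 50, 5

fp_prespecified = fp_switched = 0
for _ in range(n_trials):
    treat = rng.normal(size=(n_outcomes, n_per_arm))    # no treatment effect
    control = rng.normal(size=(n_outcomes, n_per_arm))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treat, control)]
    fp_prespecified += pvals[0] < 0.05   # analyse only the pre-specified primary
    fp_switched += min(pvals) < 0.05     # switch to the 'best' outcome post hoc

print(f"false-positive rate, pre-specified primary:   {fp_prespecified / n_trials:.2f}")
print(f"false-positive rate, after outcome switching: {fp_switched / n_trials:.2f}")
```

With five independent outcomes, the chance of at least one p < 0.05 under the null is 1 - 0.95^5 ≈ 0.23, against the nominal 0.05.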
Regarding missing data (question 3.1), that isn't considered a problem unless the assessor thinks the 'missingness' depends on the 'true value', for example if it is related to the patients' health status. They write: "If all missing outcome data occurred for documented reasons that are unrelated to the outcome then the risk of bias due to missing outcome data will be low (for example, failure of a measuring device or interruptions to routine data collection)." In such a case, a trial can be rated as low risk of bias despite having a lot of missing data. That seems like a rather mild judgement as well, and it leaves a lot of room for interpretation by the assessor (the person doing the review). If he/she thinks the missingness isn't related to the true value, then he/she can rate trials as having a low risk of bias, and the problem of having a lot of missing data is out of the way. A reader of the review who only looks at the colourful overview of risk of bias would see a green light and get the impression that there was no issue with missing data.
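Whether the missingness is related to the true value is exactly the thing an assessor usually cannot see in the data. A small illustration (my own, with made-up numbers): roughly the same share of observations goes missing in both scenarios below, but only the outcome-dependent dropout biases the estimate.

```python
# Toy illustration (mine, hypothetical numbers): ~30% missing data in both
# scenarios, but only missingness that depends on the true value biases
# the observed mean.
import numpy as np

rng = np.random.default_rng(1)
outcome = rng.normal(size=100_000)                    # true mean is 0

# MCAR: a measuring device fails at random, unrelated to the outcome
mcar = outcome[rng.random(outcome.size) > 0.30]

# MNAR: the sickest patients (lowest scores) are the most likely to drop out
p_drop = 0.6 / (1 + np.exp(2 * outcome))              # ~30% dropout overall
mnar = outcome[rng.random(outcome.size) > p_drop]

print("true mean:                       0.000")
print(f"mean with ~30% random missing:  {mcar.mean():+.3f}")  # stays near 0
print(f"mean with outcome-driven drop:  {mnar.mean():+.3f}")  # biased upward
```

Both scenarios lose roughly the same amount of data; only the outcome-driven one shifts the estimate, and nothing in the observed data alone tells you which world you are in.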
I also find it weird that if the allocation sequence wasn't random, this only raises 'some concerns'. So randomization isn't that important after all?
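To put a number on it, here is a quick sketch (my own, with made-up numbers) of what a non-random sequence can do. The treatment below is completely inert, yet it looks markedly harmful simply because clinicians steered sicker patients toward it:

```python
# Toy sketch (mine): an inert treatment appears harmful when the
# allocation sequence tracks prognosis instead of being random.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
severity = rng.normal(size=n)                    # baseline prognosis
outcome = severity + rng.normal(size=n)          # higher = worse; treatment adds nothing

random_arm = rng.integers(0, 2, size=n) == 1                 # proper randomisation
steered_arm = severity + rng.normal(scale=0.5, size=n) > 0   # sicker -> treated

for label, arm in [("randomised", random_arm), ("non-random", steered_arm)]:
    diff = outcome[arm].mean() - outcome[~arm].mean()
    print(f"{label:>10} allocation: apparent 'treatment effect' = {diff:+.3f}")
```

The point is not that anyone does this deliberately; any allocation sequence that tracks prognosis, alternation included, can produce the same artefact.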
I think the 'user feedback' the RoB 2 team addressed was mostly from researchers complaining that their wonderful trials were rated as high risk of bias. Of course, I have never run a clinical trial, but it doesn't seem too unrealistic to require that researchers properly randomize and conceal the randomization (and report it adequately in their paper), that they publish a protocol and stick with it, that they use intention-to-treat analysis and blinded assessors, etc. I don't think those are unreasonable demands, or that it would cost too much to do this. I think it's mostly a question of professionalism and of a tradition of accepting these standards as necessary.
"Bias in selection of the reported result: 1. Unlike the original tool, this domain does not deal with bias due to selective non-reporting of results (either because of non-publication of whole studies or selective reporting of outcomes) for outcome domains that were measured and analysed. Such bias puts the result of a synthesis at risk because results are omitted based on their direction, magnitude, or statistical significance. It should therefore be dealt with at the review level, as part of an integrated assessment of the risk of reporting bias."

So are they saying that not reporting certain outcomes is not a problem of that trial or indicating bias of those researchers, it's only a problem for the review because certain outcomes were not available? I would disagree with that. If researchers leave out certain outcomes, that would make me very suspicious of their trial and of the outcomes that they did report...
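Wherever one chooses to 'deal with' it, the arithmetic is unforgiving. A toy simulation (my own sketch, assuming numpy and scipy): 500 trials of a completely ineffective treatment, of which only the 'significant' positive results get reported; a synthesis of the reported studies then shows a solid effect that does not exist.

```python
# Toy sketch (mine): selective reporting of 'significant' results from
# null trials manufactures a positive pooled effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
all_effects, reported = [], []
for _ in range(500):                        # 500 trials, zero true effect
    t = rng.normal(size=40)                 # treatment arm
    c = rng.normal(size=40)                 # control arm
    effect = t.mean() - c.mean()
    all_effects.append(effect)
    if effect > 0 and stats.ttest_ind(t, c).pvalue < 0.05:
        reported.append(effect)             # only 'positive' findings surface

print(f"mean effect, all {len(all_effects)} trials:  {np.mean(all_effects):+.3f}")
print(f"mean effect, {len(reported)} reported trials: {np.mean(reported):+.3f}")
```

Whether that filter is applied by the trialists or only caught at the review level, the reported literature ends up describing an effect that was never there.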
"Basically moving the goalpost to wherever they want to kick the ball."

Or to wherever the ball randomly ended up after they kicked it with blindfolds on.
"So are they saying that not reporting certain outcomes is not a problem of that trial or indicating bias of those researchers, it's only a problem for the review because certain outcomes were not available? ..."

Agree entirely. It is not simply about the outcomes being missing; it is about why they are missing. A highly plausible reason could be the authors seeking to bias the reporting of a trial's results. Removing this check from the tool is smoke and mirrors of the flimsiest kind.
"Another problem I see with this detailed specification of what is and what is not considered high risk of bias is that immediately researchers will be checking the list before they send off their manuscripts, to make sure they are worded in exactly the right way to avoid the obstacles. We are likely to end up like food labelling - Pork sausage, contains at least 10% pork (or at least that is what the abattoir man said)."

Yes, it becomes an exercise in linguistic agility rather than sound trial design.
It makes one wonder whether the people involved have ever done an experiment wanting to know the right answer (rather than the answer they wanted).