Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

It feels like trying to fix the way reviews are done is fixing the wrong problem.

A million times yes!!! That's what Cochrane cannot seem to wrap their collective review-obsessed heads-in-the-sand around. When they bang on about "advocating for evidence", they don't mean advocating for better primary research; they mean advocating for their own way of doing reviews: GRADE, Risk of Bias, exhaustive search strategies, meta-analysis techniques, and all the other things their methodological boffins invented to produce systematic reviews and systematic reviews only. Even CONSORT (guidance for reporting randomized trials) was invented to meet the needs of systematic reviewers, not to improve how trials are done so that the results actually mean something and are an ethical use of time and money.
 
The way they pick reviewers is also problematic. NICE at least strives for a balanced and representative guideline development committee, with a separate technical team, and then has things like the consultation to put its findings out into the world and get feedback on them.

Cochrane seems almost designed to leave decision-making to those with COIs and is neither as transparent nor as rigorous. And if the editors can't change those decisions, it's very easy for people to say they won't budge without very good reason.
Reviewers generally pick Cochrane, not the other way round. So ones with a COI can take advantage of Cochrane's volunteer model to do a review on pretty much whatever they like. All they have to do is convince the coordinating editor of the review group that it's a good idea. In the case of the Exercise and CBT for CFS reviews, they were managed by the Common Mental Disorders Group, which was led by Simon Wessely and then his protégée Rachel Churchill. At least I think this dysfunctional review group structure is being dismantled with the "New Order" coming in the aftermath of the NIHR pulling the funding. I hope they can find a way to mirror what NICE does - I do not see why that would not be possible - but who would police the review team selection to make sure there was no COI sneaking in there? NICE? I will write and ask them...
 
It's interesting looking at this from the perspective of different disciplines. As someone who works in security, we do reviews of systems to try to spot potential failings and security issues, but I think more importantly we try to build methodology and tools into the standard processes in order to improve quality and reduce security issues. For example, there are code analysis tools that spot issues and the use of weak libraries, and tools that look at interactions between components for known vulnerabilities. I wonder if there could be an equivalent for trial design, in terms of methodology and tools to help go through the design and identify potential issues - both with the measures taken and the controls, and with the statistical methods used (i.e. what techniques are appropriate given lack of independence, or the underlying distributions and biases in errors).

I was also thinking that aspects of systems are built on well-known primitives and protocols to ensure secure communications (things like crypto and associated protocols such as TLS). These are very well studied, with considerable effort put into breaking them - to the extent that anyone using their own cryptography would be laughed at. Is there an equivalent in the medical trial world? I get the impression that perhaps randomization is studied in this way, but measurement systems seem to be a weak point where someone doing a trial can make up what they record (or use clearly poor questionnaires such as the CFQ).
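To make the static-analysis analogy concrete, here is a very rough sketch (in Python) of what a "linter" for a trial design could look like. To be clear, this is not an existing tool - the TrialDesign fields and the rules below are all made up for illustration, and a real version would need methodologists to agree the rule set.

```python
# Hypothetical sketch of a "trial design linter", by analogy with static
# code analysis. The fields and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class TrialDesign:
    blinded: bool
    outcomes: list            # (name, "subjective" | "objective") pairs
    control: str              # e.g. "usual care", "active comparator"
    n_per_arm: int
    outcomes_prespecified: bool = True

def lint(design: TrialDesign) -> list:
    """Return warnings about well-known design weaknesses."""
    warnings = []
    subjective = [name for name, kind in design.outcomes if kind == "subjective"]
    objective = [name for name, kind in design.outcomes if kind == "objective"]
    if not design.blinded and subjective and not objective:
        warnings.append("Unblinded trial relying only on subjective outcomes: "
                        "cannot separate a treatment effect from reporting bias.")
    if design.control in ("usual care", "waiting list"):
        warnings.append("Weak comparator: expectation and attention effects are not controlled.")
    if not design.outcomes_prespecified:
        warnings.append("Outcomes not pre-specified: risk of outcome switching.")
    if design.n_per_arm < 50:
        warnings.append("Small arms: estimates likely to be imprecise.")
    return warnings

if __name__ == "__main__":
    design = TrialDesign(
        blinded=False,
        outcomes=[("fatigue questionnaire", "subjective")],
        control="usual care",
        n_per_arm=160,
    )
    for w in lint(design):
        print("WARNING:", w)
```

Something this crude would obviously miss subtle problems, but even a basic rule set would flag the unblinded-plus-subjective-outcomes combination before a trial ever ran.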

The way to really address things is not to have a review system (like GRADE) that will pick up on issues afterwards, but to have better standards for trial design. Even running through GRADE prior to the trial could be valuable (perhaps an ethics committee should do this and say that trials which would give low or very low quality evidence are unethical, as the results are meaningless!). It feels like trying to fix the way reviews are done is fixing the wrong problem.

The other important thing is to continuously look for flaws, both in a given trial (and hence the trustworthiness of its results) and in the underlying methods used in a trial. In the security world there is a big group of people who spend their lives trying to break systems (both attackers and defenders), and there is money for the defensive side in terms of bug bounties. Perhaps journals should offer bug bounties for people who find issues with published trials; that would help improve the quality of the published work. Also, if particular methods are shown to be weak, then there needs to be an appreciation that the trials that use them may be affected, and they need to be looked at from that perspective.

I suspect there is something around risks that could also be brought out. Where, for example, there is concern that some mitigations may not work (for example, is a control group strong enough to control for the important factors?), then perhaps additional measurement strategies should be put in place to help judge the validity of a given control group.

I've only skimmed this but it reminds me of what @Brian Hughes has been saying i.e. these studies simply replicate previous flawed studies (un/inadequately blinded plus using subjective outcome indicators - questionnaires). Yes, why isn't this designed out at the beginning and how come the ethics committees don't point out that the study can't tell us anything?

I think PACE raises other questions, e.g. actimetry/activity monitoring was supposed to be used - thereby dealing with the inevitable lack of blinding. The objective outcome indicator (actimetry/activity) was dropped mid-way through the trial. There are lots of other questions re PACE - see @Jonathan Edwards' comments here* - but yes, why didn't the ethics committee deal with these pre/post trial?

*https://www.s4me.info/threads/what-...e-unherd-tom-chivers.22082/page-2#post-367778
 
The reason they chose not to downgrade for imprecision was because they post hoc decided that the question the review was answering wasn't whether exercise therapy had a clinically meaningful effect (a question that people are actually interested in), but whether it has any non-zero effect at all, trivial or not (a question no one is interested in).

Given such a strange goal for a review, it ought to have made explicitly clear that that was the question it was answering when claiming there was moderate quality evidence.
Quite. Moderate quality evidence ... for what?
 
Is there any notion of independent auditing of clinical trials at various stages of their progress? Manufacturing, engineering, etc. have all manner of such things.

So for instance, the company I work for ships large laboratory instruments all over the world, by air. The risk of such goods being tampered with at any point during or after manufacture, and between there and the aircraft, with all the intermediate storage areas, is very non-trivial. So there are strict procedures and regulations in place for everyone involved in any part of that chain. Auditing and regular training (including refresher training at least every two years) is mandatory, and if you do not do the training in time then you are only allowed on site if accompanied, no matter how long you have worked there.

It is all about the risks to human safety. It seems to me that some refresher training for some well known scientists is long overdue, and without it they also should not be allowed anywhere near clinical trials, together with good independent auditing to confirm they apply what they should have learnt.
 
well, ethics boards and trial oversight committees are theoretically supposed to be providing some oversight. but as we know, these processes can be easily subverted.
And as with journals' so-called publication ethics and gatekeeping, once they've published a paper, that's it. Peer reviewers are not held accountable and are usually anonymous; ethics boards and trial oversight committees can be invoked to "prove" everything was done properly - in the latter case, even if the trial oversight committee is stuffed with people with conflicts of interest. Papers might as well be carved on tablets of stone. Once published in a "top" journal, the science can never be corrected, let alone correct itself! Cochrane authors can then smash the tablets of stone into dust, add a pinch of GRADE, risk of bias (and water), and mould it to whatever shape suits their purposes. Got a bit carried away with that metaphor...
 
Quite. Moderate quality evidence ... for what?

Don't know if I'm answering your question here.

I think moderate quality evidence that the effect of the intervention was non-zero. However, if you look at the fact that the trials were unblinded and used subjective outcome indicators, then the claim that these trials provide moderate quality evidence doesn't hold up. @petrichor set out that the question was whether the intervention had a non-zero effect; the unblinded trials, with subjective outcome indicators, indicated that the effect of the intervention was non-zero. It's bonkers.

Reminds me of Sir Humphrey appearing at the committee
"load of meaningless drivel"
 
It strikes me, as a total non-scientist, that it would be great if some of the engineering/systems perspective being written about on this thread could be formalised into some type of paper.

You're probably right, but @Jonathan Edwards set it out very simply* and @Brian Hughes set it out in blunter terms --- EDIT: there is a better way to do studies --- they just don't seem to be motivated to do better work. Maybe the funders (NIHR, MRC ---) should be our target; I think someone mentioned that NIHR had stopped funding Cochrane --- seems like an unexpected bit of enlightenment.

*https://www.s4me.info/threads/nice-...21-discussion-thread.23066/page-7#post-387365
 
well, ethics boards and trial oversight committees are theoretically supposed to be providing some oversight. but as we know, these processes can be easily subverted.
Yes. The processes I'm thinking of would be incredibly difficult to subvert though, and the consequences significant. More independence I suspect, and less chance for cronyism.
 
I hope they can find a way to mirror what NICE does - I do not see why that would not be possible

It would indeed be possible, and it would be a good idea, since there may be folks who would look at the GRADE/Cochrane analysis and could publicly challenge it.

but who would police the review team selection to make sure there was no COI sneaking in there? NICE? I will write and ask them...

Yes, if NICE are going to use Cochrane/GRADE then NICE should work with Cochrane to try to improve the Cochrane/GRADE assessments.

I suspect that @Hilda Bastian may support a system of consulting on Cochrane reviews; however, I also expect that getting that revision through is reminiscent of David versus Goliath (and I wouldn't be betting on David).
 
A study that could easily be confusing reporting bias with true treatment effect should not count as valid evidence for treating patients.
This.

Being as generous as I can in the circumstances, moderate quality evidence for a non-zero effect might (might) sometimes indicate potentially productive areas of further research. But nothing more. It certainly is not even close to justifying application in the clinic, on real human lives.

There is also no reason for it in the first place. If they did their research properly up front this would not be an issue. We are only here because they have spent decades ruthlessly trashing hard-earned methodological and ethical standards.

It is their stubborn recklessness in the face of all evidence and warnings that stuns and angers me. They really don't care about the harm they cause to others. They care only about harm to their reputations and empires.

It strikes me, as a total non-scientist, that it would be great if some of the engineering/systems perspective being written about on this thread could be formalised into some type of paper.
It would make a brutal compare and contrast.
 
well, ethics boards and trial oversight committees are theoretically supposed to be providing some oversight. but as we know, these processes can be easily subverted.
There is a difference between oversight and audit. Audit (at least from the perspective of the auditors I've worked with) involves looking at two things. Firstly, are the defined processes and procedures being followed - in the case of a trial I expect that would be whether the protocol and any manuals are being followed. When I did some work with auditors (looking at automating aspects of it), their job involved getting evidence (such as work logs, database entries etc) and checking it against the procedures so that they could tell they were being followed. The second thing they do is look at the processes and check that they are sufficient to mitigate risks - from what I saw, they seemed to choose a new risk to focus on each year. In a company they will often form a separate reporting chain up to senior management, thus providing a better governance route.

This is very different from, say, an ethics board or trial oversight committee, which may provide some governance at a policy level but, I suspect, does very little to actively provide an independent view that the protocol and policies are being followed.

I would also note that we found having strong (continuous) audit helped improve compliance with the processes.
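
To make the first kind of audit (checking evidence against the defined procedures) concrete, here is a toy sketch of the sort of automated compliance check I mean. The visit schedule, the allowed window and the log format are all invented for illustration - a real trial's protocol would define its own.

```python
# Rough sketch of an automated protocol-compliance check. The visit schedule,
# allowed window and log format are invented for illustration.
from datetime import date, timedelta

SCHEDULE_WEEKS = [0, 12, 24, 52]   # assessments due at these weeks post-enrolment (assumed)
WINDOW_DAYS = 14                   # allowed deviation either side of the due date

def check_participant(enrolled: date, assessments: list) -> list:
    """Compare logged assessment dates against the protocol schedule."""
    findings = []
    for weeks in SCHEDULE_WEEKS:
        due = enrolled + timedelta(weeks=weeks)
        hits = [a for a in assessments if abs((a - due).days) <= WINDOW_DAYS]
        if not hits:
            findings.append(f"No assessment within {WINDOW_DAYS} days of the "
                            f"week-{weeks} visit (due {due}).")
    return findings

if __name__ == "__main__":
    enrolled = date(2024, 1, 8)
    logged = [date(2024, 1, 9), date(2024, 4, 2)]  # baseline and ~12 weeks only
    for f in check_participant(enrolled, logged):
        print("FINDING:", f)
```

Running checks like this continuously, rather than once at the end, is what seemed to drive compliance in the setting I saw.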
 
Being as generous as I can in the circumstances, moderate quality evidence for a non-zero effect might (might) sometimes indicate potentially productive areas of further research.

When you use subjective outcome criteria (questionnaires) and the study is inadequately blinded then you will get a non-zero +ve effect. @Jonathan Edwards described this as "placebo" and it reminds me of the Hawthorne effect*. However, if you look at the studies of actimetry (objective) versus questionnaires then you'll see that questionnaires are just unreliable - therefore the evidence produced by these studies is unreliable.
*[https://en.wikipedia.org/wiki/Hawthorne_effect]
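
To illustrate the point, here is a toy simulation (my own invented numbers, not data from PACE or any real trial): the true effect is zero for both the objective and the subjective measure, but the unblinded treatment arm over-reports improvement on the questionnaire by about 0.3 standard deviations. Run it and the questionnaire will usually show a "significant" benefit while the objective measure shows nothing.

```python
# Toy simulation: the true treatment effect is zero on both measures, but the
# unblinded treatment arm over-reports improvement on the questionnaire.
# All numbers are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300  # participants per arm

# Objective measure (e.g. change in steps/day): no true effect in either arm.
obj_control = rng.normal(0, 1000, n)
obj_treatment = rng.normal(0, 1000, n)

# Subjective measure (questionnaire change score): same null effect, plus
# a reporting bias of ~0.3 SD in the unblinded treatment arm.
subj_control = rng.normal(0, 1, n)
subj_treatment = rng.normal(0.3, 1, n)

for label, treat, ctrl in [("objective", obj_treatment, obj_control),
                           ("subjective", subj_treatment, subj_control)]:
    t, p = stats.ttest_ind(treat, ctrl)
    print(f"{label}: mean difference = {treat.mean() - ctrl.mean():.2f}, p = {p:.3f}")
```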

There is also no reason for it in the first place. If they did their research properly up front this would not be an issue. We are only here because they have spent decades ruthlessly trashing hard-earned methodological and ethical standards.
Yes even Fluge and Mella (Rituximab Phase 3 - 2014) used actimetry, and PACE (2011) was supposed to, so the technology is available and getting better/cheaper all the time. Using this type of outcome indicator you may indeed be able to identify "potentially productive areas of further research". You would of course be able to discount useless interventions --- interventions which these people make their living out of.
 
When you use subjective outcome criteria (questionnaires) and the study is inadequately blinded then you will get a non-zero +ve effect. @Jonathan Edwards described this as "placebo" and it reminds me of the Hawthorne effect*. However, if you look at the studies of actimetry (objective) versus questionnaires then you'll see that questionnaires are just unreliable - therefore the evidence produced by these studies is unreliable.
*[https://en.wikipedia.org/wiki/Hawthorne_effect]


Yes even Fluge and Mella (Rituximab Phase 3 - 2014) used actimetry, and PACE (2011) was supposed to, so the technology is available and getting better/cheaper all the time. Using this type of outcome indicator you may indeed be able to identify "potentially productive areas of further research". You would of course be able to discount useless interventions --- interventions which these people make their living out of.
Yes.
A big general data capture and then analysis might find some interesting things.
Wearables make this eminently doable.


We've found oxygen saturation in PEM can dip significantly.
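
As a very rough illustration of how doable the data capture side is, here is a minimal sketch of summarising per-day step counts and minimum SpO2 from wearable readings. The column names, sample values and the 92% threshold are all assumptions for illustration, not any particular device's export format or a clinical cut-off.

```python
# Minimal sketch of summarising wearable data per participant per day.
# The data layout and threshold below are assumptions, not a real export.
import pandas as pd

# Tiny made-up sample: one row per timestamped reading per participant.
df = pd.DataFrame({
    "participant_id": ["p1", "p1", "p1", "p2", "p2"],
    "timestamp": pd.to_datetime(["2024-01-01 09:00", "2024-01-01 15:00",
                                 "2024-01-02 10:00", "2024-01-01 11:00",
                                 "2024-01-02 12:00"]),
    "steps": [1200, 800, 300, 2500, 2600],
    "spo2": [97, 91, 96, 98, 97],
})

# Daily totals and daily minimum oxygen saturation per participant.
daily = (df.set_index("timestamp")
           .groupby("participant_id")
           .resample("D")
           .agg({"steps": "sum", "spo2": "min"})
           .rename(columns={"steps": "total_steps", "spo2": "min_spo2"}))

# Flag days where saturation dipped below a hypothetical threshold.
daily["spo2_dip"] = daily["min_spo2"] < 92

print(daily)
print(daily.groupby("participant_id")[["total_steps", "spo2_dip"]].mean())
```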
 