2025: The 2019/24 Cochrane Larun review Exercise Therapy for CFS - including IAG, campaign, petition, comments and articles

Worth remembering that statisticians tend not to understand anything about blinding of trials. Blinding is there to solve the problem of human nature creeping into measurements. Statisticians are often not that good on human nature. Medical students get steeped in it from their first clinical year!
I said stats, but "research methodology", "trial design" and "pitfalls of data analysis" would be better. If the data can't be trusted, I don't think knowledge of human nature should be needed.

At that stage it is simply a question of assuming that young doctors have basic common sense.
I've met too many medical students and doctors without common sense to assume it is a common trait amongst them.

There is no justification for using rules based on what other people think. Someone assessing reliability has to understand for themselves why things are reliable. It isn't difficult; it is almost entirely common sense. But rules can only ever be an approximation to what some other people think, and in science you never go by what other people think anyway.
The justification I'm exposed to is that rules create an opportunity to have things assessed in a similar manner and thus create more objective and reproducible results. There's not necessarily a lot of room for students of science to think for themselves, so by the time that is a possibility you've already used the rules for years, and I assume doing so becomes somewhat automatic.
 
I've met too many medical students and doctors without common sense to assume it is a common trait amongst them.

Sure, but if people don't have common sense there is no point in trying to replace it with rules created by eminent people. The rules will always be misinterpreted and distorted, even if they were any good to start with. Rules like GRADE and RoB2 are hopelessly flawed.

So there isn't an awful lot of point in further training. You have to rely on someone with common sense at some point pointing out that the method of a particular trial will not do.
 
It is taught in clinical pharmacology courses, if I remember rightly at the preclinical stage of basic sciences. When I was taught it we used Desmond Laurence's textbook (Desmond was at UCL). It is part of the basic explanation of why we do double-blind trials. Every medical student has heard of double blind trials and ought to have an understanding of why they are done. The let out is that if you have truly objective endpoints you may not need to double blind, but that is the exception.
Judging from comments made by physicians about this problem, it seems that there is a widespread acceptance of lower standards in clinical psychology, simply because the field can't meet those standards and would therefore not be able to do the bulk of its work, which is to run clinical studies.

As in, yes, everyone in medicine agrees that this is the case for drug trials and serious procedures about 'real' problems, but since clinical psychology trials can't possibly meet those standards then they simply shouldn't bother. Thus follows Cochrane's business model, which is to take such studies and give them a few rounds of numerical waterboarding, assigning fake numbers to arbitrary shapes, and make this all appear to be just as reliable as drug trials.

It's because of the exemptions. Everyone understands those problems; it's simply accepted that they must be exempted here because otherwise it would invalidate almost everything in evidence-based medicine. Not because it has to, but because decades of accepting garbage standards leave no other choice. Specifically, it would invalidate the top of the absurd pyramid of evidence, systematic reviews and clinical trials, even though the whole thing is several kilometers underground compared to literally all other professional standards. As you said, no other profession even looks at garbage evidence like this, let alone makes use of it.

But here it's all they have and it has been used to influence hundreds of millions of lives. It warrants closing the whole thing down, but it just runs into a wall of embarrassment that the industry isn't ready to jump over yet. Possibly never, not voluntarily anyway.
 
Interesting that MEAction are focusing solely on the editorial note rather than withdrawal, stressing their link to the IAG and avoiding any mention of our 'withdrawal of Larun et al' campaign.

It may be that they regard the editorial note as the most realistic goal, though it seems to me something of a cop-out, in that it gives Cochrane a way of putting the issue to bed if the outcry gets too embarrassing, without addressing the central problem of the inherent bias in the use of subjective outcomes in unblinded trials. Both @Hutan and I reference these wider issues in our comments on the MEAction blog. I wonder, given our comments are still lurking in moderation (echoes of Hilda's moderating strategy), if our demand for withdrawal of the old review is a hot potato for them too.

A bad negotiation tactic too.
To set the bar so low.
 
The choice of criteria is relevant, but if the criteria define a wider set than the set of interest, the findings are still valid for the set of interest unless there is evidence to the contrary.

That seems very backwards to me. I might be too foggy to understand this now, but I still don’t get it.

If you have a group of people with CF, and you don’t know if they have PEM or not, how can the results be generalized to everyone with PEM? You can’t prove that anyone had PEM, and you can’t check how that subset in particular responded.

PACE might have accounted for PEM (I have not spent energy on reading it), but many GET/CBT studies don’t.

To take it to the extreme: say you do a study on humans, monkeys, pigs, and rats, but you don't write down the species. So you don't know how many humans were involved, if any.

Based on that study, surely you can’t say anything in general about humans?

I understand that if humans were involved, and they responded differently than the other animals, then that will be reflected in the CI, etc. But what if they were not involved? That’s what I’m stuck on.
 
The justification I'm exposed to is that rules create an opportunity to have things assessed in a similar manner and thus create more objective and reproducible results.

Yes, we hear that all the time, but it is a contradiction. You only need rules if you don't understand how to work out for yourself whether something is reliable. And if you don't, you won't know when the rules are not appropriate to a situation, even though they might seem to be. You can only reach the best answer by understanding the problems yourself.

There's not necessarily a lot of room for students of science to think for themselves, so by the time that is a possibility you've already used the rules for years

My students were always encouraged to think for themselves right from the start. My trainees were told not to use rules but to understand what the problem is. I agree that this may not be what happens elsewhere but there is never a justification for following rules provided by eminences in science.
 
Judging from comments made by physicians about this problem, it seems that there is a widespread acceptance of lower standards in clinical psychology, simply because the field can't meet those standards and would therefore not be able to do the bulk of its work, which is to run clinical studies.

I agree, but I strongly suspect that the difficulty in meeting standards is almost entirely a reflection of the fact that the treatments don't work.

If psychotherapy really worked there would also be a replicable dose-response effect. It might be that benefit rose sharply going from 3 to 4 to 5 sessions but had plateaued by ten. With drugs you see that clear as day. If CBT worked you would see it clear as day. So you could assess efficacy using a dose-response study, which would almost entirely remove the problem of expectation bias, because nobody would have any specific expectations for the optimum dose.
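To make that concrete, here is a minimal illustrative sketch (my own, with entirely made-up numbers, not data from any trial): if a treatment works, improvement should rise with the number of sessions and then plateau; if it doesn't, the dose-response curve is flat, and a simple trend test across dose groups can tell the two apart without anyone needing to guess the optimum dose in advance.

```python
# Purely illustrative sketch: made-up data showing how a dose-response trial
# could separate a real, plateauing treatment effect from no effect at all.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sessions = np.repeat([2, 4, 6, 8, 10, 12], 30)  # hypothetical groups of 30 per dose

# Scenario A: a real effect that rises with dose and plateaus (Emax-style curve)
improvement_real = 8 * sessions / (sessions + 4) + rng.normal(0, 5, sessions.size)

# Scenario B: no real effect, just noise (a flat dose-response curve)
improvement_null = rng.normal(0, 5, sessions.size)

for label, y in [("plateauing effect", improvement_real),
                 ("no effect", improvement_null)]:
    rho, p = stats.spearmanr(sessions, y)  # simple monotonic trend test across doses
    print(f"{label}: Spearman rho = {rho:.2f}, p = {p:.2g}")
```

The exact test hardly matters; the point is that the comparison is between doses of the same treatment, so every group's expectations are engaged in the same direction.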
 
That seems very backwards to me. I might be too foggy to understand this now, but I still don’t get it.

I am sorry about that but this is basic probability theory applied to sets. I have been through all the relevant arguments. I may not have expressed them well but as far as I am aware they are what they are.

If you have a group of people with CF, and you don’t know if they have PEM or not, how can the results be generalized to everyone with PEM?

Because they all have CF, and if you found a property of the CF set then the highest probability is that it will apply to the subset with PEM.

PACE might have accounted for PEM (I have not spent energy on reading it), but many GET/CBT studies don’t.

Yes, but none of the other studies even begin to figure in terms of reliable evidence. PACE was the only study that NICE even had to deliberate over because nothing else scored the minimum. The same should have applied to the Cochrane review.
 
Based on that study, surely you can’t say anything in general about humans?

You can say that the most likely interpretation is that the result applies to humans as much as to rats or monkeys, and if it applies to humans it applies to them in general.

If you are not going to go with the most likely interpretation, you need evidence to support there being a reason for a differential.
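A rough way to put that in symbols (my own gloss, not anything from the review): let $Y$ be the outcome of interest, $C$ the set of people meeting the CF criteria, and $P \subset C$ the subset with PEM. Absent any evidence of a differential, the best available estimate for the subset is simply the whole-set estimate,

$$\mathbb{E}[\,Y \mid P\,] \approx \mathbb{E}[\,Y \mid C\,],$$

and a different value for the subset only becomes the more likely interpretation once there is evidence that PEM modifies the effect, i.e. evidence that

$$\mathbb{E}[\,Y \mid P\,] \neq \mathbb{E}[\,Y \mid C \setminus P\,].$$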
 