Caroline Struthers' correspondence and blog on the Cochrane review 'Exercise therapy for chronic fatigue syndrome' (2017 and 2019), Larun et al.

"Tovey: Please see above. Many of the criticisms of the PACE study, including reported COI of the researchers, and changing outcome measures prior to analysis of the results, are not unique to this study."
my bolding.

Eminence-based medicine. And 'everybody does it, so it's good science'.

Bah humbug. Cochrane should be ashamed of themselves.
And the outcome changes may have been done prior to formal analysis, but if anyone believes it was done before the researchers knew which way the results were trending, then they will believe anything. Of course they knew. And no, it's not unique, especially amongst other BPS studies. And it depends very much on motives and integrity.
 
"Tovey: Please see above. Many of the criticisms of the PACE study, including reported COI of the researchers, and changing outcome measures prior to analysis of the results, are not unique to this study."

And the outcome changes may have been done prior to formal analysis, but if anyone believes it was done before the researchers knew which way the results were trending, then they will believe anything. Of course they knew. And no, it's not unique, especially amongst other BPS studies. And it depends very much on motives and integrity.
Given the nuances of language deployed by the PACE team, and their carefulness to be economical with the truth but not lie, I could believe they didn't see their data prior to changing outcomes.

They didn't need to. The FINE data was available and would have flagged up issues.

They could not afford another null trial. Especially the only such trial funded with public money, and pre-hyped.

Edit atrocious spelling
 
Lucibee's blog said:
Investigators often fail to understand that you should not use an intervention that interferes with the outcome in this way. It is akin to tampering with your BP recording equipment to give a more favourable outcome. Where your ‘equipment’ is the patient themselves, you have to be very careful about how any intervention is delivered.
Exactly @Lucibee. One of the first things I learnt in electronics (a good while ago) is that when using a voltmeter to measure a voltage, if you do not know what you are doing, the voltage you are measuring can be changed by the very action of connecting the voltmeter. Instead of the 10.2 volts that might actually exist at the point you are measuring, once the voltmeter is attached it might change to 9.8 volts, and that is what your voltmeter will tell you the voltage is. Basically the measuring instrument is intrusive. Modern voltmeters are much better, but there can still be issues.

Essentially any measuring instrument needs to be non-intrusive, and if it cannot be then you have to understand the bias an intrusive measurement can introduce (its existence and to what degree), and account for that.
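The voltmeter loading effect described above can be put into numbers. A minimal sketch (all component values are illustrative assumptions, not from any real circuit): the meter's input resistance forms a voltage divider with the source resistance, dragging the reading below the true open-circuit voltage.

```python
# Voltmeter loading: the meter's finite input resistance forms a
# voltage divider with the source resistance, so the reading is
# lower than the true open-circuit voltage.
# All component values here are illustrative assumptions.

def measured_voltage(v_true, r_source, r_meter):
    """Voltage actually read once the meter is connected."""
    return v_true * r_meter / (r_source + r_meter)

v_true = 10.2          # true voltage at the node (volts)
r_source = 10_000      # assumed source (Thevenin) resistance, ohms
r_old_meter = 250_000  # assumed older analogue meter input resistance
r_dmm = 10_000_000     # typical modern DMM input resistance

print(round(measured_voltage(v_true, r_source, r_old_meter), 2))  # reads noticeably low
print(round(measured_voltage(v_true, r_source, r_dmm), 2))        # reads close to 10.2
```

With these assumed values the older meter reads about 9.8 V for a true 10.2 V, which is exactly the kind of intrusion described above; the higher-impedance modern meter barely disturbs the circuit.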
 
Given the nuances of language deployed by the PACE team, and their carefulness to be economical with the truth but not lie, I could believe they didn't see their data prior to changing outcomes.

They didn't need to. The FINE data was available and would have flagged up issues.

They could not afford another null trial. Especially the only such trial funded with public money, and pre-hyped.

Edit atrocious spelling
What I really meant was that even though they may not have seen the finally recorded dataset itself, there will surely have been many discussions amongst the researchers, and given the trial was fully unblinded, they will have had a strong sense during any such discussion of which way the wind was blowing. The researchers were privy to how the data was trending, because they were the very people gathering and inputting the data, and I cannot imagine they did not discuss such things off the record, and start to sweat when they realised things were not going to pan out as they had planned.
 
Given the nuances of language deployed by the PACE team, and their carefulness to be economical with the truth but not lie, I could believe they didn't see their data prior to changing outcomes.

They didn't need to. The FINE data was available and would have flagged up issues.

They could not afford another null trial. Especially the only such trial funded with public money, and pre-hyped.

Edit atrocious spelling

Wessely has already admitted they changed the protocol to bring the results in line with other studies, FFS!
 
"Tovey: Please see above. Many of the criticisms of the PACE study, including reported COI of the researchers, and changing outcome measures prior to analysis of the results, are not unique to this study."

Wessely:
In essence though they decided they were using a overly harsh set of criteria that didn’t match what most people would consider recovery and were incongruent with previous work so they changed their minds – before a single piece of data had been looked at of course. Nothing at all wrong in that- happens in vast numbers of trials. The problem arises, as studies have shown, when these chnaged are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right – the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies of which there are many.
https://www.statnews.com/2016/09/21/chronic-fatigue-syndrome-pace-trial/comment-page-6/#comments


The new recovery definition they changed to halfway through the trial meant people could be declared ill enough to enter the trial and declared recovered at the same time.
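That overlap is simple arithmetic. A sketch using the published SF-36 physical function thresholds as I understand them (trial entry required a score of 65 or less; the revised "normal range" used for recovery started at 60; the scale runs 0-100 in steps of 5):

```python
# PACE recovery-threshold overlap, using the published SF-36
# physical function thresholds: trial entry required a score
# <= 65, while the revised "normal range" for recovery was >= 60.
# SF-36 PF is scored 0-100 in steps of 5.

entry_max = 65     # eligible ("ill enough" to enter) if score <= 65
recovery_min = 60  # counted as within the "normal range" if score >= 60

overlap = [s for s in range(0, 101, 5)
           if s <= entry_max and s >= recovery_min]
print(overlap)  # scores qualifying as both "ill enough" and "recovered"
```

Any participant scoring in that overlap band could satisfy the entry criterion and the revised recovery criterion simultaneously, without their score changing at all.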

All completely normal scientific practice then.

I think herein lies the problem: Tovey is realising he needs to normalise bad practice, hoping no one looks at the consequences, rather than opening a can of worms on the whole Cochrane brand, and potentially on how bad much of the scientific literature is overall.
 
Wessely said:
The problem arises, as studies have shown, when these chnaged are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right – the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies of which there are many.
Yes, @large donner, that quote of SW's alone clearly illustrates that the driving motivation for changing the outcome criteria was simply getting the results they wanted. And where is he coming from, suggesting that so long as changes are reported properly they must be OK? If that were the case they could have changed the criteria so that corpses were deemed recovered.
 
Nothing at all wrong in that- happens in vast numbers of trials.
Psych trials maybe, but not in real science.

As I have said before, it is legitimate to explore different analytical approaches to a given data set. There is nothing inherently wrong with modifying analytical protocol, as long as you also publish the results according to the original protocol, and give good reasons for modifying it.

The fact that they didn't, and instead fight tooth and nail to prevent the original results seeing the light of day, is the problem.
 
I think we need to work on how to convince people of how flawed it is to combine subjective outcomes with unblinded trials. There is clearly a good deal of convincing still to do. I seem to recall @Jonathan Edwards saying some time back on PR that quite a few people took a lot of convincing initially when he raised the point. Coming to all this from scratch, once pointed out it seemed terribly obvious to me and still does. But there is a lot of inertia in the establishment mindset that really doesn't see otherwise. How to change that? For psych therapies aimed at genuine psych conditions, the notion may hold some water, but when targeted toward physical conditions with objective outcomes available, it's junk. Something for 2019.
Trying to catch up with threads from Thailand.

The problem of unblinded trials with subjective end points is not 'just an opinion'. It is the only sensible view and is embedded in all evidence-based medicine. With due respect, if Tovey's advisors do not share this view, they are incompetent. Again with due respect, if Tovey is not aware of this, he is not qualified to act as an editor in an organisation like Cochrane. He seems to have acted fairly so far, but in this instance he appears to be indicating that he has no grasp of reliable evidence. That makes Cochrane something of a basket case, maybe.

The problem of subjective end points with no blinding is as basic as it gets. It is the reason we have blinded trials. No trial with this design of a drug would be taken seriously. Moreover the problem is much worse with therapist delivered treatments, not less bad, because of the role playing and manipulation of attitudes that is inherent in the human interaction.

I am afraid this hits rock bottom.
 
I think an apology is due, together with an admission that the comment about 'just an opinion' was not appropriate. If a Cochrane official makes a statement like this it effectively destroys the Cochrane reputation entirely.

It seems to have remained under the radar so far. Is there any way to bring more attention to this? That the editor of one of the most reputable medical journals basically says objective evidence is overrated and cherry-picking is OK should be a huge concern to everyone working in medicine, especially relevant with the overall Cochrane controversy.

Almost everything that happens to us seems to happen in an alternative bubble of reality: things that would normally be alarming, like moving the finish line behind the starting line and calling it a win, are basically met with a shrug. And Tovey saying this is common and fine?!

I assume he means in psychosomatic medicine, which is likely true, but then they have to decide whether it's truly medicine and therefore has to conform to its standards, because right now it has all the weight of standard medicine with none of the accountability.
 
The problem of unblinded trials with subjective end points is not 'just an opinion'.
I strongly suspect Tovey is not just saying that of his own volition, but likely echoing drip-feed whisperings that have been going on in his ear. Which in a way shows him to be even less qualified, because he should be above all that, and have the knowledge, confidence and courage to stand by what he should know to be good science.
 
It's not just inherent in the interaction; the manipulation of attitudes is fundamental to the interventions they are testing.
Quite so. In their world, modification of people's perceptions is at the heart of their treatments, whose effectiveness they assess by measuring people's perceptions, which is of course wholly subjective. And knowing which treatment you are getting can itself help to modify perceptions, meaning certain biases are intrinsic to their treatment regimes. So it is totally (and potentially dangerously) inapplicable when treating any illness not due to flawed perceptions.
 
I am travelling and not in a position to send formal communications but for starters the original complainant could demand an apology, pointing out the unjustified content of the reply.

When I mentioned to the chief of medicine at UCL that I would like to see a trial set up in ME, he made it clear that if blinding was not cast iron (no tell-tale side effects even) he would regard the trial as worthless and would not want it in his department. That is even without mentioning endpoints, because even objective endpoints are tricky in ME (especially if you assume a BPS approach).

Tovey's remark is just completely out of order.
 
because even objective endpoints are tricky in ME
Which maybe warrants some discussion of what exactly constitutes an incontrovertible objective outcome for ME. It is not nearly as straightforward as it might seem.
  • Physical activity tested within a trial may potentially be "exchanged" for doing less activity outside of the trial.
  • Cognitive activity is highly relevant, but extremely difficult (or impossible?) to measure objectively.
  • Longer term follow up measures are very relevant, but presumably very difficult to control for confounding factors.
  • Plenty of other things I've not thought of here ...
Edit: I think we may already have another thread for this?
 
Exactly @Lucibee. One of the first things I learnt in electronics (a good while ago) is that when using a voltmeter to measure a voltage, if you do not know what you are doing, the voltage you are measuring can be changed by the very action of connecting the voltmeter. Instead of the 10.2 volts that might actually exist at the point you are measuring, once the voltmeter is attached it might change to 9.8 volts, and that is what your voltmeter will tell you the voltage is. Basically the measuring instrument is intrusive. Modern voltmeters are much better, but there can still be issues.

Essentially any measuring instrument needs to be non-intrusive, and if it cannot be then you have to understand the bias an intrusive measurement can introduce (its existence and to what degree), and account for that.

Exactly. What you are recording is the output of an instrument; this cannot be assumed to be the same as the underlying phenomenon.

How a patient answers a symptom questionnaire is not the same as the underlying experience of that symptom. Compared to voltmeters, which have well-defined or easily controlled errors, questionnaires are subject to a whole host of uncontrollable or difficult-to-control biases.
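A toy simulation can make that concrete. Everything here is an invented assumption (group sizes, noise level, size of the response bias); the point is only that in an unblinded trial a subjective measure can manufacture an apparent treatment effect where the true change is zero.

```python
# Toy simulation (all effect sizes are invented assumptions):
# two groups with identical true change, but the unblinded
# "therapy" group reports an extra response bias on a subjective
# questionnaire, producing an apparent treatment effect from nothing.
import random

random.seed(1)

def reported_change(n, true_change, response_bias, noise=5.0):
    """Questionnaire-reported change = true change + bias + noise."""
    return [true_change + response_bias + random.gauss(0, noise)
            for _ in range(n)]

# True benefit is zero in BOTH groups; only the reporting bias differs.
control = reported_change(100, true_change=0.0, response_bias=0.0)
therapy = reported_change(100, true_change=0.0, response_bias=8.0)

diff = sum(therapy) / len(therapy) - sum(control) / len(control)
print(round(diff, 1))  # apparent "effect" despite zero true benefit
```

The between-group difference comes out close to the assumed reporting bias, not to the (zero) true effect, which is exactly why an unblinded trial with subjective end points cannot distinguish treatment from expectation.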
 