Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

I cannot thank enough everyone who has been commenting on this thread to describe the consequences of the twisted science behind GET and CBT, whether on their lives as patients or carers or, more generally, on the medical and scientific fields. And for calling on everyone to do better, much better, than harming ME sufferers with inappropriate treatments.

While I have only recently become ill with ME, your support and advocacy efforts are invaluable for the generations of ME patients to come, first and foremost those who may come down with ME due to the current pandemic. Thank you for helping us.

@Hilda Bastian, thanks very much for your work on the review and for engaging here. Are you aware of Professor Garner's interest in ME/CFS? He seems to be rapidly getting up to speed on the ME/CFS literature while he deals with post-Covid effects.
Agreed. Although the Infectious Diseases group is not responsible for the review on exercise therapy in CFS, reaching out to Prof Garner is worthwhile. Is that possible @Hilda Bastian? I dream that he would be able to sit on the review committee, or otherwise that he would provide the committee with an expert commentary/input given his position at Cochrane.
 

Yes - thanks for drawing my attention to it, though. We've been messaging/emailing about all this. Yes, I expect he will be contributing his views, and definitely keeping up with emerging data and research on post-Covid effects.
 
I am grateful for your interest and contribution to the peer review process here, @Hilda Bastian. I agree with almost everything you say but the quote above unsettles me a bit.

I am not quite sure why I should be grateful for your interest in an illness that I never worked on or have in my family but I guess there may be three answers. One is Bob. As I think Tovey realised, if we forget Bob we may as well wash our hands of the whole business. 'Due process' does not cut it. A related one is whatever makes people decide to march for George Floyd. I guess that must be why I hang around this forum. The third is a feeling of disgust and embarrassment at the behaviour of people who consider themselves my scientific colleagues (one of whom has accused me of 'disloyalty', as if I should be loyal to them rather than patients).

I started out as an invited advisor on research projects in ME/CFS. I tried to get myself boned up on the immunology and metabolism that were relevant. In the process I attended a meeting where I was introduced to the PACE trial by Peter White. I had little interest in psychological therapies but was interested to get a rounded view. I have listened to thousands of colleagues present their research. His presentation was something I had never encountered before. It came across as a deliberate and disingenuous manipulation designed to discredit patients in order to protect poor science. Within two minutes I felt I was being scammed. A couple of slides were flashed up to prove that PACE proved CBT and GET worked (with no mention of methodology or even a valid Y axis) and the rest of the fifteen minute talk was abuse of patients.

The claim was made that ad hominem 'impugning of character' was anti-science. But this was a transparent conflation of two different issues. A psychiatrist who gives a lecture every time a patient insults them is unlikely to have much time for clinical work. It is part of the job and has no effect on research practice or reputation. What damages scientific reputation, and rightly so if justified, is pointing out that the research is poor quality. That was what White was upset about. And the pointing out was justified. Adding to that 'impugning of character' seems not unreasonable since it has derived not from a 'difference of opinion' but from there being every reason to think that someone is acting out of self interest.

Over the last five years I have gradually come to learn just how appalling the execution of these studies has been. Not only do they break basic rules for generating reliable evidence, but we know from FOI requests how the massaging of results came about and has been repeatedly covered up (with vast sums spent on refusing to share the facts). I keep thinking that I have heard the worst only to find that I haven't. The patients opened my eyes. Some may oversimplify, but most of them are just expressing things in the only way they can understand them, and it makes sense. And as a community S4ME produces a level of scientific debate that puts most academic meetings to shame.

This is not an area where there is reasonable difference of opinion. I presented my analysis of the situation to the UCL Division of Medicine Grand Round and not a single person raised disagreement when I asked for a show of hands. The one person who initially abstained said, on hearing the case, that it was of the utmost importance that the weakness of the studies was brought to the notice of NICE. Only people with a vested interest disagree with the principle that open label studies with subjective endpoints are essentially worthless. The problem I perceive is that in the 'methodology/quality control' business vested interests come in a remarkable range of flavours. (We have seen some strange goings-on recently with the shift in the Risk of Bias Tool.)

I understand the process that has been set in motion and that it is hard to see another way forward. But I think the patients have every reason to think that they are being let down. You only have to look at what is going on in Norway at present with the Lightning Process to see how much the rot is still there. I think we all owe it to Bob to do better. If we can't at least try to do that, medical science ceases to have legitimacy.
Awesome! Just awesome!
 
So very, very well said.
 
I am actually more optimistic than some here. I see reasons to think that by the end of this year there may be some positive outcomes on long standing issues.
That is very encouraging to hear. @Hilda Bastian's earlier post gives me some hope you may be right.
I agree people with ME/CFS were badly let down by the previous process, and have good reason for expecting no better as a result. This early in the process all I can do is acknowledge that, and point to signals that it's not business as usual. I think the editor-in-chief choosing me for this role is one signal - she knew that meant embracing disruption and high levels of engagement, and you will see the first results of that soon.
 
My own conclusion is that a difference of opinion does not have the power to explain what is happening. It looks like the PACE trial authors and some of their allies are willing to do whatever it takes, even hurting patients, to advance certain interests. One doesn't start a professionally conducted smear campaign against a patient group over differences in opinion. One doesn't spend a quarter of a million pounds on lawyers to hide clinical trial data over differences in opinion. One doesn't define recovery in a way that allows severely impaired patients to get worse and be counted as recovered without a clear intent to obscure the truth.
Absolutely.
 
No worries. The story so far: the previous editor-in-chief (David Tovey) proposed withdrawing the review, but agreed to amend it instead, initiating a process involving multiple people. That process was advanced but not completed when the new editor-in-chief, Karla Soares-Weiser, took over in June 2019. She decided the amendment did not resolve all the issues, and when the review was published (at the beginning of October) she announced a comprehensive update that would involve an international advisory group including patient advocacy groups: https://www.cochrane.org/news/publication-cochrane-review-exercise-therapy-chronic-fatigue-syndrome

In October 2019, Soares-Weiser announced my appointment as lead for the advisory group process: https://www.cochrane.org/news/appointment-lead-independent-advisory-group I expect there'll be more about this in the upcoming report. We made a further announcement in early March that addresses several of your questions: https://community.cochrane.org/orga...eholder-engagement-high-profile-reviews-pilot There was a timeline: the pandemic has slowed some aspects down, but I'm hoping we'll make up ground.

Review authors draft the protocols for Cochrane reviews, which go through peer review and editorial review and are then published. The same happens with the review. The advisory group will be involved in shaping its own role, but you will see the initial proposal they will be discussing in the link above. I'm expecting we'll have another report in the next few weeks, and that will explain more about who is now involved in what: I don't think any questions on your list will be left unanswered then.
Thanks again for this explanation. There remains one question that I think it would help our members to have an answer to before your report in a few weeks' time.

Can you please tell us where the process of appointing people to be on the advisory group is up to?

As a forum, if we are to nominate someone to represent us on the advisory group, we will need time for members to discuss this and to agree on a name of someone suitable and willing to be put forward. But there would be no point in our going through such a process if we have already missed the boat.
 
Is there any information about the time it took to change the withdrawal policy? And the process? It seemed quite quick to me. They withdrew the Chinese Medicine review in 2018, and then the policy changed less than a year later, in my view to make it impossible to withdraw the Exercise and CBT reviews because there isn't consensus that they harm people. There's never going to be consensus on that, is there? There seemed to be no massive, years-long consultation of the kind that usually ensues with a very large Cochrane policy shift like this. Can you find out what the process was? I have never known any Cochrane policy change happen so quickly. The conflict of interest one went on for literally years. I think I was on some working group as a 'consumer' rep in around 2013/14, endlessly discussing it. And the policy actually changed only last year. But I may be mixing up my policies.
The problem is that if someone - maybe someone who has a recent diagnosis of ME - types "chronic fatigue syndrome" or "myalgic encephalomyelitis" into the Cochrane Library search box, the results don't present a coherent picture. It would be confusing for those not privy to what's been going on to see a note on the exercise review saying it's being updated, but no note on the CBT review to indicate there's a problem with it. Many will read the plain language summaries and/or not notice the note on the abstract anyway. This is an observation and a concern, not a question requiring a response.
 
Secondly, a new protocol with new authors and a new editorial process is another signal that this is a re-start, not a continuation.

One of my issues is whether any protocol could be sufficient to look at the current trials. I certainly think that any protocol that uses subjective outcomes as its endpoints is not worth bothering with. This would be made worse if (as with the current protocol) they use the CFQ as an endpoint - it has many problems, including the question phrasing's handling of change, two sometimes contradictory marking schemes, and a lack of linearity. (I'm not impressed with the SF-36 either.)
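
As an aside on the "two contradictory marking schemes" point, here is a minimal sketch of my own (not from the post above), assuming the standard 11-item CFQ coding of 0-3 per item for Likert scoring and 0/0/1/1 for bimodal scoring, and using hypothetical answer patterns. It shows how the same change in responses can look like improvement under one scheme and deterioration under the other:

```python
# Minimal sketch: how the CFQ's two marking schemes can disagree about
# whether a patient has improved. Assumes standard coding: Likert 0-3 per
# item (total 0-33), bimodal 0/0/1/1 per item (total 0-11).
# The answer patterns below are hypothetical.

def likert_score(answers):
    """Sum of the 0-3 codes across the 11 items."""
    return sum(answers)

def bimodal_score(answers):
    """Each item scores 0 if answered 0/1, and 1 if answered 2/3."""
    return sum(1 for a in answers if a >= 2)

# Hypothetical patient: baseline vs follow-up answers for the 11 items.
baseline  = [3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1]
follow_up = [2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1]  # three items 3->2, one item 1->2

print("Likert: ", likert_score(baseline), "->", likert_score(follow_up))   # 21 -> 19: looks better
print("Bimodal:", bimodal_score(baseline), "->", bimodal_score(follow_up)) # 7 -> 8: looks worse
```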

That leaves looking at those trials that report more objective measures. PACE should have reported two: a step test and a six-minute walking test, but they have only published the step test results in the form of a graph (refusing to give the numbers despite it being a secondary outcome!).

Other trials, I assume, use other more objective techniques, so it may be hard to compare and aggregate. Care also needs to be taken here - for example, PACE didn't do the 6MWT properly, in that the walking course was too short and forced participants to make more turns. So digging into the details of what exactly was measured before aggregating is essential.

I just have a feeling that the overall data quality from all these trials is just too poor to make any sense in terms of a meta analysis.

So I do feel an important part of any protocol is to dig deeper than the headline measures used: to look at the properties of the measures and how they were implemented in each trial, and hence whether they are sound and whether they can safely be combined.
 
The problem is that if someone - maybe someone who has a recent diagnosis of ME - types "chronic fatigue syndrome" or "myalgic encephalomyelitis" into the Cochrane Library search box, the results don't present a coherent picture. It would be confusing for those not privy to what's been going on to see a note on the exercise review saying it's being updated, but no note on the CBT review to indicate there's a problem with it. Many will read the plain language summaries and/or not notice the note on the abstract anyway. This is an observation and a concern, not a question requiring a response.
Yes, agree.
Thanks again for this explanation. There remains one question that I think it would help our members to have an answer to before your report in a few weeks' time.

Can you please tell us where the process of appointing people to be on the advisory group is up to?

As a forum, if we are to nominate someone to represent us on the advisory group, we will need time for members to discuss this and to agree on a name of someone suitable and willing to be put forward. But there would be no point in our going through such a process if we have already missed the boat.
The next report will include the first wave of members. After the second wave, there will be one final slot that the members will choose how to fill. We don't expect rapid turnaround on nominations when we invite a group, once the group has accepted a position.
 
I just have a feeling that the overall data quality from all these trials is just too poor to make any sense in terms of a meta analysis.
I agree.

That leaves looking at those trials that report more objective measures.
I think that quite a lot of the measures that might be regarded as objective are actually subjective. Certainly a 6 minute walk can be influenced by the enthusiasm the participant has for an open-label treatment or the sense of shame someone might have been taught to feel if they aren't 'morally strong enough' to recover.

Results from a week of activity monitoring can be similarly skewed, as can school attendance.

I recall when my son desperately wanted to go back to school and was encouraged to do so by a psychologist. In the first month, his attendance was very good. If he had been assessed at that point, the return would have been regarded as a success. A month later, his attendance was down to about 50%, but still that was a big increase over the homeschooling of the previous year. In the fourth month, my son was sleeping 20 hours a day and struggling to get to the bathroom and eat enough in the remaining hours. And it took a year for him to recover the level of health he had started school with.

So, I think that, unless a measure of activity extends over at least three months and probably 6 months, it is not possible to be sure that any change is sustainable.

Another type of objective measure sometimes used is cytokine levels in peripheral blood. Usually a large number of cytokines are measured, creating opportunities for some random differences to be picked out and highlighted.
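
As a side illustration of the point above (my own sketch, with made-up panel sizes rather than data from any of these trials): the multiple-comparisons problem can be shown with a tiny simulation. If around 40 cytokines are each tested at p < 0.05 and there is no real effect at all, most panels will still throw up at least one "significant" difference purely by chance:

```python
import random

random.seed(1)

def spurious_hits(n_cytokines=40, alpha=0.05):
    """Under the null hypothesis each cytokine's p-value is roughly Uniform(0, 1);
    count how many fall below alpha purely by chance."""
    return sum(random.random() < alpha for _ in range(n_cytokines))

runs = [spurious_hits() for _ in range(10_000)]
print("average chance 'hits' per 40-cytokine panel:", sum(runs) / len(runs))             # about 2
print("share of panels with at least one 'hit':", sum(r > 0 for r in runs) / len(runs))  # about 0.87
```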

So, yes, I think there won't be many objective measures that really stand up to proper scrutiny in these studies.

And that's before digging into things like drop out percentages.
 
The problem is that if someone - maybe someone who has a recent diagnosis of ME - types "chronic fatigue syndrome" or "myalgic encephalomyelitis" into the Cochrane Library search box, the results don't present a coherent picture. It would be confusing for those not privy to what's been going on to see a note on the exercise review saying it's being updated, but no note on the CBT review to indicate there's a problem with it. Many will read the plain language summaries and/or not notice the note on the abstract anyway. This is an observation and a concern, not a question requiring a response.
Yes, agree. It's a weird "in-between" time for Cochrane reviews in general and for the CFS reviews in particular. There's going to be a new system that enables this easily, but it's not functional yet. And the transfer of editorial responsibility isn't complete. Should be simple, but it's not. It will happen.
 
Results from a week of activity monitoring can be similarly skewed, as can school attendance.


One of the things with school attendance is that it doesn't represent performance or how well a student is doing. So they may be there physically but not concentrating, versus more limited attendance where they are doing more learning. There can also be places in schools for children who aren't well to rest, which again isn't reflected in the figures. Crawley's LP study also used self-reported attendance, so it's not clear whether that is accurate.
So, I think that, unless a measure of activity extends over at least three months and probably 6 months, it is not possible to be sure that any change is sustainable.

I think sustained results over a long time are an important issue for any protocol, as is the reporting time used. From what I remember of many studies, early reporting suggests improvements which then tail off. So the choice of time for an endpoint can be really critical. I would prefer as long a time as possible, because I think that best reflects whether there is any meaningful improvement.

Another type of objective measure sometimes used is cytokine levels in peripheral blood. Usually a large number of cytokines are measured, creating opportunities for some random differences to be picked out and highlighted.

Whilst such measures are interesting for research, we don't have a way of interpreting whether they have any meaning, so we can't really use them as an endpoint.

So, yes, I think there won't be many objective measures that really stand up to proper scrutiny in these studies.

Yes, that is the real problem: is there any meaningful data with sufficient quality and comparability to allow a meta-analysis to make sense? Hence I think any protocol has to put a lot of effort into assessing the validity of the data (and probably into defining ways to do this). There also needs to be a willingness to simply say that the data are not sufficiently meaningful to give a comparison.
 
Is there any research on exercise therapy for ME/CFS that:

Uses objective outcomes as primary outcome measures (with clinically significant not just statistically significant differences between groups).
Uses currently acceptable diagnostic criteria that include PEM.
Includes long term follow up (at least 6 months after the end of treatment?)
Properly records adverse effects including significant worsening of symptoms.
Measures patients' adherence to the therapy.

If not, then there are no studies to be included and the only possible outcome of the review is that there are no eligible studies to review.
 
May be off topic, but I have been thinking about all this (ME patients often have to lie there unable to do anything but think, one of our few advantages, really).

Basically, we have the professional medical researchers with their published, peer reviewed studies who claim that CBT and GET do not harm patients. On the other side, you have patients who claim that these treatments can lead to severe harms, including becoming wheelchair or bed bound for life and this can happen in both adults and children. They also claim that even people who do not experience such drastic harm can become much sicker for an extended period of time.

This debate is presented to other scientists and the public as activist patients harassing researchers who are working for the benefit of patients.

When you lay it out like that it is glaringly obvious that the patient claims MUST be investigated. In no other branch of medicine do the patients have to prove a treatment is harmful. The onus is on the doctor to prove it is safe to use.

Why is this basic human right denied ME patients?

Now this is without going into whether the trials are methodologically sound or whether the researchers are well intentioned. The antivaxxers claim that vaccines cause harm and a fortune has been spent proving them wrong. Why not research our claims instead of making it a war? Even if the surveys are self-selected, how many children need to become bedridden and tube-fed before a product can be said to be harmful, or before we accept that we need to find a way to identify those who will be harmed before using the treatment universally?

These are basic points for medicine and are not confined to the debate about ME.
 
Is there any research on exercise therapy for ME/CFS that:

Uses objective outcomes as primary outcome measures (with clinically significant not just statistically significant differences between groups).
Uses currently acceptable diagnostic criteria that include PEM.
Includes long term follow up (at least 6 months after the end of treatment?)
Properly records adverse effects including significant worsening of symptoms.
Measures patients' adherence to the therapy.

If not, then there are no studies to be included and the only possible outcome of the review is that there are no eligible studies to review.
Precisely. Cochrane is the only organisation that does (occasionally) publish empty reviews. But currently they seem to view these as something to be avoided - an editorial embarrassment. I have argued that empty reviews, or reviews where the studies are mostly poor quality (i.e. most reviews), should be welcomed, as long as the review question and outcomes have been prioritised and specified by patients, not driven by researchers or by what's reported in the literature. Empty reviews could be used systematically and constructively to advocate for and prescribe better primary research - campaign documents and trial/study protocols rolled into one. These documents could be made available and/or pushed to funders, who could then focus their calls for proposals on the knowledge gaps identified by the reviews. That said, a trial on exercise for ME would now probably be considered unethical. Maybe a trial on withdrawal of GET would be better!
 
I recall when my son desperately wanted to go back to school and was encouraged to do so by a psychologist. In the first month, his attendance was very good. If he had been assessed at that point, the return would have been regarded as a success. A month later, his attendance was down to about 50%, but still that was a big increase over the homeschooling of the previous year. In the fourth month, my son was sleeping 20 hours a day and struggling to get to the bathroom and eat enough in the remaining hours. And it took a year for him to recover the level of health he had started school with.
This is a great example of the problems with adverse effects monitoring in those experiments. The timeline for observation in this disease is far longer than usual and cannot rely on single points in time; the natural fluctuation of the illness itself demands it.

The people running those experiments have no useful understanding of the illness, its course and progression. They commit basic mistakes that would normally invalidate their work in any disease not subject to this kind of discrimination, but since their misunderstanding has dominated for years, the rest of medicine is equally confused and unable to work out how to evaluate the illness properly.

I don't understand how the PACE long-term follow-up, which recognized that not only were there no benefits but there was actually an increase in disability benefits, with the authors plainly saying "this is not curative" despite having gone on a PR tour for years touting the opposite, did not cancel the very weak questionnaire-based "benefits" initially presented.
 
I think that quite a lot of the measures that might be regarded as objective are actually subjective. Certainly a 6 minute walk can be influenced by the enthusiasm the participant has for an open-label treatment or the sense of shame someone might have been taught to feel if they aren't 'morally strong enough' to recover.
And there is the non-trivial bias that was reported somewhere by a PACE participant (maybe more than one): because they wanted to show how "well" they were doing, they basically exchanged some of their normal activity in order to do better in the trial activities.
So the trial's "encouragements" to demonstrate the best improvement were actually motivating some participants to bias these supposedly objective results anyway, by transferring some of their non-trial activity across into the trial activity. Which in itself is a clear example of how open-label trials can foster subtle but significant bias, especially when the 'treatment' explicitly provokes bias-inducing behaviour.

ETA: Just realised @John Mac beat me to it, but much more succinctly.
 