It is - you are so much better at this than me. For some reason it never occurred to me to use web.archive.org for a Dropbox file.
I added Wayback as an extension to Chrome a while back, and so if a site comes up with a 404 error, Wayback automatically pops up and asks me if I'd like to see if it has an archived copy. And in this case, very obligingly, it did :).
 
How would you proceed with documents that cite the PACE trial, with papers that in turn cite the PACE trial, and with papers that cite (psychological) studies claiming to show that psychotherapy and activity (often exercise) are helpful in CFS?

I read a guideline on "non-specific functional and somatic symptoms" by BPS people (subsuming ME, FM, IBS and more), and the treatment recommendations throughout are CBT, GET, antidepressants and psychosomatic clinics. ONLY psychological papers are cited.

There is another German guideline doing the same.

(@Dx Revision Watch pointed to it:
http://www.awmf.org/fileadmin/user_...atoform_Bodily_Complaints_2013-abgelaufen.pdf)
 
I didn't actually spot the reference to PACE itself in that. Lots of the research claimed to show that CBT and GET are evidence-based treatments for CFS suffers from the problem of being nonblinded trials relying on subjective self-report outcomes.

It's possible for therapists to manipulate people into answering questionnaires more positively without this leading to real improvements in people's health. That's why there's great scepticism of nonblinded trials relying on subjective self-report outcomes when they are conducted by alternative medicine practitioners or pharmaceutical companies: they are not seen as reliable. For some reason, this problem often gets ignored for treatments like CBT and GET.

Jonathan Edwards wrote an article largely focussing on this here: http://journals.sagepub.com/doi/full/10.1177/1359105317700886

Tom Kindlon discussed results from more objective outcome measures in this comment: http://www.bmj.com/content/350/bmj.h227/rr-10
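
To make that mechanism concrete, here's a toy simulation (my own illustration with invented numbers, not data from any trial or any of the papers above). The "therapy" has zero real effect on an objective measure, but nudges questionnaire answers up a few points in the unblinded arm:

```python
# Toy simulation (illustration only; invented numbers, not trial data):
# in a nonblinded trial, a pure reporting bias can make a subjective
# questionnaire look improved while an objective measure shows nothing.
import numpy as np

rng = np.random.default_rng(0)
n = 300  # hypothetical participants per arm

# True physical capacity (e.g. metres walked): therapy has NO real effect.
objective_control = rng.normal(50, 10, n)
objective_therapy = rng.normal(50, 10, n)

# Self-report tracks true capacity plus noise, but the unblinded therapy
# arm rates itself a few points higher (response bias), health unchanged.
bias = 5
subjective_control = objective_control + rng.normal(0, 10, n)
subjective_therapy = objective_therapy + rng.normal(0, 10, n) + bias

for name, a, b in [("objective ", objective_control, objective_therapy),
                   ("subjective", subjective_control, subjective_therapy)]:
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    print(f"{name}: difference {diff:+.1f}, about {diff / se:.1f} standard errors")
```

The objective difference hovers around zero while the questionnaire difference comes out several standard errors "better" in the therapy arm, even though nobody's health changed. That's exactly why this trial design isn't trusted elsewhere in medicine.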
 
Thanks, @Esther12

The NFS guideline doesn't cite PACE (but lots of other papers on the effectiveness of exercise in ME); a DEGAM guideline, "Fatigue", does.

How is it possible that a guideline is accepted, and even accepted as evidence-based, if it is grounded only in papers from ONE field, here psychology?

I cannot go through all the papers, but I could pick some and check whether they meet scientific standards.

I think the same problem exists in the UK, right? How can we tackle that?

Do you think PACE will "fall" now, at last? If PACE falls, shouldn't all other papers relating to CBT and GET/exercise for ME go down too?
 
There are a lot of influential and powerful people who do not want PACE to 'fall'. The fact that they do not seem to be able to defend PACE should matter more than it does, but all we can do is keep pushing them.

There are meant to be ways that evidence is collected and assessed, like Cochrane reviews... but these also have serious flaws.

I posted this in a discussion about Cochrane:

Esther12 said:
Maybe it would be good to ask them to explain why it was that patient concerns (specifically the published responses from Courtney and Kindlon) were allowed to be dismissed without proper explanation in their review of exercise therapy? Having patients involved should mean that those at Cochrane are willing to engage in meaningful discussion and debate with patients, but so far we've seen anything but. This is particularly harmful when poor quality Cochrane reviews are then being used by other researchers to evade concerns about their own work.

The Cochrane exercise review authors just say that they want to "agree to disagree", even when (for example) they were caught wrongly claiming that "a positive effect of exercise therapy was observed both at end of treatment and at follow-up with respect to sleep (Analysis 1.12; Analysis 1.13), physical functioning (Analysis 1.5; Analysis 1.6) and self-perceived changes in overall health (Analysis 1.14; Analysis 1.15).” In fact the addition of exercise therapy failed to lead to a significant difference in self-reported physical functioning and changes in overall health (1.6 & 1.15).

(Kindlon's comment is on p117/141, followed by Courtney's) https://www.dropbox.com/s/koehut6iw2bm9v5/Larun_et_al-2017-The_Cochrane_Library.pdf?dl=0

I don't have any great or easy answers to your questions, which we've been struggling with for years. It seems that the best thing to do is keep pushing to raise standards, and hope to chip away at the quackery. I certainly think there's been a big change in the way criticism is received in the last few years though, now that a number of academics have joined with patients to raise concerns.
 
I'm reading the protocol for PACE,
https://bmcneurol.biomedcentral.com/articles/10.1186/1471-2377-7-6

Is it a false impression that the sections about "Adverse Events" sound a bit... well... strange?

In the event of an adverse event (AE), the centre leader or nominee will judge the seriousness of the event, the relationship to a trial supplementary therapy or SSMC prescribed treatment, clinical severity and the expectedness of the event.

That sounds very subjective, and it sounds as if the danger of the attitude "the patient is just simulating" is not so small. What I want to say is: the assessment of AEs seems so subjective that a possible AE might not be categorized as a serious AE, and maybe not as an AE at all. Is my impression wrong?
 
Everything about PACE is subjective from what I can see. The authors seem utterly incapable of approaching their "science" in any other way. It's as if objectivity is to SW et al what sunlight is to vampires.
 
The reporting of harms is better in PACE than in most similar trials, and to me it looks like they didn't have the sort of problems with harm within PACE that patients often report from their experiences outside of clinical trials. Tom Kindlon has done work on PACE and harms, e.g.:

From 2011, Section 6 of this focusses on the PACE trial:
Kindlon T. Reporting of Harms Associated with Graded Exercise Therapy and Cognitive Behavioural Therapy in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. Bulletin of the IACFS/ME. 2011;19(2):59-111.
http://iacfsme.org/PDFS/Reporting-of-Harms-Associated-with-GET-and-CBT-in.aspx

This was in the recent Journal of Health Psychology Special issue: 'Do graded activity therapies cause harm in chronic fatigue syndrome?': http://journals.sagepub.com/doi/full/10.1177/1359105317697323
 
Can anybody explain what this means?

As an example, to detect a difference in response rates of 50% and 60%, with 90% power, would require 520 participants per group; numbers beyond a realistic two-arm trial. Therefore, we will study equal numbers of 135 participants in each of the four arms, which gives us greater than 90% power to study differences in efficacy between APT and both CBT and GET. We will adjust our numbers for dropouts, at the same time as designing the trial and its management to minimise dropouts.
 
About the problem of subjective vs more objective outcomes, there's also @Graham's paper:
Cognitive behaviour therapy and objective assessments in chronic fatigue syndrome
Graham McPhee
June 19, 2017
Journal of Health Psychology
A review of studies incorporating objective measures suggests that there is a lack of evidence that cognitive behavioural therapy produces any improvement in a patient’s physical capabilities or other objective measures such as return to work.
http://journals.sagepub.com/doi/abs/10.1177/1359105317707215
 
Esther12 said:
There are a lot of influential and powerful people who do not want PACE to 'fall'. The fact that they do not seem to be able to defend PACE should matter more than it does, but all we can do is keep pushing them.

I absolutely agree.
But are they so powerful that even (real) scientists won't speak up and criticize unscientific work that may have wide-ranging influence? In fact, I would say the increasing psychological influence is also bad for real science.

Or do real scientists not complain because major funders are involved? (Criticizing them could stop the funding of one's own research; there are unofficial "rules of behaviour" inside the research and academic world.)

In the end, it always leads to politics and money... :(
 
Esther12 said:
The reporting of harms is better in PACE than in most similar trials, and to me it looks like they didn't have the sort of problems with harm within PACE that patients often report from their experiences outside of clinical trials.
I do wonder if they were honest about some of those who dropped out though. Being the cynic I am, I could imagine those on the PACE team perhaps "being supportive" of patients leaving the trial, if they suspected their results might go the wrong way for PACE. Was it ever possible to get any kind of results for those who dropped out, or is that impossible by definition?
 
Thank you @Barry and @Cheshire. Did you work out that I was "encouraged" to rewrite the summary? The language is far from my own, and much more tactful. Other than that, I did have a lot of support and help from others here: well, you can guess that of course!

What bugs me is that homeopathy could have been the fourth group in the PACE trial rather than their "scaredy cat" pacing, and I'm sure it would have performed just as well as CBT: an improvement in what patients think they can do (or perhaps, more accurately, a re-rating of what they can do) but with no actual, physical improvement. Then they would have been scuppered. But then, why worry, when Esther Crawley has now done it with the Lightning Process: the "evidence" for that is on a par with the "evidence" for CBT. They must be feeling that life is getting trickier. In a recent conversation, someone who is within the Holgate/Crawley interaction groups said that Crawley was being seen as a bit of an embarrassment.
 
Graham said:
Did you work out that I was "encouraged" to rewrite the summary? The language is far from my own, and much more tactful. Other than that, I did have a lot of support and help from others here: well, you can guess that of course!
No I didn't, though now you mention it I can see it. Overall it all works very well. The laying down of self-illuminating facts, in a clear logical progression, is a way of presenting and developing such arguments that I favour. Because, as I've said elsewhere, communication is not simply about transmitting information, but about it being received effectively. I suspect your teaching experience serves you well here.
Graham said:
In a recent conversation, someone who is within the Holgate/Crawley interaction groups said that Crawley was being seen as a bit of an embarrassment.
Oh goody! Keep it up Esther!
 
Can anybody explain what this means?
As an example, to detect a difference in response rates of 50% and 60%, with 90% power, would require 520 participants per group; numbers beyond a realistic two-arm trial. Therefore, we will study equal numbers of 135 participants in each of the four arms, which gives us greater than 90% power to study differences in efficacy between APT and both CBT and GET. We will adjust our numbers for dropouts, at the same time as designing the trial and its management to minimise dropouts.

Errr... not confidently. I always forget what exactly those terms mean and how those figures should be calculated. Maybe it refers to this?:

Compellingly, increasing the number of research arms increases the probability within one trial of reliably showing that at least one new treatment is superior to control, even allowing for the inevitable correlation between comparisons. With the assumption that the underlying probability that a trial reports that an individual research arm is superior to the present standard of care is 50%,6 the probability of at least one success increases rapidly as the number of groups increases (figure 1), reaching 83% with five independent research arms and a common control, and an encouraging 75% with three arms. Although higher correlations can arise (eg, when the treatment arms are assessing different durations or doses of the same drugs), the advantage persists.

https://www.ncbi.nlm.nih.gov/pubmed/25066148
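
For what it's worth, the protocol's arithmetic can be reproduced with a standard two-proportion sample-size calculation. A minimal sketch of my own, assuming the conventional two-sided alpha of 0.05 (the quoted text doesn't state it):

```python
# Reproducing the protocol's "520 participants per group" (my sketch;
# assumes a conventional two-sided alpha = 0.05, not stated in the quote).
from math import asin, sin, sqrt

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

power = NormalIndPower()

# Detecting 50% vs 60% response rates with 90% power:
h = proportion_effectsize(0.6, 0.5)  # Cohen's h for the two proportions
n = power.solve_power(effect_size=h, alpha=0.05, power=0.9,
                      alternative='two-sided')
print(round(n))  # ~518; textbook formulas differ slightly, hence "520"

# With only 135 per arm, what difference is detectable at 90% power?
h135 = power.solve_power(nobs1=135, alpha=0.05, power=0.9,
                         alternative='two-sided')
p2 = sin(asin(sqrt(0.5)) + h135 / 2) ** 2  # invert Cohen's h from p1 = 50%
print(round(p2, 2))  # ~0.69: roughly 50% vs 69%, a much bigger difference

# The multi-arm passage in the quoted paper: if each research arm
# independently had a 50% chance of success, P(at least one success) would
# be 1 - 0.5**k; their 75% (three arms) and 83% (five arms) are lower than
# these naive bounds because every comparison shares the same control group.
for k in (3, 5):
    print(k, "arms:", 1 - 0.5 ** k)
```

So the passage seems to be saying that detecting a 10-percentage-point difference would need an unrealistically large two-arm trial, and that 135 per arm only gives more than 90% power for the considerably larger differences they presumably expected between APT and CBT/GET.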
 