One understanding of its meaning is as adaptive pacing therapy, which is facilitated by healthcare professionals, in which people with CFS/ME use an energy management strategy to monitor and plan their activity, with the aim of balancing rest and activity to avoid exacerbations of fatigue and other symptoms.
They just can't stand the thought that patients may actually be more knowledgeable and competent than the 'expert pros'.

They are just arbitrarily inserting themselves, for no good purpose. It is just empire building, income generation, and egos.
 
Agreed. Real pacing is a very subtle balancing act between energy conservation and expenditure, with look-ahead to account for delays between cause and effect.

The previous definition is very good to my mind ...
In this guideline, pacing is defined as energy management, with the aim of maximising cognitive and physical activity, while avoiding setbacks/relapses due to overexertion. The keys to pacing are knowing when to stop and rest by listening to and understanding one's own body, taking a flexible approach and staying within one's limits; different people use different techniques to do this.
... and fits extremely well with how I observe my wife's approach to pacing. The APT advocates have no clue (nor the humility to recognise) how their "adaptive" modifier completely changes the strategy, such that it is no longer pacing in any recognisable sense.
 
Henrik Vogt is tweeting today



It is tempting to just laugh it off, but he is an MD and has started a "patient organisation" of people who have recovered from conditions by their own means or by undocumented treatments (in practice, ME patients who recovered via the Lightning Process, though that isn't said out loud because advertising alternative treatments, including sharing success stories, is not allowed in Norway).

People listen to him, and he continues to smear ME patients who are critical of the Lightning Process and the BPS approach to ME.

Edit to add: The tweet might be a reaction to a recent and critical article in the Journal of the Norwegian Medical Association about the PACE trial.
There is a post about that article in the thread Rethinking the treatment of chronic fatigue syndrome - A reanalysis and evaluation of findings from a recent major trial of graded exercise and CBT, with a Google translation.
 
Dr Vogt is right, with a slight twist:

#mecfs: Those who think that current criticisms of the PACE trial are about a fair scientific discourse, read @TheLancet editorial from 2011. It is part of an aggressive campaign to discredit anything that smells of scientific argument.

There is nothing but surmise in that editorial. And the editor needed to save face.
Vogt is interesting because he is so unable to understand his own psychology. A perfect match for Sharpe.
 
Action for ME:

The PACE trial and behavioural treatments for M.E.

August 29, 2018


This statement sets out our position on the PACE trial and behavioural treatments for M.E.


In the past, Action for M.E.’s strategy was to support all forms of research into M.E. As part of this strategy, in 2007 the charity was asked to be involved in a large-scale research project, the PACE trial, which compared standardised specialist medical care (SMC) alone, with SMC plus adaptive pacing therapy (APT), cognitive behavioural therapy (CBT), or graded exercise therapy (GET) for people with M.E./CFS.

“I am sorry that the charity did not advocate for this considerable level of funding to be invested in biomedical research instead. It was never our intention to contribute to any stigma or misunderstanding about the illness and I sincerely apologise to those who feel that, in not speaking out sooner and more strongly, we have caused harm.

“Our position on recommending treatment and management approaches for M.E. is set out below and, over the coming months, we will review all our printed and online information to reflect this. This is no small task, but one that the team will prioritise and complete as quickly and comprehensively as we can.

“We will learn from our past mistakes. We will continue to provide practical support to our Supporting Members and others with M.E., to challenge the stigma and neglect they experience, and work with professionals and policy-makers to transform the lives of children, young people and adults with M.E. in the future.”

https://www.actionforme.org.uk/news/pace-trial-and-behavioural-treatments-for-me/



Edit: thread here
 
Bear with me, as this is - eventually - about PACE. More as an interesting exercise than as any in-depth investigation.

I'm getting into @Brian Hughes' book Psychology in Crisis, and have been grappling with the idea of the null hypothesis. I need to re-read and further inwardly digest it before I fully understand what seems simple in principle, but which apparently many psychological researchers completely misunderstand. One of the points Brian makes is that although a clear understanding of a study's null hypothesis is crucial, many psychology researchers tend to short-circuit that bit and jump straight to, and only consider, their main hypothesis.

As I understand it for PACE, the main (i.e. alternative?) hypothesis is along the lines that patients are locked into a vicious circle of being deconditioned, reinforced and perpetuated by activity avoidance. GET reverses that vicious circle and thereby reconditions patients.

So what would the null hypothesis have been? Was it ever reported? What would the p-value have been for data pertaining to that null hypothesis? Was it reported for PACE?

Presumably the null hypothesis would be something along the lines that patients are not deconditioned, and there is no vicious circle to break out of. GET would therefore have no beneficial effect.

I'm well aware that correctly stating the alternative and null hypotheses is crucial, and that I for sure will not have achieved that here. But I'm interested to see if anyone else finds this aspect of PACE interesting, and wants to add their two penn'oth.

Given we are extremely confident this null hypothesis is in fact true, this would presumably have to mean that a soundly constructed and operated trial would produce data showing a very high p-value for that null hypothesis. What would the p-value be for PACE?

I'm also (now) aware that p-values can be hugely distorted as a consequence of bad trial methodology etc.

Am I on the right lines here?
 
@Barry

As I understand it for PACE, the main (i.e. alternative?) hypothesis is along the lines that patients are locked into a vicious circle of being deconditioned, reinforced and perpetuated by activity avoidance. GET reverses that vicious circle and thereby reconditions patients.

This is the abstract paradigm/'theory' behind the study but you wouldn't put this language in a hypothesis for statistical testing; it's not quantifiable.

Instead you would say something like:
Null Hypothesis: "CFS patients treated with (CBT, GET, or APT) + SMC will not show any more improvement (i.e. equivalent improvement or less improvement) on (Chalder Fatigue Scale, SF-36 score, etc.) than patients treated with SMC alone."
Alternative Hypothesis: "CFS patients treated with (CBT, GET, or APT) + SMC will show greater improvement on (Chalder Fatigue Scale, SF-36 score, etc.) than patients treated with SMC alone."

Then you can run an experiment, and calculate a test statistic and a p-value from your observed results. A p-value is a measure of how unusual your observed result is, assuming the null hypothesis is true - i.e. is it reasonable to put the deviation of your observed results from the null hypothesis down to random chance or not? If p = .001, there is a 1/1000 chance that you would get a test statistic as extreme as, or more extreme than, the one you got if the null hypothesis were true - so it's reasonable in this case to conclude that the null is not true.
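If it helps to see the mechanics, here is a minimal sketch in Python. The group names, the made-up fatigue scores and the plain two-sample t-test are all illustrative assumptions on my part, not PACE's actual data or its pre-specified analysis:

```python
# Illustrative sketch only: invented data, not PACE's data or analysis plan.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical end-of-trial fatigue scores (lower = less fatigue)
smc_alone = rng.normal(loc=24, scale=6, size=150)      # comparison arm
cbt_plus_smc = rng.normal(loc=21, scale=6, size=150)   # treatment arm

# Null hypothesis: the two group means are equal.
# The t-test asks how surprising the observed difference would be
# if that null hypothesis were true.
t_stat, p_value = stats.ttest_ind(cbt_plus_smc, smc_alone)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value only says the difference is unlikely to be chance;
# it says nothing about *why* the reported scores differ.
```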

(PACE's p-values are in table 3 at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065633/)

PACE rejected the null hypothesis for CBT+SMC and GET+SMC. This simply means that it is very unlikely that the results (more improvement on Chalder scale/SF-36 in these groups vs SMC alone) could be explained by random chance. But does this support the abstract theory? Is the improvement on the questionnaires a reliable indicator of improvement in the patients' disease brought about by addressing unhelpful beliefs and reconditioning? No, because of all of the many problems that have been pointed out.

Something to keep in mind is that, even without some of the weird issues specific to PACE, we should expect these results to be quite easily replicable, because of (a) the unblinded-treatment/subjective-outcome problem - CBT or GET is presented as the intervention that's supposed to help, so patients receiving it will be biased to say they are better - and (b) the fact that CBT and GET train patients to say they are better, so it's not an interesting result that patients say they are better after being treated. These results cannot be taken to support the 'deconditioning/unhelpful beliefs' hypothesis.
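A toy simulation makes point (a) concrete. Everything below is invented for illustration: the "treatment" has no real effect at all, but a modest self-report shift in the unblinded arm is enough to produce a "significant" p-value:

```python
# Illustrative sketch only: no real treatment effect, just reporting bias.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150  # per arm (made-up)

true_severity_control = rng.normal(30, 8, n)
true_severity_treated = rng.normal(30, 8, n)   # same distribution: zero true effect

reporting_bias = 2.0                           # treated arm rates itself slightly better
reported_control = true_severity_control
reported_treated = true_severity_treated - reporting_bias

t_stat, p_value = stats.ttest_ind(reported_treated, reported_control)
print(f"p = {p_value:.4f}")  # frequently < 0.05, despite the null being true for the disease itself
```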

I hope that's not too muddled. I'm not learned on all of the particulars of PACE methodology so I hope others who are will clean up any mess I made, but I think this will help to conceptualize the basics.

Barry, a book I found helpful for learning the basic concepts of statistics is Statistics by Freedman, Pisani, and Purves. I think you would find it quite helpful in sorting out your questions about hypothesis testing, and it's quite fun to work through!
 
@James Morris-Lent has explained it very clearly.

To put it even more bluntly (and to over-simplify), the statistician is not interested in the theories behind the study. Nor are they interested in what the treatment involves.

The question their statistical test is addressing is, 'is there a statistically significant difference between the means of the two sets of data?'. It doesn't matter to the statistician whether the data represent the number of cups of tea the patients are drinking in a year, or the number of steps they can walk in a day - it's just numbers.

The null hypothesis is 'there is no difference between the means', ie any difference is just a result of chance variation.

You then work on the basis that this is true and calculate the probability of getting the observed difference by chance. If that probability is very small, you reject your null hypothesis and conclude that the treatment group has a 'statistically significant' difference in outcome from the untreated group.
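For anyone who likes the notation, the same idea written compactly (generic two-group symbols, not anything taken from the PACE protocol):

```latex
% Generic two-group comparison, not PACE-specific notation
H_0:\; \mu_{\text{treated}} = \mu_{\text{control}}
\qquad
H_1:\; \mu_{\text{treated}} \neq \mu_{\text{control}}

p = P\!\left(\,|T| \ge |t_{\text{obs}}| \;\middle|\; H_0 \text{ true}\right),
\qquad \text{reject } H_0 \text{ if } p < \alpha \;(\text{e.g. } \alpha = 0.05)
```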

It's then over to the researcher to decide whether the statistically significant difference in the number of steps or cups of tea is clinically significant and whether it supports or disproves their pet theory. They should also take into consideration confounding factors like whether the treatment involved persuading the patients to drink more tea or take more steps... But that's another story.
 
This is why I asked Sharpe this question:



A trial can only tell you whether one treatment is better than another on the outcome measures you have chosen. It cannot test your underlying theory: that should already have been established in previous experiments, case studies etc. It also cannot tell you how much bias is involved in those measures. Again, that work should already have been done before it goes anywhere near a trial. If the measures you are using have not been properly validated for use in that circumstance, the stats are not going to tell you anything useful.
 
I guess statistical methods are simply a tool set, and it's the person using the tools who is responsible for their proper use - just as a hammer cannot be blamed for smashing a priceless vase when the user was trying to knock in a nail nearby.
 
This is why I asked Sharpe this question:



What was Sharpe's reply? Or did he block you?
 
So if the PACE hypotheses had been based purely on truly objective outcomes, rather than the usual subjective ones, then the p-value would presumably have come out very high, assuming all other things were done properly.

I know it's been mentioned in the past that the statistical analyses done for PACE are OK, which may be true so far as it goes. From the comments above, the statistical analyses are essentially specialist number-crunching exercises; even if done with 100% competence, if the input data is rubbish then it's a case of garbage in, garbage out.

So what is the scope of a trial statistician's remit? Is it purely the statistical calculations themselves? Or is the statistician supposed to also understand and advise on what might constitute sane input data? Is the statistician supposed to understand trial methodology better than the trial authors themselves? And even if the statistical analyses themselves have been done correctly, is it still valid for a statistician to say that a trial's stats are all OK, even if the input data is clearly flawed? Where is the "trial stats" boundary? What is the accepted scope?
 
I second these questions. For me, it seems like the value of having people trained in statistics is not so much in doing the sterile number crunching - for that you should just click a button and a computer script calculates and outputs the results - but in critically examining whether the input is valid for testing and then, if statistical testing is warranted, making sure that the conclusion accurately reflects what the test is telling you.

From PACE
When added to SMC, CBT and GET had greater success in reducing fatigue and improving physical function than did APT or SMC alone
Should say, at best: "When added to SMC, CBT and GET had greater success in inducing participants to register improvements on questionnaires that we would like to think reflect fatigue and physical function."
 
I would expect statisticians to have a mathematical training background. So I would expect a statistician to check whether the methods they apply actually can be applied, i.e. to check whether the underlying theory holds. That's what numerical mathematicians always do (and we made "jokes" about situations where this hadn't been done, e.g. an oil platform that cracked on the open ocean because the meshing wasn't done correctly according to the theory). I would expect the statistician to report to the researcher if they think the data or hypothesis don't fulfil the criteria - because then the theory no longer holds.

E.g. computing the p-value is only possible if you assume the null hypothesis is true (this is where Type I and Type II errors come in).
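For reference, the standard textbook definitions (nothing PACE-specific) are:

```latex
% Type I error: rejecting a null hypothesis that is actually true
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true})

% Type II error: failing to reject a null hypothesis that is actually false
\beta = P(\text{fail to reject } H_0 \mid H_1 \text{ true}),
\qquad \text{power} = 1 - \beta
```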
 

I agree these are key questions. The problem is that people trained in statistics have no particular reason to understand the more basic problems of trial design that arise from sources of bias - human nature. You cannot use numbers to assess the likelihood of bias. The jobs you want done are not jobs for statisticians. Which is why I was disappointed that the BBC said they were not going to do a Newsnight programme because they had asked a statistician who said that PACE was not too bad.

I think it may have been a mistake to focus on statistical issues with PACE in the first place, but it got people interested and there is no doubt that there are major statistical problems as well.

As with all these things it looks as if statisticians are increasingly oiling the wheels of garbage in garbage out. Statisticians tend to work on projects they are asked to look at by people who want positive results. They are asked 'how can we show this is significant'. Statisticians who reply 'I think you may be cherry picking' will not get asked again.
 