Brain Retraining treatment for ME/CFS and Long COVID - discussion thread

[2] patients are recovering from doing things like "brain retraining"

Not quite.
Patients who have been exposed to brain retraining have recovered.
The causal link needs demonstration in a controlled fashion.
People exposed to homeopathy recover from things.

The history of these sorts of treatments is that they never get validated, probably because they have no specific efficacy. And the recent history of non-pharmacological treatments for ME/CFS is of trials failing to demonstrate any major benefit, if any benefit at all.

In addition, a lot of the patients who are dying report having been made worse by such treatments.

Further, if brain retraining is supposed to work by suddenly getting someone who has been protecting themselves from activity for a bit longer than necessary to become more active, then what is to say they didn't get better in the end because they had protected themselves for so long? Brain retraining surely presumes a prior period of protecting, so maybe that is what does the trick.
 
Yes, good point. It had a statistically significant reduction in fatigue, but not physical function, and it wasn't maintained at follow-up.

No, it had a statistically significant reduction in self-reports of fatigue, but not in self-reports of physical function, and the reduction in self-reports of fatigue wasn't maintained at follow-up.

I genuinely don't know why you continue to treat subjective measures in open-label trials as though they are measuring the things you wish they were. I would love to understand.
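To make the concern concrete, here's a minimal simulation (all numbers hypothetical, not from any real trial) of an open-label trial with zero true treatment effect, where a small response bias on the self-report is enough to separate the arms on a fatigue questionnaire while an objective measure stays flat:

```python
# Minimal simulation of an open-label trial with ZERO true treatment
# effect. All numbers are hypothetical. The only difference between arms
# is a small response bias on the self-report: participants who know they
# received the therapy rate their fatigue slightly lower. The objective
# measure (daily steps) is unaffected.
import random
from statistics import mean

random.seed(1)
N = 150  # participants per arm (hypothetical)

def fatigue_self_report(bias=0.0):
    # True fatigue is identical in both arms; only the report shifts.
    return 25 + random.gauss(0, 4) - bias

def mean_daily_steps():
    return random.gauss(4000, 800)

control = [(fatigue_self_report(), mean_daily_steps()) for _ in range(N)]
therapy = [(fatigue_self_report(bias=2.0), mean_daily_steps()) for _ in range(N)]

print("self-reported fatigue:",
      round(mean(f for f, _ in control), 1),
      round(mean(f for f, _ in therapy), 1))
print("mean daily steps:",
      round(mean(s for _, s in control)),
      round(mean(s for _, s in therapy)))
# Typical output: the arms separate on the questionnaire but not on steps,
# the same pattern as a trial where bias, not recovery, drives the result.
```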
 
The problem is that these aren't really markers for ME/CFS, and in mild patients (who tend to go into these trials), there may be no difference in step counts or wages.

They aren't biomarkers but ME/CFS is significantly disabling and so activity levels would be expected to be a reasonable proxy. The NHS classes 'mild' ME/CFS as follows: 'You’re able to care for yourself but may have problems moving around; you may be able to go to work or school, but will not have energy to do much else.' So even at the mildest level where you're just sick enough to be diagnosed, I would expect to see differences in step counts, for instance.

But do we know that it's 'mild' patients who tend to go into trials? On the one hand, you'd have to be well enough to deal with the trial, but on the other hand, maybe you'd have to be sick enough to be motivated to take part in the first place.

And in at least some trials, you're not allowed to be in the trials if you're only mildly affected. In PACE, they specified that study participants had to score 65 or less on the SF-36 at baseline, while the norm is >90 (see the Wilshire et al. PACE 'recovery' critique).
 
They aren't biomarkers but ME/CFS is significantly disabling and so activity levels would be expected to be a reasonable proxy. The NHS classes 'mild' ME/CFS as follows: 'You’re able to care for yourself but may have problems moving around; you may be able to go to work or school, but will not have energy to do much else.' So even at the mildest level where you're just sick enough to be diagnosed, I would expect to see differences in step counts, for instance.

To this point, I definitely saw a big increase in my steps when I found a supplement that really worked for me, even though I was mild before that. I practically jumped at the chance to take long walks every day for the first time in years.

Another person I know who got a huge boost from the same supplement got up to cook for the first time in months despite being mostly bedbound previously.

When something actually works for a person with ME, it tends to be extremely obvious that it works from their sudden change in activity, regardless of their baseline severity.
 
Since the factsheets are supposed to be factsheets and free from beliefs and feelings, I think it could be very valuable if you could point out where exactly you got that impression from. Were there specific passages or wordings, or is it rather from a more general reading?

The factsheets are factsheets, so they are focused on hard evidence. Vague, speculative hypotheses referencing barely scientific notions from neuroscience and elsewhere won't make the cut. Similarly, you should note that there is no reference in the factsheets to biological mechanisms with insubstantial evidence, prevalence numbers aren't overestimated, and so on.



I think you might find that a few (though not enough) psychologists/psychiatrists are active on S4ME, that others have collaborated with psychologists (Brian Hughes, to name one), with others being friendly with psychologists/psychiatrists and others visiting them regularly.
Glad you said it before I did, as I'd be reporting myself: it's because of the psychology background, and the shame and embarrassment that poor behaviour and thinking bring on the proper scientific subject, that bad propaganda masquerading as research and claiming to be acceptable as scientific psychology needs to be called out.

Interesting to see what we are up against, in terms of how some people's thinking patterns get stuck in belief systems, and how dangerous it is/must be if patients are stuck near them, as they can't see or hear anything that doesn't correspond to old-fashioned, outdated belief systems/bigotry.

I feel like I have my answer: it's old wine in new bottles, even down to the arrows being thrown. It's clear 'brain training' is a thinly veiled rebrand by those with the same old ideas about certain people, a way to hide those ideas and pretend to themselves that they are modern.

It's worth us remembering with these sheets that this is the sad norm most/many are up against in their daily interactions, and therefore how explicit we need to be about its flaws, harms, and so on.

I can see it's a painful process for those who don't want to change, because the status quo has suited them well, however harmful, factually incorrect, and uncalled for it is, with devastating attacks on identity created by their lack of responsibility or acknowledgment of what they are knowingly doing in deliberately not differentiating. Let's be honest: I certainly feel this is just an attempt to double down on the horrific, debunked, harmful 2007 fear-avoidance guidelines that caused a dystopia.
Some examples: no mention of stress as a precipitating factor. Saying that graded exercise has been "not shown to help", then talking about surveys. Similar for CBT. That isn't an accurate portrayal of the evidence.
Which again misses out on the fact that this has been ongoing for several decades, and that many members of this forum have been at it for years. We've seen all this. Many times. This field is built entirely on bad studies, all of which were conducted after the model was invented. They invented the model before they had any evidence.

The IOM/NAM report dismissed most of it as too low quality to interpret. So did NICE. So did IQWIG. And many others. This is the peril of relying on bad evidence: the bad part far outweighs the evidence part.
I think that, technically, calling the BPS approach a 'model' is inaccurate.

At the time it was being pushed and invented by e.g. Wessely in the form it took (writing down a bigotry, then drawing a big circle and just lumping some ambiguous terms in the middle), it absolutely 'stood out' as breaking with any norm or standard of a psychology model, i.e. one constructed so as to be testable.

This was the discussion, and it is what should have been picked up on by any competent person in psychology around the early 2000s. So in a sense it presented an attack on psychology as a subject, and undermined a profession/science trying to set itself up and do things the right way. I find it shocking that people handed over their terms (like 'CBT' and 'model', terms which were about getting proper data early on to check that things weren't based on flim-flam or bigotry) and let them continue on the basis of fake claims.


So it isn't actually proper psychology either. Given that a circle on paper isn't a testable, falsifiable model, calling it a 'model' and giving it that fake privilege when it never met that standard is part of the problem/smoke and mirrors.

It never met any of the basics.

The closest thing to what the BPS thing for CFS is/was: a marketing slogan and a propaganda plan.


Weirdly, these two areas happen to be my background in study and experience, so I say this technically and factually.
 
However, are they useful in assessing a subjective outcome, such as depression from asthma?

Why would you argue that? Your claim is that if subjective measures are proved bad for one disease, we should still believe they are OK for other diseases. That doesn't make sense; the safer conclusion is to discard them unless they are proved effective.

What is also interesting is that the more objective measures don't back up the subjective outcomes in ME trials. That is further suggestion that subjective measures are bad. In fact, the most subjective scale (the CFQ) reported the best results in PACE, and the more objective the measure, the worse the results got.

I will say the CFQ is a work of complete incompetence and should never be used or trusted.
 
When something actually works for a person with ME, it tends to be extremely obvious that it works from their sudden change in activity regardless of their baseline severity.

This.

The huge weight they've been carrying has eased, and they're away.

And the idea that people wouldn't notice this is absurd. As soon as a severely ill patient needed to turn over in bed, or a moderately ill patient needed to stand up, they'd feel the marked difference in their muscle function. It's like the broadband's come back on again after being forced to rely on a faltering mobile connection.
 
Yes, good point. It had a statistically significant reduction in fatigue, but not physical function, and it wasn't maintained at follow-up.

No, it didn't. It had a statistically significant reduction in the sum of a set of arbitrary questions about fatigue, scored over an unbalanced answer scale, across two groups of patients. The notion that the sum of these questions somehow represents a good proxy for fatigue is farcical.
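For anyone unfamiliar with the scale, here's a minimal sketch of the CFQ's two published scoring schemes; the respondent is hypothetical. It shows the oddity of the unbalanced, "relative to usual" anchors: under Likert scoring, someone who feels entirely back to normal still scores a third of the maximum.

```python
# Minimal sketch of the Chalder Fatigue Questionnaire's two published
# scoring schemes. Each of the 11 items is answered relative to "usual"
# on a 4-point scale; the respondent below is hypothetical.
LIKERT = {"less than usual": 0, "no more than usual": 1,
          "more than usual": 2, "much more than usual": 3}
BIMODAL = {"less than usual": 0, "no more than usual": 0,
           "more than usual": 1, "much more than usual": 1}

def cfq_score(answers, scheme):
    """Sum of the 11 item scores under the chosen scheme (Likert max 33)."""
    assert len(answers) == 11
    return sum(scheme[a] for a in answers)

# Someone who feels entirely back to normal answers "no more than usual"
# on every item, yet under Likert scoring lands a third of the way up.
back_to_normal = ["no more than usual"] * 11

print(cfq_score(back_to_normal, LIKERT))   # 11 of a maximum 33, not 0
print(cfq_score(back_to_normal, BIMODAL))  # 0
```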
 
NICE downgraded the evidence due to indirectness (not requiring PEM). It wasn't due to the quality of the evidence itself.
That’s incorrect
It's frankly insulting and ridiculous to equate psychology with astrology and Scientology.
Actually, scientific psychology was set up to make the area above board, and some bits, despite being 'parapsychology' in the quality of their methods and critical thinking, sit under that subject precisely so that scientific psychology can test them and define what is evidenced and what is not.

BPS, and psychoanalysis before it, have historically been well known to have this problem of wanting to shake off the basic standards that would make them not 'para' and would put them among approaches tested reliably enough to be distinguished from astrology or spoon-bending, despite being given the roadmap and the choice. It's actually an ongoing 'debates and themes' topic in scientific psychology that most students will touch upon in some form.


So we get the faux outrage, and the pretence that it's a patient thing for whatever benefit that conveys, in argument based on sophism (picking out quotes of claims or inferences from a paper's abstract and sales spiel without checking whether the methodology is valid, or whether those inferences can even be made from the results) rather than on scientific argument (analysing which papers aren't fraudulent and what limitations each has, so you can see what can reliably be taken from what and what accurately applies to which population, by looking at the results and methods).

It is true that you don't want to meet the bars that would put these claims, or this area, into science, but you somehow feel you are different from those other areas that also use para methods and 'models' (not properly testable, just 'show testing').

Can you confirm how the testing standards and regulations that the BPS claims for CFS have been based on differ from those of, e.g., astrology, as regards the model being made testable at each point? (Before BPS, models were top-down and had a specific defined cause -> testable outcome, not just a big circle you can't test, which means it can never get past correlations or associations in its claims.)

And can you do so before you start throwing accusations that @rvallee is being rude, and other suggestions, just for pointing out where it sits, grouping-wise, with regard to the rigour of testing/testability of the belief system?
 
Yes, good point. It had a statistically significant reduction in fatigue, but not physical function, and it wasn't maintained at follow-up.
Statistical significance is irrelevant in the absence of clinical significance.
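A toy demonstration of the distinction (invented numbers, not PACE data): with a couple of hundred participants per arm, a 1.5-point shift on a 0-33 fatigue scale comfortably clears the conventional significance threshold, even though a change that small is well below any plausible minimal clinically important difference.

```python
# Toy illustration (invented numbers): a small shift on a 0-33 fatigue
# scale reaches statistical significance with enough participants, yet
# no patient would notice a difference that small.
import math
import random
from statistics import mean, stdev

random.seed(0)
n = 200  # participants per arm
control = [random.gauss(25.0, 5.0) for _ in range(n)]
treated = [random.gauss(23.5, 5.0) for _ in range(n)]  # 1.5-point shift

# Welch's t-statistic, computed by hand to keep the sketch dependency-free.
se = math.sqrt(stdev(control) ** 2 / n + stdev(treated) ** 2 / n)
t = (mean(control) - mean(treated)) / se
print(f"difference = {mean(control) - mean(treated):.2f} points, t = {t:.2f}")
# |t| above roughly 1.97 crosses the conventional p < 0.05 threshold here,
# but a ~1.5-point change is far below any plausible minimal clinically
# important difference on such a scale.
```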
Well the problem we have is [1] patients are dying from malnutrition
Which can be solved by helping to feed the patients and/or by tube feeding.
[2] patients are recovering from doing things like "brain retraining".
You’re implying causation again when there is none.
There are many aspects of these therapies that aren't problematic, and which seem to help patients. So what is wrong with either testing those, or offering patients advice?
Which aspects? And helpful in what way?

How many times does someone have to ask you to be specific before you do it without being prompted?
Surely death or permanent disability isn't the desired outcome here. Giving up and doing nothing until someone finds a biomarker or similar doesn't seem useful either.
Not doing an unproven treatment can be a better alternative than doing it. Do you agree?

And nobody has given up on researching the causes of ME/CFS and potential treatments. Where on earth did you get that impression from? Is it just because your pet theories are not considered viable?
The problem is that these aren't really markers for ME/CFS, and in mild patients (who tend to go into these trials), there may be no difference in step counts or wages.
I had a long list of markers and all you say is that one of them might not show a difference for a subgroup of patients?

And I never claimed they are markers for ME/CFS. They are proxies. I thought that was rather obvious, especially when I specifically stated that they are not perfect.
 
@dundrum, your participation on this and other threads has used up a lot of energy of members responding and trying to explain to you the problems with much of the BPS research. Some of the people taking the trouble to do this are very sick, and expending precious energy to try to help you understand. I think in return it is reasonable to ask you to give straight answers to 3 questions.

1. Has this discussion helped you to understand better why trials like PACE are problematic and should not be relied on as evidence that CBT and GET are of any real benefit to pwME?

2. Do you accept the evidence that GET, CBT, LP and brain retraining have harmed many pwME and should therefore be used with extreme caution and only in the context of properly run clinical trials?

3. You say your aim is to help pwME. Do you have any professional and/or financial interest in brain retraining or other therapy for ME/CFS, as a clinician, therapist, trainer, trainee, researcher, podcaster, or in another related role?
 
If you look at things like the 6mwt and the step test (which they still haven't published properly)
Yeah, where are the scatter plots? They're kind of important information. If the plots supported the claims made by the PACE authors, they would have been published in the main paper, or shortly thereafter. The fact that they are still not published 14 years later, and have not been made available for others to analyse, strongly suggests they do not support those claims.
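A toy example of why the raw scatter matters (invented walk distances, not PACE data): two groups can have identical means while the individual patterns could hardly be more different, and a bar chart of means hides that entirely.

```python
# Toy illustration (invented 6-minute-walk distances, in metres):
# identical group means, completely different individual patterns.
from statistics import mean

group_a = [330, 335, 340, 345, 350]   # everyone clustered together
group_b = [150, 160, 170, 600, 620]   # most very limited, two outliers

print(mean(group_a), mean(group_b))   # 340.0 vs 340.0: identical means
# A bar chart of means shows no difference; a scatter plot of the raw
# values reveals the divergence at a glance.
```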

but what people on here care about is whether the statistically significant difference between groups that existed at 52 weeks was maintained at long-term follow-up,
Yep. That form of study is for doing a comparison between groups, not within a group over time.

I agree that subjective outcomes are not ideal and introduce bias. The Wechsler trial is interesting, and does show that subjective outcomes are not useful in assessing asthma.
Nor ME/CFS, as was clearly demonstrated by Fluge and Mella's RCT for Rituximab in ME/CFS, which also used a robust placebo control.

The fact that the thing you're trying to measure is subjective by nature doesn't change the inherent unreliability of the subjective outcome measures. It has to be accounted for, even if it's the best you've got. And there is no law that says that your best is good enough for science. So we have to entertain the possibility that it might not be good enough, period.
This. Has to meet a minimum standard. Has not yet, and is increasingly unlikely to.
 
Yeah, where are the scatter plots? They're kind of important information. If the plots supported the claims made by the PACE authors, they would have been published in the main paper, or shortly thereafter. The fact that they are still not published 14 years later, and have not been made available for others to analyse, strongly suggests they do not support those claims.


With the step test, they published a bar chart with values, but they had earlier turned down an FoI request from Graham; @JohnTheJack may have got the numbers in an FoI. I remember Graham trying to estimate the numbers from their plot.
 
The fact that the thing you're trying to measure is subjective by nature doesn't change the inherent unreliability of the subjective outcome measures. It has to be accounted for, even if it's the best you've got. And there is no law that says that your best is good enough for science. So we have to entertain the possibility that it might not be good enough, period.

I suspect the whole mental health research industry has issues with measurement (too much subjective stuff, and also some really badly done questionnaires that shouldn't be used as measurement scales). But perhaps that just says we need better research into measurement approaches, and with the advent of reasonably good wearables (and even phones that are fairly accurate on step counts), much more is possible. Advances in AI could allow for much better recording at the time, or interpretation, of activity (things like automatic speech recognition and sentiment analysis could potentially help).
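As a sketch of what that could look like (hypothetical data format and numbers), a wearable makes a simple pre-specified objective endpoint easy to compute, e.g. the change in median daily step count between a baseline week and a follow-up week:

```python
# Minimal sketch (hypothetical data format) of an objective endpoint a
# wearable makes easy: change in median daily step count between a
# baseline week and a follow-up week for one participant.
from statistics import median

def step_change(baseline_days, followup_days):
    """Difference in median daily steps; the median resists odd outlier days."""
    return median(followup_days) - median(baseline_days)

baseline = [2100, 1900, 2300, 2050, 1800, 2200, 2000]  # hypothetical counts
followup = [2150, 2000, 1850, 2250, 1950, 2100, 2050]

print(step_change(baseline, followup))  # 0: no objective change
```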

But if measurement systems are not robust enough to give reliable results then doing research on humans using them is unethical.
 
But do we know that it's 'mild' patients who tend to go into trials? On the one hand, you'd have to be well enough to deal with the trial, but on the other hand, maybe you'd have to be sick enough to be motivated to take part in the first place.
I'm mild, currently working full-time, and have a family to care for. Taking part in a study doesn't easily fit into my energy budget. I'm not saying that it fits into the energy budget of the more severe patients either, but the decision-making tree could look different for each of us. For example, if I had been on disability I might not lose (more) money by having a period of increased exertion and a potential crash, and I wouldn't have to plan resting around work. Those are two things that prohibit me from taking part in a study (well, that and the fact that the studies closest to me are of no interest to me/I'm not in the demographic they are looking for).
 
Why is that?

My intuition agrees, but I struggle to express why atm.
I would say that using measurement systems like the Chalder Fatigue Questionnaire as a primary outcome measure in a clinical trial for ME/CFS would be unethical, because it cannot provide reliable, relevant, or statistically analysable data.

That means that you are asking people to put themselves through a disruptive and potentially harmful treatment programme on the promise that it will provide useful information to help their fellow sufferers, when in fact it cannot provide useful data using that questionnaire. Patients are therefore being misled into risking their health to no purpose. They have been asked to sign consent at the start to something that is useless and may harm them, on the basis of false information.
 
@dundrum, your participation on this and other threads has used up a lot of energy of members responding and trying to explain to you the problems with much of the BPS research. Some of the people taking the trouble to do this are very sick, and expending precious energy to try to help you understand.

I've been thinking how useful it would have been during all this to have had a factsheet on why BPS trials produce results that can't be trusted, and that explicitly tackles the profoundly illogical things that proponents say to defend those results (such as 'You can't do unbiased trials therefore you have to trust the results').

We could have saved ourselves the time and energy of producing what has now probably been tens of thousands of words explaining the issues and just pointed to the factsheet instead.

It continues to astonish me that no such paper appears to exist in the literature.
 