A new consensus? - ME/CFS skeptic blog

Subjective self-report measures are particularly problematic when the treatment is aimed at changing cognition/perception. That is just ripe for all sorts of bias and confounding. Worst-case scenario, indeed. It actually makes blinding, or additional less subjective measures, more important, I think.

We need to distinguish between two different levels of causal knowledge: causal relationships, and causal mechanisms.

It is possible to demonstrate a causal relationship (A causes B, at least probabilistically) without knowing the process by which that occurs.

Ideally, of course, you want to know both. But you have to be able to demonstrate at least the first to claim useful science-based knowledge.

Which is the problem with using unblinded subjective measures on their own. They don't allow us to say even that much. Causation has not been revealed and clarified, so we don't know if any effect is due to the treatment, or response bias, or...? All we can say is that there is an effect, without knowing its causal nature, nor, hence, whether it is of actual practical benefit.

Which leads to this sort of guff.

Of course, the difficult thing with what BPS proponents (and perhaps others) have turned their 'CBT and other variations' into is that it is basically priming for the test. So blinding (they have one thing right with their excuses) is only 'equal' if the control arm gets the same amount of brainwashing/coercion to report differently on each and every question they will be asked, e.g. to play down their pain and be positive about how they feel that day (don't be a whinger, after all; you'll feel better for it).

None of these things are proven to make people any happier. In the older, more scientific days of psychology, telling people to put on a fake smile was known to be bad and not really helpful at all, because people needed to be allowed to be 'normal' (really normal, not the rather weird definition of normal behaviour held by those in charge) and to discuss things honestly. So to me it is all mental health harm. And the measures they are taking of 'what people report' are no evidence at all that someone is happier, just that they are reporting differently. Even purely in a mental health sense, it's BS.

It makes for a nice con of an industry for people who want to delude themselves, though. They never have to prove that what they do actually helps health.

It's like a personal trainer brainwashing their groupies into thinking they are fitter by fibbing to them about their times: it doesn't make those people any fitter or happier, and it certainly doesn't help their mental health or their support network long term. But these people believe this short-term conning is doing people a great big favour... or do they? Are they that stupid, or just really bad?

Anyway, all of this nonsense has got everyone stuck in the weeds, down in the mud, with lying and placebo nonsense that doesn't work for longer than the pretence plus ten seconds, and that backfires and harms massively, being the only thing having money spent on it. So they are all happy, because who cares whether it works if it is in that perpetual money-making circle?

I wonder whether the issue with measures is that we can't experiment on the severe, and we don't do home measures or longitudinal ones, in a condition where those who are milder can push through but then have massive objective payback over months and months. Subjective measures are a problem there, because until you are 6-12 months down the line you are so filled with determination to keep carrying on (given the abuse, the lack of sympathy, and your own self-directed prejudice or denial) that you need objective measures to realise, like the two-day CPET Workwell did for the marathon runner who had carried on training.

In those who are more severe, the impact would show up almost instantly in the objective (and, I think, subjective) measures. But insisting that symptoms are one thing or another is a problem, because someone with RLS, versus noise/light sensitivity, versus being stuck awake in up-regulation, versus being able to sleep it off, will report massively different things as 'extremely bad' versus less bad. So it is a right conundrum as long as people want to take measures that are too short and aren't pattern-based.
 
As Simon says, I think it's still an important principle that there is not a reliance only on subjective outcomes in unblinded studies. When pain is bad, people can't keep functioning as normal, not on any regular basis. Secondary objective outcomes could include:
time spent being sedentary or lying down;
medication taken to control pain;
alterations in gait;
results of cognitive tests.

These things would obviously need to be tracked over a sufficiently long period of time. If it weren't possible to rely solely on patient-reported outcomes, I bet reasonable objective outcomes would be found.


The recent study with the PhD student working with Bateman Horne was one of the first where I thought they were beginning to be onto something, including time spent horizontal etc. They had really interesting learning points from their methods and results, I think. Agreed that, given the weirdness of being adrenalined when over threshold and made to carry on, which eventually leads to stamina decreasing further, we do need something that is 'within individual' and compares patterns (such as time spent horizontal and activity), but also things that we mightn't notice, like reaction times, cognitive tests and gait over long periods of time.
 
PS: To contradict my own earlier post slightly, longer follow-up times might be less reliable in recently diagnosed people because of adaptation.

Two or three years after diagnosis with erosive arthritis my function had improved even though my impairment was worse, because I'd re-learned how to do some things. There isn't the same potential in ME, but learning pacing skills may improve pain and QoL scores, especially in people who're less severely affected.

It's not an argument against using subjective outcomes as part of the toolkit, but it's a possible confounder in mixed cohorts of long term and newly diagnosed participants.
 
I think the naming of Directive CBT and Supportive CBT as both "CBT" is a problem. When I first read about CBT being used in the PACE trial I didn't realise that CBT came in different flavours and assumed it was the supportive kind.

I wonder how many people in the general healthy population are aware that CBT isn't always the same. The same is true for people with illnesses like cancer: they might be offered the supportive CBT and not realise that the other kind even exists. I think the directive CBT should be given its own name, but I don't have any suggestions as to what that name might be.

As far as I'm aware, there isn't any fixed definition of what "CBT" actually is, and pretty much anything can be called "CBT"?
 