Discussion in 'PsychoSocial ME/CFS Research' started by Andy, Jan 12, 2018.
This is a hard hitting piece.
He's got a summary near the beginning, though it's still good to click through so his blog gets the hits:
I am sure I will enjoy reading this in due course. But I have a sneaking suspicion this is just a rehash of Esther12's PM to me about three years ago, which I had the opportunity to rehash for the Journal of Health Psychology. PACE started off bad, but every opportunity to mitigate that badness was turned around into a synergistic worsening.
This one though does sound a bit Pythonesque. Sort of 'if you think Fatang-Fatang is a mouthful try Harquin Fim Tim Lim Bim Bus Stop Fatang Fatang Ole Biscuit Barrel'.
You got it. This one has a few extra surprises though. The timing of assessment varied between the control group and the intervention group, and the authors dismissed the need to collect data on potential harms. Both of these sound highly unusual to me (at least in the context of fairly consistent reports of harm in patient surveys).
Sounds like this is going to be the most amazing blog of all time!
This alone shows how lacking in rigour they are, to put it mildly.
I enjoyed that. No rehash! I don't remember reading much from other people about this study previously. Maybe because it came out when everyone was PACE obsessed? I don't think I've ever read the paper referred to.
Paper is currently here: http://www.karger.com.https.sci-hub.tv/Article/Abstract/438867
Does it seem weird that this paper was only published in 2015, when the trial registration says it was completed in 2006? http://www.isrctn.com/ISRCTN15823716
Interesting that they said this:
I thought this group had generally been pretty gung-ho with that - even more so this time?!
Great to see more attention being brought to this:
Weird about the different times between baseline and second assessment. I don't think I've ever seen that before.
In some ways, I wondered if that would actually make much difference, seeing as the wait-list control group was already so poor at controlling for nonspecific effects?
Some parts I was a bit less sure on:
Personally, I'm not too keen on what I've seen of Beck's 'collaborative empiricism', and I suspect Knoop & co could claim they use some form of it, although I probably need to look into this more. The problems we see around ME/CFS often make me think the worst of related work, and I always fear that supposedly 'collaborative' approaches can end up being used to manage patients according to the assumptions of the professional. [edit: I was thinking about this last night - while they may not explicitly distance themselves from 'collaborative empiricism', their approach to CBT does sound quite different.]
Is this bit right?:
Fair to say that they seem to consider the Oxford criteria obsolete, but all the other various criteria still seem to be floating around. The * links to the concern that patients may not have been properly assessed before entry into the trial, but that doesn't make the criteria themselves obsolete. I admit that I don't really know what I'm talking about with all the different CFS criteria, but this point seemed overblown to me.
I'm often wary of debates about defining ME/CFS (and including things like the quote from Hooper) within the context of this sort of research (although I recognise these discussions can be important in other social and political contexts). There is still a lot of uncertainty around ME/CFS and how it should be defined, and if these trials were providing good evidence that their intervention was helping some group of patients, that would still be of interest. What most concerns me is that problems with the research so often make it impossible to say that anyone is being helped. Some of my caution with this point may stem from years of watching discussions being taken off-track by irrelevant 'CFS and the stigma of MH' warbling.
It's great that a negative analysis has been published first by someone outside of the ME community - it shows the tide is turning and that other people are now looking more closely at the way trials have been carried out and reported.
James Coyne has been blogging about ME and other psychological research for a while. I'm pleased too that he's exposing the flaws in the research, but I don't know what the readership is for his blogs.
I'm not sure the people who need to hear this analysis would regard blogs as 'published' in the sense that they are not peer reviewed.