Copied from ReCOVer: A RCT testing the efficacy of CBT for preventing chronic post-infectious fatigue among patients diagnosed with COVID-19.
You are probably much more on the money than I am. My mind initially went to even more cynical assumptions. Goodness knows why they need a less active and a more active group if they aren't going to measure their level of activity at any point other than the start. I wonder whether he spotted something useful in the past that explains his quite specific but unusual 'approach'. I noticed this Knoop paper was a reference in the July 2021 paper Chalder was part of: https://www.sciencedirect.com/science/article/pii/S0022399913002663?via=ihub#s0075
You'll have to tell me whether I can screenshot just the two graphs in the results section that happen, weirdly, to sit side by side: Perceived activity and Objective activity (actimeter). I just find this presentation astounding. They've broken people down into 4 groups POST-HOC (my next point) based on where they fell from 'fast responders' to 'non-responders' on 'the fatigue scale'. Anyway, the non-responders' perceived activity almost reflects their objective activity. The responders' perceived activity keeps going up whilst their actual activity stays at a similar level to the non-responders'.
Strangely (having framed these results quite differently in their description), the results section ends with a question asking patients which aspect they had found most helpful, noting that after 'changing sleep hours' (72%), the next most cited was 'increasing physical activity' (65%). Yet they made it very clear that, despite including some reporting on it, actual activity was for some reason NOT a 'process variable'.
There is the following classic: "Lower pre-treatment objective activity predicted lower fatigue at the first interim measurement. This finding is difficult to explain. Levels of objective activity during treatment were not related to subsequent levels of fatigue at other measurements. In the early phases of treatment, even patients who remain severely fatigued after treatment increase their level of physical activity (see Fig. 3). So an increase in physical activity per se does not seem sufficient to reduce fatigue. An increase in perceived activity, however, does seem important." Then all sorts of confusing pseudophilosophy...
Anyway, there seem to be all sorts of strange things going on, with post-hoc groupings by fatigue level, and claims that those who refused to participate were worse off at the end than those who did (used in place of a control), despite the 291 people having been whittled down to only 183 sets of 'results' by that point. They had the pre-treatment activity level and indeed put the low-activity (non boom-bust) patients straight onto exercise without 'balancing sessions' first, so that at least was a pre-specified variable. How can people call this 'research' or an 'experiment' when it feels like throwing enough 'process variables' at a wall and analysing them under different framings until something can be made of it?
Watching someone create groups of 40 and call them 'responders' based on perceived fatigue at measuring points 1, 2 and 3 [not on any independent variable of the treatment itself or of the individuals within it], then do cross-comparisons against other similar (probably internally consistent, if we went through all their papers) dependent variables, as if that tells us something meaningful about the TREATMENT, is just another level. It starts to make you feel that this group really do think 'the fatigue scale' is an objective measure, more so than an actimeter, which is kind of what their pseudo-philosophical discussion suggests.
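Just to spell out why that bothers me, here's a minimal toy simulation (entirely made up by me, nothing to do with their actual data or analysis) of how outcome-defined 'responder' groups will happily differ on other correlated questionnaires while showing nothing on an actimeter, even when the treatment does absolutely nothing:

```python
# Toy sketch: no treatment effect is simulated anywhere, yet the post-hoc
# 'responders' still look different on a second self-report measure.
import numpy as np

rng = np.random.default_rng(0)
n = 160  # roughly four groups of 40, as in their responder split

# One underlying "how I report feeling" factor per person, plus noise.
trait = rng.normal(0, 1, n)
fatigue_improvement = trait + rng.normal(0, 1, n)        # change on 'the fatigue scale'
perceived_activity = 0.8 * trait + rng.normal(0, 1, n)   # a correlated self-report
objective_activity = rng.normal(0, 1, n)                 # actimeter: unrelated to the factor

# Post-hoc split into 'responders' vs 'non-responders' using the outcome itself.
responders = fatigue_improvement > np.median(fatigue_improvement)

print("perceived activity, responders vs non-responders:",
      perceived_activity[responders].mean().round(2),
      perceived_activity[~responders].mean().round(2))
print("objective activity, responders vs non-responders:",
      objective_activity[responders].mean().round(2),
      objective_activity[~responders].mean().round(2))
# The 'responders' come out higher on perceived activity (both questionnaires share
# the same self-report factor) but not on the actimeter, with zero treatment effect
# anywhere in the data.
```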
Despite the following being in the treatment protocol: "They are encouraged to perceive feelings of fatigue as a normal part of an active and healthy life and stop labelling themselves as a CFS patient."
The whole thing is weird, and responding to some of these papers recently I've started to get the feeling that BPS weren't/aren't really fighting for anything other than the scam of a methodology in which nothing you test can fail. The competitive advantage is being allowed to make up their own foolproof methodology, free of the regulations every other scientific area faces: cheap, fast research with no blinded controls, where the entire pre-post change counts towards 'significance', whilst science proper has to begin by subtracting the placebo/control response, so a 'double win'. Grow the kingdom, then the tail wags the dog: that department becomes the only place with capacity, so more patients get sent there, the rhetoric gets repeated to justify it, and so on.
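The arithmetic of that 'double win', with numbers I've made up purely for illustration:

```python
# Made-up fatigue scores (lower = better), just to show the difference between
# an uncontrolled pre-post claim and a control-subtracted effect.
treatment_before, treatment_after = 28.0, 20.0   # treated group
control_before, control_after = 28.0, 25.0       # natural recovery + placebo response

uncontrolled_claim = treatment_before - treatment_after                      # 8 points
controlled_effect = uncontrolled_claim - (control_before - control_after)    # 5 points

print(uncontrolled_claim, controlled_effect)
```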
Given that methodology/design/quality/validity is (or should be) somewhat universal, and the end result is sucking funding for research, and then resources, slowly out of other parts of medicine (and scientific psychology), I'm shocked that instead of sitting on their own forums taking the mick out of patients they aren't waking up to this uneven playing field. And to how it will affect them: if they don't demand parity in the right way, will it end up spilling into other areas (it seems it already is, with talk of 'less rigid' methods etc.), and erode any respect for science and quality?
It's become silly/weird: a cheap B12 injection can give one person immediately visible differences, to anyone's eyes, and changes on all sorts of objective measures over time, which people can see for themselves works for some and doesn't harm others, yet it supposedly needs a heterogeneous trial. Meanwhile, for expensive treatments, paper surveys filled in by the beneficiaries themselves, on subjective and fairly meaningless measures, with high unexplained drop-out, hidden raw data and objective outcomes, and all sorts of whittling and post-hoc analysis, are allowed to overspeak people coming out of it in a wheelchair, and require no yellow card reporting. And we let those doing the latter claim 'placebo' on the former even when the patient also self-reports feeling better.
I haven't read the above so this is speculation---
[Knoop] "The actometer is only used to monitor patients prior to treatment [and] divide [them] into an active group and a less active group."
Incredible. So actimetry seems reasonable enough for selecting participants, but it's not OK for measuring the outcome? That seems illogical.
Surely the more obvious answer is that Knoop doesn't use actimetry as an outcome indicator because it indicates the intervention doesn't work. As others have pointed out on this site, these folks [Knoop, Garner ---] know their interventions work, so the actimetry measurements must be unreliable, since they don't confirm improvement!
These folks shouldn't receive a cent of public [taxpayers'] money for their shoddy work. This crap should be relegated to the realm of nonsense that individuals are free to believe or not believe, but it has no place in public decisions, such as publicly funded health care.