James Morris-Lent
Senior Member (Voting Rights)
https://twitter.com/Chrissox/status/1222062872785801216
What makes you say that at this point? I feel like I've still got no idea of the details of this.
As far as I can see 'validation' means nothing other than that you get the same sort of answers on several trials. It means a questionnaire is probably being adequately understood. Nothing more. It has nothing to do with validation of the measures in the sense most people would think of.
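If 'validation' in this narrow sense largely means test-retest reliability, the computation behind it is nothing more exotic than a correlation between two administrations of the same questionnaire. A minimal sketch in Python, with invented scores (not data from any real instrument):

```python
# Hypothetical fatigue-questionnaire totals for 8 respondents,
# administered twice a week apart (illustrative numbers only).
test1 = [22, 18, 30, 25, 27, 15, 20, 24]
test2 = [21, 19, 29, 26, 25, 16, 22, 23]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(test1, test2)
print(round(r, 2))  # a high r here gets reported as "good test-retest reliability"
```

A high coefficient says only that people answer similarly on repeat administrations; it says nothing about whether the questionnaire measures what it claims to measure.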
Howard:
"""but the measures have generally been well validated and cover areas that matter to us and our patients and that we’d like to improve."""
What does this even mean? That new questionnaires are made sure to correlate with older questionnaires, but not too much? What is it all anchored to?
@JohnTheJack I would be curious to know what he thinks he means by this, if you are up for asking!
Robert Howard said:
"""Almost all trials in mental health and dementia have what you call “subjective outcomes”, John. I know from your tweets that you don’t like it - but the measures have generally been well validated and cover areas that matter to us and our patients and that we’d like to improve."""

I suspect Howard is well-intentioned but misguided. Subjective outcomes have their place, I'm sure, especially in psychology, but they should be backed by objective outcomes where blinding is impossible, as will often be the case for psychological interventions.
When you validate something you have to validate it against some valid datum.
Thought this was interesting.
He previously defended the PACE trial on Twitter when the HRA report came out.
This is not a curative treatment. The number of people not in work and receiving benefits of some form after treatment was 8-12% higher in each of the treatment arms than in the control group.
There was little evidence of differences in outcomes between the randomised treatment groups at long-term follow-up.
Wilshire et al said:
"""With regard to the recovery measure, we previously addressed all of Sharpe et al.’s justifications for altering these in our original paper, and see no need to repeat those arguments here (see [4] p. 8, see also [7, 8]). To summarise, Sharpe et al. “prefer” their modified definition because it generates similar rates of recovery to previous studies, and is also more consistent with “our clinical experience” ([5], p. 6). Clearly, it is not appropriate to loosen the definition of recovery simply because things did not go as expected based on previous studies. Researchers need to be open to the possibility that their results may not align with previous findings, nor with their own preconceptions. That is the whole point of a trial. Otherwise, the enterprise ceases to be genuinely informative, and becomes an exercise in belief confirmation."""

If the data matches pre-trial expectations then presumably they consider the outcome measure to be valid. If not, as with the definition of recovery, they “prefer” to change it to make it match their preconceptions. Presumably, in their minds, this then validates the new way of measuring the outcome. It is valid in the sense that it corroborates their preconceptions, which are treated as axiomatic.
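The mechanics of loosening a definition until the numbers look right are easy to demonstrate. A toy sketch with invented scores and invented cut-offs (not the actual PACE data or thresholds): the same outcome data yields very different 'recovery' rates depending on where the threshold is placed after the fact.

```python
import random

random.seed(1)
# Hypothetical 0-100 physical-function scores at follow-up for 100
# trial participants; all numbers are invented for illustration.
scores = [min(100, max(0, int(random.gauss(58, 18)))) for _ in range(100)]

def recovery_rate(scores, threshold):
    """Fraction of participants at or above the 'recovered' cut-off."""
    return sum(s >= threshold for s in scores) / len(scores)

strict = recovery_rate(scores, 85)  # a strict, pre-specified-style cut-off
loose = recovery_rate(scores, 60)   # a relaxed, post-hoc-style cut-off
print(f"strict cut-off: {strict:.0%}, loosened cut-off: {loose:.0%}")
```

Nothing about the underlying data changes; only the label "recovered" is redrawn, which is why post-hoc loosening reads as belief confirmation rather than measurement.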
You would have thought so, but not in this field.
For a professor in the field either to not understand this or to deliberately obfuscate seems to me less a matter of being misguided and more one of incompetence.
I have a terrible suspicion that part of the descent of medicine into this sort of post-truth situation is to do with an obsession with being inclusive. We are not allowed to discriminate - not even against poor quality research and incompetent teaching. Everyone is to be treated the same.
I don't think it's post-truth but rather the inability of some people to think clearly and systematically - but then perhaps this is what post-truth is.
Yes, I think this is important. One of the positives that I hope will come from PACE is that it may eventually help to improve standards in psychology and therapist-based research.

I do think his response was revealing. First, the redefinition of ME as chronic fatigue, and so essentially 'mental health', has been successful and is a key part of what has happened over the last 30 years. Second, much of the difficulty we have had in exposing the CBT-GET science as flawed is because a lot of the science around psychotherapy has been flawed. We're challenging a whole body of junk.
Moustgaard et al said:
"""Conclusion: No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors. These results could reflect that blinding is less important than often believed or meta-epidemiological study limitations, such as residual confounding or imprecision. At this stage, replication of this study is suggested and blinding should remain a methodological safeguard in trials."""
I think it is more like "religious truth", the 'truth' being ... whatever you want it to be.
Yes, I suspect people naturally have a poor ability to be consistent over repeated assessments on subjective measures. Some people will be better than others. Any one person will be better at it at some times than at others. Moreover, it is a cognitive endeavour being required of people whose symptoms typically include cognitive impairment.

Something like test/retest reliability is important as a property which asks: each time you fill in the questions, do you give the same answer for the same level of illness?
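One way to see the point about inconsistent repeated assessments: simulate respondents whose 'true' illness level is fixed but whose reported score carries day-to-day noise. As the noise grows, the test-retest correlation falls even though nobody's illness has changed. A toy sketch with invented parameters:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def simulated_retest_r(noise_sd, n=2000, seed=0):
    """Test-retest correlation when each report = true level + random noise."""
    rng = random.Random(seed)
    true_levels = [rng.gauss(50, 10) for _ in range(n)]  # illness is stable
    t1 = [t + rng.gauss(0, noise_sd) for t in true_levels]
    t2 = [t + rng.gauss(0, noise_sd) for t in true_levels]
    return pearson(t1, t2)

for sd in (1, 5, 15):
    print(f"noise sd {sd:2d}: r = {simulated_retest_r(sd):.2f}")
```

So a low test-retest coefficient in a population with cognitive impairment may reflect noisy reporting rather than a change in the thing being measured.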
Dealing with the relative aspect is likely to result in an even greater scattering of self-reported readings, because the baseline reference becomes ever vaguer.

That's a really difficult one for something like the CFQ, which is a relative measure (relative to some ill-defined point).
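The drifting-baseline problem can be sketched with a toy simulation (all numbers invented): each respondent's illness stays constant, but the remembered 'usual self' they score against wanders a little at each wave, so the spread of reported relative scores grows over time even though nothing real has changed.

```python
import random

def report_spread(n_people=5000, n_waves=6, drift_sd=2.0, seed=0):
    """Variance of reported (relative) scores at each wave when the
    internal comparison baseline follows a small random walk."""
    rng = random.Random(seed)
    reports_by_wave = [[] for _ in range(n_waves)]
    for _ in range(n_people):
        true_fatigue = 30.0  # constant: the illness itself is not changing
        baseline = 20.0      # remembered 'usual self' at wave 0
        for w in range(n_waves):
            reports_by_wave[w].append(true_fatigue - baseline)
            baseline += rng.gauss(0, drift_sd)  # recall drifts each wave
    spreads = []
    for wave in reports_by_wave:
        m = sum(wave) / len(wave)
        spreads.append(sum((x - m) ** 2 for x in wave) / len(wave))
    return spreads

spreads = report_spread()
print([round(s, 1) for s in spreads])  # spread grows across waves
```

The widening spread comes entirely from the moving reference point, which is the "elastic ruler" problem in miniature.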
Puts me in mind of trying to measure something that keeps changing in size using an elastic ruler.

It's also difficult to validate in a fluctuating illness, as a test/retest score would naturally change. Of course that isn't taking account of any recording biases.