PACE trial TSC and TMG minutes released

The point that Likert made is that if you want to measure someone's views on A, you may wish to ask them basically the same question multiple times, because they may make errors or not understand the question. So you ask the same question multiple times using different words and add up the answers.

I am sceptical that this should have a major impact on assessment tool structure. If it is a matter of understanding then presumably one should bracket together all questions that you think are asking the same question in different ways and score one piece of information if any answer is convincingly positive. (My understanding is that some questionnaires are set up with similar questions used not to add but to check that the answers are consistent.) Scoring more than one point for several answers seems to me inefficient and open to problems.
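A minimal sketch of the contrast being drawn here, using made-up item names and yes/no answers rather than anything from an actual questionnaire: additive Likert-style scoring lets several wordings of the same question each contribute a point, whereas the bracketed approach scores the block once if any wording is convincingly positive.

# Hypothetical example only: three wordings of essentially the same question,
# each answered 0 (no) or 1 (yes).
responses = {"feel_tired": 1, "lack_energy": 1, "feel_worn_out": 0}

# Likert-style scoring: add up the answers, so near-duplicate items stack.
likert_score = sum(responses.values())          # 2

# Bracketed scoring: the block contributes a single point if any of the
# equivalent wordings is convincingly positive.
bracketed_score = int(any(responses.values()))  # 1

print(likert_score, bracketed_score)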

I think we may have discussed this before but not quite in this context. We talked about how you get a single primary outcome measure that has the advantages both of subjective symptomatic questions and objective backup. The issue I have in mind here is to minimise the chance of trivial questions open to major subjective bias giving an apparent effect across a comparison of two cohorts. Maybe a solution is to do a pre-treatment questionnaire and decide which answers are independently relevant, then repeat the questionnaire just checking those questions. People worry about customising assessment to each patient but it is perfectly OK.
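One way to picture the suggestion of customising the follow-up to each patient, sketched with hypothetical items and answers (not any real PACE instrument): use the baseline questionnaire to pick out the items that are actually relevant for that patient, then re-score only those items at follow-up.

# Hypothetical baseline and follow-up answers, 1 = symptom present, 0 = absent.
baseline = {"pain": 1, "fatigue": 1, "poor_sleep": 0, "low_mood": 0}
followup = {"pain": 0, "fatigue": 1, "poor_sleep": 0, "low_mood": 1}

# Keep only the items this patient endorsed before treatment.
relevant = [item for item, answer in baseline.items() if answer]

# Change is assessed on those pre-specified items alone; the newly reported
# "low_mood" does not enter the score because it was not relevant at baseline.
improvement = sum(baseline[item] - followup[item] for item in relevant)

print(relevant, improvement)   # ['pain', 'fatigue'] 1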
 
But I think there are assumptions here that the pieces of evidence are IID (independent and identically distributed).

If you want to apply a strict Bayes equation, yes, but the pragmatic solution of the ACR does not need to be that precise. You certainly want the ABCD to be telling you something potentially independent, but I doubt that a real-world application of this sort of confidence question involves completely independent evidence, does it?
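To make the independence worry concrete, here is a toy Bayes calculation with invented numbers (nothing to do with the actual ACR criteria): if three pieces of evidence are really near-duplicates of one another, multiplying their likelihood ratios as if they were independent overstates the resulting confidence.

prior_odds = 1.0        # hypothetical 50:50 prior
lr_per_item = 3.0       # hypothetical likelihood ratio for each of 3 items

# Treating the items as independent multiplies the likelihood ratios.
posterior_odds_independent = prior_odds * lr_per_item ** 3   # 27.0

# If the items are highly correlated, they carry roughly one item's worth
# of information between them.
posterior_odds_correlated = prior_odds * lr_per_item         # 3.0

def odds_to_prob(odds):
    return odds / (1 + odds)

print(odds_to_prob(posterior_odds_independent))   # ~0.96
print(odds_to_prob(posterior_odds_correlated))    # 0.75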
 
The issue I have in mind here is to minimise the chance of trivial questions open to major subjective bias giving an apparent effect across a comparison of two cohorts.
Are we saying here we don't know at the time of asking that the question is trivial? Else why ask it?
 
If you want to apply a strict Bayes equation, yes, but the pragmatic solution of the ACR does not need to be that precise. You certainly want the ABCD to be telling you something potentially independent, but I doubt that a real-world application of this sort of confidence question involves completely independent evidence, does it?
It almost sounds like we are getting into needing some confidence level of their independence?! ...
 
jennysunstar @jennysunstar · 3 hours ago
Replying to @keithgeraghty
Keith please look at MM p58, FMS patients actively recruited to trial, confirmed by Dept of Health, GPs given financial inducements to get FM patients to join trial.
http://www.margaretwilliams.me/2010/magical-medicine_hooper_feb2010.pdf

Even with the inclusion of FM pts, the results are still dire. I know that some with FM do find benefit from exercise, but am I right in thinking that it still needs to be approached really carefully, because overdoing it can result in a symptom flare?
 
Even with the inclusion of FM pts, the results are still dire. I know that some with FM do find benefit from exercise, but am I right in thinking that it still needs to be approached really carefully, because overdoing it can result in a symptom flare?

Yes, I think that is right, but a friend who has FM has to exercise every day or she gets worse, yet if she overdoes it she gets worse too. So it could be that the FM people in the trial skewed things such that some exercise appeared beneficial. The results are so poor that having some trial participants with FM could have been significant?

Whatever, it is totally wrong to actively tout for people with a different disease for a trial and palm them off as having ME (ie anyone reading the PACE paper is unaware this is the case).
 
Yes, I think that is right, but a friend who has FM has to exercise every day or she gets worse, yet if she overdoes it she gets worse too. So it could be that the FM people in the trial skewed things such that some exercise appeared beneficial. The results are so poor that having some trial participants with FM could have been significant?

Who cares?

They used the most non-specific criteria possible (Oxford) and still had zero change on the step test and trivial changes on the 6-minute walking distance test (albeit one which wasn't conducted in a high-quality manner - it has been criticised because making people walk up and down a short corridor is a poor choice for this test). The changes on the self-report questionnaires were minimal, basically the smallest difference they could possibly report as statistically significant, and such small changes could easily be influenced by other biases, such as differences in encouragement between groups leading to a difference in questionnaire results.
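A small simulated illustration of that last point (random numbers, not PACE data): with a few hundred participants per arm, a shift of a few points on a 0-100 self-report scale can be statistically significant while remaining easily within the range that reporting biases could produce.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300                                            # hypothetical participants per arm
control = rng.normal(loc=50, scale=10, size=n)     # 0-100 questionnaire scale
treated = rng.normal(loc=53, scale=10, size=n)     # 3-point average shift

t, p = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.1f} points, p = {p:.4f}")
# A ~3-point shift on a 0-100 scale comes out statistically significant here,
# yet says nothing about whether the change is clinically meaningful or free
# of response bias.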
 
Who cares?

They used the most non-specific criteria possible (Oxford) and still had zero change on the step test and trivial changes on the 6-minute walking distance test (albeit one which wasn't conducted in a high-quality manner - it has been criticised because making people walk up and down a short corridor is a poor choice for this test). The changes on the self-report questionnaires were minimal, basically the smallest difference they could possibly report as statistically significant, and such small changes could easily be influenced by other biases, such as differences in encouragement between groups leading to a difference in questionnaire answering behaviour.
Also, for PwME, any supposedly objective measures have to adequately account for cumulative energy drain effects. At least one PACE participant reported that in order to do the 6MWT they effectively "saved up" their energy beforehand, and did less afterwards. So if their overall energy expenditure had been measured over one or more days, the result would probably have been significantly different.
 
I think it's a shocking admission that they based their decision not to use actigraphy as an outcome measure on it not showing improvement in the Dutch study.

It makes their approach to 'science' crystal clear - only use the outcome measures that show what we want them to show to 'prove' what we want to prove, not what is best for patients or good science.
 
The main thing I get from reading these minutes is how shockingly badly they understand the disease they're studying. They keep tripping over red flags - left, right and centre - and yet are so sure in their convictions that they just ignore them and carry on. It's quite baffling.
 
Also, for PwME, any supposedly objective measures have to adequately account for cumulative energy drain effects. At least one PACE participant reported that in order to do the 6MWT they effectively "saved up" their energy beforehand, and did less afterwards. So if their overall energy expenditure had been measured over one or more days, the result would probably have been significantly different.
As with everything involving these 'experts', to recognise this they would need to have a basic understanding of the illness which they clearly do not have.
Cross posted with @Lucibee
 
I think it's a shocking admission that they based their decision not to use actigraphy as an outcome measure on it not showing improvement in the Dutch study.

It makes their approach to 'science' crystal clear - only use the outcome measures that show what we want them to show to 'prove' what we want to prove, not what is best for patients or good science.
Yes, it seems that objectivity is not just alien to them in terms of the outcomes they choose, but alien to them full stop. These folk are religious preachers, not scientists.
 
The main thing I get from reading these minutes is how shockingly badly they understand the disease they're studying. They keep tripping over red flags - left, right and centre - and yet are so sure in their convictions that they just ignore them and carry on. It's quite baffling.
Feels to me it's not just the disease they don't understand, but the scientific process itself. Not claiming I do, but these folks really really should.
 
They are doing things that they simply should not be doing. Here is that step test account from TMG #16:
[Attached image: TMG_16_p5_steptest.png]
The whole point of the step test is that it cannot be "paced". It is carried out using a metronome to make sure that it is NOT "carried out at a pace that suits the patient"!!! Aaargh! :banghead::banghead::banghead: If the patient can't keep up for 15 seconds, the test is stopped, and that early stop is then incorporated in their fitness score (along with their sky-high HR measurements).

And if they can't carry out a test safely, maybe they should have questioned whether ANY exercise was appropriate in these patients? Jeez!


[Update: Sasha has corrected me. This wasn't the Harvard Step Test. See https://www.s4me.info/threads/pace-trial-tsc-and-tmg-minutes-released.3150/page-9#post-57499]
 