
More PACE trial data released

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by JohnTheJack, May 7, 2019.

  1. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK

    I was hoping for the 5 dimensions, which is really what the EQ-5D scale is. The single value is then a country-specific interpretation. A while ago there was a paper pointing out that some treatments are only worthwhile in some countries but not others, because the utility function applied to the dimension scores is country specific and depends on how healthy people in that country feel about potential disabilities. If I remember correctly, the utility function is computed from a regression over question answers, which leads to different residual errors in different areas.

    I assume it would be interesting to understand the different dimensions and also how they map to value in different countries.
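    A rough sketch of that mapping, assuming a simplified additive value set. The decrement numbers below are illustrative placeholders in the style of the UK MVH tariff, not an actual country tariff:

```python
# Sketch: mapping an EQ-5D-3L profile (5 dimensions, levels 1-3) to a single
# utility value via a country-specific value set. The decrements here are
# illustrative only; real tariffs are estimated by regression on population
# valuation surveys, which is why they differ between countries.

ILLUSTRATIVE_TARIFF = {
    "mobility":       {1: 0.0, 2: 0.069, 3: 0.314},
    "self_care":      {1: 0.0, 2: 0.104, 3: 0.214},
    "usual_activity": {1: 0.0, 2: 0.036, 3: 0.094},
    "pain":           {1: 0.0, 2: 0.123, 3: 0.386},
    "anxiety":        {1: 0.0, 2: 0.071, 3: 0.236},
}
CONSTANT = 0.081  # deducted once if any dimension is worse than level 1
N3_TERM = 0.269   # extra deduction if any dimension is at level 3

def eq5d_utility(profile):
    """profile: dict mapping dimension name -> level (1 = no problems .. 3 = severe)."""
    if all(level == 1 for level in profile.values()):
        return 1.0
    utility = 1.0 - CONSTANT
    utility -= sum(ILLUSTRATIVE_TARIFF[dim][lvl] for dim, lvl in profile.items())
    if any(level == 3 for level in profile.values()):
        utility -= N3_TERM
    return round(utility, 3)

print(eq5d_utility({"mobility": 2, "self_care": 1, "usual_activity": 2,
                    "pain": 2, "anxiety": 1}))  # -> 0.691 under this toy tariff
```

    Two countries applying different decrement tables to the same 5-dimension profiles can rank the same treatment differently, which is the point of the paper recalled above.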
     
    Amw66, JohnTheJack and MEMarge like this.
  2. JemPD

    JemPD Senior Member (Voting Rights)

    Messages:
    3,951
    This is all completely beyond me but i wanted to say thank you & well done for all the hard work involved in obtaining it & sorting through it.
     
    Hoopoe, Arnie Pye, Lidia and 13 others like this.
  3. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    CSRI summary(ish):

    Other info:

    Ref: Beecham J K & Knapp MRJ: Costing mental health interventions. London: Gaskell; 2001.

    Measured at Baseline visit 2, 24 weeks (end of therapy), 52 weeks (trial end)
     
    Last edited: May 9, 2019
  4. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    Reading through the questionnaires in the Protocol again is putting me in a Very Bad Mood. So apols if I'm being a bit stroppy elsewhere on here.

    It's shameful that they collected so much information on participants, and gave no thought as to how they were going to use it. Combining such complex info into simple metrics makes much of the data uninterpretable. Shocking use of participants' time and effort. Makes me so cross.
     
    Robert 1973, Hutan, inox and 34 others like this.
  5. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    That's the way the money goes. Pop! goes the weasel.
     
  6. JohnTheJack

    JohnTheJack Moderator Staff Member

    Messages:
    4,373
    They did gather a mass of information.

    This trial is very odd. I don't think I'd have qualified on the walking tests (too 'well'), but if I'd been asked to fill in all these questionnaires, my mind would have shut down and I'd have put them in the recycling.
     
    Robert 1973, Hutan, Inara and 22 others like this.
  7. JohnTheJack

    JohnTheJack Moderator Staff Member

    Messages:
    4,373
    Yes, understandable.
     
  8. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    Especially since they dropped accelerometers due to the load on patients.
     
  9. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,426
    Location:
    Canada
    Most of the "power" of the trial came from its cost. It could have been done as is for barely 1/10 the price tag, but that would have been much less impressive. It created a sunk cost that motivated people to justify the huge expense, regardless of the fact that it wholly lacked substance or anything meaningful.

    So it seems expected that they'd have wasted much of it on things they didn't even have a use for. It was busywork, an exercise in confirmation that was going to show "success" no matter what. Same for the alleged expensive training. It's a bullshit treatment with a fictitious narrative model, what training could it have even involved? There's no specific expertise required, no novel technology or anything even justifying doing any training beyond a basic information session.

    The whole point was to spend money so decisions could be justified as "we spent a lot of money confirming that so you're going to use it".
     
    Inara, 2kidswithME, Atle and 7 others like this.
  10. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,426
    Location:
    Canada
    Haha. Yeah. Totally. Implausible deniability, what a hoot.
     
    MEMarge and MSEsperanza like this.
  11. Sean

    Sean Moderator Staff Member

    Messages:
    7,164
    Location:
    Australia
    I think you meant due to the 'load on patients'. ;)
     
    MSEsperanza and JohnTheJack like this.
  12. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    I don't want to put a dampener on this, but I have a nasty feeling about these assumptions... Can anyone confirm? I'm unable to replicate the secondary outcome summary stats.
     
    Barry, MEMarge, MSEsperanza and 3 others like this.
  13. JohnTheJack

    JohnTheJack Moderator Staff Member

    Messages:
    4,373
    Anyone able to help with this? @Adrian ?
     
    MEMarge and MSEsperanza like this.
  14. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    Here's the summary data from White et al. 2011 Table 6 - Secondary Outcomes (truncated to just include relevant variables):

    [Attached image: Table6_Lancet2011.png]

    I've also attached a text version (includes Jenkins, just because...).
     


  15. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    How off were you? If they were adjusting for variables we don't have (age, sex, etc) couldn't there be very minor differences without it being much of a problem?
     
    MEMarge and MSEsperanza like this.
  16. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    The summary data (means, SDs) in the table are unadjusted. It's only the comparisons that are adjusted.
    The data should match exactly.
     
    Inara, Amw66, JohnTheJack and 3 others like this.
  17. sTeamTraen

    sTeamTraen Established Member (Voting Rights)

    Messages:
    45
    There were no common variables. :eek::eek::eek::eek::eek::eek:

    All I was able to do was merge the columns, on the assumption that the participant order was the same in each case. I added a fictitious participant ID number, so that if anyone sorts the file for some reason they will be able to unsort it again, but that's it. We either have to trust that the records are in the same order for every file, or go back to the researchers and ask them to provide some way to be sure about this.
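    The positional merge described above can be sketched like this (the data and column names are made up for illustration). Joining purely by row order is only valid if every file lists participants in the same order, which is exactly the assumption that needs confirming:

```python
# Minimal sketch of merging columns from separate files by row position,
# prepending a fictitious participant ID so an accidental sort can be undone.
# Values and column names are hypothetical.

fatigue_col = [23, 18, 30]    # e.g. fatigue scores, one value per row
physfunc_col = [55, 70, 40]   # e.g. physical functioning scores, same assumed order

merged = [
    {"pid": i + 1, "fatigue": f, "physfunc": p}
    for i, (f, p) in enumerate(zip(fatigue_col, physfunc_col))
]

for row in merged:
    print(row)
```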
     
    Robert 1973, Hutan, Inara and 10 others like this.
  18. wdb

    wdb Senior Member (Voting Rights)

    Messages:
    320
    Location:
    UK
    Could we look at something like missing data to check whether the files line up? For example, if all of the 52-week scores are missing from the first set, there should be a reasonably high likelihood that the 52-week scores would also be missing from the second set. Giving it a quick glance, that doesn't seem to be the case.
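    A quick way to run that check, with made-up data: build a missing/present mask for the same timepoint in each file and count how often the masks agree row by row.

```python
# Sketch of a missingness-consistency check between two files that are
# assumed to share participant order. Data and names are hypothetical;
# None stands in for a missing 52-week score.

file_a_52wk = [23.0, None, 30.0, None, 19.0]   # e.g. fatigue at 52 weeks
file_b_52wk = [55.0, None, 40.0, 62.0, 48.0]   # e.g. phys. function at 52 weeks

missing_a = [v is None for v in file_a_52wk]
missing_b = [v is None for v in file_b_52wk]

agree = sum(a == b for a, b in zip(missing_a, missing_b))
print(f"missingness agrees on {agree}/{len(missing_a)} rows")
```

    Dropout usually affects all of a participant's 52-week questionnaires at once, so a low agreement rate would be evidence that the files are not in the same order.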
     
    MSEsperanza, MEMarge, Lidia and 3 others like this.
  19. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    Without confirmation, it is not safe to trust that the records are in the same order for every file. Without knowing which groups the data were in, they are pretty much useless, apart from providing a summary of the entire cohort.
     
    MSEsperanza, Lisa108, MEMarge and 5 others like this.
  20. sTeamTraen

    sTeamTraen Established Member (Voting Rights)

    Messages:
    45
    I have found some time to start building some code this evening. I reproduced some of the values from Table 3, where we have fatigue and physical functioning scores at baseline and 52 weeks. (Table 3 has these two side by side, but I don't have room on the page, so they are one above the other here).

    The moderately encouraging news is that all of the means, SDs, and Ns seem to match the table in the article. That suggests that we have the correct treatment arm for each participant, at least for the four variables in question (but I think that these arrived on a single spreadsheet, so that would be expected).

    The less encouraging news is that the mean difference values don't match the published table. The differences are small for the difference between the "active" treatment and SMC, but quite a lot larger for the difference between the "active" treatment and APT.

    Now of course this could be because of how I calculated the mean differences, but the only way I can see to do that is to take the difference in the means. If it was just the CI where the discrepancy was occurring then I would assume I was using a different method from the one used by the authors to calculate the standard error, but for the means I don't know.

    Edit: Does this board have a "code" mode, to preserve the spacing of a piece of text? I had this table nicely formatted, but then all the spaces got eaten.

    Fatigue
    TX         APT                CBT                GET                SMC
    Baseline   28.5 (4.0) n=159   27.7 (3.7) n=161   28.2 (3.8) n=160   28.3 (3.6) n=160
    52 weeks   23.1 (7.3) n=153   20.3 (8.0) n=148   20.6 (7.5) n=154   23.8 (6.6) n=152
    MdiffSMC   -0.7 (-2.2, 0.9)   -3.6 (-5.2, -1.9)  -3.3 (-4.8, -1.7)
    MdiffAPT                      -2.9 (-4.6, -1.2)  -2.6 (-4.2, -0.9)

    Phys. function
    TX         APT                 CBT                 GET                 SMC
    Baseline   37.2 (16.9) n=159   39.0 (15.3) n=161   36.7 (15.4) n=160   39.2 (15.4) n=160
    52 weeks   45.9 (24.9) n=153   58.2 (24.1) n=148   57.7 (26.5) n=154   50.8 (24.7) n=152
    MdiffSMC   -4.9 (-10.4, 0.7)   7.4 (1.8, 12.9)     6.9 (1.2, 12.7)
    MdiffAPT                       12.2 (6.7, 17.8)    11.8 (6.0, 17.5)
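    For reference, the "take the difference in the means" calculation described above, with an unadjusted normal-approximation 95% CI built from the two groups' SDs and Ns (a Welch-style standard error). Since the published comparisons were adjusted for baseline covariates, exact agreement with the table in the paper is not guaranteed:

```python
# Unadjusted mean difference between two arms, with a 95% CI from the
# normal approximation: diff +/- 1.96 * sqrt(sd1^2/n1 + sd2^2/n2).
import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    diff = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return round(diff, 1), round(diff - z * se, 1), round(diff + z * se, 1)

# Fatigue at 52 weeks, CBT vs SMC, using the summary values tabled above:
print(mean_diff_ci(20.3, 8.0, 148, 23.8, 6.6, 152))  # -> (-3.5, -5.2, -1.8)
```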
     
