
MAGENTA (Managed Activity Graded Exercise iN Teenagers and pre-Adolescents) - Esther Crawley

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Sly Saint, Jun 29, 2018.

  1. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    I think it is, David (@dave30th), in any situation where outcomes are apparent to the researchers along the way. This comes up in the context of trials designed to abort if data reach statistical significance up or down. You have to re-calibrate the statistics.

    Not changing outcome measures is in a sense as bad as changing them. If you are getting nice looking results and you then decide to carry on the trial collecting the results that way you have introduced bias.

    In reality it does not matter a jot because the results are uninterpretable for more basic reasons. However, if the peer review community is blind to those reasons, as they seem to be, then it may be necessary to dwell on technicalities.
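    The interim-look problem described here can be sketched with a small Monte Carlo simulation (an editorial illustration, not from the thread; it assumes a one-sample z-test under the null and a hypothetical "promising" interim cutoff of z > 1):

```python
import math
import random

def naive_continue_rate(n_trials=20000, n_half=50,
                        promising_z=1.0, crit=1.96, seed=1):
    """Monte Carlo under the null hypothesis (treatment has no effect).

    Each simulated trial collects n_half outcomes, peeks at the interim
    z-statistic, and only continues to 2 * n_half outcomes if the interim
    result looks promising. Continued trials are then analysed naively,
    as if no interim look had taken place. Returns the false-positive
    rate among the continued trials.
    """
    rng = random.Random(seed)
    continued = significant = 0
    for _ in range(n_trials):
        first = [rng.gauss(0, 1) for _ in range(n_half)]
        z_half = sum(first) / math.sqrt(n_half)
        if z_half < promising_z:
            continue  # trial abandoned at the interim look
        second = [rng.gauss(0, 1) for _ in range(n_half)]
        z_full = (sum(first) + sum(second)) / math.sqrt(2 * n_half)
        continued += 1
        if abs(z_full) > crit:
            significant += 1
    return significant / continued

print(naive_continue_rate())
```

    With these settings the printed rate comes out well above the nominal 0.05: conditioning on a promising halfway result and then analysing the full data as if no look had occurred inflates the false-positive rate, which is why the statistics must be re-calibrated.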
     
  2. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,246
    @Jonathan Edwards I am not convinced of this point. If they leave everything as is, the only choices are not to extend the trial or to extend it with the exact same form/methodology/outcomes etc. Better to do a new full trial if resources allow. But I have fewer problems than you do with the idea of continuing a trial in which everything is left the same to see if the same results pan out with the larger group. I would have a hard time arguing that it would NEVER be allowable to extend a pilot or feasibility study into a full trial. Now, I agree that MAGENTA will be meaningless and worthless in any event because of the study design (open label, subjective outcomes), but not because of this issue.

    Also, in terms of the 12-month/6-month--the language quoted above doesn't indicate that they meant 12 months after recruitment ended. It's a bit ambiguous so hard to know exactly what they meant. It could be 12 months after recruitment starts. The only data requirement was that by six months they would have some authoritative data about response rates. It doesn't indicate that they needed to see all the response from all participants before deciding. So I don't get from a quick glance that there's a strong argument they violated their own stated procedures in the timing.
     
  3. Trish

    Trish Moderator Staff Member

    Messages:
    52,289
    Location:
    UK
    Unless they managed to recruit their full 100 patients by March 2016, the 12-month assessment would be able to be moved forward by 6 months.
     
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    And I am pretty sure you would be wrong, @dave30th. This is an important technical statistical issue. You cannot check how a study is going halfway through, then either continue or not, and then, if you continue, analyse the statistics as if you had not made the halfway decision. Your Bayesian equations change. It is OK if the outcomes are blood test measurements that nobody looks at until the end, but not OK if you know how people are doing.
     
  5. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    Can you think of examples where this has happened beyond Crawley?

    I think there are two points here with the first being around the ethical examination and whether extending a trial is used as a mechanism to avoid detailed scrutiny.

    The other point is that in designing the full trial methodology you have measures and data to use in those choices, whether you keep them the same or change them. So the decision you make about the full trial protocol is informed by the data you have from the initial patients, who are also included in your overall results. This is different from assessing whether the measures used were reliable in reporting results and then designing a trial with better measures.
     
    MEMarge, Hutan, Inara and 3 others like this.
  6. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    I think the point is that you have additional information and a potential decision to keep or change the protocol. So the additional information (how the first x patients perform) is used to influence the decision, but also the results, as those patients are also included in the results.
     
    MEMarge, Hutan, Inara and 2 others like this.
  7. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,246
    @Jonathan Edwards I guess my assumption in making the point was that the goal of a feasibility trial is to check the feasibility, and the choice about whether to continue it would be based on feasibility considerations, not on the outcomes or results. But thinking it through further after my initial reaction, that obviously wouldn't be the case in a trial like this. I am going to check this point out with my epidemiology colleagues at Berkeley. So your point would be that one could do that if there are objective measures being used that are not viewed at all, but not when there are subjective outcomes whose results could be known?
     
    MEMarge, Grigor, Keela Too and 4 others like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    In this instance it does not matter whether the results are objective or subjective.

    It is like playing poker. If you already have two aces then you raise the stakes and ask for a new card (or something like that). If you have a three and a six you forget it. You are less likely to win big money if you don't look at the cards half way through. Similarly if you have a king and an ace in pontoon you say 'that's fine no changes'. Whether you are happy with things as they are or not, you are altering the chances of a win if you go on playing.

    And the numbers on cards are objective.
     
  9. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,246
    I think you misunderstand my last point. If you don't look at the objective results but you extend the feasibility trial based solely on feasibility considerations, can you extend it into a full trial, in your view?
     
    Inara likes this.
  10. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,254
    Designing a reliable clinical trial of CBT/GET seems difficult without an objective marker of disability, ill health, or disease activity. That is the excuse by CBT/GET proponents. MAGENTA now includes a measurement of activity levels which could be interesting.

    Measures like the six-minute walking test or daily step counts are more objective, but still influenced by nonspecific effects. For example, brief sporadic measurement of daily step counts could be misleading because it is known that patients momentarily change their behaviour when these devices are used. These more objective measures might be sufficiently close to truly objective if used properly, but how is 'properly' defined here?

    I suspect that there are many ways to formally include more objective measures into clinical trials while arranging things so that a bogus treatment still appears effective.

    Perhaps we can learn something about this from studies that obtained negative results for CBT or GET:

    Time course of exercise induced alterations in daily activity in chronic fatigue syndrome
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1280928/

    In this case, the authors previously reported an increase in activity, as measured by an accelerometer, in patients assigned to a walking program (with an increase in pain and decrease in mood). When they reanalyzed their data, they observed the following pattern:

    Initially there is "objective benefit".

    But patients find it difficult to sustain this increase in activity (and their pain and mood worsens too).

    What this could mean for MAGENTA:

    At least one of the therapies involves first reducing activity levels. If the first measure of daily step counts falls within the period when the patient is told to reduce their activity levels, it could create the illusion that therapy results in a sustained increase in activity levels, because subsequent measurements are higher (though not necessarily higher than before the beginning of the trial). It would be a very basic error to make, but they've overlooked bigger problems in PACE.

    If the period of time where activity levels are measured is small, then the act of objectively measuring could result in temporary modification of behaviour that gives the illusion of increased activity, while the subsequent crash is missed.
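    The first scenario can be made concrete with made-up numbers (purely illustrative, not MAGENTA data):

```python
# Hypothetical daily step-count averages (illustration only, not trial data)
pre_trial = 4000      # before the trial starts
first_measure = 2500  # taken while the therapy has the patient reduce activity
follow_up = 3500      # taken later, after activity is increased again

print(follow_up - first_measure)  # +1000: looks like a sustained improvement
print(follow_up - pre_trial)      # -500: activity is actually below baseline
```

    Measured against the artificially low first reading the therapy looks like a success, even though the patient ends up less active than before the trial began.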
     
    NelliePledge, Sean, Inara and 2 others like this.
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    No I got your point, David. It is just that it does not matter whether they are objective or subjective. If you get the patients to answer dodgy questionnaires but keep them in a locked ballot box until you have finished the trial that is OK too, at least in terms of doing a feasibility phase and extending it.
     
    Inara and Indigophoton like this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    I think this raises an important question. Why is Esther Crawley the only person I know who does these 'feasibility' studies? Is she unique, or is this something that psychologists like doing? And more importantly, why do none of my other colleagues even think of doing this? I assume it is because, like me, they assume it is bad practice.
     
    Trish, Inara, BurnA and 4 others like this.
  13. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I don't think she is the only one.

    Please don't ask me to go and find examples now, but when reading up on these issues when the SMILE trial came out I did find other trials where feasibility trials were run on into full trials, and the guidelines I found on this seemed to suggest that this was okay if this had been planned in advance and outcomes were not changed. It might be strictly ideal to start again with a new trial, but there are cost savings from using data from a feasibility trial, and advantages to having a larger trial.

    Maybe the guidelines I was reading on feasibility and pilot studies were too lax, but I think that this is a weaker point of complaint than many others that are available to us on Crawley's work.
     
    Snow Leopard and Inara like this.
  14. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    I agree, and David and I have sort of been continuing a conversation we had over lunch yesterday. These are technicalities that in themselves are maybe trivial. Nevertheless, I think this feasibility tactic may reflect something more important, as Adrian hinted. And if the people who need convincing are blind to the bigger problems then technicalities may be worth exploiting.

    I have yet to see exactly what the logic of feasibility studies in this context is. If we assume that the feasibility study on its own is underpowered, as I think is implicit, then it would be a huge waste of money if not extended to a full study (and would raise ethical problems). In my own experience feasibility studies have been open trials of no more than six cases, which allow you to see if treating is feasible. 100 patients in a feasibility study looks very odd to me.
     
    MEMarge, Hoopoe, Inara and 7 others like this.
  15. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,584
    Location:
    UK
    The more I read it, the more it (the feasibility study) seems to be a study not to test any particular treatment, but to see whether they would be able to do a cross-centre study and whether they could recruit enough participants [and retain them] for a full study.
    "to investigate the feasibility and acceptability of the study processes and interventions "
    "This is a multicentre study which will test the feasibility of running this trial in different NHS settings "

    So it is basically a feasibility study of the logistics of doing a full trial, not a test of the effectiveness of any treatment.

    The downside is it leads to a full trial on GET.

    edit: bit in []
     
    Last edited: Jul 1, 2018
  16. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I can see why a larger feasibility study would be of some use for something like SMILE, where they could be deeply unsure how many parents/children would be willing to sign up for a trial of something like LP. A feasibility study of just ten people might fill up with the small minority of people with a prior interest in LP, without giving much information on whether it could be easily rolled out to a full trial.

    I'm trying to remember examples of feasibility studies where they'd planned to roll participants on to a full study, but something from the feasibility study made them decide against doing so. I can remember reading a number of studies where they jumped to doing a full trial, but then really struggled with recruitment, so maybe these larger feasibility studies are partly to avoid the embarrassment of that?
     
    Peter Trewhitt and Inara like this.
  17. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,495
    Location:
    London, UK
    On the other hand, having recruited a lot of patients you might then not know if there were any more ready to volunteer and be back in the same situation. Moreover, I cannot quite see why you cannot find out how many patients are interested in advance. That is what I always did. If you have 500 waiting to take part no problem, if it is eight, you are in trouble.
     
    Peter Trewhitt, MEMarge and Trish like this.
  18. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,584
    Location:
    UK
    Also interesting that they went for ethics approval for a full trial just before the announcement of AfME's takeover of AYME (c. 14th March). As AYME's medical advisor, EC must have known it was about to happen.
     
    MEMarge, Amw66, Inara and 1 other person like this.
  19. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    It does feel strange. Like they see it as a point where they can see what the results are and tweak the protocol. I can see it is valid iff the protocol is written in terms of the first patients forming a feasibility study to check that the systems work and, assuming they do, then moving to the full study with no opportunity to change the protocol. Even perhaps an independent committee to tweak things like recruitment methods etc. I can see. But this feels different to that. We don't know what the full trial protocol is yet.

    But, as you say, why have a large (and I assume quite costly) feasibility study? With SMILE I don't remember seeing power calculations, but I would expect to see them on applying for the full trial. Again, I could see a review after 100 patients to see how many are needed (not sure if this is valid).
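    As an aside, the kind of power calculation mentioned here usually follows the standard normal-approximation formula for comparing two group means; a generic sketch (not MAGENTA's actual numbers):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-arm comparison of means:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the standardised effect size (Cohen's d)."""
    z = NormalDist()
    return ceil(2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / effect_size) ** 2)

print(n_per_arm(0.5))  # a 'moderate' effect needs roughly 63 patients per arm
```

    Smaller assumed effects blow the required numbers up quickly, which is one reason a funder would expect to see this worked out before a full trial is approved.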
     
    MEMarge, Inara and Amw66 like this.
  20. Alvin

    Alvin Senior Member (Voting Rights)

    Messages:
    3,309
    Who is paying for this study?
     
    MEMarge likes this.
