
Basic questions on terms and methodology used in clinical trials

Discussion in 'Trial design including bias, placebo effect' started by MSEsperanza, Nov 3, 2022.

  1. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    As the title indicates -- a thread for lay people like me who have no or only superficial knowledge of trial methodology and statistics to ask some basic questions and hopefully get answers from more knowledgeable forum members.
     
    Last edited: Nov 3, 2022
    petrichor, Sean, alktipping and 6 others like this.
  2. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    From the wording on [edit] some of the questionnaires used in the PACE trial it appears that these weren't filled in by the participants themselves but by study staff in a face-to-face situation with the participant.

    So study staff read the questions to the patients, and the staff fill in their answers.

    I wonder how common that is/ was in studies in general and who fills in the questionnaires in such settings?

    Does 'blinding of assessors' mean only those who actually analyze the data are blinded to the trial arms, or also those who sit together with the patients to fill in the answers?

    Edit: So my question mainly is: Is it possible that therapists or other staff who also see the patients on other occasions actually fill in the questionnaires together with the patient? And could the investigators in this case still say that the assessors were blinded?


    Edit 2 : for clarification see edit in the first line and this post on the PACE trial discussion -- and also:

     
    Last edited: Nov 28, 2022
    Michelle, petrichor, Sean and 4 others like this.
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,463
    Location:
    London, UK
    With these trials I don't think anything can be taken for granted as meaning what it seems to mean. Assessors are normally those who collect and record outcomes face to face with patients. In a situation in which the patient knows what sort of treatment they had, one can more or less assume the assessor will get to know, or suspect, pretty well.

    Blinding is always difficult, and quite problematic even for drug trials with apparently identical placebos. All you need is for a clue to be available and the whole thing is busted.
     
  4. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,425
    Location:
    Canada
    Reportedly this is common in IAPT. I think that wherever particular outcomes are expected, by cheating if necessary, this is likely common practice. It's not as if it makes much of a difference anyway: the very act of limiting outcomes to those biased questionnaires already goes half-way to having them filled in by the people whose job performance depends on getting the expected answers.

    Everything BPS has this in bulk. Hell, it includes entirely redefining the patients' problems; filling in questions barely rates as suspicious at this point. It's pretty much the whole thing, by proxy of narrowing the questions and answers to issues of no relevance to the patient. If the questions relevant to patients aren't asked, and the available answers don't include the ones patients would give if the option were there, it's pretty much as if the questionnaire had already been filled in.

    This approach is very common in politics. The questions take the form: "Do you support the end of all good things from the horrible opposition candidate, or maybe this particular policy which vaguely sounds good but is only a talking point for electoral purposes?"

    Same with answering questions that weren't asked. Or pointing out irrelevant issues as distraction. The overlap with politics is just absurd.
     
    Last edited: Nov 3, 2022
    Hutan, bobbler, Sean and 3 others like this.
  5. Sean

    Sean Moderator Staff Member

    Messages:
    7,159
    Location:
    Australia

    https://www.youtube.com/watch?v=6GSKwf4AIlI


     
    Hutan, alktipping, rvallee and 2 others like this.
  6. ahimsa

    ahimsa Senior Member (Voting Rights)

    Messages:
    2,634
    Location:
    Oregon, USA
    Sorry, what does IAPT stand for?
     
    MSEsperanza and Peter Trewhitt like this.
  7. Trish

    Trish Moderator Staff Member

    Messages:
    52,225
    Location:
    UK
    IAPT is a UK NHS system for providing cheap, easy-to-access psychological therapy using under-qualified therapists. It costs billions and claims a high success rate, but it has a high drop-out rate, and people who have looked into it say it's not fit for purpose.
     
    Hutan, alktipping, bobbler and 8 others like this.
  8. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    This is a bit off-topic but still...

    I wonder why quite a few of the more recent trials investigating CBT for illnesses and conditions often labelled as MUS, and also for comorbid fatigue, depression and anxiety (alleged or real) in established biomedical illness, failed to produce equally -- allegedly -- 'good' results as the previous ones.

    I'm aware the investigators in their papers still mostly try to twist the -- even by their own standards -- null results and even continue to twist them when replying to justified critique, e.g. by stating that they reported the null results in other parts of the paper, just not in the abstract -- or something along these lines.

    So my question is:

    Do the more recent trials have better trial methodology? Do they use different outcomes? Are they larger than the previous ones?

    Also, is there a similar trend with trials investigating exercise?

    @dave30th @ME/CFS Skeptic
     
    Last edited: Nov 13, 2022
  9. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Do you have concrete examples?

    Remember that, despite their flaws and biases, PACE, FINE, GETSET, FITNET etc. also did not produce good results, especially at long-term assessments.
     
    Hutan, alktipping, Lilas and 4 others like this.
  10. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    Sorry that was just from memory but some examples should be here:

    If I remember correctly there were more examples like these, including a study led by Sharpe on cancer patients.


    Yes, I realize that -- but if I understood correctly, some of the more recent results are still much weaker even by these types of studies' standards, and the investigators' claims even bolder in relation to the actual data?
     
  11. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Perhaps the outcome "dissociative seizure frequency" is less subjective and less likely to be affected by response bias than a fatigue questionnaire.

    In the IBS trial patients only received web- or telephone CBT which may have weaker response bias than face-to-face CBT.

    On the other hand, I noticed a standardised mean difference of 0.65 for telephone CBT for IBS-SSS at 12 months, which is bigger than what was found in the PACE and FINE trials if I recall correctly.

    Not sure about this given how the PACE trial authors claimed that GET and CBT could help patients recover from ME/CFS.
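    A standardised mean difference like the 0.65 mentioned above is just the difference between the two group means divided by the pooled standard deviation (Cohen's d). A minimal sketch in Python, with made-up scores for illustration only:

```python
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference: difference in group means
    divided by the pooled standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical symptom-score improvements (illustrative, not trial data)
treatment_scores = [1, 2, 3, 4, 5]
control_scores = [0, 1, 2, 3, 4]
print(round(cohens_d(treatment_scores, control_scores), 2))  # → 0.63
```

    Because the denominator is the spread of the scores rather than any external yardstick, an SMD from a subjective questionnaire inherits whatever response bias the questionnaire has.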
     
    Hutan, alktipping, Sean and 2 others like this.
  12. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    5,350
    Location:
    UK
    I do sometimes wonder if part of the problem is that not enough people actually interrogate the data.

    If enough studies publish puffed up results that are more narrative than data-based, it could start to appear to purse-string holders (who may not have any expertise in the condition being researched) as if there's something in what they're saying.
     
    bobbler, alktipping, Sean and 3 others like this.
  13. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    Thanks.

    So most likely not a trend towards better trial methodology in psychosomatic research on so-called MUS and functional illness, but probably just due to the increasing number of studies on diverse conditions -- so an increased probability of incoherent data, plus an increased probability that they will use more objective measures for specific illnesses where an absence of objective measures would be too apparent an omission?
     
    Last edited: Nov 23, 2022
    alktipping and Peter Trewhitt like this.
  14. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    I'm trying to understand the following paragraph in a methods handbook on the evaluation of statistical significance:

    "A range of aspects should be considered when interpreting p-values. It must be absolutely clear which research question and data situation the significance level refers to, and how the statistical hypothesis is formulated.

    "In particular, it should be evident whether a one- or two-sided hypothesis applies [61] and whether the hypothesis tested is to be regarded as part of a multiple hypothesis testing problem [713].

    Anyone feels up to give examples of one-sided and two-sided hypotheses and a multiple hypothesis testing problem?


    Source and context:
     
    alktipping and petrichor like this.
  15. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    320
    It notes that "Regarding the hypothesis formulation, a two-sided test problem is traditionally assumed. Exceptions include non-inferiority studies". So almost everything is a two-sided hypothesis. I think it's two-sided in the sense that it determines whether a treatment is superior or inferior to the other, so you look for statistical significance in two directions: whether it's statistically significantly better, or statistically significantly worse. A non-inferiority study, I think, is one-sided, because then you're only interested in whether the treatment is inferior to the other or not. A non-inferiority study has a different design, where you look at whether the effect of the treatment you're interested in is close enough to count as "non-inferior" to the other, so you're just looking to see if it is statistically significantly inferior or not. There's also an explanation of these ideas here (minus the part about non-inferiority studies).

    If you're testing for an effect in two directions rather than just one, then that means you're twice as likely to find an effect by chance. So that needs to be adjusted for.

    A multiple hypothesis testing problem is, for instance, when you have multiple primary outcomes in a trial (and are therefore testing multiple hypotheses). You're then more likely to find an apparent effect by chance, so you need to adjust for that by using a stricter standard of statistical significance (to avoid type 1 errors, which just means false positives).
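    To make those ideas concrete, here is a minimal Python sketch (standard library only) of one-sided vs two-sided p-values for a z statistic, plus a Bonferroni correction for multiple testing. The numbers are purely illustrative, not from any trial:

```python
import math

def z_test_p(z, alternative="two-sided"):
    """p-value for a z statistic under a standard normal null."""
    upper = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)
    if alternative == "greater":   # one-sided: only 'better' counts
        return upper
    if alternative == "less":      # one-sided: only 'worse' counts
        return 1 - upper
    # two-sided: a result at least this extreme in either direction
    return 2 * min(upper, 1 - upper)

def bonferroni(pvals):
    """Adjust p-values when several hypotheses are tested at once."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

z = 1.9  # an illustrative test statistic
print(round(z_test_p(z, "greater"), 3))  # one-sided: 0.029, under 0.05
print(round(z_test_p(z), 3))             # two-sided: 0.057, over 0.05

# Three primary outcomes: only the raw p = 0.01 survives correction
# at the 0.05 level
print(bonferroni([0.01, 0.04, 0.20]))
```

    The same borderline result is "significant" with a one-sided test but not with a two-sided one, which is exactly why the handbook insists the direction of the hypothesis be stated in advance.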
     
  16. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    Just for further info about that question:

    Cross-posting from the PACE trial discussion thread:


    (Apologies @rvallee for any confusion also for giving you a false alert a couple of days ago when I couldn't find related information but shortly after having posted a question quoting a post of yours I found the info myself and posted here. :ill: :sleeping:)
     
  17. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    2,458
    The following page is quite good in explaining: https://stats.oarc.ucla.edu/other/m...nces-between-one-tailed-and-two-tailed-tests/

    Scroll down to the section on when to use a 1-tailed test, and the para below it (when not to use one); it's about half-way down. I didn't want to copy too many paras (not on the ball enough right now about what's allowed), but I think at least those are relevant to your question.

     
  18. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    Sorry for muddled thinking and wording -- post now too old to delete.

    Still I think it would be worthwhile to do a review on trial design, outcomes (subjective/ objective), and reporting on outcomes in the field of research on treatments for ME/CFS, MUS, and maybe also comorbid depression in 'established' medical disease.

    Perhaps it would even make sense to include both behavioral and drug interventions?

    Difficult to think of a proper hypothesis -- as I probably have too many questions and assumptions both about trial methodology and about certain proponents at once. (see the discussion with @ME/CFS Skeptic above and also the questions I posted on a more recent members-only thread.)

    But maybe 'just' an extended and contextualized version of the Tack et al paper on bias due to a lack of blinding in ME/CFS treatment trials and Jonathan Edwards' NICE expert testimony?

    (Context = how is the bias due to reliance on subjective outcomes in unblindable trials discussed and addressed by other researchers in various fields, including psychology, (neuro-)psychiatry and physical therapy?).
     
    Last edited: Apr 8, 2023
    alktipping, Sean and Peter Trewhitt like this.
  19. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    About the terms ‘prospective' and ‘retrospective' in observational studies in epidemiology.

    Not up to wording my question, so I'll just leave this quote from the STROBE Initiative here.

    Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)
    :

    "We recommend that authors refrain from simply calling a study ‘prospective' or ‘retrospective' because these terms are ill defined [29].

    "One usage sees cohort and prospective as synonymous and reserves the word retrospective for case-control studies [30].

    "A second usage distinguishes prospective and retrospective cohort studies according to the timing of data collection relative to when the idea for the study was developed [31].

    " A third usage distinguishes prospective and retrospective case-control studies depending on whether the data about the exposure of interest existed when cases were selected [32].

    "Some advise against using these terms [33], or adopting the alternatives ‘concurrent' and ‘historical' for describing cohort studies [34].

    " In STROBE, we do not use the words prospective and retrospective, nor alternatives such as concurrent and historical. We recommend that, whenever authors use these words, they define what they mean. Most importantly, we recommend that authors describe exactly how and when data collection took place."


    Source:
    Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M; STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007 Oct 16;4(10):e297. doi: 10.1371/journal.pmed.0040297.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2020495/
     
    alktipping, Sean and Peter Trewhitt like this.
  20. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    Related to the discussion on developing assessment tools for monitoring disease activity/ impact/ disability --

    Why is the term 'psychometric' used for scales that also measure physical symptoms / symptom burden / disability?

    Is a simple visual analog pain rating scale or a symptom diary also a psychometric tool?


    See discussion on research for a new clinical assessment toolkit in NHS ME/CFS specialist services here.

    And a paper mentioned there on developing a patient-reported-outcome-measure scale here.
     
