
Reducing bias in trials from reactions to measurement: the MERIT study including developmental work and expert workshop, 2021, French et al

Discussion in 'Research methodology news and research' started by Andy, Sep 29, 2021.

  1. Andy

    Andy Committee Member

    Messages:
    21,814
    Location:
    Hampshire, UK
    Abstract
    Background
    Measurement can affect the people being measured; for example, asking people to complete a questionnaire can result in changes in behaviour (the ‘question–behaviour effect’). The usual methods of conduct and analysis of randomised controlled trials implicitly assume that the taking of measurements has no effect on research participants. Changes in measured behaviour and other outcomes due to measurement reactivity may therefore introduce bias in otherwise well-conducted randomised controlled trials, yielding incorrect estimates of intervention effects, including underestimates.

    Objectives
    The main objectives were (1) to promote awareness of how and where taking measurements can lead to bias and (2) to provide recommendations on how best to avoid or minimise bias due to measurement reactivity in randomised controlled trials of interventions to improve health.

    Methods
    We conducted (1) a series of systematic and rapid reviews, (2) a Delphi study and (3) an expert workshop. A protocol paper was published [Miles LM, Elbourne D, Farmer A, Gulliford M, Locock L, McCambridge J, et al. Bias due to MEasurement Reactions In Trials to improve health (MERIT): protocol for research to develop MRC guidance. Trials 2018;19:653]. An updated systematic review examined whether or not measuring participants had an effect on participants’ health-related behaviours relative to no-measurement controls. Three new rapid systematic reviews were conducted to identify (1) existing guidance on measurement reactivity, (2) existing systematic reviews of studies that have quantified the effects of measurement on outcomes relating to behaviour and affective outcomes and (3) experimental studies that have investigated the effects of exposure to objective measurements of behaviour on health-related behaviour. The views of 40 experts defined the scope of the recommendations in two waves of data collection during the Delphi procedure. A workshop aimed to produce a set of recommendations that were formed in discussion in groups.

    Results
    Systematic reviews – we identified a total of 43 studies that compared interview or questionnaire measurement with no measurement and these had an overall small effect (standardised mean difference 0.06, 95% confidence interval 0.02 to 0.09; n = 104,096, I² = 54%). The three rapid systematic reviews identified no existing guidance on measurement reactivity, but we did identify five systematic reviews that quantified the effects of measurement on outcomes (all focused on the question–behaviour effect, with all standardised mean differences in the range of 0.09 to 0.28) and 16 studies that examined reactive effects of objective measurement of behaviour, with most evidence of reactivity of small effect and short duration. Delphi procedure – substantial agreement was reached on the scope of the present recommendations. Workshop – 14 recommendations and three main aims were produced. The aims were to identify whether or not bias is likely to be a problem for a trial, to decide whether or not to collect further quantitative or qualitative data to inform decisions about if bias is likely to be a problem, and to identify how to design trials to minimise the likelihood of this bias.
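
For anyone unfamiliar with the effect-size statistic quoted above, the standardised mean difference (SMD, i.e. Cohen's d with a pooled standard deviation) expresses a between-group difference in units of standard deviation. A minimal sketch below shows the calculation; the group means, SDs and sample sizes are invented for illustration and are not data from the review.

```python
import math

def standardised_mean_difference(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical numbers: a measured group averaging 5.3 units of weekly
# activity (SD 5.0, n = 500) vs 5.0 in a no-measurement control group.
d = standardised_mean_difference(5.3, 5.0, 5.0, 5.0, 500, 500)
print(round(d, 2))  # prints 0.06 - a "small" effect on the usual benchmarks
```

An SMD of 0.06, as pooled in the review, is tiny by conventional benchmarks (0.2 is usually called small), which is why the authors describe the overall measurement effect as small.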

    Limitations
    The main limitation was the shortage of high-quality evidence regarding the extent of measurement reactivity, with some notable exceptions, and the circumstances that are likely to bring it about.

    Conclusion
    We hope that these recommendations will be used to develop new trials that are less likely to be at risk of bias.

    Future work
    The greatest need is to increase the number of high-quality primary studies regarding the extent of measurement reactivity.

    Open access, https://www.journalslibrary.nihr.ac.uk/hta/hta25550/#/full-report
     
  2. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Yes, measuring instruments influencing the values they measure is a well-known issue, and not confined to medicine.
     
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,275
    Location:
    London, UK
    I suspect that they have not considered how much attempting to measure how much measuring things in trials affects measurements is likely to affect the measurements of how much measuring things affects measurements.

    They might need to set up a committee to measure this.
     
    Joan Crawford, obeat, Sean and 5 others like this.
  4. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,299
    Location:
    Canada
    Telling that they don't seem to consider "treatments" whose sole aim is to influence people's responses on the very questionnaire then used to evaluate that perception, one of the most blatant forms of bias in any of the expert professions. Probably because this method is exempt from standard norms, which is why the results are worthless.

    But also that a rating is not a measurement and cannot be treated as equivalent to one, no matter how much maths is used to massage the data. There is simply no equivalence between asking people to rate a temperature on a 1-10 scale and using a calibrated thermometer reading in degrees Celsius, which relates to well-verified natural phenomena. Those methods are simply on a different level, and it's silly to pretend that those numbers have enough precision to be of any use beyond a general trend. The very use of the word "measurement" is itself biased: it pretends that an unreliable instrument is the least bit reliable, when its numbers only have whatever meaning is ascribed to them in analysis, always a matter of judgment.
     
    obeat, Sean, DokaGirl and 4 others like this.
