
How to assess if a research paper has used best-practice protocols?

Discussion in 'BioMedical ME/CFS Research' started by RoseE, Feb 15, 2020.

  1. RoseE

    RoseE Senior Member (Voting Rights)

    A friend mentioned to me that they were considering using a new supplement* that others say has helped people with ME. She was confident that there was a good body of university-led research to support it. When I asked if she had verified that best practice was used for each of those research papers, she asked me how she could tell if it was good quality research. Well, I realised that I couldn't give her a list of things to look out for, other than the assessments I have read of the PACE trial that have highlighted issues with that research.

    I wondered what people here look for when reviewing new research papers?
    Does anyone have a checklist that they use to assess the quality of the research process and reliability of the research results? Is there already a thread here on S4me for this?

    *fyi the supplement under her consideration is Protandim. And there is a thread on here that I have shared with her https://www.s4me.info/threads/protandim.11453/

    A quick Google search turned up a few documents, and I have stopped at WHO...

    World Health Organization - Recommended format for Research Protocols
    This document contains their guidelines for submissions of Research Proposals to WHO. So it is a research preparation list, not a research review list. And some of it relates to protecting the human participants rather than guaranteeing a quality research outcome. But perhaps it will do?

    Also found the WHO GCP (Good Clinical Research Practice) statement - which at 129 pages is not quite the short list I was after :)

    Summary of Info from the WHO Research Protocol format page...
    Clarity in what is written up would be important too.

    Before I spend more time, I thought I would throw the question out here. What do you look out for when reviewing a research paper?
    ukxmrv and Andy like this.
  2. Hutan

    Hutan Moderator Staff Member

    There must be a good list somewhere - off the top of my head, and specifically in relation to assessing whether the findings of a trial of a supplement can be relied on:
    1. Conflict of interests - are the researchers independent? Do they stand to gain if they find one thing or another? Did the manufacturer pay for the study?

    2. The researchers - do they have a track record of doing a range of studies, or do they seem only to have published papers on a particular supplement/brand? Do they come from a credible institution?

    3. The journal - is it credible, or does it seem to exist only for this study/brand?

    4. Quality of the writing - are the methods clear? Is there blathery trendy language that doesn't mean anything?

    5. Hype - Does the study over-claim?

    6. P-hacking/cherry picking - did they measure a whole lot of things but only report on a few select ones (that might have given the right outcome just by chance)?

    7. Size - is the study sufficiently large? Appropriate size varies, but I'd probably want to see at least 20 people in each arm.

    8. Are there controls, is the study blinded, are there objective outcomes? Check out the Mendus study of MitoQ that I wrote up on the Coenzyme Q10 thread for an example of what can go wrong if it's an open label study with subjective outcomes.

    9. Selection of participants - did the participants have the same illness as you do? Did the participants have any reason to give a biased result? Were participants randomly selected, or selected to have certain characteristics that might limit the applicability of the trial to your situation?

    10. The treatment - does it match the product you are considering? Or does it have extra ingredients or a higher dosage or something else different?

    11. Reporting of harms - is reasonable care taken to identify and report harms?

    12. Dropouts and missing data - is it clear how many people were assessed at each stage? Are reasons for dropouts given? Is it possible that different rates of dropouts in the various treatment arms might have biased results? Were there lots of missing data points? If so, how was that dealt with - was data imputed?

    13. Data - are the figures in the tables right or do things not add up?

    14. Significance - is any reported result sufficiently large to be clinically useful?

    I guess there's lots of other things, but if all those were ok, that would be a good start.
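    To put a number on the p-hacking point above: this is an illustrative sketch (not from any study discussed in this thread) of why measuring many outcomes and reporting only the "significant" ones is misleading. If a treatment does nothing, each independent outcome measure still has roughly a 5% chance of crossing the p < 0.05 threshold by luck alone, so across the fourteen-odd items in a typical outcome battery, a false "hit" becomes more likely than not. The function name and the choice of 14 outcomes are mine, purely for illustration.

    ```python
    # Sketch: chance of at least one false-positive "significant" result
    # when k independent outcomes are tested and there is no real effect.
    # Each test alone has a false-positive rate of alpha (conventionally 0.05).

    def chance_of_false_positive(k, alpha=0.05):
        """Probability that at least one of k independent null tests
        comes out 'significant' purely by chance."""
        return 1 - (1 - alpha) ** k

    # One outcome: the familiar 5% risk.
    print(round(chance_of_false_positive(1), 3))   # 0.05

    # Fourteen outcomes: better-than-even odds of a spurious positive.
    print(round(chance_of_false_positive(14), 3))  # 0.512
    ```

    This is why pre-registered primary outcomes matter: if the paper reports a success on an outcome that was not declared in advance, the result may simply be the expected one-in-two fluke.
    
    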

    Edit - Rose, if your friend wants to send me the papers, I can have a look through them. Or better, post them on the forum and a range of people can look at them.
    Kirsten, ukxmrv, Trish and 3 others like this.
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    I am a bit sceptical about lists of methodological rules for judging study quality. In my view there are no general rules that always apply other than common sense. Small uncontrolled studies are sometimes fine. Large controlled studies can be useless. I think the criteria to look for in this situation may be simpler.

    1. Are there studies listed on PubMed Central that appear to indicate that the treatment actually makes a group of people better in terms of quality or length of life? (In this case no - so end of search.)
    2. Might the results of these studies be due to bias? To answer this you need to know quite a bit about how effects are measured in trials but much of the time the risk of bias is fairly obvious - as in open trials where results can be massaged in one way or another.

    The simple answer to Protandim is that, as far as I can see, nobody has yet shown it does people any good. University-based research on hypothetical benefits on 'oxidative stress' is irrelevant. Nobody knows if oxidative stress matters.

    What I thought was interesting is that the only paper that comes near to testing whether Protandim does people good is one on athletic performance. It is negative. But why was the supplement tried on athletes? I have a sneaking suspicion that this trial was intended to be negative so that the supplement would not be banned in people doing competitive sports. So it can be sold to athletes as improving their health but not, of course, as illegally tampering with their performance on the track.
    TrixieStix, ukxmrv, Trish and 3 others like this.
  4. Kitty

    Kitty Senior Member (Voting Rights)

    I usually start from the assumption that a study on a branded food supplement is part of the marketing strategy. It can therefore be ignored until it's been replicated in a reasonably well-defined group, with measurable outcomes... funnily enough, these tend to be thin on the ground!
    RoseE, Hutan and Trish like this.
  5. RoseE

    RoseE Senior Member (Voting Rights)

    Just going off on a random thought now. Does anyone know if any researchers have applied to WHO for funding for ME/CFS research? It sounds like WHO would back research in areas where it is lacking?
