Woolie
Senior Member
When is lack of scientific integrity a reason for retracting a paper? A case study.
This editorial has just come out in the Journal of Psychosomatic Research:
Abstract:
Objective: The present and past editors of this journal received a request to retract a paper reporting the results of a triple-blind randomized placebo-controlled trial. They expand on their decision not to retract this paper in spite of indisputable evidence of scientific misconduct by one of the investigators.
Methods: The editors present an ethical reflection on the request to retract this randomized clinical trial with consideration of relevant guidelines from the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE) applied to the unique contextual issues of this case.
Results: In this case, a blinded provider of the homeopathy intervention committed scientific misconduct in an attempt to undermine the study blind. As part of the study, the integrity of the study blind was assessed. Neither participants nor homeopaths were able to identify whether the participant was assigned to homeopathic medicine or placebo. Central to the decision not to retract the paper was the fact that the rigorous scientific design provided evidence that the outcome of the study was not affected by the misconduct. The misconduct itself was thought to be insufficient reason to retract the paper.
Conclusion: Retracting a paper whose outcome is still valid was in itself considered unethical, as it takes away the opportunity to benefit from its results, rendering the whole study useless. In such cases, scientific misconduct is better handled through other professional channels.
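A note on how that blind-integrity check typically works (the editorial doesn't spell out the exact method, so this is a sketch, not the trial's actual analysis): participants and practitioners are asked at the end of the study to guess their allocation, and the guesses are tested for association with the true assignment. All counts below are hypothetical:

```python
# Minimal sketch of a blinding-integrity check: cross-tabulate actual
# allocation against guessed allocation and test for association.
# The counts are hypothetical, not data from Weatherley-Jones et al.
from scipy.stats import chi2_contingency

#            guessed homeopathy, guessed placebo
guesses = [
    [26, 24],  # actually assigned to homeopathy
    [23, 27],  # actually assigned to placebo
]

chi2, p, dof, expected = chi2_contingency(guesses)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
# A large p-value is consistent with chance-level guessing (blind intact);
# a small one suggests respondents could tell which group they were in.
```

If guessing is at chance, as the abstract reports for both participants and homeopaths, any attempt to break the blind evidently failed.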
The editorial is a discussion of issues that arose from this 2004 publication:
Weatherley-Jones et al. A randomised, controlled, triple-blind trial of the efficacy of homeopathic treatment for chronic fatigue syndrome. J Psychosom Res. 2004;56(2):189-97.
This study is a triple-blinded randomised controlled trial of homeopathy for CFS. It might not come as a surprise to you that the primary outcomes of the study were all negative.
The current discussion relates to one of the co-authors, who stood up in a talk recently and said that she had worked out a cunning plan for discovering whether participants were in the homeopathy group or the control group.
The current editorial explains why the journal did NOT retract the paper (because the results were all negative anyway, amongst other reasons).
The interesting bit is the issues they chose to comment on (emphasis mine):
While we cannot know for certain that her motivation was to discount the results of this study, what she said clearly seeks to undermine the credibility of a trial whose results challenged her firmly held but untested beliefs about the benefit of a treatment that she had high allegiance to.
Reporting on the integrity of the blind has merit and is especially valuable when dealing with subjective outcomes for which there is a greater risk of bias due to any unblinding.... Un-blinded assessors of subjective binary outcomes may exaggerate odds ratios by an average of 36% (13). Subjective outcomes are frequently used in studies that fall within this journal's scope, at the interface of psychology and medicine. We recommend assessing the integrity of the blind for any clinical trial, particularly those utilizing subjective outcomes akin to the primary outcomes of the Weatherley-Jones et al. study in question.
13. Hróbjartsson A, et al. Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors. BMJ. 2012;344:e1119.
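To make that 36% figure concrete: it corresponds to a ratio of odds ratios of roughly 0.64 between non-blinded and blinded assessments of the same trials. A rough worked example (the starting odds ratio below is hypothetical; benefit is framed as an OR under 1):

```python
# Rough arithmetic for the 36% exaggeration reported in ref 13.
# Benefit framed as an odds ratio below 1; the true OR is hypothetical.
true_or = 0.80        # effect a blinded assessor might find
ratio_of_ors = 0.64   # 36% exaggeration: observed / true = 0.64
observed_or = true_or * ratio_of_ors
print(f"blinded OR = {true_or:.2f}, non-blinded OR = {observed_or:.2f}")
# 0.80 -> 0.51: a modest effect now looks clinically meaningful.
```

That is why unblinded, subjective outcomes are such a risk: the inflation alone can dwarf a plausible true effect.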
None of us here will miss the irony that these requirements for good studies are simply waived for psychotherapy trials, without anyone even caring about the risk of bias. Silly homeopathy folks - don't they know that all they need to do to get around this is to combine their medicine with a wee chat? Hey presto - blinding problems solved!