
Blog: Murky matters involving conflicts of interest

Discussion in 'Other psychosomatic news and research' started by Indigophoton, May 15, 2018.

  1. Indigophoton

    Indigophoton Senior Member (Voting Rights)

    Messages:
    849
    Location:
    UK
    Considers conflicts of interest (COIs) as they relate to peer review:
    https://smallpondscience.com/2018/05/14/murky-matters-involving-conflicts-of-interest/amp
     
  2. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    I know so little of this area, but maybe a solution would be to have statisticians as first-line reviewers to assess the technical aspects of a paper; the follow-up peer reviewer would then be required to bear in mind that first review of the strengths, weaknesses and possible flaws of the methods employed.
     
    pteropus and andypants like this.
  3. Woolie

    Woolie Senior Member

    Messages:
    2,918
    I'd say yes, if stats were the pivotal issue that distinguished poor from good research. But I think it almost never is.

    Recently, I've been reviewing research on depression, and have come across truckloads of weak studies. But not one was weak because of the stats. They were all weak for other reasons - because the researchers asked the wrong question, designed the study badly, ignored confounding variables or alternative interpretations, or emphasised those findings that fit their preconceptions, while playing down those that didn't.

    One had a ridiculously small sample size, so that's something a statistician would pick up (but the idea behind the study was so stupid that the sample size issue was kind of a moot point!) - see the rough power sketch at the end of this post.

    It's pretty hard to spot a lot of these issues unless you're familiar with the pitfalls in that particular subject area. For example, I doubt that statisticians reading the PACE trial would notice the problem with the reliance on self-report measures - that would take someone who knew the psychological research specifically, and the issues surrounding the reliability of self-report measures. And that is the biggest flaw in the whole trial, imo.

    Of course, just being a specialist in the area doesn't mean you're any good at these things either. All I'm saying is that statisticians aren't the answer.
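
    As a rough illustration - a minimal sketch in Python using statsmodels, with a hypothetical "medium" effect size and conventional thresholds, none of it taken from any study discussed here - this is the sort of sample-size check a statistical reviewer might run:

        # Minimal power-analysis sketch. The effect size (Cohen's d = 0.5),
        # alpha and target power are conventional assumptions, not values
        # from any study mentioned above.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # Per-group sample size needed to detect a medium effect at 80% power
        n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
        print(f"n per group needed: {n_needed:.0f}")  # ~64

        # Power actually achieved by a hypothetical small study (n = 15 per group)
        power_15 = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
        print(f"power with n = 15: {power_15:.2f}")  # ~0.26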
     
  4. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    Yes, I see your point. So the problem needs a fix on a number of fronts, then?
     
    Woolie and andypants like this.
  5. Woolie

    Woolie Senior Member

    Messages:
    2,918
    Yes, I think the solution is probably not at the peer review/publication end of things (in any case, that wouldn't work retrospectively). I think it has to come from increasing readers' general scepticism and awareness of potential biases.

    And teaching anyone who reads or in any way uses other people's findings never to take the researchers' conclusions at face value. Always look at what was actually done and found.
     
  6. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    So we're back to: wouldn't it be great if critical thinking were taught in grade school?

    Some people manage to come by it naturally; they are sceptics. Others maybe need to have their optimism and enthusiasm tempered.
     
    Invisible Woman, Woolie and Trish like this.
  7. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    Is that information available separately from what was said to have been done and found?
     
    Invisible Woman likes this.
  8. Woolie

    Woolie Senior Member

    Messages:
    2,918
    Valid point.

    But then, if I were going to bother to lie, I wouldn't be producing shit articles like those. My articles would all have large Ns and low dropout rates, and the results would look amazing, with all hypotheses confirmed at p < .001. The fact that so many studies are shit tells us that people, on the whole, are not faking their experiments. I suspect that for every study containing outright deception there are probably 1,000 with no lies that are just plain shit.
     
    Invisible Woman and chrisb like this.
  9. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    I am not suggesting that it necessarily involves deliberate fraud. I just believe that people often did not do what they thought they did or what they intended to do. That's what it is to be human.
     
    Invisible Woman and Woolie like this.
  10. Woolie

    Woolie Senior Member

    Messages:
    2,918
    Yeah, researchers do push the boundaries of what's acceptable, like not reporting studies or manipulations that didn't turn out as hoped. That's not actually lying, but it's still being economical with the truth.
     
    Invisible Woman likes this.
  11. Invisible Woman

    Invisible Woman Senior Member (Voting Rights)

    Messages:
    10,280
    I think it's not always consciously done.

    Somebody might take certain actions thinking they understand the underlying mechanisms and that therefore the end result is x.

    But actually, their understanding is insufficient: not all the mechanisms they thought were implicated actually are, and there may be other factors they didn't allow for, or were unaware of. Their end result might be a more complex equation involving x.

    I saw this kind of thing quite often in my career. I'd get a call from someone who had taken certain actions to resolve an issue without really understanding the technical implications. 75% of the time they might get away with it, but in the other situations... it would hit the fan.
     
    Woolie likes this.
