The problem is that people trained in statistics have no particular reason to understand the more basic problems of trial design that arise from sources of bias - human nature. You cannot use numbers to assess the likelihood of bias. The jobs you want done are not jobs for statisticians.
However, it *is* a trial statistician's job to be aware of bias - after all, that's what the trial design is there to address!
I would expect statisticians to have a mathematical training background. So I would expect a statistician to check whether the methods he applies can actually be applied, i.e. whether the assumptions of the theory hold. That's what numerical mathematicians always do (and we made "jokes" about situations where this hadn't been done, e.g. an oil platform that cracked on the open sea because the meshing wasn't done correctly according to the theory). I would expect the statistician to report back to the researcher if he thinks the data or hypothesis don't fulfil the criteria - because then the theory no longer holds.
E.g. computing a p-value is only possible if you assume the null hypothesis is true (this is where type I and type II errors come in).
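To make that concrete, here is a minimal toy sketch (my own made-up numbers, nothing to do with PACE) of both points: first checking whether the test's assumptions plausibly hold, and then computing a p-value, which only answers a question posed under the assumption that the null hypothesis is true:

```python
# Toy sketch with made-up data (nothing to do with PACE):
# 1) check whether the test's assumptions plausibly hold,
# 2) note that the p-value is computed *under the assumption that the
#    null hypothesis is true*.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups drawn from the SAME distribution, i.e. the null hypothesis
# of "no difference" really is true here.
control = rng.normal(loc=50, scale=10, size=80)
treatment = rng.normal(loc=50, scale=10, size=80)

# (1) A t-test's theory assumes roughly normal data; a quick check is one
# way for the statistician to flag when that theory may not apply.
for name, sample in [("control", control), ("treatment", treatment)]:
    w, p_norm = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.2f}")

# (2) The p-value answers: IF the null were true, how often would a mean
# difference at least this large arise by chance? Rejecting a true null is
# a type I error; failing to reject a false null is a type II error.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

If the assumption check fails, that's exactly the point where I'd expect the statistician to go back to the researcher rather than just report the p-value.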
As with all these things it looks as if statisticians are increasingly oiling the wheels of garbage in garbage out. Statisticians tend to work on projects they are asked to look at by people who want positive results. They are asked 'how can we show this is significant'. Statisticians who reply 'I think you may be cherry picking' will not get asked again.
They could always decline to get involved, or withdraw their name from the paper. They have minimum standards they must meet, and a basic understanding of trial methodology and how it relates to statistics is surely one of them.

And there's not much they can do if they weren't consulted while the trial was being designed, particularly if those measures have been used in previous trials.
I will try to explain as best as I can. As I understand it, for PACE the main (i.e. alternative?) hypothesis is along the lines that patients are locked into a vicious circle of deconditioning, reinforced and perpetuated by activity avoidance. GET reverses that vicious circle and thereby reconditions patients.
So what would the null hypothesis have been? Was it ever reported? What would the p-value have been for data pertaining to that null hypothesis? Was it reported for PACE?
Presumably the null hypothesis would be something along the lines that patients are not deconditioned, and there is no vicious circle to break out of. GET would therefore have no beneficial effect.
I'm well aware that correctly stating the alternative and null hypotheses is crucial, and that I almost certainly won't have achieved that here. But I'm interested to see if anyone else finds this aspect of PACE interesting, and wants to add their two penn'oth.
Given we are extremely confident this null hypothesis is in fact true, this would presumably have to mean that a soundly constructed and operated trial would produce data showing a very high p-value for that null hypothesis. What would the p-value be for PACE?
I'm also (now) aware that p-values can be hugely distorted as a consequence of bad trial methodology etc.
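To illustrate what I mean, here is a toy simulation with entirely made-up numbers (not the actual PACE data): in an unblinded trial with a subjective self-rated outcome, even a treatment that does nothing can produce a small p-value if the treated group merely reports a bit more optimistically.

```python
# Toy simulation (made-up numbers, not PACE data): even when the treatment
# does nothing, a subjective self-rated outcome in an unblinded trial can
# yield a "significant" p-value if the treated group reports slightly more
# optimistically (response bias).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 150  # participants per arm

true_change_control = rng.normal(0, 8, n)   # no real improvement
true_change_treated = rng.normal(0, 8, n)   # no real improvement either

reporting_bias = 3  # treated participants rate themselves a few points better
reported_control = true_change_control
reported_treated = true_change_treated + reporting_bias

t, p = stats.ttest_ind(reported_treated, reported_control)
print(f"p-value on the reported (biased) outcome: {p:.4f}")
```

The small p-value there reflects the reporting bias, not any real treatment effect - which is why a low p-value on its own can't rescue a trial whose design lets that bias in.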
Am I on the right lines here?
I would say it's unreasonable to expect a statistician to understand the psychological factors that may influence outcomes - recall bias, confirmation bias. They also can't be expected to know the nuances of what sort of research designs are best to use to address specific questions, or what sort of participant sample would be appropriate and representative, or what sort of outcome measures would be most sensitive and indicative. That's the part the primary researchers are supposed to do - that's supposed to be their area of expertise.

I second these questions. For me it seems like the value in having people trained in statistics is not so much in doing the sterile number crunching - for that you could just click a button and have a computer script calculate and output the results - but in critically examining the validity of the input for testing, and then, if statistical testing is warranted, making sure that the conclusion accurately reflects what the test is actually telling you.
I would say it's unreasonable to expect a statistician to understand the psychological factors that may influence outcomes - recall bias, confirmation bias.
I know. We are sooooo dumb!
I don't think it is unreasonable at all. It's why I went to LSHTM to study stats. But I guess my initial interest in stats was started while studying psychology at Cambridge. But hey, maybe I'm just 'special'.
---
Statisticians are taught about those biases on MSc courses, so it is entirely reasonable to expect them to deal with them.
I don't like the tone of this elitism here - it is not fair to assume that just because someone has studied in one area, they are completely incapable of understanding another. That is a gross presumption to make. We have no idea about the backgrounds of the statisticians involved in PACE. But there should have been enough expertise in the others involved, and by the "endless rounds of peer review" to cover those factors. That they didn't spot the flaws is of great concern - and everyone involved should be ashamed of themselves.
Henrik Vogt is tweeting today
It is tempting to just laugh it off, but he is an MD and has started a "patient organisation" for people who have recovered from conditions by their own means/undocumented treatments (meaning ME patients who recovered through the Lightning Process, but that isn't said out loud because you are not allowed to advertise alternative treatments in Norway, including by sharing success stories).
People listen to him, and he continues to smear ME patients who are critical of the Lightning Process and the BPS approach to ME.
Edit to add: The tweet might be a reaction to a recent and critical article in the Journal of the Norwegian Medical Association about the PACE-trial.
There is a post about that article in the thread Rethinking the treatment of chronic fatigue syndrome - A reanalysis and evaluation of findings from a recent major trial of graded exercise and CBT, with a Google translation.
That's foul (and cheap; at times it's even a bit dumb): "If people had done it exactly like I said, nothing would have gone wrong. It's not my fault if others apply my idea incorrectly and, by doing so, harm people."

Not until a few days after that. Here is his reply
I don't like the tone of this elitism here - it is not fair to assume that just because someone has studied in one area, they are completely incapable of understanding another. That is a gross presumption to make. We have no idea about the backgrounds of the statisticians involved in PACE. But there should have been enough expertise in the others involved, and by the "endless rounds of peer review" to cover those factors. That they didn't spot the flaws is of great concern - and everyone involved should be ashamed of themselves.

I think that's a little unfair. That's not at all what I said. I've seen many times on the forum people placing unreasonable expectations on statisticians - saying that because the PACE trial had statisticians, those people didn't do their job well and must have been really shit. I'm not a statistician, but from where I sit, the PACE statisticians looked pretty competent to me (okay, maybe you're an amazing genius, and know all the ins and outs of everybody's research area, but then I wasn't talking about you specifically). Those statisticians obviously took the basic design and outcomes from the researchers, and worked on those. The problems with PACE were with the primary researchers, not the statisticians.
As research is done in specialist areas a statistician shouldn't be expected to know the nuances of the data and how it is collected. But they should know the right questions to ask to ensure that the stats are correct with the experiment.
One thing I notice in PACE and other work is that if you call something a scale then statisticians seem to treat it as such, without thought as to its properties.
That's true. There's no guarantee that the primary researchers are any good either. I don't think the PACE psychologists and psychiatrists received a very strong training in psychological methodology (the psychiatrists would have received none, and the psychologist - Trudie - trained under one of those untrained dudes, so by extension probably didn't get much methodological training either). Most of my training was in that; it's really what I do, because psychology has little "content" and is all about methods and approaches and their strengths and weaknesses.

Yes, it's good to have the subject expert on hand, but chances are they don't really understand the measures they are using either. No-one could ever question the CFQ because Chalder was an author, so she must understand how it works, right?
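As a concrete (entirely hypothetical) example of that scale point: the CFQ has 11 items with four response options each, and the very same answers look quite different under Likert scoring (0-3 per item, range 0-33) than under the original bimodal scoring (0 or 1 per item, range 0-11). The answer sets below are made up purely for illustration.

```python
# Toy example (made-up answers): the Chalder Fatigue Questionnaire has 11 items,
# each with four response options. The same answers give different pictures
# depending on whether you use Likert scoring (0,1,2,3 per item, range 0-33)
# or the original bimodal scoring (0,0,1,1 per item, range 0-11).

def likert(responses):
    return sum(responses)                         # each item already coded 0-3

def bimodal(responses):
    return sum(1 for r in responses if r >= 2)    # "more than usual" or worse

# Hypothetical patient: shifts several items from "much more than usual" (3)
# to "more than usual" (2) after treatment.
baseline  = [3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2]
follow_up = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]

print("Likert:  ", likert(baseline), "->", likert(follow_up))    # 28 -> 22
print("Bimodal: ", bimodal(baseline), "->", bimodal(follow_up))  # 11 -> 11
```

An apparent six-point improvement on one scoring and no change at all on the other, from identical answers - so treating the number as a well-behaved scale is a choice with consequences, not a given.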
That's not at all what I said.
As with all these things it looks as if statisticians are increasingly oiling the wheels of garbage in garbage out. Statisticians tend to work on projects they are asked to look at by people who want positive results. They are asked 'how can we show this is significant'. Statisticians who reply 'I think you may be cherry picking' will not get asked again.
The problem is that people trained in statistics have no particular reason to understand the more basic problems of trial design that arise from sources of bias - human nature.
I would expect statisticians to have a mathematical training background.
Oh, okay. Point taken.

No. It's what others were saying.
As research is done in specialist areas a statistician shouldn't be expected to know the nuances of the data and how it is collected. But they should know the right questions to ask to ensure that the stats are correct with the experiment.