That you think it's some good stuff mushed up with stuff that's a bit whiffy is, as I've said several times in this thread, a key point (as per my post here): studies of all kinds are patchy like that, blinded or not. And that's why the systematic reviewing community has been moving towards rating the uncertainty around data per outcome, not throwing out babies with bathwater (or calling bathwater babies).
This I think is the root of the problem, Hilda (@Hilda Bastian).
This is no good. What the systematic reviewing community has been moving towards is no good if it is flawed. I have been in the business of designing, conducting, assessing, and applying clinical trials to my practice for nearly fifty years. An appeal to the authority of the systematic reviewing community does not work. You may have missed it, but in a previous post I pointed out that for reviews to trawl through secondary endpoints is to commit exactly the same crime that authors are not allowed to commit.
You might say, but oh! the reviewers are using the right tools and doing things the right way. No good. The reason we are having this discussion is that they don't. They are as human as the authors. Moreover, the same people who write major reviews on how to judge trial quality, or chair committees on devising exactly the tools we are talking about (risk of bias), turn out to be authors on some of the very worst examples of poor studies in the ME field.
Let me put it this way. Cochrane has lost its Michelin star. I used to assume Cochrane was as free of bias as Mr Spock of Star Trek. But various recent events have made it clear that this is not how things are. The 'systematic reviewing community' has to take what comes on TripAdvisor just like the rest of us. And it is getting things wrong.
This is actually why lots of members here were very pleased to hear that you were getting involved. Your Absolutely Maybe is full of sensible and cutting analyses. You are clearly driven by a sense of patient rights. But the idea was to have you come in to sort out exactly what you say things are 'moving towards' - cherry picking by 'those in the know'. I gather you are to be congratulated on having recently submitted a PhD thesis. For me, the main function of a PhD is to teach a student how to recognise how much hot air their supervisor and friends produce.
You cannot get around this basic fact: if only primary outcome measures are good enough to stand as measures of the usefulness of a treatment, because everything else is subject to the problem of multiple analyses, then those other measures remain too unreliable to use in a review. You might say that in a meta-analysis lots of secondary measures pointing in the same direction carry more weight, but the whole point of systematic bias is that this is not so. With systematic bias everything leans a bit the way people want. Lots of studies seeming to point in a direction tells you nothing. There is some deeply flawed thinking going on.
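To make the point concrete, here is a minimal simulation sketch (not from the original post; the true effect of zero, the shared bias of 0.3 and the study sizes are assumptions purely for illustration). If every study shares the same directional bias, pooling them averages away the random noise but not the bias, so the combined estimate converges on the wrong answer rather than the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.0    # assume the treatment genuinely does nothing
shared_bias = 0.3    # assumed systematic bias pushing every study the same way
n_studies = 50
n_per_arm = 100

estimates = []
for _ in range(n_studies):
    # each study's observed effect = true effect + shared bias + sampling noise
    sampling_error = rng.normal(0, 1 / np.sqrt(n_per_arm))
    estimates.append(true_effect + shared_bias + sampling_error)

pooled = np.mean(estimates)
print(f"Pooled estimate across {n_studies} studies: {pooled:.3f}")
# Pooling shrinks the random noise, but the pooled estimate converges on
# true_effect + shared_bias (about 0.3), not on the true effect of zero.
```

Adding more studies of the same kind only makes the pooled result more precisely wrong; the apparent consistency across studies is the bias, not the treatment.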
My original comment that unblinded trials with subjective primary outcome measures are valueless was meant as a first approximation, and it is technically possible to find exceptions. But as a practical translational biologist I am only interested in the real world - where I think you will find that the statement holds very well - for two reasons.
One is that a high proportion of medical science is badly done. Hidden out of sight are all sorts of methodological crimes committed but never recorded (junior assessors changing the data to 'help' get the 'right' answer, discarding 'outliers', repeating analyses that 'did not seem right'...). Studies are inherently unreliable. We mitigate that by checking that the authors seem to know what they are doing. Any authors who set up an unblinded trial with a primary endpoint sufficiently subjective to be open to systematic bias in that context are not up to the job. And if that seems harsh, one only has to look at what went on with PACE and SMILE and FINE.
In other words, a trial with this design should not be taken seriously. It is not so much that it is valueless, I now realise, as potentially harmful.
The second reason relates to the above. An unblinded trial with a primary subjective endpoint is not so much valueless as dangerous because the 'systematic reviewing community' may well come along and find all sorts of things that prove things people want to prove and thereby cause harm. Again, that is exactly where we are at present with the exercise review. And a newly published tool suggests a more lenient approach to bias!
So the bottom line is that any new review that does not recognise the fact that the current systematic review policy is deeply flawed is going to be of no value to the patient community we both want to support. That may put you in a very difficult position, I realise. But you are such a strong advocate for patient rights that I think members here may still hope you will see the point!