A good ethics committee ensures this function. A trial that will not produce reliable evidence is unethical, by and large. Unfortunately, ethics committees are a mixed bag.
In this domain they seem to serve mostly an automatic-approval function.
"So the paper neatly demonstrates the authors' lack of understanding of the NICE process, thereby showing the authors' fundamental inability to grasp the scientific rigour needed for scientific research ... which is how it all started of course."

Not sure about that. I think they do understand it; it's just that they reject the validity of the outcome when it goes against them. They very much like the NICE guidelines when they go their way. In fact for years clinicians pretended to be bound by them when they recommended CBT/GET, and now that they don't, they simply say that guidelines are only advisory. Heads they win, tails they win, don't toss, they win again. NICE has produced other guidelines where the same quality of evidence went the other way; there is really no consistency here.
A proper process should not be heavily reliant upon which end of the scale of the mixed bag an ethics committee happens to be on; a proper process would minimize the variability due to this. If this were the case in aircraft/vehicle/pharmaceutical safety engineering we would be seeing no end of system failures. No process is perfect of course, as Boeing have managed to stunningly illustrate in recent times. But in safety engineering (which I have only had glancing contact with in my career, but enough to be confident of what I am saying here), there are specific aspects that have to be worked through and documented, and a failure in any one means going back and reviewing the design, until everything passes or acceptable mitigations are put in place. It is not always an exact science of course (e.g. estimating the human and environmental collateral harms of an aircraft springing a fuel leak in flight).
Perhaps I should have said "lack of understanding, or willingness to understand", which would still be applicable to their approach to research.
An ethics committee implicitly includes the responsibilities of a safety committee, surely?
It does, except that members of an ethics committee are not held responsible through any mechanism. Ethics committees were set up on the basis that they were a good idea. They became compulsory, but I don't think members have ever had to commit to behaving ethically themselves, or to upholding specified principles. Maybe we did but I have no recollection of it.

I'd not realised an ethics committee was so informal and unregulated ... and unpaid. If all involved are properly competent and suitably qualified, then it doubtless works fine. But it is of course, as our well-known researchers have proven down the years only too well, wide open to abuse, incompetence, etc. I'd think it an area in urgent need of reform.
But I suppose that if members of an ethics committee took on responsibility they could be expected to be paid for their services. Nobody is going to lay themselves open to litigation for nothing.
In my country, and I'm sure elsewhere, there is also a national authority that oversees the functioning of the ethics committees.
Something like that can haunt the people who approved the trial.
But even for studies that cannot be directly blinded, or that lack specific objective markers, there are always more general objective (and meaningful) measures of downstream consequences that are usually acceptable proxies.
For example, activity patterns, employment, welfare use.
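For instance, here is a minimal sketch (with entirely made-up numbers and invented variable names, not data from any real trial) of how a downstream proxy such as weekly hours worked could be compared across two arms:

```python
# Minimal sketch with hypothetical data: comparing an objective proxy outcome
# (weekly hours worked) before and after intervention in two arms of a trial.
from statistics import mean

# Per-participant records: (hours at baseline, hours at follow-up) - invented numbers.
treatment_arm = [(20, 12), (0, 0), (35, 30), (10, 8)]
control_arm   = [(18, 16), (5, 4), (30, 31), (0, 0)]

def mean_change(arm):
    """Average change in weekly hours worked from baseline to follow-up."""
    return mean(after - before for before, after in arm)

print(f"Treatment arm mean change: {mean_change(treatment_arm):+.1f} h/week")
print(f"Control arm mean change:   {mean_change(control_arm):+.1f} h/week")
```

The point is simply that such measures can be collected and compared in the same way as questionnaire scores, without relying on participants' expectations.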
"If my memory serves me correctly there was a study looking at outcomes for the British ME/CFS clinics at a time when GET/CBT was their standard treatment, and they found that following intervention from these services patients were likely to work fewer hours and claim more benefits."

Same result as PACE.
Yes, my point really was that everything pertinent should be up for discussion, no matter what. Whether the potential outcome of such a discussion may or may not fit with someone's aspirations is utterly distinct from whether the discussion should take place or not.
It is also clear that a trial's up-front design should include an assessment of the quality of any evidence the trial will produce. NICE's ratings of evidence quality are based on factors that are identifiable before a trial even starts! So the PACE trial design, for instance, could have stated up front that, being fully unblinded (as it unavoidably had to be), if it relied solely on subjective outcomes its evidence would inevitably rate as very low quality, which would likely have meant the trial would not have been funded. But of course PACE also used objective outcomes, and the trial design could have stated that with those outcomes, even though the trial was unblinded, the evidence would have been of higher quality and therefore more likely fundable.

This would have made it hugely more difficult for the investigators to skip the objective evidence, knowing that their trial's evidence quality would automatically be downgraded significantly as a result - not so much eminence to be garnered if your flagship trial, at the time of publishing, is publicly graded very low quality thanks to ignoring vital evidence! Part of the deal should, I think, be a requirement that when publishing, the evidence quality is clearly stated.
I find it amazing that authorisation of a trial does not require an up-front projection of evidence quality, along with the trial conditions required for achieving it, and a clear contractual obligation to meet them.
When you think about it, a clinical trial has one overriding goal: to provide evidence, one way or the other, of the efficacy of an intervention. So the quality of that evidence is paramount, and should as far as possible be stated early in a trial's acceptance stages.
Surely this would weed out many trials that should never see the light of day in the first place.
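Just to illustrate how mechanical that up-front projection could be, here is a toy sketch - my own simplification, loosely in the spirit of GRADE-style downgrading, not NICE's actual algorithm; the function name, levels and thresholds are invented for illustration:

```python
# Toy sketch, loosely in the spirit of GRADE-style downgrading (not an official
# algorithm): projecting evidence quality from design features that are all
# knowable before a trial recruits a single participant.
LEVELS = ["very low", "low", "moderate", "high"]

def projected_quality(randomised: bool, blinded: bool, objective_primary_outcome: bool) -> str:
    level = 3 if randomised else 1      # RCTs start "high", observational studies "low"
    if not blinded and not objective_primary_outcome:
        level -= 2                      # unblinded + subjective outcomes: serious risk of bias
    elif not blinded:
        level -= 1                      # unblinded but objective primary outcome: lesser downgrade
    return LEVELS[max(level, 0)]

# An unblinded trial relying solely on subjective outcomes projects two levels down
# before it even starts (further downgrades, e.g. for imprecision, could take it lower).
print(projected_quality(randomised=True, blinded=False, objective_primary_outcome=False))  # low
print(projected_quality(randomised=True, blinded=False, objective_primary_outcome=True))   # moderate
```

Nothing in that calculation depends on the trial's results, which is the whole point.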
I actually think it IS dimness. People can trot out arguments like a parrot and never see how they will apply in another context. Apart from anything else, it is dim to write an argument in an email to someone who clearly knows that the argument is garbage!
In my country, we have four regional ethics committees. Members of the committees are paid, although a member has commented that what they are paid amounts to much less than the standard minimum wage if you do your job reasonably conscientiously. It often takes quite a bit of time to really understand a trial, and there are a lot of new technologies that require specific expertise. One response has been to have national experts in specific types of research (e.g. AI, use of health service consumer data, genetics), so that ad hoc virtual committees relevant to a proposal can be constituted.
There are also ethics committees associated with institutions like universities, and they can also approve research. In those cases, I imagine the committee members are salaried members of the institution, and the work is done more or less as part of their job.
National standards for health and disability research (and quality improvement) are what helps to ensure consistency. All ethics committees in the country assess proposals against the standards. So, that is one way people can influence what happens - provide input when the standards come up for review.
In my country, and I'm sure elsewhere, there is also a national authority that oversees the functioning of the ethics committees. That is another point where citizens can have some influence - is the authority responsible for monitoring ethics committees doing its job well? Are they publishing annual reports about reasons for non-approval? Do they do any quality control assessments of the work of the ethics committees?
If my memory serves me correctly there was a study looking at outcomes for the British ME/CFS clinics at a time when GET/CBT was their standard treatment, and they found that following intervention from these services patients were likely to work fewer hours and claim more benefits.
This is a clear indictment of these services as treatment centres, but I would argue that for people with a long-term, currently untreatable medical condition this is a positive outcome.
I don't think the BPS research repeated the use of these metrics, though we still see school attendance being reported in paediatric studies (see the current thread on Magenta https://www.s4me.info/threads/grade...l-2024-gaunt-crawley-et-al.37488/#post-518833 , where school attendance improved in the no-treatment control but not in the treatment arms - again, perhaps not what the researchers wanted).
There is of course also the issue with those measures that there is a floor effect: once you are ill enough to no longer be able to work or go to school, the measure no longer registers further deterioration.
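A trivial sketch of that floor (made-up numbers on a hypothetical "remaining capacity" scale, just to show where the measure stops discriminating):

```python
# Small illustration of a floor effect: once someone cannot work at all, the
# "hours worked" measure stops registering any further decline.
def recorded_hours(capacity_percent: float, full_time: float = 37.5) -> float:
    """Weekly hours a person can work given remaining functional capacity, floored at zero."""
    return max(0.0, full_time * capacity_percent / 100)

# Negative values stand in for deterioration beyond the point of being able to work at all.
for capacity in (60, 10, 0, -20, -50):
    print(f"capacity {capacity:>4}% -> {recorded_hours(capacity):4.1f} h/week recorded")
# Capacities of 0%, -20% and -50% all record 0.0 h/week: below the floor, the
# measure can no longer distinguish further deterioration.
```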