Anomalies in the review process and interpretation of the evidence in the NICE guideline for chronic fatigue syndrome and myalgic encephalomyelitis (CFS & ME), 2023, White et al

So the paper neatly demonstrates the authors' lack of understanding of the NICE process, thereby showing the authors' fundamental inability to grasp the rigour needed for scientific research ... which is how it all started of course.
Not sure about that. I think they do understand it; it's just that they reject the validity of the outcome when it goes against them. They very much like the NICE guidelines when they go their way. In fact, for years clinicians pretended to be bound by them when they recommended CBT/GET, and now that the guidelines don't, they simply say that guidelines are only advisory. Heads they win, tails they win, don't toss and they win again. NICE has produced other guidelines where the same quality of evidence went the other way; there is really no consistency here.

The quacks are like politicians who only accept the validity of an election if they won, and who keep insisting that they won years after losing, as evidenced by their no longer being in office. Except that psychosomatic ideologues are still very much in office: they still hold every bit of power they ever had, even though they lost the 'election' because their evidence was always nothing but hot air.

Like most institutions, science has no protection against corruption from within, when the people who hold the levers of power simply reject the responsibility they are expected to uphold and do as they wish, confident - and rightly so - that they can clear themselves of any wrongdoing by 'investigating' themselves later on.

Parts of the system are built around 'self-oversight'. No system can function legitimately like this, but it's how it's built.
 
A good ethics committee performs this function. A trial that will not produce reliable evidence is, by and large, unethical. Unfortunately, ethics committees are a mixed bag.
A proper process should not depend heavily on which end of that mixed bag a particular ethics committee happens to sit; a proper process would minimise the variability due to this. If aircraft, vehicle or pharmaceutical safety engineering tolerated that kind of variability, we would be seeing no end of system failures. No process is perfect of course, as Boeing have managed to stunningly illustrate in recent times. But in safety engineering (which I have only had glancing contact with in my career, but enough to be confident of what I am saying here), there are specific aspects that have to be worked through and documented, and a failure in any one means going back and reviewing the design until everything passes or acceptable mitigations are put in place. It is not always an exact science, of course (e.g. estimating the human and environmental collateral harms of an aircraft springing a fuel leak in flight).
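As a rough illustration of the gated, documented process just described - a minimal sketch only, with invented check names, not any real standard's actual workflow:

```python
# Minimal sketch of a safety-engineering gate: every documented check must
# either pass or carry a signed-off mitigation before the design proceeds;
# any unresolved failure sends the design back for review.
# (The check names are invented for illustration.)
from dataclasses import dataclass

@dataclass
class SafetyCheck:
    name: str
    passed: bool
    mitigation_accepted: bool = False  # accepted mitigation for a failure

def design_may_proceed(checks):
    """True only if every check passed or has an accepted mitigation."""
    return all(c.passed or c.mitigation_accepted for c in checks)

checks = [
    SafetyCheck("fuel line isolation", passed=True),
    SafetyCheck("single point of failure analysis", passed=False,
                mitigation_accepted=True),  # failed, but mitigation agreed
    SafetyCheck("leak detection and crew alerting", passed=False),
]

print(design_may_proceed(checks))  # False -> back to design review
```

The point is not the code but the shape of the process: the gate is explicit, documented and auditable, rather than resting on goodwill.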

A safety engineering process is not there to insult the integrity of the people involved. It is there to minimise the consequences of failings due to normal human fallibility, and also to minimise the consequences of less excusable but sometimes inevitable human failings such as laziness, incompetence, over-confidence, etc. I see no reason why clinical trial ethics committees should be above such oversight. It's no good relying on the fact that people were taught all these things in the past, because things will still go horribly wrong without a safety process.

An ethics committee implicitly includes the responsibilities of a safety committee surely?
 
An ethics committee implicitly includes the responsibilities of a safety committee surely?

It does, except that members of an ethics committee are not held responsible through any mechanism. Ethics committees were set up on the basis that they were a good idea. They became compulsory, but I don't think members have ever had to commit to behaving ethically themselves, or to upholding specified principles. Maybe we did, but I have no recollection of it.

But I suppose that if members of an ethics committee took on responsibility they could be expected to be paid for their services. Nobody is going to lay themselves open to litigation for nothing.
 
It does, except that members of an ethics committee are not held responsible through any mechanism. [...] Nobody is going to lay themselves open to litigation for nothing.
I'd not realised an ethics committee was so informal and unregulated ... and unpaid. If all involved are properly competent and suitably qualified, then it doubtless works fine. But it is, of course - as our well-known researchers have proven only too well down the years - wide open to abuse, incompetence, etc. I'd think it an area in urgent need of reform.

In safety engineering a safety committee is typically headed by a qualified safety engineer (probably mandatory, though I'm not sure). Given how strongly human safety features in medical trials, I'm astonished that the management of trials hasn't caught up.

Is the British system typical of other countries? Or do some countries have more formalised rigour?
 
In my country, we have four regional ethics committees. Members of the committees are paid, although a member has commented that what they are paid amounts to much less than the standard minimum wage if you do your job reasonably conscientiously. It often takes quite a bit of time to really understand a trial and there are a lot of new technologies that require specific expertise. One response has been to have national experts in specific types of research e.g. AI, use of health service consumer data, genetics, so that ad hoc virtual committees relevant to a proposal can be constituted.

There are also ethics committees associated with institutions like universities, and they can also approve research. In those cases, I imagine the committee members are salaried members of the institution, and the work is done more or less as part of their job.

National standards for health and disability research (and quality improvement) are what helps to ensure consistency. All ethics committees in the country assess proposals against the standards. So, that is one way people can influence what happens - provide input when the standards come up for review.

In my country, and I'm sure elsewhere, there is also a national authority that oversees the functioning of the ethics committees. That is another point where citizens can have some influence - is the authority responsible for monitoring ethics committees doing its job well? Are they publishing annual reports about reasons for non-approval? Do they do any quality control assessments of the work of the ethics committees?
 
In my country, and I'm sure elsewhere, there is also a national authority that oversees the functioning of the ethics committees.

The situation in the UK may now be similar on these points. But I doubt hospital and university ethics committee members get paid. It isn't part of the job, though having done a stint can help towards bonus pay.

But even if they are paid, I am not clear how they can realistically be held accountable. If, five years later, there is a fuss about a trial being poorly designed, do the ethics committee members get fined or sent to prison? Presumably not. The chairman might get a chastising report, but very likely he would be fed up with doing the job by then anyway.
 
I agree that the feedback mechanisms are poor - they need to be better. An ethics committee member won't be sent to prison for making a mistake, but they may well be personally affected when a trial goes wrong. For example, if a trial participant dies as a result of a new drug, and there is no compensation for their family because the insurance arrangements for the international investigator don't cover participants in all of the countries taking part in the study, something like that can haunt the people who approved the trial.
 
I think the debate often gets stuck on wanting blinded or objective measures specific to the condition being studied, the equivalent of a diagnostic biomarker, etc.

But even for studies that cannot be directly blinded, or that lack specific objective markers, there are always more general objective (and meaningful) measures of downstream consequences that are usually acceptable proxies.

For example, activity patterns, employment, welfare use.

If there were no objective measures other than those, and the results were substantial and sustained, then that would be good evidence of benefit, even though such measures are generic and could be used for almost any condition - medical, psychosocial or anything in between.
 
But even for studies that cannot be directly blinded, or that lack specific objective markers, there are always more general objective (and meaningful) measures of downstream consequences that are usually acceptable proxies.

For example, activity patterns, employment, welfare use.

If my memory serves me correctly, there was a study looking at outcomes for the British ME/CFS clinics at a time when GET/CBT was their standard treatment, and it found that following intervention from these services patients were likely to work fewer hours and claim more benefits.

This is a clear indictment of these services as treatment centres, but I would argue that for people with a long-term, currently untreatable medical condition this is a positive outcome.

I don't think the BPS researchers repeated the use of these metrics, though we still see school attendance being reported in paediatric studies (see the current thread on Magenta, https://www.s4me.info/threads/grade...l-2024-gaunt-crawley-et-al.37488/#post-518833 , where school attendance improved in the no-treatment control but not in the treatment arms - again, perhaps not what the researchers wanted).
 
Yes, my point really was that everything pertinent should be up for discussion, no matter what. Whether the potential outcome of such a discussion may or may not fit with someone's aspirations is utterly distinct from whether the discussion should take place or not.

It is also clear that a trial's up-front design should include an assessment of the quality of any evidence the trial will produce. NICE's ratings of evidence quality are based on factors that are identifiable before a trial even starts! So the PACE trial design, for instance, could have stated up front that, being fully unblinded (as it unavoidably had to be), if it relied solely on subjective outcomes its evidence would inevitably rate as very low quality - which would likely have meant the trial was not funded. But of course PACE also used objective outcomes, and the trial design could have stated that with those outcomes, even though the trial was unblinded, the evidence would be of higher quality, and the trial therefore more likely fundable.

This would have made it hugely more difficult for the investigators to skip the objective evidence, knowing that their trial's evidence quality would then automatically be downgraded significantly - not so much eminence to be garnered if your flagship trial, at the time of publishing, is publicly graded very low quality thanks to ignoring vital evidence! Part of the deal should, I think, be a requirement that the evidence quality is clearly stated when publishing.

I find it amazing that authorisation of a trial does not require an up-front projection of evidence quality, along with the trial conditions required to achieve it - and a clear contractual obligation to meet them.

When you think about it, a clinical trial has one overriding goal: to provide evidence, one way or the other, of the efficacy of an intervention. So the quality of that evidence is paramount, and should as far as possible be stated early in a trial's acceptance stages.
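To make the 'identifiable before the trial starts' point concrete, here is a minimal sketch assuming simplified GRADE-style downgrading rules - the factor names and thresholds below are hypothetical, not NICE's actual method:

```python
# Minimal sketch: projecting a GRADE-style evidence-quality rating purely
# from design features knowable before a trial starts. The downgrading
# rules below are simplified and hypothetical, not NICE's actual method.

RATINGS = ["very low", "low", "moderate", "high"]

def projected_quality(randomised, blinded, subjective_outcomes_only):
    level = 3 if randomised else 1  # RCTs start 'high', observational 'low'
    if not blinded:
        # Treat 'unblinded with only self-reported outcomes' as a very
        # serious risk of bias (two-level downgrade); unblinded with
        # objective outcomes as serious (one level).
        level -= 2 if subjective_outcomes_only else 1
    return RATINGS[max(level, 0)]

# An unblinded trial relying solely on subjective outcomes projects poorly
# before a single participant is enrolled:
print(projected_quality(True, False, True))   # -> low
print(projected_quality(True, False, False))  # -> moderate
```

Further downgrades (e.g. for indirectness or imprecision) could take the first case to 'very low' - and every input here is known at the design stage, which is the point.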

Surely this would weed out many trials that should never see the light of day in the first place.

It would also stop good money being thrown after bad should it, for example, start to become clear that the objective measures a trial was funded to collect aren't showing what the investigators want.

Either: if a methodological issue comes to light that means the measures aren't working, then that information needs to be written up as the result, and it surely gives the next person applying for funding a chance to benefit from that learning - which is as important in itself as 'finding an outcome', in a field where new methods and honing how things are done are probably important for progress.

Or: if the particular measure is 'working fine' but just isn't showing what was anticipated or wanted, then that finding is as important as anything else. It's a rejection of the hypothesis, and if that isn't 'protected' then science isn't science any more. You can never confirm a hypothesis 100%, but you can reject it - which is the point of having one: the more times something is run and not rejected, the more 'sure' you are.

If a hypothesis basically can't ever be rejected, yet also can't ever show any positive effect, you get the money-wasting loop we all see, where no one gets to the bottom of what the issue is.
 
I actually think it IS dimness. People can trot out arguments like a parrot and never see how they apply in another context. Apart from anything else, it is dim to write an argument in an email to someone who clearly knows that the argument is garbage!

There is that. Although when power differences come into play, it can serve a different purpose - instruction, suggestion, warning, party line, or what not.
 
In my country, we have four regional ethics committees. Members of the committees are paid, although a member has commented that what they are paid amounts to much less than the standard minimum wage if you do your job reasonably conscientiously. [...]

I think this is an interesting point, because some of the tactics used are only going to be picked up by people who are looking at these things day in, day out, versus being shipped in as a subject expert to do one every so often. So you need the expertise of those who look at ethics, and the expertise of those who are experienced in administering and dealing with the applications, as much as the subject knowledge.

There might be issues that arise more in certain niches or sections - but if you have people doing these things regularly, you'd surely have a professional circle where the 'latest problems encountered' get communicated and flagged to the areas that aren't yet seeing them, so that proliferation is stopped.
 
If my memory serves me correctly, there was a study looking at outcomes for the British ME/CFS clinics at a time when GET/CBT was their standard treatment, and it found that following intervention from these services patients were likely to work fewer hours and claim more benefits. [...]

Agreed that it is important to warn of this - that measures such as those aren't quite right in a culture where the temptation is likely to be performance reviews (based on whether those measures are improving), when there is no treatment and plenty of other complications of the condition (which is why we need long-term outcomes, not short-term ones). The whole thing becomes revoltingly circular.

Doesn't cancer use 5-year outcomes rather than 1-year outcomes for any treatment packages? I don't know about other conditions that are taken seriously and treated clinically; I'm only quoting that because you hear these things in the news every so often.

There is of course also the issue that those measures have a floor effect. Once you are ill enough to no longer be able to work or go to school, the measure no longer differentiates further deterioration.
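A minimal sketch of that floor effect, with invented numbers:

```python
# Minimal sketch of a floor effect: weekly work hours cannot go below zero,
# so once someone is too ill to work at all, further deterioration becomes
# invisible to the measure. (The mapping and numbers are invented.)

def observed_work_hours(capacity_percent):
    """Map an underlying illness 'capacity' to observable weekly hours."""
    return max(0.0, (capacity_percent - 30.0) * 0.5)  # floored at 0 hours

# Two patients both record 0 hours, yet one is far sicker than the other:
print(observed_work_hours(30))  # 0.0 - has just become unable to work
print(observed_work_hours(5))   # 0.0 - severely ill; same reading
print(observed_work_hours(70))  # 20.0 - differences visible above the floor
```

Above the floor the measure discriminates; below it, everyone looks the same.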

I also think that we can now spot, simply from the uttering of the phrase 'secondary benefits' (which they aren't), who is conflicted or has an interest shaping their involvement and behaviour. One could argue that conveying that message is more their aim than anything else contained in whatever article or speech it appears in. Which gives me a sense, from who says it and how many say it, of how wise it probably is to stay well away from these as measures. After all, there are mediating variables in between.

But of course these days things have become so argument-based that anything resting on what others can term a behaviour seems to be twistable. We really could have done with the long-term stats for any of these objective variables from said trials.

But how do you separate out, for example, the person who 'overdoes it' - who has a high actimeter reading for six months and then ends up incredibly sick long-term because it caused a crash - from those who had a high actimeter reading because they were less unwell than the first person, when the criteria were so poor? And even now, do we have, and can we have, fine enough definitions to really capture the differences? Many of us might go downhill from top-end moderate to low-end moderate rather than from moderate to severe. And such changes might show up in one particular aspect more than others, or before other aspects.

It makes me realise that without a few really important findings and pieces of research we'd be on a right sticky wicket - the Workwell two-day CPET being one of the primary things showing that it isn't cart-before-horse, or simply being unfit, etc.
 
If my memory serves me correctly, there was a study looking at outcomes for the British ME/CFS clinics at a time when GET/CBT was their standard treatment, and it found that following intervention from these services patients were likely to work fewer hours and claim more benefits.

Right, that was PACE. And Mark Vink and I wrote this paper last year about how their own data shows no benefits overall in occupational status: https://content.iospress.com/articles/work/wor220569
 
There is of course also the issue that those measures have a floor effect. Once you are ill enough to no longer be able to work or go to school, the measure no longer differentiates further deterioration.

The whole thing's arse-about-face anyway. They shouldn't be focusing entirely on improvements, at least not in anyone who's been ill for more than two to three years.

Real improvement is quite infrequent, so the aim and the measurements should first and foremost be about stability. Any improvements that do occur are often in the form of better QoL, once people have established what level of pacing they need. Their ME isn't really any better, but they're having less pain, less PEM, fewer gut symptoms, etc, because they're not exceeding their capacity.

But I suppose that would require researchers with a passing knowledge of ME. :rolleyes:
 