Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

It is their stubborn recklessness in the face of all evidence and warnings that stuns and angers me. They really don't care about the harm they cause to others. They care only about harm to their reputations and empires.

I think that is clear in their comments to NICE, although I wonder if there is some cognitive dissonance which doesn't allow them to grasp that their work is not up to standard. That is quite natural for individual researchers, but that is where there need to be good governance processes, and others need to step in and point out the lack of quality and the issues (and that should be senior medics/academics/research funders - it's not the job of patients, although it is patients who have been doing it).
 
I wonder if there is some cognitive dissonance which doesn't allow them to grasp that their work is not up to standard - that is quite natural for individual researchers

@Caroline Struthers provided a good example here*, and yes, it's to be expected.

*https://www.s4me.info/threads/indep...ed-by-hilda-bastian.13645/page-80#post-390317

that is where there need to be good governance processes, and others need to step in and point out the lack of quality and the issues (and that should be senior medics/academics/research funders - it's not the job of patients, although it is patients who have been doing it).

Yes, you need to design out the bias, whether that's through ethics committees or whatever.
 
Yes, you need to design out the bias, whether that's through ethics committees or whatever.


Yes. Ethics committees are not trained in this currently, especially not lay members. I am not sure whether university research ethics boards even have lay members. If there were public stakeholder consultation on submissions to ethics committees, in the same way as NICE does for guidelines, that might be a great way to weed out bias and shed light on poor study design before the study even starts. I think the NICE process, with the technical review work done by people who are different from and independent of the decision-making committee, is very good. If any body making judgements about existing evidence (a systematic review team) or about the methods of generating future evidence (an ethics committee) were also selected transparently to be balanced (unlike Cochrane review teams), all the better. An anti-corruption unit could be set up to rove about stamping out the inevitable attempts to subvert the system. Again, I'm getting carried away into Line of Duty fantasy world.
 
I think that is clear in their comments to NICE, although I wonder if there is some cognitive dissonance which doesn't allow them to grasp that their work is not up to standard. That is quite natural for individual researchers, but that is where there need to be good governance processes, and others need to step in and point out the lack of quality and the issues (and that should be senior medics/academics/research funders - it's not the job of patients, although it is patients who have been doing it).
Yes, there is a serious lack of independent oversight. The establishment overseers are highly partisan, and lack the integrity and bottle needed to rise above it, and thereby be independent. And on top of that, they exploit their powerful positions to further their conflicted interests.

Things seriously need to change.
 
Yes. Ethics committees are not trained in this currently, especially not lay members.
Why would ethics committees be responsible for oversight to ensure good, unbiased trial methodology? Although trial methodology and trial ethics will obviously have common touch points, they are nonetheless quite distinct surely, and need separate oversight?

Does it not need a distinctly separate and differently qualified body to oversee the correct design, and ongoing execution, of a trial? Presumably the main remit of an ethics committee is to ensure the trial protocol meets the necessary ethical requirements. And then there would be subsequent trial oversight by a different independent body, to ensure the trial meets, and continues to meet, the requirements of its own approved protocol, as well as all standard trial methodological requirements.
 
The way they pick reviewers is also problematic. NICE at least strives for a balanced and representative guideline development committee, with a separate technical team, and then has things like the consultation to put its findings out into the world and get feedback on them.
NICE's performance on the Long Covid guideline and the Chronic Pain guideline has been less than great - there were various combinations of a poor selection of reviewers; not listening to the valid concerns of some of the reviewers; not having a proper consultation process; and not listening to the valid concerns of stakeholders.

Let's do it! Except will we find any trials which measure and report both (in one publication)?
I do hope someone will do that (apply GRADE to the two components of the asthma trial separately: to the part of the trial with objective outcomes and to the part with subjective outcomes).

The way to really address things is not to have a review system (like GRADE) that will pick up on issues, but to have better standards for trial design. Even trying to run through GRADE prior to running the trial could be valuable (perhaps an ethics committee should do this and say that trials giving low quality or very low quality evidence are unethical, as the results are meaningless!). It feels like trying to fix the way reviews are done is fixing the wrong problem.
I would also note that we found that having strong (continuous) audit helped improve compliance with the processes.
These, and the associated comments I didn't quote, are great observations. In the industry I used to work in, forestry, there were several competing quality standards for forest management. People might know the Forest Stewardship Council certification, but there are others. The schemes involve a set of criteria, so everyone knows what is expected. These are developed by committees of forest managers, people affected by land use, and purchasers of forest products, among others. The criteria develop over time as more is known about what makes for sustainable forest management and gives a company a social licence to operate.

People in the companies and organisations are trained in evaluation. And there are external assessors. These schemes built on the considerable gains that the ISO certification delivered, but allowed for the particular needs of the sector.

They made an enormous difference to the forestry sector. Of course, there are forest managers who aren't certified, but reputable buyers of forest products have nothing to do with them. And of course there are criminal goings on with assessors being bought off, and documentation faked, but there are processes to pick that up over time, processes aiming to make everything very transparent. Assessment reports are published online for anyone to see.

Something like that should be done for medical research. Compared to the range of things that have to be considered in forestry, I think it would not be hard to come up with a certification scheme for good medical/health research. e.g.
  • have stakeholders been consulted? (how are the potential users of the research, the patients and the clinicians, involved with the research design? were the needs and rights of indigenous people, of women, of children and others thought about? have ethical standards for the involvement of patients been followed?)
  • does the methodology fit with defined good practice - the experimental design, the statistical analysis?
  • was the trial design lodged in a public registry?
  • have aspects of privacy and data security been thought about?
  • are the outcomes in line with standard outcomes developed in consultation with stakeholders?
  • have aspects of health and safety been thought about? (e.g. is it really safe to do that gene-editing coronavirus research? has guidance about the use of animals in research been considered? are staff and trial participants kept safe?)
  • is there a process for continuous improvement, so that what is learned by doing one study is incorporated into the organisation's procedures? what are the processes for selecting peer reviewers? what are the processes for responding to feedback on the research?
  • is there a process for making key information public and for reviewing the quality of research done internally? e.g. making data available; publishing the names of peer reviewers; reporting all planned outcomes; disseminating results, including to the participants, to funders, to clinicians and to patient groups. What are the rules for communicating results to the media?
  • was the research run in a cost-effective way? were there processes to ensure that funding was handled appropriately? were planned milestones met? if not, why not?
  • and what is the process for external review of all of these things?
Purchasers of research could specify that only researchers or organisations who are certified under the standard can participate in funding rounds.

It might be fun to come up with such a standard. Maybe there is one already that brings together pieces like the guidance on the use of animals in research.

The stakes of medical research are unquestionably very high, and yet my impression is that the standard of the work done is more often than not pretty low. And the regulatory processes to ensure it is done well seem inadequate. If the forest sector can do it, surely a sector full of doctors and universities can manage to?
 
Why would ethics committees be responsible for oversight to ensure good, unbiased trial methodology? Although trial methodology and trial ethics will obviously have common touch points, they are nonetheless quite distinct surely, and need separate oversight?

I think it is closer than that. Adequate trial design is primarily an ethical issue. If there weren't people involved, scientists could be allowed to do things as badly as they fancied. The problem with poor methodology is its impact on people's lives - which is an ethical issue.

I used to be on an ethics committee and we considered this very much our remit. Trials that were not going to give useful answers were not ethical if they exposed patients to any procedures that might have risk. We had lay members too, although they tended not to have knowledge of methodology.

Unfortunately, there will always be academic institutions where ethics committees are not too tight on their requirements. The same sort of chums' network will operate as it does at the journals.

Inevitably quality control is going to need to be at every step.
 
Something like that should be done for medical research. Compared to the range of things that have to be considered in forestry, I think it would not be hard to come up with a certification scheme for good medical/health research. e.g.
Absolutely great idea. Lots of these standards already exist, but they are not mandatory, and so not policed. Even if some are mandatory, they are still not policed effectively. Keeping patients and the public informed is also very low on the list of priorities, despite the fact that it is they who pay for it, with taxes and by taking part in trials. There are endless studies of just how low the rate of following even basic standards is, but nothing very practical on how to make things better. There are also endless piecemeal efforts to improve bits and pieces of the research process jigsaw, but nothing which threatens to disrupt the status quo yet. Also, the initiatives are usually focused on drug trials, as if trials or observational research aimed at developing other types of healthcare intervention don't need to achieve these standards.
 
This area of research ethics oversight and research standards audit should be a topic for investigation by the House of Commons Science Committee.

Interesting - I didn't even know it existed.

Anything that's publicly funded should meet minimum standards, yet as @Hutan, @Caroline Struthers and others have highlighted, it seems they don't meet even the most basic standards - it seems that all of the ME/CFS research on CBT and/or exercise fails the test @Jonathan Edwards describes: "not going to give useful answers". OK, PACE may have passed that test if the study had followed its proposed methodology.
 
Interesting - I didn't even know it existed.

It does. And Carol Monaghan is a member.
 
it seems that all of the ME/CFS research on CBT and/or exercise fails the test @Jonathan Edwards describes "not going to give useful answers".

I think the problem there is, useful to whom!

Perhaps one of the helpful pressures might be patient involvement in setting standards for trials. If that became sufficiently attractive to funding bodies and high profile institutions to which researchers might look for future employment, it would at least add another layer of difficulty to getting away with crap research.

Science also needs to get better at allowing research to produce nothing much but publish the data anyway. You commission scripts on the basis that some won't be great - and that's fine, because otherwise playwrights would just keep rewriting The Importance of Being Earnest for the rest of eternity - so why not research?
 
Absolutely great idea. Lots of these standards already exist, but they are not mandatory, and so not policed. Even if some are mandatory, they are still not policed effectively.
The Forest Stewardship Council was developed by stakeholders who were frustrated that they couldn't hold bad forest owners to account, and by some forest owners who wanted a way to show that they weren't bad forest owners. My point is that it didn't come from government but, once it was in place, many governments and purchasers of forest products built forest certification into laws and purchasing rules.

Back in the early 1990s there was just a groundswell of feeling that something had to be done to make management of the world's forests better. And so lots of people made the idea, which importantly included external monitoring, happen. I think it could be the time for medical research to do the same. You need a few influential organisations who do medical/health research to commit to getting the ball rolling. Ironically, it's exactly the sort of initiative the Cochrane Institute should have been well placed to drive.
 
Something like that should be done for medical research. Compared to the range of things that have to be considered in forestry, I think it would not be hard to come up with a certification scheme for good medical/health research. e.g.
  • have stakeholders been consulted? (how are the potential users of the research, the patients and the clinicians, involved with the research design? were the needs and rights of indigenous people, of women, of children and others thought about? have ethical standards for the involvement of patients been followed?)
And some vetting should be done on what is said in an application. Recovery Norway should not be allowed to be branded as "patient collaborators", for example. They are continuously arguing that they are not a patient organisation when it suits them not to be, but then suddenly they are... :banghead:
 
Didn't Live Landmark complain that there were too many patients on the NICE committee!!
Yeah, in an article a few days ago. Not just that, but also that many other committee members were related to people with ME or generally involved in the issue without being BPS ideologues, because apparently even that should disqualify them. Not a caste system or anything like that, though. It's just the mere thought of anyone affected by this being involved that gets people irrationally angry, which is totally normal and respectful of the principles medicine is built on. You know, patient-centered medicine, the proper way, one that excludes anyone who doesn't fit the dogma.

This more than anything shows how this is 99% a political issue. Even with all the pains NICE went through to provide balance, fanatics still cry out for bias against them while demanding exclusive bias in their favor. Just like in politics.
 
Ironically, it's exactly the sort of initiative the Cochrane Institute should have been well placed to drive.
Yes, this is why I used to be so devoted to Cochrane. As an information specialist, I could see the potential of widening the scope of information which Cochrane could capture about all the trials in its CENTRAL database of controlled trials - things like non-reporting, selective reporting, poor trial designs (e.g. PACE) and conflicts of interest. And also the potential to get citizen scientists/investigators involved in mapping it all, like has been done so successfully with Zooniverse projects: https://www.zooniverse.org/. Unfortunately, Cochrane are so inward-focused that they have to do everything Cochrane's own way, so instead they have created Cochrane Crowd, https://crowd.cochrane.org/, geared towards meeting the needs of people writing reviews by recruiting volunteers to do the tedious work of screening studies. All very fluffy and fun (if you like that kind of thing), but ultimately self-serving.

When I worked at Cochrane, CENTRAL was a bit crap, as it contained animal studies, which it shouldn't, plus other non-trial records. All of these non-trials are supposed to be screened out, either automatically or by information specialists and/or Cochrane Crowd volunteers. I just did a search on Larun as author to see if she had done any trials, and there are two studies listed in the CENTRAL register of trials, both of which are in fact reviews. One is a rip-off of the Cochrane Exercise review. So CENTRAL is still a bit crap then :-/ And not open access, even though they now use volunteers to update and maintain it. Unethical.
 
I know nothing much about Cochrane and how it works. I have no useful contribution or insights to make. But I am in a particular frame of mind presently so . . .

. . . from my understanding of some of the problems, thanks to the Larun review, I think I'll just have fries with my reading and call it McCochrane. As it seems to me that this is what they're aiming for.

[back to more serious commentary]
 