
Who Agrees That GRADE is (a) unjustified in theory and (b) wrong in practice?

Discussion in 'Other research methodology topics' started by Jonathan Edwards, Mar 4, 2021.

  1. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    11,331
    Location:
    London, UK
    They also like to use a language called ....

    GET-IT
    GET-IT stands for the Glossary of Evaluation Terms for Informed Treatment choices.

    The aim of this glossary is to facilitate informed choices about treatments by promoting consistent use of plain language and providing plain language explanations of terms that people might need to understand if they wish to assess claims about treatments.
     
    MSEsperanza, Michelle, Barry and 7 others like this.
  2. Invisible Woman

    Invisible Woman Senior Member (Voting Rights)

    Messages:
    10,284
    And then they slap an acronym on as the title?

    Not only that, it's an acronym which is already in use.

    Give me strength.........
     
    Hutan, MSEsperanza, Michelle and 6 others like this.
  3. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    453
    Someone commented in the thread on the response by Busse et al. that people with ME now have to take on the whole of EBM -- Guyatt being one of those who apparently coined the term in the 1990s. The more we learn about associations between prominent figures in the EBM world who have been involved with the Cochrane review, as with this IHC team, the truer that seems to become. NTNU (Flottorp, Larun et al.) has certainly succeeded in forging links with eminent people in EBM.

    It's a sad state of affairs that these people are promoting the opposite of EBM. What went wrong? Conflicts of interest with policymakers in healthcare? Them having the same view that prevails in medicine that MUS conditions lack objective evidence (biomarkers) and thus must involve psychosocial factors?
     
  4. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    1,813
    This shit should be put up for an award - based on a few seconds' browsing I found this gem: "The learning resources complement and facilitate critical thinking teaching and scientific reasoning in other areas." Who could possibly object to that - a thing of beauty - aspirational, sunlit uplands ---. Presumably the difficulty comes when you have to recommend something specific - diagnosis, treatment --- way forward in terms of understanding the disease.

    I assume they just recommend CBT/GET without having any evidence - possibly there might be some interesting footwork to get from the aspirational to the specific.

    https://www.informedhealthchoices.org/our-solution/
     
  5. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    1,813
    Actually, if you had nothing better to do this is an entertaining read - OK, possibly despairing, incredulity --- couple of random paragraphs* --- "Resources with universal relevance". I think "Global" is what they are aiming for. Aside from politics/government propaganda I doubt there's much use for this in the universe!
    Then there's "human-centred" - last time I checked, on digital TV, the gelada baboons have a full-time job eating grass and avoiding Ethiopian wolves - so yes, best stick to the humans! Sadly I have to leave, but maybe someone should contact them and advise how much we enjoyed the creative writing --- they should really get into that full time!

    *
    Resources with universal relevance
    We initially created resources for low-income settings, where the need is greatest. However, teams in over 20 countries – including high and middle income settings – are translating and adapting these resources for use in their settings. Most teams are finding that except for language translation, the school resources can be used in their original form without major changes.
    See IHC by Country.

    International and multi-disciplinary collaboration
    We are an international team with backgrounds in research, public health, design, education, technology and communication. We collaborate closely with teachers, students, parents, school administrators and curriculum developers in different countries. Employing a human-centred design approach, we develop resources that are engaging, understandable and feasible to implement in a range of contexts. We carry out fair comparisons (randomized trials) to make sure that they are effective, and use process evaluations to understand how we might better facilitate uptake of resources at a country level.
     
  6. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    9,742
    Location:
    UK West Midlands
    Informed choice, :wtf: you could not make it up. It’s like all those countries with Democratic in the name.
     
  7. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    600
    Location:
    Oxford UK
    I think I should do a search for all the crappy out of date reviews in the Cochrane Library and get them to slap a "do not use for decision-making" notice on every single one! There will be hundreds of them.
     
  8. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    1,813
    Go for the website - should not be used for clinical decision-making or just abbreviate - should not be used---

    I gather there are folks teaching PACE i.e. as in how not to do a trial - maybe put this up as teaching material --- how to deceive - a masterclass in manipulation
     
  9. Adrian

    Adrian Administrator Staff Member

    Messages:
    5,981
    Location:
    UK
    There may be a number that have the same issues as this one and the GET one in terms of what they think is an acceptable methodology. Any involving CBT are probably dodgy as it's all unblinded, so it could be worth doing a trawl and commenting.
     
  10. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    1,813
    Yes, I've heard that much early psychological research, conducted by researchers who had experienced the Holocaust, was reliable/sound. This came up in the context of a discussion of making up research findings - scientists cheating [BBC Radio 4 programme?]. I think the point was that those with a vested interest in a sound outcome [Holocaust survivors] are actually more reliable, i.e. compared to those with none.

    I think I've come across examples of counter-intuitive policies - so NICE should monitor outcomes and think about what the evidence suggests - who really is reliable?
     
  11. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    1,813
    There's an excellent review by @Jonathan Edwards of the current position re ME/CFS research here:
    https://www.s4me.info/threads/the-t...leisk-and-nocon-2021.19583/page-3#post-332763

    In a way it illustrates the difficulty in developing any GRADE type assessment of research. You actually need knowledgeable people to assess the methodology, to see if it makes sense.

    OK, I assume you can establish some basic rules [objective outcomes, blinding ---] but if the methodology is flawed then you shouldn't use the study - and only a knowledgeable reviewer will know that!
     
    Invisible Woman and Trish like this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    11,331
    Location:
    London, UK
    This post has been copied and following posts moved from
    What we're not being told about ME - UnHerd (Tom Chivers)


    Standing back, it may be important to realise just how easy it is not to see how muddled and self-serving the thinking of medical practitioners can be - even experts, maybe especially experts.

    The WHO has recently decreed that traditional Chinese medicine should be given the same respect as western medicine. Western medicine, like TCM, was for centuries based on complete fairytales. Quite a lot still is. Rehabilitation as a speciality was set up to provide some way of dealing with the huge number of war casualties - to provide some framework for people while returning to some new level of function. It was never based on tested methods, although some testing has gone on in recent years.

    And GRADE has been adopted by NICE and Cochrane and pretty much every other relevant body despite being nonsense. It may be hard for a journalist to believe this. The people who see the situation for what it is are the engineers, who have always had to stick to real evidence of safety and efficacy because they can be sued for getting it wrong.
     
    Last edited by a moderator: Aug 28, 2021
    FMMM1 and Simbindi like this.
  13. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,469
    With this new-found emphasis on GRADE it is intriguing how everything seems to head back in the direction of McPerson, and how little of it seems to have been acknowledged or otherwise generally known. Can it all be coincidence that Cott was developing his BPS theories alongside the department coming up with GRADE?
     
    Simbindi, Louie41, Michelle and 3 others like this.
  14. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    11,331
    Location:
    London, UK
    There may be some of the underlying dislike of high-tech medicine behind the EBM programme that is so prevalent in Cochrane.

    But the more basic relation may be the attraction, for woolly-minded 'scientists', of applying numbers to psychosocial issues where they do not belong. GRADE is based on very much the sort of pseudo-arithmetic beloved of the psychologists.
     
    Hutan, FMMM1, Michelle and 10 others like this.
  15. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,231
    Since I've been involved in these forums I've become convinced of the truth of this. They seem to think you can superimpose a neat numeric scale onto things that are nothing like that simple to quantify. Any relationship would be unlikely to be a neat straight line, and quite apart from that, there is so much subjectivity that the noise is massive and likely swamps much of what might be meaningful. But from their perspective, if you can slap numbers onto something - any numbers - then it suddenly acquires credibility and becomes 'scientific', and they can call themselves scientists ... and convince the unwitting 'others' of that.

    It's like designing a car by questionnaire. If you took an existing design, with all the carefully worked out numbers that had gone into that design, but then went back to the design engineers and said "we want to design another car, but this time forget about the science and engineering processes, but just tell me what you think this number should be, and that number, ...". And then built the car according to those numbers. It would not work very well, if at all, yet would likely still be better than the BPS 'science'.
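    The noise point above can be sketched in a few lines. All the numbers here are hypothetical - a true 2-point mean improvement on a 0-100 questionnaire, with subjective noise five times the size of the effect - but they show how the sampling error alone can be of the same order as the thing being measured:

    ```python
    import random

    random.seed(0)

    def questionnaire_score(true_improvement, noise_sd=10.0):
        """One participant's self-rated change on a 0-100 questionnaire:
        a small real effect buried in large subjective noise."""
        return true_improvement + random.gauss(0, noise_sd)

    # Hypothetical numbers: true mean improvement of 2 points,
    # subjective noise with SD 10 (five times the effect size).
    n = 50
    treated = [questionnaire_score(2.0) for _ in range(n)]
    control = [questionnaire_score(0.0) for _ in range(n)]

    effect = sum(treated) / n - sum(control) / n
    # Standard error of the difference in means: sd * sqrt(2 / n)
    se = 10.0 * (2 / n) ** 0.5
    print(f"observed effect: {effect:.1f} +/- {se:.1f}")
    ```

    With 50 participants per arm the standard error is 2.0 - the same size as the true effect - so whatever number comes out is as much noise as signal, before any bias from unblinding is even considered.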
     
    Snow Leopard, JohnM, Louie41 and 9 others like this.
  16. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,469
    Yes. It may be that those who created the GRADE system had a particular set of psychological characteristics, and it is perhaps a pity that they did not attempt to enumerate those rather than create a more general, fanciful system.
     
    Louie41 and Barry like this.
  17. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    289
    The idea behind methods like GRADE and systematic reviews is to eliminate as much bias as possible in reviews of evidence. Hence the search terms that return thousands of results, which reviewers have to spend months sorting through to systematically find all the relevant studies, so there isn't bias in the studies they select. GRADE and systematic reviews are imperfect, but the alternative - not trying to follow such methods at all - seems even more open to bias.

    I don't think that systematic reviews and GRADE are the optimal way to assess evidence. But from all the ways I've seen that people can be biased and terrible in assessing evidence, and the questionable methods they can use, the alternative seems like such a steaming pile of trash that GRADE and systematic reviews look good. I think there need to at least be some kind of systematic methods, and they need to be strict enough that people can't just drop the parts they don't like in order to get the conclusions they want.
     
  18. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,469
    For anyone puzzled by my posts: at the 1985 conference at which Arthur Cott presented the views upon which SW and MS appear dependent, a paper was delivered on "Measuring utilities for health states" by George Torrance of the Department of Management Science, and Clinical Epidemiology and Biostatistics, of McMaster University. This area of work appears to be a forerunner of GRADE, which seems to have emanated from that department.
     
    Louie41, Michelle, Simbindi and 3 others like this.
  19. Adrian

    Adrian Administrator Staff Member

    Messages:
    5,981
    Location:
    UK
    Basically they don't understand what it means to measure something, or how to treat the accuracy of a measurement (which is related to the way measures work and the associated noise) - something engineers learn.
     
  20. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    11,331
    Location:
    London, UK
    But that is not the alternative. The alternative is to do things as carefully as practical.

    GRADE has two components. One is a set of rules about all the things that need to be taken into account. That bit is fine and likely to be very useful as an aide-memoire for competent people trawling through vast amounts of material. The systematic logging of information in GRADE makes sense at least in terms of being able to display why a decision was made.

    But the second part of GRADE - the scoring system - is just plain wrong. It is wrong because it is a guess at a pseudo-arithmetic recipe that a group of 'experts' thinks best matches the way they judge things. So by definition it is a poor second to some experts judging things.
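    The scoring recipe in question can be caricatured in a few lines. GRADE really does start randomised trials at "high" and observational studies at "low" certainty, then downgrades or upgrades per domain judgement; the numeric encoding below (4 down to 1) is my own illustrative assumption, not an official implementation:

    ```python
    # Minimal sketch of GRADE-style level arithmetic. Domain names are
    # GRADE's own; the integer encoding is an illustrative assumption.
    LEVELS = {4: "high", 3: "moderate", 2: "low", 1: "very low"}

    def grade_certainty(randomised, downgrades, upgrades=0):
        """Start RCTs at 'high' (4) and observational studies at 'low' (2),
        then add or subtract one point per judgement call, clamped to 1..4."""
        score = (4 if randomised else 2) - sum(downgrades.values()) + upgrades
        return LEVELS[max(1, min(4, score))]

    # E.g. an unblinded trial with a subjective outcome: two separate
    # one-level downgrades still leave it rated 'low' rather than rejected.
    print(grade_certainty(
        randomised=True,
        downgrades={"risk_of_bias": 1, "imprecision": 1},
    ))  # -> low
    ```

    The point being made above falls straight out of the sketch: each judgement call is flattened to an equal-weight subtraction, so a fatal design flaw and a minor quibble move the rating by the same amount.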

    So why not have experts judging things?

    You might say that it is important to make use of a consensus view, but I think not. I have worked for years with people who like to sit on these committees and by and large they are the dim ones. The clever people just get on with their research. Consensus is reduction to the 'lowest common denominator'. I realise that might seem to mean that what one needs to do is find just the clever people who really understand bias. But it is not as bad as that. If you include one person on a committee who can see a fatal flaw in a study then there is at least a reasonable hope that they will convince the others. Especially if it is barn-door obvious, as it is for PACE.

    We don't use systematic recipes in real science. We always use everything at our disposal.
     
    FMMM1, Louie41, Michelle and 10 others like this.

Share This Page