
Adapt or die: how the pandemic made the shift from EBM to EBM+ more urgent, Greenhalgh et al, 2021

Discussion in 'Research methodology news and research' started by rvallee, Jul 22, 2022.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,469
    Location:
    Canada
    Full title: Adapt or die: how the pandemic made the shift from EBM to EBM+ more urgent
    Authors: Trisha Greenhalgh, David Fisman, Danielle J Cane, Matthew Oliver, Chandini Raina Macintyre
    Open access: https://ebm.bmj.com/content/early/2022/07/19/bmjebm-2022-111952

    (Paragraphs mine for legibility)
    Evidence-based medicine (EBM’s) traditional methods, especially randomised controlled trials (RCTs) and meta-analyses, along with risk-of-bias tools and checklists, have contributed significantly to the science of COVID-19. But these methods and tools were designed primarily to answer simple, focused questions in a stable context where yesterday’s research can be mapped more or less unproblematically onto today’s clinical and policy questions. They have significant limitations when extended to complex questions about a novel pathogen causing chaos across multiple sectors in a fast-changing global context.

    Non-pharmaceutical interventions which combine material artefacts, human behaviour, organisational directives, occupational health and safety, and the built environment are a case in point: EBM’s experimental, intervention-focused, checklist-driven, effect-size-oriented and deductive approach has sometimes confused rather than informed debate. While RCTs are important, exclusion of other study designs and evidence sources has been particularly problematic in a context where rapid decision making is needed in order to save lives and protect health.

    It is time to bring in a wider range of evidence and a more pluralist approach to defining what counts as ‘high-quality’ evidence. We introduce some conceptual tools and quality frameworks from various fields involving what is known as mechanistic research, including complexity science, engineering and the social sciences.

    We propose that the tools and frameworks of mechanistic evidence, sometimes known as ‘EBM+’ when combined with traditional EBM, might be used to develop and evaluate the interdisciplinary evidence base needed to take us out of this protracted pandemic. Further articles in this series will apply pluralistic methods to specific research questions.
     
    MSEsperanza, Trish, Arnie Pye and 2 others like this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,469
    Location:
    Canada
    I have only skimmed, but this seems like rearranging the deck chairs on the Titanic: it proposes small adjustments that do nothing about the fact that the ship doesn't hold water. It consists mostly of the plus sign, and not much else.

    I'm actually amazed that it could introduce concepts from other disciplines and not include quality control, which is a key missing piece. It's the complete lack of a reliable feedback loop that causes most of the problems, in the form of excessive groupthink and a don't-you-dare-take-this-boondoggle-away-from-me attitude that keeps wrong ideas alive out of petty self-interest.

    Whatever those changes may do, they change nothing about the supply-side, top-down mutual admiration society medicine suffers from, or how it leads to a mix of groupthink and turf wars, where wrong ideas are defended by the people who marketed them, no matter the consequences.

    And that says nothing of blatant corruption, and how EBM has enabled pseudoscience to establish itself in medicine at the same level as actual scientific concepts.

    The king is dead, long live the same king with a new mustache in the form of a + sign.
     
  3. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    5,397
    Location:
    UK
    Oh, that'll help no end.

    After all, accounting is so much easier and quicker when you do away with all that audit bother, because you can just make the numbers up.
     
  4. Sean

    Sean Moderator Staff Member

    Messages:
    7,213
    Location:
    Australia
    Sounds to me like justifying ever more sloppy standards.
     
    Trish, alktipping, Lilas and 8 others like this.
  5. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,829
    Location:
    Australia
    Isn't that what all the pro-ivermectin people are claiming? That is going to go down real well...
     
    Mithriel, Trish, FMMM1 and 8 others like this.
  6. LarsSG

    LarsSG Senior Member (Voting Rights)

    Messages:
    370
    Their basic point — that we should not discount mechanistic evidence — seems pretty sound to me. I think this is something we've fallen down on many times with Covid (oh, we don't have an RCT that proves masks work, and so on). Mechanistic evidence works in pretty much every hard science field (physics, chemistry, biology, etc.). We're just talking about working with causality. It's only when we can't really figure things out well enough that we need RCTs.
     
    Trish, alktipping, Midnattsol and 6 others like this.
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,518
    Location:
    London, UK
    Yes, but we never did.
    Mechanistic evidence is part and parcel of the early stages of all clinical research development.
    But because of biological complexity it is never good enough as a measure of clinical utility.

    This is phoney academia with knobs on. It is doing a lot of harm.
     
    Mithriel, alex3619, lycaena and 13 others like this.
  8. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    2,538
    The issue actually is that when basic sense and science contradict their pet nonsense, they write some pseudophilosophy code to claim it doesn't, or just order people to stop asking questions about things that could cause such 'awkwardness', and treat people like objects (by using personality research and inferring they don't know their own minds anyway, so here is the answer and you don't need to ask).

    Given the people involved I can't imagine this being a return to common sense and using science and logic - more some twisted manifesto about crowbarring in psych concepts/blag to fill gaps.

    If this reaches people who don't have these skills, or aren't confident in them, will we end up with learn-by-rote being easier to pick off the shelf to fill the gaps than critical thinking? That is, it starts to depend on the background of the individual if you aren't imparting both the skillset at the same time, and indeed the time for people to be able to engage in it (which might require a more extensive discussion/history taken by the right people, etc.).

    Which subjects actually drill in research design, and critique of it when reading results, in such a way that you have to get a decent mark in that discussion to be able to progress? And at what level? Because there are plenty where the exams are learn-by-rote, or test that you can do certain techniques, without really testing the full gamut of what approach to use in research design: e.g. 'choose the right calculation and do it' being what gets you marks, rather than 'spot the right way to approach testing this issue'.

    This would seem as important a skill to practise in peer review as anywhere else, and I rarely see it. Maybe if they mean what they say, they'd introduce this as a required section in all research papers, and require that what was written in it be a major part of what peer review assesses?
     
    alktipping and Peter Trewhitt like this.
  9. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    985
    It seems important to point out that Greenhalgh signed a similar plea to extend the definition of “evidence” in 2018, written by a Norwegian biopsychosocial group with close ties to Flottorp et al:

    Medical scientists and philosophers worldwide appeal to EBM to expand the notion of ‘evidence’

    Given the multiplicity of methods (cf 2) and a wide interpretation of what counts as a mechanism (cf 3 and 4), causation should be understood in non-reductionist terms. That is, the scope of relevant causal interactions extends beyond the molecular, pharmacological and physiological levels of interaction. Any thorough causal account should also include higher-level factors, such as the behaviour of tissues, whole organs and individuals, including psychological and social factors.

    ‘Causal evidence’ should be extended to include different types of evidence, including case studies and case reports, which can in some cases provide valuable information for understanding causation and causal mechanisms. This is particularly important when dealing with rare disorders, marginal groups or outliers.

    Patient narratives and phenomenological approaches are useful tools for looking beyond evidence such as symptoms and outcomes, and to elucidate the core causes or sources for chronic and unexplained conditions.



    The implication of these propositions (especially the last paragraph) is that anecdotal testimonies from organisations such as Recovery Norway should constitute evidence for causation of a disease like ME/CFS.

    In any case, the kinds of evidence that sit at the bottom of the “EBM pyramid” are useful, or rather essential, in the early stages of clinical research. They cannot, however, constitute generalisable (let alone causational) evidence, which is why we have RCTs in the first place. Perhaps the only exception is case reports on genetics (e.g. TLR7 gain-of-function genetic variation causes human lupus).

    Greenhalgh et al are right to point out that Covid denialists have screamed that “no RCT proves the effectiveness or safety of masks” and similar things, but getting laypeople to acknowledge that a surgical or FFP2 mask blocks aerosols and is safe to wear has nothing to do with EBM. Better education of the general population so that it does not fall prey so easily to obscurantism is the answer, not EBM+ (which, anyway, won’t extend outside the realm of medicine and thus fails to address the problem in the first place).

    It seems that the leaders of EBM urgently need to take a step back so that they can realise that not everything revolves around it.
     
    Last edited: Jul 23, 2022
  10. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,469
    Location:
    Canada
    The more I look back at the behavior of the ideologues who have been peddling nonsense about us for years, the more obvious it becomes that, in the end, the whole process of EBM amounts to a simple: just trust us, we know better. All of it, because the answers never speak for themselves; they are always heavily interpreted, to the point where 90%+ of what matters is not even considered, such as reducing complex chronic illness to a single number representing a misleading interpretation of "fatigue".

    We point out all the obvious flaws, and they say it doesn't matter because they know better; we just have to trust them, read the paper, they just know better. We point out methodological issues in those papers, with how they rely on subjective measures heavily subject to bias, and they simply say that they know better and we just don't understand it, that we should just trust them that it says what it clearly doesn't say. Even when the entire body of evidence is invalidated, they simply scream and stamp their feet about how they know better anyway.

    Yesterday I saw Robert Howard, a strong BPS ideologue, arguing about the issue with SSRIs and how well-done RCTs and systematic reviews are simply better and show that SSRIs "work", whatever "works" means. Even though it's a systematic review of RCTs that simply looked back and pointed out that the evidence was never there, that the claims never had any basis. Evidence is entirely irrelevant to them. If it says they're wrong, it's bad evidence, poorly done. If it says they're right, and their own always says so, it's good evidence, strong methodology. There is no other step involved in how they make those decisions: they are right, they know better. This is exactly the old pre-science model, where everything was about eminence and rhetoric and "I know better, I am the eminence, just trust me with everything I say".

    A group of patients report symptoms and consult many times with physicians because every time their concerns are dismissed? Clearly they are not ill with anything, ill people don't do that; we just have to trust them, they know better. A small number (1/7 in the most generous interpretation) can be claimed to have a small subjective improvement on a questionnaire, as long as the group is loosely defined enough? Obviously this 100% means that the entire condition is psychological. That's just what the science says, at least according to them; we just have to trust them, they know far better than those who point out otherwise from personal experience.

    The process of EBM is highly subjective, biased and manipulable. It's a multi-step process where every step involves judgment, so much so that you can completely change the conclusions from the same data. In fact we see it all the time, again with the SSRIs: when evidence is eventually disproved, those who staked their career on it simply say it was poorly done, since it gave results they don't like; they expect it to say that their thing works, because they've been saying that it works for years.

    Except we see all over the place that neither this process nor people using such a loose process can be trusted. EBM is filled with false claims, bizarre pseudoscience and made-up nonsense. So it's built entirely on trust, but is obviously not trustworthy.
     
    bobbler, EzzieD, Louie41 and 4 others like this.
  11. LarsSG

    LarsSG Senior Member (Voting Rights)

    Messages:
    370
    David Fisman, one of the authors, recently pointed out that the Canadian government gave a $600,000 grant earlier in the pandemic for a 6-month RCT of surgical masks versus N95s for nurses treating Covid patients (they never reported any results). That seems to me like a pretty good example of where an RCT was absolutely not needed (and was in fact quite unethical).
     
    Milo, bobbler, Sarah94 and 3 others like this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,518
    Location:
    London, UK
    I am not sure I agree. Knowing the effectiveness of masks is of great importance, and trying to assess it in a real-life situation makes sense. The cost looks to be not much more than one cent per head of population. If it was never completed, that is not necessarily a fault of the original objective and design.
     
    FMMM1, Louie41, alktipping and 3 others like this.
  13. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,248
    Do you mean because it should have been assumed that the more robust masks did a better job?
     
    bobbler, Louie41, alktipping and 2 others like this.
  14. LarsSG

    LarsSG Senior Member (Voting Rights)

    Messages:
    370
    Yes, it was very clear that N95s were more protective than surgical masks (for which we had plenty of mechanistic evidence). It seems rather unethical to undertake a 6-month trial to see if more nurses in the surgical masks arm — who were caring for Covid patients — caught Covid when we had very good reasons to believe they would.
     
    Milo, oldtimer, bobbler and 8 others like this.
  15. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    5,397
    Location:
    UK
    I see your point entirely, but surely it's still an essential exercise if no-one actually knows. There are all sorts of environmental and cost reasons why it would be useful to know whether surgical masks really are significantly inferior in protective terms.

    It might even turn out that they are not, either because the design itself is adequate or because people using them behave differently (perhaps taking more care over other aspects of infection control because they feel a bit vulnerable).
     
  16. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,648
    What a pile of crap.
    EDITED - so in the midst of a pandemic we need folks like this --- I don't think so.
    OK, trying to assess interventions is difficult, but if you can't, then look around and find folks who can.
     
    Peter Trewhitt likes this.
  17. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,861
    Location:
    betwixt and between
    Only able to skim the forum ATM so apologies for just popping in:

    Not able to check now but think that's maybe the one discussed on the forum here?

    https://www.s4me.info/threads/medical-scientists-and-philosophers-worldwide-appeal-to-ebm-to-expand-the-notion-of-‘evidence’-bmj-evidence-based-medicine-2020.18966/
     
    Last edited: Aug 29, 2022
    Peter Trewhitt likes this.
