Homeopathy can offer empirical insights on treatment effects in a null field, 2023, Sigurdson et al

CRG

Journal of Clinical Epidemiology

Homeopathy can offer empirical insights on treatment effects in a null field

Matthew K Sigurdson, Kristin L Sainani, John P A Ioannidis

Abstract

Objectives: A "null field" is a scientific field where there is nothing to discover and where observed associations are thus expected to simply reflect the magnitude of bias. We aimed to characterize a null field using a known example, homeopathy (a pseudoscientific medical approach based on using highly diluted substances), as a prototype.

Study design: We identified 50 randomized placebo-controlled trials of homeopathy interventions from highly-cited meta-analyses. The primary outcome variable was the observed effect size in the studies. Variables related to study quality or impact were also extracted.

Results: The mean effect size for homeopathy was 0.36 standard deviations (Hedges' g; 95% CI: 0.21, 0.51) better than placebo, which corresponds to an odds ratio of 1.94 (95% CI: 1.69, 2.23) in favor of homeopathy. 80% of studies had positive effect sizes (favoring homeopathy). Effect size was significantly correlated with citation counts from journals in the Directory of Open Access Journals and CiteWatch. We identified common statistical errors in 25 studies.

Conclusion: A null field like homeopathy can exhibit large effect sizes, high rates of favorable results, and high citation impact in the published scientific literature. Null fields may represent a useful negative control for the scientific process.

https://pubmed.ncbi.nlm.nih.gov/36736709/ (not open access)
 
Perhaps this should prompt some reflection on trials of interventions in other fields that are riddled with biases but achieve similar effect sizes of around 0.3 to 0.4 standard deviations in favour of the intervention.

From the 2008 Cochrane review of CBT for CFS:

Main results

Fifteen studies (1043 CFS participants) were included in the review. When comparing CBT with usual care (six studies, 373 participants), the difference in fatigue mean scores at post‐treatment was highly significant in favour of CBT (SMD ‐0.39, 95% CI ‐0.60 to ‐0.19), with 40% of CBT participants (four studies, 371 participants) showing clinical response in contrast with 26% in usual care (OR 0.47, 95% CI 0.29 to 0.76). Findings at follow‐up were inconsistent. For CBT versus other psychological therapies, comprising relaxation, counselling and education/support (four studies, 313 participants), the difference in fatigue mean scores at post‐treatment favoured CBT (SMD ‐0.43, 95% CI ‐0.65 to ‐0.20). Findings at follow‐up were heterogeneous and inconsistent. Only two studies compared CBT against other interventions and one study compared CBT in combination with other interventions against usual care.
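As a rough way to put those numbers on the same scale as the homeopathy paper, a standardised mean difference can be converted to an approximate odds ratio using Chinn's rule of thumb, ln(OR) ≈ SMD × π/√3. A minimal sketch in Python (the helper name is mine; the inputs are the g = 0.36 from the homeopathy abstract and the 0.39 SMD magnitude from the Cochrane comparison above):

```python
import math

def smd_to_or(smd: float) -> float:
    """Approximate odds ratio from a standardised mean difference
    (Chinn 2000: ln(OR) is roughly SMD * pi / sqrt(3))."""
    return math.exp(smd * math.pi / math.sqrt(3))

# Pass the magnitude of the effect in favour of the intervention.
print(round(smd_to_or(0.36), 2))  # ~1.92; the homeopathy paper reports 1.94
print(round(smd_to_or(0.39), 2))  # ~2.03 for CBT vs usual care
```

On that crude conversion, the CBT-versus-usual-care effect comes out about the same size as the homeopathy-versus-placebo effect.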
 
Oh, the irony of saying that about homeopathy when it applies far more to everything biopsychosocial, which is basically intellectual homeopathy: ideas diluted until they no longer have any meaning. Applying the same evaluation to everything BPS would find far greater biases; the field is built entirely out of them. And unlike homeopathy, it has the added weight of being officially recognized, which makes it coercive, yet without any actual oversight or accountability: the very worst features of both.
 
Thanks @CRG and @cassava7. Together, those findings suggest CBT isn't worth much at all.

SMD = standardised mean difference



And it's worth keeping in mind that the homeopathy trials were presumably nominally blinded, whereas the CBT trials were not. The CBT trials also frequently included other biases favouring the intervention, like extended contact with a therapist who is a proponent of the treatment, and the treatment itself urging participants to downplay their symptoms.

I think it indicates that many of the current approaches to research of medical treatments don't work very well. Where researchers have a big incentive to find a positive result, they will probably find a way, even if they aren't conscious of it. There needs to be replication by researchers who don't have 'skin in the game', who have the required level of equipoise.

Until then, I shall assume that findings of relatively small levels of improvement as a result of a treatment probably aren't real.
 
Just further on the standardised mean difference, Cochrane says:
Different variations on the SMD are available depending on exactly what choice of SD is chosen for the denominator. The particular definition of SMD used in Cochrane Reviews is the effect size known in social science as Hedges’ (adjusted) g. This uses a pooled SD in the denominator, which is an estimate of the SD based on outcome data from both intervention groups, assuming that the SDs in the two groups are similar. In contrast, Glass’ delta (Δ) uses only the SD from the comparator group, on the basis that if the experimental intervention affects between-person variation, then such an impact of the intervention should not influence the effect estimate.

So, that would be something to check for when looking at SMDs: what standard deviation was used for the denominator in the calculation? Was it the standard deviation of the treatment and control arms combined, or just the standard deviation of one arm (the comparator arm, in the case of Glass's Δ)? It might even be a standard deviation taken from another trial that is considered more applicable. The choice of standard deviation could swing a standardised mean difference in the direction that works best for the researcher.
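To make that concrete, here's a minimal sketch with invented summary statistics (not from any real trial), showing how the pooled-SD version of the SMD and Glass's Δ can give noticeably different answers when the two arms have different spreads:

```python
import math

# Invented summary statistics, purely to illustrate the denominator issue:
# a treatment arm with a wider spread than the control arm.
mean_t, sd_t, n_t = 22.0, 12.0, 50   # treatment arm (lower score = less fatigue)
mean_c, sd_c, n_c = 26.0,  8.0, 50   # control arm

diff = mean_t - mean_c

# Pooled SD, as used for Cohen's d / Hedges' g (before the small-sample correction)
sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
smd_pooled = diff / sd_pooled

# Glass's delta: comparator-arm SD only
glass_delta = diff / sd_c

print(f"pooled-SD SMD: {smd_pooled:.2f}")   # about -0.39
print(f"Glass's delta: {glass_delta:.2f}")  # -0.50
```

Same raw four-point difference, but it reads as roughly -0.39 or -0.50 depending on which standard deviation goes in the denominator.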
 
Blinding failure.
:D a failure of blinding, and a blinding failure

(For those not familiar with all the meanings and nuances of the word 'blinding': as well as the scientific meaning, where participants (and sometimes some of the researchers) aren't aware of which treatment a participant is receiving,
blinding can mean an obstruction of vision, or a depriving of understanding, reason and sense:
"he was blinded by his faith"
"they try to blind you with science"

It can also mean 'very intense', or 'remarkably skilful')


@Medfeb, @Hilda Bastian, in the increasingly forlorn hope that something useful will come out of the Cochrane review of exercise therapy for ME/CFS, I think this homeopathy example is well worth considering. Can the reviewers really be sure that any apparent small benefit reported in exercise therapy trials for ME/CFS (or indeed chronic fatigue) is not just a product of the bias created by subjective outcomes in unblinded trials, and the bias created by the enthusiasm of the researchers for the treatment?
 
I'm surprised homeopathy is better than a placebo according to many studies. I thought it was no better.
There's always bias in trials. Always. Hell, biases, plural. They're not rigorous, precisely to keep the churn of fake positives going. There's just no courage to end the big lie.

And frankly, homeopathy studies show this with high certainty. You know there can't be any effect from a sip of water, and yet effects show up, because they are all simple artifacts of poor methodology.

Homeopathy is the ultimate placebo, the null comparator. You can easily do a double-blind experiment, and you still get false positives. That's the impact of biases that exist throughout medicine. It has nothing to do with some effect on the patient; the "placebo" is entirely the sum of small errors made by poor methodology, and it's just that they're always in favor of something working.
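For what it's worth, that claim is easy to illustrate with a toy simulation: give an inert treatment a true effect of exactly zero, add a small systematic bias per trial (an arbitrary 0.2 SD here, standing in for unblinding, selective reporting and so on), and see what a batch of such trials reports. This is only an illustrative sketch, not a model of the actual homeopathy literature:

```python
import math
import random
import statistics

random.seed(1)

def simulate_null_trial(n_per_arm=40, bias=0.2):
    """One placebo-controlled trial of an inert treatment: the true effect is
    zero, but a small systematic bias nudges treatment-arm outcomes upward."""
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    treatment = [random.gauss(bias, 1) for _ in range(n_per_arm)]
    sd_pooled = math.sqrt((statistics.variance(control) + statistics.variance(treatment)) / 2)
    return (statistics.mean(treatment) - statistics.mean(control)) / sd_pooled

# Unweighted summary over 50 simulated trials
# (a real meta-analysis would weight by precision).
effects = [simulate_null_trial() for _ in range(50)]
favouring = sum(e > 0 for e in effects)
print(f"mean observed 'effect size': {statistics.mean(effects):.2f}")
print(f"trials favouring the inert treatment: {favouring}/50")
```

With those made-up assumptions, the average observed 'effect size' sits near the size of the injected bias and most trials come out favouring the inert treatment, even though the true effect is zero.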
 