Large-scale investigation confirms TRPM3 ion channel dysfunction in ME/CFS, 2025, Marshall-Gradisnik et al

As with another study from this group that was posted here, it looks like the p-values are artificially low due to pseudoreplication. I'll just quote the last time I said it, since it's the same issue, just with the sample size changed:
Do you think it is intentional? Or is it just difficult or unusual to avoid, and thus likely to be an accident?
It all sounded quite interesting!
 
Here's another resource explaining pseudoreplication, and they say it's likely usually researchers just being unaware it's an issue:

Pseudoreplication is unfortunately quite a big problem in biological and clinical research, probably because many people aren’t really aware of the issue or how to recognise whether they’re accidentally doing it in their analysis. Several review articles have investigated the incidence of pseudoreplication in published papers, and have estimated that as many as 50% of papers in various fields may suffer from this problem, including neuroscience, animal experiments and cell culture and primate research. In fields like ecology and conservation, the estimated figure is sometimes even higher.
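To make the mechanics concrete, here's a minimal simulation sketch (entirely made-up numbers, nothing to do with this paper's data): two groups of 8 people with no true group difference, around 30 cells measured per person. Pooling cells as if they were independent observations pushes the false positive rate far above the nominal 5%, while testing one summary value per person behaves as expected:

```python
# Minimal sketch with hypothetical numbers: cells from the same person are
# correlated, so treating each cell as an independent observation inflates
# significance even when there is no true group difference.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_people, n_cells, n_sims, alpha = 8, 30, 1000, 0.05

fp_cells = fp_people = 0
for _ in range(n_sims):
    # Two groups with NO true group difference, but real person-to-person
    # variation: each person has their own mean, and their cells vary around it.
    a = [rng.normal(rng.normal(0, 1), 0.5, n_cells) for _ in range(n_people)]
    b = [rng.normal(rng.normal(0, 1), 0.5, n_cells) for _ in range(n_people)]

    # Pseudoreplicated: pool all cells as if independent (n = 240 per group).
    if mannwhitneyu(np.concatenate(a), np.concatenate(b)).pvalue < alpha:
        fp_cells += 1
    # Correct unit of analysis: one summary value per person (n = 8 per group).
    if mannwhitneyu([x.mean() for x in a], [x.mean() for x in b]).pvalue < alpha:
        fp_people += 1

print(f"false positive rate, cells as n:  {fp_cells / n_sims:.2f}")   # well above 0.05
print(f"false positive rate, people as n: {fp_people / n_sims:.2f}")  # around 0.05
```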
 
Is it possible some researchers haven't done a great deal more than that?
Yup, my program only requires one very basic stats course (I'm taking more advanced courses because I'm specializing in bioinformatics; everyone else is doing biology). I don't know if the idea of pseudoreplication would even be brought up in that basic course.

You'd hope that scientists absorb more stats knowledge from others during their training and career, but if someone is surrounded by people who do the same thing, it might just fall low on the list of things they are motivated to question and verify for themselves.
 
Considering how central statistics are to all medical research, this is really odd. It also explains a lot. When we look at evidence-based medicine and how it abuses statistics, it might explain everything: researchers use the tools because they're told to use them, but don't really understand why they should use them.

I guess this is how an entire discipline that aims to influence the lives of billions ends up standardizing such heavy use of mathematical tools that are meant for quantitative data but get abused on qualitative data, where most of the benefits of those tools don't apply and the results can be wildly distorted.
 
If a project is well funded and comprehensive enough, there will usually be a biostatistician collaborator who does the analysis and (hopefully) knows to account for these things. But often, when you have a smaller investigation like this study, it's just going to be a grad student or postdoc generating the data and learning how to run a few tests with a specific stats program. It would be up to their PI to check their analysis, but if the PI is a biologist who didn't spend much time on statistics, the student will end up with the impression that what they did is good enough. And there are often no requirements for any of the reviewers to have a strong stats background: if it's a small experiment in a mid-tier journal that doesn't involve much sophisticated analysis, the reviewers will probably be chosen based on their familiarity with the biology.
 
The incentive structures for sticking to the facts in a measured way seem far weaker, albeit more worthy.

I suppose the counter argument is that, if a group really thinks they've got something but still need to pin it down, they might have to produce something shiny every now and again to get the support they need to keep chasing it.

Which is kind of okay, as long as they're bright enough to know when it's time to pull the plug.
 
- Their cohorts seem well matched, though their ME cohort has significantly lower white cell counts (p = 0.005) and neutrophils (p = 0.01) than the controls. I haven't seen that described in ME before? I think they saw some differences in leukocyte numbers in Beentjes' UK Biobank study, but not with the effect size this paper likely implies.

- They might be comparing cells rather than individuals when testing the ME and control groups, making the n appear way higher than it really is. That's not good! (I learnt from @forestglip on this thread that this is called pseudoreplication; I didn't realise there was a name for that!)

- They show bar plots rather than the individual datapoints in a strip scatter plot!! I think we need to be able to see the scatter plots with the datapoints averaged for each individual to really make a judgment of the results (a rough sketch of that plot style is below).
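For the last point, something like this rough sketch (entirely made-up data, just to illustrate the plot style): average the cell measurements within each participant first, so each plotted point is one person, with the group mean drawn as a bar:

```python
# Rough sketch of the suggested plot style, with made-up data: one point per
# participant (cell measurements averaged within each person) plus a group mean bar.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = {
    "HC":     [rng.normal(0.0, 0.5, 30).mean() for _ in range(8)],
    "ME/CFS": [rng.normal(-0.4, 0.5, 30).mean() for _ in range(8)],
}

fig, ax = plt.subplots()
for i, (label, vals) in enumerate(groups.items()):
    x = rng.normal(i, 0.04, len(vals))          # small horizontal jitter
    ax.scatter(x, vals, alpha=0.8, label=label)
    ax.hlines(np.mean(vals), i - 0.2, i + 0.2)  # group mean as a bar
ax.set_xticks([0, 1], labels=list(groups.keys()))
ax.set_ylabel("current amplitude (made-up units)")
plt.show()
```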
 
It seems to be the same thing in many papers from this group. Disclaimer that I haven't read all these in detail so I might have missed something, but it looks to me like they all might be using tests which assume independence but are comparing multiple cells per person.

For example, from the last paper listed, Cabanas 2019, it says they compared cells from ME/CFS and HC with Mann-Whitney. I don't see anywhere that it says they did something like average amplitudes from cells for each person.
Statistical comparison was performed using the independent non-parametric Mann-Whitney U test (Table 1, Figures 1, 2, 4, 6), and Fisher's exact test (Figures 3, 5, 7), to determine any significant differences.
outward ionic current amplitudes were significantly decreased after successive PregS stimulations in NK cells from ME/CFS patients in comparison to HC (Figures 2E–G)
[Fig 2G] Bar graphs representing TRPM3 current amplitude at +100 mV after successive applications of 100 μM PregS in ME/CFS patients (N = 8; n = 27 and n = 25) compared with HC (N = 8; n = 31 and n = 29).

And while I'm not sure if the raw data is available to confirm that the Mann-Whitney p-value is based on cells, not people, it's at least possible to check with the Fisher's exact test results:
Eight ME/CFS patients and 8 age- and sex-matched healthy controls (HC) were recruited
In contrast, ionic currents evoked by both successive applications of PregS were mostly resistant to ononetin in isolated NK cells from ME/CFS patients (Figures 3E,F,H,J) in comparison with HC (Figures 3K,L) (p = 0.0030 and p = 0.0035).
[Fig 3K] Table summarizing data for sensitive and insensitive cells to the first application of 10 μM ononetin in presence of PregS in HC (N = 8; n = 31) compared to ME/CFS patients (N = 8; n = 27).
I confirmed that p = 0.003 is what comes out when doing Fisher's exact test on the numbers in the table. So it's based on 58 observations, even though there are only 16 participants.
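For reference, this is the kind of check I mean; a sketch only, since the actual sensitive/insensitive split sits in the Fig 3K table rather than in the text quoted above, so the counts here are placeholders chosen just to match the quoted totals (31 HC cells, 27 ME/CFS cells):

```python
# Sketch of the Fisher's exact check described above. The sensitive/insensitive
# split below is a PLACEHOLDER matching only the quoted totals (31 and 27 cells);
# the real counts are in the paper's Fig 3K table.
from scipy.stats import fisher_exact

#                sensitive  insensitive (to ononetin)
table = [[27,  4],   # HC cells,      31 total (placeholder split)
         [12, 15]]   # ME/CFS cells,  27 total (placeholder split)

odds_ratio, p = fisher_exact(table)
# Whatever the split, the test is computed over 31 + 27 = 58 cells,
# not the 16 participants they came from.
print(p)
```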
 