So what were the results?
Of 150 DNRS participants, 102 agreed to fill out questionnaires for the research, meaning almost a third declined, for unknown reasons. The respondents were predominantly white women, and the average age was 51. They reported an average of five diagnoses each, with fibromyalgia, chronic fatigue syndrome, multiple chemical sensitivities, depression, and anxiety among the most common. As Guenter himself noted, these diagnoses were not confirmed; whether they were rendered by a competent clinician or whether patients diagnosed themselves is unknown.
At three months, six months, and 12 months, respectively, the number of participants responding to the questionnaires dropped to 80, 70, and 64. This represents a fairly high rate of what epidemiologists call “loss to follow-up.” A high rate of loss to follow-up is not a positive endorsement of an intervention, since people who perceived it to be helpful are presumably more likely to keep responding than those who did not. When dropouts are properly accounted for in statistical analyses, the apparent benefits attributable to an intervention tend to shrink.
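To see why, here is a minimal sketch in Python. The response counts are the study’s; the improvement figure is entirely hypothetical, purely to show how a complete-case analysis (responders only) looks rosier than a conservative analysis that counts dropouts as non-improvers.

```python
# Sketch with one HYPOTHETICAL number grafted onto the study's actual counts.
n_enrolled = 102       # participants who completed the baseline questionnaire
n_responding_3mo = 80  # still responding at three months (study's figure)
n_improved_3mo = 60    # HYPOTHETICAL: suppose 60 responders reported improvement

complete_case_rate = n_improved_3mo / n_responding_3mo  # ignores dropouts
conservative_rate = n_improved_3mo / n_enrolled          # dropouts = non-improvers

print(f"Improvement among responders only: {complete_case_rate:.0%}")   # 75%
print(f"Improvement counting dropouts as non-improvers: {conservative_rate:.0%}")  # 59%
```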
The study’s main outcome measure was the SF-36, a quality-of-life questionnaire with eight sub-scales. One is the physical function sub-scale often used in ME/CFS research; others focus on mental health, social function, bodily pain, general health, and so on. Since these outcomes are subjective and self-reported, they are prone to significant bias and placebo effects, especially given the intervention’s promises of relief from suffering. With no objective measures in the study, the reported findings are difficult to interpret and cannot be called robust.
The mean scores for all eight sub-scales follow a similar path: a major improvement from baseline to three months, with minimal further change at six and 12 months. The pattern of eight lines trending upward in tandem looks impressive on a graph, but there is less here than meets the eye. The analysis seems to have involved averaging the scores from whichever participants submitted them at any given assessment point. If that’s the case, the apparent improvements could be largely or fully an artifact of the dropouts.
Let’s say the participants who were most impaired at baseline were more likely to drop out at subsequent points, a reasonable assumption. And let’s say everyone else stayed the same from baseline through 12 months: no worse, but no better. Under those conditions, the mean scores calculated from participants who continued to submit data would rise from baseline even though no individual scored any better. A toy example, sketched below, makes the point concrete.
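The scores here are invented, not drawn from the study; the point is only the arithmetic. If the lowest scorers vanish and everyone who remains is unchanged, the mean of the remaining pool still rises.

```python
# Toy illustration (invented SF-36-style scores): the mean of a shrinking
# pool rises when the most-impaired drop out, with zero individual change.
baseline = [20, 25, 30, 35, 40, 55, 60, 65, 70, 75]
print(f"Baseline mean (n={len(baseline)}): {sum(baseline)/len(baseline):.1f}")  # 47.5

# Suppose the four lowest scorers are lost to follow-up; the rest are unchanged.
followup = [s for s in baseline if s >= 40]
print(f"Follow-up mean (n={len(followup)}): {sum(followup)/len(followup):.1f}")  # 60.8
```

The group average jumps from 47.5 to 60.8, yet not one participant reported feeling any better.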
And what if many or most of the 22 participants who were lost to follow-up at three months found DNRS not just useless but actually harmful? What if many or most got worse, as some ME patients have reported after going through the Lightning Process?
Demonstrating an improvement in the mean scores of a shrinking pool of participants tells us little if we know nothing about the many who dropped out. In any event, an improvement in mean scores can be driven by a few outliers and reveals nothing about how many individuals improved, or by how much. Perhaps the investigators have individual-level data that would indicate actual improvement in a significant number of individuals. If so, they should share these data as well.
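Again with invented numbers, a quick sketch of the outlier problem: a single large gain can produce an apparent mean improvement even when almost no one improves.

```python
# Invented change scores from baseline to follow-up: 9 participants unchanged,
# 1 with a large gain. The mean looks like group-wide improvement; it isn't.
change_scores = [0, 0, 0, 0, 0, 0, 0, 0, 0, 40]

mean_change = sum(change_scores) / len(change_scores)
improved = sum(1 for c in change_scores if c > 0)

print(f"Mean change: {mean_change:.1f} points")                           # 4.0
print(f"Individuals who improved: {improved} of {len(change_scores)}")    # 1 of 10
```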
Guenter acknowledged that a trial with a control group would provide more robust information, as would the development of biomarkers to measure the hypothesized changes. All true.