Menon - Mitochondrial Modifying Nutrients in Treating Chronic Fatigue Syndrome: A 16-week Open-Label Pilot Study

Bigger numbers are always nice, but if there is no statistical significance with 87 patients, I don't think 870 patients will fare much better, assuming they all have the same disease. Bigger numbers won't make something that doesn't work, work.

The way stats testing works is that you try to reject the null hypothesis (that the two groups are the same) and in doing so try to show a difference between the groups. Sample size makes a difference here. This is why, when designing a trial, a power calculation is normally done to work out the chance, given an effect, that the null hypothesis will be rejected. It uses the sample size, predicted effect size and the significance level (including multiple-testing corrections). This gives a minimum estimated sample size for the trial to reach significance given the predicted effect size. It's normally required for ethical approval.
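To make that concrete, here's a rough sketch of the arithmetic behind such a power calculation, using the standard normal-approximation formula for a two-sample comparison (the function name and the chosen effect sizes are just illustrative, not from the paper):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-sided two-sample test.

    effect_size is Cohen's d (standardized mean difference);
    uses the normal approximation n = 2 * ((z_a + z_b) / d)^2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # "medium" effect -> ~63 per arm
print(n_per_group(0.2))  # "small" effect  -> ~393 per arm
```

The point of the example is how quickly the required n blows up as the predicted effect shrinks: halving the effect size roughly quadruples the sample you need.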

So having more patients could lead to a significant result. However, they will have done a power calculation, so this would suggest that the effect is smaller than predicted (if there is one at all).

The intuition is something like: if you draw more samples from a population and see an effect, you get more certainty than if you draw fewer. But to me significance testing is not intuitive.
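That intuition can be checked with a quick simulation (hypothetical numbers, not data from the trial): draw two groups from populations that genuinely differ by a small amount, and watch the standard error of the observed difference shrink as n grows.

```python
import random
from statistics import mean, stdev

random.seed(0)  # fixed seed so the run is reproducible

def diff_and_se(n, true_diff=0.3):
    """Draw n samples per group from normals differing by true_diff;
    return the observed mean difference and its standard error."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(true_diff, 1) for _ in range(n)]
    diff = mean(b) - mean(a)
    se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
    return diff, se

for n in (20, 87, 870):
    d, se = diff_and_se(n)
    print(f"n={n:4d}  diff={d:+.2f}  SE={se:.2f}")
```

The same true effect sits in the data each time; it is only the noise around the estimate (the SE, which scales like 1/sqrt(n)) that shrinks, which is why larger samples can turn a real-but-small effect significant.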
 
Technically I agree, and I do know a thing or two about statistics (I did in a previous life, anyway), and suffice it to say 87 patients won't give you a narrow 95% confidence interval :laugh:
What I was getting at is that this seems to be a ploy to make a quick buck by stacking numbers to make it so, and a bad idea won't become good if multiplied, even if bigger numbers can be bent to make it look good.
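For a sense of scale (a back-of-the-envelope sketch, not a reanalysis of the paper): with 87 patients per arm and outcomes measured in standard-deviation units, the 95% confidence interval around the group difference spans roughly ±0.3 SD, which is wide relative to a small effect.

```python
from statistics import NormalDist

def ci_half_width(n_per_arm, alpha=0.05):
    # Approximate 95% CI half-width for a standardized mean
    # difference, assuming equal arms and SD = 1 in each group.
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z * (2 / n_per_arm) ** 0.5

print(round(ci_half_width(87), 2))   # ~0.30 SD either side
print(round(ci_half_width(870), 2))  # ~0.09 SD either side
```

So tenfold more patients narrows the interval by about a factor of three (sqrt(10)), which helps detect a small effect but cannot conjure one that isn't there.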
 

I agree, and if you need high numbers in a trial, chances are the effect is small.
 
Do you happen to recall if data was collected at 20 weeks? The protocol listed it as a primary outcome, but the paper only includes data up to 16 weeks. It would have at least included Chalder Fatigue Questionnaire scores at that point.

Data collection in my case was stopped at 4 months instead of 5 because a batch of ingredients being slightly different would have caused legal trouble for the trial. The CFQ scores were calculated at 4 months. Others had completed the 5 months, but with a corrupted formulation for the 5th month. Hope this helps.
 