Recruiting: A Study of a Positive Emotion Intervention for the Treatment of Long COVID-19 Symptoms, University of California, Lopez

Who can spot the eleven mistakes in this protocol?
I'll play.
There isn't much detail at the Clinical Trials site.
https://clinicaltrials.gov/ct2/show/NCT05676008


1. waitlist-controlled clinical trial
A wait-list control does nothing to reduce the impact of the placebo effect, and people on a wait-list can experience a nocebo effect (feeling, or reporting feeling, worse because treatment is being withheld), which inflates the apparent benefit of the intervention.

2. vague selection criteria, resulting in heterogeneous sample
People only need to have had a Covid-19 infection more than 3 months ago, be suffering from feeling unwell and have as few as one persistent symptom. The possible symptoms range from coughing to having trouble sleeping. There does not seem to be any requirement for the Covid-19 diagnosis or the persistent symptom(s) to be validated by a doctor. Therefore the sample will be poorly characterised. This leaves room for later subsetting of the participants, so that a finding can be reported, e.g. that women with persistent coughing benefited from the intervention.

3. biased volunteer selection
As the treatment is delivered online and the trial has been advertised, it is likely that the study will mainly attract people who think that participating in mindfulness training is a useful thing to do. It is unlikely that people who think mindfulness training won't help them will volunteer to be part of the study (unless a whole lot of Science for ME members decide to volunteer to participate :sneaky: ). Therefore, the placebo effect of the intervention and the nocebo effect of the wait-list are both increased, so there's a big risk of bias.

4. unblinded (although it claims to be 'single masked') combined with subjective primary outcome
This is the common and major problem of a lack of blinding combined with subjective outcomes. It maximises the placebo effect, i.e. reporting bias resulting from an expectation of a useful effect.

5. large number of primary and secondary outcomes - all subjective,
and 6. no specification of what success is

There are 6 secondary outcomes, assessed at three different times. So, together with the primary outcome, that is 19 possible chances for a positive outcome, before subsetting the sample and subsetting the outcomes, and before different ways of analysing the results.
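To put a rough number on that, here's a back-of-the-envelope Python sketch (my own illustration, not anything from the protocol) of the chance of getting at least one nominally "significant" result purely by luck, assuming, unrealistically, that the 19 comparisons are independent and each is tested at the conventional p < 0.05 with no correction for multiplicity:

```python
# Rough sketch: chance of at least one spuriously "significant" outcome
# when many outcomes are tested with no correction for multiplicity.
# Assumes (unrealistically) that the 19 comparisons are independent and
# that the intervention truly does nothing.

alpha = 0.05          # conventional significance threshold per test
n_comparisons = 19    # 1 primary + 6 secondary outcomes x 3 time points

p_at_least_one_false_positive = 1 - (1 - alpha) ** n_comparisons
print(f"P(>=1 false positive) = {p_at_least_one_false_positive:.2f}")  # ~0.62

# A Bonferroni-corrected per-test threshold, if they were to pre-specify one:
print(f"Bonferroni-adjusted alpha = {alpha / n_comparisons:.4f}")      # ~0.0026
```

Even under those simplifying assumptions, the chance of at least one spurious "win" somewhere among the outcomes is over 60%, which is why pre-specification and multiplicity corrections matter.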

On point 6: in the protocol material that I have seen, there is nothing specific about what counts as a meaningful difference in an outcome.
So, if the average absolute endpoint score in the intervention group is 20 points, for example, while the score in the wait-list group is 17, has the intervention achieved something?
Or will attention be paid to the percentage of individuals in each group whose score is abnormal, e.g. 30% of people in the treatment group are deemed to have depression because they score over a certain number, while 40% of the wait-list group do? If so, what is the diagnostic threshold?
Alternatively, will attention be paid to the change in scores at the individual level, relative to baseline? So, if the average improvement relative to baseline was 4 points in the intervention group while it was 2 points in the wait-list group, is that success? It's not even clear that the surveys will be administered prior to treatment. (A toy worked example of these three analysis choices follows below.)

Will they attempt to correlate wellbeing outcomes with self-reported ongoing practice of the micro-dosed mindfulness?

The lack of specificity about how the data will be analysed creates extremely fertile ground for cherry-picking. In itself, including a substantial number of measures isn't the problem, but there should be some pre-specification of what combination of outcomes will demonstrate success.
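To illustrate how much the choice of analysis matters, here is a toy Python sketch using invented scores (purely hypothetical numbers, nothing to do with the actual trial) showing the three approaches mentioned above: comparing endpoint means, comparing the percentage above a diagnostic threshold, and comparing mean change from baseline.

```python
# Toy illustration with invented numbers (nothing to do with the actual trial):
# the same two hypothetical groups summarised in three different ways.

# Hypothetical well-being scores (higher = better)
intervention_baseline = [16, 15, 17, 14, 18, 16]
intervention_endpoint = [20, 18, 22, 17, 23, 20]
waitlist_baseline     = [15, 14, 16, 15, 16, 14]
waitlist_endpoint     = [17, 16, 18, 17, 18, 16]

def mean(xs):
    return sum(xs) / len(xs)

def pct_at_or_above(xs, threshold):
    return 100 * sum(x >= threshold for x in xs) / len(xs)

def mean_change(baseline, endpoint):
    return mean([after - before for before, after in zip(baseline, endpoint)])

# Analysis 1: compare average endpoint scores (the "20 vs 17" example above)
print("Endpoint means:", mean(intervention_endpoint), "vs", mean(waitlist_endpoint))

# Analysis 2: compare the percentage of participants at or above a diagnostic threshold
THRESHOLD = 18  # hypothetical cut-off for "normal" well-being
print("% at or above threshold:",
      round(pct_at_or_above(intervention_endpoint, THRESHOLD)), "vs",
      round(pct_at_or_above(waitlist_endpoint, THRESHOLD)))

# Analysis 3: compare mean change from baseline (requires a baseline measurement!)
print("Mean change from baseline:",
      mean_change(intervention_baseline, intervention_endpoint), "vs",
      mean_change(waitlist_baseline, waitlist_endpoint))
```

With these made-up numbers all three analyses happen to favour the intervention group, but each gives a different-sounding headline (20 vs 17 points, 83% vs 33% above threshold, 4 vs 2 points of improvement), and without pre-specification the most flattering framing can simply be chosen after the fact.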

7. very short assessment period for the primary outcome
The primary outcome is assessed at the one month mark. This maximises the likelihood that a benefit will be reported that is just the result of the placebo effect, and/or a result that does not last.


8. assumption that the emotions the participants are experiencing require fixing
There seems to be the assumption that the participants are not already applying good coping mechanisms, and that experiencing negative emotions is something that needs to be corrected. People who experience negative emotions might be the ones who apply pressure to make things better, for themselves and others.

9. inadequate description of the intervention
It isn't clear what participants will be taught: is it mindfulness (as in awareness of the reality around one) or is it the cultivation of positive thinking? The nature of the intervention will determine the potential harms that need to be considered.

10. Inadequate consideration of potential harms
This includes the impact of the intervention on the people in the trial (e.g. does the cultivation of positive thinking result in acceptance of difficult symptoms that makes it less likely that the person seeks investigations that might identify a treatable medical condition? does the intervention make people feel they have failed when they still want to stop having disabling symptoms?). It also includes the potential for the poorly designed study to suggest an ineffective practice is useful, and for this intervention to therefore be seen as having delivered a treatment for Long Covid, decreasing pressure for research into useful treatments.

11. Research design doesn't fully address the stated research question
Our research question is whether our newly developed training can assist PASC patients to self-microdose mindfulness (5-15 seconds activities in everyday life) and improve on perceived metrics of well-being (primary outcome)....If effective, an increased frequency of the mindfulness activity will then help buffer negative emotions (e.g., anger, loneliness, etc.) experienced during the pandemic and associated with ongoing stress and/or somatic symptoms.
There's no evidence that they will actually monitor whether the training does result in more patients self-microdosing mindfulness, or in individual patients having a higher frequency of microdosed mindfulness each day. None of the outcomes actually measure that. There's no pre-treatment baseline measurement. If the treatment aims to increase the percentage of people taking a moment to appreciate the sunset, or smell a flower, or enjoy the taste of a cup of tea for example, how will they know that people weren't doing that already? If the treatment aims to increase the number of times that people take such moments in a day, how will they know whether the frequency has increased or not?

Because they have no way of knowing whether the (micro)dosing has changed, they won't be able to say that any identified benefits are due to an increase in the practice of mindfulness.

Gosh, I got there. I didn't think I would get eleven mistakes. I didn't even need to use this one:

12. Poor grammar
Our hypothesis is that self-microdosing mindfulness will evoke positive emotions that can improve well-being on patients suffering of PASC-related symptoms beyond 3 months post COVID-19 infection.
"that can improve well-being on patients"
"patients suffering of PASC-related symptoms"
 
Positive vibes homeopathy.
I don't know, 5-15 seconds in one hit might be over-dosing. This is powerful stuff we are messing with here. Best to start on a few microseconds per hour.

STAT!

Maybe mindfulness cures the immune, neurological, metabolic or whatever dysfunction that causes long Covid. Perhaps it can regrow brain cells, erase tumors, restore sight to the blind and regrow lost limbs. Maybe we all just need to meditate for 15 minutes a day, and we'll become immortal and live forever.
The ultimate proof against mind-over-matter is that we die. Matter (biology) always wins in the long run.

I like a body scan, unless I am in PEM. But I'm a person who has trouble staying connected with my body; I have a tendency to actively & automatically tune out 'you're over-doing it' type warning signs, so I find it helpful for pacing. It does make me laugh, though, how one of the BPS things is that PwME are body watching & paying too much attention to bodily sensations, & then they choose mindfulness to recommend.... it's almost as if they're incompetent.
You are more polite than I. :whistle:
 
I'm trying to get my head around the validity of a waiting list control group, in a trial for a psychological intervention. I appreciate there may be ethical considerations for not using a no-treatment-at-all control group, but that would not magically validate some other form of control group if it is inherently invalid.

So as a thought experiment (so we can set aside ethical issues for a moment), would a no-treatment-nor-expectation-of-treatment group be a better control? It's unblinded of course, so patients know they are not being treated, but that will be true for all types of control group. Given the intervention is psychological, then presumably a control where participants' situations are the same, other than the absence of the intervention, is the best shot at a control? Being on a waiting list is psychologically quite different from that, so can it really be a control?

Patients on a waiting list are in a very different, limbo-like psychological situation: anticipating treatment yet knowing they have to wait. That is very different from the no-treatment-nor-expectation-of-treatment control situation, with potentially all sorts of psychological curve balls thrown into the mix. On the one hand, elation that help might be on the way? On the other hand, maybe despair that it's not happening and may never happen, and not knowing what is going to happen when it does. Conditions for a control group need to be as deterministic and well-matched as possible, yet I can't help wondering if this kind of control group is pretty indeterminate in terms of the psychological states it might foster, especially when the trial is all about the psychological states of participants.
I always assumed when they use "waiting list" it's just a control arm for no intervention, to compare with natural outcomes. Maybe they use the waiting list meme because of some expectation effect but really that's just a control for nothing. In some trials the participants are told they can do it afterward, but it's not as if it changes anything, and anyway it removes the possibility of long-term follow-up when they do this, which I assume is on purpose, although more of a wink-and-nod where people pretend it's OK but really it's that the entire system is fine with fake outcomes as long as no one calls out the emperor's ass-mole.
 
Yes, but my point is that being on a waiting list is not the same as nothing, especially in a psychological trial. The very state of being on a waiting list changes a person's perceptions, especially if they know they will be getting the treatment but are denied it at the time. I read somewhere that that can potentially have the effect of making the supposed "controls" feel even more dispirited, and so artificially inflate the apparent efficacy of the intervention. A control that has significant differences other than the intervention is, by definition, not a control.
 
The researcher is listed as
Javier E Lopez
Title(s): Associate Professor, Internal Medicine
School: School of Medicine

But the contact name for enquiries is Michael Amster, who is a mindfulness therapist into Buddha and stuff. Maybe he had to use Lopez's name for administrative reasons?
 

Michael Amster co-authored this book. :banghead:

The Power of Awe: Overcome Burnout & Anxiety, Ease Chronic Pain, Find Clarity & Purpose—In Less Than 1 Minute Per Day

Amazon product ASIN B09ZB5JGCK
 
There's some validity to that concern for actually effective treatments, when they start showing promising effects and patients on the waiting list feel cheated out of a real chance of improvement, but being on a waiting list for "A Study of a Positive Emotion Intervention for the Treatment of Long COVID-19 Symptoms" is about as null a control as it gets.

In medical trials there is some value or other to this, but it's an ethical consideration more than anything. For psychological trials, especially psychosomatic trials, it's just not the same thing.
 
It's the A.W.E. Method, which stands for:

Attention, Wait, Exhale and Expand (thus the 5-second duration).

What makes me happy is laughing at all this.

My method is R.L.L. (Read about it. Laugh. Laugh my head off.)
 
"a new brief self-care intervention for people suffering from post-acute sequelae SARS-CoV-2 infection (PASC)"

It'll never get taken up on Dragons' Den --- you can't monetise that --- destined for failure ---
 
I have a new therapy.
1. Browse the Guardian online for two minutes to see what a shit world is going on around us.
2. Settle in to S4ME to see that there are still some sane and lovely people in the same world.
I was having similar thoughts in relation to S4ME as a safe and sane social media site while listening to a BBC Newscast podcast today in which 3 senior female BBC journalists shared their experiences of receiving awful stuff on social media as part of their job and how they deal with it.
 