Documents from the SMILE trial

Especially as the full trial it morphed into was the very trial being assessed for feasibility in the first place. Not just outcome switching when many of the outcomes were already clearly signposted, but, as you say, switching the fundamental objectives of the trial as well. It started out as a feasibility study, to assess the viability and worth of subsequently doing a fully independent trial. Instead it was spliced/grafted into further work that rendered it nonsensical overall.

LOL - I reckon that paragraph would seem nonsensical to people not already familiar with all this. It's like with PACE - things are so bad that there are parts which sound too absurd to be true.
 
You could imagine laying on a pantomime for the scientific community, and there would be so much material available!

I started writing a sit-com based on PACE misdeeds with some other people I know. Turns out that writing a sit-com is really hard. Also, lines that seem hilariously stupid in an academic paper are not necessarily going to make killer gags in a sit-com.
 
I think it's more suited to a documentary. Anyone know someone or have contacts to get one aired on national TV?
 

One of the things I was wondering about was the power calculations, and whether they were redone between the feasibility and full trial, as this would show that they looked at the data. The paragraph in the trial protocol seemed vague:
Smile full protocol said:
To be able to detect a difference between the two treatment arms of 8 to 10 points on the SF-36 PCS at six months with 90% power and 1% two-sided alpha, we have calculated (using STATA) that a total of 32 to 50 participants in each arm for analysis are required. Allowing for 10 to 20% non collection of primary outcome data at six months, we aim to recruit 80 to 112 participants to the study.

I believe the power calculation requires some effect size estimate; what they don't say is what they used here (but I've never been sure about power calculations). But in the introduction to the feasibility study they say:
Feasibility study paper said:
In this study, we report on the feasibility and acceptability of recruiting families into a trial involving an alternative intervention (the Lightning Process) for CFS/ME to inform the design of a full-scale, adequately powered RCT comparing LP with LP plus specialist medical care.

which would suggest that they used the initial data to do the power calculation, and perhaps in doing so looked at other data when they switched outcomes?
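
For anyone who wants to see what that implies in practice, here is a rough sketch (mine, not the trial team's) of backing the effect size out of the protocol's numbers. I'm assuming a standard two-sample t-test and using Python's statsmodels purely as a stand-in for whatever Stata routine they actually ran, so treat the exact figures as illustrative only.

# Sketch (my own, not the trial team's): back out the standardised effect
# size implied by the protocol's figures of 32 to 50 per arm, 90% power and
# 1% two-sided alpha, assuming a plain two-sample t-test. statsmodels is
# only a stand-in for the Stata calculation the protocol mentions.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

for n_per_arm in (32, 50):
    d = power_calc.solve_power(effect_size=None, nobs1=n_per_arm,
                               alpha=0.01, power=0.90, ratio=1.0,
                               alternative='two-sided')
    print(f"n = {n_per_arm} per arm implies a standardised effect size of about {d:.2f}")

# With an SD of 10 (the figure quoted later in the SMILE paper), d of about
# 1.0 and 0.8 correspond to raw differences of roughly 10 and 8 points on
# the SF-36, i.e. the '8 to 10 points' stated in the protocol.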
 
We've tried many times to involve Goldacre but for some reason the former student of Simon Wessely refuses to look at it. He's too busy, apparently. Though not too busy to tweet about harassment of researchers by ME militants.

Some otherwise courageous people have the blind spot of not being willing to damage the careers of their friends. Ben Goldacre is one of those people.

As for the rest, it makes no sense to 'morph' a feasibility study into a 'full' study. The study remains a feasibility study even if it is expanded. Anything else is bad science.
 
Out of my depth here until I learn a bit more, but I suppose it depends on what level of analysis would have been needed on the early data in order to predict the power of a full trial. And would that have been influenced by how independent the final trial was? Like I say, I need to learn more.

Edit: But having said that, if they looked at it at all then they would know it well enough. And in an unblinded trial it would have been impossible to not know how the results were trending, even without looking at the data itself.
 
I feel like @Adrian is onto something with that point about power calculations and the change from a feasibility trial to a full trial, but I don't have a good enough understanding of these things, or what is expected, to say for sure. It's too late to find out now, but 'bump' for tomorrow.
 
I've kept meaning to come back to this. Some quick notes, but I'm not sure I really found anything of interest, and I do not really know how these things are meant to be dealt with.


From the feasibility protocol:

"The specific objectives aim to inform the design of a full-scale, adequately powered randomised trial. The specific objectives are:
...

7. To use the information above to provide estimates of sample size required for a full-scale RCT."

http://www.bristol.ac.uk/media-library/sites/ccah/migrated/documents/smprotv6final.pdf

"2.
Information collected on suitability of the outcome measures used will enable us to apply for funding for further outcome development."

Feasibility paper:

"In this study, we report on the feasibility and acceptability of recruiting families into a trial involving an alternative intervention (the Lightning Process) for CFS/ME to inform the design of a full-scale, adequately powered RCT comparing LP with LP plus specialist medical care."

I don't see where they said anything about how to make SMILE adequately powered, though. All they seem to have had is this:

"Data analysis
We recorded the number of potentially eligible participants attending the clinic, the number assessed for eligibility, and the number of eligible patients who consented and were randomized. We compared characteristics of eligible patients who were and were not randomized using appropriate descriptive statistics (that is, mean and standard deviation or median and interquartile range (given as first and third quartile, Q1, Q3) for continuous, and number and percent for categorical variables respectively). As the aim of this study was to assess the feasibility of a future definitive trial, we did not undertake a formal sample size calculation."

https://trialsjournal.biomedcentral.com/articles/10.1186/1745-6215-14-415

SMILE protocol:

"Statistical considerations
Sample size
We used a definition of a clinically important difference
for the SF-36 physical function subscale from three ex-
pert consensus panels for chronic diseases in adults. The
panels conducted a literature search and used the Delphi
technique to reach consensus on the thresholds for change
over time for small, moderate and large clinically important
SF-36 change scores [24]. Consensus was agreed by each
panel that a small clinically important difference would
be 10 as this is the equivalent to two state changes (a
state change is one improvement in one item - the mini-
mum difference between inventories). A moderate improve-
ment was defined as 20 and a large improvement as 30.
To be able to detect a difference between the two
treatment arms of 8 to 10 points on the SF-36 PCS at
six months with 90% power and 1% two-sided alpha, we
have calculated (using STATA) that a total of 32 to 50
participants in each arm for analysis are required. Allow-
ing for 10 to 20% non collection of primary outcome
data at six months, we aim to recruit 80 to 112 partici-
pants to the study."

Then in the SMILE paper they say:

"Sample size
We used a consensus definition for a small clinically important difference of 10 points on the SF-36-PFS.32 Thirty two to 50 participants in each arm are required to detect a between-group difference of 8 to 10 points on the SF-36-PFS (SD 10) at 6 months with 90% power and 1% two-sided significance. Allowing for 10% to 20% non-collection of primary outcome data, we aimed to recruit 80 to 112 participants."

http://adc.bmj.com/content/early/2017/09/20/archdischild-2017-313375

So it could be that the feasibility study helped them design an adequately powered trial simply by giving them a rough idea of drop-out rates, and that the feasibility study showing a null result for their original primary outcome, while showing a 'clinically important difference' for the outcome they swapped to, was just a coincidence.
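
On that note, a quick sanity check: if you take the published assumptions at face value (an 8 to 10 point difference, the SD of 10 quoted in the SMILE paper, 90% power, 1% two-sided alpha), a standard two-sample calculation does land on roughly 32 to 50 per arm, so the sample size could in principle have been set from the consensus thresholds alone rather than from the feasibility data. This is just my sketch, using Python's statsmodels in place of the Stata calculation the protocol mentions:

# Sketch (mine): forward sample-size calculation from the stated assumptions,
# i.e. a difference of 8 to 10 points, SD 10, 90% power, 1% two-sided alpha,
# assuming a standard two-sample t-test. statsmodels stands in for the Stata
# routine the protocol says was used.
from math import ceil
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
sd = 10.0  # SD quoted for the SF-36 PFS in the SMILE paper

for diff in (10.0, 8.0):
    n_per_arm = power_calc.solve_power(effect_size=diff / sd, alpha=0.01,
                                       power=0.90, ratio=1.0,
                                       alternative='two-sided')
    print(f"{diff:.0f}-point difference: about {ceil(n_per_arm)} per arm")

# Expect roughly 32 and 49-50 per arm, i.e. the '32 to 50' in the protocol.
# Inflating that by 10 to 20% for missing primary outcome data then gets you
# into the 80 to 112 recruitment range they quote.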
 
OT: Just saw SMILE was already cited in this book:

Using Solution Focused Practice with Adults in Health and Social Care
By Judith Milner, Steve Myers

"There is evidence that moving from the search for a magic cure to more positive thought leads to a better outcome for young people with chronic fatigue syndrome (Crawley et al 2017). The process by which people come to acceptance and rediscover hope can be speeded up by..."

https://books.google.co.uk/books?hl...Y1cxfbgYO2sqg31FCZ6vQ#v=onepage&q=CFS&f=false

Nice one Milner and Myers... moving people away from the search for a 'magic cure' and onto a rigorous evidence based intervention like the Lightning Process. Hilarious.
 
As soon as you see a 'paper' talking about their own interventions being better than a 'magic cure' ... that is the biggest tell of all that a hefty dose of bullsh*t is on the way :).
 
I have today received the decision notice rejecting my appeal to the ICO for the release of the data.

I shall be appealing to the FTT.

For obvious reasons, I don't want to say too much but two things stand out to me:

First, the decision says information for school attendance, fatigue and anxiety could lead to identification. But it does not say why the other results should not be released.

Second, it has been decided that it is reasonably likely the motivated intruder could use the information in those three tests along with school attendance records to identify the participants. The ICO appears therefore to be saying that it is reasonably likely a motivated intruder could access protected information. School attendance records are confidential and protected by legislation and... the ICO.

Utterly bizarre. So bizarre in fact I emailed back asking for clarification.
 


I have discovered the potential for children in the active treatment arm to have been harmed (in addition to the concerns already raised about the theory behind the treatment).

In guidance issued by the National Society for the Prevention of Cruelty to Children, there is a leaflet for parents. https://www.nspcc.org.uk/globalassets/documents/advice-and-info/pants/pants-2018/pants-parents-guide-online.pdf

The PANTS framework is an easy way to understand the basic principles behind protecting children.
Talk about secrets that upset you. Explain to your child that they should always talk about stuff that makes them worried – and that sharing it won’t get them into trouble. Explain the differences between ‘good’ and ‘bad’ secrets. Bad secrets make you feel sad, worried or frightened, whereas good secrets can be things like surprise parties or presents for other people which make you feel excited. Any secret should always be shared in the end.

Speak up, someone can help. Tell your child it’s always good to talk to an adult they trust, about anything that makes them sad, anxious or frightened, so they can help. And it doesn’t have to be a family member. It can be a teacher or a friend’s parent, for example. Reassure them that whatever the problem, it’s not their fault and they will never get into trouble for speaking out.

These principles seem to be the opposite of what is taught in the Lightning Process, where you are taught to hide your true feelings/troubles/worries or you will jeopardise your chances of getting better.

Given this, shouldn't there have been explicit mentions of this to the parents when asking for consent and explaining possible side effects/harm?

Teaching children contradictory lessons to what is considered good practice in protection, especially the contradictory lessons about speaking to someone they trust about anything that makes them 'sad, anxious or frightened', and 'it’s not their fault and they will never get into trouble for speaking out', seems dangerous to me.

An adult can work out the context in which these lessons are supposed to be used, but not necessarily children.

So:

  • why is this not mentioned in the consent forms?
  • is it mentioned anywhere in the ethics approval? (I can't find anything)
  • why did nobody spot this potential problem?
I've used the NSPCC example as an authoritative source for the principles, but everyone who works with children should know a variation of them.

In basic form, we have a relative stranger telling children some troubling principles that have the potential to cause wider harm than the actual treatment itself?

Should this be mentioned in the NICE review submissions, as they mention a particular concern for children and vulnerable adults?

Tagging @Trish and @Graham for your thoughts on this, as you have experience working with children.
 
My teaching in this country was all with over 16 year olds, and I wasn't involved with dealing with child protection. However, as a parent, I am horrified that anyone could have given ethical approval to using LP with children.

I do think there should be some challenge to this. Esther Crawley, as a doctor, also had a duty of care for the children and should never have subjected them to a 'training' that involved adults telling children to keep secrets or tell lies. I don't know whether Tymes Trust, who deal with a lot of child protection cases, could look into it, though the cases they handle are usually false accusations against parents of children with ME, which is different.
 
@Action for M.E. it's my understanding that this kicked off under AYME, whose remit (and head) are now with Action for ME.
It would be heartening to build on recent more critical viewpoints and reinforce the unacceptable ethics and design of this trial with a suitably worded statement.
The credibility lent to this intervention by SMILE should not be underestimated.
 

This looks important. We have already had the discussion about technicalities and how they are the fulcrum for the more important issues that are so hard to pin down. This looks like another critical one. @dave30th has been making some useful headway with SMILE-related ethics. This may clarify the underlying concerns.
 

Given the huge difficulty we've had in getting the various medical bodies (Lancet, Psych Med, GMC, BMJ, MRC, ethics committees etc. etc. etc.) to take seriously these kinds of problems, I wonder if an appeal to the NSPCC would be an option?

After all, their job is to protect kids, and they're outside of the self-protecting medical club.

@dave30th
 