BMJ Archives of Disease in Childhood: ''Editor's note on correction to Crawley et al. (2018)'', 2019, Nick Brown. (SMILE LP Trial)

My reading of this thread has been rather perfunctory, for the usual reasons. Has it been made clear what she means by "capacity"? It could be that they lacked the resources. It could be that they lacked the authority, permission or status to have the records made available. In either case this should have been predictable when setting up the study.

Perhaps it was an assumption that school records would be made available (assuming a lot of things underwrites the worldview here).

Or perhaps it's the impact of GDPR kicking in between the start and end of the study.

Or both?
 
I wonder what the cut-off level of their ingenuousness is. Were there a study in which a treatment was said to have turned ill participants into track stars, would a journal publish that result without objective evidence confirming the "self-reported" claims?
 
I wonder what the cut-off level of their ingenuousness is. Were there a study in which a treatment was said to have turned ill participants into track stars, would a journal publish that result without objective evidence confirming the "self-reported" claims?
My daughter's favourite retort to those who think Paralympic athletes reflect the aspirational goal for every disabled person - "so you're modelling yourself on Usain Bolt?"
 
I seem to remember that in an interview Crawley claimed that they had validated the school attendance figures using the school records (it may have been a BBC one, but I can't remember).

I think there's a whole other layer of GDPR considerations there, so I guess they didn't want to go there - much easier to rely on self-report if they'd determined that it was a viable proxy. But my suspicion is that they'd only checked a sample of the baseline data with the school. Interventions can change self-report measures without necessarily changing the underlying condition. And even if the intervention changed actual school attendance too, the numbers show that this was by less than a day a week (per individual) on average between the two groups by 12 months.
 
My reading of this thread has been rather perfunctory, for the usual reasons. Has it been made clear what she means by "capacity"? It could be that they lacked the resources. It could be that they lacked the authority, permission or status to have the records made available. In either case this should have been predictable when setting up the study.

I have previously worked in state secondary schools in the south west of England, and my daughter is a secondary science teacher working in the state sector in Somerset. I know for at least the last decade it is standard practice to take an electronic register in every lesson, as well as morning and afternoon form class registration. This is recorded using a specialised school information management system, such as SIMS.

Literally, modern schools monitor and record everything about their students' attendance, progress and behaviour! I'm not sure about primary, but from what I've seen they do electronic morning and afternoon class registration. Parents receive at least a yearly copy of attendance (usually more frequently) along with their child's school report.

It would have been very simple to have asked the parents of the children/young people involved in the study to obtain an electronic copy of the school attendance record individually and to pass this on to the researchers. Literally, the school's administrator only has to press a button on their computer to produce this data for any given student! Parents/older children would have been fully entitled to have this information provided upon request in England under the Data Protection Act 1998 (there is a newer act now, but the principles were the same).

See for example: https://www.tsatrust.org.uk/wp-content/uploads/2016/01/Subject-Access-Request-Policy.pdf

The researchers could have created a suitable 'subject access request' form for participants to use, making it even easier for them to obtain this limited, specified information from their school. I guess whether it's worth the small effort depends on how important a researcher feels objective, verifiable outcomes are...
 
I know for at least the last decade it is standard practice to take an electronic register in every lesson,
I worked in a tertiary college (sixth form and FE) with 16- to 18-year-olds between 15 and 30 years ago. It was before electronic registration, but we had attendance registers for every lesson. It would have been a bit of work, but attendance could have been fully documented for every pupil from these. Students didn't have to register every morning and afternoon if they didn't have a class.

I can see there is a bit of a problem using school attendance for that age group as a measure of health. Some also leave school at 16, which can add to the problem. Surely that's why you run small pilot studies - to iron out problems like this, and if necessary redesign the study. The problem here, as I see it, is not that they switched outcomes from pilot to main study, but that they included the pilot participants in the main study after that decision was made. That invalidates the study and should have made it unpublishable.

They should have redesigned after the pilot with a different primary objective outcome measure, such as a complete month's actimeter data before treatment and at 6 months and a year after treatment, and run a completely separate study using that outcome measure.
 
Another one at Bristol joining in on the LP 'gravy train':
Dr Rebecca Barnes
My main research interest is the study of interaction between patients and health care professionals. I specialise in the application of conversation analytic (CA) methods to address important questions in relation to improving health care. I have studied patient requests and doctor offers for medical services, how treatment recommendations are formulated and responded to, and safety-netting practices in primary care. I am also interested in using CA alongside other methods to examine implementation fidelity in clinical trials of communication-based interventions. I lead the CA Research Group in Bristol and am a Training Lead for the NIHR School for Primary Care Research.
I am working with Dr Emma Anderson and Professor Esther Crawley using CA methods to understand what the Lightning Process treatment offers to paediatric CFS/ME.
 
I seem to remember that in an interview Crawley claimed that they had validated the school attendance figures using the school records (it may have been a BBC one, but I can't remember).

From Buzzfeed:

Crawley told BuzzFeed News it was possible that there was some placebo effect involved, but that the questionnaires she used in the trial asked questions about how far you can walk and how much school you attended, rather than simply whether people felt better. She added that self-reported school attendance lined up very well with the schools’ records of attendance.

https://www.buzzfeed.com/tomchivers/inside-the-controversial-therapy-for-chronic-fatigue

My fallible memory is that using the school attendance figures was part of the design protocol, and that up to now they had not ‘explained’ why no results were ever published.

In the original protocol :

School attendance
Children and young people are asked about school attendance and home tuition in a two-item inventory. We will ask for consent to check school attendance using school records and will do this at assessment, 3 months, 6 months and 12 months.

http://www.bristol.ac.uk/media-library/sites/ccah/migrated/documents/smprotv6final.pdf
 
It's still an unblinded trial with subjective outcome measures, so not scientifically useful.

And there is no long-term follow-up (LTFU) data as far as I know. Other trials with transient improvements on subjective measures in the short term, such as PACE and FINE, found the effects had gone by LTFU.

We also now have the data, which looks to me like a mess with lots of missing scores and some meaningless scores. @JohnTheJack I hope someone is looking closely at the data for possible re-analysis.

People more qualified than I am looked closely at the data and found nothing much, really. The data, it seems, were properly collected and reported; the flaws are in the trial design and conduct rather than in the data.
 
I think there's a whole other layer of GDPR considerations there,

It would be the DPA rather than GDPR, given the timing, but I would have thought using registration data for a clinical trial without additional permission from the parents or students would be dodgy. Consent for a purpose was an important part of the DPA, I believe.

I'm not sure if Crawley could collect permissions as part of consent and then have the schools (or LEA), as data owners, rely on these.
 
About the school attendance records: this was listed in both the feasibility trial protocol and the full trial protocol. They didn't report it in the feasibility trial report. They presumably already knew they didn't have the "capacity" to gather those data by the time they wrote the full trial protocol. So why did they include official school attendance records as an outcome measure in the full trial protocol?
 
So she lied to either the BMJ or BuzzFeed; either way, it doesn't play well with the BMJ taking assurances from the authors.

I guess she could say she was only talking about the data she had (what does she have?!). If she was referring to some limited data, rather than outcome data for SMILE, in response to concerns about 'placebo' distorting scores for self-report outcomes, then she surely would have known she was being misleading. No one seems to care about researchers misleading people, though!
 
You mean the "data" that claims kids said they were good school attenders when self-reporting?

No, @JohnTheJack got hold of a data set, which is available through a different thread. A number of people looked at it, and the data did represent the results as quoted. There doesn't seem to be actual school attendance data. But even this may not be accurate, as kids may attend school when pushed but do less than they would if they attended part time. Schools will also have rest rooms and things.

Basically the whole trial is flawed, as you need very good objective measures of activity and performance when one intervention pushes people to do more on the premise that they will then get well.
 
So why did they include official school attendance records as an outcome measure in the full trial protocol?

Maybe it made the protocol look better, or it was required?

The protocol doesn't say that though. It said:
We will ask for consent to check school attendance using school records at assessment, 3, 6 and 12 months.

Maybe they asked and didn't get? Or maybe they thought it would be too much faff and went with the proxy measure instead? Or maybe they got consent, checked a few records, and went with the proxy measure because they seemed to match up OK. We can't tell.
 
Also worth noting, Brown says

He admits it's a flawed paper whose author was deceitful to the point of making them rewrite portions of the study, but without irony supports its contribution to the field of research, i.e. its conclusions.

The original Editor's note is gone as far as I can tell. This looks like the lowest possible response above doing nothing, not equivalent to an expression of concern.
Honestly, this is worse than doing nothing. It even tries to erase any trace that there was ever an issue at all. It's completely corrupt. And with Crawley a board member? This is like a microcosm of everything that can go wrong in medical research. It points to serious issues in the peer review and editorial process at the BMJ and an enormous bias towards certain researchers, who are allowed do-overs.

Same as we're seeing with the Cochrane reviews being allowed to be written and rewritten until they can manage to bullshit in just the right way to make it passable. Completely irresponsible, as if it had no impact on anyone but the authors. The patient population is basically an afterthought, of no particular interest and having no stake in the outcome. They are playing with our lives as if we were just some dumb inert matter.
 
The protocol doesn't say that though. It said:
We will ask for consent to check school attendance using school records at assessment, 3, 6 and 12 months.

Maybe they asked and didn't get? Or maybe they thought it would be too much faff and went with the proxy measure instead? Or maybe they got consent, checked a few records, and went with the proxy measure because they seemed to match up OK. We can't tell.

Fair points, but some possible responses:

Seems unlikely that all the individuals would say no - they already had ethics approval. If it was too much faff by the time they were converting their feasibility study into a full trial then that should have been reflected in the protocol for the full trial. If they have data showing that the self-report did not suffer from problems with response bias then they should release the analysis showing this.
 