LOL at Sharpe saying 'debate is good'. A few months back he was endorsing the view that debate in the House of Lords was a form of harassment, and claiming that exactly the same thing was now happening with Monaghan's debates.
 
I've not looked at Sharpe's twitter feed in ages, but he's really not good at discussing things, is he?

Too many odd things to be worth posting them all, but I thought this one was of some interest:



What about the Science Media Centre? e.g.:

Chronic Fatigue Syndrome – unravelling the controversy

Chronic Fatigue Syndrome (CFS; also known as ME) is an incredibly controversial field, not just in terms of public perception, diagnosis and treatment but even for the very researchers trying to help, who have experienced campaigns of harassment from some patients.

http://www.sciencemediacentre.org/chronic-fatigue-syndrome-unravelling-the-controversy/
 
(you probably know all of this and I'm just including it for anyone who hasn't read the old PACE minutes)

A look at the PACE Trial steering committee minutes shows that they were keen to keep serious adverse events from being counted as reactions to treatment, and also that short-term effects were expected (i.e. the admission by Chalder that people could get worse and then better):

10. It was noted that severe adverse events (SAEs) (e.g. a patient having a stroke) were not necessarily severe adverse reactions (SARs) to treatment. Therefore, the procedure for notifying everyone of severe adverse reactions did not apply to all severe adverse events. It was also noted that SARs need to be operationalised into mild, moderate and severe. Finally, it was important to discriminate SARs of the supplementary therapies from SARs to USC. The definition of SARs in this trial is complex and requires further consideration.

and

11. The data monitoring committee safety role would require it to monitor for deterioration of participants in a particular group, as judged by outcome data. It was noted that there needs to be agreement between the PIs, the Chair of the TSC, and the DMC about under which circumstances the trial might be stopped.
Action: PIs, JD and DMC to meet in September

and

o) Professor Darbyshire led discussion about how to define 'improvement'. Professor Dieppe stated that in order to identify 'damage' by any treatment arm, it would be important to know how patients receiving no treatment would be expected to progress. The question was asked 'how soon will you know if a participant is getting worse?', to which Professor Chalder responded that previous research has shown that it cannot be determined if people are getting better until at least six months after the end of therapy (i.e. a year after therapy has begun). CBT and GET may both make a patient worse before they begin to improve. Professor Sharpe clarified that there is a difference between transient and persistent deterioration. It was felt important that the DMEC be aware of this short-term differential effect.

ACTION 11: Professor White to add into section 10.3 (monitoring adverse outcomes) a defined drop in SF36 score.

ACTION 12: DMEC: An explicit definition of deterioration should be produced before the first review by the DMEC next year. At six months and one year after the trial opens for randomisation, the DMEC (and statisticians) will review SAEs, CGI and SF36 scores to see if there is a normal distribution. In addition, previous trials will be reviewed to aid categorisation of deterioration.

and

q) Section 14 on adverse events was carefully reviewed as this has undergone substantial revision since the last TSC meeting. It was felt that a 'new' disability might be irrelevant in the context of PACE
@dave30th @Eagle. Perhaps something that needs some research and communication before the next Westminster debate?
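
To make ACTION 11 and 12 concrete: an 'explicit definition of deterioration' based on a defined drop in SF36 score takes only a few lines to pre-specify. A minimal sketch, using a hypothetical 20-point drop in SF-36 physical function as the threshold (my number, chosen for illustration, not taken from the protocol):

```python
def deteriorated(baseline_pf, followup_pf, drop_threshold=20):
    """Flag deterioration as a drop of at least `drop_threshold` points on the
    SF-36 physical function subscale (0-100, higher = better functioning).
    The 20-point threshold is illustrative, not from the PACE protocol."""
    return (baseline_pf - followup_pf) >= drop_threshold

# A participant falling from 60 to 35 is flagged; 60 to 50 is not.
print(deteriorated(60, 35))  # True
print(deteriorated(60, 50))  # False
```

Anything of this shape could have been written into the protocol before randomisation began.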
 
Wessely says

@etceteraChong there were three therapies and standard med treatment, reproducing normal practice and controlling for non specific effects

But patients in the control group received far less treatment than the other arms. How can he say that the control group controlled for nonspecific effects? Is my understanding of the term wrong?
 

He is presuming that each group in some way "controls" for the other groups. They even tried to build that into the analysis (in the protocol). Actually, it's more evidence they wanted the APT group to be the "control" group.

But they didn't reproduce "normal practice", because normal practice is to combine CBT and GET.
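
A toy simulation makes the problem concrete (arm labels, contact hours and effect sizes below are all invented for illustration; this is not the real PACE data): if a subjective 0-100 score improves a little with every hour of therapist contact, the therapy arms 'beat' the control even when no treatment has any specific effect at all.

```python
import random

random.seed(1)

# Hypothetical contact hours per arm - invented, not the PACE session counts.
ARMS = {"SMC alone": 3, "APT + SMC": 18, "CBT + SMC": 18, "GET + SMC": 18}

NONSPECIFIC_GAIN = 0.5  # assumed points of improvement per contact hour
N_PER_ARM = 160

def mean_change(contact_hours, n=N_PER_ARM):
    """Mean change in a 0-100 subjective score when the ONLY treatment
    effect is nonspecific (attention, encouragement, expectation)."""
    changes = [random.gauss(2, 10) + NONSPECIFIC_GAIN * contact_hours
               for _ in range(n)]
    return sum(changes) / n

for arm, hours in ARMS.items():
    print(f"{arm:10s} mean change: {mean_change(hours):+5.1f}")
```

Every arm here has zero specific effect, yet the three therapy arms all look better than SMC alone, simply because the SMC-alone arm is not matched on the most basic nonspecific ingredient: contact time. That, as I understand the term, is why it cannot be said to control for nonspecific effects.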
 
I've been trying to get my head round this issue of controls with psychological therapies, and getting confused. As I understand it, a control group should be identical to the intervention groups except for the 'active ingredient' of the intervention itself; any differences in outcomes then have a good probability of being due to the intervention and nothing else.

But what happens when trialling psychological interventions, where the intervention is itself about motivating people and encouraging better self-esteem, and so on? What sort of control do you need for such a trial? Do you give the control group the same level of attention, but of a non-motivational kind? That sounds a distinctly dubious control to me. This has been bugging me and I'd like to understand it better.
 

There is a Catch-22 involved, and a very subtle one. The real problem is that the effect may depend on the therapist believing in the treatment. So you need to compare different treatments where the therapists believe equally in their effectiveness. Maybe the comparison of CBT to GET is close to that, but the result is no difference, so that does not tell us that either is effective. You really need a bogus treatment that therapists believe in. The Lightning Process gets close to that. And it had an effect on subjective outcomes. So everything is pointing to the idea that it does not really matter what the treatment is, as long as the therapist believes in it.

But then you have the problem that the effect in these trials may only occur in trials - where the belief of the therapist induces a trial subject to take on a positively biased role as a trial subject. So the results tell you nothing about what would happen in ordinary practice.

What people doing trials often forget is that you have to put your wide-angle glasses on and think about what happens in real life. Sticking to some rules in a book is no good. In real life all sorts of things affect what happens - and they all have to be controlled for in different ways.
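
A crude sketch of that Catch-22 (every number below is invented, purely to illustrate the logic): if therapist belief adds the same amount to the subjective scores of two arms, comparing those arms against each other shows 'no difference' whether or not either works, and comparing either against a plain control shows an 'effect' that a believed-in sham would reproduce.

```python
import random

random.seed(2)

def arm_mean(specific_effect, belief_bias, n=200):
    """Mean change in a subjective outcome: any real effect, plus bias from
    an enthusiastic unblinded therapist, plus noise. All numbers invented."""
    return sum(specific_effect + belief_bias + random.gauss(0, 8)
               for _ in range(n)) / n

# Neither therapy has any specific effect, but the therapists
# believe equally strongly in all three, including the sham.
cbt     = arm_mean(specific_effect=0, belief_bias=6)
get     = arm_mean(specific_effect=0, belief_bias=6)
sham    = arm_mean(specific_effect=0, belief_bias=6)  # believed-in bogus treatment
control = arm_mean(specific_effect=0, belief_bias=0)  # delivered without conviction

print(f"CBT vs GET:      {cbt - get:+5.1f}  ('no difference' - tells us nothing)")
print(f"CBT vs control:  {cbt - control:+5.1f}  (looks like an effect)")
print(f"sham vs control: {sham - control:+5.1f}  (same 'effect' - it is the belief)")
```

Only the believed-in sham arm exposes the belief effect; nothing else in the design can.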
 

One of the major problems with PACE was the complexity of what it was trying to do. It was simultaneously trying to test 3 different therapies and control for all their 'active ingredients' against one another. There is no way to adequately 'control' such a trial. So when MS says it was 'just a trial', he is wrong. When he says that each intervention 'had its own model', he is admitting that it wasn't (and couldn't be) adequately controlled. And when your intervention and outcome measures are so closely linked as to essentially be an education program in how to fill in questionnaires to get the desired result, you definitely can't control it properly.

I'm now beginning to wonder whether they deliberately made it so complex so that if they couldn't understand what was going on, no-one else was likely to either. Oops.
 
I have only managed to find a small portion of the book that Wessely co-authored, Clinical Trials in Psychiatry. As you would expect, a lot of it is his usual historical stuff. But there is this bit which I thought may be of interest:

As ever, he is rewriting the evidence to suit, so that folks can say, "Oh, this textbook says it's all OK if I do it like this."
Just no!

Blinding of surgery can be tricky, but there are ways round it. Sham procedures, for example. But he doesn't mention them.

Why would using a blinded evaluator help if you are recording said subjective responses using a paper questionnaire?

The main problem with blinding psychological trials is that the patient absolutely knows what they are getting, and so does the therapist. The assessment is a small factor compared with the major biases introduced by that fundamental lack of blinding.

The last bit, he's just hedging.

It's all simply dreadful!
 

I suppose that counts as:

'Two legs are probably OK as long as you have another two in the air - so let's say two legs good then.'
 