I had the same impression. Haven't looked at it closely but I have a suspicion that the "hard science" Benedetti refers to (his own work presumably) is rather weak.
I haven't seen anything yet that shows the placebo effect isn't simply due to response bias.
I haven't read the study being discussed - here's my "usual" rant!
There's a thread on a study which compared actimetry with self-reported questionnaires - in short, questionnaires overestimate activity/improvement, if you like.
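To make the point concrete, here's a minimal toy simulation (my own illustration, not data or code from any study, and all the numbers are made up) of how pure response bias can produce an apparent "improvement" on questionnaires while actimetry shows no change at all:

```python
import random

random.seed(1)

# Toy setup: true activity is unchanged by the intervention, but
# self-report after an unblinded intervention carries a positive
# response bias (participants who know they were treated report
# doing a bit better than they actually did).
N = 200
true_activity = [random.gauss(5000, 800) for _ in range(N)]  # steps/day

# Actimetry measures true activity with small measurement noise.
actimetry_before = [a + random.gauss(0, 100) for a in true_activity]
actimetry_after  = [a + random.gauss(0, 100) for a in true_activity]

# Questionnaires are noisier AND pick up a response bias post-treatment.
bias = 600  # hypothetical self-report bias, steps/day equivalent
questionnaire_before = [a + random.gauss(0, 400) for a in true_activity]
questionnaire_after  = [a + bias + random.gauss(0, 400) for a in true_activity]

mean = lambda xs: sum(xs) / len(xs)
print(f"actimetry change:     {mean(actimetry_after) - mean(actimetry_before):+.0f} steps/day")
print(f"questionnaire change: {mean(questionnaire_after) - mean(questionnaire_before):+.0f} steps/day")
# Typical output: actimetry change is ~0, questionnaire change is ~+600,
# i.e. a reported "improvement" with no change in actual activity.
```

Nothing clever going on - it just shows that if self-report has any systematic bias, an unblinded trial will register an "effect" on questionnaires that an objective measure won't reproduce.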
Real world scenario:
You work in policy and you are part of a team overseeing a study your Government is funding, i.e. to see whether intervention X works [think PACE]. Your job is to ensure that you have good quality evidence to present to the Minister, so you'd (logically) go for actimetry and quote the relevant study(s) which set out how this method is objective and robust/defensible. The study ends and you write a summary which says, e.g., that the intervention wasn't shown to be effective. The Minister, on the face of it, has two options: highlight that the available evidence does not support the use of this treatment, or say nothing.
Problem is, our experience doesn't look anything like that scenario - poor-quality studies just keep getting funded by the UK Government.
As for why questionnaires are crap, the exact mechanism almost seems beside the point - I think it could be the Hawthorne effect*, which in turn reflects inadequate blinding in studies [raised by Professor Hughes, Galway University, and I guess many others] - but I'm not sure you could ever blind a CBT/GET study effectively.
@Simon had a really simple summary: if a study is unblinded then use objective outcome indicators; if it's properly blinded then subjective indicators are OK (because you have an adequate control group).
Bottom line: if there's a method which works [actimetry], why agonise over why the other method [questionnaires] doesn't?
* https://en.wikipedia.org/wiki/Hawthorne_effect
@Keela Too @Caroline Struthers