
Assessment at clinics

Discussion in 'Advocacy Projects and Campaigns' started by Graham, Oct 13, 2019.

  1. Graham

    Graham Senior Member (Voting Rights)

    Messages:
    3,324
    I'm just floating ideas here, and would appreciate some feedback.

    I have been thinking about the difficulties that ME centres have in measuring effectiveness. It doesn't seem appropriate to put patients through a set of demanding objective assessments, and yet responses to subjective assessments are so easily manipulated.

    So here's my take on it so far: three areas need looking at – activity levels, knowledge and "feelings" about the condition, and cognitive abilities.

    I don't think it is appropriate to try and produce a lengthy, finely-scored set of questionnaires. I'm not interested in a treatment that may bump up my Chalder Fatigue score by 2 points - I vary by more than that from week to week.

    So, one questionnaire simply asking a series of graded questions (graded question therapy?) about what the patient did on the previous day, starting with "Did you wash and dress yourself?" up to "Did you spend more than 4 hours in paid employment?".
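    As a very rough sketch of how such a graded scale might be coded and analysed in bulk (only the first and last item wordings come from the suggestion above; the middle items and the "highest consecutive yes" scoring rule are invented placeholders, not a validated set):

    ```python
    # Hypothetical graded items: only the first and last come from the
    # suggestion above; the middle ones are invented for illustration.
    GRADED_ITEMS = [
        "Did you wash and dress yourself?",
        "Did you prepare a hot meal?",
        "Did you leave the house?",
        "Did you walk more than 500 metres?",
        "Did you spend more than 4 hours in paid employment?",
    ]

    def graded_score(answers: list[bool]) -> int:
        """Score = highest consecutive 'yes', Guttman-style: counting
        stops at the first 'no', so one unusual 'yes' further up the
        scale can't inflate the result."""
        score = 0
        for done in answers:
            if not done:
                break
            score += 1
        return score

    print(graded_score([True, True, True, False, False]))  # -> 3
    ```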

    One questionnaire asking a very limited number of questions such as "Do you think you have ME?", "Do you understand the principles of pacing?" etc. The idea here is to cover one of the main benefits of the specialist centres – confirmation of diagnosis and an introduction to pacing, possible sleep techniques etc.

    And one short computer-controlled cognitive test, where perhaps you have the words RED, BLUE, GREEN, and YELLOW, in the right colours, apart from one which you have to spot, then the converse, or something along those lines. I gather 5 minutes is enough to slow us down.
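    Something like this could even run in a plain terminal - a crude sketch of one trial, assuming ANSI colour support, and only to illustrate the mechanics:

    ```python
    import random
    import time

    ANSI = {"RED": "\033[31m", "GREEN": "\033[32m",
            "YELLOW": "\033[33m", "BLUE": "\033[34m"}
    RESET = "\033[0m"

    def one_trial() -> None:
        words = random.sample(list(ANSI), k=4)  # four colour words
        odd = random.randrange(4)               # position of the mismatch
        colours = list(words)                   # every word in its own colour...
        colours[odd] = random.choice([c for c in ANSI if c != words[odd]])
        print("  ".join(f"{ANSI[colours[i]]}{i + 1}:{words[i]}{RESET}"
                        for i in range(4)))
        start = time.monotonic()
        answer = int(input("Which word is in the wrong colour (1-4)? "))
        elapsed = time.monotonic() - start
        verdict = "correct" if answer == odd + 1 else f"no, it was {odd + 1}"
        print(f"{verdict} ({elapsed:.1f}s)")

    one_trial()
    ```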

    With PACE, almost all assessments showed the greatest improvement between baseline and the next assessment. I'm not surprised – the effort involved in getting to the first session, meeting new people and adjusting to the system is exhausting. I would propose that copies of these assessments be sent to the patients, together with a link to the computer test, so that they can look through what they will be required to do, and soften the shock.

    Finally, while at home, patients could be asked to complete a "clock scale" for two weekdays, just shading in blue the hours spent in bed or at rest, and those spent in higher energy requirements (like driving) in red: unshaded areas would be assumed to be low energy requirements such as eating, watching TV etc. (I'm talking in terms of healthy people's perception of energy levels here). Obviously there would need to be a good list of useful examples of each.
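    In data terms the clock chart is very simple. A rough sketch, with the category names and the example day invented for illustration:

    ```python
    from collections import Counter

    def clock_totals(shaded: dict[int, str]) -> Counter:
        """shaded maps each hour (0-23) to 'rest' (blue) or 'high' (red);
        unshaded hours default to 'low' (eating, watching TV etc.)."""
        day = Counter()
        for hour in range(24):
            day[shaded.get(hour, "low")] += 1
        return day

    # Example day: in bed until 9am, a nap 2-4pm, an hour's driving at 11am.
    example = {h: "rest" for h in range(9)}
    example.update({14: "rest", 15: "rest", 11: "high"})
    print(clock_totals(example))  # Counter({'low': 12, 'rest': 11, 'high': 1})
    ```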

    My thoughts would be to produce a set of ME-patient-approved assessments for centres to use to measure any effectiveness. They need to be able to be coded and analysed in bulk. I certainly do not think that the ones they use at present are appropriate.
     
  2. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,234
    It seems inherently difficult to accurately measure improvement in the condition and determine whether that is due to the clinic.

    Anyway, I would like questions that aren't about how I feel but about what I'm able to consistently do.
     
    Cheshire, alktipping, rvallee and 4 others like this.
  3. Graham

    Graham Senior Member (Voting Rights)

    Messages:
    3,324
    I'm not interested so much in how the clinics make us feel, but many of us have forgotten the long time we spent waiting for a diagnosis, wondering or worrying about what it could be. One friend had a dad who died of Motor Neurone Disease, another worked in a ward of patients with dementia: you can understand what a good job getting a diagnosis did for them.

    I don't think we could aim at accurately measuring anything, only at detecting whether there was a significant improvement in what we were able to do. Nor would any individual improvement automatically be attributed to a clinic: but with a reasonably large database, trends could be measured.

    My feeling is that if ME clinics do any good, there needs to be some measure that shows up the effective ones.
     
  4. Trish

    Trish Moderator Staff Member

    Messages:
    51,859
    Location:
    UK
    Isn't that just a variation on the SF-36 physical functioning? And asking about one day is, as we all know, potentially very misleading.

    I agree fatigue questionnaires are completely useless for assessing improvement or deterioration. Far too easily influenced.

    I would prefer they give out some equivalent of a Fitbit, or if they are cash poor, a simple pedometer, or get their patients to use a smartphone app if they keep their phones on them all the time.

    And use an app of the type Solve are designing for patients to record their symptoms, and any measures they have the equipment to do - steps, heart rate variability, etc.

    Patients with diabetes have to learn to monitor their blood sugar. I see no reason why patients with ME shouldn't be shown how to monitor their symptoms using an app, or a paper version if they don't have access to technology. It helps the patient learn what helps and hinders them from pacing. And it would give the clinic feedback on how the patients are getting on if they have access to the data too.

    The app could also include a quick cognitive test for patients to do as and when suitable.

    Time for ME clinics to enter the electronic age and dispense with those useless, time-wasting, largely irrelevant questionnaires.
     
  5. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    13,140
    Location:
    UK West Midlands
    Maybe something about what activities of daily living you're able to manage to do for yourself.
     
  6. JohnTheJack

    JohnTheJack Moderator Staff Member

    Messages:
    4,349
    Just a brief thought: if there are questions about daily living, then I think they should be about what has been done, say, in the last week, rather than 'what can you do?'
    Eg how many times have you prepared a meal in the last week? How many times have you gone outside and walked at least 100 metres in the last week? etc.
     
  7. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    13,140
    Location:
    UK West Midlands
    Yes, very good point. People definitely overestimate what they can potentially do based on what they have been able to do at some point in the past. Not exactly the same situation, but I’ve sat in hospital appointments with a parent with dementia who convincingly confirmed their ability to make a meal for themself. (Not).
     
    Sarah94, rainy, Annamaria and 8 others like this.
  8. Graham

    Graham Senior Member (Voting Rights)

    Messages:
    3,324
    The SF-36 asks questions along the lines of whether a patient has no difficulty, some difficulty or a lot of difficulty with, say, climbing several flights of stairs. Grading an answer like that is so difficult, and open to manipulation.

    I was thinking of a simple list of facts - did you dress and wash yourself this morning?

    The reason I specify "yesterday" is that it gets more and more unreliable the further you step back in time. The "yesterday" would have to be flexible though.

    I'm not sure what types of ME patients you mix with though: I can think of a number in my group who would have considerable problems with any technological solution, and a couple who would resist it on principle. As for them estimating how many times they walked more than 100 metres in the last week ......

    People are very unreliable at this sort of recollection: they either need to note it down as it happens, have it automatically recorded, or we need to keep the window short - yesterday. I struggle to remember what I have done over the last week. Yesterday is a big enough challenge.

    It's true I have a smartphone though: I'm not sure how many steps it does each day in my drawer. I'll have to keep track.

    Thanks for the thoughts and challenges so far. I'm not pretending I have the solution, just hoping to stir some ideas and see what emerges.
     
    Annamaria, JohnTheJack, Hutan and 4 others like this.
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,269
    Location:
    London, UK
    I doubt this can ever be done. For most specialities it is never done because it is realised that there are too many confounders to generate data that makes any sense. In rheumatology nobody checks clinics to see if they are getting good results with rheumatoid arthritis. We understand that results depend too much on referral patterns, severity biases, etc. and that any attempt to score will immediately bring in fiddling of data.

    The only reason anyone is assessing 'efficacy' these days is because of commercial outsourcing, I suspect. Death and complication rates for surgery have always been a legitimate independent measure of care quality, but in medical specialities there is nothing as concrete.

    In a sense this is 'audit', and audit is by definition bad science because it is not done in a controlled set-up. It is what the people who like 'pragmatic trials' set up. I think it is the road to nowhere.
     
    rainy, Cheshire, JohnTheJack and 7 others like this.
  10. Wonko

    Wonko Senior Member (Voting Rights)

    Messages:
    6,674
    Location:
    UK
    So what you seem to be saying is that no one has bothered to check if medicine actually 'works' - despite spending £120 billion a year on it (in the UK)?
     
    Annamaria, wdb, Snow Leopard and 2 others like this.
  11. Trish

    Trish Moderator Staff Member

    Messages:
    51,859
    Location:
    UK
    I get that. And I agree asking patients to recall what they have done in the last week is unreliable.

    Perhaps generations younger than you or me would be less put off by being offered an app to use for recording symptoms and activity. And doing it each day, or one day a week, or whatever suits the individual, should not be a burden.

    That's why I suggested monitoring with the help of technology may be the way to go - not so much for clinics to monitor the success of therapy, since we know that none of the 'therapies' for ME work anyway, but to help patients self monitor so they can learn from their activity and symptom patterns, possibly with the help of a nurse practitioner or OT who is teaching them how to use it. It's just a tech solution to the tedious activity and symptom diaries ME clinics seem so fond of and which I found useless.
     
  12. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,290
    Location:
    Canada
    I had roughly the same idea a few months ago and added some notes to this thread about a questionnaire assessment that borrows from the same idea but doesn't include the notion of sustainability, of being able to reliably do those things rather than just once: https://www.s4me.info/threads/valid...itions-2019-carlozzi-et-al.11452/#post-204102.

    I think this is the right direction. A graded assessment of what people can sustain, with healthy people able to do pretty much everything on a daily basis without end, and sicker people falling flat in the early questions, if not the very first few.
     
  13. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,269
    Location:
    London, UK
    On the contrary. Proper trials check that methods work. You then apply them and you can check that you are applying them consistently. But you don't get information about whether treatments work from auditing routine clinics.
     
    wdb, Simbindi, TrixieStix and 3 others like this.
  14. Graham

    Graham Senior Member (Voting Rights)

    Messages:
    3,324
    Coming from education, as Head of Maths, I was answerable to parents, the head and the governors. That wasn't a superficial chat either: for all of my time in education, it was easier to get good grades in English than in maths. When I started, only 20% of the country were allowed to achieve a grade C in maths, but for English it was 40%. That's one reason why people think they are better at English than maths.

    I often get the accusation that it was easy - just count the exam grades - but it is far more complex than that: if nothing else, we were a comprehensive all-ability school in a grammar school area! Just think what that does for the distribution of skills in the intake, and for a long time we were girls only. It took me hours of analysis to work out whether we were performing well or not, and to determine which areas contributed the most to our improvements.

    I find it difficult to understand why doctors are not more accountable. In my time I have come across a number of specialists who have claimed that such and such a treatment would sort out all the problems – a full-blooded sinus operation was supposed to do that for me! Afterwards, I came across many who had had similar treatment, and it had worked, but only for a few years before the problem returned. But I have yet to come across any specialist who followed up his patients, say, a year later, to see whether the treatment was effective.

    It reminds me of a comment I read from one of the specialists running a local ME clinic. He knew the treatments were very successful because patients didn't come back. On those grounds, I could run a brilliant car repair service.

    Studies are a start to the evaluation process, but they are only a start. The yellow card system is an attempt to improve matters, but it is hardly used: many doctors do not understand probability - if a side effect is stated to only occur once in ten thousand cases, they dismiss it as being unlikely. In fact what they should do is balance that probability against the other possibilities that could have resulted in that effect at that specific moment (generally they are all very unlikely).
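    To put toy numbers on that point (both rates below are invented for illustration):

    ```python
    # Toy comparison: treat the drug and "anything else" as the only two
    # candidate explanations for a symptom that has just appeared.
    p_from_drug = 1 / 10_000    # the stated side-effect rate
    p_other_cause = 1 / 50_000  # assumed background rate of the same
                                # symptom appearing anyway in that window

    posterior = p_from_drug / (p_from_drug + p_other_cause)
    print(f"P(drug caused it | symptom occurred) = {posterior:.0%}")  # 83%
    ```

    So a "rare" side effect can still be the most likely explanation, because everything else that could have produced that effect at that specific moment is rarer still.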

    I think a lack of auditing encourages the god-like attitude that I have found among a significant proportion of specialists. Auditing is not easy, it is not clear-cut, but something needs to puncture the arrogance and focus on the needs of the patients. How else can you tell if all the staff are working equally well? That all are following best practice? That the study was wrong?
     
    Sarah94, rainy, alktipping and 10 others like this.
  15. WillowJ

    WillowJ Senior Member (Voting Rights)

    Messages:
    676
    I agree that it's good to have a system that captures information about what we can actually do, both physically and mentally (not so much how we feel, which is important to quality of life but doesn't make for useful assessments).

    And I also think it's more reasonable to ask about the past week, or two weeks, than just yesterday (my activity in any given area tends to fluctuate as I try to balance priorities - most days I don't dress, with or without help, but stay in pajamas, and I don't necessarily change them every day, but it's relevant whether I have changed them 0 or 3 times this week, or something in between).

    (And my overall activity fluctuates just with the nature of the disease)

    I do understand that assessing treatments and compliance is easier than auditing clinics, and that auditing clinics introduces complexity that doesn't have anything to do with treatment usefulness... but on the other hand there are incompetent, grumpy, arrogant, or otherwise bad doctors.

    And guidelines are a general rule meant for the bulk of the population but which won't work for every last person. If they're being applied mindlessly, some people are not getting appropriate care.

    So there's a need for assessing doctors and clinics also. It's just very tricky.
     
    alktipping and Snow Leopard like this.
  16. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,269
    Location:
    London, UK
    I don't disagree with the need for more accountability in medicine. But I think it really is very difficult to see how you rate individual clinics or practitioners without spending huge amounts of money. And for the last few decades governments have focused on cutting costs and sweeping the problem under the carpet.

    My daughter is head of maths at an all-ability school (all girls this time) in an area dominated by selective fee-paying schools like North London Collegiate, and we discuss the relative impossibilities of auditing our professions over dinner. I realise the complexities. I genuinely think that trying to assess individual performance by any sort of outcome measure is impractical in medicine. As I say, for my prostate surgeon colleagues it is easier to audit cure and complication rates, although even there there are confounders.

    The problem for ME clinics is that we have at present no reason to think any of them do any good at all that could be measurable. The good ones will be those that show respect and keep patients coping and feeling cared for. Measuring things often distracts from much simpler forms of assessment - 'do they help you?' - and is so easily gerrymandered.

    I am reminded of the example of auditing clinic waiting times. For about a decade in our department every clinic attendance had to be logged for time from referral to appointment. Somebody then took thousands of bits of paper and put them into a computer and calculated the mean wait time from referral to appointment. What nobody seemed to understand is that you would get the same result by just asking the clinic clerk how far ahead the earliest available appointment was. If it was seven weeks then clearly the system was broken. If she could fit in an urgent case tomorrow then it was not. Those are the sorts of things I think it would make the biggest difference to sort out, and clearly there is no political pressure to do so. The pressure from the public's elected representatives is to stop people being sent to clinics at all.

    We could probably debate this forever. I do agree that accountability needs to be there. But the resistance to removing CBT and GET at NICE comes from people who make a living out of auditing such treatments and showing that they work in 51% of cases using 'pragmatic' methods.
     
    TigerLilea, rainy, TrixieStix and 4 others like this.
  17. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    3,633
    An ideal effective clinic would be attempting to do many different things:

    - accurate and timely diagnosis, including any comorbidities
    - provide information
    - educate medical professionals, patients and public
    - support adjustment to a life changing condition
    - enable patients to adapt their behaviour to make the most of any spontaneous improvement and/or avoid patterns that worsen the condition
    - enabling management of activity patterns, pacing, avoiding PEM, etc
    - helping minimise any negative consequences of inactivity and helping individuals identify if any activities are possible to help avoid deconditioning
    - support access to appropriate financial and practical support
    - aids and adaptations
    - manage any consequent mental health issues
    - ensure appropriate medical management and pharmacological interventions, including comorbidities such as POTS, IBS, food intolerances and so on
    - proactive mechanisms for managing new symptoms, and rereferral systems
    - etc

    And this is all without any attempt at rehabilitation of the underlying ME, given we have no currently effective identified treatment/intervention strategies.

    Any evaluation of outcomes is going to be very complex. For some, keeping them in employment is success, whereas for others supporting them out of employment might be the best outcome. For some, increasing overall stable activity levels might be a success, but for others the target may be to reduce their overall activity levels. For some you might enable them to take advantage of spontaneous recovery, but for others you might be supporting their adaptation to a deteriorating underlying condition.

    Activity scales (eg questionnaires), real-life outcomes (employment status/receipt of benefits, time attending school) or objective measures of physical activity are all important in evaluating specific interventions, but they are no use for evaluating the overall effectiveness of a clinic. We cannot even say whether, at any arbitrary time point, patients reporting improvement in their overall happiness is good or bad, given people may be going through a grieving process or even moving from false optimism to resigned realism.

    The fact that we are looking at this question in this way is an artefact of the inappropriate approach of our current system. We have at present clinics that are largely seeking to implement ineffective rehabilitation techniques aimed at increasing people’s activity levels, either directly (GET) or indirectly (CBT, by removing supposed psychological blocks to normal activity). Here, measuring activity levels or employment status would give meaningful outcome measures, but generally they document the failure of these interventions.

    An ideal effective clinic would be attempting different things with different people, and indeed different things with the same person at different times. Outcome measures might include reaching specific targets, eg giving a diagnosis or ensuring those not working access appropriate benefits, or they might be the achievement of agreed outcomes on an individual intervention programme, eg implementing pacing strategies in everyday life, but I don’t see how simple measures could be applied to a clinic overall.
     
    Sarah94, Hutan, alktipping and 7 others like this.
  18. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    3,633
    Even ‘waiting times’ are not necessarily meaningful, as they are subject to political redefinition. For example, when I managed an NHS Communications Aids Equipment Service without any adequate budget: when there was central government funding to reduce waiting lists, we engineered our record keeping to emphasise our equipment waiting list, but when services were penalised for having long waiting times, we redefined equipment recommendation as a successful outcome and an end point in the process that identified unmet need. So people were no longer waiting, they just had an unmet need, even though we regularly got pots of money from underspends elsewhere to reduce the unmet need. What we actually did with the patient was the same in both situations; it was just that how we recorded it and presented it upwards changed in accordance with the current best way to try to manipulate the funding system.
     
    Hutan, Amw66, TrixieStix and 8 others like this.
  19. Suffolkres

    Suffolkres Senior Member (Voting Rights)

    Messages:
    1,522
    How about an adapted version of the Canadian 'Symptom' scoring chart?
    It could be used effectively as a kind of audit trail over time.
    A high score would mean significant impact on daily living and QOL scores.

    Used:
    1. Before diagnosis
    2. After diagnosis and intervention
    3. Ongoing over time - as the condition progresses
    That's how I have used it.

    This takes fluctuating symptoms (true, patient 'subjectively' assessed/self-reported) and scores them, resulting in an objective overall total score.

    (This could also be done by a carer or a non-ME member of the family, as a more objective measure of the condition's burden on the patient as against the carer.)

    If a clinic is having any effects at all (good or bad!), surely that should translate into some element of help with effective symptom management and control? That in turn would impact on DLA and QOL?
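    As a very rough sketch of the kind of scoring that implies (the symptom list and 0-3 severity ratings below are invented placeholders, not the actual Canadian chart):

    ```python
    # Invented symptoms and ratings, for illustration only. Each symptom
    # is self-rated 0-3 and summed; the same total is recorded before
    # diagnosis, after intervention, and at intervals thereafter, so the
    # trajectory can be compared over time.
    SYMPTOMS = ["fatigue", "PEM", "unrefreshing sleep",
                "cognitive problems", "orthostatic intolerance", "pain"]

    def total_score(ratings: dict[str, int]) -> int:
        """Higher total = greater impact on daily living and QOL."""
        return sum(ratings.get(s, 0) for s in SYMPTOMS)

    before = {"fatigue": 3, "PEM": 3, "unrefreshing sleep": 2,
              "cognitive problems": 2, "orthostatic intolerance": 1, "pain": 2}
    after = dict(before, pain=1)
    print(total_score(before), "->", total_score(after))  # 13 -> 12
    ```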

    At the NICE Scoping meeting the consultant endocrinologist who runs a clinic seemed to like that concept.
     


    Last edited: Oct 14, 2019
    Amw66, alktipping, Graham and 2 others like this.
  20. Graham

    Graham Senior Member (Voting Rights)

    Messages:
    3,324
    Thanks everyone for helping me think more clearly about this matter. I've got a lot more thinking to do yet.

    Some quick thoughts.

    The "evaluation" is an overall evaluation of the clinic's approach, not of the individual patient. Prior to using any assessment, it would be tested over a period of time with ME patients, and some sort of measure of variability obtained. Minor changes like a pyjama day or not would vanish under those natural variations. After all, if a clinic is claiming success, it needs to be well above any natural variations.

    The reason for only asking for details for one day is to reduce load and aim at accuracy. There would have to be some sort of spiel about perhaps choosing a couple of "normal" days in advance to complete the clock chart, and perhaps use the second day as the one to answer the questions.

    I did think about electronic methods, such as Fitbits, but decided firstly that they were too inaccurate, and secondly that they did not easily pick out energy-sapping tasks, such as driving or trying to arrange house insurance: the emphasis was on physical movement. One problem that reared its head when I was trying to work out why PACE had ordered so few trackers for their 640 patients was the sheer complexity of getting them returned, the data uploaded, and the devices ready for the next patient. When you start to factor in weekends, holidays, delays in the post etc. it gets very messy.

    It's interesting, Jon, that you were able to discuss the difficulties of assessing success in teaching maths with your daughter: I had envisaged having to explain the complexities. I also think that Peter's list is very relevant. But surely the point is not so much that we have to measure "success" with such a difficult condition, in which normal ideas of success are inappropriate, but that we need to measure effectiveness, for which Peter's list is very important. I think that there could be two important outcomes from such an analysis. The first is that clinics stop claiming major success in treating the condition, and the second that unhelpful practices or attitudes are highlighted.

    As head of maths, even knowing that the analysis was difficult and potentially misleading, I spent a lot of time on it each year. It helped me pinpoint areas that needed improvement. Obviously it had to be combined with other sources of information and professional knowledge, but it kept me focused on the task of making the department as good as it could be. Do specialists running departments have that pressure and do they have suitable methods of looking at that?
     
