By David Tuller, DrPH
*October is crowdfunding month at Berkeley. I conduct this project as a senior fellow in public health and journalism at the university’s Center for Global Public Health. If you would like to support the project with a donation to Berkeley (tax-deductible for US taxpayers), here’s the place: https://crowdfund.berkeley.edu/project/27513*
Two years ago, the North Bristol NHS Trust conducted a survey among current and recent attendees of clinical services for patients with CFS/ME (as the illness was then being called). The survey included one key question: Did respondents believe that specialist care should be available for CFS/ME patients?
According to the trust’s report on the survey, 99.8% of the 612 respondents between April and July 23rd agreed that such services should be available, with only one person responding in the negative. (In a longer report, the trust indicated that it received 610 responses by July 23rd; if this discrepancy was explained, I didn’t notice it.) The survey did not ask specifically whether the respondents approved of the treatments being offered, in particular a steady increase in physical exertion called graded exercise therapy (GET) and a CFS/ME-specific form of cognitive behavior therapy (CBT). Both interventions have long been promoted as cures.
Why bother discussing this survey now? Well, the issue relates to the continuing saga over the new-but-hijacked ME/CFS guidelines from the National Institute for Health and Care Excellence (NICE), the UK agency charged with developing clinical advice about various medical conditions.
In November 2020, NICE published its draft guidelines for what it now calls ME/CFS. The draft recommended against the two interventions most prominently endorsed in the 2007 guidelines, namely GET and CBT. In the subsequent public comment period, a wide range of interest groups expressed their views on the draft. NICE considered these concerns, produced a final version, and designated August 18th as the publication date.
Instead, in an abrupt divergence from its usual process, NICE announced a publication delay less than 24 hours beforehand. This decision came after fierce objections to the new version from pooh-bahs at some of the UK’s powerful medical associations. (These physician groups are known grandly as “royal colleges,” but they function as trade organizations serving the professional interests of their members.) On Monday, NICE is hosting a “roundtable” discussion among relevant parties, although some stakeholders who asked to be included were not invited.
North Bristol NHS Trust’s report on the 2019 survey included criticism of other research in which patients have reported harms following these standard interventions, and in particular GET. In the process, the trust’s report also raised questions about the NICE draft issued in November 2020. It is certainly possible that this survey will be referenced at the upcoming roundtable by those parties seeking to subvert the guideline development process and gut the new recommendations.
So let’s take a closer look.
A closer look at the Bristol survey
The North Bristol NHS Trust’s reporting of the survey is seriously deficient. The trust noted the number of responses (whether 610 or 612) but failed to provide information about the response rate, a noticeable and surprising omission. In short, how many people received the questionnaire and did not respond? Or did the service just send the questionnaire to patients they hoped or believed would fill it out as desired?
Hard to know, since here is all we’re told about this point: “Patients were invited to respond to the short survey by emails sent from NHS service email accounts…The majority of patients were those of the Bristol CFS/ME Service and parents of young people attending the Paediatric CFS/ME Service at Bath were also invited to participate.” This passage does not offer sufficient context to allow for a genuine assessment of the findings.
Moreover, the survey was apparently launched primarily as a campaigning and advocacy tool. According to the trust, “this survey was conducted because we were aware of a risk of closure of NHS CFS/ME specialist services.” In other words, the trust wanted to generate data that would support a particular policy goal: persuading the NHS to continue to fund CFS/ME specialist services. Perhaps that’s why the main question was asked in a leading fashion that seemed clearly designed to elicit the desired response.
Here’s how the survey introduced the issue:
“Research shows that adults and children with CFS/ME are more likely to improve with specialist treatment…We believe that adults and children with Chronic Fatigue Syndrome or ME (CFS/ME) should have a choice and be able to access specialist NHS services if they want this.”
Then the respondents were asked this question: “Do you think that adults and children with Chronic Fatigue Syndrome or ME (CFS/ME) should have access to specialist NHS services for assessment and treatment?”
This is laughable. It would be hard to concoct a more effective method of ensuring biased responses. The framing of the question leaves minimal room for “no” as an answer. It shouldn’t be surprising that patients want specialist services, and of course they’re likely to agree that they should have access to them. That doesn’t necessarily mean they like or want the services provided at Bristol and Bath, and no one should presume that it does.
To bolster its argument, the North Bristol NHS Trust noted that only a few people criticized the interventions. But the survey did not ask about them. Instead, it left a blank space with this instruction: “If you want to say anything else, please add some free text below.” The trust analyzed these comments by theme, but failed to indicate how many people added any text at all. This parsimonious approach to detail makes it hard to assess the reported findings.
Indeed, that prompt to add “free text” does not represent a serious effort to obtain actionable information—especially because respondents were not anonymous to those conducting the survey. How many recent or current patients (or parents of patients) would be willing to openly declare to those running their clinic that they disliked the interventions or found them to be useless or even harmful?
In late July, someone posted the survey on social media—at which point, the North Bristol NHS Trust report noted, the proportion of negative comments about the services and the interventions rose dramatically. Almost 30% of the 282 later respondents answered “no” to the main question, and many reported concerns about the interventions. These thumbs-down assessments were much more in line with multiple other surveys of patients’ experiences than were the initial Bristol responses. According to the trust, the discrepancy between the first and second batches of responses should raise concerns—but apparently only about the second batch.
Here is what the North Bristol NHS Trust concluded:
“The shift in negative responses…provides compelling evidence that the responses of current and recent NHS patients differ greatly from the responses of people responding via social media. We think that this evidence should be taken into account by the NICE Guideline review in considering the scope of the data submitted from surveys which involved recruitment via social media. It is clear that these surveys do not reflect the experiences of all NHS patients who are accessing specialist CFS/ME Services. It is likely that there are a number of sources of bias involved, and certainly the responses to our survey after it became public suggests that the experiences of actual NHS patients is very different from the experiences of those who heard about the survey via social media.”
This position comes across as an effort to ignore inconvenient data. To my knowledge, it has not been argued that responses to questionnaires disseminated on social media are representative of NHS patients accessing specialist services, nor of the overall patient population. This lack of representativeness, while undoubtedly a limitation, does not render the testimony meaningless or negate the reported experiences.
In contrast, North Bristol NHS Trust appears to assume without justification that its own survey findings are in fact representative of NHS patients attending specialist services. But no one should be able to make this argument with a straight face, or pretend that these results were not themselves subject to enormous bias in the framing of the relevant question.
Beyond that, did North Bristol NHS Trust contact patients who dropped out of clinical services before completing treatment? How about those who went through treatment a year before, or two years, or five years? We simply don’t know—just as we don’t know how many current and recent patients actually received the survey but failed or refused to respond, for whatever reason or reasons.
Will representatives of the medical trade associations really highlight this problematic survey during the roundtable discussion in an effort to impeach other data? Who knows? If they make that choice, their reliance on such flimsy data would expose the weakness of their position. Hopefully, attendees with a more robust grasp of scientific methodology will be able to address the survey’s multiple shortcomings and set everyone straight.