PACE trial investigators respond to David Tuller

Professors Peter White, Trudie Chalder and Michael Sharpe (co-principal investigators of the PACE trial) respond to the three blog posts by David Tuller, published here on 21st, 22nd and 23rd October 2015, about the PACE trial.


The PACE trial was a randomized controlled trial of four non-pharmacological treatments for 641 patients with chronic fatigue syndrome (CFS) attending secondary care clinics in the United Kingdom (UK) (White et al, 2011a). The trial found that individually delivered cognitive behaviour therapy (CBT) and graded exercise therapy (GET) were more effective than both adaptive pacing therapy (APT), when added to specialist medical care (SMC), and SMC alone. The trial also found that CBT and GET were cost-effective and safe, and were about three times more likely to result in a patient recovering than the other two treatments.

There are a number of published systematic reviews and meta-analyses, from both before and after the publication of the PACE trial results, that support these findings (Whiting et al, 2001, Edmonds et al, 2004, Chambers et al, 2006, Malouff et al, 2008, Price et al, 2008, Castell et al, 2011, Larun et al, 2015, Marques et al, 2015, Smith et al, 2015). We have published all the therapist and patient manuals used in the trial, which can be downloaded from the trial website.

We will address only David Tuller’s main criticisms. Most are often-repeated criticisms that we have responded to before, and we will argue that they are unjustified.

Main criticisms:

13% of patients had already “recovered” on entry into the trial

Some 13% of patients entering the trial did have scores within the normal range (i.e. within one standard deviation of the population mean) for one or both of the primary outcomes of fatigue and physical function – but this is clearly not the same as being recovered; we published a correction after an editorial, written by others, implied that it was (White et al, 2011a). In order to be considered recovered, patients also had to:

  • Not meet case criteria for CFS
  • Not meet eligibility criteria for either of the primary outcome measures for entry into the trial
  • Rate their overall health (not just CFS) as “much” or “very much” better.

It would therefore be impossible to be recovered and eligible for trial entry (White et al, 2013). 
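As a concrete sketch, the “within normal range” rule described above (a score within one standard deviation of the population mean) can be expressed in a few lines of Python; the population means and standard deviations used here are illustrative placeholders, not authoritative trial values:

```python
# Sketch of the "within normal range" rule: a score counts as within the
# normal range if it lies within one standard deviation of the population
# mean, on the healthy side.  All numbers below are illustrative only.

def within_normal_range(score, pop_mean, pop_sd, higher_is_better=True):
    """True if `score` is no worse than one SD from the population mean."""
    if higher_is_better:                  # e.g. SF-36 physical function, 0-100
        return score >= pop_mean - pop_sd
    return score <= pop_mean + pop_sd     # e.g. a fatigue scale, higher = worse

# With an illustrative population mean of 84 and SD of 24, the normal-range
# threshold for physical function would be 84 - 24 = 60.
print(within_normal_range(65, 84, 24))   # True  (65 >= 60)
print(within_normal_range(55, 84, 24))   # False (55 < 60)
```

Note that the resulting cut-off depends entirely on the population mean and SD supplied, which is why the choice of population sample matters so much in the debate that follows.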

Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee

It is considered good practice to publish newsletters for participants in trials, so that they are kept fully informed both about the trial’s progress and about topical news concerning their illness. We published four such newsletters during the trial, all of which can be found on the trial website, including the newsletter referred to by David Tuller.

As can be seen, no specific treatment or therapy is named in this newsletter, and we were careful to print feedback from participants in all four treatment arms. All newsletters were approved by the independent research ethics committee before publication. It seems very unlikely that this newsletter could have biased participants, since any influence on their ratings would have affected all treatment arms equally.

The same newsletter also mentioned the release of the UK National Institute for Health and Care Excellence guideline for the management of this illness (this institute is independent of the UK government). This came out in 2007 and received much media interest, so most patients would already have been aware of it. Apart from describing its content in summary form we also said “The guidelines emphasize the importance of joint decision making and informed choice and recommended therapies include Cognitive Behavioural Therapy, Graded Exercise Therapy and Activity Management.” These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.

The “key investigator” on the guidelines committee, who was mentioned by David Tuller, helped to write the GET manuals and provided training and supervision for one of the therapies; however, they had left the trial team two years before the newsletter’s publication.

Bias was caused by changing the two primary outcomes and how they were analyzed

These criticisms were first made four years ago, and have been repeatedly addressed and explained by us (White et al, 2013a, White et al, 2015), including explicit descriptions and justification within the main paper itself (White et al, 2011a), the statistical analysis plan (Walwyn et al, 2013), and the trial website section of frequently asked questions, published in 2011.

The two primary outcomes for the trial were the SF-36 physical function sub-scale and the Chalder fatigue questionnaire, as in the published trial protocol; there was no change in the outcomes themselves. The only change to the primary outcomes from the original protocol was the use of the Likert scoring method (0, 1, 2, 3) of the fatigue questionnaire in preference to the binary method of scoring (0, 0, 1, 1). This was done in order to increase the variance of the measure (and thus provide better evidence of any change).
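The difference between the two scoring methods can be sketched as follows; the 11-item questionnaire offers four response options per item, and the example answers are hypothetical:

```python
# Sketch of the two scoring methods for an 11-item fatigue questionnaire
# in which each item has four response options, coded 0-3 here.
# The example responses are hypothetical.

def likert_score(responses):
    """Likert scoring (0, 1, 2, 3 per item): total ranges from 0 to 33."""
    return sum(responses)

def binary_score(responses):
    """Binary scoring (0, 0, 1, 1 per item): total ranges from 0 to 11."""
    return sum(1 if r >= 2 else 0 for r in responses)

answers = [0, 1, 2, 3, 2, 1, 0, 2, 3, 1, 2]   # one hypothetical participant
print(likert_score(answers))   # 17 (out of a possible 33)
print(binary_score(answers))   # 6  (out of a possible 11)
```

Because the Likert total can take 34 distinct values against 12 for the binary total, it spreads scores over a wider range; that is the increase in variance referred to above.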

The other change was to drop the originally chosen composite measures (the number of patients who either exceeded a threshold score or who changed by more than 50 per cent). After careful consideration, we decided this composite method would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.

All these changes were made before any outcome data were analyzed (i.e. they were pre-specified), and were all approved by the independent Trial Steering Committee and Data Monitoring and Ethics committee.

Our interpretation was misleading after changing the criteria for determining recovery

We addressed this criticism two years ago in correspondence that followed the paper (White et al, 2013b), and the changes were fully described and explained in the paper itself (White et al, 2013). We changed the thresholds for recovery from the original protocol for our secondary analysis paper on recovery for three, not four, of the variables, since we believed that the revised thresholds better reflected recovery. For instance, we included those who felt “much” (and “very much”) better in their overall health as one of the five criteria that defined recovery. This was done before the analysis occurred (i.e. it was pre-specified). In the discussion section of the paper we discussed the limitations and difficulties in measuring recovery, and stated that other ways of defining recovery could produce different results. We also provided the results of different criteria for defining recovery in the paper. The bottom line was that, however we defined recovery, significantly more patients had recovered after receiving CBT and GET than after other treatments (White et al, 2013).

Requests for data under the freedom of information act were rejected as vexatious

We have received numerous Freedom of Information Act requests over the course of many years. These even included a request to know how many Freedom of Information requests we had received. We have provided these data when we were able to (e.g. the 13% figure mentioned above came from our release of these data). However, the safeguarding of personal medical data was an undertaking enshrined in the consent procedure and is therefore ethically binding, so we cannot publicly release those data. It is important to remember that simple methods of anonymization do not always protect the identity of a person, as they may be recognized from personal and medical information. We have considered only two of these many Freedom of Information requests vexatious, although an Information Tribunal judge considered that an earlier request was also vexatious (General Regulation Chamber, 2013).

Subjective and objective outcomes

These issues were first raised seven years ago and have all been addressed before (White et al, 2008, White et al, 2011, White et al, 2013a, White et al, 2013b, Chalder et al, 2015a). We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health. We have also reported the results of a number of objective outcomes, including a walking test, a stepping test, employment status and financial benefits (White et al, 2011a, McCrone et al, 2012, Chalder et al, 2015). The distance participants could walk in six minutes was significantly improved following GET, compared to the other treatments. There were no significant differences in fitness, employment or benefits between treatments. We interpreted these data in the light of their context and validity. For instance, we did not use employment status as a measure of recovery or improvement, because patients may not have been in employment before falling ill, or they may have lost their job as a consequence of being ill (White et al, 2013b). Getting better and getting a job are not the same thing, and being in employment depends as much on the prevailing state of the local economy as on being fit for work.

There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in the consent information

No insurance company was involved in any aspect of the trial. There were some 19 investigators, three of whom have done consultancy work at various times for insurance companies. This was not related to the research and was listed as a potential conflict of interest in the relevant papers. The patient information sheet informed all potential participants as to which organizations had funded the research, which is consistent with ethical guidelines.


Castell BD et al, 2011. Cognitive Behavioral Therapy and Graded Exercise for Chronic Fatigue Syndrome: A Meta-Analysis. Clin Psychol Sci Pract 18: 311-324.

Chalder T et al, 2015. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 2: 141-152.

Chalder T et al, 2015a. Methods and outcome reporting in the PACE trial – Authors’ reply. Lancet Psychiatry 2: e10-e11.

Chambers D et al, 2006. Interventions for the treatment, management and rehabilitation of patients with chronic fatigue syndrome/myalgic encephalomyelitis: an updated systematic review. J R Soc Med 99: 506-520.

Edmonds M et al, 2004. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev 3: CD003200.

General Regulation Chamber (Information Rights) First Tier Tribunal, 2013. Mitchell versus Information Commissioner. EA 2013/0019.

Larun L et al, 2015. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev 2: CD003200.

Malouff JM et al, 2008. Efficacy of cognitive behavioral therapy for chronic fatigue syndrome: a meta-analysis. Clin Psychol Rev 28: 736-745.

Marques MM et al, 2015. Differential effects of behavioral interventions with a graded physical activity component in patients suffering from Chronic Fatigue (Syndrome): An updated systematic review and meta-analysis. Clin Psychol Rev 40: 123-137.

McCrone P et al, 2012. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost effectiveness analysis. PLoS ONE 7: e40808.

Price JR et al, 2008. Cognitive behaviour therapy for chronic fatigue syndrome in adults. Cochrane Database Syst Rev 3: CD001027.

Smith MB et al, 2015. Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop. Ann Intern Med 162: 841-850.

Walwyn R et al, 2013. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials 14: 386.

White PD et al, 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurol 7: 6.

White PD et al, 2008. Response to comments on “Protocol for the PACE trial”.

White PD et al, 2011. The PACE trial in chronic fatigue syndrome – Authors’ reply. Lancet 377: 1834-1835.

White PD et al, 2011a. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 377: 823-836.

White PD et al, 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med 43: 227-235.

White PD et al, 2013a. Chronic fatigue treatment trial: PACE trial authors’ reply to letter by Kindlon. BMJ 347: f5963.

White PD et al, 2013b. Response to correspondence concerning ‘Recovery from chronic fatigue syndrome after treatments in the PACE trial’. Psychol Med 43: 1791-1792.

White PD et al, 2015. The planning, implementation and publication of a complex intervention trial for chronic fatigue syndrome: the PACE trial. Psychiatric Bulletin 39: 24-27.

Whiting P et al, 2001. Interventions for the Treatment and Management of Chronic Fatigue Syndrome: A Systematic Review. JAMA 286: 1360-1368.

Comments on this entry are closed.

  • uab9876 30 October 2015, 5:32 am

    On their first defense, saying that to recover patients needed not to have met the initial criteria for CFS for the trial (i.e. the Oxford criteria): the recovery paper talks about using modified criteria which include thresholds for fatigue or physical function, as well as a different time period – so this may still be an issue.

    In addition they make no defense of the thresholds and their changes from recovery as defined in the protocol and don’t offer a sensitivity analysis. Where is the data?

  • uab9876 30 October 2015, 5:40 am

    On the changing of the scoring system for the CFQ. It was claimed that this increased sensitivity but no evidence was ever provided for this.

    What is interesting is that the two scoring systems do not maintain order. That is, using one scoring system patient A could be more fatigued than patient B, but the other way around using the other. Given this, I cannot see how both scales could be adequate interval-scale proxies for fatigue. Yet the statistics presented (mean and SD) suggest that both need to be. It is up to the authors to demonstrate that both scoring systems (since they use both) are interval scales, and hence to justify the techniques they use.

    Also the simple point that was made in the original article – where is the original data so that readers can see the impact of the change. Why is it suppressed?

  • uab9876 30 October 2015, 5:53 am

    In terms of subjective and objective outcomes: some of the trial interventions were intended to address the way patients interpret symptoms, so it would be somewhat shocking if questionnaire results were not changed. Hence the whole trial seems badly designed, with a self-fulfilling result. This point is not addressed here or anywhere else by the authors of PACE. They just say ‘We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health.’ But they provide no analysis of potential priming/framing or other effects of CBT and GET which could change scores.

    The PACE data could have been released to show if there were correlations between the limited objective outcomes. It should be said that when a patient asked for the 6 minute walking test results for those deemed to have recovered they called that request vexatious. (later changed when the patient appealed and hence not listed as one of the 2 requests they said were vexatious).

    Again they could help patients make better informed decisions by releasing such data. But perhaps the data doesn’t support their favored outcome?

  • Valentijn 30 October 2015, 5:55 am

    I still don’t see any justification for changing the SF36-PF threshold for “recovery” from 85 to 60 (scale runs from 0 to 100). Especially since patients scoring 65 were considered disabled enough to enter the trial, and a score of 60 is only normal if you’re a 75-80 year old woman.

    I don’t think any sane, honest person would describe that level of disability as being recovered from anything.

  • Valentijn 30 October 2015, 6:01 am

    The investigators keep assuming that subjective measurements are the most important to patients. I can assure you, they are not. We don’t want to be taught to say we are a bit less fatigued – we want to be able to become more active. We want to be less disabled and more functional. We want to be able to work again, or clean our own houses, or at least sit up all day.

    And we all know the real reasons that objective measurements (actometers) were dropped. The Wiborg review of three CBT/GET trials included actometer data which showed that CBT/GET do not result in any objective improvements in patient functionality, even when patients dutifully give the less fatigued questionnaire answers which the therapists have taught them to give.

    It’s also disingenuous of the investigators to protest here that objective measurements were actually used after all. None of those were part of the primary outcome measurements, and none showed any clinical improvement, or reflected any increased ability to work, get off benefits, go to school, or otherwise function more normally. If the PACE trial had been judged on those few objective measurements, it would have clearly been a failure (though undoubtedly still spun by its authors as a success).

  • Valentijn 30 October 2015, 6:08 am

    For clarity, regarding the FOI requests: for how many did the institution or the investigators release the requested data without being forced to by the ICO which handles such requests?

    Why have the investigators denied the request to provide the numerical data for a graph which they have published? The graph has been released to the public, why not the actual values of the few points on that graph? Is this supposed to be science or is it just a drawing activity for school children?

    How was it even remotely “vexatious” for a patient to request data regarding the timing of the sweeping changes made to the recovery criteria?

    And most importantly: why are the investigators so afraid to release the data as was promised in the original protocol?

  • uab9876 30 October 2015, 6:19 am

    There is a matter of trust here which is a key question for patients. Patients should have as much information as possible in order to be able to make informed decisions about treatments and doctors advising them equally need the information to help inform the choice.

    In the case of the PACE trial, information appears limited and cherry-picked. This is the impression left by things like the protocol changes and the dysfunctional ‘normal range’. It may be that the published data correctly represent the results, but how can patients know? Trust is quite rightly lost through the changes that happened and the lack of reasoning provided behind those changes.

    Those carrying out the PACE trial seem determined that patients should simply trust them. They do not provide data around protocol changes that would allow patients to make up their own minds. They do not provide well worked out arguments for protocol changes. Even here they have not responded to many of the points raised. So why would a patient trust the result of the PACE trial sufficiently to make a treatment decision? I could not (and I am not a patient).

    Trust is something that needs to be worked on and won, but it is easily lost. The attitude of the PACE authors in changing the protocol and hiding data means it has been completely lost. They could start to rectify this situation by releasing more data without spin, but they choose not to. In fact they are still spinning claims and demonstrating bad methodology in their follow-up paper, and their press conference on this was followed by the usual hate speech against ME patients in the papers. (People should read Coyne’s analysis of their recent paper.)

    They could have used this as an exercise in properly addressing concerns and issues and release data showing the effects of protocol changes and different thresholds for their normal range. Why did they not do this?

  • uab9876 30 October 2015, 6:27 am

    They got their stats wrong. There is no justification for that, so they keep quiet. But that is one piece that was checkable. Given that they got that piece wrong, how much else is wrong through bad assumptions?

    We need to see histograms of the outcome data to know if their analysis is correct. Are the outcome distributions bimodal, in which case quoting mean and SD isn’t very helpful? Or were they skewed? Were there outliers having too much influence on the results?

    The point is, if they cannot get a simple threshold calculation right, where their answer is obviously wrong, how can we trust anything else they have done?

    Just to add: the threshold is wrong because the data is skewed. The median SF-36 physical function score for the working-age population is 100 (that is, at least half the population scored 100) in the survey they used for their calculations. They claimed differently in their papers.

  • uab9876 30 October 2015, 6:36 am

    “All these changes were made before any outcome data were analyzed (i.e.
    they were pre-specified), and were all approved by the independent Trial
    Steering Committee and Data Monitoring and Ethics committee.”

    But it was a non-blinded trial, so the investigators will have had insight into the results; hence the changes were not good. They could have published the data for both. Also, we are not allowed to see the reasons given to the committees; I can only assume they were discussed in much greater detail than presented here and in other public sources. Otherwise, approval of changes to a non-blinded trial seems shocking.

    It would be good to hear from the chair of the trial steering committee how she felt about the changes particularly to those for recovery.

  • Sasha 30 October 2015, 7:21 am

    You say, incorrectly:

    “In order to be considered recovered, patients also [in addition to being in the bizarre “normal range”] had to:

    *Not meet case criteria for CFS
    *Not meet eligibility criteria for either of the primary outcome measures for entry into the trial
    *Rate their overall health (not just CFS) as “much” or “very much” better.

    It would therefore be impossible to be recovered and eligible for trial entry (White et al, 2013).”

    and you also say that you didn’t change all four of your planned “recovery” thresholds. Both this statement and your description here of the recovery criteria are wrong according to your own paper.

    The key issue is that you added – after the protocol and statistical analysis plan were written, apparently – fatigue/physical function trial-eligibility criteria to the case criteria (your paper, p. 2229): they don’t stand alone. So to be diagnosed with CFS at the end of the trial, a patient had to fit the case criteria AND have a fatigue score better than trial entry AND have a physical function score better than trial entry.

    All a patient would have to do to fail these supercharged criteria would be to scrape over just one of those trial entry thresholds, even if they still fell below the other.

    And you used a post-hoc (according to you yourselves), nonsensical “trial recovery” analysis, in which patients could still be sick according to the CDC and London criteria, but classed as recovered as long as they no longer fit the Oxford. That’s the analysis that you broadcast in your abstract. I really can’t understand your claim not to have altered this fourth recovery criterion.

    Here is an excellent summary of the issues (I have checked it against both your “recovery” paper and the study protocol and analysis plan, and it is accurate).

  • A.M. 30 October 2015, 7:34 am

    White et al. concede that 13% of participants entered the trial with scores within the “normal range” for either fatigue or physical function, but state that this is not an issue because there are other recovery criteria. These other criteria however are also too lax to guarantee a full recovery from CFS even when combined together. While it is true that it was impossible to be classified as recovered and remain eligible for trial entry, it is still doubtful whether anyone fully recovered.

    The practical problem with a “normal range” that overlaps with trial entry criteria for severe disabling fatigue is self-explanatory. The normal range was a product of unreliable statistical methods applied to non-representative population samples. Tuller’s articles already describe how simple use of the mean and standard deviation on a non-normal distribution leads to misleading results. Neither of the population samples used for defining the normal range followed a normal distribution. The thresholds for the normal range are unvalidated, whereas the protocol-specified threshold for normal fatigue was validated. [1] Furthermore, the population samples used included many people who had fatiguing illnesses or chronic disabilities, and/or were elderly. Two peer-reviewed commentaries describe how the stated justification for changing the threshold for normal physical function was erroneous. [2,3]

    White et al. stated in the recovery paper that the CGI requirement was relaxed because it represented the “process of recovery”. However this is not the same as a full recovery. A CGI score of 2 or “much better” (with 3 or “a little better” regarded as a non-improvement), may reflect a modest improvement unrelated to fatigue or physical function rather than a full recovery. As described in a systematic review on recovery from CFS, global impression scales do not guarantee a full recovery. [4]

    A FOI request confirmed that failing either of the two entry criteria for severe disabling fatigue also meant failing the Oxford criteria too. [5] This means that some participants who still had CFS and still otherwise met Oxford criteria as typically used in the clinic could improve a single increment or so on either scale for fatigue or physical function (e.g. 6 to 5 out of 11 for CFQ, or 65 to 70 out of 100 for SF-36/PF, both still abnormal scores according to the published protocol [1]) and yet be classified as no longer meeting Oxford criteria for the purposes of recovery in the PACE trial.

    Patients who only fail one of several entry criteria after a modest improvement are not fully recovered. This would be less problematic if the normal range were stringent and did not overlap with entry criteria. During the recruitment process, some candidates met Oxford criteria but were excluded for not meeting the entry criteria for fatigue or physical function, which further illustrates the above. There would probably be large and highly statistically significant differences when comparing the so-called “recovered” participants to a healthy working-age population sample.

    White et al. state that the recovery analysis was “pre-specified” because it was changed before the analysis of recovery. That may be technically true, but the timing of the changes to the protocol suggests that the revision of the recovery criteria was conducted well after the authors were already unblinded to trial data and aware of the main results, including the normal range, which was apparently first introduced during the peer review stage of the Lancet publication in early 2011. [6] White et al. did not mention, either in the recovery paper itself or in their recent response to Tuller, anything about these changes being approved by any oversight body. The statement of approved changes appears to apply to the changes to the primary outcomes, whereas the revised recovery criteria are not mentioned in the final statistical analysis plan. [7]

    After criticism of the revised thresholds, White et al. maintain that “the revised thresholds better reflected recovery”. We want to know: how does a normal range which overlaps with entry criteria for severe disabling fatigue better reflect full recovery? How does the partial process of recovery better reflect full recovery? How does being easily disqualified from meeting Oxford criteria as a result of a modest improvement, despite remaining ill and falling far short of a full recovery, better reflect full recovery?

    None of the lax recovery criteria, alone or combined, provide convincing evidence of a full recovery from CFS. White et al. assert that CBT and GET demonstrated an advantage no matter how recovery was defined. Their assertion has not been tested with the protocol-specified recovery rates. The changes to the recovery criteria were substantial and/or numerous; this was not a minor adjustment, and the protocol-specified recovery rates are likely to be much lower than those published so far, possibly low to middle single figures, not the 22% asserted. Patients deserve credible estimates of full recovery, hence calls for the protocol-specified outcomes.

    In the recovery paper, White et al. (2013) inaccurately describe their recovery criteria as comprehensive and conservative (robust), and incorrectly claim that their normal range thresholds are more stringent than the previous work of Knoop et al., then provided another factual error to justify that statement. [8,9] These significant errors have still not been acknowledged or corrected.


  • Sasha 30 October 2015, 7:41 am

    It is not clear to me that you understand that your “normal range” analysis is an utter nonsense and that you yourselves have made strong claims based upon it that need to be retracted.

    Your “normal range” for physical function starts at 60/100 on the SF-36 scale. Patients with Class II congestive heart failure have average scores only three points below that.

    Your “normal range” for fatigue overlaps your trial entry threshold, which you said in your papers represents fatigue that is severe and disabling.

    A straight question, now: do you consider that these “normal ranges” should play any part as thresholds for improvement, let alone recovery?

    A “yes” or “no” to that simple question, please.

    You used those thresholds in The Lancet to say (p. 834):

    “No more than 30% of participants were within normal ranges for both outcomes and only 41% rated themselves as much better or very much better in their overall health. We suggest that these findings show that either CBT or GET, when added to SMC, is an effective treatment for chronic fatigue syndrome, and that the size of this effect is moderate.​”

    Will you now retract that statement?

    Will you retract the results based on them in your “recovery” paper in Psychological Medicine?

    Will you publish the recovery analyses according to the original, more sensible criteria published in the protocol?

    If not, why not?

    So far, over 4,000 people – as of Wednesday evening – want you to.

  • Sasha 30 October 2015, 8:00 am

    Patients, scientists and doctors have a right to expect that claims of recovery and improvement due to treatments that are published in medical journals are accurate.

    The bizarre and nonsensical “normal range” analyses used in the PACE trial make a mockery of that right.

    A petition calling for the retraction of claims of improvement and recovery based on these analyses that were made in The Lancet and Psychological Medicine has been started and has already gained 4,000 signatures in its first 36 hours. It also calls for publication of the recovery results according to the original analyses specified in the study protocol.

    Please sign the petition.

    Supplementary pages explain briefly and clearly the background to the petition and the problems with the PACE study – which I believe readers of David Tuller’s posts of last week, and of the PACE authors’ reply today, will find helpful and interesting.

    The petition has been launched by the respected and well-networked advocacy platform #MEAction.

  • andrewkewley 30 October 2015, 9:12 am

    “We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health.”

    This answer missed the point four years ago and it seems the authors still haven’t bothered to understand why.

    Self-ratings are not adequately controlled in unblinded studies, and changes in questionnaire-answering behaviour do not reflect the overall health of patients in the short term.

    In the long term (after several years), questionnaire results do regress to the mean – but the follow-up results show no difference between any of the groups: those who received CBT/GET during or after the trial did not improve any more than the other patients.

    At the end of the day, objective measures such as actigraphy better reflect one’s state of health – someone who is more well will be more active. Unless more objective measures of functional disability are used, the efficacy of unblinded trials like this one will always be in doubt.

  • andrewkewley 30 October 2015, 9:36 am

    “The other change was to drop the originally chosen composite measures (the number of patients who either exceeded a threshold score or who changed by more than 50 per cent). After careful consideration, we decided this composite method would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms.”

    This looks like an admission that the original measures would have been much less likely to show a difference between treatment arms. If the results were strong, there would have been no need to change the measures.

    Self-reports are easily biased, which is why large changes (or scores at the level of population means) are needed to have any confidence that there were underlying health changes.

  • Valentijn 30 October 2015, 10:25 am

    Not to mention the exquisite irony of using subjective outcome measures in a trial where the treatments are based on the presumption that the patients’ subjective experience of their illness cannot be trusted.

  • Mary Dimmock 30 October 2015, 10:39 am

    This response did not address Bruce Levin’s concern about methodological issues in how patients were identified as meeting the ME or CDC CFS definitions. This is a critical issue that deserves a response, as the investigators have claimed that patients meeting the ME definition or the CDC CFS definition had a similar response to treatment, leading to the assumption that these findings apply to ME patients at large – something that Levin said was not appropriate.

    In addition to Levin’s concern, it’s important to note that in applying the CDC CFS 2003 definition, the PACE team only required that the additional symptoms be present for one week, not the 6 months required by the CDC CFS definition. The 2013 PACE Recovery paper acknowledged that the resultant patient characterization “may have been inaccurate because we only examined for accompanying symptoms in the previous week, not the previous 6 months.” As a result, it’s impossible to say how many patients within the Oxford cohort truly met these criteria. It is also unclear how or whether the use of a modified version of the London criteria affected the identification of ME patients in a similar way.

    Regarding the authors’ pointing to other evidence reviews to bolster the claim of CBT and GET efficacy – it’s important to note that these evidence reviews have either used inclusion criteria that required nothing more than medically unexplained debilitating fatigue, and/or have conflated a number of CFS and ME definitions as representing the same disease, regardless of substantial differences in inclusion and exclusion criteria across these definitions. The result, as seen in the 2014 Smith review, is that CBT and GET treatment recommendations are being made for all CFS and ME patients based largely on Oxford studies.

    But as the 2015 report of the Institute of Medicine definitively stated, “a diagnosis of CFS is not equivalent to a diagnosis of ME.” This suggests a fundamental flaw in the conduct of these evidence reviews and a question of medical ethicality in applying treatment recommendations based on studies in CFS patients to patients with ME.

  • A.M. 30 October 2015, 11:16 am

    White et al. state that they refuse to publicly release any raw data under the FOIA because they are ethically safeguarding personal medical data, and that simple methods of anonymisation do not always protect the identity of a person.

    Sufficiently anonymised or de-identified individual data from clinical trials are not regarded as sensitive personal data under the FOIA or the DPA. The NHS and GMC guidelines on confidentiality are also only concerned with identifiable information. Re-identification typically requires additional data not available to the public, and the examples where simple anonymisation has failed have involved personal identifiers such as age, gender, and location matched across multiple databases, or identifiable information accidentally attached to medical data before disclosure.

    The public interest is also served when information is released that furthers the understanding of and participation in the debate of important issues, and helps to correct misleading statements made by public authorities or their employees.

    There are major efforts in the research community for increased transparency and open data in clinical trials, although there is debate over the degree of openness. It is important to publish and follow trial protocols or provide adequate justification for changes to pre-specified endpoints. Unfortunately in the PACE trial, questionable post-hoc deviations from the published protocol resulted in flaws and errors. As QMUL/PACE refuse to acknowledge or correct these problems, they have failed their responsibility to publish complete, accurate, and meaningful reports on all results.

    In March 2014 I submitted a request to QMUL for a selection of raw data from the PACE Trial without personal identifiers. I did so because of the questionable deviations from the trial protocol, misleading estimates of full recovery from CFS, and the refusal to release protocol-specified outcomes as expected by BMC Neurology which published the protocol. The FOIA is a statutory obligation and removes QMUL from the decision process. QMUL rejected my request, so on 15 December 2014 I initiated a complaint to the ICO, who were ultimately not convinced by QMUL’s arguments; on 27 October 2015 the ICO instructed QMUL to disclose the data to me within 35 working days. The full ICO decision notice (FS50565190) should soon be available on their website.

    On FOI requests in general, I was surprised to read that White et al. “only considered two of these many Freedom of Information requests as vexatious”. This contradicts previous impressions from White about being swamped with such requests. I will comment later about this under Tuller’s response.

  • A.M. 30 October 2015, 11:23 am

    Minor correction: 35 working days should be 35 calendar days.

  • Gweebo 30 October 2015, 12:30 pm

    White et al. said:

    “We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health.”

    Yet PACE is based on the premise that the disease is perpetuated by patients’ assumed wrong cognitions and assumed fear of exercise.

    Perhaps Professor White could enlighten us as to how they surmised the point at which patients’ perceptions become credible and trustworthy, to the extent of being viable as a primary outcome?

  • Margaret Smith 30 October 2015, 1:30 pm

    Hear hear, well said Valentijn. Speaking as a lowly patient, even I could figure out that using subjective measures is not only a waste of time and hardly scientific, but also not what is required. Frankly, we may as well just take a straw poll amongst ourselves and it would be about as valid from a measurable scientific point of view.

    As for patients being the best people to determine their own state of health: does that include patients like a friend who kept insisting his cough wasn’t anything and that he was fine, and turned out to have a chest infection? Or a friend’s dad who kept insisting he was fine, collapsed a few hours later, and was found to have pneumonia?

    If you want to run a trial and claim it has significant scientific evidence of something, then it stands to reason you have to use accurate scientific measures.

  • Matt 30 October 2015, 2:18 pm

    How do they explain the post-exercise cytokine storm by Judith Light? The lactic acid in the brain? An immune marker in the spinal fluid? Do those go away with CBT/GET?

  • Matt 30 October 2015, 2:20 pm

    Oops, not Judith Light 😉 Light and Bateman et al.

  • 1SarahL1 30 October 2015, 3:35 pm

    Isn’t it strange how the PACE trial authors claim to have successfully treated ME sufferers with CBT and GET, yet they suggest that wearing an ankle bracelet for 7 days would be simply too much for us: “…we decided that a test that required participants to wear an actometer around their ankle for a week was too great a burden at the end of the trial…” PD White, MC Sharpe, T Chalder, JC DeCesare, R Walwyn, for the PACE trial management group. Response to comments on “Protocol for the PACE trial”. BMC Neurol. 2007, 7:6. doi:10.1186/1471-2377-7-6.

  • Atos Miracles 30 October 2015, 3:42 pm

    PACE trial participant Paul Everett (who is personally known to me) publicly spoke yesterday about what happened to him. Participants’ OWN WORDS say more than the summary of the parts – which has been shown to be very dubious. Read Angela Kennedy’s previous responses to the PACE trial for clarity too.

    Paul’s experience in the trial: I was on the PACE trial after having ME for 20+ years and I was desperate to finally be given some hope!! I was told CBT would be best for myself but was not available for a year – BUT if you help us with our PACE trial we will dangle a carrot of hope in front of you and you may be randomly selected to receive CBT… Happy days… well, not so happy days, as my selection was nothing, just monitoring once in a while. I had to travel home on my own in floods of tears and massive confusion.

    So I was now on the PACE trial… so much paperwork to do… so many interviews where I poured my heart out and highlighted drugs that may help and what was good or bad for our condition (after all, we are the experts) – it all seemed to fall on deaf ears as there was already a bias to it being a mental disorder, as this was my Professor’s field. I invited my Professor to come and spend a day with me so he could really understand the struggle we live with in our environment – he declined my kind invitation and I realised from then on that no one was actually listening to me – I could go on but I think this was a common issue with patients on the trial. Anyhow, the whole process set me back years. I did get CBT after the year as promised, and my adviser was fantastic and really caring but had little experience with ME – I remember her saying it would be a good idea to take my computer out of my bedroom – I agreed and was looking forward to doing just that on my long train journey home – that plan of action must have been spoken about 10 years ago and guess what, my computer is still in the flipping bedroom – help!!! Anyhow, I send love and light out to all my fellow sufferers and survivors, and there will be real help for us one day. Keep positive, keep laughing at the crazy pain and comatose state we find ourselves in, but please don’t go near the gym xxxxx

  • leelaplay 30 October 2015, 5:41 pm

    I hope the investigators will also reply to James C. Coyne’s analysis of the multitude of errors and problems with PACE. And also accept his offer to debate.

    Uninterpretable: Fatal flaws in PACE Chronic Fatigue Syndrome follow-up study

  • Valentijn 31 October 2015, 4:57 am

    Their research may be subpar, but they really do excel at coming up with spin and novel excuses.

  • clouty 31 October 2015, 12:18 pm

    Another participant’s experience in their own words:

    “I took part in this study, and was randomised to the GET group, and I’d be very sceptical about its results. My initial blood tests showed some signs of infection and inflammation so I was sent for another set which apparently didn’t, so I could be accepted into the trial.* The assessment/criteria forms which had to be filled out before and during the trial did not mention symptoms after exertion or delayed-onset fatigue; there was very little attention paid to pain, and cognitive/mental issues were very blurred.

    At the start of the trial, I had to wear an accelerometer thing for a week, presumably to measure activity levels. But at the end of the trial, this wasn’t repeated. The fitness tests measured the number of steps I could do in a set amount of time, but paid no attention to the fact that I usually couldn’t walk for 2 days after these assessments.

    The ‘handbook’ I was given contained an incredibly flawed model, which GET is based on, which basically goes ‘felt a bit ill – led to resting too much – led to deconditioning – led to the ME/CFS symptoms’. This completely ignores the fact that the vast majority of people don’t rest early on and carry on pushing themselves despite severe pain and […]. I would suggest that the criteria were so vague and the assessment so poor that a majority of the people who recovered using GET never had ME/CFS in the first place.”

    permalink (since the Guardian changed its website, you now need to scroll down just over halfway).

    *The Oxford criteria, used to filter applicants, do not accept people with neurological or immune system abnormalities. This on its own devalues any outcome.

  • Ian Barnes 31 October 2015, 9:29 pm

    I think I finally understand the mind of the PACE investigators. It goes something like this: […]

  • Helle Rasmussen 1 November 2015, 5:06 am

    Already in 2010, Prof. emeritus Malcolm Hooper made a 442 page report, detailing the failings of the Medical Research Council and specifically the PACE trials:

    “Magical Medicine. How to make a disease disappear. Background to, consideration of, and quotations from the manuals for the Medical Research Council’s PACE Trial of behavioural interventions for Chronic Fatigue Syndrome / Myalgic Encephalomyelitis together with evidence that such interventions are unlikely to be effective and may even be contraindicated.”

    Well worth a read!

  • Valentijn 2 November 2015, 10:23 am

    That may be the most concise summary of PACE I’ve seen so far 😀

  • pilsbury 3 November 2015, 5:49 am

    I have a good friend who took part in the PACE trial – she has been diagnosed with ME/CFS but also has PBC, which has many similar symptoms – and she was accepted on the trial despite the potential for this to muddy the results. She also met two others taking part, one of whom had depression and another who had cycled to the trial in his lunch break from work!! I do not know of anyone with ME or CFS who could do that without severe relapse afterwards! It is clear to her and to me that the trial was very flawed.

  • Mary Posa 5 November 2015, 7:18 am

    Unfortunately, due to their length, I’m not able to read David Tuller’s profound posts and the study authors’ answer in detail. Scanning them, however, I found no comment on what is, in my opinion, the most obvious error in the PACE trial: the use of the Oxford criteria. If David Tuller included this point in his argument, then it should be made much stronger! In German media the PACE trial is also an issue – there, too, without reference to the diagnostic criteria used for the patients included in the study. Why is this point so broadly ignored?

  • GQ 17 November 2015, 12:40 pm

    The PACE trial Principal Investigators now respond to David Tuller:

    “No insurance company was involved in any aspect of the trial. There were some 19 investigators, three of whom have done consultancy work at various times for insurance companies. This was not related to the research and was listed as a potential conflict of interest in the relevant papers. The patient information sheet informed all potential participants as to which organizations had funded the research, which is consistent with ethical guidelines.”

    The PACE trial researchers here obfuscate and obscure the facts.

    1. There were 3 Principal Investigators. The other 16 referred to were not Principal Investigators.

    2. The Principal Investigators were ultimately responsible for designing the PACE trial and obtaining funding for the PACE trial.

    3. The 3 Principal Investigators of the PACE trial were (a) Peter White, (b) Michael Sharpe and (c) Trudie Chalder.

    4. The 3 who had conflicts of interest with insurers and reinsurers, and therefore a vested interest, were (a) Peter White, (b) Michael Sharpe and (c) Trudie Chalder.

    5. Therefore 3 out of 3, or 100%, of the Principal Investigators had conflicts of interest with the insurance and reinsurance industry, which had a vested interest in the PACE trial.

  • GQ 17 November 2015, 1:10 pm

    The link between the insurers and reinsurers is being significantly downplayed by the Principal Investigators of the PACE trial.

    The Principal Investigators of the PACE trial are Peter White, Michael Sharpe and Trudie Chalder.

    Peter White is a medical officer of Swiss Re (a reinsurer) and other insurers. Michael Sharpe is a consultant with Aegon (an insurance company) and associated with UNUM. Trudie Chalder has not declared which insurance companies, or how many, she has worked for.

    It is important to understand the reinsurance business and how it affects insurance companies and their policyholders. Insurance companies reinsure their liability to claims with reinsurance companies. There are a very large number of insurance companies but only a handful of reinsurance companies in the world. Swiss Re is the second largest reinsurer in the world and one of the largest reinsurers of disability policies. Insurance companies can cede a majority or even up to 90% of the liability of the claims to reinsurers. Therefore the reinsurance company may carry a higher liability to claims than the insurance company itself that the policyholder is insured with. Policyholders with an insurer will be unaware of the reinsurance arrangement that their insurance company has with the reinsurer that will be liable for the claim. Policyholders will also be unaware of the identity of that reinsurer. In the event of a claim, the reinsurance company will be involved with the insurance company in managing that claim.
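
To make the cession arithmetic above concrete, here is a minimal sketch with hypothetical figures. The 90% cession fraction comes from the comment; the claim amount is invented purely for illustration.

```python
# Hypothetical split of liability for a single disability claim when an
# insurer cedes 90% of the risk to a reinsurer. All figures are
# illustrative and not taken from any real policy or treaty.

def split_liability(claim_amount: float, ceded_fraction: float) -> tuple[float, float]:
    """Return (insurer_share, reinsurer_share) of a claim."""
    reinsurer_share = claim_amount * ceded_fraction
    insurer_share = claim_amount - reinsurer_share
    return insurer_share, reinsurer_share

insurer, reinsurer = split_liability(claim_amount=100_000.0, ceded_fraction=0.90)
print(insurer, reinsurer)  # 10000.0 90000.0
```

With 90% ceded, the reinsurer carries nine times the insurer’s exposure on this claim, which is the point the comment makes about where the larger financial interest in a claim decision may sit.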

    It is well documented that policyholders and people with ME/CFS have disproportionately had problems with making claims to insurers, and have been wrongly denied, over the last two decades. In fact, in the two decades since a small number of psychiatrists became involved with ME/CFS, insurers have targeted ME/CFS claims for denial. Most disability insurers will have reinsured their liability with a reinsurer such as Swiss Re.

    In the case of ME/CFS, claims are denied:
    (a) on the false basis that the illness is psychiatric;
    (b) through the use of exclusion clauses for ME/CFS;
    (c) by re-diagnosing the ME/CFS as a psychiatric disorder;
    (d) on the basis that ME/CFS can be simply cured by the supposedly effective treatments of CBT and GET (the PACE trial treatments), with claims not accepted until these treatments have been undertaken – and if such treatments do not work in a patient’s particular situation, disability benefits are denied on the basis that the patient lacks motivation to get better or is malingering.

    A conflict of interest with a reinsurer is therefore one of the most serious conflicts of interest that a medical researcher could have.

    None of the PACE trial investigators has yet specifically detailed the amount of financial remuneration they received from their respective insurers and reinsurers during the period of the PACE trial.