TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study (final installment)

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley. 

A few years ago, Dr. Racaniello let me hijack this space for a long piece about the CDC’s persistent incompetence in its efforts to address the devastating illness the agency itself had misnamed “chronic fatigue syndrome.” Now I’m back with an even longer piece about the U.K.’s controversial and highly influential PACE trial. The $8 million study, funded by British government agencies, purportedly proved that patients could “recover” from the illness through treatment with one of two rehabilitative, non-pharmacological interventions: graded exercise therapy, involving a gradual increase in activity, and a specialized form of cognitive behavior therapy. The main authors, a well-established group of British mental health professionals, published their first results in The Lancet in 2011, with additional results in subsequent papers.

Much of what I report here will not be news to the patient and advocacy communities, which have produced a voluminous online archive of critical commentary on the PACE trial. I could not have written this piece without the benefit of that research and the help of a few statistics-savvy sources who talked me through their complicated findings. I am also indebted to colleagues and friends in both public health and journalism, who provided valuable suggestions and advice on earlier drafts. Today’s Virology Blog installment is the final quarter; the earlier installments were published previously. I was originally working on this piece with Retraction Watch, but we could not ultimately agree on the direction and approach. 

After this article was posted, the PACE investigators replied, and in turn I responded to their criticisms. All the articles can be found at the ME/CFS page.

SUMMARY

This examination of the PACE trial of chronic fatigue syndrome identified several major flaws:

*The study included a bizarre paradox: participants’ baseline scores for the two primary outcomes of physical function and fatigue could qualify them simultaneously as disabled enough to get into the trial but already “recovered” on those indicators–even before any treatment. In fact, 13 percent of the study sample was already “recovered” on one of these two measures at the start of the study.

*In the middle of the study, the PACE team published a newsletter for participants that included glowing testimonials from earlier trial subjects about how much the “therapy” and “treatment” helped them. The newsletter also included an article informing participants that the two interventions pioneered by the investigators and being tested for efficacy in the trial, graded exercise therapy and cognitive behavior therapy, had been recommended as treatments by a U.K. government committee “based on the best available evidence.” The newsletter article did not mention that a key PACE investigator was also serving on the U.K. government committee that endorsed the PACE therapies.

*The PACE team changed all the methods outlined in its protocol for assessing the primary outcomes of physical function and fatigue, but did not take necessary steps, such as conducting sensitivity analyses, to demonstrate that the revised methods and findings were robust. The researchers also relaxed all four of the criteria outlined in the protocol for defining “recovery.” They have rejected as “vexatious” patients’ requests for the findings as originally promised in the protocol.

*The PACE claims of successful treatment and “recovery” were based solely on subjective outcomes. All the objective measures from the trial—a walking test, a step test, and data on employment and the receipt of financial benefits—failed to provide any evidence to support such claims. Afterwards, the PACE authors dismissed their own main objective measures as non-objective, irrelevant, or unreliable.

*In seeking informed consent, the PACE authors violated their own protocol, which included an explicit commitment to tell prospective participants about any possible conflicts of interest. The main investigators have had longstanding financial and consulting ties with disability insurance companies, having advised them for years that cognitive behavior therapy and graded exercise therapy could get claimants off benefits and back to work. Yet prospective participants were not told about any insurance industry links and the information was not included on consent forms. The authors did include the information in the “conflicts of interest” sections of the published papers.

Top researchers who have reviewed the study say it is fraught with indefensible methodological problems. Here is a sampling of their comments:

Dr. Bruce Levin, Columbia University: “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism.”

Dr. Ronald Davis, Stanford University: “I’m shocked that the Lancet published it…The PACE study has so many flaws and there are so many questions you’d want to ask about it that I don’t understand how it got through any kind of peer review.”

Dr. Arthur Reingold, University of California, Berkeley: “Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Dr. Jonathan Edwards, University College London: “It’s a mass of un-interpretability to me…All the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible.”

Dr. Leonard Jason, DePaul University: “The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria.”

************************************************************************

PART FOUR:

The Publication Aftermath

Publication of the paper triggered what The Lancet described in an editorial as “an outpouring of consternation and condemnation from individuals or groups outside our usual reach.” Patients expressed frustration and dismay that once again they were being told to exercise and seek psychotherapy. They were angry as well that the paper ignored the substantial evidence pointing to patients’ underlying biological abnormalities.

Even Action For ME, the organization that developed the adaptive pacing therapy with the PACE investigators, declared in a statement that it was “surprised and disappointed” at “the exaggerated claims” being made about the rehabilitative therapies. And the findings that the treatments did not cause relapses, noted Peter Spencer, Action For ME’s chief executive officer, in the statement, “contradict the considerable evidence of our own surveys and those of other patient groups.”

Many believed the use of the broad Oxford criteria helped explain some of the reported benefits and lack of adverse effects. Although people with psychosis, bipolar disorder, substance “misuse,” organic brain disorder, or an eating disorder were screened out of the PACE sample, 47 percent of the participants were nonetheless diagnosed with “mood and anxiety disorders,” including depression. As DePaul psychologist Leonard Jason had noted, cognitive and behavioral interventions have proven successful with people suffering from primary depression; by the same token, increased activity was unlikely to harm such participants if they did not also experience the core ME/CFS symptom of post-exertional malaise.

Others, like Tom Kindlon, speculated that many of the patients in the two rehabilitative arms, even if they had reported subjective improvements, might not have significantly increased their levels of exertion. To bolster this argument, he noted the poor results from the six-minute walking test, which suggested little or no improvement in physical functioning.

“If participants did not follow the directives and did not gradually increase their total activity levels, they might not suffer the relapses and flare-ups that patients sometimes report with these approaches,” said Kindlon.

During an Australian radio interview, Lancet editor Richard Horton denounced what he called the “orchestrated response” from patients, based on “the flimsiest and most unfair allegations,” seeking to undermine the credibility of the research and the researchers. “One sees a fairly small, but highly organized, very vocal and very damaging group of individuals who have, I would say, actually hijacked this agenda and distorted the debate so that it actually harms the overwhelming majority of patients,” he said.

In fact, he added, “what the investigators did scrupulously was to look at chronic fatigue syndrome from an utterly impartial perspective.”

In explaining The Lancet’s decision to publish the results, Horton told the interviewer that the paper had undergone “endless rounds of peer review.” Yet the ScienceDirect database version of the article indicated that The Lancet had “fast-tracked” it to publication. According to current Lancet policy, a standard fast-tracked article is published within four weeks of receipt of the manuscript.

Michael Sharpe, one of the lead investigators, also participated in the Australian radio interview. In response to a question from the host, he acknowledged that only one in seven participants received a “clinically important treatment benefit” from the rehabilitative therapies of graded exercise therapy and cognitive behavior therapy—a key data point not mentioned in the Lancet paper.

“What this trial isn’t able to answer is how much better are these treatments than really not having very much treatment at all,” Sharpe told the radio host in what might have been an unguarded moment, given that the U.K. government had spent five million pounds on the PACE study to find out the answer. Sharpe’s statement also appeared to contradict the effusive “recovery” and “back-to-normal” news stories that had greeted the reported findings.

***

In correspondence published three months after the Lancet paper, the PACE authors gave no ground. In response to complaints about changes from the protocol, they wrote that the mid-trial revisions “were made to improve either recruitment or interpretability” and “were approved by the Trial Steering Committee, were fully reported in our paper, and were made before examining outcome data to avoid outcome reporting bias.” They did not mention whether, since it was an unblinded trial, they already had a general sense of outcome trends even before examining the actual outcome data. And they did not explain why they did not conduct sensitivity analyses to measure the impact of the protocol changes.

They defended their post-hoc “normal ranges” for fatigue and physical function as having been calculated through the “conventional” statistical formula of taking the mean plus/minus one standard deviation. As in the Lancet paper itself, however, they did not mention or explain the unusual overlaps between the entry criteria for disability and the outcome criteria for being within the “normal range.” And they did not explain why they used this “conventional” method for determining normal ranges when their two population-based data sources did not have normal distributions, a problem White himself had acknowledged in his 2007 study.
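The statistical objection can be made concrete with a short simulation. The numbers below are invented for illustration (they are not the population data the PACE authors drew on); the sketch simply shows how, on a left-skewed distribution such as SF-36 physical function scores in the general population, the conventional mean-minus-one-SD formula lands well below the percentile it is supposed to approximate, widening the “normal range”:

```python
import random
import statistics

random.seed(0)

# Illustrative only: a synthetic, left-skewed sample standing in for a
# general-population SF-36 physical function distribution (NOT the actual
# data source used by the PACE authors). Most healthy adults score near
# the 100-point ceiling, with a minority trailing off to low scores.
sample = []
for _ in range(10_000):
    if random.random() < 0.85:
        sample.append(random.uniform(85, 100))  # healthy majority near ceiling
    else:
        sample.append(random.uniform(10, 85))   # long tail of lower scores

mean = statistics.mean(sample)
sd = statistics.pstdev(sample)

# The "conventional" formula assumes a normal distribution, where
# mean - 1 SD marks the 15.9th percentile.
conventional_threshold = mean - sd

# A distribution-free check: the actual 15.9th percentile of the sample.
empirical_threshold = sorted(sample)[int(0.159 * len(sample))]

print(f"mean - 1 SD threshold:    {conventional_threshold:.1f}")
print(f"actual 15.9th percentile: {empirical_threshold:.1f}")
# On skewed data the long tail inflates the SD, dragging mean - 1 SD far
# below the true percentile and making the "normal range" easier to enter.
```

Under normality, mean minus one SD and the 15.9th percentile coincide; the gap between them here is produced entirely by the skew, which is why critics argued the formula was inappropriate for these data.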

The authors clarified that the Lancet paper had not discussed “recovery” at all; they promised to address that issue in a future publication. But they did not explain why Chalder, at the press conference, had declared that patients got “back to normal.”

They also did not explain why they had not objected to the claim in the accompanying commentary, written by their colleagues and discussed with them pre-publication, that 30 percent of participants in the rehabilitative arms had achieved “recovery” based on a “strict criterion” —especially since that “strict criterion” allowed participants to get worse and still be “recovered.” Finally, they did not explain why, if the paper was not about “recovery,” they had not issued public statements to correct the apparently inaccurate news coverage that had reported how study participants in the graded exercise therapy and cognitive behavior therapy arms had “recovered” and gotten “back to normal.”

The authors acknowledged one error. They had described their source for the “normal range” for physical function as a “working-age” population rather than what it actually was–an “adult” population. (Unlike a “working-age” population, an “adult” population includes elderly people and is therefore less healthy. Had the PACE participants’ scores on the SF-36 physical function scale actually been compared to the SF-36 responses of the working-age subset of the adult population used as the source for the “normal range,” the percentages achieving the “normal range” threshold of this healthier group would have been even lower than the reported results.)

Yet The Lancet did not append a correction to the article itself, leaving readers completely unaware that it contained—and still contains–a mistake that involved a primary outcome and made the findings appear better than they actually were. (Lancet policy calls for correcting “any substantial error” and “any numerical error in the results, or any factual error in interpretation of results.”)

***

A 2012 paper in PLoS One, on financial aspects of the illness, included outcomes for some additional objective measures. Instead of a decrease in financial benefits received by those in the rehabilitative therapy arms, as would be expected if disabled people improved enough to increase their ability to work, the paper reported a modest average increase in the receipt of benefits across all the arms of the study. There were also no differences among the groups in days lost from work.

The investigators did not include the promised information on wages. They also had still not published the results of the self-paced step-test, described in the protocol as a measure of fitness.

In another finding, the PLoS One paper argued that the graded exercise and cognitive behavior therapies were the most cost-effective treatments from a societal perspective. In reaching this conclusion, the investigators valued so-called “informal” care—unpaid care provided by family and friends–at the replacement cost of a homecare worker. The PACE statistical analysis plan (approved in 2010 but not published until 2013) had included two additional, lower-cost assumptions. The first valued informal care at minimum wage, the second at zero compensation.

The PLoS One paper itself did not provide these additional findings, noting only that “sensitivity analyses revealed that the results were robust for alternative assumptions.”

Commenters on the PLoS One website, including Tom Kindlon, challenged the claim that the findings would be “robust” under the alternative assumptions for informal care. In fact, they pointed out, the lower-cost conditions would reduce or fully eliminate the reported societal cost-benefit advantages of the cognitive behavior and graded exercise therapies.
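The arithmetic behind that objection is straightforward. The sketch below uses invented figures (not the trial’s actual cost data) to show how a societal cost comparison that favors a therapy under the homecare-replacement valuation can narrow, and then reverse, as the unit value assigned to informal care falls:

```python
# Hypothetical illustration (invented numbers, not PACE data) of why the
# unit value chosen for informal care can flip a societal cost comparison.

def societal_cost(health_care_cost, informal_care_hours, hourly_value):
    """Societal cost = direct health-care cost + valued informal-care time."""
    return health_care_cost + informal_care_hours * hourly_value

# Suppose a rehabilitative arm costs more in therapy but reports fewer
# informal-care hours than a comparison arm:
rehab = dict(health_care_cost=2000, informal_care_hours=300)
comparison = dict(health_care_cost=1000, informal_care_hours=500)

for label, hourly_value in [("homecare replacement", 15.0),
                            ("minimum wage", 6.0),
                            ("zero value", 0.0)]:
    r = societal_cost(rehab["health_care_cost"],
                      rehab["informal_care_hours"], hourly_value)
    c = societal_cost(comparison["health_care_cost"],
                      comparison["informal_care_hours"], hourly_value)
    cheaper = "rehab" if r < c else "comparison"
    print(f"{label}: rehab={r:.0f}, comparison={c:.0f} -> {cheaper} cheaper")
```

With these made-up figures, the rehabilitative arm’s advantage shrinks as the hourly value drops and disappears entirely when informal care is valued at zero, which is why reporting results under all pre-specified assumptions matters.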

In a posted response, the paper’s lead author, Paul McCrone, conceded that the commenters were right about the impact that the lower-cost, alternative assumptions would have on the findings. However, McCrone did not explain or even mention the apparently erroneous sensitivity analyses he had cited in the paper, which had found the societal cost-benefit advantages for graded exercise therapy and cognitive behavior therapy to be “robust” under all assumptions. Instead, he argued that the two lower-cost approaches were unfair to caregivers because families deserved more economic consideration for their labor.

“In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial,” McCrone wrote. “Likewise, to assume it only has the value of the minimum wage is also very restrictive.”

In a subsequent comment, Kindlon chided McCrone, pointing out that he had still not explained the paper’s claim that the sensitivity analyses showed the findings were “robust” for all assumptions. Kindlon also noted that the alternative, lower-cost assumptions were included in PACE’s own statistical plan.

“Remember it was the investigators themselves that chose the alternative assumptions,” wrote Kindlon. “If it’s ‘controversial’ now to value informal care at zero value, it was similarly ‘controversial’ when they decided before the data was looked at, to analyse the data in this way. There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented.”

***

The journal Psychological Medicine published the long-awaited findings on “recovery” in January 2013. In the paper, the investigators imposed a serious limitation on their construct of “recovery.” They now defined it as recovery solely from the most recent bout of illness—a health status generally known as “remission,” not “recovery.” The protocol definition included no such limitation.

In a commentary, Fred Friedberg, a psychologist in the psychiatry department at Stony Brook University and an expert on the illness, criticized the PACE authors’ use of the term “recovery” as inaccurate. “Their central construct…refers only to recovery from the current episode, rather than sustained recovery over long periods,” he and a colleague wrote. The term “remission,” they noted, was “less prone to misinterpretation and exaggeration.”

Tom Kindlon was more direct. “No one forced them to use the word ‘recovery’ in the protocol and in the title of the paper,” he said. “If they meant ‘remission,’ they should have said ‘remission.’” As with the release of the Lancet paper, when Chalder spoke of getting “back to normal” and the commentary claimed “recovery” based on a “strict criterion,” Kindlon believed the PACE approach to naming the paper and reporting the results would once again lead to inaccurate news reports touting claims of “recovery.”

In the new paper, the PACE investigators loosened all four of the protocol’s required criteria for “recovery” but did not mention which, if any, oversight committees approved this overall redefinition of the term. Two of the four revised criteria for “recovery” were the Lancet paper’s fatigue and physical function “normal ranges.” Like the Lancet paper, the Psychological Medicine paper did not point out that these “normal ranges”—now re-purposed as “recovery” thresholds–overlapped with the study’s entry criteria for disability, so that participants could already be “recovered” on one or both of these two indicators from the outset.

The four revised “recovery” criteria were:

*For physical function, “recovery” required a score of 60 or more. In the protocol, “recovery” required a score of 85 or more. At entry, a score of 65 or less was required to demonstrate enough disability to be included in the trial. This entry threshold of 65 indicated better health than the new “recovery” threshold of 60.

*For fatigue, a score of 18 or less out of 33 (on the fatigue scale, a higher score indicated more fatigue). In the protocol, “recovery” required a score of 3 or less out of 11 under the original scoring system. At entry, a score of at least 12 on the revised scale was required to demonstrate enough fatigue to be included in the trial. This entry threshold of 12 indicated better health than the new “recovery” threshold of 18.

*A score of 1 (“very much better”) or 2 (“much better”) out of 7 on the Clinical Global Impression scale. In the protocol, “recovery” required a score of 1 (“very much better”) on the Clinical Global Impression scale; a score of 2 (“much better”) was not good enough. The investigators made this change, they wrote, because “we considered that participants rating their overall health as ‘much better’ represented the process of recovery.” They did not cite references to justify their post-protocol reconsideration of the meaning of the Clinical Global Impression scale, nor did they explain when and why they changed their minds about how to interpret it.

*The last protocol requirement for “recovery”—not meeting any of the three case definitions used in the study–was now divided into less and more restrictive sub-categories. Presuming participants met the relaxed fatigue, physical function, and Clinical Global Impression thresholds, those who no longer met the Oxford criteria were now defined as having achieved “trial recovery,” even if they still met one of the other two case definitions, the CDC’s chronic fatigue syndrome case definition and the ME definition. Those who fulfilled the protocol’s stricter criteria of not meeting any of the three case definitions were now defined as having achieved “clinical recovery.” The authors did not explain when or why they decided to divide this category into two.
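The overlap between the entry thresholds and the first two revised “recovery” thresholds can be sketched in a few lines. This is a hypothetical illustration using only the physical function and fatigue cut-offs quoted above (the Clinical Global Impression and case-definition criteria are omitted for brevity):

```python
# Hypothetical sketch of the overlapping thresholds described above.

def eligible_for_trial(sf36, fatigue):
    """Disabled enough to enroll: SF-36 of 65 or less, fatigue of 12 or more."""
    return sf36 <= 65 and fatigue >= 12

def meets_recovery_thresholds(sf36, fatigue):
    """Within the revised ranges: SF-36 of 60 or more, fatigue of 18 or less."""
    return sf36 >= 60 and fatigue <= 18

# A hypothetical participant scoring 60 on physical function and 15 on fatigue:
sf36, fatigue = 60, 15
print(eligible_for_trial(sf36, fatigue))         # True: sick enough to enroll
print(meets_recovery_thresholds(sf36, fatigue))  # True: already "recovered"
```

Any participant entering with a physical function score between 60 and 65, or a fatigue score between 12 and 18, satisfies both conditions at once before receiving any treatment.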

After these multiple relaxations of the protocol definition of “recovery,” the paper reported the full data for the less restrictive category of “trial recovery,” not the more restrictive category of “clinical recovery.” The authors found that the odds of “trial recovery” in the cognitive behavior therapy and graded exercise therapy arms were more than triple those in the adaptive pacing therapy and specialist medical care arms. They did not report having conducted any sensitivity analyses to measure the impact of all the changes to the protocol definition of “recovery.”

They acknowledged that the “trial recovery” rate from the two rehabilitative treatments, at 22 percent in each group, was low. They suggested that increasing the total number of graded exercise therapy and cognitive behavior therapy sessions and/or bundling the two interventions could boost the rates.

***

Like the Lancet paper, the “recovery” findings received uncritical media coverage—and as Tom Kindlon feared, the news accounts did not generally mention “remission.” Nor did they discuss the dramatic changes in all four of the criteria from the original protocol definition of “recovery.” Not surprisingly, the report drew fire from patients and advocacy groups.

Commenters on the journal’s website and on patient and advocacy blogs challenged the revised definition for “recovery,” including the use of the overlapping “normal ranges” for fatigue and physical function as two of the four criteria. They wondered why the PACE authors used the term “recovery” at all, given the serious limitation they had placed on its meaning. They also noted that the investigators were ignoring the Lancet paper’s objective results from the six-minute walking test in assessing whether people had recovered, as well as the employment and benefits data from the PLoS One paper—all of which failed to support the “recovery” claims.

In their response, White and his colleagues defended their use of the term “recovery” by noting that they explained clearly what they meant in the paper itself. “We were careful to give a precise definition of recovery and to emphasize that it applied at one particular point only and to the current episode of illness,” they wrote. But they did not explain why, given that narrow definition, they simply did not use the standard term “remission,” since there was always the possibility that the word “recovery” would lead to misunderstanding of the findings.

Once again, they did not address or explain why the entry criteria for disability and the outcome criteria for the physical function and fatigue “normal ranges”—now redefined as “recovery” thresholds–overlapped. They again did not explain why they used the statistical formula to find “normal ranges” for normally distributed populations on samples that they knew were skewed. And they now disavowed the significance of objective measures they themselves had selected, starting with the walking test, which had been described as “an objective outcome measure of physical capacity” in the protocol.

“We dispute that in the PACE trial the six-minute walking test offered a better and more ‘objective’ measure of recovery,” they now wrote, citing “practical limitations” with the data.

For one thing, the researchers now explained that during the walking test, in deference to participants’ poor health, they did not verbally encourage them, in contrast to standard practice. For another, they did not have follow-up walking tests for more than a quarter of the sample, a significant data gap that they did not explain. (One possible explanation is that participants were too sick to do the walking test at all, suggesting that the findings might have looked significantly worse if they had included actual results from those missing subjects.)

Finally, the PACE investigators explained, they had only 10 meters of corridor space for conducting the test, rather than the standard of 30 to 50 meters–although they did not explain whether all six of their study centers around the country, or just some of them, suffered from this deficiency. “This meant that participants had to stop and turn around more frequently, slowing them down and thereby vitiating comparisons with other studies,” wrote the investigators.

This explanation raised further questions, however. The investigators had started assessing participants–and administering the walking-test–in 2005. Yet two years later, in the protocol published in BMC Neurology, they did not mention any comparison-vitiating problems; instead, they described the walking test as an “objective” measure of physical capacity. While the protocol itself was written before the trial started, the authors posted a comment on the BMC Neurology web page in 2008, in response to patient comments, that reaffirmed the six-minute walking test as one of “several objective outcome measures.”

In their response in the Psychological Medicine correspondence, White and his colleagues did not explain if they had recognized the walking test’s comparison-vitiating limitations by the time they published their protocol in 2007 or their comment on BMC Neurology’s website in 2008–and if not, why not.

In their response, they also dismissed the relevance of their employment and benefits outcomes, which had been described as “another more objective measure of function” in the protocol. “Recovery from illness is a health status, not an economic one, and plenty of working people are unwell, while well people do not necessarily work,” they now wrote. “In addition, follow-up at 6 months after the end of therapy may be too short a period to affect either benefits or employment. We therefore disagree…that such outcomes constitute a useful component of recovery in the PACE trial.”

In conclusion, they wrote in their Psychological Medicine response, cognitive behavior therapy and graded exercise therapy “should now be routinely offered to all those who may benefit from them.”

***

Each published paper fueled new questions. Patients and advocates filed dozens of freedom-of-information requests for PACE-related documents and data with Queen Mary University of London, White’s institutional home and the designated administrator for such matters.

How many PACE participants, patients wanted to know, were “recovered” according to the much stricter criteria in the 2007 protocol? How many participants were already “within the normal range” on fatigue or physical function when they entered the study? When exactly were the changes made to the assessment strategies promised in the protocol, what oversight committees approved them, and why?

Some requests were granted. One response revealed that 85 participants—or 13 percent of the total sample–were already “recovered” or “within the normal range” for fatigue or physical function even as they qualified as disabled enough for the study. (Almost all of these, 78 participants, achieved the threshold for physical function alone; four achieved it for fatigue, and three for both.)

But many other requests have been turned down. Anna Sheridan, a long-time patient with a doctorate in physics, requested data last year on how the patients deemed “recovered” by the investigators in the 2013 Psychological Medicine paper had performed on the six-minute walking test. Queen Mary University rejected the request as “vexatious.”

Sheridan asked for an internal review. “As a scientist, I am seeking to understand the full implications of the research,” she wrote. “As a patient, the distance that I can walk is of incredible concern…When deciding to undertake a treatment such as CBT and GET, it is surely not unreasonable to want to know how far the patients who have recovered using these treatments can now walk.”

The university re-reviewed the request and informed Sheridan that it was not, in fact, “vexatious.” But her request was again being rejected, wrote the university, because the resources needed to locate and retrieve the information “would exceed the appropriate limit” designated by the law. Sheridan appealed the university’s decision to the next level, the U.K. Information Commissioner’s Office, but was recently turned down.

The Information Commissioner’s Office also turned down a request from a plaintiff seeking meeting minutes for PACE oversight committees to understand when and why outcome measures were changed. The plaintiff appealed to a higher-level venue, the First-Tier Tribunal. The tribunal panel–a judge and two lay members—upheld the decision, declaring that it was “pellucidly clear” that release of the minutes would threaten academic freedom and jeopardize future research.

The tribunal panel defended the extensive protocol changes as “common to most clinical trials” and asserted that the researchers “did not engineer the results or undermine the integrity of the findings.” The panel framed the many requests for trial documents and data as part of a campaign of harassment against the researchers, and sympathetically cited the heavy time burdens that the patients’ demands placed on White. In conclusion, wrote the panel, the tribunal “has no doubt that properly viewed in its context, this request should have been seen as vexatious–it was not a true request for information–rather its function was largely polemical.”

To date, the PACE investigators have rejected requests to release raw data from the trial for independent analysis. Patients and other critics say the researchers have a particular obligation to release the data because the trial was conducted with public funds.

Since the Lancet publication, much media coverage of the PACE investigators and their colleagues has focused on what The Guardian has called the “campaign of abuse and violence” purportedly being waged by “militants…considered to be as dangerous and uncompromising as animal rights extremists.” In a news account in the BMJ, White portrayed the protestors as hypocrites. “The paradox is that the campaigners want more research into CFS, but if they don’t like the science they campaign to stop it,” he told the publication. While news reports have also repeated the PACE authors’ claims of treatment success and “recovery,” these accounts have not generally examined the study itself in depth or investigated whether patients’ complaints about the trial are valid.

Tom Kindlon has often heard these arguments about patient activists and says they are used to deflect attention away from the PACE trial’s flaws. “They’ve said that the activists are unstable, the activists have illogical reasons and they are unfair or prejudiced against psychiatry, so they’re easy to dismiss,” said Kindlon.

What patients oppose, he and others explain, is not psychiatry or psychiatrists, but being told that their debilitating organic disease requires treatments based on the hypothesis that they have false cognitions about it.

***

In January of this year, the PACE authors published their paper on mediators of improvement in The Lancet Psychiatry. Not surprisingly, they found that reducing participants’ presumed fears of activity was the main mechanism through which the rehabilitative interventions of graded exercise therapy and cognitive behavior therapy delivered their purported benefits. News stories about the findings suggested that patients with ME/CFS could get better if they were able to rid themselves of their fears of activity.

Unmentioned in the media reports was a tiny graph tucked into a page with 13 other tiny graphs: the results of the self-paced step-test, the fitness measure promised in the protocol. The small graph indicated no advantages for the two rehabilitative intervention groups on the step-test; in fact, it appeared to show that those in the other two groups might have performed better. However, the paper did not include the data on which the graph was based, and the graph was too small for any useful numbers to be extracted from it.

After publication of the study, a patient filed a request to obtain the actual step-test results that were used to create the graph. Queen Mary University rejected the request as “vexatious.”

With the publication of the step-test graph, all of the study’s key “objective” outcomes, except for the still-unreleased data on wages, had now failed to support the claims of “recovery” and treatment success from the two rehabilitative therapies. The Lancet Psychiatry paper did not mention this serious lack of support for the study’s subjective findings from all of its key objective measures.

Some scientific developments since the 2011 Lancet paper—such as this year’s National Institutes of Health and Institute of Medicine panel reports, the Columbia University findings of distinct immune system signatures, further promising findings from Norwegian research into the immunomodulatory drug [see correction below] pioneered by rheumatoid arthritis expert Jonathan Edwards, and a growing body of evidence documenting patients’ abnormal responses to activity—have helped shift the focus to biomedical factors and away from PACE, at least outside Great Britain.

In the U.K. itself, the Medical Research Council, in a modest shift, has awarded some grants for biomedical research, but the PACE approach remains the dominant framework for treatment within the national health system. Two years ago, the disparate scientific and political factions launched the CFS/ME Research Collaborative, conceived as an umbrella organization representing a range of views. At the collaborative’s inaugural two-day gathering in Bristol in September of 2014, many speakers presented on promising biomedical research. Peter White’s talk, called “PACE: A Trial and Tribulations,” focused on the response to his study from disaffected patients.

According to the conference report, White cited the patient community’s “campaign against the PACE trial” for recruitment delays that forced the investigators to seek more time and money for the study. He spoke about “vexatious complaints” and demands for PACE-related data, and said he had so far fielded 168 freedom-of-information requests. (He’d received a freedom-of-information request asking how many freedom-of-information requests he’d received.) This type of patient activity “damages” research efforts, he said.

Jonathan Edwards, the rheumatoid arthritis expert now working on ME/CFS, filed a separate report on the conference for a popular patient forum. “I think I can only describe Dr. White’s presentation as out of place,” he wrote. After White briefly discussed the trial outcomes, noted Edwards, “he then spent the rest of his talk saying how unreasonable it was that patients did not gratefully accept this conclusion, indicating that this was an attack on science…

“I think it was unfortunate that Dr. White suggested that people were being unreasonable over the interpretation of the PACE study,” concluded Edwards. “Fortunately nobody seemed to take offence.”

Correction: The original text referred to the drug as an anti-inflammatory. 

TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study (second installment)

By David Tuller, DrPH


Yesterday, Virology Blog posted the first half of the story. Today’s installment was supposed to be the full second half. However, because the two final sections are each 4,000 words long, we decided to make it easier on readers, split the remainder into two posts, and publish them on successive days instead.

SUMMARY

This examination of the PACE trial of chronic fatigue syndrome identified several major flaws:

*The study included a bizarre paradox: participants’ baseline scores for the two primary outcomes of physical function and fatigue could qualify them simultaneously as disabled enough to get into the trial but already “recovered” on those indicators–even before any treatment. In fact, 13 percent of the study sample was already “recovered” on one of these two measures at the start of the study.

*In the middle of the study, the PACE team published a newsletter for participants that included glowing testimonials from earlier trial subjects about how much the “therapy” and “treatment” helped them. The newsletter also included an article informing participants that the two interventions pioneered by the investigators and being tested for efficacy in the trial, graded exercise therapy and cognitive behavior therapy, had been recommended as treatments by a U.K. government committee “based on the best available evidence.” The newsletter article did not mention that a key PACE investigator was also serving on the U.K. government committee that endorsed the PACE therapies.

*The PACE team changed all the methods outlined in its protocol for assessing the primary outcomes of physical function and fatigue, but did not take necessary steps to demonstrate that the revised methods and findings were robust, such as including sensitivity analyses. The researchers also relaxed all four of the criteria outlined in the protocol for defining “recovery.” They have rejected requests from patients for the findings as originally promised in the protocol as “vexatious.”

*The PACE claims of successful treatment and “recovery” were based solely on subjective outcomes. All the objective measures from the trial—a walking test, a step test, and data on employment and the receipt of financial benefits—failed to provide any evidence to support such claims. Afterwards, the PACE authors dismissed their own main objective measures as non-objective, irrelevant, or unreliable.

*In seeking informed consent, the PACE authors violated their own protocol, which included an explicit commitment to tell prospective participants about any possible conflicts of interest. The main investigators have had longstanding financial and consulting ties with disability insurance companies, having advised them for years that cognitive behavior therapy and graded exercise therapy could get claimants off benefits and back to work. Yet prospective participants were not told about any insurance industry links and the information was not included on consent forms. The authors did include the information in the “conflicts of interest” sections of the published papers.

Top researchers who have reviewed the study say it is fraught with indefensible methodological problems. Here is a sampling of their comments:

Dr. Bruce Levin, Columbia University: “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism.”

Dr. Ronald Davis, Stanford University: “I’m shocked that the Lancet published it…The PACE study has so many flaws and there are so many questions you’d want to ask about it that I don’t understand how it got through any kind of peer review.”

Dr. Arthur Reingold, University of California, Berkeley: “Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Dr. Jonathan Edwards, University College London: “It’s a mass of un-interpretability to me…All the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible.”

Dr. Leonard Jason, DePaul University: “The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria.”

************************************************************************

PART THREE:

The PACE Trial is Published

Trial recruitment and randomization into the four arms began in early 2005. In 2007, the investigators published a short version of their trial protocol in the journal BMC Neurology. They promised to provide the following results for their two primary measures:

*“Positive outcomes” for physical function, defined as achieving either an SF-36 score of 75 or more, or a 50% increase in score from baseline.

*“Positive outcomes” for fatigue, defined as achieving either a Chalder Fatigue Scale score of 3 or less, or a 50% reduction in score from baseline.

*“Overall improvers,” defined as participants who achieved “positive outcomes” for both physical function and fatigue.

The investigators also promised to provide results for what they defined as “recovery,” a secondary outcome that included four components:

*A physical function score of 85 or more.

*A fatigue score of 3 or less.

*A score of 1 (“very much better”) out of 7 on the Clinical Global Impression scale, a self-rated measure of overall health change.

*Not fulfilling any of the three case definitions used in the study (the Oxford criteria, the CDC criteria for chronic fatigue syndrome, and the myalgic encephalomyelitis criteria).

Tom Kindlon scrutinized the protocol for details on the promised objective outcomes. He knew that self-reported questionnaire responses could be influenced by extraneous factors like affection for the therapist or a desire to believe the treatment worked. He also knew that previous studies of rehabilitative treatments for the illness had shown that objective measurements often failed even when a study reported improvements in subjective measures.

“I’d make the analogy that if you’re measuring weight loss, you wouldn’t ask people if they think they’d lost weight, you’d measure them,” he said.

The protocol’s objective measures of physical capacity and function included:

*A six-minute walking test;

*A self-paced step-test (i.e. on a short stool);

*Data on employment, wages, and the receipt of benefits

***

On the trial website, the PACE team posted occasional “participants newsletters,” which featured updates on funding, recruitment and related developments. The third newsletter, dated December 2008, included words of praise for the trial from Prime Minister Gordon Brown’s office as well as an article about the government’s release of new clinical treatment guidelines for chronic fatigue syndrome.

The new U.K. clinical guidelines, the newsletter told participants, were “based on the best available evidence” and recommended treatment with cognitive behavior therapy and graded exercise therapy, the two rehabilitative approaches being studied in PACE. The newsletter did not mention that one of the key PACE investigators, physiotherapist Jessica Bavington, had also served on the U.K. government committee that endorsed the PACE therapies.

The same newsletter included a series of testimonials from participants about their positive outcomes from the “therapy” and “treatment,” although it did not mention the trial arms by name. The newsletter did not balance these positive accounts by including any comments from participants with poor outcomes. At that time, about a third of the participants (200 or so out of the final total of 641) still had one or more assessments to undergo, according to a recruitment chart in the same newsletter.

“The therapy was excellent,” wrote one participant. Another was “so happy that this treatment/trial has greatly changed my sleeping!” A third wrote: “Being included in this trial has helped me tremendously. (The treatment) is now a way of life for me.” A fourth noted: “(The therapist) is very helpful and gives me very useful advice and also motivates me.” One participant’s doctor wrote about the “positive changes” in his patient from the “therapy,” declared that the trial “clearly has the potential to transform [the] lives of many people,” and congratulated the PACE team on its “successful programme”—although no trial findings had yet been published.

Arthur Reingold, the head of epidemiology at the University of California, Berkeley, School of Public Health (and a colleague of mine), has reviewed innumerable clinical trials and observational studies in his decades of work and research with state, national and international public health agencies. He said he had never before seen a case in which researchers themselves had disseminated, mid-trial, such testimonials and statements promoting therapies under investigation. The situation raised concerns about the overall integrity of the study findings, he said.

Although specific interventions weren’t named, he added, the testimonials could still have biased responses in all of the arms toward the positive, or exerted some other unpredictable effect—especially since the primary outcomes were self-reported. (He’d also never seen a trial in which participants could be disabled enough for entry and “recovered” on an indicator simultaneously.)

“Given the subjective nature of the primary outcomes, broadcasting testimonials from those who had received interventions under study would seem to violate a basic tenet of research design, and potentially introduce substantial reporting and information bias,” said Reingold. “I am hard-pressed to recall a precedent for such an approach in other therapeutic trials. Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

***

As soon as the Lancet article was released, Kindlon began sharing his impressions with others online. “It was like a hive mind,” he said. “Gradually people spotted different problems and would post those points, and you could see the flaws in it.”

In addition to asserting that cognitive behavior therapy and exercise therapy were modestly effective, the Lancet paper declared these treatments to be safe—no signs of serious adverse events, despite patients’ concerns. The pacing therapy proved little or no better than the baseline condition of specialist medical care. And the results for the two subgroups defined by other criteria did not differ significantly from the overall findings.

It didn’t take long for Kindlon and the others to notice something unusual—the investigators had made a great many mid-trial changes, including in both primary measures. Facing lagging recruitment eleven months into the trial, the PACE authors explained in The Lancet, they had decided to raise the physical function entry threshold, from the initial 60 to the healthier threshold of 65. With the fatigue scale, they had decided to abandon the 0 or 1 bimodal scoring system in favor of continuous scoring, with each answer ranging from 0 to 3; the reason, they wrote, was “to more sensitively test our hypotheses.” (As collected, the data allowed for both scoring methods.)

They did not explain why they made the decision about the fatigue scale in the middle of the trial rather than before, nor why they simply didn’t provide the results with both scoring methods. They did not mention that in 2010, the FINE trial—a smaller study for severely disabled and homebound ME/CFS patients that tested a rehabilitative intervention related to those in PACE–reported no significant differences in final outcomes between study arms, using the same physical function and fatigue questionnaires as in PACE.

The analysis of the Chalder Fatigue Scale responses in the FINE paper was bimodal, like the one promised in the PACE protocol. However, the FINE researchers later reported that a post-hoc analysis, in which they rescored the Chalder Fatigue Scale responses using the continuous scale of 0 to 3, had found modest benefits. The following year, the PACE team adopted the same revised approach in The Lancet.
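For readers unfamiliar with the two scoring systems, the difference can be sketched in a few lines of code. This is purely illustrative (not the trial’s analysis code), and the answer pattern is hypothetical:

```python
# Illustrative sketch of the two Chalder Fatigue Scale scoring methods.
# The scale has 11 items, each with four response options (here 0-3).

def bimodal_score(answers):
    """Original protocol scoring: the two 'healthier' options count 0,
    the two 'more fatigued' options count 1 (total range 0-11)."""
    return sum(1 if a >= 2 else 0 for a in answers)

def continuous_score(answers):
    """Revised 'Likert' scoring: each option counts 0-3 (total range 0-33)."""
    return sum(answers)

# A hypothetical participant endorsing the milder 'fatigued' option on every item
answers = [2] * 11
print(bimodal_score(answers))     # prints 11 (the bimodal maximum)
print(continuous_score(answers))  # prints 22 (mid-range on the continuous scale)
```

Because the data were collected at the item level, either total could be computed; entry to the trial required a bimodal score of at least 6.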

The FINE study also received funding in 2003 from the Medical Research Council, and the PACE team referred to it as its “sister” trial. Yet the text of the Lancet paper included nothing about the FINE trial and its negative findings.

***

Besides these changes, the authors did not include the promised protocol data: results for “positive outcomes” for fatigue and physical function, and for the “overall improvers” who achieved “positive outcomes” on both measures. Instead, noting that changes had been approved by oversight committees before outcome data had been examined, they introduced other statistical methods to assess the fatigue and physical function scores. All of their results showed modest advantages for cognitive behavior therapy and graded exercise therapy.

First, they compared the one-year changes in each arm’s average scores for physical function and fatigue. Yet unlike the method outlined in the protocol, this new mean-based measure did not provide information about a key factor of interest—the actual numbers or proportion of participants in each group who reported having gotten better or worse.

In another approach, which they identified as a post-hoc analysis, they determined the proportion of participants in each arm who achieved what they defined as a “clinically useful” benefit–an increase of eight points on the physical function scale and a decrease of two points on the revised fatigue scale. Unlike the first analysis, this post-hoc analysis did provide individual-level rather than aggregate responses. Yet post-hoc results never enjoy the level of confidence granted to pre-specified ones.

Moreover, the improvements required for what the researchers now called a “clinically useful” benefit were smaller than the minimum improvements needed to achieve the protocol’s threshold scores for “positive outcomes”—an increase of ten points on the physical function scale, from the entry threshold of 65 to 75, and a drop of three points on the original fatigue scale, from the entry threshold of 6 to 3.

A third method in the Lancet paper was another post-hoc analysis, this one assessing how many participants in each group achieved what the researchers called the “normal ranges” for fatigue and physical function. They calculated these “normal ranges” from earlier studies that reported the responses of large population samples to the SF-36 and Chalder Fatigue Scale questionnaires. The authors reported that 30 and 28 percent of participants in, respectively, the cognitive behavior therapy and graded exercise therapy arms scored within the “normal ranges” of representative populations for both fatigue and physical function, about double the rate in the other groups.

Of the key objective measures mentioned in the protocol, the Lancet paper included only the results of the six-minute walking test. Those in the exercise arm averaged a modest increase in distance walked of 67 meters, from 312 at baseline to 379 at one year, while those in the other three arms, including cognitive behavior therapy, made no significant improvements from similar baseline values.

But the exercise arm’s performance was still evidence of serious disability, lagging far behind the mean performances of relatively healthy women from 70 to 79 years (490 meters), people with pacemakers (461 meters), patients with Class II heart failure (558 meters), and cystic fibrosis patients (626 meters). About three-quarters of the PACE participants were women; the average age was 38.

***

In reading the Lancet paper, Kindlon realized that Trudie Chalder was highlighting the post-hoc “normal range” analysis of the two primary outcomes when she spoke at the PACE press conference of “twice as many” participants in the cognitive behavior and exercise therapy arms getting “back to normal.” Yet he knew that “normal range” was a statistical construct, and did not mean the same thing as “back to normal” or “recovered” in medical terms.

The paper itself did not include any results for “recovery” from the illness, as defined using the four criteria outlined in the protocol. Given that, Kindlon believed Chalder had created unneeded confusion in referring to participants as “back to normal.” Moreover, he believed the colleagues of the PACE authors had compounded the problem with their claim in the accompanying commentary of a 30 percent “recovery” rate based on the same “normal range” analysis.

But Kindlon and others also noticed something very peculiar about these “normal ranges”: They overlapped with the criteria for entering the trial. While a physical function score of 65 was considered evidence of sufficient disability to be a study participant, the researchers had now declared that a score of 60 and above was “within the normal range.” Someone could therefore enter the trial with a physical function score of 65, become more disabled, leave with a score of 60, and still be considered within the PACE trial’s “normal range.”

The same bizarre paradox bedeviled the fatigue measure, on which a lower score indicated less fatigue. Under the revised, continuous method of scoring the answers on the Chalder Fatigue Scale, the 6 out of 11 required to demonstrate sufficient fatigue for entry translated into a score of 12 or higher. Yet the PACE trial’s “normal range” for fatigue included any score of 18 or below. A participant could have started the trial with a revised fatigue score of 12, become more fatigued to score 18 at the end, and yet still been considered within the “normal range.”
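The arithmetic behind both overlaps can be checked in a few lines. The thresholds below are those reported in the published papers; the worsening participant is hypothetical:

```python
# Entry and "normal range" thresholds as reported for the PACE trial
SF36_ENTRY_MAX = 65        # physical function low enough to enter the trial
SF36_NORMAL_MIN = 60       # outcome score counted as "within the normal range"

FATIGUE_ENTRY_MIN = 12     # revised continuous scoring; higher = more fatigued
FATIGUE_NORMAL_MAX = 18    # outcome score counted as "within the normal range"

# A hypothetical participant who qualifies for entry and then worsens
entry_sf36, final_sf36 = SF36_ENTRY_MAX, 60           # physical function drops
entry_fatigue, final_fatigue = FATIGUE_ENTRY_MIN, 18  # fatigue increases

worsened = final_sf36 < entry_sf36 and final_fatigue > entry_fatigue
in_normal_range = (final_sf36 >= SF36_NORMAL_MIN and
                   final_fatigue <= FATIGUE_NORMAL_MAX)
print(worsened, in_normal_range)  # prints: True True
```

Both checks come out True: the participant is measurably worse on both primary outcomes, yet still counts as “within the normal range” on both.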

“It was absurd that the criteria for ‘normal’ fatigue and physical functioning were lower than the entry criteria,” said Kindlon.

That meant, Kindlon realized, that some of the participants whom Chalder described as having gotten “back to normal” because they met the “normal range” threshold might have actually gotten worse during the study. And the same was true of the Lancet commentary accompanying the PACE paper, in which participants who met the peculiar “normal range” threshold were said to have achieved “recovery” according to a “strict criterion”—a definition of “recovery” that apparently survived the PACE authors’ pre-publication discussion of the commentary’s content.

Tom Kindlon wasn’t surprised when these “back to normal” and “recovery” claims became the focus of much of the news coverage. Yet it bothered him tremendously that Chalder and the commentary authors were able to generate such positive publicity from what was, after all, a post-hoc analysis that allowed participants to be severely disabled and “back to normal” or “recovered” simultaneously.

***

Perplexed at the findings, members of the online network checked out the population-based studies cited in PACE as the sources of the “normal ranges.” They discovered a serious problem. In those earlier studies, the responses to both the fatigue and physical function questionnaires did not form the symmetrical, bell-shaped curve known as a normal distribution. Instead, the responses were highly skewed, with many values clustered toward the healthier end of the scales—a frequent phenomenon in population-based health surveys. However, to calculate the PACE “normal ranges,” the authors used a standard statistical method—taking the mean value, plus/minus one standard deviation, which identifies a range that includes 68% of the values in a normally distributed sample.

A 2007 paper co-authored by White noted that this formula for determining normal ranges “assumed a normal distribution of scores” and yielded different results given “a violation of the assumptions of normality”—that is, when the data did not fall into a normal distribution. White’s 2007 paper also noted that the population-based responses to the SF-36 physical function questionnaire were not normally distributed and that using statistical methods specifically designed for such skewed populations would therefore yield different results.

To determine the fatigue “normal range,” the PACE team used a 2010 paper co-authored by Chalder, which provided population-based responses to the Chalder Fatigue Scale. Like the population-based responses to the SF-36 questionnaire, the responses on the fatigue scale were also not normally distributed but skewed toward the healthy end, as the Chalder paper noted.

Despite White’s caveats in his 2007 paper about “a violation of the assumption of normality,” the PACE paper itself included no similar warnings about this major source of distortion in calculating both the physical function and fatigue “normal ranges” using the formula for normally distributed data. The Lancet paper also did not mention or discuss the implications of the head-scratching results: having outcome criteria that indicated worse health than the entry criteria for disability.

Bruce Levin, the Columbia biostatistician, said there are simple statistical formulas for calculating ranges that would include 68 percent of the values when the data are skewed and not normally distributed, as with the population-based data sources used by PACE for both the fatigue and physical function “normal ranges.” To apply the standard formula to data sources that have highly skewed distributions, said Levin, can lead to “very misleading” results.
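Levin’s point can be illustrated with synthetic numbers. In this sketch, the scores are invented for demonstration and are not the actual population data; it shows how, on a ceiling-skewed 0–100 scale, the mean-minus-one-standard-deviation formula places the “normal range” floor below the scale’s true 16th percentile:

```python
# Synthetic, deterministic example: 100 scores skewed toward the healthy
# (100-point) end of a 0-100 scale, as population SF-36 data tend to be.
scores = ([100] * 40 + [95] * 20 + [85] * 15 + [75] * 10 +
          [60] * 8 + [40] * 5 + [15] * 2)

mean = sum(scores) / len(scores)
sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
parametric_floor = mean - sd   # the standard formula; assumes normality

# The score that 84% of this population actually exceeds (16th percentile)
empirical_floor = sorted(scores)[int(0.16 * len(scores))]

print(round(parametric_floor, 1), empirical_floor)  # prints: 67.2 75
```

With skewed data like these, the parametric floor (about 67) sits below the empirical 16th percentile (75), so a “normal range” built with the normal-distribution formula reaches into territory the reference population rarely occupies, which is the distortion Levin describes.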

***

Raising tough questions about the changes to the PACE protocol certainly conformed to the philosophy of the journal that published it. BioMed Central, the publisher of BMC Neurology, notes on its site that a major goal of publishing trial protocols is “enabling readers to compare what was originally intended with what was actually done, thus preventing both ‘data dredging’ and post-hoc revisions of study aims.” The BMC Neurology “editor’s comment” linked to the PACE protocol reinforced the message that the investigators should be held to account.

Unplanned changes to protocols are never advisable, and they present particular problems in unblinded trials like PACE, said Levin, the Columbia biostatistician. Investigators in such trials might easily sense the outcome trends long before examining the actual outcome data, and that knowledge could influence how they revise the measures from the protocol, he said.

And even when changes are approved by appropriate oversight committees, added Levin, researchers must take steps to address concerns about the impacts on results. These steps might include reporting the findings under both the initial and the revised methods in sensitivity analyses, which can assess whether different assumptions or conditions would cause significant differences in the results, he said.

“And where substantive differences in results occur, the investigators need to explain why those differences arise and convince an appropriately skeptical audience why the revised findings should be given greater weight than those using the a priori measures,” said Levin, noting that the PACE authors did not take these steps.

***

Some PACE trial participants were unpleasantly surprised to learn, only after the trial, of the researchers’ financial and consulting ties to insurance companies. The researchers disclosed these links in the “conflicts of interest” section of the Lancet article. Yet the authors had promised to adhere to the Declaration of Helsinki, an international human research ethics code mandating that prospective trial participants be informed about “any possible conflicts of interest” and “institutional affiliations of the researcher.”

The sample participant information and consent forms in the final approved protocol did not include any of the information. Four trial participants interviewed, three in person and one by telephone, all said they were not informed before or during the study about the PACE investigators’ ties to insurance companies, especially those in the disability sector. Two said they would have agreed to be in the trial anyway because they lacked other options; two said it would have affected their decision to participate.

Rhiannon Chaffer said she would likely have refused to be in the trial, had she known beforehand. “I’m skeptical of anything that’s backed by insurance, so it would have made a difference to me because it would have felt like the trial wasn’t independent,” said Chaffer, in her mid-30s, who became ill in 2006 and attended a PACE trial center in Bristol.

Another of the four withdrew her consent retroactively and forbade the researchers from using her data in the published results. “I wasn’t given the option of being informed, quite honestly,” she said, requesting anonymity because of ongoing legal matters related to her illness. “I felt quite pissed off and betrayed. I felt like they lied by omission.”

(None of the participants, including three in the cognitive behavior therapy arm, felt the trial had reversed their illness. I will describe these participants’ experiences at a later point).

Tomorrow: The Aftermath

TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study

By David Tuller, DrPH

Today’s Virology Blog installment is the first half; the second half will be posted in two parts, tomorrow and the next day.

SUMMARY

This examination of the PACE trial of chronic fatigue syndrome identified several major flaws:

*The study included a bizarre paradox: participants’ baseline scores for the two primary outcomes of physical function and fatigue could qualify them simultaneously as disabled enough to get into the trial but already “recovered” on those indicators–even before any treatment. In fact, 13 percent of the study sample was already “recovered” on at least one of these two measures at the start of the study.

*In the middle of the study, the PACE team published a newsletter for participants that included glowing testimonials from earlier trial subjects about how much the “therapy” and “treatment” helped them. The newsletter also included an article informing participants that the two interventions pioneered by the investigators and being tested for efficacy in the trial, graded exercise therapy and cognitive behavior therapy, had been recommended as treatments by a U.K. government committee “based on the best available evidence.” The newsletter article did not mention that a key PACE investigator was also serving on the U.K. government committee that endorsed the PACE therapies.

*The PACE team changed all the methods outlined in its protocol for assessing the primary outcomes of physical function and fatigue, but did not take necessary steps to demonstrate that the revised methods and findings were robust, such as including sensitivity analyses. The researchers also relaxed all four of the criteria outlined in the protocol for defining “recovery.” They have rejected, as “vexatious,” patients’ requests for the findings as originally promised in the protocol.

*The PACE claims of successful treatment and “recovery” were based solely on subjective outcomes. All the objective measures from the trial—a walking test, a step test, and data on employment and the receipt of financial benefits—failed to provide any evidence to support such claims. Afterwards, the PACE authors dismissed their own main objective measures as non-objective, irrelevant, or unreliable.

*In seeking informed consent, the PACE authors violated their own protocol, which included an explicit commitment to tell prospective participants about any possible conflicts of interest. The main investigators have had longstanding financial and consulting ties with disability insurance companies, having advised them for years that cognitive behavior therapy and graded exercise therapy could get claimants off benefits and back to work. Yet prospective participants were not told about any insurance industry links and the information was not included on consent forms. The authors did include the information in the “conflicts of interest” sections of the published papers.

Top researchers who have reviewed the study say it is fraught with indefensible methodological problems. Here is a sampling of their comments:

Dr. Bruce Levin, Columbia University: “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism.”

Dr. Ronald Davis, Stanford University: “I’m shocked that the Lancet published it…The PACE study has so many flaws and there are so many questions you’d want to ask about it that I don’t understand how it got through any kind of peer review.”

Dr. Arthur Reingold, University of California, Berkeley: “Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Dr. Jonathan Edwards, University College London: “It’s a mass of un-interpretability to me…All the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible.”

Dr. Leonard Jason, DePaul University: “The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria.”

************************************************************************

PART ONE:

The PACE Trial, Deconstructed

On February 17, 2011, at a press conference in London, psychiatrist Michael Sharpe and behavioral psychologist Trudie Chalder, members of the British medical and academic establishments, unveiled the results of a controversial clinical trial of more than 600 people diagnosed with chronic fatigue syndrome. The findings were being published in The Lancet. As with many things about the illness, the news was expected to cause a stir.

The study, known as the PACE trial, was the largest ever of treatments for chronic fatigue syndrome. The authors were among a prominent group of British mental health professionals who had long argued that the devastating symptoms were caused by severe physical deconditioning. They recognized that many people experienced an acute viral infection or other illness as an initial trigger. However, they believed that the syndrome was perpetuated by patients’ “unhelpful” and “dysfunctional” notion that they continued to suffer from an organic disease—and that exertion would make them worse. According to the experts’ theory, patients’ decision to remain sedentary for prolonged periods led to muscle atrophy and other negative systemic physiological impacts, which then caused even more fatigue and other symptoms in a self-perpetuating cycle.

An estimated one million to 2.5 million Americans, a quarter of a million Britons, and an unknown number of others around the world suffer from chronic fatigue syndrome. The illness often leaves patients too sick to work, attend school, or take care of their children, with a significant minority home-bound for months or years. It is a terrible public health burden, costing society billions of dollars a year in medical care and lost productivity. But what causes it and what to do about it have been fiercely debated for decades.

Patients and many leading scientists view the debilitating ailment as caused by pathological disease processes, not by physical deconditioning. Studies have shown that the illness is characterized by immunological and neurological dysfunctions, and many academic and government scientists say that the search for organic causes, diagnostic tests and drug interventions is paramount. Some recent research has generated excitement. In February, for example, a Columbia-led team reported distinct patterns of immune system response in early-stage patients—findings that could ultimately lead to a biomarker able to identify the presence of the illness.

In contrast, the British mental health experts have focused on non-pharmacological rehabilitative therapies, aimed at improving patients’ physical capacities and altering their perceptions of their condition through behavioral and psychological approaches. The PACE trial was designed to be a definitive test of two such treatments they had pioneered to help patients recover and get back to work. British government agencies, eager to stem health and disability costs related to the illness, had committed five million pounds—close to $8 million at current exchange rates–to support the research.

At the press conference, Sharpe and Chalder touted the two treatments—an incremental increase in activity known as “graded exercise therapy,” and a specialized form of cognitive behavior therapy—as effective in reversing the illness. Citing participant responses on questionnaires about fatigue and physical function, Chalder declared that, compared to other study subjects, “twice as many people on graded exercise therapy and cognitive behaviour therapy got back to normal.”

A Lancet guest commentary, whose contents were discussed in advance with the PACE authors, amplified the positive news, stating that about 30 percent of patients in the two rehabilitative treatment arms had achieved “recovery.” Headlines and stories around the world trumpeted the results.

“Fatigued patients who go out and exercise have best hope of recovery, finds study,” declared The Daily Mail. “Psychotherapy Eases Chronic Fatigue Syndrome, Study Finds,” stated The New York Times headline (I wrote the accompanying story). According to BMJ’s report about the trial, some PACE participants were “cured” of the illness.

***

Some 300 miles to the northwest in Castleknock, a middle-class suburb of Dublin, Tom Kindlon read and re-read the Lancet paper and reviewed the upbeat news coverage. The more he went over everything, the more frustrated and angry he became. The investigators, he observed, were spinning the study as a success in one of the world’s preeminent scientific publications, and the press was lapping it up.

“The paper was pure hype for graded exercise therapy and cognitive behavior therapy,” said Kindlon in a recent interview. “But it was a huge trial, getting huge coverage, and they were getting a very influential base to push their views.”

Kindlon had struggled with the illness for more than two decades. In 1993, his health problems forced him to drop his math studies at Dublin’s prestigious Trinity College; he’d been largely homebound since. With his acumen for statistics, Kindlon was known in the advocacy community for his nuanced understanding of the research.

He shared this passion with a small group of other science-savvy patients he’d met through online networks. Kindlon and the others were particularly worried about the PACE trial, first announced in 2003. They knew the results would wield great influence on government health policies, public attitudes, and future research—not only in Great Britain, but in the U.S. and elsewhere as well.

Like others in the patient and advocacy communities, they believed the evidence clearly pointed to an ongoing biological disease, not physical debility caused by deconditioning. They bristled at the suggestion that they would get better if only they could change their perceptions of their condition. And pushing themselves to be more active, they insisted, not only wasn’t helpful but could trigger a serious and extended relapse.

In the four years since the Lancet publication, Kindlon and others have pressed for an independent review of the trial data. They have produced a sprawling online literature deconstructing the trial’s methodology, submitted dozens of freedom-of-information requests for PACE-related documents and data, and published their criticisms on the websites and letters columns of leading medical journals. Their concerns, if valid, would raise serious questions about the study’s findings.

***

For their part, the PACE investigators have released additional results from the trial. These have included a 2012 paper on economic aspects in PLoS One, a 2013 paper on “recovery” in Psychological Medicine, and a “mediation analysis” paper last January in The Lancet Psychiatry suggesting that reducing patients’ purported fears of activity mediated improvement.

But this investigation–based on many dozens of interviews and a review of thousands of pages of documents–has confirmed that some of the major criticisms of the trial are accurate. (The documents reviewed included, among others, the trial protocol, the manuals for the trial’s four arms, participant information and consent forms, meeting minutes of oversight committees, critical reports written by patients and advocates, transcripts of parliamentary hearings, and many dozens of peer-reviewed studies. Some documents were obtained by patients under freedom-of-information requests and either posted online or provided to me.)

Among the findings:

*The trial included a bizarre paradox: Participants’ baseline scores for physical function and fatigue could qualify them simultaneously as sick enough to get into the trial but already “recovered” on those indicators–even before any treatment. In other words, the thresholds for being “recovered” represented worse health than the scores required in the first place to demonstrate the severe disability needed to enter the trial. This anomaly meant that some participants could get worse on physical function and fatigue during the trial and still be included in the results as being “recovered.” Data obtained by a patient through a freedom-of-information request indicated that 13 percent of the participants were already “recovered” for physical function or fatigue, or both, when they joined the study—a fact not mentioned in any of the published papers. (In the 2011 Lancet paper, participants who met these unusual thresholds were referred to not as having “recovered” but as being “within normal range.” In the 2013 Psychological Medicine paper, the same thresholds were re-purposed as indicators of “recovery.”)

*During the study, the PACE team published a “participants newsletter” that included glowing testimonials from earlier trial subjects about how the “therapy” and “treatment” had improved their lives. An article in the same newsletter also reported that the U.K. government’s newly released clinical guidelines for the illness recommended the two rehabilitative treatments under investigation, cognitive behavior therapy and graded exercise therapy, “based on the best available evidence.” (The article didn’t mention that a key PACE investigator also served on the U.K. government committee that endorsed the two PACE therapies.) The testimonials and the statements promoting the two therapies could have biased the responses of the 200 or so remaining participants, about a third of the total study sample.

*The investigators abandoned all the criteria outlined in their protocol for assessing their two primary measures of fatigue and physical function, and adopted new ones (in the 2011 Lancet paper). They also significantly relaxed all four of their criteria for defining “recovery” (in the 2013 Psychological Medicine paper). They did not report having taken the necessary steps to assess the impacts of these changes, such as conducting sensitivity analyses. Such protocol changes contradicted the ethos of BMC Neurology, the journal that published the PACE protocol in 2007. An “editor’s comment” linked to the protocol urged readers to review the published results and to contact the authors “to ensure that no deviations from the protocol occurred during the study.” The PACE team has rejected, as “vexatious,” freedom-of-information requests for the results as promised in the protocol.

*The study’s two primary outcomes were subjective, but in the 2007 published protocol the investigators also included several “objective” secondary outcomes to assess physical capacity, fitness and function; these measures included a six-minute walking test, a self-paced step test, and data on employment, wages and financial benefits. These findings utterly failed to support the subjective reports that the authors had interpreted as demonstrating successful treatment and “recovery.” In subsequently published comments, the authors then disputed the relevance, reliability and “objectivity” of the main objective measures they themselves had selected.

*In seeking informed consent, the investigators violated a major international research ethics code that they promised, in their protocol, to observe. A key provision of the Declaration of Helsinki, developed after WW II to protect human research subjects, requires that study participants be “adequately informed” of researchers’ “possible conflicts of interest” and “institutional affiliations.” The key PACE authors have longstanding financial and consulting ties to the disability insurance industry; they have advised insurers for years that cognitive behavior therapy and graded exercise therapy can get patients off benefits and back to work. In the papers published in The Lancet and other journals, the PACE authors disclosed their industry ties, yet they did not reveal this information to prospective trial subjects. Of four participants interviewed, two said the knowledge would have affected their decision to participate; one retroactively withdrew her consent and forbade the researchers from including her data.

***

I did not interview Chalder, Sharpe, or Peter White, also a psychiatrist and the lead PACE investigator, for this story. Chalder did not respond to an e-mail last December seeking interviews. Sharpe and White both e-mailed back, declining to be interviewed [see correction below]. In his message, White wrote that, after consulting with his colleagues and reviewing my past reporting on the illness, “I have concluded that it would not be worthwhile our having a conversation…We think our work speaks for itself.” A second request for interviews, sent last week to the three investigators, also proved unsuccessful.

(I did have a telephone conversation with Chalder in January of this year, organized as part of the media campaign for the Lancet Psychiatry paper published that month by the PACE team. In Chalder’s memory of the conversation, we talked at length about some of the major concerns examined here. In my memory, she mostly declined to talk about concerns related to the 2011 Lancet paper, pleading poor recall of the details.)

Richard Horton, the editor of The Lancet, was also not interviewed for this story. Last December, his office declined an e-mail request for an interview. A second e-mail seeking comment, sent to Horton last week, was not answered.

***

Experts who have examined the PACE study say it is fraught with problems.

“I’m shocked that the Lancet published it,” said Ronald Davis, a well-known geneticist at Stanford University and the director of the scientific advisory board of the Open Medicine Foundation. The foundation, whose board also includes three Nobel laureates, supports research on ME/CFS and is currently focused on identifying an accurate biomarker for the illness.

“The PACE study has so many flaws and there are so many questions you’d want to ask about it that I don’t understand how it got through any kind of peer review,” added Davis, who became involved in the field after his son became severely ill. “Maybe The Lancet picked reviewers who agreed with the authors and raved about the paper, and the journal went along without digging into the details.”

In an e-mail interview, DePaul University psychology professor Leonard Jason, an expert on the illness, said the study’s statistical anomalies were hard to overlook. “The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria,” wrote Jason, a prolific researcher widely respected among scientists, health officials and patients.

Jason, who was himself diagnosed with the illness in the early 1990s, also noted that researchers cannot simply ignore their own assurances that they will follow specific ethical guidelines. “If you’ve promised to disclose conflicts of interest by promising to follow a protocol, you can’t just decide not to do it,” he said.

Jonathan Edwards, a professor emeritus of connective tissue medicine at University College London, pioneered a novel rheumatoid arthritis treatment in a large clinical trial published in the New England Journal of Medicine in 2004. For the last couple of years, he has been involved in organizing clinical trial research to test the same drug, rituximab, for chronic fatigue syndrome, which shares traits with rheumatoid arthritis and other autoimmune disorders.

When he first read the Lancet paper, Edwards was taken aback: Not only did the trial rely on subjective measures, but participants and therapists all knew which treatment was being administered, unlike in a double-blinded trial. This unblinded design made PACE particularly vulnerable to generating biased results, said Edwards in a phone interview, adding that the newsletter testimonials and other methodological flaws only made things worse.

“It’s a mass of un-interpretability to me,” said Edwards, who last year called the PACE results “valueless” in publicly posted comments. “Within the circle who are involved in this field, it seems there were a group who were prepared to all sing by the hymn sheet and agree that PACE was wonderful. But all the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible.”

Bruce Levin, a professor of biostatistics at Columbia University and an expert in clinical trial design, said that unplanned, post-protocol changes in primary outcomes should be made only when absolutely necessary, and that any such changes inevitably raised questions about interpretation of the results. In any event, he added, it would never be acceptable for such revisions to include “normal range” or “recovery” thresholds that overlapped with the study’s entry criteria.

“I have never seen a trial design where eligibility requirements for a disease alone would qualify some patients for having had a successful treatment,” said Levin, who has been involved in research on the illness and has reviewed the PACE study. “It calls into question the diagnosis of an illness whose patients already rate as ‘recovered’ or ‘within normal range.’ I find it nearly inconceivable that a trial’s data monitoring committee would have approved such a protocol problem if they were aware of it.”

Levin also said the mid-trial publication of the newsletter featuring participant testimonials and positive news about interventions under investigation created legitimate concerns that subsequent responses might have been biased, especially in an unblinded study with subjective outcomes like PACE.

“It is highly inappropriate to publish anything during an ongoing clinical trial,” said Levin. “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism.”

At the least, the PACE researchers should have compared participants’ responses from before and after the newsletter’s publication to assess any resulting bias, he added.

***

Recent U.S. government reports have raised further challenges for the PACE approach. In June, a panel convened by the National Institutes of Health recommended that researchers abandon a core aspect of the PACE trial design—its method of identifying participants through the single symptom of prolonged fatigue, rather than a more detailed set of criteria. This method, the panel’s report noted, could “impair progress and cause harm” because it identifies people with many fatiguing conditions, making it hard to interpret the findings.

Last February, the Institute of Medicine released its own study, commissioned by several health agencies and based on an extensive literature review, which described the illness as a serious organic disease, not a cognitive or behavioral disorder characterized by “unhelpful beliefs” that lead to sedentary behavior. Two members of the IOM panel, in discussing their report with Medscape, cast sharp doubt on the central argument advanced for years by the British mental health professionals: that physical deconditioning alone perpetuates the devastating symptoms.

Ellen Wright Clayton, the panel chair and a professor of pediatrics and law at Vanderbilt University, said lack of activity could not possibly explain the scope and severity of patients’ symptoms. “The level of response is much more than would be seen with deconditioning,” she told Medscape. Peter Rowe, a pediatrician at Johns Hopkins and an expert on the disease, called the deconditioning hypothesis “flawed” and “untenable.”

The PACE investigators have strongly defended the integrity of their research and say that patients and advocacy groups have harassed and vilified them for years without justification. In 2011, The Guardian reported that Sharpe had been stalked by a woman who brought a knife to one of his lectures. A 2013 report in The Sunday Times noted that psychiatrist Simon Wessely, a senior colleague and adviser to the PACE authors, had received death threats, and that “one person rang him up and threatened to castrate him.”

No one is known to have been charged in these and other cases of reported threats or harassment.

 

Correction: The original text indicated that Sharpe did not respond to the December e-mail at all.

************************************************************************

PART TWO:

The Origins of the PACE Trial

Tom Kindlon, six feet tall and bulky, can only stand up for half a minute before dizziness and balance problems force him back down. He has a round face, wire-rimmed glasses, an engaging smile, and beard scruff. Direct light hurts his eyes. He wears a baseball cap to shield them.

Kindlon, 43, still lives with his parents in the two-story, four-bedroom house where he grew up. His mum, Vera, is his primary caretaker. He remains close with his three younger siblings—Ali, 40, and twins David and Deirdre, who are 35. All live nearby and help out when needed.

For the last 15 years, Kindlon has harnessed his limited energy for what he perceives as his primary mission: reviewing, and responding to, the literature on the illness. He has published more than a dozen peer-reviewed letters in scientific publications and regularly posts on the public forums and “rapid response” sections of journal websites, politely debating, dissecting and debunking questionable research claims.

“I haven’t read a fiction book in 20 years,” he noted, during a series of conversations ranging across Skype, Facebook, Twitter, and e-mail. “I need to be blinkered in what I do and don’t read, to concentrate and use my mental energy for this material.”

As a teenager, Kindlon loved playing rugby, cricket, tennis and soccer. When he was 16, he spent five days in western Ireland on a hiking and sailing trip with high school classmates. It was February, damp and chilly, and he was already suffering from a cold or some other bug; back in Dublin, he felt worse and stayed home for several days.

When he returned to school, he discovered something weird: After a round of sports, he now experienced muscle pains and a paralyzing exhaustion unlike anything he’d previously encountered. “I’d be totally whacked by the end of the day,” he recalled.

He saw a physiotherapist and then an orthopedic surgeon, who told him to exercise more. He tried swimming, but that also left him depleted. In 1991, despite his health struggles, he entered Trinity College. He slogged through two years of math studies but suffered more and more from problems with memory and concentration. “I was forgetting things, making silly errors,” he said.

Toward the end of the second year, he could no longer hold a pen in his hand. He developed tendonitis, first in one arm, then in the other. When he drove, pushing the pedals caused severe ankle pain. “Everything was magnified now,” he said. “I was just breaking down.” He took a leave from Trinity. His health continued to slide.

***

Then Kindlon read something about myalgic encephalomyelitis, or ME—an alternate name for chronic fatigue syndrome frequently used in the U.K., meaning “inflammation of the brain and spinal cord, with muscle pain.” A specialist confirmed the diagnosis.

Since there are no approved medical tests, diagnosis has generally been made based on symptoms, after other possibilities have been excluded. A major clue in Kindlon’s case was his experience of a prolonged collapse after sports. Almost all patients report this unusual symptom, called “post-exertional malaise”–a sustained relapse or worsening after a minimal amount of exertion or activity.

It was September, 1994. Tom Kindlon was 22 years old. He could just about drag himself to the toilet a few times a day. He could hold a brief conversation, though he often couldn’t remember what he or anyone else had said.

Soon after his diagnosis, he heard about a local support group called the Irish ME Association. Vera attended a meeting to learn more. She became a fixture at the monthly gatherings, and soon was voted chair of the group; her son was appointed assistant chair. Though his condition gradually stabilized and sometimes even seemed to improve a little, he never felt well enough to attend meetings and worked instead from home.

At the time, the organization only had a few dozen members. “I felt the group could get bigger than just people sitting in circles,” Kindlon said. “We needed to raise awareness. I wanted people’s stories to be told.”

On May 12, 1996, designated by U.K. advocates as International ME Day, the small Irish group held a public event. Vera spoke on national radio. The Kindlons, mother and son, publicized the group’s work, and by 2000 the membership list topped 400.

Through a leadership listserv, Kindlon maintained contact with dozens of patient support and advocacy groups around the U.K. and elsewhere; the network kept him abreast of the major scientific, public health, and political developments related to the illness. Then he learned about the PACE trial.

***

In the mid-1980s, several outbreaks of a disabling and prolonged flu-like illness popped up across the U.S. Although clinicians treating some of the patients believed it was associated with the Epstein-Barr virus, which causes mononucleosis, CDC investigators were unable to identify a link with that or other pathogens.

The CDC team called the mysterious condition “chronic fatigue syndrome” after rejecting the name “myalgic encephalomyelitis,” coined after a similar outbreak at a London hospital in the 1950s. The key symptom of myalgic encephalomyelitis had been identified as extreme muscle fatigue after minimal exertion, with delayed recovery—essentially, a description of post-exertional malaise, Tom Kindlon’s main symptom. The CDC also rejected “post-viral fatigue syndrome,” another common name. In contrast, the World Health Organization, which had years earlier classified “benign myalgic encephalomyelitis” as a neurological disorder, deemed both post-viral fatigue syndrome and chronic fatigue syndrome to be synonyms. (The word “benign” eventually fell out of common use.)

In the U.S., the disease is now often called ME/CFS by government agencies; the recent report from the Institute of Medicine suggested renaming it “systemic exertion intolerance disease,” or SEID. In the U.K., it is often called CFS/ME.

Patients have always hated the name chronic fatigue syndrome. For one thing, the word “fatigue” does not come close to describing the profound depletion of energy that marks the illness. A few years ago, best-selling author and long-time patient Laura Hillenbrand (Unbroken; Seabiscuit) told The New York Times: “This disease leaves people bedridden. I’ve gone through phases where I couldn’t roll over in bed. I couldn’t speak. To have it called ‘fatigue’ is a gross misnomer.”

Patients, clinicians and scientists say the name is also inaccurate because the hallmark is not fatigue itself but more specifically what Tom Kindlon experienced—the relapses known as post-exertional malaise. (Patients also criticize the word ‘malaise,’ like ‘fatigue,’ as inaccurate and inadequate, and many prefer to call the symptom ‘post-exertional relapse.’) Other core symptoms are cognitive and neurological problems, sleep disorders, and in many cases muscle pain.

Researchers have not been able to identify a specific cause—at least in part because investigators have used many different criteria to define the illness and identify study subjects, making it hard to compare results. In many cases, as in the 1980s outbreaks, ME/CFS appears to be triggered by a viral or other infection from which people never recover. Since patients often don’t seek treatment and are not diagnosed until they have been sick for a long time, research on triggering events has often been based on self-reports of an initial infection rather than laboratory confirmation. However, a prospective 2006 study from Australian researchers and the CDC found that 11 percent of more than 250 patients who were followed after acute cases of mononucleosis, Q fever, and Ross River virus met diagnostic criteria for chronic fatigue syndrome six months later.

Although in some cases patients report a gradual start to the illness, a 2011 definition of myalgic encephalomyelitis developed by an international expert committee noted that “most patients have an acute infectious onset with flu-like and/or respiratory symptoms.” In fact, many experts believe ME/CFS is likely a cluster of related illnesses, in which one or more infections, or exposures to toxins, mold, stress, trauma or other physiological insults, spark the immune system into a persistent state of hyper-activation, with the resulting inflammation and other systemic effects causing the symptoms. Like the varying methods for defining the illness, the heterogeneity of potential triggering events among chronic fatigue syndrome populations has also complicated research. Without accurate sub-grouping, the findings from such samples can undermine rather than promote the search for causes, biomarkers and treatments.

The illness can fluctuate over time. Long-term patients sometimes experience periods of moderate remission, but few appear to recover completely. Most treatment has involved symptomatic relief.

Although research has been hampered by limited government support, studies over the years have documented a wide range of biological abnormalities as well as associations with a host of pathogens. But some promising leads have not panned out, most spectacularly several years ago when an apparent association with mouse retroviruses turned out to be the result of lab contamination—a devastating blow to patients.

***

For their part, the PACE investigators have collectively published hundreds of studies and reports about the illness, which they prefer to call chronic fatigue syndrome. In their model, the syndrome starts when people become sick—often from a virus, sometimes from other causes. This short-term illness leaves them exhausted; when the infection or other cause passes and they try to resume normal activity, they feel weakened and symptomatic again. This response is expected given their deconditioned state, according to the model, yet patients become fearful that they are still sick and decide they need more rest.

Then, instead of undergoing a normal recovery, they develop what the PACE authors have called “unhelpful beliefs” or “dysfunctional cognitions”–more specifically, the unhelpful belief that they continue to suffer from an infection or some other medical disease that will get worse if they exert themselves. Patients guided by these faulty cognitions further reduce their activity and, per the theory, become even more deconditioned, ultimately leading to “a chronic fatigue state in which symptoms are perpetuated by a cycle of inactivity, deterioration in exercise tolerance and further symptoms,” noted a 1989 article whose authors included Chalder and Simon Wessely, the PACE investigators’ longtime colleague.

The two rehabilitative therapies were designed to interrupt this downward spiral and restore patients’ sense of control over their health, in part through positive reinforcement and encouragement that recovery was possible. The course of cognitive behavior therapy, known as CBT, was specifically designed and structured to help chronic fatigue syndrome patients rid themselves of the “unhelpful beliefs” that purportedly kept them sedentary, and to encourage them to re-engage with daily life. (Standard forms of cognitive behavior therapy are recommended for helping people deal with all kinds of adversity, including major illness, yet doctors do not suggest that it is an actual treatment for cancer, multiple sclerosis, or renal failure.) The increase in activity known as graded exercise therapy, or GET, sought to counteract the deconditioning by getting people moving again in planned, incremental steps.

Through their extensive writings and their consulting roles with government agencies, Sharpe, Chalder, White, and their colleagues have long exerted a major impact on treatment. In the U.K., the National Health Service has primarily favored cognitive behavior therapy and graded exercise therapy, or related approaches, even in specialized clinics.

In the U.S., the Centers for Disease Control and Prevention has collaborated with White, Sharpe and some of their colleagues for decades. The agency recommends the two treatments on its website and in its now-archived CFS Toolkit for health professionals about how to treat the illness. The toolkit recommends contacting St. Bartholomew’s—the venerable London hospital that is one of White’s professional homes—for more information about graded exercise therapy.

White, the lead author of the Lancet paper, is a professor of psychological medicine at Queen Mary University of London and co-leads the chronic fatigue syndrome service at St. Bartholomew’s. Sharpe is a professor of psychological medicine at Oxford University, and Chalder is a professor of cognitive behavioral psychotherapy at King’s College London. Their faculty webpages currently credit them with, respectively, 90, 366 and 205 publications.

The PACE authors have been referred to as members of the “Wessely school”—or, less politely, the “Wessely cabal”—because of Simon Wessely’s prominence as a pioneer of this treatment approach for chronic fatigue syndrome. Wessely, a professor of psychological medicine at King’s College London, has published more than 700 papers, was knighted in 2013, and is the current president of the Royal College of Psychiatrists.

Over the years, members of the PACE team developed close consulting and financial relationships with insurance companies; they have acknowledged these ties in “conflict of interest” statements in published papers. They have advised insurers that rehabilitative, non-pharmacological therapies can help claimants with chronic fatigue syndrome return to work—as Sharpe noted in a 2002 UNUMProvident report on disability insurance trends.

In his article for the UNUMProvident report, Sharpe also criticized the “ME lobby” for playing a negative role in influencing patients’ self-perceptions of their condition, noting that “the patient’s beliefs may become entrenched and be driven by anger and the need to explain continuing disability.” Sharpe noted that economic and social factors, like receiving financial benefits or accepting the physiological illness claims made by patient groups, also represented roadblocks to both clinical improvement and the resolution of disability insurance claims.

“A strong belief and preoccupation that one has a ‘medical disease’ and a helpless and passive attitude to coping is associated with persistent disability,” Sharpe warned readers of the disability insurance report. “The current system of state benefits, insurance payments and litigation remain potentially major obstacles to effective rehabilitation…If the claimant becomes hostile toward employer or insurer the position is likely to be difficult to retrieve.”

***

Given the medical and social costs of the illness, the government wanted solid evidence from a large trial about treatments that could help people get better. In 2003, the U.K. Medical Research Council announced that it would fund the PACE trial—more formally known as “Comparison of adaptive pacing therapy, cognitive behavior therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome: a randomized trial.”

Three other government agencies–Scotland’s Chief Scientist Office, England’s Department of Health, and the U.K. Department for Work and Pensions—chipped in. The West Midlands Multicentre Research Ethics Committee approved the final study protocol.

The investigators selected two self-reported measures, for physical function and fatigue, as their primary outcomes. For physical function, they chose a section of a widely used questionnaire called the Medical Outcomes Study 36-Item Short Form Health Survey, or SF-36; with this physical function scale, they designated a score of 60 or less out of 100 as representing sufficient disability for trial entry.

For fatigue, they selected the Chalder Fatigue Scale, developed by one of the PACE investigators, on which higher scores represented greater fatigue. The response to each of the scale’s 11 questions would be scored as 0 or 1, and a score of 6 or more was deemed sufficient evidence of disability for trial entry.

In the proposed trial, participants would be randomized into four arms. All would be offered a few meetings with a specialist—the baseline condition ultimately called “specialist medical care.” Participants in three of the arms would receive additional interventions, of up to 14 sessions over six months, with a booster session three months later. Everyone would be assessed one year after entering the trial—that is, six months after the end of the main period of treatment. Home-bound patients were not eligible, since participation required attendance at multiple clinic sessions.

Besides the two rehabilitative treatments of cognitive behavior therapy and graded exercise therapy, the investigators planned to include an intervention based on a popular self-help strategy known as “pacing.” While the first two approaches challenged patients to adjust their thinking and push themselves beyond what they believed they could do, pacing involved accepting and adapting to the physical constraints of the illness, paying attention to symptoms, and not exceeding personal energy reserves to avoid triggering a relapse.

***

Previous studies conducted by the authors and other researchers, although smaller than PACE, had found that graded exercise therapy and cognitive behavior therapy led to modest improvements in self-reported outcomes, as a 2001 review in JAMA noted. But the same review also warned that the positive results on subjective measures in these studies did not mean that participants had actually improved their physical capacities.

“The person may feel better able to cope with daily activities because they have reduced their expectations of what they should achieve, rather than because they have made any recovery as a result of the intervention,” stated the review. “A more objective measure of the effect of any intervention would be whether participants have increased their working hours, returned to work or school, or increased their physical activities.”

Aware of such concerns, the PACE investigators planned to include some measures of physical function and fitness not dependent on subjective impressions.

Beyond the question of how to measure the effects of the intervention, the therapies themselves remained highly controversial among patients. Many understood that cognitive behavior therapy could be a useful tool for coping with a serious condition but resented and dismissed the PACE authors’ suggestion that it could treat the underlying illness. Encouraging an increase in exercise or exertion was even more controversial. Patients considered it dangerous because of the possibility of relapse from post-exertional malaise. In surveys, patients who had received graded exercise therapy were more likely to report that it had made them worse rather than better.

The psychiatrists and other mental health experts acknowledged that patients often felt worse after starting an activity program. To them, the resurgence of symptoms reflected the deconditioned body’s natural response to renewed exertion, not an underlying disease process—a point strongly conveyed to patients. According to the PACE manual for clinicians administering graded exercise therapy, “Participants are encouraged to see symptoms as temporary and reversible, as a result of their current physical weakness, and not as signs of progressive pathology.”

***

Patients and advocates, aware of the previous work of the PACE team, responded to the Medical Research Council’s announcement with alarm. Fearing the research would lead to calls for more funding for cognitive behavior therapy and exercise therapy and nothing else, patient groups demanded that the agency back research into biological causes and treatments of ME/CFS instead–something it was not doing.

“We believe that the money being allocated to the PACE trial is a scandalous way of prioritising the very limited research funding that the MRC [Medical Research Council] have decided to make available for ME/CFS,” declared the ME Association, a major advocacy organization, in a statement widely disseminated on social media. The statement demanded that the trial be halted and the money “held in reserve for research that is likely to be of real benefit to people with ME/CFS.”

Despite the anger in the patient community, the investigators were able to enlist Action For ME, another major advocacy group, to help design the pacing intervention. They called their operationalization of the strategy “adaptive pacing therapy,” or APT.

The trial protocol described the pacing therapy as “essentially an energy management approach, which involves assessment of the link between activity and subsequent symptoms and disability, establishing a stable baseline of activity using a daily diary, with advice to plan and pace activity in order to avoid exacerbations.” But many patients argued that pacing was an inherently personal, flexible approach. Packaging it as a structured “treatment” administered by a “therapist,” with a focus on daily diaries and advance planning, would inevitably alter its effect, they said.

***

Patients and other researchers also objected to the PACE study’s choice of “case definition”—a set of research or diagnostic criteria designed to include everyone with an illness and exclude those without it. Many challenged the decision to identify participants using the single-symptom case definition of chronic fatigue syndrome called the Oxford criteria—the same broad case definition that last June’s NIH report recommended for retirement because it could “impair progress and cause harm.”

Over the years, there have been many definitions proposed for both chronic fatigue syndrome and myalgic encephalomyelitis, for both clinical and research use. The most widely used has been the CDC’s 1994 definition for chronic fatigue syndrome, which required six months of fatigue, plus any four of eight other symptoms: cognitive problems, muscle pain, joint pain, headache, tender lymph nodes, sore throat, post-exertional malaise, and sleep disturbances.

Many patients, researchers and clinicians experienced in treating the illness prefer more recent and restrictive definitions that seek to reduce misdiagnoses by requiring the presence of the core symptom of post-exertional malaise as well as neurological and cognitive dysfunctions, unlike the more flexible CDC definition. In contrast, the Oxford criteria, published in 1991 by PACE investigator Michael Sharpe and colleagues, required only one symptom: six months of medically unexplained, disabling fatigue. Proponents argued that this broad scope ensured that research results could be applied to the largest number of people potentially suffering from the illness. If other symptoms were present, as often happened, the criteria required that fatigue be the primary complaint.

According to DePaul psychologist Leonard Jason, the Oxford criteria blurred the boundaries between “chronic fatigue,” a symptom of many conditions, and the distinct illness known as “chronic fatigue syndrome.” In particular, he said, an Oxford criteria sample would likely include many people with primary depression, which can cause prolonged fatigue and often responds to interventions like those being tested in PACE. (In contrast, many people with ME/CFS get depressed as a secondary result of their illness experience.)

“The Oxford criteria clearly select for a lot of patients with primary depression, and people who are depressed do react very well to CBT and exercise,” said Jason, who has published widely on the ME/CFS case definition problem. Positive outcomes in the sample among depressed patients without ME/CFS could therefore lead to the unwarranted conclusion that the therapies worked for people with the disease, he added.

***

The PACE investigators were aware of these concerns, and they promised to study as well two subgroups of participants from their Oxford criteria sample who met additional case definitions: an updated 2003 version of the CDC’s 1994 definition for chronic fatigue syndrome, and a separate definition for myalgic encephalomyelitis. That way, they hoped to be able to draw conclusions about whether the therapies worked, no matter how the illness was defined.

Yet this approach presented its own challenges. Neither of the two other definitions required fatigue to be the primary symptom, as did the Oxford criteria. The myalgic encephalomyelitis definition did not even include fatigue per se as a symptom at all; post-exertional malaise, not fatigue, was the core symptom. And under the CDC definition, patients could present with any of the other symptoms as their primary complaint, as long as they also experienced fatigue.

Given these major differences in the case definitions, an unknown number of patients might have been screened out of the sample by the Oxford criteria but still met one of the other sets of criteria, making it hard to interpret the subgroup findings, according to other researchers. (The PACE investigators and I debated this methodological issue in an exchange of letters in The New York Times in 2011, after an article I wrote about case definition and the PACE trial.)

Bruce Levin, the Columbia University biostatistician, said the PACE investigators should not have assumed that the experience of a subgroup within an already defined population would match the experience of a group that hadn’t been pre-screened. “I would not accept an extrapolation to people diagnosed with alternative criteria from a subgroup comprising people satisfying both sets of criteria rather than just the alternative set of criteria,” he said, adding that reviewers should catch such questionable assumptions before publication.

Tomorrow: Publication of the PACE trial

TWiV 213: Not bad for a hobby

On the final episode of the year of the science show This Week in Virology, the TWiV team reviews twelve cool virology stories from 2012.

You can find TWiV #213 at www.microbe.tv/twiv.

TWiV 186: From Buda to grinding stumps

On episode #186 of the science show This Week in Virology, the TWiV chiefs tackle reader email about how to pronounce Buda, Texas, grinding tree stumps, and much more.

You can find TWiV #186 at www.microbe.tv/twiv.

Cleaning up after XMRV

The retrovirus XMRV does not cause prostate cancer or chronic fatigue syndrome – that hypothesis was disproved by the finding that the virus was produced in the laboratory in the 1990s by passage of a prostate tumor in nude mice. A trio of new papers on the virus attempt to address questions about the serological detection of XMRV in prostate cancer, and further emphasize that XMRV is not a human pathogen.

Absence of XMRV and Closely Related Viruses in Primary Prostate Cancer Tissues Used to Derive the XMRV-Infected Cell Line 22Rv1. The human cell line 22Rv1, which was established from a human prostate tumor (CWR22), produces infectious XMRV. It was previously shown that DNA from various passages of the prostate tumor in nude mice (called xenografts) did not contain XMRV, but cells from the mice do contain two related proviruses called PreXMRV-1 and PreXMRV-2, which recombined to form XMRV between 1993 and 1996. In a new study, samples of the original prostate tumor CWR22 were examined for the presence of XMRV or related viruses. PCR assays targeting the viral gag, pol, and env sequences failed to provide evidence of XMRV in CWR22 tissue. These assays could detect endogenous murine leukemia virus DNA in mouse DNA, indicating that the CWR22 tumor contained neither XMRV nor related viruses. In addition, no XMRV sequences were detected when sections from the CWR22 tumor were examined by in situ hybridization. The same assay previously detected XMRV sequences in stromal cells of prostate tumors. The authors conclude that “Our findings conclusively show an absence of XMRV or related viruses in prostate of patient CWR22, thereby strongly supporting a mouse origin of XMRV.”

An important question not addressed by this study is why XMRV was originally detected in multiple prostate tumors obtained from patients at the Cleveland Clinic. The authors seem to be working on this problem, as they state that “…the sequence of XMRV present in 22Rv1 cells is virtually identical with XMRV cloned using human prostate samples, thus suggesting laboratory contamination with XMRV nucleic acid from 22Rv1 cells as the source. Further experiments designed to confirm or refute this hypothesis are currently underway.”

No biological evidence of XMRV in blood or prostatic fluid from prostate cancer patients. Samples from individuals with prostate cancer were tested for the presence of infectious XMRV and for antibodies against the virus. Neither infectious virus nor antibodies were detected in blood plasma (n = 29) or prostate secretions (n = 5). Among these were five specimens that had previously tested positive for XMRV DNA, including two from the original study. The authors conclude that the results “support the conclusion from other studies that XMRV has not entered the human population”.

Susceptibility of human lymphoid tissue cultured ex vivo to Xenotropic murine leukemia virus-related virus (XMRV) infection. Although XMRV is not known to cause human disease, whether it has the potential to do so is unknown. The virus can infect a variety of cultured human cells including peripheral blood mononuclear cells and neuronal cells. In this study the authors placed human tonsillar tissue in culture and infected it with XMRV. Proviral (integrated) DNA could be detected in the cells several weeks after infection and virus particles were released into the medium. However, these released viruses could not infect fresh tonsillar tissue, possibly due to modification by innate antiviral restriction factors such as APOBEC, which is known to inhibit XMRV infectivity.

Based on their findings the authors conclude that “laboratories working with XMRV producing cell lines should be aware of the potential biohazard risk of working with this replication-competent retrovirus”.

It is clear that XMRV does not cause chronic fatigue syndrome; the original findings of Lombardi and colleagues linking the virus to this disease have been retracted by the journal. However, there are still two papers in the literature that report the presence of XMRV in prostate tissue – the original XMRV discovery paper and one from Ila Singh’s laboratory. In both papers XMRV detection in tissues was accomplished using serological procedures. Based on the papers summarized here, the assays did not detect XMRV – but a satisfactory explanation for the positive signals has not yet been provided.

TWiV 176: Ave, magi virorum!

On episode #176 of the podcast This Week in Virology, Vincent, Alan, and Rich answer listener email about MS, CFS, EBV, B cells, virii, influenza B, scientific papers, and more.

You can find TWiV #176 at www.microbe.tv/twiv.

TWiV 165: The email zone

Hosts: Vincent Racaniello, Dickson Despommier, Rich Condit, and Alan Dove

Vincent, Dickson, Rich, and Alan answer listener questions about XMRV, cytomegalovirus, latency, shingles vaccine, myxomavirus and rabbits, and more.

Please help us by taking our listener survey.

Click the arrow above to play, or right-click to download TWiV 165 (61 MB .mp3, 102 minutes).

Subscribe to TWiV (free) in iTunes, at the Zune Marketplace, by the RSS feed, by email, or listen on your mobile device with the Microbeworld app.

Links for this episode:

Weekly Science Picks

Dickson – Creation
Rich – America’s Science Decline
Alan – Out of context science
Vincent – The Scientist Top 10 Innovations 2011

Listener Pick of the Week

Jim – Christoph Adami: Finding life we can’t imagine (TED)
Tim – Patient Zero (Radiolab)
Mary – Natural Obsessions by Natalie Angier
Jimmy – Science Exchange

Send your virology questions and comments (email or mp3 file) to twiv@microbe.tv, or call them in to 908-312-0760. You can also post articles that you would like us to discuss at microbeworld.org and tag them with twiv.

TWiV 164: Six steps forward, four steps back

Hosts: Vincent Racaniello, Rich Condit, and Alan Dove

Vincent, Alan, and Rich review ten compelling virology stories of 2011.

Please help us by taking our listener survey.

Click the arrow above to play, or right-click to download TWiV 164 (60 MB .mp3, 99 minutes).

Subscribe to TWiV (free) in iTunes, at the Zune Marketplace, by the RSS feed, by email, or listen on your mobile device with the Microbeworld app.

Ten virology stories of 2011:

  1. XMRV, CFS, and prostate cancer (TWiV 119, 123, 136, 150)
  2. Influenza H5N1, ferrets, and the NSABB (TWiV 159)
  3. The Panic Virus (TWiV 117)
  4. Polio eradication (TWiV 127, 149)
  5. Viral oncotherapy (TWiV 124, 131, 142, 156)
  6. Hepatitis C virus (TWiV 130, 137, 141)
  7. Zinc finger nuclease and HIV therapy (TWiV 144)
  8. Bacteria help viruses (TWiV 154)
  9. Human papillomaviruses (TWiV 126)
  10. Combating dengue with Wolbachia (TWiV 115, 147)

Links for this episode:

Weekly Science Picks

Rich – Fundamentals of Molecular Virology by Nicholas H. Acheson
Alan – Fetch, with Ruff Ruffman
Vincent – Year end reviews at Rule of 6ix and Contagions

Listener Pick of the Week

Garren – Trillion-frame-per-second video
Judi – iBioMagazine
Ricardo – Brain Pickings’ 11 best science books of 2011

Send your virology questions and comments (email or mp3 file) to twiv@microbe.tv, or call them in to 908-312-0760. You can also post articles that you would like us to discuss at microbeworld.org and tag them with twiv.

This year in virology

For some time I have thought about reviewing the topics covered on virology blog in 2011, not only to get a sense of what I thought was significant, but more importantly, to highlight areas that need more coverage. I went through all the articles I wrote in 2011, put them in subject categories, and listed them by number of articles. The results are both obvious and surprising.

I wrote most frequently about the retrovirus XMRV and its possible role in chronic fatigue syndrome and prostate cancer. This extensive coverage was warranted because we had an opportunity to learn how disease etiology is established, followed by development of therapeutics. By the end of the year we learned that XMRV does not cause human disease, but the journey to that point was highly instructive.

The next most frequently visited topic on virology blog was influenza. Writing often about this virus makes sense because it is a common human infection that occurs every year, and controlling it is a continuing goal of virology research.

There were five posts noting the deaths of virologists, colleagues, or others I thought had made a substantial impact on my career.

I wrote more about poliovirus than any other virus except XMRV and influenza. Eradication of poliomyelitis continues to be difficult and faces periodic setbacks.

I only wrote three articles about topics in basic virology.

Like many others, I find the biggest viruses and their virophages compelling.

The past year saw the release of Contagion, a movie about a virus outbreak. Look for an analysis on TWiV in 2012.

The state of science education and science funding is becoming more of a concern. It is not a topic I write about often – I prefer to focus on the science of virology – but for future scientists it is extremely important.

The other posts covered a variety of topics and viruses, including HIV, human papillomaviruses, hepatitis C virus, and smallpox virus.

What have I learned from looking back? The best covered viruses – XMRV, influenza, and poliovirus – deserve the attention. I am surprised that there were so few articles on important viruses such as HIV, HCV, rotaviruses, and herpesviruses. That shortcoming will have to change. I did not write enough about basic virology. One could argue that teaching a virology course is enough – but I think that concise, informative articles on basic virology are very useful. I’ll try to do more of that in 2012. There is one topic I’d like to write less about, but over which I have little control – the passing of scientists.

Thank you for coming here to learn about virology.