An open letter to Psychological Medicine, again!

Last week, Virology Blog posted an open letter to the editors of Psychological Medicine. The letter called on them to retract the misleading findings that participants in the PACE trial for ME/CFS had “recovered” from cognitive behavior therapy and graded exercise therapy. More than 100 scientists, clinicians, other experts and patient organizations signed the letter.

Three days later, I received a response from Sir Robin Murray, the UK editor of Psychological Medicine. Here’s what he wrote:

Thank you for your letter and your continuing interest in the paper on the PACE Trial which Psychological Medicine published. I was interested to learn that Wilshire and colleagues have now published a reanalysis of the original data from the PACE Trial in the journal Fatigue: Biomedicine, Health & Behavior, a publication that I was not previously aware of. Presumably, interested parties will now be able to read this reanalysis and compare the scientific quality of the re-analysis with that of the original. My understanding is that this is the way that science advances.

This is an unacceptable response. Sir Robin Murray is misguided if he believes that science advances by allowing misleading claims based on manipulated data to stand in the literature. When researchers include participants who were already “recovered” on key indicators at baseline, the findings are by definition so flawed and nonsensical they must be retracted.

That the editors of Psychological Medicine do not grasp that it is impossible to be “disabled” and “recovered” simultaneously on an outcome measure is astonishing and deeply troubling. It is equally astonishing that the PACE authors now defend themselves, as noted in a New York Times opinion piece on Sunday, by arguing that this overlap doesn’t matter because there were also other recovery criteria.

In response to the comments from Psychological Medicine, we are reposting the open letter with 17 added individuals and 24 more organizations, for a total of 142 signatories. These include two lawyers from Queen Mary University of London, the academic home of lead PACE investigator Peter White, along with other experts and ME/CFS patient groups from around the world.

 

Sir Robin Murray and Dr. Kenneth Kendler
Psychological Medicine
Cambridge University Press
University Printing House
Shaftesbury Road
Cambridge CB2 8BS
UK

Dear Sir Robin Murray and Dr. Kendler:

In 2013, Psychological Medicine published an article called “Recovery from chronic fatigue syndrome after treatments given in the PACE trial.”[1] In the paper, White et al. reported that graded exercise therapy (GET) and cognitive behavioural therapy (CBT) each led to recovery in 22% of patients, compared with only 7% in a comparison group. The two treatments, they concluded, offered patients “the best chance of recovery.”

PACE was the largest clinical trial ever conducted for chronic fatigue syndrome (also known as myalgic encephalomyelitis, or ME/CFS), with the first results published in The Lancet in 2011.[2] It was an open-label study with subjective primary outcomes, a design that requires strict vigilance to prevent the possibility of bias. Yet PACE suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings.[3] Despite these flaws, White et al.’s claims of recovery in Psychological Medicine have greatly impacted treatment, research, and public attitudes towards ME/CFS.

According to the protocol for the PACE trial, participants needed to meet specific benchmarks on four different measures in order to be defined as having achieved “recovery.”[4] But in Psychological Medicine, White et al. significantly relaxed each of the four required outcomes, making “recovery” far easier to achieve. No PACE oversight committees appear to have approved the redefinition of recovery; at least, no such approvals were mentioned. White et al. did not publish the results they would have gotten using the original protocol approach, nor did they include sensitivity analyses, the standard statistical method for assessing the impact of such changes.

Patients, advocates and some scientists quickly pointed out these and other problems. In October of 2015, Virology Blog published an investigation of PACE, by David Tuller of the University of California, Berkeley, that confirmed the trial’s methodological lapses.[5] Since then, more than 12,000 patients and supporters have signed a petition calling for Psychological Medicine to retract the questionable recovery claims. Yet the journal has taken no steps to address the issues.

Last summer, Queen Mary University of London released anonymized PACE trial data under a tribunal order arising from a patient’s freedom-of-information request. In December, an independent research group used that newly released data to calculate the recovery results per the original methodology outlined in the protocol.[6] This reanalysis documented what was already clear: that the claims of recovery could not be taken at face value.

In the reanalysis, which appeared in the journal Fatigue: Biomedicine, Health & Behavior, Wilshire et al. reported that the PACE protocol’s definition of “recovery” yielded recovery rates of 7% or less for all arms of the trial. Moreover, in contrast to the findings reported in Psychological Medicine, the PACE interventions offered no statistically significant benefits. In conclusion, noted Wilshire et al., “the claim that patients can recover as a result of CBT and GET is not justified by the data, and is highly misleading to clinicians and patients considering these treatments.”

In short, the PACE trial had null results for recovery, according to the protocol definition selected by the authors themselves. Besides the inflated recovery results reported in Psychological Medicine, the study suffered from a host of other problems, including the following:

*In a paradox, the revised recovery thresholds for physical function and fatigue, two of the four recovery measures, were so lax that patients could deteriorate during the trial and yet be counted as “recovered” on these outcomes. In fact, 13% of participants met one or both of these recovery thresholds at baseline. White et al. did not disclose these salient facts in Psychological Medicine. We know of no other studies in the clinical trial literature in which recovery thresholds for an indicator actually represented worse health status than the entry thresholds for serious disability on the same indicator.

*During the trial, the authors published a newsletter for participants that included glowing testimonials from earlier participants about their positive outcomes in the trial.[7] An article in the same newsletter reported that a national clinical guidelines committee had already recommended CBT and GET as effective; the newsletter article did not mention adaptive pacing therapy, an intervention developed specifically for the PACE trial. The participant testimonials and the newsletter article could have biased the responses of an unknown number of the two hundred or more people still undergoing assessments—about a third of the total sample.

*The PACE protocol included a promise that the investigators would inform prospective participants of “any possible conflicts of interest.” Key PACE investigators have had longstanding relationships with major insurance companies, advising them on how to handle disability claims related to ME/CFS. However, the trial’s consent forms did not mention these self-evident conflicts of interest. It is irrelevant that insurance companies were not directly involved in the trial and insufficient that the investigators disclosed these links in their published research. Given this serious omission, the consent obtained from the 641 trial participants is of questionable legitimacy.

Such flaws are unacceptable in published research; they cannot be defended or explained away. The PACE investigators have repeatedly tried to address these concerns. Yet their efforts to date, in journal correspondence, news articles, blog posts, and most recently in their response to Wilshire et al. in Fatigue,[8] have been incomplete and unconvincing.

The PACE trial compounded these errors by using a case definition for the illness that required only one symptom: six months of disabling, unexplained fatigue. A 2015 report from the U.S. National Institutes of Health recommended abandoning this single-symptom approach for identifying patients.[9] The NIH report concluded that this broad case definition generated heterogeneous samples of people with a variety of fatiguing illnesses, and that using it to study ME/CFS could “impair progress and cause harm.”

PACE included sub-group analyses of two alternate and more specific case definitions, but these case definitions were modified in ways that could have impacted the results. Moreover, an unknown number of prospective participants might have met these alternate criteria but been excluded from the study by the initial screening.

To protect patients from ineffective and possibly harmful treatments, White et al.’s recovery claims cannot stand in the literature. Therefore, we are asking Psychological Medicine to retract the paper immediately. Patients and clinicians deserve and expect accurate and unbiased information on which to base their treatment decisions. We urge you to take action without further delay.

Sincerely,

Dharam V. Ablashi, DVM, MS, Dip Bact
Scientific Director
HHV-6 Foundation
Former Senior Investigator
National Cancer Institute
National Institutes of Health
Bethesda, Maryland, USA

James N. Baraniuk, MD
Professor, Department of Medicine
Georgetown University
Washington, D.C., USA

Lisa F. Barcellos, MPH, PhD
Professor of Epidemiology
School of Public Health
California Institute for Quantitative Biosciences
University of California, Berkeley
Berkeley, California, USA

Lucinda Bateman, MD
Medical Director
Bateman Horne Center
Salt Lake City, Utah, USA

Alison C. Bested, MD, FRCPC
Clinical Associate Professor
Faculty of Medicine
University of British Columbia
Vancouver, British Columbia, Canada

Molly Brown, PhD
Assistant Professor
Department of Psychology
DePaul University
Chicago, Illinois, USA

John Chia, MD
Clinician and Researcher
EVMED Research
Lomita, California, USA

Todd E. Davenport, PT, DPT, MPH, OCS
Associate Professor
Department of Physical Therapy
University of the Pacific
Stockton, California, USA

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University
Stanford, California, USA

Simon Duffy, PhD, FRSA
Director
Centre for Welfare Reform
Sheffield, UK

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London
London, UK

Derek Enlander, MD
New York, New York, USA

Meredyth Evans, PhD
Clinical Psychologist and Researcher
Chicago, Illinois, USA

Kenneth J. Friedman, PhD
Associate Professor of Physiology and Pharmacology (retired)
New Jersey Medical School
University of Medicine and Dentistry of New Jersey
Newark, New Jersey, USA

Robert F. Garry, PhD
Professor of Microbiology and Immunology
Tulane University School of Medicine
New Orleans, Louisiana, USA

Keith Geraghty, PhD
Honorary Research Fellow
Division of Population Health, Health Services Research & Primary Care
School of Health Sciences
University of Manchester
Manchester, UK

Ian Gibson, PhD
Former Member of Parliament for Norwich North
Former Dean, School of Biological Sciences
University of East Anglia
Honorary Senior Lecturer and Associate Tutor
Norwich Medical School
University of East Anglia
Norwich, UK

Rebecca Goldin, PhD
Professor of Mathematics
George Mason University
Fairfax, Virginia, USA

Ellen Goudsmit, PhD, FBPsS
Health Psychologist (retired)
Former Visiting Research Fellow
University of East London
London, UK

Maureen Hanson, PhD
Liberty Hyde Bailey Professor
Department of Molecular Biology and Genetics
Cornell University
Ithaca, New York, USA

Malcolm Hooper, PhD
Emeritus Professor of Medicinal Chemistry
University of Sunderland
Sunderland, UK

Leonard A. Jason, PhD
Professor of Psychology
DePaul University
Chicago, Illinois, USA

Michael W. Kahn, MD
Assistant Professor of Psychiatry
Harvard Medical School
Boston, Massachusetts, USA

Jon D. Kaiser, MD
Clinical Faculty
Department of Medicine
University of California, San Francisco
San Francisco, California, USA

David L. Kaufman, MD
Medical Director
Open Medicine Institute
Mountain View, California, USA

Betsy Keller, PhD
Department of Exercise and Sports Sciences
Ithaca College
Ithaca, New York, USA

Nancy Klimas, MD
Director, Institute for Neuro-Immune Medicine
Nova Southeastern University
Director, Miami VA Medical Center GWI and CFS/ME Program
Miami, Florida, USA

Andreas M. Kogelnik, MD, PhD
Director and Chief Executive Officer
Open Medicine Institute
Mountain View, California, USA

Eliana M. Lacerda, MD, MSc, PhD
Clinical Assistant Professor
Disability & Eye Health Group/Clinical Research Department
Faculty of Infectious and Tropical Diseases
London School of Hygiene & Tropical Medicine
London, UK

Charles W. Lapp, MD
Medical Director
Hunter-Hopkins Center
Charlotte, North Carolina, USA
Assistant Consulting Professor
Department of Community and Family Medicine
Duke University School of Medicine
Durham, North Carolina, USA

Bruce Levin, PhD
Professor of Biostatistics
Columbia University
New York, New York, USA

Alan R. Light, PhD
Professor of Anesthesiology
Professor of Neurobiology and Anatomy
University of Utah
Salt Lake City, Utah, USA

Vincent C. Lombardi, PhD
Director of Research
Nevada Center for Biomedical Research
Reno, Nevada, USA

Alex Lubet, PhD
Professor of Music
Head, Interdisciplinary Graduate Group in Disability Studies
Affiliate Faculty, Center for Bioethics
Affiliate Faculty, Center for Cognitive Sciences
University of Minnesota
Minneapolis, Minnesota, USA

Steven Lubet
Williams Memorial Professor of Law
Northwestern University Pritzker School of Law
Chicago, Illinois, USA

Sonya Marshall-Gradisnik, PhD
Professor of Immunology
Co-Director, National Centre for Neuroimmunology and Emerging Diseases
Griffith University
Queensland, Australia

Patrick E. McKnight, PhD
Professor of Psychology
George Mason University
Fairfax, Virginia, USA

Jose G. Montoya, MD, FACP, FIDSA
Professor of Medicine
Division of Infectious Diseases and Geographic Medicine
Stanford University School of Medicine
Stanford, California, USA

Zaher Nahle, PhD, MPA
Vice President for Research and Scientific Programs
Solve ME/CFS Initiative
Los Angeles, California, USA

Henrik Nielsen, MD
Specialist in Internal Medicine and Rheumatology
Copenhagen, Denmark

James M. Oleske, MD, MPH
François-Xavier Bagnoud Professor of Pediatrics
Senator of RBHS Research Centers, Bureaus, and Institutes
Director, Division of Pediatrics Allergy, Immunology & Infectious Diseases
Department of Pediatrics
Rutgers New Jersey Medical School
Newark, New Jersey, USA

Elisa Oltra, PhD
Professor of Molecular and Cellular Biology
Catholic University of Valencia School of Medicine
Valencia, Spain

Richard Podell, MD, MPH
Clinical Professor
Department of Family Medicine
Rutgers Robert Wood Johnson Medical School
New Brunswick, New Jersey, USA

Nicole Porter, PhD
Psychologist in Private Practice
Rolling Ground, Wisconsin, USA

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University
New York, New York, USA

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley
Berkeley, California, USA

Anders Rosén, MD
Professor of Inflammation and Tumor Biology
Department of Clinical and Experimental Medicine
Division of Cell Biology
Linköping University
Linköping, Sweden

Peter C. Rowe, MD
Professor of Pediatrics
Johns Hopkins University School of Medicine
Baltimore, Maryland, USA

William Satariano, PhD
Professor of Epidemiology and Community Health
University of California, Berkeley
Berkeley, California, USA

Ola Didrik Saugstad, MD, PhD, FRCPE
Professor of Pediatrics
University of Oslo
Director and Department Head
Department of Pediatric Research
University of Oslo and Oslo University Hospital
Oslo, Norway

Charles Shepherd, MB, BS
Honorary Medical Adviser to the ME Association
Buckingham, UK

Christopher R. Snell, PhD
Scientific Director
WorkWell Foundation
Ripon, California, USA

Donald R. Staines, MBBS, MPH, FAFPHM, FAFOEM
Clinical Professor
Menzies Health Institute Queensland
Co-Director, National Centre for Neuroimmunology and Emerging Diseases
Griffith University
Queensland, Australia

Philip B. Stark, PhD
Professor of Statistics
University of California, Berkeley
Berkeley, California, USA

Eleanor Stein, MD, FRCP(C)
Psychiatrist in Private Practice
Assistant Clinical Professor
University of Calgary
Calgary, Alberta, Canada

Staci Stevens, MA
Founder, Exercise Physiologist
Workwell Foundation
Ripon, California, USA

Julian Stewart, MD, PhD
Professor of Pediatrics, Physiology and Medicine
Associate Chairman for Patient Oriented Research
Director, Center for Hypotension
New York Medical College
Hawthorne, NY, USA

Leonie Sugarman, PhD
Emeritus Associate Professor of Applied Psychology
University of Cumbria
Carlisle, UK

John Swartzberg, MD
Clinical Professor Emeritus
School of Public Health
University of California, Berkeley
Berkeley, California, USA

Ronald G. Tompkins, MD, ScD
Summer M Redstone Professor of Surgery
Harvard Medical School
Boston, Massachusetts, USA

David Tuller, DrPH
Lecturer in Public Health and Journalism
University of California, Berkeley
Berkeley, California, USA

Rosemary A. Underhill, MB, BS, MRCOG, FRCSE
Physician and Independent Researcher
Palm Coast, Florida, USA

Rosamund Vallings, MNZM, MB, BS
General Practitioner
Auckland, New Zealand

Michael VanElzakker, PhD
Research Fellow, Psychiatric Neuroscience Division
Harvard Medical School & Massachusetts General Hospital
Instructor, Tufts University Psychology
Boston, Massachusetts, USA

Mark VanNess, PhD
Professor of Health, Exercise & Sports Sciences
University of the Pacific
Stockton, California, USA
Workwell Foundation
Ripon, California, USA

Mark Vink, MD
Family Physician
Soerabaja Research Center
Amsterdam, Netherlands

Frans Visser, MD
Cardiologist
Stichting Cardiozorg
Hoofddorp, Netherlands

Tony Ward, MA (Hons), PhD, DipClinPsyc
Registered Clinical Psychologist
Professor of Clinical Psychology
School of Psychology
Victoria University of Wellington
Wellington, New Zealand
Adjunct Professor, School of Psychology
University of Birmingham
Birmingham, UK
Adjunct Professor, School of Psychology
University of Kent
Canterbury, UK

William Weir, FRCP
Infectious Disease Consultant
London, UK

John Whiting, MD
Specialist Physician
Private Practice
Brisbane, Australia

Carolyn Wilshire, PhD
Senior Lecturer
School of Psychology
Victoria University of Wellington
Wellington, New Zealand

Michael Zeineh, MD, PhD
Assistant Professor
Department of Radiology
Stanford University
Stanford, California, USA

Marcie Zinn, PhD
Research Consultant in Experimental Electrical Neuroimaging and Statistics
Center for Community Research
DePaul University
Chicago, Illinois, USA
Executive Director
Society for Neuroscience and Psychology in the Performing Arts
Dublin, California, USA

Mark Zinn, MM
Research Consultant in Experimental Electrophysiology
Center for Community Research
DePaul University
Chicago, Illinois, USA

New individuals added 23 March 2017

Norman E. Booth, PhD, FInstP
Emeritus Fellow in Physics
Mansfield College
University of Oxford
Oxford, UK

Joan Crawford, CPsychol, CEng, CSci, MA, MSc
Chartered Counselling Psychologist
Chronic Pain Management Service
St Helens Hospital
St Helens, UK

Lucy Dechene, PhD
Professor of Mathematics (retired)
Fitchburg State University
Fitchburg, Massachusetts, USA

Valerie Eliot Smith
Barrister and Visiting Scholar
Centre for Commercial Law Studies
Queen Mary University of London
London, UK

Margaret C. Fernald, PhD
Clinical and Research Psychologist
University of Maine
Orono, Maine, USA

Simin Ghatineh, MSc, PhD
Biochemist
London, UK

Alan Gurwitt, M.D.
Former Clinical Child Psychiatry Faculty Member
Yale Child Study Center, New Haven, Connecticut
University of Connecticut School of Medicine, Farmington, Connecticut
Harvard Medical School, Boston, Massachusetts
Co-author of primers on Adult and Pediatric ME/CFS
Clinician in Private Practice (retired)
Boston, Massachusetts, USA

Geoffrey Hallmann, LLB, DipLegPrac
Former Lawyer (Disability and Compensation)
Lismore, Australia

Susan Levine, MD
Clinician in Private Practice
New York, New York, USA
Visiting Fellow
Cornell University
Ithaca, New York, USA

Marvin S. Medow, Ph.D.
Professor of Pediatrics and Physiology
Chairman, New York Medical College IRB
Associate Director of The Center for Hypotension
New York Medical College
Hawthorne, New York, USA

Sarah Myhill MB BS
Clinician in Private Practice
Knighton, UK

Pamela Phillips, Dip, Dip. MSc MBACP (registered)
Counsellor in Private Practice
London, UK

Gwenda L Schmidt-Snoek, PhD
Researcher
Former Assistant Professor of Psychology
Hope College
Holland, Michigan, USA

Robin Callender Smith, PhD
Professor of Media Law
Centre for Commercial Law Studies
Queen Mary University of London
Barrister and Information Rights Judge
London, UK

Samuel Tucker, MD
Former Assistant Clinical Professor of Psychiatry
University of California, San Francisco
San Francisco, California, USA

AM Uyttersprot, MD
Neuropsychiatrist
AZ Jan Portaels
Vilvoorde, Belgium

Paul Wadeson, BSc, MBChB, MRCGP
GP Principal
Ash Trees Surgery
Carnforth, UK

 

ME/CFS Patient Organizations

25% ME Group
UK

Emerge Australia
Australia

European ME Alliance:

Belgium ME/CFS Association
Belgium

ME Foreningen
Denmark

Suomen CFS-Yhdistys
Finland

Fatigatio e.V.
Germany

Het Alternatief
Netherlands

Icelandic ME Association
Iceland

Irish ME Trust
Ireland

Associazione Malati di CFS
Italy

Norges ME-forening
Norway

Liga SFC
Spain

Riksföreningen för ME-patienter
Sweden

Verein ME/CFS Schweiz
Switzerland

Invest in ME Research
UK

Hope 4 ME & Fibro Northern Ireland
UK

Irish ME/CFS Association
Ireland

Massachusetts CFIDS/ME & FM Association
USA

ME Association
UK

ME/cvs Vereniging
Netherlands

National ME/FM Action Network
Canada

New Jersey ME/CFS Association
USA

Pandora Org
USA

Phoenix Rising
International membership representing many countries

Solve ME/CFS Initiative
USA

Tymes Trust (The Young ME Sufferers Trust)
UK

Wisconsin ME and CFS Association
USA

New Organizations added 23 March 2017

Action CND
Canada

Associated New Zealand ME Society
New Zealand

Chester MESH (ME self-help) group
Chester, UK

German Society for ME/CFS (Deutsche Gesellschaft für ME/CFS)
Germany

Lost Voices Stiftung
Germany

M.E. Victoria Association
Canada

ME North East
UK

ME Research UK
UK

ME Self Help Group Nottingham
UK

ME/CFS and Lyme Association of WA, Inc.
Australia

ME/CFS (Australia) Ltd
Australia

ME/CFS Australia (SA), Inc.
Australia

ME/CVS Stichting Nederland
Netherlands

ME/FM Myalgic Encephalomyelitis and Fibromyalgia Society of British Columbia
Canada

MEAction
International membership representing many countries 
 
Millions Missing Canada
Canada
 
National CFIDS Foundation, Inc.
USA
 
North London ME Network
UK
 
OMEGA (Oxfordshire ME Group for Action)
UK
 
Open Medicine Foundation
USA

Quebec ME Association
Canada
 
The York ME Community
UK
 
Welsh Association of ME & CFS Support
UK

Organization added 29 March 2017

Support Group ME and Disability
(Steungroep ME en Arbeidsongeschiktheid)
Groningen, Netherlands

[1] White PD, Goldsmith K, Johnson AL, et al. 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychological Medicine 43(10): 2227-2235.

[2] White PD, Goldsmith KA, Johnson AL, et al. 2011. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. The Lancet 377: 823–836

[3] Racaniello V. 2016. An open letter to The Lancet, again. Virology Blog, 10 Feb. Available at: http://www.virology.ws/2016/02/10/open-letter-lancet-again/ (accessed on 2/24/17).

[4] White PD, Sharpe MC, Chalder T, et al. 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurology 7: 6.

[5] Tuller D. 2015. Trial by error: the troubling case of the PACE chronic fatigue syndrome trial. Virology Blog, 21-23 Oct. Available at: http://www.virology.ws/2015/10/21/trial-by-error-i/ (accessed on 2/24/17).

[6] Wilshire C, Kindlon T, Matthees A, McGrath S. 2016. Can patients with chronic fatigue syndrome really recover after graded exercise or cognitive behavioural therapy? A critical commentary and preliminary re-analysis of the PACE trial. Fatigue: Biomedicine, Health & Behavior; published online 14 Dec. Available at: http://www.tandfonline.com/doi/full/10.1080/21641846.2017.1259724 (accessed on 2/24/17).

[7] PACE Participants Newsletter. December 2008. Issue 3. Available at: http://www.wolfson.qmul.ac.uk/images/pdfs/participantsnewsletter3.pdf (accessed on 2/24/17).

[8] Sharpe M, Chalder T, Johnson AL, et al. 2017. Do more people recover from chronic fatigue syndrome with cognitive behaviour therapy or graded exercise therapy than with other treatments? Fatigue: Biomedicine, Health & Behavior; published online 15 Feb. Available at: http://www.tandfonline.com/doi/full/10.1080/21641846.2017.1288629 (accessed on 2/24/17).

[9] Green CR, Cowan P, Elk R. 2015. National Institutes of Health Pathways to Prevention Workshop: Advancing the research on myalgic encephalomyelitis/chronic fatigue syndrome. Annals of Internal Medicine 162: 860-865.

DePaul University
Chicago, Illinois, USA

John Chia, MD
Clinician and Researcher
EVMED Research
Lomita, California, USA

Todd E. Davenport, PT, DPT, MPH, OCS
Associate Professor
Department of Physical Therapy
University of the Pacific
Stockton, California, USA

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University
Stanford, California, USA

Simon Duffy, PhD, FRSA
Director
Centre for Welfare Reform
Sheffield, UK

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London
London, UK

Derek Enlander, MD
New York, New York, USA

Meredyth Evans, PhD
Clinical Psychologist and Researcher
Chicago, Illinois, USA

Kenneth J. Friedman, PhD
Associate Professor of Physiology and Pharmacology (retired)
New Jersey Medical School
University of Medicine and Dentistry of New Jersey
Newark, New Jersey, USA

Robert F. Garry, PhD
Professor of Microbiology and Immunology
Tulane University School of Medicine
New Orleans, Louisiana, USA

Keith Geraghty, PhD
Honorary Research Fellow
Division of Population Health, Health Services Research & Primary Care
School of Health Sciences
University of Manchester
Manchester, UK

Ian Gibson, PhD
Former Member of Parliament for Norwich North
Former Dean, School of Biological Sciences
University of East Anglia
Honorary Senior Lecturer and Associate Tutor
Norwich Medical School
University of East Anglia
Norwich, UK

Rebecca Goldin, PhD
Professor of Mathematics
George Mason University
Fairfax, Virginia, USA

Ellen Goudsmit, PhD, FBPsS
Health Psychologist (retired)
Former Visiting Research Fellow
University of East London
London, UK

Maureen Hanson, PhD
Liberty Hyde Bailey Professor
Department of Molecular Biology and Genetics
Cornell University
Ithaca, New York, USA

Malcolm Hooper, PhD
Emeritus Professor of Medicinal Chemistry
University of Sunderland
Sunderland, UK

Leonard A. Jason, PhD
Professor of Psychology
DePaul University
Chicago, Illinois, USA

Michael W. Kahn, MD
Assistant Professor of Psychiatry
Harvard Medical School
Boston, Massachusetts, USA

Jon D. Kaiser, MD
Clinical Faculty
Department of Medicine
University of California, San Francisco
San Francisco, California, USA

David L. Kaufman, MD
Medical Director
Open Medicine Institute
Mountain View, California, USA

Betsy Keller, PhD
Department of Exercise and Sports Sciences
Ithaca College
Ithaca, New York, USA

Nancy Klimas, MD
Director, Institute for Neuro-Immune Medicine
Nova Southeastern University
Director, Miami VA Medical Center GWI and CFS/ME Program
Miami, Florida, USA

Andreas M. Kogelnik, MD, PhD
Director and Chief Executive Officer
Open Medicine Institute
Mountain View, California, USA

Eliana M. Lacerda, MD, MSc, PhD
Clinical Assistant Professor
Disability & Eye Health Group/Clinical Research Department
Faculty of Infectious and Tropical Diseases
London School of Hygiene & Tropical Medicine
London, UK

Charles W. Lapp, MD
Medical Director
Hunter-Hopkins Center
Charlotte, North Carolina, USA
Assistant Consulting Professor
Department of Community and Family Medicine
Duke University School of Medicine
Durham, North Carolina, USA

Bruce Levin, PhD
Professor of Biostatistics
Columbia University
New York, New York, USA

Alan R. Light, PhD
Professor of Anesthesiology
Professor of Neurobiology and Anatomy
University of Utah
Salt Lake City, Utah, USA

Vincent C. Lombardi, PhD
Director of Research
Nevada Center for Biomedical Research
Reno, Nevada, USA

Alex Lubet, PhD
Professor of Music
Head, Interdisciplinary Graduate Group in Disability Studies
Affiliate Faculty, Center for Bioethics
Affiliate Faculty, Center for Cognitive Sciences
University of Minnesota
Minneapolis, Minnesota, USA

Steven Lubet
Williams Memorial Professor of Law
Northwestern University Pritzker School of Law
Chicago, Illinois, USA

Sonya Marshall-Gradisnik, PhD
Professor of Immunology
Co-Director, National Centre for Neuroimmunology and Emerging Diseases
Griffith University
Queensland, Australia

Patrick E. McKnight, PhD
Professor of Psychology
George Mason University
Fairfax, Virginia, USA

Jose G. Montoya, MD, FACP, FIDSA
Professor of Medicine
Division of Infectious Diseases and Geographic Medicine
Stanford University School of Medicine
Stanford, California, USA

Zaher Nahle, PhD, MPA
Vice President for Research and Scientific Programs
Solve ME/CFS Initiative
Los Angeles, California, USA

Henrik Nielsen, MD
Specialist in Internal Medicine and Rheumatology
Copenhagen, Denmark

James M. Oleske, MD, MPH
François-Xavier Bagnoud Professor of Pediatrics
Senator of RBHS Research Centers, Bureaus, and Institutes
Director, Division of Pediatrics Allergy, Immunology & Infectious Diseases
Department of Pediatrics
Rutgers New Jersey Medical School
Newark, New Jersey, USA

Elisa Oltra, PhD
Professor of Molecular and Cellular Biology
Catholic University of Valencia School of Medicine
Valencia, Spain

Richard Podell, MD, MPH
Clinical Professor
Department of Family Medicine
Rutgers Robert Wood Johnson Medical School
New Brunswick, New Jersey, USA

Nicole Porter, PhD
Psychologist in Private Practice
Rolling Ground, Wisconsin, USA

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University
New York, New York, USA

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley
Berkeley, California, USA

Anders Rosén, MD
Professor of Inflammation and Tumor Biology
Department of Clinical and Experimental Medicine
Division of Cell Biology
Linköping University
Linköping, Sweden

Peter C. Rowe, MD
Professor of Pediatrics
Johns Hopkins University School of Medicine
Baltimore, Maryland, USA

William Satariano, PhD
Professor of Epidemiology and Community Health
University of California, Berkeley
Berkeley, California, USA

Ola Didrik Saugstad, MD, PhD, FRCPE
Professor of Pediatrics
University of Oslo
Director and Department Head
Department of Pediatric Research
University of Oslo and Oslo University Hospital
Oslo, Norway

Charles Shepherd, MB, BS
Honorary Medical Adviser to the ME Association
Buckingham, UK

Christopher R. Snell, PhD
Scientific Director
Workwell Foundation
Ripon, California, USA

Donald R. Staines, MBBS, MPH, FAFPHM, FAFOEM
Clinical Professor
Menzies Health Institute Queensland
Co-Director, National Centre for Neuroimmunology and Emerging Diseases
Griffith University
Queensland, Australia

Philip B. Stark, PhD
Professor of Statistics
University of California, Berkeley
Berkeley, California, USA

Eleanor Stein, MD, FRCP(C)
Psychiatrist in Private Practice
Assistant Clinical Professor
University of Calgary
Calgary, Alberta, Canada

Staci Stevens, MA
Founder, Exercise Physiologist
Workwell Foundation
Ripon, California, USA

Julian Stewart, MD, PhD
Professor of Pediatrics, Physiology and Medicine
Associate Chairman for Patient Oriented Research
Director, Center for Hypotension
New York Medical College
Hawthorne, NY, USA

Leonie Sugarman, PhD
Emeritus Associate Professor of Applied Psychology
University of Cumbria
Carlisle, UK

John Swartzberg, MD
Clinical Professor Emeritus
School of Public Health
University of California, Berkeley
Berkeley, California, USA

Ronald G. Tompkins, MD, ScD
Summer M Redstone Professor of Surgery
Harvard Medical School
Boston, Massachusetts, USA

David Tuller, DrPH
Lecturer in Public Health and Journalism
University of California, Berkeley
Berkeley, California, USA

Rosemary A. Underhill, MB, BS, MRCOG, FRCSE
Physician and Independent Researcher
Palm Coast, Florida, USA

Rosamund Vallings, MNZM, MB, BS
General Practitioner
Auckland, New Zealand

Michael VanElzakker, PhD
Research Fellow, Psychiatric Neuroscience Division
Harvard Medical School & Massachusetts General Hospital
Instructor, Tufts University Psychology
Boston, Massachusetts, USA

Mark VanNess, PhD
Professor of Health, Exercise & Sports Sciences
University of the Pacific
Stockton, California, USA
Workwell Foundation
Ripon, California, USA

Mark Vink, MD
Family Physician
Soerabaja Research Center
Amsterdam, Netherlands

Frans Visser, MD
Cardiologist
Stichting Cardiozorg
Hoofddorp, Netherlands

Tony Ward, MA (Hons), PhD, DipClinPsyc
Registered Clinical Psychologist
Professor of Clinical Psychology
School of Psychology
Victoria University of Wellington
Wellington, New Zealand
Adjunct Professor, School of Psychology
University of Birmingham
Birmingham, UK
Adjunct Professor, School of Psychology
University of Kent
Canterbury, UK

William Weir, FRCP
Infectious Disease Consultant
London, UK

John Whiting, MD
Specialist Physician
Private Practice
Brisbane, Australia

Carolyn Wilshire, PhD
Senior Lecturer
School of Psychology
Victoria University of Wellington
Wellington, New Zealand

Michael Zeineh, MD, PhD
Assistant Professor
Department of Radiology
Stanford University
Stanford, California, USA

Marcie Zinn, PhD
Research Consultant in Experimental Electrical Neuroimaging and Statistics
Center for Community Research
DePaul University
Chicago, Illinois, USA
Executive Director
Society for Neuroscience and Psychology in the Performing Arts
Dublin, California, USA

Mark Zinn, MM
Research Consultant in Experimental Electrophysiology
Center for Community Research
DePaul University
Chicago, Illinois, USA

 

ME/CFS Patient Organizations

25% ME Group
UK

Emerge Australia
Australia

European ME Alliance:

Belgium ME/CFS Association
Belgium

ME Foreningen
Denmark

Suomen CFS-Yhdistys
Finland

Fatigatio e.V.
Germany

Het Alternatief
Netherlands

Icelandic ME Association
Iceland

Irish ME Trust
Ireland

Associazione Malati di CFS
Italy

Norges ME-forening
Norway

Liga SFC
Spain

Riksföreningen för ME-patienter
Sweden

Verein ME/CFS Schweiz
Switzerland

Invest in ME Research
UK

Hope 4 ME & Fibro Northern Ireland
UK

Irish ME/CFS Association
Ireland

Massachusetts CFIDS/ME & FM Association
USA

ME Association
UK

ME/cvs Vereniging
Netherlands

National ME/FM Action Network
Canada

New Jersey ME/CFS Association
USA

Pandora Org
USA

Phoenix Rising
International membership representing many countries

Solve ME/CFS Initiative
USA

Tymes Trust (The Young ME Sufferers Trust)
UK

Wisconsin ME and CFS Association
USA

[1] White PD, Goldsmith K, Johnson AL, et al. 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychological Medicine 43(10): 2227-2235.

[2] White PD, Goldsmith KA, Johnson AL, et al. 2011. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. The Lancet 377: 823–836.

[3] Racaniello V. 2016. An open letter to The Lancet, again. Virology Blog, 10 Feb. Available at: http://www.virology.ws/2016/02/10/open-letter-lancet-again/ (accessed on 2/24/17).

[4] White PD, Sharpe MC, Chalder T, et al. 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurology 7: 6.

[5] Tuller D. 2015. Trial by error: the troubling case of the PACE chronic fatigue syndrome trial. Virology Blog, 21-23 Oct. Available at: http://www.virology.ws/2015/10/21/trial-by-error-i/ (accessed on 2/24/17).

[6] Wilshire C, Kindlon T, Matthees A, McGrath S. 2016. Can patients with chronic fatigue syndrome really recover after graded exercise or cognitive behavioural therapy? A critical commentary and preliminary re-analysis of the PACE trial. Fatigue: Biomedicine, Health & Behavior; published online 14 Dec. Available at: http://www.tandfonline.com/doi/full/10.1080/21641846.2017.1259724 (accessed on 2/24/17).

[7] PACE Participants Newsletter. December 2008. Issue 3. Available at: http://www.wolfson.qmul.ac.uk/images/pdfs/participantsnewsletter3.pdf (accessed on 2/24/17).

[8] Sharpe M, Chalder T, Johnson AL, et al. 2017. Do more people recover from chronic fatigue syndrome with cognitive behaviour therapy or graded exercise therapy than with other treatments? Fatigue: Biomedicine, Health & Behavior; published online 15 Feb. Available at: http://www.tandfonline.com/doi/full/10.1080/21641846.2017.1288629 (accessed on 2/24/17).

[9] Green CR, Cowan P, Elk R. 2015. National Institutes of Health Pathways to Prevention Workshop: Advancing the research on myalgic encephalomyelitis/chronic fatigue syndrome. Annals of Internal Medicine 162: 860-865.

Trial By Error, Continued: The New FITNET Trial for Kids

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

The past year has been a disaster for proponents of the PACE trial. They have faced growing international resistance to their exaggerated claims that cognitive behavior therapy and graded exercise therapy are effective treatments for chronic fatigue syndrome, also known as ME/CFS. The recent court-ordered release of key trial data has confirmed what was long self-evident: The PACE authors weakened their outcome criteria mid-stream in ways that allowed them to report dramatically better results for “improvement” (in The Lancet in 2011) and “recovery” (in Psychological Medicine in 2013). By refusing to provide the findings per the original protocol methods, statistical analyses assessing the impact of the many mid-trial changes, or their actual trial data, they were able to hide their disastrous results for five years.

Yet the PACE authors and their allies continue, astonishingly, to defend the indefensible study, cite its findings approvingly, and push forward with ever more research into behavioral and cognitive interventions. The latest case in point: Esther Crawley, a British pediatrician and a highly controversial figure in the ME/CFS community because of her longtime promotion of the CBT/GET approach. On November 1st, the Science Media Centre in London held a press briefing to tout Dr. Crawley’s current venture—FITNET-NHS, a one-million-pound study of online CBT that is now recruiting and seeks to enroll more than 700 adolescents.

Dr. Crawley is a professor of child health at the University of Bristol. She is also currently recruiting for the MAGENTA study of graded exercise therapy for children with the illness. She is a lead player in the U.K. CFS/ME Research Collaborative, an umbrella organization that is sponsoring an ambitious Big Data effort called MEGA, now in the planning stages. While patients and advocates are desperate for the kind of top-notch biomedical and genetic research being proposed, many oppose MEGA precisely because of the involvement of Dr. Crawley and Peter White, the lead PACE investigator. (Dr. White is reportedly no longer involved in MEGA; Dr. Crawley still definitely is.)

The rationale for FITNET-NHS is that many ME/CFS patients live too far from specialists to obtain adequate care. Therefore, CBT delivered through an online portal, along with e-mail communication with a therapist, could potentially provide a convenient answer for those in such circumstances. The SMC press briefing generated widespread and enthusiastic news coverage. The BBC’s breathless online report about the “landmark” study noted that the online CBT “successfully treats two-thirds of children with chronic fatigue syndrome.” According to the BBC story, the intervention was designed “to change the way the children think of the disease.”

The BBC story and other news reports did not mention that the PACE trial—a foundational piece of evidence for the claim that changing people’s thoughts about the disease is the best way to treat it—has been publicly exposed as nonsense and is the subject of a roiling worldwide scientific debate. The stories also didn’t mention a more recent paper by the authors of the 2012 study from the Netherlands that was the source of the BBC’s claim of a “two-thirds” success rate.

The 2012 study, a Dutch version of FITNET, was published in The Lancet. (Why is The Lancet always involved?) In a subsequent paper published in Pediatrics in 2013, the Dutch team reported no differences among their trial participants at long-term follow-up. In other words, as with the PACE trial itself, any apparent advantages conferred by the investigators’ preferred treatment disappeared after the study was over. (More on the Dutch study below.)

The SMC, a purportedly neutral arbiter of science, actually functions as a cheerleader for research about cognitive and behavioral treatments for ME/CFS. Simon Wessely, a founder of the CBT/GET treatment paradigm and a close colleague of the PACE authors, is on the SMC’s board of trustees. The journalist who wrote the BBC story, James Gallagher, sits on the SMC’s advisory committee, so the reporting wasn’t exactly conducted at arm’s length. This reportorial conflict of interest was not disclosed in the BBC story itself.

(In fact, the Countess of Mar, a member of the House of Lords and a longtime advocate for ME/CFS patients, has filed a formal complaint with the BBC to protest its biased reporting on FITNET-NHS. In her complaint, she noted that “the BBC coverage was so hyperbolic and it afforded the FITNET trial so much publicity that it was clearly organised as a counter-punch to the anti-PACE evidence which is now gaining world-wide attention.”)

As a treatment for chronic fatigue syndrome, cognitive behavior therapy is grounded in an unproven hypothesis. According to the theory, the cause of patients’ continuing symptoms is a vicious downward spiral generated by false illness beliefs, a fear of engaging in activity, and progressive deconditioning. Whatever the initial viral or other illness that might have triggered the syndrome, patients are presumed to be currently free of any organic disease. Changing their beliefs through CBT, per the theory, will help encourage them to increase their levels of activity and resume their normal lives.

Here’s the rationale for the treatment from the PACE study itself: “CBT was done on the basis of the fear avoidance theory of chronic fatigue syndrome. This theory regards chronic fatigue syndrome as being reversible and that cognitive responses (fear of engaging in activity) and behavioural responses (avoidance of activity) are linked and interact with physiological processes to perpetuate fatigue. The aim of treatment was to change the behavioural and cognitive factors assumed to be responsible for perpetuation of the participant’s symptoms and disability. Therapeutic strategies guided participants to address unhelpful cognitions, including fears about symptoms or activity by testing them in behavioural experiments.”

The goal of this specific form of CBT, therefore, is to reverse the “reversible” illness by helping patients abandon their “unhelpful” beliefs of having a medical disease. This is definitely not the goal of CBT when it is used to help people cope with cancer, Parkinson’s, or other illnesses—no one claims those diseases are “reversible.” That the PACE authors, Dr. Crawley, and their Dutch colleagues promote CBT as a curative treatment and not simply a management or adaptive strategy is clear from their insistence on using the word “recovery”—a term that has no well-defined or universally understood meaning when it comes to this illness but has a very clear meaning to the general public.

While PACE so far remains in the literature, the study has been rejected by dozens of leading clinicians and academics, in the U.S. and elsewhere. Last February, an open letter to The Lancet signed by 42 experts and posted on Virology Blog condemned its egregious flaws, noting that they “have no place in published research.” The study has even been presented as a case study of bad science in graduate epidemiology seminars and at major scientific gatherings.

*****

Like the work of the PACE authors, Dr. Crawley’s research is fraught with misrepresentations and methodological problems. Like them, she routinely conflates the common symptom of chronic fatigue with the illness called chronic fatigue syndrome—a serious error with potentially harmful consequences. (I will mostly use chronic fatigue syndrome in describing the research because that is the term they use.)

Dr. Crawley favors subjective over objective outcomes. In PACE, of course, the objective measures–like a walking test, a step-test for fitness, and employment status—all failed to demonstrate “recovery” or reflect the reported improvements in the two primary subjective outcomes of physical function and fatigue. FITNET-NHS doesn’t even bother with such measures. The primary outcome is a self-report questionnaire assessing physical function, and almost all the secondary outcomes are also subjective.

This is particularly troubling because FITNET-NHS, like PACE, is non-blinded; that is, both participants and investigators know which intervention each participant is receiving. Non-blinded studies with subjective outcomes are notoriously vulnerable to bias—even more so when the intervention itself involves telling participants that the treatment will make them better, as is the case with the kind of cognitive behavior therapy provided for ME/CFS patients.

The FITNET-NHS study protocol states that participants will be identified using the guidelines developed by NICE—the U.K.’s National Institute for Health and Care Excellence. The protocol describes the NICE guidelines as requiring three months of fatigue, plus one or more of nine additional symptoms: post-exertional malaise, difficulty sleeping, cognitive dysfunction, muscle and/or joint pain, headaches, painful lymph nodes, general malaise, dizziness and/or nausea, or palpitations. In other words, according to the protocol, post-exertional malaise is not required to participate in FITNET-NHS; it is clearly identified as an optional symptom. (In the U.K., the illness can be diagnosed at three months in children, rather than at six months.)

But the proposal’s claim to be following the NICE guidelines does not appear to be true. In the NICE guidelines, post-exertional malaise is not an optional symptom. It is required, as an essential element of the fatigue itself. (In addition, one or more of ten other symptoms must also be present.) To repeat: post-exertional malaise is required in the NICE guidelines, but is not required in the description of the NICE guidelines provided in the FITNET-NHS protocol.

By making this subtle but significant shift—a sleight-of-guideline, so to speak—Dr. Crawley and her colleagues have quietly transformed their prospective cohort from one in which post-exertional malaise is a cardinal characteristic of the illness to one in which it might or might not be present. And they have done this while still claiming–inaccurately–to follow NICE guidelines. As currently described, however, Dr. Crawley’s new study is NOT a study of chronic fatigue syndrome, as she maintains, but of chronic fatigue.

As a result, the actual study participants, like the PACE cohort, will likely be a heterogeneous grab bag of kids suffering from fatigue for any number of reasons, including depression–a common cause of exhaustion and a condition that often responds to psychotherapeutic interventions like CBT. Some or even many participants—an unknown number—will likely be genuine ME/CFS patients. Yet the results will be applied to ALL adolescents identified as having that illness. Since those who actually have it suffer from the required symptom of post-exertional malaise, an intervention that encourages them to increase their activity levels, like CBT, could potentially cause harm.

(I suppose it’s possible the FITNET-NHS protocol’s inaccurate description of the role of post-exertional malaise in the NICE guidelines was inadvertent, a case of sloppiness. If so, it would be an extraordinary oversight, given the number of people involved in the study and the enormous implications of the switch. It is curious that this obvious and jarring discrepancy between the NICE guidelines and the FITNET-NHS description of them was not flagged during the review process, since it is easy to check whether the protocol language accurately reflects the recommendations.)

Yet Dr. Crawley is experienced at this blurring of categories–she did the same in a study she co-authored in the journal Pediatrics, in January of this year. In the study, “Chronic Fatigue Syndrome at Age 16 Years,” she and colleagues reported that almost one in fifty adolescents suffered from the illness—an extremely high rate that attracted widespread media attention. The main conclusion was described like this: “CFS affected 1.9% of 16-year-olds in a UK birth cohort and was positively associated with higher family adversity.”

However, the Pediatrics study is unreliable as a measure of “chronic fatigue syndrome.” It is of note that this paper, like the FITNET-NHS protocol, also appears to have inaccurately presented the NICE guidelines. According to the Pediatrics paper, NICE calls for a CFS diagnosis after three months of “persistent or recurrent fatigue that is not the result of ongoing exertion, not substantially alleviated by rest, has resulted in a substantial reduction of activities, and has no known cause.” But this description is incomplete–it omits the NICE requirement that the fatigue must include the specific characteristic of post-exertional malaise in order to render a diagnosis of chronic fatigue syndrome.

In the Pediatrics paper, the determination of illness was based not on clinical examination but on parental reports of children’s unexplained fatigue. In a previous study of 13-year-olds that relied on the same U.K. database, Dr. Crawley and her co-authors referred to the endpoint—appropriately—as “disabling chronic fatigue.” But in this study, they justified changing the endpoint to “chronic fatigue syndrome” by noting that they cross-referenced the parental reports with children’s self-reports of their own fatigue.

Here’s how they explained this shift in nomenclature: “In the earlier study, we were unable to confirm a diagnosis of CFS because we had only parental report of fatigue; hence, chronic disabling fatigue was defined as the study outcome. In the present study, parental and child report of fatigue were combined to identify adolescents with CFS.”

This reasoning is incoherent. A child’s confirmation of a parental report of fatigue cannot be taken to indicate the presence of chronic fatigue syndrome–especially without a clinical examination to rule out other possible conditions. Moreover, neither the parental nor child reports appear to have included information about post-exertional malaise, which is required for a diagnosis of chronic fatigue syndrome—even though the Pediatrics study did not mention this requirement in its description of the NICE guidelines. In fact, the authors provided no evidence or data to support their assumption that a double-report of fatigue equaled a case of chronic fatigue syndrome. (How’d that assumption ever pass peer review, anyway?)

Moreover, the study itself acknowledged that, when those found to be suffering from high levels of depression were removed, the prevalence of what the investigators called chronic fatigue syndrome was only 0.6%. And since depression is likely to be highly correlated with chronic fatigue as well as with family adversity, it is not surprising that the study found the apparent association between family adversity and chronic fatigue syndrome that the investigators highlighted in their conclusion. That misinterpretation of their data has likely lent support to the widespread but inaccurate belief that the illness is largely or even partly psychiatric in nature.

In any event, the figure of 0.6% should have been identified as the prevalence of “chronic disabling fatigue, not attributable to high levels of depression.” Without any further clinical data, to identify either 1.9% or 0.6% as the prevalence of chronic fatigue syndrome was unwarranted and irresponsible. Although the authors cited the lack of clinical diagnosis as a limitation, this acknowledgement does not excuse their interpretive leap. To call this a study of chronic fatigue syndrome is really misleading–a serious over-interpretation of the data.

In subsequent correspondence, three professors of pediatrics—Marvin Medow and Julian Stewart from New York Medical College, and Peter Rowe from Johns Hopkins–scolded the study authors for identifying the participants as having chronic fatigue syndrome rather than chronic fatigue. They cited this misclassification as the likely source of the reported link between chronic fatigue syndrome and family adversity. In particular, they challenged diagnoses made without benefit of clinical evaluations.

“An important component of the diagnosis is a physician’s history and physical examination to exclude conditions that could explain the fatigue, including hypothyroidism, heart disease, cancer, liver failure, covert drug abuse, medication side effects, gastrointestinal/nutritional, infectious and psychiatric conditions,” they wrote. The Pediatrics paper, concluded the three pediatricians, “should be titled ‘Chronic Fatigue but not Chronic Fatigue Syndrome at Age 16 Years.’”

In response, the study authors agreed that clinical diagnoses would be more accurate. But they did not address the critical issue of why they decided that two reports of chronic fatigue could be used to identify chronic fatigue syndrome.

*****

The conflation of chronic fatigue and chronic fatigue syndrome is a huge problem in ME/CFS research. That’s why a major report last year from the National Institutes of Health declared that the case definition used in PACE—which required only six months of unexplained fatigue and no other symptoms–could “impair progress and cause harm,” and should be “retired” from use. But Dr. Crawley and her colleagues do not seem to have gotten the message.

At the SMC press briefing presenting FITNET-NHS, one of the experts appearing with Dr. Crawley was Dr. Stephen Holgate, the leader of the CFS/ME Research Collaborative and a professor of immunopharmacology at the University of Southampton. According to the BBC report, he praised the new trial as “high-quality research.” This endorsement suggests that Dr. Holgate, like Dr. Crawley, does not appreciate the significance of the distinction between the symptom of chronic fatigue and the illness called chronic fatigue syndrome—a troubling blind spot. It also suggests that Dr. Holgate is unaware or unconcerned that the main support for the use of CBT in this illness, the PACE trial, has been discredited.

Also at the SMC briefing was Paul McCrone, a professor of health economics from King’s College London and a PACE co-author. Dr. McCrone is serving as the chair of FITNET-NHS’s independent steering committee—another unsettling sign. As I have documented on Virology Blog, Dr. McCrone made false claims as lead author of a 2012 PLoS One paper—and those false claims allowed the PACE authors to declare that CBT and GET were cost-effective. They have routinely cited this fraudulent finding in promoting the therapies.

Beyond the problem of conflating “chronic fatigue” and “chronic fatigue syndrome,” Dr. Crawley’s reliance on the Dutch trial suggests that this previous FITNET study warrants a closer look—especially since the BBC and other news outlets cited its robust claims of success in extolling the U.K. version.

The approach to CBT in the Dutch FITNET trial reflects that in the U.K. Of the online intervention’s 21 modules, according to the protocol for the Dutch study, fourteen “focus on cognitive behavioural strategies and include instructions and exercises on how to identify, challenge and change cognitive processes that contribute to CFS.” Of course, experts outside the CBT/GET/PACE bubble understand that ME/CFS is a physiological disease and that faulty “cognitive processes” have nothing to do with perpetuating or contributing to it.

The Dutch study found that those assigned to FITNET reported less fatigue, greater physical function, and greater school attendance than those in the comparison group, who received standard treatment, referred to as “usual care.” And using a composite definition of “recovery,” the study reported that 63% of those in the FITNET group—just shy of two-thirds—had “recovered” at six months, compared with just 8% in the comparison group. But this apparent success masks a much more complicated reality and cannot be taken at face value, for multiple reasons.

First, the subsequent 2013 paper from the Dutch team found no differences in “recovery” between participants in the two groups at long-term follow-up (on average, 2.7 years after starting). Those in the comparison group improved after the trial and had caught up to the intervention group, so the online CBT conferred no extended advantages or benefits. The researchers argued that the therapy was nonetheless useful because patients achieved gains more quickly. But they failed to consider another reasonable explanation for their results.

Those in usual care were attending in-person sessions at clinics or doctors’ offices. Depending on how often they went, how far they had to travel and how sick they were, the transportation demands could easily have triggered relapses and harmed their health. In contrast, those in the FITNET group could be treated at home. Perhaps they improved not from the treatment itself but from an unintended side effect–the sedentary nature of the intervention allowed them more time to rest. The investigators did not control for this aspect of the online CBT.

Second, the “recovery” figure in the Dutch FITNET study was a post-hoc calculation, as the authors acknowledged. The protocol for the trial included the outcomes to be measured, of course, but the authors did not identify before the trial what thresholds participants would need to meet to be considered “recovered.” The entire definition was constructed only after they saw the results—and the thresholds they selected were extremely lenient. Even two of the PACE authors, in a Lancet commentary praising the Dutch study, referred to the “recovery” criteria as “liberal” and “not stringent.” (In fact, only 36% “recovered” under a more stringent definition of “recovery,” but the FITNET authors tucked this finding into an appendix, and Dr. Crawley’s FITNET-NHS protocol didn’t mention it.)

Now, the fact that “recovery” was a post-hoc measure doesn’t mean it isn’t valid. But anyone citing this “recovery” rate should do so with caveats and some measure of caution. Dr. Crawley has exhibited no such reticence—in a recent radio interview, she declared flatly that the Dutch participants had made a “full recovery.” (In the same interview, she called PACE “a great, great study.” Then she completely misrepresented the results of the recent reanalyses of the PACE trial data. So, you know, take her words for what they’re worth.)

Given the hyperbole about “recovery,” the public is understandably likely to assume that Dr. Crawley’s new “landmark” study will result in similar success. A corollary of that assumption is that anyone who opposes the study’s approach, like so many in the patient and advocacy communities, could be accused of acting in ways that harm children by depriving them of needed treatment. This would be an unfair charge, since the online CBT being offered is based on the questionable premise that the children harbor untrue cognitions about their illness.

Third, the standard treatments received by the usual care group were described like this: “individual/group based rehabilitation programs, psychological support including CBT face-to-face, graded exercise therapy by a physiotherapist, etc.” In other words, pretty much the kinds of “evidence-based” strategies these Dutch experts and their U.K. colleagues had promoted for years as being effective for chronic fatigue syndrome. In the end, two-thirds of those in usual care received in-person CBT, and half received graded exercise therapy. (Many participants in this arm received more than one form of usual care.)

And yet less than one in ten of the usual care participants were found to have “recovered” at six months, according to the 2012 study. So what does that say about the effectiveness of these kinds of rehabilitative approaches in the first place? In light of the superlative findings for online CBT, why haven’t all chronic fatigue syndrome patients in the Netherlands now been removed from in-person treatments and offered this more convenient option? (Dr. Crawley’s FITNET-NHS proposal tried to explain away this embarrassing finding of the Dutch study by suggesting that those providing usual care were not trained to work with this kind of population.)

Finally, the Dutch study did not report any objective measures of physical performance. Although the study included assessments using an actometer—an ankle bracelet that monitors distance moved—the Lancet paper did not mention those results. In previous studies of cognitive and behavioral treatments for ME/CFS, reported improvements on subjective measures for fatigue or physical function were not accompanied by increases in physical movement, as measured by actometer. And in PACE, of course, the investigators dismissed their own objective measures as irrelevant or non-objective—after these outcomes failed to provide the desired results.

In response to correspondence calling for publication of the actometer data, the Dutch investigators refused, noting that “the goal of our treatment was reduction of fatigue and increase in school attendance, not increase in physical activity per se.” This is an inadequate explanation for the decision to withhold data that would shed light on whether participants actually improved in their physical performance as well as in their subjective impressions of their condition. If the actometer data demonstrated remarkable increases in activity levels in the online CBT group, is there any doubt they would have reported it?

In short, the Dutch FITNET study leaves a lot of questions unanswered. So does its U.K. version, the proposed FITNET-NHS. And Dr. Crawley’s recent media blitz—which included a “can’t-we-all-get-along” essay in New Scientist—did little to quell any of the reasonable qualms observers might have about this latest effort to bolster the sagging fortunes of the CBT/GET/PACE paradigm.

“Patients are desperate for this trial, yet some people are still trying to stop us,” wrote Dr. Crawley in New Scientist. “The fighting needs to end.”

However, those mysterious and sinister-sounding “some people” cited by Dr. Crawley have very thoughtful and legitimate reasons for questioning the quality of her research. The fighting, as she calls it, is likely to end when Dr. Crawley and her colleagues stop conflating chronic fatigue and chronic fatigue syndrome through the use of loose diagnostic criteria. And when they acknowledge what scientists in the U.S. and around the world now understand: The claim that cognitive and behavioral approaches are effective treatments that lead to “recovery” is based on deeply flawed research.

A Short Postscript:

Several Dutch colleagues have joined Dr. Crawley as part of the FITNET-NHS study. Two of them, Dr. Gijs Bleijenberg from the Radboud University Medical Centre in Nijmegen, and Dr. Hans Knoop from the University of Amsterdam, are among the leaders of the CBT/GET movement in the Netherlands and have collaborated with their U.K. counterparts. Not surprisingly, their work is similarly dodgy.

In a post last year, I dissected a 2011 commentary in The Lancet on the PACE trial, co-authored by Dr. Bleijenberg and Dr. Knoop, in which they argued that 30 percent of the participants in the CBT and GET groups had met “a strict criterion for recovery.” This statement was absurd, since these “strict” thresholds for “recovery” were in fact so lax that participants could get worse during the study and still meet them. Although the problematic nature of the thresholds has been pointed out to Dr. Bleijenberg and Dr. Knoop, they have stood by their nonsensical claim.

Earlier this year, the Dutch parliament asked the Health Council—an independent scientific advisory body—to review the state of evidence related to the illness, including the evidence on treatments like CBT and GET. The Health Council appointed a committee to conduct the review. Among the committee members are Dr. Knoop and colleagues who share his perspective. It remains unclear whether the committee is taking sufficient account of the methodological flaws underpinning the evidence for the CBT/GET paradigm and of the ongoing condemnations of the PACE trial from well-respected scientists. I plan to blog about this situation soon.

TWiV 397: Trial by error

Journalism professor David Tuller returns to TWiV for a discussion of the PACE trial for ME/CFS: the many flaws in the trial, why its conclusions are useless, and why the data must be released and re-examined.

You can find TWiV #397 at microbe.tv/twiv, or listen below.

Download TWiV 397 (67 MB .mp3, 93 min)
Subscribe (free): iTunes, RSS, email, or Google Play Music

Become a patron of TWiV!

Trial by error, Continued: PACE Team’s Work for Insurance Companies Is “Not Related” to PACE. Really?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

In my initial story on Virology Blog, I charged the PACE investigators with violating the Declaration of Helsinki, adopted in 1964 by the World Medical Association to protect human research subjects. The declaration mandates that scientists disclose “institutional affiliations” and “any possible conflicts of interest” to prospective trial participants as part of the process of obtaining informed consent.

The investigators promised in their protocol to adhere to this foundational human rights document, among other ethical codes. Despite this promise, they did not tell prospective participants about their financial and consulting links with insurance companies, including those in the disability sector. That ethical breach raises serious concerns about whether the “informed consent” they obtained from all 641 of their trial participants was truly “informed,” and therefore legitimate.

The PACE investigators do not agree that the lack of disclosure is an ethical breach. In their response to my Virology Blog story, they did not even mention the Declaration of Helsinki or explain why they violated it in seeking informed consent. Instead, they defended their actions by noting that they had disclosed their financial and consulting links in the published articles, and had informed participants about who funded the research–responses that did not address the central concern.

“I find their statement that they disclosed to The Lancet but not to potential subjects bemusing,” said Jon Merz, a professor of medical ethics at the University of Pennsylvania. “The issue is coming clean to all who would rely on their objectivity and fairness in conducting their science. Disclosure is the least we require of scientists, as it puts those who should be able to trust them on notice that they may be serving two masters.”

In their Virology Blog response, the PACE team also stated that no insurance companies were involved in the research, that only three of the 19 investigators “have done consultancy work at various times for insurance companies,” and that this work “was not related to the research.” The first statement was true, but direct involvement in a study is of course only one possible form of conflict of interest. The second statement was false. According to the PACE team’s conflict of interest disclosures in The Lancet, the actual number of researchers with insurance industry ties was four—along with the three principal investigators, physiotherapist Jessica Bavington acknowledged such links.

But here, I’ll focus on the third claim–that their consulting work “was not related to the research.” In particular, I’ll examine an online article posted by Swiss Re, a large reinsurance company. The article describes a “web-based discussion group” held with Peter White, the lead PACE investigator, and reveals some of the claims-assessing recommendations arising from that presentation. White included consulting work with Swiss Re in his Lancet disclosure.

The Lancet published the PACE results in February, 2011; the undated Swiss Re article was published sometime within the following year or so. The headline: “Managing claims for chronic fatigue the active way.” (Note that this headline uses “chronic fatigue” rather than “chronic fatigue syndrome,” although chronic fatigue is a symptom common to many illnesses and is quite distinct from the disease known as chronic fatigue syndrome. Understanding the difference between the two would likely be helpful in making decisions about insurance claims.)

The Swiss Re article noted that the illness “can be an emotive subject” and then focused on the implications of the PACE study for assessing insurance claims. It started with a summary account of the findings from the study, reporting that the “active rehabilitation” arms of cognitive behavioral therapy and graded exercise therapy “resulted in greater reduction of patients’ fatigue and larger improvement in physical functioning” than either adaptive pacing therapy or specialist medical care, the baseline condition. (The three intervention arms also received specialist medical care.)

The trial’s “key message,” declared the article, was that “pushing the limits in a therapeutic setting using well described treatment modalities is more effective in alleviating fatigue and dysfunction than staying within the limits imposed by the illness traditionally advocated by ‘pacing.’”

Added the article: “If a CFS patient does not gradually increase their activity, supported by an appropriate therapist, then their recovery will be slower. This seems a simple message but it is an important one as many believe that ‘pacing’ is the most beneficial treatment.”

This understanding of the PACE research—presumably based on information from Peter White’s web-based discussion—was wrong. Pacing is not and has never been a “treatment.” It is also not one of the “four most commonly used therapies,” as the newsletter article declared, since it has never been a “therapy” either. It is a self-help method practiced by many patients seeking the best way to manage their limited energy reserves.

The PACE investigators did not test pacing. Instead, the intervention they dubbed “adaptive pacing therapy” was an operationalized version of “pacing” developed specifically for the study. Many patients objected to the trial’s form of pacing as overly prescriptive, demanding and unlike the version they practiced on their own. Transforming an intuitive, self-directed approach into a “treatment” administered by a “therapist” was not a true test of whether the self-help approach is effective, they argued–with significant justification. Yet the Swiss Re article presented “adaptive pacing therapy” as if it were identical to “pacing.”

The Swiss Re article did not mention that the reported improvements from “active rehabilitation” were based on subjective outcomes and were not supported by the study’s objective data. Nor did it report any of the major flaws of the PACE study or offer any reasons to doubt the integrity of the findings.

The article next asked, “What can insurers and reinsurers do to assist the recovery and return to work of CFS claimants?” It then described the conclusions to be drawn from the discussion with White about the PACE trial—the “key takeaways for claims management.”

First, Swiss Re advised its employees, question the diagnosis, because “misdiagnosis is not uncommon.”

The second point was this: “It is likely that input will be required to change a claimant’s beliefs about his or her condition and the effectiveness of active rehabilitation…Funding for these CFS treatments is not expensive (in the UK, around £2,000) so insurers may well want to consider funding this for the right claimants.”

Translation: Patients who believe they have a medical disease are wrong, and they need to be persuaded that they are wrong and that they can get better with therapy. Insurers can avoid large payouts by covering the minimal costs of these treatments for patients vulnerable to such persuasion, given the right “input.”

Finally, the article warned that private therapists might not provide the kinds of “input” required to convince patients they were wrong. Instead of appropriately “active” approaches like cognitive behavior therapy and graded exercise therapy, these therapists might instead pursue treatments that could reinforce claimants’ misguided beliefs about being seriously ill, the article suggested.

“Check that private practitioners are delivering active rehabilitation therapies, such as those described in this article, as opposed to sick role adaptation,” the Swiss Re article advised. (The PACE investigators, drawing on the concept known as “the sick role” in medical sociology, have long expressed concern that advocacy groups enabled patients’ condition by bolstering their conviction that they suffered from a “medical disease,” as Michael Sharpe, another key PACE investigator, noted in a 2002 UNUMProvident report. This conviction encouraged patients to demand social benefits and health care resources rather than focus on improving through therapy, Sharpe wrote.)

Lastly, the Swiss Re article addressed “a final point specific to claims assessment.” A diagnosis of chronic fatigue syndrome, stated the article, provided an opportunity in some cases to apply a mental health exclusion, depending upon the wording of the policy. In contrast, a diagnosis of myalgic encephalomyelitis did not.

The World Health Organization’s International Classification of Diseases, or ICD, which clinicians and insurance companies use for coding purposes, categorizes myalgic encephalomyelitis as a neurological disorder that is synonymous with the terms “post-viral fatigue syndrome” and “chronic fatigue syndrome.” But the Swiss Re article stated that, according to the ICD, “chronic fatigue syndrome” can also “alternatively be defined as neurasthenia which is in the mental health chapter.”

The PACE investigators have repeatedly advanced this questionable idea. In the ICD’s mental health section, neurasthenia is defined as “a mental disorder characterized by chronic fatigue and concomitant physiologic symptoms,” but there is no mention of “chronic fatigue syndrome” as a discrete entity. The PACE investigators (and Swiss Re newsletter writers) believe that the neurasthenia entry encompasses the illness known as “chronic fatigue syndrome,” not just the common symptom of “chronic fatigue.”

This interpretation, however, appears to be at odds with an ICD rule that illnesses cannot be listed in two separate places—a rule confirmed in an e-mail from a WHO official to an advocate who had questioned the PACE investigators’ argument. “It is not permitted for the same condition to be classified to more than one rubric as this would mean that the individual categories and subcategories were no longer mutually exclusive,” wrote the official to Margaret Weston, the pseudonym for a longtime clinical manager in the U.K. National Health Service.

Presumably, after White disseminated the good news about the PACE results at the web-based discussion, Swiss Re’s claims managers felt better equipped to help ME/CFS claimants. And presumably that help included coverage for cognitive behavior therapy and graded exercise therapy so that claimants could receive the critical “input” they needed in order to recognize and accept that they didn’t have a medical disease after all.

In sum, contrary to the investigators’ argument in their response to Virology Blog, the PACE research and findings appear to be very much “related to” insurance industry consulting work. The claim that these relationships did not represent “possible conflicts of interest” and “institutional affiliations” requiring disclosure under the Declaration of Helsinki cannot be taken seriously.

Update 11/17/15 12:22 PM: I should have mentioned in the story that, in the PACE trial, participants in the cognitive behavior therapy and graded exercise therapy arms were no more likely to have increased their hours of employment than those in the other arms. In other words, there was no evidence for the claims presented in the Swiss Re article, based on Peter White’s presentation, that these treatments were any more effective in getting people back to work.

The PACE investigators published this employment data in a 2012 paper in PLoS One. It is unclear whether Peter White already knew these results at the time of his Swiss Re presentation on the PACE results.

Update 11/18/15 6:54 AM: I also forgot to mention in the story that the three principal PACE investigators did not respond to an e-mail seeking comment about their insurance industry work. Lancet editor Richard Horton also did not respond to an e-mail seeking comment.

An open letter to Dr. Richard Horton and The Lancet

Dr. Richard Horton
The Lancet
125 London Wall
London, EC2Y 5AS, UK

Dear Dr. Horton:

In February 2011, The Lancet published an article called “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial.” The article reported that two “rehabilitative” approaches, cognitive behavior therapy and graded exercise therapy, were effective in treating chronic fatigue syndrome, also known as myalgic encephalomyelitis, ME/CFS and CFS/ME. The study received international attention and has had widespread influence on research, treatment options and public attitudes.

The PACE study was an unblinded clinical trial with subjective primary outcomes, a design that requires strict vigilance in order to prevent the possibility of bias. Yet the study suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings. The patient and advocacy communities have known this for years, but a recent in-depth report on this site, which included statements from five of us, has brought the extent of the problems to the attention of a broader public. The PACE investigators have replied to many of the criticisms, but their responses have not addressed or answered key concerns.

The major flaws documented at length in the recent report include, but are not limited to, the following:

*The Lancet paper included an analysis in which the outcome thresholds for being “within the normal range” on the two primary measures of fatigue and physical function represented worse health than the criteria for entry, which already indicated serious disability. In fact, 13 percent of the study participants were already “within the normal range” on one or both outcome measures at baseline, but the investigators did not disclose this salient fact in the Lancet paper. In an accompanying Lancet commentary, colleagues of the PACE team defined participants who met these expansive “normal ranges” as having achieved a “strict criterion for recovery.” The PACE authors reviewed this commentary before publication.

*During the trial, the authors published a newsletter for participants that included positive testimonials from earlier participants about the benefits of the “therapy” and “treatment.” The same newsletter included an article that cited the two rehabilitative interventions pioneered by the researchers and being tested in the PACE trial as having been recommended by a U.K. clinical guidelines committee “based on the best available evidence.” The newsletter did not mention that a key PACE investigator also served on the clinical guidelines committee. At the time of the newsletter, two hundred or more participants—about a third of the total sample—were still undergoing assessments.

*Mid-trial, the PACE investigators changed their protocol methods of assessing their primary outcome measures of fatigue and physical function. This is of particular concern in an unblinded trial like PACE, in which outcome trends are often apparent long before outcome data are seen. The investigators provided no sensitivity analyses to assess the impact of the changes and have refused requests to provide the results per the methods outlined in their protocol.

*The PACE investigators based their claims of treatment success solely on their subjective outcomes. In the Lancet paper, the results of a six-minute walking test—described in the protocol as “an objective measure of physical capacity”–did not support such claims, notwithstanding the minimal gains in one arm. In subsequent comments in another journal, the investigators dismissed the walking-test results as irrelevant, non-objective and fraught with limitations. All the other objective measures in PACE, presented in other journals, also failed. The results of one objective measure, the fitness step-test, were provided in a 2015 paper in The Lancet Psychiatry, but only in the form of a tiny graph. A request for the step-test data used to create the graph was rejected as “vexatious.”

*The investigators violated their promise in the PACE protocol to adhere to the Declaration of Helsinki, which mandates that prospective participants be “adequately informed” about researchers’ “possible conflicts of interest.” The main investigators have had financial and consulting relationships with disability insurance companies, advising them that rehabilitative therapies like those tested in PACE could help ME/CFS claimants get off benefits and back to work. They disclosed these insurance industry links in The Lancet but did not inform trial participants, contrary to their protocol commitment. This serious ethical breach raises concerns about whether the consent obtained from the 641 trial participants is legitimate.

Such flaws have no place in published research. This is of particular concern in the case of the PACE trial because of its significant impact on government policy, public health practice, clinical care, and decisions about disability insurance and other social benefits. Under the circumstances, it is incumbent upon The Lancet to address this matter as soon as possible.

We therefore urge The Lancet to seek an independent re-analysis of the individual-level PACE trial data, with appropriate sensitivity analyses, from highly respected reviewers with extensive expertise in statistics and study design. The reviewers should be from outside the U.K. and outside the domains of psychiatry and psychological medicine. They should also be completely independent of, and have no conflicts of interests involving, the PACE investigators and the funders of the trial.

Thank you very much for your quick attention to this matter.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London

Leonard A. Jason, PhD
Professor of Psychology
DePaul University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley

Trial By Error, Continued: Why has the PACE Study’s “Sister Trial” been “Disappeared” and Forgotten?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

In 2010, the BMJ published the results of the Fatigue Intervention by Nurses Evaluation, or FINE. The investigators for this companion trial to PACE, also funded by the Medical Research Council, reported no benefits to ME/CFS patients from the interventions tested.

In medical research, null findings often get ignored in favor of more exciting “positive” results. In this vein, the FINE trial seems to have vanished from the public discussion over the controversial findings from the PACE study. I thought it was important to re-focus some attention on this related effort to prove that “deconditioning” is the cause of the devastating symptoms of ME/CFS. (This piece is also too long, but hopefully not quite as dense.)

An update on something else: I want to thank the public relations manager from Queen Mary University of London for clarifying his previous assertion that I did not seek comment from the PACE investigators before Virology Blog posted my story. In an e-mail, he explained that he did not mean to suggest that I hadn’t contacted them for interviews. He only meant, he wrote, that I hadn’t sent them my draft posts for comment before publication. He apologized for the misunderstanding.

I accept his apology, so that’s the end of the matter. In my return e-mail, however, I did let him know I was surprised at the expectation that I might have shared the draft with the PACE investigators before publication. I would not have done that whether or not they had granted me interviews. This is journalism, not peer-review. Different rules.

************************************************************************

In 2003, with much fanfare, the U.K. Medical Research Council announced that it would fund two major studies of non-pharmacological treatments for chronic fatigue syndrome. In addition to PACE, the agency decided to back a second, smaller study called “Fatigue Intervention by Nurses Evaluation,” or FINE. Because the PACE trial was targeting patients well enough to attend sessions at a medical clinic, the complementary FINE study was designed to test treatments for more severely ill patients.

(Chronic fatigue syndrome is also known as myalgic encephalomyelitis, CFS/ME, and ME/CFS, which has now been adopted by U.S. government agencies. The British investigators of FINE and PACE prefer to call it chronic fatigue syndrome, or sometimes CFS/ME.)

Alison Wearden, a psychologist at the University of Manchester, was the lead FINE investigator. She also sat on the PACE Trial Steering Committee and wrote an article about FINE for one of the PACE trial’s participant newsletters. The Medical Research Council and the PACE team referred to FINE as PACE’s “sister” trial. The two studies included the same two primary outcome measures, self-reported fatigue and physical function, and used the same scales to assess them.

The FINE results were published in BMJ in April, 2010. Yet when the first PACE results were published in The Lancet the following year, the investigators did not mention the FINE trial in the text. The trial has also been virtually ignored in the subsequent public debate over the results of the PACE trial and the effectiveness, or lack thereof, of the PACE approach.

What happened? Why has the FINE trial been “disappeared”?

*****

The main goal of the FINE trial was to test a treatment for homebound patients that adapted and combined elements of cognitive behavior therapy and graded exercise therapy, the two rehabilitative therapies being tested in PACE. The approach, called “pragmatic rehabilitation,” had been successfully tested in a small previous study. In FINE, the investigators planned to compare “pragmatic rehabilitation” with another intervention and with standard care from a general practitioner.

Here’s what the Medical Research Council wrote about the main intervention in an article in its newsletter, MRC Network, in the summer of 2003: “Pragmatic rehabilitation…is delivered by specially trained nurses, who give patients a detailed physiological explanation of symptom patterns. This is followed by a treatment programme focussing on graded exercise, sleep and relaxation.”

The second intervention arm featured a treatment called “supportive listening,” a patient-centered and non-directive counseling approach. This treatment presumed that patients might improve if they felt that the therapist empathized with them, took their concerns seriously, and allowed them to find their own approach to addressing the illness.

The Medical Research Council committed 1.3 million pounds to the FINE trial. The study was conducted in northwest England, with 296 patients recruited from primary care. Each intervention took place over 18 weeks and consisted of ten sessions–five home visits lasting up to 90 minutes alternating with five telephone conversations of up to 30 minutes.

As in the PACE trial, patients were selected using the Oxford criteria for chronic fatigue syndrome, defined as the presence of six months of medically unexplained fatigue, with no other symptoms required. The Oxford criteria have been widely criticized for yielding heterogeneous samples, and a report commissioned by the National Institutes of Health this year recommended that the case definition be “retired” for that reason.

More specific case definitions for the illness require the presence of core symptoms like post-exertional malaise, cognitive problems and sleep disorders, rather than just fatigue per se. Because the symptom called post-exertional malaise means that patients can suffer severe relapses after minimal exertion, many patients and advocacy organizations consider increases in activity to be potentially dangerous.

To be eligible for the FINE trial, participants needed to score 70 or less out of 100 on the physical function scale, the Medical Outcomes Study 36-Item Short Form Health Survey, known as the SF-36. They also needed to score a 4 or more out of 11 on the 11-item Chalder Fatigue Scale, with each item scored as either 0 or 1. On the fatigue scale, a higher score indicated greater fatigue.

Among other measures, the trial also included a key objective outcome–the “time to take 20 steps, (or number of steps taken, if this is not achieved) and maximum heart rate reached on a step-test.”

Participants were to be assessed on these measures at 20 weeks, which was right after the end of the treatment period, and again at 70 weeks, which was one year after the end of treatment. The FINE trial protocol, published in the journal BMC Medicine in 2006, warned that “short-term assessments of outcome in a chronic health condition such as CFS/ME can be misleading” and declared the 70-week assessment to be the “primary outcome point.”

*****

The theoretical model behind the FINE trial and pragmatic rehabilitation paralleled the PACE concept. The physical symptoms were presumed to be the result not of a pathological disease process but of “deconditioning” or “dysregulation” caused by sedentary behavior, accompanied by disrupted sleep cycles and stress. The sedentary behavior was itself presumed to be triggered by patients’ “unhelpful” conviction that they suffered from a progressive medical illness. Counteracting the deconditioning involved re-establishing normal sleep cycles, reducing anxiety levels and gently increasing physical exertion, even if patients remained homebound.

“The treatment [pragmatic rehabilitation] is based on a model proposing that CFS/ME is best understood as a consequence of physiological dysregulation associated with inactivity and disturbance of sleep and circadian rhythms,” stated the FINE trial protocol. “We have argued that these conditions…are often maintained by illness beliefs that lead to exercise-avoidance. The essential feature of the treatment is the provision of a detailed explanation for patients’ symptoms, couched in terms of the physiological dysregulation model, from which flows the rationale for a graded return to activity.”

On the FINE trial website, a 2004 presentation about pragmatic rehabilitation explained the illness in somewhat simpler terms, comparing it to “very severe jetlag.” After explaining how and why pragmatic rehabilitation led to physical improvement, the presentation offered this hopeful message, in boldface: “There is no disease–you have a right to full health. This is a good news diagnosis. Carefully built up exercise can reverse the condition. Go for 100% recovery.”

In contrast, patients, advocates and many leading scientists have completely rejected the PACE and FINE approach. They believe the evidence overwhelmingly points to an immunological and neurological disorder triggered by an initial infection or some other physiological insult. Last month, the National Institutes of Health ratified this perspective when it announced a major new push to seek biomedical answers to the disease, which it refers to as ME/CFS.

As in PACE, patients in the FINE trial were issued different treatment manuals depending upon their assigned study arm. The treatment manual for pragmatic rehabilitation repeatedly informed participants that the therapy could help them get better—even though the trial itself was designed to test the effectiveness of the therapy. (In the PACE trial, the manuals for the cognitive behavior therapy and graded exercise therapy arms also included many statements promoting the idea that the therapies could successfully treat the illness.)

“This booklet has been written with the help of patients who have made a full recovery from Chronic Fatigue Syndrome,” stated the FINE pragmatic rehabilitation manual on its second page. “Facts and information which were important to them in making this recovery have been included.” The manual noted that the patients who helped write it had been treated at the Royal Liverpool University Hospital but did not include more specific details about their “full recovery” from the illness.

Among the “facts and information” included in the manual were assertions that the trial participants, contrary to what they might themselves believe, had no persistent viral infection and “no underlying serious disease.” The manual promised them that pragmatic rehabilitation could help them overcome the illness and the deconditioning perpetuating it. “Instead of CFS controlling you, you can start to regain control of your body and your life,” stated the manual.

Finally, as in PACE, participants were encouraged to change their beliefs about their condition by “building the right thoughts for your recovery.” Participants were warned that “unhelpful thoughts”—such as the idea that continued symptoms indicated the presence of an organic disease and could not be attributed to deconditioning—“can put you off parts of the treatment programme and so delay or prevent recovery.”

The supportive listening manual did not similarly promote the idea that “recovery” from the illness was possible. During the sessions, the manual explained, “The listener, your therapist, will provide support and encourage you to find ways to cope by using your own resources to change, manage or adapt to difficulties…She will not tell you what to do, advise, coach or direct you.”

*****

A qualitative study about the challenges of the FINE research process, published by the investigators in the journal Implementation Science in 2011, shed light on how much the theoretical framework and the treatment approaches frustrated and angered trial participants. According to the interviews with some of the nurses, nurse supervisors, and participants involved in FINE, the home visits often bristled with tension over the different perceptions of what caused the illness and which interventions could help.

“At times, this lack of agreement over the nature of the condition and lack of acceptance as to the rationale behind the treatment led to conflict,” noted the FINE investigators in the qualitative paper. “A particularly difficult challenge of interacting with patients for the nurses and their supervisors was managing patients’ resistance to the treatment.”

One participant in the pragmatic rehabilitation arm, who apparently found it difficult to do what was expected, attributed this resistance to the insistence that deconditioning caused the symptoms and that activity would reverse them. “If all that was standing between me and recovery was the reconditioning I could work it out and do it, but what I have got is not just a reconditioning problem,” the participant said. “I have got something where there is damage and a complete lack of strength actually getting into the muscles and you can’t work with what you haven’t got in terms of energy.”

Another participant in the pragmatic rehabilitation arm was more blunt. “I kept arguing with her [the nurse administering the treatment] all the time because I didn’t agree with what she said,” said the participant, who ended up dropping out of the trial.

Some participants in the supportive listening arm also questioned the value of the treatment they were receiving, according to the study. “I mostly believe it was more physical than anything else, and I didn’t see how talking could truthfully, you know, if it was physical, do anything,” said one.

The theoretical orientation alienated some prospective participants as well, according to interviews the investigators conducted with some patients who declined to enter the trial. “It [the PR intervention] insisted that physiologically there was nothing wrong,” said one such patient. “There was nothing wrong with my glands, there was nothing wrong, that it was just deconditioned muscles. And I didn’t believe that…I can’t get well with treatment you don’t believe in.”

When patients challenged or criticized the therapeutic interventions, the study found, nurses sometimes felt their authority and expertise to be under threat. “They are testing you all the time,” said one nurse. Another reported: “That anger…it’s very wearing and demoralizing.”

One nurse remembered the difficulties she faced with a particular participant. “I used to go there and she would totally block me, she would sit with her arms folded, total silence in the house,” said the nurse. “It was tortuous for both of us.”

At times, nurses themselves responded to these difficult interactions with bouts of anger directed at the participants, according to a supervisor.

“Their frustration has reached the point where they sort of boiled over,” said the supervisor. “There is sort of feeling that the patient should be grateful and follow your advice, and in actual fact, what happens is the patient is quite resistant and there is this thing like you know, ‘The bastards don’t want to get better.’”

*****

BMJ published the FINE results in 2010. The FINE investigators found no statistically significant benefits from either pragmatic rehabilitation or supportive listening at 70 weeks. Despite these null findings one year after the end of the 18-week course of treatment, the mean scores of those in the pragmatic rehabilitation arm demonstrated at 20 weeks a “clinically modest” but statistically significant reduction in fatigue—a drop of slightly more than one point on the 11-point fatigue scale. Even with this slight improvement, participants’ mean fatigue scores remained well above the trial’s entry threshold for disability, and any benefits were no longer statistically significant by the final assessment.

Despite the null findings at 70 weeks, the authors put a positive gloss on the results, reporting first in the abstract that fatigue was “significantly improved” at 20 weeks. Given the very modest one-point change in average fatigue scores, perhaps the FINE investigators intended to report instead that there was a “statistically significant improvement” at 20 weeks—an accurate phrase with a somewhat different meaning.

The abstract included another interesting linguistic element. While the trial protocol had designated the 70-week assessment as “the primary outcome point,” the abstract of the paper itself now stated that “the primary clinical outcomes were fatigue and physical functioning at the end of treatment (20 weeks) and 70 weeks from recruitment.”

After redefining their primary outcome points to include the 20-week as well as the 70-week assessment, the abstract promoted the positive effects found at the earlier point as the study’s main finding. Only after communicating the initial benefits did they note that these advantages for pragmatic rehabilitation later wore off. The FINE paper cited no oversight committee approval for this expanded interpretation of the trial’s primary outcome points to include the 20-week assessment, nor did it mention the protocol’s caveat about the “misleading” nature of short-term assessments in chronic health conditions.

In fact, within the text of the paper, the investigators noted that the “pre-designated outcome point” was 70 weeks. But they did not explain why they then decided to highlight most in the abstract what was not the pre-designated but instead a post-hoc “primary” outcome point—the 20-week assessment.

A BMJ editorial that accompanied the FINE trial also accentuated the positive results at 20 weeks rather than the bad news at 70 weeks. According to the editorial’s subhead, pragmatic rehabilitation “has a short term benefit, but supportive listening does not.” The editorial did not note that this was not the pre-designated primary outcome point. The null results for that outcome point—the 70-week assessment—were not mentioned until later in the editorial.

*****

Patients and advocates soon began criticizing the study in the “rapid response” section of the BMJ website, citing its theoretical framework, the use of the broad Oxford criteria as a case definition, and the failure to provide the step-test outcomes, among other issues.

“The data provide strong evidence that the anxiety and deconditioning model of CFS/ME on which the trial is predicated is either wrong or, at best, incomplete,” wrote one patient. “These results are immensely important because they demonstrate that if a cure for CFS/ME is to be found, one must look beyond the psycho-behavioural paradigm.”

Another patient wrote that the study was “a wake-up call to the whole of the medical establishment” to take the illness seriously. One predicted that “there will be those who say that this trial failed because the patients were not trying hard enough.”

A physician from Australia sought to defend the interests not of patients but of the English language, decrying the lack of hyphens in the paper’s full title: “Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial.”

“The hyphen is a coupling between carriages of words to ensure unambiguous transmission of thought,” wrote the doctor. “Surely this should read ‘Nurse-led, home-based, self-help…’

“Lest English sink further into the Great Despond of ambiguity and non-sense [hyphen included in the original comment], may I implore the co-editors of the BMJ to be the vigilant watchdogs of our mother tongue which at the hands of a younger ‘texting’ generation is heading towards anarchy.” [The original comment did not include the expected comma between ‘tongue’ and ‘which.’]

*****

In a response on the BMJ website a month after publishing the study, the FINE investigators reported that they had conducted a post-hoc analysis with a different kind of scoring for the Chalder Fatigue Scale.

Instead of scoring the answers as 0 or 1 using what was called a bimodal scale, they rescored them using what was called a continuous scale, with each item valued from 0 to 3. The full range of possible scores now ran from 0 to 33, rather than 0 to 11. (As collected, the data for the Chalder Fatigue Scale allowed for either scoring system; however, a score of 4 on the bimodal scale—the original entry threshold—would translate into anywhere from 8 to as high as 19 on the revised scale.)
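
The difference between the two scoring systems can be sketched as follows. This is an illustrative example with hypothetical answers (not the investigators’ code or data), showing how the same eleven responses yield very different totals depending on which scoring method is applied after the fact.

```python
# Illustrative sketch of Chalder Fatigue Scale scoring (hypothetical data).
# Each of the 11 items has four response options, coded here as:
# 0 = "less than usual", 1 = "no more than usual",
# 2 = "more than usual", 3 = "much more than usual".
answers = [2, 3, 2, 2, 1, 1, 0, 1, 2, 3, 2]

# Bimodal scoring: the two "worse than usual" options count as 1, else 0.
# Possible totals run from 0 to 11.
bimodal = sum(1 if a >= 2 else 0 for a in answers)

# Continuous (Likert) scoring: each answer keeps its 0-3 value.
# Possible totals run from 0 to 33.
continuous = sum(answers)

print(bimodal, continuous)  # → 7 19
```

The same participant thus scores 7 of 11 under the bimodal system but 19 of 33 under the continuous one, which is why switching methods after data collection can move results across a significance threshold.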

With the revised scoring, they now reported a “clinically modest, but statistically significant effect” of pragmatic rehabilitation at 70 weeks—a reduction from baseline of about 2.5 points on the 0 to 33 scale. This final score represented some increase in fatigue from the 20-week interim assessment point.

In their comment on the website, the FINE investigators now reaffirmed that the 70-week assessment was “our primary outcome point.” This statement conformed to the protocol but differed from the suggestion in the BMJ paper that the 20-week results also represented “primary” outcomes. Given that the post-hoc rescoring allowed the investigators to report statistically significant results at the 70-week endpoint, this zig-zag back to the protocol language was perhaps not surprising.

In their comment, the FINE investigators also explained that they did not report their step-test results—their one objective measure of physical capacity–“due to a significant amount of missing data.” They did not provide an explanation for the missing data. (One obvious possible reason for missing data on an objective fitness test is that participants were too disabled to perform it at all.)

The FINE investigators did not address the question of whether the title of their paper should have included hyphens.

In the rapid comments, Tom Kindlon, a patient and advocate from a Dublin suburb, responded to the FINE investigators’ decision to report their new post-hoc analysis of the fatigue scale. He noted that the investigators themselves had chosen the bimodal scoring system for their study rather than the continuous method.

“I’m sure many pharmacological and non-pharmacological studies could look different if investigators decided to use a different scoring method or scale at the end, if the results weren’t as impressive as they’d hoped,” he wrote. “But that is not normally how medicine works. So, while it is interesting that the researchers have shared this data, I think the data in the main paper should be seen as the main data.”

*****

The FINE investigators have published a number of other papers arising from their study. In a 2013 paper on mediators of the effects of pragmatic rehabilitation, they reported that there were no differences between the three groups on the objective measure of physical capacity, the step test, despite their earlier decision not to publish the data in the BMJ paper.

Wearden herself presented the trial as a high point of her professional career in a 2013 interview for the website of the University of Manchester’s School of Psychological Sciences. “I suppose the thing I did that I’m most proud of is I ran a large treatment trial of pragmatic rehabilitation treatment for patients with chronic fatigue syndrome,” she said in the interview. “We successfully carried that trial out and found a treatment that improved patients’ fatigue, so that’s probably the thing that I’m most proud of.”

The interview did not mention that the improvement at 20 weeks was transient—or that a statistically significant effect at 70 weeks emerged only after the investigators performed a post-hoc analysis and rescored the fatigue scale.

*****

The Science Media Centre, a self-styled “independent” purveyor of information about science and scientific research to journalists, has consistently shown an interest in research on what it calls CFS/ME. It held a press briefing for the first PACE results published in The Lancet in 2011, and has helped publicize the release of subsequent studies from the PACE team.

However, the Science Media Centre does not appear to have done anything to publicize the 2010 release of the FINE trial, despite its interest in the topic. A search of the center’s website for the lead FINE investigator, Alison Wearden, yielded no results. And a search for CFS/ME indicated that the first study embraced by the center’s publicity machine was the 2011 Lancet paper.

That might help explain why the FINE trial was virtually ignored by the media. A search on the LexisNexis database for “PACE trial” and “chronic fatigue syndrome” yielded 21 “newspaper” articles. (I use the quotation marks here because I don’t know whether that number includes articles on newspaper websites that did not appear in the print product; the accuracy of the number is also in question because the list did not include two PACE-related articles that I wrote for The New York Times.)

Searches on the database combining “chronic fatigue syndrome” with either “FINE trial” or “pragmatic rehabilitation” yielded no results. (I used the version of LexisNexis Academic available to me through the University of California library system.)

Other researchers have also paid scant attention to the FINE trial, especially when compared to the PACE study. According to Google Scholar, the 2011 PACE paper in The Lancet has been cited 355 times. In contrast, the 2010 FINE paper in BMJ has only been cited 39 times.

*****

The PACE investigators likely exacerbated this virtual disappearance of the FINE trial by their decision not to mention it in their Lancet paper, despite its longstanding status as a “sister trial” and the relevance of the findings to their own study of cognitive behavior therapy and graded exercise therapy. The PACE investigators have not explained their reasons for ignoring the FINE trial. (I wrote about this lapse in my Virology Blog story, but in their response the PACE investigators did not mention it.)

This absence is particularly striking in light of the decision made by the PACE investigators to drop their protocol method of assessing the Chalder Fatigue Scale. In the protocol, their primary fatigue outcome was based on bimodal scoring on the 11-item fatigue scale; continuous scoring, using the 0 to 33 version of the scale, was included as a secondary outcome.

In the PACE paper itself, the investigators announced that they had dropped the bimodal scoring in favor of the continuous scoring “to more sensitively test our hypotheses of effectiveness.” They did not explain why they simply didn’t provide the findings under both scoring methods, since the data as collected allowed for both analyses. They also did not cite any references to support this mid-trial decision, nor did they explain what prompted it.

They certainly did not mention that PACE’s “sister” study, the FINE trial, had reported null results at the 70-week endpoint—that is, until the investigators rescored the data using a continuous scale rather than the bimodal scale used in the original paper.

The three main PACE investigators—psychiatrists Peter White and Michael Sharpe, and behavioral psychologist Trudie Chalder—did not respond to an e-mail request for comment on why their Lancet paper did not mention the FINE study, especially in reference to their post-hoc decision to change the method of scoring the fatigue scale. Lancet editor Richard Horton also did not respond to an e-mail request for an interview on whether he believed the Lancet paper should have included information about the FINE trial and its results.

*****

Update 11/9/15 10:46 PM: According to a list of published and in-process papers on the FINE trial website, the main FINE study was rejected by The Lancet before being accepted by BMJ, suggesting that The Lancet was at least aware of the trial well before it published the PACE study. That raises further questions about the absence of any mention of FINE and its null findings in the text of the PACE paper.

Trial By Error, Continued: Did the PACE Study Really Adopt a ‘Strict Criterion’ for Recovery?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley.

First, some comments: When Virology Blog posted my very, very, very long investigation of the PACE trial two weeks ago, I hoped that the information would gradually leak out beyond the ME/CFS world. So I’ve been overwhelmed by the response, to say the least, and technologically unprepared for my viral moment. I didn’t even have a photo on my Twitter profile until yesterday.

Given the speed at which events are unfolding, I thought it made sense to share a few thoughts, prompted by some of the reactions and comments and subsequent developments.

I approached this story as a journalist, not an academic. I read as much as I could and talked to a lot of people. I did not set out to write the definitive story about the PACE trial, document every single one of its many oddities, or credit everyone involved in bringing these problems to light. My goal was to explain what I recognized as some truly indefensible flaws in a clear, readable way that would resonate with scientists, public health and medical professionals, and others not necessarily immersed in the complicated history of this terrible disease.

To do that most effectively and maximize the impact, I had to find a story arc, some sort of narrative, to carry readers through 14,000 words and many dense explanations of statistical and epidemiologic concepts. After a couple of false starts, I settled on a patient and advocate, Tom Kindlon, as my “protagonist”—someone readers could understand and empathize with. Tom is smart, articulate, and passionate about good science–and he knows the PACE saga inside out. He was a terrific choice whose presence in the story, I think, made reading it a lot more bearable.

That decision in no way implied that Tom was the only possible choice or even the best possible choice. I built my work on the work of others, including many that James Coyne recently referred to as “citizen-scientists.” Tom’s dedication to tracking and critiquing the research has been heroic, given his health struggles. But the same could be said, and should be said, of many others who have fought to raise awareness about the problems with PACE since the trial was announced in 2003.

The PACE study has generated many peer-reviewed publications and a healthy paper trail. My account of the story, notwithstanding its length, has significant gaps. I haven’t finished writing about PACE, so I hope to fill in some of them myself—as with today’s story on the 2011 Lancet commentary written by colleagues of Peter White, the lead PACE investigator. But I have no monopoly on this story, nor would I want one—the stakes are too high and too many years have already been wasted. Given the trial’s wealth of problems and its enormous influence and ramifications, there are plenty of PACE-related stories left for everyone to tackle.

I am, obviously, indebted to Tom—for his good humor, his willingness to trust me given so many unfair media portrayals of ME/CFS, and his patience when I peppered him with question after question via Facebook, Twitter, and e-mail.

I am also indebted to my friend Valerie Eliot Smith. We met when I began research on this project in July, 2014; since then, she has become an indispensable resource, offering transatlantic support across multiple domains. Valerie has given me invaluable legal counsel, making sure that what I was writing was verifiable and, just as important, defendable—especially in the U.K. (I don’t want to know how many billable hours she has invested!) She has provided keen strategic advice. She has been a terrific editor, whose input greatly improved the story’s flow and readability. She has done all this, I realize, at some risk to her own health. I am lucky she decided to join me on this unexpected journey.

I would like to thank, as well, Dr. Malcolm Hooper, Margaret Williams, Dr. Nigel Speight, Dr. William Weir, Natalie Boulton, Lois Addy, and the Countess of Mar for their help and hospitality while I was in England researching the story last year. I will always cherish the House of Lords plastic bag that I received from the Countess. (The bag was stuffed with PACE-related reports and documents.)

So far, Richard Horton, the editor of The Lancet, has not responded to the criticisms documented in my story. As for the PACE investigators, they provided their own response last Friday on Virology Blog, followed by my rebuttal.

In seeking that opportunity for the PACE investigators to respond, a public relations representative from Queen Mary University of London, or QMUL, had approached Virology Blog. In e-mails to Dr. Racaniello, the public relations representative had suggested that “misinformation” and “inaccuracies” in my article had triggered social media “abuse” and could cause “reputational damage.”

These are serious charges, not to be taken lightly. Last Friday’s exchange has hopefully put an end to such claims. It seems unlikely that calling rituximab an “anti-inflammatory” rather than an “immunomodulatory” drug would trigger social media abuse or cause reputational damage.

Last week, in an effort to expedite Virology Blog’s publication of the PACE investigators’ response, the QMUL public relations representative further charged that I had not sought their input before the article was posted. This accusation goes to the heart of my professional integrity as a journalist. It is also untrue—as the public relations representative would have known had he read my piece or talked to the PACE investigators themselves. (Whether earlier publication of their response would have helped their case is another question.)

Disseminating false information to achieve goals is not usually an effective PR strategy. I have asked the QMUL public relations representative for an explanation as to why he conveyed false information to Dr. Racaniello in his attempt to advance the interests of the PACE investigators. I have also asked for an apology.

*****
Since 2011, the PACE investigators have released several papers, repeatedly generating enthusiastic news coverage about the possibility of “recovery”–coverage that has often drawn conclusions beyond what the publications themselves have reported.

The PACE researchers can’t control the media and don’t write headlines. But in at least one case, their actions appeared to stimulate inaccurate media accounts–and they made no apparent effort immediately afterwards to correct the resulting international coverage. The misinformation spread to medical and public health journals as well.

(I mentioned this episode, regarding the Lancet “comment” that accompanied the first PACE results in 2011, in my excruciatingly long series two weeks ago on Virology Blog. However, that series focused on the PACE study, and the comment itself raised additional issues that I did not have the chance to explore. Because the Lancet comment had such an impact on media coverage, and ultimately most likely on patient care, I felt it was important to return to it.)

The Lancet comment, written by Gijs Bleijenberg and Hans Knoop from the Expert Centre for Chronic Fatigue at Radboud University Nijmegen in the Netherlands, was called “Chronic fatigue syndrome: where to PACE from here?” It reported that 30 percent of those receiving the two rehabilitative interventions favored by the PACE investigators–cognitive behavior therapy and graded exercise therapy–had “recovered.” Moreover, these participants had “recovered” according to what the comment stated was the “strict criterion” used by the PACE study itself.

Yet the PACE investigators themselves did not make this claim in their paper. Rather, they reported that participants in the two rehabilitative arms were more likely to improve and to be within what they referred to as “the normal range” for physical function and fatigue, the study’s two primary outcome measures. (“Normal range” is a statistical concept that has no inherent connection to “normal functioning” or “recovery.” More on that below.)

In addition, the comment did not mention that 15 percent of those receiving only the baseline condition of “specialist medical care” also “recovered” according to the same criterion. Thus, only half of this 30 percent “recovery” rate could actually be attributed to the interventions.

The PACE investigators themselves reviewed the comment before publication.

Thanks to this inaccurate account of the PACE study’s reported findings, the claim of a 30 percent “recovery” rate dominated much of the news coverage. Trudie Chalder, one of the key PACE investigators, reinforced the message of the Lancet comment when she declared at the press conference announcing the PACE results that participants in the two rehabilitative interventions got “back to normal.”

Just as the PACE paper did not report that anyone had “recovered,” it also did not report that anyone got “back to normal.”

Three months later, the PACE authors acknowledged in correspondence in The Lancet that the paper did not discuss “recovery” at all and that they would be presenting “recovery” data in a subsequent paper. They did not explain, however, why they had not taken earlier steps to correct the apparently inaccurate news coverage about how patients in the trial had “recovered” and gotten “back to normal.”

*****

It is not unusual for journals, when they publish studies of significance, to also commission commentaries or editorials that discuss the implications of the findings. It is also not unusual for colleagues of a study’s authors to be asked to write such commentaries. In this case, Bleijenberg and Knoop were colleagues of Peter White, the lead PACE investigator.  In 2007, the three had published, along with two other colleagues, a paper called “Is a full recovery possible after cognitive behavior therapy for chronic fatigue syndrome?” in the journal Psychotherapy and Psychosomatics.

(In their response last Friday to my Virology Blog story, the PACE investigators noted that they had published a “correction” to clarify that the 2011 Lancet paper was not about “recovery”; presumably, they were referring to the Lancet correspondence three months later. In their response to Virology Blog, they blamed the misconception on an “editorial…written by others.” But they did not mention that those “others” were White’s colleagues. In their response, they also did not explain why they did not “correct” this “recovery” claim during their pre-publication review of the comment, nor why Chalder spoke at the press conference of participants getting “back to normal.”)

In the Lancet comment, Bleijenberg and Knoop hailed the PACE team for its work. And here’s what they wrote about the trial’s primary outcome measures for physical function and fatigue: “PACE used a strict criterion for recovery: a score on both fatigue and physical function within the range of the mean plus (or minus) one standard deviation of a healthy person’s score.”

This statement was problematic for a number of reasons. Given that the PACE paper itself made no claims for “recovery,” Bleijenberg and Knoop’s assertion that it “used” any criterion for “recovery” at all was false. The PACE study protocol had outlined four specific criteria that constituted what the investigators referred to as “recovery.” Two of them were thresholds on the physical function and fatigue measures, but the Lancet paper did not present data for the other criteria and so could not report “recovery” rates.

Instead, the Lancet paper reported the rates of participants in all the groups who finished the study within what the researchers referred to as “the normal ranges” for physical function and fatigue. But as noted immediately by some in the patient community, these “normal ranges” featured a bizarre paradox: the thresholds for being “within the normal range” on both the physical function and fatigue scales indicated worse health than the entry thresholds required to demonstrate enough disability to qualify for the trial in the first place.

*****

To many patients and other readers, for the Lancet comment to refer to “normal range” scales in which entry and outcome criteria overlapped as a “strict criterion for recovery” defied logic and common sense. (According to data not included in the Lancet paper but obtained later by a patient through a freedom-of-information request, 13 percent of the total sample was already “within normal range” for physical function, fatigue or both at baseline, before any treatment began.)

In the Lancet comment, Bleijenberg and Knoop also noted that these “normal ranges” were based on “a healthy person’s score.” In other words, the “normal ranges” were purportedly derived from responses to the physical function and fatigue questionnaires by population-based samples of healthy people.

But this statement was also at odds with the facts. The source for the fatigue scale was a population of attendees at a medical practice—a population that could easily have had more health issues than a sample from the general population. And as the PACE authors themselves acknowledged in the Lancet correspondence several months after the initial publication, the SF-36 population-based scores they used to determine the physical function “normal range” were from an “adult” population, not the healthier, working-age population they had inaccurately referred to in The Lancet. (An “adult” population includes the elderly.)

The Lancet has never corrected this factual mistake in the PACE paper itself. The authors had described–inaccurately–how they derived a key outcome for one of their two primary measures. This error indisputably made the results appear better than they were, but only those who scrutinized the correspondence were aware of this discrepancy.

The Lancet comment, like the Lancet paper itself, has also never been corrected to indicate that the source population for the SF-36 responses was not a “healthy” population after all, but an “adult” one that included many elderly. The comment’s parallel claim that the source population for the fatigue scale “normal range” was “healthy” as well has also not been corrected.

Richard Horton, the editor of The Lancet, did not respond to a request for an interview to discuss whether he agreed that the “normal range” thresholds represented “a strict criterion for recovery.” Peter White, Trudie Chalder and Michael Sharpe, the lead PACE investigators, and Gijs Bleijenberg, the lead author of the Lancet comment, also did not respond to requests for interviews for this story.

*****

How did the PACE study end up with “normal ranges” in which participants could get worse and still be counted as having achieved the designated thresholds?

Here’s how: The investigators committed a major statistical error in determining the PACE “normal ranges.” They used a standard statistical formula designed for normally distributed populations — that is, populations in which most people score somewhere in the middle, with the rest falling off evenly on each side. When normally distributed populations are graphed, they form the classic bell curve. In PACE, however, the data they were analyzing was far from normally distributed. The population-based responses to the physical function and fatigue questionnaires were skewed—that is, clustered toward the healthy end rather than symmetrically spread around a mean value.

With a normally distributed set of data, a “normal range” using the standard formula used in PACE—taking the mean, plus/minus one standard deviation–contains 68 percent of the values. But when the values are clustered toward one end, as in the source populations for physical function and fatigue, a larger percentage ends up being included in a “normal range” calculated using this same formula. Other statistical methods can be used to calculate 68 percent of the values when a dataset does not form a normal distribution.

If the standard formula is used on a population-based survey with scores clustered toward the healthier end, the result is an expanded “normal range” that pushes the lower threshold even lower, as happened with the PACE physical function scale. And in PACE, the threshold wasn’t just low–it was lower than the score required for entry into the trial. This score, of course, already represented severe disability, not “recovery” or being “back to normal”—and certainly not a “strict criterion” for anything.
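The distortion described above is easy to demonstrate. Here is a minimal sketch in Python, using invented illustrative scores (not the actual PACE or SF-36 source data), showing how the mean-minus-one-SD formula behaves when applied to a sample clustered toward the healthy end of a 0–100 scale:

```python
import statistics

# Hypothetical physical-function scores (0-100, higher = better),
# clustered toward the healthy end -- NOT the actual PACE source data.
scores = ([100] * 50 + [95] * 20 + [90] * 10 +
          [80] * 8 + [60] * 6 + [40] * 4 + [20] * 2)

mean = statistics.mean(scores)   # 90.0
sd = statistics.stdev(scores)    # ~17.8
lower, upper = mean - sd, mean + sd

# For a normally distributed sample, mean +/- 1 SD would capture ~68%
# of the values. With skewed data, it captures considerably more.
within = sum(lower <= s <= upper for s in scores) / len(scores)

print(f"'normal range': {lower:.1f} to {upper:.1f}")
print(f"fraction of sample inside: {within:.0%}")  # 88% -- well above 68%
```

The long unhealthy tail inflates the standard deviation, which drags the lower threshold of the “normal range” far down the scale, exactly the effect that let a low physical function score count as “within the normal range” in PACE.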

Bleijenberg and Knoop, the comment authors, were themselves aware of the challenges faced in calculating accurate “normal ranges,” since the issue was addressed in the 2007 paper they co-wrote with Peter White. In this paper, White, Bleijenberg, and Knoop discussed the concerns related to determining a “normal range” from population data that was heavily clustered toward the healthy end of the scale. The paper noted that using the standard formula “assumed a normal distribution of scores” and generated different results under the “violation of the assumptions of normality.”

*****

Despite the caveats the three scientists included in this 2007 paper, Bleijenberg and Knoop’s 2011 Lancet comment did not mention these concerns about distortion arising from applying the standard statistical formula to values that were not normally distributed. (White and his colleagues also did not mention this problem in the PACE study itself.)

Moreover, the 2007 paper from White, Bleijenberg, and Knoop had identified a score of 80 on the SF-36 as representing “recovery”—a much higher “recovery” threshold than the SF-36 score of 60 that Bleijenberg and Knoop now declared to be a “strict criterion.” In the Lancet comment, the authors did not mention this major discrepancy, nor did they explain how and when they had changed their minds about whether an SF-36 score of 60 or 80 best represented “recovery.” (In 2011, White and his colleagues also did not mention this discrepancy between the score for “recovery” in the 2007 paper and the much lower “normal range” threshold in the PACE paper.)

Along with the PACE paper, The Lancet comment caused an uproar in the patient and advocacy communities–especially since the claim that 30 percent of participants in the rehabilitative arms “recovered” per a “strict criterion” was widely disseminated.

The comment apparently caused some internal consternation at The Lancet as well. In an e-mail to Margaret Williams, the pseudonym for a longtime clinical manager in the National Health Service who had complained about the Lancet comment, an editor at the journal, Zoe Mullan, agreed that the reference to “recovery” was problematic.

“Yes I do think we should correct the Bleijenberg and Knoop Comment, since White et al explicitly state that recovery will be reported in a separate report,” wrote Mullan in the e-mail. “I will let you know when we have done this.”

No correction was made, however.

*****

In 2012, to press the issue, the Countess of Mar pursued a complaint about the comment’s claim of “recovery” with the (now-defunct) Press Complaints Commission, a regulatory body established by the media industry that was authorized to investigate the conduct of news organizations. The countess, who frequently championed the cause of the ME/CFS patient community in Parliament’s House of Lords, had long questioned the scientific basis for recommending cognitive behavior therapy and graded exercise therapy, and she believed the Lancet comment’s claims of “recovery” contradicted the study itself.

In defending itself to the Press Complaints Commission, The Lancet acknowledged the earlier suggestion by a journal editor that the comment should be corrected.

“I can confirm that our editor of our Correspondence section, Zoe Mullan, did offer her personal opinion at the time, in which she said that she thought that we should correct the Comment,” wrote Lancet deputy editor Astrid James to the Press Complaints Commission, in an e-mail.

“Zoe made a mistake in not discussing this approach with a more senior member of our editorial team,” continued James in the e-mail. “Now, however, we have discussed this case at length with all members of The Lancet’s senior editorial team, and with Zoe, and we do not agree that there is a need to publish a correction.”

The Lancet now rejected the notion that the comment was inaccurate. Despite the explicit language in the comment identifying the “normal range” thresholds as the PACE trial’s own “strict criterion for recovery,” The Lancet argued in its response to the Press Complaints Commission that the authors were only expressing their personal opinion about what constituted “recovery.”

In other words, according to The Lancet, Bleijenberg and Knoop were not describing—wrongly–the conclusions of the PACE paper itself. They were describing their own interpretation of the findings. Therefore, the comment was not inaccurate and did not need to be corrected.

(In its response to the Press Complaints Commission, The Lancet did not explain why thresholds that purportedly represented a “strict criterion for recovery” overlapped with the entry criteria for disability.)

*****

The Press Complaints Commission issued its findings in early 2013. The commission agreed with the Countess of Mar that the statement about “recovery” in the Lancet comment was inaccurate. But the commission gave a slightly different reason. The commission accepted the Lancet’s argument that Bleijenberg and Knoop were trying to express their own opinion. The problem, the commission ruled, was that the comment itself didn’t make that point clear.

“The authors of the comment piece were clearly entitled to take a view on how ‘recovery’ should be defined among the patients in the trial,” wrote the commission. However, continued the decision: “The authors of the comment had failed to make clear that the 30 per cent figure for ‘recovery’ reflected their view that function within ‘normal range’ was an appropriate way of ‘operationalising’ recovery–rather than statistical analysis by the researchers based on the definition for recovery provided. This was a distinction of significance, particularly in the context of a comment on a clinical trial published in a medical journal. The comment was misleading on this point and raised a breach of Clause 1 (Accuracy) of the Code.”

However, this determination seemed based on a misreading of what Bleijenberg and Knoop had actually written: “PACE used a strict criterion for recovery.” That phrasing did not suggest that the authors were expressing their own opinion about “recovery.” Rather, it was a statement about how the PACE study itself purportedly defined “recovery.” And the statement was demonstrably untrue.

Compounding the confusion, the Press Complaints Commission decision noted that the Lancet comment had been discussed with the PACE investigators prior to publication. Since the phrase “strict criterion for recovery” had thus apparently been vetted by the PACE team itself, it remained unclear why the commission determined that Bleijenberg and Knoop were only expressing their own opinion.

The commission’s decision left other questions unanswered. It noted that the Countess had pointed out that the “recovery” score for physical function cited by the commenters was lower than the score required for entry. Despite this obvious anomaly, the commission did not indicate whether it had asked The Lancet or Bleijenberg and Knoop to explain how such a nonsensical scale could be used to assess “recovery.”

*****

Notwithstanding the inaccuracy of the Lancet comment’s “recovery” claim, the commission also found that the journal had already taken “sufficient remedial action” to rectify the problem. The commission noted that the correspondence published after the trial had provided a prominent forum to debate concerns over the definition of “recovery.” The decision also noted that the PACE authors themselves had clarified in the correspondence that the actual “recovery” findings would be published in a subsequent paper.

In ruling that “sufficient remedial action” had already been taken, however, the commission did not mention the potential damage that already might have been caused by this inaccurate “recovery” claim. Given the comment’s declaration that 30 percent of participants in the cognitive behavior and graded exercise therapy arms had “recovered” according to a “strict criterion,” the message received worldwide dissemination—even though the PACE paper itself made no such claim.

Medical and public health journals, conflating the Lancet comment and the PACE study itself, also transmitted the 30 percent “recovery” rate directly to clinicians and others who treat or otherwise deal with ME/CFS patients.

The BMJ referred to the approximately 30 percent of patients who met the “normal range” thresholds as “cured.” A study in BMC Health Services Research cited PACE as having demonstrated “a recovery rate of 30-40%”—months after the PACE authors had issued their “correction” that their paper did not report on “recovery” at all. (Another mystery about the BMC Health Services Research report is the source of the 40 percent figure for “recovery.”) A 2013 paper in PLoS One similarly cited the PACE study—not the Lancet comment—and noted that 30 percent achieved a “full recovery.”

Given that relapsing after too much exertion is a core symptom of the illness, it is impossible to calculate the possible harms that could have arisen from this widespread dissemination of misinformation to health care professionals—all based on the flawed claim from the comment that 30 percent of participants had recovered according to the PACE study’s “strict criterion for recovery.”

And that “strict criterion,” it should be remembered, allowed participants to get worse and still be counted as better.

David Tuller responds to the PACE investigators

David Tuller’s three-installment investigation of the PACE trial for chronic fatigue syndrome, “Trial By Error,” has received enormous attention. Although the PACE investigators declined David’s efforts to interview them, they have now requested the right to reply. Today, virology blog posts their response to David’s story, and below, his response to their response. 

According to the communications department of Queen Mary University, the PACE investigators have been receiving abuse on social media as a result of David Tuller’s posts. When I published Mr. Tuller’s articles, my intent was to provide a forum for discussion of the controversial PACE results. Abuse of any kind should not have been, and must not be, part of that discourse. -vrr


Last December, I offered to fly to London to meet with the main PACE investigators to discuss my many concerns. They declined the offer. Dr. White cited my previous coverage of the issue as the reason and noted that “we think our work speaks for itself.” Efforts to reach out to them for interviews two weeks ago also proved unsuccessful.

After my story ran on virology blog last week, a public relations manager for medicine and dentistry in the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello. He requested, on behalf of the PACE authors, the right to respond. (Queen Mary University is Dr. White’s home base.)

That response arrived Wednesday. My first inclination, when I read it, was that I had already rebutted most of their criticisms in my 14,000-word piece, so it seemed like a waste of time to engage in further extended debate.

Later in the day, however, the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello again, with an urgent request to publish the response as soon as possible. The PACE investigators, he said, were receiving “a lot of abuse” on social media as a result of my posts, so they wanted to correct the “misinformation” as soon as possible.

Because I needed a day or two to prepare a careful response to the PACE team’s rebuttal, Dr. Racaniello agreed to post them together on Friday morning.

On Thursday, Dr. Racaniello received yet another appeal from the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University. Dissatisfied with the Friday publishing timeline, he again urged expedited publication because “David’s blog posts contain a number of inaccuracies, may cause a considerable amount of reputational damage, and he did not seek comment from any of the study authors before the virology blog was published.”

The charge that I did not seek comment from the authors was at odds with the facts, as Dr. Racaniello knew. (It is always possible to argue about accuracy and reputational damage.) Given that much of the argument for expedited posting rested on the public relations manager’s obviously “dysfunctional cognition” that I had unfairly neglected to provide the PACE authors with an opportunity to respond, Dr. Racaniello decided to stick with his pre-planned posting schedule.

Before addressing the PACE investigators’ specific criticisms, I want to apologize sincerely to Dr. White, Dr. Chalder, Dr. Sharpe and their colleagues on behalf of anyone who might have interpreted my account of what went wrong with the PACE trial as license to target the investigators for “abuse.” That was obviously not my intention in examining their work, and I urge anyone engaging in such behavior to stop immediately. No one should have to suffer abuse, whether online or in the analog world, and all victims of abuse deserve enormous sympathy and compassion.

However, in this case, it seems I myself am being accused of having incited a campaign of social media “abuse” and potentially causing “reputational damage” through purportedly inaccurate and misinformed reporting. Because of the seriousness of these accusations, and because such accusations have a way of surfacing in news reports, I feel it is prudent to rebut the PACE authors’ criticisms in far more detail than I otherwise would. (I apologize in advance to the obsessives and others who feel they need to slog through this rebuttal; I urge you to take care not to over-exert yourself!)

In their effort to correct the “misinformation” and “inaccuracies” in my story about the PACE trial, the authors make claims and offer accounts similar to those they have previously presented in published comments and papers. In the past, astonishingly, journal editors, peer reviewers, reporters, public health officials, and the British medical and academic establishments have accepted these sorts of non-responsive responses as adequate explanations for some of the study’s fundamental flaws. I do not.

None of what they have written in their response actually addresses or resolves the core issues that I wrote about last week. They have ignored many of the questions raised in the article. In their response, they have also not mentioned the devastating criticisms of the trial from top researchers from Columbia, Stanford, University College London, and elsewhere. They have not addressed why major reports this year from the Institute of Medicine and the National Institutes of Health have presented portraits of the disease starkly at odds with the PACE framework and approach.

I will ignore their overview of the findings and will focus on the specific criticisms of my work. (I will, however, mention here that my piece discussed why their claims of cost-effectiveness for cognitive behavior therapy and graded exercise therapy are based on inaccurate statements in a paper published in PLoS One in 2012).

13% of patients had already “recovered” on entry into the trial

I did not write that 13% of the participants were “recovered” at baseline, as the PACE authors state. I wrote that they were “recovered” or already at the “recovery” thresholds for two specific indicators, physical function and fatigue, at baseline—a different statement, and an accurate one.

The authors acknowledge, in any event, that 13% of the sample was “within normal range” at baseline. For the 2013 paper in Psychological Medicine, these “normal range” thresholds were re-purposed as two of the four required “recovery” criteria.

And that raises the question: Why, at baseline, was 13% of the sample “within normal range” or “recovered” on any indicator in the first place? Why did entry criteria for disability overlap with outcome scores for being “within the normal range” or “recovered”? The PACE authors have never provided an explanation of this anomaly.

In their response, the authors state that they outlined other criteria that needed to be met for someone to be called “recovered.” This is true; as I wrote last week, participants needed to meet “recovery” criteria on four different indicators to be considered “recovered.” The PACE authors did not provide data for two of the indicators in the 2011 Lancet paper, so in that paper they could not report results for “recovery.”

However, at the press conference presenting the 2011 Lancet paper, Trudie Chalder referred to people who met the overlapping disability/”normal range” thresholds as having gotten “back to normal”—an explicit “recovery” claim. In a Lancet comment published along with the PACE study itself, colleagues of the PACE team referred to these bizarre “normal range” thresholds for physical function and fatigue as a “strict criterion for recovery.” As I documented, the Lancet comment was discussed with the PACE authors before publication; the phrase “strict criterion for recovery” obviously survived that discussion.

Much of the coverage of the 2011 paper reported that patients got “back to normal” or “recovered,” based on Dr. Chalder’s statement and the Lancet comment. The PACE authors made no public attempt to correct the record until, months later, they published a letter in The Lancet. In the response to Virology Blog, they say that they were discussing “normal ranges” in the Lancet paper, and not “recovery.” Yet they have not explained why Chalder spoke about participants getting “back to normal” and why their colleagues wrote that the nonsensical “normal range” thresholds represented a “strict criterion for recovery.”

Moreover, they still have not responded to the essential questions: How does this analysis make sense? What are the implications for the findings if 13% are already “within normal range” or “recovered” on one of the two primary outcome measures? How can they be “disabled” enough on the two primary measures to qualify for the study if they’re already “within normal range” or “recovered”? And why did the PACE team use the wrong statistical method for calculating their “normal ranges” when they knew that method was inappropriate for the data sources they had?

Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee.

The PACE authors apparently believe it is appropriate to disseminate positive testimonials during a trial as long as the therapies or interventions are not mentioned. (James Coyne dissected this unusual position yesterday.)

This is their argument: “It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.” Apparently, the PACE investigators believe that if you bias all the arms of your study in a positive direction, you are not introducing bias into your study. It is hard to know what to say about this argument.

Furthermore, the PACE authors argue that the U.K. government’s new treatment guidelines had been widely reported. Therefore, they contend, it didn’t matter that–in the middle of a trial to test the efficacy of cognitive behavior therapy and graded exercise therapy–they had informed participants that the government had already approved cognitive behavior therapy and graded exercise therapy “based on the best available evidence.”

They are wrong. They introduced an uncontrolled, unpredictable co-intervention into their study, and they have no idea what the impact might have been on any of the four arms.

In their response, the PACE authors note that the participants’ newsletter article, in addition to cognitive behavior therapy and graded exercise therapy, included a third intervention, Activity Management. As they correctly note, I did not mention this third intervention in my Virology Blog story. The PACE authors now write: “These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.”

This statement is nonsense. Their third intervention was called “Adaptive Pacing Therapy,” and they developed it specifically for testing in the PACE trial. It is unclear why they now state that their third intervention was Activity Management, or why they think participants would know that Activity Management was synonymous with Adaptive Pacing Therapy. After all, cognitive behavior therapy and graded exercise therapy also involve some form of “activity management.” Precision in language matters in science.

Finally, the investigators say that Jessica Bavington, a co-author of the 2011 paper, had already left the PACE team before she served on the government committee that endorsed the PACE therapies. That might be, but it is irrelevant to the question that I raised in my piece: whether her dual role presented a conflict of interest that should have been disclosed to participants in the newsletter article about the U.K. treatment guidelines. The PACE newsletter article presented the U.K. guideline committee’s work as if it were independent of the PACE trial itself, when it was not.

Bias was caused by changing the two primary outcomes and how they were analyzed

 The PACE authors seem to think it is acceptable to change methods of assessing primary outcome measures during a trial as long as they get committee approval, announce it in the paper, and provide some sort of reasonable-sounding explanation as to why they made the change. They are wrong.

They need as well to justify the changes with references or citations that support their new interpretations of their indicators, and they need to conduct sensitivity analyses to assess the impact of the changes on their findings. Then they need to explain why their preferred findings are more robust than the initial, per-protocol findings. They did not take these steps for any of the many changes they made from their protocol.

The PACE authors mention the change from bimodal to Likert-style scoring on the Chalder Fatigue Scale. They repeat their previous explanation of why they made this change. But they have ignored what I wrote in my story—that the year before PACE was published, its “sister” study, called the FINE trial, had no significant findings on the physical function and fatigue scales at the end of the trial and only found modest benefits in a post-hoc analysis after making the same change in scoring that PACE later made. The FINE study was not mentioned in PACE. The PACE authors have not explained why they left out this significant information about their “sister” study.

Regarding the abandonment of the original method of assessing the physical function scores, this is what they say in their response: “We decided this composite method [their protocol method] would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.” They mention that they received committee approval, and that the changes were made before examining the outcome data.

The authors have presented these arguments previously. However, they have not responded to the questions I raised in my story. Why did they not report any sensitivity analyses for the changes in methods of assessing the primary outcome measures? (Sensitivity analyses can assess how changes in assumptions or variables impact outcomes.) What prompted them to reconsider their assessment methods in the middle of the trial? Were they concerned that a mean-based measure, unlike their original protocol measure, did not provide any information about proportions of participants who improved or got worse? Any information about proportions of participants who got better or worse came from post-hoc analyses—one of which was the perplexing “normal range” analysis.

Moreover, this was an unblinded trial, and researchers generally have an idea of outcome trends before examining outcome data. When the PACE authors made the changes, did they already have an idea of outcome trends? They have not answered that question.

Our interpretation was misleading after changing the criteria for determining recovery

The PACE authors relaxed all four of their criteria for “recovery” in their 2013 paper and cited no committee that approved this overall redefinition of a critical concept. Three of these relaxations involved expanded thresholds; the fourth involved splitting one category into two sub-categories—one less restrictive and one more restrictive. The authors gave the full results for the less restrictive category of “recovery.”

The PACE authors now say that they changed the “recovery” thresholds on three of the variables “since we believed that the revised thresholds better reflected recovery.” Again, they apparently think that simply stating their belief that the revisions were better justifies making the changes.

Let’s review for a second. The physical function threshold for “recovery” fell from 85 out of 100 in the protocol to 60 in the 2013 paper. And that “recovery” score of 60 was lower than the entry score of 65 needed to qualify for the study. The PACE authors have not explained how the lower score of 60 “better reflected recovery”—especially since the entry score of 65 already represented serious disability. Similar problems afflicted the fatigue scale “recovery” threshold.
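The arithmetic of this overlap is simple enough to check mechanically. Here is a minimal sketch using only the two thresholds quoted above; the 5-point steps reflect how the SF-36 physical function subscale is scored, and the variable names are mine, not the trial's:

```python
# Thresholds as reported in the text (SF-36 physical function, 0-100 scale):
ENTRY_MAX = 65      # score of 65 or below qualified a participant as disabled enough to enroll
RECOVERY_MIN = 60   # revised 2013 threshold of 60 or above counted toward "recovery"

# SF-36 physical function scores move in steps of 5.
overlap = [s for s in range(0, 101, 5)
           if s <= ENTRY_MAX and s >= RECOVERY_MIN]
print(overlap)  # scores that count as both "disabled" and "recovered": [60, 65]
```

Any participant scoring 60 or 65 at baseline simultaneously met the entry criterion for disability and the revised recovery threshold on this measure—which is the contradiction at issue.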

The PACE authors also report that “we included those who felt ‘much’ (and ‘very much’) better in their overall health” as one of the criteria for “recovery.” This is true. They are referring to the Clinical Global Impression scale. In the protocol, participants needed to score a 1 (“very much better”) on this scale to be considered “recovered” on that indicator. In the 2013 paper, participants could score a 1 (“very much better”) or a 2 (“much better”). The PACE authors provided no citations to support this expanded interpretation of the scale. They simply explained in the paper that they now thought “much better” reflected the process of recovery and so those who gave a score of 2 should also be considered to have achieved the scale’s “recovery” threshold.

With the fourth criterion—not meeting any of the three case definitions used to define the illness in the study—the PACE authors gave themselves another option. Those who did not meet the study’s main case definition but still met one or both of the other two were now eligible for a new category called “trial recovery.” They did not explain why or when they made this change.

The PACE authors provided no sensitivity analyses to measure the impact of the significant changes in the four separate criteria for “recovery,” as well as in the overall redefinition. And remember, participants at baseline could already have achieved the “recovery” requirements for one or two of the four criteria—the physical function and fatigue scales. And 13% of them already had.

Requests for data under the freedom of information act were rejected as vexatious

The PACE authors have rejected requests for the results per the protocol and many other requests for documents and data as well—at least two for being “vexatious,” as they now report. In my story, I incorrectly stated that requests for per-protocol data were rejected as “vexatious” [see clarification below]. In fact, earlier requests for per-protocol data were rejected for other reasons.

One recent request rejected as “vexatious” involved the PACE investigators’ 2015 paper in The Lancet Psychiatry. In this paper, they published their last “objective” outcome measure (except for wages, which they still have not published)—a measure of fitness called a “step-test.” But they only published a tiny graph on a page with many other tiny graphs, not the actual numbers from which the graph was drawn.

The graph was too small to extract any data, but it appeared that the cognitive behavior therapy and graded exercise therapy groups did worse than the other two. A request for the step-test data from which they created the graph was rejected as “vexatious.”

However, I apologize to the PACE authors that I made it appear they were using the term “vexatious” more extensively in rejecting requests for information than they actually have been. I also apologize for stating incorrectly that requests for per-protocol data specifically had been rejected as “vexatious” [see clarification below].

This is probably a good time to address the PACE authors’ repeated refrain that concerns about patient confidentiality prevent them from releasing raw data and other information from the trial. They state: “The safe-guarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization does [sic] not always protect the identity of a person, as they may be recognized from personal and medical information.”

This argument against the release of data doesn’t really hold up, given that researchers share data all the time without compromising confidentiality. Really, it’s not that difficult to do!

(It also bears noting that the PACE authors’ dedication to participant protection did not extend to fulfilling their protocol promise to inform participants of their “possible conflicts of interest”—see below.)

Subjective and objective outcomes

The PACE authors included multiple objective measures in their protocol. All of them failed to demonstrate real treatment success or “recovery.” The extremely modest improvement on the walking test in the exercise therapy arm still left participants more severely disabled than people with pacemakers, cystic fibrosis patients, and relatively healthy women in their 70s.

The authors now write: “We interpreted these data in the light of their context and validity.”

What the PACE team actually did was to dismiss their own objective data as irrelevant or not actually objective after all. In doing so, they cited various reasons they should have considered before including these measures in the study as “objective” outcomes. They provide one example in their response. They selected employment data as an objective measure of function, and then, as they explain in their response and have explained previously, they decided afterwards that it wasn’t an objective measure of function after all, for this and that reason.

The PACE authors consider this to be interpreting data “in the light of their context and validity.” To me, it looks like tossing out data they don’t like.

What they should do, but have not done, is ask whether the failure of all their objective measures means they should start questioning the meaning, reliability and validity of their reported subjective results.

There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent

The PACE authors here seriously misstate the concerns I raised in my piece. I did not assert that bias was caused by their involvement with insurance companies. I asserted that they violated an international research ethics document and broke a commitment they made in their protocol to inform participants of “any possible conflicts of interest.” Whether bias actually occurred is not the point.

In their approved protocol, the authors promised to adhere to the Declaration of Helsinki, a foundational human rights document that is explicit on what constitutes legitimate informed consent: Prospective participants must be “adequately informed” of “any possible conflicts of interest.” The PACE authors now suggest this disclosure was unnecessary because 1) the conflicts weren’t really conflicts after all; 2) they disclosed these “non-conflicts” as potential conflicts of interest in the Lancet and other publications; 3) they had a lot of investigators but only three had links with insurers; and 4) they informed participants about who funded the research.

These responses are not serious. They do nothing to explain why the PACE authors broke their own commitment to inform participants about “any possible conflicts of interest.” It is not acceptable to promise to follow a human rights declaration, receive approvals for a study, and then ignore inconvenient provisions. No one is much concerned about PACE investigator #19; people are concerned because the three main PACE investigators have advised disability insurers that cognitive behavior therapy and graded exercise therapy can get claimants off benefits and back to work.

That the PACE authors made the appropriate disclosures to journal editors is irrelevant; it is unclear why they are raising this as a defense. The Declaration of Helsinki is about protecting human research subjects, not about protecting journal editors and journal readers. And providing information to participants about funding sources, however ethical that might be, is not the same as disclosing information about “any possible conflicts of interest.” The PACE authors know this.

Moreover, the PACE authors appear to define “conflict of interest” quite narrowly. Just because the insurers were not involved in the study itself does not mean there is no conflict of interest, and it does not release the PACE authors from the promise they made to inform trial participants of these affiliations. No one required them to cite the Declaration of Helsinki in their protocol as part of the process of gaining approvals for their trial.

As it stands, the PACE study appears to have no legitimate informed consent for any of the 641 participants, per the commitments the investigators themselves made in their protocol. This is a serious ethical breach.

I raised other concerns in my story that the authors have not addressed. I will save everyone much grief and not go over them again here.

I want to acknowledge two additional minor errors. In the last section of the piece, I referred to the drug rituximab as an “anti-inflammatory.” While it does have anti-inflammatory effects, rituximab should more properly be referred to as an “immunomodulatory” drug.

Also, in the first section of the story, I wrote that Dr. Chalder and Dr. Sharpe did not return e-mails I sent them last December, seeking interviews. However, during a recent review of e-mails from last December, I found a return e-mail from Dr. Sharpe that I had forgotten about. In the e-mail, Dr. Sharpe declined my request for an interview.

I apologize to Dr. Sharpe for suggesting he hadn’t responded to my e-mail last December.

Clarification: In a decision on a data request, the UK Information Commissioner’s Office noted last year that Queen Mary University of London “has advised that the effect of these requests [for PACE-related material] has been that the team involved in the PACE trial, and in particular the professor involved, now feel harassed and believe that the requests are vexatious in nature.” In other words, whatever the stated reason for denying requests, White and his colleagues regarded them all as “vexatious” by definition. Therefore, the statement that the investigators rejected the requests for data as being “vexatious” is accurate, and I retract my previous apology.

PACE trial investigators respond to David Tuller

Professors Peter White, Trudie Chalder and Michael Sharpe (co-principal investigators of the PACE trial) respond to the three blog posts by David Tuller, published here on 21st, 22nd and 23rd October 2015, about the PACE trial.

Overview

The PACE trial was a randomized controlled trial of four non-pharmacological treatments for 641 patients with chronic fatigue syndrome (CFS) attending secondary care clinics in the United Kingdom (UK) (http://www.wolfson.qmul.ac.uk/current-projects/pace-trial). The trial found that individually delivered cognitive behaviour therapy (CBT) and graded exercise therapy (GET) were more effective than both adaptive pacing therapy (APT), when added to specialist medical care (SMC), and SMC alone. The trial also found that CBT and GET were cost-effective, safe, and were about three times more likely to result in a patient recovering than the other two treatments.

There are a number of published systematic reviews and meta-analyses that support these findings from both before and after the PACE trial results were published (Whiting et al, 2001, Edmonds et al, 2004, Chambers et al, 2006, Malouff et al, 2008, Price et al, 2008, Castell et al, 2011, Larun et al, 2015, Marques et al, 2015, Smith et al, 2015). We have published all the therapist and patient manuals used in the trial, which can be downloaded from the trial website (http://www.wolfson.qmul.ac.uk/current-projects/pace-trial).

We will only address David Tuller’s main criticisms. Most of these are often-repeated criticisms that we have responded to before, and we will argue that they are unjustified.

Main criticisms:

13% of patients had already “recovered” on entry into the trial

Some 13% of patients entering the trial did have scores within normal range (i.e. within one standard deviation of the population means) for either one or both of the primary outcomes of fatigue and physical function – but this is clearly not the same as being recovered; we have published a correction after an editorial, written by others, implied that it was (White et al, 2011a). In order to be considered recovered, patients also had to:

  • Not meet case criteria for CFS
  • Not meet eligibility criteria for either of the primary outcome measures for entry into the trial
  • Rate their overall health (not just CFS) as “much” or “very much” better.

It would therefore be impossible to be recovered and eligible for trial entry (White et al, 2013). 

Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee

It is considered good practice to publish newsletters for participants in trials, so that they are kept fully informed both about the trial’s progress and topical news about their illness. We published four such newsletters during the trial, which can all be found at http://www.wolfson.qmul.ac.uk/current-projects/pace-trial. The newsletter referred to is the one found at this link: http://www.wolfson.qmul.ac.uk/images/pdfs/participantsnewsletter3.pdf.

As can be seen no specific treatment or therapy is named in this newsletter and we were careful to print feedback from participants from all four treatment arms. All newsletters were approved by the independent research ethics committee before publication. It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.

The same newsletter also mentioned the release of the UK National Institute for Health and Care Excellence guideline for the management of this illness (this institute is independent of the UK government). This came out in 2007 and received much media interest, so most patients would already have been aware of it. Apart from describing its content in summary form we also said “The guidelines emphasize the importance of joint decision making and informed choice and recommended therapies include Cognitive Behavioural Therapy, Graded Exercise Therapy and Activity Management.” These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.

The “key investigator” on the guidelines committee, who was mentioned by David Tuller, helped to write the GET manuals, and provided training and supervision for one of the therapies; however, they had left the trial team two years before the newsletter’s publication.

Bias was caused by changing the two primary outcomes and how they were analyzed

These criticisms were first made four years ago, and have been repeatedly addressed and explained by us (White et al, 2013a, White et al, 2015), including explicit descriptions and justification within the main paper itself (White et al, 2011), the statistical analysis plan (Walwyn et al, 2013), and the trial website section of frequently asked questions, published in 2011 (http://www.wolfson.qmul.ac.uk/images/pdfs/pace/faq2.pdf).

The two primary outcomes for the trial were the SF36 physical function sub-scale and the Chalder fatigue questionnaire, as in the published trial protocol; so there was no change in the outcomes themselves. The only change to the primary outcomes from the original protocol was the use of the Likert scoring method (0, 1, 2, 3) of the fatigue questionnaire. This was used in preference to the binary method of scoring (0, 0, 1, 1). This was done in order to improve the variance of the measure (and thus provide better evidence of any change).

The other change was to drop the originally chosen composite measures (the number of patients who either exceeded a threshold score or who changed by more than 50 per cent). After careful consideration, we decided this composite method would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.

All these changes were made before any outcome data were analyzed (i.e. they were pre-specified), and were all approved by the independent Trial Steering Committee and Data Monitoring and Ethics committee.

Our interpretation was misleading after changing the criteria for determining recovery

We addressed this criticism two years ago in correspondence that followed the paper (White et al, 2013b), and the changes were fully described and explained in the paper itself (White et al, 2013). We changed the thresholds for recovery from the original protocol for our secondary analysis paper on recovery for three, not four, of the variables, since we believed that the revised thresholds better reflected recovery. For instance, we included those who felt “much” (and “very much”) better in their overall health as one of the five criteria that defined recovery. This was done before the analysis occurred (i.e. it was pre-specified). In the discussion section of the paper we discussed the limitations and difficulties in measuring recovery, and stated that other ways of defining recovery could produce different results. We also provided the results of different criteria for defining recovery in the paper. The bottom line was that, however we defined recovery, significantly more patients had recovered after receiving CBT and GET than after other treatments (White et al, 2013).

Requests for data under the freedom of information act were rejected as vexatious

We have received numerous Freedom of Information Act requests over the course of many years. These even included a request to know how many Freedom of Information requests we had received. We have provided these data when we were able to (e.g. the 13% figure mentioned above came from our releasing these data). However, the safe-guarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization does not always protect the identity of a person, as they may be recognized from personal and medical information. We have only considered two of these many Freedom of Information requests as vexatious, although an Information Tribunal judge considered an earlier request was also vexatious (General Regulation Chamber, 2013).

Subjective and objective outcomes

These issues were first raised seven years ago and have all been addressed before (White et al, 2008, White et al, 2011, White et al, 2013a, White et al, 2013b, Chalder et al, 2015a). We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health. We have also reported the results of a number of objective outcomes, including a walking test, a stepping test, employment status and financial benefits (White et al, 2011a, McCrone et al, 2012, Chalder et al, 2015). The distance participants could walk in six minutes was significantly improved following GET, compared to other treatments. There were no significant differences in fitness, employment or benefits between treatments. We interpreted these data in the light of their context and validity. For instance, we did not use employment status as a measure of recovery or improvement, because patients may not have been in employment before falling ill, or they may have lost their job as a consequence of being ill (White et al, 2013b). Getting better and getting a job are not the same things, and being in employment depends on the prevailing state of the local economy as much as being fit for work.

There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent

No insurance company was involved in any aspect of the trial. There were some 19 investigators, three of whom have done consultancy work at various times for insurance companies. This was not related to the research and was listed as a potential conflict of interest in the relevant papers. The patient information sheet informed all potential participants as to which organizations had funded the research, which is consistent with ethical guidelines.

References

Castell BD et al, 2011. Cognitive Behavioral Therapy and Graded Exercise for Chronic Fatigue Syndrome: A Meta‐Analysis. Clin Psychol Sci Pract 18; 311-324.

doi: http://dx.doi.org/10.1111/j.1468-2850.2011.01262.x

Chalder T et al, 2015. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 2; 141-152.

doi: http://dx.doi.org/10.1016/S2215-0366(14)00069-8

Chalder T et al, 2015a. Methods and outcome reporting in the PACE trial–Author’s reply. Lancet Psychiatry 2; e10–e11. doi: http://dx.doi.org/10.1016/S2215-0366(15)00114-5.

Chambers D et al, 2006. Interventions for the treatment, management and rehabilitation of patients with chronic fatigue syndrome/myalgic encephalomyelitis: an updated systematic review. J R Soc Med 99: 506-520.

Edmonds M et al, 2004. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev 3: CD003200. doi: http://dx.doi.org/10.1002/14651858.CD003200.pub2

General Regulation Chamber (Information Rights) First Tier Tribunal. Mitchell versus Information commissioner. EA 2013/0019.

www.informationtribunal.gov.uk/DBFiles/Decision/i1069/20130822%20Decision%20EA20130019.pdf

Larun L et al, 2015. Exercise therapy for chronic fatigue syndrome. Cochrane Database of Systematic Reviews Issue 2. Art. No.: CD003200.

doi: http://dx.doi.org/10.1002/14651858.CD003200.pub3

Malouff JM et al, 2008. Efficacy of cognitive behavioral therapy for chronic fatigue syndrome: a meta-analysis. Clin Psychol Rev 28: 736–45.

doi: http://dx.doi.org/10.1016/j.cpr.2007.10.004

Marques MM et al, 2015. Differential effects of behavioral interventions with a graded physical activity component in patients suffering from Chronic Fatigue (Syndrome): An updated systematic review and meta-analysis. Clin Psychol Rev 40; 123–137. doi: http://dx.doi.org/10.1016/j.cpr.2015.05.009

McCrone P et al, 2012. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost effectiveness analysis. PLoS ONE 7: e40808. doi: http://dx.doi.org/10.1371/journal.pone.0040808

Price JR et al, 2008. Cognitive behaviour therapy for chronic fatigue syndrome in adults. Cochrane Database Syst Rev 3: CD001027.

doi: http://dx.doi.org/10.1002/14651858.CD001027.pub2

Smith MB et al, 2015. Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop. Ann Intern Med. 162: 841-850. doi: http://dx.doi.org/10.7326/M15-0114

Walwyn R et al, 2013. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials 14: 386. http://www.trialsjournal.com/content/14/1/386

White PD et al, 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurol 7:6. doi: http://dx.doi.org/10.1186/1471-2377-7-6

White PD et al, 2008. Response to comments on “Protocol for the PACE trial”. http://www.biomedcentral.com/1471-2377/7/6/COMMENTS/prepub#306608

White PD et al, 2011. The PACE trial in chronic fatigue syndrome – Authors’ reply. Lancet 377; 1834-35. doi: http://dx.doi.org/10.1016/S0140-6736(11)60651-X

White PD et al, 2011a. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 377:823-36. doi: http://dx.doi.org/10.1016/S0140-6736(11)60096-2

White PD et al, 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med 43: 227-35. doi: http://dx.doi.org/10.1017/S0033291713000020

White PD et al, 2013a. Chronic fatigue treatment trial: PACE trial authors’ reply to letter by Kindlon. BMJ 347:f5963. doi: http://dx.doi.org/10.1136/bmj.f5963

White PD et al, 2013b. Response to correspondence concerning ‘Recovery from chronic fatigue syndrome after treatments in the PACE trial’. Psychol Med 43; 1791-2. doi: http://dx.doi.org/10.1017/S0033291713001311

White PD et al, 2015. The planning, implementation and publication of a complex intervention trial for chronic fatigue syndrome: the PACE trial. Psychiatric Bulletin 39, 24-27. doi: http://dx.doi.org/10.1192/pb.bp.113.045005

Whiting P et al, 2001. Interventions for the Treatment and Management of Chronic Fatigue Syndrome: A Systematic Review. JAMA. 286:1360-68. doi: http://dx.doi.org/10.1001/jama.286.11.1360