An open letter to Psychological Medicine, again!

Last week, Virology Blog posted an open letter to the editors of Psychological Medicine. The letter called on them to retract the misleading findings that participants in the PACE trial for ME/CFS had “recovered” as a result of cognitive behavior therapy and graded exercise therapy. More than 100 scientists, clinicians, other experts and patient organizations signed the letter.

Three days later, I received a response from Sir Robin Murray, the UK editor of Psychological Medicine. Here’s what he wrote:

 Thank you for your letter and your continuing interest in the paper on the PACE Trial which Psychological Medicine published. I was interested to learn that Wilshire and colleagues have now published a reanalysis of the original data from the PACE Trial in the journal Fatigue: Biomedicine, Health & Behavior, a publication that I was not previously aware of. Presumably, interested parties will now be able to read this reanalysis and compare the scientific quality of the re-analysis with that of the original. My understanding is that this is the way that science advances.

This is an unacceptable response. Sir Robin Murray is misguided if he believes that science advances by allowing misleading claims based on manipulated data to stand in the literature. When researchers include participants who were already “recovered” on key indicators at baseline, the findings are by definition so flawed and nonsensical that they must be retracted.

That the editors of Psychological Medicine do not grasp that it is impossible to be “disabled” and “recovered” simultaneously on an outcome measure is astonishing and deeply troubling. It is equally astonishing that the PACE authors now defend themselves, as noted in a New York Times opinion piece on Sunday, by arguing that this overlap doesn’t matter because there were also other recovery criteria.

In response to the comments from Psychological Medicine, we are reposting the open letter with 17 added individuals and 23 more organizations, for a total of 141 signatories. These include two lawyers from Queen Mary University of London, the academic home of lead PACE investigator Peter White, along with other experts and ME/CFS patient groups from around the world.

 

Sir Robin Murray and Dr. Kenneth Kendler
Psychological Medicine
Cambridge University Press
University Printing House
Shaftesbury Road
Cambridge CB2 8BS
UK

Dear Sir Robin Murray and Dr. Kendler:

In 2013, Psychological Medicine published an article called “Recovery from chronic fatigue syndrome after treatments given in the PACE trial.”[1] In the paper, White et al. reported that graded exercise therapy (GET) and cognitive behavioural therapy (CBT) each led to recovery in 22% of patients, compared with only 7% in a comparison group. The two treatments, they concluded, offered patients “the best chance of recovery.”

PACE was the largest clinical trial ever conducted for chronic fatigue syndrome (also known as myalgic encephalomyelitis, or ME/CFS), with the first results published in The Lancet in 2011.[2] It was an open-label study with subjective primary outcomes, a design that requires strict vigilance to prevent the possibility of bias. Yet PACE suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings.[3] Despite these flaws, White et al.’s claims of recovery in Psychological Medicine have greatly impacted treatment, research, and public attitudes towards ME/CFS.

According to the protocol for the PACE trial, participants needed to meet specific benchmarks on four different measures in order to be defined as having achieved “recovery.”[4] But in Psychological Medicine, White et al. significantly relaxed each of the four required outcomes, making “recovery” far easier to achieve. No PACE oversight committees appear to have approved the redefinition of recovery; at least, no such approvals were mentioned. White et al. did not publish the results they would have gotten using the original protocol approach, nor did they include sensitivity analyses, the standard statistical method for assessing the impact of such changes.

Patients, advocates and some scientists quickly pointed out these and other problems. In October of 2015, Virology Blog published an investigation of PACE, by David Tuller of the University of California, Berkeley, that confirmed the trial’s methodological lapses.[5] Since then, more than 12,000 patients and supporters have signed a petition calling for Psychological Medicine to retract the questionable recovery claims. Yet the journal has taken no steps to address the issues.

Last summer, Queen Mary University of London released anonymized PACE trial data under a tribunal order arising from a patient’s freedom-of-information request. In December, an independent research group used that newly released data to calculate the recovery results per the original methodology outlined in the protocol.[6] This reanalysis documented what was already clear: that the claims of recovery could not be taken at face value.

In the reanalysis, which appeared in the journal Fatigue: Biomedicine, Health & Behavior, Wilshire et al. reported that the PACE protocol’s definition of “recovery” yielded recovery rates of 7% or less for all arms of the trial. Moreover, in contrast to the findings reported in Psychological Medicine, the PACE interventions offered no statistically significant benefits. In conclusion, noted Wilshire et al., “the claim that patients can recover as a result of CBT and GET is not justified by the data, and is highly misleading to clinicians and patients considering these treatments.”

In short, the PACE trial had null results for recovery, according to the protocol definition selected by the authors themselves. Besides the inflated recovery results reported in Psychological Medicine, the study suffered from a host of other problems, including the following:

*In a paradox, the revised recovery thresholds for physical function and fatigue–two of the four recovery measures–were so lax that patients could deteriorate during the trial and yet be counted as “recovered” on these outcomes. In fact, 13% of participants met one or both of these recovery thresholds at baseline. White et al. did not disclose these salient facts in Psychological Medicine. We know of no other studies in the clinical trial literature in which recovery thresholds for an indicator actually represented worse health status than the entry thresholds for serious disability on the same indicator.

*During the trial, the authors published a newsletter for participants that included glowing testimonials from earlier participants about their positive outcomes in the trial.[7] An article in the same newsletter reported that a national clinical guidelines committee had already recommended CBT and GET as effective; the newsletter article did not mention adaptive pacing therapy, an intervention developed specifically for the PACE trial. The participant testimonials and the newsletter article could have biased the responses of an unknown number of the two hundred or more people still undergoing assessments—about a third of the total sample.

*The PACE protocol included a promise that the investigators would inform prospective participants of “any possible conflicts of interest.” Key PACE investigators have had longstanding relationships with major insurance companies, advising them on how to handle disability claims related to ME/CFS. However, the trial’s consent forms did not mention these self-evident conflicts of interest. It is irrelevant that insurance companies were not directly involved in the trial and insufficient that the investigators disclosed these links in their published research. Given this serious omission, the consent obtained from the 641 trial participants is of questionable legitimacy.

Such flaws are unacceptable in published research; they cannot be defended or explained away. The PACE investigators have repeatedly tried to address these concerns. Yet their efforts to date—in journal correspondence, news articles, blog posts, and most recently in their response to Wilshire et al. in Fatigue[8]—have been incomplete and unconvincing.

The PACE trial compounded these errors by using a case definition for the illness that required only one symptom–six months of disabling, unexplained fatigue. A 2015 report from the U.S. National Institutes of Health recommended abandoning this single-symptom approach for identifying patients.[9] The NIH report concluded that this broad case definition generated heterogeneous samples of people with a variety of fatiguing illnesses, and that using it to study ME/CFS could “impair progress and cause harm.”

PACE included sub-group analyses of two alternate and more specific case definitions, but these case definitions were modified in ways that could have impacted the results. Moreover, an unknown number of prospective participants might have met these alternate criteria but been excluded from the study by the initial screening.

To protect patients from ineffective and possibly harmful treatments, White et al.’s recovery claims cannot stand in the literature. Therefore, we are asking Psychological Medicine to retract the paper immediately. Patients and clinicians deserve and expect accurate and unbiased information on which to base their treatment decisions. We urge you to take action without further delay.

Sincerely,

Dharam V. Ablashi, DVM, MS, Dip Bact
Scientific Director
HHV-6 Foundation
Former Senior Investigator
National Cancer Institute
National Institutes of Health
Bethesda, Maryland, USA

James N. Baraniuk, MD
Professor, Department of Medicine
Georgetown University
Washington, D.C., USA

Lisa F. Barcellos, MPH, PhD
Professor of Epidemiology
School of Public Health
California Institute for Quantitative Biosciences
University of California, Berkeley
Berkeley, California, USA

Lucinda Bateman, MD
Medical Director
Bateman Horne Center
Salt Lake City, Utah, USA

Alison C. Bested, MD, FRCPC
Clinical Associate Professor
Faculty of Medicine
University of British Columbia
Vancouver, British Columbia, Canada

Molly Brown, PhD
Assistant Professor
Department of Psychology
DePaul University
Chicago, Illinois, USA

John Chia, MD
Clinician and Researcher
EVMED Research
Lomita, California, USA

Todd E. Davenport, PT, DPT, MPH, OCS
Associate Professor
Department of Physical Therapy
University of the Pacific
Stockton, California, USA

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University
Stanford, California, USA

Simon Duffy, PhD, FRSA
Director
Centre for Welfare Reform
Sheffield, UK

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London
London, UK

Derek Enlander, MD
New York, New York, USA

Meredyth Evans, PhD
Clinical Psychologist and Researcher
Chicago, Illinois, USA

Kenneth J. Friedman, PhD
Associate Professor of Physiology and Pharmacology (retired)
New Jersey Medical School
University of Medicine and Dentistry of New Jersey
Newark, New Jersey, USA

Robert F. Garry, PhD
Professor of Microbiology and Immunology
Tulane University School of Medicine
New Orleans, Louisiana, USA

Keith Geraghty, PhD
Honorary Research Fellow
Division of Population Health, Health Services Research & Primary Care
School of Health Sciences
University of Manchester
Manchester, UK

Ian Gibson, PhD
Former Member of Parliament for Norwich North
Former Dean, School of Biological Sciences
University of East Anglia
Honorary Senior Lecturer and Associate Tutor
Norwich Medical School
University of East Anglia
Norwich, UK

Rebecca Goldin, PhD
Professor of Mathematics
George Mason University
Fairfax, Virginia, USA

Ellen Goudsmit, PhD, FBPsS
Health Psychologist (retired)
Former Visiting Research Fellow
University of East London
London, UK

Maureen Hanson, PhD
Liberty Hyde Bailey Professor
Department of Molecular Biology and Genetics
Cornell University
Ithaca, New York, USA

Malcolm Hooper, PhD
Emeritus Professor of Medicinal Chemistry
University of Sunderland
Sunderland, UK

Leonard A. Jason, PhD
Professor of Psychology
DePaul University
Chicago, Illinois, USA

Michael W. Kahn, MD
Assistant Professor of Psychiatry
Harvard Medical School
Boston, Massachusetts, USA

Jon D. Kaiser, MD
Clinical Faculty
Department of Medicine
University of California, San Francisco
San Francisco, California, USA

David L. Kaufman, MD
Medical Director
Open Medicine Institute
Mountain View, California, USA

Betsy Keller, PhD
Department of Exercise and Sports Sciences
Ithaca College
Ithaca, New York, USA

Nancy Klimas, MD
Director, Institute for Neuro-Immune Medicine
Nova Southeastern University
Director, Miami VA Medical Center GWI and CFS/ME Program
Miami, Florida, USA

Andreas M. Kogelnik, MD, PhD
Director and Chief Executive Officer
Open Medicine Institute
Mountain View, California, USA

Eliana M. Lacerda, MD, MSc, PhD
Clinical Assistant Professor
Disability & Eye Health Group/Clinical Research Department
Faculty of Infectious and Tropical Diseases
London School of Hygiene & Tropical Medicine
London, UK

Charles W. Lapp, MD
Medical Director
Hunter-Hopkins Center
Charlotte, North Carolina, USA
Assistant Consulting Professor
Department of Community and Family Medicine
Duke University School of Medicine
Durham, North Carolina, USA

Bruce Levin, PhD
Professor of Biostatistics
Columbia University
New York, New York, USA

Alan R. Light, PhD
Professor of Anesthesiology
Professor of Neurobiology and Anatomy
University of Utah
Salt Lake City, Utah, USA

Vincent C. Lombardi, PhD
Director of Research
Nevada Center for Biomedical Research
Reno, Nevada, USA

Alex Lubet, PhD
Professor of Music
Head, Interdisciplinary Graduate Group in Disability Studies
Affiliate Faculty, Center for Bioethics
Affiliate Faculty, Center for Cognitive Sciences
University of Minnesota
Minneapolis, Minnesota, USA

Steven Lubet
Williams Memorial Professor of Law
Northwestern University Pritzker School of Law
Chicago, Illinois, USA

Sonya Marshall-Gradisnik, PhD
Professor of Immunology
Co-Director, National Centre for Neuroimmunology and Emerging Diseases
Griffith University
Queensland, Australia

Patrick E. McKnight, PhD
Professor of Psychology
George Mason University
Fairfax, Virginia, USA

Jose G. Montoya, MD, FACP, FIDSA
Professor of Medicine
Division of Infectious Diseases and Geographic Medicine
Stanford University School of Medicine
Stanford, California, USA

Zaher Nahle, PhD, MPA
Vice President for Research and Scientific Programs
Solve ME/CFS Initiative
Los Angeles, California, USA

Henrik Nielsen, MD
Specialist in Internal Medicine and Rheumatology
Copenhagen, Denmark

James M. Oleske, MD, MPH
François-Xavier Bagnoud Professor of Pediatrics
Senator of RBHS Research Centers, Bureaus, and Institutes
Director, Division of Pediatrics Allergy, Immunology & Infectious Diseases
Department of Pediatrics
Rutgers New Jersey Medical School
Newark, New Jersey, USA

Elisa Oltra, PhD
Professor of Molecular and Cellular Biology
Catholic University of Valencia School of Medicine
Valencia, Spain

Richard Podell, MD, MPH
Clinical Professor
Department of Family Medicine
Rutgers Robert Wood Johnson Medical School
New Brunswick, New Jersey, USA

Nicole Porter, PhD
Psychologist in Private Practice
Rolling Ground, Wisconsin, USA

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University
New York, New York, USA

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley
Berkeley, California, USA

Anders Rosén, MD
Professor of Inflammation and Tumor Biology
Department of Clinical and Experimental Medicine
Division of Cell Biology
Linköping University
Linköping, Sweden

Peter C. Rowe, MD
Professor of Pediatrics
Johns Hopkins University School of Medicine
Baltimore, Maryland, USA

William Satariano, PhD
Professor of Epidemiology and Community Health
University of California, Berkeley
Berkeley, California, USA

Ola Didrik Saugstad, MD, PhD, FRCPE
Professor of Pediatrics
University of Oslo
Director and Department Head
Department of Pediatric Research
University of Oslo and Oslo University Hospital
Oslo, Norway

Charles Shepherd, MB, BS
Honorary Medical Adviser to the ME Association
Buckingham, UK

Christopher R. Snell, PhD
Scientific Director
WorkWell Foundation
Ripon, California, USA

Donald R. Staines, MBBS, MPH, FAFPHM, FAFOEM
Clinical Professor
Menzies Health Institute Queensland
Co-Director, National Centre for Neuroimmunology and Emerging Diseases
Griffith University
Queensland, Australia

Philip B. Stark, PhD
Professor of Statistics
University of California, Berkeley
Berkeley, California, USA

Eleanor Stein, MD, FRCP(C)
Psychiatrist in Private Practice
Assistant Clinical Professor
University of Calgary
Calgary, Alberta, Canada

Staci Stevens, MA
Founder, Exercise Physiologist
Workwell Foundation
Ripon, California, USA

Julian Stewart, MD, PhD
Professor of Pediatrics, Physiology and Medicine
Associate Chairman for Patient Oriented Research
Director, Center for Hypotension
New York Medical College
Hawthorne, NY, USA

Leonie Sugarman, PhD
Emeritus Associate Professor of Applied Psychology
University of Cumbria
Carlisle, UK

John Swartzberg, MD
Clinical Professor Emeritus
School of Public Health
University of California, Berkeley
Berkeley, California, USA

Ronald G. Tompkins, MD, ScD
Sumner M. Redstone Professor of Surgery
Harvard Medical School
Boston, Massachusetts, USA

David Tuller, DrPH
Lecturer in Public Health and Journalism
University of California, Berkeley
Berkeley, California, USA

Rosemary A. Underhill, MB, BS, MRCOG, FRCSE
Physician and Independent Researcher
Palm Coast, Florida, USA

Rosamund Vallings, MNZM, MB, BS
General Practitioner
Auckland, New Zealand

Michael VanElzakker, PhD
Research Fellow, Psychiatric Neuroscience Division
Harvard Medical School & Massachusetts General Hospital
Instructor, Tufts University Psychology
Boston, Massachusetts, USA

Mark VanNess, PhD
Professor of Health, Exercise & Sports Sciences
University of the Pacific
Stockton, California, USA
Workwell Foundation
Ripon, California, USA

Mark Vink, MD
Family Physician
Soerabaja Research Center
Amsterdam, Netherlands

Frans Visser, MD
Cardiologist
Stichting Cardiozorg
Hoofddorp, Netherlands

Tony Ward, MA (Hons), PhD, DipClinPsyc
Registered Clinical Psychologist
Professor of Clinical Psychology
School of Psychology
Victoria University of Wellington
Wellington, New Zealand
Adjunct Professor, School of Psychology
University of Birmingham
Birmingham, UK
Adjunct Professor, School of Psychology
University of Kent
Canterbury, UK

William Weir, FRCP
Infectious Disease Consultant
London, UK

John Whiting, MD
Specialist Physician
Private Practice
Brisbane, Australia

Carolyn Wilshire, PhD
Senior Lecturer
School of Psychology
Victoria University of Wellington
Wellington, New Zealand

Michael Zeineh, MD, PhD
Assistant Professor
Department of Radiology
Stanford University
Stanford, California, USA

Marcie Zinn, PhD
Research Consultant in Experimental Electrical Neuroimaging and Statistics
Center for Community Research
DePaul University
Chicago, Illinois, USA
Executive Director
Society for Neuroscience and Psychology in the Performing Arts
Dublin, California, USA

Mark Zinn, MM
Research Consultant in Experimental Electrophysiology
Center for Community Research
DePaul University
Chicago, Illinois, USA

New individuals added 23 March 2017

Norman E. Booth, PhD, FInstP
Emeritus Fellow in Physics
Mansfield College
University of Oxford
Oxford, UK

Joan Crawford, CPsychol, CEng, CSci, MA, MSc
Chartered Counselling Psychologist
Chronic Pain Management Service
St Helens Hospital
St Helens, UK

Lucy Dechene, PhD
Professor of Mathematics (retired)
Fitchburg State University
Fitchburg, Massachusetts, USA

Valerie Eliot Smith
Barrister and Visiting Scholar
Centre for Commercial Law Studies
Queen Mary University of London
London, UK

Margaret C. Fernald, PhD
Clinical and Research Psychologist
University of Maine
Orono, Maine, USA

Simin Ghatineh, MSc, PhD
Biochemist
London, UK

Alan Gurwitt, M.D.
Former Clinical Child Psychiatry Faculty Member
Yale Child Study Center, New Haven, Connecticut
University of Connecticut School of Medicine, Farmington, Connecticut
Harvard Medical School, Boston, Massachusetts
Co-author of primers on Adult and Pediatric ME/CFS
Clinician in Private Practice (retired)
Boston, Massachusetts, USA

Geoffrey Hallmann, LLB, DipLegPrac
Former Lawyer (Disability and Compensation)
Lismore, Australia

Susan Levine, MD
Clinician in Private Practice
New York, New York, USA
Visiting Fellow
Cornell University
Ithaca, New York, USA

Marvin S. Medow, Ph.D.
Professor of Pediatrics and Physiology
Chairman, New York Medical College IRB
Associate Director of The Center for Hypotension
New York Medical College
Hawthorne, New York, USA

Sarah Myhill MB BS
Clinician in Private Practice
Knighton, UK

Pamela Phillips, Dip, Dip. MSc MBACP (registered)
Counsellor in Private Practice
London, UK

Gwenda L Schmidt-Snoek, PhD
Researcher
Former Assistant Professor of Psychology
Hope College
Holland, Michigan, USA

Robin Callender Smith, PhD
Professor of Media Law
Centre for Commercial Law Studies
Queen Mary University of London
Barrister and Information Rights Judge
London, UK

Samuel Tucker, MD
Former Assistant Clinical Professor of Psychiatry
University of California, San Francisco
San Francisco, California, USA

AM Uyttersprot, MD
Neuropsychiatrist
AZ Jan Portaels
Vilvoorde, Belgium

Paul Wadeson, BSc, MBChB, MRCGP
GP Principal
Ash Trees Surgery
Carnforth, UK

 

ME/CFS Patient Organizations

25% ME Group
UK

Emerge Australia
Australia

European ME Alliance:

Belgium ME/CFS Association
Belgium

ME Foreningen
Denmark

Suomen CFS-Yhdistys
Finland

Fatigatio e.V.
Germany

Het Alternatief
Netherlands

Icelandic ME Association
Iceland

Irish ME Trust
Ireland

Associazione Malati di CFS
Italy

Norges ME-forening
Norway

Liga SFC
Spain

Riksföreningen för ME-patienter
Sweden

Verein ME/CFS Schweiz
Switzerland

Invest in ME Research
UK

Hope 4 ME & Fibro Northern Ireland
UK

Irish ME/CFS Association
Ireland

Massachusetts CFIDS/ME & FM Association
USA

ME Association
UK

ME/cvs Vereniging
Netherlands

National ME/FM Action Network
Canada

New Jersey ME/CFS Association
USA

Pandora Org
USA

Phoenix Rising
International membership representing many countries

Solve ME/CFS Initiative
USA

Tymes Trust (The Young ME Sufferers Trust)
UK

Wisconsin ME and CFS Association
USA

New Organizations added 23 March 2017

Action CND
Canada

Associated New Zealand ME Society
New Zealand

Chester MESH (ME self-help) group
Chester, UK

German Society for ME/CFS (Deutsche Gesellschaft für ME/CFS)
Germany

Lost Voices Stiftung
Germany

M.E. Victoria Association
Canada

ME North East
UK

ME Research UK
UK

ME Self Help Group Nottingham
UK

ME/CFS and Lyme Association of WA, Inc.
Australia

ME/CFS (Australia) Ltd
Australia

ME/CFS Australia (SA), Inc.
Australia

ME/CVS Stichting Nederland
Netherlands

ME/FM Myalgic Encephalomyelitis and Fibromyalgia Society of British Columbia
Canada

MEAction
International membership representing many countries 
 
Millions Missing Canada
Canada
 
National CFIDS Foundation, Inc.
USA
 
North London ME Network
UK
 
OMEGA (Oxfordshire ME Group for Action)
UK
 
Open Medicine Foundation
USA

Quebec ME Association
Canada
 
The York ME Community
UK
 
Welsh Association of ME & CFS Support
UK

[1] White PD, Goldsmith K, Johnson AL, et al. 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychological Medicine 43(10): 2227-2235.

[2] White PD, Goldsmith KA, Johnson AL, et al. 2011. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. The Lancet 377: 823–836.

[3] Racaniello V. 2016. An open letter to The Lancet, again. Virology Blog, 10 Feb. Available at: http://www.virology.ws/2016/02/10/open-letter-lancet-again/ (accessed on 2/24/17).

[4] White PD, Sharpe MC, Chalder T, et al. 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurology 7: 6.

[5] Tuller D. 2015. Trial by error: the troubling case of the PACE chronic fatigue syndrome trial. Virology Blog, 21-23 Oct. Available at: http://www.virology.ws/2015/10/21/trial-by-error-i/ (accessed on 2/24/17).

[6] Wilshire C, Kindlon T, Matthees A, McGrath S. 2016. Can patients with chronic fatigue syndrome really recover after graded exercise or cognitive behavioural therapy? A critical commentary and preliminary re-analysis of the PACE trial. Fatigue: Biomedicine, Health & Behavior; published online 14 Dec. Available at: http://www.tandfonline.com/doi/full/10.1080/21641846.2017.1259724 (accessed on 2/24/17).

[7] PACE Participants Newsletter. December 2008. Issue 3. Available at: http://www.wolfson.qmul.ac.uk/images/pdfs/participantsnewsletter3.pdf (accessed on 2/24/17).

[8] Sharpe M, Chalder T, Johnson AL, et al. 2017. Do more people recover from chronic fatigue syndrome with cognitive behaviour therapy or graded exercise therapy than with other treatments? Fatigue: Biomedicine, Health & Behavior; published online 15 Feb. Available at: http://www.tandfonline.com/doi/full/10.1080/21641846.2017.1288629 (accessed on 2/24/17).

[9] Green CR, Cowan P, Elk R. 2015. National Institutes of Health Pathways to Prevention Workshop: Advancing the research on myalgic encephalomyelitis/chronic fatigue syndrome. Annals of Internal Medicine 162: 860-865.

Trial By Error, Continued: A Follow-Up Post on FITNET-NHS

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley.

Last week’s post on FITNET-NHS and Esther Crawley stirred up a lot of interest. I guess people get upset when researchers cite shoddy “evidence” from poorly designed trials to justify foisting psychological treatments on kids with a physiological disease. I wanted to post some additional bits and pieces related to the issue.

*****

I sent Dr. Crawley a link to last week’s post, offering her an opportunity to send her response to Dr. Racaniello for posting on Virology Blog, along with my response to her response. So far, Dr. Racaniello and I haven’t heard back—I doubt we will. Maybe she feels more comfortable misrepresenting facts in trial protocols and radio interviews than in addressing the legitimate concerns raised by patients and confronting the methodological flaws in her research. I hope Dr. Crawley knows she will always have a place on Virology Blog to present her perspective, should she choose to exercise that option. (Esther, are you reading this?)

*****

From reading the research of the CBT/GET/PACE crowd, I get the impression they are all in the habit of peer-reviewing and supporting each other’s work. I make that assumption because it is hard to imagine that independent scientists not affiliated with this group would overlook all the obvious problems that mar their studies—like outcome measures that represent worse health than entry criteria, as in the PACE trial itself. So it’s not surprising to learn that one of the three principal PACE investigators, psychiatrist Michael Sharpe, was on the committee that reviewed—and approved—Dr. Crawley’s one-million-pound FITNET-NHS study.

FITNET-NHS is being funded by the U.K.’s National Institute for Health Research. I have no idea what role, if any, Dr. Sharpe played in pushing through Dr. Crawley’s grant, but it likely didn’t hurt that the FITNET-NHS protocol cited PACE favorably while failing to point out that it has been rejected as fatally flawed by dozens of distinguished scientists and clinicians. Of course, the protocol also failed to point out that the reanalyses of the trial data have shown that the findings published by the PACE authors were much better than the results using the methods they promised in their protocol. (More on the reanalyses below.) And as I noted in my previous post, the FITNET-NHS protocol also misstated the NICE guidelines for chronic fatigue syndrome, making post-exertional malaise an optional symptom rather than a required component—thus conflating chronic fatigue and chronic fatigue syndrome, just as the PACE authors did by using the overly broad Oxford criteria.

The FITNET-NHS proposal also didn’t note some similarities between PACE and the Dutch FITNET trial on which it is based. Like the PACE investigators, the Dutch team relied on a post-hoc definition of “recovery.” The thresholds the FITNET investigators selected after they saw the results were pretty lax, which certainly made it easier to find that participants had attained “recovery.” And as in the PACE trial, the FITNET participants in the comparison group ended up in the same place as the intervention group at long-term follow-up. Just as the CBT and GET in PACE offered no extended advantages, the same was true of the online CBT provided in FITNET.

And again like the PACE authors, the FITNET investigators downplayed these null findings in their follow-up paper. In a clinical trial, the primary results are supposed to be comparisons between the groups. Yet in the follow-up PACE and FITNET articles, both teams highlighted the “within-group” comparisons. That is, they treated the fact that there were no long-term differences between the groups as an afterthought and boasted instead that the intervention groups sustained the progress they initially made. That might be an interesting sub-finding, but to present “within-group” results as a clinical trial’s main outcome is highly disingenuous.

*****

As part of her media blitz for the FITNET-NHS launch, Dr. Crawley was interviewed on a BBC radio program by a colleague, Dr. Phil Hammond. In this interview, she made some statements that demonstrate one of two things: Either she doesn’t know what she’s talking about and her misrepresentations are genuine mistakes, or she’s lying. So either she’s incompetent, or she lacks integrity. Not a great choice.

Let’s parse what she said about the fact that, at long-term follow-up, there were no apparent differences between the intervention and the comparison groups in the Dutch FITNET study. Here’s her comment:

“Oh, people have really made a mistake on this,” said Dr. Crawley. “So, in the FITNET Trial, they were offered FITNET or usual care for six months, and then if they didn’t make a recovery in the usual care, they were offered FITNET again, and they were then followed up at 2 to 3 years, so of course what happened is that a lot of the children who were in the original control arm, then got FITNET as well, so it’s not surprising that at 2 or 3 years, the results were similar.”

This is simply not an accurate description. As Dr. Crawley must know, some of the Dutch FITNET participants in the “usual care” comparison group went on to receive FITNET, and others didn’t. Both sets of usual care participants—not just those who received FITNET—caught up to the original FITNET group. For Dr. Crawley to suggest that the reason the others caught up was that they received FITNET is, perhaps, an unfortunate mistake. Or else it’s a deliberate untruth.

*****

Another example from the BBC radio interview: Dr. Crawley’s inaccurate description of the two reanalyses of the raw trial data from the PACE study. Here’s what she said:

“First of all they did a reanalysis of recovery based on what the authors originally said they were going to do, and that reanalysis done by the authors is entirely consistent with their original results. [Actually, Dr. Crawley is mistaken here; the PACE authors did a reanalysis of “improvement,” not of “recovery”]…Then the people that did the reanalysis did it again, using a different definition of recovery, that was much much harder to reach–and the trial just wasn’t big enough to show a difference, and they didn’t show a difference. [Here, Dr. Crawley is talking about the reanalysis done by patients and academic statisticians.] Now, you know, you can pick and choose how you redefine recovery, and that’s all very important research, but the message from the PACE Trial is not contested; the message is, if you want to get better, you’re much more likely to get better if you get specialist treatment.”

This statement is at serious odds with the facts. Let’s recap: In reporting their findings in The Lancet in 2011, the PACE authors presented “improvement” results for the two primary outcomes of fatigue and physical function. They reported that about 60 percent of participants in the CBT and GET arms reached the selected thresholds for “improvement” on both measures. In a 2013 paper in the journal Psychological Medicine, they presented “recovery” results based on a composite “recovery” definition that included the two primary outcomes and two additional measures. In this paper, they reported “recovery” rates for the favored intervention groups of 22 percent.

Using the raw trial data that the court ordered them to release earlier this year, the PACE authors themselves reanalyzed the Lancet improvement findings, based on their own initial, more stringent definition of “improvement” in the protocol. In this analysis, the authors reported that only about 20 percent “improved” on both measures, using the methods for assessing “improvement” outlined in the protocol. In other words, only a third as many “improved,” according to the authors’ own original definition, compared to the 60 percent they reported in The Lancet. Moreover, in the reanalysis, ten percent “improved” in the comparison group, meaning that CBT and GET led to “improvements” in only one in ten participants—a pretty sad result for a five-million-pound trial.
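
To make that arithmetic concrete, here is a minimal sketch in Python that uses only the rounded percentages quoted above (roughly 20 percent “improved” with CBT or GET under the protocol definition versus 10 percent with standard care); the figures are illustrative, not the authors’ exact estimates.

# Illustrative arithmetic using the rounded figures quoted in this post,
# not the PACE authors' exact estimates.
improved_cbt_get = 0.20   # ~20% "improved" under the protocol-specified definition
improved_control = 0.10   # ~10% "improved" in the comparison group

absolute_difference = improved_cbt_get - improved_control   # about 0.10
number_needed_to_treat = 1 / absolute_difference            # about 10

print(f"Absolute difference: {absolute_difference:.0%}")
print(f"Roughly one additional 'improver' per {number_needed_to_treat:.0f} patients treated")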

However, because these meager findings were statistically significant, the PACE authors and their followers have, amazingly, trumpeted them as supporting their initial claims. In reality, the new “improvement” findings demonstrate that any “benefits” offered by CBT and GET are marginal. It is preposterous and insulting to proclaim, as the PACE authors and Dr. Crawley have, that this represents confirmation of the results reported in The Lancet. Dr. Crawley’s statement that “the message from the PACE trial is not contested” is of course nonsense. The PACE “message” has been exposed as bullshit—and everyone knows it.

The PACE authors did not present their own reanalysis of the “recovery” findings—probably because those turned out to be null, as was shown in a reanalysis of that data by patients and academic statisticians, published on Virology Blog. That reanalysis found single-digit “recovery” rates for all the study arms, and no statistically significant differences between the groups. Dr. Crawley declared in the radio interview that this reanalysis used “a different definition of recovery, that was much harder to reach.” And she acknowledged that the reanalysis “didn’t show a difference”—but she blamed this on the fact that the PACE trial wasn’t big enough, even though it was the largest trial ever of treatments for ME/CFS.
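
For readers curious what “no statistically significant differences” means in practice here, the sketch below uses Python’s scipy library and purely hypothetical counts (not Wilshire et al.’s actual figures) to show how one might compare two single-digit recovery proportions in arms of roughly 160 participants each; with absolute differences this small, such a test will typically not reach significance.

from scipy.stats import fisher_exact

# Hypothetical counts for illustration only -- NOT the actual reanalysis figures.
# PACE randomized its 641 participants into four arms of roughly 160 each.
recovered_cbt, total_cbt = 11, 161   # roughly 7% "recovered" (hypothetical)
recovered_smc, total_smc = 5, 160    # roughly 3% "recovered" (hypothetical)

table = [
    [recovered_cbt, total_cbt - recovered_cbt],
    [recovered_smc, total_smc - recovered_smc],
]
odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")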

This reasoning is specious. Dr. Crawley is ignoring the central point: The “recovery” reanalysis was based on the authors’ own protocol definition of “recovery,” not some arbitrarily harsh criteria created by outside agitators opposed to the trial. The PACE authors themselves had an obligation to provide the findings they promised in their protocol; after all, that’s the basis on which they received funding and ethical permission to proceed with the trial.

It is certainly understandable why they, and Dr. Crawley, prefer the manipulated and false “recovery” data published in Psychological Medicine. But deciding post-hoc to use weaker outcome measures and then refusing to provide your original results is not science. That’s data manipulation. And if this outcome-switching is done with the intent to hide poor results in favor of better ones, it is considered scientific misconduct.

*****

I also want to say a few words about the leaflet promoting FITNET-NHS. The leaflet states that most patients “recover” with “specialist treatment” and less than ten percent “recover” from standard care. Then it announces that this “specialist treatment” is available through the trial—implicitly promising that most of those who get the therapy will be cured.

This is problematic for a host of reasons. As I pointed out in my previous post, any claims that the Dutch FITNET trial, the basis for Dr. Crawley’s study, led to “recovery” must be presented with great caution and caveats. Instead, the leaflet presents such “recovery” as an uncontested fact. Also, the whole point of clinical trials is to find out if treatments work—in this case, whether the online CBT approach is effective, as well as cost-effective. But the leaflet is essentially announcing the result–“recovery”—before the trial even starts. If Dr. Crawley is so sure that this treatment is effective in leading to “recovery,” why is she doing the trial in the first place? And if she’s not sure what the results will be, why is she promising “recovery”?

Finally, as has been pointed out many times, the PACE investigators, Dr. Crawley and their Dutch colleagues all appear to believe that they can claim “recovery” based solely on subjective measures. Certainly any definition of “recovery” should require that participants can perform physically at their pre-sickness level. However, the Dutch researchers refused to release the one set of data—how much participants moved, as assessed by ankle monitors called actometers–that would have proven that the kids in FITNET had “recovered” on an objective measure of physical performance. The refusal to publish this data is telling, and leaves room for only one interpretation: The Dutch data showed that participants did no better than before the trial, or perhaps even worse, on this measure of physical movement.

This FITNET-NHS leaflet should be withdrawn because of its deceptive approach to promoting the chances of “recovery” in Dr. Crawley’s study. I hope the advertising regulators in the U.K. take a look at this leaflet and assess whether it accurately represents the facts.

*****

As long as we’re talking about the Dutch members of the CBT/GET ideological movement, let’s also look briefly at another piece of flawed research from that group. Like the PACE authors and Dr. Crawley, these investigators have found ways to mix up those with chronic fatigue and those with chronic fatigue syndrome. A case in point is a 2001 study that has been cited in systematic reviews as evidence for the effectiveness of CBT in this patient population. (Dr. Bleijenberg, a co-investigator on the FITNET-NHS trial, was also a co-author of this study.)

In this 2001 study, published in The Lancet (of course!), the Dutch researchers described their case definition for identifying participants like this: “Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS, with the exception of the criterion requiring four of eight additional symptoms to be present.”

This statement is incoherent. (Why do I need to keep using words like “incoherent” and “preposterous” when describing this body of research?) The CDC definition has two main components: 1) six months of unexplained fatigue, and 2) four of eight other symptoms. If you abandon the second component, you can no longer refer to this as meeting the CDC definition. All you’re left with is the requirement that participants have suffered from six months of fatigue.

And that, of course, is the case definition known as the Oxford criteria, developed by PACE investigator Michael Sharpe in the 1990s. And as last year’s seminal report from the U.S. National Institutes of Health suggested, this case definition is so broad that it scoops up many people with fatiguing illnesses who do not have the disease known as ME/CFS. According to the NIH report, the Oxford criteria can “impair progress and cause harm,” and should therefore be “retired” from use. The reason is that any results could not accurately be extrapolated to people with ME/CFS specifically. This is especially so for treatments, such as CBT and GET, that are likely to be effective for many people suffering from other fatiguing illnesses.

In short, to cite any findings from such studies as evidence for treatments for ME/CFS is unscientific and completely unjustified. The 2001 Dutch study might be an excellent look at the use of CBT for chronic fatigue*. But like FITNET-NHS, it is not a legitimate study of people with chronic fatigue syndrome, and the Dutch Health Council should acknowledge this fact in its current deliberations about the illness.

*In the original phrasing, I referred to the intervention mistakenly as ‘online CBT.’

Trial By Error, Continued: The Real Data

by David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

‘The PACE trial is a fraud.’ Ever since Virology Blog posted my 14,000-word investigation of the PACE trial last October, I’ve wanted to write that sentence. (I should point out that Dr. Racaniello has already called the PACE trial a “sham,” and I’ve already referred to it as “doggie-poo.” I’m not sure that “fraud” is any worse. Whatever word you use, the trial stinks.)

Let me be clear: I don’t mean “fraud” in the legal sense—I’m not a lawyer–but in the sense that it’s a deceptive and morally bankrupt piece of research. The investigators made dramatic changes from the methodology they outlined in their protocol, which allowed them to report purported “results” that were much, much better than those they would have been able to claim under their originally planned methods. Then they reported only the better-looking “results,” with no sensitivity analyses to analyze the impact of the changes—the standard statistical approach in such circumstances.

This is simply not allowed in science. It means the reported benefits for cognitive behavior therapy and graded exercise therapy were largely illusory–an artifact of the huge shifts in outcome assessments the authors introduced mid-trial. (That’s putting aside all the other flaws, like juicing up responses with a mid-trial newsletter promoting the interventions under investigation, failing to obtain legitimate informed consent from the participants, etc.)

That PACE suffered from serious methodological deficiencies should have been obvious to anyone who read the studies. That includes the reviewers for The Lancet, which published the PACE results for “improvement” in 2011 after what editor Richard Horton has called “endless rounds of peer-review,” and the journal Psychological Medicine, which published results for “recovery” in 2013. Certainly the deficiencies should have been obvious to anyone who read the trenchant letters and commentaries that patients routinely published in response to the egregious errors committed by the PACE team. Even so, the entire U.K. medical, academic and public health establishments refused to acknowledge what was right before their eyes, finding it easier instead to brand patients as unstable, anti-science, and possibly dangerous.

Thanks to the efforts of the incredible Alem Matthees, a patient in Perth, Australia, the U.K.’s First-Tier Tribunal last month ordered the liberation of the PACE trial data he’d requested under a freedom-of-information request. (The brief he wrote for the April hearing, outlining the case against PACE in great detail, was a masterpiece.) Instead of appealing, Queen Mary University of London, the home institution of lead PACE investigator Peter White, made the right decision. On Friday, September 9, the university announced its intention to comply with the tribunal ruling, and sent the data file to Mr. Matthees. The university has a short window of time before it has to release the data publicly.

I’m guessing that QMUL forced the PACE team’s hand by refusing to allow an appeal of the tribunal decision. I doubt that Dr. White and his colleagues would ever have given up their data willingly, especially now that I’ve seen the actual results. Perhaps administrators had finally tired of the PACE shenanigans, recognized that the study was not worth defending, and understood that continuing to fight would further harm QMUL’s reputation. It must be clear to the university now that its own reputational interests diverge sharply from those of Dr. White and the PACE team. I predict that the split will become more apparent as the trial’s reputation and credibility crumble; I don’t expect QMUL spokespeople to be out there vigorously defending the unacceptable conduct of the PACE investigators.

Last weekend, several smart, savvy patients helped Mr. Matthees analyze the newly available data, in collaboration with two well-known academic statisticians, Bruce Levin from Columbia and Philip Stark from Berkeley.  Yesterday, Virology Blog published the group’s findings of the single-digit, non-statistically significant “recovery” rates the trial would have been able to report had the investigators adhered to the methods they outlined in the protocol. That’s a remarkable drop from the original Psychological Medicine paper, which claimed that 22 percent of those in the favored intervention groups achieved “recovery,” compared to seven percent for the non-therapy group.

Now it’s clear: The PACE authors themselves are the anti-science faction. They tortured their data and ended up producing sexier results. Then they claimed they couldn’t share their data because of alleged worries about patient confidentiality and sociopathic anti-PACE vigilantes. The court dismissed these arguments as baseless, in scathing terms. (It should be noted that their ethical concerns for patients did not extend to complying with a critical promise they made in their protocol—to tell prospective participants about “any possible conflicts of interest” in obtaining informed consent. Given this omission, they have no legitimate informed consent for any of their 641 participants and therefore should not be allowed to publish any of their data at all.)

The day before QMUL released the imprisoned data to Mr. Matthees, the PACE authors themselves posted a pre-emptive re-analysis of results for the two primary outcomes of physical function and fatigue, according to the protocol methods. In the Lancet paper, they had revised and weakened their own definition of what constituted “improvement.” With this revised definition, they could report in The Lancet that approximately 60 percent in the cognitive behavior and graded exercise therapy arms “improved” to a clinically significant degree on both fatigue and physical function.

The re-analysis the PACE authors posted last week sought to put the best possible face on the very poor data they were required to release. Yet patients examining the new numbers quickly noted that, under the more stringent definition of “improvement” outlined in the protocol, only about 20 percent in the two groups could be called “overall improvers.” Solely by introducing a more relaxed definition of “improvement,” the PACE team—enabled by The Lancet’s negligence and an apparently inadequate “endless” review process–was able to triple the trial’s reported success rate.

So now it’s time to ask what happens to the papers already published. The editors have made their feelings clear. I have written multiple e-mails to Lancet editor Richard Horton since I first contacted him about my PACE investigation, almost a year before it ran. He never responded until September 9, the day QMUL liberated the PACE data. Given that the PACE authors’ own analysis of the new data showed significantly less impressive results than those published in The Lancet, I sent Dr. Horton a short e-mail asking when we could expect some sort of addendum or correction to the 2011 paper. He responded curtly: “Mr. Tuller–We have no such plans.”

The editors of Psychological Medicine are Kenneth Kendler of Virginia Commonwealth University and Robin Murray of King’s College London. After I wrote to the journal last December, pointing out the problems, I received the following from Dr. Murray, whose home base is KCL’s Department of Psychosis Studies: “Obviously the best way of addressing the truth or otherwise of the findings is to attempt to replicate them. I would therefore like to encourage you to initiate an attempted replication of the study. This would be the best way for you to contribute to the debate…Should you do this, then Psychological Medicine will be most interested in the findings either positive or negative.”

This was not an appropriate response. I told Dr. Murray it was “disgraceful,” given that the paper was so obviously flawed. This week, I wrote again to Dr. Murray and Dr. Kendler, asking if they now planned to deal with the paper’s problems, given the re-analysis by Matthees et al. In response, Dr. Murray suggested that I submit a re-analysis, based on the released data, and Psychological Medicine would be happy to consider it. “We would, of course, send it out to referees for scientific scrutiny in the same manner as we did for the original paper,” he wrote.

I explained that it was his and the journal’s responsibility to address the problems, whether or not anyone submitted a re-analysis. I also noted that I could not improve on the Matthees re-analysis, which completely rebutted the results reported in Psychological Medicine’s paper. I urged Dr. Murray to contact either Dr. Racaniello or Mr. Matthees to discuss republishing it, if he truly wished to contribute to the debate. Finally, I noted that the peer-reviewers for the original paper had okayed a study in which participants could be disabled and recovered simultaneously, so I wasn’t sure if the journal’s assessment process could be trusted.

(By the way, King’s College London, where Dr. Murray is based, is also the home institution of PACE investigator Trudie Chalder as well as Simon Wessely, a close colleague of the PACE authors and president of the Royal College of Psychiatrists*. That could explain Dr. Murray’s inability or reluctance to acknowledge that the “recovery” paper his journal peer-reviewed and published is meaningless.)

Earlier today, the PACE authors posted a blog on The BMJ site, their latest effort to salvage their damaged reputations. They make no mention of their massive research errors and focus only on their supposed fears that releasing even anonymous data will frighten away future research participants. They have provided no evidence to back up this unfounded claim, and the tribunal flatly rejected it. They also state that only researchers who present  “pre-specified” analysis plans should be able to obtain trial data. This is laughable, since Dr. White and his colleagues abandoned their own pre-specified analyses in favor of analyses they decided they preferred much later on, long after the trial started.

They have continued to refer to their reported analyses, deceptively, as “pre-specified,” even though these methods were revised mid-trial. The following point has been stated many times before, but bears repeating: In an open label trial like PACE, researchers are likely to know very well what the outcome trends are before they review any actual data. So the PACE team’s claim that the changes they made were “pre-specified” because they were made before reviewing outcome data is specious. I have tried to ask them about this issue multiple times, and have never received an answer.

Dr. White, his colleagues, and their defenders don’t yet seem to grasp that the intellectual construct they invented and came to believe in—the PACE paradigm or the PACE enterprise or the PACE cult, take your pick—is in a state of collapse. They are used to saying whatever they want about patients—Internet Abuse! Knife-wielding! Death threats!!–and having it be believed. In responding to legitimate concerns and questions, they have covered up their abuse of the scientific process by providing non-answers, evasions and misrepresentations—the academic publishing equivalent of “the dog ate my homework.” Amazingly, journal editors, health officials, reporters and others have accepted these non-responsive responses as reasonable and sufficient. I do not.

Now their work is finally being scrutinized the way it should have been by peer reviewers before this damaging research was ever published in the first place. The fallout is not going to be pretty. If nothing else, they have provided a great gift to academia with their $8 million** disaster—for years to come, graduate students in the U.S., the U.K. and elsewhere will be dissecting PACE as a classic case study of bad research and mass delusion.

*Correction: The original version of the post mistakenly called the organization the Royal Society of Psychiatrists.

**Correction: The original version of the post put the cost of PACE at $8 million rather than $6.4 million. In fact, PACE cost five million pounds, so the cost in dollars depends on the exchange rate used. The $8 million figure reflects the exchange rate from last October, when Virology Blog published my PACE investigation. But the pound has fallen since the Brexit vote in June, so the cost in dollars at the current exchange rate is lower.

Zika Virus in the USA

On this episode of Virus Watch we cover three Zika virus stories: the first human trial of a Zika virus vaccine, the first local transmission of infection in the United States, and whether the virus is a threat to participants in the 2016 Summer Olympic and Paralympic Games.

TWiV 397: Trial by error

Journalism professor David Tuller returns to TWiV for a discussion of the PACE trial for ME/CFS: the many flaws in the trial, why its conclusions are useless, and why the data must be released and re-examined.

You can find TWiV #397 at microbe.tv/twiv, or listen below.

Download TWiV 397 (67 MB .mp3, 93 min)

A request for data from the PACE trial

Mr. Paul Smallcombe
Records & Information Compliance Manager
Queen Mary University of London
Mile End Road
London E1 4NS

Dear Mr Smallcombe:

The PACE study of treatments for ME/CFS has been the source of much controversy since the first results were published in The Lancet in 2011. Patients have repeatedly raised objections to the study’s methodology and results. (Full title: “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome: a randomized trial.”)

Recently, journalist and public health expert David Tuller documented that the trial suffered from many serious flaws that raise concerns about the validity and accuracy of the reported results. We cited some of these flaws in an open letter to The Lancet that urged the journal to conduct a fully independent review of the trial. (Dr. Tuller did not sign the open letter, but he is joining us in requesting the trial data.)

These flaws include, but are not limited to: major mid-trial changes in the primary outcomes that were not accompanied by the necessary sensitivity analyses; thresholds for “recovery” on the primary outcomes that indicated worse health than the study’s own entry criteria; publication of positive testimonials about trial outcomes and promotion of the therapies being investigated in a newsletter for participants; rejection of the study’s objective outcomes as irrelevant after they failed to support the claims of recovery; and the failure to inform participants about investigators’ significant conflicts of interest, and in particular financial ties to the insurance industry, contrary to the trial protocol’s promise to adhere to the Declaration of Helsinki, which mandates such disclosures.

Although the open letter was sent to The Lancet in mid-November, editor Richard Horton has not yet responded to our request for an independent review. We are therefore requesting that Queen Mary University of London provide some of the raw trial data, fully anonymized, under the provisions of the U.K.’s Freedom of Information law.

In particular, we would like the raw data for all four arms of the trial for the following measures: the two primary outcomes of physical function and fatigue (both bimodal and Likert-style scoring), and the multiple criteria for “recovery” as defined in the protocol published in 2007 in BMC Neurology, not as defined in the 2013 paper published in Psychological Medicine. The anonymized, individual-level data for “recovery” should be linked across the four criteria so it is possible to determine how many people achieved “recovery” according to the protocol definition.

We are aware that previous requests for PACE-related data have been rejected as “vexatious.” This includes a recent request from psychologist James Coyne, a well-regarded researcher, for data related to a subsequent study about economic aspects of the illness published in PLoS One—a decision that represents a violation of the PLoS policies on data-sharing.

Our request clearly serves the public interest, given the methodological issues outlined above, and we do not believe any exemptions apply. We can assure Queen Mary University of London that the request is not “vexatious,” as defined in the Freedom of Information law, nor is it meant to harass. Our motive is easy to explain: We are extremely concerned that the PACE studies have made claims of success and “recovery” that appear to go beyond the evidence produced in the trial. We are seeking the trial data based solely on our desire to get at the truth of the matter.

We appreciate your prompt attention to this request.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

David Tuller, DrPH
Lecturer in Public Health and Journalism
University of California, Berkeley

Trial By Error, Continued: PACE Team’s Work for Insurance Companies Is “Not Related” to PACE. Really?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

In my initial story on Virology Blog, I charged the PACE investigators with violating the Declaration of Helsinki, adopted in 1964 by the World Medical Association to protect human research subjects. The declaration mandates that scientists disclose “institutional affiliations” and “any possible conflicts of interest” to prospective trial participants as part of the process of obtaining informed consent.

The investigators promised in their protocol to adhere to this foundational human rights document, among other ethical codes. Despite this promise, they did not tell prospective participants about their financial and consulting links with insurance companies, including those in the disability sector. That ethical breach raises serious concerns about whether the “informed consent” they obtained from all 641 of their trial participants was truly “informed,” and therefore legitimate.

The PACE investigators do not agree that the lack of disclosure is an ethical breach. In their response to my Virology Blog story, they did not even mention the Declaration of Helsinki or explain why they violated it in seeking informed consent. Instead, they defended their actions by noting that they had disclosed their financial and consulting links in the published articles, and had informed participants about who funded the research–responses that did not address the central concern.

“I find their statement that they disclosed to The Lancet but not to potential subjects bemusing,” said Jon Merz, a professor of medical ethics at the University of Pennsylvania. “The issue is coming clean to all who would rely on their objectivity and fairness in conducting their science. Disclosure is the least we require of scientists, as it puts those who should be able to trust them on notice that they may be serving two masters.”

In their Virology Blog response, the PACE team also stated that no insurance companies were involved in the research, that only three of the 19 investigators “have done consultancy work at various times for insurance companies,” and that this work “was not related to the research.” The first statement was true, but direct involvement in a study is of course only one possible form of conflict of interest. The second statement was false. According to the PACE team’s conflict of interest disclosures in The Lancet, the actual number of researchers with insurance industry ties was four—along with the three principal investigators, physiotherapist Jessica Bavington acknowledged such links.

But here, I’ll focus on the third claim–that their consulting work “was not related to the research.” In particular, I’ll examine an online article posted by Swiss Re, a large reinsurance company. The article describes a “web-based discussion group” held with Peter White, the lead PACE investigator, and reveals some of the claims-assessing recommendations arising from that presentation. White included consulting work with Swiss Re in his Lancet disclosure.

The Lancet published the PACE results in February, 2011; the undated Swiss Re article was published sometime within the following year or so. The headline: “Managing claims for chronic fatigue the active way.” (Note that this headline uses “chronic fatigue” rather than “chronic fatigue syndrome,” although chronic fatigue is a symptom common to many illnesses and is quite distinct from the disease known as chronic fatigue syndrome. Understanding the difference between the two would likely be helpful in making decisions about insurance claims.)

The Swiss Re article noted that the illness “can be an emotive subject” and then focused on the implications of the PACE study for assessing insurance claims. It started with a summary account of the findings from the study, reporting that the “active rehabilitation” arms of cognitive behavioral therapy and graded exercise therapy “resulted in greater reduction of patients’ fatigue and larger improvement in physical functioning” than either adaptive pacing therapy or specialist medical care, the baseline condition. (The three intervention arms also received specialist medical care.)

The trial’s “key message,” declared the article, was that “pushing the limits in a therapeutic setting using well described treatment modalities is more effective in alleviating fatigue and dysfunction than staying within the limits imposed by the illness traditionally advocated by ‘pacing.’”

Added the article: “If a CFS patient does not gradually increase their activity, supported by an appropriate therapist, then their recovery will be slower. This seems a simple message but it is an important one as many believe that ‘pacing’ is the most beneficial treatment.”

This understanding of the PACE research—presumably based on information from Peter White’s web-based discussion—was wrong. Pacing is not and has never been a “treatment.” It is also not one of the “four most commonly used therapies,” as the newsletter article declared, since it has never been a “therapy” either. It is a self-help method practiced by many patients seeking the best way to manage their limited energy reserves.

The PACE investigators did not test pacing. Instead, the intervention they dubbed “adaptive pacing therapy” was an operationalized version of “pacing” developed specifically for the study. Many patients objected to the trial’s form of pacing as overly prescriptive, demanding and unlike the version they practiced on their own. Transforming an intuitive, self-directed approach into a “treatment” administered by a “therapist” was not a true test of whether the self-help approach is effective, they argued–with significant justification. Yet the Swiss Re article presented “adaptive pacing therapy” as if it were identical to “pacing.”

The Swiss Re article did not mention that the reported improvements from “active rehabilitation” were based on subjective outcomes and were not supported by the study’s objective data. Nor did it report any of the major flaws of the PACE study or offer any reasons to doubt the integrity of the findings.

The article next asked, “What can insurers and reinsurers do to assist the recovery and return to work of CFS claimants?” It then described the conclusions to be drawn from the discussion with White about the PACE trial—the “key takeaways for claims management.”

First, Swiss Re advised its employees, question the diagnosis, because “misdiagnosis is not uncommon.”

The second point was this: “It is likely that input will be required to change a claimant’s beliefs about his or her condition and the effectiveness of active rehabilitation…Funding for these CFS treatments is not expensive (in the UK, around £2,000) so insurers may well want to consider funding this for the right claimants.”

Translation: Patients who believe they have a medical disease are wrong, and they need to be persuaded that they are wrong and that they can get better with therapy. Insurers can avoid large payouts by covering the minimal costs of these treatments for patients vulnerable to such persuasion, given the right “input.”

Finally, the article warned that private therapists might not provide the kinds of “input” required to convince patients they were wrong. Instead of appropriately “active” approaches like cognitive behavior therapy and graded exercise therapy, these therapists might instead pursue treatments that could reinforce claimants’ misguided beliefs about being seriously ill, the article suggested.

“Check that private practitioners are delivering active rehabilitation therapies, such as those described in this article, as opposed to sick role adaptation,” the Swiss Re article advised. (The PACE investigators, drawing on the concept known as “the sick role” in medical sociology, have long expressed concern that advocacy groups enabled patients’ condition by bolstering their conviction that they suffered from a “medical disease,” as Michael Sharpe, another key PACE investigator, noted in a 2002 UNUMProvident report. This conviction encouraged patients to demand social benefits and health care resources rather than focus on improving through therapy, Sharpe wrote.)

Lastly, the Swiss Re article addressed “a final point specific to claims assessment.” A diagnosis of chronic fatigue syndrome, stated the article, provided an opportunity in some cases to apply a mental health exclusion, depending upon the wording of the policy. In contrast, a diagnosis of myalgic encephalomyelitis did not.

The World Health Organization’s International Classification of Diseases, or ICD, which clinicians and insurance companies use for coding purposes, categorizes myalgic encephalomyelitis as a neurological disorder that is synonymous with the terms “post-viral fatigue syndrome” and “chronic fatigue syndrome.” But the Swiss Re article stated that, according to the ICD, “chronic fatigue syndrome” can also “alternatively be defined as neurasthenia which is in the mental health chapter.”

The PACE investigators have repeatedly advanced this questionable idea. In the ICD’s mental health section, neurasthenia is defined as “a mental disorder characterized by chronic fatigue and concomitant physiologic symptoms,” but there is no mention of “chronic fatigue syndrome” as a discrete entity. The PACE investigators (and Swiss Re newsletter writers) believe that the neurasthenia entry encompasses the illness known as “chronic fatigue syndrome,” not just the common symptom of “chronic fatigue.”

This interpretation, however, appears to be at odds with an ICD rule that illnesses cannot be listed in two separate places—a rule confirmed in an e-mail from a WHO official to an advocate who had questioned the PACE investigators’ argument. “It is not permitted for the same condition to be classified to more than one rubric as this would mean that the individual categories and subcategories were no longer mutually exclusive,” wrote the official to Margaret Weston, the pseudonym for a longtime clinical manager in the U.K. National Health Service.

Presumably, after White disseminated the good news about the PACE results at the web-based discussion, Swiss Re’s claims managers felt better equipped to help ME/CFS claimants. And presumably that help included coverage for cognitive behavior therapy and graded exercise therapy so that claimants could receive the critical “input” they needed in order to recognize and accept that they didn’t have a medical disease after all.

In sum, contrary to the investigators’ argument in their response to Virology Blog, the PACE research and findings appear to be very much “related to” insurance industry consulting work. The claim that these relationships did not represent “possible conflicts of interest” and “institutional affiliations” requiring disclosure under the Declaration of Helsinki cannot be taken seriously.

Update 11/17/15 12:22 PM: I should have mentioned in the story that, in the PACE trial, participants in the cognitive behavior therapy and graded exercise therapy arms were no more likely to have increased their hours of employment than those in the other arms. In other words, there was no evidence for the claims presented in the Swiss Re article, based on Peter White’s presentation, that these treatments were any more effective in getting people back to work.

The PACE investigators published this employment data in a 2012 paper in PLoS One. It is unclear whether Peter White already knew these results at the time of his Swiss Re presentation on the PACE results.

Update 11/18/15 6:54 AM: I also forgot to mention in the story that the three principal PACE investigators did not respond to an e-mail seeking comment about their insurance industry work. Lancet editor Richard Horton also did not respond to an e-mail seeking comment.

An open letter to Dr. Richard Horton and The Lancet

Dr. Richard Horton
The Lancet
125 London Wall
London, EC2Y 5AS, UK

Dear Dr. Horton:

In February, 2011, The Lancet published an article called “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomized trial.” The article reported that two “rehabilitative” approaches, cognitive behavior therapy and graded exercise therapy, were effective in treating chronic fatigue syndrome, also known as myalgic encephalomyelitis, ME/CFS and CFS/ME. The study received international attention and has had widespread influence on research, treatment options and public attitudes.

The PACE study was an unblinded clinical trial with subjective primary outcomes, a design that requires strict vigilance in order to prevent the possibility of bias. Yet the study suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings. The patient and advocacy communities have known this for years, but a recent in-depth report on this site, which included statements from five of us, has brought the extent of the problems to the attention of a broader public. The PACE investigators have replied to many of the criticisms, but their responses have not addressed or answered key concerns.

The major flaws documented at length in the recent report include, but are not limited to, the following:

*The Lancet paper included an analysis in which the outcome thresholds for being “within the normal range” on the two primary measures of fatigue and physical function demonstrated worse health than the criteria for entry, which already indicated serious disability. In fact, 13 percent of the study participants were already “within the normal range” on one or both outcome measures at baseline, but the investigators did not disclose this salient fact in the Lancet paper. In an accompanying Lancet commentary, colleagues of the PACE team defined participants who met these expansive “normal ranges” as having achieved a “strict criterion for recovery.” The PACE authors reviewed this commentary before publication.

*During the trial, the authors published a newsletter for participants that included positive testimonials from earlier participants about the benefits of the “therapy” and “treatment.” The same newsletter included an article that cited the two rehabilitative interventions pioneered by the researchers and being tested in the PACE trial as having been recommended by a U.K. clinical guidelines committee “based on the best available evidence.” The newsletter did not mention that a key PACE investigator also served on the clinical guidelines committee. At the time of the newsletter, two hundred or more participants—about a third of the total sample–were still undergoing assessments.

*Mid-trial, the PACE investigators changed their protocol methods of assessing their primary outcome measures of fatigue and physical function. This is of particular concern in an unblinded trial like PACE, in which outcome trends are often apparent long before outcome data are seen. The investigators provided no sensitivity analyses to assess the impact of the changes and have refused requests to provide the results per the methods outlined in their protocol.

*The PACE investigators based their claims of treatment success solely on their subjective outcomes. In the Lancet paper, the results of a six-minute walking test—described in the protocol as “an objective measure of physical capacity”–did not support such claims, notwithstanding the minimal gains in one arm. In subsequent comments in another journal, the investigators dismissed the walking-test results as irrelevant, non-objective and fraught with limitations. All the other objective measures in PACE, presented in other journals, also failed. The results of one objective measure, the fitness step-test, were provided in a 2015 paper in The Lancet Psychiatry, but only in the form of a tiny graph. A request for the step-test data used to create the graph was rejected as “vexatious.”

*The investigators violated their promise in the PACE protocol to adhere to the Declaration of Helsinki, which mandates that prospective participants be “adequately informed” about researchers’ “possible conflicts of interest.” The main investigators have had financial and consulting relationships with disability insurance companies, advising them that rehabilitative therapies like those tested in PACE could help ME/CFS claimants get off benefits and back to work. They disclosed these insurance industry links in The Lancet but did not inform trial participants, contrary to their protocol commitment. This serious ethical breach raises concerns about whether the consent obtained from the 641 trial participants is legitimate.

Such flaws have no place in published research. This is of particular concern in the case of the PACE trial because of its significant impact on government policy, public health practice, clinical care, and decisions about disability insurance and other social benefits. Under the circumstances, it is incumbent upon The Lancet to address this matter as soon as possible.

We therefore urge The Lancet to seek an independent re-analysis of the individual-level PACE trial data, with appropriate sensitivity analyses, from highly respected reviewers with extensive expertise in statistics and study design. The reviewers should be from outside the U.K. and outside the domains of psychiatry and psychological medicine. They should also be completely independent of, and have no conflicts of interests involving, the PACE investigators and the funders of the trial.

Thank you very much for your quick attention to this matter.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London

Leonard A. Jason, PhD
Professor of Psychology
DePaul University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley

Trial By Error, Continued: Why has the PACE Study’s “Sister Trial” been “Disappeared” and Forgotten?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

In 2010, the BMJ published the results of the Fatigue Intervention by Nurses Evaluation, or FINE. The investigators for this companion trial to PACE, also funded by the Medical Research Council, reported no benefits to ME/CFS patients from the interventions tested.

In medical research, null findings often get ignored in favor of more exciting “positive” results. In this vein, the FINE trial seems to have vanished from the public discussion over the controversial findings from the PACE study. I thought it was important to re-focus some attention on this related effort to prove that “deconditioning” is the cause of the devastating symptoms of ME/CFS. (This piece is also too long but hopefully not quite as dense.)

An update on something else: I want to thank the public relations manager from Queen Mary University of London for clarifying his previous assertion that I did not seek comment from the PACE investigators before Virology Blog posted my story. In an e-mail, he explained that he did not mean to suggest that I hadn’t contacted them for interviews. He only meant, he wrote, that I hadn’t sent them my draft posts for comment before publication. He apologized for the misunderstanding.

I accept his apology, so that’s the end of the matter. In my return e-mail, however, I did let him know I was surprised at the expectation that I might have shared the draft with the PACE investigators before publication. I would not have done that whether or not they had granted me interviews. This is journalism, not peer-review. Different rules.

************************************************************************

In 2003, with much fanfare, the U.K. Medical Research Council announced that it would fund two major studies of non-pharmacological treatments for chronic fatigue syndrome. In addition to PACE, the agency decided to back a second, smaller study called “Fatigue Intervention by Nurses Evaluation,” or FINE. Because the PACE trial was targeting patients well enough to attend sessions at a medical clinic, the complementary FINE study was designed to test treatments for more severely ill patients.

(Chronic fatigue syndrome is also known as myalgic encephalomyelitis, CFS/ME, and ME/CFS, which has now been adopted by U.S. government agencies. The British investigators of FINE and PACE prefer to call it chronic fatigue syndrome, or sometimes CFS/ME.)

Alison Wearden, a psychologist at the University of Manchester, was the lead FINE investigator. She also sat on the PACE Trial Steering Committee and wrote an article about FINE for one of the PACE trial’s participant newsletters. The Medical Research Council and the PACE team referred to FINE as PACE’s “sister” trial. The two studies included the same two primary outcome measures, self-reported fatigue and physical function, and used the same scales to assess them.

The FINE results were published in BMJ in April, 2010. Yet when the first PACE results were published in The Lancet the following year, the investigators did not mention the FINE trial in the text. The trial has also been virtually ignored in the subsequent public debate over the results of the PACE trial and the effectiveness, or lack thereof, of the PACE approach.

What happened? Why has the FINE trial been “disappeared”?

*****

The main goal of the FINE trial was to test a treatment for homebound patients that adapted and combined elements of cognitive behavior therapy and graded exercise therapy, the two rehabilitative therapies being tested in PACE. The approach, called “pragmatic rehabilitation,” had been successfully tested in a small previous study. In FINE, the investigators planned to compare “pragmatic rehabilitation” with another intervention and with standard care from a general practitioner.

Here’s what the Medical Research Council wrote about the main intervention in an article in its newsletter, MRC Network, in the summer of 2003: “Pragmatic rehabilitation…is delivered by specially trained nurses, who give patients a detailed physiological explanation of symptom patterns. This is followed by a treatment programme focussing on graded exercise, sleep and relaxation.”

The second intervention arm featured a treatment called “supportive listening,” a patient-centered and non-directive counseling approach. This treatment presumed that patients might improve if they felt that the therapist empathized with them, took their concerns seriously, and allowed them to find their own approach to addressing the illness.

The Medical Research Council committed 1.3 million pounds to the FINE trial. The study was conducted in northwest England, with 296 patients recruited from primary care. Each intervention took place over 18 weeks and consisted of ten sessions–five home visits lasting up to 90 minutes alternating with five telephone conversations of up to 30 minutes.

As in the PACE trial, patients were selected using the Oxford criteria for chronic fatigue syndrome, defined as the presence of six months of medically unexplained fatigue, with no other symptoms required. The Oxford criteria have been widely criticized for yielding heterogeneous samples, and a report commissioned by the National Institutes of Health this year recommended that the case definition be “retired” for that reason.

More specific case definitions for the illness require the presence of core symptoms like post-exertional malaise, cognitive problems and sleep disorders, rather than just fatigue per se. Because the symptom called post-exertional malaise means that patients can suffer severe relapses after minimal exertion, many patients and advocacy organizations consider increases in activity to be potentially dangerous.

To be eligible for the FINE trial, participants needed to score 70 or less out of 100 on the physical function scale, the Medical Outcomes Study 36-Item Short Form Health Survey, known as the SF-36. They also needed to score a 4 or more out of 11 on the 11-item Chalder Fatigue Scale, with each item scored as either 0 or 1. On the fatigue scale, a higher score indicated greater fatigue.
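For readers who want the FINE entry thresholds in one place, here is a minimal sketch of the eligibility check as described above. The function name and structure are mine, for illustration only; they are not taken from the trial documents.

def fine_eligible(sf36_physical_function, chalder_bimodal):
    """Illustrative sketch of the FINE entry thresholds described above.

    sf36_physical_function: SF-36 physical function score, 0-100 (higher = better functioning)
    chalder_bimodal: Chalder Fatigue Scale score, 0-11, bimodal scoring (higher = more fatigue)
    """
    return sf36_physical_function <= 70 and chalder_bimodal >= 4

# Example: a participant with substantial physical limitation and fatigue would qualify.
print(fine_eligible(sf36_physical_function=45, chalder_bimodal=8))  # True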

Among other measures, the trial also included a key objective outcome–the “time to take 20 steps, (or number of steps taken, if this is not achieved) and maximum heart rate reached on a step-test.”

Participants were to be assessed on these measures at 20 weeks, which was right after the end of the treatment period, and again at 70 weeks, which was one year after the end of treatment. The FINE trial protocol, published in the journal BMC Medicine in 2006, noted that “short-term assessments of outcome in a chronic health condition such as CFS/ME can be misleading” and declared the 70-week assessment to be the “primary outcome point.”

*****

The theoretical model behind the FINE trial and pragmatic rehabilitation paralleled the PACE concept. The physical symptoms were presumed to be the result not of a pathological disease process but of “deconditioning” or “dysregulation” caused by sedentary behavior, accompanied by disrupted sleep cycles and stress. The sedentary behavior was itself presumed to be triggered by patients’ “unhelpful” conviction that they suffered from a progressive medical illness. Counteracting the deconditioning involved re-establishing normal sleep cycles, reducing anxiety levels and gently increasing physical exertion, even if patients remained homebound.

“The treatment [pragmatic rehabilitation] is based on a model proposing that CFS/ME is best understood as a consequence of physiological dysregulation associated with inactivity and disturbance of sleep and circadian rhythms,” stated the FINE trial protocol. “We have argued that these conditions…are often maintained by illness beliefs that lead to exercise-avoidance. The essential feature of the treatment is the provision of a detailed explanation for patients’ symptoms, couched in terms of the physiological dysregulation model, from which flows the rationale for a graded return to activity.”

On the FINE trial website, a 2004 presentation about pragmatic rehabilitation explained the illness in somewhat simpler terms, comparing it to “very severe jetlag.” After explaining how and why pragmatic rehabilitation led to physical improvement, the presentation offered this hopeful message, in boldface: “There is no disease–you have a right to full health. This is a good news diagnosis. Carefully built up exercise can reverse the condition. Go for 100% recovery.”

In contrast, patients, advocates and many leading scientists have completely rejected the PACE and FINE approach. They believe the evidence overwhelmingly points to an immunological and neurological disorder triggered by an initial infection or some other physiological insult. Last month, the National Institutes of Health ratified this perspective when it announced a major new push to seek biomedical answers to the disease, which it refers to as ME/CFS.

As in PACE, patients in the FINE trial were issued different treatment manuals depending upon their assigned study arm. The treatment manual for pragmatic rehabilitation repeatedly informed participants that the therapy could help them get better—even though the trial itself was designed to test the effectiveness of the therapy. (In the PACE trial, the manuals for the cognitive behavior and graded therapy arms also included many statements promoting the idea that the therapies could successfully treat the illness.)

“This booklet has been written with the help of patients who have made a full recovery from Chronic Fatigue Syndrome,” stated the FINE pragmatic rehabilitation manual on its second page. “Facts and information which were important to them in making this recovery have been included.” The manual noted that the patients who helped write it had been treated at the Royal Liverpool University Hospital but did not include more specific details about their “full recovery” from the illness.

Among the “facts and information” included in the manual were assertions that the trial participants, contrary to what they might themselves believe, had no persistent viral infection and “no underlying serious disease.” The manual promised them that pragmatic rehabilitation could help them overcome the illness and the deconditioning perpetuating it. “Instead of CFS controlling you, you can start to regain control of your body and your life,” stated the manual.

Finally, as in PACE, participants were encouraged to change their beliefs about their condition by “building the right thoughts for your recovery.” Participants were warned that “unhelpful thoughts”—such as the idea that continued symptoms indicated the presence of an organic disease and could not be attributed to deconditioning—“can put you off parts of the treatment programme and so delay or prevent recovery.”

The supportive listening manual did not similarly promote the idea that “recovery” from the illness was possible. During the sessions, the manual explained, “The listener, your therapist, will provide support and encourage you to find ways to cope by using your own resources to change, manage or adapt to difficulties…She will not tell you what to do, advise, coach or direct you.”

*****

A qualitative study about the challenges of the FINE research process, published by the investigators in the journal Implementation Science in 2011, shed light on how much the theoretical framework and the treatment approaches frustrated and angered trial participants. According to the interviews with some of the nurses, nurse supervisors, and participants involved in FINE, the home visits often bristled with tension over the different perceptions of what caused the illness and which interventions could help.

“At times, this lack of agreement over the nature of the condition and lack of acceptance as to the rationale behind the treatment led to conflict,” noted the FINE investigators in the qualitative paper. “A particularly difficult challenge of interacting with patients for the nurses and their supervisors was managing patients’ resistance to the treatment.”

One participant in the pragmatic rehabilitation arm, who apparently found it difficult to do what was expected, attributed this resistance to the insistence that deconditioning caused the symptoms and that activity would reverse them. “If all that was standing between me and recovery was the reconditioning I could work it out and do it, but what I have got is not just a reconditioning problem,” the participant said. “I have got something where there is damage and a complete lack of strength actually getting into the muscles and you can’t work with what you haven’t got in terms of energy.”

Another participant in the pragmatic rehabilitation arm was more blunt. “I kept arguing with her [the nurse administering the treatment] all the time because I didn’t agree with what she said,” said the participant, who ended up dropping out of the trial.

Some participants in the supportive listening arm also questioned the value of the treatment they were receiving, according to the study. “I mostly believe it was more physical than anything else, and I didn’t see how talking could truthfully, you know, if it was physical, do anything,” said one.

In fact, the theoretical orientation alienated some prospective participants as well, according to interviews the investigators conducted with some patients who declined to enter the trial. “It [the PR intervention] insisted that physiologically there was nothing wrong,” said one such patient. “There was nothing wrong with my glands, there was nothing wrong, that it was just deconditioned muscles. And I didn’t believe that…I can’t get well with treatment you don’t believe in.”

When patients challenged or criticized the therapeutic interventions, the study found, nurses sometimes felt their authority and expertise to be under threat. “They are testing you all the time,” said one nurse. Another reported: “That anger…it’s very wearing and demoralizing.”

One nurse remembered the difficulties she faced with a particular participant. “I used to go there and she would totally block me, she would sit with her arms folded, total silence in the house,” said the nurse. “It was tortuous for both of us.”

At times, nurses themselves responded to these difficult interactions with bouts of anger directed at the participants, according to a supervisor.

“Their frustration has reached the point where they sort of boiled over,” said the supervisor. “There is sort of feeling that the patient should be grateful and follow your advice, and in actual fact, what happens is the patient is quite resistant and there is this thing like you know, ‘The bastards don’t want to get better.’”

*****

BMJ published the FINE results in 2010. The FINE investigators found no statistically significant benefits to either pragmatic rehabilitation or supportive listening at 70 weeks. Despite these null findings one year after the end of the 18-week course of treatment, the mean scores of those in the pragmatic rehabilitation arm showed a “clinically modest” but statistically significant reduction in fatigue at 20 weeks—a drop of one point (plus a little) on the 11-point fatigue scale. The slight improvement still left participants, on average, much more fatigued than the trial’s own entry threshold, and any benefits were no longer statistically significant by the final assessment.

Despite the null findings at 70 weeks, the authors put a positive gloss on the results, reporting first in the abstract that fatigue was “significantly improved” at 20 weeks. Given the very modest one-point change in average fatigue scores, perhaps the FINE investigators intended to report instead that there was a “statistically significant improvement” at 20 weeks—an accurate phrase with a somewhat different meaning.
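The distinction matters because, with a few hundred participants, even a one-point average change on an eleven-point scale can clear the bar for statistical significance while remaining clinically trivial. The sketch below illustrates that general point with invented numbers; the group sizes and standard deviation are assumptions for illustration, not the FINE trial’s actual figures.

import math

# Hypothetical numbers, chosen only to illustrate the general point.
mean_diff = 1.2    # assumed difference in mean fatigue scores between arms at 20 weeks
sd = 3.0           # assumed common standard deviation of fatigue scores
n_per_arm = 140    # assumed number of participants per arm

# Two-sample z statistic for a difference in means.
se = math.sqrt(sd**2 / n_per_arm + sd**2 / n_per_arm)
z = mean_diff / se
print(f"z = {z:.2f}")  # roughly 3.3, comfortably "statistically significant"

# Yet the standardized effect is modest relative to the spread of the scale.
print(f"Cohen's d = {mean_diff / sd:.2f}")  # 0.40, a small-to-moderate effect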

The abstract included another interesting linguistic element. While the trial protocol had designated the 70-week assessment as “the primary outcome point,” the abstract of the paper itself now stated that “the primary clinical outcomes were fatigue and physical functioning at the end of treatment (20 weeks) and 70 weeks from recruitment.”

Having redefined their primary outcome points to include the 20-week as well as the 70-week assessment, the investigators promoted the positive effects found at the earlier point as the study’s main finding in the abstract. Only after communicating these initial benefits did they note that the advantages for pragmatic rehabilitation later wore off. The FINE paper cited no oversight committee approval for this expanded interpretation of the trial’s primary outcome points, nor did it mention the protocol’s caveat about the “misleading” nature of short-term assessments in chronic health conditions.

In fact, within the text of the paper, the investigators noted that the “pre-designated outcome point” was 70 weeks. But they did not explain why the abstract instead gave top billing to the 20-week assessment, a post-hoc rather than pre-designated “primary” outcome point.

A BMJ editorial that accompanied the FINE trial also accentuated the positive results at 20 weeks rather than the bad news at 70 weeks. According to the editorial’s subhead, pragmatic rehabilitation “has a short term benefit, but supportive listening does not.” The editorial did not note that the 20-week assessment was not the pre-designated primary outcome point, and the null results at 70 weeks were not mentioned until later in the editorial.

*****

Patients and advocates soon began criticizing the study in the “rapid response” section of the BMJ website, citing its theoretical framework, the use of the broad Oxford criteria as a case definition, and the failure to provide the step-test outcomes, among other issues.

“The data provide strong evidence that the anxiety and deconditioning model of CFS/ME on which the trial is predicated is either wrong or, at best, incomplete,” wrote one patient. “These results are immensely important because they demonstrate that if a cure for CFS/ME is to be found, one must look beyond the psycho-behavioural paradigm.”

Another patient wrote that the study was “a wake-up call to the whole of the medical establishment” to take the illness seriously. One predicted “that there will be those who say that this trial failed because the patients were not trying hard enough.”

A physician from Australia sought to defend the interests not of patients but of the English language, decrying the lack of hyphens in the paper’s full title: “Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial.”

“The hyphen is a coupling between carriages of words to ensure unambiguous transmission of thought,” wrote the doctor. “Surely this should read ‘Nurse-led, home-based, self-help…’

“Lest English sink further into the Great Despond of ambiguity and non-sense [hyphen included in the original comment], may I implore the co-editors of the BMJ to be the vigilant watchdogs of our mother tongue which at the hands of a younger ‘texting’ generation is heading towards anarchy.” [The original comment did not include the expected comma between ‘tongue’ and ‘which.’]

*****

In a response on the BMJ website a month after publishing the study, the FINE investigators reported that they had conducted a post-hoc analysis with a different kind of scoring for the Chalder Fatigue Scale.

Instead of scoring the answers as 0 or 1 using what was called a bimodal scale, they rescored them using what was called a continuous scale, with values ranging from 0 to 3. The full range of possible scores now ran from 0 to 33, rather than 0 to 11. (As collected, the data for the Chalder Fatigue Scale allowed for either scoring system; however, the original entry criterion of 4 on the bimodal scale would translate into a range from 4 to as high as 19 on the revised scale.)
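To make the difference concrete, here is a minimal sketch in Python of how the same eleven answers yield different totals under the two systems. The item responses and function names are hypothetical illustrations, not drawn from the FINE data or analysis code; the sketch simply assumes the standard mapping in which each item is answered on a 0-to-3 scale and the bimodal method collapses those answers to 0 or 1.

    def bimodal_score(responses):
        # Bimodal scoring: answers of 0 or 1 count as 0, answers of 2 or 3 count as 1.
        # With 11 items, totals run from 0 to 11.
        return sum(1 if r >= 2 else 0 for r in responses)

    def continuous_score(responses):
        # Continuous (Likert) scoring: each answer keeps its 0-3 value.
        # With 11 items, totals run from 0 to 33.
        return sum(responses)

    answers = [2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1]   # hypothetical participant
    print(bimodal_score(answers))      # 4
    print(continuous_score(answers))   # 15

The same set of answers, in other words, lands at different points on the two scales, which helps explain why a re-analysis under a different scoring method can turn a null result into a statistically significant one.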

With the revised scoring, they now reported a “clinically modest, but statistically significant effect” of pragmatic rehabilitation at 70 weeks—a reduction from baseline of about 2.5 points on the 0 to 33 scale. This final score represented some increase in fatigue from the 20-week interim assessment point.

In their comment on the website, the FINE investigators now reaffirmed that the 70-week assessment was “our primary outcome point.” This statement conformed to the protocol but differed from the suggestion in the BMJ paper that the 20-week results also represented “primary” outcomes. Given that the post-hoc rescoring allowed the investigators to report statistically significant results at the 70-week endpoint, this zig-zag back to the protocol language was perhaps not surprising.

In their comment, the FINE investigators also explained that they did not report their step-test results, their one objective measure of physical capacity, “due to a significant amount of missing data.” They did not provide an explanation for the missing data. (One obvious possible reason for missing data on an objective fitness test is that participants were too disabled to perform it at all.)

The FINE investigators did not address the question of whether the title of their paper should have included hyphens.

In the rapid comments, Tom Kindlon, a patient and advocate from a Dublin suburb, responded to the FINE investigators’ decision to report their new post-hoc analysis of the fatigue scale. He noted that the investigators themselves had chosen the bimodal scoring system for their study rather than the continuous method.

“I’m sure many pharmacological and non-pharmacological studies could look different if investigators decided to use a different scoring method or scale at the end, if the results weren’t as impressive as they’d hoped,” he wrote. “But that is not normally how medicine works. So, while it is interesting that the researchers have shared this data, I think the data in the main paper should be seen as the main data.”

*****

The FINE investigators have published a number of other papers arising from their study. In a 2013 paper on mediators of the effects of pragmatic rehabilitation, they reported that there were no differences among the three groups on the step test, the objective measure of physical capacity whose results they had earlier declined to publish in the BMJ paper.

Wearden herself presented the trial as a high point of her professional career in a 2013 interview for the website of the University of Manchester’s School of Psychological Sciences. “I suppose the thing I did that I’m most proud of is I ran a large treatment trial of pragmatic rehabilitation treatment for patients with chronic fatigue syndrome,” she said in the interview. “We successfully carried that trial out and found a treatment that improved patients’ fatigue, so that’s probably the thing that I’m most proud of.”

The interview did not mention that the improvement in fatigue at 20 weeks had worn off by the final assessment, and that a statistically significant benefit at 70 weeks emerged only after the investigators performed a post-hoc analysis and rescored the fatigue scale.

*****

The Science Media Centre, a self-styled “independent” purveyor of information about science and scientific research to journalists, has consistently shown an interest in research on what it calls CFS/ME. It held a press briefing for the first PACE results published in The Lancet in 2011, and has helped publicize the release of subsequent studies from the PACE team.

However, the Science Media Centre does not appear to have done anything to publicize the 2010 release of the FINE trial, despite its interest in the topic. A search of the center’s website for the lead FINE investigator, Alison Wearden, yielded no results. And a search for CFS/ME indicated that the first study embraced by the center’s publicity machine was the 2011 Lancet paper.

That might help explain why the FINE trial was virtually ignored by the media. A search on the LexisNexis database for “PACE trial” and “chronic fatigue syndrome” yielded 21 “newspaper” articles. (I put “newspaper” in quotation marks because I don’t know whether that number includes articles that appeared only on newspaper websites and not in print; the accuracy of the count is also in question because the list did not include two PACE-related articles I wrote for The New York Times.)

Searches on the database combining “chronic fatigue syndrome” with either “FINE trial” or “pragmatic rehabilitation” yielded no results. (I used the version of LexisNexis Academic available to me through the University of California library system.)

Other researchers have also paid scant attention to the FINE trial, especially when compared with the PACE study. According to Google Scholar, the 2011 PACE paper in The Lancet has been cited 355 times; the 2010 FINE paper in BMJ has been cited only 39 times.

*****

The PACE investigators likely exacerbated this virtual disappearance of the FINE trial by their decision not to mention it in their Lancet paper, despite its longstanding status as a “sister trial” and the relevance of the findings to their own study of cognitive behavior therapy and graded exercise therapy. The PACE investigators have not explained their reasons for ignoring the FINE trial. (I wrote about this lapse in my Virology Blog story, but in their response the PACE investigators did not mention it.)

This absence is particularly striking in light of the PACE investigators’ decision to drop their protocol method of assessing the Chalder Fatigue Scale. In the protocol, their primary fatigue outcome was based on bimodal scoring of the 11-item fatigue scale. Continuous scoring, on the 0 to 33 scale, was included as a secondary outcome.

In the PACE paper itself, the investigators announced that they had dropped the bimodal scoring in favor of the continuous scoring “to more sensitively test our hypotheses of effectiveness.” They did not explain why they did not simply provide the findings under both scoring methods, since the data as collected allowed for either analysis. They also did not cite any references to support this mid-trial decision, nor did they explain what prompted it.

They certainly did not mention that PACE’s “sister” study, the FINE trial, had reported null results at the 70-week endpoint—that is, until the investigators rescored the data using a continuous scale rather than the bimodal scale used in the original paper.

The three main PACE investigators (psychiatrists Peter White and Michael Sharpe, and behavioral psychologist Trudie Chalder) did not respond to an e-mail request for comment on why their Lancet paper did not mention the FINE study, especially in reference to their own post-hoc decision to change the method of scoring the fatigue scale. Lancet editor Richard Horton also did not respond to an e-mail request for an interview about whether he believed the Lancet paper should have included information about the FINE trial and its results.

*****

Update 11/9/15 10:46 PM: According to a list of published and in-process papers on the FINE trial website, the main FINE study was rejected by The Lancet before being accepted by BMJ, suggesting that The Lancet was at least aware of the trial well before it published the PACE study. That raises further questions about the absence of any mention of FINE and its null findings in the text of the PACE paper.