By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.

Wow, the research from the CBT/GET crowd in The Netherlands never ceases to amaze. Like the work of their friends in the U.K., each study comes up with new ways to be bad. It’s almost too easy to poke holes in these things. And yet the investigators appear unable to restrain themselves from making extremely generous over-interpretations of their findings–interpretations that cannot withstand serious scrutiny. The investigators always conclude, no matter what, that cognitive and/or behavioral therapies are effective for treating the disease they usually call chronic fatigue syndrome.

That this so-called science manages to get through peer review is astonishing. That is, unless we assume the studies are all peer-reviewed by other investigators who share the authors’ “unhelpful beliefs” and “dysfunctional cognitions” about ME/CFS and the curative powers of cognitive behavior therapy and graded exercise therapy.

Let’s take a quick look at yet another Dutch study of CBT for adolescents, a 2004 trial published in the BMJ. This one offers a superb example of over-interpretation. The small trial, with 71 participants, had two arms. One group received ten sessions of CBT over five months. The other received…a place on a waiting list for treatment. That’s right–they got nothing. Guess what? Those who got something did better on subjective measures at five months than those who got nothing. The investigators’ definitive conclusion: CBT is an effective treatment for sick teens.

I mean, WTF? It’s not hard to figure out that, you know, offering people some treatment is more likely to produce positive responses to subjective questions than offering them a place on a waiting list. That banal insight must be right in the first chapter of Psychological Research for Dummies. Aren’t these investigators presenting themselves as authorities on human behavior? Have they heard of something called the placebo effect?

Here’s what this BMJ study proved: Ten sessions of something lead to more reports of short-term benefits than no sessions of anything. But ten sessions of what? Maybe ten sessions of poker-playing or ten sessions of watching Seinfeld reruns while holding hands with the therapist and singing “The Girl from Ipanema” in falsetto would have produced the same results. Who knows? To flatly declare that their findings prove that CBT is an effective treatment—without caveats or an iota of caution—is a huge and unacceptable interpretive leap. The paper should never have been published in this form. It’s ridiculous to take this study as some kind of solid “evidence” for CBT.

But from the perspective of the Dutch research group, this waiting-list strategy apparently worked so well that they used it again for a 2015 study of group CBT for chronic fatigue syndrome. In this study, providing CBT in groups of four or eight patients worked significantly better than placing patients on a waiting list and providing them with absolutely nothing. Of course, no one could possibly take these findings to mean that group CBT specifically is an effective treatment—except they did.

When I’m reading this stuff I sometimes feel like I’m going out of my mind. Do I really have to pick through every one of these papers to point out flaws that a first-year epidemiology student could spot?

One big issue here is how these folks piggy-back one bad study on top of another to build what appears to be a robust body of research but is, in fact, a house of cards. When you expose the cracks in the foundational studies, the whole edifice comes tumbling down. A case in point: a 2007 Dutch study that explored the effect of CBT on “self-reported cognitive impairments and neuropsychological test performance.” Using data from two earlier studies, the investigators concluded that CBT reduced self-reported cognitive impairment but did not improve neuropsychological test performance.

Which studies was this 2007 study based on? Well, one of them was the very problematic 2004 study I have just discussed–the one that found CBT effective when compared to nothing. The other was the 2001 study in The Lancet that I wrote about in my last post. As I noted, this Lancet study claimed to be using the CDC criteria for chronic fatigue syndrome, but then waived the requirement that patients have four other symptoms besides fatigue. So it was, in effect, a study of a heterogeneous group of people suffering from at least six months of fatigue.

This case definition—six months of fatigue, with no other symptoms necessary—was used in the PACE trial and is known as the Oxford criteria. It has been discredited because it generates heterogeneous populations of people suffering from a variety of fatiguing illnesses. The results of Oxford criteria studies cannot be extrapolated to those with ME/CFS.

The 2007 study relies on the accuracy and validity of the two studies whose data it incorporates. Since those earlier studies violated basic understandings of scientific analysis, the new study is also bogus and cannot be taken seriously.

The PACE authors themselves have perfected this strategy of generating new bad papers by stacking up earlier bad ones. In November, Trudie Chalder demonstrated her personal flair for this technique as co-author of a systematic review of “attentional and interpretive bias towards illness-related information in chronic fatigue syndrome.” The authors’ conclusion: “Cognitive processing biases may maintain illness beliefs and symptoms in people with CFS.” The proposed solution to that would obviously be some sessions of CBT to correct those pesky cognitive processing biases.

Among other problems, Dr. Chalder and her co-authors included data from Oxford criteria studies. By including in the mix these heterogeneous samples of people suffering from chronic fatigue, Dr. Chalder and her colleagues have invalidated their claim that it is a study of the illness known as chronic fatigue syndrome. Of course, Psychological Medicine, which published this new research gem, is the journal that published—and has consistently refused to correct–the PACE “recovery” paper in which participants could get worse but still meet “recovery” thresholds.

The Dutch branch of the CBT/GET ideological brigade has been centered at Radboud University Nijmegen, home base for many years of two of the movement’s leading lights: Dr. Gijs Bleijenberg and Dr. Hans Knoop. Dr. Knoop recently moved to the University of Amsterdam and is currently a co-investigator of FITNET-NHS with Esther Crawley. Dr. Bleijenberg, on the occasion of his own retirement a few years ago, had this to say about his longtime friend and colleague, PACE investigator Michael Sharpe: “Dear Mike, we know each other nearly 20 years. You have inspired me very much in the way you treated CFS. Thanks a lot!”

Indeed. Dr. Bleijenberg and his Dutch colleagues appear to have learned a great deal from their PACE besties. Dr. Bleijenberg and Dr. Knoop demonstrated their own nimble use of language in the 2011 commentary in The Lancet that accompanied the publication of the first PACE results. I discussed this deceptive commentary at length in a post last year, so I won’t regurgitate the whole sorry argument here. But the Dutch investigators themselves are well aware that their claim that thirty percent of PACE participants met a “strict criterion” for recovery is preposterous.

How do I know that Dr. Bleijenberg and Dr. Knoop know this? Because as I documented in last year’s post, claims in the 2011 commentary contradict and ignore statements they themselves made in a 2007 paper that posed this question: “Is a full recovery possible after cognitive behavioural therapy for chronic fatigue syndrome?” (The answer, of course, was yes. Peter White, the lead PACE investigator, was a co-author of the 2007 paper.) Moreover, Dr. Bleijenberg and Dr. Knoop certainly know that the “strict criterion” they touted included thresholds that some participants had already met at baseline—yet they have still refused to correct this statement.

Given that all of these studies present serious methodological concerns, the Dutch Health Council panel considering the science of ME/CFS should be very, very wary of using them to formulate recommendations. The panel should understand that, within the next few months, peer-reviewed analyses of the original PACE data are likely to be published. (Two such analyses—one by the PACE authors themselves, one by an independent group of patients and academic statisticians–have already been published online, without peer review.) The upcoming papers will demonstrate conclusively that the “benefits” reported by the PACE team were mostly or completely illusory—and were obtained only by methodological anomalies like dramatic and unacceptable changes in outcome measures.

In an open letter to The Lancet posted on Virology Blog last February, dozens of prominent scientists and clinicians condemned the PACE study and its conclusions in harsh terms. In the U.K., the First-tier Tribunal cited this worldwide dismay about the trial's egregious lapses while demolishing the PACE authors' excuses for withholding their data. The studies from the Radboud University crowd and their compatriots all rest on the same silly, unproven hypotheses of dysfunctional thinking, fear of activity, and deconditioning, and are just as intellectually incoherent and dishonest.

Should the Health Council produce a report recommending cognitive and behavioral treatments based on this laughable body of "research," the organization could become an international joke and suffer enormous long-term reputational damage. The entire PACE paradigm is undergoing a very public unraveling. Everyone can now see what patients have seen for years. Meanwhile, biomedical researchers in the U.S., Norway, and elsewhere are zeroing in on the actual pathophysiology underlying ME/CFS.

It would be a shame to see the Dutch marching backwards to embrace scientific illiteracy and adopt an “Earth-is-flat” approach to reality.


And for a special bonus, let's now take another quick peek at Dr. Crawley's work. Someone recently e-mailed me a photo of a poster presentation by Dr. Crawley and three colleagues. This poster was shown at the inaugural conference of the U.K. CFS/ME Research Collaborative, or CMRC, held in 2014. The poster was based on information from the same dataset used for Dr. Crawley's recent Pediatrics study. As I pointed out two posts ago, that flawed study claimed a surprisingly high prevalence of 2% among adolescents—a figure that drew widespread attention in media reports.

Dr. Crawley has cited high prevalence estimates to argue for more research into and treatment with CBT and GET. And if these prevalence rates were real, that might make sense. However, as I noted, her method of identifying the illness was specious—she decided, without justification or explanation, that she could diagnose chronic fatigue syndrome through parental and child reports of chronic fatigue, and without information from clinical examinations. In fact, after those who appeared to have high levels of depression were removed, the prevalence fell to 0.6%, although this lower figure is not the one Dr. Crawley has emphasized.

Despite the high prevalence, however, the same dataset showed that adolescents suffering from the illness generally got better without any treatment at all, according to the 2014 poster presentation. Here's the poster's conclusion: "Persistent CFS/ME is rare in teenagers and most teenagers not seen in a clinical service will recovery [sic] spontaneously."

Isn’t that great? Why haven’t I seen these hopeful data before? Although the poster predated this year’s Pediatrics paper, the data about very high rates of spontaneous recovery did not make it into that prevalence study. Moreover, the FITNET-NHS protocol and the recruitment leaflet highlight the claim that few adolescents will recover at six months without “specialist treatment” but most will recover if they receive it. Unmentioned is the highly salient fact that this “specialist treatment” apparently makes no long-term difference.

In reality, the adolescents who recovered spontaneously most likely were not suffering from ME/CFS in the first place. Dr. Crawley certainly hasn't provided sufficient evidence that any of the children in the database she used actually had it, despite her insistence on using the term. Presumably, some unknown number of those identified as having chronic fatigue syndrome in the Pediatrics paper and in the poster presentation did have ME/CFS. But many or most were experiencing what could only be called a bout of chronic fatigue, for unknown reasons.

It is disappointing that Dr. Crawley did not include the spontaneous recovery rate in the Pediatrics paper or in the FITNET-NHS protocol. In fact, as far as I can tell, these optimistic findings have not been published anywhere. I don’t know the rationale for this decision to withhold rather than publish substantive information. Perhaps the calculation is that public reports of high rates of spontaneous recovery would undermine the arguments for ever-more funding to study CBT and GET? Just a guess, of course.

(Esther–Forgive me if I’m mistaken about whether these data have been published somewhere. I have only seen this information in your poster for the inaugural CMRC conference in 2014. If the data have been peer-reviewed and published, I stand corrected on that point and applaud your integrity.)



Last week’s post on FITNET-NHS and Esther Crawley stirred up a lot of interest. I guess people get upset when researchers cite shoddy “evidence” from poorly designed trials to justify foisting psychological treatments on kids with a physiological disease. I wanted to post some additional bits and pieces related to the issue.


I sent Dr. Crawley a link to last week’s post, offering her an opportunity to send her response to Dr. Racaniello for posting on Virology Blog, along with my response to her response. So far, Dr. Racaniello and I haven’t heard back—I doubt we will. Maybe she feels more comfortable misrepresenting facts in trial protocols and radio interviews than in addressing the legitimate concerns raised by patients and confronting the methodological flaws in her research. I hope Dr. Crawley knows she will always have a place on Virology Blog to present her perspective, should she choose to exercise that option. (Esther, are you reading this?)


From reading the research of the CBT/GET/PACE crowd, I get the impression they are all in the habit of peer-reviewing and supporting each other's work. I make that assumption because it is hard to imagine that independent scientists not affiliated with this group would overlook all the obvious problems that mar their studies—like outcome measures that represent worse health than entry criteria, as in the PACE trial itself. So it's not surprising to learn that one of the three principal PACE investigators, psychiatrist Michael Sharpe, was on the committee that reviewed—and approved—Dr. Crawley's one-million-pound FITNET-NHS study.

FITNET-NHS is being funded by the U.K.’s National Institute for Health Research. I have no idea what role, if any, Dr. Sharpe played in pushing through Dr. Crawley’s grant, but it likely didn’t hurt that the FITNET-NHS protocol cited PACE favorably while failing to point out that it has been rejected as fatally flawed by dozens of distinguished scientists and clinicians. Of course, the protocol also failed to point out that the reanalyses of the trial data have shown that the findings published by the PACE authors were much better than the results using the methods they promised in their protocol. (More on the reanalyses below.) And as I noted in my previous post, the FITNET-NHS protocol also misstated the NICE guidelines for chronic fatigue syndrome, making post-exertional malaise an optional symptom rather than a required component—thus conflating chronic fatigue and chronic fatigue syndrome, just as the PACE authors did by using the overly broad Oxford criteria.

The FITNET-NHS proposal also didn't note some similarities between PACE and the Dutch FITNET trial on which it is based. Like the PACE authors, the Dutch investigators relied on a post-hoc definition of "recovery." The thresholds the FITNET investigators selected after they saw the results were pretty lax, which certainly made it easier to find that participants had attained "recovery." Also, as in the PACE trial, the Dutch participants in the comparison group ended up in the same place as the intervention group at long-term follow-up. Just as the CBT and GET in PACE offered no extended advantages, the same was true of the online CBT provided in FITNET.

And again like the PACE authors, the FITNET investigators downplayed these null findings in their follow-up paper. In a clinical trial, the primary results are supposed to be comparisons between the groups. Yet in the follow-up PACE and FITNET articles, both teams highlighted the “within-group” comparisons. That is, they treated the fact that there were no long-term differences between the groups as an afterthought and boasted instead that the intervention groups sustained the progress they initially made. That might be an interesting sub-finding, but to present “within-group” results as a clinical trial’s main outcome is highly disingenuous.


As part of her media blitz for the FITNET-NHS launch, Dr. Crawley was interviewed on a BBC radio program by a colleague, Dr. Phil Hammond. In this interview, she made some statements that demonstrate one of two things: Either she doesn’t know what she’s talking about and her misrepresentations are genuine mistakes, or she’s lying. So either she’s incompetent, or she lacks integrity. Not a great choice.

Let’s parse what she said about the fact that, at long-term follow-up, there were no apparent differences between the intervention and the comparison groups in the Dutch FITNET study. Here’s her comment:

“Oh, people have really made a mistake on this,” said Dr. Crawley. “So, in the FITNET Trial, they were offered FITNET or usual care for six months, and then if they didn’t make a recovery in the usual care, they were offered FITNET again, and they were then followed up at 2 to 3 years, so of course what happened is that a lot of the children who were in the original control arm, then got FITNET as well, so it’s not surprising that at 2 or 3 years, the results were similar.”

This is simply not an accurate description. As Dr. Crawley must know, some of the Dutch FITNET participants in the “usual care” comparison group went on to receive FITNET, and others didn’t. Both sets of usual care participants—not just those who received FITNET—caught up to the original FITNET group. For Dr. Crawley to suggest that the reason the others caught up was that they received FITNET is, perhaps, an unfortunate mistake. Or else it’s a deliberate untruth.


Another example from the BBC radio interview: Dr. Crawley’s inaccurate description of the two reanalyses of the raw trial data from the PACE study. Here’s what she said:

“First of all they did a reanalysis of recovery based on what the authors originally said they were going to do, and that reanalysis done by the authors is entirely consistent with their original results. [Actually, Dr. Crawley is mistaken here; the PACE authors did a reanalysis of “improvement,” not of “recovery”]…Then the people that did the reanalysis did it again, using a different definition of recovery, that was much much harder to reach–and the trial just wasn’t big enough to show a difference, and they didn’t show a difference. [Here, Dr. Crawley is talking about the reanalysis done by patients and academic statisticians.] Now, you know, you can pick and choose how you redefine recovery, and that’s all very important research, but the message from the PACE Trial is not contested; the message is, if you want to get better, you’re much more likely to get better if you get specialist treatment.”

This statement is at serious odds with the facts. Let’s recap: In reporting their findings in The Lancet in 2011, the PACE authors presented “improvement” results for the two primary outcomes of fatigue and physical function. They reported that about 60 percent of participants in the CBT and GET arms reached the selected thresholds for “improvement” on both measures. In a 2013 paper in the journal Psychological Medicine, they presented “recovery” results based on a composite “recovery” definition that included the two primary outcomes and two additional measures. In this paper, they reported “recovery” rates for the favored intervention groups of 22 percent.

Using the raw trial data that the court ordered them to release earlier this year, the PACE authors themselves reanalyzed the Lancet improvement findings according to the more stringent definition of "improvement" in their original protocol. In this analysis, only about 20 percent "improved" on both measures—a third as many as the 60 percent they reported in The Lancet. Moreover, ten percent of the comparison group also "improved," meaning that CBT and GET could be credited with "improvement" in, at most, one additional participant out of every ten—a pretty sad result for a five-million-pound trial.

However, because these meager findings were statistically significant, the PACE authors and their followers have, amazingly, trumpeted them as supporting their initial claims. In reality, the new “improvement” findings demonstrate that any “benefits” offered by CBT and GET are marginal. It is preposterous and insulting to proclaim, as the PACE authors and Dr. Crawley have, that this represents confirmation of the results reported in The Lancet. Dr. Crawley’s statement that “the message from the PACE trial is not contested” is of course nonsense. The PACE “message” has been exposed as bullshit—and everyone knows it.
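It's worth seeing how a result can clear the bar of statistical significance while remaining clinically marginal. The following is an illustrative sketch only: the arm sizes (roughly 160 participants each) and the 20-percent and 10-percent improvement rates are round-number assumptions based on the figures discussed above, not the actual PACE dataset.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical arms of ~160 each: ~20% "improved" with CBT/GET (32/160)
# vs ~10% with the comparison condition (16/160)
z, p = two_proportion_z(32, 160, 16, 160)
print(f"z = {z:.2f}, p = {p:.3f}")
# The difference is statistically significant, yet the absolute benefit
# amounts to roughly one extra "improver" per ten participants treated.
```

With these assumed numbers, the test comes out significant at the conventional 0.05 level, which is exactly the point: "statistically significant" and "meaningful benefit" are not the same claim.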

The PACE authors did not present their own reanalysis of the “recovery” findings—probably because those turned out to be null, as was shown in a reanalysis of that data by patients and academic statisticians, published on Virology Blog. That reanalysis found single-digit “recovery” rates for all the study arms, and no statistically significant differences between the groups. Dr. Crawley declared in the radio interview that this reanalysis used “a different definition of recovery, that was much harder to reach.” And she acknowledged that the reanalysis “didn’t show a difference”—but she blamed this on the fact that the PACE trial wasn’t big enough, even though it was the largest trial ever of treatments for ME/CFS.

This reasoning is specious. Dr. Crawley is ignoring the central point: The “recovery” reanalysis was based on the authors’ own protocol definition of “recovery,” not some arbitrarily harsh criteria created by outside agitators opposed to the trial. The PACE authors themselves had an obligation to provide the findings they promised in their protocol; after all, that’s the basis on which they received funding and ethical permission to proceed with the trial.

It is certainly understandable why they, and Dr. Crawley, prefer the manipulated and false "recovery" data published in Psychological Medicine. But deciding post-hoc to use weaker outcome measures and then refusing to provide your original results is not science. That's data manipulation. And if this outcome-switching is done with the intent to hide poor results in favor of better ones, it is considered scientific misconduct.


I also want to say a few words about the leaflet promoting FITNET-NHS. The leaflet states that most patients “recover” with “specialist treatment” and less than ten percent “recover” from standard care. Then it announces that this “specialist treatment” is available through the trial—implicitly promising that most of those who get the therapy will be cured.

This is problematic for a host of reasons. As I pointed out in my previous post, any claims that the Dutch FITNET trial, the basis for Dr. Crawley’s study, led to “recovery” must be presented with great caution and caveats. Instead, the leaflet presents such “recovery” as an uncontested fact. Also, the whole point of clinical trials is to find out if treatments work—in this case, whether the online CBT approach is effective, as well as cost-effective. But the leaflet is essentially announcing the result–“recovery”—before the trial even starts. If Dr. Crawley is so sure that this treatment is effective in leading to “recovery,” why is she doing the trial in the first place? And if she’s not sure what the results will be, why is she promising “recovery”?

Finally, as has been pointed out many times, the PACE investigators, Dr. Crawley and their Dutch colleagues all appear to believe that they can claim “recovery” based solely on subjective measures. Certainly any definition of “recovery” should require that participants can perform physically at their pre-sickness level. However, the Dutch researchers refused to release the one set of data—how much participants moved, as assessed by ankle monitors called actometers–that would have proven that the kids in FITNET had “recovered” on an objective measure of physical performance. The refusal to publish this data is telling, and leaves room for only one interpretation: The Dutch data showed that participants did no better than before the trial, or perhaps even worse, on this measure of physical movement.

This FITNET-NHS leaflet should be withdrawn because of its deceptive approach to promoting the chances of “recovery” in Dr. Crawley’s study. I hope the advertising regulators in the U.K. take a look at this leaflet and assess whether it accurately represents the facts.


As long as we’re talking about the Dutch members of the CBT/GET ideological movement, let’s also look briefly at another piece of flawed research from that group. Like the PACE authors and Dr. Crawley, these investigators have found ways to mix up those with chronic fatigue and those with chronic fatigue syndrome. A case in point is a 2001 study that has been cited in systematic reviews as evidence for the effectiveness of CBT in this patient population. (Dr. Bleijenberg, a co-investigator on the FITNET-NHS trial, was also a co-author of this study.)

In this 2001 study, published in The Lancet (of course!), the Dutch researchers described their case definition for identifying participants like this: “Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS, with the exception of the criterion requiring four of eight additional symptoms to be present.”

This statement is incoherent. (Why do I need to keep using words like “incoherent” and “preposterous” when describing this body of research?) The CDC definition has two main components: 1) six months of unexplained fatigue, and 2) four of eight other symptoms. If you abandon the second component, you can no longer refer to this as meeting the CDC definition. All you’re left with is the requirement that participants have suffered from six months of fatigue.
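The logical point can be stated almost mechanically. Below is a deliberately simplified sketch (the function names and inputs are hypothetical, not drawn from any actual screening instrument) showing that once the additional-symptom requirement is dropped, the "CDC" check collapses into a bare six-months-of-fatigue check:

```python
def meets_cdc_criteria(months_fatigued: int, num_additional_symptoms: int) -> bool:
    """CDC-style definition: >=6 months of unexplained fatigue AND >=4 of 8
    additional symptoms. (Simplified: ignores exclusionary diagnoses.)"""
    return months_fatigued >= 6 and num_additional_symptoms >= 4

def meets_oxford_style_criteria(months_fatigued: int, num_additional_symptoms: int) -> bool:
    """'CDC minus the symptom requirement' — the second argument no longer
    matters, so this is just six months of fatigue, i.e. Oxford-style."""
    return months_fatigued >= 6

# A hypothetical patient with 6 months of fatigue but only 1 additional symptom:
print(meets_cdc_criteria(6, 1))           # excluded under the full definition
print(meets_oxford_style_criteria(6, 1))  # swept into the "CFS" sample anyway
```

Every patient who satisfies the stricter check also satisfies the looser one, but not vice versa: the loosened criteria necessarily recruit a broader, more heterogeneous sample.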

And that, of course, is the case definition known as the Oxford criteria, developed by PACE investigator Michael Sharpe in the 1990s. And as last year’s seminal report from the U.S. National Institutes of Health suggested, this case definition is so broad that it scoops up many people with fatiguing illnesses who do not have the disease known as ME/CFS. According to the NIH report, the Oxford criteria can “impair progress and cause harm,” and should therefore be “retired” from use. The reason is that any results could not accurately be extrapolated to people with ME/CFS specifically. This is especially so for treatments, such as CBT and GET, that are likely to be effective for many people suffering from other fatiguing illnesses.

In short, to cite any findings from such studies as evidence for treatments for ME/CFS is unscientific and completely unjustified. The 2001 Dutch study might be an excellent look at the use of CBT for chronic fatigue*. But like FITNET-NHS, it is not a legitimate study of people with chronic fatigue syndrome, and the Dutch Health Council should acknowledge this fact in its current deliberations about the illness.

*In the original phrasing, I referred to the intervention mistakenly as ‘online CBT.’



The past year has been a disaster for proponents of the PACE trial. They have faced growing international resistance to their exaggerated claims that cognitive behavior therapy and graded exercise therapy are effective treatments for chronic fatigue syndrome, also known as ME/CFS. The recent court-ordered release of key trial data has confirmed what was long self-evident: The PACE authors weakened their outcome criteria mid-stream in ways that allowed them to report dramatically better results for "improvement" (in The Lancet in 2011) and "recovery" (in Psychological Medicine in 2013). By refusing to provide the findings per the original protocol methods, statistical analyses assessing the impact of the many mid-trial changes, or their actual trial data, they were able to hide their disastrous results for five years.

Yet the PACE authors and their allies continue, astonishingly, to defend the indefensible study, cite its findings approvingly, and push forward with ever more research into behavioral and cognitive interventions. The latest case in point: Esther Crawley, a British pediatrician and a highly controversial figure in the ME/CFS community because of her longtime promotion of the CBT/GET approach. On November 1st, the Science Media Centre in London held a press briefing to tout Dr. Crawley’s current venture—FITNET-NHS, a one-million-pound study of online CBT that is now recruiting and seeks to enroll more than 700 adolescents.

Dr. Crawley is a professor of child health at the University of Bristol. She is also currently recruiting for the MAGENTA study of graded exercise therapy for children with the illness. She is a lead player in the U.K. CFS/ME Research Collaborative, an umbrella organization that is sponsoring an ambitious Big Data effort called MEGA, now in the planning stages. While patients and advocates are desperate for the kind of top-notch biomedical and genetic research being proposed, many oppose MEGA precisely because of the involvement of Dr. Crawley and Peter White, the lead PACE investigator. (Dr. White is reportedly no longer involved in MEGA; Dr. Crawley still definitely is.)

The rationale for FITNET-NHS is that many ME/CFS patients live too far from specialists to obtain adequate care. Therefore, CBT delivered through an online portal, along with e-mail communication with a therapist, could potentially provide a convenient answer for those in such circumstances. The SMC press briefing generated widespread and enthusiastic news coverage. The BBC’s breathless online report about the “landmark” study noted that the online CBT “successfully treats two-thirds of children with chronic fatigue syndrome.” According to the BBC story, the intervention was designed “to change the way the children think of the disease.”

The BBC story and other news reports did not mention that the PACE trial–a foundational piece of evidence for the claim that changing people’s thoughts about the disease is the best way to treat it—has been publicly exposed as nonsense and is the subject of a roiling worldwide scientific debate. The stories also didn’t mention a more recent paper by the authors of the 2012 study from the Netherlands that was the source of the BBC’s claim of a “two-thirds” success rate.

The 2012 study, a Dutch version of FITNET, was published in The Lancet. (Why is The Lancet always involved?) In a subsequent paper published in Pediatrics in 2013, the Dutch team reported no differences among their trial participants at long-term follow-up. In other words, as with the PACE trial itself, any apparent advantages conferred by the investigators’ preferred treatment disappeared after the study was over. (More on the Dutch study below.)

The SMC, a purportedly neutral arbiter of science, actually functions as a cheerleader for research about cognitive and behavioral treatments for ME/CFS. Simon Wessely, a founder of the CBT/GET treatment paradigm and a close colleague of the PACE authors, is on the SMC’s board of trustees. The journalist who wrote the BBC story, James Gallagher, sits on the SMC’s advisory committee, so the reporting wasn’t exactly conducted at arm’s length. This reportorial conflict-of-interest was not disclosed in the BBC story itself.

(In fact, the Countess of Mar, a member of the House of Lords and a longtime advocate for ME/CFS patients, has filed a formal complaint with the BBC to protest its biased reporting on FITNET-NHS. In her complaint, she noted that “the BBC coverage was so hyperbolic and it afforded the FITNET trial so much publicity that it was clearly organised as a counter-punch to the anti-PACE evidence which is now gaining world-wide attention.”)

As a treatment for chronic fatigue syndrome, cognitive behavior therapy is grounded in an unproven hypothesis. According to the theory, the cause of patients’ continuing symptoms is a vicious downward spiral generated by false illness beliefs, a fear of engaging in activity, and progressive deconditioning. Whatever the initial viral or other illness that might have triggered the syndrome, patients are presumed to be currently free of any organic disease. Changing their beliefs through CBT, per the theory, will help encourage them to increase their levels of activity and resume their normal lives.

Here’s the rationale for the treatment from the PACE study itself: “CBT was done on the basis of the fear avoidance theory of chronic fatigue syndrome. This theory regards chronic fatigue syndrome as being reversible and that cognitive responses (fear of engaging in activity) and behavioural responses (avoidance of activity) are linked and interact with physiological processes to perpetuate fatigue. The aim of treatment was to change the behavioural and cognitive factors assumed to be responsible for perpetuation of the participant’s symptoms and disability. Therapeutic strategies guided participants to address unhelpful cognitions, including fears about symptoms or activity by testing them in behavioural experiments.”

The goal of this specific form of CBT, therefore, is to reverse the “reversible” illness by helping patients abandon their “unhelpful” beliefs of having a medical disease. This is definitely not the goal of CBT when it is used to help people cope with cancer, Parkinson’s, or other illnesses—no one claims those diseases are “reversible.” That the PACE authors, Dr. Crawley, and their Dutch colleagues promote CBT as a curative treatment and not simply a management or adaptive strategy is clear from their insistence on using the word “recovery”—a term that has no well-defined or universally understood meaning when it comes to this illness but has a very clear meaning to the general public.

While PACE so far remains in the literature, the study has been rejected by dozens of leading clinicians and academics, in the U.S. and elsewhere. Last February, an open letter to The Lancet signed by 42 experts and posted on Virology Blog condemned its egregious flaws, noting that they “have no place in published research.” The study has even been presented as a case study of bad science in graduate epidemiology seminars and at major scientific gatherings.


Like the work of the PACE authors, Dr. Crawley’s research is fraught with misrepresentations and methodological problems. Like them, she routinely conflates the common symptom of chronic fatigue with the illness called chronic fatigue syndrome—a serious error with potentially harmful consequences. (I will mostly use chronic fatigue syndrome in describing the research because that is the term they use.)

Dr. Crawley favors subjective over objective outcomes. In PACE, of course, the objective measures–like a walking test, a step-test for fitness, and employment status—all failed to demonstrate “recovery” or reflect the reported improvements in the two primary subjective outcomes of physical function and fatigue. FITNET-NHS doesn’t even bother with such measures. The primary outcome is a self-report questionnaire assessing physical function, and almost all the secondary outcomes are also subjective.

This is particularly troubling because FITNET-NHS, like PACE, is non-blinded; that is, both participants and investigators know which intervention participants are receiving. Non-blinded studies with subjective outcomes are notoriously vulnerable to bias—even more so when the intervention itself involves telling participants that the treatment will make them better, as is the case with the kind of cognitive behavior therapy provided for ME/CFS patients.

The FITNET-NHS study protocol states that participants will be identified using the guidelines developed by NICE—the U.K.’s National Institute for Health and Care Excellence. The protocol describes the NICE guidelines as requiring three months of fatigue, plus one or more of nine additional symptoms: post-exertional malaise, difficulty sleeping, cognitive dysfunction, muscle and/or joint pain, headaches, painful lymph nodes, general malaise, dizziness and/or nausea, or palpitations. In other words, according to the protocol, post-exertional malaise is not required to participate in FITNET-NHS; it is clearly identified as an optional symptom. (In the U.K., the illness can be diagnosed at three months in children, rather than at six months.)

But the proposal’s claim to be following the NICE guidelines does not appear to be true. In the NICE guidelines, post-exertional malaise is not an optional symptom. It is required, as an essential element of the fatigue itself. (In addition, one or more of ten other symptoms must also be present.) To repeat: post-exertional malaise is required in the NICE guidelines, but is not required in the description of the NICE guidelines provided in the FITNET-NHS protocol.

By making this subtle but significant shift—a sleight-of-guideline, so to speak—Dr. Crawley and her colleagues have quietly transformed their prospective cohort from one in which post-exertional malaise is a cardinal characteristic of the illness to one in which it might or might not be present. And they have done this while still claiming–inaccurately–to follow NICE guidelines. As currently described, however, Dr. Crawley’s new study is NOT a study of chronic fatigue syndrome, as she maintains, but of chronic fatigue.

As a result, the actual study participants, like the PACE cohort, will likely be a heterogeneous grab bag of kids suffering from fatigue for any number of reasons, including depression–a common cause of exhaustion and a condition that often responds to psychotherapeutic interventions like CBT. Some or even many participants—an unknown number—will likely be genuine ME/CFS patients. Yet the results will be applied to ALL adolescents identified as having that illness. Since those who actually have it suffer from the required symptom of post-exertional malaise, an intervention that encourages them to increase their activity levels, like CBT, could potentially cause harm.

(I suppose it’s possible the FITNET-NHS protocol’s inaccurate description of the role of post-exertional malaise in the NICE guidelines was inadvertent, a case of sloppiness. If so, it would be an extraordinary oversight, given the number of people involved in the study and the enormous implications of the switch. It is curious that this obvious and jarring discrepancy between the NICE guidelines and the FITNET-NHS description of them was not flagged during the review process, since it is easy to check whether the protocol language accurately reflects the recommendations.)

Yet Dr. Crawley is experienced at this blurring of categories–she did the same in a study she co-authored in the journal Pediatrics, in January of this year. In the study, “Chronic Fatigue Syndrome at Age 16 Years,” she and colleagues reported that almost one in fifty adolescents suffered from the illness—an extremely high rate that attracted widespread media attention. The main conclusion was described like this: “CFS affected 1.9% of 16-year-olds in a UK birth cohort and was positively associated with higher family adversity.”

However, the Pediatrics study is unreliable as a measure of “chronic fatigue syndrome.” It is of note that this paper, like the FITNET-NHS protocol, also appears to have inaccurately presented the NICE guidelines. According to the Pediatrics paper, NICE calls for a CFS diagnosis after three months of “persistent or recurrent fatigue that is not the result of ongoing exertion, not substantially alleviated by rest, has resulted in a substantial reduction of activities, and has no known cause.” But this description is incomplete–it omits the NICE requirement that the fatigue must include the specific characteristic of post-exertional malaise in order to render a diagnosis of chronic fatigue syndrome.

In the Pediatrics paper, the determination of illness was based not on clinical examination but on parental reports of children’s unexplained fatigue. In a previous study of 13-year-olds that relied on the same U.K. database, Dr. Crawley and her co-authors referred to the endpoint—appropriately–as “disabling chronic fatigue.” But in this study, they justified changing the endpoint to “chronic fatigue syndrome” by noting that they cross-referenced the parental reports with children’s self-reports of their own fatigue.

Here’s how they explained this shift in nomenclature: “In the earlier study, we were unable to confirm a diagnosis of CFS because we had only parental report of fatigue; hence, chronic disabling fatigue was defined as the study outcome. In the present study, parental and child report of fatigue were combined to identify adolescents with CFS.”

This reasoning is incoherent. A child’s confirmation of a parental report of fatigue cannot be taken to indicate the presence of chronic fatigue syndrome–especially without a clinical examination to rule out other possible conditions. Moreover, neither the parental nor child reports appear to have included information about post-exertional malaise, which is required for a diagnosis of chronic fatigue syndrome—even though the Pediatrics study did not mention this requirement in its description of the NICE guidelines. In fact, the authors provided no evidence or data to support their assumption that a double-report of fatigue equaled a case of chronic fatigue syndrome. (How’d that assumption ever pass peer review, anyway?)

Moreover, the study itself acknowledged that, when those found to be suffering from high levels of depression were removed, the prevalence of what the investigators called chronic fatigue syndrome was only 0.6%. And since depression is likely to be highly correlated with chronic fatigue as well as with family adversity, it is not surprising that the study found the apparent association between family adversity and chronic fatigue syndrome that the investigators highlighted in their conclusion. That misinterpretation of their data has likely lent support to the widespread but inaccurate belief that the illness is largely or even partly psychiatric in nature.
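The confounding argument here is easy to see with a toy simulation (all rates below are invented for illustration, not taken from the Pediatrics study): if depression independently raises the probability of both reported fatigue and family adversity, the data will show a fatigue/adversity association even though the simulation contains no direct link between the two.

```python
import random

random.seed(0)

# Toy simulation with made-up rates (NOT data from the study):
# depression raises the probability of BOTH reported fatigue and
# family adversity; fatigue and adversity have no direct connection.
n = 100_000
with_adversity = without_adversity = 0
with_adversity_fatigued = without_adversity_fatigued = 0

for _ in range(n):
    depressed = random.random() < 0.10
    fatigue = random.random() < (0.15 if depressed else 0.01)
    adversity = random.random() < (0.30 if depressed else 0.10)
    if adversity:
        with_adversity += 1
        with_adversity_fatigued += fatigue
    else:
        without_adversity += 1
        without_adversity_fatigued += fatigue

rate_with = with_adversity_fatigued / with_adversity
rate_without = without_adversity_fatigued / without_adversity
print(f"fatigue rate with family adversity:    {rate_with:.2%}")
print(f"fatigue rate without family adversity: {rate_without:.2%}")
# The fatigue rate comes out markedly higher in the adversity group,
# purely because depression drives both variables.
```

Remove the depressed participants from a simulation like this and the apparent association collapses, which is exactly the pattern the 0.6% figure suggests.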

In any event, the figure of 0.6% should have been identified as the prevalence of “chronic disabling fatigue, not attributable to high levels of depression.” Without any further clinical data, to identify either 1.9% or 0.6% as the prevalence of chronic fatigue syndrome was unwarranted and irresponsible. Although the authors cited the lack of clinical diagnosis as a limitation, this acknowledgement does not excuse their interpretive leap. To call this a study of chronic fatigue syndrome is really misleading–a serious over-interpretation of the data.

In subsequent correspondence, three professors of pediatrics—Marvin Medow and Julian Stewart from New York Medical College, and Peter Rowe from Johns Hopkins–scolded the study authors for identifying the participants as having chronic fatigue syndrome rather than chronic fatigue. They cited this misclassification as the likely source of the reported link between chronic fatigue syndrome and family adversity. In particular, they challenged diagnoses made without benefit of clinical evaluations.

“An important component of the diagnosis is a physician’s history and physical examination to exclude conditions that could explain the fatigue, including hypothyroidism, heart disease, cancer, liver failure, covert drug abuse, medication side effects, gastrointestinal/nutritional, infectious and psychiatric conditions,” they wrote. The Pediatrics paper, concluded the three pediatricians, “should be titled ‘Chronic Fatigue but not Chronic Fatigue Syndrome at Age 16 Years.’”

In response, the study authors agreed that clinical diagnoses would be more accurate. But they did not address the critical issue of why they decided that two reports of chronic fatigue could be used to identify chronic fatigue syndrome.


The conflation of chronic fatigue and chronic fatigue syndrome is a huge problem in ME/CFS research. That’s why a major report last year from the National Institutes of Health declared that the case definition used in PACE—which required only six months of unexplained fatigue but no other symptoms–could “impair progress and cause harm,” and should be “retired” from use. But Dr. Crawley and her colleagues do not seem to have gotten the message.

At the SMC press briefing presenting FITNET-NHS, one of the experts appearing with Dr. Crawley was Dr. Stephen Holgate, the leader of the CFS/ME Research Collaborative and a professor of immunopharmacology at the University of Southampton. According to the BBC report, he praised the new trial as “high-quality research.” This endorsement suggests that Dr. Holgate, like Dr. Crawley, does not appreciate the significance of the distinction between the symptom of chronic fatigue and the illness called chronic fatigue syndrome—a troubling blind spot. It also suggests that Dr. Holgate is unaware or unconcerned that the main support for the use of CBT in this illness, the PACE trial, has been discredited.

Also at the SMC briefing was Paul McCrone, a professor of health economics from King’s College London and a PACE co-author. Dr. McCrone is serving as the chair of FITNET-NHS’ independent steering committee–another unsettling sign. As I have documented on Virology Blog, Dr. McCrone made false claims as lead author in a 2012 PLoS One paper—and those false claims allowed the PACE authors to declare that CBT and GET were cost-effective. They have routinely cited this fraudulent finding in promoting the therapies.

Beyond the problem of conflating “chronic fatigue” and “chronic fatigue syndrome,” Dr. Crawley’s reliance on the Dutch trial suggests that this previous FITNET study warrants a closer look—especially since the BBC and other news outlets cited its robust claims of success in extolling the U.K. version.

The approach to CBT in the Dutch FITNET trial reflects that in the U.K. Of the online intervention’s 21 modules, according to the protocol for the Dutch study, fourteen “focus on cognitive behavioural strategies and include instructions and exercises on how to identify, challenge and change cognitive processes that contribute to CFS.” Of course, experts outside the CBT/GET/PACE bubble understand that ME/CFS is a physiological disease and that faulty “cognitive processes” have nothing to do with perpetuating or contributing to it.

The Dutch study found that those assigned to FITNET reported less fatigue, greater physical function, and greater school attendance than those in the comparison group, who received standard treatment–referred to as “usual care.” And using a composite definition of “recovery,” the study reported that 63% of those in the FITNET group–just shy of two-thirds–“recovered” at six months, compared to just eight percent in the comparison group. But this apparent success masks a much more complicated reality and cannot be taken at face value, for multiple reasons.

First, the subsequent 2013 paper from the Dutch team found no differences in “recovery” between participants in the two groups at long-term follow-up (on average, 2.7 years after starting). Those in the comparison group improved after the trial and had caught up to the intervention group, so the online CBT conferred no extended advantages or benefits. The researchers argued that the therapy was nonetheless useful because patients achieved gains more quickly. But they failed to consider another reasonable explanation for their results.

Those in usual care were attending in-person sessions at clinics or doctors’ offices. Depending on how often they went, how far they had to travel and how sick they were, the transportation demands could easily have triggered relapses and harmed their health. In contrast, those in the FITNET group could be treated at home. Perhaps they improved not from the treatment itself but from an unintended side effect–the sedentary nature of the intervention allowed them more time to rest. The investigators did not control for this aspect of the online CBT.

Second, the “recovery” figure in the Dutch FITNET study was a post-hoc calculation, as the authors acknowledged. The protocol for the trial included the outcomes to be measured, of course, but the authors did not identify before the trial what thresholds participants would need to meet to be considered “recovered.” The entire definition was constructed only after they saw the results—and the thresholds they selected were extremely lenient. Even two of the PACE authors, in a Lancet commentary praising the Dutch study, referred to the “recovery” criteria as “liberal” and “not stringent.” (In fact, only 36% “recovered” under a more modest definition of “recovery,” but the FITNET authors tucked this finding into an appendix and Dr. Crawley’s FITNET-NHS protocol didn’t mention it.)

Now, the fact that “recovery” was a post-hoc measure doesn’t mean it isn’t valid. But anyone citing this “recovery” rate should do so with caveats and some measure of caution. Dr. Crawley has exhibited no such reticence—in a recent radio interview, she declared flatly that the Dutch participants had made a “full recovery.” (In the same interview, she called PACE “a great, great study.” Then she completely misrepresented the results of the recent reanalyses of the PACE trial data. So, you know, take her words for what they’re worth.)

Given the hyperbole about “recovery,” the public is understandably likely to assume that Dr. Crawley’s new “landmark” study will result in similar success. A corollary of that assumption is that anyone who opposes the study’s approach, like so many in the patient and advocacy communities, could be accused of acting in ways that harm children by depriving them of needed treatment. This would be an unfair charge, since the online CBT being offered is based on the questionable premise that the children harbor untrue cognitions about their illness.

Third, the standard treatments received by the usual care group were described like this: “individual/group based rehabilitation programs, psychological support including CBT face-to-face, graded exercise therapy by a physiotherapist, etc.” In other words, pretty much the kinds of “evidence-based” strategies these Dutch experts and their U.K. colleagues had promoted for years as being effective for chronic fatigue syndrome. In the end, two-thirds of those in usual care received in-person CBT, and half received graded exercise therapy. (Many participants in this arm received more than one form of usual care.)

And yet less than one in ten of the usual care participants were found to have “recovered” at six months, according to the 2012 study. So what does that say about the effectiveness of these kinds of rehabilitative approaches in the first place? In light of the superlative findings for online CBT, why haven’t all chronic fatigue syndrome patients in the Netherlands now been removed from in-person treatments and offered this more convenient option? (Dr. Crawley’s FITNET-NHS proposal tried to explain away this embarrassing finding of the Dutch study by suggesting that those providing usual care were not trained to work with this kind of population.)

Finally, the Dutch study did not report any objective measures of physical performance. Although the study included assessments using an actometer—an ankle bracelet that monitors distance moved—the Lancet paper did not mention those results. In previous studies of cognitive and behavioral treatments for ME/CFS, reported improvements on subjective measures for fatigue or physical function were not accompanied by increases in physical movement, as measured by actometer. And in PACE, of course, the investigators dismissed their own objective measures as irrelevant or non-objective—after these outcomes failed to provide the desired results.

In response to correspondence calling for publication of the actometer data, the Dutch investigators refused, noting that “the goal of our treatment was reduction of fatigue and increase in school attendance, not increase in physical activity per se.” This is an inadequate explanation for the decision to withhold data that would shed light on whether participants actually improved in their physical performance as well as in their subjective impressions of their condition. If the actometer data demonstrated remarkable increases in activity levels in the online CBT group, is there any doubt they would have reported it?

In short, the Dutch FITNET study leaves a lot of questions unanswered. So does its U.K. version, the proposed FITNET-NHS. And Dr. Crawley’s recent media blitz—which included a “can’t-we-all-get-along” essay in The New Scientist—did little to quell any of the reasonable qualms observers might have about this latest effort to bolster the sagging fortunes of the CBT/GET/PACE paradigm.

“Patients are desperate for this trial, yet some people are still trying to stop us,” wrote Dr. Crawley in The New Scientist. “The fighting needs to end.”

However, those mysterious and sinister-sounding “some people” cited by Dr. Crawley have very thoughtful and legitimate reasons for questioning the quality of her research. The fighting, as she calls it, is likely to end when Dr. Crawley and her colleagues stop conflating chronic fatigue and chronic fatigue syndrome through the use of loose diagnostic criteria. And when they acknowledge what scientists in the U.S. and around the world now understand: The claim that cognitive and behavioral approaches are effective treatments that lead to “recovery” is based on deeply flawed research.

A Short Postscript:

Several Dutch colleagues have joined Dr. Crawley as part of the FITNET-NHS study. Two of them, Dr. Gijs Bleijenberg from the Radboud University Medical Centre in Nijmegen, and Dr. Hans Knoop from the University of Amsterdam, are among the leaders of the CBT/GET movement in the Netherlands and have collaborated with their U.K. counterparts. Not surprisingly, their work is similarly dodgy.

In a post last year, I dissected a 2011 commentary in The Lancet on the PACE trial, co-authored by Dr. Bleijenberg and Dr. Knoop, in which they argued that 30 percent of the participants in the CBT and GET groups had met “a strict criterion for recovery.” This statement was absurd, since these “strict” thresholds for “recovery” were in fact so lax that participants could get worse during the study and still meet them. Although the problematic nature of the thresholds has been pointed out to Dr. Bleijenberg and Dr. Knoop, they have stood by their nonsensical claim.

Earlier this year, the Dutch parliament asked the Health Council—an independent scientific advisory body—to review the state of evidence related to the illness, including the evidence on treatments like CBT and GET. The Health Council appointed a committee to conduct the review. Among the committee members are Dr. Knoop and colleagues who share his perspective. It remains unclear whether the committee is taking sufficient account of the methodological flaws underpinning the evidence for the CBT/GET paradigm and of the ongoing condemnations of the PACE trial from well-respected scientists. I plan to blog about this situation soon.

The multi-dimensional TWiV-brane brings you the entries in the haiku/limerick contest, and explains how a giant virus infects a host within another host (it has to do with predators!).

You can find TWiV #416 at, or listen below.

Download TWiV 416 (77 MB .mp3, 126 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

The popular history of HIV/AIDS describes a man known as Patient Zero, a sexually active flight attendant who traveled the globe and initiated the AIDS epidemic in North America. A new analysis of the viral genome recovered from his serum and that of other patients in the 1970s proves beyond a doubt that he was not Patient Zero (link to paper).

In a heroic effort, thousands of archived serum samples, originally collected in the 1970s from cohorts of men who have sex with men in New York and San Francisco, were examined for the presence of HIV by western blot analysis. A total of 83 samples were found to be HIV positive and subjected to deep sequencing, but the viral RNA was degraded and present only in short pieces. To overcome this problem, many DNA primers were used to amplify short RNA fragments by PCR in a procedure colorfully called ‘jackhammering’. The impressive result is that complete HIV-1 coding sequences were obtained from 8 samples: 3 from San Francisco and 5 from New York City.

Analysis of the HIV genome sequences, and comparison with earlier and later data revealed that the virus likely traveled from Africa to the Caribbean around 1967, and from there to New York City in 1971. These results disprove previous ideas that HIV arrived in the Caribbean from the US.

Sequence analysis also reveals that New York City was a hub of early diversification of HIV, and that the epidemic was already mature and genetically diverse by the late 1970s. There appears to have been a single introduction of HIV into San Francisco from New York City in 1976. From those two cities the virus spread elsewhere in the US and overseas.

It has been suggested that a sexually active flight attendant, identified as Gaetan Dugas by Randy Shilts in his book And the Band Played On, was the source of the North American AIDS epidemic. Although at least one study years ago concluded that he was not the first case, this belief persists. Sequencing of HIV from this patient’s serum revealed that he was certainly not the first person in North America infected with this line of HIV-1 (Group M, subtype B).

A historical reconstruction of the early days of AIDS in the US reveals how Dugas earned the label ‘Patient Zero’. CDC investigators who were studying a sexual network of 40 gay men placed one man at its center, whom they called ‘Patient O’, standing for ‘outside of California’ because he was Canadian (pictured; image credit). Upon publication of this work, the ‘O’ was misinterpreted as a zero and so began the belief that he was the origin of the AIDS outbreak in North America.

Jeremy Luban, Aaron Lin, and Ted Diehl join the TWiV team to discuss their work on identifying a single amino acid change in the Ebola virus glycoprotein from the West African outbreak that increases infectivity in human cells.

You can find TWiV #415 at, or listen below.

Download TWiV 415 (67 MB .mp3, 110 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

Endosymbionts, organisms that may live inside another cell, can be infected with viruses. An example is Wolbachia, which lives inside the cells of insects and nematodes, and is infected with Wolbachia phage WO. It’s always been a puzzle how viruses of endosymbionts get past the surrounding host cell to infect the endosymbiont within. A study of the giant chloroviruses, which infect endosymbiotic algae, provides some answers (link to article).

Chloroviruses are well known because they are big: at one time they had the largest known dsDNA genome, up to 370,000 bases in length (they have since been eclipsed by viruses with far larger genomes). These viruses infect green algae (called zoochlorellae), which are endosymbionts of a variety of fresh water hosts, including paramecium, hydra, and coral.

While inside a paramecium, zoochlorellae can’t be infected with chloroviruses – the viruses can attach to the cell surface but cannot enter the host. If the zoochlorellae are mechanically released from the paramecium, they are readily infected with chloroviruses.

Chloroviruses are certainly found in the same environments as zoochlorellae–which raises the question: how do they infect their hosts in nature when the algae are shielded inside paramecia? The answer is foraging.

Copepods, which are crustaceans found in nearly every freshwater habitat, include Eucyclops agilis (pictured; image credit) which eats paramecia. These protozoans pass through the Eucyclops digestive tract partially disrupted, exposing the zoochlorellae to the external environment. The zoochlorellae can then be infected with chloroviruses in the water.

Here we have a solution to an age-old question in virology, one that involves copepods not fully digesting their food. A line from the movie Matilda comes to mind: Chew your food!

Modeling predator-prey cycles can also explain the observation that the abundance of chloroviruses in lakes fluctuates throughout the year, peaking in late spring and late fall. Rising levels of prey (paramecia) are followed by rising numbers of predators (Eucyclops), exposing zoochlorellae and leading to a burst of virus production.

The traditional view of virus-host interactions involves random collisions between virus and cells that lead to infection. The observation that predation can expose cells to virus infection places a new variable into this process. Whether foraging can explain how viruses of other endosymbionts access their cell hosts remains to be seen. I can imagine, for example, that bacteriophage WO might reach its host cell after an insect containing the endosymbiont is eaten by a predator, releasing free Wolbachia.