Origin of viruses

Plasmids have been discovered that can move from cell to cell within membrane vesicles in a species of Archaea (link to paper). They provide clues about the origin of virus particles.

Electron microscope analysis of the culture medium from Halorubrum lacusprofundi R1S1, an Archaeal strain from Antarctica, revealed spherical particles that were subsequently shown to contain a 50,000 base pair circular double-stranded DNA molecule. When added to H. lacusprofundi cultures, the purified membrane vesicles entered the cells and the DNA replicated.

Nucleotide sequence analysis of the plasmid within the membrane vesicles revealed 48 potential protein coding regions and an origin of DNA replication. None of these proteins showed any similarity to viral structural proteins, leading the authors to conclude that these particles are not viruses.

Many of the proteins encoded in the plasmid DNA were found in the membrane vesicles. Some of these are similar to cell proteins known to be involved in the generation of membrane vesicles. However, no DNA polymerase-like proteins are encoded in the plasmid. These data suggest that the plasmid encodes proteins that generate, from cell membranes, the vesicles needed for its transport to other cells, while replication of the plasmid is carried out by cellular DNA polymerases.

It is likely that the plasmid-containing membrane vesicles are precursors of what we know today as virus particles. It is thought that viruses originated from selfish genetic elements such as plasmids and transposons when these nucleic acids acquired structural proteins (pictured; image credit). Phylogenetic analyses of the structural proteins of many enveloped and naked viruses reveal that they likely originated from cell proteins on multiple occasions (link to paper).

The membrane-encased Archaeal plasmid seems well on its way to becoming a virus, pending acquisition of viral structural proteins. Such an early precursor of virus particles has never been seen before, emphasizing that science should not be conducted only under the streetlight.

By David Tuller, DrPH

This morning I sent the following freedom of information request to Bristol University. My friend and colleague Steven Lubet, a professor at Northwestern Pritzker School of Law, joined me in making this request. Professor Lubet is an expert on legal ethics, among many other fields, and in July he guest-blogged here about the purported “ethics” lecture given at Oxford by Professor Michael Sharpe.

Our freedom of information request to Bristol involves Professor Esther Crawley’s 2011 school absence study, which I blogged about on Monday. In this study, schools identified students with unexplained absences and invited them and their families to meet with Professor Crawley to discuss the matter. The study authors did not seek ethical review for this study on the grounds that it only involved “service evaluation,” even though it was piloting a new method of identifying previously undiagnosed patients for Professor Crawley’s CFS/ME clinical service.

Under the circumstances, we were interested in reviewing the letters sent to the families, as well as any other information they were provided about the study. We did not send the request directly to Professor Crawley but to the Bristol University legal representative who handled my complaint about Professor Crawley’s false libel accusation earlier this year. We also sent it to the university’s freedom of information office.

Here’s what Professor Lubet and I wrote:

In 2011, Professor Esther Crawley and two colleagues, all from Bristol University, published a study in BMJ Open titled “Unidentified Chronic Fatigue Syndrome/myalgic encephalomyelitis (CFS/ME) is a major cause of school absence: surveillance outcomes from school-based clinics.”
 
As part of the study, three different schools sent letters to the families of children with patterns of chronic absence. Here is how this process was described in the BMJ Open paper: “Families…were sent a letter from the school that invited them to meet with a paediatrician from the Bath specialist CFS/ME team (EMC) and a member of school staff to discuss why their child was missing school.” EMC is Professor Crawley.
 
Under the UK’s Freedom of Information law, we are requesting a copy of the letter sent to these families—or copies of the letters, if the different schools used different versions.  Of course, the copy or copies of the letter or letters shared under this FOI request should be fully anonymized. Given the provenance of the study, we presume that Bristol holds copies of the letters in its files. 

We are also requesting copies of the information about the 2011 school absence study that was provided to the families and students contacted through these letters. This information might have included printed or online leaflets, for example, or other material.

Finally, we are requesting a copy of the consent form or forms these families and students might have been asked to sign as participants in the 2011 school absence study. Of course, any such forms should be fully anonymized.
 
Thank you for your quick attention to this matter.

By David Tuller, DrPH

This is a complicated post. Here are the key points. The rest is details:

*Professor Esther Crawley and co-authors claimed a 2011 study in BMJ Open was exempt from ethical review because it involved the routine collection of data for “service evaluation.” Yet the 2011 study was not an evaluation of routine clinical service provision–it was designed to road-test a new methodology to identify undiagnosed CFS/ME patients among students with records of chronic absence.

*To support the claim that the study was exempt from ethical review, Professor Crawley and co-authors cited a 2007 research ethics committee opinion that had nothing to do with the data-collection activities described in the 2011 paper.

*For the 2011 study, school letters were sent to the families of 146 students, inviting them to meet with Professor Crawley. In the end, only 28 were identified as having CFS/ME–meaning more than 100 families of students without CFS/ME received potentially disconcerting letters inviting them to a medical meeting about a sensitive issue. This type of pilot program is beyond the scope of what many would consider to be service evaluation.

*A pre-publication reviewer, noting the data collection activities described in the paper, raised serious questions about the lack of ethical review. In her response, Professor Crawley did not provide satisfactory answers to the concerns raised by the reviewer, but BMJ Open published the paper anyway, without ethical review.

*BMJ Open’s recent response to the concerns has been confused, contradictory and inadequate. In separate e-mails, the editor and editor-in-chief have provided two distinct and incompatible justifications for the decision to publish without ethical review. Neither explanation is convincing.

**********

For a 2011 study, Professor Esther Crawley of Bristol University hypothesized that many children remained undiagnosed for chronic fatigue syndrome/myalgic encephalomyelitis. So she decided to investigate whether school absence records could yield further cases.

Working with three schools, she designed a pilot program that targeted students with a history of being absent for unexplained reasons 20 percent or more of the time. Students and their families were invited to meet and discuss this pattern of school absence with Professor Crawley, a pediatrician, along with a school staff member. Some of these students were subsequently evaluated and treated at Professor Crawley’s Bath clinical service for young people with CFS/ME. (I am using CFS/ME here because that is the term Professor Crawley uses.)

The study, published in the journal BMJ Open, concluded that a program involving this kind of school outreach based on absence records could identify previously undiagnosed children, who could then benefit from treatment. But there was a troubling twist: Professor Crawley and her co-authors did not seek ethical review from a U.K. National Health Service Research Ethics Committee, as would normally be expected for studies involving human subjects. Instead, the paper included the creative claim that the study was exempt from such ethical review because it qualified as “service evaluation.”

[Key information in this post comes from documentation provided by an independent researcher, who obtained it via a freedom-of-information request to the NHS Health Research Authority and then corresponded with BMJ Open about the issue. To be fully transparent, Professor Crawley is not a fan of my work and has publicly accused me of writing “libelous blogs.” However, she has repeatedly failed to respond to requests that she present documentation or evidence that anything I have written about her work is false or inaccurate.]

According to HRA guidelines, service evaluation studies are “designed and conducted solely to define or judge current care” and “involve minimal additional risk, burden or intrusion for participants.” For these studies, investigators are not required to seek the kind of ethical review from an REC that is mandated for what is deemed “research”—that is, studies that are potentially more risky, burdensome or intrusive for participants and raise more possible ethical concerns.

A pre-publication review of the 2011 paper highlighted the lack of ethical review as a major concern. [BMJ Open has an open review process, so reviews and author responses are posted with the published article.] In his comments, the reviewer questioned how the actions taken to identify new patients, as described in the paper, could be considered service evaluation rather than research. In her response to this pre-publication review, Professor Crawley did not provide direct and satisfactory answers to the reviewer’s concerns, yet BMJ Open editors apparently took no further steps to address the issue.

The independent researcher who obtained the documentation wrote to BMJ Open about the issue earlier this year. After reviewing the matter, the journal’s editor acknowledged the obvious–that Professor Crawley’s 2011 study “is not strictly a service evaluation”–but maintained that it was nonetheless exempt from ethical review for other reasons. This retroactive claim for exemption, however, was itself based on false information.

(I know, I know–it’s confusing. Sorry!)

The central question here is whether the 2011 BMJ Open study should have been defined as “research,” which would have required REC ethical review, or “service evaluation,” which would not. BMJ Open itself appeared to have expressed its view by publishing the study under a prominent heading slugged “Research.” The study included a hypothesis to be tested as well as a specific “research question”–markers of what is typically defined as research and not as service evaluation.

Moreover, the paper clearly presented itself not as investigating “current care,” per the HRA definition of service evaluation, but as piloting a new strategy or intervention to identify previously undiagnosed patients. The main outcome measure was “the number of children newly diagnosed as having CFS/ME.” On the face of it, this does not sound like part of service evaluation involving care for patients already being seen.

To support the claim that the study did not need to undergo ethical review from an REC, the paper provided the following explanation: “The clinical service in this study was provided as an outreach from the Bath specialist CFS/ME service. The North Somerset & South Bristol Research Ethics Committee decided that the collection and analysis of data from children and young people seen by the CFS/ME specialist service were part of service evaluation and as such did not require ethical review by the NHS Research Ethics Committee or approval from the NHS R&D office (REC reference number 07/Q2006/48).” [The separate issue of obtaining approval from NHS R&D is not of concern here.]

An REC reference number cited in a paper would often identify an opinion specifically about the particular study and data-collection method at issue. In this case, the REC number involved a 2007 decision about a data collection procedure very different from the activities involved in the 2011 pilot program to conduct outreach through the monitoring of school absence records. [The independent researcher obtained the documentation about this 2007 decision under her freedom-of-information request.]

The 2007 REC decision involved an application seeking permission to expand the schedule of assessments of children referred to and receiving specialist care at Professor Crawley’s CFS/ME clinical service in Bath. The clinical service was at that time conducting assessments with several questionnaires at entry and at twelve months. The REC application proposed adding further assessments at six weeks and six months, arguing that this would be useful for service evaluation as well as improving the delivery of clinical care.

In response to a question about how patients for this expanded questionnaire regimen would be “identified,” “approached,” and “recruited,” the REC application declared that “there will be no change in the way potential participants are identified.” In other words, the application was explicitly not describing or seeking permission for identifying new patients or implementing new methods to recruit undiagnosed children. It was simply seeking permission to collect some more data on patients who were being referred to the CFS/ME clinical service through standard channels. The additional burden to patients and their families was considered minimal; the REC application estimated that filling out the two sets of extra questionnaires would take less than twenty minutes each time.

After reviewing the application, the North Somerset & South Bristol REC sent a letter dated May 1, 2007. Here’s the operative phrasing: “Members [of the REC] considered this project to be service evaluation. Therefore it does not require ethical review by a NHS Research Ethics Committee or approval from the NHS R&D office.”

The letter referred to “this project”–i.e., the activities proposed in the application, specifically the expanded schedule of assessments that would take participants less than forty minutes to complete. The letter did not indicate that the same consideration or determination applied to other, as-yet-unspecified projects with as-yet-unspecified data-collection activities–such as efforts to monitor school absences and recruit new patients.

Yet Professor Crawley has cited this 2007 REC reference number to support the case that not just the 2011 study but several other studies were exempt from REC review as service evaluation rather than research. Some of the other studies that cite the REC reference number appear to genuinely qualify as service evaluation. But by the BMJ Open editor’s own admission, Professor Crawley’s 2011 study “is not strictly service evaluation,” a determination that contradicts what the paper states about itself.

In the 2011 paper, the authors appear to make excessive claims about the 2007 service evaluation exemption provided by the North Somerset & South Bristol REC. The 2007 REC letter did not make a blanket assertion that any data collection from any young people seen by the clinical service could be considered service evaluation. It made the narrower finding that expanding the current assessment schedule by adding several questionnaires at two points in time, as specifically outlined in the REC application, could be considered service evaluation.

Let’s compare that limited scope of activity approved as service evaluation to the data collection strategy pursued for Professor Crawley’s 2011 study. After school attendance officers identified children who met the study’s designated absence threshold, the families “were sent a letter from the school that invited them to meet with a paediatrician from the Bath specialist CFS/ME team (EMC) and a member of school staff to discuss why their child was missing school.” [EMC is Professor Crawley.] Some of these students were then assessed at the Bath clinical service and offered treatment, if indicated.

It should be noted that the study’s designated thresholds for school absences netted many more students than were ultimately diagnosed as having CFS/ME. Letters were sent to the families of 146 students who met the absence criteria. In the end, 28 of them were identified as having CFS/ME. Let’s put that another way: For this study, the families of 118 students who did not have CFS/ME were sent school letters calling them to a medical meeting about a sensitive issue. The families were sent these letters, which could possibly have caused anxiety and alarm, as part of what was purportedly service evaluation of care for young people already diagnosed with CFS/ME.

It is hard to understand how sending out such letters and recruiting new patients in this potentially intrusive manner could be considered part of service evaluation for “current care,” especially since this was a pilot project. It seems unusual that researchers would bypass ethical review for such an active patient recruitment effort–even more so given that their approach for identifying possible cases of CFS/ME was likely to impact an unknown number of students and families beyond the specific group of interest.

In fact, a reviewer invited by BMJ Open to comment on the draft of the 2011 paper appeared perturbed at the lack of ethical review and approval. In his comments, he expressed surprise that the REC would have considered the outreach aspects of the study to be service evaluation exempt from ethical approval, rather than research requiring it.

Here’s what he wrote: “It is understandable that the REC might see use of routine data from the existing clinical service as not being research…but it is surprising that they did not see the surveillance component as research. Children who are unknown to services were being contacted using information from their schools and it seems to me that there are significant issues of confidentiality and data protection which, in my experience as a researcher and one time REC member, I am surprised the REC did not think amount to research. Assuming the REC was fully aware of these issues, and still made a decision that the work was not research, then it would be unfair to oppose publication on those grounds, but the authors should make a fuller explanation, and in the interest of openness might want to make their application to the REC and subsequent correspondence available with the publication.”

The reviewer’s statement clearly presumed that Professor Crawley and her colleagues had filed an application and corresponded with the appropriate REC about the specific set of activities involved in this school absence study. Given that presumption, he made a reasonable request–that they should publish the REC application and correspondence along with the paper. The reviewer also asked the authors to provide more information about what the families were told and how their consent to participate in the project was obtained.

A pre-publication review like that should raise red flags with editors–at least enough for them to ensure that the authors provide acceptable answers. That apparently did not happen in this case. In response to the reviewer’s comments, Professor Crawley did not cite any correspondence with the REC about the school absence study under discussion, perhaps because no such correspondence existed. Nor did she offer much detail on what information was provided to families and on how consent was obtained. And she did not mention that she was not relying on recent REC correspondence about this specific study, as the reviewer presumed, but on an REC opinion about a much narrower method for additional data collection from four years earlier.

Instead, Professor Crawley explained in her response that “the specialist service has been advised that ethical approval for routine collection and analysis of service data is not required.” She did not provide a legitimate explanation for why this pilot program qualified as “routine collection and analysis of service data” when it involved outreach to families whose children were not already enrolled in the clinical service, including many families whose children did not even have CFS/ME. Instead, she pointed out that she is a community pediatrician and that the children were seen in school clinics, although it is not immediately evident why these facts should have exempted the study from ethical review.

In her response, Professor Crawley also referenced top-level admiration for her work. “The project has been of great interest to the Department of Education who included it last year as an exemplar in their training for attendance officers in the UK,” she wrote. But whether or not education officials were impressed with Professor Crawley and her work was of course irrelevant to the question at hand, which was whether the study should have been considered research or service evaluation. It was not clear why Professor Crawley included this point, except perhaps to suggest that she had well-placed supporters.

According to Professor Crawley’s response, she sought further assurance that the data collection involved was indeed part of service evaluation. Here’s what she wrote: “We checked with the co-ordinator for the local REC that recording outcomes on school based clinics run by school nurses is part of service evaluation (and therefore does not require a submission to Ethics) and they have agreed that it is.”

This statement is confusing. The clinics described in this study cannot reasonably be defined as nurse-run school-based clinics. According to the paper, they were set up specifically so Professor Crawley could meet with the children and families identified through the pilot program she developed. Professor Crawley is not a school nurse. The paper’s description of the meetings with families indicated the presence of “a member of school staff,” not a school nurse. Professor Crawley’s own response to the pre-publication reviewer indicated that this staff member was “usually the attendance officer.”

In fact, the study itself stated that “it would be of interest to evaluate whether school nurses, rather than doctors, can undertake the initial assessments in school clinics.” In other words, the way Professor Crawley personally collected data for this pilot program had little or nothing to do with the provision of routine care in nurse-run school clinics.

In any event, Professor Crawley provided no documentation of this exchange with the unnamed “co-ordinator for the local REC.” Nor did she provide any details of what this unnamed local coordinator was told. Was the coordinator told simply that this was data collection made during routine nurse-run school clinics? Or was the coordinator told that this was a pilot program to identify previously undiagnosed patients using their school absence records?

Furthermore, was the coordinator told that families would be sent letters inviting them to meet with a community pediatrician at the school, in the likely presence of the school attendance officer but not a school nurse? Was the coordinator told that the students might then be recruited as patients into the clinical service run by the community pediatrician? Was the coordinator told that most of the families identified and impacted by this recruitment process turned out not to have children with CFS/ME? Absent further documentation of this apparently critical exchange, Professor Crawley’s second-hand reassurance that the unnamed local coordinator agreed with her interpretation of events is utterly meaningless.

The independent researcher who drew my attention to this case (and provided input for this post) had sought clarification from BMJ Open about the lack of ethical review. In response to her inquiry, she received an e-mail from editor Adrian Aldcroft in June. He wrote this about the study:

“While we appreciate the article published in BMJ Open is not strictly a service evaluation, we agree with the statement provided by The University of Bristol that further ethical approval would not have been required for the analysis due to the following exemption:

‘REC review is not required for the following types of research: Research limited to secondary use of information previously collected in the course of normal care (without an intention to use it for research at the time of collection), provided that the patients or service users are not identifiable to the research team in carrying out the research.’

The data in the BMJ Open article meet these criteria.

As such, we do not think any further action is necessary relating to the article.”

This response did not resolve the matter. First, to state that the 2011 article was “not strictly a service evaluation” was to validate the complaint. It was also a tacit acknowledgement that the paper’s own claim for exemption from ethical review was unjustified. Then, in a sort of sleight-of-exemption, Aldcroft cited a recent statement from Bristol University that studies were exempt if they involved “secondary use of information previously collected in the course of normal care.” And Aldcroft then stated flatly that the data in the article “meet these criteria.”

This last statement is startling because it is so obviously untrue. The 2011 paper did not involve “secondary use” of data “previously collected in the course of normal care.” The data were collected through implementation of a pilot program designed by Professor Crawley to answer the specific “research question” posed at the beginning of the article: “Are school-based clinics a feasible way to identify children with CFS/ME and offer treatment?”

Moreover, the Bristol University exemption cited by Aldcroft stated that the patients or service users must be “not identifiable” to the research team. In this case, Professor Crawley was the community pediatrician who met with the families; she was also the head of the research team. Under the circumstances, it is impossible to argue that patients in the study were “not identifiable” to the research team, per the exemption requirements. It is therefore hard to understand Aldcroft’s statement that the data in the 2011 article meet the necessary criteria for exemption from ethical review. It would have been easy for him to determine the correct answer by reviewing the paper itself.

Last Thursday, I wrote to Aldcroft to follow up. I also wrote to both the local REC and the HRA, and to Bristol University. (I did not write directly to Professor Crawley, since I gather she does not want to receive e-mails from me. Instead, I wrote the legal department representative to whom I complained earlier this year about Professor Crawley’s false accusation that I had written “libelous blogs.”)

I did not hear back from Bristol. A spokesperson for the HRA wrote that he could not answer my specific questions about this case, but he sent general information about RECs and service evaluation. I did receive an e-mail from Trish Groves, editor-in-chief at BMJ Open (she also has other titles), who wrote that Aldcroft had forwarded my e-mail to her.

Dr. Groves wrote that the 2011 study authors had addressed the concerns about ethical review raised by the peer reviewer. After citing the “ethical approval” statement included in the 2011 paper itself (and quoted above in this post), Dr. Groves wrote: “Given the guidance provided by the local REC, we consider that the authors were entitled to reach the conclusions that they did concerning the need for ethics approval.”

This reply is disingenuous and troubling–especially given that BMJ Open itself published the study under the heading “Research.” Dr. Groves has now provided a completely different account of the matter than Aldcroft, the journal’s editor, which indicates some incoherence or confusion in the journal’s position. Aldcroft acknowledged that the paper was “not strictly a service evaluation” and then provided a retroactive justification for why it was exempt from ethical review anyway. According to Dr. Groves’ version, all was done properly the first time around. But she did not then explain why Aldcroft, on behalf of BMJ Open, had presented an alternate point of view.

Here’s my question for Dr. Groves: Is she really comfortable that–as part of a study defined as service evaluation–more than one hundred families whose children did not have CFS/ME were nonetheless sent school letters on a sensitive issue and invited to meet with Professor Crawley? Does Dr. Groves really believe that testing out a new strategy to identify patients unknown to the clinical service qualifies as service evaluation for routine care? I doubt she actually does believe that, but who knows? Smart people can convince themselves to believe a lot of stupid things. In any event, in dismissing these concerns, BMJ Open has demonstrated that something is seriously amiss with its ethical compass.

Dr. Groves did leave the door open a tiny crack. She wrote: “BMJ Open is a member of the Committee on Publication Ethics (COPE) and follows its best practice guidance and policies. In light of the matters that have been raised, we will submit this case (anonymised, as always) to the COPE Forum.”

Leading scientific journals are members of COPE. Unfortunately, when it comes to rigorous and honest assessment of research from members of the GET/CBT ideological movement, including Professor Crawley, leading scientific journals have exhibited little in the way of “publication ethics.” So I am not optimistic that an examination by COPE will produce better results.

Brianne joins the TWiVMasters to explain how mutations in genes encoding RNA polymerase III predispose children to severe varicella, and detection of an RNA virus by a DNA sensor.

 

Click arrow to play
Download TWiV 456 (75 MB .mp3, 124 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

Show notes at microbe.tv/twiv.

Not long after their discovery, viruses that infect bacteria – bacteriophages – were considered as therapeutic agents for treating infections. Despite many years of research on so-called phage therapy, clinical trials have produced conflicting results. These might be explained in part by a new study showing that the host innate immune system is crucial for the efficacy of phage therapy.

When mice are infected intranasally with Pseudomonas aeruginosa (which causes pneumonia in patients with weak immune systems), the bacterium multiplies in the lungs and kills the animals in less than two days. When a P. aeruginosa lytic phage (i.e. that kills the bacteria) is instilled in the nose of the mice two hours after bacterial infection, all the mice survive and there are no detectable bacteria in the lungs. The phage can even be used prophylactically: it can prevent pneumonia when given up to four days before bacterial challenge.

The ability of phage to clear P. aeruginosa infection from the mouse lungs depends on the innate immune response. When bacteria infect a host, they are rapidly detected by pattern recognition receptors such as toll-like receptors. These receptors detect pathogen-specific molecular patterns and initiate a signaling cascade that leads to the production of cytokines, which may stop the infection. Phage cannot clear P. aeruginosa infection in mice lacking the myd88 gene, which is central to signaling by toll-like receptors. This result shows that the innate immune response is crucial for the ability of phages to clear bacterial infections. In contrast, neither T cells, B cells, nor innate lymphoid cells such as NK cells are needed for phage therapy to work.

The neutrophil is a cell of the immune system that is important in curtailing bacterial infections. Phage therapy does not work in mice depleted of neutrophils. This result suggests that humans with neutropenia, or low neutrophil counts, might not respond well to phage therapy.

A concern with phage therapy is that bacterial mutants resistant to infection might arise, leading to treatment failure. In silico modeling indicated that phage-resistant bacteria are eliminated by the innate immune response. In contrast, phage-resistant bacteria dominate the population in mice lacking the myd88 gene.
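The logic of that modeling result can be sketched in a few lines of code. The toy simulation below is my own illustration, not the authors' model: the parameter values are invented, and the innate immune response is reduced to a single per-capita clearance rate. It tracks phage-sensitive bacteria (B), phage-resistant mutants (R), and phage (P):

```python
def simulate(immune_strength, t_end=72.0, dt=0.01):
    """Toy model of phage therapy, integrated with simple Euler steps.

    B: phage-sensitive bacteria, R: phage-resistant mutants, P: phage.
    immune_strength is the per-capita clearance rate (1/h) imposed by
    the innate immune response. All parameters are illustrative only.
    """
    r, K = 0.75, 1e9           # bacterial growth rate (1/h), carrying capacity
    phi, beta = 1e-8, 50.0     # phage adsorption rate, burst size
    mu = 1e-6                  # mutation rate to phage resistance
    B, R, P = 1e6, 0.0, 1e7    # initial densities
    for _ in range(int(t_end / dt)):
        growth = r * (1 - (B + R) / K)          # logistic growth
        dB = B * (growth * (1 - mu) - phi * P - immune_strength)
        dR = R * (growth - immune_strength) + B * growth * mu
        dP = beta * phi * B * P - 0.1 * P       # phage bursts minus decay
        B = max(B + dB * dt, 0.0)
        R = max(R + dR * dt, 0.0)
        P = max(P + dP * dt, 0.0)
    return B, R

# Without innate immunity, resistant mutants escape the phage and regrow;
# with it, both sensitive and resistant bacteria are eliminated.
```

In this caricature, `simulate(0.0)` (no innate clearance, as in myd88-deficient mice) lets phage-resistant mutants regrow to carrying capacity even though the phage crushes the sensitive population, while `simulate(1.0)` (clearance faster than bacterial growth) eliminates both populations, mirroring the qualitative synergy the in silico modeling describes.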

These results demonstrate that in mice, successful phage therapy depends on both the phage and the innate immune response of the host, a combination the authors call ‘immunophage synergy’. Whether such synergy also occurs in humans is not known, but it should be studied. Even if observed in humans, immunophage synergy might not be a feature of infections in other anatomical locations, or of those caused by other bacteria. Nevertheless, should immunophage synergy occur in people, then clearly only those with appropriate host immunity – which needs to be defined – should be given phage therapy.

TWiV 455: Pork and genes

Erin Garcia joins the TWiVirions to discuss a computer exploit encoded in DNA, creation of pigs free of endogenous retroviruses, and mutations in the gene encoding an innate sensor of RNA in children with severe viral respiratory disease.

 

Download TWiV 455 (64 MB .mp3, 105 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

Show notes at microbe.tv/twiv

Purging the PERVs

There aren’t enough human organs to meet the needs for transplantation, so we have turned to pigs. Unfortunately, pig cells contain porcine endogenous retroviruses, PERVs, which could infect the transplant recipient, leading to tumor formation. But why worry? Just use CRISPR to purge the PERVs.

The genomes of many species on Earth are littered with endogenous retroviruses. These are DNA copies of retroviral genomes from previous infections that are integrated into germ line DNA and passed from parent to offspring. About 8% of the human genome consists of ERVs. The pig genome is no different – it contains PERVs (an acronym made to play with). The genome of an immortalized pig cell line called PK15 contains 62 PERVs. Human cells become infected with porcine retroviruses when they are co-cultured with PK15 cells.

The presence of PERVs is an obvious problem for using pig organs for transplantation into humans – a process called xenotransplantation. The retroviruses produced by pig cells might infect human cells, leading to problems such as immunosuppression and tumor formation. No PERV has ever been shown to be transmitted to a human, but the possibility remains, especially as increasing numbers of pig organs are transplanted into humans.

The development of CRISPR/Cas9 gene editing technology made it possible to remove PERVs from pigs, potentially easing fears about xenotransplantation. The technology was first used to remove all 62 copies of PERVs from the PK15 cell line. But having PERV-free pig cells doesn’t help humans in need of pig organs – for that you need pigs.

To make pigs without PERVs, CRISPR/Cas9 was used to remove the PERVs from primary (that is, not immortalized) pig cells in culture. Next, the nucleus of a PERV-less cell was used to replace the nucleus of a pig egg cell. After implantation of the resulting embryos into female pigs, these gave rise to piglets lacking PERVs.

In theory such PERV-less piglets can be used to supply organs for human transplantation, eliminating worry about infecting humans with pig retroviruses. But first we have to make sure that PERV-free pigs, and their organs, are healthy. The more we study ERVs, the more we learn that they supply important functions for the host. For example, syncytin, a protein needed to form the placenta, is encoded by a retroviral gene, and the regulatory sequences of some interferon response genes come from retroviruses. There are likely to be many more examples of essential functions provided by ERVs. It would not be a good idea to have transplanted pig organs fail because they lack an essential PERV!

By David Tuller, DrPH

On Friday, I had an e-mail exchange with Sir Andrew Dillon, chief executive of the NICE Guidance Executive. The other seven Guidance Executive members are various directors within the NICE hierarchy, including the communications director. This group will make the final decision about whether to accept the provisional decision of a NICE surveillance review team to leave unchanged CG53, the guidance for CFS/ME released in 2007. (I have written about the NICE review process on CG53 here, here and here.)

That ten-year-old CFS/ME guidance recommends treatment with graded exercise therapy and cognitive behavior therapy. NICE reaffirmed the guidance after the 2011 publication of the first PACE results, which were taken as evidence that these treatments were effective. As part of the current review process, NICE provided stakeholders with a two-week window last month to submit comments about the provisional decision not to change CG53. Not surprisingly, this recommendation has alarmed many patients and advocates.

I didn’t expect to get answers to my questions from the Guidance Executive, but I felt an obligation to pose them anyway, given the importance of the issues. In fact, Sir Andrew responded to my e-mail within an hour. He explained that no comments would be forthcoming while the Guidance Executive was reviewing the situation. I have posted his response below my initial e-mail.

**********

Sir Andrew Dillon
Chief Executive
National Institute for Health and Care Excellence
 
Dear Sir Andrew:
 
I am a journalist and public health researcher at the University of California, Berkeley. I have reported on the current review of CG53, the NICE guidance for CFS/ME, for the science site Virology Blog, which is hosted by Professor Vincent Racaniello, a microbiologist at Columbia University. I have previously reported for Virology Blog on the PACE trial and other issues related to graded exercise therapy and cognitive behavior therapy. Earlier this year, I co-authored a commentary about the serious problems with PACE for the Sunday opinion section of The New York Times. 
 
In my role as a journalist covering this issue, I have some questions for you and the other members of the NICE Guidance Executive about the decision-making process concerning the provisional recommendation to make no changes to CG53: 
 
1) For many years, the U.S. Centers for Disease Control recommended GET and CBT as treatments, citing PACE. In late June or early July, the agency removed all references to these therapies from its main pages on the illness. Does the Guidance Executive plan to consult with American public health officials about what prompted this major “dis-endorsement” of these two therapies that NICE continues to promote? 

2) In 2015, both the U.S. National Institutes of Health and the Institute of Medicine (now the National Academy of Medicine) released reports on the illness (they call it ME/CFS). These reports both concluded that it is a serious organic disease involving pathophysiological processes and not a psychological or psychiatric disorder—a determination that would have significant impact on treatment options. Does the Guidance Executive plan to consider these two reports and consult with any of the members of the panels that wrote them?
  
3) Other fields of medicine have abandoned the use of the trial design favored in this entire body of research, including PACE: open-label studies with subjective outcomes. That’s because other fields of medicine recognize that the combination of those two features in one study inherently produces bias. Does the Guidance Executive share these concerns about results from open-label studies with subjective outcomes, or does it believe that such studies can produce reliable and unbiased evidence suitable for clinical decision-making?  
 
4) In PACE and other studies from this field, objective measures have largely failed to support the subjective results that have generated claims of “recovery” or significant clinical improvement. Does this pattern of sharp contradiction between objective and subjective results raise questions for the Guidance Executive about whether patients are objectively getting better?
 
5) In the 2011 Lancet paper, 13% of the PACE participants had already met one of the study’s outcome thresholds at trial entry—that is, although assessed as “disabled” enough in physical function to qualify for the study, they were also found to be “within normal range” for physical function, before any treatment at all. In the 2013 Psychological Medicine paper, the same 13% were already “recovered” for physical function at baseline, before any treatment at all—that is, they were simultaneously “disabled” for physical function and “recovered” for physical function. These facts were not included in the published papers but emerged later through a patient’s freedom-of-information request. Does the Guidance Executive have confidence in the reported results of a study in which a significant minority of participants have already met a key outcome threshold at baseline? If so, can the Guidance Executive point to other studies in the clinical trial literature in which a significant number of participants have already met a key outcome threshold at baseline? Does the Guidance Executive believe that the published PACE papers should have mentioned the fact that a significant minority of participants had already met a key outcome threshold at baseline?
 
6) In February 2016, forty-two leading scientists and clinicians signed an open letter to The Lancet in which they outlined the methodological lapses of the PACE trial, stated unequivocally that “such flaws have no place in published research,” and demanded an independent investigation. In March 2017, more than 100 experts signed an open letter to Psychological Medicine, asking the journal to retract immediately its core finding that GET and CBT helped patients “recover.” Does the Guidance Executive plan to review these open letters and consult with any of the signatories (from Columbia, University College London, Harvard, Stanford, Berkeley, etc.) about their reasons for publicly dismissing the PACE findings as invalid?
 
7) Both GET and CBT, as described in PACE and other studies from this field of research, involve telling participants that the treatments can reverse the illness and return them to a state of health. Is the Guidance Executive concerned that telling study participants repeatedly about the effectiveness of the treatments could bias their responses, augmenting any bias already inherent in open-label studies with subjective outcomes? 
 
8) Some defenders of PACE note that CBT is also recommended for patients with cancer and other chronic diseases. But the approach advocated in PACE and related studies is not the kind of CBT focused on helping patients adapt to the reality of their illness. Rather, this form of CBT is specifically designed to rid patients of their purportedly “unhelpful” beliefs of having an ongoing medical disease that can be exacerbated by activity and exercise. Is the Guidance Executive aware of this critical distinction between CBT as normally administered in the case of other chronic illnesses and the adapted form of CBT investigated in PACE and other studies in this field?
 
9) The PACE trial used the Oxford criteria to identify participants. This case definition requires only six months of unexplained fatigue, so its use could result in the selection of participants with depression or other unidentified fatiguing illnesses. Some of these other illnesses might resolve spontaneously or respond to behavioral and psychological interventions like GET and CBT. In fact, the NIH report noted that using the broad Oxford case definition could “impair progress and cause harm,” and recommended that it be “retired.” Is the Guidance Executive concerned that populations derived using the Oxford criteria might contain many participants experiencing prolonged fatigue for a range of reasons unrelated to the illness being investigated? Is the Guidance Executive concerned that such heterogeneity in study samples could lead to erroneous findings about treatments?
 
10) The U.S. Agency for Healthcare Research and Quality found evidence to support GET and CBT for ME/CFS in its review of multiple studies. However, when the agency subsequently removed Oxford criteria studies from this analysis, it found no evidence that GET provided any benefits and almost no evidence that CBT provided benefits. Is the Guidance Executive considering this AHRQ re-analysis in its decision-making? Does the Guidance Executive plan to consult with officials at the agency to discuss why they conducted this re-analysis and how it subsequently led them to downgrade their assessments of the therapies? 
 
11) The surveillance review team cites Cochrane reviews of GET and CBT to support the recommendation to leave the 2007 guidance as is. Many of the trials included in these Cochrane reviews rely on a broad case definition like the Oxford criteria. Is the Guidance Executive comfortable relying on Cochrane reviews for confirmation of controversial findings when the reviews themselves include the studies that feature the methodological problems being questioned? Will the Guidance Executive consider asking Cochrane to follow the lead of American public health officials and conduct a re-analysis of its GET and CBT reviews with Oxford criteria studies removed from the sample? 
 
12) In the PACE trial protocol, the investigators promised to follow the Declaration of Helsinki, which requires researchers to tell prospective participants about “any possible conflicts of interest.” The three main PACE investigators have had longstanding relationships with insurance companies, advising them to offer GET and CBT to claimants diagnosed with the illness. Yet the investigators did not tell prospective PACE participants about these extensive consulting and financial links with insurance companies or include the information in consent forms. Is the Guidance Executive concerned that this clear violation of the investigators’ protocol promise to disclose “any possible conflicts of interest” to prospective participants means that they did not obtain properly “informed” consent? Does the Guidance Executive believe it should base clinical guidelines on studies that have not obtained properly “informed” consent? 
 
13) More than 15,000 people signed the ME Association’s online petition outlining their concerns with the 2007 guidance and their objection to the provisional decision to leave it unchanged. Is it unusual for that many people to sign a petition protesting a NICE guidance? 
 
14) Surveys of patients who have undergone GET have routinely found that more patients report harms from the intervention than benefits. In making its decision, does the Guidance Executive plan to consider these reports based on the clinical experiences of patients receiving GET in the real world? 
 
15) The conduct and findings of the PACE trial have become a worldwide controversy. The study has been presented as a paragon of bad science at conferences of epidemiologists and statisticians and in graduate-level seminars. Leading scientists and clinicians have publicly denounced the trial’s perplexing irregularities. The CDC has removed references to PACE and has dropped the associated treatment recommendations. In making its decision about the 2007 guidance, does the Guidance Executive plan to consider that large segments of the scientific and public health worlds have already rejected the evidence base for GET and CBT as interventions for CFS/ME, ME/CFS, or whatever the disease entity is called? Given the public health stakes involved, will the Guidance Executive consider commissioning a more extensive, authoritative, independent and unbiased review of the evidence–and perhaps even a review in which the reviewers read the actual studies on which they are basing their recommendations, and not just the study abstracts?   
 
I have other questions, but will leave it at that for now. I would be delighted should you and/or other members of the Guidance Executive choose to respond.
 
Kind regards–David
 
David Tuller, DrPH
Senior fellow in public health and journalism
Center for Global Public Health
School of Public Health
University of California, Berkeley

**********

Sir Andrew’s response to me:

Dear Dr Tuller,

Thank you for your enquiry.

It looks like you are aware that we have recently concluded a public consultation about our provisional decision on the review of this clinical guideline. We are in the process of reviewing the results of that consultation and will make our final decision in due course. We will make that decision public, together with any other statements we think will be helpful to contextualise it. Until then, we don’t intend to respond to enquiries about the provisional decision. It may be that our final decision, when placed in the public domain, will help you with some of your questions, but if not, we will endeavour to answer them as best we can at that time.

Yours sincerely,

Andrew Dillon
Chief Executive
National Institute for Health and Care Excellence
10 Spring Gardens | London | SW1A 2BU | United Kingdom

TWiV 454: FGCU, Zika

Sharon Isern and Scott Michael return to TWiV for a Zika virus update, including their work on viral evolution and spread, and whether pre-existing immunity to dengue virus enhances pathogenesis.

Download TWiV 454 (65 MB .mp3, 108 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

Show notes at microbe.tv/twiv

The short answer to the question posed in the title of this post is: we don’t know.

Why would we even consider that a prior dengue virus infection would increase the severity of a Zika virus infection? The first time you are infected with dengue virus, you are likely to have a mild disease involving fever and joint pain, from which you recover with immunity to the virus. However, there are four serotypes of dengue virus, and infection with one serotype does not provide protection against infection with the other three. If you are later infected with a different dengue virus serotype, you may experience more severe dengue disease involving hemorrhagic fever and shock syndrome.

The exacerbation of dengue virus disease has been documented in people. Upon infection with a second serotype, antibodies produced against the previously encountered dengue virus bind the new virus but cannot block infection. Dengue virus then enters and replicates in cells that it does not normally infect, such as macrophages. Entry occurs when Fc receptors on the cell surface bind antibody that is attached to virus particles (illustrated). The result is higher levels of virus replication and more severe disease. This phenomenon is called antibody-dependent enhancement, or ADE.

When Zika virus emerged in epidemic form, it was associated with microcephaly and Guillain-Barré syndrome, diseases not previously known to be caused by infection with this virus. Because Zika virus and dengue virus are closely related, ADE was known to occur with dengue virus, and the two viruses often co-circulate, it was proposed that antibodies to dengue virus might exacerbate Zika virus disease.

It has been clearly shown by several groups that antibodies to dengue virus can enhance Zika virus infection of cells in culture. Specifically, adding dengue virus antibodies to Zika virus allows it to infect cells that bear receptors for antibodies – called Fc receptors. Without Fc receptors, Zika virus plus dengue antibodies cannot infect these cells. ADE in cultured cells has been reported by a number of groups; the first report was discussed here when it appeared on bioRxiv.

The important question is whether antibodies to dengue virus enhance Zika virus disease in animals, and there the results are mixed. In one experiment, mice were injected with serum from people who had recovered from dengue virus infection, followed by challenge with Zika virus. These sera, which cause ADE of Zika virus in cultured cells, led to increased fever, viral loads, and death of mice.

These findings were not replicated in two independent studies conducted in rhesus macaques (paper one, paper two). In these experiments, the macaques were first infected with dengue virus, and shown to mount an antibody response to that virus. Over one year later the animals were infected with Zika virus (the long time interval was used because in humans dengue ADE is observed mainly with second infections 12 months or more after a primary infection). Both groups concluded that prior dengue virus immunity did not lead to more severe Zika virus disease.

Which animals are giving us the right answer, mice or monkeys? It should be noted that the mouse study utilized an immunodeficient strain lacking a key component of innate immunity. As the authors of paper one concluded, it is probably not a good idea to use immunodeficient mice to understand the pathogenesis of Zika virus infection in people.

When it comes to viral pathogenesis, we know that mice lie; but we also realize that monkeys exaggerate. Therefore we should be cautious in concluding from the studies on nonhuman primates that dengue virus antibodies do not enhance Zika virus pathogenesis.

The answer to the question of whether dengue antibodies cause Zika virus ADE will no doubt come from carefully designed epidemiological studies to determine if Zika virus pathogenesis differs depending on whether the host has been previously infected with dengue virus. Such studies have not yet been done*.

You might wonder about the significance of dengue virus antibodies enhancing infection of cells in culture with Zika virus. An answer is provided by the authors of paper one:

In vitro ADE assays using laboratory cell lines are notoriously promiscuous and demonstrate no correlation with disease risk. For example, DENV-immune sera will enhance even the homotypic serotype responsible for a past infection when the serum is diluted to sub-neutralizing concentrations.

The conundrum of whether ADE contributes to Zika virus pathogenesis is an example of putting the cart before the horse. For dengue virus, we obtained clear evidence of ADE in people before experiments were done in animals. For Zika virus, we do not have the epidemiological evidence in humans, and therefore interpreting the animal results is problematic.

*Update 8/12/17: A study has been published on Zika viremia and cytokine levels in patients previously infected with dengue virus. The authors find no evidence of ADE in patients with acute Zika virus infection who had previously been exposed to dengue virus. However, the study might not have been sufficiently powered to detect ADE.
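The “sufficiently powered” caveat is easy to quantify with a back-of-the-envelope calculation. The sketch below is purely illustrative: the severe-disease rates and group sizes are invented numbers, not values from the study, and the normal approximation is the simplest of several possible power formulas.

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions,
    with n participants per group (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se_null = (2 * p_bar * (1 - p_bar) / n) ** 0.5   # SE assuming no difference
    se_alt = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5  # SE under the true rates
    return 1 - nd.cdf((z_crit * se_null - abs(p1 - p2)) / se_alt)

# Hypothetical rates of severe disease: 10% in dengue-naive vs 30% in
# dengue-immune Zika patients. With 20 per group the test would usually
# miss the difference; about 100 per group are needed for ~95% power.
```

Under these invented numbers, a small cohort could easily report “no evidence of ADE” even if a real, moderately sized effect were present, which is exactly why the update above hedges on the study’s power.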