Trial By Error: The School Absence Study, Revisited

By David Tuller, DrPH

This post is about a serious issue–ethical approval for research studies involving children. It is also about how powerful institutions, like leading medical journals, respond to concerns. But the story is long and complicated, so I recommend it only for those following things pretty closely, or who for whatever reason like this kind of granular, somewhat obsessive analysis.

Elsewhere, Tom Chivers at BuzzFeed has written a terrific piece on the cult-like Lightning Process, its founder Phil Parker, and Professor Esther Crawley’s SMILE trial; the story was posted over the weekend. Chivers captures the woo-woo-ness of the Lightning Process with lots of details about what it entails, from the perspective of patients who have undergone it.

The piece doesn’t delve into SMILE’s methodological flaws (outlined in this Virology Blog post). But Chivers includes key observations from Jonathan Edwards, Charles Shepherd, Peter Rowe and others. He also highlights Professor Crawley’s unsupported accusation that I have libeled her and Bristol University’s effort to pressure me by complaining to Berkeley. Definitely worth reading.

**********

In August, I posted a critical analysis of a 2011 study by Professor Esther Crawley and colleagues. The article, published in BMJ Open, was called “Unidentified Chronic Fatigue Syndrome/myalgic encephalomyelitis (CFS/ME) is a major cause of school absence: surveillance outcomes from school-based clinics.” In my post, I criticized BMJ Open both for how it handled the paper and for how its editors responded to the concerns subsequently raised about it.

Studies that involve human subjects and draw broad conclusions normally need approval from a research ethics review committee. Remarkably, Professor Crawley and her co-investigators decided that this school absence study was exempt from such a review. The paper included the creative claim that the study did not need one because it qualified as what was called service evaluation. In publishing the paper, BMJ Open implicitly endorsed that questionable argument.

Why am I reprising the issue now? After my post about the school absence paper, a journal presented a summary of the exact same situation to the Committee on Publication Ethics. The journal’s narrative was published as part of the agenda for the November meeting of what is called the COPE forum, a group of editors who debate publishing dilemmas.

These COPE submissions are anonymous; they do not include the names of journals, editors, or study authors. No one from BMJ Open or COPE has officially informed me that this submission to the forum involves Professor Crawley’s school absence paper. But the specifics coincide. Moreover, the statement to COPE is initialed AA, and BMJ Open’s editor is Adrian Aldcroft, who last year tried to deflect concerns raised about the paper by an observant reader.

Unfortunately, when read as an anonymized version of the events involving BMJ Open and Professor Crawley’s school absence study, the account presented to the COPE forum is misleading and inaccurate. Post-hoc rationalizations cannot undo what happened or obscure the journal’s lapses in editorial judgement.

In this post, I will recap the study itself. Then I will explain the official guidelines on the differences between research and service evaluation, and discuss how these guidelines relate to the study. Next I will look at BMJ Open’s earlier responses to the concerns. Finally, I will post the statement to COPE, and respond to it.

In a subsequent post, I will review the COPE forum’s answer to the questions posed by BMJ Open. The group’s response was reasonable, given the information or misinformation provided.

(For the sake of clarity and convenience, I will substitute the terms BMJ Open and Professor Crawley while discussing the journal and author mentioned in the submission to COPE. Given the parallels between the anonymous account and publicly documented events, the substitute terms would be accurate even if it were to turn out that a different journal was describing a different case altogether.)

**********

The Study:

Professor Crawley’s school absence study investigated what it called a pilot clinical service–a novel intervention to use school absence records as a way of identifying students suffering from undiagnosed and/or untreated CFS/ME. Working in coordination with the research team, three schools in southwest England reviewed their records and identified a total of 146 students who had experienced significant absences for unexplained reasons.

The schools sent letters to the families of these students, asking them to attend a private meeting with a school officer (in most cases this was the attendance officer) and a pediatrician, i.e. Professor Crawley. In these meetings, Professor Crawley personally evaluated the students, with those suspected of having CFS/ME referred to her Bath specialist CFS/ME service for further diagnosis.

Of the total who attended these meetings, 28 were ultimately found to have the illness. Let’s put that another way: For this pilot clinical service, the families of 118 students who were not ultimately diagnosed as having CFS/ME received potentially troubling letters inviting them to a meeting about a sensitive issue with a physician and a school officer.

The study included both a hypothesis (Many children with CFS/ME remain undiagnosed and untreated, despite evidence that treatment is effective in children) and a research question (Are school-based clinics a feasible way to identify children with CFS/ME and offer treatment?). The study compared students diagnosed with CFS/ME through the pilot intervention with those referred to the CFS/ME specialist clinic through health networks.

Among the study’s conclusions: Chronic fatigue is an important cause of unexplained absence from school and school-based settings have the potential to identify children with CFS/ME. BMJ Open published this study under the following slug: Research.

Under ethics approval, the paper included the following statement: The clinical service in this study was provided as an outreach from the Bath specialist CFS/ME service. The North Somerset & South Bristol Research Ethics Committee decided that the collection and analysis of data from children and young people seen by the CFS/ME specialist service were part of service evaluation and as such did not require ethical review by the NHS Research Ethics Committee or approval from the NHS R&D office (REC reference number 07/Q2006/48).

In other words, the study exempted itself from ethical review because the data were being collected as part of what was being called service evaluation. From the phrasing of this ethics approval statement, it was not clear whether the research ethics committee specifically reviewed the school absence study.

**********

Health Research Authority Guidelines

According to guidelines from the U.K.’s Health Research Authority, research studies involve the attempt to derive generalisable or transferable new knowledge to answer questions with scientifically sound methods including studies that aim to generate hypotheses as well as studies that aim to test them, in addition to simply descriptive studies. In contrast, service evaluation studies are designed and conducted solely to define or judge current care and involve minimal additional risk, burden or intrusion for participants. Moreover, service evaluation involves an intervention in use only.

(These official definitions of what constitutes research and service evaluation for the purpose of obtaining ethical approval have remained essentially unchanged for years.)

With the school absence study, the facts are clear. The study was not about current care being provided to patients already attending clinics, so it could not reasonably be categorized as service evaluation. Moreover, the intervention–sending letters to families of children who were not patients but had high rates of school absence–was presented as a pilot clinical service, so it could not reasonably be called an intervention in use, as required for service evaluation.

The paper also included a hypothesis and generalizable conclusions, hallmarks of what the Health Research Authority considers to be research. The study did include major caveats about the conclusions, declaring that results may not be generalisable to regions without a CFS/ME service or to regions with different socioeconomic factors that impact on school attendance. But many studies include limitations on the generalizability of their findings. The salient point is that the findings were not presented as restricted to students from these three specific schools.

The University of Bristol itself promoted the study as research that produced generalizable conclusions. Here’s the opening of the university’s press release: New research into the cause of school absence finds that up to one per cent of secondary pupils could be suffering from chronic fatigue…

The Bristol press release included quotations from Professor Crawley, in which she made similarly broad claims: These findings reveal the scale of how many children are affected by disabling chronic fatigue that prevents them attending school, and how few are diagnosed and offered help… Our findings suggest that school-based surveillance for fatigue could be of potential benefit.

Following the lead of Professor Crawley and Bristol University, media outlets touted these conclusions as newsworthy. Here’s how The Guardian characterized the study: Far more children than previously thought miss a lot of schooling because of chronic fatigue syndrome which has not been diagnosed, according to research.

By any reasonable measure, this school absence study cannot be considered a service evaluation. No journal should have accepted the argument that it was exempt from ethical review on those grounds.

In fact, such a claim should have raised questions that could have been clarified by examining the 2007 research ethics committee letter referenced in the paper’s ethics approval statement. Professor Crawley presumably could have made this documentation available to BMJ Open, had anyone requested it.

The 2007 research ethics committee letter did not support the investigators’ assertion that no ethical review was required. The letter involved a different set of circumstances and had nothing to do with the pilot intervention reported in the school absence study.

The 2007 letter was written in response to an application seeking to expand data collection among children attending the Bath CFS/ME specialist clinic, run by Professor Crawley. Patients at the clinic already filled out self-reported questionnaires for assessment at entry and at 12 months. The application sought approval to add additional assessments for children at six weeks and six months, which were expected to take no more than an additional 20 minutes to complete each time. The questionnaires included measures for school attendance, which was listed as the primary outcome.

The 2007 application included a question about how patients for this expanded questionnaire regimen would be identified, approached, and recruited. In response, the investigators stated that there will be no change in the way potential participants are identified. In other words, the application did not anticipate, mention or seek approval for the new actions involved in implementing and evaluating the pilot intervention.

After reviewing the application, the North Somerset & South Bristol Research Ethics Committee sent a letter dated May 1, 2007. Here’s the operative phrasing: Members [of the REC] considered this project to be service evaluation. Therefore it does not require ethical review by a NHS Research Ethics Committee or approval from the NHS R&D office.

The approval letter referred specifically to this project as service evaluation; that is, the approval was for the extra questionnaires for children who were current patients of the CFS/ME specialist service. Professor Crawley and her co-authors therefore appear to have over-interpreted the letter’s meaning when they wrote, in their paper, that the research ethics committee decided that the collection and analysis of data from children and young people seen by the CFS/ME specialist service were part of service evaluation and as such did not require ethical review.

The research ethics committee letter did not make such a blanket decision. The letter did not grant Professor Crawley and her co-investigators the right or permission to apply the term service evaluation to any other data-collection projects undertaken by the specialist service, including extensive outreach to students not currently in clinical care. In short, the research ethics committee did not review the school absence study conducted by Professor Crawley or determine that it was service evaluation. The claim that the 2007 letter applies to the data collection for the pilot intervention cannot withstand scrutiny.

I did not discover this discrepancy. As is often the case, a perceptive observer alerted me to the issue. She had accessed the key documents through Freedom of Information requests and had attempted to seek clarity from BMJ Open. Dissatisfied with the journal’s response, she contacted me and provided copies of the 2007 research ethics committee letter and related documentation.

**********

BMJ Open’s Initial Response

When this perceptive observer first wrote to BMJ Open, editor Adrian Aldcroft acknowledged in his response that the article published in BMJ Open is not strictly a service evaluation. But Aldcroft argued that the article was nonetheless exempt from ethical review, citing a May 2017 statement of support for Professor Crawley’s research from the University of Bristol. This statement, like the research ethics committee’s 2007 letter, had nothing to do with the data collection for the school absence study.

Bristol’s May 2017 statement involved a separate issue. It was headlined University of Bristol statement about the CFS/ME National Outcome Database (NOD). This National Outcome Database included information collected from regular clinic patients. The pilot intervention was an outreach to non-patients and was not part of the National Outcome Database. Yet in his response to the observant reader who had noted the issue, BMJ Open editor Aldcroft cited the following sentence from Bristol’s CFS/ME National Outcome Database statement:

REC review is not required for the following types of research: Research limited to secondary use of information previously collected in the course of normal care (without an intention to use it for research at the time of collection), provided that the patients or service users are not identifiable to the research team in carrying out the research.

Aldcroft then declared in his e-mail: The data in the BMJ Open article meet these criteria.

Aldcroft’s declaration was false; the data in the BMJ Open article did not meet the criteria mentioned in Bristol’s statement. The pilot intervention described in the BMJ Open paper provided new data for analysis, so the study was not based on secondary use of information previously collected in the course of normal care. And as I’ve noted, the students in the school absence study were clearly identifiable, not anonymous, since Professor Crawley assessed them personally and carried out the research.

Last summer, I wrote to BMJ Open editor Aldcroft while preparing my Virology Blog post. I heard back not from him but from BMJ Open’s editor-in-chief, Dr. Trish Groves. She argued–unconvincingly, from my perspective–that the authors of the school absence study had satisfactorily answered the concerns and had followed the proper procedures in making their decisions about ethical review.

Dr. Groves did not address the fact that Aldcroft, in responding to the observant reader, had already acknowledged that the study was not strictly a service evaluation. She did not explain why Aldcroft had cited in defense a Bristol University statement that was unrelated to the school absence study. Nor did she explain why the journal had chosen to publish the paper under the heading of research if it considered it to be service evaluation.

Here are some of the questions I posed for Dr. Groves in that Virology Blog post, plus some comments:

“Is she [Dr. Groves] really comfortable that–as part of a study defined as service evaluation–more than one hundred families whose children did not have CFS/ME were nonetheless sent school letters on a sensitive issue and invited to meet with Professor Crawley? Does Dr. Groves really believe that testing out a new strategy to identify patients unknown to the clinical service qualifies as service evaluation for routine care? I doubt she actually does believe that, but who knows? Smart people can convince themselves to believe a lot of stupid things. In any event, in dismissing these concerns, BMJ Open has demonstrated that something is seriously amiss with its ethical compass.”

**********

The Anonymous Journal’s Statement to COPE

Service evaluation as research in a controversial area of medicine

We received an email from a reader relating to the ethics statement in a research article published in 2011. The article presented data collected at a clinic relating to a controversial area in medicine. The ethics statement in the article indicates that, in accordance with regional guidelines, the research ethics committee deemed that the study was a service evaluation and formal ethical review was not required.

Using the reference number cited in the article, the reader obtained the relevant documents from the research ethics committee via a freedom of information request. The reader argued that the documents from the ethics committee related to data that predated what was presented in the article. A review of the documents indicated that this appeared to be the case. In addition, the reader argued that service evaluations should not be presented as research articles as these are two separate things.

The editor of the journal wrote to the author of the article and asked for comment on the issues raised. The author replied that there had been regular contact with the ethics committee as the service period of the clinic was extended, and the ethics committee continued to indicate that the data were being collected as part of a service evaluation and further ethical review was not required. In addition, the data were collected anonymously, which would further exempt the study from requiring formal ethical approval. The ethics committee also provided the authors with a letter indicating that this letter …may be provided to a journal or other body as evidence that ethical approval is not required under [the regional] research governance arrangements.

The author indicated that similar requests had been made in the past and that, due to the controversial area of the work, many attempts were being made to retract articles that used the data from the clinic. Attempting to prevent further queries, the author asked the institutional head of research to post a public statement indicating that the work was conducted appropriately and met the highest ethical standards. As requested, the head of research issued a statement on the institutional website in support of the work.

The editor then responded to the reader indicating that the journal was satisfied with the author’s response and the support of the head of research. The reader was not satisfied with the editor’s response and forwarded the details of the case to a high profile blogger who writes extensively on this controversial area of medicine. The blogger then posted a blog criticising both the article and the journal’s handling of the case. The blog was shared widely on social media. From the journal’s perspective, the blog was inaccurate, misrepresentative and damaging to the publisher’s reputation.

Question(s) for the COPE Forum

• Should we allow data collected in service evaluations to be published as research articles? In medical journals, this is often seen as an acceptable exception; however, if research ethics committees are declaring a study “not research”, should journals do the same?

• Should the journal have posted a correction on the article to provide a more detailed ethics statement, bearing in mind that anything labelled a “correction” in a controversial area would be misinterpreted as an error in the research by the critics?

• How should journals respond to blog posts that they feel portray them unfairly and are damaging to the publisher’s reputation?

**********

My Response to the Journal’s Statement to COPE

(Again, the statement to the COPE forum is anonymous. For the sake of argument, I am using BMJ Open in lieu of journal and Professor Crawley in lieu of author. The circumstances fit, even if it were to turn out that this account was written by a different journal about a different paper altogether.)

In constructing this account for the COPE forum, BMJ Open has chosen to ignore the testimony of the school absence paper itself, the claims of Bristol University’s press release, and the documentation from my August blog post. The journal has relied instead on statements from the regional ethics committee and Professor Crawley’s university that fail to support its argument.

Let’s look at this account in more detail.

BMJ Open: Service evaluation as research in a controversial area of medicine…We received an email from a reader relating to the ethics statement in a research article published in 2011. The article presented data collected at a clinic relating to a controversial area in medicine. The ethics statement in the article indicates that, in accordance with regional guidelines, the research ethics committee deemed that the study was a service evaluation and formal ethical review was not required.

Using the reference number cited in the article, the reader obtained the relevant documents from the research ethics committee via a freedom of information request. The reader argued that the documents from the ethics committee related to data that predated what was presented in the article. A review of the documents indicated that this appeared to be the case. In addition, the reader argued that service evaluations should not be presented as research articles as these are two separate things.

RESPONSE: The initial description of the material at hand biases the discussion–both the headline itself and the second sentence’s statement that the article presented data collected at a clinic. The headline suggests that the matter is settled, that this was a service evaluation. But that is the issue being contested. And the phrasing in the second sentence implies that the data were collected during the normal functioning of a clinic, which would be consistent with service evaluation.

But the implication is incorrect. The people contacted were not identified through their attendance at clinical services, so they were not current patients receiving regular care. The study participants were identified through their school absence records. Their data were gathered while road-testing a novel method of engaging non-patients with clinical services. In this context, to refer to these records simply as data collected at a clinic is not only vague but misleading.

Moreover, most of those impacted by the study’s intervention and outreach were not ultimately diagnosed with the illness. Service evaluation is supposed to pose no more than a minimal additional risk, burden or intrusion on participants. The use of the word additional here is important. Professor Crawley’s school absence study was not imposing an additional risk on current patients; in sending potentially troubling letters to the families of high-absentee students, it was creating an entirely new risk, burden or intrusion for an entirely new set of people.

It is unclear how or even whether these families provided informed consent to participate in Professor Crawley’s innovative pilot program. The paper itself does not mention any consent obtained before or after these school letters were sent.

BMJ Open’s account mentions the questions about the paper’s ethical review status. But the account is silent on why Professor Crawley believed she could cite the 2007 letter, which involved unrelated circumstances, to exempt this pilot intervention from ethical review. The account is also silent on why experienced editors did not recognize that this paper was not a service evaluation, whatever the authors claimed, and that it therefore required an ethical review.

BMJ Open: The editor of the journal wrote to the author of the article and asked for comment on the issues raised. The author replied that there had been regular contact with the ethics committee as the service period of the clinic was extended, and the ethics committee continued to indicate that the data were being collected as part of a service evaluation and further ethical review was not required. In addition, the data were collected anonymously, which would further exempt the study from requiring formal ethical approval. The ethics committee also provided the authors with a letter indicating that ‘this letter …may be provided to a journal or other body as evidence that ethical approval is not required under [the regional] research governance arrangements.’

RESPONSE: Let’s dispense first with this statement: the data were collected anonymously. This is untrue, as any review of the school absence study would indicate. Professor Crawley gathered information in person–directly from the students and families who came to the school to meet with her. Is it possible that BMJ Open editors are defending this study without having re-read or even read what they published? Beyond that, are they really suggesting that anonymous collection of data alone qualifies a study for exemption from ethical review?

Next, regular contact between an author and an ethics committee cannot substitute for formal ethics approval. Absent any documentation, we have no idea what information was presented to the ethics committee during these informal communications, so BMJ Open’s third-hand accounting of the interactions between Professor Crawley and the committee is meaningless. The way Professor Crawley and the other investigators characterized their activities to ethics committee members would have shaped the members’ responses.

For example, were members of the ethics committee told simply that the data were collected during routine school clinics? Or were they told that this was a pilot program to identify previously undiagnosed students using their school absence records? Were they told that most of the families identified and impacted by this recruitment process turned out not to have children with CFS/ME after all?

Besides citing unverifiable exchanges, BMJ Open’s account includes as back-up evidence a snippet of a letter from the research ethics committee itself. This snippet is presented as if it were possibly from a post-2007 or updated ethics committee letter based on new information arising from regular contact with Professor Crawley. Instead, it actually appears to have been taken from the 2007 letter already cited in the paper as the source of the ethical review exemption.

That 2007 letter ended with this paragraph, from which the snippet could have been extracted: This letter should not be interpreted as giving a form of ethical approval to the project, but it may be provided to a journal or other body as evidence that ethical approval is not required under NHS research governance arrangements. In the context of the 2007 letter, the project refers solely to the collection of additional data from current patients at six weeks and six months. Since this does not describe the data collection involved in the pilot intervention, it is unclear why BMJ Open is citing the letter as evidence to support the case for service evaluation.

BMJ Open: The author indicated that similar requests had been made in the past and that, due to the controversial area of the work, many attempts were being made to retract articles that used the data from the clinic. Attempting to prevent further queries, the author asked the institutional head of research to post a public statement indicating that the work was conducted appropriately and met the highest ethical standards. As requested, the head of research issued a statement on the institutional website in support of the work.

RESPONSE: This line of argument also has nothing to do with the school absence study. To be used in service evaluation studies, data from the clinic–such as those referenced in this passage–would have to be data gathered from current patients receiving routine care. But data for the school absence study were gathered differently–in specially arranged meetings between Professor Crawley, a school official, and families whose children were identified through school absence records.

Instead of recognizing this distinction, BMJ Open is again presenting Bristol’s irrelevant CFS/ME National Outcome Database statement as a defense for the school absence study. As I have explained, in my August blog and earlier in this one, the data generated by the pilot intervention were not from this national database, so Bristol’s statement has nothing to do with the matter at hand. BMJ Open’s decision to reference it once more in the account to the COPE forum does not enhance the credibility or integrity of the journal’s position.

BMJ Open: The editor then responded to the reader indicating that the journal was satisfied with the author’s response and the support of the head of research. The reader was not satisfied with the editor’s response and forwarded the details of the case to a high profile blogger who writes extensively on this controversial area of medicine. The blogger then posted a blog criticising both the article and the journal’s handling of the case. The blog was shared widely on social media. From the journal’s perspective, the blog was inaccurate, misrepresentative and damaging to the publisher’s reputation.

RESPONSE: BMJ Open apparently believes that medical journals should not be expected to make their own independent judgements, even when study authors, ethics panels or academic offices provide evidence or information that is irrelevant or untrue. Others would hold that a major journal like BMJ Open has not only a right but a responsibility to reject testimony that is clearly at odds with the facts, no matter the source.

Instead of taking that approach here, BMJ Open’s editors have expressed themselves satisfied with Professor Crawley’s deficient answers and with a Bristol statement about secondary use of data and non-identifiable participants that is not germane to the data collection for the pilot intervention described in the school absence study. Given the journal’s apparent inability to address this matter objectively, it is understandable that the observant reader was dissatisfied with the response and contacted the blogger, me.

I’m glad that BMJ Open considers me high profile and thinks my critical post was shared widely. Moreover, it is possible that what I wrote was damaging to the publisher’s reputation. When editors and journals twist themselves upside down to avoid admitting mistakes, they should not be surprised if they suffer reputational damage.

But I don’t agree that my post was inaccurate and misrepresentative. No one–including BMJ Open, its editor-in-chief (Dr. Groves), or its editor (Aldcroft)–has let me know of inaccuracies that need to be corrected. In this case, the misrepresentations have been made by Professor Crawley, with her claim that this research qualified as service evaluation, and by BMJ Open, with its clumsy effort to air-brush the editorial history.

BMJ Open:
Question(s) for the COPE Forum

• Should we allow data collected in service evaluations to be published as research articles? In medical journals, this is often seen as an acceptable exception; however, if research ethics committees are declaring a study “not research”, should journals do the same?

• Should the journal have posted a correction on the article to provide a more detailed ethics statement, bearing in mind that anything labelled a “correction” in a controversial area would be misinterpreted as an error in the research by the critics?

• How should journals respond to blog posts that they feel portray them unfairly and are damaging to the publisher’s reputation?

RESPONSE:
BMJ Open’s questions to the COPE forum rest on the fiction that Professor Crawley’s article described a service evaluation. From this perspective, the journal’s main goals would be to mitigate the embarrassment of having published the paper as research and to deal with pesky bloggers who have misinterpreted these actions and written bad things.

I don’t accept the premise of BMJ Open’s questions. So I’ll ignore them and pose my own questions:

1) Why did the author exempt the school absence study from ethical review as a service evaluation for current care, even though the study tested a new intervention to engage students unknown to the clinical service?

2) Why did the journal accept the claim that investigating a pilot intervention to identify new patients could be exempt from ethical review as a service evaluation?

3) Are journals supposed to accept at face value the claims of authors, ethics review boards and university committees, even if such claims are irrelevant, unconvincing or based on a tenuous relationship to the facts? Or are journals supposed to exercise independent judgement in making their decisions?
