By David Tuller, DrPH
In teaching courses on covering public health and medical issues, I have often highlighted how university press releases about studies can read like efforts to obfuscate problematic findings rather than to provide an accurate account of the research. A recent press release from King’s College London, about a high-profile study published by Lancet Psychiatry, is an excellent example of this problematic genre.
I plan to write more about the study—“Cognitive behavioural therapy for adults with dissociative seizures (CODES): a pragmatic, multicentre, randomised controlled trial”—when I get a chance. This post focuses on some discrepancies between the KCL press release and the study itself.
On June 3, King’s College London posted a press release touting a major study of cognitive behavioural therapy for the treatment of what have long been called psychogenic non-epileptic seizures (PNES) but are now sometimes referred to as dissociative seizures (DS). The press release bore the following headline: “Cognitive behavioural therapy reduces the impact of dissociative seizures.”
And here’s the first paragraph: “Scientists have found that adding cognitive behavioural therapy (CBT) to standardised medical care gives patients with dissociative seizures longer periods of seizure freedom, less bothersome seizures and a greater quality of life, in a study published in Lancet Psychiatry and by the Cognitive behavioural therapy for adults with dissociative seizures (CODES) study group funded by National Institute for Health Research (NIHR).”
The study was the largest of its kind, with 368 participants randomized to receive either standardized medical care alone or standardized medical care combined with a form of CBT designed specifically for the disorder. The results were published by Lancet Psychiatry, a high-impact journal. The press release included comments from the investigators about the significance of the work, a common feature of such releases.
“We have delivered the first large-scale multi-centre and multi-professional trial investigating treatments for adults with dissociative seizures,” said Laura Goldstein, a professor of clinical neuropsychology at KCL’s Institute of Psychiatry, Psychology & Neuroscience. Trudie Chalder, a professor of CBT at the same KCL institute, declared that “we now have evidence for the effectiveness of dissociative seizure specific CBT combined with standardised medical care.”
But effectiveness for what, exactly? Professor Chalder’s statement did not elaborate. Readers of the press release could be forgiven for not realizing that the outcome the investigators selected as the primary endpoint, the average number of seizures per month at 12 months after randomization, had null results. In other words, a course of CBT that included specific seizure reduction techniques, and that Professor Chalder touted as an effective treatment, had no impact on seizure frequency. That null result was the study’s most important and significant finding.
Psychogenic non-epileptic seizures, or dissociative seizures, are part of the larger category called functional neurological disorders (FNDs), which themselves are part of the even larger category called medically unexplained symptoms (MUS). All of these conditions and illnesses share the characteristic of having no identified pathophysiological cause. Experts differ sharply on whether the absence of currently known or identified pathophysiological causes means that no such pathophysiological causes exist.
In the UK, CBT is a favored intervention for all kinds of conditions placed in the MUS domain, including irritable bowel syndrome and what investigators often call chronic fatigue syndrome—despite the paucity of quality evidence to support this approach. The findings of the PACE trial, which purported to be the definitive examination of the effectiveness of CBT for chronic fatigue syndrome, have been convincingly rebutted, despite continued bleating from the study’s dwindling number of defenders.
Earlier this year, I documented that a web-based CBT program for IBS produced statistically significant but clinically insignificant reductions in symptom severity—yet is being deceptively marketed as an effective treatment for reducing symptom severity. And articles on the Opposing MEGA website have highlighted questions about some of the data underlying the UK’s MUS project, which involves presuming those diagnosed with these complex disorders require psychological treatment rather than specialist medical care.
(Northwestern University law professor Steven Lubet and I have recently published a commentary about MUS that advocates humility in making declarative statements about conditions of unknown etiology. That piece can be accessed here; unfortunately, it is behind a paywall. Professor Lubet has blogged about our paper and included short excerpts here.)
In the pilot study that formed the basis for the CODES trial, the investigators made clear they believe the absence of evidence that pathophysiological processes cause dissociative seizures is proof that pathophysiological processes do not play a role in causing dissociative seizures. Here’s how they explained the situation in the pilot study, published by Neurology in 2010:
“Psychogenic nonepileptic seizures (PNES) are paroxysmal episodes of behavior resembling epileptic seizures but lacking organic etiology. Most clinicians agree that in most cases the episodes are involuntary, arising through unconscious psychological mechanisms.”
The statement that these paroxysmal episodes have no organic etiology but are instead triggered by emotional and affective distress through undefined mechanisms is a hypothesis stated as if it were a conclusion. Given the investigators’ adoption of the psychogenic framework, they understandably wanted to assess whether an approach that helped patients restructure or reformulate beliefs and ideations could reduce the number of seizures.
In the pilot study, the investigators explicitly rejected the idea that other primary endpoints might be more suitable than seizure reduction–presumably after careful consideration of other possibilities. Here’s what they wrote:
“Our CBT approach is predicated on the assumption that PNES represent dissociative responses to arousal, occurring when the person is faced with fearful or intolerable circumstances. Our treatment model emphasizes seizure reduction techniques especially in the early treatment sessions…While the usefulness of seizure remission as an outcome measure has been questioned, seizures are the reason for patients’ referral for treatment.”
The statistical analysis plan, citing several other papers, noted that “seizure frequency has been used as an outcome measure in other studies of psychological interventions for DS.” And it explained in detail how power calculations for the trial were based on assumptions related to that specific measure. The statistical analysis plan included no details about how the secondary outcomes would be analyzed—presumably because the investigators considered these outcomes to be of only secondary importance.
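The statistical analysis plan’s power calculations are not reproduced here. For readers unfamiliar with the idea, the sketch below shows how a generic two-arm sample-size calculation works under a normal approximation; the effect size and variance figures are hypothetical placeholders, not the numbers the CODES investigators actually used.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Approximate participants needed per arm to detect a mean
    difference `delta` on a continuous outcome with standard
    deviation `sigma`, using a two-sided test at level `alpha`."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value, ~1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # ~1.28 for 90% power
    n = 2 * ((z_a + z_b) ** 2) * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical example: detecting a half-standard-deviation difference
# (delta = 0.5, sigma = 1) at 90% power requires 85 per arm.
print(n_per_arm(delta=0.5, sigma=1.0))  # 85
```

The key point is that the whole calculation is driven by assumptions about the primary outcome; the trial is sized to answer that question, not the secondary ones.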
In the pilot study, with 66 participants, the primary outcome was seizure frequency, at the end of treatment and at a follow-up assessment six months after the end of treatment. In the CBT group, a reported benefit at the end of treatment was no longer statistically significant by six-month follow-up. At follow-up, results for one of several secondary outcomes “tended to” indicate benefit from the CBT. These modest results were apparently enough to secure funding from a major UK agency for a full-scale trial. Not surprisingly, the full-scale trial seems to have produced similarly unimpressive findings.
In CODES, the primary outcome was the average number of seizures per month at 12 months post-randomization. That number fell in both arms of the study–great!–but CBT provided no benefit, with no statistically significant differences between the two trial groups. Let’s say that again—the outcome that the investigators had spent at least a decade promoting as the critical measure of treatment success had null results. The therapy did not work as billed, leading to questions about the therapy’s theoretical underpinnings.
Of more than a dozen secondary outcomes measured, nine showed statistically significant improvement, although how clinically significant these changes were is another question. Moreover, the number of secondary outcomes showing statistically significant improvements dropped to five under the more conservative analytic strategy preferred by many experts in study design and statistics. (More on these outcomes in another post.)
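The paper does not spell out here what the “more conservative analytic strategy” was, but one common such strategy is correcting for multiple comparisons: when many secondary outcomes are tested at once, the significance threshold is tightened so that a few nominally “significant” results do not arise by chance. The sketch below illustrates the idea with a Bonferroni correction on entirely hypothetical p values (not the CODES data), chosen only to show how a set of nominal positives can shrink under correction.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return indices of results that remain significant after a
    Bonferroni correction: each raw p value must beat alpha / m,
    where m is the total number of tests performed."""
    m = len(p_values)
    return [i for i, p in enumerate(p_values) if p < alpha / m]

# Hypothetical p values for 12 secondary outcomes (illustrative only):
p_vals = [0.0005, 0.001, 0.002, 0.003, 0.004, 0.01,
          0.02, 0.03, 0.04, 0.2, 0.5, 0.9]

nominal = [p for p in p_vals if p < 0.05]
print(len(nominal))                       # 9 nominally significant
print(len(bonferroni_significant(p_vals)))  # 5 survive the 0.05/12 threshold
```

With twelve tests, each p value must beat 0.05 / 12 ≈ 0.004 rather than 0.05, which is why the count of significant outcomes falls.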
As written, the press release represented a valiant effort to obscure the bad news. The headline and opening paragraph did not mention the null results for the primary outcome. In a cumbersome sentence tucked into the bottom half of the release, these null results were referenced in passing, at which point the sentence immediately switched gears to focus on the purported “better functioning” demonstrated on other measures. That all of these other measures were secondary outcomes was not mentioned.
Here’s what the release stated:
“While overall there appeared to be a reduction in how often people in both groups of the study were having dissociative seizures at the end of the trial, with no clear difference between the groups, the group who had received our package of dissociative seizure-specific CBT were reporting better functioning across a range of everyday situations.”
Readers can judge for themselves whether anyone would understand from that sentence, and from the rest of the press release, that seizure reduction was the predesignated primary outcome, per both the trial protocol and the statistical analysis plan. It goes without saying that press releases about major studies should make clear whether the primary outcome of interest yielded positive or null results and whether reported benefits come from primary or secondary outcomes.
It also goes without saying that investigators should review press material about their work before it is distributed. If the primary and secondary results are not transparently communicated as primary and secondary results, investigators should request changes to ensure this sort of clarity. That does not appear to have happened in this case.