Trial By Error: My Letter to IBS Study’s Corresponding Author

By David Tuller, DrPH

I am slowly getting back to my efforts to highlight Mahana Therapeutics’ continuing misrepresentation of its new web-based cognitive behavior therapy program for irritable bowel syndrome. In January, the start-up company announced that it had licensed the program from King’s College London, based on a high-profile study published last year in Gut, a BMJ journal.

The deal seems to represent the triumph of hype over facts. Even a cursory reading of the study results reveals that the web-based program did not produce clinically significant benefits over standard treatment in reducing symptom severity at 12 months. I had temporarily put aside my pursuit of this issue, for understandable reasons. But I have taken it up again. Earlier today, I sent the following letter to Hazel Everitt, the corresponding author of the Gut study, who is a GP and a professor of primary care research at the University of Southampton, England.

**********

Dear Dr Everitt–

I am a senior fellow in public health and journalism at the Center for Global Public Health at the University of California, Berkeley. I write frequently about research on illnesses in the category of so-called “medically unexplained symptoms.” Much of my work appears on Virology Blog, a science site hosted by Vincent Racaniello, a professor of microbiology at Columbia.

I am writing to you in your capacity as the corresponding author of the ACTIB study and as a scientific adviser to Mahana Therapeutics, as listed on its website. In January and February, I reported in several posts on Virology Blog that Mahana was making unjustified claims about the web-based cognitive behavior therapy program for irritable bowel syndrome that you and your colleagues road-tested in the study. I also critiqued the study itself.

It appears that the company would like to promote the program to regulators in the UK and US as an evidence-based treatment providing clinically significant relief from IBS symptom severity. But this claim seems to go beyond the ACTIB results. At best, Mahana could argue that the program demonstrated very modest benefits in more generic domains like work and social adjustment.

Before the pandemic took hold, I reached out to Rob Paull, the chief executive officer, and some of Mahana’s advisers. No one responded. At this point, despite the coronavirus situation, I am trying to get back to work–especially since the Mahana website continues to include misleading information.

(I have cc’d Mr Paull on this letter, as well as Professor Racaniello. For transparency, I plan to post the letter on Virology Blog.)

To recap: Mahana announced in January that it had licensed the web-based CBT program from King’s College London. My concerns involved statements in a press release and on Mahana’s website that clearly exceeded the data from ACTIB. The press release described the web-based program’s impacts on symptom severity as “substantial” and “durable.” The website called them “dramatic” and “potentially game-changing.”

As I have pointed out, these descriptions cannot be justified. At 12 months, the mean score for the web-based group in the intention-to-treat analysis was 35.2 points lower than for the treatment-as-usual group—much less than the 50-point difference that would be considered a clinically significant improvement. At 24 months, the reported benefits over treatment-as-usual were neither statistically nor clinically significant. Given the weak results for this core indicator, it is hard to understand why Mahana decided to license the program in the first place. 

When I rechecked recently, it appeared that Mahana had changed the website’s description of the program’s purported benefits on symptom severity and no longer characterizes them as “dramatic” and “potentially game-changing.” I was pleased to see this change. However, the site currently states that “patients enrolled in a minimal-contact digital CBT program experienced significant and clinically meaningful reduction in the severity of their IBS.”

This is an empty statement, since it would be true if only two patients in the web-based arm had achieved this “significant and clinically meaningful reduction” in symptom severity. It could also easily be said about the treatment-as-usual arm, since at least two patients in that arm also achieved “significant and clinically meaningful reduction” in symptom severity. The relevant question is whether the program provides clinically meaningful benefits over and above what is achieved through treatment-as-usual. And this is not the case, according to the study’s own symptom severity results–as I pointed out above.

The two data points Mahana includes on the website to highlight the program’s purported benefits are not the central findings for IBS symptom severity. To cite them out of context in this manner is unacceptable.

First, according to the website, “66% of patients reported significant and clinically meaningful reduction in the severity of their IBS.”

It is true that 66% of those in the web-based CBT arm who responded at 12 months had a reduction in the scores on the IBS Symptom Severity Scale of at least 50 points–the threshold for clinical significance. But it is not true that most of those changes can be attributed to the web-based program, which is what the statement implies. Mahana does not mention that 44% of those in the treatment-as-usual arm who reported at 12 months also had a reduction in score of 50 or more points. That means many or most of the 66% in the web-based arm could have reported those improvements anyway.

Moreover, the site does not make clear that only 70% of the study sample provided data at 12 months. We can’t know what the final results would have been for the remaining 30%–those considered “lost to follow-up” in epidemiological terms. That means we have no idea how participants who dropped out from the web-based program arm felt about the intervention, or whether it helped or harmed them. At 12 months, as I have already noted, the mean score for the web-based group in the intention-to-treat analysis was 35.2 points lower than for the treatment-as-usual group—much less than the 50-point difference that is considered a clinically significant improvement.

Mahana’s website also states: “On average, reduction in IBS severity was twice that of patients receiving medical care as usual.”

Again true, and again misleading. When improvements are marginal, improvements twice the size are still small. Just because something doubles does not mean the change has much or any clinical significance. The more telling and relevant statistic is often not the relative difference between groups but the absolute difference. In this case, as I’ve already noted, the absolute difference between the group means was 35.2 points–well under the 50 points considered a clinically significant change.

Unfortunately for Mahana, this web-based CBT program cannot be accurately marketed as having clinically meaningful impacts on reducing symptom severity beyond treatment-as-usual. The coronavirus epidemic certainly heightens the potential appeal of effective web-based therapies, but the ACTIB findings cannot be twisted to mean what they don’t. 

Dr Everitt, here are my questions for you:
 
Do you stand by the way in which Mahana is presenting the ACTIB symptom severity findings for the web-based program, or do you find the selective use of data problematic? 
 
Can you provide insight into why Mahana decided to license a program with minimal reported benefits over treatment-as-usual in reducing symptom severity?
 
How did your own association with Mahana begin, and is your advisory role compensated or uncompensated? 
 

Thanks for your attention to this matter.

Best–David

David Tuller, DrPH

Senior Fellow in Public Health and Journalism
Center for Global Public Health
School of Public Health

University of California, Berkeley

  • Wendy 5 May 2020, 6:33 pm

    Suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related and similar cases or data that may contradict that position. It is a kind of fallacy of selective attention or cherry picking to attract the vulnerable.

  • CT 6 May 2020, 2:28 am

    These doctors need to understand that they’re paid for out of the public purse and are ultimately accountable to the public for what they say and do in the course of their work. That includes their research ‘contributions’. They call themselves researchers or scientists but seem to think that they’re above us all and answerable to no one. That isn’t what science is about. True scientists should always be willing to engage with others and explain their work.

  • CT 6 May 2020, 4:22 am

    And can we really ‘improve’ evidence -https://www.southampton.ac.uk/medicine/about/staff/hae1.page – or does the use of that phrase suggest a pre-determined, politically-driven goal?
