Yesterday was the last day of the 24th Psychoneuroimmunology Research Society meeting (PNIRS). Here is an attempt to gather some thoughts on the field of psychoneuroimmunology. These thoughts will lead up to an assessment of publication bias in a recent meta-analysis of cytokine-inhibiting drugs against depression.
The Central Hypothesis of Psychoneuroimmunology
Research in psychoneuroimmunology relies heavily on what I am going to call (only half in jest) the Central Hypothesis of Psychoneuroimmunology: stress in the organism causes inflammation, which causes adverse mental and physical health outcomes. The notion of stress here is broad, encompassing challenging or adverse mental and physical states as well as exposures. A principal feature of the Central Hypothesis is thus a reciprocal loop between the brain and the immune system.
Much of the work presented during this PNIRS meeting, if not most, addressed associations that have a place within the framework of the Central Hypothesis. There were reports of associations between stressful exposures and inflammatory mediators such as the cytokine interleukin-6. There were reports of associations between inflammatory markers and outcomes such as depression and cancer progression. Notably absent were reports of successful lifestyle interventions to reduce inflammation and thereby reduce physical and mental symptoms.
Efficacy as a sign of truth in medical science
In medical science, translation of research findings to clinical use is in a sense an ultimate truth criterion. If a physiological effect is reliable and important enough that we can base diagnosis or treatment on it, then it must really be true. This sentiment, which you may or may not share, is founded not only on the effusive self-assuredness of the medical profession, but also on the strong evidential value of well-conducted randomized clinical trials, provided they carry a low risk of bias.
The caveats in the last sentence are necessary. A clinical trial cannot be interpreted in isolation. If many trials have been conducted, but only a subset have been published, then conclusions from the overall literature must be drawn very cautiously, regardless of whether the accessible trials meet the highest standards. What I have just described is the effect of publication bias, one of the most important biases to consider when evaluating clinical trials.
Anti-cytokine treatment and depression
The Central Hypothesis predicts that anti-inflammatory treatment will help against a set of diseases. Depending on who you ask, this set may include depression, sleep disorders, post-traumatic stress disorder, cardiovascular diseases, cancer, and many other diseases. In support of this prediction, one meta-analysis has shown that anti-cytokine treatment is effective against depression. These results were prominently featured during the PNIRS meeting. I decided to look closer at the assessment of publication bias in this analysis. The paper describing the meta-analysis has been published here and the work was presented at the meeting by the senior author Dr. G M Khandaker.
The meta-analysis identified seven placebo-controlled trials of anti-cytokine treatment. These are the main objects of analysis, since placebo-controlled trials are at lower risk of within-study bias than studies without placebo controls. Seven is a rather small number, which limits the usefulness of quantitative techniques, as we shall soon see. Nonetheless, it is helpful to plot the data as a starting point. I have copied the data from the paper and made the following plot:
This forest plot faithfully replicates the overall summary result reported in the paper (figure 2a). There appears to be a moderate effect of anti-cytokine treatment on symptoms of depression (Cohen’s d = 0.40). The confidence interval reaches down to d = 0.22, which is still quite high and suggestive to my eyes of a clinically relevant effect.
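For readers who want to see what the pooling step actually computes, here is a minimal sketch of a DerSimonian–Laird random-effects meta-analysis. The effect sizes and standard errors below are illustrative placeholders, not the seven trials from the paper:

```python
import math

def random_effects_meta(d, se):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI.

    d  : per-study effect sizes (Cohen's d)
    se : per-study standard errors
    """
    w = [1.0 / s**2 for s in se]                            # fixed-effect weights
    sw = sum(w)
    d_fe = sum(wi * di for wi, di in zip(w, d)) / sw        # fixed-effect estimate
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, d))  # Cochran's Q
    k = len(d)
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                      # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in se]                # random-effects weights
    d_re = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return d_re, (d_re - 1.96 * se_re, d_re + 1.96 * se_re)

# Illustrative numbers only -- not the trial data from the paper.
d  = [0.55, 0.30, 0.45, 0.20, 0.60, 0.35, 0.40]
se = [0.20, 0.15, 0.25, 0.18, 0.30, 0.22, 0.12]
est, (lo, hi) = random_effects_meta(d, se)
```

The pooled estimate is a weighted average of the study effects, so it always lands inside their range; the point of the exercise is that it inherits whatever bias determined which studies are in the list.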
Analysis of publication bias
But what about publication bias? The authors investigated this with a funnel plot, available in the supplementary materials, and concluded that the risk is low. I have reproduced the funnel plot and it looks like this:
The idea behind the funnel plot is that larger studies have lower variance, so in the absence of publication bias the distribution of studies should resemble an inverted funnel, with the largest studies at its narrow top. Asymmetry, such as a skew to the right, suggests publication bias. This can be tested using a regression method (Egger's test), but here is the snag: with only seven data points, the test will almost always fail to find evidence of bias even when it is there. The authors did perform Egger's test, and also a procedure known as trim-and-fill, which attempts to impute missing studies and adjust the estimate downward. And the results held up. But when I look at the funnel plot, I think it looks right-skewed, and suspicion strikes me. What if there is in fact no effect? Then the funnel plot would look like this:
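Egger's test itself is just a small regression and can be sketched in a few lines: regress the standardized effect on precision and test whether the intercept differs from zero. The data below are illustrative, not taken from the paper; with k = 7 studies the intercept's t-statistic has only k - 2 = 5 degrees of freedom, which is why the test is so underpowered here:

```python
import math

def eggers_test(d, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect z_i = d_i / se_i on the
    precision x_i = 1 / se_i; a nonzero intercept signals asymmetry.
    Returns (intercept, t_statistic); compare the t-statistic
    against a t distribution with k - 2 degrees of freedom.
    """
    k = len(d)
    x = [1.0 / s for s in se]                 # precision
    z = [di / si for di, si in zip(d, se)]    # standardized effect
    xbar, zbar = sum(x) / k, sum(z) / k
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
    slope = sxy / sxx
    intercept = zbar - slope * xbar
    resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
    s2 = sum(r**2 for r in resid) / (k - 2)   # residual variance
    se_b0 = math.sqrt(s2 * (1.0 / k + xbar**2 / sxx))
    return intercept, intercept / se_b0

# Illustrative data only -- not the seven trials from the paper.
d  = [0.55, 0.30, 0.45, 0.20, 0.60, 0.35, 0.40]
se = [0.20, 0.15, 0.25, 0.18, 0.30, 0.22, 0.12]
b0, t = eggers_test(d, se)
```

On these made-up numbers the intercept comes out positive but nowhere near significant, which is exactly the low-power problem: a right-skewed funnel can fail the test simply because seven points cannot move a t-statistic very far.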
Under this assumption of no effect, it would appear that almost only studies that more or less narrowly exceeded the significance threshold (the white triangle) were published, and the distribution is strongly right-skewed. Could it be?
Reanalysis with the three-parameter selection model
I wanted to take the opportunity to test the hottest new thing in bias estimation, made available thanks to Joe Hilgard and three colleagues. (Joe and I once co-authored a letter about bias in a meta-analysis of IL-6 in PTSD.) The new thing is a model that simultaneously estimates three parameters: effect size, heterogeneity (i.e. how much different studies vary), and publication bias (i.e. the probability that findings are published). For a hot new thing, it has been around for a while, but only recently have Joe and others returned it to the forefront, by showing that it works well in simulated samples and by making code available to run it. Here is the result:
The estimated effect of anti-cytokine treatment on depression is now d = -0.04, with a 95% confidence interval ranging from -0.89 to 0.82. The whole of the observed effect has been attributed to bias.
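To give a flavour of how such a model works: this is not Hilgard's code, but a toy sketch of the likelihood behind a three-parameter selection model of the Vevea-Hedges type. Each study's normal density is reweighted by the probability that a result like it gets published, and then renormalized. The data are again illustrative placeholders, and a coarse grid search stands in for a proper optimizer:

```python
import math
from statistics import NormalDist

def neg_log_lik(delta, tau2, lam, d, se):
    """Negative log-likelihood of a three-parameter selection model.

    delta : true mean effect size
    tau2  : between-study variance (heterogeneity)
    lam   : relative probability that a nonsignificant result is
            published (significant results are published with prob. 1)
    """
    nll = 0.0
    for di, si in zip(d, se):
        dist = NormalDist(delta, math.sqrt(tau2 + si**2))
        crit = 1.96 * si                 # two-sided .05 significance cut-off
        p_sig = 1 - dist.cdf(crit) + dist.cdf(-crit)
        weight = 1.0 if abs(di / si) > 1.96 else lam
        # density of the observed effect, reweighted by the selection
        # process and renormalized over publishable outcomes
        nll -= math.log(weight * dist.pdf(di) / (p_sig + lam * (1 - p_sig)))
    return nll

# Illustrative effect sizes and standard errors, not the real trials.
d  = [0.55, 0.30, 0.45, 0.20, 0.60, 0.35, 0.40]
se = [0.20, 0.15, 0.25, 0.18, 0.30, 0.22, 0.12]

# Coarse grid search over (delta, tau2, lam) in place of a real optimizer.
grid = [(dl / 20, t2, lm / 10)
        for dl in range(-10, 21)
        for t2 in (0.0, 0.01, 0.05, 0.1)
        for lm in range(1, 11)]
best = min(grid, key=lambda p: neg_log_lik(*p, d, se))
```

When lam is estimated to be well below 1, the model is saying that nonsignificant results were much less likely to reach print, and the effect-size estimate delta is adjusted accordingly.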
This new estimate must be interpreted cautiously and is certainly not the final word. Quantitative bias analysis relies on many assumptions. Also, I have implemented the code without fully understanding what every bit of it does, so there may be errors. Caveat lector and do not take my word for it; you can check my code for yourself here if you want.
Meta-analysis has high status as a method to reach trustworthy estimates. But meta-analyses are affected by bias, and if there is no effect, a meta-analysis will converge on the prevailing bias in the field. The analysis presented above suggests that the risk of bias when analysing studies of anti-cytokine treatment against depression is larger than Khandaker and colleagues determined in their assessment, and that it is not possible to conclude that a positive association exists.
There are two ways, in principle, to overcome this bias risk. One is to publish all trials. I assume here that unpublished trials exist; it is this possibility that casts uncertainty on the existing trials. The other is to collect new data under bias-protected conditions, i.e. with public preregistration. The amount of new data must be large enough to completely overwhelm the bias risk in the existing data. Such an effort will be costly and will involve risks and harms to patients that would have been unnecessary had the bias risk in the present literature been lower.
The Central Hypothesis of Psychoneuroimmunology will be vindicated in my eyes when successful clinical translation is achieved.