A recent paper on lithium and suicide is an excellent example of how to mislead with meta-analysis. It is pseudoscience at its finest. The authors are major critics of the pharmaceutical industry; they claim that drugs are over-hyped. But what pharmaceutical company makes a profit on lithium? It has been a generic drug from its very beginning in the US in 1970. The authors’ opposition to lithium shows that they’re opposed to psychiatric drugs per se, not merely to the influence of the pharmaceutical industry. They can’t let even lithium – the most effective psychiatric medication in human history – escape their criticism.
“The idea that lithium helps prevent suicide really should be put to bed now,” the senior author concludes on an anti-psychiatry website. Only if you use meta-analysis in a misleading and scientifically false way; only if you mistake pseudoscience for science.
Here is a summary of the paper: If you exclude the studies that show benefit with lithium in prevention of completed suicide, and limit yourself to a time frame where you can’t observe the outcome (completed suicide), you won’t observe anything. You then tout that negative result as absence of benefit.
This paper is classic pseudoscience. Read on, and when you reach the end of this post, you’ll see how the physicist Richard Feynman explained this type of pseudoscience long ago, and saw it as a major problem of the scientific professions.
Meta-analysis seems so definitive, but it has a basic problem: depending on which studies you include, and which you exclude, you can prove any point you want. In short, if you want to mislead others, or worse yet, if you want to mislead yourself, you can do so. A founder of modern clinical epidemiology, Alvan Feinstein, warned clearly against this misuse decades ago, as did the prominent psychologist Hans Eysenck. They predicted papers like the one we’re analyzing now.
One way you can mislead is to exclude those studies which produce a result you don’t want. Another way is to create a false outcome. This paper does both.
What are “antisuicidal” effects? They can mean thoughts (suicidal ideation), acts which fail (suicide attempts), and acts which do not fail (completed suicide). In a prior honest meta-analysis, Oxford researchers found that lithium prevented completed suicide, but not suicide attempts. This is the study that many of us have cited as proving that lithium prevents actual suicide, not just attempts or thoughts. In fact, it’s the only drug proven to do so in randomized clinical trials (RCTs). (Clozapine prevented suicide attempts, but not completed suicide, in one large oft-cited RCT.)
So we already knew that lithium has not been proven to prevent suicide attempts. And no one had ever claimed that it improved suicidal ideation (it may indeed, but the topic has not been studied systematically). In fact, a recent large US Veterans Administration study found that lithium did not prevent suicide attempts, a finding that was immediately misinterpreted by many as contradicting the prior literature. It did not; it was consistent with the prior literature showing that lithium doesn’t prevent suicide attempts. The VA study did not have the data to assess whether lithium prevented completed suicide, so it neither refuted nor added to that literature. (In fact, the VA study did find that lithium prevented suicide attempts in the bipolar illness subgroup.)
One other point about prevention of completed suicide: in order to prevent something, you have to wait until it would happen. Suicide prevention studies have to be long studies, lasting years. A 1-2 month study is not long enough to show whether a drug would prevent suicide. A person can have depression this month and die by suicide 10 years from now. A two-month study would credit placebo with “preventing” suicide in a person who dies from it ten years later.
That’s an obvious point, but one which the authors ignored so as to get the result they wanted.
Back to the false meta-analysis published recently.
It ignored all this prior literature, and decided to combine, in one big analysis, short-term lithium studies of suicidal ideation or behavior – lasting weeks to months, which cannot prove or disprove prevention of completed suicide – with long-term studies lasting years that could assess it. The minimum duration in the meta-analysis inclusion criteria was 12 weeks. This is totally inappropriate.
Further, the meta-analysis required that lithium be compared to placebo or “treatment as usual” only in studies since 2000 “to ensure that suicide was reliably reported.”
Was suicide never reliably reported before 2000? In the case of completed suicide, it’s not hard to tell if someone is dead. And yet, 3 of the 4 studies which showed benefit for lithium in prevention of suicide were published in – you guessed it – 2000 or earlier. The authors claim that because the CONSORT statement was introduced in 1996, consistent reporting of outcomes cannot be ensured in earlier trials. Maybe. Maybe not. You are excluding four decades of data that show something in favor of two decades which appear not to show it.
With this sleight of hand of inclusion criteria, the authors excluded the studies which found that lithium prevented suicide, and they included the ones that didn’t. Of course, the conclusion is preexistent in the premise.
Beyond this central issue, there are other important ways in which the studies included in this meta-analysis could give misleading results.
There’s no rationale at all to include treatment as usual and exclude active controls. There is also no rationale to limit the data to placebo-controlled studies when most of those studies are randomized discontinuation trials (RDTs), which preselect for response to the drug being studied (never lithium) and thus are biased in favor of that drug over the control group (lithium). Hence the result is not a fair comparison for lithium, since the sample is already preselected to respond preferentially to the non-lithium drug. (I’ve critiqued RDTs in detail previously.)
Twelve RCTs, most of which were maintenance studies in which patients were in remission, were included: three were 6 months long; five were one year long; two were 1.5 years long; two were 2 years long. Most people don’t die by suicide within two years of remission from a recent mood episode, and certainly not within 6-12 months. Some studies were tiny, with only 14 or 15 subjects in an arm. The lamotrigine and quetiapine maintenance RCTs were included; these were conducted by the pharmaceutical companies that created those drugs (I thought the authors were highly critical of pharmaceutical drug research?) in the classic biased design where all patients were preselected as responders to those drugs, not to lithium, so those studies were biased against lithium. Other maintenance RCTs with lithium control arms, like a classic olanzapine RCT, were not included, presumably because of the absence of a placebo arm.
The short-term studies found no benefit for suicidal acts, which was already known. The short-term studies of course found no benefit for suicide prevention, because they could not assess the matter. The benefit already known in the long-term studies was not included because those studies were excluded from the meta-analysis on false grounds.
Voila.
Now let’s see how meta-analysis can tell the truth. The prior major meta-analysis on this topic, published in 2013 in the British Medical Journal, included 48 studies, not 12. It did not falsely limit itself to short-term studies that can’t answer the question. It included all comparator studies with lithium, 14 different comparison arms in all. It did subgroup analyses by diagnosis, e.g., limiting results to bipolar illness and/or unipolar depression as opposed to other diagnoses; the new meta-analysis did not look at those subgroups. It included much longer studies, up to 4 years, with a mean follow-up of 19 months. It had data on suicide attempts and suicide outcomes in 4,246 patients, about twice as many as in this false meta-analysis. It had 23 studies comparing lithium and placebo, twice as many as in this false meta-analysis.

The BMJ meta-analysis reported no benefit for self-harm or suicide attempts, but it found benefit on completed suicide, for which it had data in 485 patients – a reasonable size for the most difficult to assess and most important outcome. Four studies directly compared lithium to placebo in prevention of completed suicide in those 485 subjects. There were zero suicides among 241 subjects in the lithium arm versus 6 suicides among 244 subjects in the placebo arm. Lithium clearly showed benefit in prevention of suicide, with an odds ratio of 0.13 (95% confidence interval 0.03, 0.65). That is an 87% reduction in the odds of actual suicide, i.e., about as close as you can get to complete prevention. And yet, if you exclude those studies from your false meta-analysis, you can pretend they didn’t happen.
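The arithmetic behind that odds ratio can be checked from the pooled counts alone. Here is an illustrative back-of-the-envelope calculation using the Peto (one-step) method, which handles the zero-event lithium arm without a continuity correction. Note this is my own sketch, pooling the four trials’ totals rather than stratifying by study as the published meta-analysis did, so the confidence interval differs slightly from the published one, but the odds ratio lands on essentially the same figure.

```python
import math

# Pooled counts across the four placebo-controlled trials:
# lithium arm: 0 suicides among 241 subjects; placebo arm: 6 among 244.
events_li, n_li = 0, 241
events_pl, n_pl = 6, 244

n_total = n_li + n_pl                 # 485 subjects
events_total = events_li + events_pl  # 6 suicides in all

# Peto one-step odds ratio: exp((O - E) / V), usable even with a zero cell.
expected = n_li * events_total / n_total  # lithium suicides expected under the null
variance = (n_li * n_pl * events_total *
            (n_total - events_total)) / (n_total ** 2 * (n_total - 1))

log_or = (events_li - expected) / variance
odds_ratio = math.exp(log_or)

# 95% confidence interval on the log scale
se = 1 / math.sqrt(variance)
ci_low = math.exp(log_or - 1.96 * se)
ci_high = math.exp(log_or + 1.96 * se)

print(f"Peto OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
# OR ≈ 0.13, i.e., roughly an 87% reduction in the odds of completed suicide
```

Even this crude pooled version reproduces the 0.13 odds ratio, which is the point: the signal on completed suicide is there in the long-term data, and only disappears if you exclude those trials.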
To repeat: If you exclude the studies that show benefit with lithium in prevention of completed suicide, and limit yourself to a time frame where you can’t observe the outcome (completed suicide), you won’t observe anything. You then tout that negative result as absence of benefit.
This kind of paper is not “research”: it produced not a single datum of new fact. It uses the scientific journals to lend a patina of respectability to the anti-psychiatric-drug agenda on the internet and in social media. It’s propaganda in the service of social activism.
And yet the authors certainly would disagree. And that’s the problem. They think they’re engaging in science when they are doing the opposite of science. The physicist Richard Feynman identified and explained this problem well: pseudoscience is when you use the superficial techniques of science, like meta-analysis, in the service of supporting your own beliefs, rather than seeking to refute them. It’s not that such pseudoscientists deceive others. They deceive themselves, and then they can honestly mislead others. Self-deception is a precondition for deception.