Flawed Thinking

In psychotherapy, Sharon Begley warns, cognitive bias can put patients’ recoveries at risk.

Illustration by Sébastien Thibault

A throng of psychologists listens in rapt attention. The speaker declares that psychotherapies don’t require systematic study—akin to the clinical trials that determine the efficacy of drugs—because psychologists can see whether clients are improving. Thunderous applause. And when the speaker, rising to his theme, asks how many have seen their clients improve after therapy, it’s as if he’s asked who wants a free trip to Paris: a sea of hands shoots into the air.

The confident psychologists could well be right about their healing abilities. But Scott Lilienfeld wonders.

A research psychologist at Emory University, Lilienfeld has written papers on the resistance of clinical psychologists—practitioners who treat depression, social phobia, marital troubles, personality disorders, and the whole woeful litany of human mental miseries—to evidence-based practice, in which the choice of treatment is guided by rigorous, large-scale studies. Whenever Lilienfeld advocates this, the blowback is immediate: We know what works; we see it every time a satisfied client leaves therapy cured of what troubled her. As one therapist wrote, “When treatments work, the condition being treated vanishes, and we don’t need randomized controlled trials to see this happening.”

A journal editor once suggested Lilienfeld and some of his colleagues explore why psychologists are so adamant that they know what works. What they came up with is on a par with the cobbler’s barefoot children and the auto mechanic’s chronically stalling car: Psychologists—experts on the human mind—can succumb to cognitive biases, some of the most basic flaws in reasoning, including hindsight bias (“I knew all along you’d cheat on me”) and post-purchase rationalization (“I love Windows Vista; I paid $1,200 for that computer!”).

As Lilienfeld and four colleagues explained in a 2014 paper in Perspectives on Psychological Science, dozens of cognitive biases can lead psychologists to err in two basic ways: they see client improvement where there may be none, or they attribute improvement to their intervention when, in fact, something else is the cause.


Psychologists tend to take credit for clients’ recoveries, but just because improvement follows therapy doesn’t mean therapy caused the improvement.


Surely a professional therapist can’t err on something so basic as whether a client improved. Guess again. In a 2005 study, 49 psychotherapists at college counseling clinics estimated that 91% of the 500-plus students they treated got better. In fact, when the students were (unbeknownst to the therapists) assessed by a standard, objective measure of symptoms, only 40% had improved. The number who got worse was a remarkable 15 times greater than the therapists estimated. These huge misses in estimates of effectiveness reflect optimism bias.

When clients get better, psychologists naturally take credit, a bias that even creeps into published research. Take, for example, a 2007 study of people with severe depression. Researchers randomly assigned volunteers to receive either cognitive-behavioral or interpersonal therapy. The verdict: both worked. But the study lacked a crucial third arm, a group that received no therapy, so it could not show whether either treatment outperformed no treatment at all.

Piles of research show that a high percentage of depression dissipates on its own, as people’s circumstances change or a life event lifts them out of the psychological abyss: A client experiencing unemployment or illness or divorce may improve when the impact of these events recedes or the situation turns around, but the clinician may erroneously attribute the improvement to treatment. Studies of patients with major depression have found that about half improve spontaneously, and that half of depressive episodes last less than 13 weeks. The longer someone is in therapy, the more chances she has for spontaneous remission, natural healing, coping, or positive experiences to help her get better. As psychoanalyst Karen Horney wrote in 1945, “Life itself still remains a very effective psychotherapist.”

Consider some of the other cognitive pitfalls leading psychologists to take more credit than evidence supports.

Naive realism—seeing is believing. A clinician who “sees the change with my own eyes” believes that’s sufficient to know an intervention was effective. This is also an example of the post hoc, ergo propter hoc fallacy: just because improvement follows therapy doesn’t mean therapy caused the improvement. Clinicians base their judgments of effectiveness on clients they see, but they don’t follow clients who drop out. Yet research shows that clients who are not improving are especially likely to quit.

Therapists may conclude their treatments are effective merely because their remaining clients are those who have improved.

There is also no way to know what would have happened without therapy. Not only can life events alleviate mental illness, as Horney pointed out, but so can the placebo effect. People expect to improve, and they do. Estimates of the placebo effect in psychotherapy, based on comparisons of people who received actual therapy with people who received a therapist’s attention and the promise of treatment but no actual intervention, put it at half the effect of active therapies. A 1986 study found that the mere prospect of receiving help can make a difference: about 15% of patients improved between the first phone call and the first session.

Confirmation bias—the deeply ingrained tendency to seek evidence that supports one’s beliefs and to overlook or deny what doesn’t. Clinicians may notice and remember clients who seemed to improve, and forget those who dropped out or never made progress. For instance, a therapist who practices confrontation therapy, in which clients are forced to face their weaknesses, may “attend to and recall the sessions in which the client was doing better and neglect and forget” those where he was doing “worse,” Lilienfeld and his colleagues wrote. “As a consequence, the therapist may conclude that his use of confrontation was consistently followed by client improvement, even though it was not.”

The illusion of control—the all-too-human tendency to inflate our ability to influence events. It’s why people prefer to choose their own lottery numbers, and it biases some therapists to believe that what they do exerts a greater effect on client outcomes than it actually does. In a 2012 study, therapists in private practice rated their own effectiveness, on average, as being in the 80th percentile, while one-quarter thought they were in the 90th; call it psychology’s Lake Wobegon Effect.

Clients are no less subject to the illusion of therapeutic efficacy. Consider critical incident stress debriefing (CISD), a popular intervention for victims of trauma such as natural disasters, war, terrorism, and crime. Controlled studies—giving CISD to some people and not others and evaluating them before and after—found it ineffective and sometimes harmful: people who scored high on a measure of post-traumatic stress disorder before treatment improved after receiving CISD, but equally high scorers who received no intervention improved more. When trauma victims who receive CISD improve, it’s not because of the therapy; they probably would have improved even more without it.

Opponents of evidence-based practice in psychology contend that clinical experience trumps controlled trials. They argue that trials may enroll clients whose symptoms or circumstances differ from those of the clients they see, or may deliver the therapy differently from the way they do. But as long as cognitive pitfalls lead psychologists to erroneously believe a treatment worked, there is less incentive to rigorously identify the treatments that truly do. And as long as countless people suffer from mental disorders, ignorance is not bliss, and self-delusion among therapists is the worst kind.

This article also appeared in the February 2015 issue of Mindful magazine.