Your Brain Is Primed To Reach False Conclusions

Paul Offit likes to tell a story about how his wife, pediatrician Bonnie Offit, was about to give a child a vaccination when the kid was struck by a seizure. Had she given the injection a minute sooner, Paul Offit says, it would surely have appeared as though the vaccine had caused the seizure and probably no study in the world would have convinced the parent otherwise. (The Offits have such studies at the ready — Paul is the director of the Vaccine Education Center at the Children’s Hospital of Philadelphia and author of “Deadly Choices: How the Anti-Vaccine Movement Threatens Us All.”) Indeed, famous anti-vaxxer Jenny McCarthy has said her son’s autism and seizures are linked to “so many shots” because vaccinations preceded his symptoms.

But, as Offit’s story suggests, the fact that a child became sick after a vaccine is not strong evidence that the immunization was to blame. Psychologists have a name for the cognitive bias that makes us prone to assigning a causal relationship to two events simply because they happened one after the other: the “illusion of causality.” A study recently published in the British Journal of Psychology investigates how this illusion influences the way we process new information. Its finding: Causal illusions don’t just cement erroneous ideas in the mind; they can also prevent new information from correcting them.

Helena Matute, a psychologist at Deusto University in Bilbao, Spain, and her colleagues enlisted 147 college students to take part in a computer-based task in which they each played a doctor who specializes in a fictitious rare disease and assessed whether new medications could cure it.

In phase one of the study, the student volunteers were divided into two groups — a “high illusion” group that was shown mostly patients who had taken Drug A and a “low illusion” group that saw mostly patients who hadn’t taken the drug. Each volunteer saw 100 patients and, in each case, was told whether the patient had recovered. The volunteers weren’t told that the drug didn’t work — the recovery rate was 70 percent whether or not patients took it. Yet, as expected, people in the high illusion group were more susceptible to erroneously concluding that the drug had an effect.
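
To make the phase-one numbers concrete, here is a minimal Python sketch of that setup. The 100 patients and the 70 percent recovery rate come from the study as described above; the 80/20 split between drug and no-drug patients, and the code itself, are illustrative assumptions rather than the researchers’ actual materials. Because recovery is generated independently of the drug, the gap between the two recovery rates hovers around zero however lopsided the sample is; what changes between conditions is only how often the observer sees the drug paired with recovery.

```python
import random


def simulate_phase_one(n_patients=100, share_given_drug=0.8, p_recover=0.70, seed=1):
    """Simulate one volunteer's phase-one session: each fictitious patient
    either took the drug or didn't, and recovers 70 percent of the time
    either way."""
    rng = random.Random(seed)
    recovered = {True: 0, False: 0}  # keyed by "did the patient take the drug?"
    totals = {True: 0, False: 0}
    for _ in range(n_patients):
        took_drug = rng.random() < share_given_drug      # assumed 80/20 or 20/80 split
        got_better = rng.random() < p_recover            # independent of the drug
        totals[took_drug] += 1
        recovered[took_drug] += int(got_better)

    def rate(took):
        return recovered[took] / totals[took] if totals[took] else 0.0

    # Delta-P, a standard contingency index:
    # P(recovery | drug) - P(recovery | no drug). It stays near zero here
    # because the drug does nothing; only the number of drug-plus-recovery
    # pairings the observer sees differs between conditions.
    return rate(True), rate(False), rate(True) - rate(False)


# "High illusion" condition: most patients shown had taken the drug.
print(simulate_phase_one(share_given_drug=0.8))
# "Low illusion" condition: most patients shown had not.
print(simulate_phase_one(share_given_drug=0.2))
```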

Presumably, because the student volunteers in the low illusion group had more opportunities to see the syndrome resolve without the drug, they were less prone to assuming that recovery was linked to it. Previous studies have shown that simply seeing a high volume of people achieve the desired outcome after doing something ineffective primes the observer to correlate the two.

Phase two of the study is when things got interesting. The experiment was repeated, except this time some patients simultaneously received two drugs — the ineffective one from phase one and a second drug that actually worked. This time, volunteers from both the high and low illusion groups were presented with 50 patients who’d received the two drugs and 50 who’d received no drugs. Patients in the drug group recovered 90 percent of the time, while the group that didn’t get meds continued to have a 70 percent recovery rate. Volunteers in the “high illusion” group were less likely than participants in the “low illusion” group to recognize the new drug’s effectiveness; instead, they attributed the benefit to the old drug they’d already, and wrongly, judged effective. The prior belief in the first drug’s potency essentially blocked acquisition of the new information.

“You have to be sure before you’ll destroy what you already know and substitute it with something new,” Matute told me.

This finding might seem like nothing more than an interesting psychological quirk if it didn’t make us so vulnerable to quackery. Many so-called “alternative” remedies exploit the illusion of causality, Matute said, by targeting conditions that naturally have high rates of spontaneous recovery, such as headaches, back pain and colds. Quack cures remain popular in part because they bestow a sense of empowerment on people who are feeling miserable, by giving them something to do while they wait for their problem to run its course.

But not every dicey claim is made by a charlatan. Some credentialed clinics advertise platelet-rich plasma (PRP) as a “revolutionary” treatment for sports injuries even though the data on these therapies remains mixed. While writing about PRP last year, I was surprised to learn that none of the doctors I interviewed were tracking their outcomes. Because many people who get PRP seek it after they’ve tried everything else, doctors who provide the treatment are likely to see a lot of patients who are already on their way to recovery, as their condition finishes its natural course.

And that means it’s probable that these doctors are essentially replicating the first part of Matute’s study — priming themselves to find a correlation between PRP and recovery that isn’t there. We won’t know until they start tracking their results and comparing them with outcomes for similar patients who didn’t get the treatment. (An easy way to protect against the causality illusion is to pay attention and count.) Physicians don’t have a great track record for self-assessment. A 2006 study published in The Journal of the American Medical Association found that doctors are poor judges of their own performance.
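
For what “pay attention and count” could look like in practice, here is a toy sketch; the class, the method names and the example numbers are hypothetical, not drawn from the article or from any real PRP registry. It keeps the same two-by-two bookkeeping the causality research relies on: tally recoveries for treated and untreated patients separately, then compare the rates, so a treatment only gets credit if its patients do better than the untreated baseline.

```python
from dataclasses import dataclass, field


@dataclass
class OutcomeTally:
    """Minimal 2x2 bookkeeping: recoveries and totals for treated vs. untreated."""
    recovered: dict = field(default_factory=lambda: {"treated": 0, "untreated": 0})
    seen: dict = field(default_factory=lambda: {"treated": 0, "untreated": 0})

    def record(self, treated: bool, got_better: bool) -> None:
        group = "treated" if treated else "untreated"
        self.seen[group] += 1
        self.recovered[group] += int(got_better)

    def recovery_rate(self, group: str) -> float:
        return self.recovered[group] / self.seen[group] if self.seen[group] else 0.0

    def delta_p(self) -> float:
        """P(recovery | treated) - P(recovery | untreated); near zero means
        no evidence that the treatment adds anything to the natural course."""
        return self.recovery_rate("treated") - self.recovery_rate("untreated")


# Hypothetical counts for illustration only: both groups recover 70 percent of the time.
tally = OutcomeTally()
for treated, got_better, count in [(True, True, 70), (True, False, 30),
                                   (False, True, 14), (False, False, 6)]:
    for _ in range(count):
        tally.record(treated=treated, got_better=got_better)
print(round(tally.delta_p(), 2))  # 0.0: the treated group did no better than baseline
```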

In this respect, doctors are only human, and so it’s not so surprising that the medical profession is filled with practices that have been disproved. Even when the evidence for or against a treatment or intervention is clear, medical providers and patients may not accept it. In some cases, the causality illusion is to blame, but usually the reasons are more complex. Other cognitive biases — such as motivated reasoning (all of us want to believe that the things we do make a difference), base rate neglect (failing to pay attention to what happens in the absence of the intervention), and confirmation bias (the tendency to look for evidence that supports what you already know and to ignore the rest) — also influence how we process information. In medicine, perverse incentives can push people in the wrong direction. There’s no easy fix here.

One thing seems clear, though. Simply exposing people to more information doesn’t help. Last year, political scientist Brendan Nyhan at Dartmouth and his collaborators published a randomized trial of four different approaches to influencing parents’ attitudes about vaccines. The study’s 1,759 participants were split into groups, and each subset was presented with information about why vaccines are important — everything from explanations of why the diseases prevented by the measles, mumps and rubella vaccine are worth avoiding, to images of children stricken with those diseases, to a heartfelt story about an infant who nearly died of measles. None of these efforts made parents more likely to vaccinate their kids.

Matute and her colleagues recently tested a different approach. Instead of trying to counteract false associations with more information, they experimented with ways to improve how people think. In a study published in the journal PLoS ONE, they invited a group of teenagers to test a wristband fitted with a metal strip. Using language deliberately filled with jargon and pseudoscientific concepts, the researchers explained that the strip could improve physical and intellectual abilities, and the students were invited to try out the product while performing written tasks such as solving a maze or a number exercise. All the while, the researchers primed the volunteers to find a benefit from the band by raving about how previous users had noticed its alleged properties. By the end of the demonstration, many participants said they’d be willing to buy the magical ferrite strip.

Next came the twist — the researchers stepped out of their huckster roles and guided the teens through an analysis of what they’d just seen. They pointed out the holes in the evidence for the band’s powers, introduced the students to the causality illusion, and emphasized that to assess whether the product had improved their performance, they needed to have a baseline score for comparison. The objective of this second part of the intervention was to teach the teens to think critically about causality.

Afterward, the researchers ran the students through a computer test similar to the one used in the study on the causality illusion. They were shown a series of fictitious patients and given the opportunity to administer a fake (and, unbeknownst to them, ineffective) drug with the goal of figuring out whether the medication worked. Participants who’d learned about the challenges of establishing causality ran more trials without the medication (a necessary step for measuring the drug’s effectiveness) and made more accurate assessments of the drug’s efficacy.

It’s a promising result, but whether such an intervention could, say, prevent NFL players from buying up deer-antler spray or Olympians from taking risky supplements remains to be seen. Nyhan cautions against assuming that this educational approach can eliminate the causality illusion. Many psychological studies have shown promising improvements in belief accuracy when the beliefs involve matters that participants don’t care about, Nyhan told me. “But the lesson of controversial political, health and science issues is that people don’t apply their critical-thinking skills in the same way when they have a preference for who’s right.” Studies by law professor Dan Kahan at Yale show that even highly numerate people are prone to cognitive traps when the data contradicts the conclusion most congenial to their political values.

So where does this leave us? With a lot of evidence that erroneous beliefs aren’t easily overturned, and when they’re tinged with emotion, forget about it. Explaining the science and helping people understand it are only the first steps. If you want someone to accept information that contradicts what they already know, you have to find a story they can buy into. That requires bridging the narrative they’ve already constructed to a new one that is true and that still allows them to remain the kind of person they believe themselves to be.

Christie Aschwanden was a lead science writer for FiveThirtyEight. Her book “Good to Go: What the Athlete in All of Us Can Learn from the Strange Science of Recovery” is available here.
