How people use, and lose, preexisting biases to make decisions
The Zuckerman Institute at Columbia University
A new study from Columbia University neuroscientists uncovers a surprisingly rational feature of the human brain: a previously held bias can be set aside so that the brain can apply logical, mathematical reasoning to the decision at hand.
These findings highlight the importance that the brain places on the accumulation of evidence during decision-making, as well as how prior knowledge is assessed and updated as the brain incorporates new evidence over time.
This research was reported in Neuron.
"As we interact with the world every day, our brains constantly form opinions and beliefs about our surroundings," said Michael Shadlen, MD, PhD, the study's senior author and a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute.
"Sometimes
knowledge is gained through education, or through feedback we receive. But in
many cases we learn, not from a teacher, but from the accumulation of our own
experiences. This study showed us how our brains help us to do that."
As an example, consider an oncologist who must determine the best course of treatment for a patient diagnosed with cancer.
Based on the doctor's prior knowledge and her previous experiences with cancer patients, she may already have an opinion about what treatment combination (i.e., surgery, radiation and/or chemotherapy) to recommend -- even before she examines this new patient's complete medical history.
But each new patient brings new information, or evidence, that must be weighed against the doctor's prior knowledge and experiences. The central question the researchers asked was whether, or to what extent, that prior knowledge would be modified when someone is presented with new or conflicting evidence.
To find out, the team asked human participants to watch a group of dots as they moved across a computer screen, like grains of sand blowing in the wind. Over a series of trials, participants judged whether each new group of dots tended to move to the left or right -- a tough decision, as the movement patterns were not always immediately clear.
As new groups of dots were shown again and again across several trials, the participants were also given a second task: to judge whether the computer program generating the dots appeared to have an underlying bias.
Without telling the participants, the researchers had indeed programmed a bias into the computer; the movement of the dots was not evenly distributed between rightward and leftward motion, but instead was skewed toward one direction over the other.
"The bias varied
randomly from one short block of trials to the next," said Ariel
Zylberberg, PhD, a postdoctoral fellow in the Shadlen lab at Columbia's
Zuckerman Institute and the paper's first author.
"By altering the strength and direction of the bias across different blocks of trials, we could study how people gradually learned the direction of the bias and then incorporated that knowledge into the decision-making process."
"By altering the strength and direction of the bias across different blocks of trials, we could study how people gradually learned the direction of the bias and then incorporated that knowledge into the decision-making process."
The study, which was co-led by Zuckerman Institute Principal Investigator Daniel Wolpert, PhD, took two approaches to evaluating the learning of the bias: first, implicitly, by monitoring the influence of the bias on the participants' decisions and their confidence in those decisions; and second, explicitly, by asking people to report the most likely direction of movement in each block of trials.
Both approaches demonstrated that the participants used sensory evidence to update their beliefs about the directional bias of the dots, and they did so without being told whether their decisions were correct.
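One way to picture this kind of feedback-free learning is an ideal-observer sketch that keeps a posterior over candidate bias values and updates it from the noisy evidence alone. This is a simplified illustration under the same assumed bias levels, coherences and noise as above, not the model reported in the paper.

    # Simplified ideal-observer sketch of learning the bias without feedback.
    # Not the paper's model; bias grid, coherences and noise are assumptions.
    import numpy as np
    from scipy.stats import norm

    bias_grid = np.array([0.2, 0.35, 0.5, 0.65, 0.8])  # candidate values of P(rightward)
    posterior = np.full(len(bias_grid), 1 / len(bias_grid))  # flat prior over the bias

    def update_bias_belief(posterior, evidence, coherence, noise=0.15):
        """One trial of belief updating: no feedback, only the noisy evidence."""
        # Likelihood of the evidence under each candidate bias is a mixture of
        # "the dots truly moved right" and "the dots truly moved left".
        like_right = norm.pdf(evidence, loc=+coherence, scale=noise)
        like_left = norm.pdf(evidence, loc=-coherence, scale=noise)
        likelihood = bias_grid * like_right + (1 - bias_grid) * like_left
        posterior = posterior * likelihood
        return posterior / posterior.sum()

    # A few rightward-leaning observations gradually shift the belief rightward.
    for e, c in [(0.10, 0.12), (0.20, 0.24), (-0.02, 0.03), (0.15, 0.12)]:
        posterior = update_bias_belief(posterior, e, c)
    print(dict(zip(bias_grid, posterior.round(3))))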
"Originally, we
thought that people were going to show a confirmation bias, and interpret
ambiguous evidence as favoring their preexisting beliefs" said Dr.
Zylberberg. "But instead we found the opposite: People were able to update
their beliefs about the bias in a statistically optimal manner."
The researchers argue that this occurred because the participants' brains were considering two situations simultaneously: one in which the bias exists, and a second in which it does not.
"Even though
their brains were gradually learning the existence of a legitimate bias, that
bias would be set aside so as not to influence the person's assessment of what
was in front of their eyes when updating their belief about the bias,"
said Dr. Wolpert, who is also professor of neuroscience at Columbia University
Irving Medical Center (CUIMC).
"In other words, the brain performed counterfactual reasoning by asking 'What would my choice and confidence have been if there were no bias in the motion direction?' Only after doing this did the brain update its estimate of the bias.
"In other words, the brain performed counterfactual reasoning by asking 'What would my choice and confidence have been if there were no bias in the motion direction?' Only after doing this did the brain update its estimate of the bias.
The researchers were amazed at the brain's ability to interchange these multiple, realistic representations with an almost Bayesian-like, mathematical quality.
"When we look
hard under the hood, so to speak, we see that our brains are built pretty
rationally," said Dr. Shadlen, who is also professor of neuroscience at
CUIMC and an investigator at the Howard Hughes Medical Institute. "Even
though that is at odds with all the ways that we know ourselves to be irrational."
Although not addressed in this study, irrationality, Dr. Shadlen hypothesizes, may arise when the stories we tell ourselves influence the decision-making process.
"We tend to
navigate through particularly complex scenarios by telling stories, and perhaps
this storytelling -- when layered on top of the brain's underlying rationality
-- plays a role in some of our more irrational decisions; whether that be what
to eat for dinner, where to invest (or not invest) your money or which
candidate to choose."
This research was supported by the Howard Hughes Medical Institute, the National Eye Institute (R01 EY11378), the Human Frontier Science Program, the Wellcome Trust and the Royal Society.