When making decisions in the real world, the situation can easily become far more biased than our natural tendencies shown in those artificial studies. Not only do we tend to keep our initial opinions much longer than we should, we also choose which sources of information we will use. And that almost always means seeking out the opinions of those we already agree with, while disregarding people who oppose our views. Of course, this simply makes us more certain of what we already thought, even when that certainty is not warranted. In the process, we learn only the reasons why our opinion might be right, and rarely come to know the reasons why it might actually be wrong. Test yourself: can you make a convincing argument for some political or religious idea you oppose? You don't have to believe the argument is strong enough to change your mind, but it should stand as a solid argument ("It is the mark of an educated mind to be able to entertain a thought without accepting it", attributed to Aristotle).
Of course, anyone would like to think that their beliefs are reasonable, rational, and well justified. After all, if they weren't, we wouldn't hold them, right? But the evidence, unfortunately, is not on our side. In a very interesting example, Jervis observed an effect he called irrational consistency (Baron uses the term belief overkill). It consists of the fact that when people hold a specific belief, for example support for a policy, they usually hold many logically independent beliefs, all of which support that same policy. And those who oppose the policy tend to defend the opposite set of ideas. However, if those ideas really are independent, a rational person could well defend some and oppose others, and weighing their combined effect would lead to a final point of view on the policy. That people are so consistent is a clear sign that reason is not playing the role it should in this problem.
Jervis mentions as an example the case of people who supported or opposed a ban on nuclear tests. Among the issues behind such a decision, he presents three: whether the tests would cause serious medical harm; whether they would lead to major weapon improvements; and whether they would be a source of international tension. It is important to notice that it is completely reasonable to believe, say, that the tests would not cause serious medical harm but would cause international tension. Those two evaluations are independent, and any of the four possible combinations of beliefs makes just as much sense as the other three. That means that, if people were reasoning in a competent and independent way, no correlation between those beliefs should be observed. And yet those who were in favor of the ban held all three beliefs: that the tests would cause health problems, would lead to more dangerous weapons, and would increase international tension. And, as should be obvious by now, those who opposed the ban disagreed with the supporters on every one of those points. Apparently, people felt somehow compelled to hold a consistent set of beliefs, even when there was no reason at all for that consistency.
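To make the statistical point concrete, here is a minimal sketch (using made-up numbers, not Jervis's data) of what "no correlation should be observed" means: two beliefs formed independently correlate near zero, while two beliefs both dictated by one's stance on the ban correlate near one.

```python
import random

def correlation(pairs):
    # Pearson correlation of a list of (x, y) pairs of 0/1 beliefs.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

def simulate(n=10_000, overkill=False):
    # Two hypothetical binary beliefs, e.g. "tests cause medical harm" and
    # "tests raise international tension".
    pairs = []
    for _ in range(n):
        if overkill:
            # Belief overkill: both beliefs follow from one's stance on the ban.
            stance = random.random() < 0.5
            pairs.append((int(stance), int(stance)))
        else:
            # Independent reasoning: each belief is formed on its own merits.
            pairs.append((int(random.random() < 0.5), int(random.random() < 0.5)))
    return correlation(pairs)

print(f"independent beliefs: r = {simulate(overkill=False):+.2f}")  # close to 0
print(f"belief overkill:     r = {simulate(overkill=True):+.2f}")   # close to +1
```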
As a matter of fact, when our beliefs seem to conflict with each other, a phenomenon called cognitive dissonance, we have a tendency to change some of those beliefs to avoid the conflict. This was observed in a series of experiments conducted by Festinger. The typical experiment involved performing some task and being paid either a very small amount for it ($1.00) or a more reasonable amount ($20.00, in 1962 dollars). When the subjects were asked about their feelings about the task, those who had been paid very little evaluated it more favorably than those who had received more. The explanation proposed by Festinger is that people would not normally perform that task for just one dollar. But they had done it, which created a cognitive dissonance that the subjects resolved by evaluating the task as more entertaining. After all, doing an entertaining task for basically no money makes more sense than doing a boring one.