Individual Thinking
How smart we really are is an old question. In the Western tradition, the phrase attributed to Socrates, "I know that I know nothing", comes easily to mind. And yet, despite a few people who recognized how untrustworthy knowledge can be, most people actually think they have the correct answers on specific issues. When people say they believe in something, be it a religion, a political ideology, or a scientific result, they often mean that they know it to be true and are only using the word believe to acknowledge that not everyone has seen the truth. Of course, that is not the only use of the term; we also often use it in a probabilistic way, as in "I believe it will rain tonight". This tells us that the speaker considers rain more likely than not, but there is no certainty. As such, if it doesn't rain, the speaker is not at fault, as long as the reasoning and evidence used to make the prediction were solid enough. These different meanings of believing will be discussed later.
The problem is clearly linked to the question of what it means to know something. After all, if any opinion or statement were equally valid, a pernicious view sadly shared by many people, we wouldn't be able to speak of smartness at all. Anything anyone said would be acceptable, any premise could lead to any conclusion, and we would have no way to measure smartness. What actually happens is that there are conclusions anyone sane agrees are correct. The trivial examples of classical Aristotelian Logic come to mind. If I accept as true that all texts about Logic are boring and that this one you are reading now is a text about Logic, it is unavoidable to conclude that this text is boring. If you don't think it is boring, you must obviously disagree with at least one of the premises (as I hope you do). On the other hand, if you accept as true that all plants are green, the fact that you have a green car is not reason enough to conclude your car is a plant. These cases are so obvious that you need no training in Logic to agree with basically everyone else on Earth.
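For readers who like to see such things mechanically, here is a minimal sketch in Python (the toy world and every name in it are my own, purely for illustration) that hunts for a counterexample to the invalid inference "X is green, therefore X is a plant":

```python
# A toy world: each object has a kind and a colour (illustrative data only).
world = [
    {"name": "fern",       "kind": "plant", "colour": "green"},
    {"name": "cactus",     "kind": "plant", "colour": "green"},
    {"name": "my_car",     "kind": "car",   "colour": "green"},
    {"name": "fire_truck", "kind": "car",   "colour": "red"},
]

# The premise "all plants are green" does hold in this world.
assert all(o["colour"] == "green" for o in world if o["kind"] == "plant")

# The invalid inference says: "X is green, therefore X is a plant".
# A single counterexample is enough to show the conclusion does not follow.
counterexamples = [o["name"] for o in world
                   if o["colour"] == "green" and o["kind"] != "plant"]
print(counterexamples)  # ['my_car'] -- green, yet not a plant
```

One green non-plant is all it takes; no amount of green plants could ever rescue the inference.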
Unfortunately, we are not allowed the luxury of dealing only with trivial cases or with those our brains are already well adapted for. By adapted, I mean either from an evolutionary point of view, that is, problems our ancestors had to deal with so often that some wiring might have happened, or from a learning point of view, that is, problems we have encountered so often in our daily lives that we have learned to solve them. For everything else, we need good standards to compare against. And the sad part is that if you take just one step further into still trivial Aristotelian Logic problems, all hell breaks loose.
P.C. Wason and P.N. Johnson-Laird describe, in chapter 9 of their book Psychology of Reasoning: Structure and Content, the results of experiments performed to test how well people reason on such simple problems. What they observed is quite troubling. The now classical example has four cards on a table, placed so that you can see just one side of each. The deck is known to always have a letter on one side of each card and a number on the other. The problem is to test whether a simple rule can be proven false: "Whenever there is a vowel on one side, the other side will always have an even number". The four cards show one of each possible case. For example, they might show "A", "M", "2", and "5". The question each subject has to answer is: "If you turn the cards to inspect what is on the hidden side, which of these four cards can prove the rule to be false?" Answers are open, so any subset is acceptable, from the empty set (none of the cards could prove it false) to all four of them.
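To pin down exactly what the rule claims, here is a minimal Python sketch (the function names are my own). It only encodes when a fully revealed card violates the rule; it deliberately does not compute which cards to turn, so it spoils nothing:

```python
def is_vowel(ch: str) -> bool:
    return ch.upper() in "AEIOU"

def violates_rule(letter_side: str, number_side: int) -> bool:
    # A fully revealed card breaks the rule exactly when it has a vowel
    # on the letter side and an odd number on the number side.
    # Note what the rule does NOT claim: it says nothing about consonants.
    return is_vowel(letter_side) and number_side % 2 != 0
```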
Think a little and give your own answer. Don't look ahead, where I will eventually provide the right one. Whenever I teach TADI (Treatment and Analysis of Data and Information), I present the card problem (among others) to my students in the second meeting. While of very little value as experimental evidence, I observe an astonishing tendency to err: typically 1 in 60 gets the correct answer quickly. That is, if you got it wrong, it just means you are human. The original experiment observed better proportions, making the species look less dumb. Of course, there are important differences, among them the lack of controls and of a proper experimental setting in my classes. My classroom also adds factors the original experiments didn't have, such as peer pressure and the fact that I don't give the students a lot of time to think; I actually ask them to commit quickly to the answer they feel to be correct. My goal in the classroom is to make a clear point about how what we feel to be the right answer is often VERY wrong.
It is curious that this result actually depends on the problem presented to the subjects of the experiment. If, instead of unusual cards, the same logical question is framed as detecting violations of a rule against underage drinking, people tend to perform very well and very easily. This suggests that, while we are competent learners, we need to be trained if we are to have any chance of getting the right answer in some very easy problems. And sometimes even training might not be enough. Extra details on this problem and several others that I will discuss later can be found in two very interesting books. The one by Scott Plous, The Psychology of Judgment and Decision Making, is already 20 years old (1993), but it still covers a wide range of experiments on the way people think. Jonathan Baron also discusses the same problem in his newer (2007) book Thinking and Deciding. Both are very interesting reads.
One of the reasons for this phenomenon seems to be a characteristic of our reasoning called confirmation bias. Basically, when analyzing an idea, we tend to look for cases that confirm it. But this is not a true test. Cases where the idea seems to work well are interesting examples; if you really want to test an idea, however, you must look for cases where it can fail.
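Here is a minimal Python sketch of that difference, using a deliberately false hypothesis of my own ("every prime number is odd") instead of the card problem, so nothing is spoiled:

```python
# Hypothesis (deliberately false): "every prime number is odd".
# The hypothesis and all names here are mine, for illustration only.

def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

candidates = range(2, 100)

# Confirmation strategy: pile up cases where the hypothesis works.
confirming = [n for n in candidates if is_prime(n) and n % 2 == 1]
print(len(confirming), "confirming cases")    # many, yet they prove nothing

# Falsification strategy: hunt for cases where the hypothesis can fail.
counterexamples = [n for n in candidates if is_prime(n) and n % 2 == 0]
print("counterexamples:", counterexamples)    # [2] -- one case is enough
```

Twenty-four confirming cases leave the hypothesis exactly as unproven as zero would; the single counterexample settles the matter.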
And while falling prey to confirmation bias (as many people seem to do in the card problem) is a bad strategy, we do even worse than that. We actually choose not to look at arguments and data that contradict our beliefs, or we interpret them erroneously. Recently, Dan Kahan and collaborators have reported results showing that mathematically educated people make serious errors when analyzing data that conflicts with their personal opinions. In a control scenario, where the same data was about a neutral problem, people with better numeracy skills performed better at interpreting it. However, when the problem was gun control, a very controversial issue in the USA, people with better numeracy interpreted the data in ways that agreed with their initial points of view, regardless of what the data really said. People with better numeracy would become even more polarized on the subject than people less well trained in Mathematics. This strongly suggests that being smarter can mean having more ability to, perhaps unconsciously, distort the description of reality to conform to one's own point of view! A more pedestrian discussion of these results, by Mark Kaplan, can be found here.
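The task used in those experiments was a covariance-detection problem of roughly this shape: subjects had to judge from a 2x2 table of outcomes whether an intervention worked. A small Python sketch of the trap involved (the numbers below are my own illustration, not taken from the paper):

```python
# A covariance-detection problem of the kind used in these experiments.
#
#                    improved    got worse
#   with treatment      223          75
#   no treatment        107          21

improved_t, worse_t = 223, 75   # with treatment
improved_c, worse_c = 107, 21   # without treatment

# The tempting (wrong) reading compares raw counts: 223 > 107, so the
# treatment "obviously" works. The correct reading compares proportions
# within each row:
rate_t = improved_t / (improved_t + worse_t)
rate_c = improved_c / (improved_c + worse_c)
print(f"improved with treatment:    {rate_t:.0%}")   # 75%
print(f"improved without treatment: {rate_c:.0%}")   # 84%
# The untreated group actually did better; the raw counts mislead.
```

Getting this right takes a small amount of arithmetic, which is exactly why numeracy helps on the neutral version; the striking finding is that on the politically charged version it stops helping.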
1) This text will probably be expanded later. If and when I do, I will make a new post announcing it.
2) The correct answer to the cards problem will only appear here later. Try leaving yours in the Comments section. No cheating.
3) I strongly welcome suggestions of literature and results to add here. Note that probabilistic biases, as well as group and societal effects, will come later, in new entries. Therefore, they are not discussed here; only logical problems are.