
Thursday, January 9, 2014

Human Stupidity: Historical: Probability Thinking III

If we only made mistakes when interpreting stated probability values but were able to analyse data correctly, the errors I just described wouldn't be so serious. After all, probability is a recent discipline we invented to deal with uncertain outcomes. If our brains could interpret data correctly but not stated probability values, that would just mean we should be careful in how we present evidence and results. Indeed, Gigerenzer and Hoffrage observed that, under some circumstances, people make fewer mistakes when presented with frequency information rather than probability values.
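To see what the frequency effect amounts to, here is a minimal sketch in Python of the kind of diagnostic problem used in that line of research. The numbers are illustrative assumptions of mine, not figures from their paper; the point is only that the frequency version reaches the same answer by counting people instead of manipulating conditional probabilities, which is the format people tend to handle better.

```python
# Illustrative diagnostic-test problem (numbers are assumptions, not
# taken from Gigerenzer and Hoffrage's paper).
base_rate = 0.01        # P(disease)
sensitivity = 0.80      # P(positive | disease)
false_positive = 0.096  # P(positive | no disease)

# Probability format: apply Bayes' theorem directly.
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
posterior = base_rate * sensitivity / p_positive
print(f"Bayes' theorem: {posterior:.3f}")  # ~0.078

# Frequency format: the same numbers as counts out of 1000 people.
n = 1000
sick = n * base_rate                             # 10 people are sick
sick_positive = sick * sensitivity               # 8 of them test positive
healthy_positive = (n - sick) * false_positive   # ~95 false positives
print(f"Frequencies: {sick_positive / (sick_positive + healthy_positive):.3f}")
```

Both computations print roughly 0.078; only the presentation differs.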

Unfortunately, although different ways of presenting a problem can improve our capacity to analyse it, we also have a tendency to misinterpret the data itself. Cohen et al. asked people to estimate the probability of winning a two-stage lottery with an equal chance of winning or losing at each stage, and observed that they tend to overestimate their chances. Since winning requires succeeding at both independent stages, the correct chance is 0.5 × 0.5 = 25%; the average estimate they observed was 45%. Misjudging chances that badly can easily lead anyone to wrong decisions.
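A quick Monte Carlo check of the lottery arithmetic, as a sketch (the two-stage, 50/50 structure is from the experiment as described above; the simulation itself is mine):

```python
import random

# Two-stage lottery: you must win both independent 50/50 stages,
# so P(win) = 0.5 * 0.5 = 0.25.
trials = 100_000
wins = sum(
    1 for _ in range(trials)
    if random.random() < 0.5 and random.random() < 0.5
)
print(wins / trials)  # ~0.25, far from the 45% people estimated on average
```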

Along the same lines as the errors in mathematical analysis that Kahan et al. observed, people also see things in data that are not there, while failing to notice effects the data actually supports. Chapman and Chapman (see also this) presented pairs of words on a large screen to several people in order to test how we perceive correlations. The pairs were presented so that each word from the first set was shown an equal number of times with each word from the second set. Nevertheless, the subjects reported that pairs that made more sense together, such as lion and tiger, or eggs and bacon, had shown up more often than pairs with no logical relation between the words. This illusory correlation effect, where a non-existent correlation is perceived, was confirmed by later studies.
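The crucial feature of the design is that every cross-set pairing appears equally often, so there is literally no correlation to detect. A small sketch of that structure, with word lists of my own choosing rather than the originals:

```python
from itertools import product
from collections import Counter
import random

# Each word from the first set is paired equally often with each
# word from the second set (word lists are illustrative, not the
# originals used by Chapman and Chapman).
first = ["lion", "bacon", "boat"]
second = ["tiger", "eggs", "notebook"]

repetitions = 20
pairs = list(product(first, second)) * repetitions
random.shuffle(pairs)  # presentation order is random; frequencies are not

counts = Counter(pairs)
print(counts[("lion", "tiger")])     # 20
print(counts[("lion", "notebook")])  # 20 -- identical, despite perception
```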

The opposite case has also been observed: in problems where people expect to find no correlation between variables, they fail to notice real correlations in the data, and when the correlation is very strong, they tend to judge it smaller than it really is. This was first observed by Hamilton and Rose, who called the effect invisible correlation. One interesting thing about both the illusory and invisible correlation biases is that our prior beliefs seem to play a central role in how we interpret the data we obtain. While prior information should indeed affect our final opinions after observing data, it should not change what the data itself says.

The list of probabilistic biases that have been observed is huge, but it is not the objective of this text to get anywhere close to an exhaustive treatment of the problem. My goal here is just to convince people that our so often praised human rationality is actually far worse than we would like to admit, or, at least, to convince them to check the existing literature and see for themselves. Still, it is worth showing a few more cases before moving on to other subjects.
