Friday, January 17, 2014

Human Stupidity: Historical: Opinions I



Before moving on to other issues, there are still a couple of examples of our probabilistic thinking that I'd like to discuss. As we saw in the AIDS problem, the correct way to decide whether the patient was sick was first to consider the initial probability of the disease and then update it to a new, posterior value as we learn the result of the exam. This method is the basis of the Bayesian view in Statistics, and there is actual evidence that we reason in a way that resembles Bayesian methods from as early as 12 months of age.

But we do not do that perfectly, of course. In the AIDS problem, we saw that people generally simply disregard the prior information contained in the initial chance of 1 in 1,000. We just use the information from the exam, as if we had no good initial guess about the chance of the patient being sick. And, as a matter of fact, if we had started with equal chances, 50% instead of 0.1%, the final chance of the patient being sick would indeed have been 98%. But we knew better, and ignoring the prior information can cause a lot of unnecessary damage.
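To make the numbers concrete, here is a minimal sketch of the update. The test characteristics are my assumption, not given here: I use 98% sensitivity and a 2% false-positive rate, because those values reproduce the 98% posterior quoted for a 50% prior.

# Sketch of the Bayesian update in the AIDS example. The test
# characteristics (98% sensitivity, 2% false-positive rate) are assumed;
# they are chosen only to match the 98% figure quoted for a 50% prior.

def posterior_given_positive(prior, sensitivity=0.98, false_positive=0.02):
    """P(sick | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_positive

# Ignoring the base rate (treating the prior as 50%) gives about 98%...
print(posterior_given_positive(0.5))    # ~0.98
# ...but with the real prior of 1 in 1,000 the posterior is only ~4.7%.
print(posterior_given_positive(0.001))  # ~0.047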

This effect is known as base rate neglect. In 1973, Kahneman and Tversky presented a problem to several people where they had to guess, from a text description, whether the described person was a lawyer or an engineer. It was clearly stated that the described person was part of a group of 30 engineers and 70 lawyers. This alone should make it more likely that the person was a lawyer than an engineer. However, this piece of information was completely disregarded and only the text was used to make the inference. When the text was non-informative, with no clues pointing to either engineer or lawyer, people would state there was a 50-50 chance, instead of the correct 30-70.
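A quick sketch of why the correct answer stays at 30-70: if the description is equally likely to fit an engineer or a lawyer, the likelihood ratio is 1 and the posterior simply equals the prior.

# Minimal sketch: a non-informative description leaves the base rate intact.

def posterior_engineer(prior_engineer, p_desc_given_eng, p_desc_given_law):
    """P(engineer | description) via Bayes' theorem."""
    num = p_desc_given_eng * prior_engineer
    den = num + p_desc_given_law * (1 - prior_engineer)
    return num / den

# Non-informative description: same likelihood under both hypotheses.
print(posterior_engineer(0.30, 0.5, 0.5))  # 0.30, not the 0.50 people report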

At this point, it should come as no surprise that this is not the only mistake we make when changing our opinions. As a matter of fact, base rate neglect is not exactly an effect on how we change opinions; it happens in problems about which we have no initial opinion. In those cases, even when there is evidence that should serve as an initial opinion, that evidence is disregarded and only the new information is used. In many cases, however, people do have an opinion before new information is provided. In that case, they should use that opinion as a prior and update it following Bayes' theorem.

While this is a qualitatively correct description of what we seem to do, it is not exact when we look at it in numerical terms. Phillips and Edwards observed that, while people do change their opinions in the correct direction, making a statement more or less probable when given data, the amount of change is smaller than would be expected from a simple application of Bayes' theorem. They named this bias conservatism, as people tend to conserve their initial opinions more than they should. It is worth mentioning that they observed this in a completely neutral situation, where people were asked to evaluate from which bag a set of red and blue chips had come. The subjects were told there were two possible bags, one with 700 red chips and 300 blue ones, the other with 300 red chips and 700 blue ones. If, after taking 12 chips from one of the bags, 8 turned out to be red and 4 blue, the question is how likely it is that those chips came from the bag with a majority of red chips. You can ask yourself, as a reader, what probability you would state. Phillips and Edwards observed that people tended to answer around 70%. However, assuming both bags were equally likely at the start, the correct value represents a much larger change from the initial 50%: the final probability is actually approximately 97%.
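Here is a sketch of that calculation, assuming draws with replacement, as these experiments are usually described, and equal prior odds for the two bags.

# Sketch of the Phillips and Edwards calculation: two bags, one 70% red /
# 30% blue, the other 30% red / 70% blue, equal priors, and a sample of
# 8 red and 4 blue chips (assumed drawn with replacement).

def posterior_red_majority(n_red, n_blue, prior=0.5):
    """P(red-majority bag | sample)."""
    like_red_bag = 0.7 ** n_red * 0.3 ** n_blue
    like_blue_bag = 0.3 ** n_red * 0.7 ** n_blue
    num = like_red_bag * prior
    return num / (num + like_blue_bag * (1 - prior))

print(posterior_red_majority(8, 4))  # ~0.967, not the ~70% people report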

While this tendency to change opinions too little might at first look like a simple effect of analysing an unfamiliar problem, that is not the case. Even in the world of corporate finance, there is evidence that investors tend to under-react to information. Baron has an interesting section on this problem, which he calls the persistence of irrational beliefs, where he cites some of the literature in the area. This includes studies showing that the order in which data is presented affects the final opinion of individuals, even when that order is irrelevant and carries no new information.

One interesting study on this primacy effect, where first observations carry more weight than later ones even when they should not, was conducted by Peterson and DuCharme. As in the Phillips and Edwards study, the question was to find out from which urn a set of poker chips was more likely to have been drawn. Urn C had 3 red, 2 blue, 2 yellow, 1 green and 2 white chips, while urn D contained 2 red, 3 blue, 1 yellow, 2 green and 2 white chips. One urn was shown to the subjects and chips were drawn from it and returned to it, one at a time. After each draw, the subjects were asked to evaluate the probability that the urn they were drawing from was urn C. However, the draws were not random but arranged so that the first 30 draws favored the idea that it was urn C, while the following 30 draws favored D, as an exact mirror of the first 30. That is, the total evidence in favor of each urn canceled after the 60 draws, and the final opinion should have been equal chances for each urn. But, since the subjects started out believing C was more probable, they showed a very clear tendency to stick to that initial evaluation. It typically took a series of 50 draws favoring D to counter the initial 30 draws supporting C.
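The cancellation can be checked directly. The exact draw sequence is not given here, so the one in the sketch below is invented purely for illustration; the point is only that any 30 draws favoring C, followed by their mirror image (red swapped with blue, yellow with green), bring a rational observer back to exactly 50%.

# Hypothetical reconstruction of the Peterson and DuCharme setup; the draw
# sequence below is invented, only the urn compositions come from the text.

URN_C = {'red': 0.3, 'blue': 0.2, 'yellow': 0.2, 'green': 0.1, 'white': 0.2}
URN_D = {'red': 0.2, 'blue': 0.3, 'yellow': 0.1, 'green': 0.2, 'white': 0.2}
MIRROR = {'red': 'blue', 'blue': 'red', 'yellow': 'green',
          'green': 'yellow', 'white': 'white'}

def posterior_c(draws, prior=0.5):
    """P(urn C | draws), assuming draws with replacement."""
    like_c = like_d = 1.0
    for chip in draws:
        like_c *= URN_C[chip]
        like_d *= URN_D[chip]
    return like_c * prior / (like_c * prior + like_d * (1 - prior))

# An invented first half of 30 draws that favors urn C...
first_half = (['red'] * 12 + ['yellow'] * 8 + ['blue'] * 5 +
              ['green'] * 3 + ['white'] * 2)
# ...followed by its exact mirror image, which favors urn D just as strongly.
second_half = [MIRROR[chip] for chip in first_half]

print(posterior_c(first_half))                # well above 0.5 after 30 draws
print(posterior_c(first_half + second_half))  # back to exactly 0.5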
