Monday, December 30, 2013

On Human Stupidity: Historical: Probability Thinking II

When facing a problem with uncertain outcomes, of course, we do not think in probabilistic terms. At least, the vast majority of us don't. Ideally, though, we could hope that, whatever our minds do, they would respect basic principles of rationality. On the other hand, at this point, given our failure at simple logical reasoning, it should come as no surprise that, when our abilities for reasoning under uncertainty were tested, it soon became clear that we fail to follow those principles.

Early tests of EUT showed clearly that we do not reason in a way that is compatible with it, if EUT is taken as a description. Those results were initially called paradoxes of decision making, namely Allais' and Ellsberg's paradoxes, despite the fact that there is nothing paradoxical about them. Both experiments simply showed that people sometimes do not obey the principle that a choice between two bets should depend only on the aspects where the bets differ, not on those where they are equal. This principle is known as the Cancellation Principle.

Things might not have been so serious if that were all. But the literature is filled with examples of the mistakes we make and with attempts to understand what we actually do. In 1979, Kahneman and Tversky showed that if one assumes we transform the probabilities we know into different values, by using what they called weighting functions, we can still describe our reasoning, at least in the Allais and Ellsberg paradoxes, using EUT with those altered probabilities.

Basically, what they observed is that, when we get close to certainty, that is, when the probability of something happening gets close to zero or one, we make decisions as if there were more uncertainty than there really is. This is actually a well-observed phenomenon. For example, people usually make bets on lotteries that show they consider their chance of winning to be much larger than it really is. If there is just one chance in 50 million that you will win $10 million, that means that, on average, you would win $0.20 per bet. But people happily pay more than that to enter such a lottery (actually, if you do think about utilities, the problem is far more serious, since the utility of each dollar decreases as you have more). What Kahneman and Tversky proposed in their Prospect Theory is that people actually work with a modified probability value. In the lottery example, if you think your chance is actually one in 100,000, instead of one in 50,000,000, it might make sense to pay up to $100.00 (if you don't correct for decreasing utility; if you do, the value would be smaller, but it can still be much higher than $0.20). The same effect is observed at the other extreme, where there is just a very small chance that something will NOT happen.
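The lottery arithmetic above can be checked in a few lines. This is just a sketch using the numbers from the text; the "felt" probability of one in 100,000 is the hypothetical overweighted value, not a measured one.

```python
# Expected value of one lottery bet under the true odds versus
# an inflated subjective ("felt") probability of winning.
prize = 10_000_000            # $10 million jackpot

true_p = 1 / 50_000_000       # actual chance of winning
felt_p = 1 / 100_000          # hypothetical overweighted chance

ev_true = true_p * prize      # fair value of one bet
ev_felt = felt_p * prize      # value as it "feels" to the bettor

print(f"EV with true odds: ${ev_true:.2f}")   # $0.20
print(f"EV with felt odds: ${ev_felt:.2f}")   # $100.00
```

With the felt odds, paying a few dollars for a ticket suddenly looks like a bargain, which is exactly the behaviour the weighting functions were introduced to describe.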

It was later shown in newer experiments that Prospect Theory and other proposals that came after it cannot fully explain our mistakes. Birnbaum performed a series of experiments showing that we even disobey a principle called stochastic dominance. Stochastic dominance is basically the simple fact that we should not choose alternatives that are obviously worse in at least one aspect, while equal in all others. One example he tested was a choice between the bets G and G+, given by

G
90% chance to win $96.00
10% chance to win $12.00

G+
90% chance to win $96.00
5% chance to win $14.00
5% chance to win $12.00

Clearly, the option G+ is better, since it is exactly the same 95% of the time and, in the remaining 5%, it pays $2.00 more. That is what is meant by saying that G+ stochastically dominates G. However, what Birnbaum observed consistently, in a number of choices like this, is that people would often pick the worse bet!
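The dominance is easy to verify numerically. A minimal sketch, representing each bet as a list of (probability, payoff) pairs taken directly from the text:

```python
# Expected values of Birnbaum's bets G and G+.
def expected_value(bet):
    """Sum of probability-weighted payoffs for a list of (p, payoff) pairs."""
    return sum(p * x for p, x in bet)

G      = [(0.90, 96.00), (0.10, 12.00)]
G_plus = [(0.90, 96.00), (0.05, 14.00), (0.05, 12.00)]

print(f"EV(G)  = {expected_value(G):.2f}")       # 87.60
print(f"EV(G+) = {expected_value(G_plus):.2f}")  # 87.70
```

G+ is worth 10 cents more on average, and, more importantly, it is at least as good as G in every possible outcome; picking G is a mistake under any utility function that prefers more money to less.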

Saturday, December 21, 2013

Human Stupidity: Historical: Probabilistic thinking I

Probabilistic thinking still refers to reasoning made by just one individual. But, unlike the cases presented above, the reasoning happens about a subject one cannot be certain about. That means there are no absolutely true answers, in contrast to the case of the cards, where one can finally convince oneself that some answers are exactly correct and others exactly wrong. This does not mean that there is no evidence favouring one possibility over another. If you are evaluating whether it will rain today, the fast formation of dark clouds in the sky does tell you it is very likely it will rain. But there is still the possibility that rain won't come, or that it does rain, but somewhere other than where you are. In principle, of course, you might have no probabilities associated with these outcomes. However, as we will discuss when talking about induction and probability, there is no theoretical reason why you couldn't associate subjective probabilities with each outcome. How to do that is an open problem, but the possibility exists. Therefore, inductive and probabilistic reasoning will be used here as synonyms, except when noted otherwise.


In 1947, von Neumann and Morgenstern published Theory of Games and Economic Behavior, the book that laid out Expected Utility Theory, EUT. It is worth noting that a lot of confusion can arise here, caused by the not technically correct name of their idea. The problem is that, as a norm, EUT is not actually a theory, in the sense of a well-tested set of ideas that describes the world well. Instead, it should be understood as a prescription for correct reasoning, in the same way Aristotelian Logic is. In this sense, its correctness should not be evaluated by comparison with how people actually think. EUT can, in principle, fail as a theory, as a description of the real world, and yet be rationally correct. In the book, they laid out the basis for how a rational being should behave in situations of uncertainty. This was done by assuming that, when deciding on a course of action, these rational individuals would be able to assign a probability to each possible future outcome, as well as measure how much they value that outcome. Different choices of action would influence the world and, therefore, the probabilities could be conditional on the choice made.


The classical example gives you the opportunity to choose between two bets. For example, in bet A, you would receive $100.00 with a chance of 50%, getting nothing the other 50% of the time; bet B, on the other hand, gives you $40.00 with certainty. In this case, according to expected utility theory, each individual should assign a utility value to the possible gains. This utility does not need to be a linear function of the monetary value; that is, doubling the money does not necessarily mean doubling the utility. It should also depend on your total wealth, since it is obvious that $100.00 would be much more useful to you if you were broke and unemployed than if you were a billionaire.
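The wealth dependence can be made concrete with a small sketch. Here I assume a logarithmic utility of total wealth, a common illustrative choice, not something EUT itself prescribes, and compare bets A and B at two (hypothetical) wealth levels:

```python
import math

def expected_utility(bet, wealth):
    """Expected log-utility of final wealth for a list of (p, gain) pairs."""
    return sum(p * math.log(wealth + gain) for p, gain in bet)

bet_A = [(0.5, 100.00), (0.5, 0.00)]  # $100 with 50% chance, else nothing
bet_B = [(1.0, 40.00)]                # $40 for sure

# With little wealth the sure $40 wins; with a lot, the risky bet does.
for wealth in (50, 100_000):
    a = expected_utility(bet_A, wealth)
    b = expected_utility(bet_B, wealth)
    print(f"wealth ${wealth}: prefer {'A' if a > b else 'B'}")
```

The same pair of bets flips preference as wealth grows: a nearly broke agent takes the sure $40, while a wealthy one, for whom utility is almost linear over such small amounts, goes for bet A's higher expected value of $50.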


An interesting problem to play with is the St. Petersburg Paradox. Assume you can enter a bet by paying a fee. In this bet, a coin will be tossed as many times as it takes until it lands heads. If you get heads on the first toss, you will get $1.00; if it happens on the second toss, $2.00; each new toss needed to get heads doubles the amount of money you get. If you get very lucky and heads only happens on the 10th toss, you would actually receive $512.00. How much would you be willing to pay as a fee to enter this game? Check your math.
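To see why the game is paradoxical, note that heads first appearing on toss k pays 2^(k-1) dollars and has probability (1/2)^k, so every possible stopping point contributes exactly $0.50 to the expected value, and the sum diverges. A quick check:

```python
# Partial expected value of the St. Petersburg game, truncated at max_tosses.
# Each term is (1/2)**k * 2**(k-1) = 0.5, so the total is max_tosses / 2.
def partial_expected_value(max_tosses):
    return sum((0.5 ** k) * (2 ** (k - 1)) for k in range(1, max_tosses + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_value(n))  # 5.0, 50.0, 500.0
```

The expected payoff grows without bound as more tosses are allowed, yet few people would pay more than a few dollars to play, which is precisely where diminishing utility enters the discussion.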


Not many years after von Neumann and Morgenstern, Savage extended EUT to include subjective probabilities in his book The Foundations of Statistics. Unlike what is usually taught nowadays in most introductory probability courses, probability is not necessarily defined as the proportion of times an event would happen if we made infinite measurements. This definition exists and is called frequentist. But probability can also be defined as an individual's degree of belief in the truth of a proposition. This leads us to a probability assessment that is subjective, in the sense that it depends on the data available to each individual and also on the initial probabilities (known as priors) each person chooses.


This definition is the one used in Bayesian Statistics (see, for example, the books by Bernardo, Bayesian Theory, or by O'Hagan, Bayesian Inference) and, despite problems with defining those priors, Bayesian methods can be shown to respect principles of reasoning in a way that the frequentist definition fails to (a very good introduction to Bayesian methods as a logically sound framework can be found in Jaynes' Probability Theory: The Logic of Science). However, from an operational point of view, Bayesian methods depend on the specification of initial knowledge by means of the prior probability distribution, something we humans do not do well. To deal with this type of problem, extensions exist that consider imprecise probabilities, as proposed by Keynes in his A Treatise on Probability.
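A minimal sketch of how a subjective prior gets revised by data may make this concrete. All the numbers are hypothetical: a prior belief of 0.5 that a coin is biased towards heads (p = 0.8 rather than the fair p = 0.5), updated by Bayes' rule after observing 8 heads in 10 tosses:

```python
from math import comb

def binomial_likelihood(p, heads, tosses):
    """Probability of seeing `heads` heads in `tosses` tosses with bias p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

prior_biased = 0.5                                  # subjective prior belief
like_biased  = binomial_likelihood(0.8, 8, 10)      # data likelihood if biased
like_fair    = binomial_likelihood(0.5, 8, 10)      # data likelihood if fair

# Bayes' rule: posterior is proportional to prior times likelihood.
posterior_biased = (prior_biased * like_biased) / (
    prior_biased * like_biased + (1 - prior_biased) * like_fair)

print(f"posterior P(biased) = {posterior_biased:.3f}")  # about 0.873
```

Two people with different priors would reach different posteriors from the same ten tosses, which is exactly the sense in which this probability is subjective; with enough data, their assessments would nevertheless converge.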

Wednesday, December 18, 2013

Season Carols - Albert Einstein is coming to town - Happy Newtonmas (Feliz Newtal)

Albert Einstein is coming to town

(My very obviously slightly altered version from "Santa Claus is coming to town")

 
You better watch out
You better not cry
Better think right
I'm telling you why
Albert Einstein is coming to town

He's making a list
And checking it twice
Gonna find out who's dumb or bright
Albert Einstein is coming to town

He sees you when you're thinking
He knows when you're a fake
He knows if you've been silly or smart
So be smart for smartness sake!

Oh! You better watch out!
You better not cry
Better think right
I'm telling you why
Albert Einstein is coming to town
Albert Einstein is coming to town

Saturday, December 14, 2013

On Human Stupidity, part III - A Short Historical Perspective b

The literature on how far our reasoning is from optimal is already huge and it keeps growing every day. I have no intention of even trying to be comprehensive here. While the subject is very interesting, ultimately, this whole text of mine is about how we can try to get as close to good answers as possible. Answers about anything, and that, of course, means something some people would call the scientific method, despite the problems with that name. There are already good introductory texts on the psychological aspects of human reasoning, such as the two I mentioned in the first post of this history of human stupidity (Plous and Baron), and I strongly encourage anyone interested in the matter to read them and others, as well as the many papers in dedicated journals and sites. Currently, there is a very nice list of online resources and academic journals at http://www.sjdm.org/links.html.


What I find crucial is to understand how limited we actually are. The examples here serve the didactic purpose of educating people on this specific question and do not, by any means, replace the existing literature. As we will discuss later throughout the book, and especially in the chapter "The Real Strength of Science", it is fundamental to know what the really serious scientific community is discussing. Not because it is correct; scientists who understand Epistemology well should never actually make truth claims about the real world. But because Science is always the best answer we have at the moment. On a sidenote, I just love this phrase from James Frazer in The Golden Bough: "It is therefore a truism, almost a tautology, to say that all magic is necessarily false and barren; for were it ever to become true and fruitful, it would no longer be magic but science".



And, of course, in order to illustrate our known failings, likely to be a characteristic of the Homo sapiens species, I take to class not just the card problem, but a number of now classical examples of our human stupidity. The second traditional example from the psychological literature I like to present to my students is already based on probability evaluations. It is now known as the Linda problem. The text I present them is this:



"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which of the following two alternatives is more probable?
  1. Linda is a bank teller.
  2. Linda is a bank teller and active in the feminist movement." (Tversky and Kahneman, 1983)

It is obvious, after careful inspection, that if the second alternative is true, the first must also be. Therefore, we can easily prove that the first alternative must be at least as probable as the second. Equality would be theoretically possible, but it would demand that we be absolutely sure that, if Linda is a bank teller, there is no chance at all that she is not active in the feminist movement. So, despite mentioning probabilities, the answer is known for certain here.
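The conjunction rule behind this, P(teller and feminist) ≤ P(teller), holds no matter what numbers we plug in. A brute-force sketch over random (entirely hypothetical) probability assignments:

```python
# P(A and B) = P(A) * P(B|A) can never exceed P(A), because P(B|A) <= 1.
import random

random.seed(0)
for _ in range(1000):
    p_teller = random.random()                 # any P(teller) in [0, 1]
    p_feminist_given_teller = random.random()  # any P(feminist | teller)
    p_both = p_teller * p_feminist_given_teller
    assert p_both <= p_teller

print("the conjunction never beat its own conjunct in 1000 random trials")
```

No assignment of probabilities can make the second alternative more probable, which is why the popular answer to the Linda problem is a provable mistake, now known as the conjunction fallacy.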



Amazingly, many people get somehow drawn in by the word feminist in the second alternative, which seems to fit Linda's description better, and pick it. But the question is not whether she is more likely to be a feminist or a teller. While the exact reason we do this is not completely clear, for me, there is some amount of evil fun in watching the faces of students realizing they are failing miserably at trivial problems; that their intuition cannot be trusted at all. This is a lesson I can only wish they will carry through their lives, allowing them to be much more careful in their reasoning. And, hopefully, better at the decision making and judgement problems they will face.



Examples of our failure are numerous. And, while this previous example had a certain answer, it does raise the point that, in real life, it is quite common that the best we can hope to achieve in a specific situation is a solid probabilistic assessment of the problem. Which brings us to the question of how we deal with problems where there is uncertainty of some kind. Remember, we already fail where there is certainty to be had. Try to guess how we, as a species, will fare next, when we check what is known about probabilistic reasoning.


#reasoning #biases

Card Problem answer (from the Historical Perspective entry)

The correct answer for the card problem is the set of cards "A" and "5". Almost everyone gets "A" correctly. It is indeed quite obvious that if we turn it and find an odd number, the rule is proven wrong. "B" cannot do that, despite some people picking it, simply because nothing was said about what happens when the letter is a consonant. "2" is a very interesting case. It can provide confirmation, but not proof, of the rule, if we find a vowel on the other side. But if it is a consonant, that means nothing. It is a wonderfully safe check of the rule, since the rule cannot be proven wrong by it! Hence, it is a terrible test of the rule, and a wrong answer. On the other hand, while "5" is often neglected, if we find a vowel on its other side, the rule is indeed broken.
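The selection logic can be written out explicitly. A sketch, assuming the rule "if a card has a vowel on one side, it has an even number on the other" and the four visible faces from the problem:

```python
# A card is worth turning only if what is hidden could falsify the rule:
# a visible vowel might hide an odd number, and a visible odd number
# might hide a vowel. Consonants and even numbers can never break the rule.
def can_falsify(visible):
    if visible.isalpha():
        return visible in "AEIOU"   # a vowel might hide an odd number
    return int(visible) % 2 == 1    # an odd number might hide a vowel

cards = ["A", "B", "2", "5"]
print([c for c in cards if can_falsify(c)])  # ['A', '5']
```

Running the check recovers exactly the answer above: only "A" and "5" can possibly disprove the rule, so they are the only cards worth turning.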

Thursday, December 12, 2013

Just a simple post while other related things get done

This is almost just a filler.

I am still working on the blog and the texts, but I made a little detour. I just had to learn how the very nice Tufte LaTeX style really works, since the plan is to make a book out of what I write here. I now have the first posts nicely displayed in that style for books. It is more beautiful to look at (sorry, no previews right now, maybe soon) and this should help me write more. Both because I can see it as a book (and I am old enough to love books much more than blogs) and for the aesthetic value as well.

More to come on the Historic Perspective soon.