Probabilistic thinking still refers to reasoning performed by a single individual. But, unlike the cases presented above, the reasoning concerns a subject one cannot be certain about. That means there are no absolutely true answers, in contrast to the case of the cards, where one can eventually convince oneself that some answers are exactly correct and others exactly wrong. This does not mean there is no evidence favouring one possibility over another. If you are evaluating whether it will rain today, the fast formation of dark clouds in the sky does tell you rain is very likely. But there is still a possibility that rain won't come, or that it does rain, but somewhere other than where you are. In principle, of course, you might have no probabilities associated with these outcomes. However, as we will discuss when talking about induction and probability, there is no theoretical reason why you couldn't associate subjective probabilities with each outcome. How to do that remains an open problem, but the possibility exists. Therefore, inductive and probabilistic reasoning will be used here as synonyms, except when noted otherwise.
In 1947, von Neumann and Morgenstern published Theory of Games and Economic Behavior, the book that introduced Expected Utility Theory (EUT). It is worth noting that a lot of confusion can happen here, caused by the not technically correct name of their idea. The problem is that EUT is not actually a theory, in the sense of a well-tested set of ideas that describes the world well. Instead, it is better understood as a norm, a prescription for correct reasoning, in the same way Aristotelian logic is. In this sense, its correctness should not be evaluated by comparing it with how people actually think. EUT can, in principle, fail as a theory, as a description of the real world, and yet be rationally correct. In the book, von Neumann and Morgenstern laid out the basis for how a rational being should behave in situations of uncertainty. They did this by assuming that, when deciding on a course of action, rational individuals would be able to assign a probability to each possible future outcome as well as measure how much they value that outcome. Different choices of action would influence the world, and therefore the probabilities could be conditional on the choice made.
The classical example gives you the opportunity to choose between two bets. For example, in bet A you would receive $100.00 with a chance of 50%, getting nothing the other 50% of the time; bet B, on the other hand, gives you $40.00 with certainty. In this case, according to expected utility theory, each individual should assign a utility value to the possible gains. This utility does not need to be a linear function of the monetary value; that is, doubling the money does not necessarily double the utility. It should also depend on your total wealth, since $100.00 would obviously be much more useful to you if you were broke and unemployed than if you were a billionaire.
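The comparison between the two bets can be sketched in a few lines of code. The square-root utility below is just an illustrative assumption (any concave function would show the same effect, namely risk aversion):

```python
import math

def expected_utility(outcomes, utility):
    """Expected utility of a bet given (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

# Hypothetical concave utility: extra dollars are worth less as you get richer.
u = math.sqrt

bet_a = [(0.5, 100.0), (0.5, 0.0)]  # $100.00 with 50% chance, else nothing
bet_b = [(1.0, 40.0)]               # $40.00 with certainty

print(expected_utility(bet_a, u))  # 5.0
print(expected_utility(bet_b, u))  # ≈ 6.32
```

Note that bet A has the higher expected monetary value ($50.00 versus $40.00), yet under this concave utility bet B has the higher expected utility, which is how EUT accommodates a preference for the sure $40.00.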
An interesting problem to play with is the St. Petersburg Paradox. Assume you can enter a bet by paying a fee. In this bet, a coin will be tossed as many times as it takes until it lands heads. If you get heads on the first toss, you receive $1.00; if it happens on the second toss, $2.00; each extra toss needed to get heads doubles the amount of money you receive. If you get very lucky and heads only happens on the 10th toss, you would actually receive $512.00. How much would you be willing to pay as a fee to enter this game? Check your math.
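Checking the math is straightforward: heads first appears on toss n with probability 1/2^n and pays 2^(n-1), so every term of the expected-value sum contributes exactly 1/2, and the sum diverges. A short sketch of the truncated sum:

```python
# Expected monetary value of the St. Petersburg bet, truncated at max_tosses.
# Each term (1/2**n) * 2**(n-1) equals exactly 1/2, so the total grows
# without bound as more tosses are allowed.
def expected_value(max_tosses):
    return sum((0.5 ** n) * 2 ** (n - 1) for n in range(1, max_tosses + 1))

print(expected_value(10))   # 5.0
print(expected_value(100))  # 50.0
```

The expected monetary value is infinite, yet few people would pay more than a few dollars to play, which is precisely why the paradox motivates utilities that grow more slowly than money.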
Not many years after von Neumann and Morgenstern, Savage extended EUT to include subjective probabilities in his book The Foundations of Statistics. Unlike what is usually taught nowadays in most introductory probability courses, probability need not be defined as the proportion of times an event would happen if we made infinitely many measurements. That definition exists and is called frequentist. But probability can also be defined as an individual's degree of belief in the truth of a proposition. This leads to a probability assessment that is subjective, in the sense that it depends on the data available to each individual and also on the initial probabilities (known as priors) each person chooses.
This definition is the one used in Bayesian statistics (see, for example, the books by Bernardo, Bayesian Theory, or by O'Hagan, Bayesian Inference) and, despite problems with defining those priors, Bayesian methods can be shown to respect principles of reasoning in a way the frequentist definition fails to (a very good introduction to Bayesian methods as a logically sound framework can be found in Jaynes' Probability Theory: The Logic of Science). However, from an operational point of view, Bayesian methods depend on the specification of initial knowledge by means of the prior probability distribution, something we humans do not do well. To deal with this type of problem, extensions exist that consider imprecise probabilities, as proposed by Keynes in his A Treatise on Probability.
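The role of the prior can be illustrated with the simplest textbook case, the Beta-Bernoulli model for a coin's bias (this is a standard example, not one taken from the books above). Two people with different subjective priors update on the same evidence and end up close, but not identical:

```python
# Beta-Bernoulli updating: a Beta(a, b) prior on the probability of heads,
# combined with observed tosses, gives a Beta(a + heads, b + tails) posterior.
def posterior_mean(prior_a, prior_b, heads, tails):
    a = prior_a + heads
    b = prior_b + tails
    return a / (a + b)

# Different priors, same evidence: 70 heads in 100 tosses.
print(posterior_mean(1, 1, 70, 30))    # ≈ 0.696 (weak, uniform prior)
print(posterior_mean(10, 10, 70, 30))  # ≈ 0.667 (strong prior belief in fairness)
```

With enough data the two posteriors converge, but with little data the choice of prior dominates the conclusion, which is exactly the operational difficulty the paragraph above describes.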