Yesterday in class I introduced you briefly to the St. Petersburg Paradox. There was a good bit of resistance to the result that the most rational choice (according to decision theory models that focus solely on the maximization of expected value) is to choose the coin toss rather than the $100. We’re going to revisit St. Petersburg in a few weeks, but I did a poor job presenting the math behind the result yesterday, so I want to clarify that a bit.
I claimed that decision theory mandates choosing the coin toss because the expected value of that option is infinite, while the expected value of the first branch of the decision tree (taking the sure money) is only $100. And I said that it was infinite because the game is unbounded, i.e., the payouts grow without bound and, as a result, they “wash out” the vanishingly small probability of actually realizing them. I want to state this a bit more precisely. Assuming that each dollar of payout represents one unit of utility, i.e., there’s no diminishing marginal utility, we calculate the expected value as the sum, over all possible outcomes, of the probability of each payout times the value of that payout. The probability of a payout of $2 is .5 (or 1/2), of $4 is .25 (or 1/4), and so on. So we get:
EV = (2 x 1/2) + (4 x 1/4) + (8 x 1/8) + (16 x 1/16) + (32 x 1/32) + …
EV = 1 + 1 + 1 + 1 + 1 + …
EV = ∞
The numbers we’ve chosen here make the math easy, but the exact value of each term isn’t what matters: even if each toss contributed some fixed amount less than $1 to the expected value, say 50¢, the sum of infinitely many such terms would still be infinite. What the result requires is that the per-toss contribution stays bounded away from zero, not that it equals exactly $1.
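If you want to see the arithmetic for yourself, here is a minimal Python sketch of the partial sums of the series above (the function name is mine, just for illustration). Because each term is (payout) × (probability) = 2^k × (1/2)^k = 1, the partial sum after n terms is exactly n dollars, so the sum grows without bound as we add terms:

```python
def ev_partial_sum(n_terms: int) -> float:
    """Sum the first n_terms of the St. Petersburg series:
    sum over k of (2 ** k) * (0.5 ** k), each term equal to $1."""
    return sum((2 ** k) * (0.5 ** k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, ev_partial_sum(n))  # grows linearly: 10.0, 100.0, 1000.0
```

The partial sums never level off, which is just the numerical face of EV = ∞.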
There’s a hint above at one of the potential solutions to this paradox: we may need to take into account the diminishing marginal utility of each additional dollar. We’ll explore that and other solutions in a few weeks.
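As a preview of that idea, here is a hedged sketch of Bernoulli’s classic proposal: replace dollar value with logarithmic utility, u($x) = ln(x). The series then becomes the sum of (1/2)^k × ln(2^k) = ln(2) × (k / 2^k), which converges to 2·ln(2) ≈ 1.39 units of utility rather than diverging (the function name is again just illustrative):

```python
import math

def expected_log_utility(n_terms: int) -> float:
    """Partial sum of expected utility under u($x) = ln(x):
    sum over k of (0.5 ** k) * ln(2 ** k)."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, n_terms + 1))

print(expected_log_utility(50))  # ≈ 1.386, i.e. close to 2 * ln(2)
```

Whether this actually dissolves the paradox (a payout scheme that grows fast enough can restore an infinite expected utility even under log utility) is one of the questions we’ll take up later.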