Maximum Expected Utility

Expected Value Theory

Expected Value:

In a gamble in which you have probability p of winning $X (e.g., a 10% chance means p = 0.1), the expected value is pX. If there is probability p of winning $X and probability q of winning $Y, then EV = pX + qY. Expected value theory says you should always choose the option with the highest expected value.

Examples of Expected Value Theory:
10% chance of $90 [if the given probabilities add up to less than 100%, you can assume the payoff is otherwise $0]:
EV = 0.1 * 90 = $9
50% chance of $200:
EV = 0.5 * 200 = $100
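The same arithmetic can be written as a short Python sketch; the function name and the (probability, payoff) representation are illustrative choices here, not anything prescribed by the theory.

```python
# Minimal sketch: expected value of a gamble given as (probability, payoff) pairs.
# Any probability mass not listed is assumed to pay $0, as noted above.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

print(expected_value([(0.1, 90)]))   # 9.0
print(expected_value([(0.5, 200)]))  # 100.0
```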
However, expected value theory sometimes conflicts with people's intuitions, and researchers attribute this discrepancy to a flaw in the theory itself.

Consider this example:
You have to choose between:
(A) $1 million for sure OR (B) 50% chance of $3 million
EV theory says you should prefer B to A, since EV(B) = 0.5 * $3 million = $1.5 million, which is greater than EV(A) = $1 million. Yet many people prefer A to B.
Why do people prefer A to B, in contrast to the advice given by EV theory?

Here's another example:
A. 99% chance of $1 million OR B. $1 for sure
Most people would choose the gamble A over the sure thing B, which agrees with EV theory (EV(A) = $990,000 vs. EV(B) = $1).

Expected Utility Theory

Expected Utility:

Most decision researchers explain the pattern of choices in the first example by saying that the satisfaction we’d get from $3 million isn’t that much greater than the satisfaction we’d get from $1 million.
We can construct a scale, called a utility scale, in which we try to quantify the amount of satisfaction (UTILITY) we would derive from each option.
Suppose you use the number 0 to correspond to winning nothing and 100 to correspond to winning $3 million.
What number would correspond to winning $1 million?

Suppose you said 80. This means that the difference between what you have now and a million extra dollars is four times as great as the difference between a million and three million extra dollars.
We can express the original choice between A and B in terms of these units (UTILES) instead of dollars.
A) 80 utiles for sure: 1 * 80 = 80
B) 50% chance of 100 utiles: 0.5 * 100 = 50
You calculate Expected Utility by multiplying probabilities and utility amounts.

EU = P(X) * U(X), where U(X) is the number of utiles assigned to outcome X.

So, EU(A)=80. EU(B)=50.
Expected utility theory says if you rate $1 million as 80 utiles and $3 million as 100 utiles, you ought to choose option A.
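For concreteness, here is a minimal Python sketch of that calculation, using the utile ratings assumed above (0 for nothing, 80 for $1 million, 100 for $3 million):

```python
# Sketch: the same choice expressed in utiles rather than dollars.
utility = {0: 0, 1_000_000: 80, 3_000_000: 100}  # ratings assumed in the text

eu_a = 1.0 * utility[1_000_000]                     # sure $1 million   -> 80.0
eu_b = 0.5 * utility[3_000_000] + 0.5 * utility[0]  # 50% of $3 million -> 50.0
print("Choose A" if eu_a > eu_b else "Choose B")    # Choose A
```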

Diminishing Marginal Utility

EU theory captures the very important intuition that there is diminishing marginal utility (DMU) of money. Definition of DMU: the value of an additional dollar DECREASES as total wealth INCREASES. The change in your life when you go from $0 to $1 million is larger than the change in your life when you go from $1 million to $2 million.
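A tiny sketch can illustrate this, using an illustrative concave utility function (square root here; any concave function shows the same pattern):

```python
# Sketch of diminishing marginal utility with a concave utility function.
import math

def u(wealth):
    return math.sqrt(wealth)  # illustrative choice, not the "true" utility of money

gain_first_million = u(1_000_000) - u(0)            # 1000.0
gain_second_million = u(2_000_000) - u(1_000_000)   # about 414.2
print(gain_first_million > gain_second_million)     # True: the first million matters more
```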

Difference between EV & EU

In expected value theory, the correct choice is the same for all people.
In expected utility theory, what is right for one person is not necessarily right for another person.
Consider a decision of the following form:
Option A: p% chance of $X
Option B: q% chance of $Y

Once you specify what p, q, X, Y are, then EV theory will give VERY SPECIFIC ADVICE. It will say one of the following things:
1. You should choose A
2. You should choose B
3. You should be indifferent between A and B

In contrast, EU theory says
1. You might choose A
2. You might choose B
3. You might be indifferent between A and B
It all DEPENDS on the utilities you assign to X and Y.
Let us look at an example to understand this better:
Option A: 90% chance of $100
Option B: 20% chance of $500
EV theory says that you SHOULD prefer B.
EV(A)=$90. EV(B)=$100.
EU theory says that before you can decide which one you prefer, you need to determine the UTILITY of $100 vs. $500.


Person 1 might say: $500 is better than $100, but not five times better. It's really only three times better.
So I'll rate $100 as 10 and $500 as 30. (This person demonstrates the typical diminishing marginal utility for money.)
Therefore, EU(A) = 0.9 * 10 = 9 and EU(B) = 0.2 * 30 = 6.
EU theory would say that Person 1 should choose A.
In contrast to EV theory (which says EVERYONE SHOULD CHOOSE THE SAME THING), EU theory says different people might choose different things.

Person 2 might say: I really want to buy a bat that costs $500. Winning $100 would be nice but wouldn't let me achieve that goal. But if I won $500, I could buy the bat. This person might think that $500 is MORE than five times as good as $100.
This person might assign a utility of 10 to $100 and a utility of 70 to $500.
For this person,
EU(A)=0.9*10 = 9
EU(B)=0.2*70 = 14.
According to EU theory, Person 2 should choose B.
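The contrast can be checked with a short sketch; the utile numbers are the ones assumed for Person 1 and Person 2 above.

```python
# Sketch: the same two gambles evaluated under two different utility assignments.
# Option A: 90% chance of $100; Option B: 20% chance of $500.

people = [("Person 1", {"$100": 10, "$500": 30}),
          ("Person 2", {"$100": 10, "$500": 70})]

for name, utiles in people:
    eu_a = 0.9 * utiles["$100"]
    eu_b = 0.2 * utiles["$500"]
    choice = "A" if eu_a > eu_b else "B"
    print(f"{name}: EU(A)={eu_a}, EU(B)={eu_b} -> choose {choice}")

# Person 1: EU(A)=9.0, EU(B)=6.0 -> choose A
# Person 2: EU(A)=9.0, EU(B)=14.0 -> choose B
```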

A Numerical Problem Based on Expected Utility

You have an opportunity to place a bet on the outcome of an upcoming race involving a certain female horse named Bayes:
If you bet x dollars and Bayes wins, you will have w0 + x, while if she loses you will have w0 - x, where w0 is your initial wealth.

1. Suppose that you believe the horse will win with probability p and that your utility for wealth w is ln(w). Find your optimal bet as a function of p and w0.
2. You know little about horse racing, only that racehorses are either winners or average, that winners win 90% of their races, and that average horses win only 10% of their races. After all the buzz you’ve been hearing, you are 90% sure that Bayes is a winner. What fraction of your wealth do you plan to bet?
3. As you approach the betting window at the track, you happen to run into your uncle. He knows rather a lot about horse racing: he correctly identifies a horse’s true quality 95% of the time. You relay your excitement about Bayes. “Don’t believe the hype,” he states. “That Bayes mare is only an average horse.” What do you bet now (assume that the rules of the track permit you to receive money only if the horse wins)?


Let us first solve (1).
The expected utility from betting x is:
EU(x) = p * ln(w0 + x) + (1 - p) * ln(w0 - x)
Your objective is to choose x to maximize your expected utility. The first-order condition (setting the derivative with respect to x equal to zero) is:
p / (w0 + x) - (1 - p) / (w0 - x) = 0
Solving for x gives the optimal bet:
x* = w0 * (2p - 1)
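As a sanity check, the first-order condition can also be solved symbolically; this is a sketch assuming the sympy library is available.

```python
# Sketch: derive x* = w0 * (2p - 1) symbolically (assumes sympy is installed).
import sympy as sp

p, w0, x = sp.symbols('p w0 x')
eu = p * sp.log(w0 + x) + (1 - p) * sp.log(w0 - x)  # expected log-utility of betting x
foc = sp.diff(eu, x)                                # derivative of EU with respect to x
print(sp.solve(sp.Eq(foc, 0), x))                   # single root, equal to w0*(2*p - 1)
```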


Solving (2): Your probability that Bayes will win follows from the law of total probability (a 90% chance that she is a winner who wins 90% of her races, plus a 10% chance that she is an average horse who wins only 10% of them):
p = 0.9 * 0.9 + 0.1 * 0.1 = 0.82
Therefore, by the formula from part (1), x* = w0 * (2 * 0.82 - 1) = 0.64 * w0, i.e., you plan to bet 64% of your wealth.
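A quick numeric check of part (2):

```python
# Sketch: part (2) via the law of total probability and x* = w0 * (2p - 1).
p = 0.9 * 0.9 + 0.1 * 0.1                    # probability that Bayes wins
bet_fraction = 2 * p - 1                     # optimal fraction of wealth to bet
print(round(p, 2), round(bet_fraction, 2))   # 0.82 0.64
```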

Solving (3): Let q denote the true type of Bayes: q = 1 if Bayes is a winner and q = 0 if Bayes is average. Let s denote the signal from your uncle: s = 1 if he asserts that Bayes is a winner and s = 0 if he asserts that she is average. The uncle’s signal is accurate 95% of the time, i.e.,
P(s = 1 | q = 1) = P(s = 0 | q = 0) = 0.95
Since he says Bayes is average (s = 0), Bayes' rule gives the posterior probability that she is a winner:
P(q = 1 | s = 0) = (0.05 * 0.9) / (0.05 * 0.9 + 0.95 * 0.1) = 0.045 / 0.14 ≈ 0.321
so your updated probability that Bayes wins the race is
p = 0.9 * 0.321 + 0.1 * 0.679 ≈ 0.357
Using the formula from part (1) again, x* = w0 * (2 * 0.357 - 1) ≈ -0.286 * w0 < 0. You would like to bet against Bayes, but this is not allowed, so the optimal choice is to bet nothing.
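The same update can be written as a short sketch, with the no-negative-bet rule applied at the end (variable names are illustrative):

```python
# Sketch: part (3) as a Bayesian update followed by the constrained bet.
prior_winner = 0.9    # belief that Bayes is a winner before meeting your uncle
accuracy = 0.95       # uncle identifies a horse's true quality 95% of the time

# Uncle says "average": P(s=0 | winner) = 0.05, P(s=0 | average) = 0.95
post_winner = ((1 - accuracy) * prior_winner /
               ((1 - accuracy) * prior_winner + accuracy * (1 - prior_winner)))

p_win = 0.9 * post_winner + 0.1 * (1 - post_winner)  # updated win probability
bet_fraction = max(0.0, 2 * p_win - 1)               # negative bets are not allowed
print(round(post_winner, 3), round(p_win, 3), bet_fraction)  # 0.321 0.357 0.0
```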

Utility Curve

[Figure: utility curve]

A utility curve is a mathematical representation of an individual's preferences: it plots the value (utility) they assign to different outcomes, which makes it possible to compare and rank options. Risk-neutral behavior, associated with a linear utility curve, means making decisions based solely on expected value, without regard to the associated risk or uncertainty. A concave curve, in contrast, reflects diminishing marginal utility and corresponds to risk-averse behavior.
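A brief sketch shows how the curve's shape encodes risk attitude, using illustrative utility functions and a 50/50 gamble between $0 and $100:

```python
# Sketch: linear vs. concave utility for a 50/50 gamble between $0 and $100.
import math

gamble = [(0.5, 0.0), (0.5, 100.0)]

def expected_utility(u, outcomes):
    return sum(p * u(x) for p, x in outcomes)

def linear(x):
    return x              # risk-neutral: EU coincides with expected value

def concave(x):
    return math.sqrt(x)   # risk-averse: diminishing marginal utility

print(expected_utility(linear, gamble))   # 50.0, the expected value of the gamble
print(expected_utility(concave, gamble))  # 5.0
print(concave(50.0))                      # about 7.07: a sure $50 beats the gamble
```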