The terms “expected value” and “variance” are thrown around a lot among Magic players, but I'm willing to bet that a significant number of them don't know what those terms actually mean. Expected value and variance have specific meanings within the realm of mathematics and statistics. Understanding what they mean, when it's appropriate to use them, and, more importantly, when you shouldn't base your decisions on expected value alone is really important.
First of all, I should define what those terms mean so we’re all on the same page. I’m going to try to use as little jargon as possible, but if anything isn’t clear, feel free to ask me about it in the comments.
The simplest way to think of expected value is that it’s a weighted average. Say you have some random variable—for instance, the result of a roll of a six-sided die. We’ll assume it’s perfectly weighted so that the odds of rolling any particular number are 1 in 6. The expected value, then, is each of the possible outcomes multiplied by 1/6 and added together.
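Written out, that weighted average looks like this:

$$E[X] = \frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = \frac{21}{6} = 3.5$$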
So, the expected value of a die roll is 3.5. But how is it possible that we expect a die to roll 3.5? The key thing to keep in mind here is that expected value is a long-run average. If you average out a large number of die rolls, the result will work out to 3.5. The more rolls you do, the closer your average roll will come to 3.5. I'll come back to expected value being a long-run concept later.
Variance is another term that is often misused by Magic players. It doesn't just mean that your opponent was lucky or that you didn't draw enough lands. Variance is a measure of how much a random variable deviates from its mean (its expected value). Going back to our die example, we look at each possible result, see how much it differs from the expected value, and square that difference. The reason we square is that we care about the magnitude of the deviation, not whether it is positive or negative. We then add all those squared differences together and divide by the number of possible results. That's a bit of a mouthful, so I hope the numbers will make it clearer:
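For the die roll, with its expected value of 3.5, the calculation works out like this:

$$\mathrm{Var}(X) = \frac{(1-3.5)^2 + (2-3.5)^2 + (3-3.5)^2 + (4-3.5)^2 + (5-3.5)^2 + (6-3.5)^2}{6} = \frac{17.5}{6} \approx 2.92$$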
Okay, so what does this number really mean? By itself, nothing—at least for the purposes of this discussion. What matters is when we start comparing variances among different random variables. What you need to know at this point is that the larger the variance, the bigger the swings in outcome will be.
Say I offered you the chance to play a betting game. You pay me $1, and you can pick either Door 1 or Door 2. Both have the same expected payoff of $2, but Door 2 has a higher variance. Which one do you pick? There’s no wrong answer here, but how you answer determines whether you’re risk-averse, risk-loving, or risk-neutral. If you pick Door 1, you’re risk-averse; Door 2 is for the risk-lovers; and if you don’t care either way, you’re risk-neutral.
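The exact payoffs behind the doors don’t matter; any pair with the same mean and different variances will do. For concreteness, imagine Door 1 always pays $2 while Door 2 pays $0 or $4 on a coin flip:

$$\text{Door 1: } E = 2, \quad \mathrm{Var} = 0$$
$$\text{Door 2: } E = \tfrac{1}{2}(0) + \tfrac{1}{2}(4) = 2, \quad \mathrm{Var} = \tfrac{1}{2}(0-2)^2 + \tfrac{1}{2}(4-2)^2 = 4$$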
Most of you picked Door 2 because $1 is not a lot of money, so you don’t care about losing it. You might as well let it ride on the riskier choice for the potential of receiving a bigger payoff. What if instead, you’re paying $1,000? Or $10,000? Door 1 is starting to look a lot better. I would go as far as to say that you’d be foolish to pick Door 2 because you’re not being compensated for the additional risk.
If we repeated this game a million times, variance wouldn't matter. It would all even out in the end, and your average payoff would be the expected value. But in this case, we're only playing the game once, so variance is really important when making your decision.
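If you want to see the difference between the long run and a single play for yourself, here's a minimal Python sketch using the illustrative door payoffs from above (remember, those payoffs are just stand-ins):

```python
import random

def door1():
    # Pays $2 with certainty: expected value 2, variance 0.
    return 2.0

def door2():
    # Pays $0 or $4 on a coin flip: expected value 2, variance 4.
    return random.choice([0.0, 4.0])

trials = 1_000_000
avg1 = sum(door1() for _ in range(trials)) / trials
avg2 = sum(door2() for _ in range(trials)) / trials

# Over a million plays, both averages land right on the $2 expected value...
print(f"Door 1 average payoff: ${avg1:.3f}")
print(f"Door 2 average payoff: ${avg2:.3f}")

# ...but any single play of Door 2 is all-or-nothing.
print(f"One play of Door 2: ${door2():.2f}")
```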
This leads us to some problems with considering only expected value when making a decision:
- Expected value is a long-run concept.
- Expected value ignores risk.
- Some things are hard to quantify.
To really drive this point home, I’m going to talk about something called the St. Petersburg Paradox. Back in the eighteenth century, a bunch of mathematicians came together to figure out the best way to play games of chance. They figured that if a game has an expected payoff of X, a rational person would pay any amount up to and including X to play. The reasoning is that, in the long run, your expected profit is X minus whatever amount you’re paying to play. So long as that amount is less than or equal to X, you’re at worst breaking even. Okay, so far so good.
Now for the St. Petersburg Lottery: I'm going to keep flipping a coin until I flip heads, and I'm going to pay you $2^X, where X is the number of flips it took me to flip the first heads. So one flip pays $2, two flips pay $4, three flips pay $8, and so on. I'll do some quick math for you to show the most likely outcomes:
- 50% of the time, you are paid $2.
- 75% of the time, you are paid $4 or less.
- 87.5% of the time, you are paid $8 or less.
- 93.75% of the time, you are paid $16 or less.
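The pattern generalizes: the first heads shows up on or before flip k with probability 1 − 1/2^k, so:

$$P(\text{payout} \le \$2^k) = 1 - \frac{1}{2^k}$$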
Okay, so how much would you pay to play this game? It doesn’t seem reasonable to pay more than a few dollars. Remember that mathematicians of the day reasoned that a rational person would pay any amount up to and including the expected value of the game. So, what’s the expected value of this game?
It turns out that the expected value of the St. Petersburg Lottery is infinite. You can take my word for it, or check the one-line derivation below. According to the thinking of the day, a rational person would pay any amount to play this game. But how can it be rational to empty your bank account for a game that any reasonable person wouldn’t pay more than a few dollars for? Hence the paradox.
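For the curious, here is that derivation: the first heads lands on flip k with probability 1/2^k and pays $2^k, so every possible outcome contributes exactly $1 to the expected value, and there are infinitely many outcomes:

$$E = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty$$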
We can resolve the paradox with something called a utility function, but I don’t want to get too far off topic. The main thing to take away here is that basing your decisions strictly on expected value can lead you to do some pretty silly things.
Going back to the concept of risk aversion, recall that a risk-neutral person is indifferent to risk. In other words, a risk-neutral person only cares about expected value and doesn’t care about variance. I’ve illustrated how that can be a flawed way of looking at things. I’d argue that most people are on the risk-averse side of things. Having some degree of risk aversion prevents you from making some very bad decisions. You don’t have to be risk-averse about everything. Not all decisions are equal, not all risks are the same, and not all outcomes are comparable.
If you've ever been to dinner with a large group of Magic players, you know all about the credit card game. Basically, how it works is that one person is randomly selected to pay the bill for the whole table, and everyone else eats for free. Assuming everyone's dinner costs the same, your expected payment is the same whether you game or not, so a risk-neutral person would be indifferent to playing the credit card game. Not gaming, however, has a variance of zero—you always pay exactly your own share. So you have two choices with the same expected value, but one has a higher variance than the other. I hear pretty frequently that the credit card game is fine because it has the same EV as paying for your own dinner, but as I said, that's flawed thinking: you can only lean on the expected-value argument if you plan to game a large number of dinners, and even then, it ignores risk.
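To put numbers on it, suppose there are n players and everyone's dinner costs c, so the loser pays the whole bill of nc (n and c are placeholders for whatever your table looks like):

$$E[\text{payment}] = \frac{1}{n}(nc) + \frac{n-1}{n}(0) = c, \qquad \mathrm{Var} = \frac{1}{n}(nc - c)^2 + \frac{n-1}{n}(0 - c)^2 = (n-1)c^2$$

With ten players each owing $30, you pay $30 on average either way, but the game's standard deviation is $90: most nights you eat free, and one night in ten you're out $300.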
A risk-averse person shouldn’t play the credit card game unless . . .
- . . . His or her dinner costs significantly more than the average person’s, or . . .
- . . . He or she receives some non-financial value out of the game.
It’s entirely possible that you derive pleasure from the excitement of the game. Or that you really want a particular person to lose so you can run the rub-ins for the rest of the night. Or maybe you really don’t want to deal with the hassle of splitting up the bill. The point is that these intangibles don’t enter the expected-value calculation. This is what I was talking about earlier: some things are difficult to quantify. You can still be risk-averse and play the credit card game if you think any of these factors compensates you for the risk you’re taking on.
I stopped playing the credit card game when I quit my job and went back to school. Receiving a free dinner is not worth the chance that I’ll be stuck with a several-hundred-dollar bill; I can’t afford to lose the game. If I were working a nine-to-five job, I wouldn’t necessarily care about losing a few hundred dollars, so I’d be much more inclined to play. Not all situations are the same, and your attitude toward risk can change depending on the circumstances.
Say you have a decision to make in a game of Magic. Magic is chock-full of random variables: the top card of your or your opponent's deck, whether your opponent has an instant that ruins your plans, whether your hand will improve if you take a mulligan, and so on. Variance plays a huge role in making correct decisions, and the correct decision may change depending on the circumstances. Playing the finals of an FNM is not the same thing as playing the finals of a PTQ, and your attitude toward risk is going to be very different in those two situations. Losing a match at an FNM is not a big deal, but there's a huge difference between a plane ticket to Barcelona and a box or two of boosters.
Basically, what I'm getting at is that there isn't always one correct play in every situation. The correct play depends on the situation but also on the player's attitude toward risk. In a game as complex as Magic, there can be a huge number of possible plays, and those plays will have different expected values in terms of how much they affect your chances of winning. They will also have different variances. If you only consider expected value while ignoring variance, you will end up making some suboptimal decisions.
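Here's a deliberately toy Python sketch of that idea. The numbers are invented for illustration and don't model any real matchup: Play A is a safe line that wins 55% of the time whether or not the opponent holds a removal spell, while Play B is an all-in line that wins 80% when they don't have it and only 30% when they do. Both lines have the same expected win probability, but Play B's result swings much harder with the hidden information:

```python
# Toy numbers, invented for illustration; no real matchup is being modeled.
p_removal = 0.5  # your read on the chance the opponent holds removal

# Win probability of each line in each scenario.
play_a = {"has_removal": 0.55, "no_removal": 0.55}  # safe line
play_b = {"has_removal": 0.30, "no_removal": 0.80}  # all-in line

def ev(play):
    # Expected win probability, weighted by how likely each scenario is.
    return p_removal * play["has_removal"] + (1 - p_removal) * play["no_removal"]

def scenario_variance(play):
    # How much the line's win probability swings with the opponent's hand.
    m = ev(play)
    return (p_removal * (play["has_removal"] - m) ** 2
            + (1 - p_removal) * (play["no_removal"] - m) ** 2)

for name, play in (("Play A", play_a), ("Play B", play_b)):
    print(f"{name}: EV {ev(play):.3f}, scenario variance {scenario_variance(play):.4f}")
```

At a casual FNM, you might happily take Play B and gamble on your read; in the finals of a PTQ, with a plane ticket on the line, the safe 55% starts looking a lot better.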
Expected value is a great tool, but it’s not everything. It has its limitations, and it’s not always appropriate to base your decisions on it. I hope this discussion was interesting, and the next time someone throws the terms EV and variance around willy-nilly, you’ll be able to tell them what those terms actually mean.
Until next time, may variance only benefit you when it matters and hurt you when it doesn’t.
Nassim Ketita