The probability that I know what I am talking about is zero

All right, math people, help me out.

I just completed a section on probability with my daughter, and by and large she’s got it down. We both understand that if you flip a coin and roll a die, the probability of getting tails and a five is 1/12. (You multiply the odds of getting tails (1/2) by the odds of getting a five (1/6).) We made a little pie chart to show why this works and… well, I’m not going to drone on about this, but in short: Lea not only understands how to do the calculation, but she understands why. And so do I.

Similarly, we understand that the probability of rolling a two or a three on a 6-sided die is 1/3. This is even easier to grasp: The odds of rolling a two are 1/6; the odds of rolling a three are 1/6. Add ’em up, and you get 2/6, or 1/3. No problemo — the reasons for this are pretty easy to visualize.

You add up the probabilities when the “or” events are, as the math textbook tells us in boldfaced words, mutually exclusive. You can’t roll a five and a six on a single roll of the die. (Although God knows when I ask Lea if this is possible — just so I can see that she’s with me — she’ll say something about the die rolling into a crack on the table, or bring up the idea of a time machine, or…)

It’s when two events aren’t mutually exclusive that my mind goes kabloo. The example in the book is:

The Yankees have a .4 chance of winning today’s baseball game. The Red Sox have a .6 chance of winning their baseball game. (This was obviously written by a Red Sox fan.) What is the probability that one team or the other will win, assuming they are not playing each other?

The formula for this is straightforward enough: First you add up the probabilities (.4 + .6 = 1) and from this you subtract the product of the probabilities (.4 * .6 = .24). So the odds of one team or the other winning is .76, or 76%. Easy enough — I can memorize a formula, and so can my daughter. I just don’t get why this works. We were able to visualize and fully grasp the other principles in this section, but this one is eluding us.
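For what it's worth, the formula can be sanity-checked in a few lines of Python (my own addition, using the book's .4/.6 numbers): the direct formula and a brute-force simulation land on the same answer.

```python
# P(A or B) = P(A) + P(B) - P(A)*P(B), for two independent events A and B.
import random

p_yankees, p_red_sox = 0.4, 0.6

# Direct formula from the textbook
by_formula = p_yankees + p_red_sox - p_yankees * p_red_sox

# Monte Carlo: simulate many independent pairs of games
random.seed(1)
trials = 100_000
wins = sum(
    1 for _ in range(trials)
    if random.random() < p_yankees or random.random() < p_red_sox
)
print(by_formula)     # 0.76 (up to float rounding)
print(wins / trials)  # close to 0.76
```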

Is there a way of looking at this that will make me say, “Oh! Sure!”




  1. Andrew Greene
    Posted March 13, 2013 at 1:19 pm | Permalink

    I’m made deeply suspicious by the coincidence that the odds given add up to one.

    The way I would solve it is thus:

    What are the odds that NEITHER team wins? That’s a simple probability: The odds of the NYY losing are 1 – 0.4 = 0.6, and the odds of BOS losing are 1 – 0.6 = 0.4. So the odds of both losing are 0.4*0.6=0.24

    So the odds of at least one of them winning is 1-0.24 = 0.76

    This works even if you say the odds of the NYY winning are 0.9 — you’d have 1-(1-0.9)*(1-0.6)=1-(0.1*0.4)=1-(0.04)=0.96


  2. Posted March 13, 2013 at 1:27 pm | Permalink

    The probability of at least one of them winning is equal to:
    1 minus the probability that they both lose.

    Probability of Yankees losing = 1-.4 = .6
    Probability of Red Sox losing = 1-.6 = .4
    Probability of both losing = .6*.4 = .24
    Probability at least one of them wins = 1-.24 = .76

    The method you described (add the probabilities first, then subtract the product) works because when you add you are getting all the times when either event happens. You have to subtract because you are double-counting the times when both teams win.
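The double-counting argument above can be made concrete by enumerating the four win/lose combinations directly (a Python sketch using the thread's numbers):

```python
# Enumerate the four win/lose combinations for two independent games.
p_y, p_r = 0.4, 0.6

outcomes = {
    ("Y wins", "R wins"): p_y * p_r,
    ("Y wins", "R loses"): p_y * (1 - p_r),
    ("Y loses", "R wins"): (1 - p_y) * p_r,
    ("Y loses", "R loses"): (1 - p_y) * (1 - p_r),
}

# Sum the three outcomes where at least one team wins
at_least_one = sum(
    p for combo, p in outcomes.items()
    if "wins" in combo[0] or "wins" in combo[1]
)

# Adding p_y + p_r counts the ("Y wins", "R wins") case twice,
# so subtract it once:
inclusion_exclusion = p_y + p_r - p_y * p_r

print(at_least_one, inclusion_exclusion)  # both come out to about 0.76
```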


  3. Posted March 13, 2013 at 1:29 pm | Permalink

    There are a couple of ways to get at this. One of them involves looking at the different possible ways that at least one of them could win their game. The Yankees win and the Red Sox lose with probability .4*(1-.6) = .16;
    the Red Sox win and the Yankees lose with probability (1-.4)*.6 = .36;
    and both teams win with probability .4*.6 = .24.
    Adding them up you get .16 + .36 + .24 = .76.
    Another way to get at this, which is a common trick in probability, is to look at the opposite probability: what is the probability that neither team wins? In other words, the probability that the Yankees lose and the Red Sox lose. This is (1-.4)*(1-.6) = 1-.4-.6+(.4*.6). Then to get the probability of the event you actually wanted, subtract this result from 1 to get .4+.6-(.4*.6).

    Sometimes the method will be explained by saying that straight-up adding the probabilities (.4+.6) double-counts the situation where both teams win, so you subtract out that probability, but I have found that understanding when double-counting applies, and how, is a tricky thing.

    Also, my explanation of the above might be a little confusing, because the numbers were chosen such that .4+.6=1, which makes some of the quantities involved equal to each other, though they don’t necessarily have to be. Let me know if you need further explanation.


  4. Sofiya
    Posted March 13, 2013 at 1:30 pm | Permalink

    Here’s a way to visualize it. Draw a 10×10 square. Color in the first 4 columns — this is your .4 probability that Yankees win. Color in the top 6 rows — this is your .6 probability that Red Sox win. The area that’s been colored twice is the probability of both teams winning their games. The total area that’s been colored is the probability that at least one team wins. The area that’s not colored is the probability of both teams losing.
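Sofiya's picture translates almost literally into code: count cells in a 10×10 grid (a sketch; the exact row/column orientation is my guess at her drawing).

```python
# Count cells of a 10x10 grid: columns 0-3 shaded for a Yankees win (.4),
# rows 0-5 shaded for a Red Sox win (.6).
both = at_least_one = neither = 0
for row in range(10):
    for col in range(10):
        yankees_win = col < 4
        red_sox_win = row < 6
        if yankees_win and red_sox_win:
            both += 1
        if yankees_win or red_sox_win:
            at_least_one += 1
        else:
            neither += 1

print(both, at_least_one, neither)  # 24 76 24
```

Out of 100 cells, 76 are colored at least once, matching the .76 from the formula.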


  5. Eric
    Posted March 13, 2013 at 1:31 pm | Permalink

    An alternative way to look at this is to change the OR into an AND. Take the probability that BOTH teams will lose, and subtract it from 1. The Yankees have a 0.6 chance of losing, and the Sox have a 0.4 chance of losing. The chance of them BOTH losing is (0.6 * 0.4 = 0.24), so the chance of at least ONE of them winning is 1 – 0.24 = 0.76, or 76%.

    This example might not be as clear because of the choice of complementary probabilities. Let’s say the chances of winning are instead 0.7 for the Yanks and 0.6 for the Sox. Using your method, the odds of at least one win between the two teams is (0.7 + 0.6) – (0.7 * 0.6) = 1.30 – 0.42 = 0.88 = 88%. Using the method of determining the chance that neither team will win, you get 1 – ((1 – 0.7) * (1 – 0.6)) = 1 – (0.3 * 0.4) = 1 – 0.12 = 0.88 = 88%.
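Both methods from this comment can be checked side by side (a sketch with the hypothetical 0.7/0.6 numbers):

```python
# Compare "add, then subtract the product" with "1 minus both-lose".
p_y, p_r = 0.7, 0.6  # hypothetical win probabilities from the comment

add_then_subtract = p_y + p_r - p_y * p_r
one_minus_both_lose = 1 - (1 - p_y) * (1 - p_r)

print(add_then_subtract, one_minus_both_lose)  # both about 0.88
```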


  6. Eric Berlin
    Posted March 13, 2013 at 1:31 pm | Permalink

    Okay, I’m getting there, I think. (I sure wish WordPress comments let you make little charts…) Thanks, folks.


  7. Eric
    Posted March 13, 2013 at 1:37 pm | Permalink

    Andrew types much faster than I do… ;-)


  8. Eric Berlin
    Posted March 13, 2013 at 1:38 pm | Permalink

    Sweet, Sofiya, I’m going to demonstrate that as soon as I get home.


  9. Posted March 13, 2013 at 1:41 pm | Permalink

    Use this for a visual chart for my explanation (and Sofiya’s above).

    The first two columns (the blue-green area) are the 0.4 probability that the Yanks win.
    The top three rows (the yellow-green area) are the 0.6 prob that the Sox win.
    The green section is the part where both teams have won… and is the part being double-counted. (Yellow + Blue makes green.)


  10. Kurtis
    Posted March 13, 2013 at 2:14 pm | Permalink

    Comment #5 was going to be my explanation. Figure the odds of them both losing, then subtract that from one. The odds of anything NOT happening are one minus the odds of it happening. But if you want your head to hurt even more, what are the odds that exactly one team will win (presuming they are not playing each other)?


  11. Kurtis
    Posted March 13, 2013 at 2:22 pm | Permalink

    (I get 52%)
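Kurtis's 52% checks out: exactly one team wins when one wins and the other loses (a sketch using the book's numbers).

```python
# Probability that exactly one of two independent teams wins.
p_y, p_r = 0.4, 0.6

# Yankees win and Red Sox lose, OR Red Sox win and Yankees lose.
exactly_one = p_y * (1 - p_r) + (1 - p_y) * p_r

print(exactly_one)  # about 0.52
```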


  12. Eric Berlin
    Posted March 13, 2013 at 2:23 pm | Permalink

    Let me think, do I want my head to hurt even more?

    Actually, with the help of Sofiya’s chart, that’s pretty easy to figure out. Don’t ask me what the formula is, though.


  13. Kurtis
    Posted March 13, 2013 at 2:34 pm | Permalink

    OK, if you want your head to hurt less, calculate the odds of exactly one team winning when they ARE playing each other. Timer is on. Go.


  14. Posted March 13, 2013 at 3:12 pm | Permalink

    Hmm… are either of the teams aware of the other team’s outcome? Because that could affect it. For example, if BOS needs a win plus an NYY loss to make the playoffs, seeing the Yankees’ outcome would make a difference (e.g., play your best vs. it doesn’t matter).

    I think you need to look into whether Bayesian logic applies here.


  15. Peter Sarrett
    Posted March 13, 2013 at 5:27 pm | Permalink

    Another way to think about it (not necessarily a better or easier way, but the way I learned) is this:

    40% of the time, the Yankees win, and you’re done– at least one team has won. Whether or not the Sox won is irrelevant in these cases. You only care about the Red Sox when the Yankees lose. So you multiply the Red Sox’s chance of winning by the proportion of time that fact even matters, which is 60%.

    YankeesChance + ((1 – YankeesChance) * RedSoxChance)

    .4 + ((1 – .4) * .6) =
    .4 + (.6 * .6) =
    .4 + .36 = .76

    If you also cared about the Cubs, you’d just continue the process. We’d only care about the Cubs when both the Yankees and Red Sox lost (1 – .76 = .24, or 24% of the time), so it would look like this:

    .76 + (.24 * CubsChance)

    And so on.

    This is an iterative, and therefore slower, way to get to the answer, but it was the first way I was ever able to fully understand the process rather than just memorizing a method, and so it’s the model I always go back to.
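Peter's iterative method generalizes to any list of teams as a simple loop. The 0.55 for the Cubs below is a made-up number for illustration, not from the comment.

```python
# Peter's iterative method: fold each team's win probability into a
# running "at least one win so far" total.
def at_least_one_win(probs):
    total = 0.0
    for p in probs:
        # Only the cases where everyone so far lost (1 - total) still matter.
        total = total + (1 - total) * p
    return total

print(at_least_one_win([0.4, 0.6]))        # about 0.76
print(at_least_one_win([0.4, 0.6, 0.55]))  # 0.76 + 0.24 * 0.55 = about 0.892
```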


  16. Eric Prestemon
    Posted March 14, 2013 at 12:14 pm | Permalink

    Building on your explanation…

    If two things are mutually exclusive (rolling a 2 and rolling a 3 on a particular die roll) you know what to do (this is the OR math).

    ONLY if two things are unrelated do you know how to do AND math.

    The two baseball games are unrelated. So you need to compute your answer using AND math. If you limit yourself to AND math, one probability you are able to compute is “Red Sox lose” AND “Yankees lose”. And since you’ve cleverly chosen to calculate the opposite of what you want to know, you can subtract from one.

    Choosing the clever AND math to calculate (and then subtracting the result from one, usually) is the hard part. Once you start seeing these calculations with unrelated events as requiring AND math, they get easier.

    For example: If you flip a coin 3 times, what are the chances you get at least one heads (first flip heads OR second flip heads OR third flip heads)? Well, that’s an OR question, and the coin flips are unrelated, so you can’t get it directly. Instead you can get the odds of (tails AND tails AND tails), which is simple: 1/2*1/2*1/2 or 1/8. Then you subtract from 1 to get 7/8 as the probability of at least one heads.
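The coin example can be verified by brute force over all eight flip sequences (a sketch):

```python
# Enumerate all 2^3 = 8 sequences of three coin flips and count the
# ones containing at least one heads.
from itertools import product

sequences = list(product("HT", repeat=3))
at_least_one_heads = sum(1 for seq in sequences if "H" in seq)

print(at_least_one_heads, "/", len(sequences))  # 7 / 8
```

Only the single all-tails sequence TTT lacks a heads, which is the 1/8 being subtracted from 1.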


  17. Mikalye
    Posted March 14, 2013 at 9:42 pm | Permalink

    Yes, the most common error is trying to do the OR math with unrelated items. You do see people make this mistake all the time. Let’s say people are asked the odds of getting at least one heads on two independent coin flips. You see folks figure, "That's 0.5 + 0.5. It must be 1." Ummm, nope, but thanks for playing.

    The equivalent carnival/funfair version of this is usually played with three large six-sided dice with symbols instead of numbers on the sides. Punters are able to bet on any of the six symbols, and after all of the bets are in, the three dice are rolled. If your symbol comes up on one of the dice, you win your bet and are paid. If it shows up on two dice, you receive double your money bet. If it shows up on three, you receive triple your money bet. Since most people erroneously believe that the odds of a particular symbol showing up are roughly (1/6 + 1/6 + 1/6), or roughly a half, they figure that the game is fair. But of course that is based on misreading probability. This game has been around a long time, and is sometimes called Bird Cage, Chuck-a-luck, or Crown and Anchor. The probability question here is: if the game is fair (and Scarne reported that in a number of carnival shows, the dice had magnets in them), for every dollar bet, how much would a player expect in the long run to get back?
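A sketch of Mikalye's closing question, assuming the payout scheme as described (even money on one match, double on two, triple on three, with the stake returned in each case):

```python
# Expected return per $1 bet in a fair Chuck-a-luck game with three dice.
from fractions import Fraction
from math import comb

p = Fraction(1, 6)  # chance your symbol shows on any one die

# Sum over k = 1, 2, 3 matching dice: binomial probability times
# (stake back plus k units of winnings).
expected_return = sum(
    comb(3, k) * p**k * (1 - p)**(3 - k) * (1 + k)
    for k in range(1, 4)
)

print(expected_return)         # 199/216
print(float(expected_return))  # about 0.921 -- roughly 92 cents per dollar
```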


  18. Geoff Bailey
    Posted March 16, 2013 at 7:22 pm | Permalink

    I endorse the explanations of Peter Sarrett and Nathan C. Peter’s approach is one that I generally end up using whenever I am dealing with one of those problems where A and B are playing a {coin/dice} game and whoever gets {their combination} first wins (and the aim is to find out their respective chances of winning). Flipping the calculation around to ANDs bogs down as the expressions become more complicated, and of course formula memorisation only lets you deal with cases handled by the formula.

    Nathan C’s approach is a good general one, and you may find using Venn diagrams helpful. (The horizontal and vertical lines in the grid, as suggested by Sofiya, can be considered to be a less traditional representation of a Venn diagram for two sets; it does not extend so well to three sets, though. On the other hand, using Venn diagrams for four or more sets is also a bit awkward.)

    Venn diagrams are good for visualising the components to the problem, and Nathan’s approach is to compute the individual values for each component and then simply sum them up. It’s another good general method that will apply in more complicated situations.

    As Nathan observed, you can also simply add the probabilities and subtract the part that has been counted twice. (Again, the Venn diagram can make this clearer.) Considered in this way it can lay the foundation for an exploration of the principle of inclusion-exclusion, which extends the necessary concept to larger numbers of sets.

