# Probability

**Probability** is a concept that is generally easy to understand intuitively, but can be difficult to define rigorously. When it *is* defined and studied carefully, it can lead to counter-intuitive results. Even elementary ideas of probability are widely misunderstood, which makes probabilistic claims a handy way to dishonestly bolster a weak argument.

When properly used, probability and its more "applied" cousin, statistics, can be powerful tools for discerning empirical truth. Indeed, essentially every field of modern science relies heavily on statistical analysis of data, which in turn relies on probability.

In the area of apologetics, ideas of probability lie at the heart of some arguments for the existence of God, especially the tornado argument and, to varying degrees, various cosmological arguments.

A concept important to countering some probability-based arguments is that for any event that is *possible* (i.e., probability greater than zero), given enough time it almost surely *will* happen. This is known in some circles as the infinite monkey theorem.
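The infinite-monkey idea can be made concrete with elementary arithmetic: for an event with per-trial probability p, the probability of at least one occurrence in n independent trials is 1 − (1 − p)^n, which approaches 1 as n grows. A minimal sketch (the function name and the sample value p = 0.001 are illustrative, not from any particular source):

```python
# For an event with per-trial probability p, the chance of it
# happening at least once in n independent trials is 1 - (1 - p)^n.
def prob_at_least_once(p, n):
    return 1 - (1 - p) ** n

# Even a quite unlikely event (here p = 0.001, a made-up figure)
# becomes near-certain as the number of trials grows.
for n in (100, 1000, 10000, 100000):
    print(n, prob_at_least_once(0.001, n))
```

For any p > 0, no matter how small, this quantity can be pushed as close to 1 as desired by taking n large enough, which is the intuition behind "given enough time, it almost surely *will* happen."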

## Definitions of probability

There are two main interpretations of probability, each of which leads to a different *definition* of probability:

- **Relative frequency:** The *long-run relative frequency* of occurrence of a random event (the fraction of the time it happens in a long run of repeated "trials") can be defined as the *probability* of the event. *Example:* "A 'fair' coin has a 50% probability of coming up 'heads'." *More information:* Wikipedia:Frequency probability

- **Degree of belief (a.k.a. personal probability):** The *degree to which one believes* a statement to be true can be defined as the *probability* of that statement. *Example:* "I'd say there's about a 20% probability that the human race will end up nuking itself out of existence." *More information:* Wikipedia:Bayesian probability, Wikipedia:Probability interpretations

Although the degree-of-belief "definition" might seem quite weak, it can be made rigorous by carefully considering, for example, how much one would be willing to bet in a game where one would gain a certain amount of money if the statement turns out to be true. It can be shown that any internally consistent method of choosing one's wager must obey the laws of probability (see Wikipedia:Probability theory).

## Conditional probability

One of the most widely misunderstood concepts in probability has to do with **conditional probability**, the probability of one thing happening (or being true) *given* that something else definitely happens (or is true).

- **Example:** The probability that a king has been drawn from a well-shuffled deck of playing cards, *given that you know it's a face card*, is one-third.

This is because once you know a face card has been drawn, 4 out of the 12 face cards are kings, giving a probability of 4/12, or 1/3 that the card was a king.

If one cannot intuitively compute the conditional probability so easily, the following formula can be used:

- "Probability of A given B" = P(A|B) = P(A and B) / P(B)

Or, equivalently, when it is possible to count equally-likely outcomes:

- P(A|B) = (Number of things that are A and B) / (Number of things that are B)

Using the formula(s) on the playing card example, we see that:

- P(king | face card) = P(king and face card) / P(face card)
- = (number of "king-and-face" cards) / (number of face cards)
- = 4/12
- = 1/3
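The counting version of the formula can be checked by brute force over an explicit deck. This is a small illustrative sketch, not any standard library's card representation:

```python
# Conditional probability by direct counting:
# P(A|B) = (# outcomes that are A and B) / (# outcomes that are B)
ranks = ["ace", "2", "3", "4", "5", "6", "7", "8", "9", "10",
         "jack", "queen", "king"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(rank, suit) for rank in ranks for suit in suits]  # 52 cards

# Narrow the population of interest to face cards (the "given B" step).
face_cards = [card for card in deck if card[0] in ("jack", "queen", "king")]
kings_among_face = [card for card in face_cards if card[0] == "king"]

p = len(kings_among_face) / len(face_cards)
print(p)  # 4 face-card kings out of 12 face cards = 1/3
```

Filtering the deck down to `face_cards` before counting is exactly the "narrowing the population of interest" described in the next section.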

### Correctly interpreting conditional probabilities

Conditional probabilities are often the result of narrowing the population of interest. In the above example, the set of face cards served as a new, smaller population of interest for which we wanted the probability of choosing a king. To state it in terms of a percent, about 33.3% of face cards are kings. This is quite different from the percent of *all* playing cards that are kings (about 7.7%, being 4 out of 52), and certainly different from the percent of kings that are face cards (100%). This illustrates how crucial the population of interest is to a conditional probability statement, and why conditional probabilities are easily misinterpreted.

For a more extensive example, consider the following two statements (using made-up figures):

- "10% of convicted criminals are atheists."
- "Only 5% of theists have ever been convicted of a crime."

Given this information, does it look like atheists are more likely to be criminals than theists? Maybe twice as likely? On first glance, it might seem so. However, notice that the two percentages treat criminal conviction quite differently. In the first statement, the *population of interest* is convicted criminals and we are looking at what fraction of that population are atheists. In the second statement, the population of interest is theists and we are looking at what fraction of that population *have the characteristic* of being convicted criminals. In other words, each percent is calculated as a fraction of a population having a certain characteristic, but neither the population nor the characteristic is the same in both statements. This means the figures are not directly comparable.

Now consider the additional statement:

- "90% of convicted criminals are theists."

You can compare this figure directly with statement #1 above, because they are using the same population: convicted criminals. But the comparison is not very informative because the two statements give *exactly the same information* (if 10% of criminals are atheists, then 90% must be non-atheists, or what we're calling "theists" for convenience's sake). So we still don't know whether atheism or theism is associated with criminal conviction (i.e., whether you're more likely to be a convicted criminal if you're an atheist or if you're a theist).

To get an accurate impression of what's going on, the question we really need to ask is:

- What percent of atheists are convicted criminals?

If we can answer that, then we can directly compare the probability of an atheist being a criminal with the probability of a theist being a criminal. Those probabilities (percents) will be calculated for two different populations (and thus need not add to 100%), but they will reflect the prevalence of the *same characteristic* in those two populations.

Unfortunately, we need one more piece of information to answer the latter question. This piece of information can be the answer to *any* of the following:

- What percent of the population are convicted criminals?
- What percent of the population are not convicted criminals?
- What percent of the population are atheists?
- What percent of the population are theists?

Once any one of these percentages is known, we can calculate the others using elementary laws of probability and the information in statements #1 and #2 above.

Let's say the answer to the first question, also pulled out of thin air for the purposes of this example, is statement #3:

- "5% of the population are convicted criminals."

Here's a table that matches all the information we've been given. All percents are calculated *out of the total population*.

|  | Non-criminal | Criminal | Total |
|---|---|---|---|
| Theist | 85.5% | 4.5% | 90.0% |
| Atheist | 9.5% | 0.5% | 10.0% |
| Total | 95.0% | 5.0% | 100.0% |

Statement #3 can be read directly from the last row of the table. To verify statements #1 and #2, we calculate:

- Fraction of convicted criminals who are atheists = 0.5% / 5.0% = 0.5/5 = 1/10 = 10%
- Fraction of theists who are convicted criminals = 4.5% / 90.0% = 4.5/90 = 1/20 = 5%

And finally:

- Fraction of atheists who are convicted criminals = 0.5% / 10.0% = 0.5/10 = 1/20 = 5%

The last two percents are exactly the same (and thus necessarily the same as the percent in statement #3), so being an atheist does *not* make it more likely you're a convicted criminal (nor less likely). There's no association at all (probabilistically speaking) between criminality and atheism.
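The whole table can be reconstructed from statements #1, #2, and #3 alone, using nothing beyond the definition P(A|B) = P(A and B)/P(B). A sketch of that arithmetic (variable names are ad hoc; all figures are the made-up ones from the example):

```python
# The three given (made-up) statements, as probabilities:
p_atheist_given_crim = 0.10  # statement 1: 10% of criminals are atheists
p_crim_given_theist = 0.05   # statement 2: 5% of theists are criminals
p_crim = 0.05                # statement 3: 5% of the population are criminals

# Joint probabilities, i.e. fractions of the total population,
# via P(A and B) = P(A|B) * P(B):
p_atheist_and_crim = p_atheist_given_crim * p_crim  # 0.5% of everyone
p_theist_and_crim = p_crim - p_atheist_and_crim     # 4.5% of everyone

# Size of the theist population, via P(B) = P(A and B) / P(A|B):
p_theist = p_theist_and_crim / p_crim_given_theist  # 90%
p_atheist = 1 - p_theist                            # 10%

# The comparison we actually wanted:
p_crim_given_atheist = p_atheist_and_crim / p_atheist
print(p_crim_given_atheist, p_crim_given_theist)  # both 5%: no association
```

That the two final figures come out equal is what justifies the "no association" conclusion above; with different starting figures they would generally differ.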

See also the Wikipedia article on Bayes' theorem for a case in which careful analysis of conditional probabilities leads to counter-intuitive results (the drug-testing example).