Bayes’ theorem and Lindley’s paradox

In probability theory and statistics, Bayes’ theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if cancer is related to age, then, using Bayes’ theorem, a person’s age can be used to more accurately assess the probability that they have cancer, compared to the assessment of the probability of cancer made without knowledge of the person’s age.

Statement of theorem

Bayes’ theorem is stated mathematically as the following equation:[2]

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},$$

where A and B are events and P(B) ≠ 0.

  • P(A) and P(B) are the probabilities of observing A and B without regard to each other.
  • P(A ∣ B), a conditional probability, is the probability of observing event A given that B is true.
  • P(B ∣ A) is the probability of observing event B given that A is true.

Examples

Drug testing

Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability that they are a user?

$$\begin{aligned}
P(\text{User} \mid +) &= \frac{P(+ \mid \text{User})\,P(\text{User})}{P(+)} \\
&= \frac{P(+ \mid \text{User})\,P(\text{User})}{P(+ \mid \text{User})\,P(\text{User}) + P(+ \mid \text{Non-user})\,P(\text{Non-user})} \\
&= \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \\
&\approx 33.2\%
\end{aligned}$$

Despite the apparent accuracy of the test, if an individual tests positive, it is more likely that they do not use the drug than that they do. This surprising result arises because the number of non-users is very large compared to the number of users; thus the number of false positives outweighs the number of true positives. To use concrete numbers, if 1000 individuals are tested, there are expected to be 995 non-users and 5 users. From the 995 non-users, 0.01 × 995 ≈ 10 false positives are expected. From the 5 users, 0.99 × 5 ≈ 5 true positives are expected. Out of these roughly 15 positive results, only 5, about 33%, are genuine. This illustrates the importance of base rates, and how the formation of policy can be egregiously misguided if base rates are neglected.[15]

The importance of specificity in this example can be seen by varying the parameters. If sensitivity is raised to 100% while specificity remains at 99%, the probability that the person is a drug user rises only from 33.2% to 33.4%; but if sensitivity is held at 99% and specificity is increased to 99.5%, the probability rises to about 49.9%.
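These figures are easy to reproduce. A minimal Python sketch (the function name is ours, not from any library):

```python
def posterior_positive(sensitivity, specificity, prevalence):
    """P(User | +) by Bayes' theorem, partitioning on User vs Non-user."""
    true_pos = sensitivity * prevalence                # P(+ | User) P(User)
    false_pos = (1 - specificity) * (1 - prevalence)   # P(+ | Non-user) P(Non-user)
    return true_pos / (true_pos + false_pos)

print(posterior_positive(0.99, 0.99, 0.005))    # ~0.332
print(posterior_positive(1.00, 0.99, 0.005))    # ~0.334: raising sensitivity barely helps
print(posterior_positive(0.99, 0.995, 0.005))   # ~0.499: raising specificity helps a lot
```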

Bayesian interpretation

In the Bayesian (or epistemological) interpretation, probability measures a “degree of belief.” Bayes’ theorem then links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads as tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief may rise, fall, or remain the same depending on the results.

For proposition A and evidence B,

  • P(A), the prior, is the initial degree of belief in A.
  • P(A ∣ B), the posterior, is the degree of belief after accounting for B.
  • The quotient P(B ∣ A)/P(B) represents the support B provides for A.
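To make the coin example above concrete, here is a minimal Python sketch of belief updating. It assumes the “twice as likely to land heads” hypothesis means P(heads) = 2/3, the alternative is a fair coin, and the flip sequence is hypothetical:

```python
# Prior: 50% belief that the coin is biased, with P(heads) = 2/3 if biased.
p_biased = 0.5
P_HEADS_BIASED, P_HEADS_FAIR = 2 / 3, 1 / 2

for outcome in "HHTH":  # hypothetical observed flips
    like_biased = P_HEADS_BIASED if outcome == "H" else 1 - P_HEADS_BIASED
    like_fair = P_HEADS_FAIR if outcome == "H" else 1 - P_HEADS_FAIR
    # Posterior is proportional to prior times likelihood,
    # normalized over the two hypotheses.
    p_biased = (like_biased * p_biased) / (
        like_biased * p_biased + like_fair * (1 - p_biased))
    print(f"after {outcome}: P(biased) = {p_biased:.3f}")
```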

For more on the application of Bayes’ theorem under the Bayesian interpretation of probability, see Bayesian inference.

Frequentist interpretation

Illustration of frequentist interpretation with tree diagrams. Bayes’ theorem connects conditional probabilities to their inverses.

In the frequentist interpretation, probability measures a “proportion of outcomes.” For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A, and P(B) that with property B. P(B | A ) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B ) the proportion of those with A out of those with B.

The role of Bayes’ theorem is best visualized with tree diagrams, as in the illustration described above. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes’ theorem serves as the link between these different partitionings.

Example

Tree diagram illustrating the frequentist example. R, C, P and P̄ are the events representing rare, common, pattern and no pattern. Percentages in parentheses are calculated. Note that three independent values are given, so it is possible to calculate the inverse tree (see figure above).

An entomologist spots what might be a rare subspecies of beetle, due to the pattern on its back. In the rare subspecies, 98% have the pattern, i.e. P(Pattern ∣ Rare) = 98%. In the common subspecies, 5% have the pattern. The rare subspecies accounts for only 0.1% of the population. How likely is a beetle with the pattern to be rare; that is, what is P(Rare ∣ Pattern)?

From the extended form of Bayes’ theorem (since any beetle is either rare or common),

$$\begin{aligned}
P(\text{Rare} \mid \text{Pattern}) &= \frac{P(\text{Pattern} \mid \text{Rare})\,P(\text{Rare})}{P(\text{Pattern} \mid \text{Rare})\,P(\text{Rare}) + P(\text{Pattern} \mid \text{Common})\,P(\text{Common})} \\
&= \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.05 \times 0.999} \\
&\approx 1.9\%
\end{aligned}$$
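Under the frequentist reading, the same 1.9% emerges as a proportion of outcomes. A minimal Python simulation of a hypothetical beetle population with the probabilities above:

```python
import random

random.seed(0)
trials = 1_000_000
patterned = rare_and_patterned = 0

for _ in range(trials):
    rare = random.random() < 0.001        # P(Rare) = 0.1%
    p_pattern = 0.98 if rare else 0.05    # P(Pattern | subspecies)
    if random.random() < p_pattern:
        patterned += 1
        rare_and_patterned += rare

# P(Rare | Pattern) as the proportion of patterned beetles that are rare.
print(rare_and_patterned / patterned)  # ≈ 0.019
```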

Events

Simple form

For events A and B, provided that P(B) ≠ 0,

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$

In many applications, for instance in Bayesian inference, the event B is fixed in the discussion, and we wish to consider the impact of its having been observed on our belief in various possible events A. In such a situation the denominator of the last expression, the probability of the given evidence B, is fixed; what we want to vary is A. Bayes’ theorem then shows that the posterior probabilities are proportional to the numerator:

$$P(A \mid B) \propto P(A) \cdot P(B \mid A)$$ (proportionality over A for given B).

In words: posterior is proportional to prior times likelihood.[16]

If events A₁, A₂, … are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, and we know their probabilities up to proportionality, then we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. Denoting the constant of proportionality by c, we have

$$P(A \mid B) = c \cdot P(A) \cdot P(B \mid A) \quad\text{and}\quad P(\neg A \mid B) = c \cdot P(\neg A) \cdot P(B \mid \neg A).$$

Adding these two formulas we deduce that

$$1 = c \cdot \bigl(P(B \mid A) \cdot P(A) + P(B \mid \neg A) \cdot P(\neg A)\bigr),$$

or

$$c = \frac{1}{P(B \mid A) \cdot P(A) + P(B \mid \neg A) \cdot P(\neg A)} = \frac{1}{P(B)}.$$
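As a check, the drug-testing numbers from above give

$$c = \frac{1}{0.99 \times 0.005 + 0.01 \times 0.995} = \frac{1}{0.0149} \approx 67.1,$$

so that P(User ∣ +) = c · P(User) · P(+ ∣ User) ≈ 67.1 × 0.005 × 0.99 ≈ 0.332, matching the earlier result.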

Random variables

Diagram illustrating the meaning of Bayes’ theorem as applied to an event space generated by continuous random variables X and Y. Note that there exists an instance of Bayes’ theorem for each point in the domain. In practice, these instances might be parametrized by writing the specified probability densities as a function of x and y.

Consider a sample space Ω generated by two random variables X and Y. In principle, Bayes’ theorem applies to the events A = {X = x} and B = {Y = y}. However, where a variable has a continuous density, the probability of any exact point value is zero, so these terms vanish and the conditional probabilities are undefined. To remain useful, Bayes’ theorem must be formulated in terms of the relevant densities (see Derivation).

Simple form

If X is continuous and Y is discrete,

$$f_X(x \mid Y = y) = \frac{P(Y = y \mid X = x)\,f_X(x)}{P(Y = y)}.$$

If X is discrete and Y is continuous,

$$P(X = x \mid Y = y) = \frac{f_Y(y \mid X = x)\,P(X = x)}{f_Y(y)}.$$

If both X and Y are continuous,

$$f_X(x \mid Y = y) = \frac{f_Y(y \mid X = x)\,f_X(x)}{f_Y(y)}.$$

Extended form

Diagram illustrating how an event space generated by continuous random variables X and Y is often conceptualized.

A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For f_Y(y), this becomes an integral:

$$f_Y(y) = \int_{-\infty}^{\infty} f_Y(y \mid X = \xi)\,f_X(\xi)\,d\xi.$$
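A minimal numerical sketch of the continuous form, assuming a hypothetical model with X ~ N(0, 1) a priori and Y ∣ X = x ~ N(x, 0.5²), and approximating the integral above on a grid:

```python
import numpy as np

def normal_pdf(z, mean, sd):
    return np.exp(-0.5 * ((z - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

xs = np.linspace(-6, 6, 2001)   # grid over the support of X
dx = xs[1] - xs[0]
y = 1.2                          # hypothetical observed value of Y

prior = normal_pdf(xs, 0.0, 1.0)              # f_X(x)
likelihood = normal_pdf(y, xs, 0.5)           # f_Y(y | X = x) as a function of x
evidence = np.sum(likelihood * prior) * dx    # f_Y(y), the integral above
posterior = likelihood * prior / evidence     # f_X(x | Y = y)

print(np.sum(posterior) * dx)   # ≈ 1: the posterior density normalizes
```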

Bayes’ rule

Bayes’ rule is Bayes’ theorem in odds form.

$$O(A_1 : A_2 \mid B) = O(A_1 : A_2) \cdot \Lambda(A_1 : A_2 \mid B)$$

where

$$\Lambda(A_1 : A_2 \mid B) = \frac{P(B \mid A_1)}{P(B \mid A_2)}$$

is called the Bayes factor or likelihood ratio, and the odds between two events is simply the ratio of the probabilities of the two events. Thus

$$O(A_1 : A_2) = \frac{P(A_1)}{P(A_2)}, \qquad O(A_1 : A_2 \mid B) = \frac{P(A_1 \mid B)}{P(A_2 \mid B)}.$$

So the rule says that the posterior odds are the prior odds times the Bayes factor, or in other words, posterior is proportional to prior times likelihood.
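Applied to the drug-testing example (with User : Non-user playing the role of A₁ : A₂), a short sketch:

```python
# Odds form of Bayes' theorem with the drug-testing numbers.
prior_odds = 0.005 / 0.995                   # O(User : Non-user)
bayes_factor = 0.99 / 0.01                   # Λ = P(+ | User) / P(+ | Non-user) = 99
posterior_odds = prior_odds * bayes_factor   # O(User : Non-user | +)

# Convert odds back to a probability: p = odds / (1 + odds).
print(posterior_odds / (1 + posterior_odds))  # ≈ 0.332, as before
```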

Derivation

For events

Bayes’ theorem may be derived from the definition of conditional probability:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \quad\text{if } P(B) \neq 0,$$
$$P(B \mid A) = \frac{P(B \cap A)}{P(A)}, \quad\text{if } P(A) \neq 0,$$

because

$$P(B \cap A) = P(A \cap B)$$
$$\Rightarrow P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)$$
$$\Rightarrow P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \quad\text{if } P(B) \neq 0.$$

For random variables

For two continuous random variables X and Y, Bayes’ theorem may be analogously derived from the definition of conditional density:

$$f_X(x \mid Y = y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}$$
$$f_Y(y \mid X = x) = \frac{f_{X,Y}(x, y)}{f_X(x)}$$
$$\Rightarrow f_X(x \mid Y = y) = \frac{f_Y(y \mid X = x)\,f_X(x)}{f_Y(y)}.$$

Lindley’s paradox

Lindley’s paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis-testing problem give different results for certain choices of the prior distribution. The problem of the disagreement between the two approaches was discussed in Harold Jeffreys’ 1939 textbook;[1] it became known as Lindley’s paradox after Dennis Lindley called the disagreement a paradox in a 1957 paper.

Description of the paradox

Consider the result x of some experiment, with two possible explanations, hypotheses H₀ and H₁, and some prior distribution π representing uncertainty as to which hypothesis is more accurate before taking into account x.

Lindley’s paradox occurs when

  1. The result x is “significant” by a frequentist test of H₀, indicating sufficient evidence to reject H₀, say, at the 5% level, and
  2. the posterior probability of H₀ given x is high, indicating strong evidence that H₀ is in better agreement with x than H₁.

These results can occur at the same time when H₀ is very specific, H₁ more diffuse, and the prior distribution does not strongly favor one or the other.

Numerical example

We can illustrate Lindley’s paradox with a numerical example. Imagine a certain city where 49,581 boys and 48,870 girls have been born over a certain time period. The observed proportion x of male births is thus 49,581/98,451 ≈ 0.5036. We assume the number of male births is a binomial variable with parameter θ. We are interested in testing whether θ is 0.5 or some other value. That is, our null hypothesis is H₀: θ = 0.5 and the alternative is H₁: θ ≠ 0.5.
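The example breaks off here; the following Python sketch shows how the two analyses typically proceed, assuming equal prior weight P(H₀) = P(H₁) = 1/2 and θ uniform on (0, 1) under H₁, the standard choices in presentations of this paradox:

```python
import math

n, k = 98_451, 49_581   # total births, male births

# Frequentist: test H0: theta = 0.5 via the normal approximation to the binomial.
z = (k - n * 0.5) / math.sqrt(n * 0.25)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")   # p < 0.05: reject H0

# Bayesian: P(H0 | x) with equal priors and theta | H1 ~ Uniform(0, 1).
log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
p_x_h0 = math.exp(log_binom + n * math.log(0.5))  # binomial pmf at theta = 0.5
p_x_h1 = 1 / (n + 1)   # the binomial pmf integrated over theta in [0, 1]
                       # equals exactly 1/(n + 1)
print(f"P(H0 | x) = {p_x_h0 / (p_x_h0 + p_x_h1):.3f}")  # high: H0 well supported
```

The two conditions of the paradox hold simultaneously: the frequentist test rejects H₀ at the 5% level, while the posterior probability of H₀ is high.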
