This last example, in which the order of the conditioning is reversed (i.e. obtaining P(M|E) from P(E|M) and P(E|F)) illustrates Bayes' Theorem.
In general, for any two events X and Y,

P(X|Y) = P(Y|X)P(X) / P(Y), where P(Y) = P(Y|X)P(X) + P(Y|not X)P(not X).
Bayes' Theorem is useful in many circumstances. For example, in medical diagnosis it can be used to estimate the probability that a patient really has a certain disease when a particular diagnostic procedure says so.
Consider screening for cervical (or breast) cancer.
Let C be the event of a woman having the disease. Suppose P(C) = 0.0002 (1 in 5000).
Let B be the event of a positive test: i.e. the screening procedure says she has the disease.
Let P(B|C) = 0.9: i.e. the test correctly identifies 90% of women who have the disease.
Let P(B|not C) = 0.005: i.e. the test gives 5 positive results, on average, in every 1000 women who do not have the disease. These results are called "false positives".
Find P(C|B), i.e. the probability that a woman has the disease given that the test says she does.
By Bayes' Theorem P(C|B) = P(B|C)P(C) / P(B) where P(B) = P(B|C)P(C) + P(B|not C)P(not C).
For these data P(B|C)P(C) = 0.9 x 0.0002 = 0.00018
P(B) = 0.00018 + 0.005(1 - 0.0002) = 0.005179
so P(C|B) = 0.00018 / 0.005179 = 0.035 (approximately).
Thus fewer than 4% of women diagnosed as having the disease actually do have it!
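The calculation above can be checked with a short script. This is a minimal sketch; the variable names are mine, and the numbers are the ones assumed in the text:

```python
# Bayes' theorem applied to the screening example.
p_c = 0.0002             # P(C): prevalence of the disease (1 in 5000)
p_b_given_c = 0.9        # P(B|C): test correctly detects 90% of cases
p_b_given_not_c = 0.005  # P(B|not C): false-positive rate

# Total probability of a positive test:
# P(B) = P(B|C)P(C) + P(B|not C)P(not C)
p_b = p_b_given_c * p_c + p_b_given_not_c * (1 - p_c)

# Bayes' theorem: P(C|B) = P(B|C)P(C) / P(B)
p_c_given_b = p_b_given_c * p_c / p_b

print(f"P(B)   = {p_b:.6f}")          # 0.005179
print(f"P(C|B) = {p_c_given_b:.4f}")  # about 0.035
```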
The expected outcomes among a million women tested are like this:
|                | Test negative | Test positive | Total     |
|----------------|---------------|---------------|-----------|
| Disease (C)    | 20            | 180           | 200       |
| No disease     | 994,801       | 4,999         | 999,800   |
| Total          | 994,821       | 5,179         | 1,000,000 |
(Try doing this calculation for other values of P(C) and P(B|not C))
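To experiment with other values, as the exercise suggests, the posterior probability can be wrapped in a small function. This is a sketch; the function and parameter names are my own:

```python
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(C|B): probability of disease given a positive test."""
    # P(B) by the law of total probability
    p_b = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_b

# Higher prevalence or a lower false-positive rate raises the posterior.
for p_c in (0.0002, 0.002, 0.02):
    for fp in (0.005, 0.001):
        result = posterior(p_c, 0.9, fp)
        print(f"P(C)={p_c}, P(B|not C)={fp}: P(C|B)={result:.3f}")
```

Notice that even a tenfold increase in prevalence leaves the posterior well below a half when the false-positive rate is 0.005.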
This illustrates the dubious value of large-scale screening programs directed at diseases of low prevalence.