Question
A medical test is 95% sensitive and 90% specific. Only 2% of the population has the disease. If someone tests positive, what is the probability they are actually sick?
Quick Answer
Using Bayes' theorem, the probability that a person actually has the disease after testing positive is approximately 16.24%. This might seem low, but it makes sense once you factor in how rare the disease is. With only 2% prevalence, most positive results come from healthy people, not sick ones. Watch the video below to see exactly how we get there.
Bottom Line
P(Disease | Positive Test) ≈ 16.24%
Even with a 95% sensitive test, low disease prevalence means only about 1 in 6 positive results is a true positive.
How to Solve a Bayes Theorem Problem: Video Walkthrough
We created a step-by-step video walkthrough of this Bayes theorem example. Watch how we apply the Bayes rule formula from scratch, building each piece of the equation before combining them into the final answer.
Full Video Transcript
Below is the complete transcript from the video explanation:
- Let’s read the problem together. We’re given sensitivity, specificity, and prevalence, and we want the probability that a person truly has the disease given a positive test. We’ll use Bayes’ theorem and work slowly, step by step.
- This low prevalence will heavily shape the final probability. This is the target probability we need to compute. First, let’s name the events so we can write everything clearly. This keeps our reasoning tidy.
- Now, we translate the prevalence into a probability. 2% have the disease. Sensitivity is the chance the test is positive when the person truly has the disease (95%). Specificity is the chance the test is negative when the person does not have the disease (90%).
- From specificity, we can get the false positive rate using the complement: 1 minus specificity. Similarly, if 2% have the disease, then 98% do not, which is the complement of the prevalence.
- Great, now we apply Bayes’ theorem to combine the prior probability with test accuracy. Let’s compute each piece. First, the numerator: Positive given disease times the prevalence. Next, the false positive contribution in the denominator. This false positive contribution is large relative to the true positive part.
- Add those to get the total chance of a positive result. Finally, divide the numerator by the denominator to get the posterior probability that a person truly has the disease after a positive test.
- So even with a sensitive and fairly specific test, because the disease is rare, the chance of truly having it after a positive is about 16%. Nice work stepping carefully through Bayes’ theorem.
Step-by-Step Explanation
Bayes theorem is really asking one simple question: of all the people who test positive, how many actually have the disease? That framing makes the math much more intuitive. Here is how we use Bayes theorem to solve this problem step by step.
Step 1: Define the Events
Before plugging in any numbers, let’s clearly name what we’re working with. Think of it like setting up a confusion matrix with four possible outcomes depending on whether the person has the disease and whether the test fires positive or negative.
- D = person has the disease
- + = test result is positive
- We want P(D | +), the probability of having the disease given a positive test
Step 2: Extract the Given Probabilities
This is where prevalence becomes critical. Only 2% of the population has this disease, which means 98% of people who walk in for a test are healthy. That imbalance is what makes the result so surprising.
- P(Disease) = 0.02 <- 2% prevalence (only 2 in 100 people are sick)
- P(No Disease) = 0.98 <- 98% are healthy
- P(+ | Disease) = 0.95 <- 95% sensitivity; test catches most true cases
- P(+ | No Disease) = 0.10 <- 10% false positive rate (1 - 0.90 specificity)
Notice that the 10% false positive rate applied to the 98 healthy people in every 100 generates far more false alarms than the true positives among just 2 sick people.
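That imbalance is easy to verify with raw counts. Here is a minimal Python sketch, assuming an illustrative population of 10,000 people (the population size is our own choice, not from the problem):

```python
# Count true and false positives in an illustrative population of 10,000.
population = 10_000
sick = population * 0.02          # 200 people have the disease
healthy = population - sick       # 9,800 people do not

true_positives = sick * 0.95      # 95% sensitivity: 190 sick people test positive
false_positives = healthy * 0.10  # 10% false positive rate: 980 healthy people test positive

# Among everyone who tests positive, what fraction is actually sick?
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(round(p_disease_given_positive, 4))  # 0.1624
```

The 980 false alarms dwarf the 190 true positives, which is exactly the imbalance that drives the final answer.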
Step 3: Apply the Bayes Rule Formula
This is where the Bayes rule formula comes together. It combines the prior probability of disease with how likely the test is to be positive, producing the Bayes theorem probability for our specific scenario. Think of the denominator as asking: out of 100 people, how many will test positive, sick or not?
- The formula is P(D | +) = P(+ | D) x P(D) / P(+)
- Where the total probability of a positive result P(+) accounts for both true positives and false positives: P(+) = P(+ | D) x P(D) + P(+ | No D) x P(No D)
Step 4: Plug in the Numbers
Let’s put real numbers in and see what happens:
- Numerator (true positives): 0.95 x 0.02 = 0.019
- Denominator (all positives): (0.95 x 0.02) + (0.10 x 0.98) = 0.019 + 0.098 = 0.117
- Result: P(Disease | Positive) = 0.019 / 0.117 ≈ 0.1624 = 16.24%
The aha moment: even though the test is 95% sensitive, 9.8% of the population (0.098) are healthy people who still trigger a positive. That false positive pool is about five times larger than the true positive pool (0.019), which is why only about 1 in 6 positive results is genuine. This quotient is the posterior probability, the updated belief after receiving a positive test result. The low prevalence of 2% is the key driver.
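The whole calculation can be packaged as a small reusable function. This is a sketch under our own naming (`posterior_positive` and its parameters are illustrative, not from the video):

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity              # true positive rate
    p_pos_given_healthy = 1 - specificity          # false positive rate
    numerator = p_pos_given_disease * prevalence   # true positive mass
    denominator = numerator + p_pos_given_healthy * (1 - prevalence)  # all positives
    return numerator / denominator

print(round(posterior_positive(0.02, 0.95, 0.90), 4))  # 0.1624
```

Try lowering the prevalence to 0.001: the posterior drops below 1%, showing just how strongly the prior shapes the result.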
Frequently Asked Questions
What is Bayes theorem?
Bayes theorem is a formula that updates the probability of something being true based on new evidence. The formula is P(A | B) = P(B | A) x P(A) / P(B), where P(A) is your prior belief and P(B | A) is how likely the evidence is if A is true.
How does conditional probability relate to Bayes theorem?
Conditional probability is the probability of an event given that another has occurred, written as P(A | B). Bayes theorem is built on top of it and lets you flip the condition around: if you know P(B | A), the Bayes rule formula tells you how to calculate P(A | B) by factoring in the prior probability of A.
Where is Bayes theorem used in real life?
Bayes theorem is widely used in real life. Doctors use it to interpret medical test results, email providers rely on it to filter spam, and machine learning systems apply it to classify data. Whenever you start with a prior belief and receive new evidence, Bayes theorem provides a principled way to update that belief.
Want a Step-by-Step Video for Your Own Question?
The above example’s video explanation was generated using Think10x.ai.
Upload a clear photo of any math problem, including probability, algebra, geometry, or calculus, and our tool will turn it into a narrated, animated explanation in about 15 minutes.
👉 Try it at Think10x.ai
Private by default. Captions and transcripts included. Built for tutors and students.