Give an example of how Positive Predictive Value changes with disease prevalence

Here's a clear worked example showing exactly how PPV changes as prevalence shifts, using the same test throughout.

Positive Predictive Value (PPV) and Disease Prevalence

PPV = the probability that a patient with a positive test result truly has the disease.
$$\text{PPV} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$$
Critically, sensitivity and specificity are intrinsic properties of a test and do not change with prevalence — but PPV does.
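
Because sensitivity, specificity, and prevalence fully determine PPV, the relationship is easy to verify directly. Here is a minimal sketch in Python (the `ppv` helper is my own naming, not a library function):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive test), by Bayes' theorem."""
    true_pos = sensitivity * prevalence               # P(test+ and diseased)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(test+ and disease-free)
    return true_pos / (true_pos + false_pos)

# Same test (98% sensitivity, 90% specificity) at two prevalences:
print(f"{ppv(0.98, 0.90, 0.05):.1%}")   # 34.0% at 5% prevalence (CCU)
print(f"{ppv(0.98, 0.90, 0.005):.1%}")  # 4.7% at 0.5% prevalence (screening)
```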

The Setup

Use a cardiac marker test for acute myocardial infarction (AMI) with:
  • Sensitivity = 98% (detects 98% of true AMI cases)
  • Specificity = 90% (correctly excludes 90% of non-AMI cases)
Population = 1,000 patients tested in two different settings.

Scenario 1 — High Prevalence (5%): Hospital CCU

50 patients have AMI, 950 do not.
| Result   | Disease (AMI) | No Disease | Total |
|----------|---------------|------------|-------|
| Positive | 49 (TP)       | 95 (FP)    | 144   |
| Negative | 1 (FN)        | 855 (TN)   | 856   |
| Total    | 50            | 950        | 1,000 |
  • TP = 50 × 0.98 = 49
  • FP = 950 × (1 − 0.90) = 95
$$\text{PPV} = \frac{49}{49 + 95} = \frac{49}{144} = \mathbf{34\%}$$

Scenario 2 — Low Prevalence (0.5%): General Outpatient Screening

5 patients have AMI, 995 do not.
| Result   | Disease (AMI) | No Disease | Total |
|----------|---------------|------------|-------|
| Positive | ~5 (TP)       | ~100 (FP)  | ~105  |
| Negative | ~0 (FN)       | ~895 (TN)  | ~895  |
| Total    | 5             | 995        | 1,000 |
  • TP = 5 × 0.98 ≈ 5
  • FP = 995 × 0.10 ≈ 100
$$\text{PPV} = \frac{5}{5 + 100} \approx \mathbf{4.8\%}$$
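
Both scenarios can be reproduced from the same inputs. Below is a small sketch that derives the expected 2×2 counts before rounding; note that the unrounded low-prevalence PPV is 4.7%, and the 4.8% above comes from rounding TP and FP to whole patients first (function and label names are illustrative):

```python
def two_by_two(n, prevalence, sensitivity, specificity):
    """Expected (unrounded) 2x2 counts for n tested patients."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity   # true positives
    fn = diseased - tp            # false negatives
    tn = healthy * specificity    # true negatives
    fp = healthy - tn             # false positives
    return tp, fp, fn, tn

for label, prev in [("CCU (5% prevalence)", 0.05),
                    ("Outpatient screen (0.5% prevalence)", 0.005)]:
    tp, fp, fn, tn = two_by_two(1000, prev, 0.98, 0.90)
    print(f"{label}: TP={tp:.1f}, FP={fp:.1f}, PPV={tp / (tp + fp):.1%}")
# CCU (5% prevalence): TP=49.0, FP=95.0, PPV=34.0%
# Outpatient screen (0.5% prevalence): TP=4.9, FP=99.5, PPV=4.7%
```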

The Pattern at a Glance

From Henry's Clinical Diagnosis and Management by Laboratory Methods (Table 8.6), comparing a test with 90% sensitivity and 90% specificity against one with 99% sensitivity and 99% specificity:
| Prevalence | PPV (90%/90% test) | PPV (99%/99% test) |
|------------|--------------------|--------------------|
| 0.01%      | 0.09%              | 0.9%               |
| 0.1%       | 0.9%               | 9%                 |
| 5%         | 32%                | 84%                |
| 50%        | 90%                | 99%                |
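
Since every cell follows from the same formula, the table is easy to reproduce. Here is a sketch that sweeps prevalence for both hypothetical tests; small differences from the printed values (e.g., 0.98% vs. 0.9%) are rounding in the published table:

```python
def ppv(sens, spec, prev):
    # Bayes: P(disease | positive test)
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print("Prevalence  90%/90%  99%/99%")
for prev in (0.0001, 0.001, 0.05, 0.50):
    print(f"{prev:>9.2%}  {ppv(0.90, 0.90, prev):>7.2%}  {ppv(0.99, 0.99, prev):>7.2%}")
# Prevalence  90%/90%  99%/99%
#     0.01%    0.09%    0.98%
#     0.10%    0.89%    9.02%
#     5.00%   32.14%   83.90%
#    50.00%   90.00%   99.00%
```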

Why This Happens

At low prevalence, the tested population contains far more disease-free people than diseased ones, so even a modest false positive rate (here 10%, from the 90% specificity) produces a flood of false positives relative to the few true positives. The false-positive term in the denominator of PPV balloons, and the ratio collapses.
Key clinical implications:
  1. Specificity drives PPV — a highly specific test minimizes false positives and maintains PPV even at lower prevalence. Sensitivity primarily drives NPV.
  2. Screening vs. diagnosis — the same test that works well diagnostically in a high-risk clinic (high pre-test probability) may be nearly useless as a population-wide screen.
  3. Pre-test probability matters — this is the Bayesian principle: restricting testing to high-risk patients artificially raises "local prevalence," improving PPV. The heparin-induced thrombocytopenia (HIT) anti-PF4 antibody test is a classic example: PPV is ~40% in all suspected HIT patients, but rises substantially when a clinical scoring system (4T score) is used to select only high-probability patients before testing.
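
To make point 3 concrete, the same arithmetic can be phrased in Bayesian terms: prevalence becomes pre-test probability, and the test is summarized by its positive likelihood ratio. The numbers below are illustrative assumptions (a 90%/90% test, 5% vs. 30% pre-test probability), not the published HIT figures:

```python
def post_test_prob(pre_test_prob, sensitivity, specificity):
    """Probability of disease after a positive result.

    post-test odds = pre-test odds * LR+, where LR+ = sens / (1 - spec).
    Equivalent to the PPV formula with prevalence = pre-test probability.
    """
    lr_pos = sensitivity / (1 - specificity)
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)

# Assumed 90%/90% test; pre-test probabilities are illustrative, not HIT data.
print(f"{post_test_prob(0.05, 0.90, 0.90):.0%}")  # test everyone: ~32%
print(f"{post_test_prob(0.30, 0.90, 0.90):.0%}")  # 4T-selected patients: ~79%
```
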
"For a disease with low prevalence, even a test with high sensitivity and specificity will yield a low predictive value because most positive test results will be false positives." — Henry's Clinical Diagnosis and Management by Laboratory Methods

Can you give an example of an actual situation?
