Analyzing Diagnostic Test Accuracy: Sensitivity, Specificity, Predictive Values & Likelihood Ratios Explained
- Mayta
- Jul 24
- 3 min read
Introduction
Diagnostic tests are only as useful as their demonstrated performance. Once data from a diagnostic accuracy study have been collected, the next step is analysis: converting raw counts of true and false results into meaningful metrics. This step is crucial because misinterpretation can lead to overuse of poor tests or underuse of effective ones. To ensure responsible interpretation, one must not only calculate key performance indicators but also understand their clinical implications.
Structuring the Data: Building the 2×2 Table
The foundation of diagnostic accuracy analysis is the 2×2 contingency table, which compares test results to the actual disease status (based on a trusted reference standard).
|                   | Disease Present (+) | Disease Absent (−) | Total |
|-------------------|---------------------|--------------------|-------|
| Test Positive (+) | True Positive (TP)  | False Positive (FP) | TP + FP |
| Test Negative (−) | False Negative (FN) | True Negative (TN)  | FN + TN |
| Total             | TP + FN             | FP + TN             | N = TP + FP + FN + TN |
Example scenario: Suppose a study examines a rapid test for influenza in 300 patients.
60 have flu confirmed by PCR (reference test).
50 of these are correctly identified by the rapid test (TP).
10 cases are missed (FN).
40 non-flu patients test positive falsely (FP).
200 test negative correctly (TN).
From this:
TP = 50
FN = 10
FP = 40
TN = 200
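To make the bookkeeping explicit, here is a minimal Python sketch (the variable names are my own, not part of the study) that stores these counts and checks that the margins of the 2×2 table add up:

```python
# Counts from the worked influenza example (reference standard: PCR)
tp, fn = 50, 10    # among the 60 PCR-confirmed flu cases
fp, tn = 40, 200   # among the 240 patients without flu

# Sanity-check the table margins against the study totals
assert tp + fn == 60              # disease present
assert fp + tn == 240             # disease absent
assert tp + fp + fn + tn == 300   # grand total N
```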
Core Metrics: What They Measure and How to Calculate Them
1. Sensitivity and Specificity
These are intrinsic properties of the test and do not depend on disease prevalence.
Sensitivity = TP / (TP + FN) = 50 / (50 + 10) = 0.833, or 83.3%. Interpretation: Among people with flu, the test correctly detects 83.3%.
Specificity = TN / (TN + FP) = 200 / (200 + 40) = 0.833, or 83.3%. Interpretation: Among people without flu, 83.3% test negative.
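As a quick arithmetic check, the same two calculations in Python, reusing the counts from the worked example above:

```python
tp, fn, fp, tn = 50, 10, 40, 200  # counts from the 2x2 table above

sensitivity = tp / (tp + fn)  # proportion of flu patients who test positive
specificity = tn / (tn + fp)  # proportion of non-flu patients who test negative

print(f"Sensitivity: {sensitivity:.1%}")  # 83.3%
print(f"Specificity: {specificity:.1%}")  # 83.3%
```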
2. Predictive Values
These reflect how test results translate to real-world patient probabilities, and are highly dependent on prevalence.
Positive Predictive Value (PPV) = TP / (TP + FP) = 50 / (50 + 40) = 0.556, or 55.6%. Interpretation: If the test is positive, there is a 55.6% chance the person truly has flu.
Negative Predictive Value (NPV) = TN / (TN + FN) = 200 / (200 + 10) = 0.952, or 95.2%. Interpretation: If the test is negative, there is a 95.2% chance the person does not have flu.
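The same counts give the predictive values directly; a short Python check:

```python
tp, fn, fp, tn = 50, 10, 40, 200  # counts from the 2x2 table above

ppv = tp / (tp + fp)  # probability of flu given a positive test
npv = tn / (tn + fn)  # probability of no flu given a negative test

print(f"PPV: {ppv:.1%}")  # 55.6%
print(f"NPV: {npv:.1%}")  # 95.2%
```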
3. Likelihood Ratios (LR)
Unlike predictive values, likelihood ratios do not depend on prevalence; they are used to convert a pre-test probability into a post-test probability.
Positive Likelihood Ratio (LR+) = Sensitivity / (1 − Specificity) = 0.833 / (1 − 0.833) = 0.833 / 0.167 ≈ 5.0. Meaning: A positive result is about 5 times more likely in a flu patient than in a non-flu patient.
Negative Likelihood Ratio (LR−) = (1 − Sensitivity) / Specificity = 0.167 / 0.833 ≈ 0.20. Meaning: A negative result is only 0.2 times as likely in someone with flu as in someone without flu.
These LRs are helpful in applying Bayes’ Theorem for individualized clinical decisions.
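To illustrate that use, the sketch below applies Bayes' Theorem in odds form: convert a pre-test probability to pre-test odds, multiply by the likelihood ratio, and convert back. The 20% pre-test probability is simply the prevalence in the worked example (60/300), chosen so the result can be checked against the PPV; it is an illustrative assumption, not a general figure.

```python
tp, fn, fp, tn = 50, 10, 40, 200  # counts from the 2x2 table above

sens = tp / (tp + fn)
spec = tn / (tn + fp)
lr_pos = sens / (1 - spec)   # about 5.0
lr_neg = (1 - sens) / spec   # about 0.20

# Bayes' theorem in odds form: post-test odds = pre-test odds * LR
pretest_prob = 60 / 300                             # study prevalence, 20%
pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * lr_pos               # after a positive result
posttest_prob = posttest_odds / (1 + posttest_odds)

print(f"Post-test probability after a positive result: {posttest_prob:.1%}")
# About 55.6%, matching the PPV, because the pre-test probability here
# equals the study prevalence.
```

Running the same update with LR− takes the probability after a negative result down to about 4.8%, which is consistent with the NPV of 95.2%.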
Step-by-Step Table Summary: Recap of Calculations
| Metric | Formula | Value |
|--------|---------|-------|
| Sensitivity | TP / (TP + FN) | 83.3% |
| Specificity | TN / (TN + FP) | 83.3% |
| Positive Predictive Value (PPV) | TP / (TP + FP) | 55.6% |
| Negative Predictive Value (NPV) | TN / (TN + FN) | 95.2% |
| Positive Likelihood Ratio (LR+) | Sens / (1 − Spec) | 5.0 |
| Negative Likelihood Ratio (LR−) | (1 − Sens) / Spec | 0.20 |
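For convenience, all six metrics can be wrapped into one small helper. The function below is a minimal sketch of the calculations in the table (the name and structure are mine, not from any particular library):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic accuracy metrics from 2x2 table counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_pos": sens / (1 - spec),
        "lr_neg": (1 - sens) / spec,
    }

# Worked example from this post
print(diagnostic_metrics(tp=50, fp=40, fn=10, tn=200))
```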
Interpreting the Results
Statistical measures must be interpreted within a clinical context:
A test with high sensitivity is useful for ruling out a disease when the result is negative (“SnNout”).
A test with high specificity is helpful for ruling in disease when the result is positive (“SpPin”).
A test with LR+ >10 or LR− <0.1 is generally considered strong for clinical decision-making.
In low-prevalence settings, even good tests may have low PPVs, which raises caution about over-diagnosis; the sketch below makes this concrete.
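The example keeps the 83.3% sensitivity and specificity from this post but applies them to two hypothetical populations, one with 2% prevalence and one with 20%; both prevalence figures are illustrative assumptions, not study data.

```python
def ppv_at_prevalence(sens: float, spec: float, prev: float) -> float:
    """PPV from sensitivity, specificity, and prevalence (Bayes' theorem)."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

sens, spec = 50 / 60, 200 / 240   # both 83.3%, from the worked example
for prev in (0.02, 0.20):         # illustrative prevalences
    print(f"Prevalence {prev:.0%}: PPV = {ppv_at_prevalence(sens, spec, prev):.1%}")
# Prevalence 2%: PPV about 9.3%
# Prevalence 20%: PPV about 55.6%
```

Even with the same test, a positive result means something very different in the two settings.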
Conclusion
Analyzing diagnostic accuracy is more than crunching numbers. It’s about understanding the real-world performance of a test and how its results should inform decision-making. By carefully calculating and interpreting sensitivity, specificity, predictive values, and likelihood ratios, clinicians can use tests more wisely—reducing unnecessary treatments and avoiding missed diagnoses.