P-Values in Regression: Beyond Thresholds—A Clinical-Grade Guide
- Mayta

- Aug 7
- 2 min read
1. What is a p-value in Regression, Really?
A p-value tests how surprising your observed association (e.g., between age and systolic BP) would be if no true effect existed (null hypothesis: β = 0). The smaller the p, the less likely your data are under the null.
Stata in Action:
regress systolicBP age bmi
Here, a p-value is attached to each coefficient. For age: β = 0.78, SE = 0.25, t = 3.12, p = 0.002.
Strong evidence that age is associated with systolic BP; a result this extreme would be unlikely by chance alone.
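To see where that p-value comes from, it can be reproduced by hand after fitting the model. A minimal sketch, using the same variables as above:
regress systolicBP age bmi
* two-sided p-value for age: |t| = coefficient / standard error,
* referred to a t distribution with the residual degrees of freedom
display 2 * ttail(e(df_r), abs(_b[age] / _se[age]))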
2. Why p-Values? The Clinical Logic
In clinical regression, p-values quantify uncertainty. They don’t prove clinical importance—they test statistical surprise:
p < 0.05: Unlikely due to chance → statistical significance.
p > 0.05: Could be random; does not prove absence of effect.
But:
Strength of association: Use the coefficient (β).
Precision: Check the confidence interval (CI).
Chance vs. signal: Use the p-value.
Reporting rule: Always show effect size and CI; p-value is only part of the story.
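As a quick illustration of that division of labor, the 95% CI can be recomputed by hand from the coefficient and its standard error after the regression above (a sketch):
regress systolicBP age bmi
* 95% CI for age: beta plus/minus t(df, 0.975) times SE
display _b[age] - invttail(e(df_r), 0.025) * _se[age]
display _b[age] + invttail(e(df_r), 0.025) * _se[age]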
3. Deep Dive: What Else Shapes a p-Value?
Sample size (n): Larger n → smaller SE → often smaller p-values (even for minor effects).
Variability: More noise = less power = higher p-values.
Model complexity: Overfitting can create misleadingly “significant” p-values.
Real-World Example:
A massive cohort study may yield p < 0.0001 for a β of 0.01. Clinically trivial, statistically “significant.”
Lesson: Always report effect size and CI. Never “chase” low p-values.
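A quick simulation makes the point. The sketch below invents a cohort with a true effect of only 0.01 mmHg per year; with a million observations, the p-value comes out tiny anyway (illustrative numbers, not real data):
clear
set seed 20240807
set obs 1000000
generate age = rnormal(50, 10)
generate systolicBP = 120 + 0.01*age + rnormal(0, 15)
regress systolicBP age
* the p-value for age will typically be far below 0.0001, yet beta is about 0.01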
4. Practical Workflow for Clinical Researchers
Step 1. Check Model Fit First
Explore data:
summarize systolicBP age bmi, detail
histogram systolicBP
Fit regression:
regress systolicBP age bmi
Examine:
Coefficient (β)
CI
p-value
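One convenient way to see all three together is the results matrix Stata stores after estimation (a small sketch):
regress systolicBP age bmi
* r(table) holds b, se, t, the p-value, and the CI bounds for each term
matrix list r(table)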
Step 2. Interpret p-Value with Context
p < 0.05? Is β clinically meaningful? Is CI narrow?
p > 0.05? Is the sample too small? Could a real effect be going undetected for lack of power?
Step 3. Don’t Ignore Assumptions
Check linearity, residual behavior (normality and constant variance), independence of observations, and influential outliers.
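In Stata, the usual post-estimation checks might look like this (a sketch, continuing the example model):
regress systolicBP age bmi
* residuals vs. fitted values: eyeball linearity and constant variance
rvfplot
* Breusch-Pagan test for heteroskedasticity
estat hettest
* flag observations with large standardized residuals as potential outliers
predict rstd, rstandard
list if abs(rstd) > 3 & !missing(rstd)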
Step 4. Report for Clinical Readers
“Age was associated with an average 0.78 mmHg increase in SBP per year (95% CI: 0.29 to 1.27; p = 0.002).”
5. Pitfalls & Misconceptions to Avoid
P-value ≠ probability the effect is real. It is the probability of data at least this extreme if there were truly no effect.
Statistical ≠ clinical significance. Tiny p with trivial β is often irrelevant for practice.
Non-significant ≠ no effect. May simply reflect low power.
Multiple testing. Many regressions = false positives. Adjust (Bonferroni, FDR) as needed.
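For instance, a Bonferroni adjustment of the individual coefficient tests can be requested directly (a sketch using the example model):
regress systolicBP age bmi
* report each coefficient's test with a Bonferroni-adjusted p-value
test age bmi, mtest(bonferroni)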
6. When to Ignore p-Values: Modern Best Practices
Focus on effect sizes, CIs, prediction accuracy, and model diagnostics in reporting.
For prediction models (not just inference), calibration and discrimination (e.g., ROC/AUC) are often more valuable; a brief sketch follows this list.
Use p-values for variable selection sparingly. Let clinical logic and DAGs drive confounder inclusion.
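If the goal were prediction of a binary outcome, say a hypothetical indicator hypertension, discrimination and calibration could be checked along these lines (a sketch, not part of the worked example above):
logistic hypertension age bmi
* ROC curve and area under the curve (discrimination)
lroc
* Hosmer-Lemeshow goodness-of-fit test (calibration)
estat gof, group(10)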
7. Clinical-Grade Summary
P-values reflect how surprising your results are if there’s no true effect. They support—but never alone determine—clinical conclusions. Use p-values to quantify uncertainty, but always anchor interpretation in effect size, confidence intervals, and model validity.





