Prediction vs Causation in Clinical Research: Using the DEPTh Model to Choose the Right Study Logic

Clinicians often juggle questions that look similar but actually demand different scientific logics. The DEPTh model (Diagnosis, Etiology, Prognosis, Therapeutic, + Methodologic) is your compass. This article gives you a crisp, bedside-ready way to decide when you’re doing prediction versus when you must argue causation—and what that means for design, metrics, and interpretation.

The Two Logics (in one minute)

  • Predictive (non‑causal) asks: “Who has/gets the outcome?” We combine features to forecast an individual’s probability. We judge performance by discrimination and calibration, not by whether the factors cause the outcome.

  • Causal (explanatory) asks: “Does X change Y?” We must defend a counterfactual claim—what would have happened without X—by controlling confounding with randomization or DAG‑guided adjustment.

DEPTh at a Glance

| DEPTh Type | Core Question | Causal Logic? | Why It’s Framed This Way | Typical Design | Key Metrics |
|---|---|---|---|---|---|
| Diagnosis | “Does this test correctly identify who has the disease now?” | No (Predictive) | Tests don’t cause disease; they detect it. | Cross‑sectional (accuracy, prediction) | Se, Sp, LR+/LR−, AUROC |
| Prognosis | “Given this disease, what will happen next?” | No (Predictive) | Forecasting the future course, not explaining causes. | Inception/clinical cohort | AUROC, calibration, survival probabilities |
| Therapeutic | “Does this intervention change the outcome?” | Yes (Causal) | Intent is to alter outcomes; must neutralize confounding. | RCTs, pragmatic trials, quasi‑experiments | RR, RD, HR; ITT/PP/CACE logic |
| Etiologic | “Does exposure X cause outcome Y?” or “What factors are linked with Y?” | Can be causal or non‑causal | Etiology has two lanes: explanatory causal vs exploratory/predictive association mapping. | Cohort / case‑control | Causal: RR/OR/HR with confounder control; Predictive: AUROC if modeling risk |

Bottom line: Diagnosis & Prognosis are prediction problems. Therapeutic is always causal. Etiology can be causal or non‑causal—you must choose your lane up front.

Mini‑Primers & Bedside Examples

1) Diagnosis (Predictive)

  • What it is: Estimate probability of current disease (e.g., appendicitis) from signs, symptoms, tests.

  • Design & metrics: Cross‑sectional with Se/Sp/LRs for accuracy; AUROC for prediction; avoid spectrum and verification bias. Use QUADAS‑2 for appraisal.

  • Example question: “In ED patients with RLQ pain, what is the probability of appendicitis given pain migration + rebound + WBC?”

  • Secret insight: Point of prediction matters—what data exist at the moment of decision governs which predictors you can use.
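The accuracy metrics above (Se, Sp, LR+/LR−) all fall out of a single 2×2 table. A minimal sketch, with made-up counts (not from a real appendicitis study) and a helper name of my own choosing:

```python
# Minimal sketch: diagnostic accuracy metrics from a 2x2 table.
# Counts are illustrative only; `diagnostic_metrics` is a hypothetical helper.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, LR+ and LR- from 2x2 counts."""
    se = tp / (tp + fn)          # sensitivity: P(test+ | disease present)
    sp = tn / (tn + fp)          # specificity: P(test- | disease absent)
    lr_pos = se / (1 - sp)       # LR+ = Se / (1 - Sp)
    lr_neg = (1 - se) / sp       # LR- = (1 - Se) / Sp
    return se, sp, lr_pos, lr_neg

se, sp, lrp, lrn = diagnostic_metrics(tp=80, fp=30, fn=20, tn=170)
print(f"Se={se:.2f} Sp={sp:.2f} LR+={lrp:.2f} LR-={lrn:.2f}")
```

An LR+ above ~5 or an LR− below ~0.2 is conventionally considered clinically useful, which is why the pair is worth reporting alongside Se/Sp.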

2) Prognosis (Predictive)

  • What it is: Forecast future outcomes in people with a defined condition (time‑zero).

  • Design & metrics: Inception cohort; model discrimination (AUROC) and calibration; follow PROGRESS typology—factor research vs model (CPM) research vs stratified medicine.

  • Example question: “Among STEMI patients before PCI, who will develop contrast‑induced AKI?”

  • Secret insight: Prognostic CPMs should output absolute risks (e.g., 30‑day mortality %), not only relative hazards.
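Discrimination and calibration can both be computed without any modeling library. A minimal sketch with fabricated outcomes and predicted risks (AUROC as the rank-comparison probability; calibration-in-the-large as observed-minus-predicted mean risk):

```python
# Minimal sketch: discrimination (AUROC) and calibration-in-the-large
# for a prognostic model's predicted risks. All data below are made up.

def auroc(y, p):
    """Probability that a random event case gets a higher predicted
    risk than a random non-event case (ties count as half)."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum(1.0 if e > ne else 0.5 if e == ne else 0.0
               for e in events for ne in nonevents)
    return wins / (len(events) * len(nonevents))

def calibration_in_the_large(y, p):
    """Mean observed outcome minus mean predicted risk (ideal: 0)."""
    return sum(y) / len(y) - sum(p) / len(p)

y = [1, 0, 1, 0, 0, 1, 0, 0]                     # observed outcomes
p = [0.8, 0.2, 0.3, 0.3, 0.1, 0.7, 0.4, 0.2]     # predicted risks
print(f"AUROC={auroc(y, p):.2f}  CITL={calibration_in_the_large(y, p):+.2f}")
```

A model can have a high AUROC yet systematically over- or under-predict risk, which is exactly why both numbers are demanded of a CPM.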

3) Therapeutic (Causal)

  • What it is: Estimate the effect of an intervention (intended effect).

  • Design & metrics: RCTs with proper sequence generation, allocation concealment, and blinding when feasible; analyze with ITT for policy relevance, supplement with PP/AT/CACE judiciously. Consider pragmatic features (PRECIS‑2) when generalizability matters.

  • Example question: “Does early invasive strategy vs conservative care reduce 30‑day mortality in NSTEMI?”

  • Secret insight: ITT + non‑inferiority can mask harm/inefficacy; use dual ITT & PP and justify margins.
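The causal effect measures named above (RR, RD) come straight from the two randomized arms, analyzed as randomized. A minimal sketch with illustrative counts, not results from any real NSTEMI trial:

```python
# Minimal sketch: intention-to-treat effect estimates from a two-arm
# trial. Counts are invented; `itt_effects` is a hypothetical helper.

def itt_effects(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk ratio and risk difference, analyzed as randomized (ITT)."""
    risk_tx = events_tx / n_tx           # absolute risk, intervention arm
    risk_ctrl = events_ctrl / n_ctrl     # absolute risk, control arm
    return risk_tx / risk_ctrl, risk_tx - risk_ctrl

rr, rd = itt_effects(events_tx=30, n_tx=500, events_ctrl=45, n_ctrl=500)
print(f"RR={rr:.2f}  RD={rd:.3f}  NNT={1 / abs(rd):.0f}")
```

Reporting the risk difference alongside the risk ratio keeps the absolute benefit (and its inverse, the NNT) visible, which a relative measure alone can exaggerate.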

4) Etiology (Pick the lane)

  • Causal (explanatory): “Does chronic PPI use cause C. difficile infection?” → DAG‑first, control confounding, choose cohort/case‑control carefully, preserve temporality.

  • Non‑causal (exploratory/predictive): “Which features are associated with postpartum hemorrhage?” → association mapping; may feed later into a CPM but does not claim cause.

  • Secret insight: Don’t mix logics. If your goal is causation, treat “confounders” as bias to control. If your goal is prediction, treat them as useful features.
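In the causal lane, "control confounding" can be as simple as stratifying on the confounder and pooling. A minimal sketch of a Mantel–Haenszel pooled risk ratio, with hypothetical strata (e.g., age groups) and counts:

```python
# Minimal sketch: stratified (Mantel-Haenszel) risk ratio to control
# one categorical confounder. Strata and counts are hypothetical.

def mh_risk_ratio(strata):
    """strata: list of (exposed_cases, exposed_n, unexposed_cases, unexposed_n).
    Returns the Mantel-Haenszel pooled risk ratio across strata."""
    num = den = 0.0
    for a, n1, c, n0 in strata:
        t = n1 + n0                      # stratum total
        num += a * n0 / t
        den += c * n1 / t
    return num / den

# Two strata of a hypothetical confounder (e.g., age < 65 vs >= 65)
strata = [(20, 100, 10, 100),
          (30, 200, 15, 300)]
print(f"MH RR = {mh_risk_ratio(strata):.2f}")
```

If the crude (unstratified) RR differs materially from the pooled stratified RR, that gap is the confounding you were trying to remove.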

Clinical Prediction Models (CPMs): The Predictive Workhorse

  • Definition: Multivariable tools that combine ≥2 predictors to give an individual’s absolute risk—diagnostic (now) or prognostic (future).

  • Development essentials: Justify need → define precise point of prediction → right design (cross‑sectional vs cohort) → sample‑size planning beyond “10 events per variable” → pre‑specify predictors → proper handling of continuous variables (no crude dichotomies) → address missingness (multiple imputation) → evaluate discrimination, calibration, and decision curve net benefit → external validation (temporal/geographic/domain).

  • Secret insight: If external validation drops performance, recalibrate before discarding—there’s signal worth saving.
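The decision-curve net benefit mentioned above has a compact definition: at threshold probability pt, NB = TP/N − (FP/N) × pt/(1 − pt). A minimal sketch with fabricated outcomes and risks:

```python
# Minimal sketch: decision-curve net benefit of treating everyone whose
# predicted risk meets a threshold pt. All data below are made up.

def net_benefit(y, p, pt):
    """NB = TP/N - (FP/N) * pt / (1 - pt) at risk threshold pt."""
    n = len(y)
    tp = sum(1 for yi, pi in zip(y, p) if pi >= pt and yi == 1)
    fp = sum(1 for yi, pi in zip(y, p) if pi >= pt and yi == 0)
    return tp / n - fp / n * pt / (1 - pt)

y = [1, 0, 1, 0, 0, 1, 0, 0]
p = [0.8, 0.2, 0.3, 0.3, 0.1, 0.7, 0.4, 0.2]
print(f"Net benefit at pt=0.25: {net_benefit(y, p, 0.25):.3f}")
```

Sweeping pt over the clinically plausible range, and comparing against "treat all" and "treat none", is what turns this single number into a decision curve.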

Quick Decision Tree (text version)

  1. Do you intend to change outcomes with an intervention? → Yes: Therapeutic (causal). Pick RCT or causal inference alternative.

  2. No intervention—are you predicting who has the condition now? → Yes: Diagnosis (predictive). Accuracy & predictive modeling.

  3. No intervention—are you predicting what will happen next? → Yes: Prognosis (predictive). Cohort; AUROC + calibration.

  4. Are you asking if X causes Y? → Yes: Etiologic (causal). DAG + confounding control. → No, just mapping associations: Etiologic (non‑causal/predictive).
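The four-step tree above is mechanical enough to write down as code. A minimal sketch (the function name and yes/no inputs are my own framing, not part of the DEPTh model itself):

```python
# Minimal sketch: the decision tree as a function. Each argument is the
# yes/no answer to one question, asked in order; the result is the lane.

def depth_lane(intervention, predicting_now, predicting_future, asking_cause):
    if intervention:                  # 1. intend to change outcomes?
        return "Therapeutic (causal)"
    if predicting_now:                # 2. who has the condition now?
        return "Diagnosis (predictive)"
    if predicting_future:             # 3. what will happen next?
        return "Prognosis (predictive)"
    if asking_cause:                  # 4. does X cause Y?
        return "Etiologic (causal)"
    return "Etiologic (non-causal/predictive)"

print(depth_lane(False, False, True, False))   # -> Prognosis (predictive)
```

The ordering matters: the intervention question is asked first because a therapeutic aim forces causal logic regardless of how the rest is phrased.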

Common Pitfalls (and how to dodge them)

  • Using odds ratios to imply cause in a purely predictive study. (Name your logic; don’t overclaim.)

  • Case‑control data for absolute risk modeling. (Great for etiologic signals; problematic for CPMs that aim for calibrated probabilities.)

  • Dichotomizing continuous predictors. (Loses information; prefer splines or transformations.)

  • Confusing apparent performance with test performance. (Always evaluate on unseen data; then externally validate.)

  • Ignoring timing. (The point of prediction determines which predictors are available and clinically actionable.)

Write Aims the Right Way (templates)

  • Diagnosis (predictive): “To develop and validate a model that estimates the probability of bacterial meningitis at ED triage using history, exam, and lab data.” 

  • Prognosis (predictive): “To predict 90‑day functional decline in COPD outpatients at clinic visit time‑zero.” 

  • Therapeutic (causal): “To estimate the effect of early steroids vs usual care on 28‑day mortality in septic shock.” 

  • Etiologic (causal): “To estimate the causal effect of shift‑work exposure on incident depression using DAG‑guided adjustment in a prospective cohort.” 

🔍 Secret Insight Sidebar

Don’t import causal habits into prediction. In CPM work, “confounders” aren’t enemies—they’re features that may boost predictive power. Save your DAG swords for questions that truly claim X → Y.

Key Takeaways

  • Diagnosis & Prognosis = predictive problems; judge by AUROC + calibration, not causality.

  • Therapeutic = causal by design; randomization (or its emulation) is your best friend.

  • Etiology can be causal or not—declare your lane early and align design/metrics accordingly.

  • CPMs deliver individual absolute risk; demand clear point of prediction, proper handling of data, and external validation before bedside use.


 
 
 
