How to Read and Appraise a Health Economic Evaluation Using the Drummond Checklist
- Mayta
- Jun 4
Introduction
As healthcare decision-making increasingly relies on economic evidence, the ability to critically appraise economic evaluation studies becomes essential for clinicians, policymakers, and researchers alike. Cost-effectiveness claims, while compelling, are only as robust as the methods behind them. To systematically assess the reliability and relevance of such studies, one of the most widely adopted tools is the Drummond Checklist—a ten-point framework designed to probe the integrity of cost-effectiveness analyses (CEAs), cost-utility analyses (CUAs), and other economic models.
This article introduces each of the ten domains of the checklist and explains how to apply them in real-world critical appraisal.
I. Is the Research Question Well Defined?
A robust economic evaluation begins with a question that is clear, structured, and testable. The question should:
- Include both costs and health outcomes (e.g., cost per QALY gained).
- Involve a comparison between alternative strategies.
- Specify the perspective of the analysis (e.g., societal, payer, or healthcare system) and frame it within a relevant policy or clinical context.
Example: An economic study evaluating home dialysis versus in-center dialysis should define whether it's viewed from the payer’s or patient’s perspective, as cost structures differ dramatically.
II. Are the Alternatives Clearly Described?
Decision-makers must understand what’s being compared. A good study:
- Fully describes who receives what, when, where, and how.
- Considers all relevant comparators, including standard care or even the option of doing nothing when appropriate.
Example: In an oncology trial, omitting best supportive care as a comparator would limit the real-world applicability of the findings.
III. Is Clinical Effectiveness Credibly Established?
Economic models rely heavily on clinical data. Valid studies should:
- Base effectiveness on randomized controlled trials (RCTs), ideally pragmatic trials that mirror real-world practice.
- Use systematic reviews where possible, with transparent search strategies and inclusion criteria.
- Acknowledge when observational data or assumptions are used, and critically address potential bias or confounding.
Example: Using real-world registry data for a rare disease may be justified if RCTs are unfeasible, but the study must account for selection bias.
IV. Are All Relevant Costs and Outcomes Included?
A valid economic evaluation captures all consequences and costs, such as:
- Direct medical: hospitalizations, drugs, outpatient visits.
- Indirect: productivity loss, caregiver burden.
- Intangible: pain or anxiety (sometimes estimated via utility scores).
The range must align with the chosen perspective. For example, societal perspectives demand a broader inclusion of indirect and non-healthcare costs.
V. Are Costs and Consequences Measured Accurately?
Quantification must be explicit and standardized:
- Units (e.g., hours of physiotherapy, days in ICU) must be clearly defined and justified.
- The source of utilization data (clinical trials, billing databases, surveys) must be disclosed.
- Special resource use (e.g., shared equipment) must be properly allocated.
Example: If a surgical tool is reused across patients, its cost should be apportioned correctly per use.
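The apportionment logic in that example can be sketched in a few lines. The figures below (a tool price, an expected number of uses, a per-use sterilization cost) are hypothetical, chosen only to illustrate spreading a capital cost over its expected uses:

```python
def cost_per_use(purchase_cost: float, expected_uses: int,
                 maintenance_per_use: float = 0.0) -> float:
    """Apportion a shared resource's capital cost over its expected uses,
    plus any recurring cost incurred each time it is used."""
    return purchase_cost / expected_uses + maintenance_per_use

# Hypothetical: a $12,000 surgical tool expected to last 300 procedures,
# with $5 of sterilization cost per use.
print(cost_per_use(12_000, 300, 5.0))  # 45.0 per procedure
```

Charging the full purchase price to any single patient's episode, rather than the per-use share, would badly distort the cost side of the evaluation.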
VI. Are Costs and Outcomes Valued Credibly?
Once measured, outcomes and costs need appropriate monetary or health utility values:
- Values must be justified and sourced, ideally from local or validated datasets.
- Market prices should be used for tangible resources. When unavailable, imputed estimates (e.g., shadow prices) are acceptable if transparently derived.
- Utilities (for CUA) should come from validated instruments like EQ-5D or SF-6D, reflecting the population of interest.
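Once utilities are obtained from such instruments, the outcome measure of a CUA follows by weighting time spent in each health state by its utility. The health-state values below are hypothetical, purely to show the arithmetic:

```python
def qalys(health_states):
    """Sum utility-weighted durations.

    health_states: iterable of (utility, years) pairs, where utility
    is on the 0 (death) to 1 (full health) scale.
    """
    return sum(utility * years for utility, years in health_states)

# Hypothetical patient pathway: 2 years at utility 0.8, then 3 years at 0.6.
print(qalys([(0.8, 2), (0.6, 3)]))  # ≈ 3.4 QALYs
```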
VII. Are Timing Adjustments Properly Applied?
Future costs and benefits are not equivalent to present values. Studies should:
- Apply discounting (usually around 3–5%) to future events.
- Justify the chosen discount rate based on guidelines or local practice.
This is particularly relevant for long-term interventions (e.g., childhood vaccination or chronic disease prevention) where most benefits accrue far in the future.
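The standard discounting formula, present value = future value / (1 + r)^t, makes the point concrete. A minimal sketch with an illustrative 3% rate:

```python
def present_value(amount: float, rate: float, year: int) -> float:
    """Discount a future cost or benefit back to present value."""
    return amount / (1 + rate) ** year

# A benefit valued at 1,000 that accrues 20 years from now,
# discounted at 3% per year:
pv = present_value(1000, 0.03, 20)
print(round(pv, 2))  # 553.68
```

At a 3% rate, a benefit 20 years out is worth only about 55% of its face value today, which is why discounting choices matter so much for vaccination and prevention programs.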
VIII. Is an Incremental Analysis Performed?
The core metric in economic evaluation is the incremental cost-effectiveness ratio (ICER). A valid study must:
- Compare additional costs to additional effects when moving from one strategy to another.
- Avoid relying on simple average cost or benefit without considering marginal trade-offs.
Example: Comparing Drug A vs Drug B requires knowing how much extra cost Drug A incurs per extra unit of effect it delivers over Drug B.
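The ICER calculation itself is simple division of cost differences by effect differences. The drug costs and QALY figures below are hypothetical, chosen to show why incremental and average ratios diverge:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio:
    extra cost per extra unit of effect."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("No incremental effect; the ICER is undefined.")
    return delta_cost / delta_effect

# Hypothetical: Drug A costs 50,000 and yields 4.5 QALYs;
# Drug B costs 30,000 and yields 4.0 QALYs.
print(icer(50_000, 30_000, 4.5, 4.0))  # 40000.0 per QALY gained
```

Note that Drug A's *average* cost per QALY (50,000 / 4.5 ≈ 11,111) looks far more favorable than the incremental 40,000 per QALY gained over Drug B, which is exactly the trap the checklist warns against.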
IX. Is Uncertainty Appropriately Handled?
Since models rely on assumptions and estimates, handling uncertainty is essential:
- Statistical methods should be used for variability in patient-level data.
- Sensitivity analysis (deterministic or probabilistic) must be reported and justified.
- The robustness of conclusions should be discussed in light of parameter uncertainty.
Tools like tornado diagrams, Monte Carlo simulations, and cost-effectiveness acceptability curves help illustrate uncertainty in a policy-relevant way.
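The logic behind one point on a cost-effectiveness acceptability curve can be sketched as a small Monte Carlo simulation. All distributions and parameter values below are hypothetical and illustrative only, not taken from any real study:

```python
import random

def ceac_point(n: int, wtp: float, seed: int = 0) -> float:
    """Monte Carlo sketch: the share of simulations in which the new
    strategy is cost-effective at a willingness-to-pay (WTP) threshold."""
    rng = random.Random(seed)
    accept = 0
    for _ in range(n):
        # Hypothetical parameter distributions (normal, for illustration).
        delta_cost = rng.gauss(20_000, 5_000)   # incremental cost
        delta_effect = rng.gauss(0.5, 0.15)     # incremental QALYs
        # Net monetary benefit > 0 means cost-effective at this threshold.
        if wtp * delta_effect - delta_cost > 0:
            accept += 1
    return accept / n

# Probability of cost-effectiveness at a WTP of 50,000 per QALY:
prob = ceac_point(10_000, wtp=50_000)
```

Repeating this over a range of WTP values and plotting the acceptance probabilities traces out the full acceptability curve.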
X. Are Results Presented and Interpreted Responsibly?
Finally, the study’s presentation and discussion should address:
- Whether conclusions follow directly from the reported ICERs.
- How findings compare with existing literature.
- Generalizability to other settings, populations, or countries.
- Implementation barriers such as infrastructure needs, budget limits, or political feasibility.
Example: An intervention may be cost-effective in principle but fail in implementation if it requires MRI access in a region with none.
Conclusion
Critical appraisal of economic evaluation studies is not about nitpicking—it is about assessing whether a study’s conclusions can be trusted and used for real-world decision-making. The Drummond Checklist offers a rigorous yet accessible framework to evaluate both technical rigor and contextual relevance. When applied thoughtfully, it empowers clinicians, health economists, and policymakers to discern evidence that is not just published, but truly actionable.
Key Takeaways
- Economic evaluations must address both costs and health outcomes.
- Comparative clarity, valid clinical data, and transparent costing are non-negotiable.
- Incremental logic (not average comparisons) is central to policy relevance.
- Uncertainty analysis and implementation discussion are hallmarks of strong studies.
- The Drummond Checklist is a gold-standard tool for critical economic appraisal.