
Therapeutic Research on Unintended Effects: A Framework for Studying Harm

  • Writer: Mayta
  • 2 days ago
  • 3 min read

Introduction: Why Researching Harms Matters

Therapeutic research is often lauded for its ability to demonstrate benefit, but equally critical is understanding the harms that interventions might cause. “Unintended effects” is a broad term capturing everything from mild side effects to severe adverse drug reactions (ADRs). These effects aren't peripheral—they can be decisive in treatment decisions, risk–benefit evaluations, and patient trust.

As the thalidomide tragedy taught the world, therapeutic enthusiasm must be tempered with rigorous risk vigilance.

1. Terminology and Tragedy: The Historical Imperative

Defining Unintended Effects

Terminologies vary, but all point toward outcomes not part of the intervention's therapeutic goal:

  • Side effects

  • Adverse events/effects

  • Harms, risks

  • ADRs and ADEs (adverse drug events)

The difference is often in severity or predictability, but all are clinically and ethically significant.

The Thalidomide Case (1950s–60s)

Once hailed as a safe sedative and antiemetic for pregnant women, thalidomide caused over 10,000 cases of phocomelia (severely malformed limbs). The catastrophe forced regulatory reform and global awareness that unintended effects must be scrutinized with as much rigor as benefits.

2. Causality in Harm Research: It’s Still Causal Inference

Despite being focused on risk, unintended-effect studies aim to answer the same question as benefit studies:

Did the intervention cause the observed outcome?

The Confounding Triangle

Unintended effects demand rigorous confounding control. In RCTs, randomization breaks the backdoor path. But in observational designs, confounding by indication creeps in: physicians prescribe (or avoid) a treatment based on a patient's risk, and that same risk can influence the outcome.

🔁 Example: COX-2 inhibitors vs. NSAIDs

  • Indication confounding: COX-2s are used in patients with higher GI bleed risk, possibly inflating observed bleeding rates.

  • Contraindication confounding: Physicians may avoid COX-2s in those with high CV risk, creating the illusion of lower CV events in the COX-2 group—even if the drug is harmful.
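The contraindication scenario can be made concrete with a small simulation: a hypothetical drug that truly adds cardiovascular risk still shows a crude risk below the comparator's when physicians steer high-risk patients away from it. All probabilities here are invented for illustration:

```python
import random

random.seed(42)

def simulate_patient():
    # Half of patients have high baseline cardiovascular (CV) risk.
    high_cv_risk = random.random() < 0.5
    # Contraindication confounding: physicians mostly avoid the
    # (hypothetical) COX-2 drug in high-CV-risk patients.
    p_cox2 = 0.10 if high_cv_risk else 0.60
    on_cox2 = random.random() < p_cox2
    # True effect: the drug ADDS 5 percentage points of CV-event risk,
    # but high baseline risk adds 20 points on its own.
    p_event = 0.05 + (0.20 if high_cv_risk else 0.0) + (0.05 if on_cox2 else 0.0)
    return on_cox2, random.random() < p_event

n = 100_000
events = {True: 0, False: 0}
counts = {True: 0, False: 0}
for _ in range(n):
    treated, event = simulate_patient()
    counts[treated] += 1
    events[treated] += event

risk_cox2 = events[True] / counts[True]
risk_comp = events[False] / counts[False]
print(f"Crude risk on COX-2:  {risk_cox2:.3f}")
print(f"Crude risk off COX-2: {risk_comp:.3f}")
# The crude ratio falls below 1 even though the drug is harmful.
print(f"Crude risk ratio:     {risk_cox2 / risk_comp:.2f}")
```

The treated group ends up enriched with low-risk patients, so the crude comparison flatters the drug: the "illusion of lower CV events" described above.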

3. Two Archetypes of Harm: Type A vs. Type B Effects

Type A (Augmented) Effects

  • Predictable, dose-dependent, and related to the drug's mechanism.

  • Examples include hypoglycemia from insulin or bleeding from anticoagulants.

🔍 Confounding trap: Risk factors for starting treatment (e.g., old age, prior stroke) often also elevate harm risk (e.g., bleeding)—leading to confounding by indication.

Type B (Bizarre) Effects

  • Unpredictable, not dose-dependent, and not linked to mechanism.

  • Examples: Anaphylaxis, angioedema, aplastic anemia.

Here, confounding is less of a threat because the patient traits that guide prescribing are not typically associated with the bizarre event (e.g., diabetes is not known to cause angioedema).

🔍 Secret Insight: Type B effects often reveal themselves after a drug is on the market—making post-marketing surveillance critical.


4. Designing Research on Unintended Effects

Object Design: The Occurrence Equation

Unintended Effect = f(Intervention | Confounding)

We aim to isolate the treatment as the cause, not age, comorbidities, or physician behavior.

Method Design: Key Elements

  • Study domain: Who might be prescribed the drug?

  • Study base: Define cohort eligibility, time frames, and follow-up.

  • Determinants: Drugs, surgeries, or devices being assessed.

  • Comparators: Often another treatment, since placebo may be unethical.

📊 Sampling, not census: Especially for rare or delayed-onset harms, a case–control design nested within the cohort (or a case–cohort design) is more efficient than following every patient.
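A quick sketch of that efficiency: nest a case–control sample inside a simulated cohort, keep every case, and sample a few controls per case. The odds ratio from the small sample approximates the exposure effect in the full cohort. Cohort size, exposure prevalence, and risks are all invented:

```python
import random

random.seed(7)

# Hypothetical full cohort: exposure roughly doubles the risk of a rare harm.
cohort = []
for _ in range(200_000):
    exposed = random.random() < 0.3
    p_harm = 0.002 * (2 if exposed else 1)   # rare outcome
    cohort.append((exposed, random.random() < p_harm))

cases = [p for p in cohort if p[1]]
non_cases = [p for p in cohort if not p[1]]

# Nested case-control: keep every case, sample 4 controls per case.
controls = random.sample(non_cases, 4 * len(cases))

def odds_ratio(case_rows, control_rows):
    a = sum(1 for e, _ in case_rows if e)        # exposed cases
    b = len(case_rows) - a                       # unexposed cases
    c = sum(1 for e, _ in control_rows if e)     # exposed controls
    d = len(control_rows) - c                    # unexposed controls
    return (a * d) / (b * c)

print(f"Analyzed {len(cases) + len(controls)} of {len(cohort)} records")
print(f"Odds ratio ≈ {odds_ratio(cases, controls):.2f}")  # near the true ratio of 2
```

Only a few thousand records need outcome adjudication and covariate collection instead of 200,000, which is the whole appeal when the harm is rare.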

Ethics: Why Randomizing for Harm Is Rare

  • Unethical to randomize for harm as a primary goal.

  • Placebo often inappropriate once effectiveness is proven.

  • Thus, most harm research is observational by necessity.

5. Analysis Design: Making Valid Inference Without Randomization

Three Comparability Pillars (adapted from trial logic):

  1. Observation: Use blinded adjudicators or objective outcomes (e.g., mortality).

  2. Extraneous effects: Compare within similar patient domains to minimize lifestyle/drug covariate noise.

  3. Natural history: Ensure treated vs. untreated groups are on comparable disease trajectories.

🔍 Secret Insight: Comparing treatment groups drawn from different clinical contexts destroys this comparability—introducing hidden biases that can't be adjusted away.

Handling Confounding

Type A effects → Require aggressive confounding control:

  • Design-based: Matching, restriction, stratification

  • Analysis-based: Propensity scoring, regression, instrumental variables
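As a minimal sketch of the stratification idea: stratify on one measured confounder (age) and pool the stratum-specific risk differences, weighting by stratum size. The cohort and every probability below are hypothetical; real analyses would use propensity scores or regression over many covariates:

```python
import random

random.seed(1)

# Hypothetical cohort: old age both triggers anticoagulant use (indication)
# and raises bleeding risk on its own -> confounding by indication.
records = []
for _ in range(100_000):
    old = random.random() < 0.4
    treated = random.random() < (0.7 if old else 0.2)
    # True treatment effect: +3 percentage points of bleeding risk.
    p_bleed = 0.02 + (0.06 if old else 0.0) + (0.03 if treated else 0.0)
    records.append((old, treated, random.random() < p_bleed))

def risk(rows, treated):
    sel = [bleed for _, t, bleed in rows if t == treated]
    return sum(sel) / len(sel)

# Crude comparison mixes age groups -> exaggerated risk difference.
crude_rd = risk(records, True) - risk(records, False)

# Stratify on the confounder, then pool with stratum-size weights.
strata = [[r for r in records if r[0] == old] for old in (False, True)]
adj_rd = sum(
    (risk(s, True) - risk(s, False)) * len(s) / len(records) for s in strata
)

print(f"Crude risk difference:    {crude_rd:.3f}")   # inflated by age
print(f"Adjusted risk difference: {adj_rd:.3f}")     # close to the true 0.03
```

The crude estimate roughly doubles the real effect because treated patients are disproportionately old; within each age stratum the comparison is fair again.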

Type B effects → Confounding typically negligible, but still verify no unmeasured bias.

6. Interpreting Harm Metrics

  • Absolute Risk Increase (ARI): Excess cases per 1000 exposed.

  • Relative Risk Increase (RRI): Multiplicative increase over baseline.

  • Number Needed to Harm (NNH): Inverse of the ARI; the number of patients treated for one additional harm to occur.

Always report these alongside confidence intervals and contextualize against benefit.
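All three metrics are simple arithmetic on the two event rates. A sketch with invented numbers (30/1000 events on treatment vs. 20/1000 on the comparator), using a Wald interval for the ARI:

```python
import math

def harm_metrics(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """ARI, RRI, and NNH, with a Wald confidence interval for the ARI."""
    r_tx, r_ctl = events_tx / n_tx, events_ctl / n_ctl
    ari = r_tx - r_ctl                        # absolute risk increase
    rri = (r_tx - r_ctl) / r_ctl              # relative risk increase
    se = math.sqrt(r_tx * (1 - r_tx) / n_tx + r_ctl * (1 - r_ctl) / n_ctl)
    ci = (ari - z * se, ari + z * se)
    nnh = 1 / ari                             # number needed to harm
    return ari, rri, nnh, ci

# Illustrative numbers, not from any real study:
ari, rri, nnh, ci = harm_metrics(30, 1000, 20, 1000)
print(f"ARI: {ari:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")  # 0.010
print(f"RRI: {rri:.0%}")                                      # 50%
print(f"NNH: {nnh:.0f}")                                      # 100
```

Note how the same data yield a dramatic-sounding RRI (50%) but a modest ARI (10 extra events per 1000), which is exactly why both must be reported together.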


Key Takeaways

  • Unintended-effect research is still causal research, not just descriptive.

  • Type A effects confound easily due to shared causal pathways.

  • Type B effects are rare, unpredictable, and usually confound-free.

  • Observational designs are the pragmatic choice for most harm studies.

  • Apply trial-like rigor to observational designs via comparability principles.

  • Interpret harm in the context of the overall risk-benefit balance.
