New open-access chapter tackles how public health decisions work with incomplete cause-and-effect knowledge

The Misryoum newsroom reports that a new open-access chapter is now available from the 2024 Routledge Handbook of Causality and Causal Methods, edited by Federica Russo and Phyllis Illari.
The chapter, titled ‘When Decisions Must Be Based on Partial Causal Knowledge,’ is authored by Fredrik Andersen, Rani Lill Anjum and Elena Rocca. It’s framed around a situation public health teams face all the time, even if people don’t always say it out loud: sometimes you have to decide before you can fully prove the cause-and-effect story.
The authors start with a straightforward problem: public health decisions must often be made even when the available knowledge about causes and effects is incomplete. They then lay out four scenarios for what "incomplete" can mean in practice. First, the evidential gap might be small—there is a reasonable amount of causal evidence, and it points toward the same conclusion.
Second, the chapter considers cases where only limited, isolated causal evidence is available for public scrutiny, and yet decisions might still be necessary despite a big evidential gap. Third, causal knowledge may be incomplete because, within a large body of evidence, different types of evidence point to diverging causal conclusions. And fourth—especially relevant during outbreaks—health emergencies can create incompleteness when producing the necessary evidence through established standards and procedures simply isn’t possible.
For each scenario, the Misryoum editorial team notes, the discussion moves beyond evidence alone. The chapter looks at how empirical evidence interacts with philosophical "basic implicit assumptions in science" about causality—what it calls philosophical bias—and, on top of that, ethical considerations. The idea isn't that ethics and philosophy are an afterthought. Instead, they sit alongside data, influencing how people interpret what the evidence can (and cannot) justify.
There's a notable emphasis in the last two scenarios. According to Misryoum's analysis, when information is fragmented, or when evidence can't be produced quickly enough through normal procedures, identifying and discussing those underlying biases and ethical commitments becomes especially important. The chapter argues that this isn't just academic hair-splitting; it affects how quickly decisions are made and how confident—maybe even how careful—officials can afford to be.
The chapter ends with a more forward-looking recommendation: cultivating conceptual diversity about causality among scientists and decision makers. Misryoum newsroom reported this is presented as a way to build resilience for health emergencies—essentially, preparing decision systems to handle uncertainty without collapsing into one narrow way of interpreting causation. And you can almost feel the real-world stakes here: in the kind of moment where a lab report is still pending and someone has to decide what to tell the public. It’s not always neat, not always complete. But it can still be thoughtful, if the assumptions and ethics are made visible—maybe even discussed out loud, before the pressure hits.