Why social science needs honesty about uncertainty

A fresh batch of studies has landed, and it’s not exactly comforting. Misryoum newsroom reported that as many as half of all results published in reputable journals in the social sciences can’t be replicated by independent analysis.

That’s part of a longer story, stretching across multiple fields but showing up most sharply where human behaviour is the subject. Misryoum editorial team stated that concerns have also been raised in biomedical research, where the pipeline is complicated by real-world constraints. The newest work comes from a seven-year project called Systematizing Confidence in Open Research and Evidence (Score). It has now published three studies examining 3,900 social science papers, and it found that newer papers, and those published in journals requiring extensive sharing of underlying data, were more likely to be reproduced.

On the surface, that sounds like a technical issue—something for methods people and journal editors to argue over. But it quickly gets political, because reproducibility isn’t just a lab checklist; it’s about trust. Language matters here. Reproducibility looks at whether results can be recreated from the same data and methods. Replication tests whether the finding holds for new data in different contexts. Science rarely delivers exactly identical outcomes, and teasing out why those differences happen is part of how knowledge accumulates.

Then, somewhere between the uncertainty and the conclusions, the argument can sour. Misryoum analysis indicates politicians have increasingly recast normal scientific uncertainty as if it were proof of failure. That’s why a White House executive order in May 2025 emphasised the “reproducibility crisis” in science: essentially a Trumpian call for doubt and inaction. It’s a strange twist, because the very caution science uses to stay honest is being invoked as a reason to stop acting.

One reason the problem feels so stubborn is that large-scale verification projects, like those undertaken by Score, are few and far between. Most academic researchers would rather spend their time on work that is more likely to advance their careers. Score reanalysed existing data and, in separate work, replicated studies from scratch across more than 100 papers. Around 49% failed to replicate the original result. Reanalysing data is relatively straightforward; carrying out an identical experiment is not. And it is especially hard to recreate experiments in social and medical research, where outcomes depend on complex human systems.

AI might help decide what to test, but it can’t erase the costs and time involved in duplicating a piece of research. Not every failed replication signals a crisis, though. Some findings don’t matter much; replication studies can themselves be flawed. Results that don’t consistently replicate should be weighed against a wider evidence base when guiding policy. Treating non-replication as disqualification confuses uncertainty with ignorance. There’s a risk of paralysing decision-making where judgment matters most.

Still, transparency helps: it makes outright fraud more difficult and allows errors to be identified. Major funders such as the UK Economic and Social Research Council already require data sharing, and the approach should be universal. Misryoum editorial desk noted that, for some, there is a comforting view that research “ultimately autocorrects”. But the long-term solution, shifting incentives so that existing results are routinely tested, would require restructuring research culture and funding. For now, it remains largely notional. And it is in that gap, between what science can do and what the system rewards, that the trust problem lives.

These studies should strengthen the case for change, serve as a warning, and maybe most importantly, underline something simple: social science is a powerful tool for understanding the world—and that trust will be built by acknowledging uncertainty, not repudiating it.
