How Firm a Foundation: Pandemic Science Tests the Limits of Our Trust

January 20, 2022 | By Raquel Sequeira TD ‘21.5

Image description: vibrant red Petri dishes and pills surround a syringe filled with the same red liquid, lying on a disposable mask.

This piece was written at the Veritas Forum 2020, an annual writing program offered by the Augustine Collective. Students from various universities work with writing coaches to write articles about virtue in the sciences or social sciences.


“I’ve never felt as dependent as I am today on shaky data to make what could be life or death decisions.” I was struck reading the words of Dr. Neel Shah, an obstetrician describing what it’s like to care for pregnant patients in the midst of the coronavirus pandemic. [1] As I watch the world through my internet browser, scientific facts seem to flip-flop like pundits. A graph of biotech stocks, responding to daily progress reports from the companies racing to produce a vaccine, might as well be tracking the sentiments of Facebook users as each new pandemic model urges hope or despair. [2] Staying at home feels like the natural response of resignation to uncertain data, and I often shut my laptop screen on the latest disease statistics in a huff of frustration masking powerlessness. But not everyone has the luxury of stewing in skepticism. “We have to be willing to update what we believe more rapidly—and yet there’s so much information that is hard to trust that it makes our jobs very difficult,” says Dr. Shah. For many in this crisis, shaky data and shifting beliefs must be the basis for action: advising pregnant patients, shutting down businesses, distributing ventilators. This is quite a sacred trust to place in science.

We trust science not because we think individual scientists are perfect, but because science as a social institution strives to be trustworthy. The micro- and macro-level methods of scientific research—experimental controls, statistical tests, replication, peer review—are designed to neutralize the biases and mistakes of individual researchers. Philosopher of science Helen Longino calls this emergent objectivity “the social structure of science.” There will always be human error and even corruption, but with many checks in the process from experiment to publication, doctors, engineers, and lawmakers should be able to rely on scientific findings with confidence, and the public should be able to trust in their impartiality.

The coronavirus pandemic is now exposing every weakness of the social structure of science. The urgent need for data about the disease, paired with the high potential impact of every new study, creates an incentive for both researchers and reporters to speed past checkpoints. [3] Scientists are offering up the results of small, rushed studies—often just observations—straight to media outlets hungry for a hyperbolic headline, and the chance for publicity increases the incentive for scientists to exaggerate the implications of their work. Clickbait interpretations of research have been around since long before COVID (exacerbated, no doubt, by social media), and doctors and scientists on the front lines know how to deal with them in normal circumstances. What makes these abnormal circumstances difficult is the publication of research before peer review. Many studies are being posted on preprint sites, where readers have to take the time to be reviewers themselves; and some journals are now publishing studies without indicating whether or not they have been peer reviewed. Even experienced medical professionals like Dr. Shah, who know how to tell good research from bad, are struggling to decide what to trust from this “pandemic paper tsunami”—there’s just not enough time to sort the wheat from the chaff. [4]

Bad incentives, fraudulent data, and misreported science all contribute to obscuring the truth. We expect scientific truth to be clear when everyone is on their best behavior, both scientists and those who report on their findings. But this expectation rests on a fundamental misunderstanding of science. There are limits to our confidence in the results of research publications, even when we’re not in the midst of a pandemic.

Our fundamental uncertainty about reality means that science will always rest on some unprovable premises: we have no way of knowing, for example, whether the laws that seem to hold the universe together today will continue to hold it together tomorrow. Scientists can even disagree about whether unobservable things like electrons and forces are real entities or merely useful fictions. Still, laws and bridges are built on the belief that science gives us knowledge that is refined toward objectivity through the social structure of science—and peer review is the cornerstone of that structure. Individual researchers bring specializations and biases, implicit and explicit, that shape their experiments, as well as expectations and beliefs that may influence how they interpret data. The outside eyes of peer reviewers should be able to point out unsound methods and unjustified conclusions. Even after publication, ongoing review allows editors to issue corrections for errors or swiftly retract fraud. This is how the social structure of science is supposed to work, anyway. In reality, although the “peer reviewed” label in a headline may read like the blue verification check on a Twitter account, it does not refer to any standardized process of scientific rigor. Meta-scientific analyses over the past decade—using science to study science—have revealed many shortcomings of the peer review process, and the challenge of untrustworthy data during the coronavirus pandemic is only the latest acceleration of this crisis of confidence in science.

The widely publicized “replication crisis” was a sucker punch to popular faith in the social structure of science. Large-scale replication projects of studies in psychology and medicine have failed to verify a significant proportion of published results. [5] Failing to reproduce results is not in itself a scientific failure; it’s how you know you were wrong, a crucial self-correcting step in any study. But replication failures become systemic fault lines when they come long after a publication has formed the basis for further research. For many non-scientists, the replication crisis has been a wake-up call to question peer-reviewed journal titles as stamps of scientific excellence. For many scientists, the crisis has sparked further investigation of how the social structure of science is falling short.

The mother of all meta-science is Retraction Watch, a blog whose founders compiled an enormous database of all the articles pulled from publication since 1997. [6] Along with newsworthy retractions and frequent offenders, the blog recently published an analysis of its huge database. Though they found that the rate of retractions has gone down, watchdogs warn that there are no unified retraction policies across science. Journal editors are left on their own to decide when to retract an article, when to correct it, and how to report the decision. And without “standardized nomenclature” to distinguish retractions due to honest error from those due to misconduct, the stigma associating retraction with fraud adds a temptation to cover up rather than admit error.

The lack of a standardized retraction protocol reflects a similar ambiguity in the foundation of the social structure of science—the peer review process. Consider, for example, how reviewers deal with conflict of interest disclosures. A financial conflict of interest means that whoever is funding the experiment has a stake in the outcome, and thus that the researcher has a financial stake in the outcome as well. In a recent study of 800 reviewers for the Annals of Emergency Medicine, the majority of reviewers reported believing that conflicts of interest affect the quality of research, but that they would be able to account for those biases. [7] Yet the study found that conflict of interest disclosures had no effect on the average ratings reviewers gave to the manuscripts they reviewed. It’s possible that conflicts of interest really had no effect on the quality of the studies, and that the reviewers were justified in their ratings. But reviewers should still have a way to flag studies with conflicts of interest, which increase the risk of biased conclusions. The fear that medical practice is unduly influenced by the industries with the most money doesn’t seem far-fetched when a third of the papers reviewed in this study reported a financial conflict of interest.

The authors of the conflict of interest study conclude with a call for more guidelines and standardization in the peer review process, including how to deal with conflict of interest disclosures, and that call is echoed by other meta-scientific studies. Reviews of research using animal subjects found a troubling lack of reporting on animal treatment and statistical methods—crucial information for critiquing and replicating studies. [8, 9] The authors suggest enforcing standardized guidelines across all journals for statistical analysis paragraphs. I’m left thinking: they don’t do this already?

Yet for all we can do to strengthen the social structure of science by enforcing uniform standards and eliminating conflicting incentives, science is still a human process. There will always be bad scientists and bad science. A shocking proportion of bad science is outright falsehood. The Retraction Watch analysis gives the following statistics on scientific misconduct:

About half of all retractions do appear to have involved fabrication, falsification, or plagiarism—behaviors that fall within the U.S. government’s definition of scientific misconduct. Behaviors widely understood within science to be dishonest and unethical, but which fall outside the U.S. misconduct definition, seem to account for another 10%. Those behaviors include forged authorship, fake peer reviews, and failure to obtain approval from institutional review boards for research on human subjects or animals.

The authors also note that retractions due to this latter category of shady behavior have increased, suggesting that the U.S. government should update its definition of scientific misconduct. It’s troubling to think that some scientists would need the letter of the law to deter them from submitting a fake peer review.

In the best-case scenario, a swift retraction prevents fraudulent science from tainting the water; but headlines over the past two decades have shown that rare, elaborate conspiracies of falsified data can have catastrophic impacts on science and society. [10, 11] And science is certainly not free from the inequity of society at large. (The notable underrepresentation of female authors and reviewers in a self-report by Nature is just one example.) [12] According to research ethicist C.K. Gunsalus, “every aspect of science, from the framing of a research question through to publication of the manuscript, is susceptible to influences that can counter good intentions.” [13] No matter how solid the structure of science, the trustworthiness of research ultimately depends on the integrity of individual scientists.

Philosopher of science Thomas Kuhn describes science as an oscillation between times of smooth sailing and times of crisis—between when the data fits neatly with your beliefs and when those beliefs are challenged. Now, we are in a meta-scientific crisis, as data about replication, retraction, and peer review undermines our belief in the objectivity of the social structure of science. Accelerated by the coronavirus pandemic, this crisis may well lead to revolution, as Kuhn predicts, with many scientists imagining a restructured peer review and publication process and institutional cultures of research integrity. [14, 15]

In the midst of a public health crisis, we of course have to rely on science more than ever—but not with blind faith. As scientists work to reform the structure of science and raise the standard of intellectual virtue, non-scientists should read science headlines with the same healthy skepticism they bring to all other news. Both must recognize that the badge of peer review is not a guarantee of truth, but reflects a process that is flawed—like any human process—and subject to constant correction. As a society, we need to rethink how we view scientific knowledge. Research gives us evidence, not the dogma that worried Facebook moms or defensive scientists might convey. We should put only as much confidence in the data as its context warrants. “All of us,” says Dr. Shah, “scientists, health care workers, pregnant people, and the public, need to be willing to rethink what we thought we believed last week.”

We can learn this kind of humility from the responses of scientists themselves. The scientists who are taking the time to run scores of replication attempts, study hundreds of reviewers, and analyze thousands of journal retractions are only a fraction of the many individuals striving to hold science to higher standards of excellence and virtue. Theirs are the loudest voices calling science to a reckoning, acknowledging that trust is something you have to earn. In the midst of a skeptical, couch-bound existence, these voices give me hope.


  1. https://www.statnews.com/2020/04/11/shaky-evidence-covid-19-confounds-evidence-based-medicine/ 

  2. https://www.marketwatch.com/story/why-virus-stocks-are-driving-market-volatility-2020-05-29 

  3. https://www.bmj.com/content/369/bmj.m2045

  4. https://science.sciencemag.org/content/368/6494/924

  5. https://osf.io/wx7ck/

  6. https://retractionwatch.com/

  7. https://www.bmj.com/content/367/bmj.l5896

  8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5967836/

  9. https://onlinelibrary.wiley.com/doi/full/10.1002/bies.201900189 

  10. https://www.bmj.com/content/346/bmj.f1738

  11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2831678/

  12. https://www.nature.com/articles/d41586-018-05465-7

  13. https://www.nature.com/articles/d41586-018-05145-6?platform=hootsuite

  14. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000273#sec006

  15. https://www.amjmed.com/article/S0002-9343(19)30759-4/abstract
