What the pandemic has taught us about science


Reposted from Dr. Judith Curry’s Climate Etc.

Posted on October 10, 2020 by curryja

The scientific method remains the best way to solve many problems, but bias, overconfidence and politics can sometimes lead scientists astray

It’s been a while since I have been so struck by an article that I felt moved to immediately write a blog post. Well, maybe because today is Saturday, one day after the landfall of Hurricane Delta, I actually have a half hour to do this.

Matt Ridley has published an article in the WSJ, “What the Pandemic Has Taught Us About Science,” that is highly relevant for climate change as well as for Covid-19.  It is excellent; I agree with and endorse every word of it.

The article is behind a paywall; Dan Hughes kindly sent me a copy of the text.  Here are extensive excerpts:

<begin quote>

The Covid-19 pandemic has stretched the bond between the public and the scientific profession as never before. Scientists have been revealed to be neither omniscient demigods whose opinions automatically outweigh all political disagreement, nor unscrupulous fraudsters pursuing a political agenda under a cloak of impartiality. Somewhere between the two lies the truth: Science is a flawed and all too human affair, but it can generate timeless truths, and reliable practical guidance, in a way that other approaches cannot.

In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is…it’s wrong.”

So when people started falling ill last winter with a respiratory illness, some scientists guessed that a novel coronavirus was responsible. The evidence proved them right. Some guessed it had come from an animal sold in the Wuhan wildlife market. The evidence proved them wrong. Some guessed vaccines could be developed that would prevent infection. The jury is still out.

Seeing science as a game of guess-and-test clarifies what has been happening these past months. Science is not about pronouncing with certainty on the known facts of the world; it is about exploring the unknown by testing guesses, some of which prove wrong.

Bad practice can corrupt all stages of the process. Some scientists fall so in love with their guesses that they fail to test them against evidence. They just compute the consequences and stop there. Mathematical models are elaborate, formal guesses, and there has been a disturbing tendency in recent years to describe their output with words like data, result or outcome. They are nothing of the sort.

An epidemiological model developed last March at Imperial College London was treated by politicians as hard evidence that without lockdowns, the pandemic could kill 2.2 million Americans, 510,000 Britons and 96,000 Swedes. The Swedes tested the model against the real world and found it wanting: They decided to forgo a lockdown, and fewer than 6,000 have died there.

In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories.

A second mistake is to gather flawed data. On May 22, the respected medical journals the Lancet and the New England Journal of Medicine published a study based on the medical records of 96,000 patients from 671 hospitals around the world that appeared to disprove the guess that the drug hydroxychloroquine could cure Covid-19. The study caused the World Health Organization to halt trials of the drug.

It then emerged, however, that the database came from Surgisphere, a small company with little track record, few employees and no independent scientific board. When challenged, Surgisphere failed to produce the raw data. The papers were retracted with abject apologies from the journals. Nor has hydroxychloroquine since been proven to work. Uncertainty about it persists.

A third problem is that data can be trustworthy but inadequate. Evidence-based medicine teaches doctors to fully trust only science based on the gold standard of randomized controlled trials. But there have been no randomized controlled trials on the wearing of masks to prevent the spread of respiratory diseases (though one is now under way in Denmark). In the West, unlike in Asia, there were months of disagreement this year about the value of masks, culminating in the somewhat desperate argument of mask foes that people might behave too complacently when wearing them. The scientific consensus is that the evidence is good enough and the inconvenience small enough that we need not wait for absolute certainty before advising people to wear masks.

This is an inverted form of the so-called precautionary principle, which holds that uncertainty about possible hazards is a strong reason to limit or ban new technologies. But the principle cuts both ways. If a course of action is known to be safe and cheap and might help to prevent or cure diseases—like wearing a face mask or taking vitamin D supplements, in the case of Covid-19—then uncertainty is no excuse for not trying it.

A fourth mistake is to gather data that are compatible with your guess but to ignore data that contest it. This is known as confirmation bias. You should test the proposition that all swans are white by looking for black ones, not by finding more white ones. Yet scientists “believe” in their guesses, so they often accumulate evidence compatible with them but discount as aberrations evidence that would falsify them—saying, for example, that black swans in Australia don’t count.

Advocates of competing theories are apt to see the same data in different ways. Last January, Chinese scientists published a genome sequence known as RaTG13 from the virus most closely related to the one that causes Covid-19, isolated from a horseshoe bat in 2013. But there are questions surrounding the data. When the sequence was published, the researchers made no reference to the previous name given to the sample or to the outbreak of illness in 2012 that led to the investigation of the mine where the bat lived. It emerged only in July that the sample had been sequenced in 2017-2018 instead of post-Covid, as originally claimed.

These anomalies have led some scientists, including Dr. Li-Meng Yan, who recently left the University of Hong Kong School of Public Health and is a strong critic of the Chinese government, to claim that the bat virus genome sequence was fabricated to distract attention from the truth that the SARS-CoV-2 virus was actually manufactured from other viruses in a laboratory. These scientists continue to seek evidence, such as a lack of expected bacterial DNA in the supposedly fecal sample, that casts doubt on the official story.

By contrast, Dr. Kristian Andersen of Scripps Research in California has looked at the same confused announcements and stated that he does not “believe that any type of laboratory-based scenario is plausible.” Having checked the raw data, he has “no concerns about the overall quality of [the genome of] RaTG13.”

As this example illustrates, one of the hardest questions a science commentator faces is when to take a heretic seriously. It’s tempting for established scientists to use arguments from authority to dismiss reasonable challenges, but not every maverick is a new Galileo.

Peer review is supposed to be the device that guides us away from unreliable heretics. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.

Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.

The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto. Where science becomes political, as in climate change and Covid-19, this diversity of opinion is sometimes extinguished in the pursuit of a consensus to present to a politician or a press conference, and to deny the oxygen of publicity to cranks. This year has driven home as never before the message that there is no such thing as “the science”; there are different scientific views on how to suppress the virus.

Anthony Fauci, the chief scientific adviser in the U.S., was adamant in the spring that a lockdown was necessary and continues to defend the policy. His equivalent in Sweden, Anders Tegnell, by contrast, had insisted that his country would not impose a formal lockdown and would keep borders, schools, restaurants and fitness centers open while encouraging voluntary social distancing. At first, Dr. Tegnell’s experiment looked foolish as Sweden’s case load increased. Now, with cases low and the Swedish economy in much better health than other countries, he looks wise. Both are good scientists looking at similar evidence, but they came to different conclusions.

Prof. Stuart Ritchie argues that the way scientists are funded, published and promoted is corrupting: “Peer review is far from the guarantee of reliability it is cracked up to be, while the system of publication that’s supposed to be a crucial strength of science has become its Achilles heel.” He says that we have “ended up with a scientific system that doesn’t just overlook our human foibles but amplifies them.”

Organized science is indeed able to distill sufficient expertise out of debate in such a way as to solve practical problems. It does so imperfectly, and with wrong turns, but it still does so.

How should the public begin to make sense of the flurry of sometimes contradictory scientific views generated by the Covid-19 crisis? The only way to be absolutely sure that one scientific pronouncement is reliable and another is not is to examine the evidence yourself. Relying on the reputation of the scientist, or the reporter reporting it, is the way that many of us go, and is better than nothing, but it is not infallible. If in doubt, do your homework.

<end quote>


via Watts Up With That? https://ift.tt/1Viafi3
