A substantial number of Republican voters are losing faith in science.
As of April 2020, 14 percent reported to Pew Research that they had little or no faith that scientists would “act in the best interest of the public.” By October 2023, that figure had risen to 38 percent.
Over the same period, the share of Democrats who voiced little or no confidence rose much less, and from a smaller baseline — to 13 percent from 9 percent.
“Empirical data do not support the conclusion of a crisis of public trust in science,” Naomi Oreskes and Erik M. Conway, historians of science at Harvard and Caltech, write in their 2022 article “From Anti-Government to Anti-Science: Why Conservatives Have Turned Against Science.” But the data “do support the conclusion of a crisis of conservative trust in science.”
A paper published by the Journal of the American Medical Association on July 31, “Trust in Physicians and Hospitals During the COVID-19 Pandemic in a 50-State Survey of U.S. Adults,” by doctors and health specialists at Harvard, Northeastern, Rutgers, the University of Rochester and the University of South Carolina, reports that “in every sociodemographic group in this survey study among 443,455 unique respondents aged 18 years or older residing in the US, trust in physicians and hospitals decreased substantially over the course of the pandemic, from 71.5 percent in April 2020 to 40.1 percent in January 2024.”
“During the COVID-19 pandemic,” the authors write,
medicine and public health more broadly became politicized, with the internet amplifying public figures and even physicians encouraging individuals not to trust the advice of public health experts and scientists. As such, the pandemic may have represented a turning point in trust, with a profession previously seen as trustworthy increasingly subject to doubt.
In “The Polarization and Politicization of Trust in Scientists,” a paper presented last week at the annual meeting of the American Political Science Association, James Druckman and Jonathan Schulman, of the University of Rochester and the University of Pennsylvania, write:
Consider that in 2000, 46 percent of Democrats and, almost equivalently, 47 percent of Republicans expressed a great deal of confidence in scientists. In 2022, these respective percentages were 53 percent and 28 percent. In twenty years, a partisan chasm in trust (a 25-percentage point gap) emerged.
Matthew Dallek, a political historian at George Washington University, wrote by email:
Distrust of science is arguably the greatest hindrance to societal action to stem numerous threats to the lives of Americans and people worldwide. Americans died because they had read or heard that mRNA vaccines were more dangerous than a bout of Covid.
Some people suffer from poor dental health in part because their parents distrusted fluoridation of drinking water. The national failure to invest until recently in combating climate change has raised the odds of pandemics, made diseases more rampant, destabilized entire regions, and spurred a growing crisis of migration and refugees that has helped popularize far-right nativism in many Western democracies.
Trump’s MAGA movement, Dallek argued,
turbocharged anti-science conspiracy theories and attitudes on the American right, vaulting them to an even more influential place in American politics. Bogus notions — vaccines may cause autism, hydroxychloroquine may cure Covid, climate change isn’t real — have become linchpins of MAGA-era conservatism.
The most recent precipitant of the widening split between Democrats and Republicans over the trustworthiness of science has been the partisan divide over how to deal with the Covid pandemic, especially over mandatory vaccination.
Between 2018 and 2021, the General Social Survey found that the gap between the percentages of Democrats and Republicans who said they had “a great deal of confidence in the scientific community” rose to 33 points (65-32) from 13 points (54-41).
Adrian Bardon, a professor of philosophy at Wake Forest and author of “The Truth About Denial: Bias and Self-Deception in Science, Politics and Religion,” described in an email the partisan shift in attitudes toward science that began in the early 1970s:
Up through the 1960s, the left had more of a reputation as the anti-science wing. The standard story explaining this is that the most salient science in public perception up through the 60s was what we now call ‘production science’ — the science of industrial production: plastics, cars and fossil fuels in general, military tech, nuclear power, industrial agriculture, etc.
The 60s saw the rise of ‘impact science’ — the science of the impacts of industrial production and consumption. Note that the famous 1964 World’s Fair exhibition “Tomorrowland,” depicting the industrial and technological utopia of the future, was featured just as Rachel Carson was writing about DDT and as air and water pollution were becoming more evident problems to casual observers.
The direction of the partisan response, Bardon wrote, is driven by “who the facts are favoring, and science currently favors bad news for the industrial status quo. People who identify with the status quo social and economic hierarchy (white males, economic conservatives, white evangelicals, and social conservative MAGAs) are going to feel more threatened by the bad news for status quo systems.”
The roots of the divergence, however, go back at least 50 years, to the creation of the Environmental Protection Agency and the Occupational Safety and Health Administration in 1970, along with the enactment that same year of the Clean Air Act and, two years later, of the Clean Water Act.
These pillars of the regulatory state were, and still are, deeply dependent on scientific research to set rules and guidelines. All would soon be seen as adversaries of the sections of the business community closely allied with the Republican Party, even though each of these agencies and laws was backed by a Republican president, Richard M. Nixon.
These agencies and laws fostered the emergence of what Gordon Gauchat, a professor of sociology at the University of Wisconsin at Milwaukee, calls “regulatory science.” This relatively new role thrust science into the center of political debates, with the result that federal agencies like the E.P.A. and O.S.H.A. “are considered adversarial to corporate interests. Regulatory science directly connects to policy-management and, therefore, has become entangled in policy debates that are unavoidably ideological.”
In their 2022 article, Oreskes and Conway write that conservative hostility to science
took strong hold during the Reagan administration, largely in response to scientific evidence of environmental crises that invited governmental response. Thus, science, particularly environmental and public health science, became the target of conservative anti-regulatory attitudes.
Oreskes and Conway argue that the strength of the anti-science movement was driven by the alliance in the Reagan years between corporate interests and the ascendant religious right, which became an arm of the Republican Party as it supported creationism (which attributes life to a supernatural creator) over evolution (which explains life through natural processes):
As the Republican Party has become identified with conservative religiosity, in particular evangelical Protestantism, religious and political skepticism of science have become mutually constitutive and self-reinforcing.
Meanwhile, individuals who are comfortable with secularism, and thus secular science, concentrate in the Democratic Party. The process of party-sorting along religious lines has helped turn an ideological divide over science into a partisan one.
Matt Motta, a professor of health law, policy and management at Boston University, argued in an email that partisan divisions in the electorate over the legitimacy of science result in large part from divisions among elites:
As partisan elites have staked out increasingly clear positions on issues related to climate change, vaccine hesitancy, and other science-related policy issues, the public has polarized in response.
People look to their political leaders to provide them with information (‘cues’ or ‘heuristics’) about how they ought to think about complex science-related issues. This creates a feedback cycle, whereby — once public opinion polarizes about science-related issues — political elites have an electoral incentive to appeal to that polarization, both in the anti-science rhetoric they espouse and in expressing opposition to evidence-based policies.
In a January 2023 paper, “Is Cancer Treatment Immune From Partisan Conflict? How Partisan Communication Motivates Opposition to Preventative Cancer Vaccination in the U.S.,” Motta explores differences in responses to a promising medical treatment — anticancer vaccinations — between Republicans and Democrats:
In a demographically representative survey of 1,959 U.S. adults, I tracked how intentions to receive preventative cancer vaccines (currently undergoing clinical trials) vary by partisan identity. I find that cancer vaccines are already politically polarizing, such that Republicans are less likely than Democrats to intend to vaccinate.
I conceptually replicate these findings in application to a second hypothetical vaccine for noncommunicable illness: experimental preventatives for Alzheimer’s disease. Critically, I find that when elite Democrats claim credit for funding cancer research, Republicans become even less likely to intend to vaccinate. Collectively, these results suggest that partisan asymmetries in vaccine uptake extend to developmental vaccines that could prevent life-threatening, noncommunicable disease.
Another key factor driving a wedge between the two parties over the trustworthiness of science is the striking partisan difference over risk tolerance and risk aversion.
In their 2023 paper, “Gender Differences in Preferences,” Rachel Croson and Uri Gneezy, economists at the University of Minnesota and the University of California-San Diego, reviewed a wide range of studies of the relationship between risk and gender.
Their conclusion: “We find, on average, that women are more risk averse than men.”
Similarly, Melissa Finucane, Paul Slovic, C.K. Mertz, James Flynn and Theresa Satterfield wrote in “Gender, Race and Perceived Risk: The ‘White Male’ Effect,”
Our survey revealed that men rate a wide range of hazards as lower in risk than do women. Our survey also revealed that whites rate risks lower than do nonwhites. Nonwhite females often gave the highest risk ratings. The group with the consistently lowest risk perceptions across a range of hazards was white males.
Furthermore, we found sizable differences between white males and other groups in sociopolitical attitudes. Compared with the rest of the sample, white males were more sympathetic with hierarchical, individualistic, and anti-egalitarian views, more trusting of technology managers, less trusting of government, and less sensitive to potential stigmatization of communities from hazards. These positions suggest greater confidence in experts and less confidence in public-dominated social processes.
In other words, white men — the dominant constituency of the Republican Party, in what the academic literature calls “the white male effect” — are relatively risk tolerant and thus more resistant (or less committed) to science-based efforts to reduce the likelihood of harm to people or the environment, while major Democratic constituencies are more risk-averse and more supportive of harm-reducing policies.
Insofar as people tend to accept scientific findings that align with their political beliefs and disregard those that contradict them, political views carry more weight than knowledge of science.
Dan M. Kahan, a Yale law professor, reported in his 2015 paper “Climate-Science Communication and the Measurement Problem” that comparing the answers to scientific questions given by religious and nonreligious respondents revealed significant insight into differing views of what is true and what is not.
When asked whether “electrons are smaller than atoms” and “what gas makes up most of the earth’s atmosphere, hydrogen, nitrogen, carbon dioxide or oxygen,” almost identical shares of religious and nonreligious men and women who scored high on measures of scientific knowledge gave correct answers to the questions.
However, when asked whether “human beings, as we know them today, developed from earlier species of animals, true or false,” religious respondents high in scientific literacy scored far below their nonreligious counterparts.
In other words, Kahan argues, the evolution question did not measure scientific knowledge but instead was a gauge of “something else: a form of cultural identity.”
Kahan then cites a survey that asked, “How much risk do you believe climate change poses to human health, safety or prosperity?” The survey demonstrated a striking correlation between political identity and the level of perceived risk: Strong Democrats saw severe risk potential; strong Republicans saw close to none.
Kahan suggests that the different responses offered by religious and nonreligious respondents to the evolution question were similar to the climate change responses in that they were determined by “cultural identity” — in this case, political identity.
Kahan continues:
Indeed, the inference can be made even stronger by substituting for, or fortifying political outlooks with, even more discerning cultural identity indicators, such as cultural worldviews and their interaction with demographic characteristics such as race and gender. In sum, whether people ‘believe in’ climate change, like whether they ‘believe in’ evolution, expresses who they are.
In their 2023 PNAS paper, “Prosocial Motives Underlie Scientific Censorship by Scientists,” Cory J. Clark, Steven Pinker, David Buss, Philip Tetlock, David Geary and 34 others make the case that the scientific community at times censors itself: “Our analysis suggests that scientific censorship is often driven by scientists, who are primarily motivated by self-protection, benevolence toward peer scholars, and prosocial concerns for the well-being of human social groups.”
The authors go on:
The fundamental principle of science is that evidence — not authority, tradition, rhetorical eloquence, or social prestige — should triumph. This commitment makes science a radical force in society: Challenging and disrupting sacred myths, cherished beliefs, and socially desirable narratives. Consequently, science exists in tension with other institutions, occasionally provoking hostility and censorship.
Clark and her coauthors argue that
Prosocial motives for censorship may explain four observations: 1) widespread public availability of scholarship coupled with expanding definitions of harm has coincided with growing academic censorship; 2) women, who are more harm-averse and more protective of the vulnerable than men, are more censorious; 3) although progressives are often less censorious than conservatives, egalitarian progressives are more censorious of information perceived to threaten historically marginalized groups; and 4) academics in the social sciences and humanities (disciplines especially relevant to humans and social policy) are more censorious and more censored than those in STEM.
In an email, Clark wrote,
We see that the perception that political values influence the work of academic disciplines is similarly related to reduced trust and increased skepticism.
The explicit politicization of academic institutions, including science journals, academic professional societies, universities, and university departments is likely one causal factor that explains reduced trust in science.
Dietram A. Scheufele, a professor of science communication at the University of Wisconsin, was sharply critical of what he calls the scientific community’s “self-inflicted wounds”:
One is the sometimes gratuitous tendency among scientists to mock groups in society whose values we see as misaligned with our own. This has included prominent climate scientists tweeting that no Republicans are safe to have in Congress and popularizers like Neil deGrasse Tyson trolling Christians on Twitter on Christmas Day.
Scheufele warned against
Democrats’ tendency to align science with other (probably very worthwhile) social causes, including the various yard signs that equate science with B.L.M., gender equality, immigration, etc. The tricky part is that most of these causes are seen as Democratic-leaning policy issues. Science is not that. It’s society’s best way of creating and curating knowledge, regardless of what that science will mean for politics, belief systems, or personal preferences.
For many on the left, Scheufele wrote,
Science has become a signaling device for liberals to distinguish themselves from what they see as “anti-science” Republicans. That spells trouble. Science relies on the public perception that it creates knowledge objectively and in a politically neutral way. The moment we lose that aspect of trust, we just become one of the many institutions, including Congress, that have suffered from rapidly eroding levels of public trust.
When Reagan quipped in 1986, “The nine most terrifying words in the English language are ‘I’m from the government and I’m here to help,’ ” he was signaling the escalation of the conservative antigovernment movement.
The Republican Party signed on and hasn’t let go. Over the following decades, that message has become ever more entrenched. Trump and his MAGA movement have been occupied since 2015 not only with spreading incessant lies but also with sowing a corrosive loss of faith, leaving advances in modern science as one of many casualties.