The earth has been exceptionally warm of late, with every month from June 2023 until this past September breaking records. It has been considerably hotter than even climate scientists expected. Average temperatures during the past 12 months have been above the goal set by the Paris climate agreement: to keep global warming below 1.5 degrees Celsius over preindustrial levels.
We know human activities are largely responsible for the long-term temperature increases, as well as sea level rise, increases in extreme rainfall and other consequences of a rapidly changing climate. Yet the unusual jump in global temperatures starting in mid-2023 appears to be higher than our models predicted (even as they generally remain within the expected range).
While there have been many partial hypotheses — new low-sulfur fuel standards for marine shipping, an unusual volcanic eruption in 2022, lower Chinese aerosol emissions and El Niño perhaps behaving differently than in the recent past — we remain far from a consensus explanation even more than a year after we first noticed the anomalies. And that makes us uneasy.
Why is it taking so long for climate scientists to grapple with these questions? It turns out that we do not have systems in place to explore the significance of shorter-term phenomena in the climate in anything approaching real time. But we need them badly. It’s now time for government science agencies to provide more timely updates in response to the rapid changes in the climate.
Weather forecasts are generated regularly, rain or shine. Scientists who do near-real-time attribution for extreme weather can also react quickly to tease out the effect of global warming on any new event.
But climate science research is more used to working on approximately seven-year cycles to produce reports that summarize the evolving science about the long-term changes in climate. And the data that went into the latest round of climate model simulations are based on observations that only run through 2014, and so they don’t reflect recent changes such as newer pollution controls, volcanic eruptions or even the effects of Covid. Similarly, the forecasts are stuck with scenarios that were common in the early 2000s. Business (and everything else) has changed sharply since then.
As a result of all of this, a gap has opened up between what the general public and policymakers want and what is available.
To fix this, we need to create a better way for climate models to reflect new observations. That means more comprehensive and faster data gathering from satellites, in situ measurements and economic statistics, converted by analysts for the climate and weather models. This needs to be matched by a commitment by the roughly 30 labs around the world that maintain the models of the earth’s climate system to update their simulations each year to reflect the latest data.
Some of the information that goes into climate models currently takes years to produce. For instance, while data on greenhouse gas levels and energy from the sun are available within weeks of the observations being taken, emissions of industrial and agricultural air pollutants must be estimated from economic data, which can take years to collect and process.
Scientists should be able to provide “good enough” estimates of these inputs faster using reasonable assumptions. Just as economic analysts frequently update statistics after an initial announcement, such as a quarterly jobs report, scientists could provide data for industrial emissions of pollutants, the activity of the sun, the impacts of volcanoes and greenhouse gas levels on two or more tracks — an initial estimate using as much data as is available quickly, and a fully revised estimate later once more data is in.
We think that a goal of analyzing data in under six months is achievable if the data-gathering and climate-modeling labs prioritize it. This entails a small shift by U.S. agencies, such as the National Oceanic and Atmospheric Administration and the Department of Energy, and international agencies such as Copernicus, the European climate service provider, toward sustained funding instead of one-off research grants.
Other groups, such as weather forecasters, would also benefit from this new data stream. When producing seasonal or longer-term forecasts, they are not currently working from the most up-to-date information, and this data would help them improve those forecasts.
The public would benefit from more definitive knowledge on what is going on, too. Water-resource managers and urban planners could be more confident that they were using the most current scenarios and projections, helping them avoid underestimating or over-preparing for future change. If climate projections were better calibrated to recent changes, we could narrow the likely range of future impacts.
Some of the unease that people feel about climate change comes from a sense that things are out of our control — that the climate is changing faster than we can adapt. However, many of the most dire risks lie not in the most likely outcomes but in the worst-case possibilities, for example, the collapse of the West Antarctic ice sheet, or the drying up of the Amazon and other potential tipping points. But there is much we don't know about whether and when those tipping points might be crossed.
The good news is that climate science could easily become more agile in understanding the rapid changes we are seeing in the real world, incorporating them into our projections of the future and, hopefully, reducing that uncertainty.
The post We Study Climate Change. We Can’t Explain What We’re Seeing. appeared first on New York Times.