Early on the morning of 11 September 2023, floodwaters that were up to 7 m deep surged down the normally dry riverbed that cuts through the city of Derna in eastern Libya [
1]. Two nearby dams, both built in the 1970s from clay, rock, and dirt to forestall just this scenario, had ruptured after heavy rainfall from a near-hurricane-strength storm (Fig. 1) [
2]. According to Libyan government statistics, more than 4500 people died after the deluge inundated much of the city [
3].
The Derna disaster is one of many examples of infrastructure that has failed in recent years after heavy precipitation. In 2019, a dam near Brumadinho, Brazil, that held back nearly 1.3 × 10⁷ m³ of mine tailings collapsed, killing 237 people [4]. In 2015, more than 50 dams in the US state of South Carolina burst after a storm dropped more than 41 cm of rain in only six hours [5]. Globally, much more infrastructure is at similarly increased risk as climate change stokes larger, more powerful storms [6].
Now, a key estimate that for more than 75 years has guided the design and engineering of dams, nuclear power plants, and other infrastructure in the United States and other countries is getting an upgrade for the climate change era. An expert panel convened by the US National Academies of Sciences, Engineering, and Medicine has recommended a new approach for determining probable maximum precipitation (PMP), an indicator of how much rain could fall over a given area in a given time [
7]. Spelled out in a June 2024 report, the panel’s recommendations include incorporating more accurate weather data, making greater use of climate models, and revising the definition of PMP to better reflect current understanding of maximum precipitation [
8]. A revamped PMP could enable construction of more resilient infrastructure. But PMP is also useful for evaluating the safety of existing dams and nuclear power plants, so a revised method for estimating it could also help protect these structures, according to the committee’s chair, James Smith, professor of engineering and applied science at Princeton University in Princeton, NJ, USA.
The upgrade is welcome and long overdue, say independent experts. “What they are proposing is the right thing to do,” said Tapio Schneider, professor of environmental science and engineering at the California Institute of Technology in Pasadena, CA, USA. “It is modernizing an archaic practice.”
For the United States, the current official definition of PMP is the largest amount of precipitation that a storm of a certain size could theoretically produce in a specific duration over a given area and at a particular time of year [
8]. PMP is important because it allows estimation of the maximum flood that certain kinds of infrastructure must withstand [
8]. In the United States, PMP is the design and safety standard for structures whose failure would be disastrous, including more than 50 nuclear power plants and about 16 000 so-called high-hazard dams [
8], [
9]. However, a different standard, precipitation frequency, applies to other types of infrastructure, including bridges, drainage systems, levees, and some dams [
8]. Other countries also rely on PMP to design and maintain their infrastructure.
The current methods for estimating PMP rest on four pillars [
8]. The first is precipitation data from storm catalogs that record past weather events. Some of the measurements come from official rain gauges, but some come from unorthodox devices. The record precipitation total for under 5 hours—780 mm that fell in 4.5 hours near Smethport, PA, USA, in 1942—was measured in a mason jar [
8]. Another record accumulation, the 580 mm that fell in 2.75 hours near D’Hanis, TX, USA, in 1935, was measured in a horse trough [
8]. Although these so-called bucket surveys may seem crude, they are valuable because they go back farther in time than traditional rain gauges, said Paul Roebber, professor of atmospheric sciences at the University of Wisconsin–Milwaukee in Milwaukee, WI, USA, who had no connection to the report. “What you lose in accuracy you gain in length of the record.”
Another pillar of PMP estimation is the assumption that a storm in one area provides guidance about the behavior of a storm in another area, a concept called storm transposition that allows experts to use data from past weather to project precipitation from future events [
8], [
10]. Topography modifies rainfall, and a third PMP pillar is a set of procedures to correct for these effects [
8]. The final pillar is inherent in the name probable maximum precipitation—only so much rain can fall in a given time and place [
8].
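Conventional PMP studies combine these pillars in a deterministic calculation. A standard step is “moisture maximization”: scaling an observed storm’s rainfall depth by the ratio of the climatological maximum precipitable water to the precipitable water available during the storm. The sketch below illustrates that adjustment with hypothetical numbers; it is not a procedure taken from the report.

```python
# A minimal sketch of moisture maximization, the classic adjustment in
# conventional PMP studies. All numbers are hypothetical, for
# illustration only.

def moisture_maximized_depth(observed_depth_mm: float,
                             storm_pw_mm: float,
                             max_pw_mm: float) -> float:
    """Scale an observed storm depth by the ratio of maximum to
    storm-time precipitable water."""
    return observed_depth_mm * (max_pw_mm / storm_pw_mm)

# Hypothetical case: a storm dropped 400 mm with 50 mm of precipitable
# water, in a region whose climatological maximum is 70 mm.
print(moisture_maximized_depth(400.0, 50.0, 70.0))  # 560.0
```

Transposing such a maximized storm to a nearby watershed, with topographic corrections, is how the traditional method arrives at a site-specific PMP.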
All four pillars are shaky, the National Academies report noted. For instance, the precipitation records used to estimate PMP are inadequate: even the longest bucket-survey records go back only a few decades. Meanwhile, climate change means that storms that once would have occurred only once every 1000 years, or even less often, are becoming more common [8], [11], [12]. Current methods for estimating PMP do not reflect the increased likelihood of these storms.
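Simple exceedance arithmetic, shown below as an illustration rather than a calculation from the report, makes clear why shifting storm frequencies matter: even a “1000-year” storm has a non-trivial chance of striking during a structure’s design life, and that chance grows roughly in proportion to the storm’s annual probability.

```python
# Chance of at least one "1000-year" storm (annual exceedance
# probability p = 0.001) during a hypothetical 50-year design life,
# and the same chance if climate change were to triple p.
p = 1.0 / 1000.0
years = 50
risk_now = 1.0 - (1.0 - p) ** years
risk_tripled = 1.0 - (1.0 - 3.0 * p) ** years
print(f"{risk_now:.3f}")      # about 0.049
print(f"{risk_tripled:.3f}")  # about 0.139
```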
Moreover, the idea of maximum precipitation may be all wet. “It seems intuitive—there is only so much water on the planet,” said report coauthor Dan Cooley, professor of statistics at Colorado State University in Fort Collins, CO, USA. However, researchers have not been able to confirm an upper bound for precipitation, he said. Assuming that precipitation does max out was essential for developing and using PMP, said Smith. “It was the original sin and a brilliant engineering stroke” that allowed for very safe designs. As a result, although some dams have collapsed in the United States because of extreme weather (
Fig. 2), the country has largely avoided catastrophic dam failures, he said.
In addition, PMP methods do not account for climate change, since the information for estimating it comes from past weather. “Using decades-old data to talk about risk in the future is inherently problematic,” said Cooley.
The report’s recommendations for modernizing PMP start with revising its definition to reflect the uncertainty over whether precipitation tops out. The new definition sets PMP as the amount of precipitation for a specific time, area, and storm duration that is unlikely to be exceeded, given a particular climate [
8]. To meet that probabilistic definition, results from multiple climate models with a resolution of 1 km or less should ultimately be the basis for PMP estimates, the report argues [
8]. Researchers will also need to use a statistical approach called extreme value analysis [
12] to estimate the rainfall amounts that storms are extremely unlikely to overshoot [
8].
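The core of such an analysis can be sketched in a few lines. The example below is an illustration of the block-maxima form of extreme value analysis, not the report’s actual procedure; it fits a Generalized Extreme Value (GEV) distribution to synthetic annual rainfall maxima with hypothetical parameters and reads off a level expected to be exceeded with only a 1-in-1000 chance per year.

```python
# Illustrative extreme value analysis: fit a GEV distribution to annual
# maximum rainfall and estimate a 1000-year return level. The data are
# synthetic and the parameters hypothetical.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Hypothetical annual-maximum daily rainfall (mm) for 60 years, drawn
# from a GEV purely for demonstration.
annual_max = genextreme.rvs(c=-0.1, loc=80, scale=25, size=60,
                            random_state=rng)

# Fit the GEV to the block (annual) maxima by maximum likelihood.
shape, loc, scale = genextreme.fit(annual_max)

# Return level exceeded with probability 1/1000 in any given year.
level_1000yr = genextreme.isf(1.0 / 1000.0, shape, loc=loc, scale=scale)
print(f"Estimated 1000-year daily rainfall: {level_1000yr:.0f} mm")
```

In practice, the report envisions applying statistics like these to large ensembles of high-resolution climate model output rather than to a single short observational record.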
But completing those steps will likely take five to ten years. So, the report also spells out an interim solution to improve PMP estimates over the next five years. For example, one recommendation involves beefing up the storm catalog. Adding more accurate and more recent precipitation measurements from Next-Generation Weather Radar (NEXRAD) [
13], a network of 159 high-resolution S-band Doppler weather radars introduced in 1992 and operated by the US National Weather Service, would increase the reliability of PMP estimates [
8]. So would the inclusion of rainfall projections from simulations of past storms [
8]. However, the report does not propose tossing out the bucket surveys, Smith said. “You really need the old observations.”
The recommendation to shift to PMP estimates based on climate models is sound, said Schneider. “We need to estimate risk from extreme weather, and using models is the way we can do that.” He noted that in 2023, the US President’s Council of Advisors on Science and Technology drew similar conclusions about the importance of modeling for predicting future weather, particularly extreme events [
14]. The bottleneck in PMP reform is the availability of results from these simulations, said Smith. Researchers have developed models with the necessary resolution but have not used them extensively because they require so much supercomputer time. To produce new PMP estimates, researchers will have to run large numbers of simulations that cover a range of possible conditions [
8]. Artificial intelligence, which is improving short-term weather forecasting [
15], can boost these models’ performance [
8]. Still, said Smith, “it will be a while before you can do enough of them to estimate PMP.”
The catch is that two laws passed by the US Congress in 2021 and 2022 require the National Oceanic and Atmospheric Administration to issue new national PMP estimates by 2030 [
7]. Given the time limitation, the new values will have to be the interim estimates that are not wholly based on modeling, said Smith. But he thinks that it is possible to make the mandated deadline. “We can do it,” he said.