There is always some degree of uncertainty inherent to any future projection. To interpret and apply future projections accurately for planning purposes, it is essential to quantify both the magnitude of the uncertainty and the reasons for its existence. Each of the steps involved in generating projections (future scenarios, global modeling, and downscaling) introduces a degree of uncertainty; how to address this uncertainty is the focus of this section.
It is a well-worn axiom that all models are wrong (but some are useful). The Earth's climate is a complex system, and it is only possible to simulate those processes that have been observed and documented; other feedbacks and forcing factors are clearly at work that have yet to be documented. Hence, there is a common tendency to assign most of the range in future projections to model, or scientific, uncertainty.
Future projections will always be limited by scientific understanding of the system being predicted. However, there are other important sources of uncertainty that must be considered, some of which can even outweigh model uncertainty for certain variables and time scales.
Sources of Uncertainty in Global and Regional Climate Change
Uncertainty in climate change at the global to regional scale arises primarily due to three different causes: (1) natural variability in the climate system, (2) scientific uncertainty in predicting the response of the Earth's climate system to human-induced change, and (3) socio-economic or scenario uncertainty in predicting future energy choices and hence emissions of heat-trapping gases (Hawkins & Sutton, 2009).
It is important to note that scenario uncertainty is very different, and entirely distinct, from scientific uncertainty in at least two important ways. First, while scientific uncertainty can be reduced through coordinated observational programs and improved physical modeling, scenario uncertainty arises from our fundamental inability to predict future changes in human behavior. Scenario uncertainty can only be reduced by the passing of time, as certain choices (such as depletion of a non-renewable resource or implementation of an emissions control policy) eliminate certain options or render them less likely. Second, scientific uncertainty is often characterized by a normal distribution, where the mean value is more likely than the outliers. Scenario uncertainty, however, hinges primarily on whether the primary emitters of heat-trapping gases, including traditionally large emitters such as the United States as well as nations with rapidly growing contributions such as India and China, will enact binding legislation to reduce their emissions. There is no reason per se to assume a mid-range scenario is the most likely: if these nations do enact legislation, the lower emission scenarios become more probable; if they do not, the higher scenarios become more probable. The longer such action is delayed, the less likely it becomes to achieve a lower, as compared to a mid-low, scenario because of the carbon dioxide that continues to accumulate in the atmosphere. Hence, scenario uncertainty cannot be treated as a normal distribution. Rather, the consequences of a lower vs. a higher emissions scenario must be considered independently, in order to isolate the role that human choices are likely to play in determining future impacts.
Figure 5.1. Percentage of uncertainty in future temperature projections one decade in the future (top row), four decades in the future (middle row) and nine decades in the future (bottom row) that can be attributed to natural variability (left column), model uncertainty (center column), and scenario uncertainty (right column). Source: Hawkins & Sutton, 2009.
Figure 5.1 illustrates how, over timescales of years to several decades, natural chaotic variability is the most important source of uncertainty. By mid-century, scientific or model uncertainty is the largest contributor to the range in projected temperature and precipitation change. By the end of the century, scenario uncertainty is most important for temperature projections, while model uncertainty continues as the dominant source of uncertainty in precipitation. This is consistent with the results of the projections for the Mobile Bay region discussed in this report, where there is a significant difference between the changes projected under higher vs. lower scenarios for temperature-based metrics, but little difference for precipitation-based metrics.
Dealing with Uncertainty
The first source of uncertainty can be addressed by always averaging, or otherwise sampling from the statistical distribution of, future projections over a climatological period, typically 20 to 30 years. In other words, the average winter temperature should be averaged over several decades, as should the coldest day of the year; no time stamp more precise than 20 to 30 years should ever be assigned to any future projection. In this report and accompanying data files, simulations are always averaged over four 30-year climatological time periods: historical (1980-2009), near-term (2010-2039), mid-century (2040-2069), and end-of-century (2070-2099).
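As a minimal sketch, the climatological averaging described above can be expressed as follows. The annual-mean temperature series here is synthetic (a hypothetical warming trend plus noise), used only to illustrate averaging over the report's four 30-year periods:

```python
import numpy as np

# Hypothetical annual-mean temperature series (°F), one value per year.
# The trend and noise are illustrative, not values from this report.
rng = np.random.default_rng(0)
years = np.arange(1980, 2100)
annual_temp = 60.0 + 0.03 * (years - 1980) + rng.normal(0.0, 1.0, years.size)

# The four 30-year climatological periods used in the report.
periods = {
    "historical":     (1980, 2009),
    "near-term":      (2010, 2039),
    "mid-century":    (2040, 2069),
    "end-of-century": (2070, 2099),
}

# Average within each period; no time stamp finer than the period itself
# is ever attached to a projected value.
climatology = {
    name: annual_temp[(years >= start) & (years <= end)].mean()
    for name, (start, end) in periods.items()
}
```

The same pattern applies to any derived metric (e.g., the coldest day of the year): compute it per year, then average over the full climatological period.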
The second source of uncertainty, model or scientific uncertainty, can be addressed by using multiple global climate models to simulate the response of the climate system to human-induced change (here, 10 models for the B1 and A2 scenarios, and 4 models for A1FI, as those were all that were available at the time of publication). As noted above, the climate models used here cover a range of climate sensitivity; they also cover an even wider range of precipitation projections, particularly at the local to regional scale.
Again, while no model is perfect, most models are useful. Only models that demonstrably fail to reproduce the basic features of large-scale climate dynamics (e.g., the Jet Stream or El Niño) should be eliminated from consideration, as multiple studies have convincingly demonstrated that the average of an ensemble of simulations from a range of climate models (even ones of varied ability) is generally closer to reality than the simulations from one individual model, even one deemed "good" when evaluated on its performance over a given region (e.g., Weigel et al., 2010; Knutti, 2010). Hence, wherever possible, impacts should be summarized in terms of the values resulting from multiple climate models, while uncertainty estimates can be derived from the range or variance in model projections. This is why most plots in this report show both multi-model mean values as well as a range of uncertainty around each value.
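The multi-model summary described above reduces to a simple computation. In this sketch the ten projected temperature changes are placeholder numbers, not the report's actual model output:

```python
import numpy as np

# Hypothetical projected temperature change (°F) for one 30-year period
# from a 10-model ensemble; values are illustrative only.
model_projections = np.array([3.1, 2.7, 3.5, 2.9, 4.0, 3.3, 2.5, 3.8, 3.0, 3.4])

ensemble_mean = model_projections.mean()           # multi-model mean (central estimate)
ensemble_range = (model_projections.min(),         # bounds of the uncertainty band
                  model_projections.max())
ensemble_std = model_projections.std(ddof=1)       # spread across models
```

The multi-model mean is the headline value plotted in this report, with the min-max range (or standard deviation) drawn as the uncertainty band around it.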
The third and final primary source of uncertainty in future projections can be addressed through generating climate projections for multiple futures: for example, a "higher emissions" future where the world continues to depend on fossil fuels as the primary energy source (SRES A1FI, A2), as compared to a "lower emissions" future focusing on sustainability and conservation (SRES B1).
Over the next 2 to 3 decades, projections can be averaged across scenarios as there is no significant difference between scenarios over that time frame due to the inertia of the climate system in responding to changes in heat-trapping gas levels in the atmosphere (Stott & Kettleborough, 2002). Past mid-century, however, projections should never be averaged across scenarios; rather, the difference in impacts resulting from a higher as compared to a lower scenario should always be clearly delineated. That is why, in this report, future projections for mid-century and beyond are always summarized in terms of what is expected for each scenario individually.
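The scenario-handling rule above can be sketched as follows. The scenario names match those used in this report, but the warming values are placeholders chosen only to show the logic:

```python
# Hypothetical projected warming (°F) under two scenarios for each period;
# the numbers are placeholders, not values from this report.
projections = {
    "B1":   {"near-term": 1.0, "mid-century": 2.1, "end-of-century": 3.0},
    "A1FI": {"near-term": 1.1, "mid-century": 3.4, "end-of-century": 6.5},
}

def summarize(period):
    """Average across scenarios only for the near term; keep them separate after."""
    values = {s: p[period] for s, p in projections.items()}
    if period == "near-term":
        # Scenarios have not yet diverged, so a cross-scenario mean is valid.
        return sum(values.values()) / len(values)
    # Past mid-century, report each scenario individually.
    return values

near = summarize("near-term")        # single averaged value
end = summarize("end-of-century")    # per-scenario dictionary
```
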
Uncertainty and Bias in Downscaling
Downscaling climate projections from global models to the scale of individual weather stations introduces a fourth source of uncertainty, that of the downscaling model used to relate large-scale weather patterns to local-scale variability. For a statistical downscaling model, this uncertainty in turn can be attributed to three distinct sources: (1) the degree to which the limited set of observations used to train the statistical method fails to capture the larger range in possible weather conditions at that location; (2) the inability of the statistical model to perfectly reproduce the relationship between large-scale weather and local conditions; and (3) limitations in the ability of the global climate model to simulate regional conditions.
The extent to which these three sources of uncertainty and error affect the accuracy of local-scale projections can be evaluated through a cross-validation process. Typically, a statistical downscaling model is trained on all available historical data in order to maximize the sample of naturally-occurring weather conditions. The trained model is then used to downscale future simulations using the relationship it has developed between large-scale climate and local weather conditions during the historical period.
During cross-validation, however, the statistical model is trained on all but one year of the historical observations (e.g., 1961-2009), and then used to downscale that single withheld year (1960). This produces one year's worth of simulated historical conditions that is entirely independent of the data used to train the model.
The model is then trained on the years 1960 and 1962-2009 (leaving out 1961) and used to downscale the single year 1961. There are now two years' worth of simulated historical values that are independent of the data used to train the model. This process can be repeated N times, where N is the number of years available in the observational record. The end result is a time series of daily simulated variables equal in length to, but independent of, the observed record used to train the downscaling method.
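The leave-one-year-out procedure above can be sketched with a toy statistical model. Here the "downscaling" is a simple linear regression on annual values, standing in for the report's actual daily method; the predictor and predictand series are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 50

# Hypothetical annual large-scale predictor and local observed values;
# a real downscaling model works on daily fields, this is a toy linear case.
large_scale = rng.normal(0.0, 1.0, n_years)
local_obs = 2.0 * large_scale + rng.normal(0.0, 0.5, n_years)

downscaled = np.empty(n_years)
for held_out in range(n_years):
    train = np.arange(n_years) != held_out           # all years but one
    slope, intercept = np.polyfit(large_scale[train], local_obs[train], 1)
    # Downscale only the held-out year, using a model that never saw it.
    downscaled[held_out] = slope * large_scale[held_out] + intercept

# 'downscaled' is equal in length to, but independent of, the observations
# used to train each fold, so it can be compared fairly against them.
```
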
The probability density distribution of this cross-validated, independent time series can be directly compared to observed maximum and minimum temperature and wet-day precipitation. This comparison is shown in Fig. 5.2. Black lines are observations, while red lines represent the various global models that have been downscaled to the Mobile Airport station.
This comparison shows that simulated maximum and minimum temperature tend to match observed values more closely than wet-day precipitation. It also shows how one or two of the 10 global climate models used in this analysis (individual models indicated by red curves) tend to be outliers, incapable of reproducing the distribution of local temperature or precipitation to the same degree as the majority of the models.
The cross-validated simulations can also be used to quantify the bias in various quantiles of the distribution of daily climate variables introduced by the downscaling, by essentially 'slicing' the distribution at the quantile of interest. Comparing the bias across various global climate models helps to illustrate the component of this error that is due to limitations in the global climate model that the downscaling method is unable to correct for. In other words, a good downscaling model can convert most global climate model simulations into something resembling observations; but its ability is naturally limited by the quality of the input fields from the global model.
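The quantile "slicing" described above amounts to evaluating both distributions at the same quantile levels and differencing them. In this sketch, the observed and cross-validated simulated series are synthetic, with a small bias deliberately built in:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observed and cross-validated simulated daily Tmax (°F);
# the simulated series carries a small synthetic bias for illustration.
observed = rng.normal(75.0, 12.0, 10000)
simulated = observed + rng.normal(0.3, 1.0, 10000)

# 'Slice' both distributions at the quantiles of interest and difference them.
quantile_levels = [0.001, 0.01, 0.10, 0.25, 0.50, 0.75, 0.90, 0.99, 0.999]
bias = {q: np.quantile(simulated, q) - np.quantile(observed, q)
        for q in quantile_levels}
```

Repeating this calculation per global model, as in Figure 5.3, separates the error the downscaling method shares across all models from the error attributable to each model's input fields.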
As illustrated in Figure 5.3, absolute biases towards the ends of the temperature distribution (the 0.1st and 99.9th quantiles) tend to be much greater than the biases for quantiles towards the center of the distribution. This reflects the fact that there is much less observational data available to train the model at the tails of the distribution than at the center. For temperature, biases at the ends of the distribution can be as great as ±1°F, whereas biases in the center tend to average around ±0.2°F. Biases also tend to be higher for Tmin as compared to Tmax.
For precipitation, which has an asymmetrical or gamma-like distribution, biases in high precipitation values are generally greater than biases in lower precipitation amounts. (In Fig. 5.2, the log of wet-day precipitation is plotted to better highlight the ability of the simulations to reproduce the observed distribution.) Biases also tend to be positive, between 20 and 30% for the 99th and 99.9th quantiles of the distribution, indicating that the simulations consistently overestimate values relative to observed. For lower precipitation quantiles, biases tend to be between 5 and 10% relative to observed precipitation amounts, except for biases in the 1st quantile, which are higher. The absolute values of these biases tend to be on the order of a tenth of an inch or less, suggesting that the spike in biases at the 1st quantile might plausibly be a symptom of the tendency of global models to simulate more "drizzle" than observed in the real world, and the inability of the downscaling approach to completely correct for that flaw. Comparing the full distribution of precipitation to that of temperature in Fig. 5.2 confirms that the statistical model has more difficulty in simulating precipitation than temperature, due at least in part to precipitation's much greater spatial and temporal variability.
For both temperature and precipitation, and for nearly every quantile value shown in Fig. 5.3, biases associated with an individual climate model can range from zero to the maximum value. This range illustrates the third uncertainty listed above, that of the differing abilities of the global models to reproduce the features of regional climate that affect conditions at each weather station.
Biases for all quantile values averaged across all climate models are non-zero. These values illustrate the second uncertainty listed above, the ability of the downscaling approach to accurately capture the relationship between large-scale climate and local conditions.
Finally, higher biases at the tails as compared to the center of the distribution illustrate both the first and second uncertainty, the first being the limited sample of historical data available to train the downscaling model, and the second being the ability of the statistical model to capture features of the distribution towards the tails.
This last conclusion, that biases tend to be larger at the tails of the distribution, can be shown more clearly by calculating the average bias for quantiles that are extreme (the 0.1st, 1st, 99th, and 99.9th quantiles) and comparing it to the average bias for quantiles closer to the center of the distribution (the 10th, 25th, 50th, 75th, and 90th quantiles), as shown in Figure 5.4.
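The tail-vs-center comparison above is a straightforward grouping of the per-quantile biases. The bias values in this sketch are illustrative placeholders, not numbers read from Figure 5.3 or 5.4:

```python
import numpy as np

# Hypothetical per-quantile biases (°F) from a cross-validation exercise;
# keys are quantile levels in percent, values are illustrative only.
quantile_bias = {0.1: 0.9, 1: 0.6, 10: 0.25, 25: 0.2, 50: 0.15,
                 75: 0.2, 90: 0.3, 99: 0.7, 99.9: 1.1}

extreme = [0.1, 1, 99, 99.9]          # tails of the distribution
central = [10, 25, 50, 75, 90]        # center of the distribution

avg_extreme_bias = np.mean([abs(quantile_bias[q]) for q in extreme])
avg_central_bias = np.mean([abs(quantile_bias[q]) for q in central])
# The tail average exceeds the central average, as in Figure 5.4.
```
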
From Fig. 5.4, it is clear that the highest biases are in precipitation, and the lowest in maximum temperature. Also, some models tend to have higher biases than others.
Does this information help to identify any global models that might provide more accurate simulations of climate change? This comparison does not readily identify any particular model or set of models as "best," although it does provide some basis for potentially removing one model (CNRM) that performs poorly for precipitation across the entire distribution. Rather, it provides insight into how well the various models perform when downscaled to maximum or minimum temperature or to precipitation, and at the center or tails of the distribution, and therefore how much confidence should be attached to simulated values.