The following is a synopsis of the contributions and responses regarding travel demand model (TDM) uncertainty. Approaches noted by contributors, as well as potential considerations and criteria for measuring model uncertainty, are also reviewed. This synthesis reflects contributions to the e-mail list initially made in 2005, revisited in 2008, and briefly addressed in June 2009. It is augmented with related discussions on assessing the reasonableness of forecast-year model results, forecasting error, and predicted-versus-actual validation checks of travel patterns, topics raised in 2005, 2007, and 2009, respectively.
Model uncertainty exists at many levels, including model inputs, model application, model structure, and model results. Consequently, measuring, managing, and planning appropriately while acknowledging model uncertainty are specific challenges facing the modeling community. Indeed, addressing or clearly articulating the extent of uncertainty surrounding model results may initially appear incongruent given the amount of time and money invested in arriving at a forecast. As noted by one contributor, there really isn't any incentive to do so, and any analysis that includes rigorous testing of the model results may "provide advocacy groups (e.g. transit, highway, and bicycle) with ammunition to criticize a forecast". While acknowledging model uncertainty can be misconstrued as diminishing model credibility, many contributors felt, to the contrary, that incorporating uncertainty analysis may offer greater insight into potential outcomes and the interpretation of alternative model results.
Interpreting model results receives significant attention on numerous occasions in the e-mail list. As noted by several contributors, communicating model results with a single statistic (e.g. vehicle miles traveled, ridership) is unrealistic, since a discrete answer conveys more precision than is warranted. Given the range of sources of uncertainty that may influence outcomes, arriving at a "reasonable" answer may be largely a matter of chance, considering the extent of the uncertainties and assumptions that underlie a forecast's development.
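The point about false precision can be made concrete with a minimal sketch, not drawn from the synthesis: if even one uncertain input is propagated through a simple Monte Carlo simulation, the output is a range rather than a single number. The model function, the trip volume, and the growth-rate distribution below are all hypothetical stand-ins for a real TDM run.

```python
# Illustrative sketch only: propagating input uncertainty by Monte Carlo
# simulation yields a range of outcomes rather than one statistic.
# All numbers here are hypothetical, not taken from the synthesis.
import random

random.seed(1)

def toy_forecast(growth_rate: float, base_trips: float = 500_000) -> float:
    """Hypothetical stand-in for a model run: trips after 20 years."""
    return base_trips * (1.0 + growth_rate) ** 20

# Assume the growth-rate input is uncertain (mean 1.5%, s.d. 0.5%)
draws = sorted(toy_forecast(random.gauss(0.015, 0.005)) for _ in range(5000))
point = toy_forecast(0.015)
lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
print(f"point forecast: {point:,.0f}")
print(f"90% range:      {lo:,.0f} to {hi:,.0f}")
```

Reporting the 90% range alongside the point value communicates the same forecast while making the surrounding uncertainty visible.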
Based on contributions to the e-mail list, two significant sources of error contribute to uncertainty in the model results:
Other items noted as sources of error included:
Considerable emphasis was also given to the uncertainty associated with land use models.
Specific mention was made of the fact that most Metropolitan Planning Organizations (MPOs) will calibrate to a base year and leap forward to a single forecast scenario year without the insights obtained from interim-year forecasts. This may exacerbate the propagation of input error and uncertainty into the outer forecast year. As noted, even minor base year anomalies can have potentially significant consequences given the temporal distance between the base and forecast years. As one contributor stated, "we forecast something for which data is scarce and follows the ever changing laws of human behavior/preferences and changes in technology".
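The compounding effect of a minor base year anomaly can be sketched numerically. The 1% annual bias and the horizons below are hypothetical illustrations, not figures from the discussion:

```python
# Illustrative only: compound effect of a small annual input error over a
# forecast horizon. The 1% bias and horizons are hypothetical examples.

def compounded_error(annual_error: float, years: int) -> float:
    """Return the cumulative multiplicative deviation after `years`."""
    return (1.0 + annual_error) ** years - 1.0

# A 1% annual bias in, say, an assumed household growth rate
for horizon in (5, 10, 25):
    dev = compounded_error(0.01, horizon)
    print(f"{horizon:2d}-year horizon: {dev:.1%} cumulative deviation")
```

Over a 25-year horizon a 1% annual bias compounds to roughly a 28% deviation, which illustrates why temporal distance between base and forecast years magnifies small base year errors.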
Propagation of errors and uncertainty can also result from overzealous base year model validation efforts. Several contributors offered cogent warnings about manipulating variables or superimposing synthetic functions with no justification other than improving base year comparisons to counts, since these ramifications are carried forward in forecast applications. As noted by several contributors, a model may compare favorably to existing conditions yet not necessarily represent a good forecasting tool.
The most common approaches from the e-mail list contributions for identifying, testing and possibly vetting uncertainty in the model forecast are as follows:
Based on contributions to the e-mail list, most types of sensitivity tests (with the possible exception of toll projections) are rarely conducted because of time and budget constraints.
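Even where budgets are tight, a basic sensitivity test can be inexpensive: perturb one input by a fixed percentage and record the response of a key output. The model function, inputs, and ±10% range below are hypothetical stand-ins, not an actual TDM:

```python
# Minimal sensitivity-test sketch. The model function and the +/-10%
# perturbation are hypothetical stand-ins for a real travel model run.

def model_vmt(fuel_cost: float, population: float) -> float:
    """Toy stand-in for a travel model's VMT output (not a real model)."""
    return 25.0 * population * (2.0 / fuel_cost) ** 0.3

def sensitivity(model, base_kwargs, name, delta=0.10):
    """Percent change in output for a +/- delta change in one input."""
    base = model(**base_kwargs)
    hi = dict(base_kwargs, **{name: base_kwargs[name] * (1 + delta)})
    lo = dict(base_kwargs, **{name: base_kwargs[name] * (1 - delta)})
    return (model(**hi) - base) / base, (model(**lo) - base) / base

base = {"fuel_cost": 2.50, "population": 1_000_000}
up, down = sensitivity(model_vmt, base, "fuel_cost")
print(f"VMT response to fuel cost +10%: {up:+.1%}; -10%: {down:+.1%}")
```

Repeating this over each major input produces a simple table of output responses, which is often enough to flag inputs whose uncertainty dominates the forecast.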
As noted above, a number of potential factors can influence the predictive capabilities of the models. Specific recommendations for communicating model uncertainty tended to focus on the results rather than on issues associated with inputs and other variables. In addition to individual contributions, specific guidance and recommendations were given by two sources: the Federal Transit Administration (FTA) and the United Kingdom's Transport Analysis Guidance (TAG). These include:
In addition to the solutions noted above, it was recommended that corresponding documentation be developed that accurately conveys the consequences and implications of different scenarios and alternatives. Documenting the basis for decisions (e.g. the explicit assumption that a given scenario will occur) was another recommendation.
As noted in the supplemental information provided by TAG, uncertainty analysis should avoid introducing optimism bias when reviewing the plausibility of schemes or alternatives. This also applies to approaches that quantify the dependent and independent variables that may influence the schemes (e.g. the likelihood of a land use scenario or network alternative). As one contributor noted, "It's also not clear as to whether the original developers of a model can objectively present the test results – so the question comes up as to who should perform the tests". Moreover, as with any future alternative, there are variables beyond capture, such as random events or unforeseen national economic issues, which can unduly influence forecast results.
It appears from the contributions that introspective examination of the models rarely occurs and, when it does, is difficult to quantify. As one contributor noted regarding a specific modeling example, "I have not attempted to determine what is and what is not "reasonable", merely what converges". Uncertainty analysis recognizes that there is more than one probable outcome given different parameters, since the plausibility of achieving a certain outcome depends on events that may or may not occur. Yet communicating forecasts as reasonable, likely, or plausible is challenging in and of itself. The issue then becomes the definition of "reasonable". As one contributor added, is it something other than correct?
Defining the orders of magnitude of error or uncertainty associated with the results is another significant issue. As one contributor noted, "While it is fairly easy to define a series of sensitivity tests, it is not so simple to determine whether the test versus base difference are of the right magnitude – or, in some cases, it's not clear what the right DIRECTION for a change should really be". Specific guidance is absent in most of the contributions.
The UK's Transport Analysis Guidance (TAG) provided the following four classification schemes for future input values:
Each of the categories noted above is then used to establish the scenario plan against which to judge the core appraisal.
As more MPOs move to more complex and perceivably more robust platforms (e.g. tour- and activity-based), analyzing the results may be a legitimate consideration. As one contributor posed, "does a complex model limit uncertainty analysis"? A few contributors were skeptical as to whether a more complex model (e.g. activity-based) could even be measured for uncertainty, the perception being that complex models are more concerned with precision than accuracy. Other contributors disagreed with the premise, since "additional features should result in a more accurate model system".
Debate exists regarding concrete measures to quantify uncertainty and risk in the forecast. Relatively few contributions went beyond simplified approaches, despite a number of contributors endorsing the idea of documenting the variance of model results coupled with some level of uncertainty acknowledgement and analysis. As one contributor noted, "It appears there is enough concern and knowledge about model uncertainty, that its consideration could be much more rigorously addressed in our model process". Beyond toll and major transit investment studies, however, there have not been compelling reasons to invest in such activities to date (e.g. air quality determination essentially requires adherence to a single solution set at a fixed point in time). Thus, the challenge appears to be identifying an effective means of communicating model uncertainty without compromising or diminishing the value of the models. Without properly documented model assumptions and associated uncertainties, the results may be narrowly interpreted when in reality a number of dynamic influences could affect the likelihood of the forecast. Not acknowledging these uncertainties lends unwarranted veracity to conjectural forecast results.
The objective of the series is to provide technical syntheses of current discussion topics generating significant interest on the TMIP e-mail list. Each synthesis is drawn from e-mails posted to the TMIP e-mail list regarding a specific topic. The syntheses are intended to capture and organize worthwhile thoughts and discussions into one concise document. They do not represent the opinions of FHWA and do not constitute an endorsement, recommendation or specification by FHWA. These syntheses do not determine or advocate a policy decision/directive or make specific recommendations regarding future research initiatives. The syntheses are based solely on comments posted to the e-mail list.