Office of Planning, Environment, & Realty (HEP)
Location: Oakland, California
Date: December 2-3, 2004
Exchange Host Agency: Metropolitan Transportation Commission, Planning Section

Peer Review Panelists:
- Professor Chandra Bhat, University of Texas
- Mr. Joe Castiglione, pbConsult, formerly with San Francisco County Transportation Authority
- Mr. Ken Cervenka, North Central Texas Council of Governments
- Mr. Bill Davidson, pbConsult
- Professor Kostas Goulias, University of California, Santa Barbara
- Professor Frank Koppelman, Northwestern University
- Mr. Keith Lawton, formerly of Portland Metro
- Mr. Ted Matley, FTA
- Ms. Maren Outwater, Cambridge Systematics, Inc.
- Ms. Mayela Sosa, FHWA
- Ms. Supin Yoder, FHWA
Prepared by: Planning Section
Metropolitan Transportation Commission
101 Eighth Street
Oakland, California 94607
The content of this peer review report does not represent the opinions of FHWA nor does it constitute an endorsement, recommendation or specification by FHWA. The content of the report does not determine or advocate a policy decision/directive or make specific recommendations regarding future research initiatives. The report is based solely on discussions and comments during the peer review.
The following report summarizes the results of a Peer Review Panel for the San Francisco Bay Area's metropolitan planning organization (MPO), the Metropolitan Transportation Commission (MTC). This Peer Review Panel was sponsored through the Travel Model Improvement Program (TMIP), a program managed by the Federal Highway Administration (FHWA) and Federal Transit Administration (FTA) and administered by the Volpe National Transportation Systems Center (VNTSC).
The Metropolitan Transportation Commission (MTC) Planning Section hosted the two-day Peer Review on December 2-3, 2004, at the offices of the Commission in Oakland, California. Academic and practitioner representatives from various federal agencies, MPOs, consultants and universities around the nation attended this meeting.
The primary focus of the Peer Review Panel was to review MTC's plans and desires for building the next generation of travel behavior model systems for the San Francisco Bay Area.
The Peer Review Panel discussed the following topics:
MTC intends to use the intelligence gathered in this two-day peer review as a platform for developing a Phase I model development scope-of-work, tentatively to be released in spring 2005.
These findings and recommendations are MTC staff descriptions as opposed to formal findings drafted and vetted by the peer review panel. A draft of this full report has been provided to the peer review panel for their review and comments. Comments were incorporated in the final version of this report.
The following is a summary of the findings and recommendations from the two-day MTC Travel Model Peer Review Panel.
MTC staff provided a background of the history of travel modeling in the San Francisco Bay Area, including some of the groundbreaking activities in developing and applying discrete choice models in the 1970s, and in-house efforts over the past twenty years.
The MTC staffing environment has undergone considerable change in the past few years, with the retirement of the computer programming staff responsible for writing model application code, and turnover such that only two of the five current MTC modelers/planners have on-the-job experience estimating discrete choice models.
MTC has $250,000 in the current fiscal year (2004/05) budget for consultant work on travel models. This compares to the previous decade, during which MTC had only $25,000 in consultant contracts, for a consultant-led in-house training program. (This excludes the extensive investments in household travel surveys conducted in the Bay Area in 1965, 1981, 1990, 1996 and 2000.)
MTC is inclined to go with a tour-based model system, with models estimated in-house by re-trained MTC staff. MTC's thought is that the most direct approach is to build on the San Francisco County Transportation Authority's (SFCTA) activity-based model system. This would mean re-estimating the SFCTA models using the 2000 Bay Area Travel Survey (BATS2000), while retaining the structure of the SFCTA model system.
MTC probably does not have sufficient resources to pursue parallel development tracks (e.g., trip-based and activity-based models developed at the same time).
The current set of MTC travel demand models is typical of advanced trip-based travel models. MTC staff estimated these models in the mid-1990s using data from the 1990 Bay Area household travel survey (BATS1990). Before that, MTC staff estimated models in the 1980s using the 1981 Bay Area household travel survey (BATS1981). Consultant teams developed the landmark, textbook examples of nested, disaggregate models used in an aggregate model system (MTCFCAST) in the 1970s, based on data from the original 1965 home interview survey (BATS1965).
The current trip-based models are a blend of disaggregate and aggregate demand models, all applied at an aggregate, zonal level with extensive market segmentation. Auto ownership models are nested logit choice in form, and include transit/highway accessibility variables. Trip generation models are either disaggregate household, worker or student trip production or aggregate zonal trip production/attraction in form, using hybrid cross-classification / multiple regression forms. Trip distribution models are standard gravity model formulations. (Previous generations of trip distribution models were logit destination choice models in application). Mode choice models are nested logit choice. Non-motorized trips (separate modes for bicycle and walk) are included in all mode choice models. Departure time choice for work trips is a binomial logit choice, whereas departure time choice for non-work trips is based on traditional trip peaking factors. Trip assignment procedures focus on daily traffic and transit trips, and AM peak period traffic volumes and speeds. Customized speed-flow delay curves are used in traffic assignment, including an Akçelik formulation for representing arterial speeds. The model system methodology incorporates full feedback from trip assignment back through auto ownership. Trip assignment outputs (district-to-district travel times and costs) are also used as input to the land use allocation model (POLIS) used by MTC's sister agency, the Association of Bay Area Governments (ABAG).
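The Akçelik arterial speed treatment mentioned above can be illustrated with a small sketch. The function below is a generic form of Akçelik's time-dependent travel time curve; the parameter values shown are illustrative placeholders, not MTC's calibrated values.

```python
import math

def akcelik_time(t0, x, capacity_vph, T=1.0, J=0.1):
    """Generic Akcelik time-dependent travel time (hours).

    t0: free-flow travel time on the link (hours)
    x: degree of saturation (volume/capacity)
    capacity_vph: link capacity (vehicles/hour)
    T: duration of the flow period (hours)
    J: delay parameter (facility-specific; 0.1 is a placeholder)
    """
    z = x - 1.0
    return t0 + 0.25 * T * (z + math.sqrt(z * z + 8.0 * J * x / (capacity_vph * T)))
```

Unlike a simple polynomial volume-delay curve, this form remains finite and well-behaved as demand approaches and exceeds capacity, which is one reason it is used to represent arterial speeds.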
Several counties within the Bay Area have adapted the MTC models for various planning studies. The Santa Clara Valley Transportation Authority (SCVTA), for example, added route choice sub-nests to the MTC mode choice models. The Alameda County Congestion Management Agency, as another example, has added toll / not-toll nests to the mode choice models for the purpose of analyzing high occupancy toll (HOT) lanes.
The current generation of MTC trip-based models includes non-motorized (bicycle and walk) travel. Non-motorized networks are derived from the regional highway networks by removing freeways and restricted-access facilities. The problem MTC faced was the use of a common set of non-motorized zone-to-zone distance skims for both bicycle and walk modes. This approach tends to be suitable for bicycle trips, but exaggerates the intra-zonal distances for walk travel.
The MTC modeling group has just completed major forecasting work for the update of the Regional Transportation Plan, and has the opportunity to begin some creative work in model redevelopment. MTC is inclined to transition toward tour-based models, but is considering four model development tracks:
Panelists encouraged an aggressive approach to model re-estimation, noting that the model expansion plans of many MPOs have often stalled at the second step (Track "b"), never making a fundamental paradigm shift. Several successful full paradigm shifts have now been recorded, and the risk of activity-based approaches has decreased substantially. It may not be appropriate to continue investing in trip-based methods when they return only marginal benefits. An MPO can always fall back on its existing trip-based models (Track "a") for future forecasting studies. Additionally, some panelists suggested that there is an opportunity to perform parallel testing between activity-based and traditional trip-based models.
MTC is looking to other MPOs that have implemented tour-based models for examples of how to proceed. Panelists discussed some early efforts in tour-based model development, including the work of Portland, Oregon's MPO, Metro. Metro faced difficulties and challenges in developing its first-generation activity-based model. Calibrating the models was difficult when estimating both destination and mode choice elements - getting the trip distribution right was difficult when the mode changed. Metro's ultimate desire is to fully transition to a tour-based model, and it is piloting its TRANSIMS project toward that end. Metro does, however, maintain a trip-based model for transit projects, such as light-rail forecasting work. TRANSIMS is tour-based with microsimulation at the back end. The pilot has worked well for destination choice tour-based models. It is important to note that non-work trips that are part of a work tour behave very differently in tour-based models than when there is no "anchor" in the tour. Metro plans to use the front end of TRANSIMS to model the demand side, and to do more with network microsimulation.
One panelist noted that, while developing the new model, MTC should be mindful not to over-specify model elements, a concern raised by Jim Ryan of the Federal Transit Administration (FTA). He identified important examples: the stratification of all constants by market segment, and whether it is appropriate for non-included behavioral factors (e.g., hours of service and station comfort) to vary by market segment. Ryan has indicated a preference for non-included factors to be represented in the utility expression rather than in the modal constants.
The travel modeling culture in the Bay Area is very diverse at the region, county, city and transit operator levels. Practice ranges from "three-step" (no mode choice) vehicle trip models in many cities and several of the smaller counties, to direct demand models (e.g., transit operators), advanced trip-based models (MTC and several of the larger urban counties), and first-generation activity-based models (SFCTA).
Panelists noted that it would be interesting to understand the willingness of Bay Area counties to transition to tour-based models. Differences between jurisdictions will pose a difficulty for comparability and regional data maintenance, including differences between zone systems and the need to maintain separate aggregations of socioeconomic data. Panelists noted that integration between county-level and regional forecasts will be streamlined if MTC takes the lead with counties working in cooperation. The greatest difficulty will be discrepancies between forecasts. Since the initial risk associated with adopting tour-based models has subsided, it might be appropriate to make a commitment to one model.
Some panelists asserted that, if the regional models were microsimulation-based, counties would no longer have to develop their own focused and detailed models. The need for a zone system would also be eliminated. In the current system, however, each county must expend considerable staff and consultant resources to add detail to the MTC regional zone systems, so a move towards microsimulation on MTC's part would provide the counties with a significant benefit. Point-to-point trip patterns could be readily aggregated for conducting traditional trip assignments, and customized for the specific needs of MTC, a county, a city, or a transit operator.
One panelist explained that in his microsimulation work his group aggregates trips to the zonal level to perform the network assignment. He believes that point-to-point assignment does not necessarily represent a significant improvement over zone-level assignment. He underscored a point made by Jim Ryan about the importance of re-evaluating network access details, as these can significantly bias forecast outputs. Re-evaluation of these details is long overdue, the panelist noted.
One panelist commented that point-to-point assignment is the direction of all three major modeling software packages (CUBE/Voyager, TransCAD, and EMME/2). He explained that TRANSIMS works without a centroid, with activity locations at the individual link level. Of special concern, point-to-point assignments will substantially increase the amount of routing that needs to be done.
There was considerable discussion about the political climate surrounding consolidating or restructuring the MTC/county modeling interface. MTC and the individual counties have different needs and objectives, and it is hard for MTC to support some specific county-level studies. MTC gets pressure from the state for projects with region-wide and even statewide importance (e.g., High-Speed Rail). When MTC moves to activity-based models, it may be in a position to lessen the work of counties through better research and production coordination and communication. It would be valuable to have a dialogue with county staff about this issue. Currently counties conduct many project-level and corridor studies, and therefore must do a sizable amount of independent model development (e.g., study-specific zone systems and networks). A better interface may enable the counties to consolidate the resources they spend on development. Often, project-level modeling relies on a project-funding stream for model development; a regional model change might save on these development costs.
It is also important to note that many of the counties don't have full-time modeling staff available to assist with a transition to tour-based models. Such a transition would likely require additional MTC staffing, a measure supported by the panel. Another approach might be the "rent-a-planner" approach, bringing people temporarily on staff for projects on an as-needed basis. One panelist explained that the Oregon Modeling Steering Committee (OMSC) has this kind of dynamic. There are formal agreements in place to move around staff from the state and MPO level, to share resources, and to trade work. People who specialize in a particular task or with a specific software application are thereby able to share their skills with other agencies.
Such coordination with the counties would require substantial discussion. Generally the larger, urban counties have larger budgets for travel forecasting studies, but these are derived from projects. Thus, it may be more difficult to direct general funds to this kind of multi-agency cooperation.
MTC's transportation models use the Association of Bay Area Governments (ABAG) socio-economic forecasts as inputs. ABAG, a regional council of governments established in 1961, is a sister agency to MTC, producing demographic and land use forecasting information for the nine-county San Francisco Bay Area. ABAG projections are disaggregated into the 1,405 census tracts that comprise the Bay Area, and their forecasts are further disaggregated (by MTC) to the 1,454 travel analysis zones used for travel forecasting.
Panelists discussed whether ABAG data could be forecast at a smaller level of aggregation. Concern was expressed about a considerable degree of "measurement error" at a very-fine-grained level of geography. Some counties and cities use data at the 3-to-4-block level of analysis. Research may be necessary to determine the errors at this level, in addition to other relevant small-area demographic forecasting issues. Provided this question is resolved, GIS tools can be used to assign census tract data to much smaller neighborhood-level zones for microsimulation purposes.
Some panelists asserted that errors in population need not be an issue in this process. Microsimulation can locate households so that control totals are met. It may be that small-area details aren't as important when applying disaggregate demographic forecasting information to travel demand models. Control totals for demographic forecasting, together with uniform zones and a simple demographic scheme, have demonstrated immediate improvements in travel demand model forecasting accuracy.
With MTC's emphasis on Smart Growth, it will be important to not assume uniform development within each TAZ. It may be that census tract data is used with some redistribution to account for Smart Growth. Related to this, simulating the number of households within varying distances of transit stops will also be important.
One panelist noted that the underlying land use at smaller aggregation levels might represent a greater source of error than population. Properly capturing employment at a very disaggregate, or parcel, level may also be difficult. Performing point-to-point traffic and transit assignments will require parcel-level land use data. There are examples of metropolitan areas that have worked at assembling parcel-level land use data, including Sacramento and Honolulu. The Bay Area is very large, and such a database could be very cumbersome to maintain. There may be some opportunity to work with the counties on developing and maintaining parcel-level data. In San Francisco, for example, the City is already considering development of parcel-based future-year scenarios.
ABAG has a fairly recent vintage and very detailed (500,000+ polygon) land use database to work from, though not at the assessor's parcel level of detail. This land use database may be a good starting point for MTC efforts for the next generation of models. In addition, MTC leases a major commercial street database (GDT, Inc.) that could be used for disaggregate network modeling uses.
The North Central Texas Council of Governments (NCTCOG) is working with GDT data and adding value to these databases. Fields for street and freeway directionality and grade separation do exist in the GDT data, and NCTCOG staff has added the number of lanes based on information from aerial photography. Adding this level of detail can be conducted in-house. One outcome might be a travel model network that consists of all streets and roads in the region. This would provide a greater level of detail for highway and transit assignments, but would also require completely recoding the existing transit network.
MTC finished this topic by observing that, while the trend in the industry appears to be moving away from zone-to-zone to point-to-point forecasting, tract-level control totals still seem appropriate and necessary.
The panel discussed the best way to handle the labor force and employment in counties bordering the Bay Area, and outside of MTC's jurisdiction. Labor force and employment forecasting data from adjacent counties is not as detailed as it is within the nine-county Bay Area. Better data of this type would help MTC produce more accurate forecasts and better simulate the impact of increased investment in interregional commuter rail systems.
MTC is currently undertaking a $1.4 million study to develop a statewide model for the California high-speed rail (HSR) system, for use in analyzing various Bay Area alignment options. This high-speed rail model system should be very beneficial in providing usable inter-regional transit and vehicle trip forecasts.
MTC is considering applying a hybrid approach to modeling the inter-county movements between the Bay Area and external counties. MTC's approach would combine the MTC model for intra-regional travel, and using statewide modeling efforts (e.g., the statewide HSR model update) to capture inter-regional travel.
Panelists noted that a hybrid approach to capture internal and external movement would have to be careful to address the employment gap between the Bay Area and surrounding counties. Currently there is more employment in the nine-county Bay Area than there are workers to serve it. Housing prices have also pushed people outside the region. There is a potential for discrepancy between the regional population and employment numbers.
MTC currently uses a Fratar method to forecast inter-regional commute travel. This is a challenge because of an inconsistency, at the statewide level, in the availability of population, labor force, and employment forecasts. Forecasting efforts and regional estimates are improving. However, neighboring counties don't often have the socio-economic variables that ABAG provides MTC. MTC would prefer to not have to purchase data for counties outside its jurisdiction, instead relying on census journey-to-work data and any information derived from, or input to the statewide model.
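The Fratar method referenced here is essentially iterative proportional scaling of a base trip table to new row (production) and column (attraction) totals. A minimal sketch, using hypothetical zone totals rather than MTC data:

```python
import numpy as np

def fratar(base, row_targets, col_targets, iters=50, tol=1e-6):
    """Grow a base trip matrix to new margin totals by alternately
    scaling rows and columns (Fratar/Furness balancing).
    Assumes row_targets.sum() == col_targets.sum()."""
    T = base.astype(float).copy()
    for _ in range(iters):
        T *= (row_targets / T.sum(axis=1))[:, None]  # match productions
        T *= col_targets / T.sum(axis=0)             # match attractions
        if np.allclose(T.sum(axis=1), row_targets, rtol=tol):
            break
    return T

# Hypothetical 2x2 base table grown to new future-year totals
base = np.array([[10.0, 20.0], [30.0, 40.0]])
grown = fratar(base, np.array([60.0, 140.0]), np.array([80.0, 120.0]))
```

The procedure preserves the base-year interchange pattern while forcing the table to match the new margins, which is exactly why inconsistent statewide population, labor force, and employment forecasts are problematic as inputs.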
Recent increases in computing power are supportive of the move to activity-based models and more disaggregate forecasting. One can vary the modeling approach to work within limits of computing power while still providing a desired level of detail. Computing constraints associated with activity-based models are associated less with microsimulation and more with the increased number of matrix manipulations generated in such models. Panelists noted, however, that matrices can be stored in memory to minimize querying time, and memory (RAM) is generally cheap and readily available. Overall, processing time is substantially reduced following this approach. It may be that memory limitations become an obstacle at some threshold of operation, but research by panelists hasn't demonstrated it to be a substantial obstacle.
Panelists discussed that some level of aggregation is important. People travel to an area, not necessarily to a specific point or parcel. It might be beneficial to think about the appropriate level of geography for aggregation. Additionally, it was noted that it might be desirable to use zones for trip distribution. With TRANSIMS, for example, there is a two-stage process: the first uses zones, and then a distance function is used within the zones for point-to-point distribution.
There is a major benefit in conducting model development in-house. While it becomes infeasible with more complex modeling to do everything in-house, it is important to have staff involved so that they are not just pushing buttons on black boxes. Staff has to lead the consultant direction, not the other way around. Ultimately staff has to take possession of the model system.
To the extent that the next generation's model estimation work can be performed in-house, it is MTC's goal to provide employees that opportunity. Panelists discussed the employee training that would be necessary under the different estimation approaches considered. Historically, MTC has focused on logit estimation, but is open to other techniques, likely dependent on the structure of models MTC chooses to develop. Panelists encouraged MTC to begin the process thinking about MTC's role in the region, and what service it would like to provide. The model should be designed around that, and the training ultimately designed around the proposed modeling structure.
Given the budgetary constraints, it will be important to not re-create work - including programming scripts - potentially wasting time and budget. It might be helpful to bootstrap MTC's modeling efforts to other current modeling efforts elsewhere in the U.S.
Panelists discussed options to train staff that included utilizing someone (likely a PhD candidate) studying in a nearby university, with the product of their work being MTC's new model development. MTC's model could be an outgrowth of such a student's dissertation work. This relationship could be mutually beneficial to MTC and academic research, providing a real test case for a university partnership. As beneficial as such an arrangement would be for MTC, it is also good for academics to ground their work in industry (government) projects.
The panel enumerated skills that will be valuable for staff to learn for this next generation of model development. One panelist expressed the need for improvement in time-of-day modeling, explaining that it is potentially a more important improvement than spatial improvements in modeling. A lot of emphasis, he explained, is put into disaggregating spatial areas into ever smaller zones, while the day is broken down into only two-to-three time periods. Examining the effects of travel demand management strategies on spreading traffic requires an improvement in time-of-day modeling. More thought will have to be given to whether MTC will model discrete time periods or continuous time.
One panelist noted that a core element of modeling background is a strong foundation in hardcore model estimation and econometrics. It is important that people on the project have the capability to look through model structures to find the right models for MTC's needs. Other necessary skills include advanced work in discrete choice modeling, simulation, and time modeling methods. A year's solid work in time modeling was recommended. A standard 5-day seminar will likely not be sufficient.
Another suggestion is that MTC could acquire a coach for model re-estimation. It would be valuable to hire someone with hands-on experience in state-of-the-art estimation techniques - someone who could support staff in doing the work. MTC supports the idea of having a coach, and feels that this approach may be contingent on budget availability and staff turnover. Metro has followed the coaching model for a while. They started with the ALOGIT (logit estimation) package and have progressed into a range of other methods, including simulation. It is important to have staff that has experience applying a host of methods. The skill set is large, and the modeling sophistication has changed substantially since the early days of logit estimation.
It is important to be able to apply judgment and have a behavioral understanding of models when applying or estimating them. Early models were simple, with few parameters to adjust. One panelist stressed the importance of thoroughly understanding model underpinnings and not relying on "canned" estimation software programs (ALOGIT, LIMDEP, etc.). People often use these software packages inappropriately and such packages don't allow full control in design of estimation behavioral structures. Someone may use one of these packages to estimate a model without understanding the underlying behavioral econometrics or the subtleties of advanced models.
Panelists stressed that someone should be immersed in working with the models, and that model production work responsibilities, which can be heavy at MTC, may detract from a focus on model estimation. One immersion approach suggested was to educate MTC staff in a university setting for this work. One panelist suggested that one semester could be devoted to training, and the next to model development.
Having mentoring resources available is always helpful, regardless of one's level of experience. An ongoing peer review panel is also helpful. The panel could be divided in different ways, or by area of expertise. Computer or telephone conferencing can help facilitate knowledge sharing. Development of the Oregon Statewide Model provides a good example of long-running cooperation, including working with people overseas.
A mentor or coach available over the course of a year, working one-on-one, may be a better way to learn. MTC expressed that a 7-10 day course, with ongoing support might be helpful.
Responding to MTC's request for resources within the area, panelists noted that Kenneth Train of U.C. Berkeley would be a very good person to contact. The panel discussed that MTC shared data and model development activities with the University of California in the 1970s, but that no such partnership currently exists. Other resources exist if MTC relaxes the need to work with someone locally. Long-distance mentoring might also work. In certain situations, however, face-to-face interaction is helpful for encouragement-type communication, supporting people in fulfilling their potential on a project.
One panelist noted that the tutorial method is the best way to learn. It would be helpful to have mentors (or mentor groups) available by topic (e.g., hazard duration models). Northwestern University uses this kind of relationship with university students: they are tasked with a specific assignment, and sometimes return with questions. In this type of dynamic, rapport needs to be built on the front end, but ongoing work can be done remotely. Sometimes additional face-to-face time may be necessary, but not always. Additionally, sometimes university staff will do this on a pro bono basis. This type of approach to learning is best achieved in the context of a specific task, with student and mentor working toward a specific goal. A mentor should sign on and be committed to the goal. The process works best when responsibility is placed on both parties, so that there is a continuing relationship with a common goal in mind.
Academic panelists noted that, often, new ideas or forward-thinking approaches resulting from this kind of mentorship-partnership might be valuable in lieu of money. If researchers see this as a benefit, possibly publishing the outcome of the process, MTC might see more buy-in from researchers.
Consulting companies can also provide mentors. They may be helpful in ways that are different from academics. They spend more time on strict schedules, and are often subject to greater pressure to insure a timely product on budget. Consultants may also be more pragmatic and less abstract in their approach to a work product.
One panelist explained that Northwestern University is continuing its work on a self-instructing course-book in discrete choice models. An update produced 6-7 years ago is still awaiting full evaluation by FHWA. Northwestern has an updated contract to extend, refine, and finish this project. Jim Ryan of FTA is the project manager. The research team expects an advanced draft, enriched with at least one additional urban model, by December 2005. Its ultimate release date from FTA is uncertain.
Panelists discussed other training opportunities led by:
Unfortunately, funding is limited for an NHI/NTI-sponsored detailed model estimation course. NHI provides some training in modeling - including freight forecasting - and there is some debate about going to more of a TMIP format. The funding scenario isn't optimistic, however. FTA has recently acquired some research funding, for the first time in roughly 20 years, and none of it is earmarked for university research. This is generally a reflection of a national trend of decreased model estimation work. Most MPOs don't have staff estimating models, so there is not a demand in the public sector for this kind of training. In the private sector, most work is isolated in a few firms. Were this kind of training offered by a public agency, there is a question about whether it would be well attended.
As a final thought, panelists discussed the turnover rate at MPOs, and whether training and coaching staff in new skills might induce them to move on to more lucrative positions in consulting. The panel questioned what, in addition to salary, an MPO is able to offer an employee. One panelist noted that the strongest appeal of this work lies in having a strong relationship with a mentor.
MTC described some of the difficulties of previous surveys and provided a background on its most recent travel survey, the Bay Area Travel Survey 2000 (BATS2000). BATS2000 data is ready for use as an input for modeling trip-chaining behavior. The BATS2000 is an activity-based travel survey conducted throughout calendar year 2000. Two-day activity diaries (including weekends) were collected from over 15,000 Bay Area households. Data was collected for both in-home and out-of-home activities. Some important features of BATS and other surveys identified by the panel are discussed below.
MTC explained that, as is the case with all such surveys, there is a fundamental question about whether the households responding to the survey were representative. One panelist noted that, because it is a telephone survey, the people captured by the survey are at home and likely the least active.
The panel discussed whether weighting should be used for estimation or not, and noted the wide weighting range (1.14 to 4589) of BATS2000 records. At least one panelist believed that estimation is more efficient without weighting samples. Another panelist noted that, if a sample has aggregate shares representative of a population, then sample shares are the same as market shares, and weights aren't needed in the estimation phase. If the assumption is that these factors are exogenous - e.g. low-income persons are more likely to use transit - then they are already included in the data set without weighting. One panelist noted that introducing explanatory variables in the models allows one to use corrections. MTC explained that sample correction factors boosted the numbers for transit modal shares.
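The weights-versus-shares point can be illustrated with a minimal sketch (all numbers are hypothetical, not BATS2000 records): when the sample's unweighted mode shares already match the population's, applying expansion weights changes little in the estimation phase.

```python
# Minimal sketch of unweighted vs. weighted mode shares. The records
# and weights below are made-up illustrations, not BATS2000 data.
from collections import Counter

records = [  # (mode, expansion_weight)
    ("auto", 1.14), ("auto", 2.0), ("transit", 4.5),
    ("auto", 1.5), ("transit", 3.0), ("walk", 2.0),
]

# Unweighted sample shares: each record counts once.
counts = Counter(mode for mode, _ in records)
n = len(records)
unweighted = {m: c / n for m, c in counts.items()}

# Weighted shares: each record counts by its expansion weight.
total_w = sum(w for _, w in records)
weighted = {}
for mode, w in records:
    weighted[mode] = weighted.get(mode, 0.0) + w / total_w

print("unweighted shares:", unweighted)
print("weighted shares:  ", weighted)
```

If the sample is representative, the two dictionaries agree and the weights add nothing to estimation; when they diverge (as with the boosted transit shares MTC described), the divergence itself flags where correction factors matter.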
The panel noted that a lot was asked of the survey, but that its number of respondents allows research into such nuances. One panelist noted that, if BATS2000 is used to move MTC's model towards a tour-based framework, data associated with many households would need to be discarded because of inconsistencies or gaps. For example, to model interactions between household members, all households with discrepant data must be removed. And, while removing these records improves the quality of the overall sample, it may lead to a greater dependence on sample weighting.
The panel also noted that time-of-day reporting for the same activity may differ significantly among members of a household. This is less of a problem when using a survey to design trip-based models. With tour-based models, however, reconciling the reports of individuals within a household cannot be ignored. For trip-chaining analyses (tour-based models), checking for intra-household consistency will be important; this might be accounted for in a survey's Computer-Assisted Telephone Interview (CATI) phase. Many surveys do not ask which household member a participant is traveling with on a given trip. One panelist noted that relevant research suggests household members may state destinations and purposes differently for the same trip. Some of these discrepancies can be resolved when one household member reports for everyone, though proxy reporting has its own problems related to inaccurate or forgotten travel.
The panel noted that TransCAD has a feature that accepts travel survey data as input and depicts household-level activity. Consultants have done some work with it and note that this feature improves data processing and is helpful for spot-checking survey data and investigating data anomalies.
One panelist asked about under-reporting in BATS2000. Consultant research for Caltrans and SCAG, which ran GPS surveys side-by-side with conventional surveys, found 17% underreporting of household trips. The three significant variables related to the underreporting of trips were age, income, and trip duration. New weights were developed to account for this underreporting.
MTC staff discussed the issue of underreported travel. On one hand, the aggregate, survey-expanded transit trips are fairly accurate. On the other hand, there appears to be significant underreporting of travel by workers in very small trucks.
Survey work done in Calgary may provide some insight into this phenomenon. Work conducted on the Calgary tour-based models included surveys with a large commercial vehicle component. One of the findings was that, without utilizing a GPS component, it could be very difficult to account for underreported travel. Bill McFarlane (SANDAG) has also done some research on this phenomenon, comparing the reporting of households with and without GPS.
The Alameda County component of the California Statewide Travel Survey found that underreporting of trips was not related to trip purpose or duration: work trips were underreported at the same rate as non-work trips, and short and long trips were both underreported. This research only looked at trips, not tours, so it cannot determine whether stops were missed, i.e., whether multi-stop journeys were reported as single-stop trips. Consultants in Ohio discovered that it is not just intermediate stops that are missed, but complete tours.
MTC is currently conducting a GIS analysis of trip distances for BATS-reported trips. Additionally, two-day odometer readings will be used to cross-validate the results of the GIS analysis.
In the 1996 NCTCOG household travel survey, the expanded home-based work trips (trips were considered home-based work only if there were no intermediate stops) came close to expectations, while home-based non-work trips and non-home-based trips were low, as were the overall VMT numbers. Increasing the home-based non-work and non-home-based trips brought the totals into line. Non-home-based trips are generally underreported in the household survey, while home-based work trips are similar to those in the work-based survey.
In tour-based modeling, it is difficult to exchange one kind of trip for another. If, at the end of the estimation process, total trips are short, factoring up more active households is one means of meeting total expected VMT. For the TRANSIMS work done at Metro, more active households were sampled more often.
The panel discussed creating trip tours for MTC's model by applying trip-chaining procedures, and the extent to which trip tours per capita vary by market segment. If MTC performs the trip-chaining in-house, then results from the 2000 survey can be compared with those from older surveys and trends can be studied. The most appropriate comparison of chained trips would probably be comparing the 1996 with the 2000 BATS, both activity-based surveys.
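A trip-chaining procedure of the kind discussed can be sketched as follows. The diary records and the home-anchor rule are illustrative only, not MTC's actual algorithm: a person's ordered trips are cut into home-based tours each time the traveler returns home.

```python
# Hypothetical trip-chaining sketch: group one person's diary trips,
# in order, into home-based tours. Records are illustrative, not the
# BATS2000 schema.
trips = [  # (origin, destination) in diary order
    ("home", "work"), ("work", "shop"), ("shop", "home"),
    ("home", "gym"), ("gym", "home"),
]

tours, current = [], []
for origin, dest in trips:
    current.append((origin, dest))
    if dest == "home":          # a tour closes on return home
        tours.append(current)
        current = []
if current:                     # an unclosed tour; flag for review
    tours.append(current)

print(len(tours), "tours:", tours)
```

Running the same procedure over the 1996 and 2000 BATS records would yield comparable tours-per-capita figures by market segment, which is the trend comparison the panel had in mind.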
The panel proposed other survey analysis ideas, including:
MTC discussed the GIS analysis it anticipates for BATS2000 data analysis. MTC has conducted a substantial amount of work in-house. MTC has also worked with Professor John Radke of U.C. Berkeley to test point-level accessibility procedures.
Activity locations given by BATS respondents have been geocoded to a GDT street network. MTC is interested in looking at the built environment near home and other activity locations. With trip-based modeling, MTC was originally interested in zone-to-zone level of service. GIS analysis will better facilitate point-to-point distance and travel time calculations for more disaggregate modeling applications. Such analysis will also greatly improve non-motorized and transit modeling. Towards that end, MTC is currently beta-testing a version of ESRI's Network Analyst to batch assign trips from BATS to a GDT street network.
The panel discussed the quality of BATS geocoded attraction data. Initially, MTC explained, the consultant geocoded about 90% of attraction addresses indicated by survey respondents. When respondents did not provide addresses, the consultant researched them before geocoding. MTC re-geocoded the attraction points using the geocoded addresses provided by the consultant. One panelist noted that geocoding error rates are typically higher outside city core areas.
The panel discussed using GIS tools to relate the built environment to trip-making behavior. One panelist noted that Larry Frank (University of British Columbia) has completed a significant amount of research in this area. Additionally, Portland Metro created buffers around geocoded coordinates, calculating the number of retail establishments, service employees, total households, and employees within different radii to relate these variables to travel characteristics. Metro found that people walk much farther than originally thought, and certainly farther than the quarter-mile planning standard for walking. This approach can be very helpful for modeling mode choice, which depends heavily on the activity location. It is also useful for calculating measures such as the number of jobs accessible within 30 minutes by transit.
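A buffer calculation in the spirit of Metro's approach can be sketched as below. The coordinates, the retail points, and the quarter-mile radius in feet are all illustrative assumptions; a real application would use projected GIS coordinates and the region's own establishment data.

```python
# Illustrative buffer count: how many establishments fall within a
# given radius of each geocoded home location. All coordinates are
# hypothetical, in feet on a projected plane.
import math

homes = [(0.0, 0.0), (1000.0, 1000.0)]
retail = [(100.0, 50.0), (300.0, 400.0), (2000.0, 2000.0)]

def within(p, q, radius):
    """True if point q lies inside the buffer of the given radius around p."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) <= radius

quarter_mile = 1320.0  # feet
counts = [sum(within(h, r, quarter_mile) for r in retail) for h in homes]
print(counts)
```

Repeating the count at several radii (quarter-mile, half-mile, biking distance) produces the kind of graduated accessibility variables Metro related to travel characteristics.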
GIS analysis may be very useful in gathering, by activity location, information about the different trip purposes within varying distances. Distances could be defined differently for different variables. For example, buffers for activities within biking distance might have a greater diameter than those defined for walking. Industrial classification information might be applied to this process.
MTC relies on ABAG for regional employment information. MTC has parcel-level data from ABAG, including a parcel database containing over 500,000 polygons. The database was assembled from several sources: individual county records, SIC data, aerial photography, local assessor data, and Census Transportation Planning Package (CTPP) block group data that has been allocated to the parcel level.
Panelists noted that business inventories may also be helpful in assembling a land use database, and that they might reconcile discrepancies identified between other data sources. Other sources cited include California State Employment Development Department (EDD) ES-202 data, InfoUSA, American Business List, and Dun and Bradstreet. When compared with commercial sources, EDD data has been found to have serious discrepancies, frequently undercounting, even among retail locations.
Panelists noted that proximity to nearby landmarks or significant destinations can be used to capture the attractiveness of a given destination or for residential choice in land use modeling. Examples are the distance from a major downtown, distance to a major park, or the acreage of a nearby park. Certain amenities within the park such as bike paths, jogging facilities, or other special characteristics might increase the desirability of such destinations. Other factors such as area crime and local school quality should also be factored into residential choice. Additionally, housing price is important in location choice, and may be a proxy for neighborhood amenities. There was some discussion about the availability of housing value data from assessor data. This data source is rich with information but may have many gaps or errors. Specifically, information about the value of a property may not be current.
When MTC asked for information regarding other land use model development efforts, the panel recommended looking at work being done by Paul Waddell (University of Washington), Larry Frank (University of British Columbia), and Kevin Krizek (University of Minnesota). Other consultants have developed ArcGIS automated scripting for their various MPO projects that might be useful for MTC's model development.
The panel discussed trip linking in BATS, specifically how trips with an intermediate stop are counted. The example discussed was whether a home-daycare-work linked trip should be counted as having an occupancy of two for the entire duration of the trip. More analysis of unlinked trips from the BATS2000 survey may be necessary to answer this question. In this example, the exact location of the daycare (or carpool pickup, etc.) matters when determining the number of people in the vehicle. One option is to separate direct home-to-work trips from trips with an interim stop, though this does not by itself resolve the occupancy question. NCTCOG used an averaging approach, examining the interim stop's location, determining whether it was closer to the beginning or the end of the trip, and assigning it to the closer location.
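The nearest-end rule attributed to NCTCOG can be sketched as follows; the coordinates are hypothetical and the rule is a simplified reading of the averaging approach described, not NCTCOG's documented procedure.

```python
# Sketch of a nearest-end assignment for an intermediate stop:
# attach the stop to whichever trip end (origin or destination)
# it lies closer to. Coordinates are illustrative.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

home, work = (0.0, 0.0), (10.0, 0.0)
daycare = (2.0, 1.0)  # intermediate stop on a home-daycare-work chain

closer_to = "origin" if dist(daycare, home) <= dist(daycare, work) else "destination"
print("assign intermediate stop to the", closer_to, "end")
```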
Other topics relating to GIS:
Population microsimulation in the San Francisco Bay Area will be largely based on the census public use microdata sample (PUMS). The Bay Area has 54 Public Use Microdata Areas (PUMA), with 7 in the City of San Francisco. The current SFCTA model is based on the 1990 5% PUMS, as 2000 data was not yet available. Future model updates for SFCTA will use Census 2000 PUMS data.
Additionally, yearly PUMS data are available from the American Community Survey (ACS), though ACS data will be much sparser than that provided by the Decennial Census. Once the ACS begins full implementation (January 2005), a 1% PUMS set will be available at the PUMA level on an annual basis. Data comparable to the decennial census 5% sample will be available from the ACS after 5 years of data collection. The challenge at that time, however, will be figuring out how to aggregate and use 5 years of PUMS data.
ACS PUMS for 2003 is currently released only at the primary metropolitan statistical area (PMSA) level. PUMA-level data from the 2003 ACS is not available. A full ACS PUMS set for 2005 should become available in mid-2006.
MTC, as well as the SFCTA, will need advice on how to make best use of annual PUMS data for population microsimulation.
One panelist noted that, in creating a population synthesizer, a logical approach is to use iterative proportional fitting with PUMS data to match control totals. Another panelist explained that the Oregon model uses a Microanalytic Integrated Demographic Accounting System (MIDAS)-style microsimulation. Panelists noted that there is considerable study in this area and broad consistency across the research. Researchers recommended by the panel include Sonny Conder at Metro, Mark Bradley, John Bowman, and Peter Vovsha (for his work in Atlanta).
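The iterative proportional fitting step mentioned above can be illustrated with a minimal two-way example. The seed table and control totals are made-up numbers standing in for a PUMS cross-tabulation and census marginals.

```python
# Minimal iterative proportional fitting (IPF) sketch: scale a
# PUMS-like seed cross-tabulation until its margins match known
# row and column control totals. Numbers are illustrative.
seed = [[10.0, 20.0],
        [30.0, 40.0]]
row_targets = [40.0, 60.0]   # e.g., households by size category
col_targets = [55.0, 45.0]   # e.g., households by income group

for _ in range(50):
    # Fit rows: scale each row to its control total.
    for i, target in enumerate(row_targets):
        s = sum(seed[i])
        seed[i] = [v * target / s for v in seed[i]]
    # Fit columns: scale each column to its control total.
    for j, target in enumerate(col_targets):
        s = sum(seed[i][j] for i in range(len(seed)))
        for i in range(len(seed)):
            seed[i][j] *= target / s

print(seed)
```

The fitted table preserves the seed's interaction structure while matching the marginals, which is why the choice of control variables (the marginals) matters so much for synthesis quality.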
MTC is considering conducting a mini-study for population microsimulation using various decennial PUMS data. The panel discussed using PUMS from 1980, 1990, and 2000 to test development of a population synthesizer. Using 1980 PUMS, one can test predictions of a microsimulated 2000 population against known control totals from 1990 and 2000. Concerns about using PUMS data included that the 2000 PUMS (currently the best available PUMS data set) is already nearly five years old, and that the preferable level of geographic aggregation for control variables needs to be determined. Panelists agreed that the control variables used in the SFCTA model were the most powerful predictors of travel behavior:
One panelist added that any control variables included must account for demographic changes and migration patterns. Additionally, if some variables, such as race, are not well predicted with the selected control variables, then it is important to consider using other control variables. Specifically, there must be some method for predicting marginal totals.
MTC explained that details about changes in population and/or composition due to in/out-migration, neighborhood changes, and development of the land market and the associated relocation of individuals are within ABAG's purview for demographic forecasting. ABAG provides projection data, producing estimates of population by age by census tract. MTC explained that ABAG uses their Projective Optimization Land Use Information System (POLIS) land use allocation model, along with other models for predicting migration.
Panelists discussed the trend in the travel modeling community to push demographic models to include more disaggregate and market-based forecasting. Many professionals, including sociologists employing hazard models for their studies, have not looked at research from a disaggregate standpoint.
Panelists noted that the improvements in demographic forecasting would pair well with cutting edge travel model practices, and thought that TMIP might want to advocate such a position. MTC explained that ABAG is generally very responsive to its needs vis-à-vis demographic model improvements, and noted that additional detail for income and other variables might be appropriate additions. One panelist recommended that stratifying income by hinge-points of different travel behavior might be more appropriate than stratifying by quartile.
Panelists suggested that race/ethnicity is important to forecast, and noted that race may significantly contribute to differences in travel behavior. Others clarified that, regardless of ethnicity, culture may matter more; for example, the length of time someone has been in this country may matter more than country or culture of origin. This data is not collected by MTC within the BATS survey. It is, however, available from census PUMS data.
One panelist asked whether MTC has performed, or intends to perform, research into a model for non-resident visitor microsimulation. The San Francisco County Transportation Authority has a model for this purpose. MTC's principal focus, however, is modeling intra-regional travel by Bay Area residents, and development of a good visitor (non-resident) model is complex and expensive. Additionally, one panelist noted, the impact of visitors is greatest on San Francisco County, and may not substantially affect the rest of the region.
One panelist inquired how gender is accounted for in these models. Most demographic models include gender but may not do a good job of estimating gender-related differences in travel behavior. Additionally, the impact of gender on travel patterns varies by circumstance. For example, stratifying data by gender and age shows that, with regard to transit ridership, women view the world differently in terms of safety and comfort. Some research also suggests that, after a certain age, women become the principal drivers in a household; in the coming years, as Baby Boomers begin to retire, this phenomenon could be a significant factor in gender-related travel patterns. Another panelist noted that the SFCTA model used gender-specific variables, and initial results suggested women with children have a higher propensity to make intermediate stops.
ABAG provides detailed age-sex cohort information at the county level for all projection years. Data is then collapsed into five age categories (not by sex) at the census tract level. There is a possibility that these ABAG "census tract control totals" could be expanded both by age and sex.
Additionally, if MTC is moving towards implementing tour-based models, it may be helpful to have analysis done in a household context, looking at the number of adults and children, number of workers, and gender as a group. This is important for capturing information about vehicle sharing and trip allocation. A significant amount of research uses both household and individual controls, implementing multi-level iterative proportional fitting for model development.
Most of the panel discussion was oriented to the demand side and did not address issues of network structure and roadway performance, including: volume-delay, dynamic network loading and queuing, acceleration-deceleration curves, etc. Most of the research addressing these questions looks at inputs and demand by time of day in small time increments. Work in this area in network simulation/microsimulation may be 10 to 20 years out, but identifying research needs in this area, even if not yet completely feasible, is important. One panelist responded that, with the level of congestion in larger urban areas, BPR curves and static assignments are questionable in value, and require fixes.
MTC's model currently uses a lot of speed validation work to resolve this difficulty, adjusting the Akçelik curve to fit validation data. MTC's departure time models do an excellent job of moving trips out of the peak, but tend to clump them around the shoulder periods. MTC would like time of day models to be sensitive to travel time, tolls and congestion.
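The two volume-delay forms at issue can be sketched side by side. The BPR coefficients below are the conventional defaults (0.15, 4), and the Akçelik delay parameter and link values are illustrative assumptions, not MTC's calibrated settings.

```python
# Sketch of two common volume-delay functions: the classic BPR curve
# and Akcelik's time-dependent form. All parameter values here are
# illustrative, not MTC's adjusted/calibrated values.
import math

def bpr(t0, volume, capacity, alpha=0.15, beta=4.0):
    """BPR congested travel time with conventional default coefficients."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def akcelik(t0, volume, capacity, period=1.0, ja=0.1):
    """Akcelik time-dependent travel time; period T and t0 in hours."""
    x = volume / capacity
    return t0 + 0.25 * period * (
        (x - 1.0)
        + math.sqrt((x - 1.0) ** 2 + 8.0 * ja * x / (capacity * period))
    )

t0 = 0.1  # free-flow time, hours
for vc in (0.5, 0.9, 1.1):
    print(vc, round(bpr(t0, vc * 1000, 1000), 4),
          round(akcelik(t0, vc * 1000, 1000), 4))
```

The Akçelik form remains finite and well-behaved past v/c = 1.0, which is one reason it is preferred over BPR for heavily congested static assignments.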
Panel comments on time-of-day modeling:
Day One adjourned discussing how the above observations relate to larger transportation planning issues. Most places in the country are not building many more road projects, few areas can rely on transit to accommodate new growth, and many other solutions (such as gas tax increases) are not politically acceptable. And yet, regardless of increases to congestion, peak period vehicle volumes will continue to climb. Panelists noted that operational improvements, land use changes, and smart growth and pricing might be the only reasonable solutions that can be implemented.
MTC described quick fixes incorporating smart growth concepts into its trip-based models for the 2005 update of the Regional Transportation Plan (RTP). There is a major policy shift in the Bay Area towards smart growth and transit-oriented development, especially in terms of providing incentives for new development around transit stations. This is the policy direction from the MTC board and from the FTA, as well.
Panelists noted that a fully disaggregate modeling approach may be an improvement in terms of the ability to capture smart growth development. At this point a panelist clarified terminology: tour-based models are not necessarily disaggregate in design, and trip-based models can be disaggregated and microsimulated. The panelist added encouragement that model development be disaggregate in both model estimation as well as model application.
Panelists noted that trip-based models may under-predict transit ridership gains associated with smart growth development. They may not have the level of detail necessary to capture small-level smart growth changes in neighborhoods. One panelist, with regards to a trip-based model, explained that a current year model run may not necessarily underestimate transit, and added that one has to be careful with smart growth adjustments so as to not overcorrect.
MTC explained that representing smart growth might simply involve a redistribution of household and job locations within a travel analysis zone. MTC measures smart growth using density and the mix of land uses, with additional indicator variables at the parcel level. One panelist indicated that GIS analysis could provide information for testing potential variables, if the necessary level of geographic detail is available.
Addressing the relationship between density, land use mix, and travel behavior, Metro has completed some non-peer-reviewed research into a pedestrian trip-end approach. In some dense areas within Portland, pedestrian mode share is as high as 30%. Related to this, Metro realized it had overestimated auto ownership in Portland's core area. Metro looked at the number of people and jobs (among other variables) within a quarter-mile or half-mile. There is a paper co-authored by Bud Reiff about this research on the ODOT website, under the Oregon Modeling Steering Committee (OMSC).
In discussing the SFCTA model, MTC noted the difficulty storing vehicles at one's residence in San Francisco, and explained that this phenomenon might influence the vehicle availability model. Because of this parking constraint, it becomes increasingly difficult to own and store additional vehicles. The SFCTA model is calibrated and utilizes constants to account for this. One panelist explained that one could also apply an indirect cost to account for this. MTC thought it might be helpful to take a survey of garaging prices by neighborhood to get a fix on variations in parking cost. One panelist commented that, from the home perspective, most people are probably not parking in garages. Street parking, while not abundant, is available. The SFCTA model does include parking costs that serve as a proxy for this phenomenon. SFCTA surveyed parking availability by radius, and created an index of the people and jobs competing for the spaces within these areas. This was done in the mode choice step for areas with parking difficulties, within areas of aggregation called parking districts. It was noted that this approach could also be applied by MTC in other parts of the Bay Area.
Similar university research utilizes a mix of land use and density for characterizing smart growth. Other indices of smart growth, not as readily available, might be appropriate: connectivity measures such as intersections per square mile, link-to-node ratio, or average block face length. MTC's unpublished Census 2000 research found that areas with smaller blocks typically have more transit usage.
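The connectivity measures listed above are simple to compute once a street network is in hand; the toy network below (nodes as intersections, links as block faces) and its area are hypothetical.

```python
# Illustrative connectivity measures for a toy street network.
# Nodes represent intersections; links represent block faces.
nodes = ["A", "B", "C", "D"]
links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
area_sq_mi = 0.25  # hypothetical study-area size

# More links per node = a better-connected (less dendritic) grid.
link_node_ratio = len(links) / len(nodes)
intersections_per_sq_mi = len(nodes) / area_sq_mi

print("link/node ratio:", link_node_ratio)
print("intersections per sq mi:", intersections_per_sq_mi)
```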
Robert Cervero has completed a substantial body of research describing the built environment. One panelist had questions about the conclusions of Cervero's work, specifically regarding correlation issues in the data and the descriptiveness of the measures. He added that three places - the Pleasant Hill, Fruitvale, and Hayward BART stations - offer good opportunities to survey residents and employees to gather transit usage data. Self-selection may be a significant factor in transit ridership within smart growth areas, and may skew conclusions: people who prefer a transit commuting option may move to places where that amenity is available. A paper by Susan Handy provides some insight into this phenomenon; Randall Crane and Robert Cervero served on the committee as well.
San Francisco City is an extreme smart growth case given its mixed-use and density, and MTC asked if GIS measures could be implemented at a very small scale. Panelists recommended MTC review the following TOD research ideas:
One panelist inquired if Bay Area policies for smart growth are linked with housing affordability, and if affordability data can be used to forecast who will move into these units and what kind of travel behavior they will have. MTC explained that, yes, many of the units in these areas are reserved for lower-income people, and that census renter/owner tenure data is available to track tenancy and related variables. MTC added, however, that the land use allocation models are limited in their ability to resolve the housing price and housing affordability mismatches.
Many panelists noted that housing affordability is one index of urban sprawl. As housing becomes more expensive, people move further away from the urban core in search of cheaper housing. One panelist noted that housing demand does not always move outward; rather, in some instances it is redistributed within developed areas. For example, in Portland many retirees, empty nesters and young people with high salaries have moved into the more urban Pearl District.
Within the Bay Area, ABAG performs the residential choice modeling, using district-to-district travel times and distances provided by MTC as an input. ABAG is currently documenting their model update efforts that examine, in part, the impact of housing prices on residential choices.
MTC initiated a discussion about truck and commercial vehicle models. The current commercial vehicle models, described as "data-free," were estimated in the early 1990s and validated against existing truck counts. These truck models and truck count data exclude the very small, two-axle, four-tire commercial trucks. MTC noted that it does not have good commercial vehicle data, and that it is looking at other consultant work and Calgary's truck classification study (among other studies) for direction. Panelists explained that many MPOs are in a similar position with respect to lacking truck data.
One panelist explained that, barring better data, interim truck/commercial models could come from commodity flow data, combined with locally generated truck rates. In the long run, a commercial vehicle survey may be most desirable.
Panelists noted the difficulties associated with administering commercial vehicle surveys:
The commercial modeling work done in Calgary was establishment-based, asking firms for travel diaries of all their truck movements. Establishment-based surveys also may have their difficulties. For example, often goods brokers don't know how things are shipped, or exactly which vehicles are used.
Panelists discussed using the number and size of trucks in a firm, time of day, and number of deliveries to calculate an attraction rate. There are complications with this approach: in Calgary, for example, tours mixed service and goods trips, and some trips were made with empty trucks. Additionally, some trips are associated with bulking and breaking loads.
One panelist noted that service vehicles represent one of the biggest commercial sectors missed in surveying, including vehicles used by gardeners, cleaning services, plumbers, etc. A complicating factor in measuring commercial vehicles is that many service vehicles have just four wheels and are therefore difficult to distinguish from passenger cars. Often this requires validation fixes that increase vehicle totals to match road counts. Consultants have completed some research for FHWA on the magnitude and distribution of commercial vehicles, categorizing them into twelve separate service vehicle types, including public service vehicles, utility vehicles, taxis, and paratransit vehicles. This analysis was based on data from twelve cities.
One approach recommended for modeling commercial vehicles was looking at the interrelationships between freight activity and the economy - essentially using economic variables as a surrogate for goods and services movement. There can be difficulties with this approach, however. In many cases freight movement varies with the individual needs of commercial operators. For example, variations in the costs of goods storage and the need for just-in-time goods delivery may keep goods at ports, on the roads, in various states of storage, etc.
Canada has good truck survey data taken from roadside surveys. Generally, the Canadian government has a good relationship with truck drivers, better than is found in the United States. Vancouver was specifically mentioned as having a good commercial survey and model.
Other surveys track commercial operations at the break-bulk point. Additionally, the Baltimore MPO survey used advertising on a vehicle as an indicator that it was commercial. A single survey may not be adequate. Panelists noted that commercial models are quite different from those for personal travel, and may require taking data available from different surveys (and truck count data) into account, and then "backing our way into the model." Transferring models from other cities may also be appropriate.
MTC made a final note that there have been some intercept surveys conducted at Bay Area ports. These could possibly be used in developing trip generation and trip destination choice models.
The other auxiliary markets of interest to MTC are airport access and airport choice models. The importance of such models in the Bay Area is associated with the longstanding desire to connect BART to SFO - and eventually to have transit rail access for all three major Bay Area airports. MTC has conducted comprehensive sets of airline passenger surveys at five-year intervals since 1975. The most recent survey was conducted in 2001/02. MTC is working with the Federal Aviation Administration on funding an airline passenger survey during the 2005/06 fiscal year. These two surveys would be very helpful in determining how BART's extension to SFO may have changed people's behavior in traveling to and from the airport.
There was some discussion about linking airport access modeling to the high-speed rail project, given that the HSR and airports might be competitive markets for certain travel corridors. MTC noted that a consultant study for conducting high-speed rail forecasts was currently underway, and understanding the interaction between the high-speed rail and the regional airport systems is a high priority for the study.
One panelist noted that, regarding airport choice, people might not consider all area airports when making a decision. For example, the three San Francisco Bay Area airports serve many of the same destinations, but price, distance, frequency, and number of stops may differ between them. MTC noted that Bay Area airports predict their own future-year enplanements, so mode of access may be the only component the model needs; MTC may not need to model airport choice itself.
MTC explained that the late Greig Harvey developed the current set of sample enumeration-based, nested logit airport access models. MTC's efforts at converting the Harvey models into existing software have stalled due to competing priorities.
Panelists also made recommendations about other contacts and resources.
Regarding mode choice to the airport, one panelist noted that a high transit mode share often reflects employee use. Chicago is an example of this. This is common because airport employees have no baggage, often live in the inner city, and have lower incomes. Related to this, MTC underscored the difficulty in simulating pieces of luggage that people are carrying, and the effect luggage has on mode of access to/from the airport.
The discussion then shifted to forecasting for new rail starts, and transit forecasting issues.
Panelists discussed that, regarding new rail starts to airports, there is generally more scrutiny on regional projects at the highest level of modeling. Because of the volume of new starts proposals, FTA is applying pressure on regional models at a project level, and putting more emphasis on the early stages of analysis.
Panelists discussed BART overcrowding. The MTC forecasts are showing very high growth in BART transbay ridership. During the peak, ridership is currently approaching the maximum capacity of the system. The capacity of the transbay tube corresponds to 2.25-minute headways, which could be a concern for the future. MTC doesn't currently model station capacity constraints, so access to some stations may be over-predicted. One panelist noted that the transit travel times and costs should feed back into the land use model. That is, the capacity for land use to intensify is limited by available transportation access. This is supported by the fact that, like BART access, bridge access is also maxing out during certain times of the day.
One panelist asked whether entirely separate modeling systems are constructed for new starts - both within the Bay Area and nationally - to assure competing projects are modeled similarly. The level of analysis prior to funding commitment has historically not been rigorous enough. In the past the thinking was that project-level planning would work out the cost-benefit numbers. Following this approach, many projects passed a first screen to regional plan adoption, thereby front-ending the whole analysis process. The desire now is to push the rigor of analysis much earlier on. Additionally, historically regional models were often not used for new starts funding, and this may become a requirement in the near future.
MTC explained that, should it decide to expand the SFCTA model for regional application, hands-on immersion in the model would be important for its staff. Experience performing sensitivity analyses was mentioned specifically.
Panelists expressed support for applying the SFCTA model regionally, with a few words of warning and encouragement. There are a lot of things to consider if upgrading this model for regional use, including how the model was developed and the data used. The SFCTA model is based on data from just San Francisco residents, and uses the MTC trip-based forecasts for the rest of the region traveling into San Francisco. Additionally, there have been advancements in tour-based modeling since development of the San Francisco model, and MTC and the SFCTA should examine and evaluate these improvements.
In the short-term it may be appropriate to use the model for evaluation and sensitivity testing. In the long term, however, it may be appropriate to augment the San Francisco model with recent tour-based advancements.
Panelists noted that MTC could pursue one of two tracks: (1) expand the model for regional use, maintaining the same coefficients; or (2) re-estimate the model using the BATS2000 data, with other recommended improvements. Provided MTC is able to acquire the computing power, expanding the model first and then calibrating it to the region is an option.
The SFCTA model was the second such tour-based application, after the Portland system. Much has been learned since then, and a substantial increase in computing power has become available to support increased model complexity.
For MTC staff, the SFCTA model might provide exposure to tour-based models and assist in developing confidence with this modeling approach, but other models should be examined. One panelist explained that there is value in working out the model specifications first, and that MTC might be making a mistake trying to implement and run the SFCTA models regionally. Some simplifying modeling assumptions were made with the SFCTA model, for example, the treatment of the Muni Third Street line. There are inconsistencies throughout that affect the user benefits calculations. The panelist added that it is useful to look at the model, and it was good for the time it was developed, but even the software and computing tools available for modeling have improved substantially since the SFCTA model's development.
Panelists noted other weaknesses of the model: inter-household interactions are not covered in the model, the time periods are broad (only 5 periods) - particularly with respect to Bay Bridge pricing schemes - and trip purposes may be too broad and not specific enough with respect to land use questions. There are also many issues with destination choice and questions about intermediate stop choices. It might be desirable to add a propensity measure for the choice to stop based on mode. Additionally, shadow pricing schemes have been necessary to reconcile MTC and SFCTA data. If MTC has the resources, it could benefit from making the model more robust. All that said, validation for transit has been good without fudge factors and many panelists agreed that there is a lot of value in using this model as a starting point.
MTC is interested in working with the SFCTA model before full-phase model redevelopment, and is interested in what sensitivities are available for model application. MTC is specifically interested in looking at elasticities that can't be found in a trip-based modeling system. One panelist thought it a good idea to have some practitioner communication of sensitivity tests available for different activity-based models, specifically addressing the models from Portland, Columbus, Professor Bhat's work at the University of Texas, Ram Pendyala's work at the University of South Florida, SFCTA, and others, with emphasis on the estimation side of models. This could be very useful and novel. One panelist added that sensitivity tests are rarely performed on traditional trip-based models, and that such models could also benefit from more sensitivity testing.
Panelists discussed comparing trip-based and activity-based models. Some cities (such as Portland and San Francisco) maintain both model types for the same geographical area, allowing for ready comparisons. San Francisco is already making such comparisons. One panelist, however, explained that comparing the two methods is difficult, and thought several questions were relevant to ask when attempting such comparisons.
The panelist explained that this evaluation/validation should go beyond reasonableness checking, and that it would be appropriate to initiate a national-level effort coalescing around the development of such standards. Another panelist explained that the discussion is mostly about mode choice sensitivities (and maybe somewhat about distribution), but that there is a greater opportunity to talk descriptively about model differences. For example, with trip-based models, there is not good information about destination choice relative to a person's departure point. Also, non-home-based and commercial trips are not handled well with trip-based models. These points prompted a question about how one might test the improvements brought by an activity-based model.
MTC explained that there is a fairly finite list of activity-based sensitivities helpful for staff to train on: transit and highway projects, income, gas prices, parking, etc. A panelist noted that gas pricing sensitivity is hard to calculate, that it's not modeled well now, and that at some point gas price information should be included in destination choice models.
Panelists discussed the potential for a pooled-fund case study of comparisons between activity-based and trip-based models. MTC explained that AASHTO has a better chance than AMPO of facilitating such a study. The NCTCOG is beginning its own RFP process for development of an activity-based model. They are considering pooling standard planning funds with three to four MPOs. Plans include pooling survey data and skims for development of a universal activity-based model, and maintaining a peer review panel for model evaluation.
MTC explained that it favors a synthesis of practice for its new model, and added that it would be ideal if the next model generation were ready for the RTP update. If so, the new model could be experimented with, while the old model would remain available as a backup. One panelist thought completing by the RTP update would be difficult, given the prospective workload, including: coordination of sensitivity analyses, expansion and re-estimation of the SFCTA model, and development of a new model system.
Returning to the idea of a pooled study, one panelist proposed pooling multiple agency resources for one model development stream, and added that the specific application would be paid for locally. He made the point that there is more agreement than divergence between jurisdictions - such as similar levels of geographical aggregation and an emphasis on point-to-point rather than zone-to-zone analysis. Another panelist dissented, explaining that while there are subsets of transportation planning professionals who want a standard model, two diverse places, such as Dallas and the Bay Area, have very different travel behavior patterns. Both infrastructure and behavior can vary widely between places. And, while the backbone of different models can be similar, the opportunity for developing a common model backbone is much smaller in activity-based models than with trip-based models. One panelist added that the modeling community is even a long way from defining the standards for a model skeleton that might be used in such a pooling approach.
Panelists encouraged MTC to take the potential risk in developing new models, explaining that the more sophisticated MPOs have to be frontrunners with new technologies/approaches. A measured and patient, but persistent approach to new model development was the advice of most panelists. MTC took a large risk in the early 70s with early model development, and advanced the state of the art. Its pioneering work became the starting point for many others in the nation.
One panelist noted that model sophistication is often above the knowledge level of MPO staff, and that experience is important in mastering the next model generation. Another responded that there might be advantages for MPO employees who haven't worked in four-step models. He explained that such employees are not contaminated with four-step model jargon, and that, conceptually, a tour-based modeling system is easier to understand. Productions and attractions, the home-based work trip purpose, and other tenets related to trip-based modeling are difficult to explain.
MTC explained its challenge working with detailed GDT networks to perform link-level or node-level assignments. MTC has been performing zone-to-zone assignments for a long time, so a move to disaggregate network assignment is unfamiliar territory. Other panelists explained that trips could be aggregated at a later stage in the modeling process for analysis and to save computing power. The SFCTA model, for example, performs its skim building and assignments at an aggregate level (though at a smaller geographic level of aggregation than MTC). Model development can be completed in several steps, and point-to-point assignment can be added as a feature later in the development process, if budget and time allow.
MTC explained that, in the past, it has prioritized travel forecasting work over traffic operations analysis. Having assistance with a transition towards dynamic traffic assignment will be helpful. Panelists discussed dynamic assignment education projects under way or newly completed.
One panelist explained that these projects are creating new algorithms, but not necessarily new software. Panelists noted that most major travel forecasting packages are adding these new tools for dynamic simulation, and they are becoming available. However, they are not microsimulation efforts, but rather mesoscopic approaches to dynamic simulation.
MTC asked about the operational models at Caltrans, and discussed the desire to link forecasting outputs with highway operation models. A panelist explained that such programs (e.g. Paramics software) provide microsimulation on a corridor basis, with broader inputs taken from the regional model. For demand modeling, traffic microsimulation is designed in, but still only for a subset of the network, not for the entire region.
NCTCOG analyzed some detailed projects using VISSIM based on a sub-area trip table from TransCAD. The regional model provided a seed table for smaller-area sub-analysis. Their findings were that dynamic simulation may be very helpful for short-term planning efforts, such as evaluating traffic management operations (signal timing, ITS work, etc.), but NCTCOG did not have confidence in the ability of such systems to make long-term predictions about traffic movements. Predicting small-area traffic movements - such as left turns, etc. - from long-term planning outputs seems a bit unreasonable.
At this point the panel noted an important clarification: dynamic assignment is not necessarily microsimulation; rather, it looks at other things. For example, mesoscopic models can handle queuing.
One panelist noted that dynamic simulation could be helpful in overcoming difficulties associated with very high volume-to-capacity ratios. Dynamic simulation represents a significant improvement over the static method, which is very limited after the volume-to-capacity ratio exceeds 1.0. MTC explained that its Akçelik curves alleviate some of the difficulties associated with static assignments, and asked whether dynamic simulation can handle queuing, specifically in the Bay Area. Related to this, MTC has forwarded data to CSI for a U.C. Irvine study looking at queuing, and awaits feedback.
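As an illustration of why Akçelik-type curves remain usable in over-capacity conditions, a minimal sketch of a time-dependent travel time function is below. The function name and parameter values (capacity, analysis period, delay parameter) are illustrative assumptions, not MTC's actual implementation.

```python
import math

def akcelik_travel_time(t0, x, capacity, period=1.0, jd=0.1):
    """Akcelik-style time-dependent travel time (hours).

    t0       -- free-flow travel time (hours)
    x        -- volume/capacity ratio
    capacity -- link capacity (vehicles/hour)
    period   -- duration of the analysis period (hours)
    jd       -- delay parameter (varies by roadway type; value assumed here)
    """
    term = x - 1.0
    # The square-root form keeps delay finite and smoothly increasing past x = 1.0,
    # unlike polynomial curves that behave poorly in over-capacity conditions.
    return t0 + 0.25 * period * (
        term + math.sqrt(term * term + 8.0 * jd * x / (capacity * period))
    )

# Travel times for an under-capacity, at-capacity, and over-capacity link:
times = [akcelik_travel_time(0.02, x, capacity=1800.0) for x in (0.5, 1.0, 1.2)]
```

The key property is that the function stays defined and monotonically increasing for volume-to-capacity ratios above 1.0, which is where static assignment with simpler curves breaks down.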
A final note on this topic is that dynamic assignment consumes a lot of computing power. There are, however, ways around this. For example, Los Alamos National Lab has a project that distributes networks and assignment over various computers.
In this section, panelists discussed topics not previously covered, including a lengthy discussion on model estimation.
One panelist commented that MTC has time and a relatively clean survey to work with for this effort. Though most of the MTC staff does not have a lot of experience estimating models, one of MTC's strengths is descriptive analysis of census and survey databases. This descriptive analysis work will be greatly beneficial for laying the foundation for further model estimation work.
One panelist inquired about the existing model limitations in terms of testing policies and projects. MTC explained that future operations and maintenance of the transportation system is a concern, specifically overcrowding of transit and highway systems. While some peak spreading work has been done on the highway side to account for overcrowding, trips to transit may be over-simulated. MTC added that it is highly desirable to improve on current efforts on departure time choice models.
A panelist asked if legal challenges of MTC plans influenced the way modeling is done at MTC, and if they affected the financial support for modeling. MTC explained that, in terms of day-to-day modeling practice, there is probably little influence. Management has been very supportive of major data collection projects, including a $1.5 million survey in 2000.
The panelists were asked about the appropriateness of logit and non-logit models for travel model development. The response to this question had two parts.
Related to the above point, the panelist explained one of the challenges of microsimulation. It requires a cycling similar to the trip-based modeling process and consists of: simulating population, simulating patterns of behavior, loading the system, doing the microsimulation, then cycling again. This process takes one-to-two orders of magnitude longer than the equivalent cycling with the trip-based process. Computer capabilities have improved dramatically, so facilitating this type of modeling is not insurmountable.
MTC asked the panel about when to apply ordered logit. A panelist explained that it is used in any situation in which there is a natural ordering (first, second, next, and so on). In time-of-day modeling, it is used in defining discrete time periods. In the panelist's own research using ordered logit, nine discrete time periods were tested, examining shifts to an adjacent time period and to a period one step further removed. The research found marginal improvement for adjacent time periods and less improvement for the one-step-removed case. The shift people make in this case will be small. In urban travel, we're generally not looking at 24-hour days - rather we're looking at a four-hour peak period - reducing the scope of the problem and allowing one to construct shorter time intervals within that four-hour period. During the peak period, 15-minute intervals are not unreasonable.
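The ordered logit mechanics described above can be sketched briefly: each discrete time period corresponds to an interval between thresholds on a latent scale. The function and threshold values below are hypothetical illustrations, not the panelist's actual research code.

```python
import math

def ordered_logit_probs(xb, thresholds):
    """Probability of each ordered alternative (e.g., discrete departure
    time periods) given systematic utility xb and ascending thresholds.

    P(y = j) = F(tau_j - xb) - F(tau_{j-1} - xb), where F is the logistic
    CDF, tau_0 = -infinity, and tau_J = +infinity.
    """
    F = lambda z: 1.0 / (1.0 + math.exp(-z))
    cuts = [float("-inf")] + list(thresholds) + [float("inf")]
    cdf = [0.0 if c == float("-inf")
           else 1.0 if c == float("inf")
           else F(c - xb)
           for c in cuts]
    return [cdf[j + 1] - cdf[j] for j in range(len(cuts) - 1)]

# Three intervals within a peak period (utility and thresholds are assumptions):
probs = ordered_logit_probs(xb=0.4, thresholds=[-0.5, 0.8])
```

Raising the systematic utility shifts probability mass toward later-ordered periods, which is the "small shift" behavior the panelist described for adjacent intervals.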
Panelists discussed continuous time. In continuous time, time groupings are eliminated - minutes are really minutes. The SIMAP research by Kulkarni & McNally is focused on continuous time. Researchers there have estimated hazard duration models, taking into account that people typically report their time in 5-minute intervals - and typically round to 10-15 minute intervals for activities over two hours long. When one takes an underlying continuous-time hazard model and works through the estimation, the formulation becomes exactly an ordered response model. The dynamics of duration are expressed through the thresholds of an ordered response model. At the end, one can specify intervals of 30 seconds or a minute. A panelist noted that this approach is not difficult. The model becomes almost like linear regression in application. Application of a hazard model is easier if restructured this way. There are many discrete choice models, with all levels of sophistication.
There are statistical packages for this type of estimation work, but one panelist stressed that canned software for estimation can cause problems. It is important to understand the underlying mechanics of models, and to have a sense of reasonableness of results. That said, once estimates of the model are done appropriately, application of models should be relatively straightforward, using code of any type. Along these lines, he explained that SIMAP is not an estimation platform, and provided an example of the difficulty in using canned models. If, for example, one might need to generate correlation in a particular way, a skilled practitioner could calculate things directly. In LIMDEP, or other estimation software, this type of procedure is limited and more difficult, and requires manipulation and cajoling.
Another panelist noted that estimating models using customized model estimation programs assumes a certain proficiency in computer programming and model estimation, skills that the average MPO employee may not possess.
The newer, more advanced models are even more susceptible to misuse than the previous generation. A panelist stressed that the analyst has to know very well what is going on, and should have a relationship with researchers in academia. Developers of these programs promise that there are improvements, but there is still concern about their misuse.
One panelist explained that every package has its limitations and though we can anticipate improvements to estimation software, no package will ever keep up in terms of providing the flexibility necessary for leading-edge research. He explained that core multinomial logit/nested logit is not that difficult to program. Any flexible matrix-accessible or object-oriented computer language can be used to design an application for estimation. The panelist added that good estimation software should have a feedback process to respond to the user community. LIMDEP apparently has this capability, while it is not yet available with ALOGIT.
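As a rough illustration of the panelist's point that core multinomial logit is not difficult to program, here is a minimal, self-contained MNL log-likelihood maximized by plain gradient ascent on synthetic data. All names, data, and parameter values are invented for illustration; production estimation would use Newton or quasi-Newton methods with proper convergence diagnostics.

```python
import math
import random

def mnl_probs(beta, attrs):
    """Choice probabilities for one observation. attrs holds one attribute
    vector per alternative; utility is linear in parameters."""
    utils = [sum(b * a for b, a in zip(beta, alt)) for alt in attrs]
    m = max(utils)                      # subtract max to guard against overflow
    exps = [math.exp(u - m) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]

def log_likelihood(beta, data):
    """data: list of (attrs, chosen_index) tuples."""
    return sum(math.log(mnl_probs(beta, attrs)[y]) for attrs, y in data)

def estimate(data, k, steps=200, lr=0.1):
    """Gradient ascent on the MNL log-likelihood (illustrative only)."""
    beta = [0.0] * k
    for _ in range(steps):
        grad = [0.0] * k
        for attrs, y in data:
            p = mnl_probs(beta, attrs)
            for j, alt in enumerate(attrs):
                w = (1.0 if j == y else 0.0) - p[j]  # (chosen - predicted)
                for i in range(k):
                    grad[i] += w * alt[i]
        beta = [b + lr * g / len(data) for b, g in zip(beta, grad)]
    return beta

# Synthetic three-alternative data generated from assumed coefficients:
random.seed(0)
true_beta = [-1.0, 0.5]
data = []
for _ in range(500):
    attrs = [[random.random(), random.random()] for _ in range(3)]
    p = mnl_probs(true_beta, attrs)
    r, y, acc = random.random(), 0, 0.0
    for j, pj in enumerate(p):
        acc += pj
        if r <= acc:
            y = j
            break
    data.append((attrs, y))
beta_hat = estimate(data, 2)
```

The sketch supports the point in the text: the core computation is a few dozen lines, while the hard parts in practice (specification, nesting structures, diagnostics, flexible error structures) are what dedicated estimation software and expertise address.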
One panelist thought it frustrating that the modeling community has been working with generalized logit and nested logit for 25+ years. There is research into new estimation methods, but it has not migrated into professional use. For many practitioners, cutting-edge estimation methods are difficult to understand, and communication between researchers and practitioners about these methods is insufficient. A fundamental question is how does one facilitate getting a user-friendly version of newly developed estimation tools into practitioners' hands.
Panelists agreed that packaged estimation software could be helpful if a resource community is available for mentoring and assistance. To some degree, this type of communication network already exists within the modeling community. Various suggestions were given on how to nurture the communication between researchers and practitioners, including federal involvement, contractual arrangements, etc. One panelist noted that it might require having an interactive screen on the computer to help look at code. He added that this has been done in various other software environments, and shouldn't pose a problem. He also explained that, in the long term, it would be desirable to have a new code package that goes further and has better interfaces, especially a better output interface.
The University of Texas recently completed research formulating a mixed continuous/discrete choice model, allowing a person to choose multiple alternatives at one time. In the past, there have only been single-choice discrete systems. The University of Texas model is an important step towards efficiently estimating utility maximization in time use. Information about vehicle type and vehicle miles is put to use. The level of sophistication is that of any traditional generalized nested logit (GNL) or mixed-logit model.
A point about the ease of model design is that generalized nested logit or generalized extreme value (GEV) models are much harder to code than a mixed logit model. However, GNL or GEV models may be preferable from an efficiency standpoint, once coding is complete. They are easier to integrate when all is said and done. Instead of a 1000-dimensional integral, one can get away with a 5-dimensional integral. The University of Texas has general code in GAUSS available to estimate any flexible GEV model, making GEV models easier to estimate.
There needs to be a better vehicle for moving such tools into practice. Developing such a process creates a continuum and no gaps in shared knowledge. The process, formerly called "technology transfer," is not often discussed now, and funding available for such efforts has dwindled or is nonexistent. In the past, this kind of cooperation helped tremendously with early model development. For example, the early generation of utility specifications yielded poor results, but became convention through improvements due to shared ideas on model development.
Panelists commented that long-term learning and technology transfer need not only be transmitted from the university to practitioners; it can come from a research team, from practice, or from shared information among practitioners. In some instances consulting firms leave their client with a software package and then back them up to ensure proper use.
Panelists discussed the recruitment of PhD students in MPOs and consulting firms. Students with master's degrees are often not equipped to work with advanced models. And, while some PhD candidates gravitate towards industry, many with the skills and training to run such models stay in academia. A panelist explained that some consultants actively recruit and hire a high number of PhD students.
Other educational factors matter as well. In some instances, students with master's degrees can be better suited than PhDs because of special skills they've acquired (e.g., computer programming). Additionally, the academic program one attends can be very important. PhD programs often teach highly specific skill sets. For example, a firm wanting a person with GEV skills will hire a different person than one seeking other types of skills. One panelist asked a general question about what percentage of RFPs/workload require these types of specialized skills, and whether hiring PhDs for specialized, but rarely used, skills is the most appropriate approach to filling this need. Another panelist noted that a lot of good potential employment candidates and current employees are lost to academia, into PhD programs, wanting to be professors. And, while this may be a potential short-term detriment to the industry, it is often a good thing for practitioners to return to school and teach. It helps prepare the next generation of students with more solid mentors and experienced academicians.
One panelist explained that MPOs have a history of on-the-job learning. However, in the past MPOs have not created a two-tiered system of hiring (i.e. hiring more educated/ experienced people to develop the model and less experienced staff to perform coding functions, etc.). Such a system could facilitate a kind of mentoring-apprentice relationship. Incessant production work can be a burden on MPO staff resources, and may not encourage the hiring of PhD students.
A panelist pointed out that there are other staff organizational structures. Some consultants, for example, have different groups of people performing different tasks, and there is not necessarily a funnel through which people are tracked in terms of experience. Instead, people go different paths. Another panelist added that, at every job skill level, it is useful to have people who really understand the work and have a developed knowledge base. For example, it is helpful to have people doing network coding who might know how to automate this process to save time. At many MPOs, one panelist observed, modeling may not represent a clear career development opportunity for new hires, representing a diminution of the importance of modeling, and deterring good candidates from entering the field.
One of the academic panelists noted that, in the academic world, there is a push to graduate PhDs to become full professors. If PhD candidates go into industry instead of into academia, this deprives a university of academic cachet.
One panelist commented that, outside of a few select consulting firms, the level of talent is not high. In many instances there isn't much new innovation in consulting firms. Work completed for one MPO is repackaged and implemented in another. Additional consultant training (beyond the few select firms) will be an important improvement.
One panelist drew a distinction between MPOs. Some have higher objectives, more difficult regions, and push to solve different kinds of problems. More complex MPOs seek internal staff and consultants who can solve these higher-order difficulties. Over time, an evolution will occur. Middle level MPOs will get closer to the most complex MPOs. It is important to put the best modeling efforts where they will make the most impact. In smaller MPOs, the decision process is not as greatly affected as it is in larger MPOs. They don't need the same level of sophistication.
In this vein, MTC is always in search of continuing education and professional development opportunities for staff. And, while model estimation trainings can be helpful, more prolonged immersion environments are necessary to learn such skills. However, such skills are difficult to develop at MPOs because model estimation is done much less frequently than it is at consulting firms.
A panelist suggested that, instead of teaching or knowledge transfer, MTC should consider inviting someone to spend three months at MTC. Not so much to work independently, but rather to work with staff. Another option is to contract with the University of California and recruit faculty directly, or for MTC to fund a post-doctoral position.
A final point made was the importance of being forward-thinking, focusing on the direction for MTC for the next 5-10 years of model development, not just looking at the near term needs for the RTP update.
One panelist wanted to affirm that MTC is on a good track. He acknowledged that a large share of the model estimation work could be performed in-house, but thought MTC may have expected to complete too much on its own, and advocated strong consulting support. Additionally, when designing the role for consultant assistance it would be valuable to leave relevant parts of any agreement open-ended for input from the study consultant. This includes consultant suggestions on new approaches that may benefit MTC.
In a final note about coaching/mentoring, one panelist explained that his group has worked with a coaching routine in the past, and has found that not everything needs to be completely thought out. Keeping a coach/mentor around for assistance allows for more flexibility in model design.
There was some disagreement about the value of pooling data for model development. Surveys have been successfully pooled for the Oregon Statewide Model. One panelist dissented, however, explaining that survey transferability is highly overrated, and that data pooling weakens the relevance of survey data to its place of origin. That is, pooling data gives one an average model for all places, but not necessarily a great model for any individual place. Additionally, transferring parameters from one place to another can be difficult. Another panelist explained that consultants often transfer parameters, and very rarely re-estimate models based on these parameters.
The panel discussed federally mandated stipulations for model specifications. Many panelists believe they can be overly prescriptive and may weaken a model's ability to characterize an individual place. One panelist explained that the FTA, for new starts, specifies certain requirements, such as coefficients falling within certain ranges. Another clarified that these specifications relate only to the home-based-work trip purpose. Additionally, one may deviate from such specifications, but had better have a good reason for the deviation. A panelist added that, to the extent that there are attributes not included, or regional differences, these items would not be accounted for.
The panel discussed model transferability and asked if differences in travel patterns might be correlated to geographic location. That is, for example, if people in Portland have different travel patterns than people in, say, Washington D.C.
MTC explained that it hadn't seen descriptive work characterizing this. One panelist noted that models, even simple ones, differ between areas in very major ways, and that all work on model transferability is about transferring an individual model within a multiple model set. The panelist did not have the data to comment on the transferability success of other models, but added that people in the field generalize from a small transferability success to the mistaken notion that model transferability will always be successful. Another added that certain models might be transferable for 85% or so of cases, but not for every case, and that this may impact policy decisions. One therefore has to be very careful about determining which cases might be transferable.
As an example of regional differences, the panel discussed weather variations and their effect on travel behavior. Panelists noted that, while it has never been done systematically, weather differences might be captured through a climate variable or a similar method.
Another panelist stressed that, while activities might differ between areas, models may still remain transferable. There haven't been enough transferability studies looking at different context variables. Results of such analysis must be regarded with caution, one panelist explained, as these models may not be transferable in other cases. Studies on transferability mentioned include those by Frank Koppelman of Northwestern University and Eric Miller of the University of Toronto.
For MTC, the issue of transferability is of principal importance as it relates to transferring and expanding the SFCTA model. One panelist noted that this will be a good case study for determining whether disaggregate models transfer better than aggregate models. A panelist added that regions with non-uniformly developed cities (or varied areas within the same city) give one the opportunity to add context variables to the model, such as transit connectivity around a city. These may work in some contexts, but may not be transferable to the rest of the world.
The panel meeting concluded Friday, December 3, 2004, at 3:00 PM. A draft report of the peer review meeting would be prepared by MTC within two months. The panelists would be provided a month to offer comments, and MTC staff would finalize the report by mid-March 2005.
|Thursday, December 2, 2004|
|9:00||Welcome and Introductions|
|9:30||Goals and Objectives|
|10:00||Existing Trip-Based Models, Zones, Networks (#3)|
|11:30||Break for Lunch (Catered)|
|1:00||MTC Staff Training (#1)|
|2:00||Travel Survey Data Preparation, & GIS (#2)|
|4:00||Representing Smart Growth & TOD (#7)|
|Friday, December 3, 2004|
|9:00||Using the Structure of the SFCTA Model System (#4)|
|11:30||Break for Lunch (Reservations at Phnom Penh at 11:45)|
|1:00||Population Microsimulation (#5)|
|2:00||Other Auxiliary Models (#6)|
|3:00||Adjourn & Next Steps|
(Numbers in Parentheses refer to "Detailed Questions to Pose to Peer Review Panel" in the 6/29/04 MTC outline. #1 in the 3-ring binder.)
The purpose of the upcoming peer review panel is to provide guidance and assistance to management and staff of the Metropolitan Transportation Commission (MTC) in the development of a new system of travel demand models for the San Francisco Bay Area.
The peer review panel is intended to guide MTC staff in preparing a consultant work scope that will comprise various activities, including:
The peer review panel would also assist MTC in addressing various policy, planning and data issues, including:
The purpose of the panel is not to critique the current generation of MTC trip-based travel demand models, but to provide forward-looking guidance on staff training needs and model development goals. This peer review panel is not seeking to rubber-stamp any existing process or product. The initial meeting of the peer review panel will need to discuss current MTC efforts in terms of zonal systems and networks, but the focus will be on preparing the BATS2000 survey for use in developing new model systems, and providing guidance on how to maximize in-house talents and abilities, complemented with a very frugal consultant budget.
The MTC peer review panel will be funded through the USDOT Travel Model Improvement Program (TMIP). The Volpe National Transportation Systems Center (VNTSC, or Volpe) will handle reimbursements for peer review panel members for air and local travel, lodging and per diem meal expenses.
The initial meeting of the MTC peer review panel is anticipated for Fall 2004, for a two-day meeting sometime between September and December 2004. The meeting would be held at the offices of MTC in downtown Oakland, California. The panel would comprise upwards of nine professional colleagues, and may include other participants from the FHWA and FTA. Approximately five to six MTC staff would be involved in work with the peer review panel. MTC would prepare additional advance reading materials, and detailed questions for the panel to consider at the Fall 2004 meeting. Volpe staff would prepare a write-up of the peer review panel meeting.
A second meeting of the MTC peer review panel is not anticipated, unless the panel at the Fall 2004 meeting recommends it.
The current set of MTC travel demand models is typical of advanced trip-based travel models. MTC staff estimated these models in the mid-1990s using data from the 1990 Bay Area household travel survey (BATS1990). Prior to that, MTC staff estimated models in the 1980s using the 1981 Bay Area household travel survey (BATS1981). Consultant teams developed the landmark, textbook examples of nested, disaggregate models used in an aggregate model system (MTCFCAST) in the 1970s, based on data from the original, 1965 home interview survey (BATS1965).
The current trip-based models are a blend of disaggregate and aggregate demand models, all applied at an aggregate, zonal level with extensive market segmentation. Auto ownership models are nested logit choice in form, and include transit/highway accessibility variables. Trip generation models are either disaggregate household, worker or student trip production or aggregate zonal trip production/attraction in form, using hybrid cross-classification / multiple regression forms. Trip distribution models are standard gravity model formulations. (Previous generations of trip distribution models were logit destination choice models in application). Mode choice models are nested logit choice. Non-motorized trips (separate modes for bicycle and walk) are included in all mode choice models. Departure time choice for work trips is a binomial logit choice, whereas departure time choice for non-work trips is based on traditional trip peaking factors. Trip assignment procedures focus on daily traffic and transit trips, and AM peak period traffic volumes and speeds. Customized speed-flow delay curves are used in traffic assignment, including an Akçelik formulation for representing arterial speeds. The model system methodology incorporates full feedback from trip assignment back through auto ownership. Trip assignment outputs (district-to-district travel times and costs) are also used as input to the land use allocation model (POLIS) used by MTC's sister agency, the Association of Bay Area Governments (ABAG). Detailed travel model specifications for this "BAYCAST-90" model system are provided in reference (1).
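The Akçelik formulation mentioned above for representing arterial speeds can be illustrated with a short sketch of its time-dependent travel time form. The parameter values here (analysis period, delay parameter, and the sample capacity and volumes) are illustrative assumptions only, not MTC's calibrated speed-flow curves.

```python
import math

def akcelik_travel_time(t0, volume, capacity, period_hours=1.0, delay_param=0.1):
    """Akcelik time-dependent travel time function (illustrative parameters).

    t0           -- free-flow travel time on the link, in hours
    volume       -- assigned volume, in vehicles per hour
    capacity     -- link capacity, in vehicles per hour
    period_hours -- duration of the analysis period, in hours
    delay_param  -- Akcelik delay parameter J (assumed; higher for arterials
                    than for freeways)
    """
    x = volume / capacity  # degree of saturation (volume-to-capacity ratio)
    return t0 + 0.25 * period_hours * (
        (x - 1.0)
        + math.sqrt((x - 1.0) ** 2
                    + 8.0 * delay_param * x / (capacity * period_hours))
    )

# Travel time equals t0 at zero volume and rises smoothly with loading,
# remaining defined even when volume exceeds capacity.
free_flow = akcelik_travel_time(0.02, 0, 1800)
congested = akcelik_travel_time(0.02, 2000, 1800)
```

Unlike a plain BPR curve, this functional form stays well behaved when the volume-to-capacity ratio exceeds 1.0, which is one reason customized speed-flow delay curves of this type are used in traffic assignment.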
In terms of zones, networks, and software, MTC currently has 1,454 regional travel analysis zones covering a nine-county region of 7,149 square miles and a population of 7.0 million. The regional highway and non-motorized networks consist of about 30,000 links, and there are 1,120 routes in the regional transit network. Socio-economic forecasts are prepared by ABAG at the census tract level (1,405 tracts in the nine-county region.) MTC currently uses Cube/Voyager (formerly TP+/VIPER) for network analysis, and compiled FORTRAN programs for various demand model applications. MTC also uses ESRI ArcGIS software that is highly compatible with our Voyager/VIPER-based networks.
MTC has basically gone as far as it can go with trip-based travel modeling systems. Trip-based models will always have the limitation of being a blend of disaggregate and aggregate model components, and it is apparent that a fully disaggregate demand model system, be it "tour-based" or "activity-based," is well within our capabilities and reach.
MTC staff is also highly motivated and talented, but there are limitations:
Additional MTC "plans for improvement" are described in a pair of MTC papers:
The Metropolitan Transportation Commission (MTC) is strongly committed to maintaining and enhancing travel modeling systems in the Bay Area. Travel modeling has been a core competency of MTC staff since the mid-1960s. Staff commitments include five full-time professionals in the MTC "travel modeling unit," all with advanced degrees in urban planning and civil engineering.
Past funding commitments, for example, include the $1.5 million Bay Area household activity-travel survey conducted in 2000 (BATS2000).
The MTC overall work program (OWP) includes a line item for a $250,000 consultant study in fiscal year 2004/05, to fund the "Travel Model Specification and Training Study." The need for consultant funds in future fiscal years (2005/06 and beyond) has not been identified.
The following is a list of the various application and needs for travel forecasts in the San Francisco Bay Area:
1. "Travel Demand Models for the San Francisco Bay Area: BAYCAST-90" June 1997, available at: http://www.mtc.ca.gov/maps_and_data/datamart/forecast/baycast1.htm
2. "Incorporating the Effects of Smart Growth and Transit Oriented Development in San Francisco Bay Area Travel Demand Models: Current and Future Strategies" November 2003, available at: http://www.mtc.ca.gov/maps_and_data/datamart/research/
3. "Extended GIS Analysis for the Bay Area Travel Survey 2000 (BATS2000): Prospectus" December 2003, available at: ftp://ftp.abag.ca.gov/pub/mtc/planning/BATS/BATS2000/
Professor Frank Koppelman, Northwestern University
Professor Chandra Bhat, University of Texas
Professor Kostas Goulias, University of California, Santa Barbara
Mr. Ken Cervenka, North Central Texas Council of Governments
Mr. Keith Lawton, formerly of Portland Metro
Ms. Maren Outwater, Cambridge Systematics, Inc., Seattle/Oakland
Mr. Bill Davidson, PB, San Francisco
Mr. Joe Castiglione, PB, Boston, formerly with San Francisco County Transportation Authority
Mr. George Naylor, Santa Clara Valley Transportation Authority (absent due to illness)
Ms. Mayela Sosa, FHWA, Sacramento, California
Ms. Supin Yoder, FHWA Resource Center, Olympia Fields, Illinois
Mr. Ted Matley, FTA, San Francisco
Chuck Purvis, Principal Transportation Planner/Analyst
Rupinder Singh, Associate Transportation Planner/Analyst
Ben Espinosa, Associate Transportation Planner/Analyst
Shimon Israel, Associate Transportation Planner/Analyst
Rachel Gossen, Associate Transportation Planner/Analyst
Kearey Smith, GIS Coordinator
Garlynn Woodsong, Assistant GIS Planner/Analyst
Jon Rubin, Chair
San Francisco Mayor's Appointee
John McLemore, Vice Chair
Cities of Santa Clara County
City and County of San Francisco
Irma L. Anderson
Cities of Contra Costa County
U.S. Department of Housing and Urban Development
James T. Beall Jr.
Santa Clara County
Sonoma County and Cities
Contra Costa County
Napa County and Cities
Dorene M. Giacopini
U.S. Department of Transportation
San Francisco Bay Conservation and Development Commission
Marin County and Cities
Cities of San Mateo County
Michael D. Nevin
San Mateo County
State Business, Transportation and Housing Agency
James P. Spering
Solano County and Cities
Association of Bay Area Governments
Cities of Alameda County
Therese W. McMillan
Charles L. Purvis
Principal Transportation Planner/Analyst
(Contributing Author & Editor)
Planner/Analyst (Principal Author)
Planner/Analyst (Contributing Author)
Associate Transportation Planner/Analyst
Associate Transportation Planner/Analyst
Manager, Planning Section