An Introduction to Panel Surveys in Transportation Studies

4. ISSUES IN CONDUCTING A PANEL SURVEY

4.1 OVERVIEW

There are numerous choices that must be made during the design and implementation of a panel survey. These include such basic issues as the definition of the sampling unit, the number and spacing of rounds of data collection, the method of data collection, the sample size, and the procedures for maintaining the panel over time.

Current practice. To help frame our discussion of these issues, Table 9 presents the relevant features of two general-purpose travel panels and two other prominent panel surveys on labor force behavior. These successful panel surveys illustrate some of the common solutions to the problems raised in planning and carrying out longitudinal studies.

Table 9 Comparison of Methods Used in Four Panel Surveys

* Paper-and-pencil personal interview. ** Computer-assisted personal interview. *** Self-administered questionnaire.

4.2 DESIGN ISSUES

DEFINITION OF THE SAMPLE UNIT

With any study design, it is necessary to specify a sampling unit. Most personal travel surveys use households as their sampling units. The Puget Sound Transportation Panel (PSTP) and the Dutch National Mobility Panel (DNMP) follow this convention [21,22]. Other possible units for travel panels include persons or housing units.

The issues surrounding the definition of the sample unit can get a little complicated when the study involves a panel design. The complication arises because the units may change over time and one must decide which units to keep in the panel.

When the sampling unit is a person, as in the National Longitudinal Survey of Youth (NLSY), it is usually clear whether or not to retain the individual in the panel. Generally, persons who leave the population (for example, by moving out of the study area or by becoming institutionalized) are dropped from the sample, but all others are retained. With both households and housing units, however, things are not so straightforward. Households can divide because of divorce or for other reasons. One member may leave the household and join a different one. New members may be born into a sample household or may join it by marriage or adoption. The sample design must include rules for dealing with each of these situations.

A common strategy is to collect data from all persons in any household that includes at least one respondent from the first round of the survey (provided that these persons meet the other eligibility criteria for the study, such as living in the study area). For example, if a household in one wave consists of a couple that subsequently splits up, then in later waves both of the resulting households would be included in the sample. Similarly, if a new member joins a sample household, then data are collected from that new member. (However, if that new member subsequently leaves, he or she would not be followed unless his or her new unit includes a respondent from the first round of the survey.) This strategy entails following respondents who move out of their original household into a new one and collecting data on the other members of the new household. Following respondents and collecting data on their households can become complicated when the original panel includes households shared by single persons, since splits and new combinations are especially common in this group.
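The following rule can be expressed compactly. The sketch below is a minimal illustration of the household-following strategy just described (keep any household containing at least one wave-1 respondent that still meets the eligibility criteria); the class and field names are assumptions for illustration, not part of any of the surveys discussed here.

```python
from dataclasses import dataclass

@dataclass
class Member:
    person_id: int
    original_respondent: bool   # took part in the first round of the survey

@dataclass
class Household:
    members: list
    in_study_area: bool         # example eligibility criterion from the text

def follow_household(household: Household) -> bool:
    """Return True if this household should be retained in later waves."""
    if not household.in_study_area:
        return False
    return any(m.original_respondent for m in household.members)

# Example: a wave-1 couple splits up; both resulting households are followed,
# but a household containing only a later joiner is not.
alice = Member(person_id=1, original_respondent=True)
bob = Member(person_id=2, original_respondent=True)
carol = Member(person_id=3, original_respondent=False)   # joined Bob's household later

print(follow_household(Household(members=[alice], in_study_area=True)))        # True
print(follow_household(Household(members=[bob, carol], in_study_area=True)))   # True
print(follow_household(Household(members=[carol], in_study_area=True)))        # False
```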

It is also possible for housing units to subdivide over time into two or more units. (For example, an apartment may be remodeled into two units.) Again, each new unit formed from the units in the original sample should be included in later waves of the survey. This strategy is followed in the Current Population Survey (CPS), which uses a sample consisting of housing units [23].

Recommendation: CHOOSING A SAMPLING UNIT
Use the household as the sampling unit for panel surveys. Follow initial respondents to new households and add any additional household members to the panel.

THE NUMBER AND SPACING OF ROUNDS

Another set of design issues concerns the number and spacing of the rounds or waves of data collection. The best spacing will depend on such factors as the rate of change in the phenomena of interest and the need for up-to-date information. For example, the more rapidly travel demand is changing, the more frequently data should be collected. Another consideration is the need for timely figures for administrative or other reporting purposes. The CPS is conducted every month in order to meet the need for monthly figures on the unemployment rate.

Another consideration affecting the spacing between rounds is the memory burden imposed by the data collection. Some panel surveys collect a continuous record for the entire period between rounds. In each new round of the NLSY, for instance, respondents are asked to report their employment history for the entire period between the current and preceding round of data collection [24]. In such cases, it is important to keep the spacing between rounds relatively short to reduce the impact of forgetting on the accuracy of the data. The effect of the spacing of rounds on memory burden does not appear to be a consideration in either the PSTP or the DNMP; in both of these travel panels, the main data collection instrument is a multi-day travel diary that covers only a short period preceding each round.

The DNMP used a six-month interval between waves of data collection. The PSTP uses a one-year interval (although the PSTP collected some additional attitudinal data between waves of travel data collection) [21,22]; similarly, the NLSY has used a one-year spacing between rounds for most of its life [24].

Taken together, the spacing of the waves and the total life of the panel determine the total level of burden on the respondents. It is unreasonable to expect sample members to provide accurate information during many waves of data collection over a very long period of time. Instead, panel members are likely to drop out and the quality of the information they provide is likely to decline as the number of rounds increases. A rotation design can help limit reporting burden on each member of the sample. The CPS uses a scheme in which sample members participate for four months, are given eight months off, and then participate for four additional months [23]. The other illustrative surveys in Table 9 do not use rotation designs. The NLSY is now entering its 18th round of data collection and the PSTP is beginning its 10th round. The DNMP came to an end in March of 1989, after 12 rounds of data collection.

Recommendation: COLLECT DATA ONCE A YEAR
Conduct waves of data collection once a year unless more frequent data collecting is required to obtain the desired information.

METHOD OF DATA COLLECTION

The method used to collect data is another key design decision. Different methods of data collection differ in terms of cost, coverage of the population, likely response rates, and data quality. Data quality is usually measured in terms of the rates of missing or inconsistent information. In general, in-person data collection is the most expensive, but produces the most complete coverage and highest response rates; in addition, it affords greater opportunities for aids to the respondent. Telephone data collection tends to be next most expensive, but omits the portion of the population without telephones. Telephone data collection also yields lower response rates than in-person data collection. Data quality may suffer somewhat as compared to data collected in a face-to-face interview. Finally, data collection by mail is the cheapest of the three modes; it offers, in principle, coverage similar to that of in-person data collection, but a lower response rate and poorer data quality. (When the questions are sensitive, however, a mail questionnaire may yield more accurate answers because respondents need not worry about an interviewer's reaction.) These points are summarized in Table 10.

Table 10 Modes of Data Collection and Their Features

Most personal transportation surveys use a combination of telephone and mail data collection [1]. Telephone interviews are used to identify eligible households initially and to enlist their cooperation in the main data collection. Then, sample households are mailed a diary or some other data collection instrument. The data collection form may be mailed back by the respondents or the information recorded on it may be retrieved by telephone. Both the PSTP and the DNMP relied on some combination of mail and telephone to collect their travel data.

Recommendation: UTILIZE MORE EXPENSIVE MODES IN THE INITIAL WAVE
In the first wave of the survey, contact respondents by telephone or in-person to maximize the initial response rate. Thereafter, adopt less expensive modes of data collection if necessary.

SAMPLE SIZE

Another key design decision concerns the choice of a sample size for the survey. The process of choosing a sample size usually consists of three steps:

Setting the precision level. The process of choosing a sample size begins with an assessment of the amount of error that can be tolerated in the survey estimates. In the case of a panel survey, the assessment usually focuses on the estimates of change since they are of primary concern. The objective is to determine how precise the estimates must be to satisfy the goals of the survey. This determination requires information on the kinds of questions that will be asked of the data and the types of analyses that will be performed. Once this information is obtained, the precision level is set at the value that will meet the analysis goals of the survey. While it is possible to obtain estimates of even higher precision, the costs of doing so usually outweigh the benefits.

Calculating sample size. Once the target precision level is set, the number of cases, n, required to reach that level can be estimated using standard formulas for sample size estimates [25]. The formulas applied in this step will depend on the sampling design of the survey. In any case, the formulas will require some information about the expected rarity, rate, and variability of changes in the variables of interest. When the statistical properties of the variables are expected to differ, a separate computation is usually performed for each critical variable (variables that are essential to accomplishing the goals of the survey) since the computations will, as a rule, yield different values of n. When the numbers are reasonably close in value, the largest n is typically selected if resources for the survey can support a sample of that size. When there is considerable variation among the numbers, the desired level of precision may be relaxed for some variables, or some variables may be dropped from the survey if they cannot be measured with an acceptable level of precision given the resources available. (A simple numeric sketch of this step and the next follows the final step below.)

Adjusting for attrition, nonresponse, and eligibility rates. In the final step of the process, estimated sample sizes are usually adjusted to take into account the effects of nonresponse, attrition, and rate of eligibility. Since nonresponse and attrition reduce the size of the sample, the number of units in the initial sample must be larger than the required n to yield a final sample of the desired size. The adjustment for these losses is made by dividing the estimated sample size by the product of the expected response rate for the first wave and the cumulative retention rate for the remaining waves. (The cumulative retention rate is the proportion of first wave respondents who go on to complete all waves.) In situations where the sampling frame includes units who are not eligible to participate in the survey, the sample size must also be adjusted by the expected eligibility rate, the proportion of sample units expected to qualify for inclusion in the study. In this case, estimated sample size is divided by the product of the response, retention, and eligibility rates to yield an estimate of the number of cases that must be drawn from the sampling frame to obtain a final sample of the desired size.
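The arithmetic behind the last two steps can be illustrated with a short sketch. The first function uses one common textbook formula for estimating a proportion under simple random sampling (not necessarily the formula prescribed in [25]); the second applies the adjustment described above, dividing the required final sample size by the product of the response, retention, and eligibility rates. All of the rates and targets shown are illustrative assumptions rather than figures from the text.

```python
import math

def n_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    """Cases needed to estimate a proportion p within +/- margin at roughly 95% confidence,
    under simple random sampling (one common textbook formula)."""
    return math.ceil(z**2 * p * (1.0 - p) / margin**2)

def initial_sample_size(n_final: int, response_rate: float,
                        cumulative_retention_rate: float,
                        eligibility_rate: float = 1.0) -> int:
    """Units to draw from the frame, inflating n_final for first-wave nonresponse,
    attrition over later waves, and ineligible frame units."""
    return math.ceil(n_final / (response_rate * cumulative_retention_rate * eligibility_rate))

# Illustrative assumptions: a change expected in about 10% of households, estimated
# within +/- 2 percentage points; 70% first-wave response; 60% of first-wave
# respondents complete every wave; 90% of frame units eligible.
n = n_for_proportion(p=0.10, margin=0.02)            # 865 completed panel cases needed
print(n, initial_sample_size(n, 0.70, 0.60, 0.90))   # 865 2289 units drawn from the frame
```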

4.3 MAINTAINING THE PANEL

FRESHENING THE SAMPLE

Freshening the sample refers to adding units to the sample over time. It is done in order to represent new members of the population (such as households that moved into the study area after the original sample was selected), to compensate for losses from attrition, or both. In rotation group designs, the addition of new rotation groups in later rounds of the survey is a built-in feature of the design.

Adding new units to improve or maintain representativeness of the sample. New units may be added to the original sample in later rounds of data collection so that the sample accurately reflects changes in the population over time. In a transportation panel study, it may be important to represent households that are new to the study area (either because they are newly formed or because they have moved in from outside the study area). New units may be found by screening a cross-sectional sample of households. For example, a sample of telephone numbers may be selected and asked screening questions to determine whether the household could have been included in the initial sample.

The longer the panel study is continued, the less representative the panel will become of the current population. As a result, the decision about whether to incorporate new selections in later rounds is likely to depend in part on the expected life of the panel. When the panel continues for five or more years, inferences about the current population are likely to be inaccurate if they are based solely on the original sample.

Recommendation: ADD CASES TO MAINTAIN REPRESENTATIVENESS
If a panel continues for five or more years, or if there is substantial immigration to the study area, add a supplemental sample to cover new households not represented in the original sample.

Adding new units to maintain sample size. In all panel surveys, a certain percentage of respondents drop out over time; thus, the cumulative retention rate will be less than 100 percent of the original sample in subsequent waves of data collection. To make up for this loss, some panel studies add new units to maintain a sample size adequate to support the required analyses. The Puget Sound Transportation Panel and the Dutch National Mobility Panel both introduced new units in later waves to maintain adequate sample sizes when respondents dropped out of the panel study [21,22]. The required sample size is typically determined at the beginning of the panel study and it must be maintained throughout the length of the study.

There are several alternatives to adding units in subsequent rounds. They include (1) maintaining a low rate of attrition; (2) planning the initial sample size to include an allowance for the expected rate of attrition over time; and (3) using a rotation group design, in which old cohorts are replaced by new ones after a certain period. Adding cases to replace nonrespondents raises difficult statistical issues for weighting the data. We recommend against this practice.

Recommendation: ALLOW FOR ATTRITION IN PLANNING THE SIZE OF THE PANEL SAMPLE
To avoid having to add cases later on, the initial sample size should allow for losses due to attrition in later waves and the survey procedures should attempt to minimize attrition. Adding cases to replace nonrespondents should be done only as a last resort.

Adding new rotation groups. The main purpose of rotation group designs is to reduce the reporting burden on panel survey respondents. Asking the same respondents to supply information in every data collection period, especially if the waves are closely spaced (for example, every month, as with the CPS) and the survey is scheduled to last for an indefinite or multiyear period, may substantially increase the attrition rate, introducing biases and reducing the precision of sample estimates [23]. A rotation group design limits the participation of each member of the sample, while preserving the advantages of overlap in the sample from wave to wave. When a large number of rounds of data collection are planned, a rotation group design may represent a good combination of the features of a cross-sectional and panel survey.
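As a concrete illustration, the sketch below encodes the scheduling logic of the CPS rotation pattern described earlier (four months in sample, eight months out, four months back in). It is a simplified sketch under that stated assumption, not the Census Bureau's actual implementation.

```python
def in_sample(entry_month: int, current_month: int) -> bool:
    """True if a rotation group that entered in `entry_month` is interviewed in `current_month`
    under a 4-8-4 pattern (months-in-sample 1-4 and 13-16)."""
    k = current_month - entry_month          # zero-based months since the group entered
    return (0 <= k <= 3) or (12 <= k <= 15)

# A group entering in month 0 is interviewed in months 0-3 and again in months 12-15.
print([m for m in range(24) if in_sample(0, m)])   # [0, 1, 2, 3, 12, 13, 14, 15]
```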

MAINTAINING HIGH RESPONSE RATES ACROSS WAVES

A panel survey faces all the same obstacles to a high response rate as a cross-sectional study. Some sample members will be reluctant to participate; others will be difficult to contact or locate. Still others will refuse to participate unless the survey accommodates their special needs. Unless measures are taken to overcome these obstacles, initial response rates are likely to be low. A panel survey faces the additional issues of following households that move over time and maintaining cooperation across multiple rounds of data collection. Most panel studies use several techniques in an effort to minimize attrition, including:

Tracing movers. Panel respondents may change residence between waves of data collection, and time and money are needed to locate such respondents. The NLSY uses an elaborate locating method to trace movers. First, a locating letter is sent four months prior to the next data collection period, asking respondents to send an update if their addresses or phone numbers have changed. The envelope requests that the post office send address corrections rather than forwarding the letters. Thus, updated address information may be obtained either from the panel respondent or from the post office. If no information is received, it is assumed that there is no change in locating information.

Based on the response to the advance letter, the locating information is updated. Letters returned by the post office without a forwarding address are sent to a "locating shop," along with any information about the sample member, such as his or her social security number, locating information for friends and family, the work address, and so on. The locating shop first attempts to locate respondents by checking one or more publicly available databases. If these electronic searches fail to produce an address, field staff begin by calling the previous telephone number in case a recording is left with information about the new number. Friends, family, and work may also be called to obtain new addresses and telephone numbers for sample members who have moved. Thanks to this extensive tracing system, the NLSY maintained an overall retention rate of 89 percent through 1994.
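The ordered fallback just described can be summarized in a short sketch. The function and source names below are hypothetical placeholders rather than part of the NLSY's actual system; the point is simply that each locating source is tried in turn, and a case is treated as unlocated only after every source has failed.

```python
def locate_respondent(respondent_id, sources):
    """Try each locating source in order; return the first usable contact record, else None."""
    for source in sources:
        contact = source(respondent_id)
        if contact is not None:
            return contact
    return None  # still unlocated; retained as a wave nonrespondent rather than dropped

# Hypothetical sources, in the order described in the text above.
def advance_letter_reply(rid): return None              # respondent did not send an update
def post_office_correction(rid): return None            # letter returned without a new address
def public_database_search(rid): return None            # electronic search came up empty
def call_previous_number(rid): return {"phone": "555-0100"}   # recording gave the new number
def call_friends_family_work(rid): return None

print(locate_respondent("R-42",
                        [advance_letter_reply, post_office_correction,
                         public_database_search, call_previous_number,
                         call_friends_family_work]))
```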

It is important to trace respondents who have moved or changed telephone numbers so that the panel study can maintain the required sample size and reduce attrition. Adopting a comprehensive locating procedure is essential to minimize nonresponse bias.

Recommendation: DEVELOP A LOCATING PROTOCOL
To reduce attrition, develop a locating protocol to track respondents who have moved since the last round of data collection.

Maintaining contact with households between waves. The time frame between waves in panel surveys may vary from a month, as in the CPS, to a couple of years, as in the NLSY. During the interval between consecutive interviews or waves, it is important to maintain contact with the respondents in a panel survey. The PSTP uses a number of methods, including follow-up postcards, summary reports mailed after each wave, and reminder letters sent out before each data collection period.

These techniques help keep the respondents interested in the study, give them a sense of its importance, and remind them about upcoming waves of data collection. In addition, they can yield updated information on the respondent's whereabouts.

Recommendation: MAINTAIN CONTACT WITH RESPONDENTS BETWEEN WAVES
To maintain respondent interest and get updated locating information, send postcards, holiday greetings, and summaries of results to respondents between waves.

Providing incentives. To encourage participation, many surveys provide cash or gifts to respondents; such incentives may be especially useful in panel surveys, which must maintain cooperation across multiple rounds of data collection.

Retaining wave nonrespondents. To minimize the effects of attrition, it is important not to write off sample members who become nonrespondents after the initial wave. Many of these "wave nonrespondents" may be willing to participate in later rounds. If wave nonrespondents are kept in the sample and some are "converted" in later waves, the effects of attrition may not be cumulative. The fact that, say, 10% of the initial respondents do not take part in the second wave should not necessarily impose a ceiling on subsequent retention rates. (Retention rates refer to the proportion of the first wave respondents who complete later waves of data collection.) Cases that could not be located in one wave may be found later on; cases that were too busy to take part in one round may have more time in the next. In any panel sample, there will be cases that insist on being dropped from the panel; it may make sense to simply write off such cases since chances of converting them are very low. But a substantial portion of wave nonresponse is due to temporary circumstances and wave nonrespondents should not be automatically dropped from the panel.
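A small numeric sketch, using invented participation data, makes the point concrete: because some wave nonrespondents return, the retention rate in a later wave can exceed the rate in an earlier one.

```python
# Each set holds the IDs of first-wave respondents who completed a given wave.
wave1 = set(range(100))                                  # 100 first-wave respondents
wave2 = wave1 - {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}          # 10 drop out in wave 2
wave3 = (wave2 | {1, 2, 3, 4, 5}) - {11, 12, 13}         # 5 return, 3 new dropouts

def retention_rate(wave: set, first_wave: set) -> float:
    """Share of first-wave respondents who completed this wave."""
    return len(wave & first_wave) / len(first_wave)

print(retention_rate(wave2, wave1))   # 0.90
print(retention_rate(wave3, wave1))   # 0.92 -- higher than wave 2
```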

Recommendation: DROP ONLY HARDCORE REFUSALS FROM THE PANEL
Many cases who fail to participate in one wave of data collection will participate in later waves if given the chance. To reduce the effects of attrition, wave nonrespondents should not be automatically dropped from the panel.

MODIFYING THE QUESTIONNAIRES ACROSS ROUNDS

A defining feature of a panel design is the administration of the same items to a sample of respondents on several occasions over time. It is this feature of panel designs that permits the direct measurement of change in individual units. It would, therefore, seem logical that questionnaires and data collection instruments should be kept the same across each wave of a panel study. Any changes in appearance, content, or wording of the instruments, or in the data recording or coding procedures, could compromise the comparability of the data in the different waves.

Two considerations may, however, make it necessary to change the data collection instruments used in a panel survey. In the first place, new issues may arise and the panel sample may be the best means for collecting information about them. As we noted in Section 2, one of the virtues of a panel study is its ability to provide timely information about emerging issues. When new issues arise, it may make sense to add a module or supplement to the existing instruments. In effect, the panel sample can be used to collect cross-sectional data on the new topic. Although this strategy may not capitalize on all the strengths of a panel design, it can save time and money compared to selecting and interviewing a new cross-sectional sample. In addition, the data collected about the panel members in previous waves may enrich the analysis of the data collected in the new module. However, since adding questions to the instrument will increase the burden placed on the panel respondents, the number of new items should be kept to a minimum. In some cases, it may be better to conduct a separate survey than to jeopardize the success of an ongoing panel.

A second circumstance that can argue for change in a panel questionnaire involves problems with an item. When a question yields unreliable data in each wave, the estimates of change become doubly unreliable. For this reason, it is important even in panel surveys to rewrite poorly worded questions or questions that appear to yield suspect data for other reasons. Although replacing faulty questions or instruments interrupts the sequence of comparable measurements, it may be necessary if the measurements are to be interpretable at all. Fortunately, the likelihood of finding faulty items can be substantially reduced through pilot testing of the instruments in advance of the main survey. However, sometimes the problem with an item is not that it was poorly conceived in the first place, but that it becomes less and less meaningful over time. The CPS was recently overhauled for the first time since 1967. Over the intervening years, many items that were once perfectly sensible no longer yielded the required information. When the core items (those repeated in each wave) must be modified, it is often useful to carry out a calibration experiment, in which the old and new questionnaires are administered to different portions of the sample. The results of the calibration study can help analysts disentangle the effects of changes in the instruments from true change in the respondents.
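A minimal sketch of such a calibration analysis, using invented numbers, is shown below: one wave's sample is split at random, the halves receive the old and new instruments, and the difference between the halves estimates the instrument effect to be netted out of measured change.

```python
import random

random.seed(0)
sample = list(range(2000))
random.shuffle(sample)
old_half, new_half = sample[:1000], sample[1000:]   # random split of one wave's sample

# Hypothetical responses: the "true" rate is 0.30, but the new wording is assumed
# to elicit "yes" about 3 percentage points more often.
old_yes = sum(random.random() < 0.30 for _ in old_half)
new_yes = sum(random.random() < 0.33 for _ in new_half)

instrument_effect = new_yes / len(new_half) - old_yes / len(old_half)
print(f"Estimated instrument effect: {instrument_effect:+.3f}")
```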

Recommendation: ADD NEW MODULES AS NEW ISSUES ARISE
Although changes to the core instruments in a panel should be kept to a minimum, as new issues arise, modules can be added to get timely data. If a core instrument needs to be overhauled, a calibration study should be done to determine the effect of the change.
