U.S. Department of Transportation
Federal Highway Administration
1200 New Jersey Avenue, SE
Washington, DC 20590
Federal Highway Administration Research and Technology
Coordinating, Developing, and Delivering Highway Transportation Innovations
This report is an archived publication and may contain dated technical, contact, and link information.
Publication Number: FHWA-RD-96-177
Date: October 1997
Development of Human Factors Guidelines for Advanced Traveler Information Systems and Commercial Vehicle Operations: Definition and Prioritization of Research Studies
Eight human factors experts, highly familiar with ITS and ground transportation, provided the raw data. Six raters were drawn from key authors of the working papers (Dingus, University of Iowa; Kantowitz, Battelle; Landau, GM Hughes; Lee, Battelle; McCauley, Monterey Technologies; Wheeler, Battelle). In addition, two distinguished university faculty completed the lengthy questionnaire (Professor Moray, University of Illinois, and Professor Triggs, Monash University). Because they did not participate in writing the working papers, it was hoped that their ratings would not reflect any biases Battelle team members may have acquired during their intense collaboration on this project.
The rating forms (appendix B) were created by concatenating tables 2 and 3; thus each form contained 2,184 cells (91 issues by 24 criteria). Since all 8 raters completed every cell, the data set contains a total of 17,472 rating entries. A 5-point scale was used for the ratings (appendix B). Completing the rating form took approximately 8 hours; raters were self-paced.
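The cell and entry counts above follow directly from the stated dimensions. A minimal sketch of the arithmetic, with illustrative variable names not drawn from the original study:

```python
# Dimensions stated in the text; names are illustrative only.
NUM_ISSUES = 91      # candidate issues (table 2)
NUM_CRITERIA = 24    # rating criteria (table 3)
NUM_RATERS = 8       # human factors experts

cells_per_form = NUM_ISSUES * NUM_CRITERIA   # cells on one rating form
total_entries = cells_per_form * NUM_RATERS  # entries in the full data set

print(cells_per_form, total_entries)  # 2184 17472
```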
After the ratings were scored, raters were given three new lists of candidate issues to evaluate. List A (appendix B) contained the 16 highest-rated issues for the entire data set, weighted as described later in this paper. List B contained the 16 highest-rated issues based only on the individual rater's data, using the same weighting scheme; thus, eight unique List B forms were generated, one for each rater. List C (appendix B) contained a stratified random sample of 16 candidate issues, with 4 issues drawn from each quartile, again under the same weighting scheme. Raters were told to treat each list independently and were not informed how the lists had been created. For each list, raters were instructed to delete as many issues as they wished if they believed a particular issue was not important or not practical given the resource limits of the project. They could also add one issue to each list.
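The List C procedure (4 issues drawn at random from each quartile of the weighted rankings) can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the function and issue names are assumptions, and the exact quartile boundaries used in the original study are not specified in the text.

```python
import random

def stratified_sample(ranked_issues, per_stratum=4, strata=4, seed=None):
    """Draw per_stratum issues at random from each quartile of a ranked list.

    Hypothetical reconstruction of the List C sampling described in the text:
    the 91 weighted, ranked issues are split into 4 quartiles and 4 issues
    are sampled from each, yielding 16 in total.
    """
    rng = random.Random(seed)
    n = len(ranked_issues)
    sample = []
    for q in range(strata):
        # Integer quartile boundaries; strata differ by at most one issue.
        start = q * n // strata
        end = (q + 1) * n // strata
        sample.extend(rng.sample(ranked_issues[start:end], per_stratum))
    return sample

# 91 candidate issues, assumed already ranked by weighted rating.
issues = [f"issue_{i:02d}" for i in range(1, 92)]
list_c = stratified_sample(issues, seed=1)
print(len(list_c))  # 16
```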