Featuring developments in Federal highway policies, programs, and research and technology.
This magazine is an archived publication and may contain dated technical, contact, and link information.
Public Roads, Vol. 62, No. 6
Human Factors Recommendations for TMC Design
by Nazemeh Sobhi and Michael Kelly
Traffic incidents can worsen congestion from the moment they occur until long after the incident itself is cleared. Prompt identification and clearance of incidents can minimize this damage.
Traditional incident detection methods, typically relying on phone notification by citizens or a fortuitous discovery by a law enforcement officer, may require relatively long periods of time to detect and begin clearing accidents. Intelligent transportation systems (ITS) technology, including closed-circuit television, traffic-flow sensors, and computer-based incident detection systems, can identify roadway incidents much more quickly. In addition, traveler information systems can alert drivers approaching an incident to the need to drive cautiously to avoid a secondary accident.
An ITS-class traffic management center (TMC) incorporates a large number of sensors to collect and communicate traffic-flow data. They may employ data fusion and automated information processing to combine and translate the information into a form that can enhance operators' knowledge of the status of the roadway environment. A TMC can automate some routine decisions and actions to help manage the operators' workload. Finally, a TMC provides new channels to communicate information to drivers and vehicles, including presentation of brief messages on variable message signs (VMS) and presentation of more complex information on short-range highway advisory radio (HAR) transmitters.
The success of advanced traffic management systems (ATMS) operations ultimately depends on how the human operator interacts with the systems' computers and devices. The human operator is a critical component whose capabilities and limitations must be integrated into the design and operation of a TMC. The need to integrate human operators into the traffic management system dictates that the principles of user-centered design and human factors engineering must be incorporated into the overall system design and system engineering process.
A series of experiments was conducted in a high-fidelity TMC human factors research simulator to address some of the important questions about how best to integrate the human operator into the high-technology TMC. This article discusses some of the questions concerning systems for data collection and information dissemination.
Methods of Controlling Remote CCTV Cameras
With a large number of remote cameras in use, a TMC operator may spend considerable time selecting remote cameras for viewing on available monitors and manipulating (panning, tilting, and zooming) the camera view. Because the design of the camera-control interface affects the cameras' effectiveness and ease of use, it also affects the effectiveness of the entire traffic management system.
Several human factors design issues are associated with selecting and controlling these cameras.
Four manual interfaces were tested for selecting and controlling remote cameras: the joystick, keyboard, touch-screen, and mouse interfaces.
In the joystick interface, left and right movements of the joystick result in left or right pans of the camera view. Backward and forward movements of the joystick result in downward and upward tilts of the camera. Zooming is accomplished by simultaneously pressing a button on top of the joystick and moving the stick forward or backward. Cameras are selected for viewing by typing an identifier string using the numerical keypad of the computer keyboard. This configuration matches a common arrangement, currently in use in TMCs, that features a joystick and associated keypad to select and control cameras.
The keyboard interface is very similar to the joystick interface. Rather than moving a stick left, right, forward, or backward, subjects use the left, right, up, and down arrow keys beside the keypad. Zooming is accomplished by simultaneously pressing the rightmost control key (relabeled ZOOM) and the up or down arrows.
The touch-screen interface uses a set of icons on the map display for selecting cameras. The map is shown on the upper-right corner of the computer workstation beside the video window where the camera video is displayed. The subject first touches an upper-level icon that designates the desired roadway. The map then switches to a zoomed-in view of a map of the chosen roadway, and a separate icon is displayed for each camera along that roadway. The operator then touches the icon associated with the desired camera, and that camera view is displayed in the video window. A control pad at the bottom of the screen controls camera movement.
The mouse interface is similar to the touch-screen. Mouse users view a map on the workstation map display. They open a box containing a region of the overall map by clicking and dragging the cursor. Cameras are then selected by clicking on a specific camera icon. The control pad near the bottom of the screen manipulates the camera just as with the touch-screen except that users "click" with the left mouse button to press a control button rather than pressing it with their fingertips.
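As a rough illustration of how such an interface might translate input events into camera commands, the sketch below maps arrow-key presses (with an optional ZOOM modifier) to pan, tilt, and zoom actions. The key names and command strings are hypothetical, not taken from the study.

```python
# Hypothetical sketch of a keyboard-style camera interface.
# Key names and command strings are illustrative assumptions.

def keyboard_command(key, zoom_modifier=False):
    """Translate a key press into a camera command string."""
    if zoom_modifier:                      # ZOOM key held down
        if key == "up":
            return "zoom_in"
        if key == "down":
            return "zoom_out"
    mapping = {
        "left": "pan_left",
        "right": "pan_right",
        "up": "tilt_up",
        "down": "tilt_down",
    }
    return mapping.get(key, "no_op")       # ignore unmapped keys
```

A joystick interface would use the same command vocabulary, with stick deflections in place of key presses.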
A relatively large number of selection errors made by the touch-screen group indicated that a touch-screen interface should not be used to select cameras. Touch-screen interfaces have inherent limitations due to the required sizes of touch zones and the possibility of errors due to visual angle. As the number of touch icons increases and their size and spacing decrease, errors become more frequent.
The higher percentage of transition (camera movement) time for joystick users operating manual cameras is related to the tendency of joystick users to make relatively few short panning movements accompanied by fewer tilt and zoom movements. Mouse and keyboard users tend to make more tilt, zoom, and short panning movements. These findings suggest that mouse and keyboard users were likely to conduct a more thorough inspection of camera sites. It is partly on this basis that we recommend that mouse and keyboard interfaces be selected for manually controlling remote cameras.
Interfaces using preset cameras appear to be superior to pure manual controllers. The preset functions allow for a quick look at a location and prevent the operator from spending time on manually panning the camera around to view the roadway from opposite directions. The manual functions, on the other hand, allow the operator to examine a particular location in great detail when necessary. Our recommendation, therefore, is that a camera-control design should incorporate preset capabilities for approximate camera aiming along with manual control capabilities for fine aiming.
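A minimal sketch of the recommended hybrid design, assuming a camera model with simple pan/tilt/zoom state (the class and method names are illustrative): presets handle approximate aiming, and manual adjustments provide fine control.

```python
class CameraController:
    """Hybrid camera control: presets for approximate aiming,
    manual adjustments for fine aiming (illustrative sketch)."""

    def __init__(self, presets):
        self.presets = presets            # name -> (pan, tilt, zoom)
        self.pan, self.tilt, self.zoom = 0.0, 0.0, 1.0

    def goto_preset(self, name):
        """Jump directly to a stored view of a known location."""
        self.pan, self.tilt, self.zoom = self.presets[name]

    def adjust(self, d_pan=0.0, d_tilt=0.0, d_zoom=0.0):
        """Fine manual adjustment relative to the current view."""
        self.pan += d_pan
        self.tilt += d_tilt
        self.zoom = max(1.0, self.zoom + d_zoom)   # zoom never below 1x
```

An operator would call `goto_preset` to check a location quickly, then `adjust` only when a detailed inspection is needed.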
Incident Detection Algorithms
TMC operators are concerned with several performance parameters of incident detection algorithms - hits, misses, false alarms, and detection latencies. A hit occurs when the system correctly detects and reports the presence of an incident. A miss occurs when the system fails to detect and report an existing incident. A false alarm occurs when the system reports that an incident has occurred when there is none. Detection latency refers to the amount of time required by the system to detect and report an existing incident. An automated incident detection system should provide a high percentage of hits, a low percentage of misses and false alarms, and a short detection latency.
In reality, these parameters interact with each other. For example, the detection algorithm can be tuned to reduce the false-alarm rate by also reducing the hit rate. The detection latency can be reduced by accepting a higher false-alarm rate or a lower hit rate. A crucial question for an automated support system designer is the effect of various approaches to this tradeoff on operator and TMC performance. One existing philosophy, for example, is that the system should be designed to eliminate the "cry wolf" problem by minimizing false alarms.
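The tradeoff can be illustrated with a toy threshold detector. Assuming the algorithm produces a congestion score for each monitored interval (the scores below are invented for illustration), raising the detection threshold lowers the false-alarm rate but also lowers the hit rate:

```python
def detection_rates(incident_scores, clear_scores, threshold):
    """Return (hit rate, false-alarm rate) for a given threshold."""
    hits = sum(s >= threshold for s in incident_scores)
    false_alarms = sum(s >= threshold for s in clear_scores)
    return hits / len(incident_scores), false_alarms / len(clear_scores)

# Invented algorithm scores: higher means "more likely an incident".
incident_scores = [0.9, 0.8, 0.6, 0.4]   # intervals containing an incident
clear_scores    = [0.7, 0.3, 0.2, 0.1]   # intervals with normal flow

print(detection_rates(incident_scores, clear_scores, 0.75))  # strict: (0.5, 0.0)
print(detection_rates(incident_scores, clear_scores, 0.50))  # lenient: (0.75, 0.25)
```

Tuning the threshold moves performance along this curve; it cannot improve hits and false alarms simultaneously.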
Our operator performance data suggest that automated incident detection systems enhance the operators' performance by reducing misses and decreasing the detection latency. The data further suggest that the incident detection system should be implemented with the highest hit rate and lowest detection latency that are practical to achieve.
While this will ultimately translate into a higher false-alarm rate, operators are able to compensate for this if the system provides a means to easily identify and dispose of false-alarm reports. The means might include automated identification of recommended camera sites and other sensors appropriate to evaluate the data as well as color-coded map displays and CCTV monitors to supplement the incident detection system.
In addition, the operator needs a means of easily rejecting incident reports or delaying their management. During our experiments, operators were given a menu of four choices that appeared with each incident detection system report, allowing the operator to report the incident, reject the incident as a false alarm, declare the report a duplicate of a previously reported incident, or place the report in a background log for later handling.
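The four-choice disposition menu can be sketched as a simple dispatch over the operator's selection. The choice names and state structure below are illustrative, not taken from the experimental software:

```python
def dispose_report(report, choice, state):
    """Route an incident detection report per the operator's choice."""
    queues = {
        "report": "active_incidents",     # confirm and report the incident
        "false_alarm": "rejected",        # reject the report as a false alarm
        "duplicate": "duplicates",        # duplicate of a reported incident
        "defer": "background_log",        # place in log for later handling
    }
    if choice not in queues:
        raise ValueError(f"unknown disposition: {choice}")
    state.setdefault(queues[choice], []).append(report)
    return state
```

Keeping rejection and deferral this cheap is what lets operators tolerate a higher false-alarm rate.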
Management of VMS and HAR Messages
VMS are increasingly used to transmit traffic information to drivers. In areas with a large number of VMS, TMC operators may be challenged by deciding which messages to post on particular signs, when to change the messages, and when to clear the signs. Computer-based support systems may be used to assist the operator.
One major issue in controlling a combined VMS/HAR information system was the conflict between the interface for selecting VMS messages and the interface for controlling highway advisory radio broadcasts.
An interface for controlling VMS was used as a starting point. This interface allowed the operator to see the current message posted on the VMS and provided mechanisms for clearing the message and for posting a new message.
The method for posting a new message was to select from a menu of messages. Each possible message was prepared ahead of time. An abridged form of the message appeared in the menu listing, and upon selection, the full text of the message appeared. For example, the full message might be "ACCIDENT ON I-20. THREE LANES BLOCKED. USE MLK AS ALTERNATE." The associated menu item might be "Accident, 3 lanes, alternate MLK." This approach for selecting a message for VMS was preferred by the evaluators.
When this approach was applied to the HAR broadcasts, however, it led to too many menu options. The preferred approach for the HAR broadcasts was a composition method. Composing a message began by selecting the condition to report - for example, accident or congestion. The operator continued by selecting the roadway, the direction of travel, the nearest one or two exits, the number of lanes blocked, and finally the alternate (if any) to suggest. Upon composing the message, the operator selected the HAR transmitter(s) from which the message should be broadcast.
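The composition method can be sketched as assembling a message from the operator's step-by-step selections. The function below is a hypothetical illustration (field names and phrasing are assumptions), producing text in the style of the VMS example given earlier:

```python
def compose_har_message(condition, roadway, direction, exits,
                        lanes_blocked=None, alternate=None):
    """Assemble a broadcast message from the operator's menu selections."""
    parts = [f"{condition} on {roadway} {direction} near {' and '.join(exits)}"]
    if lanes_blocked:
        parts.append(f"{lanes_blocked} lanes blocked")
    if alternate:
        parts.append(f"use {alternate} as alternate")
    return (". ".join(parts) + ".").upper()

msg = compose_har_message("accident", "I-20", "eastbound",
                          ["exit 12"], lanes_blocked=3, alternate="MLK")
print(msg)  # ACCIDENT ON I-20 EASTBOUND NEAR EXIT 12. 3 LANES BLOCKED. USE MLK AS ALTERNATE.
```

Because each field is chosen independently, the menu at each step stays short even though the space of complete messages is large.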
Although the evaluators agreed in preferring the menu approach for VMS and the composition approach for HAR, they also agreed that they did not like using a separate method for managing the two information outlets. For the combined system, they recommended using the composition approach for both HAR and VMS.
A second issue was the difficulty operators had in reviewing and understanding the response to an incident in terms of VMS and HAR messages. In one study, we simulated a support system that automatically selected and posted messages on VMS when the operator placed an incident report. Operators were informed of each action taken by the support system. We found that operators had difficulty detecting an over-response or an under-response to an incident when the individual messages were presented separately. That study was limited to VMS and specifically did not include HAR.
Our evaluators strongly preferred that the same level of automation used for VMS should also be used for HAR broadcasts. That is, the automatic posting of VMS messages should be accompanied by automatic broadcast of comparable messages on HAR. We further found that the presence of the HAR messages exacerbated the problem of detecting support system errors. The evaluators recommended that the total response (VMS and HAR) to an incident be summarized in a single dialog on the workstation monitor.
Another issue was the complexity of clearing multiple VMS and HAR messages after an incident was cleared. We found that operators needed a streamlined way of clearing a VMS and an indication on the asset map of whether a given VMS was cleared. Additional streamlining is needed for clearing a message from HAR because, in contrast to a VMS response, a HAR broadcast can be a group of messages about multiple incidents. The clearance of a given incident means that the message associated with that incident should be cleared from HAR, but other incidents may remain. Thus, to check a HAR broadcast to see that a given message has been removed, the operator needs more information than simply the cleared versus uncleared state of HAR.
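Because a HAR broadcast can bundle messages about several incidents, clearing one incident must remove only that incident's message. A minimal sketch, assuming each transmitter's broadcast is keyed by incident identifier (the data layout is an assumption for illustration):

```python
def clear_incident(har_broadcasts, incident_id):
    """Remove one incident's message from every transmitter; a transmitter
    stays on the air only while it still has messages to broadcast."""
    for messages in har_broadcasts.values():
        messages.pop(incident_id, None)   # no-op if not broadcasting it
    # Drop transmitters whose broadcast is now empty (fully cleared).
    return {t: m for t, m in har_broadcasts.items() if m}

broadcasts = {
    "HAR-3": {101: "ACCIDENT ON I-20...", 102: "CONGESTION ON I-75..."},
    "HAR-7": {101: "ACCIDENT ON I-20..."},
}
print(clear_incident(broadcasts, 101))
# {'HAR-3': {102: 'CONGESTION ON I-75...'}}  -- HAR-7 is fully cleared
```

A display built on this structure can show the operator which messages remain on each transmitter, rather than only a cleared/uncleared flag.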
These studies highlighted the importance of providing the operator with an integrated traffic management system rather than a suite of separate support systems. The operator interface should feature a consistent "look and feel" for all of the incorporated elements.
These studies yielded specific lessons about the algorithms used for automated incident detection systems and about the design of support systems for disseminating information to drivers and other users of real-time traffic data; the key lessons are described in the sections above.
Nazemeh Sobhi is a highway engineer in the Office of Safety Research and Development, Federal Highway Administration. Her expertise is in the human factors aspects of intelligent transportation systems. She received a bachelor's degree in computer science from Radford University in 1987 and a master's degree in transportation engineering from the Virginia Polytechnic Institute and State University in 1989. Currently, she is a doctoral candidate in civil engineering at the University of Maryland.
Michael Kelly is a principal research scientist and head of the Human Factors Branch at the Georgia Tech Research Institute. He is the principal investigator on the study to develop a handbook of human factors design guidelines for advanced transportation management centers based on simulator research and on lessons learned by existing centers. He received his doctorate in engineering psychology from The Johns Hopkins University in 1975.