This fact sheet is an archived publication and may contain dated technical, contact, and link information.
Publication Number: FHWA-HRT-10-022
The Exploratory Advanced Research Program Fact Sheet: Real-Time Pedestrian Detection: Layered Object Recognition System for Pedestrian Collision Sensing
Exploratory Advanced Research . . . Next Generation Transportation Solutions
The Current State of Technology
At present, pedestrian detection can be achieved using fixed cameras pointed at predetermined zones or through vehicle-mounted camera systems. Both approaches can detect pedestrians, and vehicle-mounted systems can also detect additional objects such as other vehicles, buildings, and zebra crossings. This EAR project moves beyond current systems by integrating two- and three-dimensional sensing to provide cues for likely pedestrian locations.
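The fact sheet does not specify how the 2-D and 3-D cues are combined, but the idea can be sketched as a two-step pipeline: 3-D (depth) cues first propose candidate regions, and only those candidates are passed to a 2-D appearance check. All function names, thresholds, and data formats below are invented for illustration.

```python
# Hypothetical sketch of fusing 2-D and 3-D cues. A 1-D "depth map"
# (one range value per image column, in meters) stands in for real
# stereo data; the appearance classifier is a deliberately trivial stub.

def propose_candidates(depth_map, max_range_m=35.0):
    """3-D cue: keep (column, depth) cells with a plausible range."""
    return [(x, d) for x, d in enumerate(depth_map)
            if 0.0 < d <= max_range_m]

def looks_like_pedestrian(candidate):
    """Stub 2-D appearance check; here, anything nearer than 20 m passes."""
    _, depth = candidate
    return depth < 20.0

def detect_pedestrians(depth_map):
    # Step 1: 3-D cues narrow the search space.
    candidates = propose_candidates(depth_map)
    # Step 2: 2-D appearance check confirms or rejects each candidate.
    return [c for c in candidates if looks_like_pedestrian(c)]

print(detect_pedestrians([12.5, 40.0, 18.0, 0.0]))  # [(0, 12.5), (2, 18.0)]
```

The benefit of this ordering is that the cheap geometric test runs on every cell, while the more expensive appearance test runs only on survivors.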
Using multi-object classification, the system distinguishes seven classes of objects: ground, pedestrians, vehicles, buildings, bushes, trees, and other tall vertical structures such as light poles. The layered pedestrian-detection algorithm can then recognize objects that are not pedestrians and eliminate them from further analysis, gaining real-time performance in hazard identification.
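The elimination layer described above can be sketched in a few lines: every scene object receives one of the seven coarse class labels, and anything that is clearly not a pedestrian is dropped before any expensive per-pedestrian analysis runs. The class names mirror the text; the classifier itself is a stand-in, since the fact sheet does not describe its interface.

```python
# Minimal sketch of the layered elimination idea. The real system uses
# multi-cue classifiers; here coarse_classify simply trusts a
# precomputed label attached to each object.

CLASSES = {"ground", "pedestrian", "vehicle", "building",
           "bush", "tree", "tall_vertical_structure"}

def coarse_classify(obj):
    """Stand-in for the multi-cue classifier."""
    label = obj["label"]
    assert label in CLASSES, f"unknown class: {label}"
    return label

def layered_filter(scene_objects):
    """Layer 1: eliminate non-pedestrian classes from further analysis."""
    return [o for o in scene_objects if coarse_classify(o) == "pedestrian"]

scene = [{"label": "ground"}, {"label": "pedestrian"},
         {"label": "vehicle"}, {"label": "pedestrian"}]
print(len(layered_filter(scene)))  # 2
```

Only the two pedestrian objects survive to the later, costlier layers, which is where the real-time speedup comes from.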
Layered–Processing Framework: Technical Approach
The goal of this EAR project is to produce a commercially adoptable, rapid pedestrian-detection system with high accuracy and few false alerts. The system has already been tested and achieves a positive detection rate of 90 percent at distances of up to 35 m (114 ft), at vehicle speeds of up to 64 km/h (40 mi/h), under daylight and twilight conditions. It does this using a layered framework, multi-cue multi-classifiers, and contextual scene understanding. The layered processing framework detects pedestrians and rapidly and accurately isolates them from other road objects by processing the scene in steps, each of which increases the certainty of detection and recognition. The project team aims to further improve the system's speed and accuracy and to issue different levels of warning depending on the estimated time to collision.
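Graded warnings keyed to estimated time to collision (TTC) can be sketched as follows. The TTC formula (range divided by closing speed) is standard, but the warning thresholds below are invented for illustration; the fact sheet says only that the warning level depends on the estimated time to collision.

```python
# Hedged sketch of TTC-based warning levels. Threshold values are
# assumptions, not the project's actual calibration.

def time_to_collision(range_m, closing_speed_mps):
    """TTC in seconds; None when the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

def warning_level(ttc_s):
    if ttc_s is None or ttc_s > 4.0:
        return "none"
    if ttc_s > 2.0:
        return "caution"
    return "urgent"

# At 64 km/h (about 17.8 m/s), a pedestrian 35 m ahead gives a TTC
# just under 2 s, which lands in the most urgent band.
ttc = time_to_collision(35.0, 64 / 3.6)
print(warning_level(ttc))  # urgent
```

This illustrates why the tested 35 m detection range matters: at the tested maximum speed, the system would have roughly two seconds in which to warn the driver.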
One of the major challenges of this project has been object recognition and context. The system has been developed to recognize the seven predetermined classes of roadway objects simultaneously, in real time at speeds of up to 64 km/h (40 mi/h), and then infer context from the scene. Although only partial and variable-pose views of people and vehicles can be obtained, the system has been successfully developed to generalize and handle multiple intra-class variations with limited examples and without explicit models.
Over the project period, a number of technical challenges have been overcome and objectives met.
"The success of this EAR project offers many benefits to roadway users," says Wei Zhang at FHWA. "It has the potential to significantly reduce accidents and casualties between motor vehicles and other roadway users, such as bicycles and motorcycles, eventually at a price and performance point that crucially enables wide and mainstream adoption by vehicle manufacturers." Zhang continues, "In addition, the high level of detection being developed could eventually enable more than just visual warning systems to be activated for pedestrian protection but could even be tied into activating passive safety systems. In fact, as decision quality increases, less expensive, single-use safety devices could be used with less risk of inadvertent deployment and subsequent additional cost to customers."
For more information on this EAR Program project, contact Wei Zhang at FHWA, 202-493-3317 (email: email@example.com).
Topics: research, exploratory advanced research, safety
Keywords: research, exploratory advanced research program, pedestrian detection, collision sensing