This summary report is an archived publication and may contain dated technical, contact, and link information.
Publication Number: FHWA-HRT-13-037
Date: December 2012
This document is disseminated in the interest of information exchange under the sponsorship of the Department of Transportation. The United States Government assumes no liability for its contents or use thereof. This report does not constitute a standard, specification, or regulation.
The United States Government does not endorse products or manufacturers. Trade and manufacturers’ names appear in this report only because they are considered essential to the objective of the document.
Quality Assurance Statement
The Federal Highway Administration (FHWA) provides high-quality information to serve government, industry, and the public in a manner that promotes public understanding. Standards and policies are used to ensure and maximize the quality, objectivity, utility, and integrity of its information. FHWA periodically reviews quality issues and adjusts its programs and processes to ensure continuous quality improvement.
Technical Report Documentation Page
1. Report No.: FHWA-HRT-13-037
2. Government Accession No.:
3. Recipient's Catalog No.:
4. Title and Subtitle: Automated Video Feature Extraction Workshop Summary Report
5. Report Date: December 2012
6. Performing Organization Code:
8. Performing Organization Report No.:
9. Performing Organization Name and Address: Woodward Communications, Inc.
10. Work Unit No. (TRAIS):
11. Contract or Grant No.:
12. Sponsoring Agency Name and Address: Office of Safety Research and Development and Office of Corporate Research, Technology, and Innovation Management
13. Type of Report and Period Covered:
14. Sponsoring Agency Code: HRDS-2 and HRTM-30
15. Supplementary Notes: FHWA's Contracting Officer's Task Manager (COTM): Zachary Ellis, HRTM-30
16. Abstract: This report summarizes a 2-day workshop on automated video feature extraction. Discussion focused on the Naturalistic Driving Study, funded by the second Strategic Highway Research Program, and also involved the companion roadway inventory dataset. The specific objectives of the workshop were to begin a discussion on how Government, academia, and the private sector can cooperate to advance the state of the practice in the automated analysis of video data from naturalistic driving studies. A panel of expert speakers presented the state of knowledge in video feature extraction and demonstrated and described a range of analytical capabilities that could be automated. Following the presentations, the participants discussed what could be learned from the data, identified naturalistic data challenges, examined near- and long-term technical approaches, and reviewed organizational approaches for advancing the practice of automated feature extraction.
17. Key Words: Automated Video Feature Extraction, Naturalistic Driving Study, Video Analytics, Automated Analysis, Video Data, Real-Time Analysis, Computer Vision, Big Data, Data Sets, Driver Behavior, Human Factors, Driver Distraction
18. Distribution Statement: No restrictions. This document is available to the public through the National Technical Information Service, Springfield, VA 22161.
19. Security Classification (of this report):
20. Security Classification (of this page):
21. No. of Pages:
Form DOT F 1700.7 (Reproduction of completed page authorized)
On October 10–11, 2012, at the Turner-Fairbank Highway Research Center in McLean, VA, the Federal Highway Administration’s Office of Safety Research and Development and Exploratory Advanced Research Program convened a 2-day workshop on automated video feature extraction. The objective of the workshop was to begin to answer the question: how can Government, academia, and the private sector cooperate to advance the state of the practice in the automated analysis of data from naturalistic driving studies?
With new, smaller, and less obtrusive sensor technology, researchers are able, for the first time, to gather massive amounts of data about driving behavior. Naturalistic driving studies—including the one undertaken by the second Strategic Highway Research Program, the largest ever naturalistic driving study—provide detailed information about driver behavior, vehicle state, and the roadway, using video cameras and other types of sensors. Naturalistic driving data provide an opportunity to improve understanding of vehicle crashes, particularly by providing useful information on driver distraction and driver behavior leading up to a crash.
The workshop began with an introduction from U.S. Department of Transportation representatives and a brief background to the workshop. A panel of expert speakers then presented on the state of knowledge in video feature extraction and demonstrated and described a range of analytical capabilities that could be automated.
Virginia Tech Transportation Institute’s Jon Hankey provided video examples of the data collection system in action. Hankey also provided information on privacy issues and ways to improve data access. Participants were told about remote secure enclaves and a smaller driver study offering easier access. John Lee of the University of Wisconsin–Madison’s Department of Industrial and Systems Engineering explained the importance of putting driving into context and the unique and valuable role naturalistic driving data can play in that process. Lee also covered some of the challenges that come with big data before highlighting the importance of fully understanding driver distraction and avoiding disciplinary myopia.
Human–computer interaction was the focus of Margrit Betke, from Boston University’s Computer Science Department. Betke addressed some of the research challenges—including privacy issues—and then made several recommendations, including the possibility of creating a three-dimensional (3D) facial reconstruction to assist with data privacy. Qiang Ji of the Rensselaer Polytechnic Institute’s Department of Electrical, Computer, and Systems Engineering then explained his research into real-time driver-state monitoring and recognition. Video demonstrations showed face detection and tracking, and Ji highlighted the ability to characterize driver state using facial expression analysis.
Next, Yuanqing Lin of NEC Laboratories America’s Department of Media Analytics explained the concept of low-level sensing and high-level understanding, also highlighting the importance of 3D reconstruction. Finally, Mohan Trivedi of the Laboratory for Intelligent and Safe Automobiles (LISA) at the University of California at San Diego offered key remarks on computer vision, driving context, and adopting a holistic approach to understanding driving. Trivedi addressed the importance of focused research and provided an overview of LISA’s research into a complete driving context capture system.
The second part of the workshop included an opportunity for workshop participants—including safety regulators, driver behavior researchers, and human factors experts—to review some of the key points raised during part one before moving on to address data needs, identify challenges, and collaborate on potential solutions.
Throughout the workshop, contextual data were consistently discussed as essential to the behavioral analysis of video data. It was noted that the parties to the conversation have complementary needs: researchers in computer science, image processing, and machine learning need access to data and some degree of financial support to do their work, while researchers in highway safety and driver behavior need tools to process the existing and forthcoming data.
Several possible approaches were identified for further investigation, including the concept of “remote secure enclaves” and creating reduced datasets to bypass some of the existing privacy management issues. Generating more manageable datasets was a key discussion point and several methods were discussed to achieve this goal, including developing tools to help researchers identify data of interest from thousands of hours of footage.
Figure 1. Diagram. The data acquisition system.
Figure 2. Photo. Footage is placed into a single grid.
Figure 3. Photo. An intentionally blurred still image of the inside of the car.
Figure 4. Photo. An instrumented van.
Figure 5. Photo. The van produces a sequence of photographs for a photo log of its journey.
Figure 6. Photo. Light "blooming" can impede facial recognition.
Figure 7. Screen Capture. A facial recognition system from earlier research.
Figure 8. Chart. 3D trajectory analysis.
Figure 9. Diagram. System approach.
Figure 10. Photo. Facial feature detection automatically superimposes 28 points onto a face.
Figure 11. Photo. Infrared is used to detect cornea reflection.
On October 10-11, 2012, at the Turner-Fairbank Highway Research Center (TFHRC) in McLean, VA, the Federal Highway Administration's (FHWA) Office of Safety Research and Development (R&D) and Exploratory Advanced Research (EAR) Program convened a 2-day workshop on automated video feature extraction.
Workshop discussion focused on the Naturalistic Driving Study (NDS), funded by the second Strategic Highway Research Program (SHRP2), and also involved the companion roadway inventory database. The NDS is the largest study of its kind ever undertaken; once completed, it will give researchers access to almost 3,700 driver-years of data and a database containing an estimated 2.5 million trip files.
Datasets of this size make conventional manual coding techniques infeasible; feature extraction is designed to reduce the resources required to analyze a large dataset accurately. Therefore, to maximize the use of NDS data, there is an urgent need to develop automated feature extraction that efficiently reduces the data to more manageable elements of interest. It is essential to identify short-term solutions and incremental improvements to help analyze data more efficiently for application in the safety community and beyond.
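The data-reduction idea described above can be illustrated with a minimal sketch, not drawn from the workshop materials: assuming a sampled longitudinal acceleration trace from an instrumented vehicle, a simple threshold pass can flag candidate hard-braking windows so that analysts review only those video segments rather than thousands of hours of footage. The threshold and padding values here are hypothetical.

```python
# Illustrative sketch only: reduce a long sensor trace to candidate
# "events of interest" (hard braking), as one crude form of automated
# feature extraction. Threshold and window sizes are hypothetical.

def flag_candidate_events(accel_g, threshold_g=-0.45, pad=5):
    """Return (start, end) sample windows around points where longitudinal
    acceleration (in g) falls below threshold_g, merging overlapping windows."""
    windows = []
    for i, a in enumerate(accel_g):
        if a <= threshold_g:
            start, end = max(0, i - pad), min(len(accel_g), i + pad + 1)
            if windows and start <= windows[-1][1]:
                # Overlaps the previous window: extend it instead of adding a new one.
                windows[-1] = (windows[-1][0], end)
            else:
                windows.append((start, end))
    return windows

# Example: a mostly uneventful trace with one hard-braking episode.
trace = [0.0] * 20 + [-0.5, -0.6, -0.5] + [0.0] * 20
events = flag_candidate_events(trace)
print(events)  # one merged window around samples 20-22
```

In practice, a reviewer would then pull only the video corresponding to the flagged windows, which is the kind of incremental, immediately deployable data reduction the workshop participants discussed.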
The specific objectives of the workshop were to begin a discussion on how Government, academia, and the private sector can cooperate to advance the state of the practice in the automated analysis of video data from naturalistic driving studies. A panel of expert speakers presented the state of knowledge in video feature extraction and demonstrated and described a range of real-time analytical capabilities.
Following the presentations, the participants—including safety regulators, driver behavior researchers, and human factors experts—discussed what could be learned from the data, identified naturalistic data challenges, examined near- and long-term technical approaches, and reviewed organizational approaches for advancing the practice of automated feature extraction.
The Government's goals are both short and long term. In the short term, the Government wants to begin to extract value from the NDS data, and welcomes immediately deployable and partial solutions toward that goal. In the long term, the Government wants to ensure that the data being collected will improve transportation safety to the maximum possible extent.
Topics: research, exploratory advanced research
Keywords: research, exploratory advanced research, Automated Video Feature Extraction, Naturalistic Driving Study, Video Analytics, Automated Analysis, Video Data, Real-Time Analysis, Computer Vision, Big Data, Data Sets, Driver Behavior, Human Factors, Driver Distraction
TRT Terms: Research, Information organization, Activities leading to information generation, Research projects