U.S. Department of Transportation
Federal Highway Administration
1200 New Jersey Avenue, SE
Washington, DC 20590
202-366-4000
Federal Highway Administration Research and Technology
Coordinating, Developing, and Delivering Highway Transportation Innovations
This report is an archived publication and may contain dated technical, contact, and link information.

Publication Number: FHWA-RD-01-143
Date: October 2003
The manual review discussed in the preceding chapters proved to be extremely labor-intensive. At roughly one-half hour per graph, complete processing of the 10,000-plus graphs equates to roughly 5,000 person-hours. To incorporate new distress survey data as they become available, automation of this process was imperative. Accordingly, this chapter describes the software developed to automate the preliminary data QC review process.
A program was written in Visual Basic to perform this review; it produces two files. One file contains the data that meet the QC criteria and are automatically included in the LTPP database and the consolidated distress tables. The second file contains a listing of the discrepant data sets and likely explanations of the causes of the discrepancies.
Currently, there are three programs, one for each surface type; the user's manual for these programs is provided in appendix B. Each program performs a linear regression of the data and then compares the actual data points to the predicted values. If the data are not within a specified tolerance, the data for that section undergo a series of additional checks that assess the data for logical consistency (e.g., that distress increases with time and that values remain within threshold limits). If the data fail all of these checks, the distress is recorded as discrepant.
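The screening logic described above can be sketched as follows. The original programs were written in Visual Basic; this is a hypothetical Python illustration, and the tolerance and threshold values here are assumptions, not the report's actual settings.

```python
# Hypothetical sketch of the regression-plus-logic screening step:
# fit a trend line, check each survey against it, and fall back to
# logic checks (nondecreasing distress, threshold) if tolerance fails.

def screen_time_series(years, values, tolerance=2.0, threshold=0.0):
    """Return (passed, reason) for one section's distress time series."""
    n = len(years)
    # Ordinary least-squares fit: value = slope * year + intercept.
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(years, values)) / sxx
    intercept = mean_y - slope * mean_x

    # Compare each observation to its predicted value.
    residuals = [abs(y - (slope * x + intercept))
                 for x, y in zip(years, values)]
    if all(r <= tolerance for r in residuals):
        return True, "within tolerance"

    # Secondary logic checks: distress should not decrease over time,
    # and values should stay at or above the threshold.
    nondecreasing = all(b >= a for a, b in zip(values, values[1:]))
    above_threshold = all(v >= threshold for v in values)
    if nondecreasing and above_threshold:
        return True, "passed logic checks"
    return False, "discrepant"
```

A series that tracks its trend line passes immediately; one that strays from the line but still increases monotonically passes the logic checks; a decreasing, poorly fitting series is flagged as discrepant.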
The software writes two data files for each surface type, PASS*.DAT and NOTPASS*.DAT, where * indicates the surface type. Data passing the software checks are written to PASS*.DAT. The section, distress type, and an error message identifying the potential cause of the discrepancy are written to the NOTPASS*.DAT file.
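The two-file output convention can be illustrated with a short sketch (hypothetical Python; the actual record layout of the .DAT files is not specified in this chapter, so the comma-separated fields below are assumptions).

```python
# Hypothetical sketch of the PASS*.DAT / NOTPASS*.DAT output split,
# where the wildcard is the surface type (e.g., HMA). Field layout
# is illustrative only.

def write_results(surface_type, passed, failed):
    """passed: list of data rows; failed: list of
    (section, distress, error_message) tuples."""
    with open(f"PASS{surface_type}.DAT", "w") as f:
        for row in passed:
            f.write(",".join(str(v) for v in row) + "\n")
    with open(f"NOTPASS{surface_type}.DAT", "w") as f:
        for section, distress, message in failed:
            f.write(f"{section},{distress},{message}\n")
```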
The software will allow users to examine distress data from the database on a regular basis. It has been recommended that the software be used to provide a closer review of the distress data and to make corrections as needed to the discrepancies noted. However, it is probable that some of the discrepancies will not be resolvable.
A subroutine is included that identifies and excludes outliers from the time series trend for a given distress. If a discrepant survey is identified that cannot be resolved, it may be excluded from further review and from the consolidated data set. As new survey data become available, surveys previously excluded may be reevaluated and included in the consolidated data set if appropriate.
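One common way to implement such an outlier-exclusion subroutine is to refit the trend line repeatedly, dropping the worst-fitting survey until the remaining points agree with the line. The sketch below is a hypothetical Python illustration of that approach; the report does not specify the actual algorithm or tolerance used.

```python
# Hypothetical outlier-exclusion sketch: iteratively refit the trend
# line and drop the survey with the largest residual until every
# remaining survey lies within `tolerance` of the line.

def exclude_outliers(years, values, tolerance=5.0):
    """Return indices of the surveys kept in the time series."""
    kept = list(range(len(years)))
    while len(kept) > 2:
        xs = [years[i] for i in kept]
        ys = [values[i] for i in kept]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        intercept = my - slope * mx
        res = [abs(values[i] - (slope * years[i] + intercept))
               for i in kept]
        worst = max(range(len(kept)), key=res.__getitem__)
        if res[worst] <= tolerance:
            break  # all remaining surveys fit the trend
        kept.pop(worst)
    return kept
```

Because excluded surveys are only removed from the working index list, they remain available for reevaluation when new survey data arrive, consistent with the reinclusion policy described above.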
A comparison was made between the results of the manual review and the results of the automated review. In general, the automated and manual reviews noted the same discrepancies, although the causes noted were not always identical. Since the cause was merely an indication of where the data review should begin, the difference in results between the automated and the manual reviews is not considered significant.
Appendices C, D, and E provide the consolidated data for the HMA, JC, and CRC-surfaced sections, respectively. Table 17 contains the number and percentage of data elements that were included in the consolidated data set. The automated review placed fewer surveys in the consolidated data set for the reasons discussed below.
In the manual reviews, an attempt was made to identify the individual survey that caused the discrepancy. No such examination was attempted during the automated reviews, because identifying the specific survey causing the discrepancy in an automated fashion was not feasible. If a single survey failed the initial check against the regression line, the whole data set was excluded from the consolidated data set. This decision was made to reduce the potential for erroneously including discrepant data, rather than attempting to identify the individual survey at fault. The excluded data should still be thoroughly examined to determine which survey is causing the discrepancy and why; once errors are identified and resolved, the data set will pass the review and be included in the consolidated data set. Appendices F, G, and H contain the lists of discrepancies found by the software for the HMA, JC, and CRC-surfaced sections, respectively.
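The all-or-nothing exclusion rule can be sketched as follows (hypothetical Python; `survey_passes` stands in for the per-survey regression check and is an assumed helper, not part of the actual software).

```python
# Hypothetical sketch of the all-or-nothing rule: a section enters the
# consolidated data set only if every survey in its time series passes
# the check; otherwise the whole section is listed as discrepant.

def consolidate(sections, survey_passes):
    """sections: {section_id: [survey, ...]};
    survey_passes: predicate applied to each survey (assumed helper)."""
    consolidated, discrepant = {}, []
    for section_id, surveys in sections.items():
        if all(survey_passes(s) for s in surveys):
            consolidated[section_id] = surveys
        else:
            discrepant.append(section_id)
    return consolidated, discrepant
```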
Table 17. Number and percentage of data elements in the consolidated data sets

**HMA**

| Distress | Number | Percentage |
|---|---|---|
| Fatigue | 3,148 | 59 |
| Block Cracking | 4,489 | 84 |
| Longitudinal, WP | 2,509 | 47 |
| Longitudinal, NWP | 1,880 | 35 |
| Transverse, No. | 2,154 | 40 |
| Transverse, Length | 2,089 | 39 |
| Patch, No. | 4,548 | 85 |
| Patch, Area | 4,492 | 84 |

**JC**

| Distress | Number | Percentage |
|---|---|---|
| Corner Breaks | 1,493 | 90 |
| Longitudinal | 1,375 | 83 |
| Transverse, No. | 1,163 | 70 |
| Transverse, Length | 1,150 | 70 |
| Rigid Patch, No. | 1,530 | 93 |
| Rigid Patch, Area | 1,503 | 91 |
| Flexible Patch, No. | 1,490 | 90 |
| Flexible Patch, Area | 1,492 | 90 |

**CRC**

| Distress | Number | Percentage |
|---|---|---|
| Longitudinal | 307 | 75 |
| Transverse, No. | 72 | 18 |
| Transverse, Length | 69 | 17 |
| Rigid Patch, No. | 403 | 99 |
| Rigid Patch, Area | 403 | 99 |
| Flexible Patch, No. | 398 | 97 |
| Flexible Patch, Area | 393 | 96 |
| Punchouts | 286 | 70 |