This report is an archived publication and may contain dated technical, contact, and link information.
Publication Number: FHWA-HRT-12-027
Date: May 2012
The project was divided into five tasks, which are described in the sections that follow.
Task A: Form Technical Working Panel and Select Databases
The project team began forming a working panel of at least four SHA representatives. To achieve this goal, eight agencies were nominated: Kansas, Arizona, New Jersey, Texas, Washington, Louisiana, Pennsylvania, and California. The candidate agencies were contacted to determine their interest. The team was familiar with QA activities in Kansas, Arizona, New Jersey, Pennsylvania, and California.
After contacting most of the agencies and deliberating further, the team narrowed the selection to SHAs in Indiana, Kansas, Georgia, and Arizona. Although Indiana and Georgia were not among the original nominees, they were added during these deliberations. These finalists were selected based on their cooperation with the study, robust databases that could be mined, extensive QA programs, and experience with warranties for construction projects. Letters were sent to each contact in early August 2008.
The first panel meeting took place on December 17, 2008, in Indianapolis, IN. The Indiana Department of Transportation (INDOT) agreed to host the meeting attended by the project team and representatives of the Georgia Department of Transportation, Kansas Department of Transportation, and INDOT. Representatives from the Arizona Department of Transportation could not attend the meeting and later informed the team that the agency could not continue in the project. The project continued with three agencies under the agreement that four specifications (two HMA and two PCC) would still be analyzed.
Because of the problems encountered in obtaining the necessary data from SHAs, the second and final panel meeting, which had been scheduled for May 2009, was cancelled so that the funds could instead be applied to the research itself.
One paramount criterion for candidate agencies was the status of the agency’s database and whether it contained the items critical to a good QA program.
The team determined that the SHAs remaining in the project met these criteria. However, as explained later in this report, the team was overly optimistic regarding access to the necessary information in the databases to perform the intended analyses.
Task B: Revise Procedure
The procedure in TRR 1813, which uses the average and standard deviation, was considered one way to identify an effective specification. This procedure is included in appendix A.
However, other measures explored in recent research, including the performance modeling and life-cycle cost analysis capability provided in the SPECRISK software, were also examined to assess effectiveness. One of the most difficult aspects of this study was to decide how to integrate the quality measures into the analysis procedure. Just as TRR 1813 used computer simulation to convert average absolute deviation (AAD) to AQL, simulation programs were necessary to conduct the required analysis in this study. Fortunately, the team had extensive experience in this area and was able to accomplish this task.
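The kind of simulation TRR 1813 used to relate AAD to a PWL-based AQL can be sketched directly. The example below is illustrative only: the specification limits and population parameters are hypothetical choices, not values from TRR 1813, and the mapping is estimated for a normal population centered between two-sided limits.

```python
import random

def simulate_aad_vs_pwl(mu, sigma, lsl, usl, trials=50000, seed=5):
    """For a normal population, estimate both the average absolute
    deviation (AAD) from the target (taken as the midpoint of the
    limits) and the percent within limits (PWL), illustrating the
    AAD-to-PWL mapping that can be obtained by simulation.
    The limits here are hypothetical."""
    rng = random.Random(seed)
    target = (lsl + usl) / 2.0
    within = 0
    abs_dev = 0.0
    for _ in range(trials):
        x = rng.gauss(mu, sigma)
        abs_dev += abs(x - target)
        within += lsl <= x <= usl
    return abs_dev / trials, 100.0 * within / trials
```

For a standard normal population centered between limits at plus and minus two standard deviations, the simulated AAD converges toward sigma times the square root of 2/pi (about 0.80), while the PWL converges toward about 95.4 percent; sweeping the population parameters traces out the full mapping.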
The SPECRISK analysis software requires both AQL and RQL to be defined in terms of percent within limits (PWL) (or percent defective (PD)). Determining AQL that is not explicitly defined was straightforward in that it was taken to be that level of PWL (or PD) that produced 100 percent payment when entered into the pay equation. However, determining RQL that is not explicitly defined or is defined but not in terms that can be used by SPECRISK was generally more difficult and required certain assumptions, as discussed under each specification analyzed.
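The convention described above, taking the AQL to be the PWL that produces 100 percent payment, is simple to apply when the pay equation is linear in PWL. The snippet below uses a hypothetical pay equation of a common textbook form, PF = 55 + 0.5 * PWL, not one taken from the specifications studied:

```python
def pay_factor(pwl):
    """Hypothetical linear pay equation (a common textbook form,
    not taken from any specification in this study)."""
    return 55.0 + 0.5 * pwl

def implied_aql(target_pf=100.0):
    """Invert the pay equation: the PWL at which payment equals the
    target pay factor (100 percent), i.e., the implied AQL."""
    return (target_pf - 55.0) / 0.5

# For this equation the implied AQL is PWL = 90,
# since pay_factor(90) = 55 + 45 = 100.
```

For nonlinear pay equations the same idea applies, with the inversion done numerically rather than algebraically.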
As demonstrated in TRR 1813, sample size is a critical issue when assessing effectiveness. Many agencies accept product on as few as n = 3 test results, and some, as noted in Transportation Research Board Synthesis 346, use n = 1. Although the TRR 1813 analysis did not indicate as large a difference with sample size as expected, this factor must be taken into account when assessing effectiveness. This consideration will most likely show up when an operating characteristic (OC) curve or an expected payment (EP) curve analysis is done.
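The sample-size effect can be seen by simulating a single point of an OC curve. Everything in the sketch below is an assumption: a one-sided lower specification limit, a simplified normal-theory PWL estimator (not the exact PWL tables used in practice), and a hypothetical rejection threshold of PWL = 60.

```python
import math
import random

def pwl_estimate(sample, lsl):
    """Normal-theory estimate of percent within limits for a one-sided
    lower specification limit (an illustrative estimator only)."""
    n = len(sample)
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    if s == 0.0:
        return 100.0 if xbar >= lsl else 0.0
    return 100.0 * 0.5 * (1.0 + math.erf((xbar - lsl) / s / math.sqrt(2.0)))

def acceptance_probability(n, lsl=90.0, sigma=1.0, reject_below=60.0,
                           trials=20000, seed=1):
    """Monte Carlo point on an OC curve: the chance that an AQL lot
    (true PWL of about 90) is NOT rejected when judged on n tests."""
    rng = random.Random(seed)
    mu = lsl + 1.2816 * sigma  # 90th-percentile z-score puts true PWL near 90
    accepted = 0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        if pwl_estimate(sample, lsl) >= reject_below:
            accepted += 1
    return accepted / trials
```

Comparing acceptance_probability(3) with acceptance_probability(5) shows the seller's risk at the AQL shrinking as the sample size grows: with more tests per lot, the PWL estimate is less variable, so an AQL lot is less likely to be wrongly rejected.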
The largest problem encountered in trying to use the TRR 1813 procedure in the present study was the inability to obtain the necessary data. Because these data were not available, other procedures were used.
Task C: Verify and Apply Revised Procedure
The research team first attempted to validate the improved procedure using the same FDOT construction quality database that had been analyzed with the TRR 1813 procedure. Two series of questionnaires were sent to the selected SHAs to obtain the information needed for the analysis, but the necessary data were not available. Personnel had changed positions, and the workload of the participants had increased such that they could not devote the necessary time to retrieve the data. An alternative procedure was necessary.
Revised Scope for Task C
Because of the aforementioned problems obtaining the data necessary to conduct task C, a revised scope was developed. As an alternative approach, the team decided to analyze agencies’ QA programs using the precise details of the acceptance procedures contained in the construction specifications instead of using actual QA data. The detailed work plan for the project was modified accordingly, specifically the scope of subtask C.1 in the prospectus. The number of specifications to be analyzed was not changed (two HMA pavement and two PCC pavement specifications from the three agencies in the study). The revised scope included the use of software tools such as SPECRISK, Prob.O.Prof, and computer simulation, where appropriate.
Task D: Prepare Implementation Recommendations
With the change in scope for task C, task D was modified. The summary and conclusions sections contain suggestions for the agencies to consider for possible revisions to their specifications. These recommendations can also be used by other agencies as a means to evaluate their specifications.
In summary, four SHA construction specifications were selected for analysis from three SHAs, two for HMA pavements and two for PCC pavements. The study shows that either SPECRISK or computer simulation can be used to analyze the statistical risks of most, if not all, specifications. Both HMA pavement specifications and one of the PCC pavement specifications were amenable to analysis by SPECRISK because they were based on PWL or PD as the statistical quality measure. The remaining PCC pavement specification was based on averages and had to be analyzed by computer simulation. Prob.O.Prof was anticipated to be useful in the analysis. However, it was found to require data that were not available and thus could not be used.
Specifications 1 and 2 were for HMA pavements, and specifications 3 and 4 were for PCC pavements. Specifications 1 through 3 were analyzed using SPECRISK, and specification 4 was analyzed using computer simulation.
The SPECRISK analysis accomplished two goals. First, it demonstrated in a general way how much can be learned about an acceptance procedure and pay schedule using just the "Analyze Selected" feature of the software. This method offers considerable time savings for more complex acceptance procedures with several acceptance quality characteristics and demonstrates the ease with which some of these analyses can be done on home computers with modest capacity and computing speed. Second, the analysis revealed interesting findings about a moderately complex acceptance specification based on four acceptance quality characteristics. (A multicharacteristic analysis such as this would not have been feasible before SPECRISK was developed.) It also provided information to both the SHA and contractors concerning the benefits of producing a higher-than-AQL product and the consequences of producing a lesser quality product.
The SPECRISK analyses required certain assumptions and transformations in all cases. The need for the assumptions and transformations is an indication that a risk analysis probably had not been completed on these specifications. In many existing specifications, the AQL and RQL were either not explicitly defined or were not defined in the conventional way (level of PWL or PD) such that they could be analyzed by SPECRISK. In cases such as these, reasonable assumptions were made and documented so that the analyses could proceed.
The team determined that the best way to analyze the acceptance plan in specification 4 was by using computer simulation. This specification differed from others in the use of averages for the evaluation of some acceptance quality characteristics, the range rather than the standard deviation, differing AQL populations for different sample sizes, and the infinite combination of population means and standard deviations that can be considered AQL populations. In this case, Minitab® statistical software was used to generate samples of varying sizes from different populations.
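Minitab was the tool used in the study, but the same kind of experiment can be sketched in a few lines of Python. The acceptance limits below are hypothetical placeholders, not values from specification 4; the sketch only illustrates simulating an averages-and-range acceptance plan across sample sizes and populations.

```python
import random

def lot_statistics(n, mu, sigma, rng):
    """Draw n simulated test results and return (average, range),
    the statistics used by an averages-based acceptance plan."""
    sample = [rng.gauss(mu, sigma) for _ in range(n)]
    return sum(sample) / n, max(sample) - min(sample)

def acceptance_rate(n, mu, sigma, mean_limits, max_range,
                    trials=10000, seed=7):
    """Fraction of simulated lots whose average falls inside
    mean_limits and whose range does not exceed max_range.
    All limits here are hypothetical placeholders."""
    rng = random.Random(seed)
    lo, hi = mean_limits
    passed = 0
    for _ in range(trials):
        avg, sample_range = lot_statistics(n, mu, sigma, rng)
        if lo <= avg <= hi and sample_range <= max_range:
            passed += 1
    return passed / trials
```

Sweeping mu, sigma, and n in such a loop is one way to explore the infinite family of populations that an averages-based plan treats as acceptable, which is the difficulty the report notes for specification 4.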
EP and OC curves that show the risks that exist were developed for all four specifications.
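An EP-curve point is simply the long-run average pay factor earned by lots of a given true quality. The sketch below is built entirely on stated assumptions: a normal population, a simplified normal-theory PWL estimator, and a hypothetical linear pay equation PF = 55 + 0.5 * PWL_est capped at 105 percent, none of which come from the specifications studied.

```python
import math
import random

def z_score(p):
    """Standard-normal quantile by bisection (adequate for a sketch)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def pwl_estimate(sample, lsl):
    """Normal-theory PWL estimate from the sample mean and std deviation."""
    n = len(sample)
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    if s == 0.0:
        return 100.0 if xbar >= lsl else 0.0
    return 100.0 * 0.5 * (1.0 + math.erf((xbar - lsl) / s / math.sqrt(2.0)))

def expected_payment(true_pwl, n, lsl=90.0, sigma=1.0, trials=20000, seed=3):
    """One EP-curve point: mean pay factor for lots of quality true_pwl
    judged on n tests, under the hypothetical pay equation
    PF = 55 + 0.5 * PWL_est, capped at 105 percent."""
    rng = random.Random(seed)
    mu = lsl + z_score(true_pwl / 100.0) * sigma  # population at true_pwl
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        total += min(105.0, 55.0 + 0.5 * pwl_estimate(sample, lsl))
    return total / trials
```

Evaluating expected_payment over a grid of true_pwl values traces out the EP curve; a well-designed pay schedule pays close to 100 percent at the AQL and progressively less as quality drops.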
Task E: Prepare Documentation
Draft and Final Guidance Document
A draft guidance document was submitted in accordance with task E. After review by FHWA, changes were made, and a final document was submitted for approval.
Draft and Final White Paper
Once the final guidance document has been accepted by FHWA, excerpts will be used to prepare a white paper for review and approval.
Topics: research, infrastructure
Keywords: Quality assurance, Percent within limits, SPECRISK
TRT Terms: research, strategic planning