United States Department of Transportation - Federal Highway Administration


Quality Assurance Stewardship Review Summary Report for Fiscal Years 2003 and 2004

Background

A revision of FHWA's sampling and testing regulations, titled "Quality Assurance Procedures for Construction," was published on June 29, 1995, as Title 23, Code of Federal Regulations, Part 637 (23 CFR 637). The regulations require each State agency to have in place an approved Quality Assurance (QA) Program for materials used in Federal-aid highway construction projects. If certain checks and balances are in place, the regulations allow flexibility in sampling and testing, including the use of contractor test results in the overall Agency acceptance decision. In addition, consultants may be used to perform Dispute Resolution or Independent Assurance (IA) if their laboratories are AASHTO accredited. The States may also use a system approach to IA instead of establishing frequencies based on individual project quantities.

The regulations also added two requirements: (1) each State agency's central laboratory had to become accredited by the AASHTO Accreditation Program by June 30, 1997, and (2) all testing personnel and laboratories had to be qualified using State procedures by June 29, 2000.

This review is part of the Federal Highway Administration's overall stewardship activities for State agency QA Programs.

Scope

The objective of this activity is to review the State agencies' QA Program practices and procedures, and to ascertain the status of the States' implementation of the 23 CFR 637 QA regulation.

The reviews examined the entire QA Program in each State. Before the FY 2004 reviews began, some concern was expressed over the use of contractor-supplied test results in the acceptance decision. Thirty-three States currently allow the use of contractor testing in the acceptance decision. Because of the number of States using contractor test results, and the concerns over the implementation of that provision, the FY 2004 reviews emphasized the use of contractor-supplied data in the Agency acceptance decision.

The assessments were a joint effort involving the State agency and FHWA Headquarters, Resource Center, and Division personnel. Material practices involving the regulation were examined at the State Headquarters, Region/District, and construction project levels.

Four stewardship reviews were completed in FY 2003: Maine, Missouri, Colorado, and Oklahoma. Three of those States (Missouri, Colorado, and Oklahoma) use contractor test results in the acceptance decision. Four stewardship reviews were completed in FY 2004: California, Georgia, North Carolina, and New York. All of the States reviewed in FY 2004 use contractor test results in the acceptance decision.

Assessment Procedures

The stewardship reviews included (1) interviews with State agency Headquarters, Region/District and field office personnel and FHWA personnel, (2) review of State agency implementation strategies including policy and procedure documents and office records where applicable, (3) visits to construction projects to assess field practices as appropriate, and (4) identification of best practices.

Entrance conferences were held, as appropriate, with top FHWA Division and State agency personnel to explain the intent and process of the assessment. Closeout meetings were held with the Division and State agency offices to share the information obtained from the assessment.

Organization of Report

This is a "state of the practice" report covering the reviews completed during FY 2003 and FY 2004. The remainder of the report covers the positive findings and opportunities for improvement identified during the reviews, as well as additional resources that are available.

Findings


Positive Findings

The positive findings are discussed in two broad categories: general items that apply across a significant number of States, and specific items that can be attributed to one or two States. Only significant positive findings that can be used by others are reported.

  1. General

    In most States, except as noted below, the Independent Assurance (IA) programs are well designed and understood. The programs are being implemented properly and the programs are yielding the desired results. IA is a program of split sampling and testing or reference sample testing to help ensure that the testing is being performed correctly on properly calibrated equipment.

    In all States the Technician Qualification programs for project-produced materials have been understood, designed, and implemented properly. In most States, except as noted below, the Laboratory Qualification Programs have been designed well and implemented properly.

  2. Specific

    1. Electronic Materials Management Systems. States are making progress in the electronic management of materials data. Several States are using the AASHTO SiteManager software, while a couple of States have either an in-house or a third-party developed system. The creation of databases has allowed the States to examine their specification limits more easily and will ultimately allow analysis of the data to create performance-related specifications.

    2. Sample Control System. One State allows contractors to transport asphalt density cores and asphalt box samples. However, it uses security tape on the box samples that indicates tampering if removed, and the cores are shipped in plastic totes with numbered security tags to prevent tampering.

    3. Joint biannual reviews of the IA program. One State and its FHWA Division Office perform a biannual review of the IA program. The review covers all aspects of the IA program, including implementation at the project and regional office levels.

    4. Materials reference sample program. A couple of States have developed internal materials reference sample programs to verify the qualification of laboratories and/or technicians.

    5. Participation in and use of the National Transportation Product Evaluation Program (NTPEP). States are moving toward further use of the NTPEP program. In particular, States are requiring in their specifications that materials be tested by NTPEP before the material is considered for approval. The NTPEP is an AASHTO program that tests select manufactured materials. The results of the program can be used as part of a State's approved products list program. The NTPEP program can result in increased assurance of product quality.

    6. Meetings with field materials personnel. Most States hold monthly or quarterly meetings with district or regional materials engineers to discuss materials testing issues. This is a good forum for ensuring consistency of interpretation of specifications, test procedures and policy. These meetings also ensure that problems with specifications are identified in a timely manner.

    7. Multiple layers of materials review by the State. Several States have different groups of highly qualified personnel that observe the operations at the plant. This process allows for identifying issues before they become serious problems.

    8. Reduced variability through split sample testing prior to production. One State's program calls for split sample testing of flexural samples by all contractor and State testing personnel before production begins. This testing has reduced the variability between contractor and State test results.

Opportunities for Improvement

The comments in this section are discussed in four areas: use of contractor test results, IA issues, other items, and fraudulent activities.

  1. Use of Contractor test results

    In most States it was found that the States' validation system needed to be strengthened. The following items were noted:

    1. not using independent samples for State verification samples,
    2. no statistical comparison of contractor and State data,
    3. low State-to-contractor test comparison ratios (e.g., one vs. ten results) and one vs. one comparisons of test results for acceptance,
    4. lack of control of contractor-supplied data,
    5. lack of a defined time frame for comparing test results,
    6. not increasing testing frequencies when test results do not compare,
    7. not controlling the sampling location and timing, and
    8. allowing biased retesting provisions.

    The following is a further explanation of each of the areas noted above:

    1. Use of independent samples. It was noted in several States that verification was being performed based on split samples taken by the contractor. Verification of test data needs to be based on independent samples taken by the State. Split samples are an important part of the overall system and can help determine problems associated with sampling and testing procedures and equipment problems. That is why IA testing is required. However, split samples taken by the contractor will not detect fraudulent activity by the contractor which may consist of fabricating samples, switching samples, or taking samples from biased locations.

    2. Use of a statistical comparison. It was noted in some States that a statistical comparison was not being performed between the contractor's results and the State's results. The comparison was being based solely on a one vs. one comparison of results. This method of verification is very weak and will only detect severe problems with contractor test results.

    3. Number of independent samples being compared for validation. It was noted in some States that the comparison ratio of State to contractor results was one vs. five or one vs. ten. When the F-test and t-test are used for comparing test results, a minimum of seven to ten State test results and a maximum of 20 to 30 contractor test results should be used for a reasonable comparison. It is suggested that a rolling comparison be incorporated to solve this problem. States can also increase their sampling frequency at the beginning of projects in order to start the comparison earlier. It was also noted in some States that there is no limit to the number of test results included in the comparison procedure. It is recommended that the number of contractor test results be limited to a maximum of 20 to 30, because large numbers of tests in the comparison can mask problems in individual test results. It is also recommended that when a non-comparison occurs, a new comparison process begin with the next test results.

    4. Control of contractor-supplied data. A need to control the documentation for contractor-supplied test results was noted in some States. In some cases the State does not receive the documentation until three days after paving. Some States also do not require contractors to retain the source documentation for the required three years and are not periodically reviewing the records. The States should review source documentation, require proper retention of documents, and require submission of test results by the next day and before the State supplies its own results.

    5. Defined time frame for comparing test results. It was noted that in some cases there were no limits on the time required for validating contractor test results with the State verification test results. Validation of test results should occur as soon as possible due to the risks to both the State and the contractor that the material being supplied and incorporated into the project does not meet specifications.

    6. Increasing test frequency. When contractor and State tests do not validate, the State should increase its frequency of testing. This increases the ability of the validation process to detect differences and also reduces risks for both parties if the State's results are ultimately used for payment.

    7. Control of sampling location. It was noted in several States that the time and/or location of sampling was being telegraphed to the contractor. In one case, separation paper was being placed on the existing hot mix asphalt mat before placement of the next lift; the paper marked where cores would be taken before the lift was even placed and compacted. In other cases, the random numbers for the sample locations were given to the contractor for the entire project at the beginning of the project, or for the entire day at the beginning of the day. The State must control the sampling location and timing, limit pre-notification of sampling, and limit the contractor's ability to modify sampling locations. Sampling behind the paver can also avoid telegraphing sampling times to the plant operators, and saws can be provided to separate the layers of cores.

    8. Biased retesting provisions. It was noted that some States allow retesting of material any time a failing test result occurs and replace the failing result with the new one. This practice is highly biased toward the contractor. Under no circumstances should a test result be discarded unless the sample is known to be flawed (e.g., a poor or damaged sample, or poor test procedures). If additional tests are taken, the analysis process needs to be modified to account for the additional number of test results.
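The F-test and t-test comparison recommended in item 3 above can be sketched in code. This is a minimal illustration only: the significance level, function name, data values, and the use of SciPy are assumptions for the sketch, not part of any reviewed State's procedure.

```python
"""Sketch: validating contractor test results against State
verification results with an F-test on variances, then a t-test on
means.  All data values below are hypothetical illustrations."""
from statistics import variance

from scipy import stats

ALPHA = 0.01  # hypothetical significance level for both comparisons


def validate(contractor, state, alpha=ALPHA):
    """Return (variances_similar, means_similar) for the two data sets."""
    f = variance(contractor) / variance(state)
    dfc, dfs = len(contractor) - 1, len(state) - 1
    # Two-sided F-test on the ratio of the sample variances.
    p_f = 2 * min(stats.f.cdf(f, dfc, dfs), stats.f.sf(f, dfc, dfs))
    variances_similar = bool(p_f > alpha)
    # t-test on the means; Welch's form is used if the variances differ.
    _, p_t = stats.ttest_ind(contractor, state, equal_var=variances_similar)
    return variances_similar, bool(p_t > alpha)


# Hypothetical example: 20 contractor results vs. 7 State results,
# matching the recommended comparison sizes above.
contractor = [92.1, 93.4, 91.8, 92.7, 93.0, 92.5, 91.9, 92.8, 93.2, 92.4,
              92.0, 93.1, 92.6, 91.7, 92.9, 93.3, 92.2, 92.3, 93.5, 92.1]
state = [92.4, 91.9, 93.0, 92.6, 92.2, 93.1, 92.5]
print(validate(contractor, state))
```

A rolling comparison, as suggested above, would simply re-run `validate` on the most recent 20 to 30 contractor results as each new result arrives.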
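Item 7 above requires the State to control sampling locations and timing. A minimal sketch of State-controlled stratified random sampling, assuming one sample per sublot (the function name, lot dimensions, and seed are hypothetical):

```python
"""Sketch: stratified random sampling locations generated under State
control.  Lot and sublot sizes are hypothetical."""
import random


def sample_stations(lot_length_ft, n_sublots, seed=None):
    """Pick one uniformly random station within each sublot of a lot.

    The State should generate these only when needed, so locations are
    never telegraphed to the contractor in advance.
    """
    rng = random.Random(seed)
    sublot = lot_length_ft / n_sublots
    return [round(i * sublot + rng.uniform(0, sublot), 1)
            for i in range(n_sublots)]


# Hypothetical 5,000 ft lot split into five 1,000 ft sublots.
print(sample_stations(5000, 5, seed=42))
```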

  2. Independent Assurance (IA) Program

    1. Many States should review their test result comparison tolerances. In some instances the tolerances were developed in the early 1970s and have not been thoroughly examined since. In many cases testing variability has improved due to certification programs and improvements to test procedures; therefore, the tolerances may be too large.

    2. The IA inspectors were taking independent samples. IA should consist of a program of split sampling and testing or reference sample testing to help ensure that testing is being performed correctly on properly calibrated equipment. Independent samples do not efficiently isolate issues associated with testing and equipment; unless large numbers of them are taken, they cannot detect problems with sampling and test procedures or with equipment.

    3. IA forms referred to specification compliance. IA is specifically for determining testing competence and not specification compliance.

    4. Gyratory compaction was not included in the IA program. The IA program should cover all test procedures that are used in the acceptance decision.

    5. The IA program did not cover technicians in the QC Laboratories. All technicians including State personnel, contractor personnel or consultant personnel that are performing testing that is used in the acceptance decision must be qualified.
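Item 1 above notes that IA comparison tolerances may be outdated. As a minimal illustration of how a split-sample pair is screened against such tolerances, here is a sketch; the property names and tolerance values are hypothetical placeholders, not any agency's actual limits.

```python
"""Sketch: screening IA split-sample results against comparison
tolerances.  Values are hypothetical placeholders only."""

# Hypothetical tolerances; agencies publish their own limits per
# test procedure, and those limits should be reviewed periodically.
TOLERANCES = {"density_pcf": 2.0, "asphalt_content_pct": 0.3}


def ia_compare(test, state_result, contractor_result):
    """Return True when a split-sample pair is within tolerance."""
    return abs(state_result - contractor_result) <= TOLERANCES[test]


print(ia_compare("asphalt_content_pct", 5.1, 5.3))  # difference 0.2
print(ia_compare("density_pcf", 140.0, 143.0))      # difference 3.0
```

Out-of-tolerance pairs would flag the technician or equipment for follow-up, not specification compliance, consistent with item 3 above.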

  3. Other Items Noted

    1. Tester's names were not indicated on reports. All test reports should indicate the person that was responsible for sampling and testing the material.

    2. A few States need to improve the use of electronic means to track data. A good system of data collection will allow future analysis of data to continually improve specifications.

    3. Some States that are gathering electronic data are not using the data to continuously analyze and improve their specifications based on the actual results obtained within the State.

    4. The dispute resolution system was not formally established and documented. When contractor test results are used for acceptance the State must establish a dispute resolution system to address the resolution of discrepancies occurring between the verification sampling and testing and the contractor sampling and testing.

    5. Qualification programs for all contractor and State laboratories used in the QA program were not established.

    6. Absolute Average Deviation or other inefficient quality measures are being used for acceptance and pay adjustments. The States should move toward a statistically sound Percent Within Limits (PWL) type of specification.

    7. A couple of States did not include volumetric properties as part of the acceptance decision for asphalt mix and should move toward including them. Many States allow multiple options in their test procedures and sampling locations, which increases variability in sampling and testing. A single test method and sampling location will reduce the overall variability of the test results.
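Item 6 above recommends moving toward PWL specifications. As a rough sketch of how PWL is estimated from sample statistics, here is a normal-approximation version; actual agency practice uses beta-distribution-based PWL tables (e.g., in AASHTO R 9), and the function name, limits, and data below are hypothetical.

```python
"""Sketch: estimating Percent Within Limits (PWL) from sample
statistics via a normal approximation.  Simplified illustration only;
agencies use beta-distribution PWL tables for small samples."""
from statistics import NormalDist, mean, stdev


def pwl_estimate(results, lsl=None, usl=None):
    """Estimate the percent of material falling within the spec limits."""
    nd = NormalDist(mean(results), stdev(results))
    below = nd.cdf(lsl) if lsl is not None else 0.0
    above = 1.0 - nd.cdf(usl) if usl is not None else 0.0
    return 100.0 * (1.0 - below - above)


# Hypothetical air-void results against 2.0-6.0 percent limits.
voids = [3.8, 4.2, 4.5, 3.9, 4.7, 4.1, 4.4]
print(round(pwl_estimate(voids, lsl=2.0, usl=6.0), 1))
```

Pay adjustments are then typically computed from the estimated PWL, which is what makes PWL a statistically sound quality measure compared with Absolute Average Deviation.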

  4. Fraudulent Activities

    No fraudulent activities were discovered during the reviews.

    However, questions concerning fraudulent activities were asked during the reviews of the four States assessed during FY 2004 (California, Georgia, North Carolina, and New York). In response to those questions, two States indicated current ongoing investigations and two States indicated that they had revoked technician certifications due to fraudulent activities. Questions concerning fraudulent activities or the revocation of technician certifications were not asked during the FY 2003 reviews.

Available Resources

The following resources are currently available for assistance in dealing with issues raised in this report:

  1. The guideline for these reviews is available on the FHWA Office of Pavement Technology web site. It is available at http://www.fhwa.dot.gov/pavement/materials_notebook/qareview.htm

  2. 23 CFR Part 637, Subpart B, "Quality Assurance Procedures for Construction," Federal Highway Administration, Federal Register, Washington, DC, April 2003, http://www.access.gpo.gov/nara/cfr/waisidx_03/23cfr637_03.html.

  3. Technical Advisory 6120.3, "Use of Contractor Test Results in the Acceptance Decision, Recommended Quality Measures, and the Identification of Contractor/Department Risks," Federal Highway Administration, August 2004. It is available at http://www.fhwa.dot.gov/legsregs/directives/techadvs/t61203.htm

  4. A contract has been awarded for the delivery of NHI Course 134042, "Materials Control and Acceptance - Quality Assurance." The course is four days long and covers the basic essentials of QA. A two day version of the course is also available.

  5. A 1 ½ day workshop titled "PWL Specifications: A Risk Analysis Approach" will be offered by the FHWA Office of Pavement Technology starting in the spring of 2005.

  6. A software package (PWL-Risk) is being developed by FHWA as a tool to help analyze risks associated with Percent Within Limit (PWL) specifications. PWL-Risk will be used as part of the 1 ½ day workshop "PWL Specifications: A Risk Analysis Approach."

  7. "Optimal Procedures for Quality Assurance Specifications," Publication No. FHWA-RD-02-095, Federal Highway Administration, Washington, DC, April 2003, http://www.fhwa.dot.gov/pavement/pub_details.cfm?id=89.

  8. "Evaluation of Procedures for Quality Assurance Specifications," Publication No. FHWA-HRT-04-046, Federal Highway Administration, Washington, DC, October 2004.

Status of other Quality Assurance Activities

The following resources are being developed to address issues that are not being covered by existing resources:

  1. The rewrite of the AASHTO Standard Recommended Practice R 9-97 (2000), "Acceptance Sampling Plans for Highway Construction," was completed in the summer of 2004 and will be published in the 2005 AASHTO Standards. This guide will assist the States in developing specifications.

  2. A contract is underway to rewrite and improve the software that was developed under Demonstration Project 89. This software will assist the States in analyzing and validating the risks associated with QA specifications. The software will be available at the end of 2006.

  3. The Quality Assurance Technologist Course that was developed by the New England Transportation Technician Certification Program (NETTCP) has been finalized and will be available shortly through the Transportation Curriculum Coordination Council (TCCC).

  4. A contract for developing NHI Course 134059 - "Quality Assurance Specification Development and Validation Course" is expected to be awarded at the end of 2005. The course is expected to be available by the middle of 2007. The course will use the software that is currently being developed to assist the States in developing and validating the risks associated with QA specifications.

  5. A task order is being developed with the National Center for Asphalt Technology (NCAT) to explore innovative methods for the acceptance of materials.

Conclusion

The stewardship reviews will continue next year and beyond along with the continued development and updating of resources in order to continuously improve the QA program.

 
Updated: 04/07/2011
 
