
Performance Contracting Framework
Fostered by Highways for LIFE

Performance Measurement Methodology

Process

If you set goals, you need to measure performance against them to determine the extent to which they were met. The measurement methodology defines what gets measured, by whom, and when, as well as what to do with the results. Performance contracts offer numerous techniques for monitoring performance. This methodology considers the logistics of how the performance goals will be measured, evaluated, and scored. Recommendations and lessons learned from real-world performance contracts are provided, along with materials the Owner Agency may use to implement the processes involved in measuring performance. The overall process is summarized in the figure below.

Figure 4. Process for Defining the Measurement Methodology

Flow chart showing the process for defining the measurement methodology.

What Gets Measured (When, By Whom, and How)
Frequency

There are a number of frequency options for measuring performance, including:

  1. Continuous Measurement
  2. Cyclic (Hourly, Daily, Weekly, Monthly, Quarterly, Annually)
  3. Start of project, end of project and at project milestones
  4. Long-term

The frequency will depend largely on the specific performance goal, and the frequency of measurement should be defined for each goal (see Table 3). For example, some congestion goals may need to be measured continuously, while pavement smoothness likely would be measured at the end of the project, and perhaps on a long-term basis. Unscheduled or "surprise" inspections can be incorporated into the project's evaluation as well.

The frequency of data collection or testing, which will affect the frequency of the overall performance evaluation, may be impacted by the innovations introduced on the project. For example, if a long-lasting material is proposed and implemented, this may necessitate longer-term intermittent site visits to collect resulting data.

The frequency of data dissemination and presentation is also important to consider. For example, a goal such as motorist delay logically requires the contractor to collect data hourly, but it may be more reasonable to present those data to the project team once a week or once a month. There may also be economic, temporal, or spatial impacts that allow for reduced work and data collection/analysis efforts; many of these may be outside the contractor's control (e.g., extreme weather, terrorist attacks).
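As a minimal illustration of this idea (the field names and data below are hypothetical, not part of any contract requirement), the sketch collects hourly delay observations and rolls them up into weekly averages for presentation to the project team:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly delay observations: (timestamp, average delay in minutes).
hourly_delay = [
    (datetime(2011, 4, 4, 7), 12.0),
    (datetime(2011, 4, 4, 8), 18.5),
    (datetime(2011, 4, 11, 7), 9.0),
]

# Roll hourly measurements up by ISO week for periodic reporting.
weekly = defaultdict(list)
for timestamp, delay in hourly_delay:
    year, week, _ = timestamp.isocalendar()
    weekly[(year, week)].append(delay)

for (year, week), delays in sorted(weekly.items()):
    print(f"{year}-W{week}: average delay {sum(delays) / len(delays):.1f} min "
          f"({len(delays)} hourly samples)")
```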

As an example, this section discusses how to organize quarterly milestone evaluations. At quarterly milestone intervals throughout the project (i.e., 0%, 25%, 50%, 75%, and 100% project completion), the project team (i.e., Owner Agency staff, the Contractor, and other parties) evaluates the project, the work zone, and/or the Contractor's records of actions completed in that period to review Contractor progress and performance.

The frequency of measuring performance will also assist the contractor and Owner Agency when planning and performing work. The following figure demonstrates an example iterative process associated with a quarterly performance review. It begins with a review and inspection of the work, continues with the production of a digital record (e.g., a DVD) and a report showing what was found during the review, and ends with the project team planning and performing work based upon the findings.

Figure 5. Quarterly Milestone Review Process.

The figure shows a photo of DVDs and of a sample report page. The Owner Agency performs a quarterly review and produces DVDs and a report. The contractor then uses the report to plan/perform work, the results of which get measured in the next quarterly review. Thus, the process is circular.

Measuring performance will also assist the contractor and Owner Agency when planning and performing work on an annual basis. The following figure demonstrates an example iterative process associated with a comprehensive annual performance review.

Figure 6. Comprehensive Annual Review Process.

The figure shows a sample comprehensive annual review process. The Owner Agency chooses sample for review and performs an evaluation. The contractor uses the results of the evaluation to plan/perform work. The cycle then begins again for the next evaluation with choosing samples. Thus, the process is circular.

Evaluators

The evaluators for this effort generally would come from three pools: the construction contractor, an independent evaluator, and the Owner Agency. We recommend using an unbiased, independent evaluator to measure performance against the goals. FHWA-required verification testers could also be included as a fourth pool of evaluators.

The independent evaluator should not be involved in the day-to-day activities of the project to ensure a fair evaluation; this would also prevent the evaluator from focusing solely on either positive or negative data, records, areas, or other project information.

This evaluator could be supplied by FHWA. Alternatively, a consultant familiar with the type of project may serve as the evaluator. However, the Owner Agency and the contractor may need or choose to collect some or all of the data as appropriate. An assignment of responsibility must be made for each performance goal (see Table 3).

When developing goals, it is important to consider how they will be measured on a technical level and to define this in the solicitation package. For example, if specialized equipment is required for an evaluation, will the contractor or the Owner Agency furnish this equipment? A decision such as this one will influence the ultimate cost of the project and must be carefully taken into account and described in the solicitation package.

We recommend recording all evaluations via electronic media to provide a record of the condition of the project. The electronic media record serves to document the changes to the project setting over time and is useful as reference material in planning future work. We suggest that each organization involved on the project receive a copy of each of the evaluations via DVD.

Measures of Effectiveness

For each goal, it is important to define the measure of effectiveness, the unit of measure, and the method that will be used to measure performance. The measure of effectiveness is the entity under scrutiny (e.g., each user satisfaction survey, each acre of wetlands restored). The units of measure focus on the size of the sample to be taken (e.g., 1/10 mile, one month, entire work zone).

The methods of measurement seek to answer the following questions:

  1. How does one measure this entity?
  2. What processes must be followed to obtain a reliable measurement?

The method of measurement should be nationally accepted, if possible. As an example, AASHTO-accepted standard measurement practices will provide good methods of measurement for some goals. The Owner Agency will need to define units and methods for each goal.
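As a lightweight illustration (the record type and field names below are hypothetical, patterned on the columns of Table 3 rather than taken from any agency system), the definitions for each goal could be captured in a simple record:

```python
from dataclasses import dataclass

@dataclass
class GoalDefinition:
    """One goal-definition entry (fields patterned on the columns of Table 3)."""
    performance_goal: str           # the target to meet
    measure_of_effectiveness: str   # the entity under scrutiny
    unit_of_measure: str            # size of the sample taken
    method: str                     # how the measurement is made
    frequency: str                  # how often / when it is measured
    evaluator: str                  # who performs the evaluation

# Example entry, paraphrasing the pavement smoothness goal from Table 3.
smoothness = GoalDefinition(
    performance_goal="IRI less than 48 inches per mile",
    measure_of_effectiveness="IRI (inches per mile)",
    unit_of_measure="Each lane for the entire length of the project",
    method="Continuously reported IRI using an inertial profiler",
    frequency="At project completion",
    evaluator="Independent Evaluator",
)
print(smoothness.performance_goal)
```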

Sampling Strategies

The Owner Agency must choose a sampling strategy for the reviews, whether they are daily, weekly, monthly, quarterly, or on any other cycle. The options are to sample data, items, or assets to obtain a representative view of the work, or to use a 100% sample. We recommend avoiding a 100% appraisal because of the large cost and time expenditures associated with such reviews.

We suggest randomly sampling a portion, perhaps 10%, of the category items to be reviewed. A randomly generated sample prevents the evaluators from focusing solely on either good or poor sections. Randomly selected samples may be generated for each item included in each category for each review. We also recommend specifying the sampling process clearly in the solicitation package, as the process will influence the price.
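A minimal sketch of this kind of random selection follows (the item names and helper function are illustrative; the actual sampling rate and process must be defined in the solicitation package):

```python
import math
import random

def select_review_sample(items, rate=0.10, seed=None):
    """Randomly select roughly `rate` of the category items for review."""
    rng = random.Random(seed)
    sample_size = max(1, math.ceil(len(items) * rate))
    return rng.sample(items, sample_size)

# Hypothetical category items (e.g., 0.1-mile pavement segments).
segments = [f"segment-{i:03d}" for i in range(120)]
print(select_review_sample(segments, rate=0.10, seed=42))
```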

Scoring Styles

Two primary styles or strategies for measuring performance are:

  1. Subjective
  2. Objective

Of course, it is also possible to combine the two styles. For example, a subjective monthly evaluation could coexist independently with an objective annual evaluation. Alternatively, a monthly evaluation could have both objective and subjective components.

We recommend that these evaluations be objective evaluations of the Contractor's performance against the performance goals. While subjective evaluations are helpful in capturing the project team's perceptions of the work completed and remaining, objective evaluations minimize the amount of debate over performance against the goals. It is also simpler to incorporate incentive and disincentive fees into the solicitation package when using unbiased, hard data and scores. The Owner Agency and any evaluators should review the work completed or in progress and the results; then the Owner Agency and any evaluators should assign the appropriate Level of Performance score.

The evaluator personnel should be kept as consistent as possible to ensure comparability of the reviews. Introducing new evaluators midstream may cause delays, because the Owner Agency must spend time and money bringing the new evaluators up to speed on the project, data, and context. Again, the objective evaluator should not be involved in the day-to-day minutiae of the project, but the evaluator should be familiar with the appropriate technical logistics of the project.

What to Do With the Results

When you evaluate performance, you end up with a large number of scores for various samples and performance goals. The challenge is to use these scores to communicate the effectiveness in meeting the performance goals. Different levels of management will have different needs in terms of level of detail. Upper-level management tends to be interested in summary scores at the project level, while project-level management tends to be interested in the detailed results.

The Owner Agency and independent evaluator or consultant should generate reports that summarize the review's findings. Results could be presented in both data summaries and written commentary sections. Deficiencies and problems found during the evaluation should be highlighted; simple bulleted lists or checklists may be used to convey this information.

We recommend that the Owner Agency, independent evaluator, or consultant generate periodic reports that summarize each review's findings. To help identify trends, the project team should compare the review results against the results for previous periods. It is useful to compare the results against the baseline condition (i.e., the project at 0% completion, one day before the contract starts) and/or the previous comprehensive evaluation.

We recommend that the project team discuss the results of the evaluations. We also recommend that the Owner Agency use partnering sessions, as needed, to resolve any issues unearthed by the reviews. The Owner Agency also should report a general level of performance satisfaction along with recommendations and concerns. The contractor may bring issues to the attention of the project team, along with solutions and suggestions for future activities.

Methods for Combining Results into Summary Scores

The evaluation team could use the resulting scores for each goal (e.g., Recycling and Reuse) to obtain an overall category score (e.g., Environmental) and an overall project score. A sample process to obtain the latter two scores is detailed below.

Each performance goal should be assigned a computed relative weight. Table 2 below assigns relative weights to two sample performance goals; each goal in the project would need an associated weight. The Owner Agency can determine the criteria against which to rate the goals. For sample purposes here, we have used the main HfL high-level goals.

For each performance goal below, the Owner Agency assigns a rating for each category (in this case, "Improve Safety," "Reduce Congestion due to Construction," "Improve Quality," and "Improve User Satisfaction"). On the scale, 5 is very important and 1 is not as important. The Computed Relative Weight is determined by summing the four ratings for the performance goal and dividing by 2 (to obtain a score out of 10). As the computed relative weight increases, so does the importance of the goal.

In the example below, Capacity (with a computed relative weight of 6.5) was determined to be more important than Recycling/Reuse (with a computed relative weight of 4.5).

These weights will help determine which goals have more of an impact on the overall score (and thus the application of incentive and disincentive fees, as discussed below). In the example, Capacity will have more impact on the score than Recycling/Reuse.

Table 2. Example Relative Weights for Two Sample Performance Goals
Construction Congestion - Capacity: Capacity in the work zone [or work zone and alternate route(s)] during peak traffic periods is greater than or equal to 90% of the pre-construction capacity
  • Ratings: a. Improve Safety = 3; b. Reduce Congestion due to Construction = 5; c. Improve Quality = 1; d. Improve User Satisfaction = 4
  • Computed Relative Weight (a+b+c+d)/2 = 6.5

Environmental - Recycling and Reuse: Capture and recycle/recover 90% of recyclable materials used on project
  • Ratings: a. Improve Safety = 1; b. Reduce Congestion due to Construction = 1; c. Improve Quality = 4; d. Improve User Satisfaction = 3
  • Computed Relative Weight (a+b+c+d)/2 = 4.5
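The computation behind Table 2 is simply the sum of the four ratings divided by 2; a short sketch using the ratings from the table is shown below:

```python
def computed_relative_weight(ratings):
    """Sum the four 1-to-5 ratings and divide by 2 to get a score out of 10."""
    return sum(ratings) / 2

# Ratings in the order: Improve Safety, Reduce Congestion due to Construction,
# Improve Quality, Improve User Satisfaction (values from Table 2).
capacity = computed_relative_weight([3, 5, 1, 4])          # 6.5
recycling_reuse = computed_relative_weight([1, 1, 4, 3])   # 4.5
print(capacity, recycling_reuse)
```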
Determining Incentive/Disincentive Fees with the Scores

Incentive and disincentive fees are an innovative approach to motivating the contractor to meet the Performance Goals. If the Owner Agency is going to apply incentives and disincentives, it needs an objective system for determining the fees applied. The Owner Agency also needs to determine what incentive and disincentive fees are reasonable for the project and the locale. We have used 5% here as a sample, but 10% might be more appropriate depending on the situation. As Owner Agencies and contractors gain experience with incentives and disincentives, the process will evolve.

The incentives and disincentives must be reasonable and meaningful. The incentives also must be achievable, or they will not have an impact.

While including disincentives is appropriate, the Owner Agency must realize that this level of risk will come at a cost. Disincentives are effective at pushing performance up to a point; beyond that point, an incentive is also needed.

The measurement methodology sample materials included below provide language for the solicitation package on applying incentive and disincentive fees. Some assumptions are made in this contract language. For example, the contractor shall be eligible for an incentive fee or subject to a disincentive fee for each Category, which is tied to the comprehensive annual evaluation, and is based upon performance throughout the year. This award is designed to reward performance that meets or exceeds the performance goals.

We also recommend that the Owner Agency form a Performance Evaluation Board (PEB) that will advise the Contracting Officer on the amount of the total incentive fee to be received by the Contractor or the disincentive fee to be applied to the Contractor. The PEB personnel should be kept consistent throughout the project's life. After each evaluation, the PEB would convene to review the scores and determine the appropriate course of action. In the sample materials, the PEB generates a score with a scale of 0 to 100; in the scale, 40 points come from monthly evaluation scores, 50 points come from the comprehensive annual score, and 10 points are subjectively produced:

PEB score = (40% x average Monthly Evaluation score) + (50% x Comprehensive Evaluation score) + subjective score (up to 10 points)
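A minimal sketch of that composition (the weights come from the sample materials; the input scores below are placeholders) might be:

```python
def peb_score(monthly_scores, comprehensive_score, subjective_points):
    """Combine evaluation results into a 0-100 PEB score:
    40% monthly average + 50% comprehensive score + up to 10 subjective points."""
    monthly_average = sum(monthly_scores) / len(monthly_scores)
    return 0.40 * monthly_average + 0.50 * comprehensive_score + subjective_points

# Placeholder inputs: eleven monthly scores, one comprehensive score, subjective points.
print(peb_score([85] * 11, 90, 8))  # 0.4*85 + 0.5*90 + 8 = 87.0
```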

Note that the materials provide a detailed breakdown of how to compute the incentive/disincentive fees. For the incentive/disincentive fee structure, the Owner Agency must decide how these should be linked to performance. For example, should each category be tied to individual incentive/disincentive fees, or should the collective categories be tied to an overall incentive/disincentive fee? We recommend tying each category to individual incentive and disincentive fees, as this allows the Owner Agency to adjust any project management strategies in case a category is deficient or problematic.

Lessons Learned from Real-World Performance Contracts

The following measurement methodology and incentive/disincentive lessons have been learned in real-world performance contracts:

  • Specify the performance measurement methodology clearly in your RFP/IFB - it will impact the price
  • Choose between sampling and 100% reviews
  • If sampling, choose the samples randomly - resist the temptation to look only at the problems
  • Capture the performance reviews on video
  • Generate reports from the review that will be shared with all project partners
  • Present the results in a variety of ways to satisfy different interest levels
  • Weighted averages work for combining scores across multiple samples and categories, but can hide problems
  • Do not focus too long on the overall score - it is just an indicator
  • Make incentives achievable (and worthwhile)
  • Include disincentives to push performance
  • Incentives and disincentives should be balanced, fair, and reasonable
  • Incentive and disincentive fee structures evolve from one project to the next as Owner Agencies and Contractors develop experience
  • While including disincentives is appropriate, realize that you are going to pay for that risk up front
  • Be fair!

Sample Measurement Methods and Solicitation Package Materials

The following table provides the sample set of measurement methods for the menu of performance measures and categories presented in the Performance Goals Section of this framework. All entries in this table can be adjusted to be suitable for the specific locality of the project.

Following the Measurement Methods Table are sets of sample supplemental wording for RFP Sections:

  • E - Measurement Methodology, Inspection, and Acceptance
  • F - Deliveries or Performance
  • H - Special Contract Requirements
Table 3. Sample Measurement Methods for the Performance Measure Menu in the Performance Goals Section
Each entry below lists the Category and Element, the PM number, the Performance Goal, the Measure of Effectiveness and Units of Measure, how to measure (what processes), how often and when to measure, and who will evaluate.

Safety

PM #1 - Injuries (Workers: Contractor/subcontractor on-site personnel, Government representatives, Consultants, Vendors, Delivery Personnel)
  • Performance Goal: Incident Rate (IR) for worker injuries is less than 4.0
  • Measure of Effectiveness / Units: Incident Rate for the entire project
  • How to measure: Contractor's officially reported Incident Rate
  • How often / When: End of project
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #2 - Crashes
  • Performance Goal: Site Crash Rate during construction divided by the Crash Rate prior to construction is equal to 1.0
  • Measure of Effectiveness / Units: Site Crash Rate for the entire project divided by the Site Crash Rate prior to construction
  • How to measure: Each State agency/contractor shall record the Crash Rate during construction. For long-term projects, the annual Crash Rate during construction should be used and divided by the Crash Rate prior to construction. For short-term projects, the overall Crash Rate during construction should be used.
  • How often / When: End of project
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

OR

PM #3 - Crashes
  • Performance Goal: Work zone crash rate equal to pre-construction crash rate
  • Measure of Effectiveness / Units: Work zone Crash Rate for the entire project compared to the pre-construction Crash Rate
  • How to measure: Each State agency/contractor shall record the Crash Rate during construction. For long-term projects, the annual Crash Rate during construction should be used and divided by the Crash Rate prior to construction. For short-term projects, the overall Crash Rate during construction should be used.
  • How often / When: End of project
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #4 - Speed Band
  • Performance Goal: 85% of the motorists travel at the posted speed limit or less
  • Measure of Effectiveness / Units: Percentage of motorists traveling at the posted speed limit or less each day
  • How to measure: Monitoring devices, police radar, police tickets
  • How often / When: Each day
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

AND

PM #5 - Speed Band
  • Performance Goal: No one travels more than 20 mph over the posted speed limit
  • Measure of Effectiveness / Units: Frequency of recorded speeds greater than 20 mph over the posted speed limit
  • How to measure: Monitoring devices, police radar, police tickets
  • How often / When: Each day
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

Construction Congestion

PM #6 - Travel time/delay during construction
  • Performance Goal: Rural: average motorist delay less than 15 minutes (as compared to pre-construction travel time); Urban: average motorist delay less than 20 minutes (as compared to pre-construction travel time)
  • Measure of Effectiveness / Units: Average delay for each hour for each direction (as applicable) compared to the baseline pre-construction travel time
  • How to measure: Options include pilot vehicles, cell phone tracking, license plate matching, estimation software
  • How often / When: 1-hour averages for the entire work zone period; both before (baseline) and after data are needed
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #7 - Travel time/delay during construction
  • Performance Goal: Average travel time through the work zone is equal to or less than the established target
  • Measure of Effectiveness / Units: Average travel time for each hour for each direction (as applicable) compared to the established target travel time
  • How to measure: Options include pilot vehicles, cell phone tracking, license plate matching, estimation software
  • How often / When: 1-hour averages for the entire work zone period; both before (baseline) and after data are needed
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #8 - Queue length during construction
  • Performance Goal: No stopped queue (speed less than 10 mph)
  • Measure of Effectiveness / Units: Maximum stopped queue length for each day for each direction (as applicable)
  • How to measure: RTMS (or similar) units placed upstream of the work zone at 0.5-mile increments
  • How often / When: End of each day
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #9 - Queue length during construction
  • Performance Goal: Rural: less than ½-mile moving queue (travel speed 20% less than posted speed); Urban: less than 1½-mile moving queue (travel speed 20% less than posted speed)
  • Measure of Effectiveness / Units: Maximum moving queue length for each day for each direction (as applicable)
  • How to measure: RTMS (or similar) units placed upstream of the work zone at 0.5-mile increments
  • How often / When: End of each day
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #10 - Queue length during construction
  • Performance Goal: Peak period queue length is equal to typical pre-construction peak period queue length
  • Measure of Effectiveness / Units: Maximum queue length during AM peak and PM peak compared to the baseline typical queue length during the pre-construction AM peak and PM peak
  • How to measure: RTMS (or similar) units placed upstream of the work zone at 0.5-mile increments
  • How often / When: End of AM peak period and end of PM peak period
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #11 - Incident clearance time
  • Performance Goal: Non-injury incidents are cleared from the travel lanes within 20 minutes
  • Measure of Effectiveness / Units: Clearance time for each non-injury incident compared to the target clearance time
  • How to measure: Electronic or paper log capturing reporting time and clearance time
  • How often / When: Each incident for the entire project; incident scores are averaged to obtain the overall score
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

PM #12 - Capacity
  • Performance Goal: Capacity in the work zone [or work zone and alternate route(s)] during peak traffic periods is greater than or equal to 90% of the pre-construction capacity
  • Measure of Effectiveness / Units: Measured or computed capacity for each work zone configuration
  • How to measure: Three options: compute based on traffic data (volume and speed); compute using the Highway Capacity Manual; modeling
  • How often / When: Must be computed or measured for each change in work zone configuration
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

Quality

PM #13 - Quality Index
  • Performance Goal: The Contractor achieves a Quality Index score of 0.8
  • Measure of Effectiveness / Units: Quality Index computed on the basis of a number of project- or agency-specific quality-related measures (Note: the Quality Index needs to be defined in the contract along with a description of how it is determined.)
  • How to measure: The Owner Agency would define the quality goals important for its project, develop 5 levels of performance for each performance measure, and weight each performance measure. The Quality Index would be computed as a weighted average across the various quality performance measure scores. The Owner Agency may consider defining a rejection level.
  • How often / When: As desired by the Owner Agency; this could be computed monthly, annually, or at the end of the project
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

PM #14 - Pavement Smoothness
  • Performance Goal: Inertial profile, IRI, less than 48 inches per mile
  • Measure of Effectiveness / Units: IRI (inches per mile) for each lane for the entire length of the project
  • How to measure: Continuously reported IRI using an inertial profiler
  • How often / When: At project completion
  • Evaluator: Independent Evaluator

PM #15 - Pavement Noise
  • Performance Goal: Noise less than 96 dBA based on the OBSI method
  • Measure of Effectiveness / Units: Each lane for the entire length of the project
  • How to measure: On-Board Sound Intensity (OBSI) method
  • How often / When: At project completion
  • Evaluator: Independent Evaluator

Time

PM #16 - Overall Project Schedule
  • Performance Goal: Project completed ahead of the contract completion date
  • Measure of Effectiveness / Units: Actual project completion date versus initial scheduled contract completion date
  • How to measure: Compare to proposed schedule
  • How often / When: End of project
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

PM #17 - Overall Project Schedule
  • Performance Goal: Reduce contractor's actual days on the road by 20% compared to the State DOT MAX working days
  • Measure of Effectiveness / Units: Actual days on the road (for example, days in which lane or shoulder closures are required) versus State DOT MAX working days
  • How to measure: Use actual calculated days and the State's records
  • How often / When: End of project
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

OR

PM #18 - Schedule Improvements
  • Performance Goal: Reduce working days to complete the project by 20% when compared to the State DOT's MAX working days
  • Measure of Effectiveness / Units: Total working days to complete the project versus State DOT MAX working days
  • How to measure: Use actual completion time for the project and the State's records
  • How often / When: End of project
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

OR

PM #19 - Schedule Improvements
  • Performance Goal: Achieve a score of less than 1 using the equation "actual working days divided by State DOT MAX working days"
  • Measure of Effectiveness / Units: Project actual working days divided by the State DOT MAX working days
  • How to measure: Use actual working days as reported by the contractor and verified by the Owner Agency, compared to the State DOT MAX working days
  • How often / When: End of project
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

PM #20 - Scheduling Milestones
  • Performance Goal: Complete all major milestones on time
  • Measure of Effectiveness / Units: Major milestone scheduled date versus major milestone completion date for each major milestone
  • How to measure: Use project scheduling software to track major task completion by the contractor
  • How often / When: End of each agreed-upon major milestone
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

PM #21 - Scheduling
  • Performance Goal: No contract days where no work is being performed when work is able to be performed and traffic is impacted in the work zone
  • Measure of Effectiveness / Units: Actual contract days where no work is performed when work could be performed
  • How to measure: Contractor reporting, physically monitoring the work zone, or electronic monitoring using cameras or other data capture technology
  • How often / When: End of each day
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

Cost Savings

PM #22 - Contract cost savings due to value engineering
  • Performance Goal: Eliminate actual contract growth by achieving a score of 1 using the equation of final cost divided by original contract allotment
  • Measure of Effectiveness / Units: Total final contract costs divided by the original allotment
  • How to measure: Use actual final contract cost data versus the DOT-established contract allotment
  • How often / When: End of contract
  • Evaluator: Construction Contractor or State DOT or Independent Evaluator

Customer Focus/User Satisfaction

PM #23 - Customer Satisfaction
  • Performance Goal: Based on survey results, 80% of travelers were satisfied with their driving experience during the project
  • Measure of Effectiveness / Units: Each 5-point Likert scale survey (i.e., 1 = very satisfied, 2 = somewhat satisfied, 3 = averagely satisfied, 4 = not satisfied, 5 = very dissatisfied)
  • How to measure: Likert scale with one question on user satisfaction (i.e., "How satisfied were you with your driving experience?")
  • How often / When: At 25%, 50%, 75%, and 100% project completion
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

Environmental

PM #24 - Watershed Quality Management
  • Performance Goal: Reduce sediment loads to 5% less than the pre-construction conditions
  • Measure of Effectiveness / Units: Turbidity
  • How to measure: Turbidity meter
  • How often / When: At pre-construction, on a set schedule, and at project completion
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #25 - Recycling and Reuse
  • Performance Goal: Capture and recycle/recover 90% of recyclable materials used on the project
  • Measure of Effectiveness / Units: Tons for the project
  • How to measure: Ratio of recycled/recovered tons over available tons
  • How often / When: At 25%, 50%, 75%, and 100% project completion
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

PM #26 - Construction Noise
  • Performance Goal: Noise due to construction work = 95 dBA at 100 yards from the construction site
  • Measure of Effectiveness / Units: dBA levels for the project
  • How to measure: Sound level measuring device
  • How often / When: Hourly
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT

Innovation

PM #27 - Implementation
  • Performance Goal: Implementation of project innovations is equal to the project goal
  • Measure of Effectiveness / Units: Innovations implemented on the project compared to innovations proposed by the Contractor for the project
  • How to measure: Ratio of innovations implemented over innovations proposed by the Contractor
  • How often / When: At project completion
  • Evaluator: Construction Contractor or Independent Evaluator or State DOT
Sample RFP Section E - Measurement Methodology, Inspection And Acceptance

E.1 Performance monitoring is a key component of this contract. Both the Contracting Officer's Technical Representative (COTR) and the Contractor must actively monitor performance to ensure that the construction is successfully completed and the Performance Goals are met.

E.2 The Contractor is free to use any reasonable method it believes appropriate to monitor performance, discover issues, and take remedial action as appropriate to meet the Performance Goals.

E.3 The Owner Agency's intent is not to dictate how the Contractor chooses to monitor its own performance, but rather to know that the Contractor is meeting the Performance Goals set forth in this RFP. As a result, this section defines the Owner Agency's performance monitoring program. The Contractor must also have its own performance monitoring program, which must be described in the Contractor's Quality Management Plan and proposal (see Section F).

E.4 The Owner Agency and Federal Highway Administration (FHWA) representatives will conduct periodic (e.g., daily, monthly, quarterly, annually) performance monitoring and evaluations. The combination of the selected monitoring levels shall help ensure progress and acceptable performance throughout the term of the contract.

E.5 The COTR and the Contractor will conduct performance monitoring. The Owner Agency inspectors may inspect the quality of the work performed to ensure that it meets applicable specifications. The COTR's role is to verify that the desired outcome (construction is completed and Performance Goals are met) is produced.

E.6 Daily Monitoring

E.6.1 The Contractor shall maintain a daily log for the Project. The log must contain information regarding:

  1. Activities of the Contractor's crews, including the locations where work is performed;
  2. Complaints received from the general public for which Contractor response is required;
  3. Unusual or unexpected conditions uncovered in the course of work activities;
  4. Incidents involving safety either of the general public or Contractor work forces; and
  5. Quality testing results.
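A hypothetical sketch of a structured daily log entry covering these five categories of information (the field names are illustrative, not contract language) follows:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DailyLogEntry:
    """One day's entry in the Contractor's project log (illustrative fields)."""
    log_date: date
    crew_activities: list = field(default_factory=list)    # activities and work locations
    public_complaints: list = field(default_factory=list)  # complaints requiring response
    unusual_conditions: list = field(default_factory=list)
    safety_incidents: list = field(default_factory=list)   # public or work-force incidents
    quality_test_results: list = field(default_factory=list)

entry = DailyLogEntry(
    log_date=date(2011, 4, 4),
    crew_activities=["Paving crew: eastbound lane 1, Sta. 10+00 to 15+00"],
    quality_test_results=["Density test #12: passed"],
)
print(entry.log_date, len(entry.crew_activities))
```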

E.6.2 The COTR shall track the daily activities against the work schedule. The Contractor shall advise the COTR of any variations from the work schedule.

E.6.3 The Contractor shall monitor the daily activities of the field crews, and obtain the following data:

  1. Types of work being performed and location;
  2. Issues and situations encountered or reported by the public and actions taken to mitigate them;
  3. Coordination among Contractor staff, Owner Agency personnel, utility operations, and others whose work impacts the items under this RFP.

E.6.4 The Contractor's daily reports must be available to the COTR to assist in verifying daily progress under the contract. A good working relationship between the COTR and the Contractor's day-to-day Project manager is essential for Project success.

E.6.5 The Owner Agency or its representative will conduct reviews. If it is determined during any review that work does not meet the quality standards outlined in the Standard Specifications, or the required contract Performance Goals, the Owner Agency or the Contractor will address the issue at no additional cost to the Owner Agency.

E.7 Cyclical Evaluations

E.7.1 Note: This section will specify the frequency of the evaluation. There are a number of frequency options for measuring performance, including:

  1. Continuous Measurement
  2. Cyclic (Hourly, Daily, Weekly, Monthly, Quarterly, Annually)
  3. End of project or at project milestones
  4. Long-term

The frequency will depend largely on the specific performance goal, and must be defined for each goal. For example, some congestion goals may need to be measured continuously, while pavement smoothness would likely be measured at the end of the project, and perhaps on a long-term basis. Also, as dictated by specific Performance Goals, the Contractor may need to collect some data hourly, but it may be more reasonable to present these data once a month to the COTR.

E.7.2 At specified intervals throughout the project, the COTR or his designee(s) and the Contractor (or its representative) shall perform an evaluation of the work zone and/or the Contractor's records of actions completed in that period to review Contractor progress and performance.

The COTR also reserves the right to perform unscheduled or "surprise" inspections. These evaluations shall be objective evaluations of the Contractor's performance against the Performance Goals. The evaluators will review the work completed or in progress and shall assign the appropriate Level of Performance score.

The evaluator personnel shall be kept as consistent as possible to ensure comparability of the reviews from month to month. Randomly selected samples may be generated for items included in each category each period; this will help the COTR and Contractor avoid reviewing only problematic or successful areas. An approximate 10% sampling rate may be used to select the review items.

The frequency of data collection may be impacted by the innovations introduced on the Project. For example, if a long-lasting material is proposed and implemented, this may necessitate only intermittent site visits to collect resulting data. There may be other economic, temporal, spatial, or other indicators that allow for reduced data collection/analysis efforts.

The COTR or his designee(s) shall generate reports that summarize each review's findings, shall note deficiencies throughout the Evaluation, and shall include these deficiencies in the quarterly report.

E.7.3 To help identify trends, the Owner Agency or its designee(s) shall summarize and compare the review results against the results for previous periods. The Owner Agency shall also compare the results against either the baseline condition or the previous Comprehensive Evaluation.

E.7.4 The COTR shall discuss the results of the Evaluations with the Contractor. The COTR shall also report a general level of performance satisfaction along with recommendations and concerns. The Contractor also may bring issues to the attention of the COTR, along with suggestions for future activities. Periodically, the COTR may visit sites where Project personnel have reported deficiencies and for which the Contractor must perform remedial work.

E.7.5 The Owner Agency shall record these Evaluations via electronic media to provide a record of the condition of the project. The Owner Agency shall provide a copy of each recording to the Contractor.

E.8 Comprehensive Evaluation

E.8.1 The COTR or his designee (or representative) will perform an extensive, objective Evaluation at least once in every 12-month period. To measure performance, the Owner Agency or its designee(s) will compute performance scores for each performance goal, as well as an overall summary score. The score for each performance goal will be computed by averaging the results across the multiple samples taken for that goal. The Owner Agency or its designee(s) will use these summary scores as an indicator of the Contractor's performance and to compute incentive and disincentive fees. While the averaging technique will be used to generate the summary scores, it must be stressed that the minimum requirement is for all groups to meet the performance goals. The Contractor shall meet with the COTR after each Comprehensive Evaluation to discuss remediation plans for any items that do not meet the performance goals, whether or not the performance goal is met when scores are averaged across multiple samples. Continued failure to perform, as determined by the Owner Agency or its designee(s), may result in default.

E.8.2 In computing the overall summary Performance Score, the Owner Agency shall apply its preferred weights for the various categories; the example below uses the weights shown in the table.

Category weights (to be determined by the Owner Agency for each Category; the weights should add to 100):

  • Safety: TBD
  • Construction Congestion: TBD
  • Quality: TBD
  • Time: TBD
  • Cost Savings: TBD
  • Customer Focus/User Satisfaction: TBD
  • Environmental: TBD
  • Innovation: TBD
  • Total: 100

E.8.3 The COTR will compare the results of the Comprehensive Evaluation with prior years' inspections and with the baseline conditions. The COTR will report all failures to meet performance goals. The contractor shall advise the COTR of the actions proposed to remedy any deficiencies along with the time frame for taking those actions. The Contractor must repair all noted problems to meet the performance goals.

E.8.4 To compute the Total score for the Comprehensive Evaluation, the Owner Agency will:

  • average the sample scores for each performance measure
  • perform a weighted average of the performance measure scores to compute the score for the Category
  • perform a weighted average of the Category scores to compute the Total score.
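A minimal sketch of this two-level roll-up, assuming hypothetical sample scores and weights (the actual category weights are set by the Owner Agency and must total 100), is:

```python
def average(scores):
    return sum(scores) / len(scores)

def weighted_average(score_weight_pairs):
    """Weighted average of (score, weight) pairs; weights need not sum to 1."""
    total_weight = sum(w for _, w in score_weight_pairs)
    return sum(s * w for s, w in score_weight_pairs) / total_weight

# 1. Average the sample scores for each performance measure (hypothetical data).
smoothness = average([92, 88, 95])
noise = average([80, 85])

# 2. Weighted average of performance-measure scores -> Category score.
quality_category = weighted_average([(smoothness, 60), (noise, 40)])

# 3. Weighted average of Category scores -> Total score (weights set by the Agency).
total = weighted_average([(quality_category, 20), (90.0, 30), (85.0, 50)])
print(round(quality_category, 1), round(total, 1))  # 88.0 87.1
```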
Sample RFP Section F Additional Materials
F.3 Quality Management Plan

F.3.1 Quality Management Plan

Within 30 Days from the Contract Award Date, the Contractor shall submit to the COTR an electronic copy and 10 bound paper copies of a detailed Quality Management (QM) Plan that describes by Category how the contractor shall monitor its own performance to ensure that Performance Goals are achieved. The QM Plan shall define the procedures to ensure that all work meets or exceeds the Performance Goals. The QM Plan also shall define reporting procedures to the Owner Agency to ensure approval of proposed work, services, and products. The Contractor is allowed to deviate from the Plan only with the express consent of the COTR. The Contractor must highlight innovations that deviate from the specifications set forth in the Owner Agency Standard Specifications in the Quality Assurance/Quality Control Plan. If approved in writing by the CO, these deviations shall become the specifications for this contract. Otherwise, the standard specifications shall govern all work performed under this contract.

  1. The Contractor must consult with the COTR and appropriate Owner Agency staff in preparing the QM Plan. The following elements shall be required:
    1. The Contractor's patrolling QA/QC Plan to identify areas that are not meeting the Performance Goals.
    2. The Contractor's QA/QC Plan to ensure that quality work is performed.
    3. The Contractor's QA/QC Plan to monitor quality after work has been completed.
    4. The Contractor's facilities, equipment, and materials available to perform all tasks set forth in this RFP.
    5. The Contractor's QA/QC Plan to ensure that all equipment remains in good working order and is available to perform necessary work.
    6. The Contractor's QA/QC Plan to ensure that all materials meet appropriate specifications, are stored properly, and are available as needed.
    7. The Contractor's QA/QC Plan to conduct regular public surveys to determine the public's satisfaction with the overall quality and condition of the work covered under this contract.
    8. The Contractor's QA/QC Plan for reporting repair needs that are outside of the scope of this contract.
    9. The Contractor's QA/QC Plan for proposing and receiving approval on any innovations.
Sample RFP Section H Additional Materials - Special Contract Requirements
H.1 Performance Incentives And Disincentives

H.1.1 The Contractor shall be eligible for an incentive fee or subject to a disincentive fee for each Category, which is tied to the Comprehensive Evaluation, and is based upon performance throughout the year. This award is designed to reward performance that meets or exceeds the Performance Goals. If the Owner Agency determines the Contractor's performance to be above or below the Performance Goals for the Project, the Owner Agency shall compute the incentive fee or disincentive fee as described in Sections H.1.2, H.1.3, H.1.4, H.1.5 and H.1.6.

H.1.2 The amount the Contractor is eligible to receive for performance in a given year shall not exceed five percent (5%) of the fixed price amount paid to the Contractor under this contract for that year. The disincentive fee shall also not exceed five percent (5%) of the fixed price amount paid to the Contractor under this contract for that year. After the Comprehensive Evaluation, the Performance Evaluation Board (PEB) shall advise the Contracting Officer on the amount of the total incentive fee to be received by the Contractor or the disincentive fee to be applied to the Contractor. The Contracting Officer shall exercise independent discretion in determining whether to award the Contractor an incentive fee or assess a disincentive fee.

H.1.3 In advising the Contracting Officer on the amount of the incentive fee to be received or the disincentive fee to be applied, the PEB shall examine each of the Performance Goals and, based upon the Contractor's reports and reports by Owner Agency personnel, determine the extent to which the Performance Goals have been met or exceeded. The PEB shall generate a PEB score with a scale of 0 to 100, with 40 of the 100 points being made up of Monthly Evaluation scores, 50 of the 100 points being made up of the Comprehensive score, and 10 of the 100 points for a subjective score (see section H.1.4). These proportions or "weights" reflect the Owner Agency's priorities, and the fact that the Contractor must perform throughout the year, and not just at the time of the Comprehensive Evaluation. The Owner Agency will calculate the Monthly Evaluation score portion of the PEB score by taking the average of the 11 Monthly Evaluation scores (out of 100) for that year and multiplying it by 40/100. The Owner Agency will calculate the Comprehensive score portion of the PEB score by dividing the Comprehensive Evaluation score (out of 100) by 2. The PEB shall carefully consider the results of the Monthly and Comprehensive Evaluations in determining the award.

H.1.4 The final 10% of the PEB score shall be subjective, and shall be assigned by the PEB. In assigning this score, the PEB shall consider to what extent the Contractor has met the Performance Goals system-wide (score of 4 or higher for each sample in the Comprehensive Evaluation), as well as to what extent the contractor has met the Partnering goals that shall be established in the Partnering process.

H.1.5 The PEB will compute the PEB score by summing the Monthly Evaluation score portion, the Comprehensive Evaluation score portion, and the subjective score, as described in H.1.3 and H.1.4.

H.1.6 In advising the Contracting Officer on the incentive fee or the disincentive fee, the PEB shall use the table below. If the PEB score falls between two scores in the table, the PEB will compute the Incentive Fee percentage or Disincentive Fee percentage using a proportional scale. For example, if the PEB score were 98, the percentage of the 5% Incentive Fee awarded would equal:

Step 1. Looking at the table below, the example PEB score of 98 falls between the PEB scores of 97.5 and 100, with corresponding Percentages of 5% Incentive Fee Awarded of 95 and 100, respectively.

Step 2. Calculate the difference between the example PEB score of 98 and the next lower PEB score from the table below (which is 97.5). The difference is: 98 - 97.5 = 0.5.

Step 3. Divide the 0.5 from step 2 by the difference between the PEB scores (100 - 97.5 = 2.5), which would be 0.5/2.5 = 0.2.

Step 4. Multiply the 0.2 from step 3 by the corresponding difference between the Percentages of 5% Incentive Fee Awarded (100 - 95 = 5), which would be 0.2 x 5 = 1.0.

Step 5. To obtain the Percentage of 5% Incentive Fee Awarded for the example PEB score, add the 1.0 from step 4 to the Percentage of 5% Incentive Fee Awarded for a score of 97.5 (95), which would be 95 + 1.0 = 96.

Steps 1-5, as described above, can also be shown in the following mathematical equation:

95 + (((98 - 97.5)/(100 - 97.5)) x (100 - 95)) = 96

PEB Score | Percentage of 5% Incentive Fee Awarded | Percentage of -5% Disincentive Fee Applied
100 | 100 | 0
97.5 | 95 | 0
95 | 85 | 0
92.5 | 75 | 0
90 | 65 | 0
87.5 | 50 | 0
85 | 0 | 0
82.5 | 0 | 0
80 | 0 | 0
77.5 | 0 | 45
75 | 0 | 60
72.5 | 0 | 75
70 | 0 | 95
Less than 70 | 0 | 100
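A minimal sketch of this proportional lookup against the table above (the function name and table encoding are illustrative; it reproduces the PEB-score-of-98 example from the steps above):

```python
# (PEB score, % of 5% incentive fee awarded, % of -5% disincentive fee applied),
# taken from the table above, highest score first.
FEE_TABLE = [
    (100.0, 100, 0), (97.5, 95, 0), (95.0, 85, 0), (92.5, 75, 0),
    (90.0, 65, 0), (87.5, 50, 0), (85.0, 0, 0), (82.5, 0, 0), (80.0, 0, 0),
    (77.5, 0, 45), (75.0, 0, 60), (72.5, 0, 75), (70.0, 0, 95),
]

def fee_percentages(peb_score):
    """Interpolate incentive/disincentive percentages between table rows (H.1.6)."""
    if peb_score >= FEE_TABLE[0][0]:
        return FEE_TABLE[0][1], FEE_TABLE[0][2]
    if peb_score < FEE_TABLE[-1][0]:
        return 0, 100  # "Less than 70" row
    for (hi, hi_inc, hi_dis), (lo, lo_inc, lo_dis) in zip(FEE_TABLE, FEE_TABLE[1:]):
        if lo <= peb_score <= hi:
            fraction = (peb_score - lo) / (hi - lo)
            incentive = lo_inc + fraction * (hi_inc - lo_inc)
            disincentive = lo_dis + fraction * (hi_dis - lo_dis)
            return incentive, disincentive
    raise ValueError("PEB score out of range")

print(fee_percentages(98))  # (96.0, 0.0), matching the worked example
```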

H.1.7 Example of Incentive Fee Calculation

Assumptions:

  1. The 11 Monthly Evaluation Scores (there is no Monthly Evaluation in the month that the Comprehensive Evaluation is conducted):
    78, 82, 86, 87, 80, 84, 86, 87, 80, 86, 87
  2. Comprehensive Evaluation Score: 87
  3. Subjective Rating Score from the PEB: 8
  4. Amount paid to the Contractor during the period being evaluated: $10M

Calculation:

Step 1. Find the average of the 11 Monthly Evaluation Scores by adding the eleven Monthly scores and dividing the sum by eleven, which is:

(78+80+80+82+84+86+86+86+87+87+87)/11=83.9

Step 2. As described in section H.1.3, "The PEB shall generate a PEB score with a scale of 0 to 100 with 40 of the 100 points being made up of Monthly Evaluation scores, 50 of the 100 points being made up of the Comprehensive score, and 10 of the 100 points for a subjective score (See section H.1.4)." Take 40% of the average of the Monthly Evaluation scores (83.9) calculated in step 1 above, which is: 0.4 x 83.9 = 33.6

Step 3. The PEB Score would be equal to 33.6 from step 2, plus 50% (50/100 = 0.5) of 87 (assumption 2), plus 8 (assumption 3), which is equal to 85.1 (as shown in the following mathematical equation): 33.6 + (0.5 x 87) + 8 = 85.1

Step 4. In order to calculate the Percentage of 5% Incentive Fee Awarded, follow steps 1-5 of section H.1.6 as follows:

Step 5. Looking at the table in section H.1.6, the calculated PEB score of 85.1 falls between the PEB scores of 85 and 87.5, with corresponding Percentages of 5% Incentive Fee Awarded of 0 and 50, respectively.

Step 6. Calculate the difference between the calculated PEB score of 85.1 and the next lower PEB score from the table shown in section H.1.6 (85). The difference is: (85.1-85=0.1).

Step 7. Divide the 0.1 from step 6 by the difference between the PEB scores (87.5 - 85 = 2.5), which would be 0.1/2.5 = 0.04.

Step 8. Multiply the 0.04 from step 7 by the corresponding difference between the Percentages of 5% Incentive Fee Awarded (50 - 0 = 50), which would be 0.04 x 50 = 2.

Step 9. To obtain the Percentage of 5% Incentive Fee Awarded, add 2 from step 8 to the corresponding Percentage of 5% Incentive Fee Awarded of 0, which would be 0+2=2

Steps 5-9 as described above can also be shown in the following mathematical equation:

0 + ((85.1 - 85)/(87.5 - 85)) x (50 - 0) = 2%

Step 10. So if the price of the Category over the period being evaluated was $10M, the incentive fee awarded would be 2% (2/100=0.02) of 5% (5/100=0.05) of $10M, which can mathematically be shown as:

0.02 x 0.05 x $ 10M = $10,000
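Pulling the pieces together, the sketch below reproduces this worked example end to end, mirroring the intermediate rounding used in the steps above (values and variable names come from the example assumptions):

```python
monthly_scores = [78, 82, 86, 87, 80, 84, 86, 87, 80, 86, 87]  # assumption 1
comprehensive_score = 87                                        # assumption 2
subjective_points = 8                                           # assumption 3
amount_paid = 10_000_000                                        # assumption 4 ($10M)

# Step 1: average of the 11 Monthly Evaluation scores, rounded as in the example.
monthly_average = round(sum(monthly_scores) / len(monthly_scores), 1)  # 83.9

# Steps 2-3: PEB score = 40% of monthly average + 50% of comprehensive + subjective.
peb = round(0.40 * monthly_average, 1) + 0.50 * comprehensive_score + subjective_points
# 33.6 + 43.5 + 8 = 85.1

# Steps 5-9: proportional interpolation between the 85 and 87.5 rows of the H.1.6 table.
incentive_pct = 0 + ((peb - 85) / (87.5 - 85)) * (50 - 0)  # 2.0

# Step 10: the fee is that percentage of the 5% maximum applied to the amount paid.
incentive_fee = (incentive_pct / 100) * 0.05 * amount_paid  # $10,000
print(round(peb, 1), round(incentive_pct, 1), round(incentive_fee))  # 85.1 2.0 10000
```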
