
Construction

Performance Contracting for Construction

Performance Measurement Methodology

Process

If you set goals, you need to measure performance against those goals to determine to what extent they were met. The measurement methodology will define what gets measured by whom and when, as well as what to do with the results. In performance contracts, there are numerous techniques for monitoring performance. This methodology considers the logistics for how the performance goals will be measured, evaluated and scored. Recommendations and lessons learned from real-world performance contracts are provided. Materials that may be used by the Owner Agency to implement the processes involved with measuring performance also have been included. The overall process is summarized in the figure below.

Flow chart showing the process for defining the measurement methodology.
Figure 4. Process for Defining the Measurement Methodology

What Gets Measured (When, By Whom, and How)

Frequency
There are a number of frequency options for measuring performance, including:
  1. Continuous Measurement
  2. Cyclic (Hourly, Daily, Weekly, Monthly, Quarterly, Annually)
  3. Start of project, end of project and at project milestones
  4. Long-term

The frequency will depend largely on the specific performance goal, and the frequency of measure should be defined for each performance goal (see Table 3). For example, the frequency of measurement of some congestion goals may need to be continuous, but the pavement smoothness likely would be measured at the end of the project, and perhaps on a long-term basis. Unscheduled or "surprise" inspections can be incorporated into the project's evaluation as well.

The frequency of data collection or testing, which will affect the frequency of the overall performance evaluation, may be impacted by the innovations introduced on the project. For example, if a long-lasting material is proposed and implemented, this may necessitate longer-term intermittent site visits to collect resulting data.

The frequency of data dissemination and presentation is also important to consider. For example, for a performance goal such as motorist delay, the contractor logically would collect data on an hourly basis, but it may be more reasonable to present these data to the project team once a week or once a month. There may be other economic, temporal, or spatial impacts that allow for reduced work and data collection/analysis efforts. Many of these may be outside of the contractor's control (e.g., extreme weather, terrorist attacks).

We have provided, as an example, a discussion on organizing quarterly milestone evaluations in this section. At quarterly milestone intervals throughout the project (i.e., 0%, 25%, 50%, 75% and 100% project completion), the project team (i.e., Owner Agency staff, the Contractor, other parties) performs an evaluation of the project, the work zone and/or the Contractor's records of actions completed in that period to review Contractor progress and performance.

The frequency of measuring performance will also assist the contractor and Owner Agency when planning and performing work. The following figure demonstrates an example iterative process associated with a quarterly performance review. It begins with a review and inspection of the work, follows with the production of a digital record (i.e., DVD) and a report showing what was found during the review, and ends with the project team planning and performing work based upon the findings.

The figure shows a photo of DVDs and of a sample report page. The Owner Agency performs a quarterly review and produces DVDs and a report.  The contractor then uses the report to plan/perform work, the results of which get measured in the next quarterly review.  Thus, the process is circular.
Figure 5. Quarterly Milestone Review Process.

Measuring performance will also assist the contractor and Owner Agency when planning and performing work on an annual basis. The following figure demonstrates an example iterative process associated with a comprehensive annual performance review.

The figure shows a sample comprehensive annual review process.  The Owner Agency chooses sample for review and performs an evaluation.  The contractor uses the results of the evaluation to plan/perform work. The cycle then begins again for the next evaluation with choosing samples.  Thus, the process is circular.
Figure 6. Comprehensive Annual Review Process.

Evaluators

The evaluators for this effort generally would come from three pools: the construction contractor, an independent evaluator, and the Owner Agency. We recommend using an unbiased, independent evaluator to measure performance against the goals. FHWA-required verification testers could also be included as a fourth pool of evaluators.

The independent evaluator should not be involved in the day-to-day activities of the project to ensure a fair evaluation; this would also prevent the evaluator from focusing solely on either positive or negative data, records, areas, or other project information.

This evaluator could be supplied by FHWA. Alternately, a consultant familiar with the type of project may serve as an evaluator. However, the Owner Agency and the contractor may need or choose to collect some or all of the data as appropriate. An assignment of responsibility must be made for each performance goal (see Table 3).

When developing goals, it is important to consider how they will be measured on a technical level and to define this in the solicitation package. For example, if specialized equipment is required for an evaluation, will the contractor or the Owner Agency furnish this equipment? A decision such as this one will influence the ultimate cost of the project and must be carefully taken into account and described in the solicitation package.

We recommend recording all evaluations via electronic media to provide a record of the condition of the project. The electronic media record serves to document the changes to the project setting over time and is useful as reference material in planning future work. We suggest that each organization involved on the project receive a copy of each of the evaluations via DVD.

Measures of Effectiveness

For each goal, it is important to define the measure of effectiveness, the unit of measure, and the method that will be used to measure performance. The measure of effectiveness is the entity under scrutiny (e.g., each user satisfaction survey, each acre of wetlands restored). The unit of measure defines the size of the sample to be taken (e.g., 1/10 mile, one month, the entire work zone).

The methods of measurement seek to answer the following questions:
  1. How does one measure this entity?
  2. What processes must be followed to obtain a reliable measurement?

The method of measurement should be nationally accepted, if possible. As an example, AASHTO-accepted standard measurement practices will provide good methods of measurement for some goals. The Owner Agency will need to define units and methods for each goal.
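As an illustration only (not part of the Guide's sample materials), the sketch below shows one way an Owner Agency might tabulate these decisions for each goal before drafting the solicitation package. All goal names, methods, units, frequencies, and evaluator assignments shown are hypothetical placeholders.

```python
# Minimal sketch (illustrative only): one way to tabulate, for each performance
# goal, the measure of effectiveness, unit, method, frequency, and evaluator.
from dataclasses import dataclass

@dataclass
class GoalMeasurement:
    goal: str        # performance goal being measured
    measure: str     # measure of effectiveness (entity under scrutiny)
    unit: str        # unit / size of sample
    method: str      # measurement method (nationally accepted where possible)
    frequency: str   # continuous, cyclic, milestone, long-term, etc.
    evaluator: str   # contractor, independent evaluator, or Owner Agency

measurement_plan = [
    GoalMeasurement(
        goal="Pavement smoothness",
        measure="Roughness of the finished pavement",
        unit="0.1-mile segment",
        method="Inertial profiler per an AASHTO-accepted standard",
        frequency="End of project; optional long-term follow-up",
        evaluator="Independent evaluator",
    ),
    GoalMeasurement(
        goal="Work zone capacity",
        measure="Throughput during peak traffic periods",
        unit="Entire work zone",
        method="Continuous traffic counts",
        frequency="Continuous",
        evaluator="Contractor, with Owner Agency verification",
    ),
]

for item in measurement_plan:
    print(f"{item.goal}: {item.method} ({item.unit}), {item.frequency}, by {item.evaluator}")
```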

Sampling Strategies

The Owner Agency must choose a sampling strategy for the reviews, whether they are daily, weekly, monthly, quarterly, or on any other cycle. The options are sampling data, items, or assets to obtain a representative view of the work, or using a 100% sample. We recommend avoiding a 100% appraisal due to the large cost and time expenditures associated with these reviews.

We suggest randomly sampling a portion, perhaps 10%, of the category items to be reviewed. A randomly-generated sample will prevent the evaluators from focusing solely on either good or poor sections. Randomly-selected samples may be generated for each item included in each category for each review. We also recommend specifying the sampling process clearly in the solicitation package, as the process will influence the price.
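As a minimal sketch of this recommendation (assuming a simple list of reviewable items; the segment names and count below are hypothetical), a random sample of roughly 10% could be drawn as follows:

```python
# Minimal sketch (illustrative only): randomly select roughly 10% of the
# reviewable items so evaluators are not drawn only to good or poor sections.
import math
import random

def select_review_sample(items, fraction=0.10, seed=None):
    """Return roughly `fraction` of `items`, chosen at random."""
    rng = random.Random(seed)
    sample_size = max(1, math.ceil(len(items) * fraction))
    return rng.sample(items, sample_size)

# Hypothetical example: 200 reviewable 0.1-mile segments
segments = [f"Segment {i:03d}" for i in range(1, 201)]
print(select_review_sample(segments, seed=42))  # 20 randomly chosen segments
```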

Scoring Styles
MDOT M-115 Pilot Project: MDOT conducted a survey before and after construction to learn users' opinions on the following topics: proposed construction schedule, daytime construction plan, work zone safety, pavement and ride quality conditions, and traveling delays. The outcomes of the survey were included in the SEP-14 report as a performance indicator.

Two primary styles or strategies for measuring performance are:

  1. Subjective
  2. Objective

Of course, it is also possible to combine the two styles. For example, a subjective monthly evaluation could coexist independently with an objective annual evaluation. Alternatively, a monthly evaluation could have both objective and subjective components.

We recommend that these evaluations be objective evaluations of the Contractor's performance against the performance goals. While subjective evaluations are helpful in capturing the project team's perceptions of the work completed and remaining, objective evaluations minimize the amount of debate over performance against the goals. It is also simpler to incorporate incentive and disincentive fees into the solicitation package when using unbiased, hard data and scores. The Owner Agency and any evaluators should review the work completed or in progress and the results; then the Owner Agency and any evaluators should assign the appropriate Level of Performance score.

The evaluator personnel should be kept as consistent as possible to ensure comparability of the reviews. If new evaluators are introduced sporadically, delays may result while the Owner Agency spends time and money bringing the new evaluators up to speed on the project, data, and context. Again, the objective evaluator should not be involved in the day-to-day minutiae of the project, but the evaluator should be familiar with the appropriate technical logistics of the project.

What to Do With the Results

When you evaluate performance, you end up with a large number of scores for various samples and performance goals. The challenge is to use these scores to communicate the effectiveness in meeting the performance goals. Different levels of management will have different needs in terms of level of detail. Upper-level management tends to be interested in summary scores at the project level, whereas project-level management tends to be interested in the detailed results.

The Owner Agency and independent evaluator or consultant should generate reports that summarize the review's findings. Results could be presented in both data summaries and written commentary sections. Deficiencies and problems found during the evaluation should be highlighted; simple bulleted lists or checklists may be used to convey this information.

We recommend that the Owner Agency, independent evaluator, or consultant generate periodic reports that summarize each review's findings. To help identify trends, the project team should compare the review results against the results for previous periods. It is also useful to compare the results against the baseline condition (i.e., the project at 0% completion, the day before the contract starts), the previous comprehensive evaluation, or both.

We recommend that the project team discuss the results of the evaluations. We also recommend that the Owner Agency use partnering sessions, as needed, to resolve any issues unearthed by the reviews. The Owner Agency also should report a general level of performance satisfaction along with recommendations and concerns. The contractor may bring issues to the attention of the project team, along with solutions and suggestions for future activities.

Methods for Combining Results into Summary Scores

The evaluation team could use the resulting scores for each goal (e.g., Recycling and Reuse) to obtain an overall category score (e.g., Environmental) and an overall project score. A sample process to obtain the latter two scores is detailed below.

Each performance goal should be assigned a computed relative weight. The Table below assigns relative weights to two sample performance goals; each goal in the project would need an associated weight. The owner agency can determine the criteria against which to rate the goals. For sample purposes here, we have used the main HfL high level goals.

For each performance goal below, the Owner Agency assigns a rating against each criterion (in this case, "Improve Safety," "Reduce Congestion due to Construction," "Improve Quality," and "Improve User Satisfaction"). On this scale, 5 is very important and 1 is not as important. The Computed Relative Weight is determined by summing the four ratings for the performance goal and dividing by 2 (to obtain a score out of 10). As the computed relative weight increases, so does the importance of the goal.

In the example below, Capacity (with a computed relative weight of 6.5) was determined to be more important than Recycling/Reuse (with a computed relative weight of 4.5).

These weights will help determine which goals have more of an impact on the overall score (and thus the application of incentive and disincentive fees, as discussed below). In the example, Capacity will have more impact on the score than Recycling/Reuse.

Table 2. Example Relative Weights for Two Sample Performance Goals
Category | Element | Performance Goal | a. Improve Safety | b. Reduce Congestion due to Construction | c. Improve Quality | d. Improve User Satisfaction | Computed Relative Weight (a+b+c+d)/2
Construction Congestion | Capacity | Capacity in the work zone [or work zone and alternate route(s)] during peak traffic periods is greater than or equal to 90% of the pre-construction capacity | 3 | 5 | 1 | 4 | 6.5
Environmental | Recycling and Reuse | Capture and recycle/recover 90% of recyclable materials used on project | 1 | 1 | 4 | 3 | 4.5
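As a minimal computational sketch of Table 2 (the ratings reproduce the two sample goals above; nothing beyond the (a+b+c+d)/2 rule is implied):

```python
# Minimal sketch of the Table 2 computation: each performance goal is rated 1-5
# against the four high-level criteria, and the computed relative weight is the
# sum of the ratings divided by 2 (a score out of 10).
criteria = ["Improve Safety", "Reduce Congestion due to Construction",
            "Improve Quality", "Improve User Satisfaction"]

# Ratings reproduce the two sample goals in Table 2.
ratings = {
    "Capacity (work zone >= 90% of pre-construction capacity)": [3, 5, 1, 4],
    "Recycling and Reuse (capture/recycle 90% of recyclables)": [1, 1, 4, 3],
}

for goal, scores in ratings.items():
    weight = sum(scores) / 2  # (a + b + c + d) / 2
    detail = ", ".join(f"{c}={s}" for c, s in zip(criteria, scores))
    print(f"{goal}: {detail} -> computed relative weight = {weight}")
# Capacity -> 6.5; Recycling and Reuse -> 4.5, matching Table 2
```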
Determining Incentive/Disincentive Fees with the Scores

MDOT M-115 Pilot Project: Ride quality incentives are set as listed below:

Incentive per ½ Mile Direction:

  • Ride quality index between 20 and 30 inches per mile - $2,500
  • Ride quality index between 0 and 20 inches per mile - $5,000

Bonus Incentive Entire Project:

  • Ride quality index below 30 inches per mile - $25,000

No disincentives apply; the Ride Quality Index must be 30 inches per mile or less.

Incentive and Disincentive fees are an innovative approach to motivate the contractor to meet the Performance Goals. If the Owner Agency is going to apply incentives and disincentives, there needs to be an objective system of determining the fees applied. The Owner Agency also needs to determine what reasonable incentive and disincentive fees are for the project and the locale. We have used 5% here as a sample, but 10% might be more appropriate depending on the situation. As Owner Agencies and contractors get more experienced with incentives and disincentives, the process will evolve.

The incentives and disincentives must be reasonable and meaningful. The incentives also must be achievable, or they will not have an impact.

While including disincentives is appropriate, the Owner Agency must realize that this level of risk will come at a cost. Disincentives are effective in pushing performance up to a point; beyond that point, an incentive is also needed.

The measurement methodology sample materials included below provide language for the solicitation package on applying incentive and disincentive fees. Some assumptions are made in this contract language. For example, the contractor shall be eligible for an incentive fee or subject to a disincentive fee for each Category, which is tied to the comprehensive annual evaluation, and is based upon performance throughout the year. This award is designed to reward performance that meets or exceeds the performance goals.

We also recommend that the Owner Agency form a Performance Evaluation Board (PEB) that will advise the Contracting Officer on the amount of the total incentive fee to be received by the Contractor or the disincentive fee to be applied to the Contractor. The PEB personnel should be kept consistent throughout the project's life. After each evaluation, the PEB would convene to review the scores and determine the appropriate course of action. In the sample materials, the PEB generates a score with a scale of 0 to 100; in the scale, 40 points come from monthly evaluation scores, 50 points come from the comprehensive annual score, and 10 points are subjectively produced:

PEB Score = (40% x average Monthly Evaluation score) + (50% x Comprehensive Evaluation score) + subjective score (up to 10 points)
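As a minimal sketch of this composition (the eleven monthly scores, the comprehensive score of 87, and the 8 subjective points below are the illustrative values used later in the sample Section H.1.7 materials, not prescribed values):

```python
# Minimal sketch of the sample PEB score formula: 40% of the average monthly
# score, plus 50% of the comprehensive score, plus up to 10 subjective points.
def peb_score(monthly_scores, comprehensive_score, subjective_points):
    """All evaluation scores are on a 0-100 scale; subjective_points is 0-10."""
    monthly_portion = 0.40 * (sum(monthly_scores) / len(monthly_scores))
    comprehensive_portion = 0.50 * comprehensive_score
    return monthly_portion + comprehensive_portion + subjective_points

# Illustrative values (matching the worked example in sample Section H.1.7)
print(peb_score([78, 80, 80, 82, 84, 86, 86, 86, 87, 87, 87], 87, 8))  # ~85.1
```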

Note that the materials provide a detailed breakdown of how to compute the incentive/disincentive fees. For the incentive/disincentive fee structure, the Owner Agency must decide how these should be linked to performance. For example, should each category be tied to individual incentive/disincentive fees, or should the collective categories be tied to an overall incentive/disincentive fee? We recommend tying each category to individual incentive and disincentive fees, as this allows the Owner Agency to adjust any project management strategies in case a category is deficient or problematic.

Lessons Learned from Real-World Performance Contracts

The following measurement methodology and incentive/disincentive lessons have been learned in real-world performance contracts:

  • Specify the performance measurement methodology clearly in your RFP/IFB - it will impact the price
  • Need to choose between sampling and 100% reviews
  • If sampling, choose the samples randomly - resist the temptation of only looking at the problems
  • Capture the performance reviews on video
  • Generate reports from the review that will be shared with all project partners
  • Present the results in a variety of ways to satisfy different interest levels
  • Weighted averages work for combining scores across multiple samples and categories, but can hide problems
  • Do not focus too long on the overall score - it is just an indicator
  • Make incentives achievable (and worthwhile)
  • Include disincentives to push performance
  • Incentives and disincentives should be balanced, fair, and reasonable
  • Incentive and disincentive fee structures evolve from one project to the next as Owner Agencies and Contractors develop experience
  • While including disincentives is appropriate, realize that you are going to pay for that risk up front
  • Be fair!

Sample Measurement Methods and Solicitation Package Materials

The following table provides the sample set of measurement methods for the menu of performance measures and categories presented in the Performance Goals Section of this Guide. All entries in this table can be adjusted to be suitable for the specific locality of the project.

Following the Measurement Methods Table are sets of sample supplemental wording for RFP Sections:

  • E - Measurement Methodology, Inspection, and Acceptance
  • F - Deliveries or Performance
  • H - Special Contract Requirements

Table 3. Sample Measurement Methods for the Performance Measure Menu in the Performance Goals Section

Table 3 is presented as a series of five images in the original document.

Table 3b. Measurement Methods for the Performance Measures Adopted in the Michigan Pilot Project

Table 3b is presented as a series of two images in the original document.

SAMPLE RFP SECTION E - MEASUREMENT METHODOLOGY, INSPECTION AND ACCEPTANCE

E.1 Performance monitoring is a key component of this contract. Both the Contracting Officer's Technical Representative (COTR) and the Contractor must actively monitor performance to ensure that the construction is successfully completed and the Performance Goals are met.

E.2 The Contractor is free to use any reasonable method it believes appropriate to monitor performance, discover issues, and take remedial action as appropriate to meet the Performance Goals.

E.3 The Owner Agency's intent is not to dictate how the Contractor chooses to monitor its own performance, but rather to know that the Contractor is meeting the Performance Goals set forth in this RFP. As a result, this section defines the Owner Agency's performance monitoring program. The Contractor must also have its own performance monitoring program, which must be described in the Contractor's Quality Management Plan and proposal (see Section F).

E.4 The Owner Agency and Federal Highway Administration (FHWA) representatives will conduct periodic (e.g., daily, monthly, quarterly, annual) performance monitoring and evaluations. The combination of the selected monitoring levels shall help ensure progress and acceptable performance throughout the term of the contract.

E.5 The COTR and the Contractor will conduct performance monitoring. The Owner Agency inspectors may inspect the quality of the work performed to ensure that it meets applicable specifications. The COTR's role is to verify that the desired outcome (construction is completed and Performance Goals are met) is produced.

E.6 DAILY MONITORING

E.6.1 The Contractor shall maintain a daily log for the Project. The log must contain information regarding:

  1. Activities of the Contractor's crews, including the locations where work is performed;
  2. Complaints received from the general public for which Contractor response is required;
  3. Unusual or unexpected conditions uncovered in the course of work activities;
  4. Incidents involving safety either of the general public or Contractor work forces; and
  5. Quality testing results.

E.6.2 The COTR shall track the daily activities against the work schedule. The Contractor shall advise the COTR of any variations from the work schedule.

E.6.3 The Contractor shall monitor the daily activities of the field crews, and obtain the following data:

  1. Types of work being performed and location;
  2. Issues and situations encountered or reported by the public and actions taken to mitigate them;
  3. Coordination among Contractor staff, Owner Agency personnel, utility operations, and others whose work impacts the items under this RFP.

E.6.4 The Contractor's daily reports must be available to the COTR to assist in verifying daily progress under the contract. A good working relationship between the COTR and the Contractor's day-to-day Project manager is essential for Project success.

E.6.5 The Owner Agency or its representative will conduct reviews. If it is determined during any review that work does not meet the quality standards outlined in the Standard Specifications, or the required contract Performance Goals, the Owner Agency or the Contractor will address the issue at no additional cost to the Owner Agency.

E.7 CYCLICAL EVALUATIONS

E.7.1 Note: This section will specify the frequency of the evaluation. There are a number of frequency options for measuring performance, including:

  1. Continuous Measurement
  2. Cyclic (Hourly, Daily, Weekly, Monthly, Quarterly, Annually)
  3. End of project or at project milestones
  4. Long-term

The frequency will depend largely on the specific performance goal, and must be defined for each goal. For example, the frequency of measurement of some congestion goals may need to be continuous, but pavement smoothness would likely be measured at the end of the project, and perhaps on a long-term basis. Also, as dictated by specific Performance Goals, the Contractor may need to collect certain data on an hourly basis, but it may be more reasonable to present these data to the COTR once a month.

E.7.2 At specified intervals throughout the project, the COTR or his designee(s) and the Contractor (or its representative) shall perform an evaluation of the work zone and/or the Contractor's records of actions completed in that period to review Contractor progress and performance.

The COTR also reserves the right to perform unscheduled or "surprise" inspections. These evaluations shall be objective evaluations of the Contractor's performance against the Performance Goals. The evaluators will review the work completed or in progress and shall assign the appropriate Level of Performance score.

The evaluator personnel shall be kept as consistent as possible to ensure comparability of the reviews from month to month. Randomly selected samples may be generated for items included in each category each period; this will help the COTR and Contractor avoid reviewing only problematic or successful areas. An approximate 10% sampling rate may be used to select the review items.

The frequency of data collection may be impacted by the innovations introduced on the Project. For example, if a long-lasting material is proposed and implemented, this may necessitate only intermittent site visits to collect resulting data. There may be other economic, temporal, spatial, or other indicators that allow for reduced data collection/analysis efforts.

The COTR or his designee(s) shall generate reports that summarize the review's findings, shall note deficiencies throughout the Evaluation, and shall include these deficiencies in the quarterly report.

E.7.3 To help identify trends, the Owner Agency or its designee(s) shall summarize and compare the review results against the results for previous periods. The Owner Agency shall also compare the results against either the baseline condition or the previous Comprehensive Evaluation.

E.7.4 The COTR shall discuss the results of the Evaluations with the Contractor. The COTR shall also report a general level of performance satisfaction along with recommendations and concerns. The Contractor also may bring issues to the attention of the COTR, along with suggestions for future activities. Periodically, the COTR may visit sites where Project personnel have reported deficiencies and for which the Contractor must perform remedial work.

E.7.5 The Owner Agency shall record these Evaluations via electronic media to provide a record of the condition of the project. The Owner Agency shall provide a copy of each recording to the Contractor.

E.8 COMPREHENSIVE EVALUATION

E.8.1 The COTR or his designee (or representative) will perform an extensive, objective Evaluation at least once in every 12-month period. To measure performance, the Owner Agency or its designee(s) will compute performance scores for each performance goal, as well as an overall summary score. The Owner Agency or its designee(s) will compute scores based on averaging results across multiple samples. The Owner Agency or its designee(s) will average the scores across the samples to obtain the score for the performance goal. The Owner Agency or its designee(s) will use these summary scores as an indicator of the Contractor's performance, and will use these scores to compute incentive and disincentive fees. While the averaging technique will be used to generate the summary scores, it must be stressed that the minimum requirement is to have all groups meet the performance goals. The Contractor shall meet with the COTR after each Comprehensive Evaluation to discuss remediation plans for any items that do not meet the performance goals, whether or not the performance goal is met when scores are averaged across multiple samples. Continued failure to perform, as determined by the Owner Agency or its designee(s), may result in default.

E.8.2 In computing the overall summary Performance Score, the Owner Agency shall apply its preferred weights for the various categories; the example below uses the weights shown in the table.

Category | Category Weight (1)
Safety | TBD
Construction Congestion | TBD
Quality | TBD
Time | TBD
Cost Savings | TBD
Customer Focus/User Satisfaction | TBD
Environmental | TBD
Innovation | TBD
Total | 100
(1) The Owner Agency will determine the weights for each Category. The weights should add to 100.

E.8.3 The COTR will compare the results of the Comprehensive Evaluation with prior years' inspections and with the baseline conditions. The COTR will report all failures to meet performance goals. The contractor shall advise the COTR of the actions proposed to remedy any deficiencies along with the time frame for taking those actions. The Contractor must repair all noted problems to meet the performance goals.

E.8.4 To compute the Total score for the Comprehensive Evaluation, the Owner Agency will:

  • average the sample scores for each performance measure
  • perform a weighted average of the performance measure scores to compute the score for the Category
  • perform a weighted average of the Category scores to compute the Total score.
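As a minimal sketch of this three-step roll-up (the measure names, sample scores, and weights below are hypothetical placeholders, not the TBD weights from E.8.2):

```python
# Minimal sketch of the E.8.4 roll-up; all names, scores, and weights below are
# hypothetical placeholders, not values prescribed by this RFP section.
def weighted_average(scores_and_weights):
    total_weight = sum(w for _, w in scores_and_weights)
    return sum(s * w for s, w in scores_and_weights) / total_weight

# Step 1: average the sample scores for each performance measure
measure_samples = {"Smoothness": [92, 88, 95], "Ride quality complaints": [80, 85]}
measure_scores = {m: sum(v) / len(v) for m, v in measure_samples.items()}

# Step 2: weighted average of the performance measure scores -> Category score
quality_score = weighted_average([(measure_scores["Smoothness"], 6.5),
                                  (measure_scores["Ride quality complaints"], 4.5)])

# Step 3: weighted average of the Category scores -> Total score (weights sum to 100)
total_score = weighted_average([(quality_score, 20), (90.0, 15), (85.0, 65)])
print(round(quality_score, 1), round(total_score, 1))
```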

SAMPLE RFP SECTION F ADDITIONAL MATERIALS

F.3 QUALITY MANAGEMENT PLAN

F.3.1 Quality Management Plan

Within 30 Days from the Contract Award Date, the Contractor shall submit to the COTR an electronic copy and 10 bound paper copies of a detailed Quality Management (QM) Plan that describes, by Category, how the Contractor shall monitor its own performance to ensure that the Performance Goals are achieved. The QM Plan shall define the procedures to ensure that all work meets or exceeds the Performance Goals. The QM Plan also shall define reporting procedures to the Owner Agency to ensure approval of proposed work, services, and products. The Contractor is allowed to deviate from the Plan only with the express consent of the COTR. In the Quality Assurance/Quality Control Plan, the Contractor must highlight any innovations that deviate from the specifications set forth in the Owner Agency Standard Specifications. If approved in writing by the CO, these deviations shall become the specifications for this contract. Otherwise, the standard specifications shall govern all work performed under this contract.

  A. The Contractor must consult with the COTR and appropriate Owner Agency staff in preparing the QM Plan. The following elements shall be required:
    • The Contractor's patrolling QA/QC Plan to identify areas that are not meeting the Performance Goals.
    • The Contractor's QA/QC Plan to ensure that quality work is performed.
    • The Contractor's QA/QC Plan to monitor quality after work has been completed.
    • The Contractor's facilities, equipment, and materials available to perform all tasks set forth in this RFP.
    • The Contractor's QA/QC Plan to ensure that all equipment remains in good working order and is available to perform necessary work.
    • The Contractor's QA/QC Plan to ensure that all materials meet appropriate specifications, are stored properly, and are available as needed.
    • The Contractor's QA/QC Plan to conduct regular public surveys to determine the public's satisfaction with the overall quality and condition of the work covered under this contract.
    • The Contractor's QA/QC Plan for reporting repair needs that are outside of the scope of this contract.
    • The Contractor's QA/QC Plan for proposing and receiving approval on any innovations.

SAMPLE RFP SECTION H ADDITIONAL MATERIALS - SPECIAL CONTRACT REQUIREMENTS

H.1 PERFORMANCE INCENTIVES AND DISINCENTIVES

H.1.1 The Contractor shall be eligible for an incentive fee or subject to a disincentive fee for each Category, which is tied to the Comprehensive Evaluation, and is based upon performance throughout the year. This award is designed to reward performance that meets or exceeds the Performance Goals. If the Owner Agency determines the Contractor's performance to be above or below the Performance Goals for the Project, the Owner Agency shall compute the incentive fee or disincentive fee as described in Sections H.1.2, H.1.3, H.1.4, H.1.5 and H.1.6.

H.1.2 The amount the Contractor is eligible to receive for performance in a given year shall not exceed five percent (5%) of the fixed price amount paid to the Contractor under this contract for that year. The disincentive fee shall also not exceed five percent (5%) of the fixed price amount paid to the Contractor under this contract for that year. After the Comprehensive Evaluation, the Performance Evaluation Board (PEB) shall advise the Contracting Officer on the amount of the total incentive fee to be received by the Contractor or the disincentive fee to be applied to the Contractor. The Contracting Officer shall exercise independent discretion in determining whether to award the Contractor an incentive fee or assess a disincentive fee.

H.1.3 In advising the Contracting Officer on the amount of the incentive fee to be received or the disincentive fee to be applied, the PEB shall examine each of the Performance Goals and, based upon the Contractor's reports and reports by Owner Agency personnel, determine the extent to which the Performance Goals have been met or exceeded. The PEB shall generate a PEB score on a scale of 0 to 100, with 40 of the 100 points being made up of Monthly Evaluation scores, 50 of the 100 points being made up of the Comprehensive Evaluation score, and 10 of the 100 points being a subjective score (see Section H.1.4). These proportions or "weights" reflect the Owner Agency's priorities and the fact that the Contractor must perform throughout the year, and not just at the time of the Comprehensive Evaluation. The Owner Agency will calculate the Monthly Evaluation portion of the PEB score by taking the average of the 11 Monthly Evaluation scores (out of 100) for that year and multiplying it by 40/100. The Owner Agency will calculate the Comprehensive portion of the PEB score by dividing the Comprehensive Evaluation score (out of 100) by 2. The PEB shall carefully consider the results of the Monthly and Comprehensive Evaluations in determining the award.

H.1.4 The final 10% of the PEB score shall be subjective, and shall be assigned by the PEB. In assigning this score, the PEB shall consider to what extent the Contractor has met the Performance Goals system-wide (score of 4 or higher for each sample in the Comprehensive Evaluation), as well as to what extent the contractor has met the Partnering goals that shall be established in the Partnering process.

H.1.5 The PEB will compute the PEB score by summing the Monthly Evaluation score portion, the Comprehensive Evaluation score portion, and the subjective score, as described in H.1.3 and H.1.4.

H.1.6 In advising the Contracting Officer on the incentive fee or the disincentive fee, the PEB shall use the table below. If the PEB score falls between two scores in the table, the PEB will compute the Incentive Fee percentage or Disincentive Fee percentage using a proportional scale. For example, if the PEB score were 98, the percentage of the 5% Incentive Fee awarded would equal:

Step 1. Looking at the table below, the example PEB score of 98 falls between the PEB scores of 97.5 and 100, with corresponding Percentages of 5% Incentive Fee Awarded of 95 and 100, respectively.

Step 2. Calculate the difference between the example PEB score of 98 and the next lower PEB score from the table below (97.5). The difference is: 98 - 97.5 = 0.5.

Step 3. Divide the 0.5 from Step 2 by the difference between the bracketing PEB scores (100 - 97.5 = 2.5), which is 0.5/2.5 = 0.2.

Step 4. Multiply the 0.2 from Step 3 by the corresponding difference between the Percentages of 5% Incentive Fee Awarded (100 - 95 = 5), which is 0.2 x 5 = 1.0.

Step 5. To obtain the Percentage of 5% Incentive Fee Awarded for the example PEB score, add the 1.0 from Step 4 to the Percentage of 5% Incentive Fee Awarded for a score of 97.5 (95), which is 95 + 1.0 = 96.

Steps 1-5 as described above, can also be shown in the following mathematical equation:

95+(((98-97.5)/(100-97.5))*(100-95)) = 96

PEB Score | Percentage of 5% Incentive Fee Awarded | Percentage of 5% Disincentive Fee Applied
100 | 100 | 0
97.5 | 95 | 0
95 | 85 | 0
92.5 | 75 | 0
90 | 65 | 0
87.5 | 50 | 0
85 | 0 | 0
82.5 | 0 | 0
80 | 0 | 0
77.5 | 0 | 45
75 | 0 | 60
72.5 | 0 | 75
70 | 0 | 95
Less than 70 | 0 | 100
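As a minimal sketch of this proportional lookup (illustrative only; it simply encodes the table above and interpolates between the two bracketing rows):

```python
# Minimal sketch: encode the H.1.6 table and interpolate proportionally between
# the two bracketing rows to find the incentive/disincentive percentages.
PEB_TABLE = [  # (PEB score, % of 5% incentive fee awarded, % of 5% disincentive fee applied)
    (100, 100, 0), (97.5, 95, 0), (95, 85, 0), (92.5, 75, 0), (90, 65, 0),
    (87.5, 50, 0), (85, 0, 0), (82.5, 0, 0), (80, 0, 0),
    (77.5, 0, 45), (75, 0, 60), (72.5, 0, 75), (70, 0, 95),
]

def fee_percentages(peb_score):
    """Return (incentive %, disincentive %) for a PEB score on the 0-100 scale."""
    if peb_score >= 100:
        return 100.0, 0.0
    if peb_score < 70:
        return 0.0, 100.0
    # The table is ordered from highest to lowest score; find the bracketing rows.
    for (hi_s, hi_inc, hi_dis), (lo_s, lo_inc, lo_dis) in zip(PEB_TABLE, PEB_TABLE[1:]):
        if lo_s <= peb_score <= hi_s:
            fraction = (peb_score - lo_s) / (hi_s - lo_s)
            incentive = lo_inc + fraction * (hi_inc - lo_inc)
            disincentive = lo_dis + fraction * (hi_dis - lo_dis)
            return incentive, disincentive
    raise ValueError("PEB score out of range")

inc, dis = fee_percentages(98)
print(round(inc, 1), round(dis, 1))   # 96.0 0.0 -- matches the Step 1-5 example above
inc, dis = fee_percentages(85.1)
print(round(inc, 1), round(dis, 1))   # 2.0 0.0  -- matches the H.1.7 example below
```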

H.1.7 Example of Incentive Fee Calculation

Assumptions:

A. The 11 Monthly Evaluation Scores (there is no Monthly Evaluation in the month that the Comprehensive Evaluation is conducted):
78, 80, 80, 82, 84, 86, 86, 86, 87, 87, 87

B. Comprehensive Evaluation Score: 87

C. Subjective Rating Score from the PEB: 8

D. Amount paid to the Contractor during the period being evaluated: $10M

Calculation:

Step 1. Find the average of the 11 Monthly Evaluation Scores by adding the eleven Monthly scores and dividing it by eleven, which is: (78+80+80+82+84+86+86+86+87+87+87)/11=83.9

Step 2. As described in Section H.1.3, "The PEB shall generate a PEB score on a scale of 0 to 100, with 40 of the 100 points being made up of Monthly Evaluation scores, 50 of the 100 points being made up of the Comprehensive Evaluation score, and 10 of the 100 points being a subjective score (see Section H.1.4)." Take 40% of the average of the Monthly Evaluation scores (83.9) calculated in Step 1 above, which is: 0.4 x 83.9 = 33.6

Step 3. The PEB score would be equal to 33.6 from Step 2, plus 50% (50/100 = 0.5) of 87 (assumption B), plus 8 (assumption C), which is equal to 85.1 (as shown in the following mathematical equation): 33.6 + (0.5 x 87) + 8 = 85.1

Step 4. To calculate the Percentage of 5% Incentive Fee Awarded, follow Steps 1-5 of Section H.1.6, as shown in Steps 5-9 below:

Step 5. Looking at the table in Section H.1.6, the calculated PEB score of 85.1 falls between the PEB scores of 85 and 87.5, with corresponding Percentages of 5% Incentive Fee Awarded of 0 and 50, respectively.

Step 6. Calculate the difference between the calculated PEB score of 85.1 and the next lower PEB score from the table shown in Section H.1.6 (85). The difference is: 85.1 - 85 = 0.1.

Step 7. Divide the 0.1 from Step 6 by the difference between the bracketing PEB scores (87.5 - 85 = 2.5), which is 0.1/2.5 = 0.04.

Step 8. Multiply the 0.04 from Step 7 by the corresponding difference between the Percentages of 5% Incentive Fee Awarded (50 - 0 = 50), which is 0.04 x 50 = 2.

Step 9. To obtain the Percentage of 5% Incentive Fee Awarded, add the 2 from Step 8 to the corresponding Percentage of 5% Incentive Fee Awarded of 0, which is 0 + 2 = 2.

Steps 5-9 as described above can also be shown in the following mathematical equation:
0 + (85.1-85)/(87.5-85) x (50-0) = 2%

Step 10. So if the price of the Category over the period being evaluated was $10M, the incentive fee awarded would be 2% (2/100=0.02) of 5% (5/100=0.05) of $10M, which can mathematically be shown as:
0.02 x 0.05 x $ 10M = $10,000
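The same arithmetic can be sketched end to end as follows (using only the example's assumptions and its intermediate rounding; nothing here is contract language):

```python
# Minimal sketch reproducing the H.1.7 example end to end (illustrative only).
monthly = [78, 80, 80, 82, 84, 86, 86, 86, 87, 87, 87]    # assumption A

monthly_avg = round(sum(monthly) / len(monthly), 1)        # 83.9  (Step 1)
monthly_portion = round(0.40 * monthly_avg, 1)             # 33.6  (Step 2)
peb = monthly_portion + 0.50 * 87 + 8                      # 85.1  (Step 3; assumptions B and C)

# Steps 5-9: interpolate between the table rows bracketing 85.1 (85 -> 0%, 87.5 -> 50%)
incentive_pct = (peb - 85) / (87.5 - 85) * (50 - 0)        # 2.0

fee = (incentive_pct / 100) * 0.05 * 10_000_000            # 2% of 5% of $10M (assumption D)
print(round(peb, 1), round(incentive_pct, 1), round(fee, 2))   # 85.1 2.0 10000.0
```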

More Information

Contact

Jerry Yakowenko
Office of Program Administration
202-366-1562