U.S. Department of Transportation
Federal Highway Administration
1200 New Jersey Avenue, SE
Washington, DC 20590
202-366-4000



Federal Highway Administration Research and Technology
Coordinating, Developing, and Delivering Highway Transportation Innovations

 
REPORT
This report is an archived publication and may contain dated technical, contact, and link information.
Publication Number: FHWA-HRT-17-104
Date: June 2018

 

Using Multi-Objective Optimization to Enhance Calibration of Performance Models in the Mechanistic-Empirical Pavement Design Guide

PDF Version (3.77 MB)

HTML Version of Errata for FHWA-HRT-17-104

PDF Version of Errata


 

The following changes were made to the document after publication:

Location: Technical Report Documentation Page
Corrected Value: *Yuxiao Zhou
URL: /publications/research/infrastructure/pavements/ltpp/17104/index.cfm#errata

 

FOREWORD

This report documents research that applied Long-Term Pavement Performance (LTPP) data to develop an improved approach to calibrating the American Association of State Highway and Transportation Officials’ (AASHTO) AASHTOWare® Pavement ME Design performance models.(1) Whereas the current AASHTO guidelines used in the Pavement ME Design software for calibrating the performance prediction models to local conditions (e.g., materials, traffic, and climate) rely on single-objective minimization of bias and standard error (STE), this report investigates the use of multi-objective optimization to enhance the calibration of the performance models.

The multi-objective optimization results in a final pool of tradeoff solutions where none of the viable sets of calibration factors are prematurely eliminated. This report also demonstrates the application of engineering judgment and qualitative criteria to select reasonable calibration coefficients from the final pool of solutions that result from the multi-objective optimization. More reasonable calibration factors result in a more justifiable pavement design when considering multiple aspects of pavement performance. This investigation revealed that simply evaluating the bias and STE is not adequate for a comprehensive evaluation of performance prediction models. This report is intended for pavement engineers and State transportation departments.
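The tradeoff pool described above can be illustrated with a minimal nondominated (Pareto) filter. This is a conceptual sketch only, not the report’s actual calibration software, and the candidate objective values below are hypothetical:

```python
# Minimal sketch of nondominated (Pareto) filtering, the concept behind the
# multi-objective calibration's "final pool of tradeoff solutions": each
# candidate set of calibration factors is scored on two objectives (here
# labeled bias and standard error), both to be minimized. No viable candidate
# is discarded unless another candidate is better on every objective.
# All candidate values are hypothetical illustrations.

def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    and strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Return the subset of solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# (bias, standard error) pairs for four hypothetical calibration-factor sets
candidates = [(0.10, 0.30), (0.05, 0.40), (0.20, 0.25), (0.15, 0.45)]
front = nondominated(candidates)
# (0.15, 0.45) is dominated by (0.10, 0.30); the other three trade off
# bias against standard error and all remain in the final pool.
```

Engineering judgment and qualitative criteria, as described in the report, would then be applied to choose among the surviving tradeoff solutions.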

Cheryl Allen Richter, Ph.D., P.E.
Director, Office of Infrastructure
Research and Development

Notice

This document is disseminated under the sponsorship of the U.S. Department of Transportation (USDOT) in the interest of information exchange. The U.S. Government assumes no liability for the use of the information contained in this document.

The U.S. Government does not endorse products or manufacturers. Trademarks or manufacturers’ names appear in this report only because they are considered essential to the objective of the document.

Quality Assurance Statement

The Federal Highway Administration (FHWA) provides high-quality information to serve Government, industry, and the public in a manner that promotes public understanding. Standards and policies are used to ensure and maximize the quality, objectivity, utility, and integrity of its information. FHWA periodically reviews quality issues and adjusts its programs and processes to ensure continuous quality improvement.

 

Technical Report Documentation Page

1. Report No.

FHWA-HRT-17-104

2. Government Accession No.

 

3. Recipient's Catalog No.

 

4. Title and Subtitle

Using Multi-Objective Optimization to Enhance Calibration of Performance Models in the Mechanistic–Empirical Pavement Design Guide

5. Report Date

June 2018

6. Performing Organization Code

 

7. Author(s)

Nima Kargah-Ostadi, Jose Rafael Menendez, and Yuxiao Zhou

8. Performing Organization Report No.

 

9. Performing Organization Name and Address

Fugro Consultants, Inc.
8613 Cross Park Drive
Austin, TX 78754

10. Work Unit No. (TRAIS)

 

11. Contract or Grant No.

DTFH61-14-C-00025

12. Sponsoring Agency Name and Address

Federal Highway Administration
Office of Research, Development, and Technology
Turner-Fairbank Highway Research Center
6300 Georgetown Pike
McLean, VA 22101-2296

13. Type of Report and Period Covered

Final report; July 2014–September 2016

14. Sponsoring Agency Code

HRDI-30

15. Supplementary Notes

The FHWA Contracting Officer’s Representative was Deborah Walker (HRDI-30).

16. Abstract

This research study devised two scenarios for applying multi-objective optimization to enhance calibration of performance models in the American Association of State Highway and Transportation Officials (AASHTO) AASHTOWare® Pavement ME Design software.(1) In the primary scenario, the mean and standard deviation of prediction error are minimized simultaneously, improving accuracy and precision together. In the second scenario, model prediction error on data from the Federal Highway Administration’s Long-Term Pavement Performance test sections and error on available accelerated pavement testing data are treated as independent objective functions to be minimized simultaneously. The multi-objective optimization results in a final pool of tradeoff solutions, where none of the viable sets of calibration factors are eliminated prematurely. Exploring the final front yields more reasonable calibration coefficients that could not be identified using single-objective approaches. This report demonstrates the application of engineering judgment and qualitative criteria to select reasonable calibration coefficients from the final pool of solutions that result from the multi-objective optimization. More reasonable calibration factors result in a more justifiable pavement design considering multiple aspects of pavement performance. This investigation revealed that simply evaluating the bias and standard error is not adequate for a comprehensive evaluation of performance prediction models.

17. Key Words

Mechanistic–Empirical Pavement Design Guide (MEPDG), AASHTOWare® Pavement ME Design software, multi-objective optimization, calibration, validation, pavement performance models, evolutionary algorithms

18. Distribution Statement

No restrictions. This document is available to the public through the National Technical Information Service, Springfield, VA 22161.
http://www.ntis.gov

19. Security Classification
(of this report)

Unclassified

20. Security Classification
(of this page)

Unclassified

21. No. of Pages

152

22. Price

 

Form DOT F 1700.7 (8-72) Reproduction of completed page authorized

SI* (Modern Metric) Conversion Factors

 

TABLE OF CONTENTS

EXECUTIVE SUMMARY

CHAPTER 1. INTRODUCTION

CHAPTER 2. REVIEW OF LITERATURE ON CALIBRATING THE MECHANISTIC–EMPIRICAL PAVEMENT PERFORMANCE MODELS

CHAPTER 3. PREPARATION OF MEPDG INPUTS FROM LTPP DATA

CHAPTER 4. PROGRAMMING METHODOLOGY

CHAPTER 5. COMPARISON OF MULTI-OBJECTIVE TO SINGLE-OBJECTIVE CALIBRATION RESULTS

CHAPTER 6. SUMMARY OF FINDINGS AND RECOMMENDATIONS

APPENDIX A. DETAILS OF CALIBRATION INPUT DATA

APPENDIX B. PROGRAMMING CODES

APPENDIX C. COMPARISON OF SIMULATED RUTTING CALCULATIONS TO ME SOFTWARE RESULTS

REFERENCES

LIST OF FIGURES

Figure 1. Flowchart. The AASHTO recommended procedure for local calibration of MEPDG performance models, steps 1 through 5
Figure 2. Flowchart. The AASHTO recommended procedure for local calibration of MEPDG performance models, steps 6 through 11
Figure 3. Screenshot. Location of sections within the wet, no freeze climate and on coarse subgrades from InfoPave™
Figure 4. Screenshot. Location of sections within the wet, no freeze climate and on fine subgrades from InfoPave™
Figure 5. Illustration. Pavement structure in Florida SPS-1 test sections
Figure 6. Chart. Average rutting measurements on SPS-1 test sections 120107 to 120109
Figure 7. Chart. Average rutting measurements on SPS-1 test sections 120104 to 120161
Figure 8. Illustration. Pavement structure in Florida SPS-5 test sections
Figure 9. Chart. Average rutting measurements on SPS-5 test sections
Figure 10. Chart. Average rutting measurements on SPS-5 test sections
Figure 11. Illustration. FDOT DASR project sections
Figure 12. Chart. Rutting for the four sections tested under FDOT DASR project
Figure 13. Illustration. FDOT ARB project sections
Figure 14. Chart. Rutting for the seven sections tested under FDOT ARB project
Figure 15. Flowchart. Multi-objective calibration framework
Figure 16. Chart. Example comparison of the simulated calculation to Pavement ME software output (on SPS-1 test section 120102)
Figure 17. Flowchart. Framework for comparison of the calibrated performance models
Figure 18. Chart. Dynamic plot of SSE in single-objective optimization on Florida SPS-1 data
Figure 19. Scatterplot. Measured versus predicted single-objective calibration results of rutting models for new pavements on calibration dataset for Florida SPS-1
Figure 20. Scatterplot. Measured versus predicted single-objective calibration results of rutting models for new pavements on validation dataset for Florida SPS-1
Figure 21. Chart. Dynamic plot of SSE in single-objective optimization on Florida SPS-5 data
Figure 22. Scatterplot. Measured versus predicted single-objective calibration results of rutting models for overlaid pavements on calibration dataset for Florida SPS-5
Figure 23. Scatterplot. Measured versus predicted single-objective calibration results of rutting models for overlaid pavements on validation dataset for Florida SPS-5
Figure 24. Scatterplot. The final nondominated solution set for two-objective calibration of rutting models for new pavements on Florida LTPP SPS-1 data
Figure 25. Scatterplot. Measured versus predicted two-objective calibration results of rutting models for new pavements on calibration dataset for Florida SPS-1
Figure 26. Scatterplot. Measured versus predicted two-objective calibration results of rutting models for new pavements on validation dataset for Florida SPS-1
Figure 27. Scatterplot. The final nondominated solution set for two-objective calibration of rutting models for overlaid pavements on Florida LTPP SPS-5 data
Figure 28. Scatterplot. Measured versus predicted two-objective calibration results of rutting models for overlaid pavements on calibration dataset for Florida SPS-5
Figure 29. Scatterplot. Measured versus predicted two-objective calibration results of rutting models for overlaid pavements on validation dataset for Florida SPS-5
Figure 30. Scatterplot. The final nondominated solution set for four-objective calibration of rutting models for new pavements: F1 and F2 are SSE and STE on Florida LTPP SPS-1 data, and F3 and F4 are SSE and STE on FDOT APT data
Figure 31. Chart. The final nondominated solution set for four-objective calibration of rutting models for new pavements: RMSE and STE on Florida SPS-1 and FDOT APT data
Figure 32. Scatterplot. Two-dimensional representation of the final nondominated solution set for four-objective calibration: SSE on Florida LTPP SPS-1 versus SSE on FDOT APT data
Figure 33. Scatterplot. Two-dimensional representation of the final nondominated solution set for four-objective calibration: STE on Florida LTPP SPS-1 versus STE on FDOT APT data
Figure 34. Scatterplot. Measured versus predicted four-objective calibration results of rutting models for new pavements on calibration dataset for Florida SPS-1
Figure 35. Scatterplot. Measured versus predicted four-objective calibration results of rutting models for new pavements on validation dataset for Florida SPS-1
Figure 36. Bar chart. Comparison of the quantitative criteria for the calibrated rutting models on SPS-1
Figure 37. Bar chart. Comparison of the quantitative criteria for the calibrated rutting models on SPS-5
Figure 38. Bar chart. Comparison of the qualitative criteria for the calibrated rutting models on SPS-1
Figure 39. Bar chart. Comparison of the qualitative criteria for the calibrated rutting models on SPS-5
Figure 40. Chart. Predicted and measured rutting deterioration on FL SPS-1 section 120108
Figure 41. Chart. Predicted and measured rutting deterioration on FL SPS-5 section 120509
Figure 42. Flowchart. Framework for implementation of multi-objective calibration
Figure 43. Chart. Comparison of simulated rutting calculations to ME software results for test section 120102 with βr1 = 1.05, βr2 = 0.9, βr3 = 0.85, βGB = 1.0, βSG = 1.0
Figure 44. Chart. Comparison of simulated rutting calculations to ME software results for test section 120102 with βr1 = 1.05, βr2 = 1.15, βr3 = 0.85, βGB = 1.0, βSG = 1.0
Figure 45. Chart. Comparison of simulated rutting calculations to ME software results for test section 120102 with βr1 = 1.0, βr2 = 0.9, βr3 = 0.9, βGB = 1.0, βSG = 1.0
Figure 46. Chart. Comparison of simulated rutting calculations to ME software results for test section 120102 with βr1 = 0.7, βr2 = 1.02, βr3 = 1.06, βGB = 1.0, βSG = 1.0
Figure 47. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 0.51, βr2 = 1.0, βr3 = 0.7, βGB = 1.0, βSG = 1.0
Figure 48. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 0.9, βr2 = 1.0, βr3 = 1.0, βGB = 1.0, βSG = 1.0
Figure 49. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 1.0, βr2 = 0.9, βr3 = 1.0, βGB = 1.0, βSG = 1.0
Figure 50. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 1.0, βr2 = 1.0, βr3 = 0.9, βGB = 1.0, βSG = 1.0
Figure 51. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 1.25, βr2 = 1.04, βr3 = 0.94, βGB = 1.0, βSG = 1.0
Figure 52. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 1.17, βr2 = 1.1, βr3 = 1.05, βGB = 1.0, βSG = 1.0
Figure 53. Chart. Comparison of simulated rutting calculations to ME software results for test section 120502 with βr1 = 1.17, βr2 = 1.1, βr3 = 1.05, βGB = 1.15, βSG = 0.9

LIST OF TABLES

Table 1. Calibration factors in prediction models for rutting and fatigue cracking in flexible pavements
Table 2. Sensitive design inputs for rutting and fatigue cracking models. NSIm±2s values are given in parentheses
Table 3. Elasticity of MEPDG calibration factors in rutting and fatigue cracking models for Washington State DOT flexible pavements
Table 4. Major State efforts for calibration of MEPDG performance models
Table 5. Local calibration factors for MEPDG fatigue cracking and rutting prediction models
Table 6. Available number of test sections for each LTPP climatic region and subgrade type
Table 7. General information on the 52 flexible test sections on coarse subgrade soils in Florida
Table 8. Source and availability of traffic data for the selected 52 flexible sections in Florida
Table 9. Source and availability of structure data for the selected 52 flexible sections in Florida
Table 10. Availability of rutting data for the selected LTPP flexible pavements in Florida
Table 11. General project information
Table 12. Performance criteria
Table 13. Traffic input data sources and default values
Table 14. Climate information
Table 15. Layer thickness and type of material
Table 16. Mixture volumetric data
Table 17. Binder properties
Table 18. Mixture properties
Table 19. LTPP data tables and fields for backcalculated moduli
Table 20. Additional AC layer properties
Table 21. LTPP data sources for unbound materials properties
Table 22. Bedrock material properties
Table 23. C-values to convert the backcalculated layer modulus values to an equivalent resilient modulus measured in laboratory
Table 24. LTPP data source for rutting measurements (wire reference method)
Table 25. AASHTOWare® Pavement ME Design software data files
Table 26. “Verification” of the global rutting model for new pavements on Florida SPS-1
Table 27. “Verification” of the global rutting model for overlaid pavements on Florida SPS-5
Table 28. Single-objective calibration results of rutting models for new pavements on Florida SPS-1
Table 29. Single-objective calibration results of rutting models for overlaid pavements on Florida SPS-5
Table 30. Candidate solutions from the two-objective nondominated front for SPS-1, with minimum difference in skewness and kurtosis between predicted and measured distributions
Table 31. Two-objective calibration results of rutting models for new pavements on Florida SPS-1
Table 32. Solutions from the two-objective nondominated front for SPS-5, with difference in skewness and kurtosis between predicted and measured data distributions
Table 33. Two-objective calibration results of rutting models for overlaid pavements on Florida SPS-5
Table 34. Candidate solutions from the four-objective nondominated front with difference in skewness and kurtosis between the predicted and measured data distributions
Table 35. Four-objective calibration results of rutting models for new pavements on Florida SPS-1 data
Table 36. Final selected calibration factors
Table 37. AAE of calibrated models in predicting the rutting deterioration rates
Table 38. Dynamic modulus for the Florida SPS-1 and SPS-5 experiment test sections
Table 39. Calculated resilient modulus of unbound materials
Table 40. Average measured rut depth for Florida SPS-1 test sections 120107 to 120111
Table 41. Average measured rut depth for Florida SPS-1 test sections 120112 to 120105
Table 42. Average measured rut depth for Florida SPS-1 test sections 120101 to 120161
Table 43. Average measured rut depth for Florida SPS-5 test sections 120502 to 120565
Table 44. Average measured rut depth for Florida SPS-5 test sections 120509 to 120504
Table 45. Average measured rut depth for Florida SPS-5 test sections 120562 to 120564
Table 46. Average measured rut depth (mm) for FDOT ARB experiment sections
Table 47. Average measured rut depth (mm) for FDOT DASR experiment sections
Table 48. Developed source codes for multi-objective calibration of MEPDG rutting models
Table 49. Range of the calibration factors reported in the literature

LIST OF ABBREVIATIONS

AADTT average annual daily truck traffic
AAE average absolute error
AASHTO American Association of State Highway and Transportation Officials
AC asphalt concrete
ANN Artificial Neural Network
ANNACAP Artificial Neural Networks for Asphalt Concrete Dynamic Modulus Prediction
APADS Asphalt Pavement Analysis and Design System
APT accelerated pavement testing
ARB asphalt rubber binder
ATB asphalt-treated base
AVC Automatic Vehicle Classification
BAA Broad Agency Announcement
BSG bulk specific gravity
DASR dominant aggregate size range
DOT department of transportation
EA evolutionary algorithm
EICM Enhanced Integrated Climatic Model
ES evolution strategy
FDOT Florida Department of Transportation
FHWA Federal Highway Administration
FWD falling weight deflectometer
GA genetic algorithm
GB granular base
GPS General Pavement Studies
GRG generalized reduced gradient
GSA global sensitivity analysis
HCD historical climate data
HMA hot-mix asphalt
HVS Heavy Vehicle Simulator
IDE integrated development environment
LTPP Long-Term Pavement Performance
MEPDG Guide for Mechanistic–Empirical Design of New and Rehabilitated Pavement Structures
MERRA Modern-Era Retrospective Analysis for Research and Applications
MOEA multi-objective evolutionary algorithm
NCHRP National Cooperative Highway Research Program
NSGA nondominated sorted genetic algorithm
NSI Normalized Sensitivity Index
OAT one-at-a-time
PG performance grade
PLUG Pavement Loading User Guide
PMA polymer-modified asphalt
PMS pavement management system
RAP recycled asphalt pavement
RMSE root-mean-squared error
RSM response surface model
SDR Standard Data Release
SG specific gravity
SPS Specific Pavement Studies
SSE sum of squared errors
STE standard error
TRF traffic
VBA Visual Basic for Applications
WIM weigh-in-motion
XML Extensible Markup Language
