REPORT
This report is an archived publication and may contain dated technical, contact, and link information.
Publication Number: FHWA-HRT-12-023
Date: December 2012

 

Simplified Techniques for Evaluation and Interpretation of Pavement Deflections for Network-Level Analysis

CHAPTER 5 - Procedures for Optimum Deflection Test Spacings and Frequency for PMS Applications

One important aspect of optimum deflection test spacing and frequency is measurement accuracy. Accuracy is mainly a function of the combined effects of different sources of variability, such as the number of measurements, equipment variability, and spatial variability, which is often associated with the inherent section variability.(29) In addition to measurement accuracy, the optimum deflection test spacing and frequency for PMS applications is a function of the following considerations:

Structural models in pavement management systems range from the very simple to the relatively complex. The simplest models use deflections or deflection basin parameters to characterize subgrade and pavement structural properties. For example, the outer deflections can be used to estimate subgrade stiffness, while the inner deflections are indicative of the degree of support provided by the pavement layers above the subgrade. The more complex structural models use pavement layer moduli (derived from deflections), pavement layer thicknesses, and material types to calculate pavement response, which is then used to predict failure, much like project-level pavement design analysis. Any PMS using the latter, more complex approach would undoubtedly need more deflection information than one using the former, simpler approach.

PMS inventory mileage is another consideration in deflection spacing. Texas maintains approximately 89,000 centerline mi (143,290 centerline km) of pavement, while Alaska maintains approximately 5,000 centerline mi (8,050 centerline km). Certainly, it would be easier to collect deflection data on the majority of the system for a State with fewer miles, and the deflection spacing could be closer.

Most States and local agencies only have a handful of FWDs, and these are mainly used to collect project-level deflection data for scoping M&R work and for research purposes. PMS deflection data collection is, in most cases, prioritized below project-level work, so equipment availability for network-level data collection is often limited.

Data collection for PMS requires equipment operators to be in the field for long periods of time, often weeks at a stretch, as it is not efficient for the operator to mobilize back and forth between the home base and the job site. Multiple operators are required, and the agency must be flexible in its overtime policies because it is more efficient to work a 10- or 12-h day in the summer as opposed to the traditional 8-h day, 40-h work week. Personnel turnover is an issue as well. Many operators are motivated by the high-tech aspects of operating the FWD. They tend to be highly capable and multitalented and, as such, are often quickly promoted through the agency, leaving a void to fill. FWD operator turnover is often higher than for other positions within the agency, so the issue of training new operators must be addressed.

Traffic levels are a significant factor when determining optimum (i.e., "minimum possible") test spacing. Higher traffic facilities require expensive lane closures. Moderate traffic requires at least a sign truck and a crash attenuator mounted on a large vehicle, typically a flatbed single unit truck. These operations require three personnel, one for each vehicle. Ironically, the lowest traffic facilities are typically ranked lowest in priority in the PMS data collection effort but afford the opportunity to collect the most data.

Given the above, the annual agency budget ultimately controls the quantity of deflection data that can be collected in any given year. Objective recommendations and guidelines are provided in this report to determine optimum test spacings, but in the end, the optimum spacings for any agency, network, or portion of the network will be dictated by data collection priorities (project versus network), total mileage to be tested, equipment and personnel availability, traffic levels, and the portion of the annual budget available for network-level testing.

Analysis of Test Spacings for FWD Data Collection

The objective of this analysis was to develop an approach to determine the optimum spacing between FWD tests for use in network applications. The approach is based on an evaluation of the probability of introducing errors as a function of different test spacings and pavement section lengths. Different spacings were evaluated in a probabilistic procedure, resulting in a set of expected error curves for various reliability levels that can be used in the future for determining the optimum test spacing. The error represents the expected difference between the sample and the idealized true value of the population, which, in this case, is represented by the average deflection value of a homogeneous road segment. Monte Carlo simulations were used to model the error function. They are particularly useful in this type of problem, in which variables are stochastically distributed and analytical solutions are difficult to obtain. The effectiveness of this approach was verified using data from various road segments in five States. The expected outcome of this study is a procedure that can be easily implemented in a pavement management system during the planning stages of survey campaigns by simply defining an acceptable magnitude of error and a reliability level.

Modeling the Error Using a Monte Carlo Simulation

The main purpose of this task was to evaluate the sources of variability of FWD testing associated with different sampling strategies and their impact on the average deflection values measured in a road segment. This analysis also provides an opportunity to compare a desirable level of accuracy with the costs of the associated sampling strategy (i.e., level of expected error versus number of data points in the sample).

An assumption has to be made about the pavement segment for which the sampling strategy is being defined. The segment must be homogeneous in the following characteristics:

These characteristics are likely to provide a pavement segment with a deflection profile without significant variations in deflection magnitudes. These conditions are necessary for any sampling strategy to be effective and produce meaningful results.

The basic approach involves the use of a Monte Carlo simulation to assess the effects of each source of variability. The Monte Carlo simulation is an iterative method for evaluating a deterministic model using sets of random numbers as inputs. This method is useful when the model is complex, nonlinear, or involves more than just a few uncertain parameters. By simulating the probability distributions for each source of variability, it is possible to evaluate the overall error of average FWD measurements when different sampling procedures are selected at the network level. Figure 38 illustrates the process used to evaluate different sampling alternatives.

Figure 38. Illustration. Monte Carlo simulation. This illustration describes the Monte Carlo simulation. The top part of the illustration consists of three bell-shaped distributions of different sizes. The left one corresponds to precision and the number of repeated measures. The middle curve corresponds to the deflection variability associated with the time interval between surveys. The right curve corresponds to inherent section variability, spatial correlation, and test spacing. An arrow pointing down links all three curves to a hexagonal shape, which has the words “Monte Carlo Simulation” written in it. Inside the hexagonal shape, there is a circular arrow showing that this is a repeated process. An arrow from the hexagonal shape points down to another bell-shaped distribution, which gives the variability of deflection measurements.
Figure 38. Illustration. Monte Carlo simulation.

To begin the Monte Carlo simulation, random deflection data were generated for a 10-mi (16.1-km)-long section at an assumed 0.1-mi (0.161-km) interval between test points. In each scenario, different means and standard deviations were simulated. The means and standard deviations were selected based on observations of data in the LTPP database and State transportation department data available for this research. The interval of 0.1 mi (0.161 km) was chosen because it represents the typical spacing used in FWD surveys for project-level designs. The average deflection of this randomly generated profile was used as the true deflection value for the section (i.e., the errors associated with sampling strategies were determined in relation to this value).

First, the data in each randomly generated deflection profile were divided into groups of increasing spacing by skipping up to 19 deflection points, which corresponds to 0.2- to 2-mi (0.322- to 3.22-km) spacings. These subgroups represented different sampling strategies defined by the spacing between deflection points. All possible combinations of data points that yield the target spacing were generated. For instance, the first spacing was achieved by skipping one deflection point. In this case, two combinations were possible, one starting with the first data point and skipping the second, and the other starting with the second data point and skipping the first. For each subsequent spacing, the number of combinations increased by one. This is exemplified in figure 39 for spacings of 0.2 and 0.3 mi (0.322 and 0.483 km).

Figure 39. Illustration. Deflection measurement pairings based on 0.1-, 0.2-, and 0.3-mi (0.161-, 0.322-, and 0.483-km) spacings. This illustration describes the way the deflection data were paired to create groups of deflection data with increasing spacing. The diagram consists of three parts. The first one is the 0.1-mi (0.161-km) increment. There are 12 dots numbered 1 through 12 corresponding to 12 deflection measurements 0.1 mi (0.161 km) apart. All of the data points are connected by double-headed arrows. The second part is the 0.2-mi (0.322-km) increment section. Similarly, there are 12 dots numbered 1 through 12. In this case, the double-headed arrows connect every other data point, creating two groups of data with a 0.2-mi (0.322-km) increment. The third part is the 0.3-mi (0.483-km) increment section. Similar to the first two parts of the diagram, 12 data points are numbered 1 through 12. To create groups of deflection data at 0.3-mi (0.483-km) increments, double-headed arrows are used to connect points by skipping two points at a time. Three groups of 0.3-mi (0.483-km) spacing are shown.
Figure 39. Illustration. Deflection measurement pairings based on 0.1-, 0.2-, and 0.3-mi (0.161-, 0.322-, and 0.483-km) spacings.

After each subgroup was defined, the mean for each spacing combination was calculated and compared to the mean of the entire dataset created at a 0.1-mi (0.161-km) spacing. (Recall that the mean of the 0.1-mi (0.161-km) spacing data was considered to be the true mean.) The errors associated with each spacing were calculated as a percentage of the true mean. This process was repeated for each random simulation in the Monte Carlo process. A total of 5,000 simulations were run, and the results were used to create a distribution of average errors with spacing. The error distribution as a function of test spacing is plotted in figure 40. This graph indicates that as the spacing between test points increases, the error increases in relation to the reference value (i.e., true mean). The error is interpreted as the accuracy of the average deflection of the homogeneous segment associated with a selected sampling strategy (spacing) when compared to the true mean given by an FWD survey at a 0.1-mi (0.161-km) spacing.
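A minimal sketch of this simulation procedure, assuming Python with NumPy, is given below. The normal deflection distribution, the placeholder mean of 10 mil, the 15 percent COV, and the function names are illustrative assumptions; only the overall procedure (generate a 0.1-mi profile, subsample it at every candidate spacing and starting offset, and accumulate the error relative to the full-profile mean over 5,000 repetitions) follows the description above.

    import numpy as np

    def spacing_error(profile, skip):
        """Average error (percent of the true mean) over all starting offsets
        when keeping every (skip + 1)-th point of the 0.1-mi profile."""
        true_mean = profile.mean()
        errors = [abs(profile[start::skip + 1].mean() - true_mean) / true_mean * 100.0
                  for start in range(skip + 1)]   # all combinations for this spacing
        return float(np.mean(errors))

    def simulate_errors(section_length_mi=10.0, base_spacing_mi=0.1, mean_defl_mil=10.0,
                        cov=0.15, n_sim=5000, max_skip=19, seed=0):
        """Mean and standard deviation of the error for spacings of 0.2 to 2.0 mi."""
        rng = np.random.default_rng(seed)
        n_points = int(round(section_length_mi / base_spacing_mi))
        errors_by_skip = {skip: [] for skip in range(1, max_skip + 1)}
        for _ in range(n_sim):
            # Assumed distribution for illustration; the study drew its means and
            # standard deviations from LTPP and State transportation department data.
            profile = rng.normal(mean_defl_mil, cov * mean_defl_mil, n_points)
            for skip, errs in errors_by_skip.items():
                errs.append(spacing_error(profile, skip))
        return {round((skip + 1) * base_spacing_mi, 1): (float(np.mean(e)), float(np.std(e)))
                for skip, e in errors_by_skip.items()}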

In addition to modeling the expected average error, the results from the Monte Carlo simulation can also be used to add a probabilistic component to the calculation of expected error. Therefore, expected levels of reliability can be included in the analysis, which is an important characteristic in pavement design and evaluation today (e.g., the MEPDG).(27) The standard deviation of the error was computed for each spacing combination. A normal distribution of the error was assumed, so the expected error at different spacings and different reliability levels could be calculated. The average expected error is shown in figure 41 for various spacings and different probability levels for sections that are 10 mi (16.1 km) long.

Figure 40. Graph. Expected average error as a function of spacing for 10-mi (16.1-km)-long sections. This graph shows a scatter plot of expected average error as a function of spacing for 10-mi (16.1-km)-long sections. The x-axis represents spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the average error from zero to 8 percent. The data points have an increasing trend, with the first point starting at a 0.2-mi (0.322-km) spacing and 1.5 percent average error. The last data point at a 2-mi (3.22-km) spacing has an average error of 6.6 percent. A power curve has been fit to the data. The R-squared value of the curve is 0.9953. The equation of the curve is y equals 0.0445 times x raised to the power 0.6123.
1 mi = 1.61 km
Figure 40. Graph. Expected average error as a function of spacing for 10-mi (16.1-km)-long sections.

Figure 41. Graph. Expected average errors for 10-mi (16.1-km)-long section at different spacings and probability levels. This graph shows a scatter plot of expected errors for a 10-mi (16.1-km)-long section at different spacings and probability levels. The x-axis represents the spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the expected error from zero to 16 percent. There are six data series corresponding to various probability levels: 50, 70, 80, 90, 95, and 99.5 percent. They all start at a 0.2-mi (0.322-km) spacing and end at a 2-mi (3.22-km) spacing and have an increasing trend in expected error values for increasing spacing. A power curve is fit to all data points for each series. The data points corresponding to the highest expected errors are linked to the 99.5 percent probability. The first and last points have an expected error of 5.18 and 15.01 percent, respectively. The data points corresponding to the lowest expected error values are for the average error, or the 50 percent probability. The first and last points have an expected error of 1.5 and 6.6 percent, respectively. The other four curves fall in between these two curves and correspond to 70, 80, 90, and 95 percent probabilities.
1 mi = 1.61 km
Figure 41. Graph. Expected average errors for a 10-mi (16.1-km)-long section at different spacings and probability levels.

An example of how the process works is illustrated in figure 42. For a given spacing, s, the probability distribution can be computed based on the average error and standard deviation obtained from the Monte Carlo simulation. In the figure, ε90 is the error calculated for a 90 percent probability at s. This means that P (ε < ε90) = 0.9. If ε90 is an acceptable error, choosing s implies that there is a 90 percent probability that the error associated with this sampling strategy is less than ε90.

Figure 42. Graph. Normal distribution. This graph shows a normal distribution, which is a bell-shaped curve. The x-axis represents the error from zero to 20 percent, and the y-axis represents the probability from zero to 0.18. The top of the curve corresponds to a probability of about 0.16. A dashed vertical line goes through the center of the curve, and the intersection with the x-axis is denoted as mu subscript epsilon. Another vertical line is drawn within the bell curve on the second half, and the intersection with the x-axis is denoted as epsilon subscript 90. The area under the curve from the left up to this curve is shaded to show that there is a 90 percent probability that the error associated with this sampling strategy is less than epsilon subscript 90.
Figure 42. Graph. Normal distribution.
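As a small numerical illustration of this reliability calculation, the following sketch (assuming Python with SciPy) computes ε90 from a placeholder mean and standard deviation of the error; the numeric values are not results from the study.

    from scipy.stats import norm

    mu_err, sigma_err = 4.5, 2.6    # placeholder mean and standard deviation of the error (percent)
    reliability = 0.90
    eps_90 = mu_err + norm.ppf(reliability) * sigma_err   # norm.ppf(0.90) is roughly 1.2816
    print(f"There is a {reliability:.0%} probability that the error is below {eps_90:.2f} percent")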

A concern was raised about the influence of the section length on the expected error, since the error in samples is a function of the number of test points in the sample. The hypothesis was that for the same spacing, errors would increase for shorter sections and decrease for longer sections. For this reason, 5,000 simulations were run for each of seven section lengths: 2, 3, 5, 10, 15, 20, and 25 mi (3.22, 4.83, 8.05, 16.1, 24.15, 32.2, and 40.25 km), with deflections randomly generated every 0.1 mi (0.161 km). The average error was plotted against spacing for each length (see figure 43). Looking at the graph, it is evident that the length of the section influences the magnitude of the expected error. A power curve of the form y = a·x^b was fit to each curve. Comparing the intercept, a, and the exponent, b, of each curve suggested that these values could themselves be modeled as power functions of the section length. These two functions are shown in figure 44.

Figure 43. Graph. Average error curves for different section lengths. This graph shows a scatter plot of average error curves for different section lengths. The x-axis represents the spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the average error from zero to 18 percent. There are seven data series corresponding to different section lengths of 2, 3, 5, 10, 15, 20, and 25 mi (3.22, 4.83, 8.05, 16.1, 24.15, 32.2, and 40.25 km). They all start at a spacing of 0.2 mi (0.322 km) and continue up to 2 mi (3.22 km). As spacing increases, the expected error increases for all series. A power curve is fit to all data points for each series. The shortest section length of 2 mi (3.22 km) produces the highest errors. For the 2-mi (3.22-km) length, the errors corresponding to the first and last data point are 3.4 and 15.7 percent, respectively. The longest section with a length of 25 mi (40.25 km) yields the lowest errors. For the 25-mi (40.25-km) length, the first and last data points have an error of 0.9 and 4.2 percent, respectively. The other five series of data points are based on 3-, 5-, 10-, 15-, and 20-mi (4.83-, 8.05-, 16.1-, 24.15-, and 32.2-km)-long sections and fall between the 2- and 25-mi (3.22 and 40.25-km)-long sections following the same trend.
1 mi = 1.61 km
Figure 43. Graph. Average error curves for different section lengths.

Figure 44. Graph. Values of a and b from the seven average error curves. This graph shows a scatter plot of values of variables a and b from seven average error curves. The x-axis represents the section length from zero to 30 mi (zero to 48.3 km), and the y-axis represents the values of a and b from zero to 0.7. There are two sets of data points in this figure. The higher values correspond to variable b, and the lowest values correspond to variable a. Power curves are fit to both sets of data. For the top curve corresponding to variable b, the R-squared value is 0.844, and the equation of the best fit power curve is y equals 0.6687 times x raised to the power of -0.031. For the bottom curve corresponding to variable a, the R-squared value is 0.9998, and the equation of the best fit power curve is y equals 0.1494 times x raised to the power of -0.523.
1 mi = 1.61 km
Figure 44. Graph. Values of a and b from the seven average error curves.

The results from figure 43 and figure 44 suggest that the expected error can be calculated for a particular section depending on its length and the chosen sampling strategy (spacing) according to the equation in figure 45.

Figure 45. Equation. Average expected error due to sampling. Mu subscript epsilon equals 0.1494 times L raised to the power of -0.523 times s raised to the power of open parenthesis 0.6687 times L raised to the power of -0.031 closed parenthesis. On the right side of the equation, the equation R-squared equals 0.998 is shown.
Figure 45. Equation. Average expected error due to sampling.

Where:

µε = Expected average error (percent).
L = Length (miles).
s = Spacing (miles).

The same process was repeated for the standard deviation of the expected error, which is shown in figure 46. The two functions for the intercept, a, and exponent, b, are shown in figure 47. The standard deviation of the expected error can be calculated for a particular section depending on its length and the chosen sampling strategy (spacing) according to figure 48.

Figure 46. Graph. Standard deviation of average error curves for different section lengths. This graph shows a scatter plot of the standard deviation of the average error curves for different section lengths. The x-axis represents the spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the standard deviation of the average error from zero to 8 percent. There are seven data series corresponding to different section lengths of 2, 3, 5, 10, 15, 20, and 25 mi (3.22, 4.83, 8.05, 16.1, 24.15, 32.2, and 40.25 km). They all start at a spacing of 0.2 mi (0.322 km) and continue up to 2 mi (3.22 km). As the spacing increases, the standard deviation of the average error increases for all series. A power curve is fit to all data points for each series. The shortest section length of 2 mi (3.22 km) produces the highest errors. Its standard deviations of the errors corresponding to the first and last data point are 3.1 and 7.4 percent, respectively. The longest section with a length of 25 mi (40.25 km) yields the lowest errors. Its first and last data points have a standard deviation of the error of 0.9 and 2 percent, respectively. The other five series of data points are based on 3-, 5-, 10-, 15-, and 20-mi (4.83-, 8.05-, 16.1-, 24.15-, and 32.2-km)-long sections and fall in between the 2- and 25-mi (3.22- and 40.25-km)-long sections following the same trend.
1 mi = 1.61 km
Figure 46. Graph. Standard deviation of average error curves for different section lengths.

Figure 47. Graph. Values of c and d from the seven standard deviation curves. This graph shows a scatter plot of variables c and d from the seven standard deviation curves. The x-axis represents the section length from zero to 30 mi (zero to 48.3 km), and the y-axis represents the values of c and d from zero to 0.45. There are two sets of data points in this figure. The higher values correspond to variable d, and the lowest values correspond to variable c. Power curves are fit to both sets of data. For the top curve corresponding to variable d, the R-squared value is 0.8222, and the equation of the best fit power curve is y equals 0.4218 times x raised to the power of -0.044. For the bottom curve corresponding to variable c, the R-squared value is 0.9998, and the equation of the best fit power curve is y equals 0.08 times x raised to the power of -0.519.
1 mi = 1.61 km
Figure 47. Graph. Values of c and d from the seven standard deviation curves.

 

Figure 48. Equation. Standard deviation of the expected error due to sampling. Sigma subscript epsilon equals 0.08 times L raised to the power of -0.519 times s raised to the power of open parenthesis 0.4218 times L raised to the power of -0.044 closed parenthesis. On the right side of the equation, the equation R-squared equals 0.996 is shown.
Figure 48. Equation. Standard deviation of the expected error due to sampling.

Where:
σε = Standard deviation of the expected error (percent).
L = Length (miles).
s = Spacing (miles).
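The two fitted relationships in figures 45 and 48 can be evaluated directly for any section length and spacing. The following is a simple Python transcription of those two equations; note that, judging from figure 40, figure 43, and table 30, the fitted coefficients return the error as a decimal fraction, which is converted to percent here.

    def mean_error_pct(length_mi, spacing_mi):
        """Average expected error in percent (equation in figure 45)."""
        return 100 * 0.1494 * length_mi ** -0.523 * spacing_mi ** (0.6687 * length_mi ** -0.031)

    def error_std_pct(length_mi, spacing_mi):
        """Standard deviation of the expected error in percent (equation in figure 48)."""
        return 100 * 0.08 * length_mi ** -0.519 * spacing_mi ** (0.4218 * length_mi ** -0.044)

    # Example: a 10-mi (16.1-km) section sampled every 0.5 mi (0.805 km)
    print(mean_error_pct(10, 0.5), error_std_pct(10, 0.5))   # roughly 2.9 and 1.9 percent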

The effectiveness of these two equations is demonstrated in figure 49 and figure 50. Both figures show a good fit between the predicted error and standard deviation of error versus computed values from the Monte Carlo simulation.

Figure 49. Graph. Observed average error (data points) plotted with the computed average error (lines). This graph shows a scatter plot of observed average error plotted with the computed average error. The x-axis represents the spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the average error from zero to 18 percent. There are 14 data series corresponding to different section lengths of 2, 3, 5, 10, 15, 20, and 25 mi (3.22, 4.83, 8.05, 16.1, 24.15, 32.2, and 40.25 km). There are two series for each length - one is the calculated average error, shown in solid lines, and the other the actual average error, shown in scattered data points. They all start at a spacing of 0.2 mi (0.322 km) and continue up to 2 mi (3.22 km). As spacing increases, the expected error increases for all series. The calculated average error curves are fit to the seven series of the actual average error to determine how close the calculated errors would be to the actual errors. The calculated curves were determined based on equations for calculating the average error based on the section length and spacing. The shortest section length of 2 mi (3.22 km) produced the highest errors. Its errors corresponding to the first and last data point are 3.4 and 15.7 percent, respectively. The longest section length of 25 mi (40.25 km) yielded the lowest errors. Its first and last data points have an error of 0.9 and 4.2 percent, respectively. The other five series of data points are based on 3-, 5-, 10-, 15-, and 20-mi (4.83-, 8.05-, 16.1-, 24.15-, and 32.2-km)-long sections and fall in between the 2- and 25-mi (3.22- and 40.25-km)-long sections following the same trend. The calculated curves fit the data well.
1 mi = 1.61 km
Figure 49. Graph. Observed average error (data points) plotted with the computed average error (lines).

Figure 50. Graph. Observed standard deviation of the error (data points) plotted with the computed standard deviation (lines). This graph shows a scatter plot of the observed standard deviation of the error plotted with the computed standard deviation. The x-axis represents the spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the standard deviation of the error from zero to 8 percent. There are 14 data series in this graph corresponding to different section lengths of 2, 3, 5, 10, 15, 20, and 25 mi (3.22, 4.83, 8.05, 16.1, 24.15, 32.2, and 40.25 km). There are two series for each length - one is the calculated standard deviation of the error, shown in solid lines, and the other the actual standard deviation of the error, shown in scattered data points. They all start at a spacing of 0.2 mi (0.322 km) and continue up to 2 mi (3.22 km). As spacing increases, the standard deviation of the average error increases for all series. The curves for the calculated standard deviation of the errors are fit to all seven series of the actual standard deviation of the errors to check for accuracy between the two sets of series. The calculated curves were determined based on the equations for calculating the standard deviation of the error based on the section length and spacing. The shortest section length of 2 mi (3.22 km) produced the highest errors. Its standard deviations of the errors corresponding to the first and last data point are 3.1 and 7.4 percent, respectively. The longest section length of 25 mi (40.25 km) yielded the lowest errors. Its first and last data points have a standard deviation of the error of 0.9 and 2 percent, respectively. The other five series of data points are based on 3-, 5-, 10-, 15-, and 20-mi (4.83-, 8.05-, 16.1-, 24.15-, and 32.2-km)-long sections and fall in between the 2- and 25-mi (3.22- and 40.25-km)-long sections following the same trend. The calculated curves fit the scattered data points well.
1 mi = 1.61 km
Figure 50. Graph. Observed standard deviation of the error (data points) plotted with the computed standard deviation (lines).

The final step in the development of these equations involved the influence of the coefficient of variation (COV) on the average error. For this purpose, simulations were run for various means and standard deviations. The results are shown in figure 51. The numbers in the legend correspond to the mean, standard deviation, and COV. For the same COV, the average error remains the same regardless of the mean and standard deviation. Also, it is important to point out that an increase in COV increases the average error significantly.

Figure 51. Graph. Calculated errors for a 10-mi (16.1-km)-long section with different means and standard deviations of errors. This graph shows a scatter plot of calculated errors for a 10-mi (16.1-km)-long section with different means and standard deviations of errors. The x-axis represents the spacing from zero to 1.2 mi (zero to 1.93 km), and the y-axis represents the average error from zero to 12 percent. There are nine data series in this graph based on a 10-mi (16.1-km)-long section corresponding to different standard deviations of errors and average errors. They all start at a spacing of 0.2 mi (0.322 km) and continue up to 1 mi (1.61 km). As the spacing increases, the average error increases for all trends. Although all nine series have different standard deviations and average errors, every three of them have the same coefficient of variation (COV). The trends with the same COV fall on top of each other, having the same average error for a given spacing. The series with the highest COV had the highest average errors. The series calculated based on a COV of 40 percent have an error of 3.7 and 11.2 percent for the first and last data points, respectively. The series in the middle with the 15 percent COV have an error of 1.4 and 4.2 percent for the first and last data points, respectively. The series on the lower part of the plot corresponding to a 5 percent COV have an error of 0.5 and 1.4 percent for the first and last data points, respectively.
1 mi = 1.61 km
Figure 51. Graph. Calculated errors for a 10-mi (16.1-km)-long section with different means and standard deviations of errors.

In order to consider the effect of COV, a shift factor was determined to incorporate COV into the average error equation. The equation for the COV shift factor is shown in figure 52.

Figure 52. Equation. Calculation of COV shift factor. f subscript COV equals 5.342 times COV.
Figure 52. Equation. Calculation of COV shift factor.

Although the shift factor was calculated and is available, the COV will likely not be known prior to surveying the road with an FWD. Therefore, for practical applications, it is recommended that the average error be calculated for a COV of 33 percent. This value was obtained from the FWD measurements used in this study and could be representative of the variability observed in field FWD data. This value was also used in this research for verifying the reliability approach laid out in this section. The verification of this approach is described in the following section.

Being able to predict the expected error and its standard deviation enables the development of error curves for different section lengths and reliability levels without running any more Monte Carlo simulations. Table 30 can be used to calculate the expected error in the average deflection as a result of a selected sampling strategy (spacing) for a given section length and 90 percent reliability. Additional tables were developed using this approach for a variety of section lengths and different reliability levels and are presented in appendix E.

Table 30. Errors in percentage for 90 percent reliability based on various section lengths, sample spacings, and COV of 33 percent.

Probability    Length (mi)    Spacing (mi)
                              0.2      0.3      0.4      0.5      0.6      0.7      0.8      0.9      1.0
90 percent     1              14.18    17.94    21.24    24.22    26.98    29.57    32.02    34.35    36.59
               2              10.10    12.71    14.98    17.03    18.93    20.70    22.37    23.96    25.48
               3               8.28    10.38    12.21    13.86    15.38    16.80    18.13    19.41    20.62
               4               7.19     9.00    10.56    11.97    13.27    14.48    15.63    16.71    17.75
               5               6.44     8.05     9.44    10.69    11.84    12.91    13.92    14.88    15.80
               6               5.89     7.35     8.61     9.74    10.78    11.75    12.67    13.54    14.36
               7               5.46     6.80     7.97     9.01     9.97    10.86    11.70    12.49    13.25
               8               5.11     6.36     7.45     8.42     9.31    10.14    10.92    11.66    12.36
               9               4.82     6.00     7.02     7.93     8.76     9.54    10.27    10.96    11.62
               10              4.58     5.69     6.65     7.51     8.30     9.03     9.72    10.38    11.00

1 mi=1.61 km
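The entries in table 30 (and the corresponding tables in appendix E) can be reproduced by combining the equations in figures 45, 48, and 52 with the standard normal quantile for the chosen reliability level. The combination sketched below (Python with SciPy), with the COV shift factor applied to the mean error only, is not stated explicitly in the text; it is inferred here because it reproduces the tabulated values (e.g., 4.58 percent for a 10-mi (16.1-km) section at a 0.2-mi (0.322-km) spacing), so it should be read as an interpretation rather than as the documented formula.

    from scipy.stats import norm

    def expected_error_pct(length_mi, spacing_mi, cov=0.33, reliability=0.90):
        """Expected error (percent) for a given section length, test spacing, COV,
        and reliability level; matches table 30 for cov=0.33 and reliability=0.90."""
        mu = 0.1494 * length_mi ** -0.523 * spacing_mi ** (0.6687 * length_mi ** -0.031)    # figure 45
        sigma = 0.08 * length_mi ** -0.519 * spacing_mi ** (0.4218 * length_mi ** -0.044)   # figure 48
        f_cov = 5.342 * cov                    # COV shift factor, figure 52
        z = norm.ppf(reliability)              # 1.2816 for 90 percent reliability
        return 100 * (f_cov * mu + z * sigma)

    print(round(expected_error_pct(10, 0.2), 2))   # 4.58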

Comparison to Available Data

Deflection data from various roads in three States (New Mexico, Oregon, and Kansas) were obtained and analyzed. FWD data from New Mexico and Kansas were spaced at 0.1 mi (0.161 km) per test point, while Oregon data were spaced at 0.05 mi (0.08 km). Overall statistics are presented in table 31. The roads were further separated into smaller sections based on the asphalt concrete layer thickness and base thickness (not available for every road) to create homogeneous sections. An overview of the deflection data for each State is given in figure 53. More detailed information is presented in appendix E.

Table 31. Deflection data for New Mexico, Oregon, and Kansas.

Statistics                    New Mexico    Oregon    Kansas
Maximum (mil)                 11.34         22.48     18.20
Minimum (mil)                 4.33          4.31      4.27
Average (mil)                 7.12          11.30     10.68
Standard deviation (mil)      2.51          3.90      3.09
COV                           0.352         0.345     0.289

1 mil=25.4 μm

First, similar to the theoretical approach, the entire section data were divided into groups of increasing spacing by skipping up to 19 deflection points when enough data were available. The average error for each spacing was then computed for all sections. Next, the mean was compared to the mean of the entire section, which was assumed to be the true mean. Figure 54 shows the average error of means at different spacings averaged for all the sections in New Mexico. It can be noted that as the spacing increased, the difference from the true mean increased as well, similar to the randomly generated deflections. The same trend follows for the other States.

Figure 53. Graph. Maximum and minimum section means and weighted mean deflection for all sections in each State. This graph shows a bar plot of the maximum and minimum section means and weighted mean deflection for all sections in New Mexico, Colorado, Oregon, Iowa, and Kansas. The x-axis shows the five States, and the y-axis represents the deflection from zero to 25 mil (zero to 635 microns). For New Mexico, the maximum, minimum, and weighted mean are 11.34, 4.33, and 7.14 mil (288.03, 109.98, and 181.36 microns), respectively. For Colorado, the maximum, minimum, and weighted mean are 17.85, 15.53, and 17.17 mil (453.39, 394.46, and 436.12 microns), respectively. For Oregon, the maximum, minimum, and weighted mean are 22.07, 4.48, and 11.20 mil (560.58, 113.79, and 284.48 microns), respectively. For Iowa, the maximum, minimum, and weighted mean are 16.96, 2.88, and 8.15 mil (430.78, 73.15, and 207.01 microns), respectively. For Kansas, the maximum, minimum, and weighted mean are 18.2, 4.27, and 10.68 mil (462.28, 108.46, and 271.27 microns), respectively.
1 μm = 0.039 mil

Figure 53. Graph. Maximum and minimum section means and weighted mean deflection for all sections in each State.

Figure 54. Graph. Average error for each spacing. This graph shows a scatter plot of average error for each spacing. The x-axis represents the spacing from zero to 2.5 mi (zero to 4.025 km), and the y-axis represents the average error from zero to 10 percent. The 19 data points in this plot show that as the spacing increases, the average error also increases. The first data point has an error of 1.34 percent, and the last data point has an error of 9.16 percent.

1 mi=1.61 km

Figure 54. Graph. Average error for each spacing.

The verification of the theoretical approach with field-measured data was done by comparing the expected average error, computed using the equation in figure 45 and the COV shift factor equation in figure 52, with calculated values from the field distribution. Figure 55 through figure 57 show the measured error for a specific sampling strategy (spacing) and the expected error computed by the equations developed in the theoretical approach considering a reliability level of 50 percent (i.e., without including the standard deviation). As a consequence, it is expected that at least 50 percent of the sections would have errors less than or equal to the calculated error. It can be seen from all three figures that the theoretical approach provides a reasonable estimate of the error expected when one particular sampling strategy is selected.

Figure 55. Graph. Comparison between measured error and predicted error for all New Mexico sections at 0.2-mi (0.322-km) spacing. This graph shows a bar and line plot of the average error in deflection measurements in various roads in New Mexico for a fixed spacing of 0.2 mi (0.322 km). The x-axis represents 12 section ID numbers, and the y-axis represents error from zero to 16 percent. The bars represent the percent error based on measured deflections, and the line represents the percent error based on predicted deflections for each section based on a 50 percent reliability level. Only 2 out of 23 percent error values from the measured deflections are higher than the corresponding values in the predicted deflections line.
Figure 55. Graph. Comparison between measured error and predicted error for all New Mexico sections at 0.2-mi (0.322-km) spacing.

 

Figure 56. Graph. Comparison between measured error and predicted error for all New Mexico sections at 0.5-mi (0.805-km) spacing. This graph shows a bar and line plot of the average error in deflection measurements in various roads in New Mexico for a fixed spacing of 0.5 mi (0.805 km). The x-axis represents 12 section ID numbers, and the y-axis represents error from zero to 16 percent. The bars represent the percent error based on measured deflections, and the line represents the percent error based on predicted deflections for each section based on a 50 percent reliability level. Only 7 out of 23 percent error values from the measured deflections are higher than the corresponding values in the predicted deflections line.
Figure 56. Graph. Comparison between measured error and predicted error for all New Mexico sections at 0.5-mi (0.805-km) spacing.

 

Figure 57. Graph. Comparison between measured error and predicted error for all New Mexico sections at 1-mi (1.61-km) spacing. This graph shows a bar and line plot of the average error in deflection measurements in various roads in New Mexico for a fixed spacing of 1 mi (1.61 km). The x-axis represents 12 section ID numbers, and the y-axis represents error from zero to 16 percent. The bars represent the percent error based on measured deflections, and the line represents the percent error based on predicted deflections for each section based on a 50 percent reliability level. Only 2 out of 23 percent error values from the measured deflections are higher than the corresponding values in the predicted deflections line.
Figure 57. Graph. Comparison between measured error and predicted error for all New Mexico sections at 1-mi (1.61-km) spacing.

Analysis of Frequency for FWD Data Collection

The recommended frequency of FWD data collection on pavements is dependent on the overall "rate of change" of structural conditions over time. Since pavement deflection measurements, particularly the deflection at the center of the load plate, are a direct measurement of the overall structural condition of a pavement, the LTPP database was evaluated to determine how quickly deflections change over the range of testing dates and pavement thicknesses contained in the database. Flexible pavements were evaluated separately from rigid pavements with recommendations given for each.

Flexible Pavements

The objective of this analysis was to determine the rate of change of the center deflections over time for a variety of asphalt pavement thicknesses, traffic levels, subgrade types, and climatic conditions. The rate of change is used to determine how often deflection measurements should be taken on a network-level basis. Center deflections were used because they represent the total response of all the layers in the pavement structure.

The LTPP database contains 2,873 days of FWD tests taken for 59 State codes, 297 SHRP sites, and 8 construction cycles. Each record in the database contains the average of the deflections collected on a particular day over the entire SHRP test section along the outer wheel path. The records also contain the average air and mid-depth AC temperatures for the day of the test. The deflection data were reviewed for statistical outliers, such as deflections measured on frozen pavements, which were removed. In addition, sites with too few data collection cycles were omitted from the analysis.

The analytical process consisted of the following steps:

  1. Sort the records by the total number of test days on a particular test section, as well as the standard deviations of center deflections (D1).

  2. Normalize the deflections to a 9,000-lbf (40,050 N) load for each SHRP section.

  3. Determine the degree of temperature sensitivity due to the AC layer for each SHRP section by regressing D1 versus mid-depth AC temperatures.

  4. Remove the influence of temperature from the D1 measurements by adjusting them to a reference temperature of 68 °F (20 °C).

  5. Regress D1 versus time to determine the slope (change in D1 over time).

  6. Determine the relationship between the slope and AC thickness, traffic level, subgrade type, and climatic conditions (WF, DF, WNF, and DNF).
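A compact sketch of steps 2 through 5 for a single test section is given below (Python with NumPy). The argument names and array-based data handling are illustrative assumptions; the load normalization, the log-linear temperature regression, the adjustment to 68 °F (20 °C) from figure 59, and the deflection-versus-time regression follow the steps listed above.

    import numpy as np

    def annual_deflection_change(loads_lbf, d1_um, mid_depth_temps_c, test_day_numbers,
                                 ref_load_lbf=9000.0, t_ref_c=20.0):
        """Steps 2-5: normalize D1 to 9,000 lbf, regress log10(D1) on mid-depth AC
        temperature, adjust D1 to the reference temperature, and regress the adjusted
        D1 against time. Returns the temperature slope b and the annual change (microns/year)."""
        d1 = np.asarray(d1_um, dtype=float) * ref_load_lbf / np.asarray(loads_lbf, dtype=float)  # step 2
        temps = np.asarray(mid_depth_temps_c, dtype=float)
        days = np.asarray(test_day_numbers, dtype=float)

        b, _ = np.polyfit(temps, np.log10(d1), 1)                # step 3: temperature sensitivity
        d1_adj = 10 ** (np.log10(d1) - b * (temps - t_ref_c))    # step 4: figure 59 adjustment
        slope_per_day, _ = np.polyfit(days, d1_adj, 1)           # step 5: change in D1 over time
        return b, slope_per_day * 365.0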

This process is demonstrated in figure 58 for one particular SHRP section. In this case, the State code is 50 and the SHRP ID is 1002 (section 501002), which is US-7 near New Haven, VT. This section is composed of 8.5 inches (215.9 mm) of AC over 26 inches (660.4 mm) of unbound base or subbase materials, a fine-grained subgrade, and a WF climatic designation. There were 60 days of FWD data collection available for analysis on this section, starting in September 1989 and ending in October 2003. The traffic classification for this section was high, with an AADTT of 300 and 28 percent class 9 trucks. The deflection versus temperature characteristics for log (D1) are shown in figure 58.

Figure 58. Graph. Plot of log (D1) versus temperature for SHRP section 501002. This graph shows a scatter plot of the logarithm of the deflection parameter D subscript 1 under the loading plate versus temperature for Strategic Highway Research Program section 501002. It also includes a solid line that is the best fit model for the data points. The x-axis represents test temperature minus reference temperature (T minus T subscript ref) ranging from -22 to 86 °F (-30 to 30 °C), and the y-axis represents the logarithm of D subscript 1 ranging from zero to 0.12 mil (zero to 3 microns). In general, the logarithm of D subscript 1 increases as the reference temperature increases. The data point in the far left corresponds to a temperature of -1.66 °F (-18.7 °C), and the logarithm of D subscript 1 is 0.08 mil (2.08 microns). The data point in the far right corresponds to a temperature of 65.84 °F (18.8 °C), and the logarithm of D subscript 1 is 0.10 mil (2.62 microns). The data points are concentrated close to the best fit model with a maximum distance between two points in the y-axis being 0.0078 mil (0.2 microns). The equation of the best fit line to the data is y equals 0.0114 times x plus 2.3775, and the R-squared value is 0.8553.
1 µm = 0.039 mil
°F = 1.8 × °C + 32

Figure 58. Graph. Plot of log (D1) versus temperature for SHRP section 501002.

 

The slope of the regression line is 0.0114 and is used in the following equation in figure 59 to adjust each deflection to the standard temperature of 68 °F (20 °C):

 

Figure 59. Equation. Deflection adjustment by temperature. D subscript 1adj equals 10 raised to the power of the following term: logarithm of open parenthesis D subscript 1meas closed parenthesis minus b times open parenthesis T minus T subscript ref closed parenthesis.
Figure 59. Equation. Deflection adjustment by temperature.

 

Where D1adj is the center deflection adjusted to 68 °F (20 °C), D1meas is the center deflection normalized to a 9,000-lb (4,086-kg) load, b is the slope of the regression equation, T is the average mid-depth temperature, and Tref is the reference temperature of 68 °F (20 °C).

Figure 60 shows the center deflections plotted against test date for SHRP section 501002 before and after the temperature corrections were applied.

Figure 60. Graph. Center deflection measurements versus test date - adjusted and unadjusted. This graph shows a line plot of center deflection variations in years. There are two data series in this plot: one corresponding to the unadjusted deflections and one for the temperature-adjusted deflections. The x-axis represents the year from September 7, 1989, to September 7, 2003, and the y-axis represents the deflections from zero to 17.55 mil (zero to 450 microns). A long-term trend of increasing deflections can be detected. There are some seasonal variations, but these are minor in relation to the overall trend. The unadjusted deflection line has more seasonal fluctuations of up to 9.36 mil (240 microns) compared to the temperature adjusted deflection line with a maximum fluctuation of less than 3.9 mil (100 microns).

1 µm = 0.039 mil
1 lb = 0.454 kg
Figure 60. Graph. Center deflection measurements versus test date - adjusted and unadjusted.

Note that in figure 60, a long-term trend of increasing deflections can be detected. There are some seasonal variations, but these are minor in relation to the overall trend.

By fitting a linear regression line to the temperature-adjusted data, as seen in figure 61, the rate of change of structural condition on the section can be determined. Note that the change is essentially linear. The slope of the regression line, 0.0164, represents the increase in microns per day for the center deflection. This can be converted to a yearly rate by multiplying it by 365, which equals roughly 0.234 mil (6 µm) per year.

Figure 61. Graph. Temperature-adjusted center deflections versus test date for SHRP section 501002. This graph shows a line plot of temperature-adjusted deflection variations in years. There is also a linear regression line model fit to the temperature adjusted data. The x-axis represents year from September 7, 1989, to September 7, 2003, and the y-axis represents the deflections from zero to 15.6 mil (zero to 400 microns). A long-term trend of increasing deflections can be detected. There are some seasonal variations, but these are minor in relation to the overall trend. The R-squared value of the linear regression line is 0.5965. The equation of the line is y equals 0.0164 times x minus 346.56.

1 μm = 0.039 mil
Figure 61. Graph. Temperature-adjusted center deflections versus test date for SHRP section 501002.

A similar analysis was done for the remaining selected sections. Some sections displayed decreasing deflections over time, so the absolute value of the slope was used for the analysis. A summary of slope values is provided in table 32. The average slopes were grouped by traffic level, subgrade type, AC thickness, and climate classification in table 33 through table 36.

Table 32. Annual change in D1 by SHRP test section.

State Code    SHRP_ID    Construction Number    Annual Deflection Change (microns)    Number of Test Dates
48            3739       1                      24.68                                 13
1             1019       1                      21.28                                 4
20            1010       1                      20.99                                 9
31            1030       2                      18.50                                 10
27            6251       2                      15.18                                 9
30            509        2                      14.16                                 4
90            6405       1                      12.67                                 17
49            1001       1                      10.94                                 21
1             4155       1                      9.85                                  4
83            1801       1                      8.58                                  20
4             1024       3                      8.15                                  13
27            1018z      4                      8.03                                  10
20            1005       1                      7.72                                  11
48            1060       1                      6.98                                  32
33            1001       1                      6.72                                  33
16            1010       1                      6.32                                  28
50            1002       1                      5.98                                  57
34            502        2                      4.88                                  11
48            1119       2                      4.64                                  5
56            1007       1                      4.56                                  25
30            8129       1                      4.31                                  33
23            1026       2                      4.10                                  14
28            1802       1                      3.76                                  21
2             1008       1                      2.90                                  4
28            1016       1                      2.21                                  18
2             1004       2                      1.95                                  4
1             6019       3                      1.84                                  4
1             4125       1                      1.63                                  4
23            1026       1                      1.51                                  18
24            507        2                      1.44                                  9
34            507        2                      1.43                                  13
87            1622       1                      1.22                                  24
13            1031       1                      1.18                                  4
13            1031       3                      1.18                                  26
34            509        2                      1.18                                  12
2             6010       1                      1.12                                  4
34            506        2                      1.07                                  12
13            1005       2                      1.02                                  28
8             1053       1                      0.96                                  39
24            1634       1                      0.96                                  28
27            6251       1                      0.83                                  30
37            1028       1                      0.79                                  31
34            505        2                      0.78                                  10
1             509        2                      0.71                                  6
40            4165       1                      0.68                                  29
34            504        2                      0.56                                  12
2             1002       1                      0.51                                  5
1             6012       1                      0.50                                  4
27            1028       1                      0.42                                  25
9             1803       1                      0.30                                  12
35            1112       1                      0.29                                  39
1             4127       2                      0.24                                  4
34            508        2                      0.23                                  12
34            503        2                      0.10                                  14

1 μm=0.039 mil

Table 33. Rate of deflection change by traffic level.

Traffic Classification    Average Annual Rate of Change in Center Deflection (microns)
High                      3.8
Low                       4.5

1 μm=0.039 mil

 

Table 34. Rate of deflection change by subgrade type.

Subgrade Type    Average Annual Rate of Change in Center Deflection (microns)
Fine             4.2
Coarse           4.4

1 μm=0.039 mil

 

Table 35. Rate of deflection change by AC thickness.

AC Thickness (mm)    Average Annual Rate of Change in Center Deflection (microns)
≤ 50                 7.8
51-100               4.3
101-250              4.5
> 250                2.8

1 mm = 0.039 inches
1 μm=0.039 mil

 



Table 36. Rate of deflection change by climate classification.

Climate    Average Annual Rate of Change in Center Deflection (microns)
DF         5.8
DNF        6.4
WF         2.8
WNF        3.1

1 μm=0.039 mil

 

From table 32 through table 36, the following can be concluded:

Based on the analysis of these pavement sections, a test frequency of 5 years between tests is recommended for flexible pavements.

Rigid Pavements

The rigid pavement sections that were evaluated also exhibited temperature dependency and were adjusted to a standard temperature of 68 °F (20 °C). This dependency is most likely due to slab curling at higher temperatures. After normalizing the deflections to 9,000 lb (4,086 kg) and removing the temperature effects, the annual change in D1 was less overall than that observed on the flexible pavement sections. It appears that network-level testing of rigid pavements can be less frequent than for flexible sections, perhaps up to 10 years between tests.

 
