U.S. Department of Transportation
Federal Highway Administration
1200 New Jersey Avenue, SE
Washington, DC 20590
202-366-4000



Federal Highway Administration Research and Technology
Coordinating, Developing, and Delivering Highway Transportation Innovations

Report
This report is an archived publication and may contain dated technical, contact, and link information
Publication Number: FHWA-HRT-08-035
Date: March 2008

LTPP Computed Parameter: Moisture Content

Appendix B. Characterization of Error in the SID

Relative to the least squares error associated with linear regression, assuming that y_i = ax_i + b, the error (r_i) and the variance (r_i^2) at a point can be expressed as:

r_i = y_i - a x_i - b  (99)

r_i^2 = y_i^2 + a^2 x_i^2 + b^2 - 2 a x_i y_i - 2 b y_i + 2 a b x_i  (100)

The total variance over all n points is:

\sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} \left( y_i^2 + a^2 x_i^2 + b^2 - 2 a x_i y_i - 2 b y_i + 2 a b x_i \right)  (101)

Setting the derivatives of the variance with respect to the coefficients a and b to zero gives:

\frac{\partial}{\partial a} \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} \left( 2 a x_i^2 - 2 x_i y_i + 2 b x_i \right) = 0  (102)

\frac{\partial}{\partial b} \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} \left( 2 b - 2 y_i + 2 a x_i \right) = 0  (103)

and yields two equations in the two unknown coefficients a and b:

\begin{bmatrix} 2\sum_{i=1}^{n} x_i^2 & 2\sum_{i=1}^{n} x_i \\ 2\sum_{i=1}^{n} x_i & 2\sum_{i=1}^{n} 1 \end{bmatrix} \begin{Bmatrix} a \\ b \end{Bmatrix} = \begin{Bmatrix} 2\sum_{i=1}^{n} x_i y_i \\ 2\sum_{i=1}^{n} y_i \end{Bmatrix}  (104)

which expresses the definition of linear regression. In matrix form, where there are a number (i) of independent variables x_i associated with the observations y_j (the dependent variable), forming a matrix of independent variables x_{i,j}, this can be expressed as:

\{y\} = [X]\{a\} - \{r\}  (105)

Where:

{y} = vector of j observations
[X] = matrix of x_{i,j}
{a} = vector of unknown coefficients
{r} = vector of regression errors
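As a numerical illustration of the 2 by 2 system in equation 104 (not part of the original report; the data and variable names below are invented), the normal equations can be solved directly for the slope a and intercept b:

```python
# Solve the normal equations of eq. 104 for a line y = a*x + b.
# The common factor of 2 on both sides cancels and is omitted.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])   # roughly y = 2x + 1

A = np.array([[np.sum(x**2), np.sum(x)],
              [np.sum(x),    len(x)  ]])
rhs = np.array([np.sum(x * y), np.sum(y)])
a, b = np.linalg.solve(A, rhs)           # a = 1.96, b = 1.10

# Same result as NumPy's general least-squares routine:
X = np.column_stack([x, np.ones_like(x)])
assert np.allclose([a, b], np.linalg.lstsq(X, y, rcond=None)[0])
```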

Solving for {a}:

[X]^T \{y\} = [X]^T [X] \{a\} - [X]^T \{r\}  (106)

[X]^T [X] \{a\} = [X]^T \{y\} + [X]^T \{r\}  (107)

\{a\} = \left( [X]^T [X] \right)^{-1} [X]^T \{y\} + \left( [X]^T [X] \right)^{-1} [X]^T \{r\}  (108)

Where the second part of the above expression represents the residual regression error. Formulating this on the basis of partial derivatives:

\{r\} = \{y\} - [X]\{a\}  (109)

\sum_{i=1}^{n} r_i^2 = \{r\}^T \{r\} = \left( \{y\} - [X]\{a\} \right)^T \left( \{y\} - [X]\{a\} \right)  (110)

Differentiating with respect to the vector of unknown coefficients {a} and setting to zero:

\frac{\partial}{\partial \{a\}} \sum_{i=1}^{n} r_i^2 = \frac{\partial}{\partial \{a\}} \left( \{r\}^T \{r\} \right) = \frac{\partial}{\partial \{a\}} \left[ \left( \{y\} - [X]\{a\} \right)^T \left( \{y\} - [X]\{a\} \right) \right] = 0  (111)

\frac{\partial}{\partial \{a\}} \left[ \{y\}^T \left( \{y\} - [X]\{a\} \right) - \{a\}^T [X]^T \{y\} + \{a\}^T [X]^T [X] \{a\} \right] = 0  (112)

\frac{\partial \left( \{y\}^T \{r\} \right)}{\partial \{a\}} - [X]^T \{y\} + [X]^T [X] \{a\} = 0  (113)

Rearranging and solving for {a}:

[X]^T [X] \{a\} = [X]^T \{y\} - \frac{\partial \left( \{y\}^T \{r\} \right)}{\partial \{a\}}  (114)

\{a\} = \left( [X]^T [X] \right)^{-1} [X]^T \{y\} - \left( [X]^T [X] \right)^{-1} \frac{\partial \left( \{y\}^T \{r\} \right)}{\partial \{a\}}  (115)

Where again the second part of the above expression represents the residual regression error. Drawing the analogy to the system identification method (SID):

\{y\} = \{y_m(\{a\})\} + \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \Delta\{a\}  (116)

where {y_m({a})} is the vector of model predictions. Rearranging:

\frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \Delta\{a\} = \{y\} - \{y_m(\{a\})\} = \{r\}  (117)

Where:

[F] = \partial \{y_m(\{a\})\} / \partial \{a\}, the rectangular sensitivity matrix (k x n); k = number of coefficients a
{β} = \Delta\{a\}, the matrix of change in the model coefficients (n x 1)
{r} = the matrix of change in the model prediction, or the residual error (k x 1)

Therefore:

\left[ \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \right]^T \left[ \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \right] \Delta\{a\} = \left[ \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \right]^T \{r\}  (118)

\Delta\{a\} = \left( \left[ \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \right]^T \left[ \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \right] \right)^{-1} \left[ \frac{\partial \{y_m(\{a\})\}}{\partial \{a\}} \right]^T \{r\}  (119)

This yields a solution for the changes in the model coefficients based on the residual error in the model prediction.
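The update of equation 119 can be applied iteratively. The sketch below is illustrative only (the model, data, and function names are invented, not taken from the report): it drives a hypothetical two-coefficient nonlinear model to its least-squares coefficients, computing the change in coefficients from the residual at each step.

```python
# Iterative use of eq. 119: delta_a = (F^T F)^{-1} F^T r, where
# F = d y_m / d a is the sensitivity matrix and r is the residual.
import numpy as np

def y_m(a, x):
    # Hypothetical nonlinear model: y_m = a0 * (1 - exp(-a1 * x))
    return a[0] * (1.0 - np.exp(-a[1] * x))

def sensitivity(a, x):
    # F = d y_m / d a, one column per coefficient (eq. 117)
    dy_da0 = 1.0 - np.exp(-a[1] * x)
    dy_da1 = a[0] * x * np.exp(-a[1] * x)
    return np.column_stack([dy_da0, dy_da1])

x = np.linspace(0.1, 5.0, 20)
a_true = np.array([3.0, 0.8])
y = y_m(a_true, x)                    # noise-free "observations"

a = np.array([1.0, 1.0])              # initial guess
for _ in range(20):
    r = y - y_m(a, x)                 # residual, eq. 117
    F = sensitivity(a, x)
    delta = np.linalg.solve(F.T @ F, F.T @ r)   # eq. 119
    a = a + delta                     # update the model coefficients
```

With noise-free data the iteration converges to the true coefficients [3.0, 0.8]; with noisy data it converges to the least-squares estimate.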

 

Turner-Fairbank Highway Research Center | 6300 Georgetown Pike | McLean, VA | 22101