U.S. Department of Transportation
Federal Highway Administration
1200 New Jersey Avenue, SE
Washington, DC 20590
202-366-4000



Federal Highway Administration Research and Technology
Coordinating, Developing, and Delivering Highway Transportation Innovations

 
REPORT
This report is an archived publication and may contain dated technical, contact, and link information
Publication Number: N/A
Publication Date: November 2001

 

LTPP Traffic QC Software Volume 1: Users Guide


LTPP TSSC
November 1, 2001
Software version 1.61


TABLE OF CONTENTS

1. Software Overview

2. Program Usage Flow

3. Control Panel

3.1 PREFS
3.2 DB Connect
3.2.1 Setting up a Data Source
3.2.2 Data Source Selection
3.3 Data Loader
3.3.1 Post-Processing File Location
3.3.2 Summary Data
3.3.3 Output Files
3.3.4 Transmittal Sheets
3.3.5 Processing Outcomes for Bad Data Files
3.3.6 User Notes
3.4 File Tracker
3.4.1 Plett-Plot
3.4.2 User Notes
3.5 Graph MGR
3.5.1 Site and Data Selection
3.5.2 Graph Selection
3.5.3 Data Selection Options for Vehicle Based Graphs
3.5.4 Graph Template Manager
3.5.5 Printing Graphs
3.5.6 User Notes
3.6 PRF Editor
3.6.1 Purge File Structure
3.6.2 Standard entries used in PURGE files
3.6.3 User Notes
3.7 Card Statistics
3.7.1 AVC Statistics
3.7.2 Weight Card Statistics Report
3.8 QC Report
4. Data Viewer
4.1 Viewing Classification Records
4.2 Viewing Weight Records
5. Interpreting Results of QC Processing
5.1 4-Card Data
5.1.1 Time Check Edit
5.1.2 4+ Consecutive Static Volumes Edit
5.1.3 8+ Consecutive Zero Volumes
5.1.4 Missing Hourly Volume
5.2 7-Card Data
5.2.1 Distribution of Gross Vehicle Weight
5.3 7-card, 4-card Comparisons
5.3.1 Volume Comparison
5.3.2 Vehicle Class Distribution Comparison
5.4 Generating Statistics using the ORACLE tables
5.4.1 List of Days - 1 am > 1 pm Volume
5.4.2 List of Days - 4 Consecutive Static Volumes
5.4.3 List of Days - 8+ Consecutive Zeros
5.4.4 List of Days - Missing Data
5.4.5 Statistics for Class 9 Weights
5.4.6 Volume Comparisons 4- & 7- cards
5.4.7 Graphs Excluding Purged Records
5.5 Standard Graphing Templates
5.5.1 4-card checks
5.5.2 GVW graph - Class 9
5.5.3 7-Card vs. 4-Card Volume
5.5.4 7-Card vs. 4-Card Class Distribution
5.6 Plotting Data Trends
A. LTPP QC System Requirements
A.1 Installation Instructions
B. DAT File Requirements for Operation LTPP QC

C. SHRP.DAT File

D. DEFSHT.DAT File

D.1 Keywords - General
D.2 Keywords - Classification Data Transmittal Sheets
D.3 Keywords - Weight Data Transmittal Sheets
D.4 Keywords - Volume Data Transmittal Sheets
D.5 Key Word Deficiencies
E. NEWSHT.DAT File Format
E.1 Example - NEWSHEET to list incoming files
E.2 Example - NEWSHEET Changing DEFSHT values
F. Input and Output File Conventions
F.1 File Naming - Raw Data Files
F.2 File Naming - Processed Data Files
F.3 File Naming - Extensions for Data Files
F.4 Sort Order for Input Data
F.5 Format Classification Records (4-card)
F.6 Format - Classification Records (C-card)
F.7 Format - Weight Records (7-card face)
F.8 Format - Weight Records (7-card continuation)
F.9 Format - Station Description Record (2-Card)
F.10 Format - Weight Records (W-card)
F.11 Format - Station Description Record (S-Card)
F.12 Codes used in TMG card submissions
F.13 Format - Weight Records (HELP-card)
F.14 Format - ATR Station Record (1-Card)
F.15 Format - Volume data records (3-card)
G. ORACLE Tables
G.1 LTPPFILETRACKER
G.2 LTPPD4 tables
G.3 LTPPVOL7 tables
G.4 LTPPGVW tables
G.5 LTPPRC tables
G.6 LTPPRW tables
G.7 LTPPERRORCOUNT
G.8 Codes for ERROR in ORACLE tables
G.9 Statistics Possible Using ORACLE Tables
H. Processing Resubmitted Raw Data
H.1 Data processed only by the NT software
H.2 Going from all lanes to LTPP lane only
H.3 Data not previously processed by the NT software
I. Data Evaluation and Error Identification
I.1 Card 4 Range Check Parameters
I.2 Card 7 Range Check Parameters
I.3 Continuation card 7 range check parameters
I.4 QC Edit Flag Codes
I.5 File Fatal Flaws
J. Log Files
J.1 Log File Names and Location
J.2 Log File Contents
K. LTPP QC Program Error Descriptions

L. Issues

L.1 Support volume files
L.2 Support HELP files
L.3 SHRP.DAT as an ORACLE table
L.4 DEFSHT.DAT in ORACLE
L.5 Elimination of NEWSHT.DAT
L.6 Transmittal sheets (*.inx file) in ORACLE
L.7 Processed raw data files in ORACLE
L.8 Log file for processing
L.9 Consolidate GVW tables to one per site
L.10 Consolidate VOL7 files to one per site
L.11 Consolidate error tables
L.12 Create a duplicate checking process
L.13 Pre-processor
L.14 Support Site ID cards
L.15 Alpha characters in SHRP_ID
L.16 Loading robustness
L.17 Purge Conditions
M. Transmittal Sheets
M.1 Sheet 12
M.2 Sheet 13
M.3 Sheet 11

Table 1-1 Software modifications since version 1.5

Version 1.51 - Limited number of continuation cards to 1 and set software to load sets of 2 or more but flag them as a critical error. Incoming record storage modified to accommodate the change. Tightened validation on continuation card values and sequencing. Year, month, day, and hour checked on all records, not just the first in a file. Check added for constant or increasing date and hour. Printing routine modified to ensure data always prints. Edit flags changed to record rather than vehicle basis.

Version 1.52 - Graphs per printed page increased from 2 to 4 and titles shortened to accommodate the change. Changed WIM line type on graphs so it can be more easily differentiated on printouts. Ensured storage of comments longer than 255 characters in LTPPFILETRACKER.

Version 1.53 - Created QC cover sheet version 1.0 to summarize data in terms of quantity and errors by site, lane and direction. Modified AVC and WIM types to correct a printing problem. Sorted graph output so that graphs, when printed, appear in chronological order for classification errors. Included ability to restrict graphs printed for this error type to a single year. Corrected mixed case problems in path names.

Version 1.6 - Modified QC cover sheet to remove site statistics and do all reporting on a by lane and direction basis.

Version 1.61 - Modified Daymaker to account for missing hours or days which might otherwise create a continuously increasing sequence for the purposes of error graphing. Corrected AVC and WIM labeling errors. Corrected process for counting classification errors. Corrected loading process to account for orphaned continuation cards. Corrected loading to handle errors in counting 8+ consecutive zero hours.


Document Modifications

1. Converted to Word and removed line numbering. Changed line spacing to 1.2 and placed page numbers in header rather than footer.

2. Added change lists for software and document.

3. Modified the document to reflect the software changes in user notes sections as applicable.

4. Added process flow charts to section 2.

5. Modified section 3.5 and replaced figure 3.14 to reflect capability to select 4-card error graphs by year.

6. Replaced section 3.8 to reflect new QC cover sheet report including replacement of figure 3.27.

7. Added a new subsection to section G to describe a new table, LTPPERRORCOUNT. Labeled the subsection G.7 and renumbered previous sections G.7 and G.8.

8. Replaced section H.3 since analysis software as designed is indifferent to the QC software used to initially review the file.

9. Removed Section N on Data Management. The material is now part of the LTPP directive on traffic data processing.

10. Added purge codes to Table I.2.

11. Added ORACLE codes for purges to section G.8.


LTPP Traffic QC Users Guide

1. Software Overview

The Long Term Pavement Performance traffic quality control (QC) software is designed to load, process, and produce reports on monitored traffic data submitted to the LTPP program. It is divided functionally based on the flow of data through the system to ultimately produce a data review report and data for loading into the analysis software. Eight program functions (buttons) are available from the main control panel for use by the program operator as data are loaded, processed, and analyzed for reports. The software uses an ORACLE database to store summaries of data and writes processed data to text files for use in the LTPP traffic analysis software.

The program has 4 basic types of files: input files, reference files, summary files and output files.

The input files consist of volume, classification, and weight files. The file formats are those of the Traffic Monitoring Guide (TMG) 2nd and 3rd editions; the latter uses S.I. units for weight data. Volume files are not supported by the program, nor are HELP files. Reference files contain site-specific information and file details. Input files and reference files must be correctly prepared as discussed later in this document (sections B, C, D, E, and F).

ORACLE tables include a file tracker, error summaries for classification and weight data, daily volume tables for classification and weight data, and monthly summaries of gross vehicle weight (GVW). These tables, particularly the file tracker, are essential for program functions. Routine backup of these tables is therefore important: a daily backup is strongly recommended, and a weekly backup is the minimum for good practice. The tables themselves are discussed in detail in section G.

Summary files are created within the QC software. The summary files serve as the basis for creating the various ORACLE tables. They are not modified by the purge process.

The output files are the processed files that have completed QC. They are the direct inputs into the analysis software (formerly referred to as Level 3-2-1 processing). There is a one-to-one correspondence between input and output files.


2. Program Usage Flow

The QC software is designed with an expected program sequence procedure in mind. This design is reflected in the position of the main options on the control panel. In the first position is the "DB Connect" button, which allows the user to make a connection to the database. A database connection is a fundamental requirement of all operations within the software. The following steps are a general guideline for operating the software. A series of flow charts illustrating the process are found at the end of this section. Processing requirements for the LTPP program are addressed by directives issued to FHWA LTPP contractors.

  1. Create NEWSHT.DAT after locating all files to be loaded in the current session including at a minimum filename, start date, end date, start time and end time. Transmittal sheet comments may be entered here but will need to be reentered in the File Tracker portion of the process.

  2. Verify that SHRP.DAT, NEWSHT.DAT, DEFSHT.DAT and FUNCLASS.DAT are in the DAT subdirectory of the directory (or directories) to be used in PREFS.

  3. Start the software and set preference options in the PREFS menu to the root directory for the data outputs.
    Refer to section 3.1 or section B on DAT files information regarding this feature.
  4. Identify the connection to the database: DB Connect

  5. Invoke the data loader option to load LTPP data files: Data Loader
    Select the files to be loaded and load them. Check the log file after the load is completed, according to the on-screen message, to verify that all files did in fact load. Correct any file that did not load the first time and try loading it again.
  6. Utilize the file tracker to determine the processing status of loaded files: File Tracker
    For each file that had transmittal sheet comments, enter them in the View/Edit file comments box and apply the entry to save the information. Review the Plett-plots for the sites. For files that appear to be missing, check the log files and look for unusual subdirectory values, such as a state XX subdirectory.
  7. Utilize the card statistics option to view in depth details about loaded files: Card Statistics
    Check the counts in both the card statistics and data view portions.
  8. Utilize the graph manager to view graphical information about selected data: Graph Mgr.
    Use the relevant templates to review and print the necessary graphs. (The templates will need to be created the first time the software is used by each user.)
  9. Print selected graphs for reporting and agency review.

  10. Prepare a list of potential purges for review: PRF Editor
    Chapter 5 discusses conditions where purges may be appropriate. Save the recommended purge lists WITHOUT applying them. Print a copy for inclusion in the QC packet if needed.
  11. Invoke the QC reporting menu to produce summary statistics on data loaded and file errors.
    At this point the QC report block needs to be checked in the File Tracker.
  12. Forward the QC packet for agency review.
    The date the report was generated should be included in comments for the File Tracker for at least one of the files of each type for each site included in the report. The date the report is being sent should also be entered.
  13. On receipt of QC packet comments, invoke the purge recommendations file editor to apply approved purges to loaded data for subsequent processing in the analysis software: PRF Editor.
    Open the File Tracker and check report received for at least one file of each type at each referenced site. The date the report was returned should be entered in the View/File Edit Comments section. Retrieve the saved PRF files and modify them to reflect the accepted purges. Apply the purges and save the modified purge file for reference.

Unfamiliar users should read the following sections before running the software:

3.1 PREFS
3.2.2 Data Source Selection
3.4.1 Plett-Plot
3.5 Graph Manager
3.6 PRF Editor
3.8 QC Report
B. DAT File Requirements
E. NEWSHT.DAT
F.1 File Naming Raw Data Files
F.3 File Naming Extensions for Data Files
J.2 Log File Contents
Installation instructions accompany the software and are found in section A.

Table 2-1 Initial File Preparation

First, log in, acknowledge receipt, and prepare files for loading.

Question 1: Does Input directory exist?
If NO, create input directory and then place files in input directory.
If YES, place files in input directory.

Next, create NEWSHT.dat (sec E)

Question 2: Does Output Directory exist? (sec 3.3.1)
If NO, create output directory and create dat subdirectory in output directory. Then go to Question 3.
If YES, go to Question 3.

Question 3: Are all .dat files current? (sec B)
If YES, start software.
If NO, go to Question 4.

Question 4: Is the SHRP.dat file current?
If NO, update for new year and data (sec C). Then go to Question 5.
If YES, go to Question 5.

Question 5: Is the DEFSHT.dat file current?
If NO, update for changed site conditions (sec D). Then go to Question 6.
If YES, go to Question 6.

Question 6: Is the FUNCLASS.dat file current?
If NO, update for new class (sec B). Then start the software.
If YES, start the software.
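The preparation flow in Table 2-1 can be sketched as a short script. The reference file names come from the table itself; the directory arguments and the "DAT" subdirectory name are illustrative assumptions about local setup.

```python
import os

# Reference files named in Table 2-1 (sections B through E).
REQUIRED_DAT_FILES = ["SHRP.DAT", "NEWSHT.DAT", "DEFSHT.DAT", "FUNCLASS.DAT"]

def prepare_directories(input_dir, output_dir):
    """Create the input directory and the output directory with its DAT
    subdirectory if they do not already exist (Table 2-1, Questions 1-2).
    Returns the path of the DAT subdirectory."""
    os.makedirs(input_dir, exist_ok=True)
    dat_dir = os.path.join(output_dir, "DAT")
    os.makedirs(dat_dir, exist_ok=True)
    return dat_dir

def missing_dat_files(dat_dir):
    """Return the reference files that still need to be created or
    updated before the software is started (Questions 3-6)."""
    return [f for f in REQUIRED_DAT_FILES
            if not os.path.isfile(os.path.join(dat_dir, f))]
```

Any file returned by `missing_dat_files` corresponds to a NO answer in Questions 3 through 6 and should be created or updated before starting the software.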


Table 2-2 Starting Data Processing

First, check PREFS (sec 3.1).

Question 1: Is the output directory correct?
If NO, revise PREFS. Then connect to the database (sec 3.2)
If YES, connect to the database (sec 3.2)

Question 2: Is the ORACLE option available?
If NO, set up the ORACLE ODBC. Then, select Data processing option.
If YES, select Data processing option.

Data Processing Options: 3.3 Load Data, 3.4 File Tracker, 3.5 Graph, 3.6 PRF Editor, 3.7 Card Statistics, 3.8 QC Report.


Table 2-3 Loading Data Files

First select input file(s) and open option.
Then acknowledge loading complete message box.
Next, review log file (sec J).

Question 1: Did all files load?
If NO, determine reason and modify file names or edit data as appropriate. Then continue processing with File Tracker.
If YES, continue processing with File Tracker.


Table 2-4 File Tracker Options

First, select site.

Question 1: Did one or more files successfully load for year of interest?
If NO, go to Question 2.
If YES, create Plett plot after selecting year of interest (sec 3.4.1). Then add file comments from transmittal sheets. Then go to the next site or process.

Question 2: Have all possible file corrections been made?
If NO, return to the Load Data process.
If YES, comment all failed files with reason.


Table 2-5 Purge File Editor

Select site and data type.

Question 1: Are there suggested or approved purges?

If Approved, recall purge file. Apply ONLY approved purges. Then save updated purge file and go to the next data type, site or process.

If Suggested purges, enter the list of suspect days in each month. Enter supplementary comments on purge reasons for agency reference if needed. Then save purge file and go to the next data type, site or process.


Table 2-6 Suggested Selections for Graphing Options

Question 1: Are templates present?
If NO, create templates (sec. 5.5 plus user specific). Then select site or file.
If YES, select site or file.

Next, determine appropriate time scale for graphing:
Option 1: If all continuous, then use quarterly graphs.
Option 2: If samples and continuous, then use Volumes monthly and Distributions quarterly.
Option 3: If all samples, then use monthly graphs.

Next, data types available:
Option 1: If AVC only, plot daily errors and vehicle distributions.
Option 2: If WIM only, plot vehicle and weight distributions.
Option 3: If AVC and WIM, plot all vehicle and weight graphs.

Next, print review packet either internal or external and then go to the next file, site or process.


Table 2-7 Card Statistics - Record Level Errors

First select site and file.

Option 1: AVC Data (sec 3.7.1).
If YES, go to Note 1.
If NO, go to Option 2.

Note 1:
Number of records
Number of record level errors
Number of days of data

Check before graphing.
If NO, determine principal reasons for the failure. Then go to Option 2.
If YES, go to Question 1.

Question 1: Is a useable file expected? If not, why not?
If NO, determine principal reasons for the failure. Then go to Option 2.
If YES, go to Option 2.

Option 2: WIM Data (section 3.7.2)
If YES, go to Note 2.
If NO, go to next file, site or process.

Note 2:
Number of records.
Number of record level errors.

Question 2: Are the error rate and type acceptable?
If NO, determine whether the errors are correctable by the agency. Then go to next file, site or process.
If YES, go to next file, site or process.


Table 2-8 Options for QC reports

Step 1: Select report.
Option 1 - QC cover sheet.
Option 2 - Daily Summary. Then select file (opt.).
Option 3 - Error Summary. Then select file.

Step 2: Select site, year.
Step 3: Print.
Step 4: Go to next file, site or process.


3. Control Panel

Control Panel showing the following buttons: DB Connect, Data Loader, File Tracker, Graph MGR, PRF Editor, Card Statistics, QC Report, PREFS, Exit. Also shows the database connection indicator.
Figure 3-1 Control Panel

The control panel, or main menu, shown in Figure 3-1 contains eight functional buttons, a database connection indicator, and an exit button for exiting the program. The version number can be found under About after clicking on the QC in the upper left-hand corner.

3.1 PREFS

The user preferences menu provides the ability to customize the behavior of the program. Currently, only one option is available: base data location. This location is very important to the behavior of the program and must be set to the location where the output files and reference files are to be, or have been, stored. Many directories and files are generated by the QC software during the data loading process, and most of these are used in the analysis software. Except for tables generated in ORACLE, files and directories are created in a consistent structure under the base data location. Set this directory to a location that has ample space to store data and is designated for the archival and processing of LTPP data.

All directory and subdirectory names in the path must be 8 characters or less.
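The 8-character limit can be checked before a path is entered in PREFS. This helper is an illustrative sketch, not part of the QC software; the drive-letter exemption is an assumption for DOS-style paths.

```python
def path_components_valid(path, limit=8):
    """Return True if every directory name in the path is `limit`
    characters or fewer. Drive designators such as 'C:' are exempt,
    since only directory and subdirectory names are constrained."""
    parts = [p for p in path.replace("\\", "/").split("/") if p]
    for part in parts:
        if part.endswith(":"):  # drive designator, e.g. "C:"
            continue
        if len(part) > limit:
            return False
    return True
```

For example, `C:/NCR/TRAFFIC` passes, while a path containing a component such as `NORTHCENTRAL` would fail the check.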

Figure 3-2 shows the PREFS dialog box.

LTPP Operator Preferences panel showing  Path Options and Browse button.
Figure 3-2 Sample operator preferences dialog box.

Selecting BROWSE brings up the currently accessible directories for user selection, in lieu of typing in the directory name, as shown in Figure 3-3. Open and Close perform those functions on the selected directory. OK selects the directory name to be used for PREFS. Multiple data locations for PREFS may be used within a processing session. Note, however, that using anything other than the drive name and regional abbreviation for the path will require relocating files for use by the analysis software.

shows Select Directory panel
Figure 3-3 Results of Browse Selection.

3.2 DB Connect

A database connection to ORACLE is required for most of the functionality within the software. This is due to the fact that data are loaded from files into the ORACLE database and stored there for graphing and file tracking. Tracking information is also stored within a table in the ORACLE database, LTPPFILETRACKER (see section G.1 for a discussion of this table).

To make a connection to the database, select the "DB Connect" button on the main control panel. This will invoke a "Select Data Source" window, which expects the user to select the method through which the database will be accessed.

There may or may not be any data sources listed in the selection window. If an ORACLE ODBC option is not available in the white selection window, continue with section 3.2.1, Setting up a Data Source. If there is a choice available like the example in Figure 3-4 for connecting to ORACLE, skip to section 3.2.2, Data Source Selection. The software will not recognize anything but an ORACLE ODBC connection.

Data Source panel
Figure 3-4 Data Selection Screen

3.2.1 Setting up a Data Source

To make a connection to any available database, the user must identify a method for the software to use. On the "Select Data Source" menu, click the "New…" button, which invokes a "Create New Data Source" window.

This window provides a list of database drivers that may be used to connect to a database. Figure 3-5 shows an example. In the list, select Microsoft ODBC for ORACLE. The software will only recognize this ODBC option. If this option is not available, contact the system administrator to have the driver installed on the computer. After selecting this driver from the list, click "Next>" at the bottom of the menu. An input window should now appear asking where to save this driver configuration.

Create New Data Source panel showing the Select Driver selection box
Figure 3-5 Driver selection when setting up new database

Type "ORACLE ODBC" in the window like the example in Figure 3-6, and click the "Next>" button. A final window (Figure 3-7) provides some information about the driver.

Create New Data Source Panel showing the Type Name of the Data Source box
Figure 3-6 Creating a new data source

Create New Data Source Panel showing the Finish box for creating a new data source.
Figure 3-7 Configuration of data source selection

On this window, click the "Finish" button. At this point, a log in dialog box for the database appears as illustrated in Figure 3-8. For ORACLE, there are three inputs: User Name, Password, and Server. Type the required information into the spaces for the relevant ORACLE database. The established Traffic User Account should be used to connect to the database. If the account has not been established or the required user name/password is not known, contact the local database administrator (DBA). After entering all of the information, click the "OK" button. "ORACLE ODBC" selection should now appear on the "Select Data Source" menu.

3.2.2 Data Source Selection

In the "Select Data Source" window, select the data source which allows a connection ORACLE database. A dialog box like Figure 3-8 will appear to enter a user name, password, and server. Enter the required information to connect as the established Traffic User. (See the DBA if necessary.)

Microsoft ODBC for Oracle Connect Panel showing User Name, Password, and Server entry spaces
Figure 3-8 ORACLE log in screen

Assuming all information is correct, and the database is available, the connection will be established and the Database Connection message box updated to 'Connected' on the control panel of the LTPP QC software. The software may or may not recall the items User Name and Server from session to session.

To disconnect from the database, or re-establish the connection, click again on the "DB Connect" button. If currently connected, the program will request confirmation to disconnect. Selecting "No" will leave the database connected. On selecting "Yes", the program will terminate the database connection. If the software is not closed, clicking on "DB Connect" will bring up the query "Do you want to make a new connection?" To make a new connection or reconnect, select "Yes". It is not necessary to disconnect from the database before exiting the program.

The first time the software is used, a message box will come up with a Yes/No query - "Your file tracking table does not exist. Create one?" Click on "Yes". (Clicking on "No" disconnects the database.) If this message ever appears again, it indicates that the ORACLE file tracking table has been renamed or dropped from the database. If this was unintentional, restore the table from the most recent backup.

3.3 Data Loader

The data loader permits loading of 4-, 7-, C-, and W- card files as well as 7-card files with as many as eight 2-card header records into the program (See section F for a discussion of the card types). 3-card files will not load. HELP data files (a prefix of H) will not load. While data are loaded, they are checked for errors at the file, record and daily (if applicable) levels. They are consolidated into daily summary records for reporting purposes, statistics and errors are stored in the ORACLE database, and a corresponding text data file is written for use with the traffic analysis software. Days of data are counted and summarized by calendar days. This does not depend on whether collection equipment is permanent or portable. This is the way the original LTPP traffic QC software counted days.
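The acceptance rules above can be summarized in a small lookup. This is only a sketch of the stated behavior: "H" stands in here for HELP data files, which are identified by a file-name prefix of H rather than a card type, and section F gives the authoritative format definitions.

```python
# Card types the data loader accepts, per section 3.3: 4-, 7-, C-, and
# W-card files load (2-card header records are allowed within 7-card
# files); 3-card (volume) files and HELP (H) data files do not load.
LOADABLE_CARD_TYPES = {"4", "7", "C", "W"}

def card_type_loads(card_type):
    """Return True if a file of this card/data type will be loaded."""
    return card_type.upper() in LOADABLE_CARD_TYPES
```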

To load a data file, select the Data Loader button on the main control panel. The window that appears is used to select the files to load from any input source available as shown in Figure 3-9.

Select Files to Load Panel showing Look in, File Name, Files of Type, and Load LTPP Lane Only selection boxes
Figure 3-9 Sample screen for selecting files to load

It is possible to switch between drives during a loading session as well as switch floppy disks or CD-ROMs as needed.

Double-click on the file to be loaded. To select multiple files, press the CTRL key and single-click on each file to be loaded. Alternately, press the SHIFT key and click on the first and last file of a group of files to be loaded; all files in between will be selected. Depending on the length of the path for the input files, up to 70 files can be loaded at a single time. If more than 70 are selected, NONE are loaded and no record of the loading attempt will appear in the log. When satisfied with the selection, click the Open button on the menu. Data are not loaded by the program in alphabetical order by filename. The loading success or failure of each file is written to the log file. For failures, a reason is identified as well. (See section J.)
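Because selecting more than 70 files loads nothing at all, a large selection is best split into batches before loading. A hedged helper, assuming the operator prepares file lists outside the software:

```python
MAX_FILES_PER_LOAD = 70  # exceeding this loads NOTHING and leaves no log entry

def load_batches(filenames, batch_size=MAX_FILES_PER_LOAD):
    """Split a file selection into batches small enough to load safely.
    Because the actual limit also depends on the length of the input
    path, a smaller batch_size may be needed for deep directory trees."""
    return [filenames[i:i + batch_size]
            for i in range(0, len(filenames), batch_size)]
```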

The option to load only the LTPP lane is provided; this results in the program loading only data from the LTPP lane and direction as defined in the SHRP.DAT file (see section C for a discussion of this file). The option selected is retained from load selection to load selection. This selection is also used in other portions of the program; turning it on in one section will affect the others (Graph Manager, PRF Editor, and Card Statistics). To see other lanes later, the file will need to be reloaded in its entirety. Reloading the IDENTICAL file to obtain the information on all lanes will not require the processing described in section H. However, going from all lanes to LTPP lane only in all the output files needs to be addressed per section H.1.

A "Loading LTPP Card File" message box (Figure 3-10) appears to indicate the processing status of the file. To cancel the loading process at any time, click the "Cancel" button. Canceling the loading process stops processing of the current file for the current step only. The cancel feature does not erase any subdirectory structures or temporary files that have been created for the file being read at the time of cancellation. It can damage summary.dat (See section 3.3.2 for a discussion of this file type.) files leaving a summary.tmp file instead and may remove any output file with the same extension as the input file. Canceling the loading process is not encouraged or recommended. If loading must be cancelled, treat the file as if the raw data is being reloaded per section H.1.

Loading LTPP Card File W501002_l19... panel showing Percentage Processed
Figure 3-10 Reporting loading progress

At this point in the loading procedure, the data file is being checked at the record level for data errors. Incoming data is separated into monthly files for more efficient summarization in the QC software. For each file loaded, a record-by-record table is created within the database to store records containing errors. The errors can be viewed with the Data Viewer. (See section 4 for a detailed explanation of the function.)

Once the file is loaded and saved, the daily processor loads summary files for months of input data that have been updated. Only months containing altered data are loaded so as to reduce redundant processing and increase program efficiency. During this process, multiple status bars will appear to show the processing status of each monthly file that contains new data. Upon completion of processing for all files selected to load, the status bars will disappear and either the next file will be processed or a "Load Completed" message box will appear. The process may take a considerable amount of time when loading weight data since the summary.dat files are not indexed.

It is CRITICAL to note that when data is resubmitted for a site, old data must be removed prior to processing. This includes information in both the summary files and ORACLE tables. If this is not done there will be problems with creating ORACLE tables, their contents, and the QC graphs. See section H on processing resubmitted data for details.

3.3.1 Post-Processing File Location

ORACLE tables are stored in the Traffic User account, as established by the ORACLE DBA. The remaining files are stored in the directory which was identified when selecting PREFS. The traffic data and working files are discussed here. Purge recommendation files are discussed in section 3.6, PRF Editor. ORACLE tables are discussed in section G. Log files are discussed in section J.

A working file called workindx.$$$ is used as a scratch file for data processing. It is located in the directory identified in user preferences. If processing terminates normally the file will be empty or non-existent.

Scratch files for converting C-card and W-card files are written to the root directory of the drive identified in PREFS. If processing terminates normally, there will be no trace of these files. Otherwise, files with an S prefix will be found.

The new software creates a somewhat different directory structure than the old software. There are more levels and differentiation between the traffic file types. As a result, the base directory should be the root directory on the drive with the region name. (See section 3.1, PREFS.)

The first level of the directory structure is region. The second level is state, as identified by its 2-character alpha abbreviation. The third level is the 6-character STATE_CODE, SHRP_ID combination. The fourth level is LEV4 or LEV5 (LEV1, LEV2, and LEV3 are created as a function of the analysis software). The subdirectory structure below LEV4 has a subdirectory DATA, which is split into a subdirectory for each YEAR. A YEAR subdirectory is split into a subdirectory for AVC (AVC4) and a subdirectory for WIM (WIM7). When volume data can be loaded, an ATR3 subdirectory is created. The processed data files are stored in the data type directories. Additionally, the type subdirectories are split into up to thirteen additional subdirectories: one for each month of the year and one (Non) for any data with an invalid month value. A sample of a partial, post-QC subdirectory structure appears in Figure 3-11.

Sample output of a directory structure
Figure 3-11 Sample output directory structure
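The directory levels described above can be expressed as a path-building helper. The following is an illustrative Python sketch, not part of the QC software; the function name and the two-digit spelling of the month subdirectories are assumptions for illustration.

```python
from pathlib import PurePath

def output_dir(base, region, state_abbrev, state_code, shrp_id,
               year, data_type, month):
    """Build a post-QC output path following the levels described
    above: region / state / site / LEV4 / DATA / year / type / month.

    data_type is AVC4, WIM7, or ATR3. An invalid month value is
    filed under the Non subdirectory.
    """
    site = f"{state_code:02d}{shrp_id}"   # 6-character STATE_CODE, SHRP_ID
    month_dir = f"{month:02d}" if 1 <= month <= 12 else "Non"
    return PurePath(base, region, state_abbrev, site,
                    "LEV4", "DATA", str(year), data_type, month_dir)
```

For example, output_dir("D:/", "North", "CT", 9, "1803", 1991, "AVC4", 1) yields a path ending in .../091803/LEV4/DATA/1991/AVC4/01.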

The first and last characters of the file extension, not the file content, are used in determining the year for a given site. The month and year of the file extension must match the month and year of the first record in the data file in order for the file to be loaded.

3.3.2 Summary Data

Each data type has two groups of summary data: ORACLE tables and summary files. The ORACLE tables are discussed in section G. The input files for the ORACLE tables are created one per month for any month with data. There is a set for classification (under AVC4) and one for weight (under WIM7). They are all called summary.dat. They are text files which group all of the data for a month together in chronological order. Each record indicates the file which supplied the data. The exception is continuation cards (for weight data), which are not labeled in this fashion. These files include most QC flags. Daily level classification errors are omitted. The records do not include purges. Graphs for counts after purging must be produced with other tools (spreadsheets or the LTPP traffic analysis software). See section 5.4.7 for a description of a possible process. The flags on the records in summary.dat files determine what information is added to the ORACLE tables. Data with critical errors is excluded from the total counts of vehicles and weights. Data associated with critical errors is identified in section I.

3.3.3 Output Files

Output data files are renamed using the existing file naming conventions for such processed files. (See section F.) They are stored in the AVC4 or WIM7 subdirectories by data type for the site and year under evaluation. They include the error information and are the only files to which the purge flags are applied. They are used as input to the LTPP traffic analysis program. Output files, like input files, are in ASCII fixed column format. All output files are either 4-card or 7-card records.

3.3.4 Transmittal Sheets

The QC software will automatically create the transmittal sheets (Note: Transmittal sheets are used to catalog what data has been submitted on a file-by-file basis. The originals are paper forms completed by the states and sent with the data submission. There is a different format for each type of data. The information on the sheets includes beginning and ending dates and times, classification scheme, equipment type, and any comments on the data collected. This information is used in creating or modifying the DEFSHT.DAT and NEWSHT.DAT files. See sections D and E for more information on these files.) that are required for each QC'd file. Transmittal sheets are written to the Index subdirectory of the LEV5 subdirectory. There is only one such file for the site for all years. The name for this file is xxxxxx5.inx where xxxxxx is the STATE_CODE, SHRP_ID for the site. This is a binary file with visible ASCII characters. This file is an artifact of the original LTPP traffic QC software and is not used in the ORACLE based traffic analysis software. Comments that would have appeared in the electronic version of the transmittal sheets are now entered in the 'View/Edit File Comments' box in the File Tracker module (section 3.4).

Writing correct transmittal sheets requires that two files be present in the DAT subdirectory: DEFSHT.DAT and NEWSHT.DAT (see sections D and E respectively for a discussion of these files). If the state code, SHRP ID combination is missing from DEFSHT.DAT, the information will default to blanks. If the input file is missing from NEWSHT.DAT, the QC processing will occur, but a note will appear in the log file.
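The fallback behavior just described can be sketched as follows. This is an illustrative Python sketch; the dictionary and set representations of DEFSHT.DAT and NEWSHT.DAT, and the function name, are assumptions rather than the actual file formats.

```python
def transmittal_info(state_code, shrp_id, defsht, newsht, filename, log):
    """Return transmittal-sheet fields for a site, with the fallbacks
    described above: blanks if the site is missing from DEFSHT.DAT,
    and a log note (processing continues) if the input file is
    missing from NEWSHT.DAT."""
    info = defsht.get((state_code, shrp_id), {})   # defaults to blanks
    if filename not in newsht:
        log.append(f"{filename} not found in NEWSHT.DAT")
    return info
```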

3.3.5 Processing Outcomes for Bad Data Files

The processing of various types of data errors is discussed in section J.2, Log File Contents. If a file fails to load, the error message in the log should be sufficient to indicate what actions are required to correct the problem. Section J has more details on the log file and its messages.

3.3.6 User Notes

To maximize the number of files which can be loaded, the path names used to store input data should be kept as short as possible.

  • A 4-card file loads as long as all lines are the same length as the first card in the file and that length is at least 51. If a 4-card file has data beyond column 51, the processing software reads it assuming that the additional columns are to be split as 2 columns per additional class. This is true whether or not these are actually vehicle records. The existence of a sheet 7 is not checked in reading such files. It was not checked in the old software.

  • If duplicate records (more than one record with the same date, time, direction and lane for a given site) were encountered in 4-card data, the old software would not use them in QC graphs. The QC software still does not recognize duplicate records as an error. This error is not detected until daily summaries are created in the analysis software.

  • In the old software, days of data that fall on daylight savings time are deleted because they have too few or too many hours of data. The new software behaves the same way.

  • The software is not smart enough to deal with resubmitted data whether modified files have less data, more data, or the same name with different data. IF AN INPUT FILE MUST BE REPLACED, PROCESSED FILES, SUMMARY FILES AND ORACLE RECORDS WILL NEED TO BE DELETED OR MODIFIED. See section H.1 for the instructions on this process.

  • If a file does not meet LTPP lane only criteria, an error message comes back "Input file contains no loadable data". A record is written to LTPPFILETRACKER with a 1/1/2025 processing date. The relevant subdirectories for the site, year and data type are created.

  • If multiple years are loaded in a single file, the data will be loaded but not split by year. It will all be included as data for the first year in the file.

  • In reloading data, the index file is not updated. When replacing or reloading files, the index file does not show a file creation/revision date consistent with the file reloading. This is a non-fatal error.
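The 4-card loading rule in the first note above can be expressed as a simple check. The following is a minimal Python sketch (the function names are hypothetical, not part of the software):

```python
def four_card_loadable(lines):
    """A 4-card file loads only if every line has the same length as
    the first line and that length is at least 51 columns."""
    if not lines:
        return False
    first_len = len(lines[0])
    return first_len >= 51 and all(len(line) == first_len for line in lines)

def extra_classes(lines):
    """Columns beyond 51 are read as two columns per additional
    vehicle class, whether or not the lines are vehicle records."""
    return (len(lines[0]) - 51) // 2 if lines else 0
```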

3.4 File Tracker

The LTPP File Tracker is a partially automated feature within the QC software. It is the user's responsibility to maintain some information contained within the file tracker. The purpose of this tracker is to maintain information about the status of data from loading through applying purge recommendations. The file tracker provides the ability to monitor the processing status of files loaded into the software by state and site and contains the capability to graph data received from a given site (Plett-Plot).

Select the "File Tracker" option on the control panel to invoke the LTPP File Tracker. A window appears with a variety of information as shown in Figure 3-12. PREFS may be set to any location without affecting the operation of this module.

Much of the information pertains to the currently selected file, and some of the information can be changed. For this reason the file tracker is considered "partially automated."

LTPP File Tracker panel showing File Selection, Processing Status, and View/Edit File Comments fields
Figure 3-12 File Tracker screen

Two selection lists, state and site, are provided to specify the site for which loaded files should be displayed. File loading failures result in a state XX with a SHRP ID of 0000. Loaded files will be displayed in the "Available Files" selection list - an empty list indicates no files were loaded or a selection has not previously been made to view files. The file list contains information about the file name and the versions loaded. The file name is displayed first with the period replaced by an underscore, and the version of the file is in parentheses. The first time a specific file is loaded, version A will be assigned. If a file with the same name is loaded at a later time, the next version (B) will be assigned and so on.
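The display and versioning conventions described above can be sketched as follows (illustrative Python; the function names are hypothetical):

```python
def next_version(existing):
    """First load of a file name is version A; each reload of the
    same name gets the next letter (B, C, ...)."""
    return "A" if not existing else chr(ord(max(existing)) + 1)

def tracker_display(filename, version):
    """File list display form: period replaced by an underscore,
    version in parentheses."""
    return f"{filename.replace('.', '_')} ({version})"
```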

The file type (data and format), archive location (to locate processed data), and date processed are displayed next to the file list for each file selected. On the right side of the menu are the processing steps that have been completed for the selected file. Steps marked with a red X are not yet completed, while steps with a blue check mark are completed. Steps with an empty check box next to them are provided for the user to check off when the step is completed.

Figure 3-12 illustrates the results of a file which has completed the QC process.

A white text input box, labeled "View/Edit File Comments" is provided. The user can enter comments regarding the selected version of the data file. Up to 2000 characters of information can be included containing any notes the user considers relevant. This is where any comments from the transmittal sheets should be stored electronically. This information is not carried forward to the QC report. It is stored in the LTPPFILETRACKER (See section G.1 for a discussion of this table.) table in the COMMENTS field. To print it out, an extraction must be made from that table. Standard copy and paste commands (Ctrl+C and Ctrl+V respectively) work if comments need to be repeated for multiple files or imported from other files. To electronically store notes on a site the user may also use the commenting capability in the purge file. (See section 3.6.1, Purge File Structure).

Note: Any changes made on this menu must be "Applied" before changing the selected data file or exiting the menu. Failure to apply the changes results in the loss of changes. The Apply button greys out after changes are applied. Any revision to the comments reactivates the Apply button.

3.4.1 Plett-Plot

The ability to plot a graph of data received from a given state/site is provided by the file tracker for years in which data have been received and processed to date by the software. Displayed on the right side of the file tracker menu below the processing steps list is the Plett-Plot button. Beside the button is a selection list of years for data already processed for this state and site.

To produce a Plett-Plot:

  • Select the site and year to be graphed. It is not necessary to select a file.
  • Select the Plett-Plot button next to the list of years to produce the graph.
For each of the months in which data have been received, a graph is displayed with values of 1 (received) or 0 (not received) for every day. The plot is produced for all data received with AVC showing as the left hand bar for a day and WIM data graphing on the right side of the interval. All bars are currently the same color so the results will be similar to Figure 3-13. The graphs are labeled with site, month and year. The text at the top of the graph indicates the relative positions of the data types. The software goes through the entire calendar year in sequence whether or not any data was received for a month. Any month's plot may be printed.

LTPP Graph Output panel showing a sample of a Plett-plot
Figure 3-13 Sample Plett-plot
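The values graphed for each month can be derived as shown below. This is an illustrative Python sketch, assuming the days with processed data for one data type (AVC or WIM) are known as a set; the function name is hypothetical.

```python
import calendar

def plett_values(year, month, received_days):
    """Return the 1 (received) / 0 (not received) series for every
    day of the month, as plotted by Plett-Plot."""
    n_days = calendar.monthrange(year, month)[1]
    return [1 if d in received_days else 0 for d in range(1, n_days + 1)]
```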

3.4.2 User Notes

  • On occasion, thick lined boxes in varying colors will appear around the processing status boxes which can be checked by the user. The outlines are erratic in both presence and color even for the same files. This does not affect program function.

WARNING: The option to delete files exists through this module. Selecting a file from the Available Files list and pressing the delete key on the keyboard will delete knowledge of that file from the ORACLE database. It will not, however, affect either the summary files used for graphing or the output files used for analysis even if the last version loaded is selected. Deleting files here can produce erratic results in further processing activities. The user is solely responsible for dealing with any consequences of deleting files here. Reloading files from scratch is generally the only corrective action.

3.5 Graph MGR

The graph manager produces graphics of data containing errors, gross vehicle weight (GVW) plots, and comparisons of volume and weight data over a given time period within a year. The graph manager includes a template manager to set up predefined graph sets. The template can then be run for the selected state, site, and year to produce the series of graphs saved within that template. (See section 5.5 for a set of recommended templates.)

Graphs can be printed from the Graph Manager.

The purge file editor can be accessed from the Graph Manager without having to exit it and enter the PRF module.

The Graph Manager selection screen is divided into three main sections, beginning with the site selection criteria, followed by the graphics options and the template manager, as shown in Figure 3-14.

LTPP Graph Manager panel showing File Information, Daily Volume Graphs, Data Selection Options, and Graphing Templates fields
Figure 3-14 Graph Manager Screen

If more than one output location has been defined for a site for a year, the graphs will be wrong since the ORACLE tables will be incomplete. It is the user's responsibility to ensure that all data files processed for a site for a year are output to the same place as defined by PREFS and the standard subdirectory structure. Failure to do so will prevent the QC software from producing accurate results.

3.5.1 Site and Data Selection

The entry in PREFS affects the functioning of this element of the software. The graphing requires access to the SHRP.DAT file since LTPP Lane Only is an option. If multiple PREFS have been adopted for processing (i.e. separate state subdirectories) and the current selection does not contain a SHRP.DAT entry for the currently selected state and site, the error message "Unable to find SHRP entry for the state and site." will appear. Either change the site or change the PREFS entry before continuing.

The file information section in the Graph Manager consists of a state and site to identify the data sets to be used, a data type, and a file restriction option. Selecting the active data type determines the graphing options available. For example, the weight graphs are not available when the Volume By Class data option is selected. Daily volume graphs cannot be produced without classification data being loaded.

It is possible to restrict graphs to a specific file or to the LTPP lane data. The former can be useful for plotting only errors or statistics from that file and not all files loaded for the site. By selecting the "LTPP Lane Only" option, the direction and lane should automatically be filled in under "Data Selection Options" when a new site is chosen.

Graphing is not restricted to the data for the LTPP lane if multiple lanes of data have been loaded for a site.

3.5.2 Graph Selection

A variety of graphs are available through the graph manager to plot errors and comparison values. For Volume by Class data, four graphs are available. They are based on LTPPRC tables. To limit the number of graphs produced by the software the year of interest is one of the selection options for "Daily Volume Graphs". These graphs do not exist for volumes derived from weight data.

  • 8+ Consecutive Zero Volumes - Produces a graph for any day containing eight or more zero hourly volume counts. Zero hourly volume counts are not the same as missing hours. In this case the volume recorded for the hour was zero as evidenced by the fact that a record exists in the file for that hour. An example is shown in Figure 3-15.

  • 4+ Consecutive Static Volumes - Produces a graph for any day containing four or more identical hourly volume counts. Four consecutive zero volume hours do not count as 4+ consecutive static volumes. An example is shown in Figure 3-16.

LTPP Graph Output panel showing sample of 8+ consecutive zero volumes graph
Figure 3-15 Sample 8+ consecutive zero volumes graph

LTPP Graph Output panel showing sample 4+ consecutive static volumes graph
Figure 3-16 Sample 4+ consecutive static volumes graph

  • 1 AM > 1 PM Volume Count - Produces a graph for any day containing a volume count at 1 a.m. that is greater than the observed volume count at 1 p.m. 1 a.m. is the hour starting at midnight. 1 p.m. is the hour starting at noon. An example is shown in Figure 3-17.
  • Missing Hourly Volume - Produces a graph for any day in which one or more hourly volumes were not present during loading. Figure 3-18 has an example.

    These four types of plots are created only from 4-/C-cards. They can be obtained if any classification data exists even if 'Weight by Vehicle' is selected as the active file type.
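    The four daily edits above can be sketched as checks over a day's 24 hourly values. The following is illustrative Python, assuming missing hours are represented as None; the software's internal representation is not documented here, and the function names are hypothetical.

```python
def detect_zero_run(hourly, run=8):
    """8+ Consecutive Zero Volumes: a record exists for the hour but
    the count is zero. Missing hours (None) break the run."""
    streak = 0
    for v in hourly:
        streak = streak + 1 if v == 0 else 0
        if streak >= run:
            return True
    return False

def detect_static_run(hourly, run=4):
    """4+ Consecutive Static Volumes: identical nonzero hourly counts
    (runs of zeros are handled by the zero-volume check instead)."""
    streak = 1
    for prev, cur in zip(hourly, hourly[1:]):
        streak = streak + 1 if cur == prev and cur not in (0, None) else 1
        if streak >= run:
            return True
    return False

def one_am_exceeds_one_pm(hourly):
    """1 AM > 1 PM: hourly[0] is the hour starting at midnight,
    hourly[12] the hour starting at noon."""
    return (hourly[0] is not None and hourly[12] is not None
            and hourly[0] > hourly[12])

def missing_hours(hourly):
    """Missing Hourly Volume: no record was present for an hour."""
    return len(hourly) < 24 or any(v is None for v in hourly)
```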

    The following graphs are available to display AVC versus WIM data and require information be set in the "Data Selection Options" of the Graph Manager (see section 3.5.3.)

  • AVC vs. WIM Volume A graph comparing observed daily AVC volume counts versus WIM data converted to volume counts for a given class or for all trucks as shown in Figure 3-19.

LTPP Graph Output panel showing sample 1 am 1 pm volume graph
Figure 3-17 Sample 1 a.m.>1p.m. volume graph

LTPP Graph Output panel showing sample missing hourly volumes graph
Figure 3-18 Sample Missing Hourly Volumes graph

LTPP Graph Output panel showing sample AVC vs WIM volume graph
Figure 3-19 Sample AVC vs. WIM volume graph

  • AVC vs. WIM Distribution - A graph comparing the total volume for AVC versus WIM data for each day in the date range selected similar to the example in Figure 3-20. This will print even in the absence of one of the two types of data. These values are not adjusted for count duration which may severely distort the comparison if weight data is sampled and classification data is continuous.
  • GVW - For the "Weight by Vehicle" data selection, the additional option of a GVW graph is available. This option produces a distribution graph for the date range selected in the "Data Selection Options" with percentages of vehicle weights observed in each of the vehicle weight bins as shown by the example in Figure 3-21. This graph can only be produced for individual vehicle classes 4-20. If 'All Trucks' is selected, the error message "GVW requires a single class" appears followed by "Nothing to graph."

LTPP Graph Output panel showing sample AVC vs WIM vehicle distribution graph
Figure 3-20 Sample AVC vs. WIM vehicle distribution graph

LTPP Graph Output panel showing sample GVW graph for a vehicle class
Figure 3-21 Sample GVW graph for a vehicle class

3.5.3 Data Selection Options for Vehicle Based Graphs

Various data selection criteria in the graph manager are required for the comparative and GVW graphs. Date range, class, lane, and direction information must be specified to produce the desired comparison graphs. While the comparative graphs require that a day range be specified, the GVW graph does not (it is either a monthly or quarterly graph). The following criteria are available:

  • Month - The option to produce 12 monthly graphs or 4 quarterly graphs for data that are available for the specified period of time.

  • Days - The range of days for which the graphic should be produced, in the format ##-##, where each ## is a day number indicating the beginning or ending day. For each of the months or quarters graphed, only days specified in this range will be displayed. For quarterly graphs, specify 0-31 for days to ensure the graph makes sense. Leading zeros are not required. At least two days must be specified.

  • Year - The year for the data to be graphed. This value must be typed in as there is no pick list provided.

  • Class - The vehicle class for which class specific data will be plotted. 'All Trucks' is an option for comparative graphs. The AVC vs. WIM distribution graph does not use this information when plotting but one of the class options must be selected.

  • Lane - The lane of interest for which data will be plotted. (It should be automatically set if LTPP Lane Only is selected and must be selected again if the LTPP lane is desired or after the lane value has been changed.)

  • Direction - The direction of interest for which data will be plotted. (It should be automatically set if LTPP Lane Only is selected and must be selected again if the LTPP lane is desired after the lane value has been changed.)
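The Days entry above can be validated as shown in the following sketch (illustrative Python; the function name is hypothetical):

```python
def parse_day_range(text):
    """Parse a ##-## Days entry. Leading zeros are optional, and at
    least two days must be specified."""
    start_s, sep, end_s = text.partition("-")
    if not sep:
        raise ValueError("Days must be entered as ##-##")
    start, end = int(start_s), int(end_s)
    if end - start < 1:
        raise ValueError("at least two days must be specified")
    return start, end
```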

After the graphing options are selected (one or many may be selected, but changes can be made to produce graphs with other options), click the "Display Graph" button at the bottom of the menu. If any data meet the selection criteria specified, a "LTPP Graph Output" menu will appear with the desired graph. If no data exist which meet the graphing criteria, a message - "No data available which match your criteria or graph selection" will appear. Selection of the "Print Graph" button will generate a copy to a printer.

If multiple graphs are generated by the criteria selected, click the "Next >>" button, to display the next graph for the criteria used. Once all graphs have been viewed, clicking "Next >>" brings back the graphing options menu. The "Cancel" button may be selected at any time to terminate the viewing process. It is not possible to go backwards through a series of graphs.

3.5.4 Graph Template Manager

The graphing template manager allows for the setup of a set of templates, each containing a set of graphs that can be saved and run at any time with the specified state, site, and year. This is useful for consistently producing the same set of graphs for different sites. Daily graphs (4-card errors) cannot be in the same template as monthly/quarterly graphs, which may use WIM data.

Two windows are displayed in the template manager, the left displaying saved templates, and the right displaying graphs saved within the selected template. Begin by creating a new template with the "New" option under the "Templates" window. A new template is created with a default name, which can be changed with the "Rename" option.

To save a new graph to the new template, set up the graphing options on the graph manager menu, including desired data type, graphs to produce, and data selection options. Once set, select the "Capture" button under the "Graphs in Template" window of the template manager. A new graph will appear, numbered consecutively, in the graphs window. A user may store up to 30 templates, each with 10 graphs.

To run a template at any time, select the state and site for the data to be graphed, and specify the year of the data to graph. Select the template containing the graphs to be produced, and select the "Run" button under the "Templates" window. To view any of the saved graphs, select the graph in the "Graphs in Template" window, which will update the Graph Manager screen to reflect the settings of the saved graph.

Templates are user specific. The instructions for each are contained in a file - templates.dat. This file is saved in WINNT\Profiles\user name\LTPP. Procedures for setting up a minimum recommended set of templates are contained in section 5.5.

3.5.5 Printing Graphs

The option to print graphs is provided within the Graph Manager. Graphs must be printed individually whether defined and selected individually or produced using a template. Graphs are printed one or four per page. Printing directly from the screen display produces one graph per page. Printing using the Graph Manager "Print" button results in four graphs per page. All graphs on the page are the same type. If fewer than four of the type exist, a new page is started for the next graph type. Any available printer can be used or the graphs may be printed to a file. The latter course is not recommended since the files contain all of the printer control characteristics. For the most readable graphs, printers should be set in landscape mode prior to printing.

3.5.6 User Notes

  • It is not possible to select a single day of data to graph on a monthly graph for AVC vs. WIM volume graphs. At least two days must be specified.

  • If the full month is not specified when selecting the quarterly option for AVC vs. WIM volume graphs, the only labels on the x-axis are the first day of the first month in the quarter and one or more intervals later.

  • When graphing AVC vs. WIM volume graphs, restricting the graph to a specified file restricts only the data of the type in the selected file. Thus, if a classification file is the "Restrict to File" selection, all WIM data for the year will be graphed as part of the comparison. If a WIM data file is the file to which the graph is to be restricted, all of the class data for the year will be presented.

  • The color of the data line for AVC or WIM may change when the data type under the "Restrict to File" option is classification and there is a gap in data for AVC vs. WIM distribution graphs. The reviewer should pay careful attention to the individual graphs and highlight the data of interest when comparing a series of graphs.

  • If the year of the restricted file and the data selection option do not match for AVC vs. WIM distribution graphs, the graph still plots if data exist for the relevant file type in the year selected. For example, restricting to a 1991 class file with 1993 as the year to plot gives 1993 WIM graphs with no classification data.

3.6 PRF Editor

The Purge Recommendations File (PRF) Editor is used to enter purge recommendations into the software. These recommendations instruct the software to purge (exclude) data in the given data file from inclusion in the daily summaries and annual estimates. The data will still exist in the output file used by the analysis software. To accomplish this, the PRF Editor provides a graphical interface through which to exclude data within specific date ranges.

The editor is started with the "PRF Editor" button on the control panel. A "PRF Editor" screen is presented with a variety of input windows to specify which data should be excluded for the given state and site, as shown in Figure 3-22. The PREFS selection will affect the function of this element if the user intends to have the LTPP lane selected automatically for the lane and direction entries.

Select the state and site for which to purge data as well as the data type to be affected by the purge. Checking the "Use LTPP Lane Only" checkbox should result in the LTPP lane and direction being automatically filled in for the lane and direction boxes for rows with a date range to purge.

PRF Editor (none) panel showing Data Source, Purge Recommendation fields
Figure 3-22 Screen for the purge file editor

To begin entering purge recommendations, start with the Purge Dates column of the menu. Dates can be entered by typing or point and click. When typing dates a range must be entered even if only one day is to be purged. To graphically enter a purge date range, click the "Select…" button next to the date input window. Each window requires using the accompanying "Select" button. A calendar appears that allows selection of the date range to be purged.

The calendar will start by displaying the current month and year. The month and year may be rapidly changed as follows. Click on the year to get a list of years that may be scrolled through to pick the desired year. Click on the month to get a pick list of months. The months may also be changed by using the arrows or clicking on the greyed out dates of the previous or following months. To select a range of dates, simply click the mouse on the starting date and move the mouse, holding down the button, across the range of dates within a month. Hit Enter to accept the date(s) selected. Up to 31 days can be selected at a given time. Entering a range of dates longer than a month will not be accepted by the software. Date ranges for purges do not have to match the range of dates associated with an individual file(s). To dismiss the calendar at any time, press the Esc key.

Screen showing example of the calendar to pick purge dates
Figure 3-23 Calendar example for purge date picks

For the given date range, select the lane and direction (if the "Use LTPP Lane Only" option was not selected). Each lane and direction affected must be purged separately. It is recommended that each lane and direction be saved in a separate file.

A list of standard comments is presented to choose from or another reason may be manually entered for the purge recommendation.

The checkbox listed under the Purge column indicates whether the data should actually be purged (an X in the box), or whether this recommendation identifies a potential problem (an empty box). In the latter case, DO NOT check the Purge box for data matching the specified criteria. This prevents accidentally applying the purge, which CANNOT be undone. Up to 53 purge recommendations may be entered by clicking the up or down arrows on the left hand side of the dialog box to scroll through the input windows. Multiple purge files may be created for a data type for a year.

To save the current recommendations to a file select the "Save" or "Save As" buttons at the bottom of the menu. The name of the *.prf file currently being used is shown in parentheses at the top of the dialog box. A file name and location must be entered. A .prf extension is automatically added to the file. A systematic file naming and storage convention is suggested. Year and data type may be sufficient as a file name if the file is located in the site-level subdirectory and only a single lane and direction is affected. If the file is located in the year level subdirectory, file type may be sufficient. At a later time, this file can be loaded with the "Open" button to recreate the exact purge recommendations that were entered for the specified data file. If the purge recommendations file is not saved, the purges applied will need to be determined by manual inspection of the analysis files or the ORACLE tables, LTPPD4 and LTPPVOL7.

To return the window to all blank entries select the "Reset" button at the bottom of the menu. Saving the .prf file at this point will erase all information previously stored in it.

Identify accepted purge recommendations by checking the associated purge box(es). To implement the purge recommendations select the "Apply" button at the bottom of the menu. The program will prompt for confirmation to apply the current purges. Once purges are applied, they CANNOT be removed. New purges can be added or existing purge reasons can be modified. Purges are applied to the data tables in ORACLE and the output data file used for processing in the analysis software. To be able to see that purges have been applied by viewing the purge file itself, the file must be saved when the purge boxes are selected (preferably immediately after applying the purges.)

A comments section is included in the PRF screen. Comments entered in this box are restricted to 64 characters per line. This is where any comments on the purge recommendations that should be seen by reviewers are saved. There is no limit on the number of lines allowed. The comments are saved in the purge file and will be printed out when the purge file is printed.

3.6.1 Purge File Structure

The purge file is an ASCII text file. While separate purge files must be generated for class and weight files, the structures are similar as shown by the two following examples. There may be up to 53 lines beginning PURGE in each file.

#
# LTPP Purge Recommendations File
# Generated on 03/24/2000 at 23:16
#
# PURGE entry format is:
#   StartDate-EndDate,Lane,Direction,Reason,Purge (1=Yes, 0=No)
#
STATE 9
SITE 1803
COMMENTS STATE CONCURRED WITH RECOMMENDATIONS 7/31/94
END*COMMENTS
DATATYPE Volume by Class
PURGE 01061991-01091991,1,1,"sample for manual", 0
....

#
# LTPP Purge Recommendations File
# Generated on 03/24/2000 at 23:16
#
# PURGE entry format is:
#   StartDate-EndDate,Lane,Direction,Reason,Purge (1=Yes, 0=No)
#
STATE 9
SITE 1803
DATATYPE Weight by Vehicle
COMMENTS The data is considered suspect because previous data
COMMENTS for January have volumes only 1/3 of those shown.
COMMENTS In addition, the average ESAL value has doubled
COMMENTS from the previous month.
COMMENTS
END*COMMENTS
PURGE 01061991-01091991,1,1,"sample for manual", 1
....

The only extension the program recognizes to retrieve purge files is .prf. The .prf extension on a system running Internet Explorer may be associated as a PICS Rules file. In this instance it will not be possible to open the file for review directly from Windows Explorer. The file will need to be opened and printed from a text editor or word processor instead.
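For readers who want to post-process purge files outside the QC software, the structure shown above is simple to parse. The following Python sketch is a hypothetical helper, not part of the LTPP software; it reads the keyword lines and PURGE entries illustrated in the two examples:

```python
import re


def parse_purge_file(text):
    """Parse the keyword lines of an LTPP purge (.prf) file.

    Hypothetical reader for the format shown in section 3.6.1; the
    real software's parser is not published. Returns a dict holding
    the state, site, data type, comment lines, and PURGE entries.
    """
    result = {"state": None, "site": None, "datatype": None,
              "comments": [], "purges": []}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, "#" comment lines, and the comments terminator.
        if not line or line.startswith("#") or line == "END*COMMENTS":
            continue
        if line.startswith("STATE "):
            result["state"] = int(line.split()[1])
        elif line.startswith("SITE "):
            result["site"] = line.split(None, 1)[1]
        elif line.startswith("DATATYPE "):
            result["datatype"] = line.split(None, 1)[1]
        elif line.startswith("COMMENTS"):
            result["comments"].append(line[len("COMMENTS"):].strip())
        elif line.startswith("PURGE "):
            # PURGE StartDate-EndDate,Lane,Direction,"Reason",Flag
            m = re.match(
                r'PURGE (\d{8})-(\d{8}),(\d+),(\d+),"([^"]*)",\s*([01])',
                line)
            if m:
                result["purges"].append({
                    "start": m.group(1), "end": m.group(2),
                    "lane": int(m.group(3)), "direction": int(m.group(4)),
                    "reason": m.group(5), "apply": m.group(6) == "1",
                })
    return result
```

Unrecognized lines are simply ignored rather than treated as errors, which keeps the reader tolerant of minor formatting differences between files.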

3.6.2 Standard entries used in PURGE files

The following direction codes are used in PURGE files:
1 - North
2 - Northeast
3 - East
4 - Southeast
5 - South
6 - Southwest
7 - West
8 - Northwest

The contents of Reason tell the software what code follows Q at the end of a purged record. The following reasons are provided on a pick list for use in assigning a code to purged records. An "Other" option is not provided; a reason entered manually by the user results in a code of ?. The software will not indicate if the purge being applied is inappropriate for the data type selected.

8+ Consecutive Zeros (r)
Time Check (s)
Missing Data (t)
Zero Data (u)
Improper Direction Designation (v)
Improper Lane Designation (w)
7 Card Greater Than 4 Card Daily Volume by Significant Difference (x)
4+ Consec Nonzeros (y)
Zero Daily Volume (+)
4 Card Greater than 7 Card Daily Volume by Significant Difference (z)
Over Calibrated (&)
Under Calibrated (#)
Large % of Vehicles > 80 KIPS (^)
Large % of Vehicles < 12 KIPS (~)
Lower Volumes Than Expected - Possible Sensor Problem (|)
Misclassification Error (>)
Atypical pattern (<)
user entered (?)
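Restated as lookup tables (a hypothetical helper for post-processing, not code from the LTPP software), the direction codes and the reason-to-code mapping above might look like:

```python
# Direction codes used in PURGE files (section 3.6.2).
DIRECTION_CODES = {
    1: "North", 2: "Northeast", 3: "East", 4: "Southeast",
    5: "South", 6: "Southwest", 7: "West", 8: "Northwest",
}

# Standard purge reasons mapped to the single-character code that
# follows "Q" on a purged record.
PURGE_REASON_CODES = {
    "8+ Consecutive Zeros": "r",
    "Time Check": "s",
    "Missing Data": "t",
    "Zero Data": "u",
    "Improper Direction Designation": "v",
    "Improper Lane Designation": "w",
    "7 Card Greater Than 4 Card Daily Volume by Significant Difference": "x",
    "4+ Consec Nonzeros": "y",
    "Zero Daily Volume": "+",
    "4 Card Greater than 7 Card Daily Volume by Significant Difference": "z",
    "Over Calibrated": "&",
    "Under Calibrated": "#",
    "Large % of Vehicles > 80 KIPS": "^",
    "Large % of Vehicles < 12 KIPS": "~",
    "Lower Volumes Than Expected - Possible Sensor Problem": "|",
    "Misclassification Error": ">",
    "Atypical pattern": "<",
}


def reason_code(reason):
    """Return the purge code for a reason; user-entered text maps to '?'."""
    return PURGE_REASON_CODES.get(reason, "?")
```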

3.6.3 User Notes

The software will not report an error if the lane or direction does not exist in the file.

Report Cards Statistics panel showing State, Site ID, Available File, File Type, and Skip Purged Records fields
Figure 3-24 Screen for selecting card statistics

3.7 Card Statistics

The Card Statistics menu produces a data statistics report on screen for a given file that has been loaded by the software. This report also includes the Data View option for viewing data records. The Card Statistics window as shown in Figure 3-24 includes an option to exclude purged records from the statistics report. These records are not excluded from the Data View report.

Select the state and site for the statistics reviews. A list of files and versions is displayed in the "Available Files" selection window. Select the file for the data statistics report. General information about the data type in the file and the date the file was processed is shown on the right side of the menu. To exclude purged records from the statistics report (so that errors for purged records are not shown), check the "Skip purged records" checkbox. Different reports are generated for AVC and WIM data.

Class Card Statistics Report panel showing File Information, Record-level Errors, and Daily-level Errors fields
Figure 3-25 Screen for classification data statistics report

3.7.1 AVC Statistics

Selecting an AVC data file produces a screen like the one in Figure 3-25. The information is generated from the LTPPD4 and LTPPRC tables (See section G for information on these ORACLE tables.). The totals are produced by summing the number of daily errors and the number of records with errors, respectively.

To view data in depth for the site to which this data file belongs, select the DataView option on the menu, which invokes the LTPP data viewer (See Section 4, Data Viewer).

3.7.2 Weight Card Statistics Report

For WIM data files the report provides information on record-level errors only, since those are the only type the software recognizes for weight data. The report uses LTPPRW (See section G.4 for a discussion of this table type.) tables. To view errors on a screen like the one shown in Figure 3-26, the relevant file must be selected.

3.8 QC Report

There are three options for QC report generation: a cover sheet, a record counting option, and a file-level error summary.

Weight Card Statistics Report panel showing File Information, Error Information fields
Figure 3-26 Screen for weight records statistics report

The QC cover sheet generation is the most frequently used, as it summarizes the data and volumes provided. The report is generated at the site and year level on a by-lane, by-direction basis. All data included in the ORACLE tables for that year is reported, as no "LTPP Lane only" option exists.

For each lane and direction, the report indicates by month the number of days of classification data received and how many of them had no critical errors and are therefore suitable for use in annual estimates. Within the classification section, the number of vehicles by class, the total trucks, and the total vehicles by month are also reported. The same information is translated into percentages for the truck population only, so that the percentage distribution of trucks by class and the percentage of trucks on a monthly basis can be viewed. The end of the classification section indicates the number of days by error type.

The second section of the report covers weight data. For each month, the number of days of data, the number of records received, the number passing QC, and the error percentage are reported. Then the by-month, by-vehicle-class statistics computed for the classification records are computed for the weight records. The end of the section indicates the percentage of Class 9 vehicles over 80 kips or under 20 kips in each month and tabulates the total number of errors by type observed in the weight records.

LTPP QC Report Generation panel showing Select a Single Report Type and Report Selection fields
Figure 3-27 QC Report selection screen

The second option is a summary of the total number of records by data type by lane by direction received for the year at a site. Only one year and site are printed per page. Each year and site must be selected separately. This count may also be done at the file level.

The third option is a summary of errors found at the file level. Only one file is printed per page and each file must be selected separately.

Invoke the QC Report Menu with the "QC Report" button on the control panel. A screen like Figure 3-27 appears with a series of boxes to select report type and its site, year and file where applicable.

After selecting all the options necessary, click the "Print" button to invoke the printer selection menu. Any available printer, or printing to a file, may be selected. No other options have any effect on the printing process. After selecting the printer, click "OK" to begin the printing process, or click "Cancel" to cancel it. If graphs have been printed using the selected printer, the layout option, Landscape or Portrait, should be checked before printing.

Any comments and notes to be included in the report should have been entered in the purge file. The purge file must be printed separately as a text file for inclusion in a review packet. If comments to be included in the review are located in the LTPPFILETRACKER an SQL statement must be used to generate the relevant text file.


4. Data Viewer

The Data Viewer allows review of records stored in the ORACLE database after a data file is loaded. It is available through the Card Statistics button on the control panel. It is helpful for looking at each record to determine what a possible cause for an error may be and what sorts of problems may exist with a given set of data.

To invoke the Data Viewer, use the Card Statistics button on the control panel. A data file report will be generated with a Data View button at the bottom. Select this button to invoke the Data Viewer.

Depending on the type of data being viewed, the viewer will contain information specific to that data type.

4.1 Viewing Classification Records

The Class Data Viewer shown in Figure 4-1 uses the LTPPD4 (See section G.2 for a description of this table type.) tables for the site. As can be seen from the figure, there are three options for reports: By Day; By Day, errors only; and By Hour, errors only. The option to restrict the report to the selected file changes which reports can be viewed.

Volume Data Viewer panel showing File Information, Available Files, Record Number, Date/Time of Collection, Record Error Description, Collection Site Data, and Volumes by Class fields
Figure 4-1 Sample classification data viewer

If the option selected is 'By Day', all days in the LTPPD4 table will be displayed in the order they were loaded into the software, whether or not they have errors. The data within a given file will be in chronological order because that is a requirement for successful loading of data. However, the files are not loaded in file extension order and therefore do not appear in date order in the LTPPD4 table.

In order to restrict the records viewed to a specific file, the 'Restrict to Selected File' option must be checked. The file selected here does not need to match the one selected in the Card Statistics dialog box.

When the option selected is 'By Day, errors only' all records with errors in the LTPPD4 table will be displayed in the sequence they are encountered in the table (recall that the loading order is not chronological). To see only the errors in a specific file, the 'Restrict to Selected File' box must be checked.

The 'By Hour, errors only' option is only available when the 'Restrict to Selected File' box is checked. The data for this display comes from the LTPPRC (See section G.3 for a description of this table type.) table associated with the selected file. It will show all hourly records for a day which has an error whether or not they contribute to the error.

The current record number out of the total number of records is indicated (e.g., Record #: 1/30 in Figure 4-1). Use the left and right arrows to scroll through the records to obtain information about the date and time of collection, the error status of the record, and the data on that record.

Weight Data Viewer Panel showing File Information, Record Number, Date/Time of Collection, Record Error Description, Collection Site data, Axle and Total Weights, Axle and Total Spacings fields
Figure 4-2 Sample weight data viewer

There is no LTPP lane only option for this review. The user must know that information (lane number and cardinal direction) if that data is of particular interest in reviewing the error information.

4.2 Viewing Weight Records

The Weight Data Viewer illustrated in Figure 4-2 can only be used to view weight data records with errors. It works on a file by file basis using the LTPPRW (See section G.4 for a description of this table type.) table associated with a file to obtain the necessary information. The errors are presented in the order in which they are encountered. It is possible to go both forwards and backwards through the list of errors.

There is no LTPP lane only option for this review. The user must know that information (lane number and cardinal direction) if that data is of particular interest in reviewing the error information.


5. Interpreting Results of QC Processing

This section describes the basic quality control tests the Long-Term Pavement Performance (LTPP) program applies to state and provincial highway agency data. State agencies can use these same tests to help identify potential errors in any weigh-in-motion or vehicle classification data, whether or not the data are intended for submission to the LTPP program.

Note to the Reader:
The items in "red" in this section reflect functionality that existed in the SAS version of the software but does not currently exist.

The LTPP QC software automates these checks through a program that uses C++ and ORACLE 8.0 in the WINNT 4.0 environment. Users are able to control the processing through the software's Control Panel. Directions for running these programs are provided in sections 1-5 of this document. The program produces a number of output reports and graphs that require interpretation. Essentially, the LTPP software summarizes a data set in a series of simple graphs that can be used to identify "unusual occurrences" in the submitted traffic data. The reviewer must then determine whether these "unusual occurrences" are actually invalid data or rather the result of unusual traffic patterns. A series of examples is provided to illustrate how the quality control checks work and provides information on interpreting the output from the LTPP software.

Note that all graphs produced by the LTPP software are lane- and direction-specific for a relevant period. The software can create graphs for all lanes and directions for which data are submitted and loaded.

The revision of the software has eliminated a number of functions present in the original version of the QC software. Most of that functionality can be reproduced by using SQL on the ORACLE tables and spreadsheets if required. Section 5.4 discusses how this can be done. In order to fully understand this section the user should be familiar with section G on the ORACLE tables associated with this application.

QC edit checks are the first set of quality control checks. The first check counts the number of records (usually 4-card records) present for each day and examines the hourly traffic volume patterns that occurred on those days. The checks performed on volume patterns are discussed in sections 5.1 and 5.2. Gross vehicle weight analysis is a quality control check of 7-card data intended to detect both unreasonable scale calibration and scale calibration shifts over time. It is discussed in section 5.3.

5.1 4-Card Data

The first set of graph types produced by the LTPP software points out potential equipment failure by showing hourly volume patterns for 4-card records. Each of the QC checks described below results in a graph whose heading indicates the type of potential error detected. Each QC check produces one graph for each lane and direction per day in which an error is detected. If the QC check detects more than one occurrence of a specific error per quarter (for one lane and direction), the hourly volumes for those days are printed on the same graph, so a graph can become quite cluttered if the QC program detects data "errors" on a large number of days. As a result, a substantial number of graphs can be generated.

5.1.1 Time Check Edit

Graph: QC Edit Checks for 4-Card Data Edit = Time Check, showing Volume versus Time, with Volume ranging from 2500 to 500
Figure 5-1 Time check edit - Example 1

The TIME CHECK edit graphs the hourly volumes for any day in which total volume at 1:00 a.m. exceeds total volume at 1:00 p.m. for the same lane and direction. If 1:00 a.m. volumes are larger than 1:00 p.m. volumes, the clock may be incorrect, or equipment failures may have arisen midday. Instructions to generate a listing of days which fail this criteria in a text file rather than a graph are found in section 5.4.1.
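The test itself is simple. The sketch below is illustrative Python only; the QC software applies this check internally against the ORACLE tables:

```python
def fails_time_check(hourly_volumes):
    """Return True if the 1:00 a.m. volume exceeds the 1:00 p.m. volume.

    hourly_volumes: 24 hourly totals for one day, lane, and direction,
    indexed by starting hour (index 1 = 1:00 a.m., index 13 = 1:00 p.m.).
    Illustrative sketch of the Time Check edit, not production code.
    """
    return hourly_volumes[1] > hourly_volumes[13]
```

A day flagged by this helper would produce one of the graphs shown in the examples that follow.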

Examples of output from the Time Check edit routine are shown in Figures 5-1 through 5-7.

Graph: QC Edit Checks for 4-Card Data Edit = Time Check, showing Volume versus Time, with Volume ranging from 800 to 100
Figure 5-2 Time check edit - Example 2

In Figure 5-1, there is no direct evidence showing whether the pattern was caused by a special event that dramatically increased the volume or by a malfunctioning machine; in fact, the high volume around midnight could have been caused by either. However, Figure 5-2 shows that all volumes at night are greater than the noon hour volume. These data are questionable.

Figure 5-3, Figure 5-4, and Figure 5-5 also show questionable data.

Graph: QC Edit Checks for a 4-Card Data Edit = Time Check, showing Total Volume versus Time, with Total Volume ranging from 0-50
Figure 5-3 Time Check Edit - Example 3

Graph: QC Edit Checks for 4-Card Data Edit = Time Check, showing Volume versus Time, with Volume ranging from 450 to 0
Figure 5-4 Time Check Edit - Example 4

Graph: QC Edit Checks for 4-Card Data Edit = Time Check, showing Volume versus Time, with Volume ranging from 450 to 0
Figure 5-5 Time Check Edit - Example 5

Graph: QC Edit Checks for 4-Card Data Edit = Time Check, showing Volume versus Time, with Volume ranging from 16 to 0
Figure 5-6 Time check edit - Example 6

It is difficult to decide whether Figure 5-6 shows valid data. This volume pattern can occur frequently when the hourly volumes are very low at a given site.

Graph: QC Edit Checks for 4-Card Data Edit = Time Check, showing Volume versus Time, with Volume ranging from 500 to 0
Figure 5-7 Time check edit - Example 7

Figure 5-7 shows irregular on/off patterns. Two hourly volumes seem to be combined into one hour volume. These data would be purged, as it is extremely unlikely that traffic would behave in this manner.

5.1.2 4+ Consecutive Static Volumes Edit

The 4+ consecutive static volumes edit graphs the hourly volumes for every day during which four or more consecutive hours have the same non-zero volume. If this happens, the data may or may not be invalid. State personnel should review these data, and if the data are determined to be invalid, they should be removed from the database. If a site normally exhibits low traffic volumes, these data may usually be left in the database, because low-volume sites often exhibit identical volumes (often 1 vehicle per hour) for several consecutive hours early in the morning. However, if a site normally has higher volumes, these data are usually flagged for removal because it is statistically unlikely that volumes will be exactly the same four hours in a row. Nevertheless, the choice of when to remove these data is left to the state reviewer. Instructions to generate a list rather than a graph are found in section 5.4.2.

No example of four or more consecutive non-zero hourly volumes is shown. The production of graphs for 4+ Consecutive Non-zeros occurs frequently in the QC analyses, especially with 7-card data from locations where hourly truck volumes are low. However, most of these occurrences represent valid conditions. Therefore, this edit check is ignored most of the time. If the hourly volumes are high and the repeated non-zero hourly volume is also high, then this day of data might be purged.
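The detection logic amounts to a run-length scan over the 24 hourly volumes. The sketch below is illustrative Python, not LTPP code; the run length is a parameter so the same idea can be applied with other thresholds:

```python
def has_static_run(hourly_volumes, run_length=4):
    """Detect run_length or more consecutive identical non-zero volumes.

    Sketch of the 4+ Consecutive Static Volumes edit (section 5.1.2).
    Zero hours are excluded; they are handled by the separate
    8+ Consecutive Zero Volumes edit.
    """
    run = 1
    for prev, cur in zip(hourly_volumes, hourly_volumes[1:]):
        # Extend the run only while the value repeats and is non-zero.
        run = run + 1 if cur == prev and cur != 0 else 1
        if run >= run_length:
            return True
    return False
```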

5.1.3 8+ Consecutive Zero Volumes

The 8+ consecutive zero volume edit graphs the hourly volumes for every day during which the hourly volumes recorded at the site are zero for eight or more consecutive hours. This event usually indicates that some portion of the equipment (typically axle sensors) may have failed, but the data collection equipment is still producing hourly records. Instructions to generate a list (a text file listing dates which are identified by this check) rather than a graph are found in section 5.4.3.

Graph: QC Edit Checks for 4-Card Edit = 8+ Consec Zeros, showing Total Volume versus Time, with volume ranging from 0 to 40
Figure 5-8 8+ consec zeros edit

Figure 5-8 shows a pattern where the hourly traffic volumes from 10 a.m. to 8 p.m. are zero. These data should be purged. This edit check may detect errors when devices are malfunctioning (outputting zero hourly volumes).
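The zero-volume check follows the same run-length idea. A sketch in illustrative Python (hypothetical helper, not the software's implementation):

```python
def has_zero_run(hourly_volumes, run_length=8):
    """Detect run_length or more consecutive zero hourly volumes.

    Sketch of the 8+ Consecutive Zero Volumes edit (section 5.1.3);
    a long run of zeros suggests failed sensors on equipment that is
    still producing hourly records.
    """
    run = 0
    for v in hourly_volumes:
        run = run + 1 if v == 0 else 0
        if run >= run_length:
            return True
    return False
```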

5.1.4 Missing Hourly Volume

The missing hourly volumes edit graphs hourly volumes for each day during which 4-card records are present for some, but not all, 24 hours of a day. This QC check points out when counters have failed and are no longer producing 4-card records. The graphs show the hours for which data are present for these days, and they are often helpful in explaining days with extremely low volumes that appear elsewhere in the QC graphic output.

The LTPP traffic data QC software discards data for these days if the data are from permanent devices. If the data are from portable devices, they are kept if they are part of a continuous, 24-hour data collection period that stretches over two or more calendar days. The LTPP program makes this distinction between "permanent" and "portable" devices to assure as much consistency as possible in the database (all of the daily volumes from permanent devices are based on midnight-to-midnight counts) while keeping and using as many data as possible from sites where few data are available. (Sites with portable devices may produce only one midnight-to-midnight day of data per year, but more than 48 hours of consecutive traffic counts may be present. Using all of the available hours doubles the number of days of data available for LTPP research in these cases.)

(The revised LTPP analysis software ignores the "permanent" and "portable" equipment distinctions. Data is treated as continuous, with 7 or more midnight-to-midnight days in a month, or sampled, with less than 7 midnight-to-midnight days in a month. This distinction applies only to classification data. All weight data is treated as if midnight-to-midnight days exist.)

Instructions to generate a list rather than a graph are found in section 5.4.4.
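The underlying completeness check amounts to grouping hourly records by day and counting distinct hours. The sketch below is illustrative Python only; the production check reads the LTPPD4 tables and applies the permanent/portable logic described in this section:

```python
from collections import defaultdict


def days_with_missing_hours(records):
    """Return the dates holding fewer than 24 hourly records.

    records: iterable of (date_string, hour) pairs, one per hourly
    4-card record. Hypothetical helper sketching the Missing Hourly
    Volume edit (section 5.1.4).
    """
    hours_by_day = defaultdict(set)
    for date, hour in records:
        hours_by_day[date].add(hour)
    # A complete midnight-to-midnight day has 24 distinct hours.
    return sorted(d for d, hrs in hours_by_day.items() if len(hrs) < 24)
```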

Graph: QC Edit Check for 4-Card Data Edit = Missing Data, showing Total Volume versus Time, with Volume ranging from 0 to 115
Figure 5-9 Missing data check edit

Figure 5-9 demonstrates the missing data edit check for 4-card data. If the day of data is incomplete (contains less than 24 hourly volumes) and from a permanent device, then the day of data will be purged in level 3 processing in the LTPP database. Level 3 processing does not delete incomplete days of data from a portable device if the incomplete day is the beginning day or ending day of a short-term count. This edit therefore produces graphs of the data that may be purged in Level 3 processing. States using the QC software for their own purposes should treat these partial days in the way that best fits their normal data processing routine.

5.2 7-Card Data

Two more types of graphs result from a review of the hourly volume patterns for 7-card records. [Not implemented in the new software since they are seldom, if ever, critical to making decisions on retaining WIM data.]

EDIT= 4+ CONSEC NONZEROS prints the hourly volumes for any day during which four or more consecutive hours have the same non-zero volume. This check is similar to that applied to 4-card records, described above. The only difference is that the 4-card records include car and light truck volumes, whereas the 7-card records usually do not include this information. This means that more 7-card record sites will have "low" volumes, and more of these graphs are likely to be produced even when state reviewers would consider the data valid.

EDIT=ZERO DATA or EDIT=MISSING DATA prints the hourly volumes for days during which 7-card records are present for some, but not all, 24 hours of a day. Unlike this check for 4-card data, when no 7-card data are present for a given hour, the hourly volume is considered zero. This difference is due to the data collection and reporting process. (4-card records are meant to be generated for all hours of the day, regardless of how many vehicles are observed; 7-card records are only generated when a vehicle is observed.)

If the data with zero hourly volumes are from sites that have typically high truck volumes, the data for the remainder of that day are usually considered to be invalid and should be flagged for removal to prevent false hourly volumes (i.e., the information that no traffic occurred during those hours) from being used in the data aggregation process. (The LTPP data aggregation process assumes that lack of a 7-card record simply means that no trucks were present.) If the data are from sites with typically low truck volumes, the data present for the remainder of the day are usually assumed to be valid and should be retained.

5.2.1 Distribution of Gross Vehicle Weight

The Gross Vehicle Weight graph illustrates the distribution of gross vehicle weights (GVW) for a user selected vehicle class (generally FHWA Class 9, 5-axle tractor-trailers). This graph presents a single month, or an entire quarter, at a time depending on the period marked. Only one month or one quarter will appear on each graph. All valid vehicle weights measured during the time period selected are incorporated into the GVW distribution graph. The logic underlying the quality control process is based on the expectation of two peaks in the GVW distribution for Class 9 vehicles. The first peak represents unloaded tractor-trailers and should occur between 28 and 36 kips (1 kip = 1,000 pounds). This weight range has been determined from static scale data collected from around the country and appears to be reasonable for most locations. (Most unloaded peaks fall between 28 and 32 kips.) The second peak in the Class 9 GVW distribution represents the most common loaded vehicle condition at that site and varies somewhat with the type of commodity being carried. Generally, the loaded peak falls somewhere between 70 and 80 kips.

The QC software plots the GVW distribution.

A standard template to obtain a Class 9 GVW distribution is discussed in section 5.5.2. The distribution must be examined to decide whether the vehicle weights illustrated represent valid data or whether the scale is either incorrectly calibrated or malfunctioning. The following discussion uses the Class 9 vehicles as the standard for site evaluation. To help the reviewer, reference lines appear on the GVW graph at 28, 36, and 80 kips. [not implemented] The graph also lists the percentages of vehicles less than 28 kips and greater than 80 kips in the lower left-hand corner of the graph. [Not implemented]. To get the data to calculate these values see section 5.4.5.
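Because the on-graph percentages are not implemented, they can be computed from the weight records directly. The sketch below is illustrative Python; the 4-kip bin width and the 50-kip cutoff used to isolate the unloaded peak are assumptions for this example, not LTPP parameters:

```python
def gvw_summary(weights, unloaded=(28, 36), heavy=80, light=28):
    """Summarize a Class 9 GVW sample (weights in kips).

    Returns the percentage of vehicles below `light` kips, the
    percentage above `heavy` kips, and whether a crude unloaded-peak
    check passes. Hypothetical helper sketching the section's logic.
    """
    n = len(weights)
    pct_light = 100.0 * sum(w < light for w in weights) / n
    pct_heavy = 100.0 * sum(w > heavy for w in weights) / n
    # Crude unloaded-peak check: mode of 4-kip bins among weights
    # under 50 kips (assumed cutoff separating the two peaks).
    bins = {}
    for w in weights:
        if w < 50:
            b = int(w // 4) * 4
            bins[b] = bins.get(b, 0) + 1
    peak_bin = max(bins, key=bins.get) if bins else None
    unloaded_ok = peak_bin is not None and unloaded[0] <= peak_bin < unloaded[1]
    return pct_light, pct_heavy, unloaded_ok
```

A site like Figure 5-10, with an unloaded peak near 30 kips and few extreme weights, would pass this rough check, while the shifted peaks of Figure 5-12 would not.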

  • Both Peaks Shifted - If a plot shows both peaks shifted from their expected locations in the same direction (that is, both peaks are lighter than expected or heavier than expected), the scales are assumed to be out of calibration, and the data are not used within the LTPP database. (An agency would then want to visit the WIM site and adjust the scale calibration.)

  • One Peak Shifted - If a plot shows one peak correctly located but another peak shifted from its expected location, the site should be reviewed for other potential scale problems (such as a high number of classified but not weighed vehicles or scale failure during the data collection session). Additional information about that site may also be needed to determine whether the scale is operating correctly. Information that can be very useful for this determination includes the types of commodities Class 9 trucks are carrying on that road and the load distribution obtained from the scale when it was last calibrated. (For example, investigators might discover that a cement plant is just down the road from the WIM scale and that loaded, 5-axle, cement trucks are routinely exceeding the 80,000-pound legal weight limit. This finding might result in the acceptance of a loaded peak at the site that exceeds the normal 80,000-pound upper limit.) If additional information indicates the presence of scale problems and the data will be submitted to the LTPP for inclusion in the LTPP database, the LTPP recommends that the agency include a description of the problems. (Data from a malfunctioning scale should not be submitted to the LTPP. Data from a scale that is simply out of calibration should be submitted to the LTPP along with an explanation of the calibration problem.) If no evidence of scale problems is present and agency personnel believe that the data accurately reflect truck weights at that site, the LTPP will accept the submitted data for use within the LTPP database. The agency should explain why the data are valid, despite their appearance, so that LTPP researchers can be aware of the unusual truck characteristics at that site.

    Number of Vehicles Heavier than 80 Kips - [Not implemented] A second check performed with the Class 9 GVW data is an examination of the number (or percentage) of vehicles that are heavier than 80 kips. This check should be performed partly because when many piezo-electric scales begin to fail, they generate a nearly flat GVW distribution. This distribution results in an inaccurate ESAL computation for a given number of trucks. It is particularly important to look at the number and percentage of Class 9 vehicles that weigh more than 100 kips. High percentages of extremely heavy Class 9 trucks (particularly vehicles over 100 kips) are assumed to be a sign of scale calibration or operational problems. It is highly unusual for 5-axle trucks (FHWA Class 9) to carry such heavy weights. In almost all cases, trucks legally carrying these heavy weights are required to use additional axles and are therefore classified as FHWA Class 10 (or higher) and do not appear in the GVW graph. While illegally loaded 5-axle trucks may be operating at the site in question, most illegally loaded trucks do not exceed the legal weight limit by more than several thousand pounds, and the number (or percentage) of these extremely high weights is usually fairly low.

In the case of either scale problems or extreme numbers of overloaded trucks, agency personnel should investigate the situation. If the data are valid, they should be submitted to the LTPP database along with an explanation of the investigation findings. Otherwise, the data should be withheld from further use by the LTPP.

Figure 5-10 is an example of a Class 9 gross vehicle weight (GVW) distribution. The unloaded peak falls within the expected unloaded range (28-36 Kips) and the loaded peak is less than the loaded maximum (80 Kips). There are no extreme outliers (large percentage of vehicles greater than 80 kips or less than 12 kips).

Graph: GVW Distr. for Vehicle Class 9, showing Frequency versus GVW in Kips, with Frequency ranging from 0 to 2500 and GVW ranging from 0 to 120
Figure 5-10 Gross vehicle weight distribution for vehicle class 9

Figure 5-11 is an example of a GVW distribution plot that shows a large percentage of vehicle Class 9s that weigh more than 80 kips.

Graph: GVW Distr. for Vehicle Class 9, showing Frequency versus GVW in Kips, with Frequency ranging from 0 to 400 and GVW ranging from 0 to 120
Figure 5-11 GVW Distribution - Example of high percentage of overweights

Graph: GVW Distr. for Vehicle Class 9, showing Frequency versus GVW in Kips, with Frequency ranging from 0 to 2500 and GVW ranging from 0 to 128
Figure 5-12 GVW Distribution - Example of right shifted peaks

The unloaded and loaded peaks in Figure 5-12 are shifted to the right of the expected ranges. There are also some vehicles that are greater than 100 kips. This plot demonstrates an over-calibration error.

Graph: GVW Distr. for Vehicle Class 9, showing Frequency versus GVW in Kips, with Frequency ranging from 0 to 2500 and GVW ranging from 0 to 140
Figure 5-13 GVW distribution - Example without loaded peak

Figure 5-13 illustrates a GVW distribution without a loaded peak. These data would not be purged if this situation was typical for this site.

5.3 7-card, 4-card Comparisons

5.3.1 Volume Comparison

This analysis compares daily traffic volume information submitted in 4-card and 7-card formats. Each graph produced by the program (by lane and direction) contains volumes for one vehicle class, for one month or quarter, from both the 4-card and 7-card files. Significant differences between these two estimates of truck volume are often an indication of machine error. In addition, because most roads have fairly repeatable traffic volume patterns, visual inspection of daily traffic volume patterns often can be used to detect equipment malfunction.

The template described in section 5.3.1 will automatically produce graphs of daily truck volumes for FHWA Classes 6, 8, 9, and 13 on a monthly or quarterly basis. These classes constitute the majority of trucks for many sites, and they are also the classes into which most vehicles are incorrectly classified when vehicle classification equipment is malfunctioning. States may want to examine additional truck classification volumes (for example, for FHWA Class 11) at specific sites using the Graph Manager.

Errors that the graphs produced by this program can help identify include the following:

  • shifts in vehicle classification
  • loss of truck volume due to sensor failure
  • significant increases in truck volumes caused by malfunctioning axle sensors.

Graph: 4&7 Card Daily Volumes for VC 9, showing Volume versus Date. 4-Card Volumes range from 600-1500 and 7-Card Volumes range from 0-600
Figure 5-14 Example of non-matching 4- & 7-card volumes

Daily volumes may be examined to see whether they fall within an acceptable range given by the other data points. Seasonal and weekly patterns should be consistent. A dramatic decrease in daily and weekly volumes may indicate a sensor problem. When an axle sensor begins to fail, it often starts to miss one axle on closely spaced tandems. This problem results in a significant shift in observed volumes by classification, as the number of Class 9 trucks counted decreases significantly, and the number of Class 8 trucks increases significantly. Truck volumes also drop because of a variety of sensor errors and other equipment problems. Invalid truck volume increases are usually caused by chattering sensors (which often result in simple misclassification problems and therefore a commensurate drop in some other volume classification) or by poorly tuned loop sensors. Other types of axle sensor failures can also result in sudden volume increases.

When volume estimates from 4-card and 7-card records differ significantly, it is a sign that additional attention must be paid to the submitted data. A variety of conditions can produce these differences.

Graph: 4&7 Card Volumes for VC 9, showing Volume versus Date. 4-Card and 7-Card Volumes range from 350-1600
Figure 5-15 Example of matching 4- and 7-card volumes

    Single Piece of Collection Equipment - Where a single piece of equipment collects both of these data sets, differences in 4-card and 7-card volumes occur when a WIM device can detect, but is unable to weigh, a vehicle. This often happens when the vehicle being weighed is "off-scale." If this problem occurs infrequently (under 10 percent of the Class 9 trucks are counted but not weighed), the scale is probably working correctly; however, this percentage should not grow very large with a correctly operating scale. Large differences in 4-card and 7-card volumes usually mean that the scale is suffering from some type of operational problem.

    Two Pieces of Collection Equipment - This graph allows a "sanity check," or a check of the reasonability, of the data collected by both devices. Where two different devices are used (usually a portable classifier and a portable WIM scale, or a permanent classifier and a portable WIM scale), large differences in the two volume estimates can mean either that at least one of the data collection devices is not functioning correctly or that the classification algorithms being used by the devices are inconsistent. In all likelihood, one (or both) of the data collection devices is incorrectly classifying trucks. This may mean that one of the devices is malfunctioning, or it may mean that one of the devices has a poor translation table for converting axle spacing and axle count information into classification information. Usually the equipment has to be visually observed to determine which system is misclassifying vehicles.

Figure 5-14 gives an example of a big difference between 4-card and 7-card daily volumes for trucks. Figure 5-15 shows an example in which the 4-card and 7-card daily volumes for vehicle class 9 are similar.

5.3.2 Vehicle Class Distribution Comparison

This analysis compares vehicle class frequencies (percent of truck volume by class) submitted in 4-card and 7-card formats. Each graph produced by the program contains quarterly or monthly frequencies for vehicle classes 4 through 13 from both the 4-card and 7-card files. See section 5.5.4 for the templates on producing these graphs on either a monthly or quarterly basis. Percentages of each vehicle class in comparison to total trucks are shown at the top of each graph for 4- and 7-card data. The total volume of trucks counted (by card type) is plotted in the graph itself. [Not implemented; see section 5.4.6 for instructions on how to obtain the necessary data.]

Note that the plot shows total vehicle volumes, not percentages. These volumes are not adjusted to account for different count durations. Thus, the total volume presented in this graph for a 12-day classification count will be much higher than the volume for that vehicle class from a 2-day weighing session during the same period, even if the two devices counted the same number of trucks during the two days that the WIM scale operated. Similarly, the percentages of vehicles counted by class and shown in tabular form at the top of the graph are for the duration of all days of data for the period being plotted.

This tabular information and graph can be used to perform several quality control checks. The primary checks that can be performed are discussed below. The percentages referenced must be generated separately.

    Change in Percentages from 4-Card to 7-Card Data - Significant differences between the percentage estimates from 4-card and 7-card data sets indicate machine error or misclassification problems in one or both of the data collection devices. For example, when two data collection devices are being used, the relative difference in percentage of trucks in each classification for 4-card and 7-card data may show that, for one device, vehicles that should be assigned to vehicle Class 9 are being shifted to another classification.

    Atypical Percentages or Frequencies - If agency personnel know roughly the typical truck mix at the site, this graph can indicate when a scale is malfunctioning by showing atypical vehicle percentages or frequencies for truck classes. For example, in many states Class 9 trucks are observed much more frequently than Class 8 trucks. (This ratio is usually more than 3 to 1.) When this graph shows that the number of Class 8 trucks observed exceeds the number of Class 9 trucks, the agency should examine the operation of the data collection equipment to determine whether the equipment is consistently missing axles.

Similarly, these graphs often show that WIM and automatic vehicle classification devices are treating some smaller vehicles differently. This becomes apparent when one of these devices observes a very high proportion of Class 5 trucks (2-axle, 6-tire trucks) while the other observes relatively few of these vehicles. This discrepancy normally indicates either that one of these devices is slightly off on its measurement of axle spacing distances or that the classification algorithms used by the two devices are dissimilar.

    Vehicle Class Frequencies Outside Acceptable Range - In a similar analysis, a reviewer can use these graphs to determine whether the vehicle mix is changing over time. Quarterly or monthly vehicle class frequencies may be examined to see whether they fall within an acceptable range given by the other data points. A dramatic change in vehicle class frequencies over time may indicate a sensor or other equipment problem.

Graph: 4&7 Card Vehicle Class Distribution, showing Frequency ranging from 0 to 200 for 4-Card and 0 to 80 for 7-Card
Figure 5-16 Example of vehicle class distribution discrepancies

If there is a big difference between 4-card and 7-card daily volumes for a given vehicle class, the 4- and 7-card vehicle class distribution plots can detect misclassification errors. Figure 5-16 demonstrates misclassification errors between vehicle Class 8 and vehicle Class 9.

5.4 Generating Statistics using the ORACLE tables

The user is assumed to be familiar with SQL and its syntax in the ORACLE environment when referencing this section. Any application used to create and run SQLs may be used, with any needed syntax changes. The SQLs presented were developed for SQL Worksheet® in ORACLE Enterprise Manager®.

The naming conventions used in this subsection are:
D = direction
dd = day
L = lane
mm = month
yyyy = 4-digit year
xxxxxx = concatenation of STATE_CODE and SHRP_ID

The direction for a LTPP lane is numeric. The value of D can be from 1-8 where 1 is North, 2 is Northeast, 3 is East, 4 is Southeast, 5 is South, 6 is Southwest, 7 is West and 8 is Northwest. The IMS reports only N, S, E or W. How the intermediate directions are converted (or if they even exist in the original data) has not been determined at this point. The LTPP lane, L, is the number of the lane on the highway counting through lanes from the right shoulder in the LTPP direction. With one or two exceptions, the LTPP lane is equal to 1. A value of 0 for LTPP lane means that all of the lanes in one direction have been included in the data as a single value. This data will not be in the summary tables. Grouped lane data is considered a critical error. To determine if there is more than 1 lane in the LTPP direction, the LTPP Information Management System (IMS) must be checked.

The common syntax to get data from a table is:

SELECT * FROM tablename WHERE fieldname1 = AA [AND fieldname2 = BB ...] [ORDER BY fieldnameA [, fieldnameB, ...]];

The asterisk indicates that all fields in a record will be extracted from a file. It can be replaced by explicit lists of variables in any of the statements in these subsections. Where variables are explicitly named, other orders are possible. The ones provided reflect the author's preferences. In this section anything in brackets [ ] is optional.

To generate a listing of specific information to accompany the data set, a file may be spooled to capture the extraction results. The process produces an ASCII text file that may be manipulated in a spreadsheet or database. Start the sequence with a spool command which includes a path and file name to store the text output in a logical place. There can be NO spaces in any of the subdirectory names in the path. Enter as many SQLs as desired and end the sequence of commands with the spool off command. Alternatively, each command may be sent to an individually named file.
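
For example, a spooling session might look like the following sketch. The output path, file name, and site-specific table name are hypothetical:

```sql
-- Hypothetical spooling session; the path and table name are examples only.
SPOOL D:\LTPP\OUTPUT\ERRORS.TXT

SELECT direction, lane, year, month, day, error FROM LTPPD4481039
WHERE error = 62;

SELECT direction, lane, year, month, day, error FROM LTPPD4481039
WHERE error = 60;

SPOOL OFF
```

Note that the output path in this sketch contains no spaces, as required.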

5.4.1 List of Days - 1 am > 1 pm Volume

SELECT direction, lane, year, month, day, error FROM LTPPD4xxxxxx
WHERE error = 62 [AND direction = D] [AND lane = L] [AND year = yyyy] [AND month = mm]
[ORDER BY [direction,] [lane,] [year,] [month,] [day]];
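
As an illustration, with the placeholders filled in for a hypothetical site (state code 48, SHRP ID 1039) and restricted to lane 1 in the southbound (5) direction for 1998, the statement would read:

```sql
-- Placeholders substituted; the site, lane, direction and year are
-- hypothetical examples.
SELECT direction, lane, year, month, day, error FROM LTPPD4481039
WHERE error = 62 AND direction = 5 AND lane = 1 AND year = 1998
ORDER BY month, day;
```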

5.4.2 List of Days - 4+ Consecutive Static Volumes

SELECT direction, lane, year, month, day, error FROM LTPPD4xxxxxx
WHERE error = 61 [AND direction = D] [AND lane = L] [AND year = yyyy] [AND month = mm]
[ORDER BY [direction,] [lane,] [year,] [month,] [day]];

5.4.3 List of Days - 8+ Consecutive Zeros

SELECT direction, lane, year, month, day, error FROM LTPPD4xxxxxx
WHERE error = 60 [AND direction = D] [AND lane = L] [AND year = yyyy] [AND month = mm]
[ORDER BY [direction,] [lane,] [year,] [month,] [day]];

5.4.4 List of Days - Missing Data

SELECT direction, lane, year, month, day, error FROM LTPPD4xxxxxx
WHERE error = 63 [AND direction = D] [AND lane = L] [AND year = yyyy] [AND month = mm]
[ORDER BY [direction,] [lane,] [year,] [month,] [day]];

5.4.5 Statistics for Class 9 Weights

This application requires manipulating the exported data by hand or in a spreadsheet to obtain the actual statistics.

SELECT * FROM LTPPGVWyyyyxxxxxx
WHERE vehicle_class = 9 [AND lane = L] [AND direction = D]
ORDER BY [direction,] [lane,] month;

The total number of Class 9 vehicles can be determined from the LTPPGVW table by summing all the bins (BIN1 - BIN50). The BINj fields represent the weight groups into which gross vehicle weights have been aggregated to obtain a frequency distribution. Each bin represents a four-kip (four-thousand-pound) interval from the lowest value up to 1 less than the next multiple of 4000 (i.e., 0-3999, 4000-7999, ...). It is also possible to use the LTPPVOL7 tables and sum the values over all days in a month.

To find the percentage and number of unusually light or heavy vehicles, use the following calculations.

  • The number of Class 9s weighing more than 80 kips is the sum of BIN21 to BIN50 for Class 9 vehicles.
  • The percentage of Class 9s weighing more than 80 kips is the sum of BIN21 to BIN50 divided by the total number of Class 9s.
  • The percentage of Class 9s weighing less than 28 kips is the sum of BIN1 to BIN7 divided by the total number of Class 9s.
  • The percentage of Class 9s weighing less than 12 kips is the sum of BIN1-BIN3 divided by the total number of Class 9s.
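
The bin arithmetic above can be sketched in a short script. This is an illustration only (the LTPP software does not provide it), and the bin counts are made-up values:

```python
# Sketch of the Class 9 weight statistics described above.
# bins[0] holds BIN1 (0-3,999 lb), bins[1] holds BIN2 (4,000-7,999 lb), etc.
bins = [0] * 50
bins[2] = 40     # BIN3: vehicles under 12 kips (illustrative count)
bins[7] = 300    # BIN8: unloaded peak around 28-32 kips
bins[12] = 250   # BIN13: loaded peak
bins[21] = 15    # BIN22: vehicles over 80 kips

total = sum(bins)           # total number of Class 9 vehicles
over_80 = sum(bins[20:50])  # BIN21-BIN50: more than 80 kips
under_28 = sum(bins[0:7])   # BIN1-BIN7: less than 28 kips
under_12 = sum(bins[0:3])   # BIN1-BIN3: less than 12 kips

pct_over_80 = 100.0 * over_80 / total
pct_under_28 = 100.0 * under_28 / total
pct_under_12 = 100.0 * under_12 / total
```

The same sums can be done directly in a spreadsheet; the script simply makes the bin boundaries explicit.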

An extra step is required to get quarterly numbers. Aggregating all relevant monthly totals for a quarter in Excel® is probably the simplest way.

The same data set can be used to plot comparative monthly or quarterly gross vehicle weight distributions after loading into a spreadsheet. A similar process may be used to get the data for any other vehicle classification.

5.4.6 Volume Comparisons 4- & 7- cards

Note that to compare data for a given period, the value of year (yyyy) must match in the two SQL statements below.

4-card Volume information -

SELECT * FROM LTPPD4xxxxxx
WHERE error = 0 AND purge = 0 AND hour = 0 [AND direction = D] [AND lane = L]
AND year = yyyy
ORDER BY [year,] [direction,] [lane,] month;

  • The total number of trucks is the sum of VOLUME4 through VOLUME20 over the month or quarter of interest.

  • The total number of trucks in a class X is the sum of VOLUMEX over the month or quarter of interest.

  • The percentage of trucks in a class X with respect to all trucks is the total in the class divided by the total number of trucks.

  • The number of days of data by month or quarter can be determined by counting the number of days in the interval in question.

The conditions error = 0, purge = 0 and hour = 0 eliminate all days with invalid data or fewer than 24 hours of data.
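
Alternatively, the monthly sums described in the bullets above can be computed directly in ORACLE rather than in a spreadsheet. The following sketch assumes a hypothetical site table LTPPD4481039, the VOLUME4-VOLUME20 column naming described above, and one qualifying row per day (a single lane and direction):

```sql
-- Monthly Class 9 and total truck volumes from a 4-card table (sketch;
-- the table name and year are hypothetical examples).
SELECT month,
       SUM(volume9) AS class9_total,
       SUM(volume4 + volume5 + volume6 + volume7 + volume8 + volume9 +
           volume10 + volume11 + volume12 + volume13 + volume14 +
           volume15 + volume16 + volume17 + volume18 + volume19 +
           volume20) AS truck_total,
       COUNT(*) AS days_of_data
FROM LTPPD4481039
WHERE error = 0 AND purge = 0 AND hour = 0 AND year = 1998
GROUP BY month
ORDER BY month;
```

Quarterly totals can then be obtained by summing the three relevant monthly rows.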

7-card Volume information -

SELECT * FROM LTPPVOL7yyyyxxxxxx
[WHERE [direction = D] [AND lane = L]]
ORDER BY [direction,] [lane,] month;

A separate select statement is needed for each year to be matched in the 4-card data set.

  • The total number of trucks is the sum of CNT4 through CNT20 by month or over the quarter of interest.
  • The total number of trucks in a class X is the CNTX for the month or the sum of the same over the quarter of interest.
  • The percentage of trucks in a class X with respect to all trucks is the total in the class divided by the total number of trucks.
  • The number of days of data by month or quarter can be determined by counting the number of days in the interval in question.

5.4.7 Graphs Excluding Purged Records

Purging data eliminates it from inclusion in creating daily summaries and annual estimates of vehicle statistics. However, the QC software provides no method for reviewing the impact on the data set after the purges are applied. The process described here is one method for investigating the effects of the purges. It uses the ORACLE tables since the software processes the output files first and then annotates the ORACLE tables with the same information.

To determine which data files have had purges applied, a query may be made of the LTPPFILETRACKER table with as much detail as required.

SELECT filename FROM LTPPFILETRACKER WHERE purge = 1 [AND state_code = XX] [AND shrp_id = AAAA ..] [AND startdate BETWEEN 'dd-MON-yyyy' AND 'dd-MON-yyyy'];

The required date format is '01-JAN-1998'.

This does not provide any information as to the reason for the purges.

To verify that all classification data that failed a daily record check and was supposed to be purged actually has been purged, use the following two-step process.

Run a SQL of the form:

SPOOL path\dailypurge.sql;
SELECT 'SELECT year, month, day, error, purge FROM ', table_name, ' WHERE error >= 60 AND purge = 0 AND year = yyyy order by year, month, day ; ' FROM USER_TABLES WHERE table_name LIKE 'LTPPD4xxxxxx' ORDER BY table_name;
The value of xxxxxx may be as general or specific as desired.

Edit the spooled results to remove any non-SQL statements and set up the spool file to save the results. The output of the SQL (the nested select) will be a list of all days with 8+ consecutive zero volumes, 4+ consecutive static volumes, a 1 a.m. > 1 p.m. volume or missing hourly volumes which have NOT been purged. This does not automatically imply an error as there are instances (e.g., portable data collection equipment) where these records should not be purged.
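
After editing, the spooled file might contain statements such as the following (the table names and year are hypothetical examples), bracketed by its own spool commands so the final results are captured:

```sql
-- Edited dailypurge.sql (sketch); each SELECT was generated by the
-- statement above and the non-SQL lines have been removed.
SPOOL D:\LTPP\OUTPUT\DAILYPURGE.TXT
SELECT year, month, day, error, purge FROM LTPPD4481039
 WHERE error >= 60 AND purge = 0 AND year = 1998 ORDER BY year, month, day;
SELECT year, month, day, error, purge FROM LTPPD4482034
 WHERE error >= 60 AND purge = 0 AND year = 1998 ORDER BY year, month, day;
SPOOL OFF
```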

To verify days have been purged for other types of errors, omit the error >= 60 condition from the SQL. Wildcard characters can be used to generate multiple file lists.

To verify that the relevant days of weight data have been purged, the same process can be executed using the LTPPVOL7yyyyxxxxxx table name and omitting the condition on error. The daily level errors are not applicable to weight files.

The output of any of these SQLs may then be graphed.

There is no way using this software to investigate the impact of purges on the GVW file without using the output files as raw data and reloading the data to compute new GVW distributions.

5.5 Standard Graphing Templates

The following graphing templates are suggested as standard for any installation of the traffic software. They provide the functionality to evaluate the data using the guidelines in this section.

A discussion of the Graph Template Manager is found in section 3.5.4.

5.5.1 4-card checks

This template will produce all of the graphs discussed in section 5.1.

  • Set file information to ensure that valid data is available to plot. All options in File Information may be changed each time this template is run.
  • Select all four Daily Volume Graphs
  • Under Templates box click "New". Select the new template name and then click on the "Rename" option.
  • Place the cursor in the box and type "4-card checks" and click on "OK" to save the name.
  • Select "4-card checks" in the template box and then click on "Capture" under the graphs box. Graph 0 which appears will contain ALL 4 plots. [If each graph should be an individual selection, select each daily volume graph separately and click on "Capture" after each selection.]
  • To see the graphs in the template, click on "Run".

5.5.2 GVW graph - Class 9

This template will produce the monthly graph discussed in section 5.2.1.

  • Set file information to ensure that valid data is available to plot. In order for all of the necessary options to be available, the Data Type - Weight by Vehicle - must be selected. The remaining options in File Information may be changed each time this template is run.
  • Check 'GVW Distribution' under Weight Graphs.
  • In Data Selection pick 'Monthly', a valid year, lane, direction and Class = 9.
  • In Graphing Templates click on "New". Select the new template name and then click on the "Rename" option.
  • Place the cursor in the box and type "MON_GVW_9" . Click on "OK" to save the name.
  • Select 'MON_GVW_9' in the template box. Then click on "Capture" under the graphs box.
  • To see the graph in the template, click on "Run".

This template will produce a quarterly GVW graph for Class 9s.

  • Set file information to ensure that valid data is available to plot. In order for all of the necessary options to be available, the Data Type - Weight by Vehicle - must be selected. The remaining options in File Information may be changed each time this template is run.
  • Check 'GVW Distribution' under Weight Graphs.
  • In Data Selection pick 'Quarterly', a valid year, lane, direction and Class = 9.
  • In Graphing Templates click on "New". Select the new template name and then click on the "Rename" option.
  • Place the cursor in the box and type "QTR_GVW_9". Click on "OK" to save the name.
  • Select 'QTR_GVW_9' in the template box and then click on "Capture" under the graphs box.
  • To see the graph in the template, click on "Run".

5.5.3 7-Card vs. 4-Card Volume

The procedure as explicitly outlined is for monthly graphs. A quarterly template should also be created using the same process with Quarterly as the 'Month' option and QTR in lieu of MON in the labels.

  • Set file information to ensure that valid data is available to plot. In order for all of the necessary options to be available, the Data Type - Weight by Vehicle - must be selected. The remaining options in File Information may be changed each time this template is run.
  • Check 'AVC vs. WIM Volume' under Comparative Graphs.
  • In Data Selection pick 'Monthly', enter 1-31 for Days, a valid year, lane, direction and Class = 6.
  • In Graphing Templates click on "New". Select the new template name and then click on the "Rename" option.
  • Place the cursor in the box and type "MON_VOL_6_8_9_13". Click on "OK" to save the name.
  • Select 'MON_VOL_6_8_9_13' in the template box. Then click on "Capture" under the graphs box. Graph 0 will have Class 6 vehicles.
  • Change Class to 8 and click on "Capture" to generate Graph 1.
  • Change Class to 9 and click on "Capture" to generate Graph 2.
  • Change Class to 13 and click on "Capture" to generate Graph 3.
  • To see the graphs in the template, click on "Run".

5.5.4 7-Card vs. 4-Card Class Distribution

The procedure as explicitly outlined is for monthly graphs. A quarterly template should also be created using the same process with Quarterly as the 'Month' option and QTR in lieu of MON in the labels.

  • Set file information to ensure that valid data is available to plot. In order for all of the necessary options to be available, the Data Type - Weight by Vehicle - must be selected. The remaining options in File Information may be changed each time this template is run.
  • Check 'AVC vs. WIM Distribution' under Comparative Graphs.
  • In Data Selection pick 'Monthly', enter a valid year, lane, and direction.
  • In Graphing Templates click on "New". Select the new template name and then click on the "Rename" option.
  • Place the cursor in the box and type "MON_CLASS_4_V_7". Click on "OK" to save the name.
  • Select 'MON_CLASS_4_V_7' in the template box. Then click on "Capture" under the graphs box.
  • To see the graph in the template, click on "Run".

5.6 Plotting Data Trends

A number of trends can be plotted using appropriate extractions and summaries of data from the ORACLE tables. The list provided here is not intended to be exhaustive. It should be apparent that the yearly data comparisons require processing more than 1 year of traffic data through the new software in order to load the relevant ORACLE tables.

  • Monthly/Quarterly/Yearly comparison of GVW distributions (frequency or percentage) for a class: LTPPGVWyyyyxxxxxx. (The yearly comparison requires extractions from multiple LTPPGVW* tables for a site.)
  • Monthly/Quarterly comparison of truck volumes (frequency or percentage) for a class: LTPPGVWyyyyxxxxxx. (The yearly comparison requires extractions from multiple LTPPGVW* tables for a site. LTPPVOL7* tables can also be used but they contain far more records to manipulate.)
  • Monthly/Quarterly comparison of truck distributions (frequency or percentage) for a class: LTPPVOL7yyyyxxxxxx.
  • Yearly comparison of truck volumes based on average day/weekday/ weekend day volume for a class: LTPPVOL7yyyyxxxxxx. (The yearly comparison requires extractions from multiple LTPPGVW and LTPPVOL7 tables for a site.)
  • Monthly/Quarterly/Yearly comparison of Class 5, 9 and total truck volumes from 4-card data: LTPPD4xxxxxx.
  • Monthly/Quarterly/Yearly comparison of Class 5, 9 and total truck volumes from 7-card data: LTPPVOL7yyyyxxxxxx. The yearly comparison requires extractions from multiple LTPPVOL7 tables for a site.

A. LTPP QC System Requirements

Most newly purchased, standard personal computers are sufficient to operate the software. Minimum system requirements recommended for operating the LTPP QC software include:

  • Pentium-II 350MHz or higher processor.
  • 64 MB system memory
  • 8 GB hard disk (for data file storage)
  • ORACLE 8.0 or higher (operating locally or on network)
  • Mandatory - a Traffic User account in Oracle to separate traffic tables from IMS tables

A.1 Installation Instructions

The software is distributed in a zipped file. The contents should be unzipped and the program added through the Add/Remove Programs function of Control Panel using the setup.exe provided.

Updates are generally done by unzipping a revised executable and copying it over the existing executable.

The software may be installed and run on multiple machines simultaneously.


B. DAT File Requirements for Operating LTPP QC

The DAT files are the group of files referred to elsewhere in this document as reference files. All .dat files must be located in a DAT directory, which must be located in the directory specified in the "Base Data Location" on the PREFS menu. For example:

LTPP Operator panel showing Path Options field
Figure B-17 Example of preferences selection

In this example, the base data location is D:\LTPP. All .dat files must then reside in D:\LTPP\DAT. Note that all user supplied subdirectory names are limited to 8 characters.

The required DAT files for LTPP QC are:

  • SHRP.DAT (See section C.)
  • DEFSHT.DAT (See section D.)
  • NEWSHT.DAT (See section E.)
  • FUNCLASS.DAT (for metric data file loading)
      The FUNCLASS.DAT file has a line for each LTPP site. Each line contains an eight-character element consisting of (in order) state code, SHRP ID and functional class. The file is sorted in state code, SHRP_ID order.
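
For example, a FUNCLASS.DAT line for a hypothetical site with state code 48, SHRP ID 1039 and functional class 01 would be:

```
48103901
```

(2-character state code, 4-character SHRP ID, 2-character functional class.)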

An operator.dat file is created in each user's WIN NT profile under the LTPP subdirectory. This text file contains the following entries:

  • ACTIVEFILE=
  • LASTFILE= STATE_CODE
  • LASTSITE=SHRP_ID (Does not store alpha named sites as such)
  • NEWSTYLENAMES=FALSE
  • BASEPATH= (from PREFS)
  • NEWSHTPATH=(from PREFS)\DAT\NEWSHT.DAT
  • DEFSHTPATH=(from PREFS)\DAT\DEFSHT.DAT
  • LOGPATH=(from PREFS)\LOGS
  • SHRPPATH=(from PREFS)\DAT\SHRP.DAT
  • FUNCLASSPATH=(from PREFS)\DAT\FUNCLASS.DAT

C. SHRP.DAT File

Example data:
SS SHRP Y M D ID3 ID6 RHO SN DPTH PTYPE DIR LN NUMLTPP NUMNON FLGS SRO REASON
# -- This is a comment line in the SHRP.DAT file
48 0001 0000 00 00 001 000001 2.5 . 8.0 R 7 1 2 2 100 SS3 ORIGINAL PAVEMENT PARAMETERS
48 0001 1991 10 22 001 000001 2.5 . 8.0 R 7 1 2 2 100 SS3 1/4" OVERLAY
48 1039 0000 00 00 039 000239 2.5 3.0 . F 3 1 2 2 100 SS3 ORIGINAL PAVEMENT PARAMETERS
48 1039 1991 05 29 039 000239 2.5 3.0 . F 3 1 2 2 100 SS3 1/2" OVERLAY
48 1039 1991 09 12 039 000239 2.5 3.0 . F 3 1 2 2 100 SS3 CHIP SEAL COAT
48 1039 1991 07 08 039 000239 2.5 3.0 . F 3 1 2 2 100 SS3 COMPLETE REBUILDING OF SECTION

FIELD    TYPE     LEN  DESCRIPTION
SS       INTEGER  2    State FIPS Code
SHRP     INTEGER  4    SHRP 4 digit Id code
Y        INTEGER  4    Effective year
M        INTEGER  2    Effective month
D        INTEGER  2    Effective day
ID3      ALPHA    3    State 3 digit Id code
ID6      ALPHA    6    State 6 digit Id code
RHO      FLOAT    x    Terminal serviceability Index
SN       FLOAT    x    Structural Number
DPTH     FLOAT    x    Pavement Depth
PTYPE    CHAR     1    Pavement type R=rigid or F=flexible
DIR      INTEGER  1    Direction of LTPP Lane (compass direction 1-8)
LN       INTEGER  1    LTPP lane number
NumLTPP  INTEGER  1    Number of lanes in the LTPP direction
NumNON   INTEGER  1    Number of lanes in the Non-LTPP direction
FLGS     INTEGER  3    3 digit flags field
SRO      ALPHA    3    Data availability code including SRO code and data quality/quantity indicator
REASON   ALPHA    x    Construction reason

Data Entry Rules:

  • All fields must be separated by at least one space and must be contained on one line. All fields must be in the order specified in the format and the record length must not exceed 256 characters. Other than these restrictions the format is fairly free form. The fields do not have to be column aligned, although it is recommended that the SS, SHRP, Y, M, and D fields be column aligned for sorting purposes.

  • All fields (except the construction reason field) must be present for each record. If the value of a field is unknown, then the actual value must be replaced by a place holder. The recommended place holder is a dot (period).

  • The construction reason may be left out of the record without having to write a place holder character. It is recommended that the construction reason be included for documentation purposes and because it shows up in the Level 1 and Level 2 annual summary records.

  • Any line beginning with a "#" (pound sign) will be treated as a comment and ignored.

  • The effective year, month, and day fields form the effective date of each SHRP.DAT record. This date specifies when the pavement parameters are to become effective for the LTPP section. Each section should have a record with the effective date of 0000-00-00 that states the original pavement parameters. As construction events are performed on the section, a new record should be added to the SHRP.DAT file indicating the date that the new parameters are to become the effective parameters.

  • The SHRP.DAT file must be sorted first by state, and then within each state by SHRP ID and then within each SHRP ID by effective date. If the SHRP.DAT file is not sorted in this order, the search algorithm in the LTPP traffic database software will not be able to locate the correct record to use in the statistical calculations. [The DOS sort program will correctly sort the records in this order, if the SS, SHRP, Y, M, and D fields are column aligned.]

  • The last line of the file needs to end with a return.

Field definitions:

  • ID3 - State 3 digit ID as it would appear in the station identification field in a 4-card or a 7-card. The default of no information is generally represented by three zeros rather than a period.

  • ID6 - State 6 digit ID as it would appear in the station identification field in a 4-card or a 7-card. The default of no information is generally represented by six zeros rather than a period.

  • SRO - The SRO or data availability code is a three character code. It indicates the relationship of the data collection equipment's location to the pavement section under study and a rough estimate of the quality and quantity of the data.

The Data Availability Code is written in this order (see Table C-1 for the list of Data Availability Codes):

    S, R, O Code for AVC
    S, R, O Code for WIM
    Data Availability Code

For example, if a permanent, continuously operating AVC device is located at the site, but the portable WIM device is set up at a location downstream of the LTPP test location, the code would be S-R-7 and would be defined as follows:

    S - Site Specific AVC
    R - Site Related WIM
    7 - Continuous operating, permanent AVC with portable WIM for all seasons and weekday/weekend time periods

Table C-1: Codes for Data Availability

0 to 9 Code (Amount of Data Collected):

9 - Continuous WIM meeting the ASTM standard.

8 - Continuous WIM that does not meet the ASTM standard (or hasn't been tested against the ASTM standard).

7 - Permanent classifier operating continuously, with portable WIM for all seasons and weekday/weekend time periods.

6 - Continuous vehicle classification with some seasonal WIM.

5 - Continuous vehicle classification with limited WIM.

4 - Continuous AVC with no WIM data.

3 - Continuous ATR volume station, with limited vehicle classification and truck weight data, and a measurement of truck seasonality.

2 - Vehicle classification and WIM data with some measure of seasonality.

1 - Limited data (only short duration counts) for either vehicle classification or truck weights.

0 - Data collected on a different roadway than the LTPP site, including system level estimates.

S/R/O Code (Location of Class and Weight):

S - Site specific data collection (data collected immediately up- or down-stream from the LTPP site).

R - Site related data collection (data collected on the same road as the LTPP test section, but separated from the test site by some traffic generator).

O - Other (data collected on another highway, or at a location that does not experience the same traffic stream as the LTPP test section).


D. DEFSHT.DAT File

An entry in the DEFSHT.DAT file consists of a site index and a set of keyword parameters. The site index is simply a line of text that indicates the state FIPS code and SHRP ID. The keyword parameters are labels defining every field that may be set on sheets 11, 12, and 13. This section contains two examples of entries in the DEFSHT.DAT file. Refer to sections D.1 through D.4 for a complete list of the valid keyword parameters.

[48 1123] -- Format: [<STATE> <space> <SHRP ID>]

ROUTE*=SH 43
MILEPOST*=109.4
LOCATION*=4 Miles East of Stanton River Bridge

*Each of these is a keyword parameter - Format: <KEYWORD> = <VALUE>

Each entry may contain any or all of the available keyword parameters. When a sheet is initially created in memory, all fields are set to blank. When the entry is read from the DEFSHT.DAT file, only those fields that have keyword parameters are modified. Consequently, an entry may be created so that the resulting transmittal sheet can have as many or as few fields filled in with default values as the user desires.

Example:
The following example shows the entries in the DEFSHT.DAT file for two sites in the state of Idaho. Site 1001 has a permanent Diamond TT2001 vehicle classifier installed using piezo film as the axle sensors. Once a quarter, a Golden River Weighman is used to collect the WIM data. The AVC data is submitted using the FHWA class scheme, while the WIM data is submitted using the 6 digit code. Site 2034 has a permanent PAT DAW200 WIM device installed that generates both 4 and 7 cards in the FHWA class scheme. Since the DAW200 uses a bending plate as the sensor, the vehicle class sensor type (CSENSOR) keyword parameter is set to OTHER and the bending plate sensor is specified using the CSENSOROTHER keyword parameter.

[16 1001]
ROUTE = US 95
MILEPOST = 230.92
LOCATION = 1.5 MILES SOUTH OF JCT US 12

CCLASS = FHWA
CMAKE = DIAMOND
CMODEL = TT2001
CTYPE = PERM
CSENSOR = FPFILM
WCLASS = 6DIGIT
WMAKE = GOLDEN RIVER
WMODEL = WEIGHMAN
WTYPE = PORT
WSENSOR = CAPPAD

[16 2034]
ROUTE = I84
MILEPOST = 113.6
LOCATION = 4.0 MILES EAST OF BLISS
CCLASS = FHWA
CMAKE = PAT
CMODEL = DAW200
CTYPE = PERM
CSENSOR = OTHER
CSENSOROTHER = BENDING PLATE
WCLASS= FHWA
WMAKE = PAT
WMODEL = DAW200
WTYPE = PERM
WSENSOR = BENDPLATE
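The entry layout shown above can be read with a few lines of code. The following sketch (the function name is illustrative, not part of the QC software) collects each entry's keyword parameters into a dictionary keyed by the site index:

```python
def parse_defsht(lines):
    """Collect DEFSHT.DAT keyword parameters per "<STATE> <SHRP ID>" index."""
    entries = {}
    current = None
    for line in lines:
        line = line.strip()
        if not line:
            continue                       # skip blank separator lines
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1].strip()   # site index, e.g. "16 2034"
            entries[current] = {}
        elif "=" in line and current is not None:
            keyword, value = line.split("=", 1)
            entries[current][keyword.strip()] = value.strip()
    return entries

# The second example entry above:
entry = parse_defsht([
    "[16 2034]",
    "CSENSOR = OTHER",
    "CSENSOROTHER = BENDING PLATE",
])["16 2034"]
```

Because fields not mentioned in an entry are simply absent from the dictionary, this mirrors the rule that unspecified sheet fields stay blank.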

D.1 Keywords - General

KEYWORD PARAMETER | VALID VALUES | DESCRIPTION
ROUTE | ANY VALID CHARACTER STRING | HIGHWAY ROUTE THE SHRP SITE IS LOCATED ON
MILEPOST | ANY VALID CHARACTER STRING | MILEPOST ON ROUTE
LOCATION | ANY VALID CHARACTER STRING | DESCRIPTION OF LOCATION OF SITE
BDATE | MM-DD-YYYY, MM-DD-YY, MM/DD/YYYY, MM/DD/YY, MM\DD\YYYY, MM\DD\YY | BEGINNING DATE OF COUNT
BTIME | HH:MM | BEGINNING TIME OF COUNT
EDATE | MM-DD-YYYY, MM-DD-YY, MM/DD/YYYY, MM/DD/YY, MM\DD\YYYY, MM\DD\YY | ENDING DATE OF COUNT
ETIME | HH:MM | ENDING TIME OF COUNT
COMMENT0 | ANY VALID CHARACTER STRING | COMMENT LINE #1
COMMENT1 | ANY VALID CHARACTER STRING | COMMENT LINE #2
COMMENT2 | ANY VALID CHARACTER STRING | COMMENT LINE #3
COMMENT3 | ANY VALID CHARACTER STRING | COMMENT LINE #4
COMMENT4 | ANY VALID CHARACTER STRING | COMMENT LINE #5
COMMENT5 | ANY VALID CHARACTER STRING | COMMENT LINE #6
COMMENT6 | ANY VALID CHARACTER STRING | COMMENT LINE #7
COMMENT7 | ANY VALID CHARACTER STRING | COMMENT LINE #8
COMMENT8 | ANY VALID CHARACTER STRING | COMMENT LINE #9
COMMENT9 | ANY VALID CHARACTER STRING | COMMENT LINE #10

D.2 Keywords - Classification Data Transmittal Sheets

KEYWORD PARAMETER | VALID VALUES | DESCRIPTION
CMAKE | ANY VALID CHARACTER STRING | MAKE (MANUFACTURER) OF CLASSIFICATION EQUIPMENT (SHEET 12)
CMODEL | ANY VALID CHARACTER STRING | MODEL OF CLASSIFICATION EQUIPMENT (SHEET 12)
CTYPE | PORT, PERM | TYPE OF CLASSIFICATION COUNT
CCLASS | FHWA, OTHER | TYPE OF CLASSIFICATION SCHEME USED FOR CLASSIFICATION COUNT
CSCHEME | ANY VALID CHARACTER STRING | IF CCLASS=OTHER, THIS VALUE GIVES NAME OF SHA SCHEME
CSENSOR | ROADTUBE, PCABLE, PFILM, LOOPS, OTHER | TYPE OF SENSOR USED FOR A CLASSIFICATION COUNTER
CSENSOROTHER | ANY VALID CHARACTER STRING | IF CSENSOR=OTHER, THIS STRING GIVES THE NAME OF THE SENSOR TYPE
GENERALFACT | NUMBER:NAME:FACTOR:STD | GENERAL ADJUSTMENT FACTOR; NUMBER BETWEEN 1 AND 4
CLASSFACT | CLASS:NUMBER:NAME:FACTOR:STD | CLASS SPECIFIC ADJUSTMENT FACTOR; CLASS BETWEEN 1 AND 20, NUMBER BETWEEN 1 AND 4

D.3 Keywords - Weight Data Transmittal Sheets

KEYWORD PARAMETER | VALID VALUES | DESCRIPTION
WMAKE | ANY VALID CHARACTER STRING | MAKE (MANUFACTURER) OF WIM EQUIPMENT
WMODEL | ANY VALID CHARACTER STRING | MODEL OF WIM EQUIPMENT
WTYPE | PORT, PERM | TYPE OF WIM COUNT
WCLASS | FHWA, 6DIGIT, OTHER | TYPE OF CLASSIFICATION SCHEME USED FOR WIM COUNT
WSCHEME | ANY VALID CHARACTER STRING | IF WCLASS=OTHER, THIS VALUE GIVES NAME OF SHA SCHEME
WSENSOR | PFILM, CAPPAD, BENDPLATE, HYDRAULIC, BRIDGE, OTHER | TYPE OF SENSOR USED FOR A WIM COUNTER
WSENSOROTHER | ANY VALID CHARACTER STRING | IF WSENSOR=OTHER, THIS STRING GIVES THE NAME OF THE SENSOR TYPE

D.4 Keywords - Volume Data Transmittal Sheets

KEYWORD PARAMETER | VALID VALUES | DESCRIPTION
AXLEFACT | FACTOR:STD DEV | AXLE CORRECTION FACTOR AND STANDARD DEVIATION
COUNTTYPE | ONEWAY, TWOWAY, GPSLANE | TYPE OF VOLUME COUNT
DOWFACT | FACTOR:STD DEV | DAY-OF-WEEK FACTOR AND STANDARD DEVIATION
GPSDISTFACT | FACTOR | GPS LANE DISTRIBUTION FACTOR
GPSDISTSOURCE | ANY VALID CHARACTER STRING | GPS LANE DISTRIBUTION FACTOR SOURCE
OTHERFACT | FACTOR:STD DEV | OTHER FACTOR AND STANDARD DEVIATION
OTHERFACTNAME | ANY VALID CHARACTER STRING | NAME OF THE OTHER FACTOR
SEASONFACT | FACTOR:STD DEV | MONTHLY/SEASONAL FACTOR AND STANDARD DEVIATION
STATEID | ANY VALID NUMBER | STATE ASSIGNED ID CODE
VMAKE | ANY VALID CHARACTER STRING | MAKE (MANUFACTURER) OF VOLUME EQUIPMENT
VMODEL | ANY VALID CHARACTER STRING | MODEL OF VOLUME EQUIPMENT
VTYPE | PORT, PERM | TYPE OF DEVICE INSTALLATION
VSENSOR | ROADTUBE, PCABLE, PFILM, LOOPS, OTHER | TYPE OF SENSOR USED FOR A VOLUME COUNTER
VSENSOROTHER | ANY VALID CHARACTER STRING | IF VSENSOR=OTHER, THIS STRING GIVES THE NAME OF THE SENSOR TYPE

D.5 Key Word Deficiencies

WSENSOR does not include the various types of piezo sensors currently in use. A desirable enhancement would be to add the five types of piezo sensors, plus a generic ceramic piezo sensor, so that OTHER no longer has to be entered for them. The key words would be as follows:

QPIEZO - Quartz piezo
BFPIEZO - Bare flat piezo
BRPIEZO - Bare round piezo
CFPIEZO - Channelized flat piezo
CRPIEZO - Channelized round piezo
UCPIEZO - Unknown configuration of ceramic piezo.

E. NEWSHT.DAT File Format

The NEWSHT.DAT file format is very similar to the DEFSHT.DAT file. An entry consists of a file index and a set of keyword parameters. It is very important to note that the DEFSHT.DAT file has entries based on state and SHRP site while the NEWSHT.DAT file has entries based on the SHA file name. The software expects the entries in NEWSHT.DAT to be sorted in ascending order by file name.

A standard entry is of the form:

[<FILENAME>!] <BEGIN DATE> <END DATE> <BEGIN TIME> <END TIME>

For example:

[C481123.KO1!] 4-3-93 4-3-93 00:00 23:00

An exclamation point following the filename signifies replacement mode. Four positional parameters may follow the file name on the same line: begin date, end date, begin time, and end time. The positional parameters allow a single line of text to be entered into the NEWSHT.DAT file for files where the begin/end dates/times are the only fields that need to be specified to complete the transmittal sheets. The positional parameters are optional and do not have to be specified; however, the transmittal sheets cannot be completed without this information. If any are specified, they must appear in the order shown, and if any one parameter is specified, all preceding parameters must also be specified. For instance, to specify the end date, the begin date must also be specified.
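The positional-parameter rules above can be sketched in code. The regular expression and field names here are assumptions for illustration, not the software's own parser:

```python
import re

# [<FILENAME>!] begin-date end-date begin-time end-time
# (the "!" and all four positional parameters are optional).
LINE = re.compile(r"\[(?P<name>[^!\]]+)(?P<repl>!?)\s*\]\s*(?P<rest>.*)")

def parse_newsht_line(line):
    m = LINE.match(line.strip())
    info = {"filename": m.group("name").strip(),
            "replace": m.group("repl") == "!"}
    # Positional parameters must appear in this order; zip() simply
    # stops after however many were supplied.
    fields = ("begin_date", "end_date", "begin_time", "end_time")
    info.update(dict(zip(fields, m.group("rest").split())))
    return info

rec = parse_newsht_line("[C481123.K01!] 9-10-91 9-10-91 00:00 23:00")
```

Because the parameters are strictly positional, supplying only an end date without a begin date would be silently misread, which is why the text requires all preceding parameters to be present.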

If the data has been collected at a location other than that listed in the DEFSHT.DAT file for the site, the changes may be noted by using the relevant key words immediately following the file name line. These key word entries must be repeated for every file whose collection information does not match the DEFSHT.DAT entries. The key words that follow on the next lines are the same as those used for DEFSHT.DAT. A list of the key words and the allowable values is included in sections D.1 through D.4.

E.1 Example - NEWSHEET to list incoming files

The following Texas SHA files are to be processed through the QC software. The default values for site 1123 listed in the DEFSHT.DAT file are correct and adequate for creating the transmittal sheets for these files. Consequently only the begin/end dates/times need to be specified in the NEWSHT.DAT file. These are specified using the positional parameters so that only one line per file needs to be entered into the NEWSHT.DAT file.

The transmittal sheet for C481123.K01 has been previously entered manually, but several values were entered incorrectly. Therefore, the user chooses to use replacement mode so that the transmittal sheet created by the Level 4 processor will replace the one entered by hand.

[C481123.K01!] 9-10-91 9-10-91 00:00 23:00
[C481123.KA1 ] 9-11-91 9-11-91 00:00 23:00
[W481123.K01 ] 9-10-91 9-10-91 00:00 23:00
[W481123.KA1 ] 9-11-91 9-11-91 00:00 23:00
[W481123.KB1 ] 9-12-91 9-12-91 00:00 08:00

E.2 Example- NEWSHEET Changing DEFSHT values

This example shows how to override the values in the DEFSHT.DAT file using keyword parameters in the NEWSHT.DAT file. Consider the site entry for site 16 1001 in section D. During the summer quarter, Idaho's Golden River WIM unit was run over by a truck and is no longer operational. They substitute a portable PAT DAW100 for the fall quarter until the Golden River equipment can be repaired. Consequently, the default device type listed in the DEFSHT.DAT file needs to be overridden so that the transmittal sheets for the WIM data files will indicate the correct WIM device. At the same time, comment lines are specified so that the transmittal sheets will reflect the reason for the change in equipment. Also general factors for C161001.L61 were submitted. The NEWSHT file entries are listed below.

[C161001.L11] 10-01-91 10-01-91 00:00 23:00
[C161001.L21] 10-02-91 10-02-91 00:00 23:00
[C161001.L31] 10-03-91 10-03-91 00:00 23:00
[C161001.L41] 10-04-91 10-04-91 00:00 23:00

[W161001.L11] 10-01-91 10-08-91 07:00 11:00
WCLASS = FHWA
WMAKE = PAT
WMODEL = DAW100
WSENSOR = PFILM
COMMENT0 = NORMAL GOLDEN RIVER WIM DEVICE WAS DAMAGED BY TRUCK.
COMMENT1 = USING SUBSTITUTE DAW100 DEVICE BORROWED FROM DIST 3.

[C161001.L51] 10-05-91 10-05-91 00:00 23:00
[C161001.L61] 10-06-91 10-06-91 00:00 23:00
GENERALFACT=1:SEASONAL FACTOR:1.0123:.0234
GENERALFACT=2:MACHINE ADJUST FACT:1.1443:.0899

[C161001.L71] 10-07-91 10-07-91 00:00 23:00


F. Input and Output File Conventions

F.1 File Naming - Raw Data Files

The file name will be provided by the SHA for each volume count, classification count, or weight session as it is submitted to the RSC for entry into the National Traffic Database. Since the original software operated under DOS 3.3, the file name is limited to eight characters with a three-character extension. This convention has NOT changed with the new software. When the SHAs submit data files to the RSC, the file name should be noted on the data transmittal form. The format for a file name is described in the following paragraphs. The software will prevent misnamed files from loading and will report the reason to the log file.

The first character of the file name will be a character referencing the type of data collected; W refers to weight data, C to classification data, and V to traffic volume data. The character H is used for HELP files, a file type not supported by the software.

The second through seventh characters of the file name will be the six-digit SHRP site ID number. The first two digits (2-3) are the State Code, and the next four digits (4-7) are the SHRP test site ID number. The eighth character of the file name has been reserved for use by the RSC to describe the data entry, editing, and summarization stage of the data file.
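The naming rules above amount to a simple pattern check. The following sketch uses a regular expression that is an assumption based on this description, not the software's actual validator:

```python
import re

RAW_NAME = re.compile(
    r"^[WCV]"        # W = weight, C = classification, V = volume
                     # (H, used for HELP files, is not supported)
    r"\d{6}"         # 2-digit state code + 4-digit SHRP site ID
    r"[A-Z0-9]?"     # optional 8th character reserved for the RSC
    r"\.[0-9A-N]"    # extension: month code (Table F-3)
    r"[0-9A-U]"      # day code (Table F-3)
    r"[0-9A-Z]$"     # year code (Table F-4)
)

def is_valid_raw_name(name):
    """Return True if the name follows the raw-data file convention."""
    return RAW_NAME.match(name.upper()) is not None
```

A check of this kind is what allows misnamed files to be rejected at load time with a reason written to the log.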

F.2 File Naming - Processed Data Files

The naming of a processed data file varies from that of a raw data file by the addition of a character in front of the file name. The character is either 3, 4, or 7 depending on whether volume, classification or weight data is included in the file. Characters two through eight are identical to characters one through seven of the raw data file. The extensions are identical.

F.3 File Naming - Extensions for Data Files

The three characters of the extension are an index to the starting date (Month, Day, Year) of the count, beginning with the month code as the first character of the extension. The second character of the file name extension is an index to the beginning day. The third character of the extension is the code for the year of the count. Normally, the year code would require two digits to cover the period 1954 to 2025. However, by creating two groups of the years (1954 to 1989 and 1990 to 2025) and by coding the month depending upon which year group it falls into, only one digit is required to cover a period of 72 years. This will generally be sufficient to cover the period of interest to the SHRP LTPP, 1965 to 2010. To illustrate how this works, a count made in November 1988 would be given the month code "A" because it falls in the first year group. On the other hand, November 1991 would be given the month code "M" because it falls in the second group of years.

The creation of the file name and the use of the one-digit year code are illustrated in the following examples.


Table F-1 File Naming Convention Example - Raw Data File

Example File Name: W123456.MNB

Character(s) File Entry Explanation
1 W Weight Data
2-7 123456 SHRP Site ID Number
2-3 12 State Code
4-7 3456 Test Site Number
8 N/A Reserved for RSC Processing Code
Extension File Entry Explanation
1 M Month of Count (November in the 1990-2025 period. Codes in Table F-3.)
2 N Day of Count (24th. Codes in Table F-3.)
3 B Year of Count (Either 1965 or 2001. Since the month code is M, which falls in the 1990-2025 period, the appropriate year is 2001. Codes in Table F-4.)

Table F-2 File naming convention example - Processed data file

Example File Name: 4C123456.MNB

Character(s) File Entry Explanation
1 4 Output is 4-card data
2 C Classification Data
3-8 123456 SHRP Site ID Number
3-4 12 State Code
5-8 3456 Test Site Number
Extension File Entry Explanation
1 M Month of Count (November in the 1990-2025 period. Codes in Table F-3.)
2 N Day of Count (24th. Codes in Table F-3.)
3 B Year of Count (Either 1965 or 2001. Since the month code is M, which falls in the 1990-2025 period, the appropriate year is 2001. Codes in Table F-4.)

Table F-3 Beginning Date Codes (Month and Day)

Month: January
1954-1989 Month Code: 1
1990-2025 Month Code: C
Day of Month: 1 = 1st, C = 13th, O = 25th

Month: February
1954-1989 Month Code: 2
1990-2025 Month Code: D
Day of Month: 2 = 2nd, D = 14th, P = 26th

Month: March
1954-1989 Month Code: 3
1990-2025 Month Code: E
Day of Month: 3 = 3rd, E = 15th, Q = 27th

Month: April
1954-1989 Month Code: 4
1990-2025 Month Code: F
Day of Month: 4 = 4th, F = 16th, R = 28th

Month: May
1954-1989 Month Code: 5
1990-2025 Month Code: G
Day of Month: 5 = 5th, G = 17th, S = 29th

Month: June
1954-1989 Month Code: 6
1990-2025 Month Code: H
Day of Month: 6 = 6th, H = 18th, T = 30th

Month: July
1954-1989 Month Code: 7
1990-2025 Month Code: I
Day of Month: 7 = 7th, I = 19th, U = 31st

Month: August
1954-1989 Month Code: 8
1990-2025 Month Code: J
Day of Month: 8 = 8th, J = 20th

Month: September
1954-1989 Month Code: 9
1990-2025 Month Code: K
Day of Month: 9 = 9th, K = 21st

Month: October
1954-1989 Month Code: 0
1990-2025 Month Code: L
Day of Month: 0 = 10th, L = 22nd

Month: November
1954-1989 Month Code: A
1990-2025 Month Code: M
Day of Month: A = 11th, M = 23rd

Month: December
1954-1989 Month Code: B
1990-2025 Month Code: N
Day of Month: B = 12th, N = 24th


Table F-4 Beginning Date Codes (Year)

Year Code | 1954-1989 (Month Codes "1"-"B") | 1990-2025 (Month Codes "C"-"N")
0 | 1954 | 1990
1 | 1955 | 1991
2 | 1956 | 1992
3 | 1957 | 1993
4 | 1958 | 1994
5 | 1959 | 1995
6 | 1960 | 1996
7 | 1961 | 1997
8 | 1962 | 1998
9 | 1963 | 1999
A | 1964 | 2000
B | 1965 | 2001
C | 1966 | 2002
D | 1967 | 2003
E | 1968 | 2004
F | 1969 | 2005
G | 1970 | 2006
H | 1971 | 2007
I | 1972 | 2008
J | 1973 | 2009
K | 1974 | 2010
L | 1975 | 2011
M | 1976 | 2012
N | 1977 | 2013
O | 1978 | 2014
P | 1979 | 2015
Q | 1980 | 2016
R | 1981 | 2017
S | 1982 | 2018
T | 1983 | 2019
U | 1984 | 2020
V | 1985 | 2021
W | 1986 | 2022
X | 1987 | 2023
Y | 1988 | 2024
Z | 1989 | 2025
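The date coding in Tables F-3 and F-4 can be expressed compactly in code. This sketch (the function name is illustrative) reproduces the three-character extension for any start date in the 1954-2025 range:

```python
CODES = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # year codes, Table F-4

def extension(year, month, day):
    """Return the 3-character extension for a count starting on this date."""
    if 1954 <= year <= 1989:
        month_code = "1234567890AB"[month - 1]   # Jan..Dec, first year group
        year_code = CODES[year - 1954]
    elif 1990 <= year <= 2025:
        month_code = "CDEFGHIJKLMN"[month - 1]   # Jan..Dec, second year group
        year_code = CODES[year - 1990]
    else:
        raise ValueError("year outside 1954-2025")
    day_code = "1234567890ABCDEFGHIJKLMNOPQRSTU"[day - 1]  # 1st..31st
    return month_code + day_code + year_code

# The Table F-1 example: a count starting November 24, 2001 gets "MNB".
```

Since the month code reveals which 36-year group applies, a single character suffices for the year, which is exactly how the scheme covers 72 years with one digit.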

F.4 Sort Order for Input Data

The following is the sort order for traffic records input to the Quality Control program: Date, Time

F.5 Format Classification Records (4-card)

Columns | No. of Columns | Description | TMG Page
1 | 1 | Vehicle classification record code (4) | 5-4-1
2-3 | 2 | State code | 5-4-1
4-5 | 2 | Functional Classification | 5-4-2
6-8 | 3 | Station Identification Number | 5-4-2
9 | 1 | Direction of Travel | 5-4-2
10-11 | 2 | Year of Data | 5-4-3
12-13 | 2 | Month of Data | 5-4-6
14-15 | 2 | Day of Month | 5-4-6
16-17 | 2 | Hour of day | 5-4-6
18-19 | 2 | Number of motorcycles (optional) | 4-A-1
20-23 | 4 | Number of passenger cars or all 2-axle, 4-tire single unit vehicles | 4-A-1
24-26 | 3 | Number of other 2-axle, 4-tire single unit vehicles | 4-A-1
27-28 | 2 | Number of buses | 4-A-1
29-31 | 3 | Number of 2-axle, 6-tire single unit trucks | 4-A-1
32-33 | 2 | Number of 3-axle single unit trucks | 4-A-1
34-35 | 2 | Number of 4 or more axle single unit trucks | 4-A-1
36-37 | 2 | Number of 4 or less axle single trailer trucks | 4-A-1
38-40 | 3 | Number of 5-axle single trailer trucks | 4-A-2
41-42 | 2 | Number of 6 or more axle single trailer trucks | 4-A-2
43-44 | 2 | Number of 5 or less axle multi-trailer trucks | 4-A-2
45-46 | 2 | Number of 6-axle multi-trailer trucks | 4-A-2
47-48 | 2 | Number of 7 or more axle multi-trailer trucks | 4-A-2
49 | 1 | Motorcycle reporting indicator | 5-4-6
50 | 1 | Vehicle class combination indicator | 5-4-6
51 | 1 | Lane of travel | 5-4-6
52-80 | 29 | Blank or optional State data | 5-4-6

Source: Traffic Monitoring Guide (TMG), 2nd edition, Federal Highway Administration, FHWA-PL-92-017, 1992, pg. 5-4-5.

Codes specific to the 4-card:
Motorcycle reporting indicator = 0 - motorcycles not reported, 1 - motorcycles reported.

Vehicle class combination indicator = 0 - Class 2 & 3 reported separately, 1 - Class 2 & 3 reported together.
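Since the 4-card is a fixed-width record, its fields can be pulled out with string slices. The following sketch extracts the header fields listed above (the field names are illustrative; Python slices are 0-based, so column n is index n-1):

```python
def parse_4card_header(line):
    """Slice the leading fields of an 80-column 4-card record."""
    return {
        "record_code": line[0],     # column 1, must be "4"
        "state": line[1:3],         # columns 2-3
        "func_class": line[3:5],    # columns 4-5
        "station": line[5:8],       # columns 6-8
        "direction": line[8],       # column 9
        "year": line[9:11],         # columns 10-11
        "month": line[11:13],       # columns 12-13
        "day": line[13:15],         # columns 14-15
        "hour": line[15:17],        # columns 16-17
    }

# A fabricated header fragment for illustration:
rec = parse_4card_header("41201345101112409")
```

The vehicle count fields in columns 18-48 can be sliced the same way using the column ranges in the table.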


F.6 Format - Classification Records (C-card)

Columns | No. of Columns | Description | TMG Ref Page
1 | 1 | Vehicle classification record code (C) | 6-4-1
2-3 | 2 | State code | 6-2-1
4-9 | 6 | Station Identification Number | 6-2-3
10 | 1 | Direction of Travel | 6-2-3
11 | 1 | Lane of Travel | 6-2-3
12-13 | 2 | Year of Data | 6-2-3
14-15 | 2 | Month of Data | 6-3-1
16-17 | 2 | Day of Data | 6-3-1
18-19 | 2 | Hour of Data | 6-4-1
20-24 | 5 | Total Volume | 6-4-3
25-29 | 5 | Class 1 Count | 6-4-3
30-34 | 5 | Class 2 Count | 6-4-3
35-39 | 5 | Class 3 Count | 6-4-3
40-44 | 5 | Class 4 Count | 6-4-3
45-49 | 5 | Class 5 Count | 6-4-3
50-54 | 5 | Class 6 Count | 6-4-3
55-59 | 5 | Class 7 Count | 6-4-4
60-64 | 5 | Class 8 Count | 6-4-4
65-69 | 5 | Class 9 Count | 6-4-4
70-74 | 5 | Class 10 Count | 6-4-4
75-79 | 5 | Class 11 Count | 6-4-4
80-84 | 5 | Class 12 Count | 6-4-4
85-89 | 5 | Class 13 Count | 6-4-4
(The record may end here if the FHWA 13 class system is being used.)
90-94 | 5 | Class 14 Count | 6-4-4
95-99 | 5 | Class 15 Count | 6-4-4

Source: Traffic Monitoring Guide (TMG) - 3rd Edition, Federal Highway Administration, FHWA, February 1995, pg. 6-4-2.

The C-card format allows for entries from column 20 on to have either leading zeros or leading blanks. Only one option is allowed in any given record.


F.7 Format - Weight Records (7-card face)

Columns | No. of Columns | Description | TMG Ref Page
1 | 1 | Truck weight record code (7) | 5-4-8
2-3 | 2 | State code | 5-6-2
4-5 | 2 | Functional Classification | 5-6-3
6-8 | 3 | Station Identification Number | 5-6-3
9 | 1 | Direction of Travel | 5-6-3
10-11 | 2 | Year of Data | 5-6-4
12-13 | 2 | Month of Data | 5-4-8
14-15 | 2 | Day of Month | 5-4-8
16-17 | 2 | Hour of day | 5-4-8
18-23 | 6 | Vehicle type code | 5-4-8
24-25 | 2 | Body type (optional)* | 5-4-10
26 | 1 | Engine type (optional)* | 5-4-10
27-28 | 2 | (open) | 5-4-10
29-31 | 3 | Registered weight (thousands of pounds) | 5-4-10
32 | 1 | Basis of registration | 5-4-10
33-34 | 2 | (open) | 5-4-10
35 | 1 | Lane of travel | 5-4-10
36-40 | 5 | Commodity code (optional)* | 5-4-10
41 | 1 | Load status code (optional)* | 5-4-10
42-45 | 4 | Total weight of truck or combination | 5-4-10
46-48 | 3 | A-axle weight (hundreds of pounds) | 5-4-10
49-51 | 3 | B-axle weight (hundreds of pounds) | 5-4-10
52-54 | 3 | C-axle weight (hundreds of pounds) | 5-4-10
55-57 | 3 | D-axle weight (hundreds of pounds) | 5-4-10
58-60 | 3 | E-axle weight (hundreds of pounds) | 5-4-10
61-63 | 3 | (A-B) axle spacing (feet and tenths) | 5-4-10
64-66 | 3 | (B-C) axle spacing (feet and tenths) | 5-4-10
67-69 | 3 | (C-D) axle spacing (feet and tenths) | 5-4-10
70-72 | 3 | (D-E) axle spacing (feet and tenths) | 5-4-10
73-76 | 4 | Total wheel base | 5-4-10
77-79 | 3 | Record serial number (same for continuation record) | 5-4-10
80 | 1 | Continuation indicator: 0 = no continuation record; 1 = has a continuation record | 5-6-32

*Each of these data items has a default value which must be entered when the data item is not collected.

Source: TMG, 2nd edition, pg. 5-4-7.

The six-digit code that may be entered for the vehicle type code, as an alternative to a state classification system or the FHWA 13 bin system from the TMG, is shown in Tables F-5 and F-6, taken from the TMG, 2nd edition.

Table F-5 Definition of 6-Digit Classification Scheme From FHWA Truck Weight Study

Vehicle Type Coding Chart*

Vehicle Type | 1st Character | 2nd Character | 3rd Character | 4th Character | 5th Character | 6th Character
Personal passenger vehicles | 0 (basic vehicle type) | 9 | 0 | Table A (light trailer modifier) | 0 | 0
Buses | 1 (basic vehicle type) | 9 | 0 | Table B (axle & tire modifier) | 0 | 0
Single unit trucks or tractors | 2 (basic vehicle type) | Table C (total axles) | 0 | Table A (light trailer modifier) | 0 | 0
Tractor + semitrailer | 3 (basic vehicle type) | Total axles on power unit | Table D (total axles on first trailer) | 0 | 0 | 0
Tractor + full trailer | 4 (basic vehicle type) | Total axles on power unit | Table D (total axles on first trailer) | 0 | 0 | 0
Tractor + semitrailer + full trailer** | 5 (basic vehicle type) | Total axles on power unit | Table D (total axles on first trailer) | Table D (total axles on second trailer) | 0 | 0
Truck + full trailer + full trailer | 6 (basic vehicle type) | Total axles on power unit | Table D (total axles on first trailer) | Table D (total axles on second trailer) | 0 | 0
Tractor + semitrailer + 2 full trailers | 7 (basic vehicle type) | Total axles on power unit | Table D (total axles on first trailer) | Table D (total axles on second trailer) | Table D (total axles on third trailer) | 0
Truck + 3 full trailers | 8 (basic vehicle type) | Total axles on power unit | Table D (total axles on first trailer) | Table D (total axles on second trailer) | Table D (total axles on third trailer) | 0

* Tables A through D are given in Table F-6.
** Semitrailers pulled by other semitrailers will be considered full trailers.

Table F-6 Table A, B, C and D for 6-Digit Classification Codes

Table A - Light Trailer Modifier

0 = No trailer
1 = Camp trailer
2 = Travel or mobile home
3 = Cargo or livestock trailer
4 = Boat trailer
5 = Towed equipment
6 = Towed auto
7 = Towed truck
8 = "Saddle mount" (tractors or trailers with front axles on unit ahead)
9 = Type trailer not determined

Table B - Axle and Tire Modifier

0 = Axle arrangement not recorded
1 = Two-axle, four-tire
2 = Two-axle, six-tire
3 = Three-axle
4 = Four or more axles

Table C - Total Axles

0 = Panel and pickup
1 = Heavy two-axle, four-tire
2 = Two-axle, six-tire
3 = Three-axle
4 = Four-axle
5 = Five-axle
6 = Six-axle
7 = Seven-axle
8 = Eight axles or more

Table D - Total Axles on Trailer

1 = Single-axle trailer
2 = Two-axle trailer
3 = Three-axle trailer
4 = Four-axle trailer
5 = Five-axle trailer
6 = Six-axle trailer
7 = Two-axle trailer with axles in a spread tandem configuration
8 = Three-axle trailer with axles in a spread tandem configuration
9 = Four-axle trailer including a spread tandem configuration

F.8 Format - Weight Records (7-card continuation)

Columns | No. of Columns | Description | TMG Ref Page
1-23 | 23 | Same as columns 1-23 of the face record |
24-28 | 5 | (open) |
29-31 | 3 | F-axle weight (hundreds of pounds) | 5-4-10
32-34 | 3 | G-axle weight (hundreds of pounds) | 5-4-10
35-37 | 3 | H-axle weight (hundreds of pounds) | 5-4-10
38-40 | 3 | I-axle weight (hundreds of pounds) | 5-4-10
41-43 | 3 | J-axle weight (hundreds of pounds) | 5-4-10
44-46 | 3 | K-axle weight (hundreds of pounds) | 5-4-10
47-49 | 3 | L-axle weight (hundreds of pounds) | 5-4-10
50-52 | 3 | M-axle weight (hundreds of pounds) | 5-4-10
53-55 | 3 | (E-F) axle spacing (feet and tenths) | 5-4-10
56-58 | 3 | (F-G) axle spacing (feet and tenths) | 5-4-10
59-61 | 3 | (G-H) axle spacing (feet and tenths) | 5-4-10
62-64 | 3 | (H-I) axle spacing (feet and tenths) | 5-4-10
65-67 | 3 | (I-J) axle spacing (feet and tenths) | 5-4-10
68-70 | 3 | (J-K) axle spacing (feet and tenths) | 5-4-10
71-73 | 3 | (K-L) axle spacing (feet and tenths) | 5-4-10
74-76 | 3 | (L-M) axle spacing (feet and tenths) | 5-4-10
77-79 | 3 | Record serial number (same as face record) | 5-4-10
80 | 1 | Continuation indicator: 2 = first continuation record for a vehicle with more than 13 axles; 9 = last continuation record | 5-4-10

The continuation record is used only for truck combinations having six or more axles. It immediately follows the face record.

Source: TMG, 2nd edition, pg. 5-4-7.

F.9 Format - Station Description Record (2-Card)

This record provides header information for 2nd edition TMG classification and weight records. At a minimum, there is supposed to be one for each direction in the data file; it is possible to have one for each lane in each direction. The software currently recognizes this card when it appears at the top of 7-card files but does not use its data. Some of the information in this record is the same information used in DEFSHT.DAT.

Column | Width | Alpha/Numeric | Description | TMG Ref Pg.
1 | 1 | N | Station description record code (2) | 5-4-1
2-3 | 2 | N | State Code | 5-4-1
4-5 | 2 | N | Functional classification | 5-4-2
6-8 | 3 | A | Station identification number | 5-4-2
9 | 1 | N | Direction of travel | 5-4-2
10-11 | 2 | N | Year of data | 5-4-3
12 | 1 | N | Posted route number category | 5-4-3
13-17 | 5 | N | Posted route number | 5-4-3
18-20 | 3 | N | County code | 5-4-3
21-32 | 12 | N | HPMS sample number | 5-4-3
33 | 1 | N | HPMS sample section subdivision number | 5-4-4
34-35 | 2 | N | Year station was established | 5-4-4
36 | 1 | N | Number of lanes in one direction at site | 5-4-4
37 | 1 | N | Type of weighing equipment | 5-4-4
38 | 1 | N | Method of classification counting | 5-4-4
39 | 1 | N | Coordination with enforcement activities | 5-4-5
40-45 | 6 | N | Most current AADT figure | 5-4-5
46-80 | 35 | A | Location of station (distance and direction from nearest major intersecting route) | 5-4-5

Source: Traffic Monitoring Guide (TMG) -2nd edition, Federal Highway Administration, FHWA-PL-92-017, 1992, pg. 5-4-1.

The following are the code ranges for station identification cards for the indicated fields for data collected by the LTPP program. Other values may be appropriate for statewide data collection systems. For the definition associated with each code see the relevant TMG page.

  • Number of lanes - 1-5
  • Posted route number category - 0-4
  • Type of weighing equipment - 0, 5-8
  • Method of vehicle classification counting - 0, 3-8
  • Coordination with enforcement activities - 1,2

F.10 Format - Weight Records (W-card)

Cols. | No. of Cols. | Description | TMG Page
1 | 1 | Truck weight record code (W) | 6-5-1
2-3 | 2 | State code | 6-2-1
4-9 | 6 | Station Identification Number | 6-2-3
10 | 1 | Direction of Travel | 6-2-3
11 | 1 | Lane of Travel | 6-2-3
12-13 | 2 | Year of Data | 6-2-3
14-15 | 2 | Month of Data | 6-3-1
16-17 | 2 | Day of Data | 6-3-1
18-19 | 2 | Hour of Data |
20-21 | 2 | Vehicle Class | 6-5-3
22-24 | 3 | Open | 6-5-3
25-28 | 4 | Total Weight of Vehicle | 6-5-3
29-30 | 2 | Number of Axles | 6-5-3
31-33 | 3 | A-axle weight* |
34-36 | 3 | (A-B) axle spacing** |
37-39 | 3 | B-axle weight* |
40-42 | 3 | (B-C) axle spacing** |
43-45 | 3 | C-axle weight* |
46-48 | 3 | (C-D) axle spacing** |
49-51 | 3 | D-axle weight* |
52-54 | 3 | (D-E) axle spacing** |
55-57 | 3 | E-axle weight* |
58-60 | 3 | (E-F) axle spacing** |
61-63 | 3 | F-axle weight* |
64-66 | 3 | (F-G) axle spacing** |
67-69 | 3 | G-axle weight* |
70-72 | 3 | (G-H) axle spacing** |
73-75 | 3 | H-axle weight* |
76-78 | 3 | (H-I) axle spacing** |
79-81 | 3 | I-axle weight* |
82-84 | 3 | (I-J) axle spacing** |
85-87 | 3 | J-axle weight* |
88-90 | 3 | (J-K) axle spacing** |
91-93 | 3 | K-axle weight* |
94-96 | 3 | (K-L) axle spacing** |
97-99 | 3 | L-axle weight* |
100-102 | 3 | (L-M) axle spacing** |
103-105 | 3 | M-axle weight* |

Additional fields if needed.
* Axle weights are to the nearest tenth of a metric ton (100 kilograms) without a decimal point.
** Axle spacings are to the nearest tenth of a meter (100 millimeters) without a decimal point.

Source: TMG, 3rd edition, pg. 6-5-2.

The vehicle classification expected with this record is either the FHWA 13 bin scheme from the TMG, 3rd edition or a two-digit state classification scheme. The classifications are as follows:

1 - Motorcycles
2 - Passenger cars
3 - Other 2-axle, 4-tire single unit vehicles
4 - Buses
5 - 2-axle, 6-tire single unit trucks
6 - 3-axle single unit trucks
7 - 4 or more axle single unit trucks
8 - 4 or less axle single trailer trucks
9 - 5-axle single trailer trucks
10 - 6 or more axle single trailer trucks
11 - 5 or less axle multi-trailer trucks
12 - 6-axle multi-trailer trucks
13 - 7 or more axle multi-trailer trucks
(14- unknown or state defined)
(15- unknown)

F.11 Format - Station Description Record (S-Card)

Column | Field Length | Alpha/Numeric | Description | TMG Ref Pg.
1 | 1 | A | Record type (must be S) | 6-2-1
2-3 | 2 | N | FIPS State Code | 6-2-1
4-9 | 6 | A | Station ID | 6-2-3
10 | 1 | N | Direction of Travel Code | 6-2-3
11 | 1 | N | Lane of Travel | 6-2-3
12-13 | 2 | N | Year of Data | 6-2-3
14-15 | 2 | N | Functional Classification Code | 6-2-4
16 | 1 | N | Number of Lanes in Direction Indicated | 6-2-4
17 | 1 | A | Sample Type for Traffic Volume | 6-2-4
18 | 1 | N | Number of Lanes Monitored for Traffic Volume | 6-2-4
19 | 1 | N | Method of Traffic Volume Counting | 6-2-4
20 | 1 | A | Sample Type for Vehicle Classification | 6-2-5
21 | 1 | N | Number of Lanes Monitored for Vehicle Classification | 6-2-5
22 | 1 | N | Method of Vehicle Classification | 6-2-5
23 | 1 | A | Algorithm for Vehicle Classification | 6-2-5
24-25 | 2 | N | Classification System for Vehicle Classification | 6-2-5
26 | 1 | A | Sample Type for Truck Weight | 6-2-6
27 | 1 | N | Number of Lanes Monitored for Truck Weight | 6-2-6
28 | 1 | N | Method of Truck Weighing | 6-2-6
29 | 1 | A | Calibration of Weighing System | 6-2-7
30 | 1 | N | Method of Data Retrieval | 6-2-7
31 | 1 | A | Type of Sensor | 6-2-7
32 | 1 | A | Second Type of Sensor | 6-2-8
33-34 | 2 | N | Equipment Make | 6-2-8
35-49 | 15 | A | Equipment Model | 6-2-9
50-51 | 2 | N | Second Equipment Make | 6-2-9
52-66 | 15 | A | Second Equipment Model | 6-2-9
67-72 | 6 | N | Current Directional AADT | 6-2-10
73-78 | 6 | A | Matching Station ID for Previous Data | 6-2-10
79-80 | 2 | N | Year Station Established | 6-2-10
81-82 | 2 | N | Year Station Discontinued | 6-2-10
83-85 | 3 | N | FIPS County Code | 6-2-10
86 | 1 | A | HPMS Sample Type | 6-2-10
87-98 | 12 | N | HPMS Sample Number or Kilometerpoints | 6-2-10
99 | 1 | N | HPMS Subdivision Number | 6-2-10
100 | 1 | N | Posted Route Signing | 6-2-11
101-108 | 8 | N | Posted Signed Route Number | 6-2-11
109 | 1 | N | Concurrent Route Signing | 6-2-11
110-117 | 8 | N | Concurrent Signed Route Number | 6-2-11
118-167 | 50 | A | Station Location | 6-2-11

Source: Traffic Monitoring Guide (TMG) - 3rd edition, Federal Highway Administration, FHWA, February 1995, pg. 6-2-2.

The following are the additional code ranges for station identification cards for the indicated fields for data collected by the LTPP program. Other values may be appropriate for statewide data collection systems. For the definition associated with each code see the relevant TMG page.

  • Sample type for traffic volume - T, N
  • Method of traffic volume counting - 1-3
  • Sample type for vehicle classification - H, N
  • Method of vehicle classification - 1-3
  • Algorithm for vehicle classification - A-H, K-N, Z
  • Classification system for vehicle - 1-5, 13-15 and others TBD
  • Sample type for truck weight - B, L, T, N
  • Method of truck weighing - 1, 2, 4, 5
  • Calibration of the weighing system - A-D, M, S-U, Z
  • Method of data retrieval - 1, 2
  • Type of sensor - A-I, K-M, P-X, Z
  • Equipment make - 0-18, 21, 23, 24, 30-63, 99
  • HPMS sample type - Y, N

All text fields in this record are left justified.

F.12 Codes used in TMG card submissions

DIRECTION:

1 - North
2 - Northeast
3 - East
4 - Southeast
5 - South
6 - Southwest
7 - West
8 - Northwest

FUNCTIONAL CLASS:

RURAL
Code - Functional Classification
01 - Principal Arterial - Interstate
02 - Principal Arterial - Other or Expressways
06 - Minor Arterial
07 - Major Collector
08 - Minor Collector
09 - Local System

URBAN
Code - Functional Classification
11 - Principal Arterial - Interstate
12 - Principal Arterial - Other Freeways
14 - Principal Arterial - Other
16 - Minor Arterial
17 - Collector
19 - Local System

LANE OF TRAVEL OR MAINLINE LANE OF TRAVEL:
0 = combined lanes
1 = outside (rightmost) lane
2 = next to outside lane, ... to 9 = inside lane

F.13 Format - Weight Records (HELP-card)

Field                          Length  Format  Starts in column
L = LANE                       1       n       2
LD = LANE DIRECTION            2       nn      4
MO = MONTH                     2       nn      7
DD = DAY                       2       nn      10
YY = YEAR                      2       nn      13
HH = HOUR                      2       nn      16
MN = MINUTE                    2       nn      19
SS = SECOND                    2       nn      22
HS = HUNDREDTHS OF SECONDS     2       nn      25
VEHNUM = VEHICLE NUMBER        6       nnnnnn  28
NA = NUMBER OF AXLES           2       nn      35
CL = CLASS                     2       nn      38
GROS = GROSS WEIGHT * 10       4       nnnn    41
LENG = OVERALL LENGTH * 10     4       nnnn    46
SPED = SPEED * 10              4       nnnn    51
SP1 = AXLE SPACING 12 * 10     3       nnn     56
SP2 = AXLE SPACING 23 * 10     3       nnn     60
SP3 = AXLE SPACING 34 * 10     3       nnn     64
SP4 = AXLE SPACING 45 * 10     3       nnn     68
SP5 = AXLE SPACING 56 * 10     3       nnn     72
SP6 = AXLE SPACING 67 * 10     3       nnn     76
SP7 = AXLE SPACING 78 * 10     3       nnn     80
SP8 = AXLE SPACING 89 * 10     3       nnn     84
WT1 = WEIGHT OF AXLE 1 * 10    3       nnn     88
WT2 = WEIGHT OF AXLE 2 * 10    3       nnn     92
WT3 = WEIGHT OF AXLE 3 * 10    3       nnn     96
WT4 = WEIGHT OF AXLE 4 * 10    3       nnn     100
WT5 = WEIGHT OF AXLE 5 * 10    3       nnn     104
WT6 = WEIGHT OF AXLE 6 * 10    3       nnn     108
WT7 = WEIGHT OF AXLE 7 * 10    3       nnn     112
WT8 = WEIGHT OF AXLE 8 * 10    3       nnn     116
WT9 = WEIGHT OF AXLE 9 * 10    3       nnn     120

A HELP file is a comma-separated-value file in which all fields are numeric. HELP stands for highway electronic license plate and refers to one of the earliest commercial-vehicle ITS applications. All numbers are right justified within a field. Fields are comma delimited. Each record starts with a "<" and ends with a ">". All weights are in tenths of kips and all spacings are in tenths of feet; values are multiplied by 10 to produce integers in the record.
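The field order and the *10 scaling described above can be illustrated with a short parsing routine. This is a hypothetical sketch, not part of the LTPP software; the field names follow the table above, and the sample values used to exercise it are fabricated.

```python
# Hypothetical sketch of decoding one HELP record per the field layout
# above; it is not part of the LTPP software.

FIELDS = ["L", "LD", "MO", "DD", "YY", "HH", "MN", "SS", "HS",
          "VEHNUM", "NA", "CL", "GROS", "LENG", "SPED",
          "SP1", "SP2", "SP3", "SP4", "SP5", "SP6", "SP7", "SP8",
          "WT1", "WT2", "WT3", "WT4", "WT5", "WT6", "WT7", "WT8", "WT9"]

# These fields are stored as value * 10 (tenths of kips, tenths of feet,
# speed and length * 10).
SCALED = {"GROS", "LENG", "SPED",
          "SP1", "SP2", "SP3", "SP4", "SP5", "SP6", "SP7", "SP8",
          "WT1", "WT2", "WT3", "WT4", "WT5", "WT6", "WT7", "WT8", "WT9"}

def parse_help_record(line):
    """Strip the '<' and '>' record delimiters, split the comma-delimited
    numeric fields, and rescale the * 10 fields back to kips and feet."""
    body = line.strip().lstrip("<").rstrip(">")
    values = [int(v) for v in body.split(",")]
    record = dict(zip(FIELDS, values))
    for name in SCALED:
        record[name] = record[name] / 10.0
    return record
```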

F.14 Format - ATR Station Record (1-Card)

Column   Field Length  Alpha/Numeric  Description
1        1             N              Record Type: 1 = ATR Station
2-3      2             N              FIPS State Code
4-5      2             N              Functional Classification Code
6-11     6             A              Station Identification
12       1             N              Direction of Travel
13       1             N              Lane of Travel
14       1             N              Posted Route Signing
15-20    6             N              Posted Signed Route Number
21       1             N              Concurrent Route Signing
22-27    6             N              Concurrent Signed Route Number
28-30    3             N              FIPS County Code
31-42    12            N              HPMS Sample Number or Kilometerpoints
43       1             N              HPMS Subdivision Number
44-45    2             N              Year Station Established
46-47    2             N              Year Station Discontinued
48       1             N              Method of Data Retrieval
49-50    2             N              Equipment Make
51-100   50            A              Location of Station

Source: Traffic Monitoring Guide (TMG), 2nd edition, Federal Highway Administration, FHWA-PL-92-017, 1992, pg. 3-2-3

This format is provided for information only. LTPP does not process this record format and does not expect to in the future, since the data associated with it is not useful for LTPP research needs.

F.15 Format - Volume data records (3-card)

Column    Field Length  Alpha/Numeric  Description
1         1             N              Record Identification: 3 = ATR data
2-3       2             N              FIPS State Code (TMG pg. 5-4-1)
4-5       2             N              Functional Classification
6-11      6             N              Station Identification Number
12        1             N              Direction of Travel (TMG 5-4-2)
13        1             N              Mainline Lane of Travel
14-15     2             N              Year of Data (last 2 digits)
16-17     2             N              Month of Data (01-12)
18-19     2             N              Day of Month of Data (01-31)
20        1             N              Day of Week (1 = Sunday ... 7 = Saturday)
21-25     5             N              Traffic Volume Counted, 00:01 - 01:00
26-30     5             N              Traffic Volume Counted, 01:01 - 02:00
...                                    (hourly traffic volumes counted)
136-140   5             N              Traffic Volume Counted, 23:01 - 24:00
141       1             N              Footnotes (0 = No restrictions, 1 = Construction or other activity affected traffic flow)

Source: Traffic Monitoring Guide (TMG) - 2nd edition, Federal Highway Administration, FHWA-PL-92-017, 1992, pg. 3-2-4; 3rd edition, Federal Highway Administration, FHWA, February 1995, pg. 6-3-3.

This data record type contains no useful information about trucks. It is useful only for expanding sampled AVC data to full-year estimates. For a full discussion of its use, see the LTPP traffic analysis software documentation. 3-card-specific codes:

DAY OF WEEK (3-card only): 1 = Sunday, 2 = Monday, 3 = Tuesday, 4 = Wednesday, 5 = Thursday, 6 = Friday, 7 = Saturday.

G. ORACLE Tables

There are six different types of ORACLE tables created by this program. One is a unique table, LTPPFILETRACKER. Two are input-file specific, two are year specific for a site, and one is site specific. The existence of ORACLE tables makes it possible to generate statistics about data submitted and processed that previously required a significant amount of labor to generate. While the reports have not been incorporated in the software, a discussion of the possibilities is contained in section G.9.

ORACLE RTDB tables are created in the user account where processing takes place. A traffic user account should be created by the ORACLE DBA and used by all traffic processing personnel to ensure that all traffic tables are created in the same account. This will segregate the traffic tables from the other IMS tables, minimizing the impact on the IMS database. The traffic user account should be created with storage parameters that will allow the thousands of small tables currently required by this software. Be sure the database administrator is aware of their existence to ensure backups are made and to avoid accidental deletions.

G.1 LTPPFILETRACKER

This table is used to track the loading of files and the steps in the processing. A description of the table appears below. The information in this table is displayed in the File Tracker Module to locate the analysis data and to permit file selection for Plett-Plots. It is used in the Graph Manager to identify data which can be graphed. It stores the number of records and days of data from each file loaded. It also stores the information on the processing steps. The table description is followed by variable definitions.

SQLWKS> desc ltppfiletracker;

Column Name         Null?     Type
FILENAME VARCHAR2(32)
STATE_CODE NUMBER
SHRP_ID CHAR(4)
VERSION VARCHAR2(4)
ARCHIVEDIN VARCHAR2(512)
STARTDATE DATE
ENDDATE DATE
PROCESSED DATE
RECORDQC NUMBER
DAILYQC NUMBER
QCREPORT NUMBER
COMMENTS VARCHAR2(2000)
REPORTSENT NUMBER
REPORTRECV NUMBER
PURGESAPPLIED NUMBER
FILETYPE VARCHAR2(32)
DAYS NUMBER
RECORDS NUMBER

FILENAME = The name of the input data file where an underscore replaces the period between the eight character file name and its extension.

STATE_CODE = FIPS state code; incomplete loads/ bad data will be represented by 00.

SHRP_ID = SHRP_ID; incomplete loads/ bad data will be represented by 0000.

VERSION = Which of potentially multiple loads of this data the information in this record represents.

ARCHIVEDIN = The subdirectory where the output file is located including the name of the output file.

STARTDATE = The first date for which data exists in the input file. The year is currently reported in two-digit format. Invalid loading attempts result in a date of 01-Jan-2025.

ENDDATE = The last date for which data exists in the input file. The year is currently reported in two digit format. Invalid loading attempts result in a date of 01-Jan-2025.

PROCESSED = The date the file was loaded. The year is currently reported in two-digit format. This value is automatically assigned by the software. Invalid loading attempts are characterized by 01-Jan-2025.

RECORDQC = A 0/1 variable assigned by the software as 1 when the record level QC process is successful.

DAILYQC = A 0/1 variable assigned by the software as 1 when the Daily QC process is successfully completed.

QCREPORT = A 0/1 variable assigned by the software as 0 when the file is loaded. A value of 1 is assigned after the user checks the QC report box in the processing box and applies that change.

COMMENTS = Comments entered in the comments box of the LTPP File Tracker dialog box. The user must apply the comments in order for them to be stored in the table. A maximum of 2000 characters is allowed.

REPORTSENT = A 0/1 variable assigned by the software as 0 when the file is loaded. A value of 1 is assigned after the user checks the 'QC Report Sent' box in the processing box and applies that change.

REPORTRECV = A 0/1 variable assigned by the software as 0 when the file is loaded. A value of 1 is assigned after the user checks the 'Report Received' box in the processing box and applies that change.

PURGESAPPLIED = A 0/1 variable assigned by the software as 0 when the file is loaded. A value of 1 is assigned after the user applies a purge.

FILETYPE = A label indicating what the original file type was, 4-card, 7-card, C-card or W-card.

DAYS = The number of days of data in the file including any with errors.

RECORDS = The number of records in the file including any with errors. Continuation cards are not included in the count.
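The REPORTSENT and REPORTRECV flags above support simple status queries over LTPPFILETRACKER. The sketch below uses SQLite as a stand-in for the ORACLE traffic account, with the table reduced to the columns the query needs and fabricated rows, to list files whose QC report has been sent but not yet received back.

```python
import sqlite3

# Sketch of a status query over LTPPFILETRACKER: files whose QC report
# has been sent (REPORTSENT = 1) but not yet received back
# (REPORTRECV = 0). SQLite stands in for ORACLE; rows are fabricated.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ltppfiletracker (
                   filename   TEXT,
                   version    TEXT,
                   reportsent INTEGER,
                   reportrecv INTEGER)""")
con.executemany("INSERT INTO ltppfiletracker VALUES (?, ?, ?, ?)", [
    ("A1234599_C01", "1", 1, 1),   # report sent and returned
    ("A1234599_C02", "1", 1, 0),   # sent, still awaiting return
    ("A1234599_W01", "1", 0, 0),   # not yet sent
])
pending = [row[0] for row in con.execute(
    """SELECT filename FROM ltppfiletracker
       WHERE reportsent = 1 AND reportrecv = 0""")]
```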

G.2 LTPPD4 tables

LTPPD4 tables contain a record for every day of AVC data provided for all years. There is one such table for each site. The naming convention is LTPPD4xxxxxx where xxxxxx is the STATE_CODE and SHRP_ID for the site. The table structure is as follows.

SQLWKS> desc ltppd4xxxxxx;

Column Name         Null?     Type
FILENAME VARCHAR2(32)
STATE_CODE  NUMBER
FUNC_CLASS NUMBER
STATION VARCHAR2(6)
DIRECTION NUMBER
YEAR NUMBER
MONTH NUMBER
DAY NUMBER
HOUR NUMBER
LANE  NUMBER
ERROR NUMBER
PURGE NUMBER
VOLUME1 NUMBER
VOLUME2 NUMBER
VOLUME3 NUMBER
VOLUME4 NUMBER
VOLUME5 NUMBER
VOLUME6 NUMBER
VOLUME7 NUMBER
VOLUME8 NUMBER
VOLUME9 NUMBER
VOLUME10 NUMBER
VOLUME11 NUMBER
VOLUME12 NUMBER
VOLUME13 NUMBER
VOLUME14 NUMBER
VOLUME15 NUMBER
VOLUME16 NUMBER
VOLUME17 NUMBER
VOLUME18 NUMBER
VOLUME19 NUMBER
VOLUME20 NUMBER

Note that the table does not have any fields with non-null requirements. Thus there is the potential for duplicate records based on DIRECTION, YEAR, MONTH, DAY, HOUR, and LANE, particularly when the data is split across two data files. FILENAME is the name of the input file. HOUR represents the first hour of the day found in the file. All other elements except PURGE are read and summed or assigned using the summary.dat file created from classification records. ERROR is the code for any error encountered in a DAY's records. The error may be record level or daily in nature. Only records with non-critical errors (EDIT_1 codes other than C or Q) are included in the summarization. There must be 24 hours in a day to create a record in the table. PURGE is a 0/1 field indicating whether that day's data has been purged so that it will be omitted from the analysis: zero is not purged, one is purged. Purges are applied at the daily level by design. Data beyond column 51 on 4-cards is found in VOLUME14-VOLUME20 as applicable.

At least one calendar day of good data must exist for this table to exist for a site.
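Because the table carries no non-null or unique constraints, the duplicate-record possibility noted above can be checked with a grouping query. This is a hedged sketch with SQLite standing in for ORACLE, a reduced column set, and fabricated rows.

```python
import sqlite3

# Sketch of a duplicate check on an LTPPD4 table: the same day can
# appear twice when its data is split across two input files. SQLite
# stands in for ORACLE; the rows are fabricated.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ltppd4xxxxxx (
                   filename TEXT, direction INTEGER, year INTEGER,
                   month INTEGER, day INTEGER, hour INTEGER,
                   lane INTEGER)""")
con.executemany("INSERT INTO ltppd4xxxxxx VALUES (?, ?, ?, ?, ?, ?, ?)", [
    ("FILE_A", 1, 99, 6, 14, 0, 1),
    ("FILE_B", 1, 99, 6, 14, 0, 1),   # same day, loaded from a second file
    ("FILE_A", 1, 99, 6, 15, 0, 1),
])
duplicates = con.execute(
    """SELECT direction, year, month, day, hour, lane, COUNT(*) AS n
       FROM ltppd4xxxxxx
       GROUP BY direction, year, month, day, hour, lane
       HAVING n > 1""").fetchall()
```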

G.3 LTPPVOL7 tables

LTPPVOL7 tables are the WIM-record equivalent of the LTPPD* tables. Unlike the LTPPD* tables, they are yearly rather than for the site as a whole. The table naming convention is LTPPVOL7yyyyxxxxxx where yyyy is the 4-digit year and xxxxxx is the STATE_CODE, SHRP_ID combination for the site. These tables contain a summary of the volumes by vehicle class for a given MONTH, DAY, LANE, and DIRECTION. They have no non-null fields. Since weight data may be collected in two different files for the same day, there is the potential for duplicate records. FILENAME is exactly that. CNT1-CNT20 reflect the number of vehicles in each of the 20 possible bins. Generally, for U.S. data the count values used should be 1-15, reflecting use of the 13-bin FHWA system with 15 representing vehicles tagged as invalid. There is a one-to-one correspondence between the number in the field name and the bin number. This table does not have ERROR characteristics in it. Records with critical errors, as identified by EDIT_1 codes on the output files, are excluded from the counts in this table. A description of the table appears below.

SQLWKS> desc ltppvol7yyyyxxxxxx;

Column Name         Null?     Type
FILENAME VARCHAR2(32)
MONTH NUMBER
DAY NUMBER
LANE NUMBER
DIRECTION NUMBER
PURGE NUMBER
CNT1 NUMBER
CNT2 NUMBER
CNT3 NUMBER
CNT4 NUMBER
CNT5 NUMBER
CNT6 NUMBER
CNT7 NUMBER
CNT8 NUMBER
CNT9 NUMBER
CNT10 NUMBER
CNT11 NUMBER
CNT12 NUMBER
CNT13 NUMBER
CNT14 NUMBER
CNT15 NUMBER
CNT16 NUMBER
CNT17 NUMBER
CNT18 NUMBER
CNT19 NUMBER
CNT20 NUMBER

G.4 LTPPGVW tables

The LTPPGVW tables store gross vehicle weight distributions for a given year at a site, for each vehicle class, on a monthly basis by lane. The table is described below. The table naming convention is LTPPGVWyyyyxxxxxx where yyyy is the 4-digit year and xxxxxx is the STATE_CODE, SHRP_ID combination for the site. An input file for each set of weight distributions is identified in the table. If more than one file has data for a month, the last file loaded will show in FILENAME. There will be duplicate records for a month if the data is split across two or more weight files. Each bin covers a 4-kip interval with a maximum vehicle weight of 204,000 pounds. There is no provision for including PURGE or ERROR characteristics in this table. If purges are applied to a weight file, this table is not updated. Records with critical errors, as identified by EDIT_1 codes on the output files, are not included in these numbers.

SQLWKS> desc ltppgvwyyyyxxxxxx;

Column Name         Null?     Type
FILENAME VARCHAR2(32)
MONTH NUMBER
LANE NUMBER
DIRECTION NUMBER
VEHICLE_CLASS NUMBER
BIN1 NUMBER
BIN2 NUMBER
BIN3 NUMBER
BIN4 NUMBER
BIN5 NUMBER
BIN6 NUMBER
BIN7 NUMBER
BIN8 NUMBER
BIN9 NUMBER
BIN10 NUMBER
BIN11 NUMBER
BIN12 NUMBER
BIN13 NUMBER
BIN14 NUMBER
BIN15 NUMBER
BIN16 NUMBER
BIN17 NUMBER
BIN18 NUMBER
BIN19 NUMBER
BIN20 NUMBER
BIN21 NUMBER
BIN22 NUMBER
BIN23 NUMBER
BIN24 NUMBER
BIN25 NUMBER
BIN26 NUMBER
BIN27 NUMBER
BIN28 NUMBER
BIN29 NUMBER
BIN30 NUMBER
BIN31 NUMBER
BIN32 NUMBER
BIN33 NUMBER
BIN34 NUMBER
BIN35 NUMBER
BIN36 NUMBER
BIN37 NUMBER
BIN38 NUMBER
BIN39 NUMBER
BIN40 NUMBER
BIN41 NUMBER
BIN42 NUMBER
BIN43 NUMBER
BIN44 NUMBER
BIN45 NUMBER
BIN46 NUMBER
BIN47 NUMBER
BIN48 NUMBER
BIN49 NUMBER
BIN50 NUMBER
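The 4-kip binning above can be sketched as a small function. The boundary handling assumed here (BIN1 holds weights up to and including 4,000 lb, BIN2 up to 8,000 lb, and anything above the top interval collapsing into BIN50) is an assumption, not stated in the table description.

```python
# Hedged sketch of mapping a gross vehicle weight to one of the 50
# 4-kip GVW bins. The exact boundary handling is an assumption.
def gvw_bin(gross_weight_lb):
    """Map a gross vehicle weight in pounds to a bin index from 1 to 50."""
    if gross_weight_lb <= 0:
        return 1
    index = -(-gross_weight_lb // 4000)   # ceiling division
    return min(index, 50)
```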

G.5 LTPPRC tables

LTPPRC tables are used to store erroneous classification records for a specific input file. A table is created even if there are no errors in the file. The naming convention is LTPPRCxxxxxx_ext_v, where xxxxxx is the combination of STATE_CODE and SHRP_ID, ext is the extension of the input file, and v is the version counter for loading the data into the database. A full day's records will be stored when the error is the result of a 24-hour interval evaluation such as 1 a.m. > 1 p.m. volumes. A partial day's records will be stored when more or less than 24 hours of data are found for a file. Inspection of the error tables for consecutively labeled data files may be used in conjunction with the appropriate LTPPD* table to determine whether a complete day's worth of data has been split across two files. FILENAME is null since the source file is identified in the table name. FUNC_CLASS is functional classification. Although there are no non-null fields, there should be no duplicate records in this table if the source file was correctly prepared. A duplicate for DIRECTION, DAY, HOUR, and LANE may occur when clocks are turned back for daylight saving time. All other fields but ERROR and PURGE are read directly from the summary.dat file for the classification files. ERROR uses the same codes as the LTPPD file. PURGE is 0 (keep the data) or 1 (eliminate the data) depending on whether or not it is to be used in further calculations. The table structure is shown below.

SQLWKS> desc ltpprcxxxxxx_ext_v;

Column Name         Null?     Type
FILENAME VARCHAR2(32)
STATE_CODE NUMBER
FUNC_CLASS NUMBER
STATION VARCHAR2(6)
DIRECTION NUMBER
YEAR NUMBER
MONTH NUMBER
DAY NUMBER
HOUR NUMBER
LANE NUMBER
ERROR NUMBER
PURGE NUMBER
VOLUME1 NUMBER
VOLUME2 NUMBER
VOLUME3 NUMBER
VOLUME4 NUMBER
VOLUME5 NUMBER
VOLUME6 NUMBER
VOLUME7 NUMBER
VOLUME8 NUMBER
VOLUME9 NUMBER
VOLUME10 NUMBER
VOLUME11 NUMBER
VOLUME12 NUMBER
VOLUME13 NUMBER
VOLUME14 NUMBER
VOLUME15 NUMBER
VOLUME16 NUMBER
VOLUME17 NUMBER
VOLUME18 NUMBER
VOLUME19 NUMBER
VOLUME20 NUMBER

G.6 LTPPRW tables

The LTPPRW table serves the same function for weight files as the LTPPRC table serves for classification files: storage of erroneous records. A description of the table is shown below. The naming convention, LTPPRWxxxxxx_ext_v, is nearly identical to that for LTPPRC tables. Unlike the LTPPRC tables, there is no FILENAME field. All other elements are read directly from the summary.dat file except for PURGE. The list of possible errors for a weight file is different from those for a classification file. PURGE is 0/1 valued: zero for a record that has not been removed, one for a record that has been removed. The PURGE information does not reflect the coding in the processed data file for critical errors with EDIT_1 codes of C. If a record has a continuation card, the information from both records is included in a single record in this table.

SQLWKS> desc ltpprwxxxxxx_ext_v;

Column Name         Null?     Type
STATE_CODE NUMBER
FUNC_CLASS NUMBER
STATION VARCHAR2(6)
DIRECTION NUMBER
YEAR NUMBER
MONTH NUMBER
DAY NUMBER
HOUR NUMBER
ERROR NUMBER
VEHICLE_CLASS NUMBER
BTYPE NUMBER
ETYPE NUMBER
REGWEIGHT NUMBER
BREG NUMBER
LANE NUMBER
COMMOD NUMBER
LOAD NUMBER
TOTWEIGHT NUMBER
TOTWHEEL NUMBER
SERIAL NUMBER
PURGE NUMBER
WEIGHT_A NUMBER
WEIGHT_B NUMBER
WEIGHT_C NUMBER
WEIGHT_D NUMBER
WEIGHT_E NUMBER
WEIGHT_F NUMBER
WEIGHT_G NUMBER
WEIGHT_H NUMBER
WEIGHT_I NUMBER
WEIGHT_J NUMBER
WEIGHT_K NUMBER
WEIGHT_L NUMBER
WEIGHT_M NUMBER
SPACE_AB NUMBER
SPACE_BC NUMBER
SPACE_CD NUMBER
SPACE_DE NUMBER
SPACE_EF NUMBER
SPACE_FG NUMBER
SPACE_GH NUMBER
SPACE_HI NUMBER
SPACE_IJ NUMBER
SPACE_JK NUMBER
SPACE_KL NUMBER
SPACE_LM NUMBER

G.7 LTPPERRORCOUNT

The LTPPERRORCOUNT table is a working table used for accumulating data in the preparation of the QC cover sheet. It is empty otherwise.

SQLWKS> desc ltpperrorcount;

Column Name         Null?     Type
ERROR NOT NULL NUMBER
LANE NOT NULL NUMBER
DIRECTION NOT NULL NUMBER
MONTH NOT NULL NUMBER
DAY NOT NULL NUMBER

G.8 Codes for ERROR in ORACLE tables

The following are the reasons associated with a value of ERROR in the traffic QC software ORACLE tables.

REASON              CODE
EDITFLAG_OK          0
BAD_CARDTYPE         1
BAD_ID6              2
BAD_ID3              3
CONSECHEADERRECS     4
BAD_DAY              5
BAD_WEEKDAY          6
BAD_STATE            7
BAD_STATION          8
BAD_FUNC             9
BAD_VEHTYPE         10
BAD_TOTWTSUB        11
BAD_TOTWHEELSUB     12
BAD_WEIGHT          13
BAD_SPACE           14
BAD_LANE            15
BAD_SERIAL          16
BAD_CONT            17
BAD_VOLUME          18
BAD_METHOD          19
BAD_ATR             20
BAD_ROUTE           21
BAD_SITE            22
BAD_EQUIP           23
BAD_COUNTMETHOD     24
BAD_ENFORCEMETHO    25
BAD_OPTCLASS        26
BAD_HOUR            27
BAD_BODYTYPE        28
BAD_ENGINETYPE      29
BAD_COMMODITY       30
BAD_MCYCLERPT       31
BAD_REGWEIGHT       32
BAD_BASISREG        33
BAD_LOADSTATUS      34
BAD_VEHCOMBO        35
BAD_MINUTE          36
BAD_SECOND          37
BAD_HUNDRETH        38
BAD_NUMAXLES        39
BAD_RECORDLEN       40
BAD_DATESEQ         41
BAD_ALLWEIGHTS      42
BAD_ALLSPACES       43
BAD_DIRECTION       44
BAD_TOTALWGT        45
BAD_TOTALSPACE      46
BAD_ROUTECAT        47
BAD_COUNTY          48
BAD_HPMS            49
BAD_AADT            50
BAD_FOOTNOTE        51
BAD_YRESTAB         52
BAD_YRDISC          53
BAD_YEAR            54
BAD_NUMAX           55
BAD_TIME            56
BAD_SPEED           57
BAD_MONTH           58
BAD_DATE            59
CONSEC_ZERO_VOLS    60
CONSEC_STATIC_VOLS  61
ONE_AM_ONE_PM       62
MISSING_HOUR_VOL    63
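When reading ERROR values back out of the LTPPR* tables, the code-to-reason mapping above can be kept as a simple lookup. The sketch below covers only a subset of the listed codes; it is illustrative, not part of the software.

```python
# A lookup from the ERROR code stored in the ORACLE tables back to its
# reason mnemonic, covering a subset of the codes listed above; extend
# it with the remaining entries as needed.
ERROR_REASONS = {
    0: "EDITFLAG_OK",
    5: "BAD_DAY",
    15: "BAD_LANE",
    18: "BAD_VOLUME",
    27: "BAD_HOUR",
    54: "BAD_YEAR",
    60: "CONSEC_ZERO_VOLS",
    61: "CONSEC_STATIC_VOLS",
    62: "ONE_AM_ONE_PM",
    63: "MISSING_HOUR_VOL",
}

def describe_error(code):
    """Return the reason mnemonic for a code, or a marker if unlisted."""
    return ERROR_REASONS.get(code, "UNKNOWN_CODE_%d" % code)
```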

G.9 Statistics Possible Using ORACLE Tables

The following statistics can be created for sites where all data for a given year has been processed through the new software.

  • Number of days of class data received: LTPPD4 table
  • Number of records of class data received: LTPPD4 table and LTPPRC tables (or LTPPFILETRACKER)
  • Number of days of WIM data received: LTPPVOL7 table
  • Number of records of WIM data received: LTPPVOL7 table and LTPPRW tables (or LTPPFILETRACKER)
  • Number of days of class data received vs. accepted for processing: Using the LTPPD4 table and the LTPPRC tables for a site for a given year.
  • Number of records of class data received vs. accepted for processing: Using the LTPPD4 table and the LTPPRC tables for a site for a given year.
  • Number of records of WIM data received vs. accepted for processing: Using the LTPPVOL7 table, and the LTPPRW tables for a site for a given year.
  • Number of files for which QC reports have been sent: LTPPFILETRACKER
  • Number of files for which QC reports have been received back: LTPPFILETRACKER
  • Number of sites for which QC reports have been sent: LTPPFILETRACKER
  • Number of sites for which QC reports have been received back: LTPPFILETRACKER
  • Number of sites for which traffic data has been received generally (or by type) (by state) (by year): Using LTPPFILETRACKER and EXPERIMENT_SECTION
  • Distribution of errors in classification data by type: LTPPD4
  • Percentage of invalid weight records in a file at a site: Using LTPPVOL7 and LTPPRW tables
  • Distribution of weight errors by type: LTPPRW tables
  • Distribution of vehicles by class: LTPPVOL7, LTPPD4 and LTPPGVW tables
  • Days of purged data: LTPPVOL7 and LTPPD4
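The first statistic in the list, number of days of class data received, can be sketched as a count of distinct calendar days over a reduced LTPPD4 table. SQLite stands in for ORACLE here and the rows are fabricated.

```python
import sqlite3

# Sketch of "number of days of class data received" from an LTPPD4
# table: count distinct calendar days. SQLite stands in for ORACLE.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ltppd4xxxxxx (
                   year INTEGER, month INTEGER, day INTEGER,
                   lane INTEGER)""")
con.executemany("INSERT INTO ltppd4xxxxxx VALUES (?, ?, ?, ?)", [
    (99, 6, 14, 1),
    (99, 6, 14, 2),   # second lane of the same calendar day
    (99, 6, 15, 1),
])
(days_received,) = con.execute(
    """SELECT COUNT(DISTINCT year * 10000 + month * 100 + day)
       FROM ltppd4xxxxxx""").fetchone()
```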

H. Processing Resubmitted Raw Data

There are three basic cases under which a data resubmission occurs. In the first, data was submitted in 1999 or later and processed only with the new software, and a determination is made somewhere in the QC process that the information is in error. The second case is where, for whatever reason, a decision is made to reduce the data processing by handling the LTPP lane only. The third is the result of an in-depth data review which calls into question data received and processed prior to 2000. In this instance the new software has no record of previous submissions in the current directory structure. Each case must be treated differently.

H.1 Data processed only by the NT software

A multi-step process must be used to eliminate all information which might confuse the QC process. However, not all data will be removed during this process, in order to retain the data trail. A record that the data was loaded and subsequently modified substantially at the raw data file level will be retained in the ORACLE table. The steps are as follows:

  1. Identify the file name, date and first and last records of the file being replaced (the original raw data file). Make a note of this information for use in subsequent steps.
  2. Erase the output file from the relevant AVC4 or WIM7 subdirectory using WIN NT Explorer, File Manager or other appropriate tool.
  3. Determine which summary.dat file(s) contain information from the input file. Summary.dat files which have data only from the input file (all data for the month was in a single input file) may be erased. For all other summary.dat files which have data from more than one input file, open a text editor and erase all records which have data from the raw data file being replaced. The records to be removed will all start with the file name of the raw data file, except the continuation records, which are left justified and should be removed along with their companion face cards. Save each non-empty file as summary.dat in its original directory.
  4. If the input file was a classification file there is one SQL which must be created. It is of the form "delete from LTPPD4xxxxxx where FILENAME = 'filename' " where xxxxxx is the appropriate STATE_CODE, SHRP_ID combination and filename is the exact name (upper case) of the file which has the data to be removed. Note that filenames in ORACLE tables for traffic use an underscore, not a period, to separate a filename from its extension. The SQL and its output should be spooled under a name of the format Dextmm-dd-yyyy.sql where ext is the extension of the file being replaced and mm-dd-yyyy is the date the action was taken. The output should be saved in the AVC4 directory for future reference. Permission is not needed to run this SQL.
  5. If the input file was a weight file there are two SQLs which must be created. The first is of the form "delete from LTPPVOL7yyyyxxxxxx where FILENAME = 'filename' " where yyyy is the year for the source file and xxxxxx is the appropriate STATE_CODE, SHRP_ID combination. Filename is the exact name (upper case) of the file which has the data to be removed. The SQL and its output should be spooled under a name of the format Vextmm-dd-yyyy.sql where ext is the extension of the file being replaced and mm-dd-yyyy is the date the action was taken. The output should be saved in the WIM7 directory for future reference. The second is of the form "delete from LTPPGVWyyyyxxxxxx where MONTH IN (a,b,c)" where yyyy is the year for the source file, xxxxxx is the appropriate STATE_CODE, SHRP_ID combination, and a, b, and c represent the month(s) of data included in the file to be reprocessed. Since the last file loaded is the name assigned to a month's data, the file name present in the table may or may not identify all of the data to be reprocessed. There is no way, short of reprocessing all the weight data with dates in the relevant months, to guarantee the validity of the weight graphs. The SQL and its output should be spooled under a name of the format Gextmm-dd-yyyy.sql where ext is the extension of the file being replaced and mm-dd-yyyy is the date the action was taken. The output should be saved in the WIM7 directory for future reference. Permission is not needed to run these SQLs.
  6. Select the File Tracker option from the main menu. Select the version of the loaded file that is being replaced. It should be the highest-numbered version. This version and its error files are NOT being removed; a record of the processing is being maintained in the LTPPFILETRACKER table. The comments section (View/Edit File Comments) should be annotated in a fashion similar to the following, and the comments APPLIED to ensure they are recorded in the database.
    'input file' was replaced on 'DATE'. The AVC4 summary.dat files for yyyy for the months of mmm, mmm... were erased. The summary.dat files for yyyy for the months of mmm, mmm ... were modified. 'output file' was erased. Changes to the LTPPD* table were made using Dextmm-dd-yyyy.sql
    OR
    'input file' was replaced on 'DATE'. The WIM7 summary.dat files for yyyy for the months of mmm, mmm... were erased. The summary.dat files for yyyy for the months of mmm, mmm ... were modified. 'output file' was erased. Changes to the LTPPVOL7 table and LTPPGVW table were made using Vextmm-dd-yyyy.sql and Gextmm-dd-yyyy.sql.

Note that the record pertaining to the original loading of this data file and the error file associated with it are NOT being removed.
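The spool-file naming convention used in steps 4 and 5 (Dextmm-dd-yyyy.sql, Vextmm-dd-yyyy.sql, Gextmm-dd-yyyy.sql) can be generated consistently with a small helper. This is an illustrative sketch; the extension used in the example is fabricated.

```python
from datetime import date

# Helper for the spool-file naming used in the resubmission steps above:
# prefix letter + input-file extension + mm-dd-yyyy action date + .sql.
def spool_name(prefix, input_ext, when):
    """prefix is 'D' (LTPPD4), 'V' (LTPPVOL7), or 'G' (LTPPGVW) deletes;
    input_ext is the extension of the file being replaced."""
    return "%s%s%s.sql" % (prefix, input_ext, when.strftime("%m-%d-%Y"))
```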

H.2 Going from all lanes to LTPP lane only

The software will permit loading of all lanes or only the LTPP lane. If it is decided to change already-processed files from all lanes to only the LTPP lane, the only way to clean up the traces is to treat the condition as a resubmittal of data and use the instructions of the section above.

H.3 Data not previously processed by the NT software

If one or more files processed by the old software are replaced, either all of the remaining files of that type or none of them need to be processed through the new QC software. If a QC packet from the QC software is desired, then all files of the type must be processed through the new QC software. If a QC packet from the traffic analysis software is satisfactory, then the replaced file should be processed through the QC software and all remaining files processed according to LTPP traffic analysis software instructions.


I. Data Evaluation and Error Identification

There are several steps in evaluating inputs. The first is verification that the file will load. Some of the checks at that step are identified in section I.5. The actual evaluation of data is done on the 4-card and 7-card versions of records. This means that error checking on data in c-card and w-card formats will not catch errors such as alphas in numeric fields or duplication of data within a record line in those formats. Error checking may not catch data duplication within a line for 4-card or 7-card records either. Within a record, error checking is for rational rather than "correct" values. Thus there is limited logic to verify that a 6-digit truck identifier actually represents a truck. Additionally, no checking is currently done to verify that the number of axles and spaces are consistent with each other and with the identified vehicle class.

The logic for checking errors is such that the first error found is the error identified with the record. In loading files, several attempts may need to be made to eliminate all problems preventing file loading.

Several of the following sub-sections list the allowable ranges, the valid values, the severity of an error, and the error number or flag associated with it. The allowable ranges (Min, Max) are those of the original software. The Valid Data is what the current version of the software expects for critical elements. Where there is no difference between the two range groups, Valid Data is blank. Severity of an error indicates whether the data from the record should be included in the summaries found in the LTPPD4*, LTPPVOL7* and LTPPGVW* tables. All critical errors are omitted from the summaries. All records with errors are included in the LTPPR* tables and the record counts for LTPPFILETRACKER. Error numbers are used in the LTPPR* tables to identify why the record was rejected. Flags are used to attach reasons to output data files. The flags and the error numbers should match.

I.1 Card 4 Range Check Parameters

The state codes identified are all of those which actually could be encountered by the LTPP program for the states and provinces.

The functional class values represent the systems on which LTPP data is collected. LTPP does not expect data from local streets.

The value in columns 10 and 11 is used to differentiate between the 4-card and the c-card formats (and the 7-card and w-card formats) on loading. The values in the relevant columns are mutually exclusive. For 4- and 7-cards the allowable values are 00-09 and 89-99; for C- and W-cards the allowable values are 10-88. The C-/W-card limits will permit loading of data collected with lane 0, the code for all lanes. However, this value of lane will be flagged as a critical error and the record omitted from processing for annual estimates. For data submitted on a two-lane roadway where the code for lane is 0, it is strongly recommended, subject to verification of this condition, that a copy of the original data file be modified and loaded with lane=1. That change, if made, MUST be entered in the LTPPFILETRACKER comments and the PURGE file for that site even if no purges are made.
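The mutually exclusive ranges imply a simple discriminator for the loader. This sketch assumes the two-digit value has already been read from the record; it is illustrative only.

```python
# Sketch of the loader's format test implied by the mutually exclusive
# ranges described above, given the two-digit value from the record.
def card_family(two_digit_value):
    """00-09 and 89-99 indicate 4-/7-card data; 10-88 indicate
    C-/W-card data."""
    if 0 <= two_digit_value <= 9 or 89 <= two_digit_value <= 99:
        return "4/7-card"
    if 10 <= two_digit_value <= 88:
        return "C/W-card"
    raise ValueError("value outside 00-99: %r" % two_digit_value)
```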

The software makes no provision for verifying that the day of the month does not exceed the number of days actually in that month.

Field       Min  Max   Valid Data                Field Type  Severity      Flag  Error
CARD TYPE   4    4     4                         Numeric     Critical      A     1
STATE       1    90    1,2,4-6,8-56,72,81-90     Numeric     Critical      G     7
FUNC CLASS  0    99    1,2,6,7,8,11,12,14,16,17  Numeric     Non Critical  I     9
STATION     0    999   alphanumeric              Alpha       Non Critical  C     3
DIRECTION   1    8     1-8                       Numeric     Critical      0     44
YEAR        0    99    89-07                     Numeric     Critical      !     54
MONTH       1    12                              Numeric     Critical      $     58
DAY         1    31                              Numeric     Critical      .     5
HOUR        0    24    0-23                      Numeric     Critical      a     27
CLASS 1     0    99                              Numeric     Critical      R     18
CLASS 2     0    9999                            Numeric     Critical      R     18
CLASS 3     0    999                             Numeric     Critical      R     18
CLASS 4     0    99                              Numeric     Critical      R     18
CLASS 5     0    999                             Numeric     Critical      R     18
CLASS 6     0    99                              Numeric     Critical      R     18
CLASS 7     0    99                              Numeric     Critical      R     18
CLASS 8     0    99                              Numeric     Critical      R     18
CLASS 9     0    999                             Numeric     Critical      R     18
CLASS 10    0    99                              Numeric     Critical      R     18
CLASS 11    0    99                              Numeric     Critical      R     18
CLASS 12    0    99                              Numeric     Critical      R     18
CLASS 13    0    99                              Numeric     Critical      R     18
MCYCL RPT   0    1                               Numeric     Non Critical  e     31
VEH COMBO   0    1                               Numeric     Non Critical  i     35
LANE        0    9     1-8                       Numeric     Critical      O     15
CLASS 14    0    99                              Alpha       Critical      R     18
CLASS 15    0    99                              Alpha       Critical      R     18
CLASS 16    0    99                              Alpha       Critical      R     18
CLASS 17    0    99                              Alpha       Critical      R     18
CLASS 18    0    99                              Alpha       Critical      R     18
CLASS 19    0    99                              Alpha       Critical      R     18
CLASS 20    0    99                              Alpha       Critical      R     18

I.2 Card 7 Range Check Parameters

The comments made on the various data elements for 4-card checks also apply to 7-card data. It is particularly important to note the restriction on the value of lane. The range checking allows for 3 records to describe a truck, i.e., a truck with 14 or more axles. However, the software makes no provision for writing out complete error records for a truck with more than 13 axles' worth of data. These records will be handled as follows:

  • be flagged with a critical error and omitted from summary processing
  • the RW record will have all zero axle weights and spaces
  • the error code will be 55, the record flag will be *

A somewhat more rigorous check on the 6-digit classification is made than is used in either the previous versions of this software or the VTRIS software. The checks can be summarized as shown below:

A valid vehicle classification can be (and is) determined without use of DEFSHT.dat. To completely process the 6-digit case, two items of information are needed: the value of each position in the classification, and the number of axles calculated on the basis of the values in each position for groups 3-8 only.

090001-090900: if the 3rd, 5th and 6th digits = 0, vehicle_class = 2; otherwise vehicle_class=15.
090901-099999: vehicle_class = 15.
100000-150000: if the 4th, 5th and 6th digits=0, vehicle_class is two left digits; otherwise vehicle_class=15.
150001-189999: vehicle_class=15.
190000-190400: if the 3rd, 5th and 6th digits = 0, vehicle_class = 4; otherwise vehicle_class=15.
190401-199999: vehicle_class = 15.
200000-280900: if the 3rd, 5th and 6th digits = 0

If the 2nd digit = 0 or 1, vehicle_class = 3
If the 2nd digit = 2, vehicle_class = 5
If the 2nd digit = 3, vehicle_class = 6
if 4 <= 2nd digit<= 8, vehicle_class = 7
otherwise vehicle_class = 15.
260901-320999: vehicle_class = 15.
321000-349000: if the 4th, 5th and 6th digits = 0 the number of axles must be computed (numaxles) otherwise the numaxles is set to 20;
if 3 <= numaxles <5, vehicle_class = 8
if numaxles = 5, vehicle_class = 9
if 5< numaxles <11, vehicle_class = 10.
In any other case vehicle_class = 15.
349001-421999: vehicle_class = 15.
422000-449000: same as 321000-349000 except that minimum number of axles for vehicle_class 8 = 4 not 3.
449001-521099: vehicle_class = 15.
521100-549900: if the 5th and 6th digit = 0, the number of axles must be computed (numaxles) otherwise the numaxles is set to 20;
if numaxles = 5, vehicle_class = 11
if numaxles = 6, vehicle_class = 12
if 6 < numaxles < 13, vehicle_class = 13
In any other case vehicle_class = 15.
549901-622199: vehicle_class = 15
622200-649900: same as 521100-549900 except the numaxles = 5 is not a possibility.
649901-721219: vehicle_class = 15
721220-749990: if the 6th digit = 0, the number of axles must be computed (numaxles) otherwise the numaxles is set to 20;
if 7 <= numaxles <16, vehicle_class = 13
In any other case vehicle_class = 15.
749991-822219: vehicle_class = 15
822220-849990: same as 721220-749990 except the minimum number of axles is 8.
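The branching above can be sketched in Python. This is a hedged sketch, not the production code: the ranges are transcribed as printed, only a representative subset is shown (the remaining ranges follow the same pattern), and the numaxles argument is a hypothetical parameter standing in for the axle count the software computes from digit groups 3-8.

```python
def classify(code: str, numaxles: int = 20) -> int:
    """Map a 6-digit classification code to an LTPP vehicle_class.

    Ranges are transcribed as printed above; numaxles is supplied by
    the caller. Unlisted ranges fall through to vehicle_class 15.
    """
    n = int(code)
    d = [int(c) for c in code.zfill(6)]   # d[0] = 1st digit ... d[5] = 6th

    if 90001 <= n <= 90900:
        return 2 if d[2] == d[4] == d[5] == 0 else 15
    if 100000 <= n <= 150000:
        # vehicle_class is the two left digits when digits 4-6 are zero
        return 10 * d[0] + d[1] if d[3] == d[4] == d[5] == 0 else 15
    if 190000 <= n <= 190400:
        return 4 if d[2] == d[4] == d[5] == 0 else 15
    if 200000 <= n <= 280900 and d[2] == d[4] == d[5] == 0:
        if d[1] in (0, 1):
            return 3
        if d[1] == 2:
            return 5
        if d[1] == 3:
            return 6
        if 4 <= d[1] <= 8:
            return 7
        return 15
    if 321000 <= n <= 349000 and d[3] == d[4] == d[5] == 0:
        if 3 <= numaxles < 5:
            return 8
        if numaxles == 5:
            return 9
        if 5 < numaxles < 11:
            return 10
        return 15
    if 521100 <= n <= 549900 and d[4] == d[5] == 0:
        if numaxles == 5:
            return 11
        if numaxles == 6:
            return 12
        if 6 < numaxles < 13:
            return 13
        return 15
    return 15   # every other case, including the explicit ...-15 ranges
```

For example, classify("110000") yields 11 (the two left digits), while classify("230000") yields 6 (2nd digit = 3).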

Additionally, the total wheel base and the gross vehicle weight are checked to verify that the sum of the individual inputs is within 15 percent of the respective totals.
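A minimal sketch of that consistency test, assuming the 15 percent tolerance applies symmetrically (the function name is illustrative, not from the LTPP source):

```python
def sums_consistent(reported_total, components, tolerance=0.15):
    """Check that component values sum to within 15% of a reported total.

    Applied to both the axle spacings vs. total wheel base and the
    axle weights vs. gross vehicle weight.
    """
    if reported_total == 0:
        return sum(components) == 0
    return abs(sum(components) - reported_total) <= tolerance * reported_total
```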

Field         Min  Max     Valid Data                Field Type  Severity      Flag  Error
CARD TYPE     7    7                                 Numeric     Critical      A     1
STATE         1    90      1,2,4-6,8-56,72,81-90     Numeric     Critical      G     7
FUNC CLASS    0    99      1,2,6,7,8,11,12,14,16,17  Numeric     Non Critical  I     9
STATION       0    999     alpha char                Alpha       Non Critical  C     3
DIRECTION     0    9       1-8                       Numeric     Critical      0     44
YEAR          0    99      89-07                     Numeric     Critical      !     54
MONTH         1    12                                Numeric     Critical      $     58
DAY           1    31                                Numeric     Critical      .     5
HOUR          0    24      0-23                      Numeric     Critical      a     27
VEH TYPE      1    849990  see note                  Numeric     Critical      J     10
BODY TYPE     0    99                                Numeric     Non Critical  b     28
ENGINE TYPE   0    9                                 Numeric     Non Critical  c     29
REG WEIGHT    0    999                               Numeric     Non Critical  f     32
BASIS OF REG  0    9                                 Numeric     Non Critical  g     33
LANE          0    9       1-8                       Numeric     Critical      O     15
COMMODITY     0    99999                             Numeric     Non Critical  d     30
LOAD STATUS   0    9                                 Numeric     Non Critical  h     34
TOT WEIGHT    0    9999                              Numeric     Critical      f     45
WEIGHT A      1    400     10-400                    Numeric     Critical      M     13
WEIGHT B      1    400     10-400                    Numeric     Critical      M     13
WEIGHT C      0    400                               Numeric     Critical      M     13
WEIGHT D      0    400                               Numeric     Critical      M     13
WEIGHT E      0    400                               Numeric     Critical      M     13
SPACE A-B     0    450     0,19-450                  Numeric     Critical      N     14
SPACE B-C     0    450     0,19-450                  Numeric     Critical      N     14
SPACE C-D     0    450     0,19-450                  Numeric     Critical      N     14
SPACE D-E     0    450     0,19-450                  Numeric     Critical      N     14
WHEEL BASE    0    8900                              Numeric     Critical      Z     46
SERIAL        0    999     1-999                     Numeric     Non Critical  P     16
CONTIN.       0    9       0,1,(2,9)                 Numeric     Critical      Q     17

I.3 Continuation Card 7 Range Check Parameters

The range checks on a continuation card are the same as those for a face card. The sole difference is the set of allowable values for the continuation field.

Field        Min  Max     Valid Data                Field Type  Severity      Flag  Error
CARD TYPE    7    7                                 Numeric     Critical      A     1
STATE        1    90      1,2,4-6,8-56,72,81-90     Numeric     Critical      G     7
FUNC CLASS   0    99      1,2,6,7,8,11,12,14,16,17  Numeric     Non Critical  I     9
STATION      0    999     alpha char                Alpha       Non Critical  C     3
DIRECTION    1    8                                 Numeric     Critical      0     44
YEAR         0    99      89-07                     Numeric     Critical      !     54
MONTH        1    12                                Numeric     Critical      $     58
DAY          1    31                                Numeric     Critical      .     5
HOUR         0    24                                Numeric     Critical      a     27
VEH TYPE     1    849990  see note                  Numeric     Critical      J     10
BODY TYPE    0    99                                Numeric     Non Critical  b     28
ENGINE TYPE  0    9                                 Numeric     Non Critical  c     29
AXLE F       0    400                               Numeric     Critical      M     13
AXLE G       0    400                               Numeric     Critical      M     13
AXLE H       0    400                               Numeric     Critical      M     13
AXLE I       0    400                               Numeric     Critical      M     13
AXLE J       0    400                               Numeric     Critical      M     13
AXLE K       0    400                               Numeric     Critical      M     13
AXLE L       0    400                               Numeric     Critical      M     13
AXLE M       0    400                               Numeric     Critical      M     13
SPACE E-F    0    450     0,19-450                  Numeric     Critical      N     14
SPACE F-G    0    450     0,19-450                  Numeric     Critical      N     14
SPACE G-H    0    450     0,19-450                  Numeric     Critical      N     14
SPACE H-I    0    450     0,19-450                  Numeric     Critical      N     14
SPACE I-J    0    450     0,19-450                  Numeric     Critical      N     14
SPACE J-K    0    450     0,19-450                  Numeric     Critical      N     14
SPACE K-L    0    450     0,19-450                  Numeric     Critical      N     14
SPACE L-M    0    450     0,19-450                  Numeric     Critical      N     14
SERIAL       0    999     1-999                     Numeric     Non Critical  P     16
CONTIN.      0    9       (0,1),2,9                 Numeric     Critical      Q     17

I.4 QC Edit Flag Codes

Each record processed through the QC process has a two-character edit code appended to it. This code indicates to the analysis software whether or not the record is to be included in summary statistics. The first character indicates the severity of the error. Table I.1 shows the possible values for the first character.

Table I.1 Edit_1 Codes

Edit Flag Name     First Edit Flag Character
EDITFLAG_OK        _ (underscore)
NONCRITICAL_ERROR  N
CRITICAL_ERROR     C

The second character identifies the problem with the record. Table I.2 shows the possible values of the second character. Only one error can be reported per record. Error codes r-z, &, #, ^, ~, |, >, < and ? are associated with purging data.

Table I.2 Edit_2 Codes

Flag  Edit Flag Name
_     EDIT FLAG OK
A     INVALID CARD TYPE IDENTIFIER
B     6 DIGIT STATION ID
C     3 DIGIT STATION ID
D     FAULTY CONSECUTIVE HEADER RECS
E     INVALID DAY SPECIFIED
F     INVALID WEEKDAY SPECIFIED
G     INVALID STATE ID
H     INVALID STATION ID
I     INVALID FUNCTIONAL CLASS
J     INVALID VEHICLE TYPE
K     INVALID TOTAL WEIGHT
L     INVALID TOTAL AXLE SPACING
M     INVALID AXLE WEIGHT
N     INVALID AXLE SPACING
O     INVALID LANE
P     INVALID SERIAL NUMBER
Q     CONTINUATION CARD
R     INVALID VOLUME
S     INVALID METHOD
T     INVALID ATR
U     INVALID ROUTE
V     INVALID SITE
W     INVALID EQUIPMENT TYPE
X     INVALID COUNT METHOD
Y     INVALID ENFORCEMENT METHOD
Z     INVALID OPTCLASS
a     INVALID HOUR
b     INVALID BODY TYPE
c     INVALID ENGINE TYPE
d     INVALID COMMODITY
e     INVALID MCYCLERPT
f     INVALID REG WEIGHT
g     INVALID BASIS REGISTRATION
h     INVALID LOAD STATUS
i     INVALID VEHICLE COMBO
j     INVALID MINUTE
k     INVALID SECOND
l     INVALID HUNDREDTH OF SECOND
m     INVALID NUMBER OF AXLES
n     INVALID RECORD LENGTH
o     INVALID TIME/DATE SEQUENCE
p     INVALID ALL AXLE WEIGHTS
q     INVALID ALL AXLE SPACINGS
r     8+ CONSECUTIVE ZEROS
s     TIME CHECK
t     MISSING DATA
u     ZERO DATA
v     IMPROPER DIRECTION DESIGNATION
w     IMPROPER LANE DESIGNATION
x     7 CARD GREATER THAN 4 CARD DAILY VOLUME BY SIGNIFICANT DIFFERENCE
y     4+ CONSEC NONZEROS
z     4 CARD GREATER THAN 7 CARD DAILY VOLUME BY SIGNIFICANT DIFFERENCE
0     INVALID DIRECTION
1     INVALID TOTAL WEIGHT
2     INVALID TOTAL SPACE
3     INVALID ROUTE CATEGORY
4     INVALID COUNTY
5     INVALID HPMS SAMPLE SECTION
6     INVALID AADT
7     INVALID FOOTNOTE
8     INVALID YEAR ESTABLISHED
9     INVALID YEAR DISCONTINUED
!     INVALID YEAR
*     INVALID NUMBER OF AXLES (>13)
@     INVALID TIME
%     INVALID SPEED
$     INVALID MONTH
.     INVALID DATE
,     8+ CONSECUTIVE ZERO VOLUMES
?     4+ CONSECUTIVE STATIC VOLUMES
/     1 AM VOLUME > 1 PM VOLUME
)     MISSING HOURLY VOLUME
+     ZERO DAILY VOLUME
&     OVER CALIBRATED
#     UNDER CALIBRATED
^     LARGE % OF VEHICLES > 80 KIPS
~     LARGE % OF VEHICLES < 12 KIPS
|     LOWER VOLUMES THAN EXPECTED - POSSIBLE SENSOR PROBLEM
>     MISCLASSIFICATION ERROR
<     ATYPICAL PATTERN
?     USER ENTERED
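As a minimal illustration of how downstream software can read the two-character edit code, the sketch below splits it into its severity and error-flag characters (function and variable names are illustrative, not from the LTPP source):

```python
# First-character severities from Table I.1.
SEVERITY = {"_": "ok", "N": "non-critical", "C": "critical"}

def interpret_edit_code(code: str):
    """Split a two-character QC edit code into (severity, error flag, usable).

    Records whose first character marks a critical error are the ones
    the analysis software omits from summary statistics.
    """
    severity = SEVERITY.get(code[0], "unknown")
    return severity, code[1], severity != "critical"
```

For example, a record carrying "CM" (critical, invalid axle weight) would be excluded from summaries, while "Nb" (non-critical, invalid body type) would not.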

I.5 File Fatal Flaws

The following rules are incorporated in the QC software with respect to accepting files for loading and processing. A failure results in a file that doesn't load. The advantage of stopping the loading process is that none of the output files or ORACLE tables will exist. The file can be edited and reloaded. The version number will increase but that has no impact on the rest of the processing.

  • Records must start with C, W, 2, 4 or 7.
  • All records in the file must match the first data record. The exception is 2-cards followed by 7-cards.
  • File type (c,w) must match record type (4,C or 7,W respectively) where the record type is recognized as valid.
  • State code in the file must match the state code in the file name.
  • The month and year of the file extension must match the month and year of the first data record.
  • The first record read in the file must have all valid (non-null) entries.
  • Blank lines are not permitted, including as the last line of a file.
  • Year must be constant. Month and day must be constant or increasing.

The following individual record errors will stop loading. The line number of the failure will appear in the log.

  • All records in a 4-card file must be the same length as the first record in a file.
  • The length of a converted c-card must be the same as the first record converted in the file.
  • A 7-card must have 80 columns.
  • A w-card must have the same characters in the first three columns. (This is a very weak check.)
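A pre-load screen for the file-level rules above might look like the following sketch. It assumes records arrive as one line per record and covers only the record-prefix, blank-line, and type-consistency rules; the file-name, state-code, and date-sequence rules require external context not shown here.

```python
def fatal_flaws(lines):
    """Return a list of (line_no, reason) tuples for the rules shown above.

    Checks that every record starts with C, W, 2, 4 or 7, that no line
    is blank, and that record types match the first data record (with
    the 2-cards-followed-by-7-cards exception).
    """
    problems = []
    first_type = None
    for i, line in enumerate(lines, start=1):
        if line.strip() == "":
            problems.append((i, "blank line"))
            continue
        c = line[0]
        if c not in "CW247":
            problems.append((i, "record must start with C, W, 2, 4 or 7"))
        elif first_type is None:
            first_type = c
        elif c != first_type and not (first_type == "2" and c == "7"):
            problems.append((i, "record type differs from first data record"))
    return problems
```

An empty result means the file passes these particular checks; any entry identifies the offending line, matching the way the log reports the line number of a failure.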

J. Log Files

J.1 Log File Names and Location

Log files for the TRF QC software are written to the LOGS subdirectory, a first-tier subdirectory of the user's preferred directory (PREFS). It has a subdirectory for each year in which processing occurs, and within that subdirectory all of the log files are stored. The naming convention for log files is YYMMDDLX.log, where X is the level of processing: 4 for QC processing, 3 for creation of daily summaries, 2 for annual summaries by vehicle class, and 1 for annual summaries combining all classes. YYMMDD refers to the day the log was created. Log files are appended to, not overwritten, with each successive batch of files loaded on a given day. The log file is tab-delimited ASCII.
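Under these conventions, the path to a given day's log can be sketched as follows. The four-digit year directory name is an assumption (the guide says only that there is a subdirectory for each year), and the function name is illustrative.

```python
from datetime import date
from pathlib import Path

# Processing levels per the naming convention described above.
QC, DAILY, ANNUAL_BY_CLASS, ANNUAL_ALL = 4, 3, 2, 1

def log_path(prefs_dir: str, level: int, day: date) -> Path:
    """Build the YYMMDDLX.log path under <PREFS>/LOGS/<year>/."""
    name = f"{day:%y%m%d}L{level}.log"
    return Path(prefs_dir) / "LOGS" / f"{day.year}" / name
```

For example, QC processing logged on November 1, 2001 would land in .../LOGS/2001/011101L4.log.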

J.2 Log File Contents

The log file reports the success or failure of loading a data file. A failed attempt to load a file will include the reason. For each batch of files loaded, the numbers of successfully and unsuccessfully loaded files are summarized at the end of the file list.

Note: A file that fails to load or process completely may show up in the LTPP File Tracker with state XX, since it was entered in LTPPFILETRACKER (see section G.1 for a description of this table) with STATE_CODE 00. It will have a processed date of 01/01/2025. Another outcome may be the creation of the directory NoRegion in the user specified directory with state = XX, site = 001000 and year = 0.

If a record is not found in the SHRP.DAT file for a given site, the file will not load. The subdirectories will be created if the state in the file name is valid but the SHRP ID is not.

A message - "Failed to create directory path for index files" will appear if an invalid state code is used. That message will be followed by the same SHRP.DAT message.

A case of no errors captured and no file loaded (Load failed) indicates that the attempt to create directories, write index files, summary.dat files or output files failed. Verify preferences and the amount of space in the output directory before proceeding.

A case of no record of any type in the output log, without the program aborting, implies that more files were included in the list for loading than the software could handle. The maximum is approximately 70, depending on the path length.

Below are listed the various warning and error messages that will be printed into the raw data QC process log file:

File "filename" ext/date do not match 1st record "date" - The date of the first record in the file does not match that of the extension. The data on the first record is printed out to assist in renaming the file.

File name inconsistent with file type - The first character of the file name and the record type in the file are inconsistent.

Input file contains no loadable data matching criteria - The loading is being done with LTPP Lane only selected and the lane or direction in the file does not match the lane and or direction for the LTPP section in the SHRP.DAT file.

Record format does not appear valid OR Record format does not appear valid for a 3-card (4-card, 7-card, C-card, W-card) - In each case the data in columns 11 and 12 does not match the expected values. For 3-, 4- and 7-cards this is 00-09, 89-99. For C-and W-cards this is 10-88.

Input data lines must begin with a (4, 7, C, W) - In each case a record begins with a character that does not match the rest in the file. This includes spaces and line feed characters which produce blank last lines.

State value does not match file content - The state in the name and the state in the first record in the file must be the same. This message will also appear when an attempt is made to load a HELP file.

Attempt to load invalid data file - The data file matches no recognized record type.

Too many station ID cards - More than eight 2-cards at the beginning of the file.

Input contains station ID card only - no loadable data - The file consists of a 2-card (possibly part of an HPMS submission).

Error encountered during database processing - Processing of the summary.dat records to create the ORACLE tables could not be completed for any one of a number of reasons.

Input data line, invalid length line # - A 4-card or C-card with a length not matching that of the first record or a 7-card with other than 80 characters was encountered. Edit or remove the line and reload the file.

3-cards currently not supported - An attempt was made to load a file with 3-cards.

Record format does not appear valid - A validation check failed on the record type somewhere after the first record in the file. See the record specific error message above for review criteria.

Failed to find FUNCLASS.DAT for metric conversion. - The file is missing from the DAT directory and the file will not load. This message is relevant to C-card and W-card files only.

Attempt to store a day of volumes not found in volume array - a value for day larger than allowed by the program has been encountered.

Error splicing summary.dat - The input file contains an invalid date, probably a month. Check for summary.tmp file.

Card file is no longer available for loading. - Data set was selected for loading but user tried to select files in more than one directory. Only files from the last directory selected will load.

Failed to Locate DEFSHT.DAT information. Load succeeded.

Failed to Locate NEWSHT.DAT information. Load succeeded. - The files will be completely processed in either case except that the index file will not be created properly. This is a non-fatal error.


K. LTPP QC Program Error Descriptions

A variety of error messages can appear that are specific to the use of the LTPP QC software, in addition to error messages that may be produced by ORACLE or Microsoft Windows TM. The following is a list of error messages and descriptions of the problem at hand.

  • Failed to load SHRP DAT information!
    The SHRP.DAT file could not be located during the data loading process. The Base Data Directory may be specified incorrectly in the User Preferences menu.

  • Failed to load default DAT sheet!
    The DEFSHT.DAT file could not be located during the data loading process. The Base Data Directory may be specified incorrectly in the User Preferences menu.

  • Failed to load new DAT sheet!
    The NEWSHT.DAT file could not be located during the data loading process. The Base Data Directory may be specified incorrectly in the User Preferences menu.

  • Error creating archive directory.
    The program loaded a data file and attempted to create the archive directory and file but was unable to. The specified directory name length may be too long for Microsoft Windows, or permissions on directories within the Base Data Path may be incorrect.

  • A connection could not be made to the file tracker database.
    A connection to the database exists, but the connection to the LTPP File Tracker has been dropped. Disconnect from the database and reconnect.

  • Failed to create directory path for saving data file.
    The QC software creates a lengthy directory structure under the Base Data Path in the User Preferences. During data loading, the directory failed to be created. Possible causes are directory/user permissions, or the length of the final directory name exceeded a given limit.

  • Unable find specified file in file tracker.
    The software believes a given file was loaded and should exist in the LTPP File Tracker, but the entry does not exist. This error may arise if an entry is deleted from the file tracker and subsequently used in a menu that was already open. All menus containing reference to that file must be closed and the file reloaded (if desired).

  • Couldn't open a temporary file for processing.
    Microsoft Windows failed to allocate a temporary file for processing. Possible causes are that the TEMP directory doesn't exist, the number of files in the location is at a maximum limit, or the hard disk is full.

  • Unable to find file tracker entry.
    The software believes a given file was loaded and should exist in the LTPP File Tracker, but the entry does not exist. This error may arise if an entry is deleted from the file tracker and subsequently used in a menu that was already open. All menus containing reference to that file must be closed and the file reloaded (if desired).

  • Unable to retrieve file tracker entries.
    The program could not find the LTPP File Tracker (table LTPPFILETRACKER) in the database. Permissions may be set improperly on the table, or the table may have been deleted while the software was in use.

  • Couldn't open archived file for updating. Purges partially applied.
    Apply Purges operation was attempted, but the archived data file was not located. It was probably moved or deleted. Use the LTPP File Tracker to browse for the file. If it cannot be found, then the file was permanently deleted and must be reloaded into the software from the original data file.

  • Failed to delete archived file before overwrite. Purge failed.
    The QC software must read and rewrite the archived data file when purges are applied. The above error is produced because the software could not delete and rewrite the file, probably due to permission problems.

  • Input file "<name>" is same as output file!
    An archived (previously processed) data file was loaded into the software. This operation is forbidden.

  • Failed to locate SHRP.DAT information for file <name>.
    The data collection site information could not be located in the SHRP.DAT file.

  • Card file "<name>" is no longer available for loading.
    During the loading operation, the specified input file name was deleted or became no longer available.

  • Failed to process sheet information for this file!
    The sheet information (11, 12, or 13) failed to be produced. This can occur if there is a failure to read, write, or update the CINDEX binary files used to store sheet information pertaining to input data files. Permissions may be set improperly on the directories within the Base Data Path directory structure.

  • Card type consistency error. Discarding data!
    During the loading process, a card type (C, W, 4, 7) inconsistency was encountered, and the type of data can no longer be confirmed. The input data file is discarded, and the loading process for that file discontinued.

  • Data file corrupted. Discarding data!
    During the loading process, a problem with the file (probably incomplete data) was encountered. The file loading process is discontinued and all data discarded.

  • Unable to open FUNCLASS.DAT file for card conversion.
    The FUNCLASS.DAT file could not be located, which contains the functional classification codes for the specified SHRP ID. This is used only during conversion from C & W to 4 & 7 cards. The data cannot be converted without this DAT file.

  • Functional classification not found in FUNCLASS.DAT.
    The FUNCLASS.DAT file exists, but the site could not be located in the file and, therefore, the functional classification code is not found.

  • Invalid compliment data file selected.
    A comparative graph option was selected on the Graph MGR menu, and the user was prompted for an input weight file, but no complementary data file was provided.

  • Error Getting Logical DriveStrings!
    The program requested physical drive (hard disk, floppy, CD ROM) information from Microsoft Windows, but none was returned. The only cause could be that Windows does not currently want to cooperate.

  • Error Getting SystemFileInfo!
    The QC software attempted to get iconic images representing the physical drives on your computer, but failed. No probable cause is known.

  • Cannot retrieve the Handle of SystemImageList!
    Iconic images for directories and files on your computer were not locatable. There may be a problem with memory sharing at the current time on your computer. Try restarting the program or rebooting the computer to free system resources.

  • Selected data file does not contain weight data.
    The user specified a complementary input weight file for comparative graphing with the Graph MGR, but the input data file is not a weight data file.

  • One or both files contain no data.
    The comparative graphing option was specified on the Graph MGR, and two input files were given, but one or both contain no data.

  • Couldn't create profile directory: <name>
    The program attempted to create an initial user profile (used for custom program settings), but it couldn't be created. User profiles are typically stored in the standard user profiles directory. Have the systems administrator validate this directory and permissions.

  • Couldn't write profile to: <name>
    The program was able to create a directory for the user profile, but an operator.dat file couldn't be created in the directory.

  • Unable to read operator profile. Check permissions.
    The program was unable to load the existing operator.dat file.

  • Bad profile entry detected! Defaulting.
    The program encountered an invalid user profile entry and is using the default setting internal to the program.

  • Failed to open daily data table.
    The program attempted to open the daily summary ORACLE table for the specified data file, but the table could not be opened. It may have been deleted, or the table permissions may be incorrect.

  • Failed to open record data table.
    The program attempted to open the record level ORACLE table for the specified data file, but the table could not be opened. It may have been deleted, or the table permissions may be incorrect.

  • Couldn't scan archive directory for data files.
    The program attempted to apply purges to data files which are not present. Check PREFS for the expected subdirectory path for SHRP.DAT. Check the output directories for the actual existence of the files.

  • End day must be equal to or greater than start day.
    The range of day values entered manually has the value of the last day less than the value of the first day. The range must be specified within a month from lowest to highest day in the interval.

L. Issues

L.1 Support volume files

Current software does not support 3-cards, which provide information only on total volumes by hour. Support is needed only if pre-1999 data must be reprocessed, or if it is determined that there are sufficient sites with continuous ATR data and sampled AVC to make it worth providing additional information for the analysis software.

L.2 Support HELP files

This is a capability lost in the conversion. HELP files, a truck weight record format, are thought to have been used only by the Canadian provinces. Support is needed only if new data is received in this format or pre-1999 data must be processed.

L.3 SHRP.DAT as an ORACLE table

Much of the data in SHRP.DAT is redundant due to existence of the information in the IMS. It needs to be determined which items of information are essential to the processing software and which can be eliminated. Of those that are essential, a determination must then be made as to which are unique to the traffic software's needs and which exist in other parts of the database. Finally a table to hold all of the site constants needs to be developed.

Duplicate data:

  • State_Code and SHRP_ID to verify that the site exists in EXPERIMENT_SECTION.
  • Effective year, month, day are in EXPERIMENT_SECTION as CN_ASSIGN_DATE since they are used for ESAL calculations. Additional records have customarily been added when CONSTRUCTION_NO has changed because it indicates a structure change and therefore a change in either SN or D for the relevant ESAL equation.
  • Direction of LTPP lane - INV_?, SPS_?. Used for identifying which data should be read and stored when LTPP lane only is selected in the QC software. (This should remain duplicate to verify that the data submitted is in fact for the LTPP lane.)
  • Number of lanes in the LTPP direction - INV_?
  • Number of lanes in the non-LTPP direction ?
  • Construction reason - RHB_? and various rehabilitation and maintenance tables which indicate what work was done at the time the CONSTRUCTION_NO value changed.

Unique data:

  • State 3 digit ID is station number for the site from the perspective of traffic personnel in 2nd edition TMG files
  • State 6 digit ID is station number for the site from the perspective of traffic personnel in 3rd edition TMG files
  • Terminal serviceability index used for computing ESALs
  • Structural number used for computing ESALs. It can be derived from IMS data and will be for the ESAL calculations that are actually loaded into the IMS. Possible source of this number for an analysis software input table is the intermediate values table used to generate ESAL values.
  • Pavement depth used for computing ESALs. It can be derived from IMS data and will be for the ESAL calculations that are actually loaded into the IMS. Possible source of this number for an analysis software input table is the intermediate values table used to generate ESAL values.
  • Pavement type (rigid or flexible) used for selecting ESAL equation. It can be derived from TST_L05B or EXPERIMENT_SECTION in the IMS.
  • LTPP lane number - Used for identifying which data should be read and stored when LTPP lane only is selected in the QC software.
  • 3 digit flags field ?
  • Data availability code - relationship of AVC and WIM data collection locations with respect to the LTPP section and a rough quality and quantity assessment based on equipment and data received.

L.4 DEFSHT.DAT in ORACLE

This is a site specific equipment and data collection format table. It is currently maintained using a text editor. Putting it into ORACLE would make maintenance of the table easier. It could also be derived in part from Traffic Sheet 14 and Sheet 15 information which is not yet being considered for inclusion in the IMS.

L.5 Elimination of NEWSHT.DAT

Creation of this file as ASCII text requires opening and reading the first and last record of each file when transmittal sheets are not received with the data. It needs to be determined what the information in the file is used for other than putting starting and ending dates and times in the transmittal sheet files. If that is the sole purpose, then modification of the software to get this information while reading the first and last data record (not the station identification card) should be considered. Since the user currently selects the files to be loaded, this file serves no purpose in that function. The information in NEWSHT can be derived from tables developed by the QC software if needed for analysis.

L.6 Transmittal sheets (*.inx file) in ORACLE

The transmittal sheets are currently stored in a single binary file for the site. The records are written in the order the files are read by the QC software. However, the analysis software expects them to be in sorted date order and possibly in a type order as well. This file can be eliminated from the process by the direct use of DEFSHT.DAT, SHRP.DAT and LTPPFILETRACKER.

L.7 Processed raw data files in ORACLE

Currently processed raw data files are written back out into ASCII files as 4-card or 7-card files whose data matches the raw input files as if they were originally 4-card or 7-card files. Writing these files out in an ORACLE table would make them more accessible to users and easier to query and manipulate. It would also make it possible to check for duplicates on loading data.

L.8 Log file for processing

The file has been revised to report only loading activity, whether or not it is successful. All statistics previously included may be generated using SQL and the various LTPPR* tables.

L.9 Consolidate GVW tables to one per site

The software currently produces one ORACLE table per year per site with GVW information. This table is used for graphing purposes. Consideration should be given to reducing the number of tables and being able to manipulate the GVW data without having to locate multiple files for a site.

L.10 Consolidate VOL7 files to one per site

The software currently produces one ORACLE table per year per site with VOL7 information. This table is used for graphing purposes. Consideration should be given to reducing the number of tables and being able to manipulate the VOL7 data without having to locate multiple files for a site.

L.11 Consolidate error tables

Currently, every time a file is read it generates a new ORACLE table for error storage whether or not any errors exist. This creates large numbers of tables used for little but indicating that the processing has been accomplished. It would be worth investigating the implications of reducing the error tables to three per site (1-WIM, 1-AVC, 1-VOL) or even three per site per year.

L.12 Create a duplicate checking process

Incorporate a duplicate checking process prior to the analysis software. This can be incorporated with the conversion of all files to indexed ORACLE tables.

L.13 Pre-processor

The software currently requires data be sorted in a specific order and that all records be valid for the file type. A pre-processor would do a pre-loading clean-up. It might be just as much or even less effort to change the software function to skip records which were inconsistent with type (too long or too short) and handle the data without having it in sorted order. This can be avoided entirely by loading the inputs into an indexed ORACLE table.

Another useful capability would be a software tool that could open, read, and then rename files as received from highway agencies. This would save staff enormous amounts of time and make it easier to acquire data in a timely fashion. It might be worth developing independently of any QC software refinements. Alternatively, a module could be added that would, on encountering an invalid file name, create the output file name from the file content and the use of SHRP.DAT information, specifically ID3 and ID6.

L.14 Support Site ID cards

The software has limited capability to read files with 2-cards (7-card files only). It would help reduce the amount of pre-processing if 2-cards could be header records on 4-card files. Similarly, the ability to use S-cards as header lines for C-cards and W-cards would be useful. This is a medium-priority activity that would also be useful for product development purposes.

A very, very low priority associated with processing 3-cards (volume records) would be including 1-cards (the station ID card for this record type) as possible headers for those files.

L.15 Alpha characters in SHRP_ID

The file naming convention is rigid, even for LTPP. As currently coded, the software only permits an alphabetic character in the first position of SHRP_ID. Allowing any character in that variable to be alphabetic would improve product possibilities.

L.16 Loading robustness

The software currently will not load files with extra CR/LF characters at the end of the file (with or without leading blanks). It would help if the software would ignore such lines as valid but data free records.

L.17 Purge Conditions

Add an appendix to the QC manual on clear and fuzzy cases for applying purges, beyond the information already provided, particularly as regards the SPS data collection locations. In addition, purge conditions for the SPS sites must be described and additional codes added on an as-needed basis.

The ERROR information in the ORACLE tables currently reflects only the QC process; the PURGE reason is not present. Either changing the PURGE value to a numeric code for the PURGE reason or modifying the value of ERROR for purged data would eliminate the need to examine the PURGE files.

Another possibility is to store the PURGE files in ORACLE, with the capability of undoing purges added concurrently.


M. Transmittal Sheets

There are three types of transmittal sheets: volume, classification, and weight. They are referred to as sheets 11, 12, and 13, respectively, from their numbering in the LTPP Traffic Data Collection Guide. A transmittal sheet is submitted, in hard copy, for every data file sent by an agency. Data are extracted from it to create or modify the various *.dat files used by the QC software.

When data are read into the QC software, an electronic version of the transmittal sheet is created, one per file read. All transmittal sheets for a site go into a single binary file named xxxxxx5.inx, where xxxxxx represents the STATE_CODE and SHRP_ID for the site. The file is composed of three types of records of varying lengths and composition. The single variable they all have in common is the first one, SheetNum, which serves as the unique key for a record within the file.

The SheetNum field in the three sheets is used for accessing data within the .inx file. It is built as follows, by column:

1 - 2   = 11, 12, or 13 (depending on the type of sheet)
3 - 6   = nn.0 (MM.0)
7 - 10  = nn.0 (DD.0)
11 - 14 = nn.0 (YY.0)
15 - 20 = 000000
The values for month, day, and year should never have any value after the decimal, so the digit following the decimal point is expected to always be 0. It is possible that this capability was included to permit updating the sheet by version, or to permit multiple loads of files.
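The key layout described above can be sketched as follows; the function name is hypothetical, and zero-padding of single-digit month, day, and year values is an assumption (the description only gives the nn.0 pattern):

```python
def build_sheet_num(sheet_type, month, day, year2):
    # Columns 1-2:   sheet type (11, 12, or 13).
    # Columns 3-6:   month as MM.0 (zero-padding assumed).
    # Columns 7-10:  day as DD.0
    # Columns 11-14: two-digit year as YY.0
    # Columns 15-20: literal zero filler.
    return (f"{sheet_type:2d}"
            f"{month:02d}.0"
            f"{day:02d}.0"
            f"{year2:02d}.0"
            "000000")

# Sheet 12 received 11/01/01 -> a 20-character key.
key = build_sheet_num(12, 11, 1, 1)
```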

M.1 Sheet 12

Sheet 12 is used with classification data.

SheetNum[20] - alpha - Unique key described above.
ShrpId[10] - alpha - /* All others are DUP */
StateId[6] - alpha - This is presumably the station identification assigned by the state to the site.
StateCode[4] - alpha -
HwyRoute[12] - alpha -
Milepost[12] - alpha - presumably with embedded decimal point.
Location[33] - alpha -
Filename[14] - alpha
DiskId[14] - alpha - presumably the volume label for the media on which the disk was received.
BeginDate - of the form MmDdYy - This should be formatted to report a 4-digit year when converted into an ORACLE table element.
EndDate - of the form MmDdYy - This should be formatted to report a 4-digit year when converted into an ORACLE table element.
BeginTime - of the form HhMm -
EndTime - of the form HhMm -
TypeOfCount - alpha - coded for 2-way, 1-way or LTPP lane only.
VehClassMethod - alpha - either FHWA (13 class) or agency.
VehClassOtherStr[4] - alpha - name of agency scheme.
AvcEquip - alpha - coded either port(able) or perm(anent); no code is provided for manual counts even though the LTPP Traffic DCG indicates that is an option.
SensorType - alpha - type of sensor used for volume counter (road tube, piezo cable, piezo film, loops, other)
SensOtherStr[16]- alpha - name of sensor not included in list of expected types.
CounterType[16] - alpha -
NameModel[16] - alpha - model of volume equipment
AdjustFact GenAdjust[NUMADJUSTFACT] - alpha - a number with an embedded decimal is what should exist for non-null entries. This is a factor that applies to all classes in the count. Where the number of factors that should exist is entered is not obvious from the file description.
AdjustFact ClassAdjust[25][NUMADJUSTFACT] - alpha -
VehClass[25][4] - alpha -
MoreVehClass - alpha -
Comments[10][64] - alpha - 10 64-character comment lines, presumably with a different comment on each line.
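As a rough illustration of the layout listed above, the sketch below slices the leading fields of a Sheet 12 record; the helper names are hypothetical, and the assumption that fields are stored contiguously in the order listed, with blank padding, is not confirmed by the file description.

```python
# (field name, width) pairs for the leading Sheet 12 fields, as listed above.
SHEET12_HEAD = [
    ("SheetNum", 20),
    ("ShrpId", 10),
    ("StateId", 6),
    ("StateCode", 4),
    ("HwyRoute", 12),
    ("Milepost", 12),
    ("Location", 33),
    ("Filename", 14),
    ("DiskId", 14),
]

def slice_fields(record, layout):
    # Walk the record left to right, cutting one fixed-width field
    # at a time and stripping blank padding from each value.
    out, pos = {}, 0
    for name, width in layout:
        out[name] = record[pos:pos + width].strip()
        pos += width
    return out
```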

M.2 Sheet 13

Sheet 13 is used with weight data.

SheetNum[20] - alpha - Unique key described above.
ShrpId[10] - alpha - /* All others are DUP */
StateId[6] - alpha - This is presumably the station identification assigned by the state to the site.
StateCode[4] - alpha -
HwyRoute[11] - alpha -
Milepost[11] - alpha - presumably with embedded decimal point.
Location[31] - alpha -
Filename[13] - alpha
DiskId[13] - alpha - presumably the volume label for the media on which the disk was received.
BeginDate - of the form MmDdYy - This should be formatted to report a 4-digit year when converted into an ORACLE table element.
EndDate - of the form MmDdYy - This should be formatted to report a 4-digit year when converted into an ORACLE table element.
BeginTime - of the form HhMm -
EndTime - of the form HhMm -
Classmethod - alpha - either FHWA (13 class), FHWA truck weight study (6-digit), or agency.
Methodname - alpha - name of agency scheme.
ScaleType - alpha - coded either port(able) or perm(anent).
ScaleTypeOtherStr[18] - alpha - for another type of static scale used which does not measure loads at highway speeds.
CounterType[18] - alpha -
NameModel[18] - alpha - model of volume equipment.
SensorType - alpha - type of sensor used for volume counter (bending plate, piezo cable, piezo film, other).
SensOtherStr[18]- alpha - name of sensor not included in list of expected types.
Comments[10][64] - alpha - 10 64-character comment lines, presumably with a different comment on each line.

M.3 Sheet 11

Sheet 11 is included even though the QC Software currently only addresses Sheets 12 and 13.

SheetNum[20] - alpha - Unique key described above.
ShrpId[10] - alpha - /* All others are DUP */
StateId[6] - alpha - This is presumably the station identification assigned by the state to the site.
StateCode[4] - alpha -
HwyRoute[11] - alpha -
Milepost[11] - alpha - presumably with embedded decimal point.
Location[32] - alpha -
Filename[15] - alpha
DiskId[15] - alpha - presumably the volume label for the media on which the disk was received.
BeginDate - of the form MmDdYy - This should be formatted to report a 4-digit year when converted into an ORACLE table element.
EndDate - of the form MmDdYy - This should be formatted to report a 4-digit year when converted into an ORACLE table element.
BeginTime - of the form HhMm -
EndTime - of the form HhMm -
TypeOfCount - alpha - coded for 2-way, 1-way or LTPP lane only.
DevType - alpha -
SensorType - alpha - type of sensor used for volume counter (road tube, piezo cable, piezo film, loops, other)
SensOtherStr[18] - alpha - name of sensor not included in list of expected types.
NameModel[18] - alpha - model of volume equipment.
CounterType[18] - alpha - coded either port(able) or perm(anent).
AxleCorrFact[9] - alpha - a number with an embedded decimal is what should exist in non-null fields. This value indicates the expected number of axles per vehicle at the site, used to estimate daily traffic from axle counts.
AxleCorrStd[9]- alpha - a number with an embedded decimal is what should exist in non-null entries. The standard deviation of AxleCorrFact.
MonthlyFact[9]- alpha - a number with an embedded decimal is what should exist in non-null entries. This value is a multiplier used to adjust the data when factoring to a full year estimate when this is the only data available.
MonthlyStd[9]- alpha - a number with an embedded decimal is what should exist in non-null entries. The standard deviation of MonthlyFact.
DayOfWeekFact[9]- alpha - a number with an embedded decimal is what should exist in non-null entries. This value is a multiplier used to adjust the data to the average day of week when factoring to a full year estimate when this is the only data available.
DayOfWeekStd[9]- alpha - a number with an embedded decimal is what should exist for non-null entries. The standard deviation of DayOfWeekFact.
OtherFact[9]- alpha - a number with an embedded decimal is what should exist in non-null entries. Any other factor applied by the agency to adjust the data in expanding a sample to a year.
OtherStd[9]- alpha - a number with an embedded decimal is what should exist in non-null entries. The standard deviation of OtherFact.
OtherFactStr[28]- alpha - a description of OtherFact.
DistFactGps[8]- alpha - a number with an embedded decimal is what should exist for non-null entries. This is the percentage of the count in the LTPP lane if more than one lane is included in the data.
DistFactSource[45]- alpha - description of how the lane distribution factor was developed.
Comments[10][64] - alpha - 10 64-character comment lines, presumably with a different comment on each line.
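The adjustment factors above are typically combined multiplicatively when expanding a short count; the sketch below shows one common combination, which is an assumption rather than the documented LTPP formula:

```python
def expand_short_count(axle_count, axle_corr, monthly_fact, dow_fact):
    # Convert an axle count to an approximate vehicle count using
    # AxleCorrFact (axles per vehicle), then apply the monthly and
    # day-of-week multipliers from Sheet 11 to move the short count
    # toward an annual-average estimate.
    vehicles = axle_count / axle_corr
    return vehicles * monthly_fact * dow_fact

# 21000 axles at 2.1 axles/vehicle -> 10000 vehicles before factoring.
estimate = expand_short_count(21000, 2.1, 1.05, 0.95)
```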