I-15 REVERSIBLE LANE

 

CONTROL SYSTEM PROJECT

 

 

 

 

 

System Architecture /

High Level Design Document

 

 

 

 

 

Prepared for:

 

State of California

Department of Transportation

Transportation Management Center

District 11

7183 Opportunity Road

San Diego, CA  92186

 

 

 

 

 

 

Prepared by:

 

TransCore

 

300 South Harbor Boulevard

Suite 516

Anaheim, CA  92805

 

 

 

 

 

June 18, 1999


Table of contents

1. Introduction

2. Applicable Documents

3. Selected System Architecture

3.1 System Context Diagram

3.1.1 Interim System Considerations

3.2 Interim and Final System Design

3.3 Top Level Hardware Architecture

3.3.1 TMC Hardware Architecture

3.3.2 FCU/DCU Hardware Architecture

3.3.3 Communications Architecture

3.4 Top Level Software Architecture

3.4.1 Data Tier

3.4.2 Middle Tier

3.4.3 Client Tier

3.4.4 Cross Tier Services

3.4.5 Component Allocation

3.5 Redundancy and Degraded Mode Operation

4. ALTERNATIVE ARCHITECTURE COMPONENTS

4.1 Unix Operating System

4.2 "Thick" Client

4.3 Back-up FCU to FCU Communications Link

4.4 Full Motion Surveillance Video

4.5 Shared Use of a Camera for Surveillance and Device Confirmation

4.6 Additional CMS

4.7 Queue Detection and Trapped Vehicle Loops

LIST OF EXHIBITS AND TABLES

Exhibit 3.1: I-15 Reversible Lane Control System Context Diagram

Exhibit 3.2: I-15 Reversible Lane Architecture (Interim)

Exhibit 3.3: Three-Tiered Software Architecture

Exhibit 4.1: Backup FCU to FCU Communications Link

Table 3.1: System Redundancy Features

 

 


1.                  Introduction

This document provides a description of the selected system architecture and high level design for the I-15 Reversible Lane Control System.  This architecture and design are based on the system requirements identified earlier in this project.  The architecture/design incorporates those elements of the existing system that should be retained and provides a platform for the new system functionality.  In particular, the provision of a Graphical User Interface and of a robust, fault-tolerant system drove this stage of the design process.

The architecture is presented both graphically in diagrams and in text.  The relationships and interfaces between the hardware components and the software components are shown.  The basis of the design is a client/server architecture, which is evident in both the hardware and software descriptions.  In addition, the descriptions of the hardware architecture highlight the minimal changes that will occur as other District 11 ITS systems are deployed.  Specifically, the deployment of the Advanced Traffic Management System and the Fiber Optic Network by District 11 will impact the I-15 Control System.  This document covers the design features that will change if there is an interim period where the I-15 Control System is deployed before either of the other two systems.  This design was selected to mitigate those impacts as much as possible while providing all the required functions.

The document also presents some important design alternatives that were considered but not incorporated.  Most of these represent additional features and capabilities of the system which could be added if their cost was considered to be commensurate with their operational benefit.

 


2.                  Applicable Documents

1.      System Requirements Document, Version C, for the I-15 HOV Reversible Lane Control System Project, TransCore, December 4, 1998

2.      Communications Plan, draft, for the I-15 Reversible Lane Control System Project, TransCore, May 12, 1999

3.      Air Driven Delineator Study, draft, for the I-15 Reversible Lane Control System Project, TransCore, May 12, 1999

4.      Existing Control System Report, for the I-15 Reversible Lane Control System Project, TransCore, September 17, 1998

 

 


3.                  Selected System Architecture

3.1              System Context Diagram

Exhibit 3.1 shows the context diagram for the I-15 Reversible Lane Control System.

Exhibit 3.1: I-15 Reversible Lane Control System Context Diagram

I-15 reversible lane control system diagram.

As is shown, the system will support three types of operators and will have an electronic interface with four external systems.

·         HOV Operator – This person is responsible for control of the system, including opening and closing the lanes to traffic.  The HOV operator will have complete control over the devices controlling access to the reversible HOV lanes, including: gates, pop-ups, lights, changeable message signs (CMS), and CCTV cameras.  The system will provide the status of the components to the operator, including operational state (open, closed, etc.) and maintenance status (working or not working).  In addition, the system will make available reports on the use of the system and the data collected by the system.

·         Maintenance Technician – This person is responsible for scheduling and performing maintenance activities on the system.  In addition to providing detailed component status, the system will permit the technician to perform diagnostics and to exercise control of devices in conjunction with maintenance activities by field personnel.  The system will also track system failures and the resulting maintenance activities.

·         Trainer / Trainee – The trainer is responsible for training users of the system on system capabilities and operations.  The trainer will be able to develop and run training scenarios to simulate a variety of conditions (including failure conditions) to exercise the capabilities of the trainee.  To train personnel on control of the system, the trainer will use a device simulator to mimic the operation of the devices without actually allowing any control of the real field devices.  The trainee will be able to monitor the state and status of the system to become familiar with its monitoring capabilities.

·         ATMS/RMS – The integrated Advanced Traffic Management System (ATMS) and Ramp Metering System (RMS) will have an electronic interface with the I-15 Control System.  This interface will be used to exchange traffic information and status between the systems.  The ATMS/RMS will provide the I-15 Control System with traffic volumes and speeds along the HOV lanes and in the adjacent mainline lanes.  The ATMS/RMS will provide information on the status of the traffic flow sensors.  The I-15 Control System will provide the ATMS with its operational and maintenance status and limited traffic flow information.

·         CCTV – The fiber optic network to be installed throughout District 11 (including the stretch of I-15 where the reversible HOV lanes are located) will include the installation of surveillance cameras.  In addition, the eight existing surveillance cameras and the new device verification cameras of the I-15 Control System will be integrated onto the I-15 segment of the fiber optic network.  Users of the ATMS and the I-15 Control System will be able to view the video images and share control of these cameras.

·         Congestion Pricing – The Congestion Pricing system, which is collocated along the HOV lanes, will have an electronic interface with the I-15 Control System to share operational state, equipment status and traffic flow information.  The I-15 Control System will report on its state (closed, open to HOV, open to all) and will provide traffic flow data (extracted from traffic flow information from the ATMS/RMS).  The Congestion Pricing System will provide its status (able to charge tolls or not), the messages displayed on its CMSs and the volume of tagged vehicles (vehicles paying a toll) using the facility.

·         Special Projects – Occasionally, special projects will use the HOV lanes (when they are closed to traffic) as a protected environment to evaluate special purpose vehicles, such as automated vehicles.  This interface will provide a window for these projects into the status of the I-15 Control System so they can coordinate their activities with the normal operation of the lanes.  The projects will be able to view the schedule for the lanes and the status of the control systems.  In addition, surveillance video will be made available to the project.

3.1.1        Interim System Considerations

As noted above, the I-15 Control System will interface with the ATMS and will utilize portions of the fiber optic network for video and data communications.  Due to variations in the deployment schedules for the three independent projects, it is possible that the I-15 Control System will be installed and become operational before one, or both, of the other two systems.  If this happens, the system design will incorporate features to support operations during this interim period.  However, these features must be designed to allow an easy transition from the interim system design to the final system design.  Specifically, the interim I-15 Control Design will include the following features:

1.      Prior to the deployment of the ATMS, the I-15 Control System will implement a direct interface to the RMS in order to receive traffic flow information from the RMS sensors.  In addition to the volume and speed information, the RMS will indicate the status of each sensor.

2.      Prior to the deployment of the fiber optic network, the I-15 Control System will utilize leased telephone lines to communicate from the TMC to the field devices and dial-up ISDN lines to transmit video from the CCTV cameras to the TMC video monitors.  The fiber optic network will replace both of these links when it is installed.  In addition, backup communications links will become less important, and may be abandoned altogether, due to the increased reliability and path redundancy of the fiber optic system.  Due to the different transport characteristics of the fiber optic network, some terminal equipment, such as modems, will be replaced when the fiber optic network becomes available and is integrated into the I-15 Control System.

3.2              Interim and Final System Design

The I-15 Reversible Lane Control System will be deployed at about the same time as two other major projects in District 11.  They are the deployment of the Advanced Traffic Management System (ATMS) and the deployment of Phase V of the Fiber Optic Network.  The fifth phase of the Fiber Optic Network project will install the network along that portion of I-15 that includes the HOV lanes.  Both of these projects interface with the I-15 Control System.  Therefore, the design of the I-15 Control System will be different before and after the deployment of the two systems.  In this document, the term Interim Design refers to the design of the I-15 Control System before both of the systems (the ATMS and the Fiber Optic Network) are deployed and available.  The term Final Design refers to the design of the I-15 Control System after both systems are deployed and can be interfaced with and incorporated into the I-15 Control System.

Either of these two independent projects may be deployed before or after the I-15 Control System.  Therefore the planning and design effort for the I-15 Control System must consider all contingencies.  Fortunately these two systems are relatively independent of each other and therefore can be considered independently.

ATMS System – When this system is deployed and can interface with the I-15 Control System, it will require a small change to both the hardware and software architecture of the Control System.  The interface with the ATMS will supersede and replace the interface with the Ramp Metering System.  The transport media may change from a serial connection to an Ethernet connection.  The data supplied via this connection will continue to include all the necessary traffic speed and volume measurement data, but will also include some level of status on the ATMS.  As more detector loops are installed as part of the ATMS/Fiber Optic Network, this additional data, as it pertains to the HOV lanes and the adjacent I-15 mainlines, will also be provided to the I-15 Control System through the same physical interface.

Fiber Optic Network – The fiber optic network will become the communications path between the TMC and the FCUs.  The network will also become the communications path between the CCTV cameras and the TMC.  The existing modems and dedicated or dial-up lines will be taken out of service when the fiber network is connected.  The I-15 Control System devices should then connect directly into the terminal equipment of the Fiber Optic system.  The changeover will be transparent to the operators and other users, except that the quality of the video images will be greatly improved.

3.3              Top Level Hardware Architecture

The configuration of I-15 Control System hardware is shown in Exhibit 3.2.  This network diagram shows the interim system before implementation of the Fiber Optic Network but after the ATMS is deployed.  In addition, only equipment that may be changed or added for this project is shown.  Therefore, as will be described in more detail later, the existing MCU and the devices connected to it (gates, pop-ups, etc.) are not shown because they will be reused without change.


Exhibit 3.2: I-15 Reversible Lane Architecture (Interim)

I-15 reversible lane architecture


3.3.1        TMC Hardware Architecture

The basis of the TMC is a standard client/server architecture with the Control Server servicing the operator workstations.  A Database Server and the TSU 2070 Controller, which will function as a communications controller, support the Control Server.  These devices are networked with an Ethernet local area network.

Server Components – Two server engines are included, one to run the bulk of the application programs for the I-15 Control System and one to support the database functions.  The Database Server also serves as a redundant processor that can take over the Control Server functions in case of Control Server failure.  Both servers will use a RAID disk array for critical Control System data storage.  The RAID disk array provides mirrored redundant disk units that can be replaced on-line.

Workstations – Three identical workstations are provided: one for the HOV operator in the TMC, one for the maintenance personnel, and one for the simulator for training and software testing.  Each workstation will have two monitors: one for the graphical display and one for video display.  Conveniently located printers will support the personnel using the workstations.

TSU 2070 Communications Controller – In this architecture, the TSU 2070 Controller functions as the communications interface to the FCUs and the rest of the field devices.  The TSU connects to the Ethernet LAN via a Serial Port Hub so it is networked with both the Control Server and the Database server to support the redundancy provided by those two servers.

External System Links – A Firewall processor is provided to connect to the external systems, using modems where the system is not located in the TMC.  Through this unit, a path is provided via the modems to the Congestion Pricing System (over a dedicated circuit) and to any Special Projects (which would dial in to access the HOV lane status).  For the interim system, the connection to the Ramp Metering System would be a serial connection through the same Firewall processor.  However, after the ATMS is deployed the connection will likely be to the ATMS LAN.

Back-up Modem Connections – Two dial-up modems provide a capability to establish a back-up link from the TMC to the FCUs in case of any failure affecting the primary link through the TSU.  This arrangement takes advantage of the dial-up modem at each FCU, which is provided to allow access by a remote workstation.  Because of the need to provide a fully independent path to the FCU, the TSU and the modems are connected to the LAN via separate and independent Serial Port Hubs.  The Serial Port Hub also provides a connection to the CCTV Controller.

Simulator – The Simulator is implemented as a separate, independent and isolated system with components that duplicate the major elements of the TMC and field installation.  To support testing of new software and firmware, the Simulator contains one of each major processing unit in the Control System (workstation, Control/Database Server, TSU 2070, FCU 2070 and DCU 2070).  One unique unit, the Device Simulator, is provided to simulate the scripted responses of the field units during a training scenario.  The Simulator is connected with a separate LAN Hub so equipment repair and reconfiguration can be performed on the Simulator suite without compromising the integrity of the operational system.  However, two links to the operational system are provided from the simulator.  The Simulator system is connected, via a Firewall processor, to the operational system so trainees can view the status of the live system.  Also, a dial-up modem is provided to train users on the capabilities of the back-up link to a live FCU.

TMC CCTV Video Distribution – During the interim period (before deployment of the fiber optic network) the existing architecture is used at the TMC to distribute video to the workstations.  The existing equipment is augmented with additional equipment to handle the video from the new cameras.  However, the same approach, of dialing the camera of interest, is retained.

3.3.2        FCU/DCU Hardware Architecture

While the bulk of the new Control System is installed at the TMC, a few upgrades will be made to the equipment at the field sites.  However, the architecture of the FCUs and DCUs will not be altered.

2070 Controllers – New 2070 Controllers, along with a workstation, will be installed at the FCUs and the DCUs.  It is expected that the existing interface equipment between the Controllers and the field devices would be retained; however, some may be replaced depending on the availability of driver devices for the 2070 Controllers.

CMS Controllers – An updated version of the CMS controller will be provided for each CMS.

Loop Detectors – New Loop Detector cards will be installed for those loops that will be integrated directly into the I-15 Control System.

Remote Access Laptop – A new Remote Access Laptop will be provided to enable direct access to the FCUs.

3.3.3        Communications Architecture

The Communications architecture undergoes a radical change from the interim system to the final system, that is, when the Fiber Optic Network is deployed.

Interim System Configuration – In order to minimize the cost of the interim communications equipment (which may be in place, if at all, for only a few months) the communications architecture of today’s existing system is retained and expanded as necessary.  The TSU to FCU link will utilize the existing modems and dedicated leased Telephone Company lines.  Dial-up ISDN circuits will be used to access both the existing and new CCTV cameras.  A new dedicated Telephone Company line will be provided to the Congestion Pricing System.

Final System Configuration – The Fiber Optic Network will replace the links between the TSU and the FCUs and the circuits to the CCTV cameras.  The TSU to FCU links will be provided via a serial path through the Fiber Optic ATM Network, while the cameras will be connected via a TCP/IP connection in compliance with the standard design of this new network.

To facilitate testing of the new components of the Control System while not interfering with operations of the Existing Control System, a new underground conduit and cable plant will be installed between the FCUs and the DCUs, the CMSs and any new CCTV cameras.  This new conduit and cable plant will allow installation of new cables to interconnect the new FCU 2070 Controllers to the new DCU 2070 Controllers and the new FCU 2070 Controllers to the new CMS Controllers without requiring the removal of the existing cables between the units.  Testing of the new system can then be performed in parallel with operation of the existing equipment with minimal risk of interruption and compromise of safety.  In addition, the new conduit plant will incorporate spare conduit for installation of the Fiber Optic Network for the sections of I-15 between the FCUs and the CMSs.  The Fiber Optic Network conduit plant will be attached to the ends of the new Control System conduit plant as that project deploys its backbone fiber along that section of I-15 between its communications and data nodes.

3.4              Top Level Software Architecture

The software for the I-15 system will be built on a three-tier architecture to provide a separation of the client, business objects, and data store.  This separation is enabled by the use of well-defined interfaces between the tiers.  This type of architecture is the key enabler of the "Thin" client architecture.  In this architecture the Client Tier is responsible for data collection and data presentation but does not control the business rules or data storage.  A library of service utilities that provide communication, logging and other cross-tier services supports the Three-Tier architecture.

Exhibit 3.3: Three-Tiered Software Architecture

Three-tiered software architecture consisting of client tier, middle tier and data tier.

3.4.1        Data Tier

The Data Tier is responsible for the storage of data in a persistent store.  The Data Tier provides a Persistence Service that gives the Middle Tier the necessary data manipulation functions while shielding it from the specific implementation details of the persistent store.  The Oracle RDBMS will be used to provide highly available, robust persistent storage for the system.

The Persistence Service will be implemented by a Persistence Manager software component.  This component provides transparent access to the physical data store, and implements database connection pooling to improve system performance.
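A minimal sketch of how such a Persistence Manager with connection pooling could be structured is shown below, assuming standard JDBC access to the Oracle database.  The class name, method names and pooling policy are illustrative only and are not taken from this design.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.LinkedList;

// Illustrative sketch only: a simple Persistence Manager with connection pooling.
// The class name, method names and JDBC URL handling are hypothetical.
public class PersistenceManager {

    private final LinkedList<Connection> pool = new LinkedList<Connection>();
    private final String url;
    private final String user;
    private final String password;

    public PersistenceManager(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    // Hand out a pooled connection, creating a new one only when the pool is empty.
    public synchronized Connection getConnection() throws SQLException {
        if (!pool.isEmpty()) {
            return pool.removeFirst();
        }
        return DriverManager.getConnection(url, user, password);
    }

    // Return a connection to the pool instead of closing it, so it can be reused later.
    public synchronized void releaseConnection(Connection connection) {
        pool.addLast(connection);
    }
}
```

In this arrangement a Middle Tier component would obtain a connection, perform its queries, and release the connection back to the pool, so that the cost of opening database connections is paid only rarely.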

3.4.2        Middle Tier

The Middle Tier is responsible for the implementation of the Business Rules of the system.  This tier is used to manage the system objects and their interactions with the Data Tier.  The Middle Tier will be supported by a set of software components that provide object access, security, and event delivery to the Client Tier.

The Middle Tier services will be implemented by using Web or Application Server technology to provide a robust and highly available set of middleware services.  The Web Server will provide the implementation of the Security Service (HyperText Transfer Protocol [HTTP] Basic Authentication and Secure Socket Layer [SSL] for encryption) and the interface for Object Services through standard Servlet Application Program Interfaces [APIs].  The Object Services will be implemented by the following components: ControlServlet, CacheManager and UserManager.  The Event Service will be implemented by the EventServer component.

The ControlServlet is implemented by using the standard Java Servlet API.  This component is loaded and managed by the Web or Application Server and will only be accessed by authorized users.  The ControlServlet provides clients with the functions necessary to configure and maintain the system.  The ControlServlet implements and executes the business rules as required by the system.  The ControlServlet is responsible for providing access to required system reports.
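As one illustration of this approach, the sketch below shows how a servlet built on the standard Java Servlet API receives a client request, applies some processing, and returns a response.  The request parameter, command name and response format shown here are assumptions made only to indicate the structure; they are not part of this design.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch only: the general structure of a control servlet on the Java Servlet API.
public class ControlServlet extends HttpServlet {

    // Handle a client request; the "command" parameter and the plain-text reply are hypothetical.
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String command = request.getParameter("command");
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();

        if ("deviceStatus".equals(command)) {
            // In the real system the business rules and the CacheManager would be consulted here.
            out.println("GATE-1=OPEN");
        } else {
            out.println("Unknown command: " + command);
        }
    }
}
```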

The CacheManager provides a mechanism for the ControlServlet to maintain current information in a managed cache.  The CacheManager will be used to improve overall system performance by reducing the frequency with which system objects are built from the Data Tier.
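The caching idea can be sketched as follows; the expiry policy, key type and class layout are illustrative assumptions rather than part of this design.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: objects loaded from the Data Tier are kept in memory and
// treated as stale after a configurable age, so they are rebuilt only occasionally.
public class CacheManager {

    private static class Entry {
        final Object value;
        final long loadedAt;

        Entry(Object value, long loadedAt) {
            this.value = value;
            this.loadedAt = loadedAt;
        }
    }

    private final Map<String, Entry> cache = new HashMap<String, Entry>();
    private final long maxAgeMillis;

    public CacheManager(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    // Return the cached object, or null if it is missing or stale and must be rebuilt.
    public synchronized Object get(String key) {
        Entry entry = cache.get(key);
        if (entry == null || System.currentTimeMillis() - entry.loadedAt > maxAgeMillis) {
            return null;
        }
        return entry.value;
    }

    // Store (or refresh) an object under the given key.
    public synchronized void put(String key, Object value) {
        cache.put(key, new Entry(value, System.currentTimeMillis()));
    }
}
```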

The UserManager is responsible for collecting information about each user and the actions that a particular user is performing on the system.  The UserManager can be configured with access control levels for each user and will ensure that only authorized users have the ability to execute specific control functions.  The UserManager will also ensure that only authorized users can gain access to required system data.
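A minimal sketch of this kind of per-user access control is given below; the access levels, class name and method names are hypothetical and are intended only to show the kind of check the UserManager would perform.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: each user is assigned an access level, and control functions
// check that level before executing. Level names and values are hypothetical.
public class UserManager {

    public static final int VIEW_ONLY = 0;
    public static final int MAINTENANCE = 1;
    public static final int FULL_CONTROL = 2;

    private final Map<String, Integer> accessLevels = new HashMap<String, Integer>();

    // Record the access level granted to a user.
    public void setAccessLevel(String userId, int level) {
        accessLevels.put(userId, level);
    }

    // Only users at or above the required level may execute a given control function.
    public boolean isAuthorized(String userId, int requiredLevel) {
        Integer level = accessLevels.get(userId);
        return level != null && level.intValue() >= requiredLevel;
    }
}
```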

The EventServer is responsible for client notification of system events.  The notifications can be used by clients to update displays, notify users of critical information or to initiate specific client side functions.
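This event delivery can be sketched as a simple publish/subscribe pattern, as shown below; the listener interface and method names are illustrative assumptions, not part of this design.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch only: clients register listeners with the EventServer, which then
// notifies every registered listener when a system event occurs.
public class EventServer {

    // Hypothetical callback interface that a client would implement.
    public interface EventListener {
        void onEvent(String eventName, String detail);
    }

    private final List<EventListener> listeners = new CopyOnWriteArrayList<EventListener>();

    // Register a client that wants to be told about system events.
    public void addListener(EventListener listener) {
        listeners.add(listener);
    }

    // Deliver one event to every registered client listener.
    public void publish(String eventName, String detail) {
        for (EventListener listener : listeners) {
            listener.onEvent(eventName, detail);
        }
    }
}
```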

3.4.3        Client Tier

The Client Tier is responsible for the presentation of system information objects to the user or to software components within the system.  The Client Tier is supported by a set of services that provide data collection and presentation of system information to the user.  The Client Tier is made up of a collection of GUI applications.  These applications are responsible for presentation of information to the user in the form of integrated map-based views, high level (or rollup) data views, and lower level (drill down) data views.  The Client Tier is also responsible for providing applications to interface with 2070 controllers.

3.4.4        Cross Tier Services

The Cross-Tier Services are composed of a set of re-usable system utilities that are used by one or more tiers of the architecture.  These utilities include communication, query, formatting, printing and other commonly used functions.
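As an example of such a utility, the sketch below shows a simple logging helper that any tier could reuse; the class name, log format and file-based approach are hypothetical illustrations rather than part of this design.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Date;

// Illustrative sketch only: a shared logging utility usable from any tier.
public class LogUtility {

    private final String logFile;

    public LogUtility(String logFile) {
        this.logFile = logFile;
    }

    // Append a timestamped message from the named source to the shared log file.
    public synchronized void log(String source, String message) {
        try (PrintWriter out = new PrintWriter(new FileWriter(logFile, true))) {
            out.println(new Date() + " [" + source + "] " + message);
        } catch (IOException e) {
            System.err.println("Logging failed: " + e.getMessage());
        }
    }
}
```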

3.4.5        Component Allocation

The following diagram depicts a notional architecture of the components within the three tiers.

Notional architecture of components within the three tiers.

The components depicted above will be allocated to the hardware devices in the following manner:

·         Database Server

Persistence Manager

Utilities

·         Control Server

ControlServlet

UserManager

CacheManager

EventServer

Utilities

·         Workstations

GUI Applications

Utilities

·         Remote Access Laptop

ControlServlet

UserManager

CacheManager

EventServer

GUI Applications

Utilities

·         Controllers

Controller Applications

Utilities

3.5              Redundancy and Degraded Mode Operation

Maintaining some level of operational capability, even if it is a less than full, or degraded, level of capability, is a key requirement of this safety critical I-15 Control System.  The high level design and architecture presented above incorporate several features to compensate for component failures.  Table 3.1 below summarizes these features and the level to which normal operations may be degraded.

Table 3.1: System Redundancy Features

FAILURE | REDUNDANCY FEATURES | LOSS OF OPERATIONAL CAPABILITY
Operations Workstation | Use Maintenance Workstation; use Remote Access Laptop, if available | None if voice radio communications is available; loss of video for device verification
Control or Database Server | Load both Control and Database applications on the good server | Possibly slower response times
Control and Database Servers | Use Operations Workstation to dial up FCU as a remote access laptop | Loss of some ability to generate reports
Disk in RAID Disk Array | Redundant mirrored disks in array | None
RAID Disk Array | Server internal disk | Loss of some ability to generate reports
TSU, Serial Port Hub, Dedicated Modems, or dedicated Telephone Line | Dial-up Modem to FCU | Possibly slower response times
FCU 2070 Controller, Dedicated Modem | Intervention by Field Observer at DCU/MCU and CMS Controller | Slower operation
DCU 2070 Controller | Intervention by Field Observer at MCU and CMS Controller | Slower operation
CCTV Camera, Control Receiver, Encoder, ISDN Modem, Telephone Line, ISDN Modem, Decoder, Control Receiver, or Video Monitor | Manual device verification by Field Observer | Slower operation

 

 


4.                  ALTERNATIVE ARCHITECTURE COMPONENTS

4.1              Unix Operating System

The use of the Unix Operating System could be considered as an alternative to the use of Windows NT.  Unix can be run on high-end workstations and servers built by Sun Microsystems, or on low-end systems such as Intel-based PCs.  Unix excels in environments where there are very high transaction rates or where system throughput is a critical factor.  Use of Unix in the ATMS may be a factor in support of a common operating system.

The use of Unix for this system has few advantages over the use of Windows NT because this system does not have a high transaction rate or a large system throughput requirement.  The use of Windows NT allows a greater flexibility in the choice of Commercial Off-The-Shelf (COTS) software.  Windows NT also has a simpler user interface than do most Unix implementations.  This also simplifies the support of Laptop equipment as the server and clients will be based on the same software suite.

The use of Windows NT also has the advantage over Unix in an Intel deployment because the PC manufacturers can pre-install NT and provide end-to-end support of the system.  To receive the same level of support for a Unix system requires the purchase of a commercially supported Unix package (such as SCO Unix) with substantial additional maintenance costs.

4.2              “Thick” Client

A "Thick" client architecture is described as an architecture where the business objects of the system reside in the client code.  Each application that is used by the system must have the business objects encoded into the software.  This is the general approach of a Client/Server architecture where the client is usually a front end to a database management system.

This approach has two distinct drawbacks.  The first and foremost issue to be addressed by a "Thick Client" is the maintenance of the software.  Software maintenance is more difficult because changes to a business object usually require changes to each application.  These changes require additional unit and regression testing when compared to the three-tier "Thin Client" architecture.  In general a multi-tier system, based on well-defined interfaces, only requires complete regression testing when the interfaces change.

In the Three-Tier architecture, all of the business objects and business rules reside in a single location.  As long as the interfaces to the business objects do not change, the clients do not have to be changed or recompiled.  Maintenance is much easier using this approach.  The separation of the business logic from the GUI components makes for cleaner code that is easier to maintain.

The second issue to be addressed is that the clients rely on a connection to the database to provide useful functionality.  This database interface often contains vendor-specific queries and is not portable to other alternative products.  A Three-Tier architecture resolves this issue by placing the database access outside of the client tier.  The client simply interfaces with middleware business objects to interact with the system.  The Middle Tier is not directly tied to the database, which isolates the database interfaces to a small part of the system.
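The contrast can be illustrated with a sketch of a "thin" client request: instead of opening its own database connection, the client calls a Middle Tier service (here a hypothetical ControlServlet URL) and works only with the returned data.  The host name, URL and response format are assumptions for illustration only.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch only: a thin client asks the Middle Tier for data over HTTP
// rather than querying the database directly. The URL and parameters are hypothetical.
public class ThinClientExample {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://control-server/control?command=deviceStatus");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // Read the plain-text reply produced by the Middle Tier component.
        BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
        connection.disconnect();
    }
}
```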

4.3              Back-up FCU to FCU Communications Link

In considering alternative approaches to providing a secondary means of communication with the FCUs, in case of an outage of the dedicated Telephone Line or a failure of a dedicated Modem, we considered providing a direct and independent link between the two FCU 2070 Controllers.  This provides a triangular shaped communications architecture, as shown in Exhibit 4.1.

Exhibit 4.1: Backup FCU to FCU Communications Link

A triangular shaped communications architecture of the backup FCU to FCU communications link.

As shown in the exhibit, a temporary link could be established between the FCUs during an operational sequence.  That is, the link is only needed during an opening or closing period.  It would be established on command from the TMC at the start of an opening or closing sequence, but not used unless a failure was detected in the primary TSU to FCU link.  This connection would be dropped at the successful conclusion of the sequence.

The hardware needed for this alternative communications architecture is already included in the system.  The same dial-up modems at the FCU, which support remote access from either the TMC or the Remote Access Laptop, could be used to establish this link.

The value of this alternative would be increased by use of a cellular telephone link between the two modems.  This, of course, would require a different modem than would be used with a land line.  The advantage of using a wireless connection lies in its complete independence from the routing of the dedicated primary landline.  Any back-up land line would use the same cable as the primary line to reach the Central Office of the Telephone Company, and would therefore be subject to the same risk of interruption.

4.4              Full Motion Surveillance Video

The ISDN dial-up telephone lines which are used today to transmit the video images from the CCTV cameras to the TMC monitors can only provide slow frame-rate video because of their limited bandwidth.  In the District 11 Fiber Optic Network, the standard is full motion video, and as it is deployed, all surveillance CCTV cameras will be converted.  Converting these cameras in the interim period (after the I-15 Control System is deployed but before the Fiber Optic Network is deployed) would require providing new dedicated telephone lines capable of supporting full motion video.  Leased T1 lines would be an option.  However, this approach was not included in this high level design since the duration of this interim period is not known, and may in fact be zero.

4.5              Shared use of a Camera for Surveillance and Device Confirmation

Shared use of the existing cameras, and of new camera installations, to perform both surveillance and device verification functions was briefly considered as a cost saving option.  However, the idea was rejected as part of this high level design for several reasons, including:

·         The existing cameras can only view a few of the device locations with sufficient clarity for device verification.

·         Some devices, such as the groups of longitudinal and transverse pop-ups and associated gate, can be properly viewed from only a very few camera locations because of their physical dispersion.

·         There are few locations where more than one CMS can be viewed from a single camera location.

·         There is continuing concern about loss of control of pan, tilt, and zoom to other TMC, and even external, users.  With a fixed camera, there is no control to lose.

For these reasons, this high level design includes a set of approximately 18 fixed new CCTV cameras which will be specifically sited to optimally observe the operation of the control devices (including the CMSs) for the HOV lanes.

4.6              Additional CMS

A problem common with HOV lanes is congestion at the exits of the lanes.  On the I-15 HOV lanes, congestion most often occurs at the North end, where traffic on 6 lanes (the 2 HOV lanes and the 4 mainline lanes) must merge into just 4 mainline lanes north of the HOV exit at Ted Williams Parkway.  A CMS on the HOV lanes, at a point suitably upstream of the exit, would be able to warn HOV drivers that they must slow because traffic is congested ahead.  The CMS could even indicate the speed of the traffic on the mainline lanes by using data from the Ramp Metering System.  On the negative side of this alternative, motorists on the HOV lanes have good visibility of the upcoming congestion well before they reach it.  An additional CMS is included in this high level design.

4.7              Queue Detection and Trapped Vehicle Loops

Vehicle detector loops could be installed at the entrances to the HOV lanes to detect vehicles queuing up for the opening of the lanes (as sometimes happens at the South end) or vehicles which might become trapped between the pop-ups when the lanes are closed.  Although some vehicle detector loops will be installed or some existing loops will be integrated at these points on the HOV lanes, loops specifically designed and located to detect queues or trapped vehicles are not included in this high level design.  The video surveillance cameras installed, or to be installed, at these locations will be far more useful for detecting vehicles at these locations.  However, to supplement the use of the surveillance cameras, we have included in the design three additional fixed CCTV cameras, each equipped with an image-processing device, which will alert the Operator to the presence of vehicles in the likely queuing areas.