U.S. Department of Transportation
Federal Highway Administration
1200 New Jersey Avenue, SE
Washington, DC 20590
202-366-4000



Federal Highway Administration Research and Technology
Coordinating, Developing, and Delivering Highway Transportation Innovations

 
SUMMARY REPORT
This summary report is an archived publication and may contain dated technical, contact, and link information.
Publication Number:  FHWA-HRT-15-067    Date:  August 2015

 

EXPLORATORY ADVANCED RESEARCH

Breakthroughs in Vision and Visibility for Highway Safety Workshop Summary Report - August 13-14, 2014

Appendix A. Summary of TRB Workshop

Researchers held a workshop titled “Cross-Modal Distributed Simulation” in Washington, DC, on January 10, 2016, that was co-chaired by Donald Fisher (Principal Technical Advisor, Surface Transportation Human Factors, Volpe) and Maura Lohrenz (Division Chief, Aviation Human Factors, Volpe). The list of attendees is included in table 2 in appendix B.

Purpose

The typical highway transportation research study using driving simulators involves one lab operating one simulator, designing its own scenarios, and testing potential solutions to single-mode problems from one agent’s (e.g., driver’s, pedestrian’s, bicyclist’s, etc.) viewpoint.(24) This workshop was focused on two related questions. First, what use cases would provide a motivation for expanding the Nation’s highway transportation research simulation capabilities beyond the standalone, single-mode, and single-platform functionality to distributed, cross-modal, and cross-platform functionality? The proposed use cases were not generated with any particular agency or mission in mind. They were meant simply to illustrate areas of research where advances in simulation would be needed to answer critical basic and applied research questions. Second, what are the technical issues that stand in the way of that expansion? The technical issues are mostly specific to the community of researchers at universities, whose resources are often strained both in terms of the technical personnel required to operate complex simulators and by the funds required to purchase, operate, and maintain them.

Background

Central to the discussion of any type of distributed simulation is the importance of being able to replicate what any researcher has demonstrated. Many human factors experiments only require a single computer and monitor. The stimuli that are presented to participants can be easily and fully described in written documentation. However, that is not the case with experiments done on a driving simulator. An investigator who wants to replicate another investigator’s experiment must have access to adequate documentation to understand the source or executable code. This is the only way to match the precise detail in the scenes and scenarios, including the exact interaction among the host vehicle, scripted vehicles, ambient vehicles, and signalization. Some simulators with limited capabilities can replicate what was done at one site on a given platform at another site on the same platform.(2) For simulators with advanced software that gives researchers more power in generating scenes (especially complex roadway geometries) and scenarios (especially complex interactions among vehicles, pedestrians, and signals), however, it can often be quite challenging or even impossible to replicate a given model.

There is a critical need for distributed simulation as more and more decisions about what treatments to implement are being made initially on the basis of experiments run on simulators. Specifically, unless other investigators can replicate an experiment, the findings from just one experiment run by one set of investigators may be suspect. This need was originally addressed in a TRB workshop back in 2005.(25) The current workshop is the next logical progression in the advancement of distributed simulation.

Use Cases

The workshop attendees proposed and briefly discussed several possible use cases for four categories of simulation. The major focus of the workshop was not on the detailed exploration of use cases; the focus was on identifying a representative sample of use cases for each of the categories. Various relevant references (which were not explicitly recorded in the workshop minutes) are included in the following subsections to substantiate points made at the workshop. In several cases, this summary elaborates on points that were discussed briefly in the workshop so that they are understandable to the larger audience.

Distributed Asynchronous Simulation Use Cases

There are surprisingly few examples of human-in-the-loop transportation research using the same scenarios on the same platform (brand of simulator) at different locations and different times. Until recently, there were very few simulators of the same brand that had the functionality required to evaluate complex scenarios with novel roadway designs and signals. Thus, there simply was not the capacity to move forward in this arena. In addition, it is difficult to run the same virtual world (model) on the same platform at different locations. The proposed use cases for this category of simulation are described in the following sections.

Different Regions—Geometric Design, Signs, Signals, and Pavement Markings

It is important to analyze the effect of different treatments designed to increase safety with drivers from different regions of the country. In terms of signs, signals, and pavement markings, it simply cannot be known a priori whether red light cameras or dynamic speed feedback signs, for example, will have the same effect in one region of the country as they do in another. Understanding how drivers in different regions will respond to new treatments only becomes possible with distributed asynchronous simulation (and, in fact, may require distributed synchronous simulation).

Different Populations

Increasingly, manufacturers are designing treatments for special populations of drivers who are at a greatly increased risk of crashing, including older drivers—as they become a larger fraction of the total driving population—and teen drivers, especially those with Attention Deficit Hyperactivity Disorder.(26) Studies often need to include hundreds of participants, but the number of participants from a given subpopulation at any one geographic location may be limited. Distributed asynchronous simulation is needed to successfully study the problem using driving simulators.

Distributed Synchronous Simulation Use Cases

Distributed synchronous simulation can be performed at one of the following three levels: (1) micro (2 simulators), (2) mini (3–10 simulators), and (3) macro (up to thousands of simulators).(27) The need for distributed synchronous simulation is driven by the importance of understanding the complex interactions among two or more drivers. A recent motivation for understanding this interaction in more depth is the role of the human operator in the rapid advance of automated vehicles, vehicle-to-object (i.e., other vehicle, pedestrian, bicycle, or infrastructure) communications, and ITSs. These and other possible use cases for distributed, synchronous simulation are discussed in the following subsections.

Crash Avoidance Science

As noted previously, only a handful of refereed articles have captured drivers’ joint behavior in the seconds leading up to a crash.(3) However, 40 percent of traffic fatalities occur in multiple-vehicle crashes.(28) A better understanding of the joint and codependent behavior of drivers in these multiple-vehicle crashes can lead to better remediation. The following use cases provide examples of where a better understanding is needed.

Roadway Design

To date, researchers have evaluated new roadway designs, such as the diverging diamond interchange, using a standalone simulator.(29) However, to gain a deeper understanding of how drivers behave while operating in these new roadway configurations, researchers need to study multiple road users in the same virtual world at the same time. Comparisons could then be made between the behavior of multiple independent drivers in standard diamond interchanges with highway on- and off-ramps and the behavior of multiple independent drivers in the diverging diamond interchange.

Head-On Collisions

In some states, high-speed roads (70 mi/h or more) with one travel lane in each direction and with no median to separate the lanes are common. Head-on collisions are often deadly. Understanding how two drivers behave in this scenario could provide the information needed to better train drivers to mitigate such situations. This is a clear case in which how drivers respond to each other in the last few seconds before a crash requires two real drivers operating in the same virtual world. Researchers cannot reliably script the real-life behavior of another human driver in this case, and the only safe way to study this scenario is with driving simulators because the crashes are so deadly.

Left Turn Across Path (LTAP)—Four-Way Intersections

Besides a head-on collision, one of the deadliest crashes for older and younger drivers alike is an LTAP of an oncoming vehicle at a four-way intersection (typically signalized). New vehicle-to-vehicle (V2V) communications have the potential to deliver much earlier warnings to drivers. What the turning driver and the through driver typically do in the last several seconds of such scenarios is critical to the design of any LTAP collision warning system; however, little is known about this. As in the head-on collision, using driving simulators is the only safe way to study this scenario.

Left-Turn—T-Intersections

Equally problematic is a driver turning left at a T-intersection.(30) Intersection mitigation assist warnings based on V2V communications will be useful only to the extent that researchers understand how the drivers receiving these warnings interact with one another.

Car Following in Congested Traffic

Models of a driver following another car are used frequently in microsimulations of high-volume traffic.(31) However, a thorough understanding of the interactions that occur between and among drivers and the consequent effects on congestion is not currently available.

Rear-End Crashes

Off the highway, it can reasonably be argued that many rear-end crashes could be avoided if the driver of a stopped or slowing vehicle knew that another vehicle was about to strike his or her vehicle. V2V communications make such warnings possible, but how will the driver of the struck vehicle react? On the highway, V2V communications will make it possible to know when a lead vehicle (many vehicles ahead) slows suddenly or stops. This information can be propagated via V2V upstream of the slowing vehicle, but how the drivers in the convoy will react to the warning and to each other is not known.

Emergency Vehicles

Emergency responders (e.g., firefighters, ambulance crews, and police) sometimes need to coordinate as a team in response to unexpected, large-scale events. Understanding how they interact with one another and with surrounding non-emergency vehicles could improve the quality and speed of the response. Some work in this area has been undertaken using simulated environments.

ITSs—Route Choice

Standalone driving simulators have been used to study route choice with the driver in an immersive environment.(32) However, this is somewhat unrealistic because many real-time decisions about which route to take are made in congested traffic, which is also responding to travel information. Mini-driving simulations and macro-driving simulations have been used to understand route choice, but in these distributed synchronous experiments, the driver is not in an immersive environment and is simply making decisions without the attendant cognitive and manual load that comes with driving in congestion.(33) Virtual reality technologies make possible the study of route choice in immersive environments across hundreds of drivers who are all interacting with one another.

Smart Signs and Signals

The suburbs are shrinking, and cities are growing and becoming ever more congested. Building new or expanded roadway capacity is not viable in most cases. Instead, transportation planners must focus on making signs and signals smarter at directing traffic to keep it flowing more smoothly. The algorithms that control smart signs and signals are often based on rudimentary information about driver-driver interactions. The knowledge gained from distributed synchronous simulation of driver-driver interactions in a new environment of smart signs and signals could add to the fidelity and, therefore, the effectiveness of control algorithms that make assumptions about these interactions.

Automated Vehicles

The workshop attendees were in general agreement that automated vehicles at level 3, in which the automated driving system (ADS) is in control for much of the trip, probably will not be on the roads in large numbers in the near future except in dedicated lanes. Knowing how drivers interact with one another in a dedicated lane when following each other at short distances is important, especially when, because of emergencies, control must be transferred to every driver in a convoy. There are more issues for automated vehicles at level 2, where the driver can have both hands and feet off the controls but is supposed to maintain situation awareness. Little is known about how to make sure that the driver maintains situation awareness or how to transfer control from the ADS back to the driver. In addition, from the standpoint of distributed synchronous simulation, it is critical that researchers understand how drivers of non-automated vehicles will interact with drivers of level 2 automated vehicles. A driver of a level 2 automated vehicle may not retake control as quickly as needed when his or her vehicle operates outside its design envelope. How drivers of non-automated vehicles react to such scenarios is unknown.

V2V Communications

LTAP and rear-end crashes are two examples of crash scenarios in which it is important to understand how two or more drivers involved in a potential collision would respond to whatever new warnings V2V communications can provide.(34) Certainly, these are not the only crash scenarios where V2V communications will prove helpful. There are both applied and basic research questions whose answers can best be provided by understanding driver interactions in distributed synchronous simulations.

Evacuation

Mini-driving and micro-driving simulations cannot model what happens when an entire city must evacuate; macro-simulation now makes this possible. A better understanding of how drivers behave in situations where they must evacuate an area may become more critical as abrupt changes in the weather leave less time for evacuation. How drivers respond to one another in such situations will be critical to understanding how any evacuation plan might work in practice.

Cross-Modal Simulation

The use cases for cross-modal simulation will become more important over the next 30 years as society moves closer to the vision of a world without traffic.(35) If today’s predictions hold true, growing numbers of pedestrians and bicyclists will coexist with autonomous and semi-autonomous vehicles. Simulators are needed that support two or more modes so that operators of those different modes can simultaneously navigate the same virtual world.

Truck–Car Interactions

The determination of fault and unsafe driving acts in truck–car crashes has been difficult at best.(36) The ability to study truck–car crashes in a cross-modal simulator would allow a much more complete evaluation of hypotheses about the determination of the fault and the unsafe driving acts that lead to these crashes.

Emergency Vehicle–Private Vehicle Interactions

The interaction of emergency vehicles with other types of vehicles on the highway is not well understood, in part because there has not been the cross-modal simulation capacity to support the relevant research. If the research were only focused on the interaction of police cars and private cars, the advances could perhaps be made using the capabilities of distributed synchronous simulation. However, a study of cabs with very different interiors and types of operation, such as those of a fire engine and those of a private car, would almost necessarily need to involve two modes.

Vehicle–Pedestrian Interactions

Pedestrian behavior is one of the most difficult behaviors to script. Without cross-modal simulation, it is impossible to realistically study the interaction that occurs between pedestrians and vehicles (e.g., cars, trucks, and buses) at intersections and marked midblock crosswalks, two locations where pedestrians are particularly at risk of being struck by a vehicle.(37,38) Marked midblock crosswalks present a multiple-threat scenario in which a pedestrian just entering the crosswalk is obscured from the view of drivers approaching in the left of two travel lanes by a vehicle stopped in the right travel lane immediately adjacent to the crosswalk. The question to which researchers have no answer, and which seems absolutely essential to understanding this deadly mix, is what happens when the driver, the pedestrian, or both are distracted. Does the distracted driver fail to swerve, or does the distracted pedestrian freeze in place when one or the other is suddenly confronted by the unexpected? Researchers can only understand how drivers and pedestrians respond in these situations by studying and evaluating them in a safe, simulated environment. The goal would be to use the results of such experiments to develop appropriate designs, warnings, and training to mitigate the problem.

Vehicle–Bicycle Interactions

Bicycle fatalities are on the rise, increasing by 19 percent since 2010.(39) Right-hook crashes, in which the striking vehicle is often a truck or bus, are particularly deadly. Following these crashes, investigators seek to answer the following questions: Was the driver of the vehicle not looking? Was the driver looking but the bicyclist not visible? Where was the attention of the bicyclist? Would all right-hook crashes be preventable if the driver were paying attention and scanning appropriately? Would all such crashes be preventable if the bicyclist were paying attention? The answers to these questions, like the answers to the multiple-threat scenario for pedestrians, depend on researchers being able to observe the behaviors of the agents in the last few seconds before a crash occurs.

Interactions Among Drivers in Coordinated Teams of Vehicles

The workshop participants briefly discussed the use of cross-modal simulation to understand the coordination that is required by drivers in a team of vehicles (e.g., emergency responders). This coordination is presumably a major concern of the Department of Homeland Security (DHS). The participants were not certain whether DHS had used cross-modal simulation to evaluate how teams of drivers would coordinate their response to an emergency. However, given that this coordination is essential and multiple modes of transportation are involved, cross-modal simulation would be needed to adequately study this case. What is not clear is whether the simulation needs to be immersive. The participants suggested that interactions among the different modes of emergency responders probably do not require the fine detail that is necessary to understand vehicle–pedestrian and vehicle–bicycle conflicts. Rather, the cross-modal simulation should simply be able to capture decisions being made by the different units, not the actual behaviors of the vehicles operating in the different units.

Interactions Within a Mixed Fleet of Automated, Semi-Automated, and Traditional Vehicles

None of the workshop participants knew of prior research that had studied the behavior of drivers in a mixed-fleet environment in which some vehicles are autonomous, some are semi-autonomous, and some are “traditional” (i.e., not autonomous). In particular, if a driver of a traditional vehicle encounters a driver of an automated vehicle, then researchers need to know how the traditional driver will respond to the driver of the automated vehicle, who may not be attending to the forward roadway. Driving simulation studies cannot fully address this difficult problem until the avatar in the simulated world (representing the driver of the autonomous vehicle) can be realistically programmed to communicate his or her exact intentions (or lack thereof) to the traditional driver in a way that the traditional driver can understand. To understand this communication well enough to develop such a realistic avatar, researchers first need to use cross-modal simulation to study both human agents interacting in real time.

Cross-Platform Simulation

The use cases previously described also motivate the need to advance the state of the art in cross-platform simulation. Specifically, each of these use cases assumes the availability of a second platform functionally identical to the one that initiated the sharing of scenes and scenarios. Usually, however, the second site to which a researcher needs access has a driving simulator built on a different platform. Therefore, there is a need for infrastructure that enables the evaluation of virtual worlds across different platforms.

Cross-platform functionality can also enable the evaluation of test scenarios that are initially developed on a less expensive platform (i.e., without full functionality) to be quickly and easily evaluated more fully on a platform with full functionality (e.g., the full-motion National Advanced Driving Simulator (NADS) platform). Without cross-platform support, this process is much more resource intensive because the entire virtual world must be developed twice (once for each platform). This additional cost often means that changes in roadway design that are evaluated initially on a lower-fidelity platform are never evaluated in a high-fidelity simulator. Cross-platform simulation would resolve this issue.

Technical Issues

High-fidelity driving simulators are operationally unique, complex entities. Each consists of a set of hardware and software components, along with application drivers, compilers, and interpreters that facilitate the underlying input/output process. The hardware includes the vehicle controls, visual displays, input/output devices, a possible motion platform, the cab enclosure, speakers, and other peripheral devices that can assimilate data synchronously via the application program interface. The software that controls what is presented and how it is presented is discussed in the sections that follow.

Scene and Scenario Development

One of the first steps in designing a new simulation is to develop the scenes (the static world) and scenarios (dynamic world) through which a driver (or drivers) can navigate. The workshop participants addressed various aspects of this often-painstaking process, including the growing need for project deliverables that require custom virtual worlds, which cannot be developed easily using the tools native to most driving simulators. “Custom virtual worlds” means environments that need to precisely reflect the geometry and visual appearance of the roadway in a given locale and must include novel roadway or intersection designs that have never been built before.

To understand the difficulties that a developer encounters using the current set of tools to meet these requirements, it is necessary to understand how developers generally construct a virtual world with the software native to most driving simulators. Most driving simulators come with an interface that allows for the use of premade tiles, which contain already built sections of roadways, intersections, etc. The roadway sections are either straight (with single-, two-, or four-lane roads) or curved (left, right, and spiral), and there are a limited number of different curves, intersections, roundabouts, etc. The scene is developed by adding objects from a library of the built and natural environment to the tiles. Textures can be applied to the objects and the roadway, but the basic geometry cannot change. If a researcher wants complete control over the roadway geometry and, at the same time, wants the geometry to conform to highway design guidelines, the researcher needs computer-aided design (CAD) software. Therefore, the person doing the development has to learn complex 3D modeling software. In addition, the researcher needs an application to do the texturing; textures need to be applied to the many hundreds and perhaps thousands of polygons. This will continue to be the case for the new roadway designs that the researcher is evaluating (the experimental sections of the roadway).
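To make the tile-based authoring model above concrete, the following minimal sketch (purely illustrative; the tile names and object library are hypothetical and not drawn from any particular simulator) represents a scene as an ordered list of premade tiles with library objects placed on them.

```python
# Illustrative only: the tile-based authoring model described above, reduced to a
# data structure. Tile names and the placed objects are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Tile:
    kind: str                                    # e.g., "straight_2lane", "curve_left"
    length_m: float                              # length of the premade roadway section
    objects: list = field(default_factory=list)  # library objects placed on the tile

scene = [
    Tile("straight_2lane", 200.0, objects=["speed_limit_sign_45"]),
    Tile("curve_left", 150.0),
    Tile("signalized_intersection", 60.0, objects=["signal_head", "crosswalk"]),
]
print("Total roadway length:", sum(t.length_m for t in scene), "m")
```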

However, what about the transition sections between the new designs that the researcher is evaluating? Sponsors are indicating that these sections of roadway need to be very close in appearance to the roadways in a given location or region. Therefore, the researcher still needs the complex set of 3D modeling and texturing tools to make the transition sections appear like the roads in a given location. An alternative, proposed and piloted by Kelvin Santiago at the University of Wisconsin–Madison, has been shown to greatly reduce the time it takes to create the transition sections in custom virtual worlds. Using Santiago’s tools, researchers can quickly and easily generate roadway surfaces specific to any given locale without resorting to complex 3D modeling of every polygon and subsequent texturing of those polygons. The textured 3D model is output as an object (.obj) file, a format that must be converted before the software running on NADS can read it. The .obj file is converted into a Virtual Reality Modeling Language (VRML) file that the NADS software can read directly. Using Santiago’s software, researchers can also easily define the location of travel lanes and combine it with the roadway surface information (which is necessary for autonomous vehicles).

Santiago uses one of two approaches to perform the previously described process on transition sections. First, when a CAD file is available (e.g., from a local transportation department), he uses open-source CAD tools to extract a polyline that defines the center of the roadway and any polylines that define other important features (e.g., a bike path or sidewalk). Python scripts then take the polyline and automatically build a 3D model, and textures are automatically applied to the polygons in the model. Alternatively, when a CAD model is not available, Santiago creates a polyline by tracing a picture of the roadway in a CAD package, creating, for example, one edge line. The pictures of the roadway used for tracing can be obtained from several different external sources. Just as before, he uses this tracing (or polyline) to automatically generate a 3D model whose geometry is indistinguishable from the real-world transition section being duplicated in the simulator.
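The following sketch illustrates, under stated assumptions, the kind of automation just described: it extrudes a roadway centerline polyline into a flat, textured ribbon mesh and writes it out as an .obj file. It is not Santiago’s actual tool; the function name, the flat-terrain geometry, and the material handling are assumptions made for illustration.

```python
# Minimal sketch of polyline-to-roadway automation. NOT Santiago's actual scripts;
# names and the simple flat geometry are hypothetical.
import math

def polyline_to_obj(centerline, width, out_path, material="asphalt"):
    """Extrude a 2D roadway centerline into a flat ribbon mesh and write it
    as a Wavefront .obj file (which could then be converted to VRML)."""
    vertices, uvs, faces = [], [], []
    for i, (x, y) in enumerate(centerline):
        # Local direction of travel (central difference, clamped at the ends).
        x_next, y_next = centerline[min(i + 1, len(centerline) - 1)]
        x_prev, y_prev = centerline[max(i - 1, 0)]
        dx, dy = x_next - x_prev, y_next - y_prev
        length = math.hypot(dx, dy) or 1.0
        # Unit normal pointing to the left of the direction of travel.
        nx, ny = -dy / length, dx / length
        # Left and right pavement edges, half a roadway width to each side.
        vertices.append((x + nx * width / 2, 0.0, y + ny * width / 2))
        vertices.append((x - nx * width / 2, 0.0, y - ny * width / 2))
        uvs.append((0.0, float(i)))
        uvs.append((1.0, float(i)))
        if i > 0:
            # Two triangles per roadway segment; .obj indices are 1-based.
            a, b, c, d = 2 * i - 1, 2 * i, 2 * i + 1, 2 * i + 2
            faces.append((a, b, d))
            faces.append((a, d, c))
    with open(out_path, "w") as f:
        f.write(f"usemtl {material}\n")
        for vx, vy, vz in vertices:
            f.write(f"v {vx:.3f} {vy:.3f} {vz:.3f}\n")
        for u, v in uvs:
            f.write(f"vt {u:.3f} {v:.3f}\n")
        for a, b, c in faces:
            f.write(f"f {a}/{a} {b}/{b} {c}/{c}\n")

# Example: a gently curving transition section traced from a CAD polyline.
centerline = [(float(x), 0.002 * x * x) for x in range(0, 200, 10)]
polyline_to_obj(centerline, width=7.2, out_path="transition_section.obj")
```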

Success Stories

Santiago has used this tool to model the transition sections associated with a complex interchange in Sioux City, IA. He built those transition sections easily, using the Python scripts to automatically generate the textured 3D models from the polylines without having to manually construct all of the polygons or manually apply all of the textures.

Distributed Asynchronous Simulation Technical Issues

Once a virtual world is created, a researcher would like to share it with other investigators using the same platform for all of the reasons previously discussed. Fortunately, not everything needs to be shared for distributed asynchronous simulation to work. It is enough that two sites have identical visual databases, terrains, scenarios (behaviors), logical road networks, and entity positions. Technically, it seems that it should be easy enough to create a model with these shared elements and then run the same virtual world (model) on two simulators of the same make at different locations and different times. However, this has turned out to be surprisingly difficult. The workshop participants mentioned the problems and possible solutions listed in the following sections. It is important to note the traditional distinction between source code, which can be edited by the user, and executable code, which cannot. The distinction is not as hard and fast in today’s simulator software environment, but there are parallels.
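Before attempting to run a shared world, two sites could first verify that the shared elements really are identical. The sketch below is one hypothetical way to do that: it hashes every file under an assumed folder layout (the folder names are made up for illustration) and reports any file that is missing or differs between sites.

```python
# Hypothetical sketch: verify that two sites hold byte-identical copies of the
# shared elements (visual database, terrain, scenarios, logical road network)
# before trying to run the same virtual world. Folder layout is illustrative.
import hashlib
import json
from pathlib import Path

SHARED_ELEMENTS = ["visual_db", "terrain", "scenarios", "logical_road_network"]

def build_manifest(root: Path) -> dict:
    """Map every file under the shared-element folders to a SHA-256 digest."""
    manifest = {}
    for element in SHARED_ELEMENTS:
        for path in sorted((root / element).rglob("*")):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                manifest[str(path.relative_to(root))] = digest
    return manifest

def compare_manifests(local: dict, remote: dict) -> list:
    """Return the files that are missing or differ between the two sites."""
    return [name for name in sorted(set(local) | set(remote))
            if local.get(name) != remote.get(name)]

if __name__ == "__main__":
    local = build_manifest(Path("site_a_world"))
    # In practice the remote manifest would be exchanged as JSON, not rebuilt locally.
    remote = json.loads(Path("site_b_manifest.json").read_text())
    for name in compare_manifests(local, remote):
        print("MISMATCH:", name)
```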

Different Versions

While the simulators at two institutions may be the same, the versions of the software can still be (and often are) different. Most of the model may transfer, but not all of it will. For example, suppose a researcher uses the retro-reflectivity feature on traffic signs in a later version of the software at one site. Another site that runs an earlier version cannot support the retro-reflectivity because the feature is not available in older versions of the software. Problems also come with upward compatibility. It can often take less time to completely reprogram scenarios than to make them upwardly compatible. In addition, small changes in code syntax and semantics also surface on occasion (such as the use of the enumeration function) in advanced versions of the simulation software. Furthermore, the sound module may be set up differently at different sites. For example, at the University of Massachusetts Amherst, the environmental and external roadway sounds are emitted by surround speakers that function off the left channel, whereas the scripted audio sounds are played using a second auxiliary model on the center channel. Other universities and research centers may have other setups. (Often, they use a single model to play different sounds.) The software is incredibly complex, so perhaps it is not surprising that upward compatibility is not always achievable.

Different Visual Databases and Different Vehicles

Two institutions may want to work together on the development of the same virtual world (visual database). However, pointers in the source code to objects (e.g., cars, pedestrians, or buildings) on one simulator may not address the same objects on a different simulator. (For example, vehicle number 5 may be a bicyclist on one simulator and a truck on another.) This flexibility is necessary because different installations require different traffic environments, and the number of moving objects that can appear simultaneously under the control of operator-written code during a simulation is rather small (30–50 in many simulators). When the references are not the same across installations, either different objects or blank white space (indicating a “dead” texture) will appear. Again, these differences arise naturally because different institutions have different needs (and so their libraries of objects differ). However, that creates problems. One way around this is to use only published code. Another reliable way is to share the resource folder associated with the scenario VRML file (the “res” folder contains all of the textures used in that scenario). However, the published model cannot be modified except by adding new objects to it (existing objects cannot generally be changed), which can change the environment in ways that make the altered virtual world different from the original.

Different Configuration Files

Perhaps slightly more pernicious, if only because they are less obvious to the developer, are the many settings in the multiple configuration files (also referred to as component model files or *.cmp files), including those for vehicle dynamics (e.g., steering, braking, acceleration, or gain), visuals (e.g., time of day or fog), scenarios (e.g., characteristics of how the world is drawn), and terrain (e.g., roadway friction or surface properties). The configuration files have a complex set of features that need to be defined: field of view is one feature, change in the horizon with braking is another, and so on. Differences between two sites may immediately stand out for some features, but that is not true of others. For example, the vehicle dynamics of the host vehicle at two different institutions will likely differ; steering and brake inputs are just two aspects that might not be the same. This means that the timing in the scenarios can be thrown off at complex intersections. Going through the configuration files for every feature that might be set differently is critical, and there is no simple way to equate all of the configuration files for two sites. Nevertheless, as long as the researcher checks that all of the features in all of the configuration files are set identically, what at first appear to be different virtual worlds will, once the configuration files have been equated, appear as similar virtual worlds. One potential workaround is to make critical vehicle dynamics and scenario-specific changes directly in the JavaScript file specific to the model and then share that JavaScript file for the configuration file (component model file) across sites. (Each configuration file has a unique JavaScript file associated with it.) As a working example, the University of Massachusetts Amherst and the University of Wisconsin–Madison previously used this workaround in a two-site study of driver behavior at flashing yellow arrows.
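A script along the following lines could help with the configuration-file audit described above. The real component model (*.cmp) format is platform specific and is not reproduced here; for illustration the files are assumed to be flat "feature = value" text, and the file names are hypothetical.

```python
# Illustrative sketch only: flag configuration settings that differ between two
# sites. The real *.cmp format is platform specific; a flat "feature = value"
# text layout is assumed here purely for illustration.
from pathlib import Path

def load_settings(path: Path) -> dict:
    """Parse a simple 'feature = value' file into a dictionary."""
    settings = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip()
    return settings

def diff_settings(site_a: Path, site_b: Path) -> None:
    """Print every feature whose value differs (or is missing) between sites."""
    a, b = load_settings(site_a), load_settings(site_b)
    for key in sorted(set(a) | set(b)):
        if a.get(key) != b.get(key):
            print(f"{key}: site A = {a.get(key)!r}, site B = {b.get(key)!r}")

# Hypothetical usage:
# diff_settings(Path("siteA_vehicle_dynamics.cmp"), Path("siteB_vehicle_dynamics.cmp"))
```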

Success Stories

There are an increasing number of success stories as different investigators learn which factors determine whether a world built at one site will run at another site. For example, among universities with a simulator, the University of Massachusetts Amherst has successfully shared scenarios with the University of Wisconsin–Madison, the University of Pennsylvania, the University of Puerto Rico at Mayaguez, and North Carolina Agricultural and Technical State University. Knowledge of the issues listed previously emerged from the first attempts to share scenarios successfully, which required a great deal of trial and error.

Distributed Synchronous Simulation Technical Issues

Assuming a researcher can get the same virtual world to appear at two locations asynchronously, additional significant technical challenges come with trying to get drivers to navigate through the same virtual world simultaneously while seeing the actions of the other driver(s). Several problems and solutions are mentioned in this section, which considers first the problems that occur at one site with multiple simulators and then the problems that occur at sites in different locations, each with one or more simulators.

There are three ways to implement distributed simulation. The first way is labeled “server master.” The client has no simulation software; the server has the only simulation software. In this case, driver inputs are passed to a central computer that does all of the calculations and passes the entire virtual world as modified back to the clients. This model can potentially be implemented at a single site with multiple simulators but would require more bandwidth than is currently available across multiple sites.

The second way is labeled “client-server input sharing.” The client and the server both run simulation software. At each update, the client provides the server with inputs on steering, braking, and acceleration; the server computes the new positions of the client vehicles and passes that information back to the client, which redraws the positions of the vehicles at the next update.

The third way is labeled “client-server state sharing.” The clients pass information about their state (the positions of their vehicles) through a server, and the server distributes that information to the other clients. The clients then recompute the positions of each of the other vehicles and of the driver’s (client’s) own vehicle at each update. This is the type of distributed synchronous simulation that is typically used across sites.
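A minimal sketch of the client-server state-sharing pattern is shown below, assuming a simple UDP relay and a JSON message format; both are assumptions made for illustration, not the protocol of any particular simulator product.

```python
# Minimal sketch of "client-server state sharing": each client reports only its
# own vehicle state; the server relays it to the other clients, which redraw the
# remote vehicles locally. Port and message format are hypothetical.
import json
import socket

SERVER_ADDR = ("0.0.0.0", 9000)

def run_server():
    """Relay every incoming state update to every other known client."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(SERVER_ADDR)
    clients = set()
    while True:
        data, addr = sock.recvfrom(4096)   # one state update from one client
        clients.add(addr)
        for other in clients:
            if other != addr:              # relay to every other client
                sock.sendto(data, other)

def send_state(sock, server, vehicle_id, x, y, heading, speed, t):
    """Called by a client once per simulation update (e.g., at 60 Hz)."""
    state = {"id": vehicle_id, "x": x, "y": y,
             "heading": heading, "speed": speed, "t": t}
    sock.sendto(json.dumps(state).encode(), server)
```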

One Site and Multiple Simulators

Multiple simulators at the same site may have no networking complications or problems. Two simulators at The Ohio State University (OSU) Driving Simulation Laboratory, a desktop simulator and a motion-base simulator, are connected by a single gigabit switch and achieve sub-millisecond communication latency. Because the simulator software is itself distributed, the multiple simulators can be treated much like a single simulator.

Multiple Sites and One or More Simulators at Each Site

With geographically distant sites, traffic must cross the Internet, which requires additional network infrastructure. OSU has a local simulator network along with connections to outside networks. The simulator networks communicate with one another over the Internet using the security provided by a virtual private network (VPN). The OpenVPN server is usually on the host network (in this case, OSU’s), and the OpenVPN client is at the remote site. Typically, this is accomplished using OpenVPN software with layer-2 bridging.

Network Latency

There are challenges associated with latency. First, the communications usually travel over the commercial Internet, so network latency is not in the researcher’s control. Data travels at approximately 120,000 mi/s (120 mi/ms). Therefore, if information traveled in a straight line between two sites, the propagation delay would be only about 4 ms. Additional latency is introduced by the numerous switches along the path to the destination. The typical measured latency using a digital subscriber line from OSU to the University of Massachusetts Amherst is 33 ms.2 The latency to Indiana University–Purdue University Indianapolis (IUPUI) is almost identical (29 ms).
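The arithmetic behind these numbers can be restated in a few lines; the 480-mi figure below is simply what the quoted 120 mi/ms speed and 4-ms example imply, not a measured site-to-site distance.

```python
# Restating the latency arithmetic above: at roughly 120 mi/ms, straight-line
# propagation over a few hundred miles takes only a few milliseconds, so most of
# the measured 33 ms comes from switching and routing, not distance.
SPEED_MI_PER_MS = 120          # approximate signal speed quoted above
straight_line_mi = 480         # distance implied by 120 mi/ms x 4 ms
propagation_ms = straight_line_mi / SPEED_MI_PER_MS
measured_ms = 33               # measured OSU-to-UMass Amherst latency
print(f"propagation-only delay: {propagation_ms:.1f} ms")
print(f"overhead from switches and routing: {measured_ms - propagation_ms:.1f} ms")
```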

What might happen with a lag of, for example, 500 ms, which is larger than would necessarily occur in practice for two simulators but is fairly routine in the gaming world? Suppose, for example, that Driver 1 and Driver 2 started side by side in a race. If the lag were 500 ms, then Driver 1 would have traveled for half a second before seeing Driver 2’s vehicle start to move. Similarly, Driver 2 would have traveled for half a second before seeing Driver 1’s vehicle start to move. Each driver’s subsequent behavior will be different from what it would be if they understood that they were neck and neck, not each leading the other. Therefore, researchers would lose the control that makes simulator-based experiments invaluable.

One of the solutions to the latency problem is to try to predict the course of user-controlled characters and vehicles. This is common in video game engines, where latencies can be up to 500 ms. However, this type of prediction can run counter to the design of driving simulation experiments, where the interest is in operator behavior and where subtle, unpredictable actions are the most interesting. Thus, predicting what another driver will do robs the experiment of the purpose for which it is intended: to understand the interactions between and among drivers.
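For reference, the kind of prediction being discussed is typically a simple extrapolation of the last reported state, along the lines of the sketch below (the units and the 500-ms example value are illustrative).

```python
# Sketch of client-side prediction: when the latest remote update is `latency`
# seconds old, extrapolate the remote vehicle forward along its last reported
# heading and speed. Units are illustrative.
import math

def predict_position(x, y, heading_rad, speed_mps, latency_s):
    """Guess where a remote vehicle is now, given its last reported state."""
    return (x + speed_mps * latency_s * math.cos(heading_rad),
            y + speed_mps * latency_s * math.sin(heading_rad))

# A remote vehicle last reported at (100 m, 50 m), heading due east at 30 m/s,
# with the update 0.5 s old (the 500-ms lag used in the example above):
print(predict_position(100.0, 50.0, 0.0, 30.0, 0.5))   # -> (115.0, 50.0)
```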

Network Drops

Second, there are network drops, in which packets are simply lost. As with long latencies, researchers can do some client-side predictive modeling. However, using prediction to fill in what another driver is doing during a network drop has the same problem as using it to mask latency: it undermines attempts to evaluate hypotheses about what a driver would actually do.

Common Model Look Up

Care must be taken to ensure that the running simulated world is shared among users. Randomness added to improve realism in the simulated world must be replicated to all users before being displayed; disagreements about distances and object qualities would eliminate most useful data from an experiment. There is also what is referred to as the problem of common model lookup: is a driver at one site really seeing what a driver at another site is seeing? For example, one person sees a red van, but another person sees a blue truck. This problem was referred to in the discussion of distributed asynchronous simulation. The issue can also arise if the moving objects (such as vehicles and actors) are mapped differently across different sites (a definite possibility). Vehicle 1 at the University of Massachusetts Amherst may be a red sport utility vehicle, while Vehicle 1 at FHWA may be a white truck.
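One hypothetical pre-experiment check for common model lookup and replicated randomness is sketched below; the object-to-model mappings and the seed value are made up for illustration.

```python
# Hypothetical pre-experiment check: (1) confirm that every object ID maps to the
# same model at both sites, and (2) replicate "random" variation by seeding all
# sites identically so the generated worlds agree. Mappings shown are made up.
import random

SITE_A_MODELS = {1: "red_suv", 2: "blue_sedan", 3: "pedestrian_adult"}
SITE_B_MODELS = {1: "white_truck", 2: "blue_sedan", 3: "pedestrian_adult"}

def check_common_lookup(a: dict, b: dict) -> list:
    """Return object IDs whose model assignment differs across sites."""
    return [oid for oid in sorted(set(a) | set(b)) if a.get(oid) != b.get(oid)]

print("Mismatched object IDs:", check_common_lookup(SITE_A_MODELS, SITE_B_MODELS))

# Replicated randomness: every site seeds its generator with the same agreed-on
# value, so ambient-traffic variation is identical everywhere.
SHARED_SEED = 20160110
rng = random.Random(SHARED_SEED)
ambient_gap_s = [round(rng.uniform(1.5, 4.0), 2) for _ in range(5)]
print("Ambient vehicle headways (s):", ambient_gap_s)
```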

Security

There are also the ubiquitous problems associated with security. Although using a VPN does not eliminate all security issues, the security issues surrounding it are known and relatively uncomplicated. There is no obvious alternative to a VPN at the moment.

Success Stories

OSU has also been successful at getting its simulator network (at the Center for Automotive Research West) to communicate with the simulator network at IUPUI and is working to connect its simulator network with the one at the University of Massachusetts Amherst as well. On a much larger scale, a U.S. company that provides computational simulation and modeling has built a network of 18 simulators (with the possibility of more) for the University of New South Wales in Australia.

Cross-Modal Simulation

The technical challenges are no different for cross-modal simulation than they are for distributed synchronous simulation, assuming that the simulators for the different modes can be constructed. Once developers have constructed those different simulators, and assuming that they are run on the same platform, the real challenges are network latency, network drops, and common model lookup.

Success Stories

One of the major cross-modal success stories is the car-bicycle simulator at Oregon State University. Bicyclists and drivers can navigate in the same virtual world and see one another as agents in that world. OSU is now constructing a cross-modal car–pedestrian simulator. The motion base simulator at the OSU Driving Simulation Lab will communicate with a system at the Ohio Supercomputer Center Interface Lab.

Cross-Platform Simulation

Cross-platform simulation may be the holy grail of advances in simulation. Technically, it is very challenging. When the configuration files on the same platform at different sites differ, at least the corrections to those differences involve identical changes. However, when the configuration files differ across platforms, as they will, exactly how one configuration setting (e.g., rain) plays out on one platform may be very different from how it plays out on another. In fact, cars, lights, and other scenario objects may have very different behaviors. Furthermore, there is no guarantee of a common world and common objects in that world. A red sedan should show up as a red sedan for everyone, not as a red sedan for a driver at one site and a blue truck for a driver at another site.

To solve this problem, at the very minimum, a researcher needs to ensure that the two platforms render the same world with the same objects. Assuming that this can be achieved, there are the following four options: (1) make both simulators High Level Architecture (HLA) compliant and run a common run-time infrastructure, (2) make both simulators Distributed Interactive Simulation (DIS) compliant and pass state back and forth, (3) use the same scenario software on both platforms, or (4) develop a custom interoperability solution.

The HLA standard has enabled some interoperability, as defined under the Institute of Electrical and Electronics Engineers (IEEE) standard 1516.(40) The HLA standard resembles a middleware that runs on top of a run-time infrastructure. HLA provides the following three main advantages: (1) interface specification (i.e., it defines how simulators interact with the run-time architecture), (2) an object model template (i.e., it specifies what information is communicated between simulators), and (3) a set of rules that simulators must obey to be HLA compliant. However, there is no standardization of the network protocol. The different sites must then use common libraries for the application to be able to achieve interoperability.

The DIS standard has also enabled interoperability, as defined under IEEE standard 1278.(41) DIS does not use a central computer to coordinate simulations; in that sense, it supports peer-to-peer interactions. Unlike HLA, DIS does specify the protocol on the wire. Each simulator communicates with every other simulator and passes information back and forth directly rather than through a central computer. DIS simulations are responsible for updating the state of their own objects; each car is responsible for broadcasting its own state. DIS supports dead reckoning, which works similarly to client-side prediction but uses a simplified model. Unfortunately, DIS and HLA are not compatible (which is unsurprising because they are very different), and a researcher needs adapter software to cross-communicate.
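The following conceptual sketch (plain Python, with no real DIS library and no actual protocol data units) illustrates the dead-reckoning idea: a simulator that owns an entity broadcasts a new state update only when peers' extrapolation of its last update would have drifted beyond a threshold.

```python
# Conceptual sketch only; no real DIS library or protocol data units are used.
# Each simulator owns its entities and broadcasts a fresh state update only when
# peers' dead-reckoned extrapolation of the last update would drift too far.
import math

DRIFT_THRESHOLD_M = 1.0   # illustrative tolerance before a new update is sent

class OwnedEntity:
    def __init__(self, x, y, heading_rad, speed_mps, t=0.0):
        self.x, self.y = x, y
        self.heading, self.speed = heading_rad, speed_mps
        self.last_sent = (x, y, heading_rad, speed_mps, t)  # state peers extrapolate

    def maybe_broadcast(self, t, send):
        """Send an update only if peers' dead-reckoned guess has drifted too far."""
        sx, sy, sh, ss, st = self.last_sent
        guess_x = sx + ss * (t - st) * math.cos(sh)
        guess_y = sy + ss * (t - st) * math.sin(sh)
        if math.hypot(self.x - guess_x, self.y - guess_y) > DRIFT_THRESHOLD_M:
            send({"x": self.x, "y": self.y, "heading": self.heading,
                  "speed": self.speed, "t": t})
            self.last_sent = (self.x, self.y, self.heading, self.speed, t)

# Example: after gentle turning, the actual position drifts from the extrapolated
# one by more than 1 m, so a new update is sent (printed here for illustration).
car = OwnedEntity(0.0, 0.0, 0.0, 30.0)
car.x, car.y = 15.0, 2.0
car.maybe_broadcast(0.5, send=print)
```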

Options 3 and 4, although possible, are not likely to be pursued unless an agency decides to fund a project that requires implementing one of them. However, they may be chosen as an interim milestone on the path toward fully complying with one of the standards. Finally, it should be noted that HLA and DIS do not solve problems such as common model lookup.

Success Stories

There are success stories such as massively multiplayer online games, real-time strategy games, and the synthetic theater of war, just to name a few. However, current success stories in driving simulation are nonexistent.

Human-in-the-Loop Simulation in Aviation

As with surface transportation, there are many primary agents in a simulation. These include pilots (of both planes and unmanned aircraft systems (UASs)) and air traffic controllers (en route, terminal, tower, oceanic, and international), along with a bevy of other individuals involved in airline operations, dispatching, mission control, traffic flow management, and airport operations. Use cases revolve around the issues faced by these agents: new technologies, new procedures, new airspace and airport designs, new aircraft types and capabilities, proposed future traffic volumes and mixes, extreme weather conditions, and congestion and delays. Multiple simulators are needed to evaluate the different use cases, including a NextGen Integration and Evaluation Capability simulator, a Cockpit Simulation Facility, an air traffic control tower (Research Development and Human Factors Laboratory), a UAS Human-in-the-Loop Laboratory, and an Airway Facilities Tower Integration Laboratory.

For distributed simulation, the Federal Aviation Administration (FAA) interacts with companies and agencies that use both DIS and HLA. DIS is the IEEE standard architecture created for Department of Defense war gaming. It was widely adopted before the mid-1990s. While it is relatively simple and lightweight, it is not as capable as modern architectures. Nevertheless, it is still useful for simulation interoperability, but only within a local area network. If an external group has a DIS-compliant simulator and can bring it to FAA, FAA can connect to it. With respect to HLA, FAA typically uses the civilian version known as AviationSimNet®. This allows real-time human-in-the-loop simulation at geographically dispersed locations to study complex Air Traffic Management scenarios. It has been widely adopted by aviation research facilities in industry, academia, and the Government.


2 Personal Interview with USDOT Subject Matter Expert (2016). John A. Volpe National Transportation Systems Center, Cambridge, MA. 16 March 2016.

 

 
