One of the CFD workstations at TFHRC J. Sterling Jones Hydraulics Research Laboratory.
Client-server architecture for CFD modeling in the J. Sterling Jones Hydraulics Research Laboratory.
Description: Computational fluid dynamics (CFD) modeling is conducted over Turner-Fairbank Highway Research Center's (TFHRC's) high-speed Internet2 connection to the Transportation Research and Analysis Computing Center (TRACC) cluster at Argonne National Laboratory (ANL). The user interface is based on the Linux operating system. The CFD code, STAR-CCM+, is executed remotely on assigned computational nodes of the cluster. The computational results can then be transferred to local computers in the J. Sterling Jones Hydraulics Research Laboratory for further data analysis.
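The remote workflow described above (submit a batch job to the cluster, then pull the results back to a laboratory workstation) can be sketched as a shell script. This is a minimal illustration, not TRACC's actual configuration: the scheduler directives, queue, host names, and file names are all assumptions. STAR-CCM+ does support GUI-less batch execution via its `-batch` and `-np` options.

```shell
#!/bin/bash
# Sketch of a remote batch workflow for STAR-CCM+ on a cluster.
# Scheduler (PBS), node counts, and all paths/names are hypothetical.

# 1. Write a PBS job script that runs STAR-CCM+ in batch mode.
cat > job.pbs <<'EOF'
#!/bin/bash
#PBS -N scour_model           # job name (hypothetical)
#PBS -l nodes=4:ppn=32        # 4 nodes x 32 cores = 128 cores
#PBS -l walltime=24:00:00
cd "$PBS_O_WORKDIR"
# -batch runs a simulation macro without a GUI; -np sets the core count
starccm+ -batch run_macro.java -np 128 scour_model.sim
EOF

# 2. On a cluster login node, the script would be submitted with:
#      qsub job.pbs
# 3. Results would then be copied back to a laboratory workstation,
#    e.g. over the Internet2 link:
#      scp user@cluster.example.gov:scour_model@*.sim ./results/

echo "job script written: job.pbs"
```

Steps 2 and 3 are shown as comments because they require the cluster environment; only the job-script generation runs locally.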
The photo shows the Zephyr computer cluster at Argonne National Laboratory. It consists of 2,944 cores on 92 compute nodes with 32 cores each, a DataDirect Networks storage system, and a high-bandwidth network.
Training course on CFD modeling conducted at Argonne National Laboratory. Argonne National Laboratory provides engineers and students with an advanced training course on CFD simulation. The instructor is shown sitting in the classroom with onsite participants. Remote participants, who join through videoconferencing, are not shown in this photo.
Description: The Federal Highway Administration (FHWA) J. Sterling Jones Hydraulics Research Laboratory's numerical modeling is conducted at Argonne National Laboratory's (ANL's) Transportation Research and Analysis Computing Center (TRACC) through remote access and collaboration. TRACC studies computational fluid dynamics (CFD)-based simulation techniques for highway hydraulics and erosion/sediment transport applications. Engineers compare these simulations with tests conducted at the J. Sterling Jones Hydraulics Research Laboratory to develop and validate design formulations.
TRACC comprises two clusters, Phoenix and Zephyr, and includes high-performance computing, visualization, and networking systems. Phoenix is a customized Dell system with 1,024 cores on 128 compute nodes, each with two quad-core AMD Opteron 2378 CPUs and 8 gigabytes of RAM, backed by a DataDirect Networks storage system with 180 terabytes of shared RAID storage. Zephyr is an Atipa Technologies system with 2,944 cores on 92 compute nodes, each with two 16-core AMD Opteron 6273 CPUs and 32 gigabytes of RAM, backed by 120 terabytes of storage. Both clusters use a high-bandwidth, low-latency InfiniBand network for computations. Available software packages include ANSYS FLUENT, HyperMesh, LS-DYNA®, LS-OPT®, MADYMO, MATLAB, NoMachine, STAR-CD, STAR-CCM+, STAR-Design, and TRANSIMS.
Training on the CFD software is conducted every year. Lectures and tutorial-based exercises are offered to onsite and remote participants. TRACC staff work with Turner-Fairbank Highway Research Center users to identify the most useful topics to include in the training courses.
Turner-Fairbank Highway Research Center
6300 Georgetown Pike
McLean, VA 22101-2296