Good afternoon or good morning to those of you to the West. Welcome to the Talking Freight Seminar Series. My name is Jennifer Symoun and I will moderate today's seminar. Today's topic is Data for State and Local Freight Planning. Please be advised that today's seminar is being recorded.
Before I go any further, I do want to let those of you who are calling into the teleconference for the audio know that you need to mute your computer speakers or else you will be hearing your audio over the computer as well.
Today we'll have three presenters, Dike Ahanotu of Cambridge Systematics, Michael Anderson of the University of Alabama Huntsville, and Joel Falter of KOA Corporation.
Dike N. Ahanotu is a Senior Associate in the Atlanta office of Cambridge Systematics with fifteen years of consulting and research experience in freight transportation planning and management. He has worked on freight data collection projects for DOTs in Oregon, Colorado, California, Texas and Tennessee. Dr. Ahanotu has also managed several national-level research projects relating to freight data, including his current management of NCFRP 20 (Developing Sub-National Commodity Flow Databases).
Michael Anderson is an associate professor at the University of Alabama in Huntsville, specializing in traffic engineering, travel demand modeling and forecasting, freight transportation, and urban planning.
He's worked extensively with the Alabama Department of Transportation and the Alabama MPOs on travel demand modeling and freight.
Joel Falter is the chief operating officer and principal planner for KOA Corporation. He has prepared a wide variety of traffic engineering studies for local, regional and state agencies.
Some of his projects are OD studies for the Burlington Northern Santa Fe Railroad and the SR 58, I-5 and 395 corridors. He is currently studying five state roads in Kern County.
I would like to go over logistical details prior to starting the seminar.
The seminar will last 90 minutes, with 60 minutes for the speakers and the final 30 minutes for questions. If you think of a question, you can type it into the chat area; make sure you send your question to everyone, and make sure you say which presenter your question is for.
I will start the question session with the questions typed in the chat box. Then the operator will give instructions on how to ask questions over the phone.
If you haven't joined the listserv, the address to register is provided on your screen. I would like to remind you that the session is being recorded; a file containing the audio and visual portions of the seminar will be posted to the Talking Freight website within the next week. We encourage you to direct others to the recording.
The presentations used during the seminar are available in the file download box in the lower right corner of your screen. Presentations will be available online within the next week,
I will advise all attendees of the availability of the PowerPoint presentations, transcript and recording.
To be eligible for credits for today's seminar, you must have logged in with your first and last name. If you are attending with a group, type your name into the chat box.
Also, download the evaluation form from the file share box and submit it to me after you have filled it out. Even if you are not applying for credits, I encourage you to fill out the evaluation form and send it to me, as we want to make these seminars work well for you.
Today's topic, for those who just joined, is Data for State and Local Freight Planning. Our first presenter is Dike Ahanotu. I will bring up your presentation.
You can go ahead when you are ready.
D.K. Ahanotu: The title of my presentation is -- I will spend time talking about the three major national freight data sources: the Commodity Flow Survey put out by the Bureau of Transportation Statistics; the Federal Highway Administration's Freight Analysis Framework, or FAF, database; and TransSearch as well.
I will talk about how to think about using these data for local freight planning efforts, talking about the strengths and limitations of these databases and the different aspects you should consider,
and I will spend time talking about NCFRP 20, the research project on developing sub-national commodity flow databases; the project has completed the first two tasks and is moving to the third, so I will cover preliminary findings and the next steps for that study. Then I will conclude by talking about a few local freight data collection efforts that occurred in Washington for two different commodities, and compare and contrast the outcomes of these local freight data collection efforts.
First, in terms of the CFS, the Commodity Flow Survey: it's pretty important that you have a detailed understanding of it if you are using these national freight data sources. The CFS is the primary data source for the Freight Analysis Framework, second version, and also one of the major data sources for estimating short-distance truck trips in the TransSearch database.
CFS data is used for the other national freight data sources as well. In terms of what it is: it's a shipper survey -- it surveys the companies, the establishments, that actually ship goods across the country -- and it's an address-based survey.
They survey companies based on their addresses and the types of industries they are in; only certain industries are included in the database. They did get over 100,000 responses. In terms of regional coverage, it covers over 100 regions across seven modes, with information on 23 commodities as well. It's a very unique and useful database that can be used for freight planning efforts: understanding how regions are related to others in terms of economic output, production, inputs and outputs. There are a lot of very good uses in terms of doing freight planning at the local level.
I will talk about some of the things to keep in mind as you use this data. In particular I want to focus on some of the aspects that may conflict with other data sources people are aware of. It's a survey of select sectors.
The main ones included are mining, manufacturing, wholesale trade, retail trade and what they call auxiliary establishments. Not just headquarters are included; all of a company's establishments are potential candidates for this survey.
What's not included: several sectors, and I highlighted the ones in yellow that are probably most problematic in terms of doing local freight planning. First, agriculture -- farms are not included. Construction companies are not included; a lot of these are moving sand and gravel on very short-distance, local trips. Transportation companies are not included -- everything from trucking firms to third-party logistics. None of these are counted, to ensure there's no double counting, but that has implications for what is and isn't included in the database. Retail and service industries are not included as well. A lot of the truck trips related to these sectors will be short-distance truck trips, important at the local level, but not included in the Commodity Flow Survey.
There are over 100 regions and seven modes; you are filling in a multidimensional matrix of over a million cells. The survey can't cover each of these, so you end up with many cells with small or zero values that are difficult to estimate or that, for various reasons, need to be suppressed in the Commodity Flow Survey database. Also, the Commodity Flow Survey takes special care to suppress data that might make proprietary information about individual companies public. A lot of times, if a particular company dominates an industry in a region, that information will be suppressed in the CFS database; it's not something available in the ultimate products.
Then, in terms of the survey mechanism: the shippers are selected at random, and each shipper identified is requested to identify 40 shipments over a one-week period. These are requested at regular intervals; depending on the size of the company, it could be every 20th shipment, every 1,000th, or every shipment the company produces.
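The sampling scheme just described -- report every k-th shipment over the reporting period, with k set by company size -- can be sketched in a few lines of Python. The shipment log and interval here are invented for illustration:

```python
# Systematic sampling of a shipment log: the respondent reports every k-th
# shipment (the k-th, 2k-th, ...). The log and interval are hypothetical.

def sample_shipments(shipments, k):
    # 1-based systematic sample: keep items whose position is a multiple of k
    return [s for i, s in enumerate(shipments, start=1) if i % k == 0]

week_of_shipments = [f"shipment_{i}" for i in range(1, 101)]
reported = sample_shipments(week_of_shipments, k=20)
print(reported)  # 5 of the 100 shipments get reported
```

A firm that doesn't keep a shipment log has no way to apply an interval like this, which is the reporting difficulty with smaller firms noted later in the talk.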
Part of the complexity in terms of implementing the survey is what you do with multi-stop tours, trucks conducting deliveries or pick-ups across a local area. The way it's done, tours are actually not incorporated into the specific methodology. You can imagine a truck making several stops: only information on the sampled stop will be included, with no information on the previous stop or the one after it. It will look like a single shipment from the establishment to that third stop, and back to the establishment -- that's how it will basically be recorded in the CFS. It's accurate in terms of commodity flow information, but when you try to translate it into truck trip information, you will see you miss a lot of information there.
Trip tours, trip [indiscernible], are not included. Retail establishments are not included, and because this is an address-based survey of U.S. establishments, import flows are not captured at the port of entry in the survey database.
You can imagine a container coming into LA/Long Beach: if the first destination is a retail establishment or some sort of logistics or transportation company, that flow is still not picked up until it hits one of the sectors measured in the survey.
Then there are a number of open questions related to the survey where it's hard to come up with an answer in terms of the impact. Take a company operating across multiple industries, like Hewlett Packard: they ship a lot of stuff, but they also have a large services component as well. You can imagine their establishments across the country have a mix of these types of activities occurring; depending on which establishment gets the survey, it will have very different results in terms of the shipments shown, identified from that particular establishment. It's a complexity where it's unclear how it plays out in the database -- something to be aware of.
The other thing to think about: at the local level, smaller firms, the ones that have less sophisticated tracking of shipments, will find it harder to fill out the survey in an accurate manner. They don't track shipments in a manner that lets them say what every 20th shipment, or every fifth, looks like, because they don't have a method of tracking this information. How they fill out the survey is an open question in terms of how that is going to play out.
Moving to the Freight Analysis Framework: it's built from public sources, relies on the Commodity Flow Survey as base data, and fills in a lot of zero cells using linear modeling and proportional fitting.
Without getting into the details, it basically uses information on the cells that are not zero to estimate the value of the cells that are marked as zero in the Commodity Flow Survey. Then, in terms of the CFS sectors not included -- the ones out of scope in the original survey -- it uses a combination of local employment, trucking and population data to estimate the values of these shipments.
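As a rough illustration of the proportional-fitting idea mentioned above -- scaling a matrix of flows until it matches known totals -- here is a minimal sketch. The FAF's actual estimation is considerably more elaborate, and all numbers here are hypothetical:

```python
# Iterative proportional fitting (IPF): alternately scale the rows and
# columns of a seed origin-destination matrix until its margins match
# known row and column totals. Purely illustrative numbers.

def ipf(seed, row_targets, col_targets, iters=50):
    m = [row[:] for row in seed]
    for _ in range(iters):
        for i, target in enumerate(row_targets):       # match row totals
            s = sum(m[i])
            if s > 0:
                m[i] = [v * target / s for v in m[i]]
        for j, target in enumerate(col_targets):       # match column totals
            s = sum(row[j] for row in m)
            if s > 0:
                for row in m:
                    row[j] *= target / s
    return m

seed = [[10.0, 5.0], [2.0, 8.0]]                       # hypothetical seed flows
flows = ipf(seed, row_targets=[20.0, 10.0], col_targets=[12.0, 18.0])
print([round(sum(row), 1) for row in flows])           # row sums converge to targets
```

The seed pattern supplies the structure; the margins supply the control totals, which is the same division of labor as using non-zero CFS cells to inform estimates for the empty ones.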
In terms of issues that arise from the CFS, to summarize. First, in terms of zero cells, it's a good method, but it can be problematic for the data suppressed for proprietary reasons: because that data is going to be very different from the existing data -- that's the reason it was suppressed -- using the existing data to estimate it is going to be problematic in terms of having a comfort level in the estimated values.
Additionally, there's no field data used to validate the out-of-scope sectors, so it's hard to tell if the relationships established in this process match what's happening in the field. Even if they do match on average, the relationships will differ across localities; Arkansas may have different relationships for farm shipments than California, Florida, or other locations. Again, this is something that's not verified in the current database.
In terms of TransSearch, it's a privately maintained database of county-level freight flow data. The key feature of the methodology that makes it different is that it relies more heavily on economic output data and employment data.
Company that produces these has access to more than the publicly available information. They have a proprietary motor carrier data exchange, basically a database of commodities for major trucking firms in the country,
they use this to provide additional information in terms of how trucks are distributed across the country. They still rely on the CFS for the short trips; the proprietary database tends to cover the larger trucking firms, sampled for longer truck trips. So at the local level, TransSearch is subject to similar issues to what you see with the CFS.
In terms of the [indiscernible] project I mentioned before, developing sub-national commodity flow databases: something we found in the literature review is that there have been several attempts by state DOTs and MPOs to disaggregate the FAF database down to a local level to do freight planning. There have been a lot fewer efforts to build data from the bottom up, using surveys or economic data to understand how truck flows are occurring at the local level.
There haven't been a lot of [indiscernible] out in the field. The early data we are getting indicate this method is more accurate for some commodities than for others.
One effort actually looked at the FAF 2 database and the relationships between the commodities being estimated and different socioeconomic data at the FAF regional level. You can see, in terms of the R-squared developed by regressing these variables -- the fit between the socioeconomic data and the commodity being estimated -- it varied significantly, ranging from 11% to 90%, and we are showing the lower scores here. So yes, this methodology works for some commodities, not others; in those cases we have to come up with other methods for the ones that can't be estimated well using socioeconomic data.
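The R-squared comparison described above can be pictured with a toy calculation: regress a commodity's regional tonnage on a socioeconomic variable and see how much of the variation it explains. The data below are invented solely to contrast a good fit with a poor one:

```python
# Ordinary least squares on one predictor, returning R-squared: the share
# of variation in y explained by a linear function of x. Invented data.

def fit_r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical regional employment vs. tonnage for two commodities
employment = [10, 20, 30, 40, 50]
tonnage_tracks_jobs = [12, 21, 33, 39, 52]    # follows employment closely
tonnage_unrelated = [40, 5, 33, 12, 28]       # little relationship

print(fit_r_squared(employment, tonnage_tracks_jobs))  # near 1
print(fit_r_squared(employment, tonnage_unrelated))    # near 0
```

A commodity in the first situation can be disaggregated with socioeconomic data; one in the second needs another method, which is the point being made about the 11%-to-90% spread.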
In terms of the overall steps we will apply for NCFRP 20: looking at a range of applications attempted at the local level; identifying supply chains that have a high value in terms of dollar amounts or tonnage, or that are problematic in general, such as the import flows I mentioned before; and assessing the methods that we reviewed in the earlier tasks in terms of how well they can address these generic supply chains. We will actually test out some of the methods, collecting small amounts of data, and ultimately develop a guidebook that will describe the different data sources, their strengths and weaknesses, and talk extensively about how we can supplement some of these national databases using local freight data collection efforts.
I did want to spend some time talking about freight data collection efforts done at the local level in Washington.
As we go through these, we can see whether the methodologies are similar across commodity types or whether different commodities require different processes.
The first example is a potato production example, and the second is a diesel production example; both are from the state of Washington, done by the Washington DOT along with one of the universities up in Washington.
For the potato production example, they started by looking at the production data available from the U.S. Department of Agriculture. One thing to note in this example: the potato production example had a lot of data available from a single source that they were able to leverage -- where the potatoes are being produced, the actual farm locations. The next step was to understand where the processing facilities are.
And where are they located, what percent of the potatoes grown are going to these facilities as opposed to end markets, being delivered fresh.
This was done by using some of the experts associated with the Washington state potato commission, interviewing them and understanding where they are located, how big, input sources in terms of where they get the potatoes from.
Ultimately they used a survey the commission had done to understand how the origins and destinations played out between where the potatoes were grown, where the processing facilities are located, and what some of the destinations were for some of the goods as well.
After the flows were distributed, they were assigned to trucks and routed across the Washington state highway network to understand which corridors were most important in terms of moving these goods.
This can be contrasted with the diesel supply chain example which did not have a single source of information the way the potato example did. You have information, bits and pieces across a lot of agencies.
The Washington Department of Agriculture -- the Department of Ecology regulates underground diesel storage tanks; the Department of Revenue in Washington is responsible for assessing and collecting fuel tax at terminal locations; and other agencies are more specific in terms of the activity they monitor -- agencies that monitor pipelines, water-borne activities, et cetera.
It was a long process of pulling together data from different sources that led to a combination of estimated and actual data to get to the final result, which was understanding the flows for the diesel supply chain. You can see the complexity: the terminals, the arrows, the flows across modes. The process of assigning a number to each arrow from disparate data sources became much more complex than the potato example.
In terms of the conclusions from looking at these examples, the main thing is that some commodities will be easier than others. Whether there's a single source of data is important.
For the more complex commodities you will probably require a mix of actual data that can come from receipts or economic output data, and some estimated data.
You can generally find production data relatively easily, but distribution data, on the other hand -- specific origins and destinations -- is more difficult to find. Where you have industry experts, you can get, if not actual data, at least people to sanity-check some of your estimations at a local level.
Also, because there are different levels of complexity, you will probably not want to develop a bottom-up database where you are trying to get this information for all commodities in your region. You are probably only going to want to do this for select commodities: high-volume ones, ones critical to the local economy, or things with a particularly complex supply chain you want a good understanding of.
It's important to understand you have options when some of the national freight data sources are not matching what you see at the local level -- other ways you might be able to use local data to understand what the commodity flows are for specific commodities. And that is the conclusion of my presentation.
Thank you, DK. We have a few questions we will get to at the end. I will now turn it over to Michael Anderson of University of Alabama at Huntsville.
You can go ahead when you are ready.
Thank you. My presentation will talk about using some of the federal data D.K. just talked about, combining it with local information to do some case study modeling of freight in Alabama.
The goal of all this was, the process of developing freight origin/destination matrix.
These OD matrices were to be embedded into typical state-wide or MPO travel demand models. Typically, in most locations, freight is not dealt with in the travel demand model; it's basically a passenger car situation, and if you try to do freight, it's only implicitly brought in. We are looking to explicitly model freight.
At the top you have the national freight data sources we were talking about, which basically take a big-picture view of freight. On the lower end, you have local surveys, such that I, as an MPO within a state, can go out, ask questions, and collect freight data from a couple of main shippers in my community. But the reality is most of this falls somewhere in the middle, where we are looking to drive the top down -- disaggregate the national freight data to something useful -- while using local information to build up from the bottom, meet somewhere in the middle, and use it for freight projects.
The way this is set up, we designed this integrated freight planning framework, and it starts here on the side. You have the FAF 2 data and local industry survey information; we disaggregated the FAF 2 data and aggregated the industry survey information into current and future years, and developed the information based on freight analysis zones. These are somewhere in between the federal regions and what you get at the traffic analysis zones inside an MPO; there's some level where freight is appropriate. The process goes through distributing the freight trips, assigning them to the network, doing analysis, and finally looking at system outputs and performance measures, to see what freight's really doing in your community.
First, I will talk about the Alabama state-wide model. Stepping back a little to talk about the Freight Analysis Framework, version 2.2: it has 114 zones, [indiscernible] ports of entry, 43 commodities, and seven modes.
Looking at this information, trying to model for a state, we're down here in the corner, Alabama. We are lucky enough to get two zones, an area of eight counties around Birmingham and the rest of the state of Alabama.
If you are trying to do state-wide freight modeling, two zones, you essentially get trips from the Birmingham area to the rest of Alabama, one road connecting between them, that's all your freight travel, not really good enough.
We are trying to get freight trips between counties on major state and U.S. highways.
What we need to do is come up with some methodology to aggregate two zones into a larger number of zones. Basically the logical decision was to try to go to the county level. We can go from two zones to 67 counties.
What we found in doing the analysis was that using county personal income and value of shipments [indiscernible] gave the best results. The personal income allowed us to incorporate a population/income factor that really represented buying power, because we have counties in Alabama that may have a lot of people, but not a lot of income -- not a lot being brought in. Other counties have fewer people but a lot more discretionary income, with more freight activity associated, especially in the mixed-goods type. The value of shipments captured what was being shipped; there are a lot of situations in Alabama where industries are transitioning from direct labor into automated facilities, shipping more value while hiring fewer people.
We can pull the 114 zones together and look at an assignment of everything flowing through. The blue is everything passing through -- the external/internal picture of where it's going -- and we looked at truck crossings at the state line: how many trucks per day were crossing, compared against the FAF 2 map I showed before, and how many pounds per truck, to give us an idea that we were getting good information about pass-through trips, trips with a start or end outside the state of Alabama.
Now we can dig in and start modeling, and everything will be great. There's actually a lot going on with freight. We went from two zones, initially. There's a variety of freight moving in just Birmingham, the eight-county area, and a lot of freight moving in the rest of Alabama -- from FAF 2, going from zone 1 to zone 2, and inside and outside to the rest of the country. We looked at nine specific trip purposes in a model for freight strictly from the FAF 2 standpoint.
Building in the local survey information: when we start getting into the county level, we start looking at some sort of equation to disaggregate everything. For each county, we pull out population, personal income, employment, and value of shipments, and use the county-level values in some combination with a weighting factor to disaggregate the freight to an appropriate level. The key elements were the weights on the personal income variable and the value of shipments variable for the particular county, which gave us the best results.
We set up experiments, weighted, came up with an optimal solution. We got reasonably good projections of freight traffic that compared to freight traffic that was counted by the Alabama Department of Transportation for us.
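The weighting idea described above -- splitting a zone total across counties using shares of personal income and value of shipments -- can be sketched as follows. The weights and county figures are hypothetical; the actual study calibrated its weights against counted truck traffic:

```python
# Disaggregate a zone-level freight total to counties using a weighted
# blend of each county's share of personal income and value of shipments.
# All figures and the 50/50 weighting are hypothetical.

def disaggregate(zone_total, counties, w_income=0.5, w_shipments=0.5):
    total_income = sum(c["income"] for c in counties)
    total_shipments = sum(c["shipments"] for c in counties)
    freight = {}
    for c in counties:
        share = (w_income * c["income"] / total_income
                 + w_shipments * c["shipments"] / total_shipments)
        freight[c["name"]] = zone_total * share
    return freight

counties = [
    {"name": "A", "income": 30.0, "shipments": 20.0},   # higher buying power
    {"name": "B", "income": 10.0, "shipments": 30.0},   # higher shipment value
]
freight = disaggregate(1000.0, counties)
print(freight)   # county shares always sum back to the zone total
```

Varying w_income and w_shipments and comparing the result against ground counts is, in spirit, the weighting experiment described above.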
Moving this forward, we said we could break it down, model the state, using counties essentially as our freight zones. After that we wanted to see if we could go into an urbanized area, a sub-county area. We entered a project,
working with Mobile, Alabama on the Gulf Coast, trying to come up with freight to build into the urban model. One reason we chose Mobile is there's a lot going on. It's the crossroads between Interstate 10, running East/West,
and Interstate 65, North South, a lot of freight, a tunnel under a portion of Mobile Bay, highly congested with freight activity, a variety of railroads, waterway, steamship company, trade zone, trucking companies,
a lot of freight activity happening in Mobile. Before we started there was actually no freight modeling being done. Essentially the Alabama DOT was collecting the traffic count,
essentially coming up with a roughly estimated percentage of truck trips on all the roads. In the modeling process the truck trips were based on an estimated portion of non-home-based trips. There was no way to get the numbers in Mobile high enough with this methodology to reflect roads dominated by truck traffic. You add to that the fact that trucks are not explicitly modeled for internal/external trips -- they were modeling cars only -- and they had a community with a variety of alternative modes of transportation, mainly trucks, running around the community.
What we wanted to do was drive down and get sub-county. It became a question of what to collect from the local industry to disaggregate below the county level. When we disaggregated for the state, we used one county compared to other counties; for example, the value of sales or shipments in one county could be directly compared with another county to see which would be dominant. Once we got sub-county, we needed to shift focus to what the specific industries in the county have to provide. What can they give us? How can we work with them? Build relationships with them to use their information for planning purposes.
What we developed was essentially a two-page data collection tool.
Members or staff from the Mobile MPO would go out and ask key questions to develop the information we were looking for. The business description, number of shipments made or received over the course of a week or month.
We wanted to know the compass direction for the shipments. This was important because we didn't want to ask them who the freight was being brought in from or shipped to; we don't really care in that aspect. All we want to know is what compass direction: East, North, or West -- essentially, South means you are going out into the Gulf of Mexico from Mobile. If I know where your business is located and you tell me, "I send seven trucks a week that I load up and send East of here," from that information we can pretty much figure out how much freight you are moving based on what you do and the direction it's going -- how many trucks you are putting on the road, where they are heading, and how they are getting out of town. For these freight trips, generally, there's a direction they are going out of town, and you know the main routes out of town. We were looking for compass direction and number of shipments.
We also wanted to work in the weight of the shipment -- we developed a database for converting tons to vehicles -- and the value of the goods, and finally, we asked the shippers about any transportation problems at their location.
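The survey items just listed -- shipments per period, weight, and compass direction -- are enough to estimate truck trips by direction once you assume a tons-to-vehicles conversion. A sketch, with an invented average payload and invented responses:

```python
import math

# Convert survey responses (shipments per week, tons per shipment, compass
# direction) into weekly truck trips by direction. The 20-ton payload is a
# hypothetical tons-to-vehicles factor; the responses are invented.

TONS_PER_TRUCK = 20.0

def truck_trips_by_direction(responses):
    trips = {}
    for r in responses:
        weekly_tons = r["shipments_per_week"] * r["tons_per_shipment"]
        trucks = math.ceil(weekly_tons / TONS_PER_TRUCK)  # round up: partial loads still need a truck
        trips[r["direction"]] = trips.get(r["direction"], 0) + trucks
    return trips

responses = [
    {"direction": "E", "shipments_per_week": 7, "tons_per_shipment": 18.0},
    {"direction": "N", "shipments_per_week": 3, "tons_per_shipment": 25.0},
    {"direction": "E", "shipments_per_week": 2, "tons_per_shipment": 40.0},
]
print(truck_trips_by_direction(responses))  # {'E': 11, 'N': 4}
```

Combined with each shipper's location and the known routes out of town, a directional total like this is what lets you place trucks on the network without ever asking for a proprietary customer list.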
It gave the shippers input into the MPO process. A big part prior to doing the survey was convincing them we were trying to collect information that would make their jobs easier. We want to help them, make sure they can get product into
and out of the community in a timely fashion.
If they don't have a chance to tell us what the problems are, they are not going to be as likely to tell us what's going on out there.
Essentially we gathered the information, started looking at where it's going.
Again, nothing proprietary: we just needed to know how much it weighed and where it's going, what direction. We worked with someone at the MPO who conducted the bulk of the surveys. We collected several hundred, so we could build a database of information -- here's the direction, the quantity, what freight is doing inside the MPO -- and then we had a basis to disaggregate.
We started working with the MPO model and a variety of trip purposes. External: anything from Florida to Texas, at some point on I-10, is passing through Mobile, and you have to work with it. Stuff coming into Alabama: if it's coming from the New Orleans area and going to Birmingham, it probably came at some point through Mobile; it's just the way into Alabama. The port traffic into Mobile County, all this location information -- a variety of trip purposes, and we had to figure out how to work with all of it.
Finally we got to the Mobile model -- a single origin/destination file, freight only. We're talking trucks: a truck origin/destination file that could be modeled in the network. The way it was actually modeled in the network, it was assigned as a pre-load to the transportation network. The freight trips were assigned onto an empty network, such that a freight trip -- for example an external/external freight movement -- would not get bumped onto a side road, as could potentially happen if you load freight with the passenger cars and the assignment technique moves trips to alternate paths. In this situation the freight had first crack at getting on the network; the ability to pre-load gave us good freight modeling information that Mobile could use for assignment purposes.
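The pre-load described here -- freight assigned to an empty network first, passenger cars added on top -- can be sketched with a trivial fixed-path loading. Link names, paths, and volumes are all hypothetical:

```python
# Pre-loading: assign freight trips to an empty network first, then add
# passenger trips on top of the resulting link volumes, so freight keeps
# its preferred paths. Trips are (path, volume) pairs; everything is invented.

def assign(trips, volumes):
    # Trivial fixed-path (all-or-nothing) loading of each trip onto its links
    for path, volume in trips:
        for link in path:
            volumes[link] = volumes.get(link, 0.0) + volume
    return volumes

freight_trips = [(("I-10_W", "tunnel"), 800.0), (("I-65_N",), 500.0)]
passenger_trips = [(("I-10_W",), 4000.0), (("tunnel",), 2500.0)]

volumes = assign(freight_trips, {})           # freight gets first crack
volumes = assign(passenger_trips, volumes)    # cars layered on top
print(volumes)
```

In a real model the passenger step would be an equilibrium assignment run against the pre-loaded link volumes rather than a simple addition; the sketch only illustrates the ordering that keeps freight from being bumped to side roads.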
The overall conclusion is the ability to model truck trips. We didn't do regional, but this gives you the ability to do state-wide and local -- regional you could do as well -- by combining in the local information; that's the important thing to take out of this. Yes, the FAF 2 does provide a large amount of information, but unless you go get local freight data at the county level to understand the differences between counties, and then go sub-county, talking to individual shippers, it's hard to disaggregate that FAF 2 data in a way that's useful -- something we haven't been doing before.
That's the end of what I have to say.
Okay, we will move to our final presenter.
Joel Falter: I will talk about our experience in conducting origin-destination surveys in California -- the trials and tribulations of getting good data. I specifically want to talk to you about how the data has been collected, what it's being used for,
the challenges, lessons learned in collecting the kind of data many of you are interested in, the kind that goes into the previous presentations you saw, and very briefly, some of the findings.
I will start with some background on what we have done.
So far to date we have collected and performed OD surveys on three corridors in southern California: Highway 395 and U.S. 6 and 14, for CalTrans, the California Department of Transportation; State Route 58, a combined effort with the San Bernardino Associated Governments; and the I-5/SR 99 corridor study for [indiscernible]. Currently underway, we have recently been commissioned to conduct a rural roads study, which will be the final piece in the puzzle for Kern County -- how goods and commodities flow in the San Joaquin Valley and other areas.
We surveyed [indiscernible] truck drivers and have been able to interview and talk to over 500 different firms dealing with logistics, trucking, or general goods movement.
We've collected an enormous amount of data for a relatively modest cost. The Kern Council of Governments funded three of the studies -- in partnership with SANBAG for the State Route 58 study -- the I-5 and SR 99 and rural road studies, and the 395 study was conducted by CalTrans.
The Kern [indiscernible] effort, for example, cost about a quarter of a million dollars and all four studies combined will have cost around a half million dollars.
It's possible to get a lot of data for a relatively inexpensive outlay of money if you plan correctly.
For your reference, I want to show the study areas, so you have an idea of where we've conducted these. This is the 395 corridor, on the eastern side of the State of California, running along the eastern Sierra from the state line in the North down towards the Mojave area.
The State Route 58 study: an East/West major highway that links to northern California and points North -- Oregon, Washington. From the West, it runs from Bakersfield to San Bernardino County and the Barstow area,
quite an extensive area was covered in that project.
The I-5 SR 99 corridor covers on the North end from TAOUL Ari county, Kern County to the county line with Los Angeles and the Lubbock area. It's quite an extensive area, major freeway
and highway corridors did present a lot of challenges as I will go through with you.
These state highways all provide access to regional and local shippers, manufacturers, and so on. It's an important piece of their puzzle.
You might ask why are they doing this work? What's the interest?
As you know, goods movement is a hot topic, something important to everyone.
Each agency had some unique interests and things to accomplish, as well as some similar needs; but the basic premise was to get a better understanding of goods movement and what its impacts and effects are on the local community
and the local environment. In Kern County's case, officials recognized truck traffic was growing year after year, and not only on the interstates; they recognized it was growing but didn't have a good way to explain it
or how it would affect planning decisions in the future.
Combining these studies, we set out to get a good understanding of what type of truck traffic is out there: what goods are being moved on our highways, where they are going, and how many shipments occur at different times of the year.
Are there seasonal variations, what are their needs?
What's happening at the interstate and intrastate level, and what are people doing around the country? We conducted literature reviews to find out how other people are dealing with goods movement,
how they are studying the impact of goods movement on their local roads and highways, and what it means from a planning perspective.
With all this data in hand, these agencies have a lot of tools and resources to do a lot of different things. For example, in CalTrans' case, it allowed them to have a better understanding of when and how trucks travel,
to help them plan long-range infrastructure operation and safety planning.
Safety planning was a key interest to CalTrans and many other agencies. As you saw from the figures, on the eastern side of the state and in parts of Kern County,
knowing the hazardous materials that move on our highways allows them to plan for emergencies, for example. Many fire departments and local agencies may not have the resources in hand to handle a big emergency.
It gives them a better understanding of what's traveling through the area.
Kern COG, SANBAG, and many of the agencies around the country are very interested in goods movement from a forecasting perspective, the forecasting process; there was a previous presentation on modeling,
and much of the information we collected is going into the modeling process these agencies are developing.
In addition to the state and regional level, the data we collected allows local agencies to tap into what's coming down the road. How do we start to plan for trucks as they get off the freeway, off the highway?
We think of truck travel as interstate or interregional, but there are a lot of localized impacts that have to be taken into consideration when we see warehouse and distribution facilities popping up. How do we plan for those trucks
and have the right infrastructure in place to handle those types of movements?
Air quality is an important issue here in southern California. We are under direction by the state to assess greenhouse gases. We have a resource that can be shared throughout the state, and of course throughout the region.
Moving on, I can't overstate how much goes into getting this kind of data. It is a very, very labor-intensive effort and a very detailed process. It involved collecting automatic vehicle classification counts on the roads
around the clock; peak-period manual classification counts at key locations, interchanges, and roadway and highway junctions; and 24-hour, around-the-clock truck driver intercept surveys.
There were also [indiscernible] surveys we did with one of our study partners, the Toyota group, to supplement what we didn't get from the truck drivers. Counts
and surveys were conducted in different seasons throughout the year to see if there are variations between spring and fall in what goods move when. A new challenge right now is on the rural roads project,
where we are going to employ video technology because of some physical right-of-way limitations I will be getting into in my presentation. So a lot of different things have gone into the data I will be talking about.
The biggest challenge you will face undertaking this kind of effort, you have to keep in mind, is that unlike a trip generation study or a parking study, we're not collecting this kind of data at the mall.
You are literally collecting data from a moving target. You have to keep in mind, when doing studies like this, the physical environment: where am I going to conduct this study? We did these projects on interstate highways
and major state highways, so these are high-volume, high-speed facilities. We needed to find places that are safe to conduct the study, places where you will minimize interruption to traffic flow and traffic operations,
maintaining driver safety at all times; minimizing disruption to commerce; and having the right people to do the job. Having the right people is critical in terms of data collection and data interpretation.
Then there's getting meaningful data. It's great to say we want to get truck flow information and find out what trucks are carrying and where they are going, but you have to know what questions to ask.
We spent a lot of time doing that.
Finally, you have to figure out: how do I get the information I want if I can't pull trucks over someplace to collect this data?
As for lessons learned, I will tell you, we are still facing new challenges; there are always new challenges and things to figure out,
and we have to come up with a new game plan.
Here's what we have learned to date, so if you are thinking about embarking on a detailed study, I can help you make it a success.
The best-laid projects always hit speed bumps; a project of this size needs overplanning and overcommunicating.
It's a continuous communication and coordination process, from the time you get started to getting people involved, to actually pulling off the surveys, keeping them going.
You constantly have to plan and replan, and talk to all the partners involved in this.
I will talk in a few minutes about who those partners are, so you can make sure it's a success. You have to provide adequate resources to ensure the success of the study, you need to have resources in terms of manpower,
resources in terms of equipment, when you are doing things around the clock there are a lot of things you need to have out there. While these have been cost-effective projects done to date, we had a lot of discussion
and dialogue with our clients to discuss what the needs are going to be and what it was really going to cost, what they were going to get for the cost to ensure they got what they wanted and we delivered what they asked for.
Here are some of the details of what we learned.
In terms of coordination, communication, obviously you need to start as early as possible.
I would say if you were planning on conducting a survey in the fall or spring, you want to be four to six months ahead of time in laying the groundwork for getting your survey done. Bringing the players together,
not only state DOT folks, but law enforcement, other lead agencies, and all your team members, including, if you are using them, temporary help, survey crews, and traffic handling providers.
Whoever you are working with, get everybody together so they understand what you are doing, why you are doing it, when you are doing it, and what everybody needs and expects.
As you go through the process you will find new needs, and new challenges and requirements will come up. Even though you may be doing a project like this for one agency, many agencies have multiple divisions. CalTrans, for example,
has a planning division, a safety division, a construction division, and highway operations divisions.
It's important, if you are a state DOT, that all of those people are on the same page and part of the process, so everyone's needs can be addressed, and so when you get out there and get the surveys going, you don't run into pitfalls or have things go wrong.
You have to have contingency plans for when things do go wrong, because things do go wrong.
For example, make sure the survey site is going to be available and in working order for the days you conduct the survey.
We had an experience where one of our survey sites was going to be closed for emergency repairs,
and that wasn't known until very shortly before we were going to begin the survey. We had an instance where a piece of equipment malfunctioned, a message sign
that had to be repaired in order to allow the survey to be conducted at a weigh station the highway patrol was manning to help control the flow of traffic; without it we were not going to be able to proceed.
When you are working, it's important to confirm everything will be in working order. Visit your sites far in advance and get as-built plans and aerial photos, so you can understand: how am I going to pull this off?
How am I going to check the surveys?
Where are the trucks going to stop?
Where is my staff being positioned? How will I deal with this much traffic? As you will see, truck traffic on interstates and high-volume highways builds up very quickly.
You need to have all of this in place before you get started. You really need to think ahead, and again, it's a continuing, ongoing process as you do it.
Staffing is very important, as you expect, whenever there's an interactive human process, people have to deal with each other, that creates challenges. Challenges from the staffing perspective will make or break your project.
When I say hire the right people, I know it sounds like a cliché,
but it's true. We learned this in some respects the hard way as we started, but got better over time: you have to hire the right people.
If you are using temporary labor to staff your surveys, you really need to specify that you get the right people and that they have the right skills: the ability to write and talk well, to understand what's being told to them,
and to articulate questions clearly to drivers. You are out in an open environment; people are in a rush or in a bad mood. You need people with a good temperament who will stay over the long haul and deliver what you need.
Bilingual skill is a plus. We have become more of a bilingual nation, and many drivers speak other languages; Spanish was a prevalent language among many drivers.
Depending on your area, it would be useful to have people who can speak a second language such as Spanish, just to enhance the effectiveness of the surveyors and the success of the survey.
When using temporary labor make sure you screen the surveyors personally.
In our experience we had temporary staff where there were personnel difficulties and challenges: people who were less enthusiastic in some cases than others, or who had other issues; they weren't the right kind of people for the job.
When you call an agency and say, I want to hire 50 people to conduct a survey, meet with those people; make sure they understand they will be outside through the middle of the night and have the temperament to deal with that.
Make sure they have the right stuff, if you will, to do this kind of work, and make sure you have the right people, the same people, with you throughout the conduct of your survey. You don't want crews here for the morning, going home,
and someone else being sent. You want continuity.
If you are out there 48 hours, as we were in some cases, you want the same people rotating in and out to make sure this is a success.
You don't want to have surprises or new people involved that don't understand. You want to plan staffing carefully, be mindful of rest periods, labor requirements of your state.
We had a case where one of our survey sites was fairly remote in terms of its physical location relative to lodging. We had the right coverage for our survey crews,
but we hadn't thought about the drive time for people to go from the site to the motel and back, which really impacted their rest periods and affected their comfort.
Having more people than you need is also a plus; it makes sure your crews can rotate and are fully rested.
Once you are ready to go, you have done all this planning, the site and people are set, everything is in place, and you start the survey tomorrow: check back in with law enforcement, the local DOT, whoever is in charge of enforcing operations in your area.
You might have met with the local commander of the highway patrol office or the field operations manager from your state highway department, but these organizations run around the clock and have people on different shifts.
The people you met with in the middle of the day may not be the people working in the middle of the night. You want to make sure someone coming upon a survey crew who didn't get the memo doesn't decide to shut the survey down
or wake you up at 3:00 in the morning asking who you are out on the highway. Check in with law enforcement the day of, or the day before, to make sure there are no problems. It's also important to make sure the survey sites are in working order.
It's important to quality control the survey process as you go: take a look at what people are writing down, listen to what they are saying, and make sure they are on message, on script, getting you the information you asked for.
Have good supervisors out there who can deal with problem surveyors or pressing problems as they come up. We are dealing with people; it's cold, hot, windy, and they are tired. You need someone to keep them on track
and deal with problems as they come up, whether from truck drivers or law enforcement or transportation officials.
You need to be respectful of everyone's time when you are out on the road, drivers are giving you the courtesy to pull over.
Be brief, get to the point, ask the questions, and let them get on their way. When conducting phone interviews, ask relevant, reasonable questions; don't send them off on a research project. These people are helping us,
giving us information and data. We want to be as mindful of their time as we possibly can.
It's important to also let people know what you are doing, why you are doing it, so they feel like they are part of the process, not just wasting their time. This is part of a bigger planning process, their opinion counts.
Very often people are angry and they vent: why did you stop me? Once they have gotten it off their chest, they are very willing to help answer your questions.
The final lesson we learned in survey operations is that, in fact, the lollipops don't work. Truck drivers don't like lollipops. We thought we would give them a treat for stopping, but they don't like them
and have a lot of suggestions on what you should do with lollipops. I suggest you save the resources and don't buy lollipops.
Our last lesson learned comes at the end of the survey process: you have successfully done all this data collection, thousands of surveys, and now you have to plan how to interpret it. You come back to the office with cases of surveys,
it's time to input the data so someone can make use of it.
Thinking ahead on how you will input that data is important.
Organize data by location so it's easy to catalog, and make sure the surveyors completed the entire survey, fully filled out, so you don't have surprises when you get back. Coding and simplifying the surveys is very important.
Different people hear the same thing differently, or interpret what's told to them differently; how they write it down is not always the same.
The less interpretation surveyors have to do, the better off you will be.
If you can pre-plan the types of answers you are looking for, with check-boxes and coded survey forms, it will make the process easier and the data collection and data entry effort easier and more effective.
As you are entering data you want to quality control that process to avoid mistakes, bad data entry, or misinterpretation of answers, so people don't catalog or summarize data incorrectly.
It's very important you QC this process throughout, and in fact, QCing the process and communication is something that needs to be done over and over from the planning process all the way through to the data interpretation
and analysis process.
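As a rough illustration of the coded-form quality control described above, here is a minimal sketch in Python. All field names and commodity codes here are hypothetical, invented for the example; they are not the actual survey instrument used in these studies.

```python
# Hypothetical set of pre-approved commodity codes on the coded survey form,
# e.g. AG = agricultural, TE = transportation equipment.
VALID_COMMODITY_CODES = {"AG", "TE", "MG", "MT"}

def qc_record(record):
    """Return a list of problems found in one data-entry record."""
    problems = []
    # Completeness check: every coded field must be filled in.
    for field in ("site", "direction", "commodity_code", "origin", "destination"):
        if not record.get(field):
            problems.append(f"missing {field}")
    # Code check: only pre-approved codes, no surveyor free text.
    code = record.get("commodity_code")
    if code and code not in VALID_COMMODITY_CODES:
        problems.append(f"unknown commodity code: {code}")
    return problems

records = [
    {"site": "Weigh Station 1", "direction": "NB", "commodity_code": "AG",
     "origin": "Bakersfield", "destination": "Sacramento"},
    {"site": "Weigh Station 1", "direction": "NB", "commodity_code": "produce",  # free text, not coded
     "origin": "Bakersfield", "destination": ""},                               # incomplete
]

for i, rec in enumerate(records):
    for problem in qc_record(rec):
        print(f"record {i}: {problem}")
```

The point of the check-box/coded-form approach is that a script like this can flag incomplete or free-text answers at entry time, before they are summarized incorrectly.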
In the little bit of time I have left, what did we learn from all this? We have a lot of goods movement data with endless analytical possibilities.
I don't have time to go into everything we have learned or what all the agencies are doing with the information, but they can share that with you.
I will talk about that at the end.
I thought I would take the last few minutes to show you what we did on the I-5/SR 99 study, go through how we summarized some of that data, and discuss a little about some of the things we learned. In Kern County's case,
we learned the county has become a crossroads of commerce; a lot of goods move in, out of, and through the area.
A number of logistics centers have popped up: IKEA, Target, and others. There's a lot of agricultural activity. This gave them a good handle on what's going on and how to begin planning to deal with this as their county grows.
The truck intercept surveys were conducted at six locations. They were conducted for 24 hours, in two seasons, the fall and spring, to get an understanding of whether there are differences in the types
of commodities being shipped throughout the year. We collected over 7,000 surveys. For the study area for these corridors, we relied on a combination of rest areas and weigh stations.
Our involvement included the California Department of Transportation and the California Highway Patrol, who were an integral part of the success of these projects and very important in helping us pull these off.
We produced a lot of data on the types of commodities most common in this corridor; the types are shown here: agricultural, transportation equipment, miscellaneous goods, and empty container shipments.
These represent the highest volumes of commodities traveling through the study area.
We have broken down distribution information in terms of where these trucks are going to and coming from; the information is available not only county by county and city by city within the state, but also state by state across the country.
We, and they, have an understanding of where all these trucks are going to and coming from. We have been able to break down and provide background data, for planning, modeling,
and air quality purposes, on what routes these drivers are taking. We not only know where they are going point to point, but the combination of routes they are taking. These are examples from this survey for northbound and southbound flows
and actual traffic assignments.
These are the actual routes drivers are taking.
The information has been broken down by product, is available in the database for every type of product we surveyed.
It gives them an understanding of what types of commodities are traveling where, when, and how; there are endless possibilities with the data they have.
There's dozens more graphics and tables available, as well as the raw data to help in the planning process.
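To give a flavor of how intercept surveys like these roll up into the origin-destination tables described here, this is a small Python sketch. The place names and commodity labels are invented for illustration, not actual survey results from these studies.

```python
from collections import Counter

# Hypothetical intercept-survey responses: (origin, destination, commodity).
surveys = [
    ("Bakersfield", "Los Angeles", "agricultural"),
    ("Bakersfield", "Los Angeles", "agricultural"),
    ("Fresno", "Los Angeles", "empty container"),
    ("Bakersfield", "Sacramento", "transportation equipment"),
]

# City-to-city flow totals, overall and broken down by commodity,
# the same kind of rollup that produces county-by-county or
# state-by-state tables from the full survey database.
od_totals = Counter((o, d) for o, d, _ in surveys)
by_commodity = Counter(surveys)

for (o, d), trips in sorted(od_totals.items()):
    print(f"{o} -> {d}: {trips} surveyed trips")
```

The same tally, keyed by additional fields such as route taken or time of day, supports the traffic assignments and seasonal comparisons the presentation describes.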
We conducted fleet operator surveys.
They were able to get a good understanding of who uses these routes, when, the kinds of things they are carrying, where they go, and so on:
how local users use these corridors and facilities. They have different needs than the interstate operators. It's just a good understanding, and a lot of data at the local level they weren't aware of, in terms of seasonal and local shipments.
There was also a lot of feedback from drivers that we passed along: concerns about driver safety and road maintenance, that they would like the speed limit to be higher, and where we should have bypasses or additional lanes added. A wealth of data from drivers and fleet operators.
That is my presentation. You see a link to Kern COG's website, they have been gracious enough to post all three of the projects completed to date.
They are PDFs, downloadable from the website. The data is available, and we're available to answer questions from others if they wish to conduct these kinds of studies. Thank you for your time.
Thank you, Joel, and to those who posted questions. We will now get to the Question and Answer session. I will bring up a poll to get an idea if there were other people in the room with you.
How many people besides yourself were in your room.
If you can indicate, not counting yourself.
I will go in backwards order, starting with Joel. Were the surveys able to distinguish between trucks and less distinct vehicles, between vans and pickups, plumbing and construction trades, and deliveries?
Joel: For the OD surveys we were primarily collecting data on heavy-duty trucks. We did, through the fleet operator and local surveys, collect information on local haulers like plumbing companies, gravel companies, and other local haulers,
and of course our manual classification counts picked up data on smaller trucks of that type.
We did not specifically interview pickup truck drivers to get an understanding of the flow of goods they were carrying.
Would you consider the -- as another data source and [indiscernible]
The answer is yes, I would definitely consider that to be a freight flow database. There are several. I talked about three, the most commonly used, but there are literally dozens of databases we identified,
summarized in terms of what's included, the sources, how it can be used, et cetera.
There's certainly a lot out there.
That's one of them.
Is there a way we can access freight data on arterial roads and highways?
That's a little trickier. The information Joel presented on OD in southern California is definitely one way to get that information:
you have to collect it first-hand.
There's no national freight data source that will give you information on arterials. State highways, if they are major, maybe; but typically those wouldn't be included in things you can derive from national freight data sources.
You have to look at local collection efforts, like the OD survey. Those tend to work well in rural areas. In the more urbanized areas I tend to think of things like truck following studies and establishment surveys to fill in the gap;
there's nothing off the shelf.
Can you discuss any of the proprietary issues associated with the [indiscernible] TransSearch database and the limitations that causes?
The main issue with TransSearch, for one, is that it's a bit of a moving target.
They update the processes and databases on an annual basis; the methodology is certainly not static. There are certain pieces of it where less information is provided, less detail, in order to protect the methodology they utilize.
It's a little more complicated when you are trying to complement or supplement the database by collecting local freight data, because you need to understand the process used in the database you are analyzing from TransSearch;
a lot of times you have to go back to the developer to understand what they utilized so you know how to complement it. It presents another layer of complexity in utilizing that database, but there are ways it can be dealt with.
The next question is for Mike, I think you covered this in the presentation, but the data set you used, was that based on 2002 freight flows or more recent data?
The FAF 2 data used was actually the 2002 data; however, in Mobile it was an interpolation between 2002 and 2010, because they were dealing with a 2007 base year.
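One plausible reading of the interpolation described here is a simple linear blend of the FAF 2 2002 base and 2010 forecast values. The function and the tonnage figures below are illustrative assumptions, not numbers from the Mobile model.

```python
def interpolate_flow(flow_2002, flow_2010, target_year):
    """Linearly interpolate an annual freight flow between the
    2002 base-year value and the 2010 forecast value."""
    fraction = (target_year - 2002) / (2010 - 2002)
    return flow_2002 + fraction * (flow_2010 - flow_2002)

# Hypothetical commodity flow: 100 ktons in 2002 growing to 140 ktons in 2010.
tons_2007 = interpolate_flow(100.0, 140.0, 2007)
print(tons_2007)  # 125.0, i.e. 5/8 of the way from 2002 to 2010
```

Other growth assumptions (for example, a compound annual rate) would give slightly different 2007 values; the linear form is just the simplest fit to the two FAF 2 data points.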
I don't see other questions typed in at the moment, so we will open the phone lines. If you are only listening over the computer, you obviously aren't able to ask questions over the phone, but you can type them in.
The operator will give instructions on how to ask questions on the phone.
Press star one to ask a question; to withdraw the question, press star two.
Unmute your phone and state your name. One moment for the first question.
At this time there are no questions in the audio queue.
For the presenters, were there questions that might have been sent directly to you that I didn't see?
This is Dike; I did not get any.
I did not.
If there are no more questions we will go ahead and close out a little bit early, then. While I am reading the close-out part, if you think of anything, feel free to type it in.
I want to thank everybody for attending the seminar. The recording will be available online on the Talking Freight website, and I will send an e-mail once it's available.
If you would like to receive the certification maintenance credits, make sure you are signed in with your first and last name, or type your name into the chat box if you were with a group of people. Download the evaluation form,
fill it out, and e-mail it to me.
If you are unsure what to do you can download the credit instructions.
The next seminar is on June 16 and will be about promoting economic vitality through enhancing freight transportation.
If you haven't done so already I encourage you to visit the website and sign up for this Webinar. The web address is on the right side of your screen and I encourage you to join the listserv if you haven't already.
We have two seminars coming up on June 1 and 2, just opened for registration today, more information through the listserv. You can register for those as well through the Talking Freight website. I will bring up that slide in a minute.
I don't see additional questions so I think we will close out.
Thank you, everybody, for attending today.
I think we already covered that question.
That covers everything today.
Thank you, everybody, and enjoy the rest of your day.
Thank you for participating in today's conference call. You may disconnect at this time.