Good afternoon or good morning to those of you to the West. Welcome to the Talking Freight Seminar Series. My name is Jennifer Symoun and I will moderate today's seminar. Today's topic is Non-Traditional Freight Data Sources. Please be advised that today's seminar is being recorded.
Before I go any further, I do want to let those of you who are calling into the teleconference for the audio know that you need to mute your computer speakers or else you will be hearing your audio over the computer as well.
Today we'll have three presenters, Crystal Jones of the FHWA Office of Freight Management and Operations, Dale Tabat of the Washington State Department of Transportation, and Alon Bassok of the Puget Sound Regional Council.
Crystal Jones joined FHWA's Office of Freight Management and Operations in October 2003. Prior to joining FHWA Crystal worked for the Department of the Army for 11 years where she held several positions in transportation and logistics including an assignment with the Office of the Deputy Chief of Staff for Logistics at the Pentagon. Crystal's primary area of expertise is with freight technology and operations. Crystal also has extensive experience in the areas of programming and budgeting and strategic and performance planning. Crystal holds a Bachelor's Degree in Industrial Technology with a concentration in Computer Science from Elizabeth City State University, and a Masters of Science in Administration from Central Michigan University. She is the program manager for FHWA's Freight Performance Measurement initiative.
Dale A. Tabat is the Truck Freight Program and Policy Manager for the Washington State Department of Transportation. His areas of focus are on integrating truck freight services within the department, truck parking, truck freight data, research and policy. He also serves as the division's liaison with the Washington Trucking Association and legislative staff. Dale has over 30 years of management experience in the transportation sector working in operations; sales; planning; and pricing positions for numerous companies including U.S. Xpress; Federal Express; Roadway; Rollins Dedicated Services, and Pacific Intermountain Express. He has also served on the board of directors for the California and Utah Trucking Association. Dale is also the chair of the "Guidebook for Sharing Freight Transportation Data" NCFRP panel.
Alon Bassok works for the Puget Sound Regional Council, where he is a freight economics analyst. He is responsible for coordinating region-wide traffic data collection and has extensive experience working with truck related GPS data. He has taught courses on sustainability in transportation for the University of Washington's Department of Urban Planning and received his Ph.D. from the University of Washington in 2009.
I'd now like to go over a few logistical details prior to starting the seminar. Today's seminar will last 90 minutes, with 60 minutes allocated for the speakers, and the final 30 minutes for audience Question and Answer. If during the presentations you think of a question, you can type it into the chat area. Please make sure you send your question to "Everyone" and indicate which presenter your question is for. Presenters will be unable to answer your questions during their presentations, but I will start off the question and answer session with the questions typed into the chat box. Once we get through all of the questions that have been typed in, the Operator will give you instructions on how to ask a question over the phone. If you think of a question after the seminar, you can send it to the presenters directly, or I encourage you to use the Freight Planning LISTSERV. If you have not already joined the LISTSERV, the web address at which you can register is provided on the slide on your screen.
Finally, I would like to remind you that this session is being recorded. A file containing the audio and the visual portion of this seminar will be posted to the Talking Freight Web site within the next week. We encourage you to direct others in your office that may have not been able to attend this seminar to access the recorded seminar.
The PowerPoint presentations used during the seminar are available for download from the file download box in the lower right corner of your screen. The presentations will also be available online within the next week. I will notify all attendees of the availability of the PowerPoints, the recording, and a transcript of this seminar.
One final note: Talking Freight seminars are now eligible for 1.5 certification maintenance credits for AICP members. In order to obtain credit for today's seminar, you must have logged in with your first and last name or if you are attending with a group of people you must type your first and last name into the chat box. I have included more detailed instructions in the file share box on how to obtain your credits after the seminar. Please also download the evaluation form from the file share box and submit this form to me after you have filled it out.
We're now going to go ahead and get started. Today's topic, for those of you who just joined us, is Non-Traditional Freight Data Sources. Our first presenter will be Crystal Jones of the FHWA Office of Freight Management and Operations.
As a reminder, if you have questions during the presentation please type them into the chat box and they will be answered in the last 30 minutes of the seminar.
Good morning everyone. Thank you for joining. I think that by the end of the presentation, you will have found it well worth your while to participate. I will go over a national initiative we have here at Federal Highway. It's not focused so much on data as on the process and institutional challenges we faced getting the initiative started, and the lessons learned with regard to establishing a partnership with the private sector to obtain a data source with long-lasting value for the transportation community.
I am from Federal Highway's Office of Freight Management and Operations, where we have a wide range of activities and missions. Primarily, our work focuses on understanding the magnitude of freight moving on the system, developing strategies, analytical tools, and institutional arrangements, promoting freight management, encouraging innovation, and handling truck size and weight enforcement for the Department of Transportation.
There have been several reauthorization proposals and positions put forth by agencies such as AASHTO, and the Department of Transportation put forth a reform strategy during the last administration. While they differ in some aspects, there are common themes that come forth in most proposals with regard to freight. Listed here are some of those key themes: defining the federal role in goods and freight movement, incorporating freight performance and accountability, promoting better management of existing assets, employing multiple funding sources for transportation projects and programs, and linking policy and funding to the environment and energy sectors.
What kind of freight data do we typically need as transportation decision-makers? Obviously there's a myriad of data. From the infrastructure side, we need data for the key corridors and gateways, we need data to guide today's investments and to prepare for the future, which requires forecasting to understand transportation and freight movement today and be able to think about investments in a longer term strategy.
How does freight movement affect congestion, and how does congestion affect freight movement? We need to understand how the system performs and operates, and to understand the policy and regulation side. We have competing priorities: what are the best investments to make, and how do we allocate scarce funding across different areas and programs?
I am sure most of us on the teleconference today have been involved with the freight data challenge. If we work in the freight area, we have been challenged with coming up with the right type of data and with harmonization among definitions. There may be a data source, but not at the level needed for a particular modeling effort. Harmonization is one challenge; another is using available data as a proxy for the needed data, such as using the flow of trade for geographical flows, or export ratios to come up with values for weight ratios, et cetera. There's also the lack of authoritative data sources, meaning there is no national freight data source we can point to as the authoritative source for making decisions.
We consider the initiative I'll describe to be a public-private partnership. Most information we need about freight movement resides in the private sector. When you talk about getting towards realistic and accurate data, it becomes necessary to get the source data from where the data originated. This leads us to think maybe public-private partnerships can give an opportunity for better freight data.
A lot of the work in the Freight Performance Measurement Initiative started out for Federal Highway internal purposes. We have a strategic goal of system performance, we want to be able to provide safe, reliable, effective sustainable mobility for all users. From a freight perspective, we wanted measures and data to depict how effectively freight is moving on the highway. That was our initial goal or purpose for pursuing this new freight data we have now acquired through a public-private data partnership.
The most compelling reason for private industry to participate in FPM is that they share common interests with us, such as improving operations, increasing capacity, and garnering funding to support programs. A lot of their reasoning for participating in this initiative was their belief that we will use the data to make better transportation decisions.
What is the Freight Performance Measurement Initiative? It started in 2003. It is a contractual relationship between Federal Highway and the American Transportation Research Institute. ATRI has contractual arrangements and nondisclosure agreements with vendors for data. The data is GPS based, and it measures travel time and speed for the highway system. In a typical month we have 500,000 trucks equipped with GPS and satellite equipment nationwide. We use that data, a location and a date-time stamp, to measure speed and reliability for 25 interstate highways with significant freight movement.
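To make the derivation concrete, here is a minimal sketch, not FPM's actual pipeline, of how a speed estimate can be derived from two anonymized GPS pings. The function names and the (latitude, longitude, timestamp) tuple layout are assumptions for illustration only:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def speed_mph(ping_a, ping_b):
    """Average speed implied by two (lat, lon, unix_time) pings."""
    lat1, lon1, t1 = ping_a
    lat2, lon2, t2 = ping_b
    hours = (t2 - t1) / 3600.0
    return haversine_miles(lat1, lon1, lat2, lon2) / hours
```

Aggregating many such per-truck estimates over a corridor is what yields the travel time and reliability measures for the 25 interstates.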
In addition to doing the work at the national level, we have worked with several universities on using the data at the state and local levels. Five case studies were done with universities in the United States: Portland State, the University of Minnesota, the Texas Transportation Institute (two different work groups within that institute), and the University of Wisconsin. The purpose was to see how we can begin to use the data in other transportation areas.
Portland State looked at recurring and non-recurring congestion and the impact on the metro area. The point of our research at this point is to go beyond using the data to derive the measures of speed, travel time, and reliability shown in the map, and to research how states and MPOs could use this data to support their freight business areas. The University of Minnesota used archived truck data to derive freight performance measures between Chicago and the Minneapolis-St. Paul area.
The Texas Transportation Institute used the national measures to research how the same measures have applicability below the national level. Another application we explored for this data is measuring border crossing performance. The Texas Transportation Institute used the same GPS data we use to measure travel time reliability on the interstate system to derive measures of border crossing time and performance in Texas, specifically El Paso and Laredo.
They overlaid two months of GPS data on a busy crossing in the Laredo World Trade Bridge area. This showed that you could use the same type of data to understand the border crossing process, not only from a total crossing time perspective, but for segments within a border crossing. For instance, it showed how long it takes to get from the Mexican export lot to customs.
One of the most interesting uses of this data has been by the University of Wisconsin, who are looking at measures having to do with network resiliency. They used the travel time data, average speed data, and the number of events observed before and during two severe weather incidents to come up with measures of robustness, the ability of a highway to withstand incidents without significant performance loss, and the ability to restore capacity after an incident.
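As a simplified illustration of the idea, not the University of Wisconsin's actual formulas, robustness and recovery measures might be sketched like this; the metric definitions here are hypothetical:

```python
def robustness(baseline_mph, incident_mph):
    """Fraction of normal performance retained during an incident,
    e.g. average corridor speed during a storm vs. a normal day."""
    return incident_mph / baseline_mph

def recovery_hours(baseline_mph, hourly_speeds, tolerance=0.95):
    """Hours after the incident until average speed returns to within
    `tolerance` of baseline; returns None if it never recovers in
    the observed window."""
    for hour, mph in enumerate(hourly_speeds, start=1):
        if mph >= tolerance * baseline_mph:
            return hour
    return None
```

For example, a corridor that normally averages 55 mph but averages 44 mph during a storm retains 80 percent of its performance; tracking hourly speeds afterward shows how quickly capacity is restored.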
Now, what are the lessons we learned through this effort? Some of the key lessons concern getting started. If you want to pursue this path of the public sector getting access to private sector data for use in freight planning or operations, the first thing you should do is become knowledgeable about the regulations, on both the public side and the private side, that restrict or facilitate access to the data. For instance, when we started, one of the main issues was how to keep the data anonymous; this data doesn't disclose particular information about a trucking company. Time and patience will be required. We started with a select few trucking companies and a few interstate highways. Through the positive feedback from the trucking companies that participated, we were able to convince more to work out agreements giving access to the data used to calculate speed, travel time, and reliability, not only on a per-company basis but for entire fleets. Now, instead of having individual arrangements with individual trucking companies, we have arrangements with large technology vendors that provide access to the entire fleets of the companies that subscribe to their particular product.
Probably one of the big lessons we learned in getting started was that this type of effort produces massive amounts of data, terabytes and terabytes. We had to carefully think through how to handle, collect, store, and process the data, and to collect what we need as opposed to asking for the world and not having the ability to collect, process, or analyze it.
There are reasons a vendor would say no; I will list some of the top concerns trucking companies and participants had in the beginning. The primary things we heard were that they are afraid of being burned by bad publicity or regulation. There's a thought process that preparing the data would put a burden on already overworked staff. They worried about improper use and mishandling of data. For instance, could the GPS data be used to set up speed traps? That's a far-fetched one, but it's the type of concern we had to deal with and assure them could be addressed in a way that didn't harm their business.
There was concern that the source data would be modified by the government to tell the story that you want. The fact is that there is handling and processing that takes the raw data and uses it to derive secondary measures. After modifying the data, does it really tell the right story? Those were the types of issues we had to address to get them to want to participate.
Some of the things we used to get them to say yes were the quid pro quo, meaning they get something in return for allowing us to use the data. As I said at the start, our first story was that we will use this data to come up with better freight data, allowing us to make better investment decisions. That was the original pitch. For this particular project there is financial reimbursement; a lot of times the quid pro quo could just be financial reimbursement. But considering the amount of data and the benefits, we think the price we pay is worth it, and the benefits far outweigh the cons that come with the price we pay for access to the data.
It is also important to have examples ready of how it could benefit them. For instance, I showed in a previous slide that dedicated freight programs and freight funding could come out of access to this data. If you don't have a good story behind how it could benefit them, at least have a good story behind how it won't harm them, meaning there is nothing proprietary about the data and it is anonymous.
To counter the concern that processing the data would cause a burden: for our program, a lot of the processing comes after we receive the raw data. Assurances were put in place specifying an agreed-upon process and what the processing would be, so they have assurance the data will be anonymous, with no proprietary information. We take on a lot of the burden of processing the data and minimize the impact on them in terms of packaging the data to provide us access to it.
There is also the concept of peer pressure. You can point to other partnerships already in place with other countries or states, always putting forth the notion that other people are doing it and showing examples of how it has been successful.
For our initiative, these were the main reasons they hesitated to say yes and how we went about addressing them. The main concerns the trucking industry specifically had with regard to giving access to GPS data were civil litigation, trial lawyers using the data in subpoenas, and the competitive, proprietary nature of the data: making sure you don't pinpoint a particular trucking company moving out of a particular city or give away data on their market share to their competitors.
There is also the whole issue of government access. As Federal Highway, we don't take possession of the data; it's managed by a third party, and access issues are handled by having that third party. We get secondary data products as opposed to raw data.
I will close by noting that one thing I pointed out was having examples of success stories, things that worked in the past. Federal Highway is completing a research project on a freight compendium, a compendium of examples of public-private data partnerships, for instance where the private sector shared data with the public sector. That research initiative is wrapping up now. The compendium should be available on our website; it will give examples of which partners were in the arrangements and the processes to get into the arrangements, and it will also cover some of the lessons learned. It will probably be available in the next month.
Lastly, I will put up this link. I thought I would be able to do a short demo of a tool we developed called FPM, Freight Performance Measurement; this link gives access to a web tool based on the travel time data for the 25 interstates shown in the map earlier. Jennifer will make this web link available in the chat before the end of the presentation, and we invite you to log on to the website and explore the data to see how it might be useful to your organization.
This is my vision of what the end state of quality freight data would be. I am sure most of you will agree, but I won't read it to you. I think that is it for me.
Thank you, Crystal. I put the link in the chat for anybody interested. I will turn it over to Dale Tabat.
Good morning from rainy Seattle. I am the Truck Freight Policy Manager for the Washington State DOT. On my title slide you see the name of Ed McCormack. Ed is my partner in this venture at the University of Washington, and he's our technical expert. Fortunately for him, he's on vacation this week, but if you have technical questions that I can't answer, we will send them over to Ed for an answer.
The Truck Freight Performance Measure Research project was a joint partnership between TransNow and WashDOT, with a lot of cooperation from the Washington Trucking Association. They were essential in getting our funding in order to move forward with this project.
Just some quick background here: in 2007 the Washington State Legislature appropriated money for the project, and we were able to secure some additional federal funding via the Washington TransNow center. It was put together in a bill in the budget directing us to track truck movements in the Puget Sound area and see if we could develop that data into a GIS map type format. The legislature appropriated $324,000 of the $428,000. The reason is that the original program was supposed to end in July of 2009, but because of difficulty getting GPS data we had to change the direction we were going in. Originally the thought was that we could get it directly from trucking companies in the area, but for some of the reasons Crystal talked about earlier, we wound up going to GPS third-party providers and buying data directly from them. The providers we have been buying data from are Trimble, Turnpike (owned by Xata), ATRI, and some Qualcomm data from a fleet outfitted with Qualcomm instrumentation.
One reason we wanted to put this program together was to look at how it benefits the state of Washington. There were three top reasons why we needed this program. Number one is future federal freight funding requests: all the bills we're seeing or hearing about on reauthorization talk about a freight component, and within that component is a request for performance measurement data. Number two is to increase the department's accountability to our citizens, whom we actually look at as our customers. It gives us the ability to look at a project before, during, and after construction and be able to say yes, it's doing what we told you it was going to do. Then, it also gives us the ability to look at where we have existing bottlenecks, choke points, or speed slow-downs, so that we can put our now-limited money where it gives the biggest bang for the buck.
Number one, future federal freight funding requests: as we said, everything we hear on reauthorization is about performance measurement. We have looked at three issues there: travel time, reliability, and access. The project has shown we can accurately track truck travel times and look at the reliability of the network with onboard GPS. This map gives us a zonal look at the number of trips in and between zones. We chose zones for another reason Crystal gave: so we aren't drilling down to a specific location. Once again, carriers get very nervous if data is put out in the public realm that their competition can see, or that could possibly be used to punish them or change route movements.
On increased public accountability to our citizens and customers, we decided to prove the concept. We had a construction project last May on the I-90 Floating Bridge over Lake Washington, replacing the expansion joints on the center roadway. So we had the idea: let's track trucks on that.
We were able to actually look at differences before, during, and after construction and see that there were changes in the traffic. We had the ability to look at both directions, the eastbound and westbound sides. We haven't analyzed what this means; it was just a proof of concept that we could look at changes in truck speed based on a construction project.
Another thing: we were able to take that road segment, as you see here on the map, and look at the spot speeds that exist on that road. You see a slow-down here to 25 miles per hour or less on this curve, and then on the bridge. This was information for a week in January.
Now, people in Seattle will tell you we already know there's a slow-down on this portion of I-90. Which is great, but if you are using this to get money for a project from the federal government and presenting it to a representative from Illinois, well, they don't know what happens at the junction of I-90 and I-5, this roadway here. What this shows is that we have a slow-down, a choke point, at that particular part of the roadway.
To go further, we looked at a year's worth of data and were able to see there's a variance in average truck speed westbound, and a larger variance eastbound. As to why: this piece of road has a distinct curve that causes trucks to slow down as they go through it, but the other thing is that as they come down at this point here, there's a set of express lane on-ramps that, if they are open, causes a slowdown in traffic going into the Mt. Baker Tunnel. That is one of the reasons we are seeing a slower speed there. The average of all vehicle speeds comes off of our loops in the roadway. There was another study that Ed McCormack did with the university using GPS data a few years back. They noticed that when the system showed free flow, truck speed off the GPS was 10 to 15 miles per hour less than the free flow speeds. The university sent out students with their cars to follow the vehicles and found the trucks were actually traveling slower than the surrounding traffic. The reasons for that are hills and congestion on the ramps; trucks are over in the right-hand lane, so they are slowing down for traffic entering or leaving the roadway. That does cause a variance in speed.
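The comparison described above, GPS-derived truck speeds per segment against loop-detector free-flow speeds, can be sketched as follows. This assumes the pings have already been map-matched to road segments; the function and field names are illustrative, not WSDOT's actual code:

```python
from collections import defaultdict
from statistics import mean

def spot_speed_gaps(pings, free_flow_mph):
    """Average GPS-derived truck speed per road segment, and the gap
    versus the loop-detector free-flow speed for that segment.

    pings: iterable of (segment_id, truck_speed_mph), already map-matched.
    free_flow_mph: dict of segment_id -> loop-detector free-flow speed.
    Returns {segment_id: (avg_truck_mph, gap_below_free_flow)}.
    """
    by_segment = defaultdict(list)
    for segment_id, mph in pings:
        by_segment[segment_id].append(mph)
    return {
        seg: (mean(speeds), free_flow_mph[seg] - mean(speeds))
        for seg, speeds in by_segment.items()
        if seg in free_flow_mph
    }
```

A large gap on a segment flags the kind of truck-specific slow-down, from curves, grades, or ramp merges, that the loop data alone would miss.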
This is just another presentation we can do: we can break it down in a pie graph, which gives a better visual. You can see that in the bottleneck area the slower speeds are more dominant than on the straighter areas of the system. One of the things we were asked to look at is, given that we can look at free flow on interstates, can we actually start looking at ramps? This is another proof of concept: we had the ability to go out there using this GPS data, plot the movements of trucks on the ramps, and hopefully get spot speeds to look at ramp and interchange performance on the network.
So, there are benefits and costs to monitoring truck performance on the state-wide network. The advantages are that it gives us truck speeds that we really have no other way to obtain; we do not have a system that differentiates a truck from a car on the freeway system. We can look at those trucks and provide that data to our trucking companies and shippers so they have a better idea of the delays and stops on specific routes, which helps them operationally hold a truck back or move it out earlier in the day to avoid the congestion spots. The big advantage is that the data is out there right now; we are buying it from third parties. I will tell you that in a lot of instances the third parties hadn't even thought of this. If you want to look at it this way, this is data waste: data they are storing for their customers, and unless the customer comes back to request it, it's really not being used.
The limitations are that you need to be researching this at all times; you need ongoing analysis. It's not a performance program that you can put out there and automatically let run; it needs to be managed. One of the other problems we've seen is that while it works very well on heavily traveled road segments, you need a lot of GPS data on local roads to analyze their performance. You also want reads that are short in the time between them. Our average reads right now are around 15 minutes apart; ideally we would like to see two to three minutes. Currently in the Central Puget Sound we are utilizing information from about 2,500 trucks. While that sounds like a lot, spread out over a couple hundred square miles it gets a little spotty in places. The more vehicles' information you can capture, the better off you will be.
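The coverage arithmetic behind that read-interval point can be illustrated with a simple back-of-envelope calculation; this is not the project's methodology, just a way to see why shorter intervals matter:

```python
def pings_per_hour(fleet_size, read_interval_min):
    """Expected GPS reads per hour across a monitored fleet, assuming
    every truck reports once per read interval."""
    return fleet_size * 60 // read_interval_min

# 2,500 trucks at a 15-minute read interval yield 10,000 pings per hour
# region-wide; shortening the interval to 3 minutes would yield 50,000,
# five times the density from the same fleet.
```

Spread over a couple hundred square miles of network, the difference between those two ping densities is what makes local roads analyzable or not.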
For our next steps, we went back to the state legislature in January and made this presentation to them, telling them and showing them what we had done, what we're able to do, and why we need to continue this, especially to justify federal funding and support our state freight investment decisions. We gave them three options: one was just to maintain the program in the Central Puget Sound; another was to increase to state-wide coverage of high-volume highways; and the third was to cover high-volume highways plus local truck corridors. In the Supplemental Transportation Budget that came out this year, the state legislature gave us an additional $122,000 to continue the program through June of 2011, and also changed the language expanding the area. We are now able to look at the whole state of Washington versus just the Central Puget Sound.
Getting state-wide coverage will help us develop benchmarks and performance measures on a wider basis. We can measure freight corridors and urban areas outside Puget Sound; we can go down to the metropolitan areas of Spokane and Vancouver, Washington. We will look at urban segments, and it gives us the ability to look at border crossings, which are very important to the economy of the state of Washington. Finally, for our GPS data, we have talked to the vendors about getting additional data extending 100 miles around the state. We will have a buffer zone of 100 miles into Idaho and Oregon to see trucks originating outside the state of Washington and where they are coming from.
Finally, with more state-wide performance measurements, we will be able to take this program and tailor it to different organizations and companies, and look at particular segments of it. The more data we can collect over a period of time, the better our measurements will be. It will help us support and validate a state-wide freight forecasting model. One thing we're working on at WashDOT is getting funding to develop a commodity flow survey and a map, so we can understand where freight is moving within the state of Washington and what routes are being taken, better understand the data, and develop analytical tools.
Finally, one aspect the university is working on is putting this into an internet framework where organizations can go on and look at data between two zones: the truck travel and performance measurements on the routes between those zones at different times of day and days of the week. There is my contact information and Ed's, so you can send us questions. Also, as Jennifer mentioned, I am the chair of panel 31 of the National Cooperative Freight Research Program. We are in the process of getting proposals in to build a guidebook on sharing freight transportation data. It basically goes hand in hand with what Crystal was talking about: a guidebook on getting data from the private sector and overcoming those obstacles. Thank you for your time.
Thank you, Dale. We're now moving to our final presentation, Alon Bassok of the Puget Sound Regional Council.
Thank you. I would like to expand a little on some of the things Dale and Crystal have spoken about, and talk about two projects we worked on at the Puget Sound Regional Council. We are excited, and nationally there is great interest building over GPS data, truck GPS data in particular; it allows us to do things we couldn't do before, and to do some things we have been able to do a little better.
I will talk about two projects. The first one compares vehicle speeds of trucks and cars; we completed it about a year ago through our congestion management process, supported by a STEP research grant. Dale alluded to the speed differences with trucks and how they move more slowly on the road. The second project, just now completed, is on trip generation: being able to use the GPS data to better understand the relationship between how many trucks come out of specific land uses and employment types. We used grocery stores as a proof of concept in one case study, but it could easily be applied to any other type of commercial, industrial, or manufacturing activity. Then I want to discuss where we're going next and how exciting the GPS data is for us and others as well.
We know trucks travel slower than cars, in particular on our freeways, and there's a variety of reasons for this. Most notably, they accelerate more slowly, take more time to decelerate, take more time to get up to speed on hills and to keep speed as they climb them, and they stay in the right-hand lanes on the freeways, which generally travel slower. Just knowing this isn't quite enough. The question becomes how much slower and on which facilities. That has wide implications for the travel demand model this work was meant to support, but also for any other model that takes truck speed information as an input, such as air quality models, and all of those models feed actual planning applications. Any analysis done with the results of the models can later be used for investment decisions. A better understanding of truck performance ultimately leads to the things that Crystal and Dale were already suggesting are important: being able to say which facilities are suffering, which need improvement, where we need new facilities, and so on.
In our model, as things stood at the time, on a regional level we did a fairly good job of estimating how many trucks were on the roadways when freeways and arterials were combined. But we grossly overestimated how many were on the freeways and underestimated how many were on the arterials. We wanted to understand the exact relationship of truck speeds in order to better represent what is going on with trucks not at the aggregate level, but at the facility level.
As Dale was mentioning, there's no better way to come up with spot speeds than to be able to pull trucks specifically out from other vehicles using GPS data. We were quite excited to be able to try this for the first time. Unfortunately, serendipity didn't work in our favor; we didn't get to use the GPS dataset that Dale was describing for this effort. But Ed McCormack and Mark Hallenbeck at the University of Washington were kind enough to let us use an earlier dataset they collected. This included about half a million points, and on the map it looks like straight lines. Between this particular dataset and the one Dale described, if you look at a year's worth of data, there's hardly a street that doesn't have trucks on it. Lots of data, and all the ensuing maintenance issues that come with that, as Crystal has already described.
The data here is from eight trucking firms, with 25 trucks equipped with GPS units. The problem we ran into was that no matter how wonderful this data was, we didn't have a comparably wonderful GPS dataset for passenger vehicles over the same time frame. For this purpose, it didn't quite work out so well. This became problematic: as you see here, the most comparable dataset we had was slightly different in time frame and included all traffic. We ended up with a result that, for the most part, trucks travel faster than cars, which, well, unfortunately, simply isn't true. So we had to do something else. GPS data is a wonderful tool and would be the ideal tool, but you have to plan these things far enough in advance to be able to collect all the data you need: not only truck data, but car data.
While this is the best possible type of data we could have had, we had to step back and say well, what's second best? I want to talk briefly on what we did to resolve the issue and complete this analysis.
In lieu of the GPS data we took speed trap data from the state's loop detectors on the highways and separated out vehicles by length classes. The shorter vehicles we called cars, and anything that fell into the latter three, longer bins we considered to be trucks. There are other things in there, buses, RVs, and so on, but not in sufficient numbers to give us concern about wanting to separate them out. Then, in order to figure out truck speeds and car speeds, we looked at 20-second intervals containing just cars or just trucks. It later turned out that this became problematic, because we were not looking at time intervals with both cars and trucks in the same 20-second window.
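The interval-filtering logic described here can be sketched in code. This is only an illustration: the record layout and the 26-foot car/truck length threshold are hypothetical stand-ins, not the actual WSDOT length bins.

```python
def split_speeds(intervals, car_max_len_ft=26):
    """Keep 20-second intervals containing only cars or only trucks.

    Each interval is a list of (length_ft, speed_mph) detections.
    Returns (car_speeds, truck_speeds) lists in mph.
    """
    car_speeds, truck_speeds = [], []
    for detections in intervals:
        if not detections:
            continue
        # Classify every detection in this interval by vehicle length.
        classes = {("car" if length <= car_max_len_ft else "truck")
                   for length, _ in detections}
        if classes == {"car"}:
            car_speeds.extend(speed for _, speed in detections)
        elif classes == {"truck"}:
            truck_speeds.extend(speed for _, speed in detections)
        # Mixed intervals are discarded, which is exactly the bias
        # discussed below: the slowest trucks and fastest cars dominate.
    return car_speeds, truck_speeds
```

The discard of mixed intervals is deliberate here, since it mirrors the limitation the speaker describes next.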
This is one example, at one location, of how the GPS data compares to the long-vehicle estimation for the AM peak, using only the GPS observations within a very short distance of the actual speed trap; even so, there are many more GPS observations. Across the board, all the speed traps showed a reasonable range of agreement between the GPS data and the long vehicles, such that we felt comfortable using the analysis. The GPS data is the best-case scenario, and here it is validating this other procedure.
We found that trucks travel about 10 to 15 miles per hour slower than cars, and this could be applied to the model's framework to even out what happens in the freeway versus the arterial assignment. To further validate, we wanted to see how things turned out over time. You see something quite strange happened: the 2006 traffic speeds for cars and trucks out-perform the speeds of the year 2000. While it's always great to sit in your office and fix regional congestion issues with the click of a mouse, that simply isn't the case. We had to do a little analysis on what went wrong and caused this discrepancy over time.
Again, we only considered the observations that had just cars or just trucks. What happens, essentially, is that we are looking at the slowest trucks and the fastest cars and missing everything in the middle, but we don't have the ability to pull out from the speed trap data which vehicles are specifically cars and which are specifically trucks.
We decided we had to move to the least desirable approach: doing things by proxy. A lot of times in the freight world, as you all know, we use whatever data is available, not necessarily the best data we would like to use. In this case we would have preferred to use GPS data, but we had to make one more simplification.
For this last version we used five-minute speed trap data and didn't bother trying to distinguish cars from trucks. Instead, we compared two different lanes on the freeways. The innermost non-HOV lane is a proxy for car speeds, assuming not many trucks make their way all the way to the left. The second outermost lane is a proxy for trucks: we didn't want to use the right-hand lane because of the many merging issues, which we thought would bring the trucks down to an artificially lower speed. So we used the second outermost lane to get a gauge of what truck performance would be.
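As a rough sketch of this lane-proxy calculation (the lane labels and record layout here are hypothetical, not fields from the actual loop detector feed):

```python
def lane_proxy_speed_gap(records):
    """Estimate how much slower trucks travel than cars by lane proxy.

    records: iterable of (lane, speed_mph) five-minute aggregates,
    where lane is "inner" (car proxy, innermost non-HOV lane) or
    "second_outer" (truck proxy, second outermost lane).
    Returns the percent by which the truck-proxy lane is slower than
    the car-proxy lane, or None if either lane has no observations.
    """
    car = [s for lane, s in records if lane == "inner"]
    truck = [s for lane, s in records if lane == "second_outer"]
    if not car or not truck:
        return None
    car_avg = sum(car) / len(car)
    truck_avg = sum(truck) / len(truck)
    return 100.0 * (car_avg - truck_avg) / car_avg
```

With this proxy, a car-lane average of 60 mph against a truck-lane average of 54 mph yields the roughly 10% gap the speaker reports next.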
What we see here using this method are slightly smaller differences overall between the peaks, and about a 10% difference between cars and trucks on the roadways, and ultimately this is what we ended up implementing.
Looking over time, things are somewhat more reasonable: 2006 performs worse than 2000, trucks are moving slower than cars, and we feel relatively confident this 10% difference is a reasonable estimate.
Then we looked across the methods, a quick comparison between the inner and outer lane approach versus the short and long vehicle approach. Even the most simplistic method leads to good results. We would like to have been able to use better data, but the GPS data still validated some of these methods and was useful in the context of validation. In the future it would be great to have more GPS data, with comparable car data, to have a complete picture of spot speeds and directly compare them on highway segments.
Our second project does use the dataset that Dale described. As he said, there are 2,500 trucks a day with 15-minute reads. You get richer information if you can get more frequent reads, but even at 15 minutes the data is incredibly large, and for the purposes of what we are doing here it's not necessary to get to a finer level of detail. It comes to something on the order of more than 3 million records a month. We now have data back to early 2008, and the data will continue to be collected at least through the summer of 2011. We take all this data and figure out how to put it on the road network; Dale showed images of what that looks like through the University of Washington work.
We have to figure out where trucks start and stop, and we want to know the intentional stops. Any stop of less than three minutes doesn't get considered an actual stop, since it could be due to traffic. We also want to validate through GIS analysis that these are legitimate stops at a point of interest. Say someone is using a not very well utilized side street as truck parking overnight: that shouldn't become the origin. You want the trip to start at the place they pick up goods and to end where they deliver them.
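The three-minute stop rule can be sketched as follows. The 50-meter "same place" threshold and the coordinate format are assumptions for illustration, and the GIS point-of-interest validation the speaker mentions is not shown.

```python
from datetime import timedelta

MIN_STOP = timedelta(minutes=3)  # shorter gaps treated as traffic, not stops

def find_stops(reads, same_place_m=50):
    """Identify candidate stops from a truck's GPS trace.

    reads: list of (timestamp, x_m, y_m) tuples sorted by time, with
    positions in meters on a projected grid. A gap between consecutive
    reads counts as a stop only if the truck stayed in roughly the same
    place and the gap lasted at least MIN_STOP.
    Returns a list of (start_timestamp, duration) stops.
    """
    stops = []
    for (t0, x0, y0), (t1, x1, y1) in zip(reads, reads[1:]):
        moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if moved <= same_place_m and (t1 - t0) >= MIN_STOP:
            stops.append((t0, t1 - t0))
    return stops
```

In practice each candidate stop would then be checked against land-use layers in GIS, as described above, before being accepted as a trip end.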
For this case study we used a month of data, 3 million reads, giving us 358,000 truck trips and a great summary from which you can pick out the statistics. Even with only 2,500 trucks, this data still gives us a good picture of how trucks are performing on the roadways.
For grocery stores we were able to look at a lot of trips over a three-month period and get a better understanding. Overall, we saw about 10 trips per tour, though tours differ. A really large grocery chain might send one truck from a distribution center to a store, and that's all that truck does, versus some of the smaller vendors, say beer distributors, which may make many small trips within a tour to restaurants, grocery stores, and so on.
So, since this is a subset of all trucks, we wanted to develop an understanding of how many trucks would really come to the grocery stores. We had a rate very comparable to the information from interviews with stores: a different research project Ed McCormack was involved in found that, more or less, in the Puget Sound 10 to 12 truck trips happen daily to each grocery store. This turned out to be half the observations from manual counts. Trusting our manual counters more than the interviews, in this instance we were able to factor the truck trips up by two, to say the GPS data represents half the travel to grocery stores. The same could be done for any industry, not just grocery distribution.
We can further separate this out by trips to different types of areas, from the largest metropolitan cities down to rural areas, looking at the number of daily truck trips. This allows us to do something we weren't able to do before. When you consider trip generation for trucks, you typically think about employment, assume each employee generates so many trips, and factor that up. We see here that for a refined industry, grocery stores, employment in fact has no correlation to the number of truck trips; it has more to do with the type of land use area they're in. This has allowed us to think more strategically about how to apply trip generation rates.
Lots of things can be done with the data. We can certainly continue the work with trip generation for any employment type, any time of day, and so on. There are lots of other useful metrics that come out of this: performance, speeds, times, reliability issues, and trip and tour rates for activity-based models. Calibration is certainly helped by such data, as opposed to calibrating to observed truck counts wherever you can find them.
Beyond the trip generation models, one thing we're quite excited about now is trying to develop commodity flow estimates with GPS data. There is quite a long way to go in developing a model that allows us to do that, but we're starting to think about it because on a sub-regional level, thinking corridor-specific, we don't have a good data source. The national sources, as good or not as they might be, don't give us that level of detail. Instead of trying to develop a model that parses out the national dataset to a corridor or specific facility, we are trying to do some estimation based on the GPS dataset. We're not only excited about this, but the whole effort is gaining excitement from folks around the country. For the vendors this data is a by-product, essentially a waste stream, so the quality is improving as there is a better understanding that there's a purpose for this data, and there's more of it available everywhere around the country. Others could easily implement this in similar ways and continue to do research on other uses for the data.
I have my contact information up here if you have questions beyond today; I'm happy to chat with anyone anytime about what we're doing with the data and how you might apply it yourself. Thank you.
Thank you, and thank you to those who have posted questions. I will start with those posted online, and I encourage you to keep typing them in. Once we have completed the ones online, we will open the phone lines to see if anyone has additional questions.
So, I am actually going backwards, since we just finished Alon's presentation, and I will start with the questions directed to you. The first question is: did you limit the data to roadways that allow truck movement or that trucks are expected to take, or did you keep all roadway data?
We used all the roadway data, but for the most part there aren't limitations on where trucks could go. I should say that we weren't really interested, for our purposes, in the route selection, as much as the origins and destinations. We had the stop locations of where the trucks were, regardless of route, we knew where they started and finished. We did not need to worry about routing. If you want to do routing, the 15-minute intervals make that somewhat prohibitive. Trucks on the freeway can travel a long distance over that time frame.
Could this information be used to model a possible change that directs through trucks to the innermost lanes in urban areas? A second question is: would this reduce congestion for all vehicle types?
You could certainly model it, absolutely. I am not sure it would be a desirable thing to do, just because of the conflicts as trucks move from the outermost to the innermost lanes. Generally you don't want those lane-change conflicts, especially because of the speed differences. You could certainly model what would happen if you did it. I am not sure what it would do; my guess, without having done the analysis, is that if anything it would have a worse impact on congestion, because you would be slowing down lanes that otherwise travel faster. So it might cause slightly more severe congestion than currently experienced. That's just my hunch. I haven't tried to model it.
We will go to questions for Dale. Have you coupled your data with train freight data to see the flows moving between the two parallel modes?
No, we haven't at this time. The data we are looking at now, until the legislature allowed us to go statewide, which we will be doing in the future, was within the central Puget Sound. It was a very compact metropolitan area where trains are not competing with trucks in that marketplace. So at this time, using just Puget Sound data, it would be difficult if not impossible to compare against data from trains, because for the most part they are all running long hauls out of the Seattle market.
Did you consider using the data available through FHWA such as the velocity data Crystal described?
Not at this time. We will look at that data as we expand statewide, but once again, this was in a metropolitan area and we were limited by the area we worked in. That data did not fit our needs at the time, but I think it will fit better for the statewide study. On getting data from third parties: one thing Ed did was go out with us and talk to providers, indicating what we were doing and trying to accomplish and how we would utilize the data. With those that agreed to participate, of course, we signed agreements; we had a contract and put together a pay schedule for the invoicing process. We also laid out what type of data and information we were going to get. There were privacy issues, and those were addressed by separating the company from the vehicle. What we were getting was positioning data, from which we could derive speeds, but we had absolutely no idea who the truck belonged to or what type of truck it was. In the future we hope to start getting that type of data so we can actually look at different vehicle combinations, sizes, weights, and so forth. With all the vendors we had agreements with, we also had nondisclosures that limited what we could do with the data and who we could share it with.
So, for example, when Alon and the PSRC approached us to utilize our datasets, we had to go to each of our vendors and get their permission in order to share the data. It wasn't the case that once you get the data you can share it with anybody. That's one way we address the privacy issues and keep data from being used in ways people may not be happy with.
Okay, thank you. Now we are moving onto some questions for Crystal. Can the GPS data be used to produce truck routings?
We have done, for instance, proof-of-concept work taking a city and looking at truck flows over a particular time: picking Dallas, for instance, and seeing where the trucks that originate in a particular area flow after eight or 24 hours. So from that perspective we have done a proof of concept, but not any widespread analysis of truck flows and network utilization. I think it could be done, but the caution would be that the GPS dataset is a subset of the population, so how representative would it be of all movements? There may be underrepresentation of some types, like local deliveries and service vehicles. The nationwide dataset I described is largely class 8 vehicles, typically longer-haul truck movement as opposed to localized movement.
Okay, next question. Regarding thought of financial reimbursement for the private sector for data, is there a funding source on a state level?
I think Dale's model of how they did it in the state of Washington is one model. I personally, unfortunately, do not know whether or not SPR or state funds could be used. Dale described the Washington State experience: they convinced and showed their legislators how this type of data could improve decisions on their freight mobility improvement program. That's one model, but certainly I would think some of the regular traffic data collection funds available in a state, whatever they use for their traffic data collection, should be available for this type of data too. I am not an authoritative subject matter expert on this, but I can take the question and see if I can find a better answer. I would say a model like the one used in Washington State is one model, but I would speculate any funds currently used for traffic data collection could be used to support this type of collection.
I believe the next question is for you as well. Does this include freight modes other than trucks such as rail or domestic water commerce? Discuss the needs and/or existence of comparable data for all modes to make national freight intermodal decisions.
The initiative I described was started originally as a highway performance measurement initiative. We did not endeavor in our initiative to collect the data for all modes. Dale talked about the National Cooperative Freight Research Program, and that program is looking at a more holistic method of data collection. The work to date has focused on highway trucks. The concepts of travel time and travel time reliability are measures that are used for other modes, but our data collection effort only focused on highways.
How do we get the velocity model data for Wyoming? I guess this could be for any state that is interested.
I put up the web link, it is freightperformance.org. As a state DOT or MPO, you can go in and request a user ID and password. For Wyoming, it would have data on I-25, I-80, and I-90. It is only historical data, available for 2009 right now. That website will give you the info. I just pulled up a query, and the website will give you I-25, I-80, and I-90; that data is available for the entire year 2009 for those three interstates. I can't think if there are others. Data for the 25 interstates is there, and any state can request access to it.
That web address is up again, freightperformancemeasures.org, for anyone interested. The next question I will put out to all the presenters: did anyone look at rural areas and speed data? Are truck speeds lower than those of smaller vehicles in rural areas on interstates or arterials?
The network file we used for the nationwide effort on the 25 interstates, which is also available on that web link you just put up, does not differentiate between what is rural or urban; the entire length of each interstate is available. You would have to assign a rural or urban designation yourself. But it is not limited to certain parts of the interstates; it's the entire interstate. Also, the data we have is largely class 8, so you wouldn't be able to differentiate between smaller and larger trucks.
And in our study, we are looking at the Central Puget Sound, which is mostly metropolitan area, so there weren't really a lot of rural movements to look at. As we expand across the state, we will have the ability to look at rural segments versus urban segments and get a better idea of speeds and movements in those areas.
This is Alon. It would be great to see that when it happens, but I suspect that trucks in rural areas would still travel slower than cars, just because of their operation: climbing hills, a car will take them faster than a large truck, and similarly with the deceleration issues. The gap should be smaller, with fewer merging issues unless people are coming on and off different facilities, but I would expect you would see the same sort of difference between trucks and cars.
Do any of the presenters have a feel for how many MPOs are collecting speed data in addition to volume data and household survey data?
I could take that. To the best of my knowledge, not very many. I think there's a handful that explicitly consider this in their modeling framework and their performance measurement products. There are probably a handful more that have the data or gather it from somewhere, but I don't think many more than a dozen are actually collecting speed data.
Any other presenters have thoughts?
I agree with Alon. I have not heard of a lot of the MPOs doing this type of speed collection.
I have not either, but one thing I have heard from MPOs when we present, particularly when focusing the data presentation on the interstate aspect of our initiative, is feedback that interstate data alone is not going to be enough for an MPO to make decisions or do sufficient modeling, because their portfolio includes many more facilities than just the interstates. That's a concern we incorporated and attempted to address: we want to build our database beyond an interstate data collection tool, to include data for off-interstate routes. Just having data on the interstates, while they carry most of the freight, is not enough to build into the MPO planning process; you have to have data for all the facilities in a region. So that doesn't necessarily answer the question of whether they are collecting it, but they feel it's a data need to have data on more facilities than just the interstates, and they evidently aren't collecting that data themselves.
Okay. We will move to a question for Dale. How did you determine your zone boundaries and what were your considerations?
Well, our zone boundaries were limited by what the state legislature told us we could do: the Central Puget Sound, a defined area. We set our longitude and latitude boundaries as a box encompassing the counties within the Central Puget Sound area. That is how we set the particular area we wanted to study, and that was also what we sent to our vendors to indicate we wanted data within it. Since everything in GPS moves on longitudes and latitudes, they have the ability to give us data only for vehicles that enter that box. Once that was determined, the vendors automated the system: a computer program would go back in, pull out the data we needed, and send it to the university in a file format on a weekly basis.
I can add to that. I interpreted that a little differently, and I apologize if this isn't what you meant, but the specific areas Dale was showing on the map in terms of travel times between locations are the traffic analysis zones for the region.
If you want to clarify the question, feel free to type in. Dale, this is another question for you. Has there been any integration of GPS data with CVISN or weigh-in-motion data?
Not at this time. That's an interesting question. It is something I am going to present to Ed McCormack to see if there's a way to combine that data.
The next question was sent just privately to me, but I will read it out as it is intended for everyone. Many permanent classification counters, such as ADRs and weigh-in-motion, and portable classifiers like WaveTronix and TIRTLs, can get speed-by-vehicle-type data. Might this supplement the GPS data you've gathered?
Absolutely, that would be great data to have if you can get your hands on the raw data. Certainly from a speed perspective, if you can get the classification data, the GPS data can work hand in hand with it and let you know what sort of subset of all trucks the GPS data represents. You can make comparisons: assuming all the classification equipment is operating well, you can see how the roadway is performing and then do the GPS-data-to-speed-trap comparison, and perhaps use a subset of the GPS data and factor up the other locations based on the speed traps. But you definitely need the raw data from the classification counters to do that.
Okay. The next question, what does the third-party data cost?
Well, it depends how much you are getting and what you are looking for. Across all the vendors we were getting data from, we were paying about $10,000 a month for the data we were collecting. So once again, the amount of data, how that data is sent, and what you are looking for will determine the cost. Along that line, one thing we have been hearing, especially from vendors, is that as technology improves and the vendors put new equipment out, that new equipment is actually taking more frequent reads. Where Alon said they were getting 15-minute reads, with new equipment they can get 5-minute reads and charge the same amount of money, because technology is driving down the cost.
I would say that the data valuation is going to determine the situation, and I think Dale hit the nail on the head that if you want it more frequently, you will have to pay for it. The effort we are leading at Federal Highway is research. Doing the math, we have about 500,000 to 600,000 vehicles nationwide and we pay less than 60 cents per vehicle per year. The valuation is based on a research program more than a programmatic approach.
As we have said, remember that to the third-party vendor this is a waste product. They are getting paid up front by their customer to monitor the truck and collect the data. This is the back stream of that business transaction, which they're holding in storage in the event the customer may need it down the road.
Not only are you not creating a cost center for them, you are creating a profit center. That's the key storyline. As long as it doesn't burden them or cost them, that's going to be the driving factor in how the arrangement is negotiated. I think the guidebook from the National Cooperative Freight Research Program project Dale is leading will talk about those data valuation issues. A document FHWA is releasing, hopefully this month, will talk about the costs of arrangements and some of the processes by which they were made. I think there will be at least 31 different data arrangements covered, not only here in the U.S. but Canadian efforts as well. I mentioned the sort of peer pressure thinking: we're not the only ones doing it. They are doing it in Canada and Europe, so there will be experience not only from here, but at least from Canada too.
Okay. The final question is: did the desire to have aggregated data rather than point data from your vendors help in calming proprietary concerns?
Yes, because once again, as I said, the vendors' ability to separate customer information from the data they sent to us took care of some of those privacy issues that are out there, especially within the carrier ranks.
Okay, one more question about privacy issues. Are there privacy issues with certain vendors selling telemetry without the fleet's knowledge or approval?
That question would, from my standpoint, best be directed to the vendor. We told them what our needs were, what we are doing with the data, and how we will handle it, and sharing it was a decision the vendor made. Now, whether there's something in their contracts that allows them to utilize the data with other parties under certain conditions, to be frank, I have no idea. I know there are carriers in some other studies that raised issues about their data being used, but it relates back to what Crystal had in her presentation, the fear that it would be used as a means of punishment. A lot has to do with who is getting the data and what they are using it for.
I will add to that. I think Dale is, again, correct. We don't necessarily go tell every fleet a particular vendor has that their data is included; what we have done is work with the lawyers of those companies and of our contractor, and I assure you, whatever we are paying them is not going to be enough for them to not consider the privacy issues of their subscribers. From that perspective, I think they negotiated agreements based on the best interest of their customers. There's no way they will sacrifice or jeopardize the relationships they have with their fleets for the small profit they may gain from participating in such an initiative. At the federal level, in the initiative we're leading, there were plenty of lawyers involved and any privacy issues were addressed. But again, it is handled on a case-by-case basis, and this model has been around at least six years. I am never going to say the issue won't arise: if, say, a non-attainment area is set up or truck restrictions are put on certain routes because of this data, that may cause a carrier to say they don't want to play anymore. The nondisclosure agreements in the case of the federal effort are written in such a way that a participant can say tomorrow that they do not want to participate anymore. We have had plenty of outreach to the carrier fleets that are participating, but again we can't point to any one and say yes, you are included. I think the storyline is that the benefits they seek, having objective and better data, far outweigh any lingering concerns they may have with regard to privacy, and the technology vendors have negotiated agreements in their best interest.
I think we are at the end now, so we are going to close out. I want to thank our presenters for the great information shared today, and thank everybody in attendance. The recorded version of this webinar, the presentations, and a transcript will all be posted online in the next few weeks, and I will send out an e-mail once everything is up there. If you are an AICP member and want to receive 1.5 certification maintenance credits for today's webinar, please make sure you were signed in with your first and last name. If you were not signed in, please type your name in the chat box. Please also download the evaluation form and send it to me once you have completed it. One other thing we are actually going to do right now is bring up a poll before you sign off today. We want to get an idea of how many people are in the room with you, if you are participating in a group setting. If you could fill in this poll before you sign off, that would be very helpful.
The next seminar will be held May 19 and will be about data for state and local freight planning. I encourage you to register for it through the Talking Freight website. I also encourage you to join the Freight LISTSERV if you haven't done so already, because all future Talking Freight webinars are announced through the LISTSERV. With that, I am going to close out for today. Thank you everybody, and enjoy the rest of your day.