
Report
This report is an archived publication and may contain dated technical, contact, and link information
Publication Number: FHWA-RD-96-143
Date: April 1997

Development of Human Factors Guidelines for Advanced Traveler Information Systems and Commercial Vehicle Operations: Definition and Prioritization of Research Studies

 

CHAPTER 1. ASSESSING DRIVER ACCEPTANCE: PROBLEM DEFINITION

 

BACKGROUND

THE ACCEPTANCE OF INNOVATION

APPROACHES TO STUDYING DRIVER ACCEPTANCE

 

BACKGROUND

Resistance is a common first reaction to change. Some users can be expected to put up a barrier that must be overcome before the benefits of an innovation are understood and accepted. At the other extreme, some users may perceive an innovation as the perfect answer to a problem and adopt it immediately. Other reactions to change may include compliance, acquiescence, and active or passive resistance. This range of reactions also may characterize someone's initial, short–term or final, long–term response to change. The introduction of ATIS/CVO technologies is likely to encounter this full range of potential responses from the driving population.

The introduction, adoption, and diffusion of an innovation through the potential user population appears to follow an S–shaped curve that represents the cumulative percentage of user adoption over time since introduction (Herbig, 1991). The first to adopt are often labeled "innovators," whereas "laggards" is the term applied to those who wait to adopt or who never adopt an innovation. The percentage of innovators in the user population will determine the initial success of an innovation. The percentage of laggards partly determines the asymptote of the cumulative adoption curve. As the labels imply, there is a pro–innovation bias in much of the work that has studied innovation adoption and diffusion. Innovators are encouraged and, in fact, new products are often designed to appeal to the requirements of the innovators. Innovation diffusion beyond the innovators depends on bandwagon effects as others emulate the innovators. Laggards are viewed as deficient in some way. A further implication of these labels is that there should be some consistent characteristics of innovative users and of laggards. However, this implication lacks empirical support. Adoption of innovation seems to be situation or innovation specific. An innovator for one product may be a laggard for another. Moreover, studies that have searched for consistent personality traits associated with innovativeness have not found them (Robertson & Kennedy, 1968).
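The S–shaped cumulative adoption curve described by Herbig (1991) is often approximated with a logistic function whose upper asymptote is held below 100 percent to represent the laggards who never adopt. The short sketch below (in Python) is only an illustration of that idea; the ceiling, midpoint, and growth–rate values are assumptions chosen to roughly match the ATM–analogy projection discussed later in this chapter (about 5 to 10 percent of the population after 10 years and about 40 percent after 20 years), not estimates taken from this report.

    import math

    def cumulative_adoption(t, ceiling=0.55, midpoint=16.5, rate=0.29):
        """Illustrative S-shaped (logistic) cumulative adoption curve.

        t        -- years since the innovation was introduced
        ceiling  -- asymptote: fraction of the population that ever adopts
                    (1 - ceiling is the share of permanent "laggards")
        midpoint -- year at which adoption reaches half of the ceiling
        rate     -- steepness of the growth phase
        All parameter values are illustrative assumptions.
        """
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    # Slow start, acceleration near the midpoint, then leveling off toward
    # the ceiling set by the laggards.
    for year in (5, 10, 15, 20):
        print(f"year {year:2d}: {cumulative_adoption(year):.0%} of population")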

A Case History

As an example of the problem of user acceptance of innovation, consider the history of the Automated Teller Machine (ATM). Since their introduction into test markets in 1974, ATM's have become as commonplace as banks, shopping malls, and supermarkets. ATM's provide access to banking services at virtually any hour of the day or night and, with the introduction of inter–bank networks, at almost any location. With an ATM, users can deposit or withdraw funds, transfer funds between accounts, and recently, even buy stamps and tickets for public transportation.

There are two important parallels that can be drawn between ATM's and the emerging ITS applications. First, freedom of movement is highly valued in our society. People do not like to stand in line at the bank, nor do they like to be constrained by bumper–to–bumper traffic congestion. Second, both ATM's and ITS technologies, particularly CVO applications, interpose a machine in what has traditionally been a face–to–face interaction. The bank teller and the dispatcher are replaced, at least some of the time, by automated systems. One important difference, however, is that ATM's require no initial investment by the user, whereas ITS applications must be purchased by the user.

Given the services and convenience available through ATM's, one might expect that user acceptance is not a problem. The Exchange Network, based in Seattle, has provided the following data. In 1990, the network had about 4.7 million accounts and handled 40 million transactions. In 1991, there were about 5.0 million active accounts and 47 million transactions. For 1991, then, that averages 9.4 transactions per account for the year, an increase of almost one transaction per account from the prior year. In informal surveys, current users of ATM services reported usage at three to five times the average annual rate, suggesting that perhaps less than half of the account holders are generating all of the transactions. In support of the informal surveys, the Exchange Network estimates that 35 to 40 percent of the account holders generate most of the ATM transactions. The highest level of usage comes from the 18–to–24 age group, and usage in the over–50 age group is virtually nonexistent. Among frequent ATM users, only 30 percent report ever having used the ATM to make deposits into their accounts. Forty percent of frequent ATM users report that they have used the night depository at the bank instead of the ATM, even though the ATM issues a receipt for the transaction while the night depository does not. Very small percentages report using the additional services available through ATM's. Frequent ATM users have reported waiting in line at the post office for 15 min or more instead of buying their stamps at an ATM.
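A little arithmetic makes the concentration of usage implied by these figures concrete. The sketch below simply reproduces the per–account averages quoted above and shows that the Exchange Network's 35 to 40 percent estimate is broadly consistent with the informal survey reports; it introduces no data beyond the figures in the text.

    # Figures quoted in the text (Exchange Network, Seattle).
    accounts_1990, transactions_1990 = 4.7e6, 40e6
    accounts_1991, transactions_1991 = 5.0e6, 47e6

    avg_1990 = transactions_1990 / accounts_1990   # about 8.5 transactions per account
    avg_1991 = transactions_1991 / accounts_1991   # 9.4 transactions per account
    print(f"1990 average: {avg_1990:.1f}   1991 average: {avg_1991:.1f}")

    # If 35 to 40 percent of account holders generated essentially all of the
    # transactions, their implied rate would be roughly 2.5 to 3 times the
    # overall average -- at the low end of the three-to-five-times usage
    # reported in the informal surveys.
    for active_share in (0.35, 0.40):
        implied = avg_1991 / active_share
        print(f"active share {active_share:.0%}: about {implied:.0f} transactions per active account")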

The lessons from the history of ATM's are:

  • After almost 20 years and with the ATM infrastructure now in place, less than half the population use ATM's.
  • Add–on features may not be used at the same rate as the original functions.
  • Age is an important correlate of ATM usage. The over–50 age group does not use ATM's even though many current members of that age group were in their thirties when the technology was introduced.

 


 

THE ACCEPTANCE OF INNOVATION

A large number of factors can influence the acceptance of an innovation. An innovation may solve a serious, longstanding problem, but if the price tag is too high, the innovation will not be accepted. If those in positions of authority prescribe one solution, other innovations may never even be considered. If users deem an innovation to be an invasion of privacy or an abridgment of their personal freedom, the innovation is likely to be resisted. If an organization is known to be unreceptive to change, individuals in that organization may show greater resistance to innovation than would otherwise be expected. If the innovation is difficult to use, acceptance will be less likely. A sophisticated, elegant innovation may fail in the marketplace because no one is aware of it. An inferior innovation may achieve wide acceptance or at least usage compliance if the users' incentives are structured appropriately. In some cases, an innovation is accepted or resisted because of a positive or negative value on a single dimension. More typically, some combination of costs and benefits, positives and negatives, across many dimensions determines the relative acceptance or rejection of an innovation.

A full consideration of reactions to ITS would incorporate relevant topics from a variety of fields of study. For example, the social psychology of attitude change (Lewin, 1951) is appropriate to understanding how an individual confronts change in general. The measurement of attitude and the relationship between attitudes and behavior are important for understanding the acceptance of new technology (Fishbein & Ajzen, 1975; Fazio, 1986). Because of the relevance of this topic to the problem of ITS user acceptance, a brief review of attitude measurement and its influence on behavior is given by an experimental social psychologist in appendix A, pp. 165–172.

For CVO applications, organizational behavior becomes directly relevant to the adoption of innovation because it can either facilitate or impede workers' involvement with new technology (Turnage, 1990; Zuboff, 1982). Issues of ITS system complexity and usability must be addressed, as well as concerns about driver overload and underload (Hancock & Caird, 1992a). For drivers of private vehicles, the potential market for specific innovations should be explored (e.g., Turrentine, Sperling, & Hungerford, 1991). For all classes of users, the microeconomic conditions are important to the analysis of cost–benefit and competitive advantage. The macroeconomic costs to society of creating the ITS infrastructure also must be considered. Traffic safety must be realistically projected so that personal and societal risks can be accurately assessed. Finally, the availability of the enabling technologies must be projected within a multiple–stage introduction of ITS systems into integrated solutions to transportation problems (Hancock & Caird, 1992b).

In contrast to these high–level considerations, the Statement of Work requests detailed answers to six points that span the relatively separate disciplines mentioned above. The list of points includes:

  a. Reasons for resisting new technology.

  b. Techniques used by drivers to resist new technology.

  c. Estimated percentage of drivers likely to resist use of in–vehicle technology.

  d. Estimated percentage of drivers likely to follow recommendations provided by an in–vehicle system.

  e. The conditions under which advice is accepted or rejected by users (e.g., the effects of weather, severity and/or potential magnitude of congestion, travel time savings, reliability of the quality and accuracy of information, and the medium used to convey the information).

  f. Potential techniques (e.g., incentives, education, system design) for promoting acceptance and use of in–vehicle ITS technology.

An attempt was made to answer these specific points in appendix A, pp. 153–165, although the answers are limited by the prescribed approach, namely, literature review. The remainder of the body of this report will address the more general issues of acceptance, the complexities, and the constraints associated with providing explicit answers to the points listed above. Most importantly, a structural model is provided as a basis for guiding further research.

To illustrate the multi–disciplinary aspect of addressing the six specific points, consider point "c" for non–commercial drivers. Under one approach, estimating the percentage of drivers likely to resist in–vehicle systems requires a catalog of reasons for resistance to innovation. That catalog would likely include reasons ranging from "I'm just comfortable with the way things are now" and "I can't afford it" to "I can do it better myself" and "I can't stand that synthetic voice telling me what to do." In other words, the reasons would range from an unreasonable "stonewall" resistance to change in any form to a negative reaction to an implementation detail perhaps found in one small ITS component. In addition to the catalog itself, each reason for resistance would be associated with the conditions under which it applies. Clearly, "I can't afford it" is not applicable for resisting the use of your current vehicle's cruise control mechanism. Creating a percentage estimate from these source data would be an adventure, at best. Moreover, since in–vehicle technology consists of a variety of components and separable subsystems, the process would probably have to be repeated on a function–by–function basis. For instance, safety–conscious older drivers, who rarely if ever have used currently available cruise controls, may favor reliable collision avoidance systems to help them in situations to which they can no longer react as quickly. In contrast, younger drivers, including frequent users of cruise control, may reject adaptive cruise control systems because they prefer to change lanes and pass slower vehicles rather than accommodate to them (Turrentine et al., 1991).

Another approach to estimating resistance relies on analogies with other systems and on composite acceptance curves (Herbig, 1991). If the history of ATM usage is considered to be representative of the classic S–shaped adoption curve, it might be concluded that ITS acceptance will be gradual during the first 8 to 10 years, reaching perhaps 5 to 10 percent of the population during that period. As the infrastructure continues to evolve, ITS acceptance and usage can be expected to accelerate about 10 years from introduction and reach, perhaps, 40 percent of the population after 20 years. These predictions are based on the tenuous assumptions that the ATM and ITS applications are fully comparable, that ITS technology will evolve at an appropriate rate, and that the social and economic climate will not change substantially. Of course, the time course of the growth in ITS acceptance could differ substantially from ATM usage.

If estimating in–vehicle technology resistance for the CVO environment is considered, a first–glance analysis might suggest that the answer would be derived from a straightforward business decision. If the fleet managing organization has the resources for the initial investments, if the projected savings define an acceptable pay–back rate, and if the new technology affords other competitive advantages, then the new technology is likely to be adopted by the organization. In making a business decision, projected savings may be based on directly measurable cost factors such as:

  • Fuel savings resulting from more efficient routing leading to less high–speed driving to maintain schedules.
  • Reduced maintenance costs because of lower mileage and less equipment abuse.
  • Better on–time delivery of perishable cargo resulting from tighter driver control.
  • Lower accident rates in an overall safer system.

Other cost factors also must be estimated, including items such as worker training costs, productivity losses resulting from worker discontent, personnel turnover, subversion or even sabotage of the new system, and worker stress resulting from the feeling of being constantly monitored and managed (Zuboff, 1982). These added costs derive from the reactions of individual workers to the introduction of new technology. These cost factors are easy to overlook because they are difficult to quantify. Ultimately, the success of in–vehicle CVO technology may depend on the acceptance or resistance by individual drivers who share many of the same concerns as drivers of private vehicles. The costs of personnel training and staff turnover can easily offset savings from reduced fuel usage and maintenance, thereby reducing the pay–back rate.
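A simple pay–back calculation of the kind implied here is sketched below. All of the dollar figures are hypothetical placeholders; the point is only that the hard–to–quantify personnel costs enter the calculation with the same weight as the directly measurable fuel and maintenance savings.

    def payback_years(initial_investment, measurable_savings, hidden_costs):
        """Years needed to recover the investment once hidden personnel costs
        are netted against the directly measurable annual savings. Returns
        None if the net annual benefit is zero or negative (never pays back)."""
        net_annual_benefit = measurable_savings - hidden_costs
        if net_annual_benefit <= 0:
            return None
        return initial_investment / net_annual_benefit

    # Hypothetical fleet example -- every figure below is an assumption.
    investment = 250_000   # in-vehicle units, dispatch software, installation
    measurable = 90_000    # annual fuel, maintenance, delivery, and accident savings
    hidden     = 40_000    # annual training, turnover, discontent, monitoring-stress costs

    print(payback_years(investment, measurable, 0))       # about 2.8 years if hidden costs are ignored
    print(payback_years(investment, measurable, hidden))  # 5.0 years once they are counted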

As a point of comparison, in manufacturing applications, new technologies have failed to meet expected productivity gains an estimated 50 to 75 percent of the time with the failures more often attributed to problems between the organization and its workers than to the technology itself (Turnage, 1990). Perhaps the organization's expectations for productivity gains were too high; perhaps if the expectations were lower, the initial investment would not have been made. CVO applications might be expected to show a comparable pattern of success for much the same reasons (Schauer, 1989).

Estimating the percentage of drivers likely to follow in–vehicle system recommendations (point "d") is perhaps even more difficult because it is highly dependent on the specific conditions. On a positive note, Allen, Ziedman, et al. (1991) report a simulation study in which more than 95 percent of their subjects diverted from their current freeway route in response to a 30–min congestion delay. The congestion was either detected by the driver in the simulated forward view or reported by one of three levels of simulated ITS navigation information systems, or both. Given the advance warning provided by the navigation systems, diversion often occurred before traffic congestion was encountered. In this study, 84 percent of the driver–subjects diverted in response to an 11–min delay. To simulate real–world motivation, subjects were rewarded $1 for each 5 min saved and penalized $1 for each 5 min lost during their simulated trips.

In another simulation study, Bonsall and Parry (1991) report that, overall, about 70 percent of the advice provided by a simulated ITS–type system was accepted. The primary independent variables in this study were the quality of advice being generated and the driver's familiarity with the artificial environment. When the system–generated advice usually led to near–minimal travel times, the advice was accepted almost as often as in the Allen, Ziedman, et al. (1991) study. On a subject's first journey through the simulated data base, system advice was nearly always taken. As subjects became more familiar with the simulated environment, acceptance of system advice decreased as other factors, such as the extent to which advice was corroborated by other evidence, came into play.

In contrast to these findings from simulation studies, Khattak, Schofer, and Koppelman (1991) report that less than 50 percent of their surveyed commuters diverted from their usual route even for delays as long as 50 min. Consistent with these data, Spyridakis, Barfield, Conquest, Haselkorn, and Isakson (1991) report that 63.1 percent of their surveyed subjects rarely modified their routes from home to work, and 42.2 percent rarely deviated from their normal route going from work to home. Route changes in this study were triggered by delays of about 20 min.

From these data, it can be argued that acceptance of advice generated by ITS systems should be 80 percent or better if the simulation findings are to be believed, or that acceptance will be no better than 50 percent based on the survey data. The difficulty is that there are problems with both types of data. One simulation study explicitly created demand characteristics that could lead to overestimates of advice acceptance: Allen, Ziedman, et al. (1991) paid subjects to minimize travel time and presented advice that appeared to minimize travel time, and the subjects accepted the advice. In Bonsall and Parry's (1991) simulation, subjects saw only the lowest level of simulation fidelity. The driver's display showed a map–like representation of the next intersection, an indicator of the heading to the destination, text information stating where the various directions lead, and some advice about which turn to take. With even small amounts of conflicting evidence, such as a recommendation to turn away from the heading to the destination, advice acceptance dropped considerably. The survey data, for their part, are reports of what drivers recall doing when they encountered traffic congestion. Both surveys treated travel time delay as a key factor in drivers' decisions to change routes, but the delay was left as a subjectively estimated value. The surveys provide no data on actual delays for usual versus alternative routes. Likewise, no data are given on drivers' estimates of travel time via alternate routes, nor is any context provided for interpreting the basis for drivers' decisions to divert from the normal route.

A compound estimate for ITS system/advice acceptance can be generated by combining the tenuous acceptance projections, based on ATM data, with the estimated upper and lower bounds of advice compliance, based on the survey and simulation studies. The 90+ percent and the 50 percent values from the studies described above can be used as upper and lower bounds on the acceptance of ITS in–vehicle advice. If ITS in–vehicle systems are introduced in the year 2000 and it takes 20 years for them to reach 40 percent of the population, then by 2020, about 20 to 35 percent of all navigation decisions will be based on advice accepted from ITS systems. This must be considered a very gross estimate, particularly because the technology acceptance curve for ITS could differ substantially from the ATM data. Other examples of acceptance curves, sometimes called "diffusion" curves, are given in appendix A, p. 158.
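The arithmetic behind this compound estimate is simply the product of the projected adoption level and the compliance bounds; the lines below restate that multiplication and add no new data.

    adoption_by_2020 = 0.40           # ATM-analogy projection: 40 percent of the population after 20 years
    compliance_bounds = (0.50, 0.90)  # survey-based lower bound, simulation-based upper bound

    low, high = (adoption_by_2020 * c for c in compliance_bounds)
    print(f"Navigation decisions based on accepted ITS advice: {low:.0%} to {high:.0%}")
    # -> roughly 20% to 36%, i.e., the "about 20 to 35 percent" cited in the text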

 


 

APPROACHES TO STUDYING DRIVER ACCEPTANCE

One result of examining the broad context of innovation acceptance is that it illuminates the difficulty of achieving a detailed understanding of user acceptance. Many factors are clearly important, many others are of indeterminate importance, and many conditions appear to influence whether and how a given factor applies in a specific situation.

In this section an attempt is made to constrain the approach to the problem in ways that make sense given the limited time and effort allocated to this task and that appear to lead to a useful result. Specifically, an attempt will be made to better organize the dimensions and attributes of innovation acceptance with the goal of defining what aspects of driver acceptance can be identified and manipulated experimentally. Possible research techniques focusing on when to measure acceptance during the life span of an innovation are discussed. Also discussed are some of the measurement devices that might be useful in deciding how to gather data on innovation acceptance.

A Structural Model of Innovation Acceptance

From some of the background discussions, it is clear that acceptance of innovative technologies involves categories including economic, safety, organizational, and psychological factors. Beyond the identification of potentially discrete categories, it is also clear that different factors may affect different aspects of innovation acceptance. A private–vehicle driver may accept the concept of an in–vehicle navigation advisory system but not be able to afford the device itself. To extend these small–scale linkages between factors and outcomes, an analytic structure is needed that will begin to define the relationships among factors.

As a starting point, consider the innovation acceptance theory of Mackie and Wylie (1988). Originally devised to address the procurement of large military systems, the theory is oriented to the acceptance of expensive, one–of–a–kind systems that were deliberately designed to solve relatively specific problems. The theory seems to provide adequate coverage of that special domain. It focuses on identifying the attributes of acceptance that are relatively internal to an individual user and, in a military context, would probably determine that individual's ultimate decision to use the system or to turn it off. The theory also highlights the two external factors that determine the behavior of military personnel; that is, the prevalent view of the individual's operational unit and direct orders from superiors.

To adapt the Mackie and Wylie (1988) model to our environment, many augmentations must be specified. Figure 1 presents an initial attempt at specifying a more general structural model of innovation acceptance. Definitions pertaining to figure 1 are given in table 1.

In figure 1, the section surrounded by the dashed line can be viewed as internal to the individual system user (adapted from Mackie & Wylie, 1988). The components outside the dashed line represent classes of external factors that can influence an individual's perception, understanding, and assessment of an innovative technology.

At the left edge of figure 1 are two components labeled Problem Definition and Innovation Announcement (features of the model are identified in italics). These components are the starting points for the process of innovation acceptance. Given a defined problem and an innovation that addresses the problem, initial contact is made with the individual's Understanding of Problem and Initial Awareness of Innovation. These aspects feed into the individual's judgment about whether there is a Need for Improvement or whether current approaches to the problem are adequate. Past experience with innovations, the perceived features of the current product, the weighting of expert opinions, an assessment of personal risk, and the assessment of the availability of help all set the stage for a more complete evaluation of an innovation. These factors determine the user's readiness to assess a new product or system. Significant negatives at this level could lead to immediate rejection of the innovation. For example, significant negative experiences with earlier innovations or a perception of major personal risk could yield a form of "stonewall" resistance.

Figure 1. A structural model of the components of innovation acceptance (adapted from Mackie & Wylie, 1988).

 

Table 1. Definition of terms for figure 1.

USER/CONSUMER CHARACTERISTICS
Perceived Self–Competence: User's confidence in their ability to function in their current environment and to adapt to changes in that environment. Includes variables such as Self–Efficacy (user confidence about success in using the innovation) and Performance Satisfaction (level of satisfaction with the status quo). Assumed to affect Perceived Need for Improvement.
Perceived Risk: A composite of economic, technical, and psychosocial risk factors, such as the skill level and training required and personal safety. Assumed to affect Personal Risk.
INNOVATION/PRODUCT CHARACTERISTICS
Innovation Capability: Defined relative to the Problem Definition, focusing on the relative advantage of the innovation compared with the status quo and with other product offerings. Assumed to affect Perceived Features of Innovation.
Innovation Similarity or Differentiation: Comparability with previous innovations experienced by the user. Affects Experience with "Similar" Developments.
Innovation Application Environment: Encompasses variables such as the purported Relative Advantage of the innovation over current methods, Compatibility with the user's needs and other activities, Communicability of the innovation's characteristics and benefits, Complexity of understanding and using the innovation, and Divisibility, or the extent to which the innovation requires a large initial investment in time, effort, or money. Assumed to affect Perceived Features of Innovation.
MODULATING FACTORS
Organizational Climate: Concerns the willingness of formal organizations to incorporate change. The flexibility and venturesomeness of the organization, and of individual managers in key positions, is central, as are the organizational support structures and the role of an Innovation Advocate.
Work Environment: The types of change introduced by an innovation, including changes in the control exercised by a worker, changes in cognitive demand, a shift from executing an operation to monitoring it, reduced opportunities for problem solving, increased responsibility for production, greater visibility of performance to supervisors, and changes in social contact, interaction, and support.
Authority and Legal Mandates: Decisions made by organizational superordinates or mandates of law requiring the use of an innovation (e.g., a corporate decision to adopt a specific computing system, seatbelt laws).
Economic Factors: Both macroeconomic and microeconomic factors can be effective modulators. Periods of prosperity may increase the user's ability to pay for innovations, or prosperity may allow a disgruntled worker to change jobs more easily, thereby avoiding an innovation in the first job.
Usage Incentives: In organizations, incentives may include bonus pay that depends on successful use of an innovation, greater promotability within the organization, or the threat of firing for non-compliance. For the purchaser of an innovation, rebate programs and tax incentives are examples of corporate and governmental inducements.
Social Approval: Approval by a referent social group takes many forms. An ecologically sound innovation may be preferred by some users and make no difference to others. Maintaining social and professional status is important, as is maintaining personal dignity.
Marketing and Advertising: Effective marketing can influence the initial adoption of an innovation. This includes highlighting the relationship between the potential user's needs and the capability of the innovation, attending to the potential user's price sensitivity, and tailoring the presentation of the innovation to the potential user's current behavior.
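One way to make the "assumed to affect" links in table 1 concrete is to record them as a small directed graph, with model components as nodes and influences as edges. The sketch below is only an illustration of that bookkeeping; the component names follow figure 1 and table 1, and the edge list is limited to the links stated explicitly in the table.

    # Directed "assumed to affect" links taken from table 1.
    INFLUENCES = {
        "Perceived Self-Competence":             ["Perceived Need for Improvement"],
        "Perceived Risk":                        ["Personal Risk"],
        "Innovation Capability":                 ["Perceived Features of Innovation"],
        "Innovation Similarity/Differentiation": ["Experience with 'Similar' Developments"],
        "Innovation Application Environment":    ["Perceived Features of Innovation"],
    }

    def influenced_by(component):
        """Return the table 1 factors assumed to influence a given figure 1 component."""
        return [src for src, targets in INFLUENCES.items() if component in targets]

    print(influenced_by("Perceived Features of Innovation"))
    # -> ['Innovation Capability', 'Innovation Application Environment']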

The component of the model labeled Subjective System Assessment encompasses three kinds of rational tests and two types of empirical tests that can be used to evaluate an innovation. The individual assesses the innovation on its inherent Complexity, on its Compatibility with other aspects of the individual's environment, and on the probable Relative Advantages that the innovation may afford. Observation of the innovation in action and of its results provides one source of empirical data on the effectiveness of the innovation, and Trial, or hands–on experimentation with the innovation, provides a second source of direct experience. In the structural model, the result of the subjective system assessment is the primary outcome related to the prospective user's attitude toward the innovation. But, as was previously mentioned (see appendix A, pp. 165–172), attitude is not highly correlated with behavior. Reflecting this distinction, the output of the subjective system assessment is but one input to the process that ultimately leads to an Observable Response that can be interpreted as Adoption, Compliance, or Rejection.

In a simple world, the model as described so far would suffice. A problem has been identified, a solution has been proposed, and the individual users decide whether the solution works for them. In the real world, however, the other components of the model become important. Along the top edge of figure 1 is a set of components that apply to specific Innovation/Product Characteristics. At various stages during the evaluation of an innovation, an individual user may conclude that the innovation does not offer enough Capability to warrant acceptance. If so, its capabilities could be increased to address other aspects of the problem. The other classes of system–specific factors include Similarity or Differentiation from past innovations, and the relationship between the innovation and its intended Application Environment. Table 1 also contains a brief definition of each of these variable classes as well as definitions of the factors discussed below. This is not an exhaustive list of classes; as other categories of independent variables are identified, the list can be expanded.

Along the bottom edge of figure 1 are classes of factors that represent the User/Consumer Characteristics. User Demographics includes those characteristics of individual users that can identify groups of users that may share a common reaction to innovation. For example, drivers living in an area of urban sprawl may be more interested in ITS systems than drivers living in rural settings (Green & Brand, 1992). In figure 1, user demographics are shown to affect Level of Interest in an innovation. The relative power of each demographic factor remains to be determined for ITS applications. For individual users of an innovation, Perceived Self–Competence includes two factors reflecting the user's view of his/her own performance capabilities. Self–efficacy is an estimate of how well the individual could function in a new environment, and performance satisfaction is the individual's estimate of how well things are being done in the current environment. The individual's perception may or may not match objective performance. Low self–efficacy combined with high performance satisfaction could yield strong resistance to change. Perceived self–competence is assumed to affect the User's Perception of Need for Improvement in the structural model. The class of variables subsumed under Perceived Risk include those factors that can be perceived by a user as posing some form of threat. A threat can range from a Personal Risk of injury to a fear of being embarrassed during a training session. Perceived Risk is represented as affecting two components of the structural model. The Personal Risk component could be viewed as a repository of the negative aspects of risk, whereas Availability of Support could be viewed as the collection of countermeasures or antidotes for the negative risk aspects.

Along the right–hand edge of figure 1 are Modulating Factors, or those that exert only indirect effects on the assessment of an innovation. For example, Authority and Legal Mandates may force someone to use an innovation, but without liking it. As suggested above, cost or other Economic Factors may prevent acceptance of an innovation for which the individual sees a clear need. The modulating factors are perhaps best described as a set of powerful influences that often determine the short–term outcome of the evaluation process but which do not necessarily change one's mind about an innovation. For example, an organization's decision maker may bend to a prevailing conservative climate while realizing that a new technology would provide a clear competitive advantage. Alternatively, a manufacturer's rebate program or a tax break for investment in new technology could tip the scales enough to allow adoption of an innovation.

There are two components of the model that have not yet been discussed, namely Subjective Goal Assessment and Subjective Usage Environment Assessment. Subjective Goal Assessment incorporates such things as how important it is to achieve the minimum commuting time and how important it is to retain this job, given the changes being forced upon the user. Subjective Usage Environment Assessment is the final decision–making component of the structural model. It is at this point that all factors are weighed and the individual's response to an innovation is generated. It is the final common path that includes the user's subjective assessment of the innovation itself and of its utility in the given environment. That response, as suggested earlier, can range from total acceptance to total rejection, with many levels of compliance between these two extremes.

The structural model described here is not intended to be a complete, final product, but rather a focus for further consideration. In its current state, the model lacks the dynamics that seem pervasive in the process of reacting to innovation, and there are probably important components that have been overlooked. The model can be enhanced as the properties of innovation acceptance are explored.

The following example illustrates how dynamic properties of acceptance might be added to the structural model. There are several automatic trip recorders available on the current market. Typically, the devices measure and record such data as vehicle weight, speed, revolutions per minute (RPM), fuel, etc. The data are used to generate management reports to support driver control, scheduling, and maintenance, as well as to track fuel economy. Some long–haul trucking companies have used the management reports to identify optimal profiles for the operation of vehicles. Driving within the profile results in lower fuel costs and reduced maintenance costs, both of which are obvious benefits to the companies. When the trip recorders were first introduced, drivers reacted negatively, viewing the devices as snoops and enforcers in the cab. Given the driver rejection, fleet operators responded either by using the trip recorder data to fire non–compliant drivers or by creating compliance standards and linking drivers' bonus pay to those standards. Both techniques induced higher levels of compliance, but the incentive plan has led drivers to actively use the real–time reporting capabilities of the trip recorders to track their operating performance (R. Clarke, NHTSA, personal communication, 1993).

The two responses by the fleet operators induced other changes. Using the terminology of the structural model, there were probably changes in the organizational climate and in the nature and level of social approval within the drivers' peer group. For most drivers, there were certainly changes in their experience with innovative systems, some positive and some negative. For some drivers, there may have been changes in their perception of the need for improvement and in their level of interest. All of these changes may become more or less permanent and carry over to the next innovation that is introduced.

There appear to be several levels at which to apply the proposed structural model of innovation acceptance. Most of the model would seem to apply at an innovation concept level. For example, the structural model could be used to help in understanding the acceptance of and resistance to the concept of a trip navigation system to aid in solving the problem of traffic congestion. In moving toward more concrete levels, the model also seems appropriate for the evaluation of specific implementations of ITS systems. A level that also should be addressed is the acceptance or rejection of situation–specific outputs or advice from ITS in–vehicle systems. When a trip navigation system recommends detouring around a congested area, is the structural model described here still appropriate for assessing the acceptance of the advice?

Empirical Approaches to Analyzing Acceptance

Exploring innovation acceptance requires knowledge of appropriate methodologies. Three approaches to studying innovation acceptance that could yield useful results will be considered. The first approach results in an analytic point solution. Following the tradition of conventional market research, the goal is to produce informed estimates of the potential market size; in this case, for various forms of ATIS/CVO systems. Further estimates of the number of consumer decisions to purchase the technology should translate directly into estimates of acceptance rates for the systems. To the extent that the consumer population is subdivided, acceptance rates could be obtained for various subgroups of the general population and for operators of commercial vehicles. Historically, this form of market research has proved most effective for product improvements that provide a competitive advantage in an existing market. When the methodology is applied to innovative products and the opening of new markets for which there is no historical model, the estimates and projections are more difficult and less accurate.

Burger, Ziedman, and Smith (1989) used these marketing research techniques in an inventory of CVO precursor systems. They surveyed the product literature and identified six categories of CVO in–cab support systems that were either on the market in 1988 or nearly on the market. In all, the authors catalogued the usage environments and user interface characteristics of 52 systems ranging from refrigeration monitoring systems to vehicle tracking systems. The report also included estimates of the percentage of 18 classes of commercial vehicles using each type of support system projected to 1992. The authors admit to having little or no confidence in the absolute percentages reported, but they do argue that the relative percentages of systems by vehicle type are probably appropriate. Unfortunately, neither Burger and his colleagues nor the sponsoring agency of the earlier report is conducting a validation study.

The second approach argues for combining acceptance assessment with usability testing. Clearly, system usability is an aspect of acceptance and, because usability testing typically occurs late in the development cycle, the acceptance of a system could be assessed with known capabilities and implementation details. Performance measures could be obtained along with subjective measures of workload and user preferences. The problem with this approach comes from the same source as its strength: assessing acceptance late in the development cycle increases the cost of making changes that could affect acceptance. By the time prototype systems are incorporated into simulator and on–road studies, it is anticipated that most of the flaws will have been removed.

The third approach is to adopt some of the newer techniques currently being used to aid in defining system requirements. The "House of Quality" approach (Hauser & Clausing, 1988) and the prospective use of multiple subjective measures (Bittner, 1991; Tolbert & Bittner, 1991) provide two candidates. Each of these options starts with the initial definition of the problem that the innovation addresses and attempts to acquire data about how potential users react to the planned system functions, implementation characteristics, and usage environment. The advantage of this approach is that assessment of acceptance can be initiated early in the development cycle. The disadvantage is that the definition of ATIS/CVO systems will be evolving, and the subjects must operate with notional systems in any early data collection effort. This complicates data collection, but the results should help to refine the system definitions.

Potential Measurement Techniques

There are several potential approaches to measuring driver acceptance. For example, one could assess stated preferences (what subjects say they would do), reported preferences (what subjects say they have done), or actual preferences (field observations) (Khattak et al., 1991). Stated preferences are often influenced by the demand characteristics of the data collection environment. Reported preferences are affected by recall dynamics, and working with actual preferences requires facilities far beyond those available for this task. Product features can be assessed for their linkage to general attributes of acceptance and to intentions to purchase (Holak & Lehmann, 1990). Psychophysical measurement techniques can be applied to the assessment of physical design (McCallum, 1991). Subjective workload assessments can be used to tap the perceptual, physical, and cognitive demands of different system designs (Tolbert & Bittner, 1991). For each of these approaches, there are challenges regarding the details of the data acquisition and manipulation process.

From our survey of the current literature, there appears to be no single adequate measurement technique that captures driver acceptance. A variety of subjective and performance measures has been attempted in studies using hypothetical situations (Tong, Mahmassani, & Chang, 1987), artificial environment simulations (Bonsall & Parry, 1991), and simulations using familiar environments (Allen, Ziedman, et al. 1991). Unfortunately, there has been no attempt to validate measures across procedures or to coordinate the different measures of driver acceptance.

This task provides the opportunity to bring some consistency to the measurement of driver acceptance of new automotive technology. Starting with the concepts behind proposed ITS in–vehicle systems and ending in possible road tests of prototype systems, a measurement tool for driver acceptance can be defined, refined, and at least partially validated. The measurements should be as simple as possible and as direct as the several levels of analysis will allow. Because the project started with hypothetical ITS systems, performance measures must be eliminated, leaving a variety of subjective approaches. Subjective tools must be developed that provide diagnostic power to determine source(s) of resistance. For instance, it must be determined whether resistance comes from a lack of usability or from a mismatch between system capabilities and the problem application. Any subjective tool also should provide information about the relative importance of the various sources of resistance, such as system capability or system usability.

There is an existing subjective assessment technique that can serve as a heuristic model for designing the type of measurement tool that is needed. The model is the NASA Task Load Index (NASA–TLX) (Hart & Staveland, 1986). Following the lead of Beith, Beith, Vail, and Williams (1990), it is proposed that a set of about 7 to 10 rating scales be created to address several components of the acceptance of innovation. Using the structural model described above, scales could be created for constructs such as:

  • Relative advantage.
  • Apparent complexity.
  • Ease of use.
  • Compatibility with other driving activities.
  • Safety improvements.
  • Relative importance of the problem.
  • Relative personal risk.

In addition to these diagnostic scales, some form of an overall assessment is required. Two candidates are:

  1. How much would you pay for such a system?

  2. How strongly would you recommend this system to others?

These two ratings will provide some checks on the internal consistency of the data and perhaps some insight into how different subjects are using the rating scales.

In assessing the relative importance of the components, the NASA–TLX approach can be followed or other techniques can be adopted if they produce more robust measures of the relationships among components. One such candidate is a link–weighted network analysis that estimates the associations among component "nodes" in a network and that provides for a higher level grouping of components into closely coupled clusters (Schvaneveldt, 1990). Such an approach may help to identify those components of acceptance that must be satisfied first. For instance, an ITS system may need to address a relatively important problem and provide a strong advantage over other approaches before it is worth the effort to assess system usability or safety advantages. Moreover, the clustering of components and the relative importance of clusters may change as potential ITS system users become more familiar with the planned capabilities and as the systems themselves mature (Schvaneveldt et al., 1985). Regardless of the technique chosen, the relative importance of the components of acceptance must be included in any measure of acceptance.
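A minimal sketch of the kind of instrument being proposed appears below: a handful of diagnostic rating scales combined into an overall acceptance score using NASA–TLX–style weights derived from how often each component is judged the more important member of a pairwise comparison. The scale names follow the list given earlier; the 0–100 rating range, the reverse–scoring convention, and the weighting scheme are illustrative assumptions rather than a specification of the eventual tool.

    from itertools import combinations

    SCALES = [
        "relative advantage", "apparent complexity", "ease of use",
        "compatibility with other driving activities", "safety improvements",
        "relative importance of the problem", "relative personal risk",
    ]

    def tlx_style_weights(pairwise_winners):
        """NASA-TLX-style weights: each scale is weighted by the number of
        pairwise comparisons in which the respondent judged it more important."""
        wins = {scale: 0 for scale in SCALES}
        for winner in pairwise_winners:
            wins[winner] += 1
        total = sum(wins.values()) or 1
        return {scale: count / total for scale, count in wins.items()}

    def acceptance_score(ratings, weights):
        """Weighted composite of 0-100 ratings (higher = more favorable to
        acceptance). Scales where a high rating argues against acceptance
        (e.g., complexity, personal risk) are assumed to be reverse-scored
        before they reach this function."""
        return sum(weights[s] * ratings[s] for s in SCALES)

    # Illustrative use: one respondent's ratings and pairwise judgments.
    ratings = {s: 50 for s in SCALES}
    ratings["relative advantage"] = 80
    ratings["relative importance of the problem"] = 70
    winners = [a for a, _ in combinations(SCALES, 2)]  # placeholder: earlier scale in each pair "wins"
    print(round(acceptance_score(ratings, tlx_style_weights(winners)), 1))

Pairwise weighting is only one option; the link–weighted network analysis mentioned above could replace it if it yields more robust estimates of the relationships among the components of acceptance.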

 


 
