Trust in Branded Autonomous Vehicles & Performance
Expectations: A Theoretical Framework
Natalie Celmer, Russell Branaghan, & Erin Chiou
Arizona State University
Future autonomous vehicle systems will be diverse in design and functionality because they will be produced by different brands. These brand differences may yield different levels of trust in the automation, and therefore different expectations for vehicle performance. Perceptions of system safety, trustworthiness, and performance are important because they help users determine how much they can rely on the system. Based on a review of the literature, the system's perceived intent, competence, method, and history could be differentiating factors. Importantly, these perceptions are based on both the automated technology and the brand's personality. The following theoretical framework reflects a Human Systems Engineering approach to considering how brand differences impact perceived trustworthiness, performance expectations, and the ultimate safety of autonomous vehicles.
INTRODUCTION
Rapid technological innovation introduces great uncertainty about how people will interact with new technology, and about system safety (Van Geenhuizen & Nijkamp, 2003). Consider a pedestrian waiting to cross a street: will a fully autonomous vehicle stop for them? How will the vehicle communicate to the pedestrian that they may cross? What will the pedestrian expect the car to do?
The answers may depend on the brand of the vehicle.
One brand might always stop, whereas another might only
stop if the pedestrian is a specific distance from the curb.
Many technology corporations and automobile manufacturers
are developing autonomous vehicles. The sheer variety of
companies with different technologies and design approaches
is likely to yield great diversity. In fact, this situation exists
already. Park-Assist (BMW) and Autopark (Tesla) are both autonomous parking features; however, their Human-Machine Interfaces (HMIs) are different and require different user inputs.
For instance, Tesla employs a streamlined process; about three actions are required to parallel park the vehicle using Autopark. The experience with BMW, however, is more cognitively involved. The driver must initiate the Park-Assist feature, press the brake, turn on the blinker, read a pop-up message stating that they understand they are liable for the vehicle's ultimate performance, confirm this by pressing OK, press and hold the Park-Assist button for the duration of the entire parking process, and release the brake when the process is complete.
Safety guidelines and standard requirements do not
remedy these inconsistencies. The U.S. Department of
Transportation and the National Highway Traffic Safety
Administration (NHTSA) provide guidelines in Automated
Driving Systems 2.0: A Vision for Safety (September 2017).
However, these standards still allow producers great freedom
in implementation. For example, one guideline states: “HMI
design should also consider the need to communicate
information regarding the Automated Driving System’s state
of operation relevant to the various interactions it may
encounter and how this information should be communicated”
(p. 10). Just as a rubric for an academic assignment does not lead students to submit identical projects, the NHTSA guidelines address broad safety concerns and leave room for variety in system designs and configurations.
In the automotive industry, trustworthiness of a vehicle is
closely tied to its perceived safety (Peter & Ryan, 1976).
When considering autonomous vehicles, automation and brand associations both contribute to expectations of and trust in the system. This paper provides a review of the literature in the areas of branding and human factors. In doing so, it synthesizes the relationship between brand personality and trust in automation. Additionally, this paper introduces a theoretical framework to conceptualize the formation of trust-based expectations for different brands of autonomous vehicles. It focuses on trust development before the user interacts with the system, the stage that Ekman, Johansson, and Sochor (2018) refer to as preuse.
The framework incorporates brand and system intent towards the user, competence in producing reliable automation, and the method of delivery, similar to purpose, performance, and process (Lee & See, 2004). A fourth component adds history, based on past interactions, brand reputation, and previous results produced by the brand.
Though largely inspired by the model of trust proposed
by Lee & See (2004), this framework is focused on the
influence of brand associations and brand personality (Aaker,
1997) on prospective trust, or trust expectations prior to
observing or experiencing the system’s performance.
Human-Automation Interaction
Trust in automation is a belief that another agent will
help in uncertain, or vulnerable, situations (Lee & See, 2004).
For autonomous vehicle systems, trust is based on the
expectation that the system is capable of improving the driving
task and/or performance. Trust depends on how successful the
human expects the automation to be (Lee & Moray, 1992; Sheridan, 1992; Lee & See, 2004). This expectation guides
human behavior with a system (Mosier, Skitka, & Heers,
1998).
Copyright 2018 by Human Factors and Ergonomics Society. DOI 10.1177/1541931218621398
Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting 1761
The amount of trust a user has in a system should reflect
the system’s capabilities, especially when monitoring and
occasional intervention are required. Otherwise, when trust is
not appropriately calibrated, human-automation systems will
break down.
Various design characteristics affect expectations and
trust in autonomous systems (Lee & See, 2004). Trust
perceptions tend to increase for highly transparent systems,
more technically competent systems, and systems with
acceptable situation management (Choi & Ji, 2015). Trust
increases when the perceived intent of the system design
aligns with the user's purpose for use (Hoff & Bashir, 2014;
Ekman, Johansson, & Sochor, 2018).
System designs should address situational characteristics such as perceived risks, workload, and task difficulty, which
have been shown to impact trust in automation (Desai et al.,
2012). However, expectations for system functionality, true
system capability, and the user’s role are often mismatched.
An increase in perceived trust is linked to a decrease in
perceived risk (Choi & Ji, 2015). An example of this is Automation Bias, in which a user favors the use of automation over their own input. This often results from an overly trusting attitude and leads to an inappropriate level of reliance on the system (Mosier, Skitka, & Heers, 1998). Other instances resulting from inappropriate calibration of trust include Misuse, Disuse, and Abuse (Parasuraman & Riley, 1997).
However, autonomous vehicles will vary in these
characteristics. Therefore, perceptions of trust for autonomous
vehicle systems will ultimately stem from various designs and
experiences created by different brands.
Brand Personality and Associations
A brand is a name, term, or symbol that distinguishes a
seller’s product or service from others (Bennett, 1995). An
important part of branding entails the accumulation of associations and perceptions in memory linked to a brand
(Aaker, 1991). In essence, it includes what consumers know
(or believe) about products. According to Deighton (1992),
brands “promise a future performance”. They set expectations
for the quality of their product (Keller, 1993). Trust in the
brand is established through fulfillment of these expectations
over time (Delgado-Ballester, 2003). From the moment the
brand is born, it is associated with specific values, limitations,
and target consumer groups (Kotler & Andreasen, 1991).
Brands are also closely linked to the performance and quality
of their products and services (Keller, 1993; Zeithaml, 1988),
specifically, how reliable and successful the product is at
fulfilling its intended purpose.
Importantly, people tend to spontaneously ascribe human
personality characteristics to brands, creating a brand
personality. To categorize these personalities, Jennifer Aaker
(1997) developed an empirically derived framework of five
dimensions of Brand Personality, and 15 associated facets,
shown in Figure 1. These dimensions, Sincerity, Excitement,
Competence, Sophistication, and Ruggedness (Aaker, 1997)
are useful for describing and summarizing brand associations.
These Brand Personality dimensions and facets are also
useful for differentiating brands from one another (Freling & Forbes, 2005). Considering the pedestrian example, one brand
of autonomous vehicle may stop for the pedestrian to cross
while another may not. The brand personalities of each may
help the pedestrian decide to walk or not. In any interaction
with autonomous vehicles, a person may simply base trust on
associations. They may take specific actions surrounding an
autonomous vehicle based on its brand personality.
Figure 1: Brand Personality Framework (Aaker, 1997)
Often, brand exposure evokes strong, automatic, and subconscious inclinations and feelings about a product
(Thomson et al., 2005). Understanding what makes
autonomous vehicles desirable and trustworthy, regardless of
the risks and uncertainty involved, goes beyond marketing. It
affects human interaction with the product (Lee & See, 2004).
Brand Trust and Human-Automation Trust
Trust is a fundamental component of good relationships; it evolves based on past experience (Rempel et al., 1985; Rotter, 1980). Regardless of individual differences in willingness to trust others, people identify patterns in intentions, behaviors, motivations, and qualities linked to a positive outcome (Rotter, 1980; Rempel et al., 1985). Trust is not dichotomous. It is not simply a matter of trust or distrust; instead, levels of trust fall along a continuum.
Brand trust is the level of security associated with a
brand. It is based on the perceived reliability of the brand, and
how responsible it is for the welfare of the consumer
(Delgado-Ballester, 2003). Brand trust is also context dependent. It is specific to the nature of the situation and the other agents involved (Mayer et al., 1995; Schaefer et al., 2016). Trust-based relationships between consumers and brands resemble those between humans and automation. Similarly, human-automation trust is based on expectations of system capabilities.
History-based trust focuses on past performance (Merritt
& Ilgen, 2008), and how it relates to future interactions.
Brands form relationships with consumers by meeting, or
exceeding, their expectations. In this way, brands build trust
by being predictable (Mayer et al., 1995), providing good
experiences time after time. Automation is designed to build
trust in the same way. A trustworthy system is simple and
understandable. It acts in the operator’s best interest, is
designed to induce proper trust calibration, shows
performance history and meets the operator’s performance
expectations (Lee & See, 2004). Autonomous vehicle systems
produced by different brands will vary in these characteristics.
Carlson et al. (2013) demonstrated that trust in a
vehicle’s capabilities was higher for autonomous vehicles
created by a well-known brand than for an unknown brand.
This work identified factors that influence trust in branded
autonomy, such as statistics of past performance, extent of
research on the car’s reliability, predictability, credibility of
the engineers, technical capabilities, and possibility for system
failure.
These influential factors align with the foundations for trust in automation proposed by Lee and See (2004): Purpose, Performance, and Process. In this model, Purpose refers to
the user’s perception of the system’s intention. Performance
includes both the capability to attain the goal, or reliability,
and predictability based on previous experience. Process
represents the user’s perception of how the automation works.
BRANDED AUTONOMY: A THEORETICAL
FRAMEWORK
An understanding of factors that influence trust in
human-autonomous vehicle interactions can help inspire safer
and more desirable systems. Since different brands will
produce different variants of autonomous vehicle technology,
their systems will differ across these factors. How might trust
in automation vary between multiple well-known brands or
companies? Subsequently, how might expectations for
performance and safety differ? This is important because
based on the literature in human-automation interaction,
differences in system trust affect user performance and
ultimate safety.
Dimensions of Brand Personality are useful in differentiating brands from one another (Freling & Forbes, 2005). Much like the personalities of other people help determine how best to interact with them (Fiske et al., 2007), brand
personalities and brand image associations may serve the same
function (Rossiter & Percy, 1987). If a user trusts the brand,
they expect it will perform in a certain way, or simply,
produce a positive result (Kim et al., 2008). Together,
trustworthiness dimensions and certain aspects of brand
personality form performance and safety expectations. They
may influence whether or not a user will give an autonomous
system the benefit of the doubt in an uncertain situation.
This framework combines components of trust in automation (Lee & See, 2004; Carlson et al., 2013; Choi & Ji, 2015) and dimensions of brand personality (Aaker, 1997). Table 1 summarizes and organizes the literature into categories of intent, competence, method, and history.
Intent relates to Purpose as discussed by Lee and See (2004): why the automation was developed. This component of trust in automation is associated with faith and benevolence. Similarly, Sincerity is a dimension of Brand Personality (Aaker, 1997). Sincerity is concerned with four facets of the brand's personality: how down-to-earth, honest, wholesome, or cheerful the brand is. An
autonomous vehicle can be produced and designed for the
purpose of augmenting the driving experience. However, if the
brand’s intent is perceived to be insincere or dishonest, this
may taint the trustworthiness perception of the system as a
whole. For example, in 2016, Volkswagen was charged over illegal vehicle software that bypassed standards for diesel emissions. This dishonesty was followed by a substantial decrease in sales (Boudette, 2017).
Competence is based on perceptions of the system’s
ability to achieve the operator’s goals. Lee and See (2004)
relate this to reliability and predictability of the system.
Competence and reliability are closely tied to trustworthiness
in social psychology (Fiske et al., 2007), human-automation
interaction (Lee & See, 2004), and vehicle systems (Carlson et
al., 2013; Choi & Ji, 2015). Therefore, it might be appropriate
for a brand of autonomous vehicle to be associated with the
competent dimension of Brand Personality, supported by the
three facets reliable, intelligent, and successful. In measuring
brand personalities of various automobile brands, Branaghan
& Hildebrand (2011) found Audi, BMW, Lexus and Porsche
to be highly competent vehicle brands.
Table 1. Summary Table

Intent
  Purpose (Lee & See, 2004): Why is the automation being developed?
  Sincerity (Aaker, 1997): Will the automation benefit the operator?

Competence
  Performance (Lee & See, 2004); Technical Capabilities (Carlson et al., 2013): Will the automation successfully achieve the operator's goals?
  Competence (Aaker, 1997): Does the brand have the appropriate expertise to develop this type of automation?

Method
  Process (Lee & See, 2004): How will the automation work? What is the operator's experience?
  System Transparency & Acceptable Situation Management (Choi & Ji, 2015): How will the system approach difficult situations and inform, or communicate with, the operator?

History
  Performance (Lee & See, 2004); Predictability (Carlson et al., 2013): Was the desired outcome achieved in the past? How well? Will past performance endure in this situation?
  Brand Personality (Aaker, 1997): How does the brand's identity and reputation fit in this specific industry?
Method concerns the user’s experience with the system.
Much like Process (Lee & See, 2004), it is dependent on how
the automation works. It is less concerned with specific
actions; instead, it focuses on the appropriateness of the
system’s approach to various situations. For example, consider
a vehicle associated with a daring brand personality (Aaker,
1997). This association may increase risk perceptions of the
whole system. An operator may not feel as secure when their
“daring” vehicle approaches a pedestrian crossing the street.
Similarly, an observant pedestrian may decide not to cross.
History represents a summary of past performance. Lee
and See (2004) include both current and historical operation of
the autonomy. Their description of Performance included how
well the system demonstrates expertise. However, the current framework focuses on prospective trust: what one expects to occur before experiencing the system. It is focused on how successful the brand has been in its specific industry.
History encapsulates the brand’s overall reputation, and how
their previous work forecasts future work in the context of
autonomous vehicles.
The considerations listed on the right side of Table 1 are
based on the literature. They summarize how each foundation
of trust informs an ultimate performance expectation, as
illustrated by the Branded Autonomy Framework (figure 2).
To establish higher levels of trust in the system, the brand image and the technology work together. Autonomous vehicles should be designed with a clear intent to improve the driving experience and benefit the user. Systems should exhibit high reliability, with well-established technical abilities shown to have a very low chance of failure. Systems should
shown to have a very low chance for failure. Systems should
be easy to understand. They should provide clear feedback to
the user that communicates system status and facilitates any
operator intervention. Additionally, systems should be
predictable, producing consistent and positive outcomes.
Figure 2: This Framework for Branded Autonomy illustrates how perceived
intent, competence, method, and history inform performance expectations.
In the future, a questionnaire could be developed and validated to measure the expected performance of a branded autonomous vehicle based on the perceived intent, competence, method, and history of the system. Results could be used to
inform autonomous vehicle designs that facilitate appropriate
trust calibration. If producers knew the average level of user
trust in their systems, designs could be crafted to create a
better match between user trust and system capabilities.
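As a rough sketch of how responses to such a questionnaire might be aggregated, consider the following hypothetical scoring scheme. The four dimensions follow the framework in Table 1, but the 1-7 rating scale, the equal weighting, and the example values are invented for illustration; they are not part of any validated instrument.

```python
from dataclasses import dataclass

@dataclass
class TrustExpectation:
    """Hypothetical 1-7 ratings for the four framework components."""
    intent: float      # perceived benevolence of the brand's purpose
    competence: float  # perceived ability to achieve the operator's goals
    method: float      # perceived appropriateness of how the system operates
    history: float     # perceived track record of the brand in this industry

    def expected_performance(self) -> float:
        """Illustrative aggregate: the unweighted mean of the four ratings."""
        return (self.intent + self.competence + self.method + self.history) / 4

# Invented example: a brand rated strong on intent and history but weak
# on competence and method in the automotive context.
ratings = TrustExpectation(intent=6.0, competence=2.0, method=2.0, history=5.0)
print(ratings.expected_performance())  # 3.75
```

In practice, dimension weights would need to be derived empirically, for example through factor analysis of questionnaire responses, rather than assumed equal.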
For example, consider the Dyson brand. What is the first thing that comes to mind? Probably a vacuum, maybe a fan, or even a hairdryer. But the first thing that comes to mind is probably not an electric vehicle. So it might be surprising to learn that Dyson plans to release a zero-emission electric vehicle by 2020 (Stewart, 2017).
Due to the context-dependent nature of trust, the Dyson brand would score very differently on a performance expectations scale when considering a future vacuum versus a future vehicle. Based on the trust-based considerations from Table 1, how would a Dyson vehicle be expected to perform?
First, consider their intent. Why are they producing a vehicle? Will their product benefit the operator? The Dyson brand already has other successful products related to air treatment on the market, and, as a brand, they value innovation. Their other products provide convenient and safe solutions, as demonstrated by their bagless vacuum cleaners and bladeless fans. Therefore, their intent seems very good.
However, their competence in the automobile industry seems weak. Will the automation successfully achieve the operator's goals? Does the brand have the appropriate expertise to develop this type of automation? Probably not; the qualities of a good vacuum cleaner are very different from those of a good vehicle. Their method is hard to even imagine. It is difficult to anticipate how the system would approach difficult situations and inform, or communicate with, the operator. Therefore, their expected method seems negative.
When considering history, the Dyson brand has overall been very successful. Desirable outcomes have been achieved in the past. Their products are reliable and considered top of the line in the vacuum and air-treatment market. However, extending the brand into the automotive industry is a far leap from their past performance, so one might not expect a Dyson vehicle to be the most trustworthy on the market.
Autonomous vehicle brands should use this type of
analysis to identify areas where trust in their system is lacking.
Engineers, designers, and marketers would all benefit from
this information. Understanding how each area of
concentration is related to performance expectations would
facilitate collaboration to create a better system. This type of
analysis provides insight for system functionality, usability,
and desirability as whole. It involves everything from
technical capability to the system’s portrayal and presentation
because each component is never without the others.
CONCLUSION
Ultimately, autonomous vehicle system performance
will always depend on the user, or the human(s) it is
interacting with, their feelings and willingness to adapt their
behavior and accept the system (Van Geenhuizen & Nijkamp,
2003).
Traditionally, the field of Human Factors has been
dedicated to designing systems for optimal performance.
Often desirability is overlooked. Consumer psychology and
branding research often focus on desirability to the exclusion
of human system performance. Here we integrate these two branches to better describe expectations for human-automation interaction when a brand is involved.
Desirability is associated with feelings of security, well-
being, confidence, excitement, and satisfaction (Jordan, 1998).
Often specific design characteristics influence these feelings
and system trust (Lee & See, 2004).
Design choices often reflect a brand and are sometimes exaggerated by marketing tactics. In the human-automation interaction literature, brand is often excluded from the discussion of automation performance. However, it is a
relevant factor in the evaluation of human-automation
interactions, especially in the automotive industry. Perhaps
instead of the traditional human-automation relationship, a
relationship exists between the human and the automation in
the context of its brand.
A deeper understanding of the factors that influence a
human's expectations for different brands of autonomous
vehicle systems would help producers create a better match
between what the user thinks the vehicle is capable of and
what it is actually capable of. Designing with trust-based
expectations in mind could improve system safety and
desirability.
REFERENCES
Aaker, D. A. (1991). Managing brand equity: capitalizing on
the value of a brand name. New York: The Free Press.
Aaker, J. L. (1997). Dimensions of Brand Personality. Journal
of Marketing Research, 34(2), 347–356.
Bennett, P. D. (1995). Dictionary of Marketing Terms.
Chicago: American Marketing Association.
Boudette, N. E. (2017, November 01). Volkswagen Sales in U.S. Rebound After Diesel Scandal. Retrieved February 10, 2018, from https://www.nytimes.com/2017/11/01/business/volkswagen-sales-diesel.html
Branaghan, R. J., & Hildebrand, E. A. (2011). Brand
personality, self-congruity, and preference: A
knowledge structures approach. Journal of Consumer
Behaviour, 10(5), 304–312.
Carlson, M. S., Desai, M., Drury, J. L., Kwak, H., & Yanco, H. A. (2013). Identifying Factors that Influence Trust in Automated Cars and Medical Diagnosis Systems. The Intersection of Robust Intelligence and Trust in Autonomous Systems: Papers from the AAAI Spring Symposium, 20–27.
Choi, J. K., & Ji, Y. G. (2015). Investigating the Importance of
Trust on Adopting an Autonomous Vehicle.
International Journal of Human-Computer Interaction,
31(10), 692–702.
Deighton, John (1992). The Consumption of Performance.
Journal of Consumer Research, 19(3), 362-372.
Delgado-Ballester, E. (2003). Development and validation of a
brand trust scale. International Journal of Market
Research, 45(1), 35–54.
Desai, M., Medvedev, M., Vázquez, M., Mcsheehy, S.,
Gadea-Omelchenko, S., Bruggeman, C., Yanco, H.
(2012). Effects of changing reliability on trust of robot
systems. Proceedings of the seventh annual ACM/IEEE
international conference on Human-Robot Interaction -
HRI 12.
Ekman, F., Johansson, M., & Sochor, J. (2018). Creating
Appropriate Trust in Automated Vehicle Systems: A
Framework for HMI Design. IEEE Transactions on
Human-Machine Systems, 48(1), 95-101.
Fiske, S. T., Cuddy, A. J. C., & Glick, P. (2007). Universal
dimensions of social cognition: warmth and
competence. Trends in Cognitive Sciences, 11(2), 77–
83.
Freling T.H., & Forbes L. P. (2005). An examination of brand
personality through methodological triangulation.
Journal of Brand Management 13(2), 148–162.
Hoff, K. A., & Bashir, M. (2014). Trust in Automation:
Integrating empirical evidence on factors that influence
trust. Human Factors, 57(3), 407-434.
Keller, K. L. (1993). Conceptualizing, Measuring, and
Managing Customer-Based Brand Equity. Journal of
Marketing, 57(1), 1-22.
Kim, K. H., Kim, K. S., Kim, D. Y., Kim, J. H., & Kang, S. H.
(2008). Brand equity in hospital marketing. Journal of
Business Research, 61(1), 75–82.
Kotler, P., & Andreasen, A. R. (1991). Strategic marketing for
nonprofit organizations. Englewood Cliffs N.J:
Prentice-Hall.
Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35, 1243–1270.
Lee, J. D., & See, K. A. (2004). Trust in automation: designing for appropriate reliance. Human Factors, 46(1), 50–80.
Mayer, R., Davis, J., & Schoorman, F. (1995). An integration
model of organizational trust. Academy of Management
Review, 20(3), 709–734.
Merritt, S., & Ilgen, D. (2008). Not All Trust Is Created Equal:
Dispositional and History-Based Trust in Human
Automation Interactions. Human Factors: The Journal
of the Human Factors and Ergonomics Society, 50(2),
194–210.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation Bias: Decision Making and Performance in High-Tech Cockpits. The International Journal of Aviation Psychology, 8(1), 33–45.
Parasuraman, R., & Riley, V. (1997). Humans and
Automation: Use, Misuse, Disuse, Abuse. Human
Factors: The Journal of the Human Factors and
Ergonomics Society, 39(2), 230-253.
Peter, J. P., & Ryan, M. J. (1976). An investigation of
perceived risk at the brand level. Journal of Marketing
Research, 13(2), 184.
Rempel, J.K., Holmes, J.G. & Zanna, M.P. (1985). Trust in
close relationships. Journal of Personality and Social
Psychology, 49, 95-112.
Rotter, J. B. (1980). Interpersonal trust, trustworthiness, and
gullibility. American Psychologist, 35(1), 1-7.
Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P.
A. (2016). A meta-analysis of factors influencing the
development of trust in automation: Implications for
understanding autonomy in future systems. Human
Factors: The Journal of the Human Factors and
Ergonomics Society, 58(3), 377–400.
Sheridan, T. B. (1992). Introduction. In Telerobotics,
Automation, and Human Supervisory Control (pp. 1-3).
Cambridge (Mass.): MIT Press.
Stewart, J. (2017, October 02). Dyson's Bid to Build an Electric Car Just Might Work. Retrieved November 25, 2017, from https://www.wired.com/story/dyson-electric-car/
Thomson, M., MacInnis, D. J., & Whan Park, C. (2005). The
Ties That Bind: Measuring the Strength of Consumers’
Emotional Attachments to Brands. Journal of
Consumer Psychology, 15(1), 77–91.
United States, National Highway Traffic Safety
Administration, U.S. Department of Transportation.
(2017). Automated driving systems 2.0: a vision for
safety.
Van Geenhuizen, M., & Nijkamp, P. (2003). Coping with
Uncertainty in the field of new transport technology.
Transportation Planning and Technology. 26(6), 449-
467.
Zeithaml, V. A. (1988). Consumer Perceptions of Price,
Quality, and Value: A Means-End Model and Synthesis
of Evidence. Journal of Marketing, 52(3), 2-22.
... At the heart of belief theories lies the concept of trust. Given the uncertainty or risk revolving around AVs, such as their ability to perform their tasks in a predictable, accurate, understandable and responsive manner (Choi & Ji, 2015), consumers' trust in AVs is crucial for their adoption (Celmer et al., 2018; Kaur & Rampersad, 2018; Liu, Yang, et al., 2019; Zhang et al., 2019). ...
Article
The deployment of automated vehicles (AVs) can offer many benefits to the environment and society. Trust plays a crucial role in consumers' adoption of AVs. This study examines the determinants and effects of trust on consumers' adoption of AVs. A theoretical model drawing on trust theory, the health belief model, and attitude theory is presented. The results show that the health belief model's components, comprising perceived safety threat, expectation outcomes, cues to action, and self-efficacy, influence consumers' trust toward AVs. Consequently, trust has direct and indirect effects on consumers' adoption of AVs via attitude. Bootstrapping analysis suggests a mediated relationship. The findings have implications for a wide array of transport and industry policies relating to the design of AVs, transport infrastructure development, public communication and marketing, and education and training.
Article
Full-text available
Automated driving (AD) is one of the key directions in the intelligent vehicles field. Before fully automated driving becomes available, we are at the stage of human-machine cooperative driving: drivers share driving control with automated vehicles. Trust in automated vehicles plays a pivotal role in traffic safety and the efficiency of human-machine collaboration. It is vital for drivers to maintain an appropriate trust level to avoid accidents. We proposed a dynamic trust framework to elaborate the development of trust and the underlying factors affecting it. The dynamic trust framework divides the development of trust into four stages: dispositional, initial, ongoing, and post-task trust. Based on operator characteristics (human), system characteristics (automated driving system), and situation characteristics (environment), the framework identifies potential key factors at each stage and the relations between them. According to the framework, trust calibration can be improved through three approaches: trust monitoring, driver training, and optimizing HMI design. Future research should pay attention to four perspectives: the influence of driver and HMI characteristics on trust, the real-time measurement and functional specificity of trust, the mutual trust mechanism between drivers and AD systems, and ways of improving the external validity of trust studies.
Article
Full-text available
Objective: We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. Background: Trust is increasingly important in the growing need for synergistic human-machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human-robot interaction to include all of automation interaction. Method: We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. Results: The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each of which represented medium effects. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors proved insufficient in number to calculate at this time. Conclusion: Findings provide a quantitative representation of factors influencing the development of trust in automation as well as identify additional areas of needed empirical research. Application: This work has important implications to the enhancement of current and future human-automation interaction, especially in high-risk or extreme performance environments.
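For readers unfamiliar with the ḡ statistic reported above: Hedges' g is a standardized mean difference with a small-sample bias correction, the unit in which pairwise effect sizes are typically pooled in such meta-analyses. A minimal sketch follows; the trust ratings are fabricated for illustration and are not data from the meta-analysis.

```python
from statistics import mean, stdev

def hedges_g(treatment, control):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across the two groups
    sp = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    d = (mean(treatment) - mean(control)) / sp  # Cohen's d
    # Small-sample bias correction factor J
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical trust ratings under high- vs. low-reliability automation (fabricated)
high = [4.1, 4.5, 3.9, 4.4, 4.2, 4.6]
low = [3.2, 3.6, 3.1, 3.5, 3.3, 3.0]
g = hedges_g(high, low)  # positive g: higher trust in the high-reliability condition
```

A meta-analytic ḡ is then a weighted average of many such per-study g values, typically weighted by inverse variance.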
Article
Perceived risk is conceptualized in terms of expected negative utility associated with automobile brand preferences. Empirical evidence supports the notion that importance of loss is more useful as a segmentation variable than as a component in a multiplicative model. The findings also indicate that probability of loss may operate at the handled risk level and importance of loss at the inherent risk level.
Article
Although a considerable amount of research in personality psychology has been done to conceptualize human personality, identify the “Big Five” dimensions, and explore the meaning of each dimension, no parallel research has been conducted in consumer behavior on brand personality. Consequently, an understanding of the symbolic use of brands has been limited in the consumer behavior literature. In this research, the author develops a theoretical framework of the brand personality construct by determining the number and nature of dimensions of brand personality (Sincerity, Excitement, Competence, Sophistication, and Ruggedness). To measure the five brand personality dimensions, a reliable, valid, and generalizable measurement scale is created. Finally, theoretical and practical implications regarding the symbolic use of brands are discussed.
Article
The author presents a conceptual model of brand equity from the perspective of the individual consumer. Customer-based brand equity is defined as the differential effect of brand knowledge on consumer response to the marketing of the brand. A brand is said to have positive (negative) customer-based brand equity when consumers react more (less) favorably to an element of the marketing mix for the brand than they do to the same marketing mix element when it is attributed to a fictitiously named or unnamed version of the product or service. Brand knowledge is conceptualized according to an associative network memory model in terms of two components, brand awareness and brand image (i.e., a set of brand associations). Customer-based brand equity occurs when the consumer is familiar with the brand and holds some favorable, strong, and unique brand associations in memory. Issues in building, measuring, and managing customer-based brand equity are discussed, as well as areas for future research.
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation. Copyright © 2004, Human Factors and Ergonomics Society. All rights reserved.
Article
To enrich the limited and recent work in existence on relational phenomena in the consumer-brand domain, the authors focus on the concept of brand trust. The absence of a widely accepted measure of this concept is surprising given that: (1) trust is viewed as the cornerstone and one of the most desired qualities in a relationship; and (2) it is the most important attribute a brand can own. In this context, this research reports the results of a multi-step study to develop and validate a multidimensional brand trust scale drawn from the conceptualisation of trust in other academic fields. Multi-step psychometric tests demonstrate that the new brand trust scale is reliable and valid. Both theoretical and managerial implications are presented.
Article
While automated vehicle technology progresses, potentially leading to a safer and more efficient traffic environment, many challenges remain within the area of human factors, such as user trust in automated driving (AD) vehicle systems. The aim of this paper is to investigate how an appropriate level of user trust in AD vehicle systems can be created via human–machine interaction (HMI). A guiding framework for implementing trust-related factors into the HMI interface is presented. This trust-based framework incorporates usage phases, AD events, trust-affecting factors, and levels explaining each event from a trust perspective. Based on the research findings, the authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, "isolated" events, for example understanding that trust formation is a dynamic process that starts long before a user's first contact with the system, and continues long thereafter. Furthermore, factors affecting trust change both during user interactions with the system and over time; thus, HMI concepts need to be able to adapt. Future work should be dedicated to understanding how trust-related factors interact, as well as validating and testing the trust-based framework.
Article
Our research goals are to understand and model the factors that affect trust in automation across a variety of application domains. For the initial surveys described in this paper, we selected two domains: automotive and medical. Specifically, we focused on driverless cars (e.g., Google Cars) and automated medical diagnoses (e.g., IBM's Watson). There were two dimensions for each survey: the safety criticality of the situation in which the system was being used and name-brand recognizability. We designed the surveys and administered them electronically, using Survey Monkey and Amazon's Mechanical Turk. We then performed statistical analyses of the survey results to discover common factors across the domains, domain-specific factors, and implications of safety criticality and brand recognizability on trust factors. We found commonalities as well as dissimilarities in factors between the two domains, suggesting the possibility of creating a core model of trust that could be modified for individual domains. The results of our research will allow for the creation of design guidelines for autonomous systems that will be better accepted and used by target populations. Copyright © 2013, Association for the Advancement of Artificial Intelligence. All rights reserved.