Trust in Branded Autonomous Vehicles & Performance
Expectations: A Theoretical Framework
Natalie Celmer, Russell Branaghan, & Erin Chiou
Arizona State University
Future autonomous vehicle systems will be diverse in design and functionality because they will be produced by different brands. These brand differences may yield different levels of trust in the automation, and therefore different expectations for vehicle performance. Perceptions of system safety, trustworthiness, and performance are important because they help users determine how much they can rely on the system. Based on a review of the literature, the system’s perceived intent, competence, method, and history could be differentiating factors. Importantly, these perceptions are based on both the automated technology and the brand’s personality. The following theoretical framework reflects a Human Systems Engineering approach to considering how brand differences affect perceived trustworthiness, performance expectations, and the ultimate safety of autonomous vehicles.
INTRODUCTION
Rapid technological innovation introduces great
uncertainty in how people interact with the technology, and in
system safety (Van Geenhuizen & Nijkamp, 2003). Consider a
pedestrian waiting to cross a street; will a fully autonomous
vehicle stop for them? How will the vehicle communicate to
the pedestrian that they may cross? What will the pedestrian
expect the car to do?
The answers may depend on the brand of the vehicle.
One brand might always stop, whereas another might only
stop if the pedestrian is a specific distance from the curb.
Many technology corporations and automobile manufacturers
are developing autonomous vehicles. The sheer variety of
companies with different technologies and design approaches
is likely to yield great diversity. In fact, this situation exists
already. Park-Assist (BMW) and Autopark (Tesla) are both autonomous parking features; however, their Human-Machine Interfaces (HMIs) are different and require different user inputs.
For instance, Tesla employs a streamlined process; about three actions are required to parallel park the vehicle using Autopark. The experience with BMW, however, is more cognitively involved. The driver must initiate the Park-Assist feature, press the brake, turn on the blinker, read a pop-up message stating that they are liable for the vehicle’s ultimate performance, confirm this by pressing OK, press and hold the Park-Assist button for the duration of the parking process, and release the brake when the process is complete.
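Counting the required driver actions makes the contrast concrete. The sketch below is illustrative only: the BMW step list is paraphrased from the description above, and treating each clause as one discrete action is an assumption, not a manufacturer specification.

```python
# Driver actions for BMW Park-Assist, paraphrased from the description above.
# Treating each clause as one discrete action is an assumption.
BMW_PARK_ASSIST_STEPS = [
    "initiate the Park-Assist feature",
    "press the brake",
    "turn on the blinker",
    "read the liability pop-up message",
    "confirm by pressing OK",
    "press and hold the Park-Assist button during parking",
    "release the brake when parking completes",
]

# Tesla's Autopark is described above only as requiring "about three actions".
TESLA_AUTOPARK_STEPS = 3

print(len(BMW_PARK_ASSIST_STEPS))                          # 7 required actions
print(len(BMW_PARK_ASSIST_STEPS) - TESLA_AUTOPARK_STEPS)   # 4 more than Autopark
```

Even this crude count suggests how two implementations of the same nominal feature can impose very different interaction demands on the driver.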
Safety guidelines and standard requirements do not
remedy these inconsistencies. The U.S. Department of
Transportation and the National Highway Traffic Safety
Administration (NHTSA) provide guidelines in Automated
Driving Systems 2.0: A Vision for Safety (September 2017).
However, these standards still allow producers great freedom
in implementation. For example, one guideline states: “HMI
design should also consider the need to communicate
information regarding the Automated Driving System’s state
of operation relevant to the various interactions it may
encounter and how this information should be communicated”
(p. 10). Just as a rubric for an academic assignment does not
lead students to submit identical projects, the NHTSA
guidelines address broad safety concerns and leave room for
variety in system designs and configurations.
In the automotive industry, trustworthiness of a vehicle is
closely tied to its perceived safety (Peter & Ryan, 1976).
When considering autonomous vehicles, automation and brand
associations both contribute to expectations of, and trust in, the system. This paper provides a review of the literature in the
areas of branding and human factors. In doing so, it
synthesizes the relationship between brand personality and
trust in automation. Additionally, this paper introduces a
theoretical framework to conceptualize the formation of trust-
based expectations for different brands of autonomous
vehicles. It focuses on trust development before the user interacts with the system, the stage that Ekman, Johansson, and Sochor (2018) refer to as preuse.
The framework incorporates brand and system intent
towards the user, competence in producing reliable
automation, and the method of delivery, similar to purpose,
performance, and process (Lee & See, 2004). A fourth
component adds history based on past interactions, brand
reputation, and previous results produced by the brand.
Though largely inspired by the model of trust proposed
by Lee & See (2004), this framework is focused on the
influence of brand associations and brand personality (Aaker,
1997) on prospective trust, or trust expectations prior to
observing or experiencing the system’s performance.
Human-Automation Interaction
Trust in automation is a belief that another agent will
help in uncertain, or vulnerable, situations (Lee & See, 2004).
For autonomous vehicle systems, trust is based on the
expectation that the system is capable of improving the driving
task and/or performance. Trust depends on how successful the
human expects the automation to be (Lee & Moray, 1992; Sheridan, 1992; Lee & See, 2004). This expectation guides
human behavior with a system (Mosier, Skitka, & Heers,
1998).
Copyright 2018 by Human Factors and Ergonomics Society. DOI 10.1177/1541931218621398
Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting 1761
The amount of trust a user has in a system should reflect
the system’s capabilities, especially when monitoring and
occasional intervention are required. Otherwise, when trust is
not appropriately calibrated, human-automation systems will
break down.
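The calibration idea above can be sketched as a toy comparison between a user's trust level and the system's actual capability. The 0–1 scale and the tolerance threshold below are illustrative assumptions, not values from the literature.

```python
def trust_calibration(trust, capability, tolerance=0.1):
    """Classify a user's trust relative to system capability.

    trust, capability: illustrative scores on a 0..1 scale (assumed scale).
    tolerance: how much mismatch still counts as calibrated (assumed value).
    """
    gap = trust - capability
    if abs(gap) <= tolerance:
        return "calibrated"   # reliance roughly matches capability
    if gap > 0:
        return "overtrust"    # risk of misuse or automation bias
    return "distrust"         # risk of disuse

# A user who trusts a modest system almost completely is overtrusting:
print(trust_calibration(trust=0.95, capability=0.60))  # overtrust
print(trust_calibration(trust=0.55, capability=0.60))  # calibrated
```

The point of the sketch is only that calibration is a relation between two quantities; when the gap grows in either direction, the human-automation system is at risk of breaking down.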
Various design characteristics affect expectations and
trust in autonomous systems (Lee & See, 2004). Trust
perceptions tend to increase for highly transparent systems,
more technically competent systems, and systems with
acceptable situation management (Choi & Ji, 2015). Trust
increases when the perceived intent of the system design
aligns with the user’s purpose for use (Hoff & Bashir, 2014;
Ekman, Johansson, & Sochor, 2018).
System designs should address situational characteristics
such as perceived risks, workload, and task difficulty, which
have been shown to impact trust in automation (Desai et al.,
2012). However, expectations for system functionality, true
system capability, and the user’s role are often mismatched.
An increase in perceived trust is linked to a decrease in
perceived risk (Choi & Ji, 2015). An example of this is automation bias, in which a user favors the automation’s output over their own. This often results from an overtrusting attitude and leads to an inappropriate level of reliance on the system (Mosier, Skitka, & Heers, 1998). Other instances resulting from inappropriate calibration of trust include misuse, disuse, and abuse (Parasuraman & Riley, 1997).
However, autonomous vehicles will vary in these
characteristics. Therefore, perceptions of trust for autonomous
vehicle systems will ultimately stem from various designs and
experiences created by different brands.
Brand Personality and Associations
A brand is a name, term, or symbol that distinguishes a
seller’s product or service from others (Bennett, 1995). An
important part of branding entails the accumulation of
associations and perceptions in memory that are linked to a brand
(Aaker, 1991). In essence, it includes what consumers know
(or believe) about products. According to Deighton (1992),
brands “promise a future performance”. They set expectations
for the quality of their product (Keller, 1993). Trust in the
brand is established through fulfillment of these expectations
over time (Delgado-Ballester, 2003). From the moment the
brand is born, it is associated with specific values, limitations,
and target consumer groups (Kotler & Andreasen, 1991).
Brands are also closely linked to the performance and quality
of their products and services (Keller, 1993; Zeithaml 1988),
specifically, how reliable and successful the product is at
fulfilling its intended purpose.
Importantly, people tend to spontaneously ascribe human
personality characteristics to brands, creating a brand
personality. To categorize these personalities, Jennifer Aaker
(1997) developed an empirically derived framework of five
dimensions of Brand Personality, and 15 associated facets,
shown in Figure 1. These dimensions, Sincerity, Excitement,
Competence, Sophistication, and Ruggedness (Aaker, 1997)
are useful for describing and summarizing brand associations.
These Brand Personality dimensions and facets are also
useful for differentiating brands from one another (Freling & Forbes, 2005). Considering the pedestrian example, one brand
of autonomous vehicle may stop for the pedestrian to cross
while another may not. The brand personalities of each may
help the pedestrian decide to walk or not. In any interaction
with autonomous vehicles, a person may simply base trust on
associations. They may take specific actions surrounding an
autonomous vehicle based on its brand personality.
Figure 1: Brand Personality Framework (Aaker, 1997)
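Since the figure itself does not reproduce here, the five dimensions and their 15 facets can be listed directly; the groupings below are transcribed from Aaker's (1997) framework as commonly summarized.

```python
# Aaker's (1997) Brand Personality dimensions and their 15 facets.
BRAND_PERSONALITY = {
    "Sincerity":      ["down-to-earth", "honest", "wholesome", "cheerful"],
    "Excitement":     ["daring", "spirited", "imaginative", "up-to-date"],
    "Competence":     ["reliable", "intelligent", "successful"],
    "Sophistication": ["upper class", "charming"],
    "Ruggedness":     ["outdoorsy", "tough"],
}

# Sanity check: five dimensions, fifteen facets in total.
assert len(BRAND_PERSONALITY) == 5
assert sum(len(facets) for facets in BRAND_PERSONALITY.values()) == 15
```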
Often, brand exposure evokes strong, automatic, and subconscious inclinations and feelings about a product
(Thomson et al., 2005). Understanding what makes
autonomous vehicles desirable and trustworthy, regardless of
the risks and uncertainty involved, goes beyond marketing. It
affects human interaction with the product (Lee & See, 2004).
Brand Trust and Human-Automation Trust
Trust is a fundamental component of good relationships;
it evolves based on past experience (Rempel et al., 1985;
Rotter, 1980). Regardless of individual differences in
willingness to trust others, people identify patterns in
intentions, behaviors, motivations and qualities linked to a
positive outcome (Rotter, 1980; Rempel et al., 1985). Trust is
not dichotomous. It is not simply a matter of trust or distrust,
instead, levels of trust fall along a continuum.
Brand trust is the level of security associated with a
brand. It is based on the perceived reliability of the brand, and
how responsible it is for the welfare of the consumer
(Delgado-Ballester, 2003). Brand trust is also context
dependent. It is specific to the nature of the situation and the
other agents involved (Mayer et al., 1995; Schaefer et al.,
2016). Trust-based relationships between consumers and
brands resemble those between humans and automation. Similarly,
human-automation trust is based on expectations of system
capabilities.
History-based trust focuses on past performance (Merritt
& Ilgen, 2008), and how it relates to future interactions.
Brands form relationships with consumers by meeting, or
exceeding, their expectations. In this way, brands build trust
by being predictable (Mayer et al., 1995), providing good
experiences time after time. Automation is designed to build
trust in the same way. A trustworthy system is simple and
understandable. It acts in the operator’s best interest, is
designed to induce proper trust calibration, shows
performance history and meets the operator’s performance
expectations (Lee & See, 2004). Autonomous vehicle systems
produced by different brands will vary in these characteristics.
Carlson et al. (2013) demonstrated that trust in a
vehicle’s capabilities was higher for autonomous vehicles
created by a well-known brand than for an unknown brand.
This work identified factors that influence trust in branded
autonomy, such as statistics of past performance, extent of
research on the car’s reliability, predictability, credibility of
the engineers, technical capabilities, and possibility for system
failure.
These influential factors align with the foundations for
trust in automation proposed by Lee & See (2004): Purpose,
Performance, and Process. In this model, Purpose refers to
the user’s perception of the system’s intention. Performance
includes both the capability to attain the goal, or reliability,
and predictability based on previous experience. Process
represents the user’s perception of how the automation works.
BRANDED AUTONOMY: A THEORETICAL
FRAMEWORK
An understanding of factors that influence trust in
human-autonomous vehicle interactions can help inspire safer
and more desirable systems. Since different brands will
produce different variants of autonomous vehicle technology,
their systems will differ across these factors. How might trust
in automation vary between multiple well-known brands or
companies? Subsequently, how might expectations for
performance and safety differ? This is important because, based on the literature on human-automation interaction,
differences in system trust affect user performance and
ultimate safety.
Dimensions of Brand Personality are useful in
differentiating brands from one another (Freling & Forbes,
2005). Much like personalities of other humans help determine
how best to interact with them (Fiske et al., 2007), brand
personalities and brand image associations may serve the same
function (Rossiter & Percy, 1987). If a user trusts the brand,
they expect it to perform in a certain way or, simply, to produce a positive result (Kim et al., 2008). Together,
trustworthiness dimensions and certain aspects of brand
personality form performance and safety expectations. They
may influence whether or not a user will give an autonomous
system the benefit of the doubt in an uncertain situation.
This framework combines components of trust in
automation (Lee & See, 2004; Carlson et al., 2013; Choi & Ji, 2015) and dimensions of brand personality (Aaker, 1997). Table 1 summarizes and organizes the literature into
categories of intent, competence, method, and history.
Intent relates to Purpose as discussed by Lee and See (2004): why the automation was developed. This
component of trust in automation is associated with faith and
benevolence. Similarly, Sincerity is a dimension of Brand
Personality (Aaker, 1997). Sincerity is concerned with four facets of the brand’s personality: how down-to-earth, honest, wholesome, or cheerful the brand is. An
autonomous vehicle can be produced and designed for the
purpose of augmenting the driving experience. However, if the
brand’s intent is perceived to be insincere or dishonest, this
may taint the trustworthiness perception of the system as a
whole. For example, in 2016, Volkswagen was charged for
illegal vehicle software that bypassed standards for diesel
emissions. Their dishonesty was followed by a substantial
decrease in sales (Boudette, 2017).
Competence is based on perceptions of the system’s
ability to achieve the operator’s goals. Lee and See (2004)
relate this to reliability and predictability of the system.
Competence and reliability are closely tied to trustworthiness
in social psychology (Fiske et al., 2007), human-automation
interaction (Lee & See, 2004), and vehicle systems (Carlson et
al., 2013; Choi & Ji, 2015). Therefore, it might be appropriate
for a brand of autonomous vehicle to be associated with the
competent dimension of Brand Personality, supported by the
three facets reliable, intelligent, and successful. In measuring
brand personalities of various automobile brands, Branaghan
& Hildebrand (2011) found Audi, BMW, Lexus and Porsche
to be highly competent vehicle brands.
Table 1. Summary Table

Category | Literature | Considerations
Intent | Purpose (Lee & See, 2004) | Why is the automation being developed?
Intent | Sincerity (Aaker, 1997) | Will automation benefit the operator?
Competence | Performance (Lee & See, 2004); Technical Capabilities (Carlson et al., 2013) | Will the automation successfully achieve the operator’s goals?
Competence | Competence (Aaker, 1997) | Does the brand have the appropriate expertise to develop this type of automation?
Method | Process (Lee & See, 2004) | How will the automation work? What is the operator’s experience?
Method | System Transparency & Acceptable Situation Management (Choi & Ji, 2015) | How will the system approach difficult situations and inform, or communicate with, the operator?
History | Performance (Lee & See, 2004); Predictability (Carlson et al., 2013) | Was the desired outcome achieved in the past? How well? Will past performance endure in this situation?
History | Brand Personality (Aaker, 1997) | How does the brand’s identity and reputation fit in this specific industry?
Method concerns the user’s experience with the system.
Much like Process (Lee & See, 2004), it is dependent on how
the automation works. It is less concerned with specific
actions; instead, it focuses on the appropriateness of the
system’s approach to various situations. For example, consider
a vehicle associated with a daring brand personality (Aaker,
1997). This association may increase risk perceptions of the
whole system. An operator may not feel as secure when their
“daring” vehicle approaches a pedestrian crossing the street.
Similarly, an observant pedestrian may decide not to cross.
History represents a summary of past performance. Lee
and See (2004) include both current and historical operation of
the autonomy. Their description of Performance included how
well the system demonstrates expertise. However, the current
framework focuses on prospective trust: what one expects to occur before experiencing the system. It is focused on how successful the brand has been in its specific industry.
History encapsulates the brand’s overall reputation, and how
their previous work forecasts future work in the context of
autonomous vehicles.
The considerations listed on the right side of Table 1 are
based on the literature. They summarize how each foundation
of trust informs an ultimate performance expectation, as
illustrated by the Branded Autonomy Framework (Figure 2).
To establish higher levels of trust in the system, the
brand image and technology work together. Autonomous
vehicles should be designed with a clear intent to improve the
driving experience and benefit the user. Systems should
exhibit high reliability, with well-established technical abilities shown to have a very low chance of failure. Systems should
be easy to understand. They should provide clear feedback to
the user that communicates system status and facilitates any
operator intervention. Additionally, systems should be
predictable, producing consistent and positive outcomes.
Figure 2: This Framework for Branded Autonomy illustrates how perceived
intent, competence, method, and history inform performance expectations.
In the future, a questionnaire could be developed and
validated to measure expected performance of a branded autonomous vehicle based on perceived intent, competence,
method and history of the system. Results could be used to
inform autonomous vehicle designs that facilitate appropriate
trust calibration. If producers knew the average level of user
trust in their systems, designs could be crafted to create a
better match between user trust and system capabilities.
For example, consider the Dyson brand. What is the first
thing that comes to mind? Probably a vacuum, maybe a fan, or even a hairdryer. But the first thing that comes to mind is
probably not an electric vehicle. So, it might be shocking to
learn that Dyson plans to release a zero-emission electric
vehicle by 2020 (Stewart, 2017).
Due to the context-dependent nature of trust, the Dyson brand would score very differently on a performance
expectations scale when considering a future vacuum versus a
future vehicle. Based on the trust-based considerations from
Table 1, how would a Dyson vehicle be expected to perform?
First, consider their intent. Why are they producing a
vehicle? Will their product benefit the operator? The Dyson
brand already has other successful products related to air
treatment on the market and as a brand, they value innovation.
Their other products provide convenient and safe solutions, as
demonstrated by their bag-less vacuum cleaners and bladeless
fans. Therefore, their intent seems very good.
However, their competence in the automobile industry
seems weak. Will the automation successfully achieve the
operator’s goals? Does the brand have the appropriate
expertise to develop this type of automation? Probably not; the qualities of a good vacuum cleaner are very different from those of a good vehicle. Their method is hard to even imagine. It is
difficult to anticipate how the system will approach difficult
situations and inform, or communicate with, the operator.
Therefore, their expected method seems negative.
When considering history, overall the Dyson brand has
been very successful. Desirable outcomes have been achieved
in the past. Their products are reliable and considered top of the line in the vacuum and air-treatment market. However, extending the brand into the automotive industry is a far leap, so one might not expect a Dyson vehicle to be the most trustworthy on the market.
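The qualitative Dyson analysis above can be expressed as a toy scoring model over the four framework components. The 1–5 ratings and the equal weighting below are illustrative assumptions chosen to mirror the discussion, not measured data.

```python
# Hypothetical 1-5 ratings reflecting the qualitative analysis above:
# strong intent and history, weak competence and method in the vehicle domain.
dyson_vehicle = {"intent": 4, "competence": 2, "method": 2, "history": 3}
dyson_vacuum  = {"intent": 4, "competence": 5, "method": 4, "history": 5}

def performance_expectation(profile, weights=None):
    """Aggregate component ratings into one prospective-trust score.

    Equal weighting is an assumption; a validated questionnaire could
    estimate the weights empirically instead.
    """
    weights = weights or {component: 1.0 for component in profile}
    total = sum(weights[c] * profile[c] for c in profile)
    return total / sum(weights.values())

print(performance_expectation(dyson_vehicle))  # 2.75
print(performance_expectation(dyson_vacuum))   # 4.5
```

The gap between the two scores illustrates the context dependence of brand trust: the same brand yields very different performance expectations in its home market versus a new one.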
Autonomous vehicle brands should use this type of
analysis to identify areas where trust in their system is lacking.
Engineers, designers, and marketers would all benefit from
this information. Understanding how each area of
concentration is related to performance expectations would
facilitate collaboration to create a better system. This type of
analysis provides insight into system functionality, usability, and desirability as a whole. It involves everything from technical capability to the system’s portrayal and presentation, because no component exists without the others.
CONCLUSION
Ultimately, autonomous vehicle system performance will always depend on the user, or the human(s) the system is interacting with: their feelings, and their willingness to adapt their behavior and accept the system (Van Geenhuizen & Nijkamp, 2003).
Traditionally, the field of Human Factors has been
dedicated to designing systems for optimal performance.
Often desirability is overlooked. Consumer psychology and
branding research often focus on desirability to the exclusion
of human system performance. Here we integrate these two
branches to better describe expectations for human-
automation interaction when a brand is involved.
Desirability is associated with feelings of security, well-
being, confidence, excitement, and satisfaction (Jordan, 1998).
Often specific design characteristics influence these feelings
and system trust (Lee & See, 2004).
Design choices often reflect a brand, and these associations are sometimes exaggerated by marketing tactics. In human-
automation interaction literature, brand is often excluded from
the discussion of automation performance. However, it is a
relevant factor in the evaluation of human-automation
interactions, especially in the automotive industry. Perhaps
instead of the traditional human-automation relationship, a
relationship exists between the human and the automation in
the context of its brand.
A deeper understanding of the factors that influence a
human's expectations for different brands of autonomous
vehicle systems would help producers create a better match
between what the user thinks the vehicle is capable of and
what it is actually capable of. Designing with trust-based
expectations in mind could improve system safety and
desirability.
REFERENCES
Aaker, D. A. (1991). Managing brand equity: capitalizing on
the value of a brand name. New York: The Free Press.
Aaker, J. L. (1997). Dimensions of Brand Personality. Journal
of Marketing Research, 34(2), 347–356.
Bennett, P. D. (1995). Dictionary of Marketing Terms.
Chicago: American Marketing Association.
Boudette, N. E. (2017, November 01). Volkswagen Sales in
U.S. Rebound After Diesel Scandal. Retrieved February
10, 2018, from https://www.nytimes.com/2017/11/01/business/volkswagen-sales-diesel.html
Branaghan, R. J., & Hildebrand, E. A. (2011). Brand
personality, self-congruity, and preference: A
knowledge structures approach. Journal of Consumer
Behaviour, 10(5), 304–312.
Carlson, M. S., Desai, M., Drury, J. L., Kwak, H., & Yanco, H. A. (2013). Identifying factors that influence trust in automated cars and medical diagnosis systems. The Intersection of Robust Intelligence and Trust in Autonomous Systems: Papers from the AAAI Spring Symposium, 20–27.
Choi, J. K., & Ji, Y. G. (2015). Investigating the Importance of
Trust on Adopting an Autonomous Vehicle.
International Journal of Human-Computer Interaction,
31(10), 692–702.
Deighton, John (1992). The Consumption of Performance.
Journal of Consumer Research, 19(3), 362-372.
Delgado-Ballester, E. (2003). Development and validation of a
brand trust scale. International Journal of Market
Research, 45(1), 35–54.
Desai, M., Medvedev, M., Vázquez, M., Mcsheehy, S.,
Gadea-Omelchenko, S., Bruggeman, C., Yanco, H.
(2012). Effects of changing reliability on trust of robot
systems. Proceedings of the seventh annual ACM/IEEE
international conference on Human-Robot Interaction -
HRI 12.
Ekman, F., Johansson, M., & Sochor, J. (2018). Creating
Appropriate Trust in Automated Vehicle Systems: A
Framework for HMI Design. IEEE Transactions on
Human-Machine Systems, 48(1), 95-101.
Fiske, S. T., Cuddy, A. J. C., & Glick, P. (2007). Universal
dimensions of social cognition: warmth and
competence. Trends in Cognitive Sciences, 11(2), 77–
83.
Freling, T. H., & Forbes, L. P. (2005). An examination of brand
personality through methodological triangulation.
Journal of Brand Management 13(2), 148–162.
Hoff, K. A., & Bashir, M. (2014). Trust in Automation:
Integrating empirical evidence on factors that influence
trust. Human Factors, 57(3), 407-434.
Keller, K. L. (1993). Conceptualizing, Measuring, and
Managing Customer-Based Brand Equity. Journal of
Marketing, 57(1), 1-22.
Kim, K. H., Kim, K. S., Kim, D. Y., Kim, J. H., & Kang, S. H.
(2008). Brand equity in hospital marketing. Journal of
Business Research, 61(1), 75–82.
Kotler, P., & Andreasen, A. R. (1991). Strategic marketing for
nonprofit organizations. Englewood Cliffs N.J:
Prentice-Hall.
Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35, 1243–1270.
Lee, J. D., & See, K. A. (2004). Trust in automation: designing
for appropriate reliance. Human Factors, 46(1), 50–80.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management
Review, 20(3), 709–734.
Merritt, S., & Ilgen, D. (2008). Not All Trust Is Created Equal:
Dispositional and History-Based Trust in Human
Automation Interactions. Human Factors: The Journal
of the Human Factors and Ergonomics Society, 50(2),
194–210.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation bias: Decision making and performance in high-tech cockpits. The International Journal of Aviation Psychology, 8(1), 33–45.
Parasuraman, R., & Riley, V. (1997). Humans and
Automation: Use, Misuse, Disuse, Abuse. Human
Factors: The Journal of the Human Factors and
Ergonomics Society, 39(2), 230-253.
Peter, J. P., & Ryan, M. J. (1976). An investigation of
perceived risk at the brand level. Journal of Marketing
Research, 13(2), 184.
Rempel, J.K., Holmes, J.G. & Zanna, M.P. (1985). Trust in
close relationships. Journal of Personality and Social
Psychology, 49, 95-112.
Rotter, J. B. (1980). Interpersonal trust, trustworthiness, and
gullibility. American Psychologist, 35(1), 1-7.
Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P.
A. (2016). A meta-analysis of factors influencing the
development of trust in automation: Implications for
understanding autonomy in future systems. Human
Factors: The Journal of the Human Factors and
Ergonomics Society, 58(3), 377–400.
Sheridan, T. B. (1992). Introduction. In Telerobotics,
Automation, and Human Supervisory Control (pp. 1-3).
Cambridge (Mass.): MIT Press.
Stewart, J. (2017, October 02). Dyson’s Bid to Build an
Electric Car Just Might Work. Retrieved November 25,
2017, from https://www.wired.com/story/dyson-
electric-car/
Thomson, M., MacInnis, D. J., & Whan Park, C. (2005). The
Ties That Bind: Measuring the Strength of Consumers’
Emotional Attachments to Brands. Journal of
Consumer Psychology, 15(1), 77–91.
United States, National Highway Traffic Safety
Administration, U.S. Department of Transportation.
(2017). Automated driving systems 2.0: a vision for
safety.
Van Geenhuizen, M., & Nijkamp, P. (2003). Coping with
Uncertainty in the field of new transport technology.
Transportation Planning and Technology. 26(6), 449-
467.
Zeithaml, V. A. (1988). Consumer Perceptions of Price,
Quality, and Value: A Means-End Model and Synthesis
of Evidence. Journal of Marketing, 52(3), 2-22.