Article

Direct and indirect scaling of membership functions of probability phrases

Authors:
Rapoport, Wallsten, and Cox

Abstract

A crucial issue in the empirical measurement of membership functions is whether the degree of fuzziness is invariant under different scaling procedures. In this paper a direct and an indirect procedure, magnitude estimation and graded pair-comparison, are compared in the context of establishing membership functions for probability phrases such as probable, rather likely, very unlikely, and so forth. Analyses at the level of individual respondents indicate that: (a) membership functions are stable over time; (b) functions for each phrase differ substantially over people; (c) the two procedures yield similarly shaped functions for a given person-phrase combination; (d) the functions from the two procedures differ systematically, in that those obtained directly dominate, or indicate greater fuzziness than do those obtained indirectly; and (e) where the two differ the indirectly obtained function may be the more accurate one. A secondary purpose of the paper is to evaluate the effects of the modifiers very and rather. Very has a general intensifying effect that is described by Zadeh's concentration model for 7 subjects and by a shift model for no one. The effects of rather are unsystematic and not described by any available model.
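The two modifier models compared in the abstract can be sketched informally. In Zadeh's concentration model, "very X" squares X's membership values; a shift model instead translates the whole function toward higher probabilities. The membership values and grid below are illustrative assumptions, not the paper's data.

```python
# Illustrative sketch of the two models for "very" discussed above, using
# hypothetical membership values for a phrase over the probability grid
# 0.0, 0.1, ..., 1.0. These values are assumptions, not the paper's data.

probs = [i / 10 for i in range(11)]
mu_phrase = [0.0, 0.0, 0.1, 0.2, 0.4, 0.6, 0.9, 1.0, 0.9, 0.7, 0.5]

# Zadeh's concentration model: "very X" squares X's membership values,
# sharpening the function without moving its peak.
mu_very_concentrated = [m ** 2 for m in mu_phrase]

# A shift model instead translates the whole function toward higher
# probabilities by a fixed number of grid steps (here one step, 0.1).
def shifted(mu, steps=1):
    return [0.0] * steps + mu[:len(mu) - steps]

mu_very_shifted = shifted(mu_phrase)

print(max(mu_very_concentrated))  # concentration preserves the peak height
print(probs[mu_very_shifted.index(max(mu_very_shifted))])  # shifted peak: 0.8
```

Under concentration the peak stays at the same probability but interior values drop (0.6 becomes 0.36), which is one way the two models can be distinguished empirically.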


... An alternative approach, which elicits participants' fuller understanding of VPEs (Ho, Budescu, Dhami, & Mandel, 2016), is the 'membership function' paradigm, in which participants are asked to rate the appropriateness of a list of VPEs across the probability continuum (Budescu, Karelitz, & Wallsten, 2003; Budescu & Wallsten, 1990; Dhami & Wallsten, 2005; Fillenbaum, Wallsten, Cohen, & Cox, 1991; Karelitz & Budescu, 2004; Rapoport, Wallsten, & Cox, 1987; Wallsten, Budescu, et al., 1986). For instance, participants are asked to indicate how well a probability of 0.8 describes 'likely', with a membership value of zero indicating that the probability is not a substitute for the VPE (at all), and a membership value of one indicating that the probability is a complete substitute for the VPE. ...
... Use of such methods has been shown to reliably measure how people understand and use VPEs, and has also revealed considerable interpersonal differences in interpretation (Budescu & Wallsten, 1990; Dhami & Wallsten, 2005; Rapoport et al., 1987; Wallsten, Budescu, et al., 1986). ...
... Despite the fact that the same individuals seem to interpret the same VPEs consistently over time (e.g., Beyth-Marom, 1982; Bryant & Norman, 1980; Budescu & Wallsten, 1985; Rapoport et al., 1987), there is much research indicating that this will not be the case when the same VPEs are presented across different contexts. A number of contextual effects arising from methodological design can influence the interpretations of VPEs. ...
Thesis
Full-text available
This thesis investigates the effect of communication format on the understanding of uncertainty communications and considers the implications of these findings for a communicator’s perceived credibility. The research compares five formats: verbal probability expressions (VPEs; e.g., ‘unlikely’); numerical expressions – point (e.g., ‘20% likelihood’) and range estimates (e.g., ‘10–30% likelihood’); and mixed expressions in two orders (verbal-numerical, e.g., ‘unlikely [20% likelihood]’ and numerical-verbal format, e.g., ‘20% likelihood [unlikely]’). Using the ‘which-outcome’ methodology, we observe that when participants are asked to estimate the probability of the outcome of a natural hazard that is described as ‘unlikely’, the majority indicate outcomes with a value exceeding the maximum value shown, equivalent to a 0% probability. Extending this work to numerical and mixed formats, we find that 0% interpretations are also given to communications using a verbal-numerical format (Chapter 2). If ‘unlikely’ is interpreted as referring to events which will never occur, there could be implications for a communicator’s perceived credibility should an ‘unlikely’ event actually occur. In the low probability domain, we find a communicator who uses a verbal format in their prediction is perceived as less credible and less correct than one who uses a numerical format. However, in the high probability domain (where a ‘likely’ event does not occur) such an effect of format is not consistently observed (Chapter 3). We suggest ‘directionality–outcome congruence’ can explain these findings. For example, the negatively directional term ‘unlikely’ led to harsher ratings because the outcome was counter to the original focus of the prediction (i.e., on its non-occurrence). Comparing communications featuring positively and negatively directional VPEs, we find that communicators are perceived as less credible and less correct given directionality–outcome incongruence (Chapter 4). 
Our findings demonstrate the influence of pragmatics on (a) the understanding of uncertainty communications and (b) perceived communicator credibility.
... Because the linguistic expression of uncertainty is so common, and in order to investigate more fully the issues discussed above, it is necessary to study how people understand and use verbal probability phrases. Our colleagues and we (Wallsten, Budescu, Rapoport, Zwick and Forsyth 1986b; Rapoport, Wallsten and Cox 1987) have developed and validated techniques for representing the vague meanings to individuals of linguistic probability expressions in specific contexts. These representations are in the form of functions over the [0, 1] interval, as illustrated generically for four phrases in fig. 1. ...
... Because the pair-comparison procedures are so arduous, and in order to obtain converging evidence on the meaningfulness of the functions, Rapoport et al. (1987) compared the pair-comparison to a direct estimation technique. On each trial of the latter type, a probability phrase was paired with a probability value, with a response line just below them on the screen. ...
... It was concluded, among other things, that although the resulting functions are not identical, the less taxing direct estimation method may generally yield sufficiently good results for most purposes. Fig. 1 shows the three basic forms of membership functions that have been established for probability phrases by Wallsten et al. (1986b), Rapoport et al. (1987), and Fillenbaum, Wallsten, Cohen, and Cox (1987). Phrases denoting relatively central probabilities, such as possible or rather unlikely, tend to be represented by single peaked functions such as shown for w2 or wg. ...
Article
In many real-world situations requiring choices or judgments, the available evidence is sparse, indirect, or imprecise, resulting in uncertainty that is more easily expressed verbally than numerically. This paper describes a series of studies on how people understand and use linguistic uncertainties. Our research on measuring the meanings of linguistic probabilities and on comparing bids for gambles described verbally or numerically is reviewed briefly. The two sets of studies appear to contrast with each other, in that the measurement studies document the vagueness of the linguistic expressions while the bidding studies show relatively small differences in response to the verbal and numerical descriptors. A theory that may reconcile the differences is outlined and supported by data from a choice study. Improvements to the theory may also account for the effects of context on phrase meaning.
... Words are perceived as more flexible and less precise in meaning and, therefore, seem better suited to describe vague and imprecise opinions and beliefs. This property of probability phrases was recently demonstrated by Wallsten et al. (1986) and by Rapoport, Wallsten, and Cox (1987), who have proposed and evaluated various ways of modeling the vagueness of these expressions. ...
... Thus, the experimental procedure yielded verbal and numerical descriptions of the graphic displays that were as similar as possible in central probability meaning for each subject but dissimilar in at least two other regards. Specifically, the phrases were more vague than the numbers for each subject; this is consistent with much previous research (Rapoport et al., 1987;Wallsten et al., 1986). Also, when considering those phrases and numbers used by multiple subjects, we found that between-subjects variability and therefore individual differences were much greater in the verbal mode. ...
... Alternatively, u for a phrase W can be thought of as the truth value of the statement, "The probability p is described by the phrase W," bounded by 0 (absolutely false) and 1 (absolutely true). Wallsten et al. (1986) and Rapoport et al. (1987) discuss properties of these functions in detail and have also developed methods of empirically establishing them in reliable and valid ways in the context of the representation of vagueness. ...
Article
Full-text available
A two-stage within subjects design was used to compare decisions based on numerically and verbally expressed probabilities. In Stage 1, subjects determined approximate equivalences between vague probability expressions, numerical probabilities, and graphical displays. Subsequently, in Stage 2 they bid for (Experiment 1) or rated (Experiment 2) gambles based on the previously equated verbal, numerical, and graphical descriptors. In Stage 1, numerical and verbal judgments were reliable, internally consistent, and monotonically related to the displayed probabilities. However, the numerical judgments were significantly superior in all respects because they were much less variable within and between subjects. In Stage 2, response times, bids, and ratings were inconsistent with both of two opposing sets of predictions, one assuming that imprecise gambles will be avoided and the other that verbal probabilities will be preferred. The entire pattern of results is explained by means of a general model of decision making with vague probabilities which assumes that in the present task, when presented with a vague probability word, people focus on an implied probability interval and sample values within it to resolve the vagueness prior to forming a bid or a rating. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
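The vagueness-resolution idea in this abstract can be sketched informally: a vague phrase implies a probability interval, and the decision maker samples values within it before forming a bid or rating. The interval bounds, sample count, and function name below are our assumptions for illustration, not the authors' model specification.

```python
import random

# Informal sketch of the account described above (all parameters assumed):
# a vague phrase implies a probability interval; the decision maker samples
# values within it and aggregates them before forming a bid or rating.

def resolve_vague_probability(interval, n_samples=5, rng=None):
    """Average several values sampled from the phrase's implied interval."""
    rng = rng or random.Random(0)
    lo, hi = interval
    samples = [rng.uniform(lo, hi) for _ in range(n_samples)]
    return sum(samples) / n_samples

# Suppose "likely" implies roughly the interval [0.6, 0.9] for a given person.
p = resolve_vague_probability((0.6, 0.9))
print(0.6 <= p <= 0.9)  # True: the resolved value stays within the interval
```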
Article
We consider a class of resource dilemmas of the following form: members of groups of size n are asked to share a common resource pool whose exact size, x, is not known. Rather, x is sampled randomly from a probability distribution which is common knowledge. Each group member j (j = 1,…,n) requests rj from the resource pool. Requests are made either simultaneously or sequentially. If (r1 + r2 + … + rn) ⩽ x, all members are granted their requests; otherwise, group members get nothing. For each protocol of play we present two alternative models: a game-theoretical equilibrium solution and a psychological model incorporating the notion of focal points. We then report the results of two experiments designed to compare the two models under the two protocols of play.
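The all-or-nothing payoff rule in this abstract is simple enough to sketch directly. The function names and the uniform sampling distribution below are our assumptions for illustration; the abstract only specifies that x is drawn from a commonly known distribution.

```python
import random

# Minimal sketch of the resource dilemma's payoff rule described above:
# n members request shares of a pool x drawn from a commonly known
# distribution; all requests are granted iff their sum does not exceed x.

def play_round(requests, pool_size):
    """Return each member's payoff under the all-or-nothing rule."""
    if sum(requests) <= pool_size:
        return list(requests)       # every request r_j is granted
    return [0] * len(requests)      # otherwise group members get nothing

# Example with x sampled uniformly from [200, 500] (assumed distribution).
rng = random.Random(42)
x = rng.uniform(200, 500)
print(play_round([100, 100, 100], 450))  # sum 300 <= 450: all granted
print(play_round([200, 200, 200], 450))  # sum 600 > 450: all get zero
```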
... In addition, between-subject variability in assigning numbers to expressions far exceeds within-subject variability, which itself is not small. Rather than have subjects give numerical equivalents of phrases or rank order them according to implied probability, Wallsten et al. (1986) and Rapoport, Wallsten, and Cox (1987) developed techniques for representing the vague meanings of probability phrases to individuals as functions over the [0, 1] probability interval. Both studies confirmed that people interpret a probability phrase in a consistent vague manner within a given context and, further, that individual differences in the meanings of such phrases are substantial. ...
... The ordinate of a function, u, for a particular phrase denotes the degree of membership of a given probability in the vague concept denoted by the phrase. Alternatively, u for a phrase W can be thought of as the truth value of the statement "The probability p is described by the phrase W," bounded by 0 (absolutely false) and 1 (absolutely true). Wallsten et al. (1986) and Rapoport et al. (1987) developed methods for empirically establishing such functions in reliable and valid ways for individual subjects. The membership functions vary considerably over individuals, but tend to be either monotonic (increasing for high phrases and decreasing for low phrases) or single-peaked. ...
... Following the completion of the bidding stage, membership functions were elicited, using the method described by Rapoport et al. (1987), for all the words employed during the decision stage. The subjects were instructed to judge the degree to which a given phrase described a certain numerical probability. ...
Article
The goal of the research was to compare decisions under risk in a situation in which forecasters (F) communicate to decision makers (DM) either numerically (e.g., .70) or verbally (e.g., likely) about the chances that a binary event will occur. Following each forecast, the DM bid for a winning or losing lottery based on the event. In Experiment 1 Fs and DMs also provided numerical translations of each verbal forecast after the DMs' bid. In Experiment 2 the DMs provided membership functions over the [0, 1] interval for each phrase used by the Fs. The primary results were: (a) extreme similarity in the DM's bids and rates of bidding under the two modes of communication; (b) greater variability in bids to specific verbal than numerical forecasts; (c) a pattern of bids, in which DMs demonstrated risk seeking for gains and risk neutrality for losses; (d) DMs' numerical translations in Experiment 1 were closer to .50 than were those of Fs; and (e) phrases selected by Fs had high membership values to DMs for the probabilities the Fs were attempting to describe. Points (a), (b), (d), and (e) are consistent with the ν-μ model which assumes that the vague meaning of a probability phrase can be represented by a membership function over the [0, 1] interval, and that in reaching a decision the DM focuses on a range of probabilities with sufficiently high membership. Point (c) is speculatively attributed to social aspects of the dyadic situation, and requires further investigation.
... The major empirical works that have appeared in the fuzzy set literature focus on measuring the membership function and evaluating the appropriateness of operations on fuzzy sets. (See, for example, Hersh and colleagues [15, 16]; Kochen [22]; Norwich and Turksen [26]; Oden [28, 29, 30, 31]; Rapoport and colleagues [33]; Thole and Zimmermann [35]; Wallsten and colleagues [37]; Zimmer [42]; Zysno [44].) This article investigates experimentally the question of selecting an appropriate distance index for measuring similarity among fuzzy sets. ...
... A linguistic probability phrase is a value of the linguistic variable "probability" (Zadeh [41]). In this study we adopted the direct magnitude estimation technique (for instance, Norwich and Turksen [25, 26]; Rapoport and colleagues [33]). In these trials, probabilities were represented as relative areas on a radially divided two-colored spinner (see Figure 1). ...
... Hence, in the linguistic probability scaling trials, the subject's placement of the cursor yielded a realization of this random variable. On the basis of previous research (Wallsten and colleagues [37]; Rapoport and colleagues [33]), we concluded that a cubic polynomial can accurately represent the expected value of the membership function for a probability phrase. Note that a cubic polynomial resembles the "S" and "H" functions that have been proposed in the literature in this context (Eshragh and Mamdani [11]). ...
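The cubic-polynomial representation mentioned in this snippet can be sketched as follows. The coefficients are invented for illustration and simply produce a single-peaked curve over [0, 1]; they are not fitted to any subject's judgments.

```python
# Sketch of a cubic-polynomial membership curve
#   mu(p) = a + b*p + c*p**2 + d*p**3,
# clipped to [0, 1]. Coefficients are illustrative assumptions, not fits.

def cubic_membership(p, coeffs=(0.0, 0.3, 3.5, -3.8)):
    a, b, c, d = coeffs
    mu = a + b * p + c * p ** 2 + d * p ** 3
    return max(0.0, min(1.0, mu))   # membership values live in [0, 1]

grid = [i / 10 for i in range(11)]
curve = [round(cubic_membership(p), 2) for p in grid]
print(curve)   # rises, peaks in the interior, then falls: single-peaked
```

In practice the expected membership judgments at each probability would be fitted by least squares; a cubic is flexible enough to capture both the single-peaked and the monotonic ("S"-shaped) forms discussed in this literature.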
Article
Many measures of similarity among fuzzy sets have been proposed in the literature, and some have been incorporated into linguistic approximation procedures. The motivations behind these measures are both geometric and set-theoretic. We briefly review 19 such measures and compare their performance in a behavioral experiment. For crudely categorizing pairs of fuzzy concepts as either “similar” or “dissimilar,” all measures performed well. For distinguishing between degrees of similarity or dissimilarity, certain measures were clearly superior and others were clearly inferior; for a few subjects, however, none of the distance measures adequately modeled their similarity judgments. Measures that account for ordering on the base variable proved to be more highly correlated with subjects' actual similarity judgments. And, surprisingly, the best measures were ones that focus on only one “slice” of the membership function. Such measures are easiest to compute and may provide insight into the way humans judge similarity among fuzzy concepts.
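One of the geometric measures of the kind reviewed in this abstract can be sketched as a normalized city-block (L1) distance between discretized membership functions. The example membership vectors below are assumptions for illustration, not data from the experiment.

```python
# Hedged sketch of one geometric similarity measure between two membership
# functions discretized on a common probability grid: 1 minus the mean
# absolute difference (a normalized L1 distance). Vectors are assumptions.

def l1_similarity(mu_a, mu_b):
    """One minus the mean absolute difference of two membership vectors."""
    assert len(mu_a) == len(mu_b)
    return 1.0 - sum(abs(a - b) for a, b in zip(mu_a, mu_b)) / len(mu_a)

mu_probable = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0, 0.8, 0.4, 0.1]
mu_likely   = [0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 0.9, 0.5, 0.2]

print(round(l1_similarity(mu_probable, mu_likely), 3))    # near 1: similar
print(round(l1_similarity(mu_probable, mu_probable), 3))  # identical: 1.0
```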
... 1. Subsets monotonically decrease (become narrower) as v increases. The notion of thresholds provides a convenient quantitative way to theorize about probabilities that are "sufficiently well described by a phrase", and we will use it subsequently. Figure 1 presents hypothetical examples of some membership functions of verbal expressions, and Rapoport et al. (1987) have shown that membership functions of the sort shown in Figure 1 can be empirically derived and validated at the individual subject level, and that the resulting scales satisfy the ordinal properties of a difference or a ratio representation (see also Norwich & Turksen, 1984). Empirical procedures for establishing the membership func ...
... The analysis of individual membership functions carries this result a step further. We have shown (Rapoport et al., 1987) that the location and spread of functions representing any given term vary considerably across subjects. Even more impressive is the fact that the shape of these functions is not universal. ...
Article
This chapter discusses the practical issues that arise because weighty decisions often depend on forecasts and opinions communicated from one person or set of individuals to another. The standard wisdom has been that numerical communication is better than linguistic and therefore, especially in important contexts, is to be preferred. A good deal of evidence suggests that this advice is not uniformly correct and is inconsistent with strongly held preferences. A theoretical understanding of the preceding questions is an important step toward the development of means for improving communication, judgment, and decision making under uncertainty. The theoretical issues concern how individuals interpret imprecise linguistic terms, what factors affect their interpretations, and how they combine those terms with other information for the purpose of taking action. The chapter reviews the relevant literature in order to develop a theory of how linguistic information about imprecise continuous quantities is processed in the service of decision making, judgment, and communication. It presents the current view, which has evolved inductively, substantiates it where the data allow, and suggests where additional research is needed. It also summarizes the research on meanings of qualitative probability expressions and compares judgments and decisions made on the basis of vague and precise probabilities.
... < v < 1. Subsets monotonically decrease (become narrower) as v increases. The notion of thresholds provides a convenient quantitative way to theorize about probabilities that are "sufficiently well described by a phrase", and we will use it subsequently. Figure 1 presents hypothetical examples of some membership functions of verbal expressions. Rapoport et al. (1987) have shown that membership functions of the sort shown in Figure 1 can be empirically derived and validated at the individual subject level, and that the resulting scales satisfy the ordinal properties of a difference or a ratio representation (see also Norwich & Turksen, 1984). Empirical procedures for establishing the membership funct ...
... The analysis of individual membership functions carries this result a step further. We have shown (Rapoport et al., 1987) that the location and spread of functions representing any given term vary considerably across subjects. Even more impressive is the fact that the shape of these functions is not universal. ...
Article
This article reviews research on how people use and understand linguistic expressions of uncertainty, with a view toward the needs of researchers and others interested in artificial intelligence systems. We discuss and present empirical results within an inductively developed theoretical framework consisting of two background assumptions and six principles describing the underlying cognitive processes.
... One is based on graded pair-wise comparisons of words or phrases for any given probability, and the other is direct rating of single word- or phrase-probability combinations (e.g. Rapoport et al., 1987). Results from numerous studies reviewed by Budescu and Wallsten (1995) or Wallsten and Budescu (1995) indicate that most MFs are single-peaked (with a minority monotonically increasing or decreasing), and that MFs vary across individuals and contexts. ...
... The methods employed in the previous studies (e.g. Jaffe-Katz et al., 1989; Rapoport et al., 1987) relied on presenting a phrase with one probability value at a time. Following Torgerson's (1958, Chapter 4) classification, we refer to those as Single Stimulus Methods (SSM). ...
Article
Teigen and Brun have suggested that distinct from their numerical implications, most probability phrases are either positive or negative, in that they encourage one to think of reasons why the target event will or will not occur. We report two experiments testing our hypotheses that (a) the direction of a phrase can be predicted from properties of its membership function, and (b) this relation is invariant across contexts, and (c) —originally formulated by Teigen and Brun (1999)—that strong modifiers intensify phrase directionality. For each phrase, participants encoded membership functions by judging the degree to which it described the numerical probabilities 0.0, 0.1, …, 1.0, and also completed sentences including the target phrase. The types of reasons given in the sentence completion task were used to determine the phrase's directionality. The results support our hypotheses (a) and (b) regarding the relation between directionality and the membership functions, but we found only limited support for hypothesis (c) regarding the effects of modifiers on directionality. A secondary goal, to validate an efficient method of encoding membership functions, was also achieved. Copyright © 2003 John Wiley & Sons, Ltd.
... The decision makers made bids based on the numerical or verbal description. The results were consistent with the prior research findings of individual-subject studies (Budescu et al., 1988; Rapoport et al., 1987; Wallsten et al., 1986). The responses to verbal descriptors were more variable than the responses to numerical descriptors. ...
... Research has confirmed that people prefer to use verbal phrases rather than numerical probabilities when conveying uncertainty but prefer to receive it numerically (Wallsten et al., 1993b; Erev and Cohen, 1990; Fillenbaum et al., in press; Budescu and Wallsten, 1990; Rapoport et al., 1987). In addition, they attach different probabilities to individual phrases, and the meanings they assign to different phrases overlap. ...
Article
This is an applied study of how to develop a standardized set of useful verbal probability phrases for communication purposes within an expert community. The analysis extends the previous research in two ways. First, the Analytic Hierarchy Process (AHP) is used to assess the relative weights associated with the verbal phrases employed by a group of thirty Financial Strategy experts at a major Wall Street Firm. Second, a quadratic least-squares technique is used to map these relative weights onto a subjective probability scale. The result was a consistent scaling of probabilistic phrases that the analysts prefer and actually use. This methodology can be used to minimize the problems associated with the use of probabilistic phrases by a group of experts who interact daily and who share assumptions, working knowledge and values.
... The results were similar to Rips (2001). Heit and Rotello's (2005) Experiment 2 used quantified statements in which class inclusion held (all birds have property A, therefore robins have property A) or was violated (premise and conclusion inverted). While these results are interesting, our feeling is that they tell us most about the semantics/pragmatics of the words "necessary" and "plausible," where it has been known for some time that they don't neatly carve up some underlying probability scale (see Dhami & Wallsten, in press; Rapoport, Wallsten, & Cox, 1987; Wallsten, Budescu, Rapoport, Zwick, & Forsyth, 1986). ...
Chapter
Without inductive reasoning, we couldn't generalize from one instance to another, derive scientific hypotheses, or predict that the sun will rise again tomorrow morning. Despite the widespread nature of inductive reasoning, books on this topic are rare. Indeed, this is the first book on the psychology of inductive reasoning in twenty years. The chapters survey recent advances in the study of inductive reasoning and address questions about how it develops, the role of knowledge in induction, how best to model people's reasoning, and how induction relates to other forms of thinking. Written by experts in philosophy, developmental science, cognitive psychology, and computational modeling, the contributions here will be of interest to a general cognitive science audience as well as to those with a more specialized interest in the study of thinking.
... However, it was left implicitly unanswerable. There is no absolutely correct or wrong method for generating MFS [13]. Many concluded that one should use a guesstimating approach and apply existing measurements from the literature. ...
Article
Full-text available
This paper presents an approach for generating project risk membership functions (MFS) based on experts' estimates and α-level variations using simulations. The proposed algorithm employs a combination of computational and mathematical techniques in the area of risk assessment. The determination of appropriate MFS plays a substantial role in the performance of a fuzzy system. Most previous discussions of MFS generation assume that the risks are viewed from a similar perspective by all experts. However, this is unlikely to be true in real life, where there may be more than one expert, each with a different background and experience. The proposed simulation method focuses on the characteristics of MFS as well as on generating fuzzy numbers that incorporate the uncertainties in the experts' inputs. Furthermore, the resulting sets of triangular fuzzy numbers are presented as fuzzy probability distribution and fuzzy cumulative distribution functions.
... Rapoport et al. [3] pointed out that a crucial issue in the empirical measurement of membership functions is whether the degree of fuzziness is invariant under different scaling procedures. Smarandache [4] recently presented the neutrosophic set as a new stream to deal with uncertainty. ...
Article
Full-text available
As a generalization of several fuzzy tools, picture fuzzy sets (PFSs) hold a special ability to perfectly portray inherent uncertain and vague decision preferences. The intention of this paper is to present a Pearson’s picture fuzzy correlation-based model for multi-attribute decision-making (MADM) analysis. To this end, we develop a new correlation coefficient for picture fuzzy sets, based on which a Pearson’s picture fuzzy closeness index is introduced to simultaneously calculate the relative proximity to the positive ideal point and the relative distance from the negative ideal point. On the basis of the presented concepts, a Pearson’s correlation-based model is further presented to address picture fuzzy MADM problems. Finally, an illustrative example is provided to examine the usefulness and feasibility of the proposed methodology.
... This consensus emerged from early data on best estimates (e.g. Beyth-Marom, 1982; Johnson, 1973), and was corroborated by studies using membership functions (Rapoport, Wallsten, & Cox, 1987; Wallsten, Budescu, et al., 1986). Both methods reveal profound individual differences. ...
Chapter
We frequently communicate risk and uncertainty with verbal probability expressions (VPE): expressions such as "unlikely", "possible", and "likely". Such expressions are, it is believed, so vaguely understood that there is an "illusion of communication": speaker and hearer believe that they understand each other but, in fact, do not. In this chapter we question the strength of the evidence for the illusion of communication, and argue that the literature on VPEs has neglected the social and communicative contexts of communicating and reasoning with verbal probability expressions. We call on research on meaning in its social and communicative contexts - on natural-language pragmatics. We use this work to discuss existing research. Studying the social and communicative context may provide a route to revealing genuine, meaningful communication about risk and uncertainty. Future research of this kind may help to improve the communication of risk and uncertainty.
... Extensions of the present model to multiattribute choices may lead to new insights in the area of decision making with vague attributes. A series of studies carried out by Wallsten and colleagues have shown that verbal chance descriptors have different meanings to individuals and that those meanings can be represented with membership functions in the [0, 1] interval, using fuzzy set theory (Budescu & Wallsten, 1990;Budescu, Weinberg, & Wallsten, 1988;Rapoport, Wallsten, & Cox, 1987;Wallsten, Budescu, Rapoport, Zwick, & Forsyth, 1986). The research has also shown that, in spite of the great variability in the meaning the words convey, decision making is systematically, but slightly, affected by whether information is numerical or verbal. ...
Article
Full-text available
The stochastic difference model assumes that decision makers trade normalized attribute value differences when making choices. The model is stochastic, with choice probabilities depending on the normalized difference variable, d, and a decision threshold, δ. The decision threshold indexes a person's sensitivity to attribute value differences and is a free estimated parameter of the model. Depending on the choice context, a person may be more or less sensitive to attribute value differences, and hence δ may be used to measure context effects. With proportional difference used as the normalization, the proportional difference model (PD) was tested with 9 data sets, including published data (e.g., J. L. Myers, M. M. Suydam, & B. Gambino, 1965; A. Tversky, 1969). The model accounted for individual and group data well and described violations of stochastic dominance, independence, and weak and strong stochastic transitivity.
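The core quantities of the model described above can be illustrated numerically. The sketch below is a simplified reading, not the paper's exact specification: the attribute difference is normalized by the larger value to give d, and a Gaussian response function (an assumption made here for illustration, with an arbitrary noise parameter sigma) converts d minus the threshold δ into a choice probability.

```python
from statistics import NormalDist

def proportional_difference(x_a: float, x_b: float) -> float:
    """Proportional difference between two attribute values:
    the raw difference normalized by the larger magnitude."""
    return (x_a - x_b) / max(abs(x_a), abs(x_b))

def p_choose_a(d: float, delta: float, sigma: float = 0.2) -> float:
    """Probability of choosing option A given the normalized
    difference d and the decision threshold delta, assuming a
    Gaussian response function (an illustrative assumption)."""
    return NormalDist().cdf((d - delta) / sigma)

# A pays 100, B pays 80: d = 0.2.  With threshold delta = 0,
# the chooser is sensitive to A's proportional advantage.
d = proportional_difference(100, 80)
print(round(d, 2))                     # 0.2
print(p_choose_a(d, delta=0.0) > 0.5)  # True
```

A larger δ models a person who ignores small proportional advantages, which is how the model indexes context-dependent sensitivity.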
... A combined model to deal with these ideas is given by the concept of fuzzy probabilities (or fuzzy-valued probabilities), which has been examined in the literature. In this respect, we can point to the work done by Negoita and Ralescu (1975), Zadeh (1975, 1984, 1995), Nguyen (1979), Watson et al. (1979), Yager (1979, 1984), Freeling (1980), Dubois and Prade (1982, 1985, 1989), Negoita and Ralescu (1987), Rapoport et al. (1987), Jain and Agogino (1988), Stein and Zwick (1988), Zwick and Wallsten (1990), Utkin (1993), and Ralescu (1995a) (see also Section B2.7 in this handbook). ...
Chapter
Full-text available
This section is devoted to presenting concepts and results concerning aggregation of fuzzy and statistical modelings. For this purpose, the incorporation of fuzzy elements in statistical problems is first discussed. Then, some of the problems and approaches more deeply analyzed in the literature are briefly summarized for both univariate and multivariate cases. Finally, references are made to some additional related studies. C2.3.
... Previous research has shown that the use of verbal quantifiers (such as those offered by the 7-point rating scale) for the expression of numerical probabilities differs substantially between individuals (8, 48-50). Similarly, one individual might assign a probability of 10% to the category "01" on the 11-point rating scale, while another might opt for "02" if "01" has already been used for smaller probabilities. In order to take such individual differences into account, we determined the individually ideal transformation function by regressing the objective probabilities onto the probability judgments. ...
Article
The risk of an event generally relates to its expected severity and the perceived probability of its occurrence. In risk research, however, there is no standard measure for subjective probability estimates. In this study, we compared five commonly used measurement formats (two rating scales, a visual analog scale, and two numeric measures) in terms of their ability to assess subjective probability judgments when objective probabilities are available. We varied the probabilities (low vs. moderate) and severity (low vs. high) of the events to be judged as well as the presentation mode of objective probabilities (sequential presentation of singular events vs. graphical presentation of aggregated information). We employed two complementary goodness-of-fit criteria: the correlation between objective and subjective probabilities (sensitivity), and the root mean square deviations of subjective probabilities from objective values (accuracy). The numeric formats generally outperformed all other measures. The severity of events had no effect on the performance. Generally, a rise in probability led to decreases in performance. This effect, however, depended on how the objective probabilities were encoded: pictographs ensured perfect information, which improved goodness of fit for all formats and diminished this negative effect on the performance. Differences in performance between scales are thus caused only in part by characteristics of the scales themselves-they also depend on the process of encoding. Consequently, researchers should take the source of probability information into account before selecting a measure.
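The abstract's two goodness-of-fit criteria are concrete enough to compute directly: correlation between objective and subjective probabilities (sensitivity) and root mean square deviation (accuracy). A minimal sketch with invented data:

```python
import math

def sensitivity(objective, subjective):
    """Pearson correlation between objective and subjective
    probabilities (the 'sensitivity' criterion; higher is better)."""
    n = len(objective)
    mo, ms = sum(objective) / n, sum(subjective) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(objective, subjective))
    sd_o = math.sqrt(sum((o - mo) ** 2 for o in objective))
    sd_s = math.sqrt(sum((s - ms) ** 2 for s in subjective))
    return cov / (sd_o * sd_s)

def accuracy_rmsd(objective, subjective):
    """Root mean square deviation of subjective from objective
    values (the 'accuracy' criterion; lower is better)."""
    n = len(objective)
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(objective, subjective)) / n)

obj = [0.1, 0.2, 0.4, 0.6, 0.8]   # objective probabilities (invented)
sub = [0.15, 0.25, 0.35, 0.65, 0.75]  # a judge's estimates (invented)
print(round(sensitivity(obj, sub), 3))
print(round(accuracy_rmsd(obj, sub), 3))
```

The two criteria are complementary: a judge can be perfectly correlated with the truth yet systematically offset, which only the RMSD detects.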
... Instead, we assume subjective probability behaves as a random variable, and assume the existence of the distribution of subjective probability, which is called higher order probability. Note that the idea of higher order probability is not new in the Bayesian context (see Good, 1965, 1971; Goldsmith & Sahlin, 1983; Rapoport, Wallsten, & Cox, 1987). (3) If we have a clear idea of all possible outcomes of the experiment, as in a statistical experiment, the Bayes Theorem is the only valid tool to describe the change provided by the data. But in reality, new data may provide new insights, contingent information or possible new alternatives and the like, so we should treat the learning process from larger viewpoints than the Bayes ...
Article
Proposed the flexible Bayesian approach (FBA) to describe the psychological decision making process and examined whether the FBA could explain some counter-intuitive examples. 142 undergraduates were surveyed to evaluate the subjective probabilities and the betting preferences in the Ellsberg's Urn Problem (EUP) by D. Ellsberg (1961) and the Three Prisoners Problem by F. Mosteller (1965). For the EUP, higher order probability was adequate to explain the paradox, more so than the non-additive representation of uncertainty. For the TPP, higher order mathematical probability failed to explain Ss' responses, which were against Bayesian probability. The fuzzy representation of higher order probability by means of the membership function was adequate for the explanation of the paradox. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
... The results were similar to Rips (2001). Heit and Rotello's (2005) Experiment 2 used quantified statements in which class inclusion held (all birds have property A, therefore robins have property A) or was violated (premise and conclusion inverted). While these results are interesting, our feeling is that they tell us most about the semantics/pragmatics of the words "necessary" and "plausible," where it has been known for some time that they don't neatly carve up some underlying probability scale (see Dhami & Wallsten, in press; Rapoport, Wallsten, & Cox, 1987; Wallsten, Budescu, Rapoport, Zwick, & Forsyth, 1986). ...
Article
Full-text available
In this chapter, we argue that while people may well be capable of assessing deductive correctness when explicitly asked to, this is rarely, if ever, their focus of interest in evaluating an argument. Thus we will conclude that inductive strength is probably more important in determining people's behavior than deductive correctness even on putative deductive reasoning tasks. In Section 1, we begin by examining Rips' (2001) own evidence and arguments for a clear distinction between these two ways of evaluating inferences. In Section 2, we move on to look at conditional inference. Finally, we show in Section 3 how some very similar proposals address informal argument fallacies. The theme of this paper could be characterized as an attempt to demonstrate that an account of argument strength helps resolve many paradoxes and fallacies in reasoning and argumentation. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
... In the few studies that have allowed decomposition of the variance (Beyth-Marom, 1982; Budescu & Wallsten, 1985; Johnson, 1973), it was established that while individuals are relatively consistent in their assignment of numbers to phrases, there is high intersubject variability. More recent work with membership functions (Rapoport, Wallsten, & Cox, 1987; Wallsten, Budescu, Rapoport, et al., 1986) suggests that probability phrases have vague meanings for individuals also. The purpose of the present study is to probe further the similarities and differences between numerical and verbal expressions of probability. ...
Article
Two experiments involving paired comparisons of numerical and nonnumerical expressions of uncertainty are reported. Subjects were timed under two opposing sets of instructions (“choose higher probability” vs. “choose lower probability”). Numerical comparisons were consistently faster and easier than their nonnumerical counterparts. Consistent distance and congruity effects were obtained, illustrating that both numerical and nonnumerical expressions of uncertainty contain subjective magnitude information, and suggesting that similar processes are employed in manipulating and comparing numerical and verbal terms. To account for the general pattern of results obtained, Holyoak's (1978) reference point model was generalized by explicitly including the vagueness of the nonnumerical expressions. This generalized model is based on the notion that probability expressions can be represented by membership functions (Wallsten, Budescu, Rapoport, Zwick, & Forsyth, 1986) from which measures of location for each word, and measures of overlap for each pair, can be derived. A good level of fit was obtained for this model at the individual level.
... The problem of developing appropriate qualitative-numeric transformations has been addressed by researchers in a variety of fields [24,12,19,4,29]. In this section we provide an overview of three research efforts [27,18,12] that are most relevant to this paper. ...
Article
Full-text available
It has long been recognized that the capability of using qualitative preferences to generate numeric judgments in expert systems and intelligent decision support systems (ES/IDSS) is essential. Although qualitative preferences and expressions facilitate communication and are useful for thinking about complex problems, there is no simple and straightforward way to transform them for computer processing. Thus, the developer of the ES/IDSS must work with each expert to transform his/her vague and incomplete preferences into numeric estimates. This is a very difficult task and few techniques are available to assist developers with it. In this paper we present a qualitative discriminant process (QDP) for eliciting qualitative preferences from experts and generating appropriate numerical representations, as required by ES/IDSS, that utilize the Dempster-Shafer Theory. This approach provides a strategy for generating consistent numeric values for belief functions from qualitative preferences that can be used with the Dempster rules. We illustrate the approach with a case example.
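The QDP approach generates numeric belief-function values for use with Dempster's rules. As background, Dempster's rule of combination itself can be sketched in a few lines; the two-hypothesis frame and the mass numbers below are illustrative, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: combine two basic probability
    assignments (dicts mapping frozenset hypotheses -> mass),
    renormalizing away the mass assigned to conflicting pairs."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
theta = A | B                        # the whole frame of discernment
m1 = {A: 0.6, theta: 0.4}            # expert 1: some belief in A
m2 = {A: 0.3, B: 0.5, theta: 0.2}    # expert 2: leans toward B
m = dempster_combine(m1, m2)
print(round(m[A], 3))   # 0.6
```

The combined masses again sum to one; the conflict between the experts (mass 0.30 here) is discarded by the renormalization, which is the rule's most debated feature.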
... Any new phrase would pose a problem of interpretation, in contrast with a new number that can be easily understood because it can be placed unambiguously in the [0,1] number line. Measurement theory techniques mediated by sophisticated computer programs have been used to investigate individuals' interpretations of verbal expressions of probability (e.g., Wallsten et al., 1986a; Rapoport, Wallsten, & Cox, 1987). These techniques require subjects to make many choices or ratings. ...
Article
A method for verbal expression of degree of uncertainty is described. It requires the subject to select a phrase from a list that spans the full range of probabilities. In a second, optional, step, the subject indicates the numerical meaning of each phrase. The method avoids two problems of verbal probabilities—the indefinitely large lexicon and the individual differences in the interpretation of words. To test whether context and ordinal position might bias subjects' selection or interpretation of the verbal expressions in the list, the list order was varied. When the verbal expressions were arranged in random order, ordinal position had a significant effect on the selection of expressions. However, these effects did not occur when the phrases were listed in ascending or descending order. Considerations of accuracy and interpersonal agreement also support the use of ordered phrase lists.
... Hence, several works investigated peculiarities in the human processing of linguistically conveyed uncertainty (e.g., [29,31]). Some of them used fuzzy set concepts to parallel numerical and linguistic uncertainty (e.g., [21]). At first, some authors argued that confidence about a single event (e.g., whether or not a given answer is currently correct) cannot be represented by a probability because probabilities are designed to describe uncertainty in the long run, not single events. ...
Article
Many works in the past showed that human judgments of uncertainty do not conform very well to probability theory. The present paper reports four experiments that were conducted in order to evaluate whether human judgments of uncertainty conform better to possibility theory. At first, two experiments investigate the descriptive properties of some basic possibilistic measures. Then a new measurement apparatus is used, the Ψ-scale, to compare possibilistic vs. probabilistic disjunction and conjunction. Results strongly suggest that human judgment is qualitative in essence, closer to a possibilistic than to a probabilistic approach to uncertainty. The paper also describes a qualitative heuristic, for conjunction, which was used by expert radiologists.
... In everyday life, uncertainties are most commonly expressed through verbal phrases, like "possibly" or "perhaps", although some numerical estimates, usually given as percentages, have also become part of the lay vocabulary of probability and risk. Attempts to quantify verbal expressions of uncertainty have demonstrated that different terms typically refer to different levels of probability [8][2][10]. For example, at the group level, several different studies conclude that the expression probable is used to express a mean subjective probability in the range of .70-.80, whereas improbable typically refers to a probability in the range of .12-.20. ...
Conference Paper
The aim of this paper is to test whether conjunctive and disjunctive judgments are differently accounted for by possibility and probability theories depending on whether (1) judgments are made on a verbal or a numerical scale, and (2) the plausibility of the elementary hypotheses is low or high. 72 subjects had to rate the extent to which they believed that two characters were, individually, in conjunction, or in disjunction, involved in a police case. Scenarios differed in the plausibility of the elementary hypotheses. Results show that the possibilistic model tends to fit the subjects' judgments in the low-plausibility case, and the probabilistic model in the high-plausibility case. Whatever the kind of scale, the possibilistic model matches the subjects' judgments for disjunction, but only tends to do so for conjunction with a verbal scale. The probabilistic model fits the subjects' judgments with a numerical scale, but only for disjunction. These results exhibit the polymorphism of human judgment under uncertainty.
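The two models being compared make simple, checkable predictions for combining elementary judgments: possibility theory uses min for conjunction and max for disjunction, while probability theory (assuming independence) uses the product and p + q - pq. A minimal sketch of the contrast:

```python
def possibilistic(p, q):
    """Possibility theory: conjunction is min, disjunction is max."""
    return min(p, q), max(p, q)

def probabilistic(p, q):
    """Probability theory, assuming independent events: conjunction is
    the product, disjunction is p + q - p*q."""
    return p * q, p + q - p * q

p, q = 0.7, 0.4   # illustrative elementary judgments
conj_pos, disj_pos = possibilistic(p, q)
conj_pro, disj_pro = probabilistic(p, q)
print(conj_pos, disj_pos)                      # 0.4 0.7
print(round(conj_pro, 2), round(disj_pro, 2))  # 0.28 0.82
```

The diagnostic difference is that the possibilistic conjunction never falls below the weaker of the two judgments, whereas the probabilistic conjunction always does.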
... In any case, it is interesting to study experimental methods for obtaining the pair (Y, S), since it is sufficiently likely that different pairs are valid for different problems. Future studies could be based on the methods suggested in previous experimental studies on similar topics, for example, Rapoport et al. [13] and Zwick et al. [14]. ...
Article
The average value was introduced to help in the ordering of fuzzy numbers and was defined by means of an integrating process of a parametric function representing the position of every α-cut in the real line. Some well-known indices are included in this schema. We study some properties of the average value, and by interpreting the different parameters used to define it, we show that it can be adapted to the decision-maker's preference. Finally, distance measures between fuzzy numbers associated with the average value are defined.
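The average value described above can be approximated numerically: choose a parametric point within each α-cut and integrate over α. The sketch below does this for a triangular fuzzy number, using a single optimism parameter lam as one illustrative choice of the parametric function (the original definition is more general):

```python
def alpha_cut_triangular(a, b, c, alpha):
    """alpha-cut [L, R] of the triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def average_value(a, b, c, lam=0.5, steps=1000):
    """Numerically integrate, over alpha in [0, 1], the point
    lam*R + (1 - lam)*L of each alpha-cut.  lam encodes the
    decision-maker's optimism (an illustrative parametrization)."""
    total = 0.0
    for i in range(steps):
        alpha = (i + 0.5) / steps          # midpoint rule
        lo, hi = alpha_cut_triangular(a, b, c, alpha)
        total += lam * hi + (1 - lam) * lo
    return total / steps

# "About 5", skewed right, as the triangular number (3, 5, 8):
print(round(average_value(3, 5, 8), 3))   # 5.25
```

Raising lam toward 1 weights the right endpoints of the α-cuts, producing a larger index for the same fuzzy number, which is how the definition adapts to the decision-maker's preference.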
Article
Intelligence agencies communicate uncertainty to decision‐makers through verbal probability phrases that correspond to numerical ranges (i.e., probability lexicons) and ordinal levels of confidence. However, decision‐makers may misinterpret the relationship between these concepts and form inappropriate interpretations of intelligence analysts' uncertainty. In two experiments, four ways of conveying second‐order probability to decision‐makers were compared: (a) probability and confidence phrases written in the text of a report, (b) the addition of a probability lexicon, (c) the addition of a probability lexicon that varied numerical ranges according to the level of confidence (i.e., revised lexicon), and (d) a probability phrase written in text followed by a numerical range that varied according to the level of confidence. The revised lexicon was expected to improve interpretations of second‐order probability. The 275 participants in Experiment 1 and 796 participants in Experiment 2 provided numerical estimates corresponding to analytic judgments provided in descriptions about three overseas military operations and also indicated their support for approving or delaying the operations. The results demonstrated that providing the numerical range in the text of the report or providing a probability lexicon, improved interpretations of probability phrases above the verbal phrase‐only condition, but not interpretations of confidence. Participants were unable to correctly interpret confidence with respect to the precision of their estimate intervals and their decisions about the operations. However, in Experiments 2 and 3 the effects on these variables of providing decision‐makers with information about the source of the analyst's uncertainty were examined. In Experiment 3 ( n = 510), providing this information improved correspondence between confidence level and approval of the operation. 
Recommendations are provided regarding additional methods of improving decision‐makers' interpretation of second‐order probability conveyed in intelligence reporting.
Article
In order to assess uncertainty, verbal probability judgment is a more natural and easier device than direct numerical assessment. We propose a new procedure to measure subjective probability by means of verbal probability judgment. This procedure provides the assessment of subjective probability in terms of a distribution rather than a unique point estimate. There are two aspects of uncertainty involved in this judgment. The first is the uncertainty of subjects' belief, which cannot be represented by a unique value; instead, we assume subjects' belief is distributed as a truncated normal distribution. The second is the uncertainty associated with each of the verbal expressions, represented as trapezoidal membership functions. Based on these two assumptions, we obtain the likelihood of a probability (belief) value when subjects rate the appropriateness of each verbal expression for a particular event. Assuming that the prior distributions for the parameters are uniform, we finally obtain the subjective probability distribution for each of the eight events we assigned. We measured subjective probabilities for various uncertain events. They agreed with our intuitions and were more consistent than direct numerical assessments when available.
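The trapezoidal membership functions assumed by this procedure are straightforward to implement. A sketch, with hypothetical breakpoints for a probability phrase (the numbers are not taken from the paper):

```python
def trapezoid(a, b, c, d):
    """Build a trapezoidal membership function: 0 outside [a, d],
    1 on the plateau [b, c], and linear on the two shoulders."""
    def mu(p: float) -> float:
        if p <= a or p >= d:
            return 0.0
        if b <= p <= c:
            return 1.0
        if p < b:
            return (p - a) / (b - a)   # rising shoulder
        return (d - p) / (d - c)       # falling shoulder
    return mu

# Hypothetical membership function for a phrase like "rather likely":
rather_likely = trapezoid(0.5, 0.65, 0.8, 0.9)
print(rather_likely(0.7))    # on the plateau: full member
print(rather_likely(0.55))   # on a shoulder: partial member
```

Rating "how appropriate is this phrase for probability p" then amounts to evaluating mu(p), and the likelihood the paper describes combines such curves with the truncated normal belief distribution.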
Article
Man is capable of reasoning when the available information is incomplete. The conclusions obtained are based on knowledge considered as “generally true.” These conclusions are temporary as they are liable to be challenged by the arrival of new pieces of information which will refine the information previously available (without necessarily contradicting it). Numerous studies have independently dealt with the formalization of this kind of revisable reasoning over the last ten years. The approaches used have been very diverse: some are purely symbolic, others make use of numbers to quantify the uncertainty; some are close to formal logic, and others are much less formalized; some only deal with exceptions, a smaller number which are somewhat more ambitious tackle the problem of knowledge base revision. This work presents the above approaches and compares them on a single example in order to better evaluate their similarities, their differences, their abilities to formalize certain aspects of so‐called “commonsense” reasoning, and also in order to try to lay bare the fundamental principles that underlie the different approaches. The logics considered in this work are default logic, nonmonotonic modal logics, including autoepistemic logic, circumscription, circumscription‐like approaches, supposition‐based logic, conditional logics, logics of uncertainty, i.e., probabilistic, and possibilistic logics as well as belief functions, numerical quantifier logic, and fuzzy logics. We also consider formalisms oriented towards causal reasoning and analogical reasoning. Lastly, we study the contribution of works on truth maintenance, action logic, as well as recent attempts to formalize the process of revision of a set of formulae closed by deduction. The systematic use of a single example and of the same evaluation criteria for each formalism will enable the reader to better perceive the rationale behind the various approaches as well as to appreciate their interest.
Chapter
Full-text available
A distinction is drawn between precision, ambiguity, and vagueness. It is then argued that ambiguity represents confusion in communication and can always be avoided, while vagueness represents a state of mind given available information and is sometimes necessarily present. Vagueness, however, is different from probability; the former refers to imprecision in meaning and the latter to the chances that an event or a state of the world will occur or is true. A 2x2x2 classification is described, whereby events, uncertainty about those events, and the representation of the uncertainty each can be either vague or precise. Some cells of this matrix in principle do not exist, and the probability representation should be reserved only for the case of precise uncertainty about precise events. One means of representing vague uncertainties is linguistic, with terms such as doubtful and very good chance. Procedures are described for empirically measuring the vague meanings of such expressions as membership functions over the [0, 1] probability interval. Meanings vary systematically over individuals and as a function of context. Nevertheless, experiments comparing the economic consequences of decisions yield the surprising result of virtually no difference as a function of whether the description of the uncertainty is linguistic or numerical. Necessary research to test the limits of this conclusion is discussed, as are implications of these results for communication between experts and decision makers.
Chapter
While psychologists have known for some time that people use categories with blurred edges and gradations of membership, the field has been slow to take up ideas from fuzzy set theory and fuzzy logic. Despite critiques and controversies, by the late 1980s fuzzy sets achieved a degree of legitimacy in psychology. This chapter reviews recent developments and the current state of fuzzy set theory and applications in psychology. Beginning with the gradient thesis, it surveys the uses of fuzzy sets in the operationalization and measurement of psychological concepts. The focus then shifts to modeling and data analytic techniques, and ends with a review of psychological theories and frameworks that use fuzzy sets. Because of space limitations, many topics and important works have had to be treated with more brevity than we would have wished, but we have striven for a balanced account of this diverse and still rapidly growing literature.
Article
How is the understanding and use of vague probability expressions affected by the availability of other expressions and the particular communication task involved? As represented by membership functions (Wallsten, Budescu, Rapoport, Zwick, & Forsyth, 1986) the meanings of core terms (e.g., likely, probable) were not affected by the presence or absence of modified or anchor expressions (e.g., very likely, almost certain). However, the core terms rated best at each presented probability in the baseline condition were rated lower in the presence of modified or anchor expressions. The effects of communication task were substantial, in that membership functions were more frequently monotonic, broader, and located closer to the center of the probability interval for expressions that were received and evaluated than for ones that were selected.
Book
Full-text available
Decision has inspired reflection of many thinkers since the ancient times. With the rapid development of science and society, appropriate dynamic decision making has been playing an increasingly important role in many areas of human activity including engineering, management, economy and others. In most real-world problems, decision makers usually have to make decisions sequentially at different points in time and space, at different levels for a component or a system, while facing multiple and conflicting objectives and a hybrid uncertain environment where fuzziness and randomness co-exist in a decision making process. This leads to the development of fuzzy-like multiple objective multistage decision making. This book provides a thorough understanding of the concepts of dynamic optimization from a modern perspective and presents the state-of-the-art methodology for modeling, analyzing and solving the most typical multiple objective multistage decision making practical application problems under fuzzy-like uncertainty, including the dynamic machine allocation, closed multiclass queueing networks optimization, inventory management, facilities planning and transportation assignment. A number of real-world engineering case studies are used to illustrate in detail the methodology. With its emphasis on problem-solving and applications, this book is ideal for researchers, practitioners, engineers, graduate students and upper-level undergraduates in applied mathematics, management science, operations research, information system, civil engineering, building construction and transportation optimization
Article
Fuzzy set theory has primarily been associated with control theory and with the representation of uncertainty in applications in artificial intelligence. More recently, fuzzy methods have been proposed as alternatives to traditional statistical methods in statistical quality control, linear regression, and forecasting, among other areas. We review some basic concepts of fuzzy methods, point out some philosophical and practical problems, and offer simpler alternatives based on traditional probability and statistical theory. Applications in control theory and statistical quality control serve as our primary examples.
Article
Full-text available
Most decisions require an evaluation of the likelihood of events on which outcomes depend. A common mode of judgment under uncertainty is the interpretation of statements of belief expressed by others. Most previous research on the communication of uncertainty has focused on the interpretation and use of quantitative versus qualitative expressions (e.g., "90% chance" vs. "extremely likely"); in addition, a handful of articles have addressed the effects of contextual base rates on the interpretation of expressed beliefs. In this article we argue that the social, informational, motivational, and discourse context in which beliefs are constructed and statements are formulated provides myriad additional cues that influence what is expressed by speakers and what is understood by listeners. We advance a framework for organizing the six sources of information on which listeners rely (in addition to the denotation of the speaker's words) when updating beliefs under uncertainty: (a) the listener's prior beliefs and assumptions about the world; (b) the listener's interpretation of the social and informational context in which the speaker's beliefs were formed; (c) the listener's evaluation of the speaker's credibility and judgmental tendencies; (d) the listener's interpretation of the social and motivational context in which the statement was made; (e) the listener's understanding of information conveyed directly and indirectly by the speaker; and (f) the listener's interpretation of the social and discourse context in which the statement was embedded. Throughout this article we cite relevant research from decision making and social psychology, as well as examples from the risk communication literature. We conclude with some comments on the transmission of uncertain beliefs in groups, followed by a general discussion.
Article
Are verbal judgements of uncertainty more accurate and less conservative relative to the Bayesian calculations than are numerical judgements? This question was investigated by using a within-subject design in which subjects were required to estimate, numerically on some trials and verbally on others, the probability of one of two mutually exclusive hypotheses in a series of sequential probability revision tasks. The membership functions of the verbal phrases that were actually stated were assessed. Two alternative point values were used to represent these functions in a statistical comparison with the numerical probability judgements. The results show that verbal judgements are less conservative but more variable, and consequently less accurate, than numerical. The degree of conservatism and accuracy depends on the manner in which the membership function is converted to a point value.
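The Bayesian benchmark in such sequential revision tasks is a chain of posterior updates over two mutually exclusive, exhaustive hypotheses; conservatism means human revisions fall short of this calculation. A minimal sketch with illustrative likelihoods:

```python
def revise(prior_h1: float, likelihood_h1: float, likelihood_h2: float) -> float:
    """One step of Bayesian revision for two mutually exclusive,
    exhaustive hypotheses: posterior probability of H1 given data
    whose likelihood under each hypothesis is supplied."""
    num = prior_h1 * likelihood_h1
    return num / (num + (1 - prior_h1) * likelihood_h2)

# Three observations, each twice as likely under H1 as under H2:
p = 0.5
for _ in range(3):
    p = revise(p, 0.8, 0.4)
print(round(p, 3))   # 0.889
```

A conservative judge shown the same three observations would typically report a probability well below 0.889, which is the deviation the experiments measure for both verbal and numerical response modes.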
Article
Full-text available
In this chapter we have gathered some of the approaches which have been introduced in the literature to deal with fuzzy statistical data, as well as some methods to solve inferential (in particular, parameter estimation and hypothesis testing) and decision problems from them.
Article
Full-text available
This report summarizes three years of research on the meanings of nonnumerical probability phrases. The work is relevant to military needs because often the uncertainty of decisions is not well represented by probability theory, but rather is imprecise, vague, or based on linguistic input. Techniques were developed and validated for representing the vague meanings of linguistic probabilities in individuals in specific contexts as membership functions over the (0,1) interval. There are large, consistent individual differences in the meanings of probability phrases within a single context. Additional research investigated context factors that affect the meanings of such phrases, such as the available vocabulary, direction of communication, desirability of the forecasted events, and the base rates of the forecasted events. The researchers also summarized experiments that compare decision making in response to numerical and linguistic probabilities. Finally, a theory that handles virtually all the empirical results was outlined. This theory suggests how the vague meanings of probability phrases are altered by context and integrated into single values to make judgements and choices.
Article
Two experiments were performed to determine whether judgments of the relative chances of two independent events occurring are biased by constant outcome values contingent on the events when the uncertainties are specified by linguistic expressions (e.g. doubtful). In Experiment 1, subjects directly judged the relative chances of the two events, of which one was represented by a spinner and the other by a linguistic probability expression. In Experiment 2, only linguistic probability expressions were used to describe the two events and a betting procedure was used. A bias was evident in both studies, such that the relative judgments tended to favour the event with the positive rather than the negative contingent outcome. The bias was smaller for the low- than for the high-probability phrases. Individual differences were great, with the bias appearing strongly in only about one-third of the population. Theoretical implications of the present and related results are discussed.
Article
Full-text available
Despite the common reliance on numerical probability estimates in decision research and decision analysis, there is considerable interest in the use of verbal probability expressions to communicate opinion. A method is proposed for obtaining and quantitatively evaluating verbal judgments in which each analyst uses a limited vocabulary that he or she has individually selected and scaled. An experiment compared this method to standard numerical responding under three different payoff conditions. Response mode and payoff never interacted. Probability scores and their components were virtually identical for the two response modes and for all payoff groups. Also, judgments of complementary events were essentially additive under all conditions. The two response modes differed in that the central response category was used more frequently in the numerical than the verbal case, while overconfidence was greater verbally than numerically. Response distributions and degrees of overconfidence were also affected by payoffs. Practical and theoretical implications are discussed.
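The "probability score" evaluated in the study above is, in this literature, the Brier (quadratic) score: the mean squared difference between probability forecasts and binary outcomes. A minimal sketch, with illustrative data not taken from the experiment:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes.

    Lower is better; a perfectly calibrated, perfectly resolved
    forecaster scores 0.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts (invented): two events occurred, one did not.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # (0.01 + 0.09 + 0.04) / 3
```

The score decomposes into calibration and resolution components, which is what allows response modes to be compared on more than overall accuracy.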
Article
Dealing with uncertainty is part of most intelligent behaviour and therefore techniques for managing uncertainty are a critical step in producing intelligent behaviour in machines. This paper discusses the concept of uncertainty and approaches that have been devised for its management in AI and expert systems. These are classified as quantitative (numeric) (Bayesian methods, Mycin's Certainty Factor model, the Dempster-Shafer theory of evidence and Fuzzy Set theory) or symbolic techniques (Nonmonotonic/Default Logics, Cohen's theory of Endorsements, and Fox's semantic approach). Each is discussed, illustrated, and assessed in relation to various criteria which illustrate the relative advantages and disadvantages of each technique. The discussion summarizes some of the criteria relevant to selecting the most appropriate uncertainty management technique for a particular application, emphasizes the differing functionality of the approaches, and outlines directions for future research. This includes combining qualitative and quantitative representations of information within the same application to facilitate different kinds of uncertainty management and functionality.
Article
The frequency with which verbal uncertainty expressions are employed suggests that they play an important role in the communication of states of uncertainty and may have an important role in emerging technologies such as Expert Systems. This article critically reviews empirical studies of verbal uncertainty expressions spanning two decades of research between 1967 and 1987 with the principal conclusions that: (1) People are highly internally consistent in their use of verbal uncertainty expressions; (2) No conclusions about between-subject variability are justified principally because (a) there is currently no consensus as to what is to count as consistent or inconsistent use and (b) there are several factors that confound purported analyses of between-subject consistency such as the composition of the stimulus set and the scaling tasks themselves; (3) One study suggests that assessments of the meaning of verbal uncertainty expressions may be conditioned by the prior perceived probabilities of the events they describe. However, other interpretations of this study are open. The review also discusses the more general epistemological question of whether the concept of uncertainty as manifest by verbal uncertainty expressions is really amenable to the unidimensional framework within which empirical studies have been conceived.
Article
In a previous study, Zwick, Budescu and Wallsten (1988) found that the membership functions representing the subjective combinations of two independent linguistic probabilistic judgements could not be predicted by applying any dual t- and co-t-norm to the functions of the underlying terms. Their results showed further that judgements involving the “and” connective were best modelled as the fuzzy mean of the two separate components. The present experiment extended those results by manipulating the instructions regarding the “and” connective and also including an additional task in which subjects selected a third phrase to represent the integration of the two independent judgements. Again, no t-norm rule predicted subjects' responses, which were now best modelled by the point-wise arithmetic or geometric means of the functions. In addition, most subjects selected phrases and provided membership functions in response to two identical forecasts that were more extreme and more precise than the individual forecast, a result inconsistent with any t-norm or averaging model. A minority of subjects responded with the same phrase contained in the forecasts. The entire pattern of results in the Zwick et al. (1988) and the present study is used to argue against the indiscriminate application of mathematically prescribed, but empirically unsupported operations in computerized expert systems intended to represent and combine linguistic information.
Article
In recent years, the problem of uncertainty modeling has received much attention from scholars in artificial intelligence and decision theory. Various formal settings, including but not restricted to fuzzy sets and possibility measures, have been proposed, based on different intuitions, and dealing with various kinds of uncertain data. The two main research directions are upper and lower probabilities which convey the idea of imprecisely estimated probability measures, and distorted probabilities for the descriptive assessment of partial belief. Possibility measures, and thereby fuzzy sets, stand at the crossroads of these new approaches. Traditional views, interpretive settings and canonical experiments for the measurement of probability such as frequentist approaches, betting theories, comparative uncertainty relations are currently extended to the generalized uncertainty measures. These works shed new light on various interpretations of fuzzy sets and clarify their links with probability theory; conversely Zadeh's logical point of view on fuzzy sets suggests a set-theoretic perspective on uncertainty measures, that brings together numerical quantification and logic.
Article
The effects of various types of information upon numerical and verbal probability judgements were evaluated in six problems. Subjects judged the occurrence of a target event either by assessing the adequacy of the expressions probable and improbable or by assessing the numerical probability of the event. Different types of information were manipulated for a given problem: the weight of the target in relation to the total weight of all alternatives (global weight) or in relation to that of each alternative independently (local weights), the change over time of the target's weight (trend), and the stated base rate. The results revealed that particular types of information (for example, local weight) had different effects upon the two response modes. A tentative interpretation of this verbal-numerical discordance is proposed, suggesting different strengths of use of the same information according to the response mode.
Conference Paper
When forecasters and decision-makers use different phrases to refer to the same event, there is opportunity for errors in communication. In an effort to facilitate the communication process, we investigated various ways of "translating" a forecaster's verbal probabilities into a decision-maker's probability phrases. We describe a blueprint for a general translator of verbal probabilities and report results from two empirical studies. The results support the proposed methods and document the beneficial effects of two relatively simple translation methods.
Book
Fuzzy set theory has been developed to solve problems where the descriptions of activities and observations are imprecise, vague, or uncertain. The term “fuzzy” refers to a situation where there are no well-defined boundaries of the set of activities or observations to which the descriptions apply. For example, one can easily assign a person 180cm tall to the “class of tall men”. But it would be difficult to justify the inclusion or exclusion of a 173cm tall person in that class, because the term “tall” does not constitute a well-defined boundary. This notion of fuzziness exists almost everywhere in our daily life, such as a “class of red flowers,” a “class of good shooters,” a “class of comfortable speeds for traveling,” “numbers close to 10,” etc. These classes of objects cannot be well represented by classical set theory. In classical set theory, an object is either in a set or not in a set. An object cannot partially belong to a set.
Article
This paper provides a systematic treatment of possibly imprecisely or vaguely specified numerical quantifiers in default syllogisms, following an approach initiated by Zadeh. The obtained propagation rules are derived from simple properties of relative cardinality or, equivalently, conditional probability. Uncertainty in the description of numerical quantifiers is handled using possibility theory and, particularly, fuzzy arithmetic. The advantages of this default reasoning method are its ability to model any kind of quantifier and to build new defaults by chaining existing ones, in a rigorous manner. This approach also emphasizes the difference between two types of uncertain pieces of knowledge, i.e., conjectures versus general rules.
Article
Full-text available
Can the vague meanings of probability terms such as doubtful, probable, or likely be expressed as membership functions over the [0, 1] probability interval? A function for a given term would assign a membership value of zero to probabilities not at all in the vague concept represented by the term, a membership value of one to probabilities definitely in the concept, and intermediate membership values to probabilities represented by the term to some degree. A modified pair-comparison procedure was used in two experiments to empirically establish and assess membership functions for several probability terms. Subjects performed two tasks in both experiments: They judged (a) to what degree one probability rather than another was better described by a given probability term, and (b) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners. Task a data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. The conjoint-measurement axioms were well satisfied and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable. Furthermore, the derived membership function values satisfactorily predicted the judgments independently obtained in task b. The results support the claim that the scaled values represented the vague meanings of the terms to the individual subjects in the present experimental context. Methodological implications are discussed, as are substantive issues raised by the data regarding the vague meanings of probability terms.
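A membership function of the kind described above can be represented computationally as interpolation over elicited (probability, membership) points. The anchor values below for "likely" are invented for illustration, not data from the study:

```python
def membership(p, anchors):
    """Linearly interpolate the membership value at probability p
    from a list of elicited (probability, membership) anchor points."""
    pts = sorted(anchors)
    if p <= pts[0][0]:
        return pts[0][1]
    if p >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= p <= x1:
            return y0 + (y1 - y0) * (p - x0) / (x1 - x0)

# Hypothetical function for "likely": zero membership below 0.4,
# full membership around 0.75-0.85, partial membership at 1.0.
likely = [(0.4, 0.0), (0.6, 0.5), (0.75, 1.0), (0.85, 1.0), (1.0, 0.6)]
print(membership(0.7, likely))  # a probability of 0.7 fits "likely" to some degree
```

A value of 0 means the probability is not in the vague concept at all, 1 means it is definitely in it, exactly as in the tasks described above.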
Article
Full-text available
Examines the hypothesis that judges compare stimuli by ratio and subtractive operations when instructed to judge "ratios" and "differences." S. J. Rule and D. W. Curtis hold that magnitude estimations are a power function of subjective values, with an exponent between 1.1 and 2.1. Accordingly, the 2-operation model tested assumes magnitude estimations of "ratios" are a comparable power function of subjective ratios. In contrast, the present author and C. T. Veit (see PA, Vols 52:8992 and 53:8636) theorized that judges compare 2 stimuli by subtraction for both "ratio" and "difference" instructions and that magnitude estimations of "ratios" are approximately an exponential function of subjective differences. Three tests were used to compare the theory of 1 operation with the 2-operation theory for the data of 9 experiments. Results favor the theory that observers use the same operation for both instructions. (36 ref)
Chapter
Full-text available
Throughout the fuzzy set literature there is much speculation concerning the shape of membership functions. In this paper a procedure is presented which allows one to determine them empirically. If the fuzzy set involved represents a subjective concept, then the corresponding membership values can only be obtained using man as a carrier of the relevant information and as a measurement instrument. Therefore a formal model is suggested which captures both aspects by different parameters. In order to test its predictive power, an empirical study was carried out concerning the fuzzy sets "very young man", "young man", "old man", and "very old man".
Article
Full-text available
Tested the proposition that natural language concepts are represented as fuzzy sets of meaning components and that language operators (adverbs, negative markers, and adjectives) can be considered as operators on fuzzy sets. The application of fuzzy set theory to the meaning of phrases such as very small, sort of large, etc., was examined in 4 experiments. In Exp I, 19 undergraduates judged the applicability of the set of phrases to a set of squares of varying size. Results indicate that the group interpretation of the phrases can be characterized within the framework of fuzzy set theory. Similar results were obtained in Exp II, where each S's responses were analyzed individually. Although the responses of the 4 Ss in general could be interpreted in terms of fuzzy logical operations, 1 S responded in a more idiomatic style. Exps III and IV were attempts to influence the logical-idiomatic distinction in interpretation by (a) varying the presentation mode of the phrases and (b) giving the 59 Ss only a single phrase to judge. Overall, results are consistent with the hypothesis that natural language concepts and operators can be described more completely and more precisely using the framework of fuzzy set theory. (35 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Subjectively, many things may be thought of as partially true and partially false. Two alternative pairs of rules were proposed as reasonable descriptions of the cognitive operations that determine the subjective truthfulness of conjunctions and disjunctions of statements that are true to some degree. 32 undergraduates judged the truthfulness of logical combinations of pairs of statements about class-membership relations. The degree of truth of these statements was varied factorially. The truthfulness of a conjunction of fuzzy statements was judged to be equal to the product of the truthfulness of its component statements. The falsity of a disjunction of fuzzy statements was judged to be equal to the product of the falsity of its statements. For each of 4 different stimulus sets, these rules provided a better account of the data than rules based on the minimum and maximum truthfulness of the component statements. (15 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
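The two candidate rule pairs compared above are easy to state computationally: the product rules (truth of a conjunction as the product of component truths; falsity of a disjunction as the product of component falsities) versus the min/max rules. A minimal sketch:

```python
def t_and_product(a, b):
    """Conjunction: truth is the product of component truth values."""
    return a * b

def t_or_product(a, b):
    """Disjunction: falsity is the product of component falsities,
    so truth is 1 - (1 - a) * (1 - b)."""
    return 1 - (1 - a) * (1 - b)

def t_and_min(a, b):
    """Alternative rule: conjunction as the minimum."""
    return min(a, b)

def t_or_max(a, b):
    """Alternative rule: disjunction as the maximum."""
    return max(a, b)

a, b = 0.8, 0.6
print(t_and_product(a, b), t_and_min(a, b))  # product is stricter than min
print(t_or_product(a, b), t_or_max(a, b))    # probabilistic sum exceeds max
```

The study found the product pair fit judged truthfulness better than the min/max pair for all four stimulus sets.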
Article
Full-text available
In many papers concerning fuzzy set theory it is assumed that the membership of an element in the intersection of two or more fuzzy sets is given by the minimum or product of the corresponding membership values. To use these operators in modelling aspects of the real world, such as decision making, however, it is necessary to prove their appropriateness empirically. The main question of this study is whether people rating the membership of objects in the intersection of two fuzzy sets behave in accordance with one of these models. An important problem in answering this question is how to measure membership, which seems to have the characteristics of an absolute scale. No measurement structure is available at present, but a practical method for scaling is suggested. The results of our experiments indicate that neither the product nor the minimum fits the data sufficiently well, but the latter seems to be preferable.
Article
Full-text available
This article examines the hypothesis that judges compare stimuli by ratio and subtractive operations when instructed to judge "ratios" and "differences." Rule and Curtis hold that magnitude estimations are a power function of subjective values, with an exponent between 1.1 and 2.1. Accordingly, the two-operation model tested assumes magnitude estimations of "ratios" are a comparable power function of subjective ratios. In contrast, Birnbaum and Veit theorize that judges compare two stimuli by subtraction for both "ratio" and "difference" instructions and that magnitude estimations of "ratios" are approximately an exponential function of subjective differences. Three tests were used to compare the theory of one operation with the two-operation theory for the data of nine experiments. The results strongly favor the theory that observers use the same operation for both instructions.
Article
An experimental and theoretical study of the categorization of human height is reported. Subjects of both sexes whose ages ranged from 6 to 72 were asked to class the height of both men and women using the labels very very short, very short, short, tall, very tall, and very very tall. The experimental results confirm Zadeh's contention about the existence of fuzzy classification (the lack of sharp borders for the classes) but indicate that the hedge 'very' causes a shift of the class frontier rather than a steepening of the membership function as proposed by Zadeh. As a result of the experimental studies, a new modeling of the classification process in terms of a family of high- and low-pass filters is proposed. This model, where the filter parameters are related to the parameters of the normal distribution of height, yields a more satisfactory interpretation of the classification than the models of Zadeh and Lakoff.
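The two competing models of the hedge 'very' contrasted above can be stated directly: Zadeh's concentration squares the membership function, steepening it in place, while a shift model moves the class frontier along the height axis. A sketch using an invented logistic membership function for 'tall' (all parameter values are hypothetical, not fitted to the study's data):

```python
import math

def mu_tall(h, midpoint=175.0, spread=5.0):
    """Illustrative logistic membership for 'tall' as a function of height in cm.
    The midpoint and spread are invented for this example."""
    return 1.0 / (1.0 + math.exp(-(h - midpoint) / spread))

def very_concentration(mu):
    """Zadeh's concentration operator: CON(A)(x) = mu(x)**2.
    Lowers intermediate memberships, steepening the function."""
    return mu ** 2

def very_shift(h, delta=7.0):
    """Shift model: 'very tall' is 'tall' with the frontier moved
    up by delta cm, leaving the function's shape unchanged."""
    return mu_tall(h - delta)

h = 180.0
print(mu_tall(h), very_concentration(mu_tall(h)), very_shift(h))
```

Both operators reduce the membership of a borderline height, but only the shift model preserves the slope of the function, which is the distinction the experiment exploited.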
Article
Class membership is a fundamental relationship between concepts in semantic memory. Recent research indicates that class membership may subjectively be a continuous type of relationship. The processing of information about the degree to which items belong to a particular class was investigated in an experiment in which subjects compared two statements describing class membership relationships. The results strongly supported a simple model which describes the judgment process as directly involving subjective degree-of-truthfulness values. The success of the model indicates that the subjects were able to process this kind of fuzzy information in a consistent and systematic manner. Some of the implications of the human competence for processing fuzzy information are discussed.
Article
Arguments, originally developed some fifteen years ago, concerning the sort of “direct” psychophysical measurement advocated by S. S. Stevens are reproduced here in order to facilitate access by a wider audience. Considerations of measurement theory, biological adaptation, perceptual invariance, and neurophysiological noise, as well as various empirical results, suggest (a) that it is primarily the relations—particularly the ratios—between stimulus magnitudes and not the individual magnitudes themselves that are represented within the organism and, accordingly, (b) that “direct” psychophysical measurement, by itself, uniquely determines neither a psychological magnitude of any one stimulus nor a psychophysical law relating such psychological magnitudes to corresponding physical magnitudes. What such measurement can uniquely determine, evidently, is a psychological parameter characterizing each sensory continuum.
Article
Saaty (1977–1983) presents an eigenvector (EV) procedure for analyzing matrices of subjective estimates of the utility of one entity relative to another. The procedure is an especially effective tool for analyzing hierarchical problems where the dependence of the entities at one level on the entities in adjacent levels is estimated subjectively. Despite the absence of a formal proof that the procedure has desirable qualities as an estimator of the underlying relative utilities, the process has gained an active following. This paper derives a comparable estimate, the geometric mean (GM) vector (also known as the logarithmic least squares method or LLSM), that can be applied to hierarchical problems in exactly the same way but is developed from statistical considerations. It is shown to be optimal when the judge's errors are multiplicative with a lognormal distribution. The GM shares the desirable qualities of the EV and is preferable to it in several important respects.
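The geometric mean (GM/LLSM) estimate described above is simple to compute: take the geometric mean of each row of the positive reciprocal judgment matrix and normalize the results to sum to one. A minimal sketch, using a perfectly consistent example matrix with true weights in the ratio 4:2:1:

```python
import math

def gm_weights(R):
    """Geometric-mean (LLSM) weight vector from a positive reciprocal
    pairwise-comparison matrix R, normalized to sum to 1."""
    n = len(R)
    gm = [math.prod(row) ** (1.0 / n) for row in R]
    total = sum(gm)
    return [g / total for g in gm]

# R[i][j] estimates weight_i / weight_j; this matrix is exactly consistent.
R = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
print(gm_weights(R))  # recovers weights proportional to 4 : 2 : 1
```

For a consistent matrix the GM vector coincides with the principal eigenvector; the two estimators differ only when the judgments are inconsistent, which is where the paper's statistical argument for the GM applies.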
Article
Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as ‘nonsense’. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students of language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects.
Article
A basic idea suggested in this paper is that a linguistic hedge such as very, more or less, much, essentially, slightly, etc. may be viewed as an operator which acts on the fuzzy set representing the meaning of its operand. For example, in the case of the composite term very tall man, the operator very acts on the fuzzy meaning of the term tall man.
Article
A point of view is presented concerning the psychological concept of subjective probability, both to study its relation to the corresponding mathematical and philosophical concepts and to provide a framework for the rigorous investigation of problems unique to psychology. In order to do this the empirical implications of axiom systems for measurement are discussed first, relying primarily on Krantz's work, with special emphasis, however, on some similarities and differences between psychological and physical variables. The psychological variable of uncertainty is then examined in this light, and it is concluded that few, if any, current theories are satisfactory when viewed from this perspective, particularly those deriving from the mathematical work in the axiomatic foundations of probability. This might appear to pose difficulties for applications to real problems of normative decision theory when those applications require numerical probability judgments from individuals. Two possible solutions are discussed briefly. (Author)
Article
Set theory begins to be useful when there is some natural criterion for defining belonging to a set. Sets of objects without properties are uninteresting. Elements are assigned to sets because they share properties or conform to a rule. A set of elements is said to be fuzzy when we allow some elements to belong to the set unequally or more strongly than others because they have more of the common properties. What we are doing here is to question the concept of equality of belonging of elements to sets. For example, the set of all red roses admits a wide range of redness even though some roses are more red than others. For some purposes, redness may be allowed to range from magenta red to light pink. For other purposes, the range of red may be narrow. If one wishes to be precise, one would have to measure redness in Angstrom units and admit in each set only those uniform roses (if such exist) which have that precise redness wavelength.
Article
We first discuss the fuzzy subset representation of the class of monotonic-type linguistic values, i.e., small and large. We next show that for each of these the context, i.e., large apartment, determines the window or range in which the significant change in membership degree occurs. We discuss Zadeh's approach to modifying a linguistic value by a hedge such as "very." We next show that one interpretation of the effect of this hedge is to act as a context changer. We finally reconcile the experimental realizations of the effect of linguistic hedges with the approach suggested by Zadeh.
Article
Assessed membership functions over the [0,1] probability interval for several vague meanings of probability terms (e.g., doubtful, probable, likely), using a modified pair-comparison procedure in 2 experiments with 20 and 8 graduate business students, respectively. Ss performed 2 tasks in both experiments: They judged (A) to what degree one probability rather than another was better described by a given probability term and (B) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners. Task A data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. Findings show that the conjoint-measurement axioms were well satisfied and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable, and the derived membership function values satisfactorily predicted the judgments independently obtained in Task B. Results indicated that the scaled values represented the vague meanings of the terms to the individual Ss in the present experimental context. (51 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
An experimental approach is described for obtaining membership functions which describe the values of linguistic estimates of variables, together with mathematical tools for the description of hedges and the computation of composite linguistic estimates. Applied aspects of using the proposed approach are discussed.
Article
Representation and uniqueness theorems are presented for the fundamental measurement of the membership of a fuzzy set, when the domain of discourse is order-dense. The conclusion that membership is on an interval scale is justified by the inapplicability of extensive measurement to fuzziness and the lack of a natural origin for membership. Some descriptive statistics are also presented from an empirical study currently in progress to construct membership functions. They suggest that subjects differ in their perception of fuzziness and that the models in the literature for certain linguistic operators such as ‘not’, ‘very’, and ‘antonym’ are not sufficiently general to account for the range of individual perceptions of vague attributes. A more general mathematical model is proposed. The question of the meaningfulness of statements about membership functions is also discussed, in the light of the assertion that membership can be fundamentally measured at best on an interval scale. Some of the difficulties which arise in fuzzy set theory due to this property are illustrated and a solution is suggested, involving the use of a function which is derived from membership but is on an absolute scale.
Article
The early contributions of Saaty have spawned a multitude of applications of principal right (PR) eigenvector "scaling" of a dominance matrix [R]. Prior to Saaty's work (1977–1984), scaling of dominance matrices received little attention in multidimensional scaling, e.g., see Shepard (1972, pp. 26–27). This eigenvector method (EM) of scaling [R] yields u_i scores (weights) popularly used at each branching of the Analytic Hierarchy Process (AHP) technique that has been increasingly applied in multiple criterion analysis of utility, preference, probability, and performance. In this paper, it is proposed that an alternate least squares method (LSM) scaling technique yielding least squares optimal scores (weights) provides values having a number of important advantages over the u_i scores popularly utilized to date.
Article
Conditions for rank preservation in a positive reciprocal matrix that is inconsistent are provided. Three methods of deriving ratio estimates are examined: the eigenvalue, the logarithmic least squares, and the least squares methods. It is shown that only the principal eigenvector directly deals with the question of inconsistency and captures the rank order inherent in the inconsistent data.
Article
This paper investigates the logarithmic least squares (LLSM) approach to Saaty's (Journal of Mathematical Psychology, 1977, 5, 234–281) scaling method for priorities in hierarchical structures. It is argued that statistical criteria are important in deciding the scaling method controversy. It is shown that LLSM is statistically optimal under a number of realistic and practical models. Variances and covariances of parameter estimates are derived. The covariance matrix associated with overall priority differences is also developed. These results allow for a significance analysis of apparent priority differences.
Article
The purpose of this paper is to investigate a method of scaling ratios using the principal eigenvector of a positive pairwise comparison matrix. Consistency of the matrix data is defined and measured by an expression involving the average of the nonprincipal eigenvalues. We show that λmax = n is a necessary and sufficient condition for consistency. We also show that twice this measure is the variance in judgmental errors. A scale of numbers from 1 to 9 is introduced together with a discussion of how it compares with other scales. To illustrate the theory, it is then applied to some examples for which the answer is known, offering the opportunity for validating the approach. The discussion is then extended to multiple criterion decision making by formally introducing the notion of a hierarchy, investigating some properties of hierarchies, and applying the eigenvalue approach to scaling complex problems structured hierarchically to obtain a unidimensional composite vector for scaling the elements falling in any single level of the hierarchy. A brief discussion is also included regarding how the hierarchy serves as a useful tool for decomposing a large-scale problem, in order to make measurement possible despite the now-classical observation that the mind is limited to 7 ± 2 factors for simultaneous comparison.
Article
If ≥r and ≥d are two quaternary relations on an arbitrary set A, a ratio/difference representation for ≥r and ≥d is defined to be a function f that represents ≥r as an ordering of numerical ratios and ≥d as an ordering of numerical differences. Krantz, Luce, Suppes and Tversky (1971, Foundations of Measurement. New York: Academic Press) proposed an axiomatization of the ratio/difference representation, but their axiomatization contains an error. After describing a counterexample to their axiomatization, Theorem 1 of the present article shows that it actually implies a weaker result: if ≥r and ≥d are two quaternary relations satisfying the axiomatization proposed by Krantz et al. (1971), and if ≥r′ and ≥d′ are the relations that are inverse to ≥r and ≥d, respectively, then either there exists a ratio/difference representation for ≥r and ≥d, or there exists a ratio/difference representation for ≥r′ and ≥d′, but not both. Theorem 2 identifies a new condition which, when added to the axioms of Krantz et al. (1971), yields the existence of a ratio/difference representation for the relations ≥r and ≥d.
Fechner, G. T. (1860). Elemente der Psychophysik. Leipzig: Breitkopf und Härtel. (Translation of Volume I reprinted as Elements of psychophysics. New York: Holt, Rinehart, & Winston, 1966.)
Goguen, J. A. (1969). The logic of inexact concepts. Synthese, 19, 325-373.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
Hersh, H. M., & Caramazza, A. (1976). A fuzzy set approach to modifiers and vagueness in natural language. Journal of Experimental Psychology: General, 105, 254-276.
Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement. Vol. 1. New York: Academic Press.
Kuz'min, V. B. (1981). A parametric approach to description of linguistic values of variables and hedges. Fuzzy Sets and Systems, 6, 27-41.
Lakoff, G. (1973). Hedges: A study in meaning criteria and the logic of fuzzy concepts. Journal of Philosophical Logic, 2, 458-508.
Norwich, A. M., & Turksen, I. B. (1982b). The construction of membership functions. In R. R. Yager (Ed.), Fuzzy set and possibility theory. New York: Pergamon Press, pp. 61-67.
Norwich, A. M., & Turksen, I. B. (1982c). Meaningfulness in fuzzy set theory. In R. R. Yager (Ed.), Fuzzy set and possibility theory. New York: Pergamon Press, pp. 68-74.
Norwich, A. M., & Turksen, I. B. (1984). A model for the measurement of membership and the consequences of its empirical implementation. Fuzzy Sets and Systems, 12, 1-25.
Oden, G. C. (1977a).
Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Wallsten, T. S., Budescu, D. V., Rapoport, A., Zwick, R., & Forsyth, B. (1985). Measuring the vague meanings of probability terms. L. L. Thurstone Psychometric Laboratory.