Article

A Model for the Interpretation of Verbal Predictions


Abstract

There is a marked gap between the demands on forecasting and the results that numerical forecasting techniques can usually provide. It is suggested that this gap can be closed by incorporating experts' qualitative predictions into numerical forecasting systems. A formal analysis of these predictions can then be integrated into quantitative forecasts.

Within the framework of possibility theory, a model is developed which accounts for verbal judgments in situations where predictions are made or knowledge is updated in the light of new information. The model translates verbal expressions into elastic constraints on a numerical scale. This numerical interpretation of qualitative judgments can then be implemented in numerical forecasting procedures.

The applicability of this model was tested experimentally. The results indicate that the numerical predictions from the model agree well with the actual judgments and the evaluation behavior of the subjects. The applicability of the model is further demonstrated in a study in which bank clerks had to predict exchange rates. The analysis of qualitative judgments according to this model provided significantly more information than numerical predictions.

A general framework for an interactive forecasting system is suggested for further development.
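The translation step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's actual model: the verbal term, the trapezoidal shape, the breakpoints, and the percentage-change scale are all assumptions.

```python
# Sketch: a verbal prediction becomes an elastic constraint
# (a possibility distribution) over a numerical scale.

def trapezoid(a, b, c, d):
    """Trapezoidal possibility distribution with support [a, d]
    and core [b, c]; assumes a < b <= c < d."""
    def mu(x):
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

# Hypothetical constraint for "the rate will rise slightly",
# on a percentage-change scale.
rise_slightly = trapezoid(0.0, 0.5, 1.5, 2.5)

print(rise_slightly(1.0))  # fully compatible with the prediction
print(rise_slightly(2.0))  # only partially compatible
print(rise_slightly(3.0))  # incompatible
```

A numerical forecasting procedure could then treat such a distribution as a soft constraint on its point forecast rather than as a hard bound.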


... Moreover, people are more prone to interference from biasing tendencies if one forces them to give numerical estimates. This is due to the fact that by eliciting numerical estimates one is forcing people to operate in 'a mode' which requires more mental effort [15]. ...
... The meaning of verbal descriptors is usually vague and it may be difficult to find their numerical representations [15][16][17][18][19]. Nevertheless, in the area of systems safety, analysts have worked out a method for risk assessment which is primarily based on human judgment and experience. ...
... These descriptors are shown in Table 1 (descriptors used in risk analysis after [2], with corresponding fuzzy linguistic values). Since fuzzy set models of human judgment permit the translation of verbal expressions into numerical ones [15], and deal with imprecision in expressing the occurrence of events, in this paper an attempt was made to develop a fuzzy linguistic model of the above practical risk analysis system. ...
Chapter
This paper discusses applications of approximate reasoning techniques in risk analysis. Vagueness and imprecision in mathematical quantification of risk are equated with fuzziness rather than randomness. The concept of fuzzy risk evaluation, using linguistic representation of the likelihood of the occurrence of a hazardous event, exposure, and possible consequences of that event, is proposed.
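The proposed combination of linguistic likelihood, exposure, and consequence can be sketched with triangular fuzzy numbers. The term-to-number mapping, the component-wise product approximation, and the centroid defuzzification below are illustrative assumptions, not the chapter's actual definitions.

```python
# Sketch of fuzzy risk evaluation from linguistic inputs.

# Assumed mapping of linguistic values to triangular fuzzy
# numbers (a, b, c) on [0, 1].
TERMS = {
    "low":    (0.0, 0.2, 0.4),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.6, 0.8, 1.0),
}

def fuzzy_mul(t1, t2):
    """Approximate product of two triangular fuzzy numbers
    (component-wise, a common engineering approximation)."""
    return tuple(x * y for x, y in zip(t1, t2))

def centroid(t):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(t) / 3.0

def fuzzy_risk(likelihood, exposure, consequence):
    r = fuzzy_mul(fuzzy_mul(TERMS[likelihood], TERMS[exposure]),
                  TERMS[consequence])
    return centroid(r)

print(fuzzy_risk("high", "medium", "high"))
```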
... It has been found that people rely more heavily on verbal information when the data are not easily quantifiable, and more heavily on numerical information when it is available. Zimmer [21] found that bank clerks making numerical predictions of future currency exchange rates relied on variables that are usually stated numerically (e.g. GNP) in deriving their predictions. ...
... Zimmer [21] suggested that forcing people to give numerical estimates causes them to operate in a mode that requires more mental effort and is therefore more difficult to use. This, however, has not been confirmed in a study done by Budescu, Weinberg, and Wallsten [6]. ...
... Notably, those subjects who chose the verbal form as their preferred form of communication did just as well as those who chose the numerical or graphical form. In another study, Zimmer [21] asked bank tellers to predict the exchange rate between the US Dollar and the German Mark one month ahead. One group was asked to make a verbal prediction (this is what his subjects usually did in their work) and the other group was asked to give numerical estimates in terms of percentage of change. ...
... Koriat, Lichtenstein, and Fischhoff (1980) and Ferrell and McGoey (1980) posed models in which individuals may have some difficulty expressing beliefs as numerical probabilities, but nevertheless concluded that elicitation of numerical subjective probabilities is feasible. However, Zimmer (1983, 1984) argued that humans process information using verbal rather than numerical modes of thinking, and he concluded that expectations should be elicited in verbal rather than numerical forms. Erev and Cohen (1990) and Wallsten et al. (1993) have reported that a majority of respondents prefer to communicate their own beliefs verbally and to receive the beliefs of others in the form of numerical probabilities. ...
Article
Economists commonly suppose that persons have probabilistic expectations for uncertain events, yet empirical research measuring expectations was long rare. The inhibition against collection of expectations data has gradually lessened, generating a substantial body of recent evidence on the expectations of broad populations. This paper first summarizes the history leading to development of the modern literature and overviews its main concerns. I then describe research on three subjects that should be of direct concern to macroeconomists: expectations of equity returns, inflation expectations, and professional macroeconomic forecasters. I also describe work that questions the assumption that persons have well defined probabilistic expectations and communicate them accurately in surveys. Finally, I consider the evolution of thinking about expectations formation in macroeconomic policy analysis. I favorably observe the increasing willingness of theorists to study alternatives to rational expectations assumptions, but I express concern that models of expectations formation will proliferate in the absence of empirical research to discipline thinking. To make progress, I urge measurement and analysis of the revisions to expectations that agents make following occurrence of unanticipated shocks.
... On the expectation that forcing people to predict in the numerical mode requires them to exert more mental effort and therefore induces more bias (Zimmer 1983, 1984), several studies have investigated the mode effect on biases. In a Bayesian updating (revision) task, verbal probability judgments were indeed less biased towards conservatism than numerical estimates (Rapoport, Wallsten, Erev, & Cohen 1990). ...
Book
Full-text available
One of the most difficult choices that organizations face is the choice to spend resources today to reduce the probability or negative impact of events that may happen tomorrow. In hindsight, it seems to be a waste to spend organizational resources on reducing the risk of low probability events that up to now never did materialize. Intuitively it appears much more prudent for an organization to spend resources on events that have a higher frequency of occurrence and for which it is easier to assess that resources have been well spent. But what if the consequences of the low probability events are catastrophic and threaten business continuity? Should the leadership of an organization be gambling on a catastrophic low probability event not to occur? The central theme of this PhD dissertation is measuring an organization's willingness to accept risk in the pursuit of its objectives. It attempts to unravel this organizational risk appetite using decision theory. The study proposes to measure organizational risk appetite using unbiased measurements of the pleasure and pain (utility) that a leadership associates with the consequences of risky events. This unbiased utility is measured under prospect theory and used normatively under expected utility theory to validate tools that communicate organizational risk appetite. In one of these tools, the risk matrix, the economic law of diminishing marginal utility identifies the low probability and large impact events as unacceptable. The dissertation introduces a new design for the measurement of utility under businesslike circumstances, evaluates risk appetite at a large organization, assesses values that decision makers associate with verbal expressions of probability and outcome, and experimentally tests the effect of performance-based incentives on risk appetite.
... He concluded that the differences among those scenarios hardly affected those assignments. Meanwhile, Zimmer [10][11] found that the interpretation of quantifiers related to probability, such as "all", changed with the context. He proposed "the scope functions" to explain the obtained results. ...
... In [10][11], Zimmer has proposed the scope function to describe the context dependability of quantifiers. Context dependability here means that "the standard meaning of quantifiers is modified by the subjects' knowledge about the normal scope of discourse in the contexts" (Zimmer [11], pp. ...
Article
Independence of the functions of verbal hedges from their modificands is an important issue for implementing hedges on computers as fuzzy sets. In this paper, we aim to discuss this independence experimentally. First, we confirm that differences in the modificands do not affect the functions of the hedges, based on the membership functions for the hedges obtained from 72 subjects. Then, we describe the treatment of context dependability of hedges, comparing Zimmer's model and the model proposed in this paper.
... In addition, we may speculate that event importance and desirability also affect the meanings of probability expressions. Zimmer (1984) has suggested that the interpretations of probability expressions vary over knowledge domains. He has proposed a model in which each phrase has a basic meaning represented by a membership function, which is then operated on by a context-specific "scope function" to yield the phrase's context-bound meaning. ...
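Zimmer's basic-meaning-plus-scope-function proposal, as summarized in the snippet above, can be sketched like this. Both functions and the domain ranges are invented for illustration; only the compositional structure (context-bound meaning = basic meaning applied to a rescaled probability) reflects the described model.

```python
# Sketch: a phrase's context-bound meaning as the composition of a
# basic membership function with a context-specific scope function.

def basic_likely(u):
    """Assumed basic meaning of 'likely' on a normalized [0, 1] scale."""
    return max(0.0, min(1.0, (u - 0.5) / 0.3)) if u < 0.8 else 1.0

def scope(p, lo, hi):
    """Scope function: map domain probability p onto the normalized
    scale, given the normal range [lo, hi] of probabilities that
    arise in this knowledge domain."""
    return min(1.0, max(0.0, (p - lo) / (hi - lo)))

def context_bound(p, lo, hi):
    return basic_likely(scope(p, lo, hi))

# 'likely' for rain in a domain where the full [0, 1] range is normal ...
print(context_bound(0.7, 0.0, 1.0))
# ... versus 'likely' for a rare hazard whose normal range is much lower.
print(context_bound(0.1, 0.0, 0.15))
```

The same phrase thus maps to very different objective probabilities in the two domains, which is the context dependence the snippet describes.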
Article
Full-text available
Can the vague meanings of probability terms such as doubtful, probable, or likely be expressed as membership functions over the [0, 1] probability interval? A function for a given term would assign a membership value of zero to probabilities not at all in the vague concept represented by the term, a membership value of one to probabilities definitely in the concept, and intermediate membership values to probabilities represented by the term to some degree. A modified pair-comparison procedure was used in two experiments to empirically establish and assess membership functions for several probability terms. Subjects performed two tasks in both experiments: They judged (a) to what degree one probability rather than another was better described by a given probability term, and (b) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners. Task a data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. The conjoint-measurement axioms were well satisfied and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable. Furthermore, the derived membership function values satisfactorily predicted the judgments independently obtained in task b. The results support the claim that the scaled values represented the vague meanings of the terms to the individual subjects in the present experimental context. Methodological implications are discussed, as are substantive issues raised by the data regarding the vague meanings of probability terms.
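The membership-function idea in this abstract can be illustrated in a few lines. The function shapes below are assumptions for illustration, not the empirically scaled values from the experiments; the "which term better describes p" comparison mirrors the paper's task b.

```python
# Sketch: membership functions for two probability terms over [0, 1].

def clamp01(x):
    return max(0.0, min(1.0, x))

def mu_doubtful(p):
    """Assumed membership of probability p in 'doubtful' (high for small p)."""
    return clamp01((0.4 - p) / 0.4)

def mu_likely(p):
    """Assumed membership of probability p in 'likely' (high for large p)."""
    return clamp01((p - 0.5) / 0.3)

def better_term(p):
    """Task-b style judgment: which term better describes p?"""
    return "likely" if mu_likely(p) > mu_doubtful(p) else "doubtful"

print(better_term(0.15))
print(better_term(0.85))
```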
... Within the last decades, the use and interpretation of verbal probability expressions (VPE) has been intensively investigated from different perspectives such as economics, politics, and the health sector [1][2][3][4][5]. VPE are commonly used to describe situations of uncertainty or risk and, according to [6], are easy and natural to most people. Brun and Teigen (1988) observed that physicians preferred communicating probabilities verbally whereas their patients preferred receiving health-related information numerically [2]. ...
Chapter
Full-text available
Introduction: Verbal probabilities such as "likely" or "probable" are commonly used to describe situations of uncertainty or risk and are easy and natural to most people. Numerous studies are devoted to the translation of verbal probability expressions to numerical probabilities. Methods: The present work aims to summarize existing research on the numerical interpretation of verbal probabilities. This was accomplished by means of a systematic literature review and meta-analysis conducted in accordance with the MOOSE guidelines for meta-analysis of observational studies in epidemiology. Studies were included if they provided empirical assignments of verbal probabilities to numerical values. Results: The literature search identified 181 publications and finally led to 21 included articles and the processing of 35 verbal probability expressions. Sample size of the studies ranged from 11 to 683 participants and covered a period of half a century from 1967 to 2018. In half of the studies, verbal probabilities were delivered in a neutral context, followed by a medical context. Mean values of the verbal probabilities range from 7.24% for the term "impossible" up to 94.79% for the term "definite". Discussion: According to the results, there is a common 'across-study' consensus on 35 probability expressions for describing different degrees of probability, whose numerical interpretation follows a linear course. However, heterogeneity of studies was considerably high and should be considered as a limiting factor.
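The 'across-study' means reported above come from pooling per-study interpretations. A sample-size-weighted pooled mean can be sketched as follows; the per-study numbers are invented for illustration, and only the pooling formula is the point.

```python
# Sketch: pooling per-study mean interpretations of a verbal
# probability term, weighted by sample size.

def pooled_mean(studies):
    """studies: list of (mean_interpretation_percent, n) pairs."""
    total_n = sum(n for _, n in studies)
    return sum(m * n for m, n in studies) / total_n

# Hypothetical per-study mean interpretations of the term "likely".
likely_studies = [(68.0, 120), (72.5, 40), (70.0, 200)]
print(round(pooled_mean(likely_studies), 2))
```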
... Whereas we assume that persons use imprecise probabilities to express limited knowledge, some researchers have questioned whether persons think in quantitative probabilistic terms at all. In psychology, Zimmer (1983, 1984) argued that humans process information using verbal rather than numerical modes of thinking, and he concluded that expectations should be elicited in verbal rather than numerical forms. In economics, some studies have sought to interpret elicited ordinal and verbal expressions of "confidence" (see Drerup, Enke, and Von Gaudecker 2017; Giglio et al. 2021 for recent examples). ...
Article
We elicit numerical expectations for late-onset dementia and long-term care (LTC) outcomes in the Health and Retirement Study. We provide the first empirical evidence on dementia-risk perceptions among dementia-free older Americans and establish important patterns regarding imprecision of subjective probabilities. Our elicitation distinguishes between precise and imprecise probabilities, while accounting for rounding of reports. Imprecise-probability respondents quantify imprecision using probability intervals. Nearly half of respondents hold imprecise dementia and LTC probabilities, while almost a third of precise-probability respondents round their reports. These proportions decrease substantially when LTC expectations are conditioned on hypothetical knowledge of the dementia state. Among rounding and imprecise-probability respondents, our elicitation yields two measures: an initial rounded or approximated response and a post-probe response, which we interpret as the respondent's true point or interval probability. We study the mapping between the two measures and find that respondents initially tend to over-report small probabilities and under-report large probabilities. Using a specific framework for study of LTC insurance choice with uncertain dementia state, we illustrate the dangers of ignoring imprecise or rounded probabilities for modelling and prediction of insurance demand.
... Ordered qualitative scales are frequently used not only in Marketing, but also in Economics, Psychology, Sensory Analysis, Sociology, etc., because they are more appropriate than numerical scales for dealing with the vagueness and imprecision of human beings when evaluating different issues (see Zimmer [33,34] and Windschitl and Wells [31], among others). ...
Article
Full-text available
In this paper, a new multi-criteria procedure is devised for new product development decision-making made from survey data. Groups of panelists evaluate several product categories regarding different criteria, each one through a specific qualitative scale, which ultimately will guide decision-makers to develop a new product in a specific category. These qualitative scales are equipped with ordinal proximity measures that collect the perceptions about the proximities between the terms of the scales by means of ordinal degrees of proximity. The linguistic assessments provided by panelists are compared with the highest terms of the corresponding qualitative scales. In order to aggregate the obtained ordinal degrees of proximity, a homogenization process is provided. It avoids any cardinalization procedure in the ordinal proximity measures associated with the ordered qualitative scales used for assessing the alternatives regarding different criteria. Product categories are ranked taking into account the medians of the homogenized ordinal degrees of proximity.
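The median-based ranking step described above can be sketched as follows. The ordinal degrees of proximity are invented for illustration, and the homogenization step is assumed to have been applied already.

```python
# Sketch: rank product categories by the median of their panelists'
# ordinal degrees of proximity to the top term of the scale
# (1 = closest to the top term, larger = farther away).

import statistics

degrees = {
    "category A": [1, 2, 1, 3, 2],
    "category B": [2, 3, 3, 4, 2],
    "category C": [1, 1, 2, 2, 1],
}

# Smaller median degree means closer to the best assessment.
ranking = sorted(degrees, key=lambda c: statistics.median(degrees[c]))
print(ranking)
```

Working purely with medians of ordinal degrees is what lets the procedure avoid assigning cardinal numbers to the scale terms.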
... Although coarse and fuzzy probability estimates may suffice or even be preferable in some communication contexts (Wallsten & Budescu, 1995; Zimmer, 1984), communicating with verbal probabilities may be woefully inadequate in others. Nor does our research (see Experiment 4) offer optimism for organizational remedies for the vagueness of verbal probabilities that call for translating a lexicon of verbal-probability terms into numeric ranges (e.g., Dhami, 2018; Ho et al., 2015) and embedding numeric-range equivalents where such terms appear in text (Budescu et al., 2009, 2014; Mandel & Irwin, 2020a; Wintle et al., 2019). ...
Article
Probability information is regularly communicated to experts who must fuse multiple estimates to support decision-making. Such information is often communicated verbally (e.g., “likely”) rather than with precise numeric (point) values (e.g., “.75”), yet people are not taught to perform arithmetic on verbal probabilities. We hypothesized that the accuracy and logical coherence of averaging and multiplying probabilities will be poorer when individuals receive probability information in verbal rather than numerical point format. In four experiments (N = 213, 201, 26, and 343, respectively), we manipulated probability communication format between-subjects. Participants averaged and multiplied sets of four probabilities. Across experiments, arithmetic accuracy and coherence were significantly better with point than with verbal probabilities. These findings generalized between expert (intelligence analysts) and non-expert samples and when controlling for calculator use. Experiment 4 revealed an important qualification: whereas accuracy and coherence were better among participants presented with point probabilities than with verbal probabilities, imprecise numeric probability ranges (e.g., “.70 to .80”) afforded no computational advantage over verbal probabilities. Experiment 4 also revealed that the advantage of the point over the verbal format is partially mediated by strategy use. Participants presented with point estimates are more likely to use mental computation than guesswork, and mental computation was found to be associated with better accuracy. Our findings suggest that where computation is important, probability information should be communicated to end users with precise numeric probabilities.
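The averaging and multiplication tasks in the experiments above are well defined for point probabilities and, via interval arithmetic, for numeric ranges, but not for verbal terms. A minimal sketch of both numeric formats (all probability values are illustrative):

```python
# Sketch: averaging and multiplying four probabilities, given either
# as point estimates or as numeric ranges (interval arithmetic).

import math

points = [0.75, 0.60, 0.80, 0.50]
ranges = [(0.70, 0.80), (0.55, 0.65), (0.75, 0.85), (0.45, 0.55)]

# Point format: ordinary arithmetic.
avg_point = sum(points) / len(points)
prod_point = math.prod(points)

# Range format: operate on lower and upper bounds separately
# (valid here because averaging and multiplication are monotone
# in each argument for probabilities in [0, 1]).
avg_range = (sum(lo for lo, _ in ranges) / len(ranges),
             sum(hi for _, hi in ranges) / len(ranges))
prod_range = (math.prod(lo for lo, _ in ranges),
              math.prod(hi for _, hi in ranges))

print(avg_point, prod_point)
print(avg_range, prod_range)
```

No comparable operations exist for terms like "likely", which is one way to frame the computational disadvantage the abstract reports.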
... Therefore the evaluation of the factors and their importance can be better judged by subjective judgments. Zimmer (1983) suggested that humans are comparatively more efficient at qualitative forecasting than at quantitative prediction, and that people are less biased when asked to give subjective estimates. ...
Article
Full-text available
This paper seeks to review the enablers for the Travel & Tourism (T&T) industry in India and to rank these factors, introducing a fuzzy TOPSIS approach for this purpose. The paper begins with a literature review to identify the significant enablers in the T&T sector. The research was conducted among tourists in the northern state of Uttarakhand, India, a famous destination for both adventure and pilgrimage. The fuzzy TOPSIS approach is used to meet the objectives of the study, with the required information gathered through a questionnaire. The results show that Safety & Security, Price, Transport and Infrastructure are the most important factors in the Indian context. The paper will help T&T industry policy makers identify the key service factors in the sector and take improvement measures. The concept of ranking T&T enablers using fuzzy TOPSIS is a new approach, and the study is a unique application of a fuzzy approach to examine and rank customer expectations of T&T enablers in the Indian context.
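As a rough illustration of the kind of procedure named above, here is a compact fuzzy-TOPSIS sketch for benefit criteria. The alternatives, triangular ratings, and weights are invented for illustration and are not the paper's data.

```python
# Sketch of fuzzy TOPSIS: triangular fuzzy ratings are normalized,
# weighted, and ranked by closeness to the fuzzy ideal solutions.

import math

def d(t1, t2):
    """Vertex distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)) / 3)

# ratings[alternative] = list of (a, b, c) per criterion;
# two hypothetical benefit criteria, e.g. Safety and Transport.
ratings = {
    "Destination A": [(7, 9, 10), (5, 7, 9)],
    "Destination B": [(3, 5, 7), (7, 9, 10)],
}
weights = [(0.6, 0.8, 1.0), (0.4, 0.6, 0.8)]

n_crit = len(weights)
c_max = [max(r[j][2] for r in ratings.values()) for j in range(n_crit)]

def weighted(alt):
    """Normalized, weighted fuzzy decision row for one alternative."""
    return [tuple(x / c_max[j] * w
                  for x, w in zip(ratings[alt][j], weights[j]))
            for j in range(n_crit)]

fpis = [(1.0, 1.0, 1.0)] * n_crit  # fuzzy positive ideal solution
fnis = [(0.0, 0.0, 0.0)] * n_crit  # fuzzy negative ideal solution

def closeness(alt):
    v = weighted(alt)
    d_plus = sum(d(v[j], fpis[j]) for j in range(n_crit))
    d_minus = sum(d(v[j], fnis[j]) for j in range(n_crit))
    return d_minus / (d_plus + d_minus)

ranking = sorted(ratings, key=closeness, reverse=True)
print(ranking)
```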
... The evidence for other claims, however, is either lacking or conflicting. For instance, Zimmer (1984) suggests that people process information verbally through argumentation and so asking them to respond in the verbal mode (as opposed to the numeric mode) requires less cognitive effort and makes them less susceptible to bias and unreliability. Budescu, Weinberg, and Wallsten (1988) did not find evidence to support this claim. ...
Article
Full-text available
Intelligence analysis is fundamentally an exercise in expert judgment made under conditions of uncertainty. These judgments are used to inform consequential decisions. Following the major intelligence failure that led to the 2003 war in Iraq, intelligence organizations implemented policies for communicating probability in their assessments. Virtually all chose to convey probability using standardized linguistic lexicons in which an ordered set of select probability terms (e.g., highly likely) is associated with numeric ranges (e.g., 80-90%). We review the benefits and drawbacks of this approach, drawing on psychological research on probability communication and studies that have examined the effectiveness of standardized lexicons. We further discuss how numeric probabilities can overcome many of the shortcomings of linguistic probabilities. Numeric probabilities are not without drawbacks (e.g., they are more difficult to elicit and may be misunderstood by receivers with poor numeracy). However, these drawbacks can be ameliorated with training and practice, whereas the pitfalls of linguistic probabilities are endemic to the approach. We propose that, on balance, the benefits of using numeric probabilities outweigh their drawbacks. Given the enormous costs associated with intelligence failure, the intelligence community should reconsider its reliance on using linguistic probabilities to convey probability in intelligence assessments. Our discussion also has implications for probability communication in other domains such as climate science.
... People are more comfortable using words rather than numbers to describe probabilities. In this sense, authors such as Zimmer [9,10] and Windschitl and Wells [11] point out that words are more natural than numbers, since verbal expressions of uncertainty are easily understood and, besides, they emerged long before the development of probability. On the other hand, agents' opinions and judgments are generally imprecise, and therefore it would be misleading to represent them by precise numerical values (see Beyth-Marom [12], Wallsten et al. [13] and Teigen [14], among others). ...
Article
Full-text available
Many decision problems manage linguistic information assessed through several ordered qualitative scales. In these contexts, the main problem arising is how to aggregate this qualitative information. In this paper, we present a multi-criteria decision-making procedure that ranks a set of alternatives assessed by means of a specific ordered qualitative scale for each criterion. These ordered qualitative scales can be non-uniform and be formed by a different number of linguistic terms. The proposed procedure follows an ordinal approach by means of the notion of ordinal proximity measure that assigns an ordinal degree of proximity to each pair of linguistic terms of the qualitative scales. To manage the ordinal degree of proximity from different ordered qualitative scales, we provide a homogenization process. We also introduce a stochastic approach to assess the robustness of the conclusions.
... The results are notable because organizations that generate probability estimates for consumption by other experts, decision-makers, or the general public typically provide such estimates in the form of verbal probabilities. Although coarse and fuzzy probability estimates may suffice or even be preferable in some communication contexts (Wallsten & Budescu, 1995; Zimmer, 1984), communicating with verbal probabilities may be woefully inadequate in others. Our research highlights one basis for such concern, namely, when the estimates represent inputs to further assessments or decisions that are amenable to (or that simply require) mathematical calculations. ...
Preprint
Probability information is often communicated to others who must fuse multiple estimates to support decision-making. In many consequential domains (e.g., intelligence for national security), such information is communicated verbally (e.g., “likely”) rather than numerically (e.g., “p=.75”). However, whereas people are taught to arithmetically operate on fractions or decimals, they are not taught to do math with verbal probabilities. Accordingly, we hypothesized that the accuracy of arithmetic computations such as averaging or multiplying probabilities will be poorer when individuals receive probability information in verbal rather than numeric form. In two experiments (N=213 and 201, respectively) communication format was manipulated between-subjects. Participants were required to average and multiply small (k=4) sets of probabilities. Both experiments found support for a numeric superiority effect: arithmetic accuracy was significantly greater in the numeric condition than in the verbal condition. Translating from the verbal format to the numeric format tended to improve accuracy, whereas translating from the numeric format to the verbal format tended to reduce accuracy. The findings suggest that arithmetic computations such as averaging and multiplication, which are often required to support further analysis or decision-making, are facilitated by communication of probabilities in numeric form.
... This assumes that the imprecision arises from the vagueness in individuals' perception of the numerical objective probabilities. The support for this assumption comes from the psychophysics literature (see Zimmer, 1984; Wallsten et al., 1986; Budescu et al., 1988; Budescu and Wallsten, 1990; Wallsten and Budescu, 1995; Bisantz et al., 2005). ...
Article
Full-text available
The term ‘preference imprecision’ seems to have different meanings to different people. In the literature, one can find references to a number of expressions. For example: vagueness, incompleteness, randomness, unsureness, indecisiveness and thick indifference curves. Some of these are theoretical constructs, some are empirical. The purpose of this paper is to survey the various different approaches and to try to link them together: to see if they are all addressed to the same issue, and to come to some conclusions. In the course of this survey, we report on evidence concerning the existence of preference imprecision, and its impact on theoretical and empirical work.
... This assumes that the imprecision arises from the vagueness in individuals' perception of the numerical objective probabilities. The support for this assumption comes from the psychophysics literature (see Budescu et al., 1988; Budescu and Wallsten, 1990; Wallsten et al., 1986; Bisantz et al., 2005; Wallsten and Budescu, 1995; Zimmer, 1984). IEUT is at the moment applicable only to two-outcome lotteries L = (x1, p; x2), where x1 and x2 denote monetary payoffs (x1 > x2) and p is the probability of winning x1. ...
Article
Full-text available
The term ‘preference imprecision’ seems to have different meanings to different people. In the literature, one can find references to a number of expressions. For example: vagueness, incompleteness, randomness, unsureness, indecisiveness and thick indifference curves. Some of these are theoretical constructs, some are empirical. The purpose of this paper is to survey the various different approaches and to try to link them together: to see if they are all addressed to the same issue, and to come to some conclusions. In the course of this survey, we report on evidence concerning the existence of preference imprecision, and its impact on theoretical and empirical work.
... This assumes that the imprecision arises from the vagueness in individuals' perception of the numerical objective probabilities. The support for this assumption comes from the psychophysics literature (see Zimmer, 1984; Wallsten et al., 1986; Budescu et al., 1988; Budescu and Wallsten, 1990; Wallsten and Budescu, 1995; Bisantz et al., 2005). ...
... These authors identify a range of context-sponsored sources of variability between assessors (also see Cohen et al., 1958; Pepper and Prytulak, 1974; Zimmer, 1984, cited in: Wallsten et al., 1986): ...
Technical Report
Full-text available
The aim of this review was to identify communication challenges associated with the expression of uncertainty in the Plant Health Risk Register (PHRR) and inform future Defra strategies for addressing these challenges. Our starting point is that the communication of uncertainty potentially relates to much more than issues of wording, numerical format and presentation. The effective communication of science and associated uncertainties, particularly for high-profile, high-consequence pests and diseases, can require more than the application of tools and techniques that simplify and demystify complex phenomena. First, we describe the PHRR itself, its history and the aspirations for it as a risk management tool (Section 2). A working definition of uncertainty and an overview of its various manifestations follow in Section 3. We consider the reasons why uncertainty should be communicated, and suggest the particular challenges of doing so within the PHRR (Section 4). This leads to a consideration of the possible reasons for expert risk-assessors' reluctance to communicate uncertainty (Section 5), which includes the evidence pertaining to media characterisations. Section 6 considers the way in which lay audiences might make sense of uncertainty. The next two sections move to what can be characterised as micro-level considerations, first considering evidence, largely from cognitive psychology, about the interplay between cognitive biases and recourse to heuristics and the characterisation of uncertainty that can impact on how risk is perceived and reacted to by stakeholders (Section 7), and second, broader social science insights on the accuracy of lay interpretations of alternative representations and characterisations of uncertainty (Section 8).
In conclusion, in Section 9, we reflect on the main lessons to be drawn for the communication of uncertainty relating to the PHRR and, in particular, consider the implications of alternative formats for representing and characterising uncertainty in the PHRR.
... Two kinds of probability expressions, verbal and numerical, have been used to characterize the uncertainty that we face. The verbal mode of probability expressions has a long history and has been considered the more natural system for processing probabilistic information (Zimmer, 1984). The numerical mode of probability expressions was first invented by legal scholars and later connected to the mathematical games of chance in the seventeenth century (Shafer, 1988). ...
Article
Full-text available
Two kinds of probability expressions, verbal and numerical, have been used to characterize the uncertainty that people face. However, the question of whether verbal and numerical probabilities are cognitively processed in a similar manner remains unresolved. From a levels-of-processing perspective, verbal and numerical probabilities may be processed differently during early sensory processing but similarly in later semantic-associated operations. This event-related potential (ERP) study investigated the neural processing of verbal and numerical probabilities in risky choices. The results showed that verbal probability and numerical probability elicited different N1 amplitudes but that verbal and numerical probabilities elicited similar N2 and P3 waveforms in response to different levels of probability (high to low). These results were consistent with a levels-of-processing framework and suggest some internal consistency between the cognitive processing of verbal and numerical probabilities in risky choices. Our findings shed light on possible mechanism underlying probability expression and may provide the neural evidence to support the translation of verbal to numerical probabilities (or vice versa).
... ," "I really don't know, but . . . "), which people understand to imply different degrees of certainty (Wallsten & Budescu, 1983;Zimmer, 1984). ...
Article
INTRODUCTION: In many situations people must, at least implicitly, make subjective judgments about how certain they are. Such judgments are part of decisions about whether to collect more information, whether to undertake a risky course of action, which contingencies to plan for, and so on. Underlying such decisions are subjective judgments about the quality of the decision maker's information. Accordingly, many researchers have been interested in the mental processes underlying such judgments, which go under the general label of confidence. There are many ways in which confidence can be expressed, both in the real world and in the research lab. For example, Yaniv and Foster (1995) present the concept of grain size. People communicate their confidence in an estimate via the precision (grain size) with which they express it. “I think it was during the last half of the nineteenth century” implies a different degree of confidence than “I think it was around 1875.” Listeners expect speakers to choose a grain size appropriate to their level of knowledge. People also use a variety of verbal probability terms to describe confidence in their predictions, choices, and estimates (e.g., “I'm pretty sure …,” “It's definitely not …,” “I really don't know, but …”), which people understand to imply different degrees of certainty (Wallsten and Budescu, 1983; Zimmer, 1984). In the lab, most studies use one of three predominant paradigms.
... There is also implicit evidence from Plott and Zeiler (2005) and Isoni et al (2011), who find that the endowment effect is observed only for lottery tickets, but not for ordinary market goods such as mugs and candies. Zimmer (1984) introduced a useful insight from an evolutionary perspective: he noted that probability in a numerical sense is a relatively new concept, appearing as recently as the 17th century. However, people were communicating uncertainty via verbal expressions long before probability was codified in mathematical terms. ...
Article
This paper presents a new theory, called Preference Cloud Theory, of decision-making under uncertainty. This new theory provides an explanation for empirically-observed preference reversals. Central to the theory is the incorporation of preference imprecision, which arises because of individuals' vague understanding of numerical probabilities. We combine this concept with the use of the Alpha model (which builds on Hurwicz's criterion) and construct a simple model which helps us to understand various anomalies discovered in the experimental economics literature that standard models cannot explain.
... In addition, the interpretations of these verbal expressions of probability vary across contexts, even though they remain fairly stable within each evaluator [14][15][16]. ...
Article
Full-text available
In the framework of CO2 capture and geological storage, risk analysis plays an important role because it provides knowledge essential for defining and planning carbon injection strategies at the local, national and supranational levels. Every project carries a risk of failure; even at the early stages, the possible causes of this risk should be taken into consideration and corrective methods proposed along the process, i.e., the risk must be managed. Proper risk management reduces the negative consequences arising from the project. Risk is reduced or neutralized mainly through its identification, measurement and evaluation, together with the development of decision rules. This report presents a methodology developed for risk analysis and the results of its application. The risk assessment requires determination of the random variables that will influence the functioning of the system. It is very difficult to set up a probability distribution of a random variable in the classical sense (objective probability) when a particular event has rarely occurred or its development is incomplete. In this situation, we have to determine a subjective probability, especially at an early stage of a project, when we do not have enough information about the system. This subjective probability is constructed from expert judgment assessing the possibility that certain random events could happen, depending on the geological features of the area of application. The proposed methodology is based on the application of Bayesian probabilistic networks to estimate the probability of leakage risk. These probabilistic networks can graphically define the dependence relations between the variables and the joint probability function through a local factorization of probability functions.
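The local factorization described in this abstract can be illustrated with a minimal sketch. Everything below is hypothetical (the two-parent structure, node names, and expert-elicited probabilities are illustrative, not taken from the study): leakage depends on two parent events, and the marginal and diagnostic probabilities follow directly from the factorization.

```python
from itertools import product

# Hypothetical two-parent network with an expert-elicited CPT for leakage.
p_fault = {True: 0.2, False: 0.8}   # P(conductive fault present)
p_well = {True: 0.1, False: 0.9}    # P(well-integrity failure)
cpt_leak = {                        # P(leak = True | fault, well)
    (True, True): 0.9,
    (True, False): 0.4,
    (False, True): 0.3,
    (False, False): 0.01,
}

def p_leak():
    """Marginal probability of leakage, summing over parent configurations."""
    return sum(p_fault[f] * p_well[w] * cpt_leak[(f, w)]
               for f, w in product([True, False], repeat=2))

def p_fault_given_leak():
    """Diagnostic update by Bayes' rule: P(fault | leak observed)."""
    joint = sum(p_fault[True] * p_well[w] * cpt_leak[(True, w)]
                for w in [True, False])
    return joint / p_leak()
```

The same pattern extends to larger networks, where dedicated inference libraries handle the factorization; observing a leak here raises the fault probability well above its prior.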
... Studies suggest that subjective probabilities are internally represented as verbal information (Zimmer 1984). Supporting this view, several studies have found most people prefer to convey probability information as vague quantities (Olson and Budescu 1997, Wallsten et al. 1993). ...
Article
Full-text available
Many survey questions ask respondents to provide responses that contain quantitative information. These questions are often asked requiring open-ended numeric responses, while others have been asked using vague quantifier scales. How these questions are asked, particularly in terms of the response format, can have an important impact on the data. Therefore, the response format is of particular importance for ensuring that any use of the data contains the best possible information. Generally, survey researchers have argued against the use of vague quantifier scales. This dissertation compares various measurement properties between numeric open-ended and vague quantifier responses, using three studies containing questions with both formats. The first study uses new experimental data to compare accuracy between the measures; the second and third use existing data to compare the predictive validity of the two formats, with one examining behavioral reports, the other examining subjective probabilities. All three studies examine the logical consistency between measures, and the potential correlates related to improved measurement properties. Importantly, these studies examine the influence of numeracy, a potentially important but rarely examined variable. The results of the three studies indicate that vague quantifiers may have better measurement properties than numeric open-ended responses, contrary to many researchers' arguments. Studies 2 and 3 are most clear about this increased strength; in both of those studies, using a number of tests, the predictive validity of vague quantifiers was consistently greater than that of numeric open-ended responses, regardless of numeracy level. Study 1 shows that, generally, vague quantifiers result in more accurate data than numeric responses, but this finding depends on other factors, such as numeracy. Thus, numeracy was infrequently found to be important, but at times did have an impact on accuracy.
Further, in the three studies, it was found that the two formats were logically consistent when translations between the questions were directly asked for, but inconsistency occurred when there was not a direct translation. Advisor: Robert Belli
... If interval/ratio data are stored, then the query is structured in such a manner that it is, in concept, a labeling, or classification, process for retrieving 'labelled' data. This appears to be consistent with recent research in forecasting and man-machine studies suggesting that human information processing is geared towards the processing of 'qualitative' rather than 'quantitative' information (Zimmer, 1984). Figure 2 depicts the relation we suggest exists between the four fundamental data types, objective information, and meaning. ...
Article
Full-text available
We differentiate between two broad types of uncertainty. Type I uncertainty concerns our inability to measure or predict with certainty a characteristic or event that is inherently exact. In type II uncertainty, there is a situation of intrinsic ambiguity regarding the concept to be represented. The many sources of the two types of uncertainty are listed. Methods of handling the differing kinds of uncertainty contained in collections of spatial data are suggested. Due to recent advances and its ability to represent and process the vagueness of natural-language concepts, fuzzy logic is identified as a major area for future research in managing uncertainty in spatial information systems.
... The membership functions can be derived either subjectively (Zadeh 1972) or empirically (Boucher & Cogus, 2002; Narazaki & Ralescu, 1994; Reventos, 1999; Saaty, 1974; Wallsten, Budescu, Rapoport, Zwick, & Forsyth, 1986; Witteman & Renooij, 2003; Zimmer, 1984). Some researchers (Watson, Weiss, & Donnell, 1979) support the use of the empirical methods in the derivation of membership functions. ...
Article
The concern of this study is the problem of ambiguity (vagueness) in accounting and auditing (hereafter accounting) and the use of decision theory as a general framework for accounting decision making. First, the study shows that decision theory fails to deal with the different sources of ambiguity in accounting which include, as identified in the study, vague probability judgments, imprecise payoffs, ambiguous states of nature, and varying degrees of precision. The failure to deal with these sources of ambiguity often leads to incomplete and unrealistic representation of accounting problems. Moreover, ambiguity should not be ignored for it affects decision making. Second, the study provides a general framework for accounting decision making under ambiguity. The proposed framework generalizes the traditional decision theory and expands previous ambiguity models to allow for the different sources of ambiguity in accounting and, thus, it provides for more realistic representation of accounting problems. The framework also incorporates decision making under risk and decision theory as a special case. Finally, the framework explains the behavioral evidence on Ellsberg's paradox and the impact of ambiguity, in particular, ambiguous probabilities on decision making.
... Numeric expressions of uncertainty were greatly affected by the base rates, whereas people's preferences (and their verbal expressions) were less affected by the base rates. A related finding was reported by Zimmer (1984), who found that bank clerks asked to provide verbal predictions of future monetary exchange rates reported using both qualitative and quantitative variables in making their predictions. However, clerks who were asked to provide numeric predictions tended ...
Article
Full-text available
The authors argue that alternatives to the traditional numeric methods of measuring people's uncertainty may prove to hold important advantages under some conditions. In 3 experiments, the authors compared verbal measures involving responses such as very likely, and numeric measures involving responses such as 80% chance. The verbal measures were found to show more sensitivity to various manipulations affecting psychological uncertainty (Experiment 1), to be better predictors of individual preferences among options with unknown outcomes (Experiment 2), and to be better predictors of behavioral intentions (Experiment 3). Results suggest that numeric measures tend to elicit deliberate and rule-based reasoning from respondents, whereas verbal measures allow for more associative and intuitive thinking. Given that there may be many types of situations in which human decisions and behaviors are not based on deliberate and rule-based thinking, numeric measures may misrepresent how individuals think about uncertainty in those situations. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
... (i.) These respondents may have expectations but are unable to answer numerical probabilities (because their expectations are internally represented in another mode, e.g., a verbal mode as in Zimmer (1984)). Alternatively, (ii.) these respondents may have expectations which do not have all the structure of probability distributions, e.g., they think in terms of upper and lower probabilities as in Walley (1991). ...
Article
To understand how decisions to invest in stocks are taken, economists need to elicit expectations relative to the expected risk-return trade-off. One of the few surveys which have included such questions is the Survey of Economic Expectations in 1999-2001. Using this survey, Dominitz and Manski find an important heterogeneity across respondents that can hardly be accounted for by simple models of expectations formation. This paper claims that much of the heterogeneity derives from pathologies affecting respondents. Adapting a principle of dual-reasoning borrowed from Kahneman, we classify respondents according to their sensitivity to these pathologies, and find a strong homogeneity across the less sensitive respondents. We then sketch a model of expectation formation.
... Still, there are reasons why the survey designer should not rule out the use of vague quantifiers completely. Studies suggest that subjective probabilities are internally represented as verbal information (Zimmer 1984). Supporting this view, several studies have found most people prefer to convey probability information as vague quantities (Olson and Budescu 1997; Wallsten et al. 1993). ...
Article
Full-text available
Smoking is among the most debated areas of risk perception, with varying conclusions about people’s understanding of the risks. Part of the debate has focused on the measures being employed to support such claims. However, studies have not been conducted to compare the best methods to measure the perceived risk. This research first discusses the use of such measures, including survey data from tobacco industry archives that have not previously appeared in publication. Then, using nationally representative survey data of youth and adults in the USA, verbal probability scales and numeric scales are compared. The relationships between these measures are first examined for logical consistency with one another. Additionally, the strength of the relationships with, and modeling power of, the behavior of interest, smoking, are analyzed. The results of difference-of-means tests, correlations and logistic regressions show that the numeric measures are inconsistent with logical semantic understanding, and more importantly, that vague quantifier scales show greater relationships and predictive power than numeric scales. Implications for survey design and further research are discussed.
Article
In order to assess uncertainty, verbal probability judgment is a more natural and easier device than direct numerical assessment. We propose a new procedure to measure subjective probability by means of verbal probability judgments. This procedure provides the assessment of subjective probability in terms of a distribution rather than a unique estimate. There are two aspects of uncertainty involved in this judgment. One is the uncertainty of subjects' belief, which means it cannot be represented by a unique value; instead, we assume subjects' belief is distributed as a truncated normal distribution. The other is the uncertainty associated with each of the verbal expressions. This uncertainty is represented as trapezoidal membership functions. Based on these two assumptions, we obtain the likelihood of a probability (belief) value when subjects rate the appropriateness of each verbal expression for a particular event. Assuming that the prior distributions for the parameters are uniform, we finally obtain the subjective probability distribution for each of the eight events we assigned. We measured subjective probabilities for various uncertain events. They agreed with our intuitions, and they were more consistent than direct numerical assessments when available.
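The trapezoidal-membership assumption in this abstract can be sketched in a few lines. The category boundaries below and the uniform prior are illustrative assumptions, not values from the paper, and the truncated-normal belief component of the actual model is omitted for brevity; the membership function is simply treated as a likelihood over a discretized probability grid.

```python
def trapezoid(p, a, b, c, d):
    """Membership of probability p in a trapezoidal verbal category (a<=b<=c<=d)."""
    if p <= a or p >= d:
        return 0.0
    if b <= p <= c:
        return 1.0
    if p < b:
        return (p - a) / (b - a)
    return (d - p) / (d - c)

# Hypothetical membership functions for two verbal expressions.
terms = {
    "unlikely": (0.0, 0.05, 0.20, 0.40),
    "likely":   (0.55, 0.70, 0.90, 1.0),
}

def posterior_mean(term, n=1000):
    """Posterior mean of the belief value under a uniform prior,
    treating the membership function as the likelihood."""
    grid = [(i + 0.5) / n for i in range(n)]
    w = [trapezoid(p, *terms[term]) for p in grid]
    z = sum(w)
    return sum(p * wi for p, wi in zip(grid, w)) / z
```

Rating an event as "likely" under these assumed boundaries concentrates the posterior well above 0.5, while "unlikely" concentrates it below 0.25.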
Article
National security is one of many fields where experts make vague probability assessments when evaluating high-stakes decisions. This practice has always been controversial, and it is often justified on the grounds that making probability assessments too precise could bias analysts or decision makers. Yet these claims have rarely been submitted to rigorous testing. In this paper, we specify behavioral concerns about probabilistic precision into falsifiable hypotheses which we evaluate through survey experiments involving national security professionals. Contrary to conventional wisdom, we find that decision makers responding to quantitative probability assessments are less willing to support risky actions and more receptive to gathering additional information. Yet we also find that when respondents estimate probabilities themselves, quantification magnifies overconfidence, particularly among low-performing assessors. These results hone wide-ranging concerns about probabilistic precision into a specific and previously undocumented bias that training may be able to correct.
Chapter
Decisions are often made "under uncertainty". In general, this means that the possible consequences of the options are uncertain for the decision maker, because the consequences also depend on other events that are beyond his or her control. How people deal with this uncertainty, how they form and revise their "subjective probabilities", express them directly or reveal them in their behavior, is a central topic of decision research. This is what we will deal with in this chapter.
Article
This article presents a new model for decision-making under risk, which provides an explanation for empirically-observed preference reversals. Central to the theory is the incorporation of probability perception imprecision, which arises because of individuals’ vague understanding of numerical probabilities. We combine this concept with the use of the Alpha EU model and construct a simple model which helps us to understand anomalies, such as preference reversals and valuation gaps, discovered in the experimental economics literature, that standard models cannot explain.
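The Alpha EU aggregation mentioned in this abstract follows Hurwicz's criterion; a minimal sketch for a two-outcome lottery with an imprecise probability interval is below. The payoff and interval values used in the example are made up, and the utility function defaults to the identity.

```python
def alpha_eu(x1, x2, p_lo, p_hi, alpha, u=lambda x: x):
    """Hurwicz-style alpha aggregation over an imprecise probability interval
    [p_lo, p_hi] for winning the larger payoff x1 in the lottery (x1, p; x2).
    alpha weights the most optimistic expected utility, (1 - alpha) the most
    pessimistic one."""
    assert x1 >= x2 and 0.0 <= p_lo <= p_hi <= 1.0

    def eu(p):
        return p * u(x1) + (1 - p) * u(x2)

    # With u increasing and x1 >= x2, eu is maximized at p_hi, minimized at p_lo.
    return alpha * eu(p_hi) + (1 - alpha) * eu(p_lo)
```

For instance, a 100-or-0 lottery whose winning probability is only known to lie in [0.4, 0.6] is valued at 50 when alpha = 0.5, and at the optimistic 60 when alpha = 1.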
Chapter
The paper presents the literature related to the application of fuzzy set theory to Human Factors and lists 784 research papers in the bibliography given at the end. As with any such collection, this literature list reflects a subjective viewpoint and may be incomplete in some respects. Analysis of these data provides interesting insights into the characteristics of articles applying fuzzy set theory, literature trends and the future. This paper consists of two parts. In Part I, the methodology used to gather the information, the language, citation, and descriptor analysis, and the conclusions are described. Part II is the main bibliography.
Chapter
Can experts adjust to new demands better than advanced learners or even novices? Are they better able to modify the usual, well-practised problem-solving procedures and to replace them with others that fit better under the given circumstances? Or are their domain-specific knowledge and skills rather a burden that forces their behavior into preformed, well-worn paths? The aim of this chapter is to compile and weigh findings that speak for or against a positive relationship between cognitive flexibility and expertise. They stem mainly from empirical studies, some of them our own, on diagnostic judgment formation in medicine, on fault-finding in technical systems, and on fault-finding in programming. First, several studies that deal directly with expertise and flexibility are summarized. Then, some theoretical considerations on the foundations of flexibility through a specific form of knowledge organization and knowledge application in the problem-solving situation are discussed. These form the basis of a more detailed examination of hypothesis formation and strategy selection, which occupies the penultimate section. The chapter ends with a summary and further reflections on the relationship between expertise and flexibility.
Chapter
This chapter is concerned with models of reasoning about the evidence. We consider, in turn, “metre models” of the shifts of opinion in the adjudicators’ mind as items of information come in, and a distributed belief revision model for such dynamics. We discuss the weight given jury research in North America. Then we consider some seminal computer tools modelling the reasoning about a charge and explanations: Thagard’s ECHO (and its relation to Josephson’s PEIRCE-IGTT), and Nissan’s ALIBI, a planner seeking exoneration and producing explanations minimising liability. A quick survey of Bayesian approaches in law is followed with a discussion of the controversy concerning applications of Bayesianism to modelling juridical decision-making. We quickly sample some probabilistic applications (Poole’s Independent Choice Logic and reasoning about accounts of a crime, and next, dynamic uncertain inference concerning criminal cases, and Snow and Belis’s recursive multidimensional scoring). Finally, we consider Shimony and Nissan’s application of the kappa calculus to grading evidential strength, and then argue for trying to model relative plausibility.
Chapter
Full-text available
To evaluate the design of knowledge systems (e.g., databases), it is necessary to derive criteria from theories of knowledge. As Toulmin (1972) observes, there is a serious difference between philosophical theories of knowledge and the processes that take place in the collection, evaluation and derivation of knowledge in scientific as well as everyday contexts. According to Toulmin, philosophical theories of knowledge are characterized by the search for fundamental principles with whose help the human mind seeks to achieve intellectual mastery of an order of nature assumed to be stable. The role of the philosophy of knowledge is thereby restricted to the evaluation of existing knowledge; Popper (1934) even explicitly excludes the acquisition of new knowledge and the formulation of new theories from epistemology, pointing out that these are questions for psychology.
Chapter
The study of uncertainty in many fields has been beset by debate and even confusion over the meaning(s) of uncertainty and the words that are used to describe it. Normative debates address questions such as whether there is more than one kind of uncertainty and how verbal descriptions of uncertainty ought to be used. Descriptive research, which we shall deal with in this paper, concerns how people actually use words to describe uncertainty and the distinct meanings they apply to those words. The main reason for what might seem an obvious statement is to clarify the somewhat odd context in which most studies of decision making take place.
Article
While ignorance has long troubled efforts to prevent, prepare for, or manage the aftermath of disasters, relatively little work has been done on the specific varieties of ignorance and the roles they play in disasters. The classical frameworks for decision-making under “uncertainty” are too restrictive, and many prescriptions for disaster management simply call for better communication or more data collection by way of reducing ignorance. Unfortunately, in connection with disasters, ignorance often is irreducible. This article presents a framework for understanding the various kinds of ignorance, and utilizes that framework to provide some insights and tools that may improve disaster preparedness, management, recovery, and learning.
Thesis
This thesis takes account of the fact that fuzziness must be accepted as an unavoidable part of reality in early technology intelligence (Technologiefrühaufklärung), and that decisions about the future technological orientation of companies must nevertheless be made under these conditions. Suitable methodological support and procedures within the technology-intelligence process have so far been lacking. In this context, the thesis contributes a substantial methodological improvement to early technology intelligence. Its result is a prototype of a knowledge-based fuzzy system for decision support that improves the decision basis within the technology-intelligence process through the integration and systematic processing of fuzzy information, and through the development of systematic transformation rules to overcome the methodological fragmentation within that process. Its application potential lies in particular in a more realistic representation and content-preserving processing of fuzzy expert knowledge, and in the resulting more differentiated information content of the decision basis for deriving decisions and strategies to secure companies' technological competitiveness.
Chapter
Full-text available
This chapter examines a common framework for colloquial quantifiers and probability terms. Colloquial quantifiers and uncertainty expressions can be interpreted as fuzzy numbers in the interval [0, 1]. Empirical procedures are suggested for the determination of these fuzzy numbers. The empirical results reveal that, for propositions on a defined level of abstraction, colloquial quantifiers and probability terms can not only be expressed as fuzzy numbers but can furthermore be used according to the rules for the combination of fuzzy numbers. The operations with fuzzy numbers correspond to the standard operations of arithmetic. Any two fuzzy numbers can be concatenated by (i) calculating the resulting core by means of standard arithmetic, (ii) averaging the respective upper and lower boundaries, and (iii) determining the resulting boundaries from the averaged boundaries in relation to the resulting core. Quantifiers expressed as fuzzy numbers in the interval [0, 1] and uncertainty expressions represented as fuzzy probabilities are comparable. The common framework for quantifiers and uncertainty expressions established by the assignment of fuzzy numbers in [0, 1] has to be complemented by a comparison of the algorithms of inference.
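The three-step combination rule in this abstract can be sketched directly. The code below assumes a fuzzy number is represented as a (lower boundary, core, upper boundary) triple on [0, 1]; the representation and the particular numbers in the example are illustrative assumptions, not the chapter's own notation.

```python
def combine(fn1, fn2, op):
    """Combine two fuzzy numbers (lower, core, upper) following the scheme
    described above: cores by ordinary arithmetic, spreads by averaging,
    and resulting boundaries placed relative to the new core."""
    l1, c1, u1 = fn1
    l2, c2, u2 = fn2
    core = op(c1, c2)
    left = ((c1 - l1) + (c2 - l2)) / 2    # averaged lower spread
    right = ((u1 - c1) + (u2 - c2)) / 2   # averaged upper spread
    return (max(0.0, core - left), core, min(1.0, core + right))
```

For example, composing a hypothetical "about half" (0.4, 0.5, 0.6) with "most" (0.7, 0.8, 0.9) under multiplication yields a core of 0.4 with averaged spreads of 0.1 on each side.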
Article
The way in which information about proportions, amounts, frequencies, probabilities, degrees of confidence, and risk is portrayed in natural language is not neutral, but reflects presuppositions and assumed norms. In this paper we present a review of evidence in support of this position. We show that the choice of expressions for communication depends in a systematic way on the kinds of inferences communicators draw. We go on to discuss the consequences of this for attribution phenomena, aspects of reasoning, the portrayal of uncertainty, and responses to questionnaires. We also suggest that communicator preferences for using language rather than numbers may have to do with human reasoning being argument-based, rather than with a preference for vagueness, as has been commonly claimed. Copyright © 2000 John Wiley & Sons, Ltd.
Article
Line balancing is the problem of assigning tasks to stations while satisfying certain managerial constraints. Most research on mixed-model line balancing focuses on minimizing the total processing time or the number of workstations. Independently, some studies consider the balance of physical workloads on the assembly line. In this paper, we present a new mathematical model that balances the line with respect to both processing time and workload at the same time. To this end, we propose a zero-one integer programming problem and use the Chebyshev goal programming approach as the solution method. Computational test runs are performed to compare the trade-offs between processing time and workload, and the results show that reliably balanced work schedules can be obtained with the proposed model.
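The Chebyshev (minimax) goal-programming idea can be illustrated on a toy instance: choose a task-to-station assignment that minimizes the worst deviation across the two objectives. The task data below are hypothetical, and a brute-force search stands in for the integer-programming solver the paper would use.

```python
# Toy brute-force illustration of Chebyshev goal programming for line
# balancing: minimize the maximum imbalance over two goals (processing
# time and physical workload). Task data are invented for illustration.
from itertools import product

tasks = {                     # task: (processing_time, physical_workload)
    "t1": (4, 2), "t2": (3, 5), "t3": (5, 1), "t4": (2, 4),
}

def station_loads(assignment):
    """Sum time and workload per station for a task -> station mapping."""
    loads = {0: [0, 0], 1: [0, 0]}
    for task, st in assignment.items():
        t, w = tasks[task]
        loads[st][0] += t
        loads[st][1] += w
    return loads

best = None
for choice in product([0, 1], repeat=len(tasks)):
    assignment = dict(zip(tasks, choice))
    loads = station_loads(assignment)
    time_dev = abs(loads[0][0] - loads[1][0])      # imbalance in processing time
    work_dev = abs(loads[0][1] - loads[1][1])      # imbalance in physical workload
    chebyshev = max(time_dev, work_dev)            # minimax (Chebyshev) criterion
    if best is None or chebyshev < best[0]:
        best = (chebyshev, assignment)

print(best)
```

Minimizing the maximum deviation prevents one goal (say, time balance) from being achieved at the cost of a large imbalance in the other.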
Article
Understanding and assessing risk are fundamental to success in Supply Chain Management. This paper develops and demonstrates a fuzzy risk assessment framework to effectively assess supply risk. The sources of risk were extracted based on industry expert views and prior research. A fuzzy inference engine which embeds human expert knowledge expressed through natural language is used. The case of a process industry showed that this method could capture imprecise perceptions about risk factors and quantify them effectively. The framework will be beneficial to researchers and practicing managers in identification of risk and improvement of reliability in the supply chain.
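A single step of such a fuzzy risk assessment can be sketched as follows. The membership functions, rule base, and risk levels here are invented for illustration; they are not the paper's calibrated values.

```python
# Minimal sketch of a fuzzy risk-assessment step in the spirit of the
# framework above: fuzzify two risk factors, fire linguistic rules, and
# defuzzify to a crisp risk score. All numbers are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_score(delay, defects):
    """Combine two risk factors (0..10 scales) into one crisp risk score."""
    delay_high = tri(delay, 4, 10, 16)       # "supply delay is high"
    defects_high = tri(defects, 4, 10, 16)   # "defect rate is high"
    delay_low = tri(delay, -6, 0, 6)
    defects_low = tri(defects, -6, 0, 6)

    # Rule firing strengths (min = fuzzy AND), each mapped to a crisp risk level
    rules = [
        (min(delay_high, defects_high), 9.0),  # both high -> severe risk
        (min(delay_high, defects_low), 6.0),   # mixed -> moderate risk
        (min(delay_low, defects_high), 6.0),
        (min(delay_low, defects_low), 1.0),    # both low -> minor risk
    ]
    total = sum(w for w, _ in rules)
    return sum(w * level for w, level in rules) / total  # weighted-average defuzzification

print(risk_score(delay=8, defects=3))
```

The rules encode expert knowledge in natural-language form ("if delay is high and defects are low, risk is moderate"), which is the point of the framework: imprecise perceptions enter directly, yet the output is a usable number.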
Article
Current explanations of basic anchoring effects, defined as the influence of an arbitrary number standard on an uncertain judgment, confound numerical values with vague quantifiers. I show that the consideration of numerical anchors may bias subsequent judgments primarily through the priming of quantifiers, rather than the numbers themselves. Study 1 varied the target of a numerical comparison judgment in a between-participants design, while holding the numerical anchor value constant. This design yielded an anchoring effect consistent with a quantifier priming hypothesis. Study 2 included a direct manipulation of vague quantifiers in the traditional anchoring paradigm. Finally, Study 3 examined the notion that specific associations between quantifiers, reflecting values on separate judgmental dimensions (i.e., the price and height of a target) can affect the direction of anchoring effects. Discussion focuses on the nature of vague quantifier priming in numerically anchored judgments.
Article
Full-text available
Tested the proposition that natural language concepts are represented as fuzzy sets of meaning components and that language operators (adverbs, negative markers, and adjectives) can be considered as operators on fuzzy sets. The application of fuzzy set theory to the meaning of phrases such as very small, sort of large, etc., was examined in 4 experiments. In Exp I, 19 undergraduates judged the applicability of the set of phrases to a set of squares of varying size. Results indicate that the group interpretation of the phrases can be characterized within the framework of fuzzy set theory. Similar results were obtained in Exp II, where each S's responses were analyzed individually. Although the responses of the 4 Ss in general could be interpreted in terms of fuzzy logical operations, 1 S responded in a more idiomatic style. Exps III and IV were attempts to influence the logical-idiomatic distinction in interpretation by (a) varying the presentation mode of the phrases and (b) giving the 59 Ss only a single phrase to judge. Overall, results are consistent with the hypothesis that natural language concepts and operators can be described more completely and more precisely using the framework of fuzzy set theory. (35 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
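The treatment of hedged phrases like "very small" as operators on fuzzy sets can be sketched using Zadeh's standard proposals: "very" as concentration (squaring the membership) and "sort of" as dilation (square root). The membership function for "small" below is invented for illustration.

```python
# Sketch of linguistic hedges as fuzzy-set operators, following Zadeh's
# standard proposal. The 'small' membership function is hypothetical.
import math

def small(size):
    """Membership of a square of given side length (cm) in the set 'small'."""
    return max(0.0, min(1.0, (10 - size) / 8))   # fully small at 2 cm, not small at 10 cm

very_small = lambda s: small(s) ** 2             # "very" concentrates the set
sort_of_small = lambda s: math.sqrt(small(s))    # "sort of" dilates it
not_small = lambda s: 1 - small(s)               # negation as complement

for s in (4, 6, 8):
    print(s, round(small(s), 2), round(very_small(s), 2), round(sort_of_small(s), 2))
```

Squaring pushes intermediate memberships down, so "very small" applies more strictly than "small", while the square root relaxes the requirement, mirroring the graded judgments the experiments elicited.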
Article
Full-text available
Reviews evidence which suggests that there may be little or no direct introspective access to higher order cognitive processes. Ss are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes. (86 ref)
Article
Full-text available
The formal practice of forecasting and planning (F&P) has risen to prominence within a few decades and now receives considerable attention from both academics and practitioners. This paper explicitly recognizes the nature of F&P as future-oriented decision making activities and, as such, their dependence upon judgmental inputs. A review of the extensive psychological literature on human judgmental abilities is provided from this perspective. It is argued that many of the numerous information processing limitations and biases revealed in this literature apply to tasks performed in F&P. In particular, the "illusion of control," accumulation of redundant information, failure to seek possible disconfirming evidence, and overconfidence in judgment are liable to induce serious errors in F&P. In addition, insufficient attention has been given to the implications of numerous studies that show that the predictive judgment of humans is frequently less accurate than that of simple quantitative models. Applied studies of F&P are also reviewed and shown to mirror many of the findings from psychology. The paper subsequently draws implications from these reviews and suggests reconceptualizing F&P through use of decision-theoretic concepts. At the organizational level this involves recognizing that F&P may perform many, often conflicting, manifest and latent functions which should be identified and evaluated through a multi-attribute utility framework. Operationally, greater use should be made of sensitivity analysis and the concept of the value of information.
Article
Full-text available
Considers that intuitive predictions follow a judgmental heuristic-representativeness. By this heuristic, people predict the outcome that appears most representative of the evidence. Consequently, intuitive predictions are insensitive to the reliability of the evidence or to the prior probability of the outcome, in violation of the logic of statistical prediction. The hypothesis that people predict by representativeness was supported in a series of studies with both naive and sophisticated university students (N = 871). The ranking of outcomes by likelihood coincided with the ranking by representativeness, and Ss erroneously predicted rare events and extreme values if these happened to be representative. The experience of unjustified confidence in predictions and the prevalence of fallacious intuitions concerning statistical regression are traced to the representativeness heuristic.
Article
Full-text available
3 experiments investigated the effects on posterior probability estimates of: (1) prior probabilities, amount of data, and diagnostic impact of the data; (2) payoffs; and (3) response modes. Ss usually behaved conservatively, i.e., the difference between their prior and posterior probability estimates was less than that prescribed by Bayes' theorem. Conservatism was unaffected by prior probabilities, remained constant as the amount of data increased, and decreased as the diagnostic value of each datum decreased. More learning occurred under payoff than under nonpayoff conditions and between-S variance was less under payoff conditions. Estimates were most nearly Bayesian under the (formally inappropriate) linear payoff, but considerable overestimation resulted; the log payoff condition yielded less conservatism than the quadratic payoff. Estimates were most nearly Bayesian when Ss estimated odds on a logarithmic scale.
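The Bayesian yardstick against which conservatism is measured can be made concrete in the classic two-urn setting (an assumed setup for illustration, not these experiments' exact stimuli): urn A is 70% red, urn B is 30% red, with equal priors. Conservatism means human posterior estimates fall between the prior and this Bayesian posterior.

```python
# Worked Bayesian update in odds form for the two-urn task. The urn
# compositions and priors are illustrative assumptions.

def bayes_posterior(draws, p_red_a=0.7, p_red_b=0.3, prior_a=0.5):
    """Posterior probability of urn A after a sequence of 'R'/'B' draws."""
    odds = prior_a / (1 - prior_a)
    for d in draws:
        # Likelihood ratio of this draw under urn A vs. urn B
        likelihood_ratio = (p_red_a / p_red_b) if d == "R" else ((1 - p_red_a) / (1 - p_red_b))
        odds *= likelihood_ratio            # Bayes' theorem in odds form
    return odds / (1 + odds)

print(bayes_posterior("RRRB"))   # three reds, one blue
```

After three reds and one blue the Bayesian posterior for urn A is already about 0.84; conservative subjects typically report a value much closer to the 0.5 prior.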
Chapter
Three experiments in subjective probability forecasting were designed, and these experiments were conducted in four forecast offices of the U.S. National Weather Service. The first experiment involved credible interval temperature forecasts, the second experiment involved point and area precipitation probability forecasts, and the third experiment involved the effect of guidance forecasts on precipitation probability forecasts. In each case, some background material is presented; the design of the experiment is discussed; some preliminary results of the experiment are presented; and some implications of the experiment and the results for probability forecasting in general and probability forecasting in meteorology in particular are discussed.
Article
The number of dimensions used by 7 male expert livestock judges in making decisions was determined by 2 different experimental procedures. In the 1st, judgments were made of hypothetical gilts (female breeding pigs) described by verbal statements along 11 relevant dimensions. In the 2nd, judgments were made based on photographs of Poland–China breeding gilts. The judges were found to use 9–21 pieces of information in the 1st study but were generally found to use fewer than 3 in the 2nd study. These results indicate that expert judges can integrate a large number of dimensions but that intercorrelations present in real stimuli tend to reduce the number of dimensions found. This suggested that experts may be able to use substantially more information than was previously thought. (15 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
TAT protocols from homosexual and normal college Ss and homosexual and normal prisoners were employed in 2 consecutive studies concerned with clinical and actuarial prediction. In the 1st study a clinician blindly predicted the criterion from TAT protocols with 95% accuracy. 20 objective TAT indices, when combined after-the-fact using actuarial methods, functioned nearly as well as the clinician. When applied to the prison population, the actuarial methods were totally ineffective, while 2 clinicians were more successful in predicting the criterion. (30 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
There is a tension between normative and descriptive elements in the theory of rational belief. This tension has been reflected in work in psychology and decision theory as well as in philosophy. Canons of rationality should be tailored to what is humanly feasible. But rationality has normative content as well as descriptive content. A number of issues related to both deductive and inductive logic can be raised. Are there full beliefs – statements that are categorically accepted? Should statements be accepted when they become overwhelmingly probable? What is the structure imposed on these beliefs by rationality? Are they consistent? Are they deductively closed? What parameters, if any, does rational acceptance depend on? How can accepted statements come to be rejected on new evidence? Should degrees of belief satisfy the probability calculus? Does conformity to the probability calculus exhaust the rational constraints that can be imposed on partial beliefs? With the acquisition of new evidence, should beliefs change in accord with Bayes' theorem? Are decisions made in accord with the principle of maximizing expected utility? Should they be? A systematic set of answers to these questions is developed on the basis of a probabilistic rule of acceptance and a conception of interval-valued logical probability according to which probabilities are based on known frequencies. This leads to limited deductive closure, a demand for only limited consistency, and the rejection of Bayes' theorem as universally applicable to changes of belief. It also becomes possible, given new evidence, to reject previously accepted statements.
Article
Intuitive judgement forms the basis of decision making both by experts, in professional settings, and by people in everyday life. Psychologists have studied the rationality of intuitive judgements. In this paper three approaches to decision making will be discussed: unqualified rationalism, qualified rationalism and irrationalism. The first approach holds that man is essentially rational, the second that serious cognitive biases exist, and the third that thinking is strongly influenced by non-cognitive sources of distortion, i.e. emotions and motives. Evidence on judgement is reviewed and found to support the last two approaches. Various ways of improving judgements, as suggested by the three basic viewpoints, are then presented.
Article
This paper presents a communicational account of the interpretation of categorical propositions of the types used in syllogisms. By the account, subjects interpret the propositions as if they were obscurely stated attempts to communicate. Since the conventions of language differ from the conventions of logic, interpretations will go beyond the minimal commitment dictated by logic. In particular, subjects assume that the presented information is complete and ordered asymmetrically. By assuming completeness, they interpret particular statements (some, some-not) as contradictions of stronger universal statements (all, no). By assuming asymmetry, they interpret first-mentioned terms as being more general and salient than later-mentioned terms; thus the predicate is encoded only insofar as it pertains to the subject. In consequence, subjects use a subjective logic consisting of three exclusive categories: all, some but not all, and none. They discriminate well between the categories, but make no further discriminations within the categories. The account is tested and contrasted with other accounts in three experiments that examine the psychological meaning of categorical propositions.
Article
Builders of expert rule-based systems attribute the impressive performance of their programs to the corpus of knowledge they embody: a large network of facts to provide breadth of scope, and a large array of informal judgmental rules (heuristics) which guide the system toward plausible paths to follow and away from implausible ones. Yet what is the nature of heuristics? What is the source of their power? How do they originate and evolve? By examining two case studies, the AM and EURISKO programs, we are led to some tentative hypotheses: Heuristics are compiled hindsight, and draw their power from the various kinds of regularity and continuity in the world; they arise through specialization, generalization, and, surprisingly often, analogy. Forty years ago, Polya introduced Heuretics as a separable field worthy of study. Today, we are finally able to carry out the kind of computation-intensive experiments which make such study possible.
Article
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
A model for schema guided reasoning
  • A C Zimmer
ZIMMER, A. C. (submitted). A model for schema guided reasoning.
Human Nature and Predictability
  • M I Friedman
  • M R Willis
FRIEDMAN, M. I. & WILLIS, M. R. (1981). Human Nature and Predictability. Lexington, Massachusetts: Lexington Books.
The logic of inexact concepts
  • J A Goguen
GOGUEN, J. A. (1969). The logic of inexact concepts. Synthèse, 19, 325-373.
Clinical versus Statistical Prediction
  • P E Meehl
MEEHL, P. E. (1954). Clinical versus Statistical Prediction. Minneapolis: University of Minnesota Press.
Subjective probability forecasting: some real world experiments
  • A H Murphy
  • R L Winkler
MURPHY, A. H. & WINKLER, R. L. (1975). Subjective probability forecasting: some real world experiments. In WENDT, D. & VLEK, C., Eds, Utility, Probability, and Human Decision Making. Dordrecht: Reidel.
Human Inference: Strategies and Shortcomings of Social Judgment
  • R E Nisbett
  • L Ross
NISBETT, R. E. & ROSS, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, New Jersey: Prentice-Hall.
Theory of Fuzzy Subsets
  • A Kaufmann
KAUFMANN, A. (1975). Theory of Fuzzy Subsets, vol. 1. New York: Academic Press.
Rational belief
  • H E Kyburg
KYBURG, H. E. (1983). Rational belief. The Behavioral and Brain Sciences, 6(2), 231-273.
Computer-based Medical Consultations: MYCIN
  • E H Shortliffe
SHORTLIFFE, E. H. (1976). Computer-based Medical Consultations: MYCIN. New York: Elsevier.
Aided and unaided decision making: improving intuitive judgement
  • L Sjoberg
SJOBERG, L. (1981). Aided and unaided decision making: improving intuitive judgement. Journal of Forecasting, 1, 349-363.
Linguistic description of human judgments in expert systems, and in “soft” sciences
  • Freksa
Verbal vs. numerical processing
  • Zimmer
Linguistic pattern recognition
  • Freksa
Eine Formalisierung mnestisch stabilisierter Bezugssysteme auf der Grundlage von Toleranzmengen
  • Zimmer