ABSTRACT. Decisions to trust involve ambiguity (unknown probabilities) in strategic situations. Despite many theoretical studies on the role of ambiguity in game theory, empirical studies have lagged behind due to a lack of measurement methods for ambiguity in games, where separating ambiguity attitudes from beliefs is crucial for proper measurement. Baillon et al. (2018) introduced a method that allows for such a separation in individual choice. We extend this method to strategic situations and apply it to the trust game, providing new insights. Both people’s ambiguity attitudes and their beliefs matter for their trust decisions. More ambiguity averse people decide to trust less, and people with more optimistic beliefs about others’ trustworthiness decide to trust more. However, people who are more a-insensitive (discriminating insufficiently between different likelihood levels) are less likely to act upon their beliefs. Our measure of belief, free from contamination by ambiguity attitudes, shows that traditional introspective trust survey measures capture trust in the commonly accepted sense of belief in the trustworthiness of others. Further, trustworthy people also decide to trust more, due to their belief that others are similar to themselves. This paper shows that applying ambiguity theories to game theory can bring useful new empirical insights.
ABSTRACT. Ellsberg and others suggested that decision under ambiguity is a rich empirical domain with many phenomena to be investigated beyond the Ellsberg urns. We provide a systematic empirical investigation of this richness by varying the uncertain events, the outcomes, and combinations of these. Although ambiguity aversion prevails, we also find systematic ambiguity seeking, confirming insensitivity. We find that ambiguity attitudes depend on the source of uncertainty (the kind of uncertain event) but not on the outcomes. Ambiguity attitudes are closer to rationality (ambiguity neutrality) for natural uncertainties than for the Ellsberg urns. This also appears from reduced monotonicity violations and reduced insensitivity. Ambiguity attitudes have predictive power across different sources of uncertainty and outcomes, with individual-specific components. Our rich domain serves well to test families of weighting functions for fitting ambiguity attitudes. We find that two-parameter families, capturing not only aversion but also insensitivity, are desirable for ambiguity even more than for risk. The Goldstein-Einhorn family performs best for ambiguity.
ABSTRACT. Measurements of ambiguity attitudes have so far focused on artificial events, where (subjective) beliefs can be derived from symmetry of events and can be then controlled for. For natural events as relevant in applications, such a symmetry and corresponding control are usually absent, precluding traditional measurement methods. This paper introduces two indexes of ambiguity attitudes, one for aversion and the other for insensitivity/perception, for which we can control for likelihood beliefs even if these are unknown. Hence, we can now measure ambiguity attitudes for natural events. Our indexes are valid under many ambiguity theories, do not require expected utility for risk, and are easy to elicit in practice. We use our indexes to investigate time pressure under ambiguity. People do not become more ambiguity averse under time pressure but become more insensitive (perceive more ambiguity). These findings are plausible and, hence, support the validity of our indexes.
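As a rough sketch of how such control for unknown beliefs can work in practice, the snippet below computes an aversion index and an insensitivity index from matching probabilities over a three-event partition. The specific formulas follow our reading of the Baillon et al. approach and should be treated as an illustrative assumption, not as a quotation of the paper's definitions.

```python
# Hedged sketch: ambiguity-attitude indexes from matching probabilities.
# Setup assumed here: a three-event partition {E1, E2, E3}; m_s is the
# average matching probability of the single events, m_c the average
# matching probability of the composite (two-event) unions.

def ambiguity_indexes(m_single, m_composite):
    """Return (aversion index b, insensitivity index a).

    m_single    -- matching probabilities of the three single events
    m_composite -- matching probabilities of the three composite events
    """
    m_s = sum(m_single) / len(m_single)
    m_c = sum(m_composite) / len(m_composite)
    b = 1 - m_c - m_s                # 0 under ambiguity neutrality
    a = 3 * (1 / 3 - (m_c - m_s))    # 0 under neutrality, 1 for fifty-fifty
    return b, a

if __name__ == "__main__":
    # An ambiguity-neutral agent has matching probabilities 1/3 and 2/3,
    # so both indexes are (approximately) zero.
    print(ambiguity_indexes([1 / 3] * 3, [2 / 3] * 3))
```

Note the key feature claimed in the abstract: the indexes are invariant to where the beliefs lie within the partition, because only averages over the exhaustive partition enter the formulas.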
ABSTRACT. The Anscombe-Aumann (AA) model, originally introduced to give a normative basis to expected utility, is nowadays mostly used for another purpose: to analyze deviations from expected utility due to ambiguity (unknown probabilities). The AA model makes two ancillary assumptions that do not refer to ambiguity: expected utility for risk and backward induction. These assumptions, even if normatively appropriate, fail descriptively. We relax them while maintaining AA's convenient mixture operation, and thus make it possible to test and apply AA based ambiguity theories descriptively. We find three common assumptions violated: reference independence, universal ambiguity aversion, and weak certainty independence. We introduce and axiomatize a reference dependent generalization of Schmeidler's CEU theory that accommodates the violations found. That is, we extend the AA model to prospect theory.
ABSTRACT. Case-based decision theory (CBDT) provided a new way of revealing preferences, with decisions under uncertainty determined by similarities with cases in memory. This paper introduces a method to measure CBDT that requires no commitment to parametric families and that relates directly to decisions. Thus, CBDT becomes directly observable and can be used in prescriptive applications. Two experiments on real estate investments demonstrate the feasibility of our method. Our implementation of real incentives not only avoids the income effect, but also avoids interactions between different memories. We confirm CBDT’s predictions except for one violation of separability of cases in memory.
ABSTRACT. This paper investigates the effects of predicting choices made by others on own choices. We follow up on promising first results in the literature that suggested improvements of rationality and, hence, new tools for nudging. We find improvements of strong rationality (risk neutrality) for losses, but no such improvements for gains. There are no improvements of weak rationality (avoiding preference reversals). Overall, risk aversion for choices increases. Conversely, for the effects of own choices on predictions of others’ choices, the risk aversion predicted in others’ choices is reduced if preceded by own choices, both for gains and for losses. We consider two psychological theories of risk: risk-as-feelings and risk-as-value, combined with anchoring or adjustment. Our results support risk-as-value combined with anchoring. Relative to preceding studies, we added real incentives, pure framing effects, and simplicity of stimuli that were maximally targeted towards the research questions of this paper.
ABSTRACT. We introduce a new method to measure the temporal discounting of money. Unlike preceding methods, our method requires neither knowledge nor measurement of utility. It is easier to implement, clearer to subjects, and requires fewer measurements than existing methods.
ABSTRACT. This paper investigates the rationality of group decisions versus individual decisions under risk. We study two group decision rules, majority and unanimity, in stochastic dominance and Allais paradox tasks. We distinguish communication effects (the effects of group discussions and interactions) from aggregation effects (mere impact of the voting procedure), which makes it possible to better understand the complex dynamics of group decision making. In an experiment, both effects occurred for intellective tasks whereas there were only aggregation effects in judgmental tasks. Communication effects always led to more rational choices; aggregation effects did so sometimes but not always. Groups violated stochastic dominance less often than individuals did, which was due to both aggregation and communication effects. In the Allais paradox tasks, there were almost no communication effects, and aggregation effects made groups deviate more from expected utility than individuals.
ABSTRACT. Nash is famous for many inventions, but it is less known that he, simultaneously with Marschak, also was the first to axiomatize expected utility for risk. In particular, these authors were the first to state the independence condition, a condition that should have been but was not stated by von Neumann and Morgenstern. Marschak’s paper resulted from interactions with several people at the Cowles Commission. We document unique letters and personal communications with Nash, Samuelson, Arrow, Dalkey, and others, making plausible that Nash made his discovery independently from the others.
ABSTRACT. This paper recommends using mosaics, rather than (σ-)algebras, as collections of events in decision under uncertainty. We show how mosaics solve the main problem of Savage’s (1954) uncertainty model, a problem pointed out by Duncan Luce. Using mosaics, we can connect Luce’s modeling of uncertainty with Savage’s. Thus, the results and techniques developed by Luce and his co-authors become available to currently popular theories of decision making under uncertainty and ambiguity.
ABSTRACT. Using a theorem showing that matching probabilities of ambiguous events can capture ambiguity attitudes, we introduce a tractable method for measuring ambiguity attitudes and apply it in a large representative sample. In addition to ambiguity aversion, we confirm an ambiguity component recently found in laboratory studies: a-insensitivity, the tendency to treat subjective likelihoods as fifty-fifty, thus overweighting extreme events. Our ambiguity measurements are associated with real economic decisions; specifically, a-insensitivity is negatively related to stock market participation. Ambiguity aversion is also negatively related to stock market participation, but only for subjects who perceive stock returns as highly ambiguous.
ABSTRACT. We introduce a new type of preference conditions for intertemporal choice, requiring independence of present values from various other variables. The new conditions are more concise and more transparent than traditional ones. They are directly related to applications because present values are widely used tools in intertemporal choice. Our conditions give more general behavioral axiomatizations, which facilitates normative debates and empirical tests of time inconsistencies and related phenomena. Like other preference conditions, our conditions can be tested qualitatively. Unlike other preference conditions, however, our conditions can also be directly tested quantitatively, e.g. to verify the required independence of present values from predictors in regressions. We show how similar types of preference conditions, imposing independence conditions between directly observable quantities, can be developed for decision contexts other than intertemporal choice, and can simplify behavioral axiomatizations there. Our preference conditions are especially efficient if several types of aggregation are relevant, because we can handle them in one blow. We thus give an efficient axiomatization of a market pricing system that is (i) arbitrage-free for hedging uncertainties and (ii) time consistent.
ABSTRACT. In their famous 1982 paper in this journal, Loomes and Sugden introduced regret theory. Now, more than 30 years later, the case for the historical importance of this contribution can be made.
ABSTRACT. This paper presents the Metric-Frequency Calculator (MF Calculator), an online application to analyze similarity. The MF Calculator implements a MF similarity algorithm for the quantitative assessment of similarity in ill-structured data sets. It is widely applicable as it can be used with nominal, ordinal, or interval data when there is little prior control over the variables to be observed regarding number or content. The MF Calculator generates a proximity matrix in CSV, XML or DOC format that can be used as input of traditional statistical techniques such as hierarchical clustering, additive trees, or multidimensional scaling. The MF Calculator also displays a graphical representation of outputs using additive similarity trees. A simulated example illustrates the implementation of the MF Calculator. An additional example with real data is presented, in order to illustrate the potential of combining the MF Calculator with cluster analysis. The MF Calculator is a user-friendly tool available free of charge. It can be accessed from http://mfcalculator.celiasales.org/Calculator.aspx, and it can be used by non-experts from a wide range of social sciences.
ABSTRACT. Uncertainty pervades most aspects of life. From selecting a new technology to choosing a career, decision makers rarely know in advance the exact outcomes of their decisions. Whereas the consequences of decisions in standard decision theory are explicitly described (the decision from description (DFD) paradigm), the consequences of decisions in the recent decision from experience (DFE) paradigm are learned from experience. In DFD, decision makers typically overrespond to rare events. That is, rare events have more impact on decisions than their objective probabilities warrant (overweighting). In DFE, decision makers typically exhibit the opposite pattern, underresponding to rare events. That is, rare events may have less impact on decisions than their objective probabilities warrant (underweighting). In extreme cases, rare events are completely neglected, a pattern known as the “Black Swan effect.” This contrast between DFD and DFE is known as a description–experience gap. In this paper, we discuss several tentative interpretations arising from our interdisciplinary examination of this gap. First, while a source of underweighting of rare events in DFE may be sampling error, we observe that a robust description–experience gap remains when these factors are not at play. Second, the residual description–experience gap is not only about experience per se but also about the way in which information concerning the probability distribution over the outcomes is learned in DFE. Econometric error theories may reveal that different assumed error structures in DFD and DFE also contribute to the gap.
ABSTRACT. This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences. Thus we generalize many preceding results in the literature. For example, continuity in outcomes, a condition needed in other approaches, now is an option rather than a requirement. Applications to expected utility, decisions under ambiguity, welfare evaluations for variable population size, discounted utility, and quasilinear means in functional analysis are presented.
ABSTRACT. Prospect theory is the most popular theory for predicting decisions under risk. This paper investigates its predictive power for decisions under ambiguity, using its specification through the source method. We find that it outperforms its most popular alternatives, including subjective expected utility, Choquet expected utility, and three multiple priors theories: maxmin expected utility, maxmax expected utility, and α-maxmin expected utility.
ABSTRACT. A central question in many debates on paternalism is whether a decision analyst can ever go against the stated preference of a client, even if merely intending to improve the decisions for the client. Using four gedanken-experiments, this paper shows that this central question, so cleverly and aptly avoided by libertarian paternalism (nudge), cannot always be avoided. The four thought experiments, while purely hypothetical, serve to raise and specify the critical arguments in a maximally clear and pure manner. The first purpose of the paper is, accordingly, to provide a litmus test on the readers’ stance on paternalism. We thus also survey and organize the various stances in the literature. The second purpose of this paper is to argue that paternalism cannot always be avoided and consumer sovereignty cannot always be respected. However, this argument will remain controversial.
ABSTRACT. Behavioral conditions such as compound invariance for risky choice and constant decreasing relative impatience for intertemporal choice have surprising implications for the underlying decision model. They imply a multiplicative separability of outcomes and either probability or time. Hence the underlying model must be prospect theory or discounted utility on the domain of prospects with one nonzero outcome. We indicate implications for richer domains with multiple outcomes, and with both risk and time involved.
ABSTRACT. Doyle's (JDM 2013) theoretical survey of discount functions criticizes two parametric families abbreviated as CRDI and CADI families. We show that Doyle's criticisms are based on a mathematical mistake and are incorrect.
This paper presents preference axiomatizations of expected utility for nonsimple lotteries while avoiding continuity constraints. We use results by Fishburn (1975), Wakker (1993), and Kopylov (2010) to generalize results by Delbaen, Drapeau, and Kupper (2011). We explain the logical relations between these contributions for risk versus uncertainty, and for finite versus countable additivity, indicating the most general axiomatizations of expected utility available today.
Time discounting and quality of life are two important factors in evaluations of medical interventions. The measurement of these two factors is complicated because they interact. Existing methods either simply assume one factor given, based on heuristic assumptions, or invoke complicating extraneous factors such as risk that generate extra biases. We introduce a new method for measuring discounting (and then quality of life) that involves no extraneous factors and that avoids all distorting interactions. Further, our method is considerably simpler and more realistic for subjects than existing methods. It is entirely choice-based and, thus, can be founded on the rationality requirements of economics. An experiment demonstrates the feasibility of our method. It can measure discounting not only for health, but for any other (“flow”) commodity that comes per time unit, such as salary.
Two experiments show that violations of expected utility due to ambiguity, found in general decision experiments, also affect belief aggregation. Hence we use modern ambiguity theories to analyze belief aggregation, thus obtaining more refined and empirically more valid results than traditional theories can provide. We can now confirm more reliably that conflicting (heterogeneous) beliefs where some agents express certainty are processed differently than informationally equivalent imprecise homogeneous beliefs. We can also investigate new phenomena related to ambiguity. For instance, agents who express certainty receive extra weight (a cognitive effect related to ambiguity-generated insensitivity) and generate extra preference value (source preference; a motivational effect related to ambiguity aversion). Hence, incentive compatible belief elicitations that prevent manipulation are especially warranted when agents express certainty. For multiple prior theories of ambiguity, our findings imply that the same prior probabilities can be treated differently in different contexts, suggesting an interest in corresponding generalizations.
This paper presents a general technique for comparing the concavity of different utility functions when probabilities need not be known. It generalizes: (a) Yaari’s comparisons of risk aversion by not requiring identical beliefs; (b) Kreps and Porteus’ information-timing preference by not requiring known probabilities; (c) Klibanoff, Marinacci, and Mukerji’s smooth ambiguity aversion by not using subjective probabilities (which are not directly observable) and by not committing to (violations of) dynamic decision principles; (d) comparative smooth ambiguity aversion by not requiring identical second-order subjective probabilities. Our technique completely isolates the empirical meaning of utility. It thus sheds new light on the descriptive appropriateness of utility to model risk and ambiguity attitudes.
Experiments frequently use a random incentive system (RIS), where only tasks that are randomly selected at the end of the experiment are for real. The most common type pays every subject one out of her multiple tasks (within-subjects randomization). Recently, another type has become popular, where a subset of subjects is randomly selected, and only these subjects receive one real payment (between-subjects randomization). In earlier tests with simple, static tasks, RISs performed well. The present study investigates RISs in a more complex, dynamic choice experiment. We find that between-subjects randomization reduces risk aversion. While within-subjects randomization delivers unbiased measurements of risk aversion, it does not eliminate carry-over effects from previous tasks. Both types generate an increase in subjects’ error rates. These results suggest that caution is warranted when applying RISs to more complex and dynamic tasks.
In economic decisions we often have to deal with uncertain events for which no probabilities are known. Several normative models have been proposed for such decisions. Empirical studies have usually been qualitative, or they estimated ambiguity aversion through one single number. This paper introduces the source method, a tractable method for quantitatively analyzing uncertainty empirically. The method can capture the richness of ambiguity attitudes. The theoretical key in our method is the distinction between different sources of uncertainty, within which subjective (choice-based) probabilities can still be defined. Source functions convert those subjective probabilities into willingness to bet. We apply our method in an experiment, where we do not commit to a particular model of ambiguity but let the data speak.
Utility independence is a central condition in multiattribute utility theory, where attributes of outcomes are aggregated in the context of risk. The aggregation of attributes in the absence of risk is studied in conjoint measurement. In conjoint measurement, standard sequences have been widely used to empirically measure and test utility functions, and to theoretically analyze them. This paper shows that utility independence and standard sequences are closely related: utility independence is equivalent to a standard sequence invariance condition when applied to risk. This simple relation between two widely used conditions in adjacent fields of research is surprising and useful. It facilitates the testing of utility independence because standard sequences are flexible and can avoid cancelation biases that affect direct tests of utility independence. Extensions of our results to nonexpected utility models can now be provided easily. We discuss applications to the measurement of quality-adjusted life-years (QALY) in the health domain.
Proper scoring rules serve to measure subjective degrees of belief. Traditional proper scoring rules are based on the assumption of expected value maximization. There are, however, many deviations from expected value due to risk aversion and other factors. Correcting techniques have been proposed in the literature for deviating (nonlinear) utility that still assumed expected utility maximization. More recently, corrections for deviations from expected utility have been proposed. The latter concerned, however, only the quadratic scoring rule, and could handle only half of the domain of subjective beliefs. Further, beliefs close to 0.5 could not be discriminated. This paper generalizes the correcting techniques to all proper scoring rules, covers the whole domain of beliefs and, in particular, can discriminate between all degrees of belief. Thus we fully extend the properness requirement (in the sense of identifying all degrees of subjective beliefs) to all models that deviate from expected value.
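The properness requirement discussed above can be illustrated with the classical quadratic scoring rule under expected value maximization (a standard textbook fact, not the correction technique of this paper): reporting r for an event yields a score of 1 − (1 − r)² if the event occurs and 1 − r² otherwise, and the expected score is uniquely maximized by reporting one's true belief p.

```python
# Quadratic scoring rule under expected value maximization.
# Expected score of report r under true belief p:
#   E[S(r)] = p * (1 - (1 - r)**2) + (1 - p) * (1 - r**2)
# This is a concave quadratic in r with its maximum at r = p,
# which is exactly what makes the rule "proper".

def expected_score(r, p):
    """Expected quadratic score of reporting r when the true belief is p."""
    return p * (1 - (1 - r) ** 2) + (1 - p) * (1 - r ** 2)

def best_report(p, grid_size=1001):
    """Grid search for the report that maximizes the expected score."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    return max(grid, key=lambda r: expected_score(r, p))

if __name__ == "__main__":
    for p in (0.1, 0.3, 0.7):
        print(p, best_report(p))  # the optimal report coincides with p
```

Deviations from expected value, such as risk aversion or probability weighting, break this coincidence between the optimal report and the true belief, which is what the correcting techniques in the literature, and their generalization here, aim to undo.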
This paper provides a preference foundation of prospect theory for continuous distributions and unbounded utility. Thus we show, for instance, how applications of this theory to normal and lognormal distributions can be justified or falsified.
This paper finds preference reversals in measurements of ambiguity aversion, even if psychological and informational circumstances are kept constant. The reversals are of a fundamentally different nature than the reversals found before because they cannot be explained by context-dependent weightings of attributes. We offer an explanation based on Sugden’s random-reference theory, with different elicitation methods generating different random reference points. Then measurements of ambiguity aversion that use willingness to pay are confounded by loss aversion and hence overestimate ambiguity aversion.
This paper introduces a parameter-free method for measuring the weighting functions of prospect theory and rank-dependent utility. These weighting functions capture risk attitudes, subjective beliefs, and ambiguity attitudes. Our method, called the midweight method, is based on a convenient way to obtain midpoints in the weighting function scale. It can be used both for risk (known probabilities) and for uncertainty (unknown probabilities). The resulting integrated treatment of risk and uncertainty is particularly useful for measuring the differences between them: ambiguity. Compared to existing methods to measure ambiguity attitudes, our method is more efficient and it can accommodate violations of expected utility under risk. An experiment demonstrates the feasibility and tractability of our method, yielding plausible results such as ambiguity aversion for moderate and high likelihoods but ambiguity seeking for low likelihoods, as predicted by Ellsberg.
This paper discusses Jean-Yves Jaffray's ideas on ambiguity, and the views underlying his ideas. His models, developed 20 years ago, provide the most tractable separation of risk attitudes, ambiguity attitudes, and ambiguity beliefs available in the literature today.
This paper introduces time-tradeoff (TTO) sequences as a new tool to analyze time inconsistency and intertemporal choice. TTO sequences simplify the measurement of discount functions, requiring no assumptions about utility. They also simplify the qualitative testing, and allow for quantitative measurements, of time inconsistencies. TTO sequences can easily be administered. They readily show which subjects are most prone to time inconsistencies. We further use them to axiomatically analyze and empirically test (quasi-)hyperbolic discount functions. An experiment demonstrates the feasibility of measuring TTO sequences. Our data falsify (quasi-)hyperbolic discount functions and call for the development of models that can accommodate increasing impatience.
When process fairness matters (by deviating from outcome fairness), dynamic inconsistencies can arise in the same way as they do in nonexpected utility under risk. Mark Machina introduced resolute choice so as to restore dynamic consistency under nonexpected utility without using Strotz's commitment devices. Machina's idea can similarly be used to justify dynamically consistent process fairness. Process fairness comprises a particularly convincing application of resolute choice.
This book need not be read continuously. The reader can pick out topics of interest, and then select preceding sections to be read in preparation as indicated in Appendix K. Thus, different readers can pick out different parts of interest. In particular, readers with little mathematical background can skip all advanced mathematics. Indexed exercises further allow readers to select and skip material within sections.
Ways are presented to empirically test the validity of theories and ways to test their qualitative properties. For all theories described, methods are provided for obtaining precise quantitative measurements of those theories and their concepts through so-called parameter-free methods. Such methods do not just fit models, but they also give insights into the concepts of the model (e.g., subjective probabilities) and into the underlying psychological processes. They can also be used in interactive prescriptive decision consultancies. The theories are presented in as elementary and transparent a manner as possible. This enhances the accessibility of the book to readers without much theoretical background.
The presentation of all models in this book follows the same line. First the model is defined, with special attention to the free parameters that characterize it, such as the utility function in expected utility. Next we see how those parameters can, in principle, be measured from decisions, and how well they can describe, predict, and prescribe decisions. The requirement that such measurements do not run into contradictions then gives so-called preference foundations of the models. Finally, we discuss empirical findings of the models, and sometimes give first suggestions for applications in various fields.
The main point that prospect theory adds to classical expected utility is that risk and ambiguity attitudes are no longer modeled solely through utility curvature, but depend also on nonadditive probability weighting and loss aversion. Loss aversion is one of the strongest empirical phenomena in decision theory, and the various ways people feel about probabilities and uncertainty (chance attitude) are just as important empirically as the various ways people feel about outcomes (utility). These new components of risk attitude and Ellsberg's ambiguity attitudes had been sorely missing in the literature up to the 1980s. This book aims to make these new concepts accessible to a wide audience, and to help initiate applications thereof.
The commonly used hyperbolic and quasi-hyperbolic discount functions have been developed to accommodate decreasing impatience, which is the prevailing empirical finding in intertemporal choice, in particular for aggregate behavior. These discount functions do not have the flexibility to accommodate increasing impatience or strongly decreasing impatience. This lack of flexibility is particularly disconcerting for fitting data at the individual level, where various patterns of increasing impatience and strongly decreasing impatience will occur for a significant fraction of subjects. This paper presents discount functions with constant absolute (CADI) or constant relative (CRDI) decreasing impatience that can accommodate any degree of decreasing or increasing impatience. In particular, they are sufficiently flexible for analyses at the individual level. The CADI and CRDI discount functions are the analogs of the well known CARA and CRRA utility functions for decision under risk.
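For reference, the risk-theory analogs invoked at the end of the abstract are the standard CARA and CRRA utility families; their forms below are standard textbook definitions (the CADI and CRDI discount functions themselves are defined in the paper and are not reproduced here):

```latex
% Standard utility families for risk, to which CADI and CRDI are the
% intertemporal analogs (CADI ~ CARA, CRDI ~ CRRA):
\[
  u_{\mathrm{CARA}}(x) =
  \begin{cases}
    \dfrac{1 - e^{-\theta x}}{\theta} & \theta \neq 0,\\[1ex]
    x & \theta = 0,
  \end{cases}
  \qquad
  u_{\mathrm{CRRA}}(x) =
  \begin{cases}
    \dfrac{x^{1-\rho}}{1-\rho} & \rho \neq 1,\\[1ex]
    \ln x & \rho = 1.
  \end{cases}
\]
```

Just as CARA and CRRA each span the full range of risk attitudes through a single parameter, the analogy suggests how CADI and CRDI can span any degree of decreasing or increasing impatience.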
Proper scoring rules, convenient and commonly used tools for eliciting subjective beliefs, are valid only under expected value maximization. This paper shows how proper scoring rules can be generalized to modern theories of risk and ambiguity, yielding mutual benefits. For practitioners of proper scoring rules, the validity of their measurement instrument is improved. For the study of risk and ambiguity, measurement tools are provided that are more efficient than the commonly used binary preferences. An experiment demonstrates the feasibility of our generalized measurement instrument, yielding plausible empirical results.
Similarity measures have been studied extensively in many domains, but usually with well-structured data sets. In many psychological applications, however, such data sets are not available. It often cannot even be predicted how many items will be observed, or what exactly they will entail. This paper introduces a similarity measure, called the metric-frequency (MF) measure, that can be applied to such data sets. If it is not known beforehand how many items will be observed, then the number of items actually observed in itself carries information. A typical feature of the MF is that it incorporates such information. The primary purpose of our measure is that it should be pragmatic, widely applicable, and tractable, even if data are complex. The MF generalizes Tversky's set-theoretic measure of similarity to cases where items may be present or absent and may, but need not, be numerical, as in Shepard's metric measure. As an illustration, we apply the MF to family therapy, where it cannot be predicted what issues the clients will raise in therapeutic sessions. The MF is flexible enough to be applicable to idiographic data.
Many traditional conjoint representations of binary preferences are additively decomposable, or additive for short. An important generalization arises under rank-dependence, when additivity is restricted to cones with a fixed ranking of components from best to worst (comonotonicity), leading to configural weighting, rank-dependent utility, and rank- and sign-dependent utility (prospect theory). This paper provides a general result showing how additive representations on an arbitrary collection of comonotonic cones can be combined into one overall representation that applies to the union of all cones considered. The result is applied to a new paradigm for decision under uncertainty developed by Duncan Luce and others, which allows for violations of basic rationality properties such as the coalescing of events and other framing conditions. Through our result, a complete preference foundation of a number of new models by Luce and others can be obtained. We also show how additive representations on different full product sets can be combined into a representation on the union of these different product sets.
Koopmans provided a well-known preference axiomatization of discounted utility, the most widely used model of intertemporal choice. There were, however, some technical problems in his analysis. For example, there was an unforeseen implication of bounded utility. Some partial solutions have been advanced in various fields in the literature. The technical problems in Koopmans' analysis obscure the appeal of his intuitive axioms. This paper completely resolves these technical problems. In particular, it obtains complete flexibility concerning the utility functions that can be used. This paper thus provides a clean and complete preference axiomatization of discounted utility, clarifying the appeal of Koopmans' intuitive axioms.
This paper examines the cross-fertilization of random utility models with the study of decision making under risk and uncertainty. We start with a description of Expected Utility (EU) theory and then consider deviations from the standard EU frameworks, involving the Allais paradox and the Ellsberg paradox, inter alia. We then discuss how the resulting Non-EU framework can be modeled and estimated within the framework of discrete choices in static and dynamic contexts. Our objectives in addressing risk and ambiguity in individual choice contexts are to understand the decision choice process, and to use behavioral information for prediction, prescription, and policy analysis.
Ambiguity aversion appears to have subtle psychological causes. Curley, Yates, and Abrams found that the fear of negative evaluation by others (FNE) increases ambiguity aversion. This paper introduces a design where preferences can be private information of individuals, so that FNE can be avoided entirely. Thus, we can completely control for FNE and other social factors, and can determine exactly to what extent ambiguity aversion is driven by such social factors. In our experiment ambiguity aversion, while appearing as commonly found in the presence of FNE, disappears entirely if FNE is eliminated. Implications are discussed.
A personal account is given of my experiences as an economist working in medical decision making. I discuss the differences between economic decision theory and medical decision making and give examples of the mutual benefits resulting from interactions. In particular, I discuss pros and cons of different methods for measuring quality of life (or, as economists would call it, utility), including the standard-gamble, the time-tradeoff, and the healthy-years-equivalent method.
The power family, also known as the family of constant relative risk aversion (CRRA), is the most widely used parametric family for fitting utility functions to data. Its characteristics have, however, been little understood, and have led to numerous misunderstandings. This paper explains these characteristics in a manner accessible to a wide audience.
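For readers unfamiliar with the family discussed in this abstract, a standard way to write it (this formulation is general background, not taken from the paper itself) is:

```latex
% The power (CRRA) family of utility functions, in its common normalization:
%   u(x) = x^{1-\rho}/(1-\rho)  for \rho \neq 1,
%   u(x) = \ln x                for \rho = 1,
% defined for x > 0. The name CRRA reflects that the coefficient of
% relative risk aversion, -x u''(x)/u'(x) = \rho, is constant in x.
u(x) =
\begin{cases}
  \dfrac{x^{1-\rho}}{1-\rho} & \text{if } \rho \neq 1,\\[1ex]
  \ln x & \text{if } \rho = 1,
\end{cases}
\qquad -\,\frac{x\,u''(x)}{u'(x)} = \rho .
```

The case distinction at \(\rho = 1\) and the behavior of the family for \(\rho > 1\) (where utility is bounded above but unbounded below) are among the characteristics that are easily misunderstood.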
This chapter deals with individual decision making under uncertainty (unknown probabilities). Risk (known probabilities) is not treated as a separate case, but as a subcase of uncertainty. Many results from risk naturally extend to uncertainty. The Allais paradox, commonly applied to risk, also reveals empirical deficiencies of expected utility for uncertainty. The Ellsberg paradox does not reveal deviations from expected utility in an absolute sense, but in a relative sense, giving within-person comparisons: for some events (ambiguous or otherwise) subjects deviate more from expected utility than for other events. Besides aversion, many other attitudes towards ambiguity are empirically relevant.
In an experiment, choice-based (revealed-preference) utility of money is derived from choices under risk, and choiceless (non-revealed-preference) utility from introspective strength-of-preference judgments. The well-known inconsistencies of risky utility under expected utility are resolved under prospect theory, yielding one consistent cardinal utility index for risky choice. Remarkably, however, this cardinal index also agrees well with the choiceless utilities, suggesting a relation between a choice-based and a choiceless concept. Such a relation implies that introspective judgments can provide useful data for economics, and can reinforce the revealed-preference paradigm. This finding sheds new light on the classical debate on ordinal versus cardinal utility.
This paper extends de Finetti's betting-odds method for assessing subjective beliefs to ambiguous events. De Finetti's method is so transparent that decision makers can evaluate the relevant tradeoffs even in complex situations, for prospects with more than two uncertain outcomes. Such prospects are needed to test the novelty of Quiggin-Schmeidler rank-dependent utility and of new prospect theory. Our extension is implemented in an experiment on predicting the next day's performance of the Dow Jones and Nikkei stock indexes, where we test the existence and violations of rank-dependence.
This paper was previously entitled: “Measuring Decision Weights of Ambiguous Events by Adapting de Finetti's Betting-Odds Method to Prospect Theory.”
The introduction of the Euro gave a unique opportunity to empirically disentangle two components in the utility of money. The first is intrinsic value, a normative component that is central in economics. The second is numerical sensitivity, a descriptive component that is central in prospect theory and that underlies the money illusion. We measured relative risk aversion in Belgium before and after the introduction of the Euro, and could consider effects of changes in intrinsic value while keeping numbers constant, and effects of changes in numbers while keeping intrinsic value constant. Increasing intrinsic value led to a significant increase of relative risk aversion, but changes in numbers did not have significant effects.
This paper presents a field study into the effects of statistical information concerning risks on willingness to take insurance, with special attention being paid to the usefulness of these effects for the clients (the insured). Unlike many academic studies, we were able to use in-depth individual interviews of a large representative sample from the general public (N=476). The statistical information that had the most interesting effects, “individual own past-cost information,” unfortunately enhanced adverse selection, which we could directly verify because the real health costs of the clients were known. For a prescriptive evaluation this drawback must be weighed against some advantages: a desirable interaction with risk attitude, increased customer satisfaction, and increased cost awareness. Descriptively, ambiguity seeking was found rather than ambiguity aversion, and no risk aversion was found for loss outcomes. Both findings, obtained in a natural decision context, deviate from traditional views in risk theory but are in line with prospect theory. We confirmed prospect theory's reflection at the level of group averages, but falsified it at the individual level.
Whereas both the Allais paradox, the first empirical challenge of the classical rationality assumptions, and learning have been the focus of many experimental investigations, no experimental study exists to date of learning in the pure context of the Allais paradox. This paper presents such a study. We find that choices converge to expected utility maximization if subjects are given the opportunity to learn by both thought and experience, but less so when they learn by thought only. To the extent that genuine preferences should be measured with proper learning and incentives, our study gives the first pure demonstration that irrationalities such as those in the Allais paradox are less pronounced than often thought.
This paper introduces the likelihood method for decision under uncertainty. The method allows the quantitative determination of subjective beliefs or decision weights without invoking additional separability conditions, and generalizes the Savage-de Finetti betting method. It is applied to a number of popular models for decision under uncertainty. In each case, preference foundations result from the requirement that no inconsistencies are to be revealed by the version of the likelihood method appropriate for the model considered. A unified treatment of subjective decision weights results for most of the decision models popular today. Savage's derivation of subjective expected utility can now be generalized and simplified. In addition to the intuitive and empirical contributions of the likelihood method, we provide a number of technical contributions: We generalize Savage's nonatomicity condition (“P6”) and his assumption of (sigma) algebras of events, while fully maintaining his flexibility regarding the outcome set. Derivations of Choquet expected utility and probabilistic sophistication are generalized and simplified similarly. The likelihood method also reveals a common intuition underlying many other conditions for uncertainty, such as definitions of ambiguity aversion and pessimism.
To a considerable extent, the commonly observed risk aversion is caused by loss aversion. This paper proposes a quantitative index of loss aversion. Under prospect theory, the proposal leads to a decomposition of risk attitude into three independent components: intrinsic utility, probability weighting, and loss aversion. The main theorem shows how the index of loss aversion of different decision makers can be compared through observed choices.
This paper characterizes properties of chance attitudes (nonadditive measures). It does so for decision under uncertainty (unknown probabilities), where it assumes Choquet expected utility, and for decision under risk (known probabilities), where it assumes rank-dependent utility. It analyzes chance attitude independently from utility. All preference conditions concern simple violations of the sure-thing principle. Earlier results along these lines assumed richness of both outcomes and events. This paper generalizes such results to general state spaces as in Schmeidler's model of Choquet expected utility, and to general outcome spaces as in Gilboa's model of Choquet expected utility.
The utility of gambling, entailing an intrinsic utility or disutility of risk, has been alluded to in the economics literature for over a century. This paper presents a model of the phenomenon and demonstrates that any utility of gambling necessarily implies a violation of fundamental rationality properties, such as transitivity or stochastic dominance, which may explain why this often-debated phenomenon was never formalized in the economics literature. Our model accommodates well-known deviations from expected utility, such as the Allais paradox, the simultaneous existence of gambling and insurance, and the equity-premium puzzle, while minimally deviating from expected utility. Our model also sheds new light on risk aversion and the distinction between von Neumann-Morgenstern- and neo-classical (riskless) utility.
This paper introduces a new preference condition that can be used to justify (or criticize) expected utility. The approach taken in this paper is an alternative to Savage's, and is accessible to readers without a mathematical background. It is based on a method for deriving “comparisons of tradeoffs” from ordinal preferences. Our condition simplifies previously-published tradeoff conditions, and at the same time provides more general and more powerful tools to specialists. The condition is more closely related to empirical methods for measuring utility than its predecessors. It provides a unifying tool for qualitatively testing, quantitatively measuring, and normatively justifying expected utility.
The standard gamble (SG) method and the time tradeoff (TTO) method are commonly used to measure utilities. However, they are distorted by biases due to loss aversion, scale compatibility, utility curvature for life duration, and probability weighting. This article applies corrections for these biases and provides new data on these biases and their corrections. The SG and TTO utilities of 6 rheumatoid arthritis health states were assessed for 45 healthy respondents. Various corrections of utilities were considered. The uncorrected TTO scores and the corrected (for utility curvature) TTO scores provided similar results. This article provides arguments suggesting that the TTO scores are biased upward rather than having balanced biases. The only downward bias in TTO scores was small and probably cannot offset the upward biases. The TTO scores are higher than the theoretically most preferred correction of the SG, the mixed correction. These findings suggest that uncorrected SG scores, which are higher than TTO scores, are too high.
This paper proposes a decomposition of nonadditive decision weights into a component reflecting risk attitude and a component depending on belief. The decomposition is based solely on observable preference and does not invoke other empirical primitives such as statements of judged probabilities. The characterizing preference condition (less sensitivity towards uncertainty than towards risk) deviates somewhat from the often-studied ambiguity aversion but is confirmed in the empirical data. The decomposition only invokes one-nonzero-outcome prospects and is valid under all theories with a nonlinear weighting of uncertainty.
Several contributions in this book present axiomatizations of decision models, and of special forms thereof. This chapter explains the general usefulness of such axiomatizations, and reviews the basic axiomatizations for static individual decisions under uncertainty. It will demonstrate that David Schmeidler's contributions to this field were crucial.
This paper introduces anchor levels as a new tool for multiattribute utility theory. Anchor levels are attribute levels whose value is not affected by other attributes. They allow for new interpretations and generalizations of known representations and utility measurement techniques. Generalizations of earlier techniques are obtained because cases with complex interactions between attributes can now be handled. Anchor levels serve not only to enhance the generality, but also the tractability, of utility measurements, because stimuli can better be targeted towards the perception and real situation of clients. In an application, anchor levels were applied to the measurement of quality of life during radiotherapy treatment, where there are complex interactions with what happens before and after. Using anchor levels, the measurements could be related exactly to the situation of the clients, thus simplifying the clients' cognitive burden.
Levy and Levy (Management Science, 2002) present data that, according to their claims, violate prospect theory. They suggest that prospect theory's hypothesis of an S-shaped value function, concave for gains and convex for losses, is incorrect. However, all of the data of Levy and Levy are perfectly consistent with the predictions of prospect theory, as can be verified by simply applying prospect theory formulas. The mistake of Levy and Levy is that they, incorrectly, thought that probability weighting could be ignored.
This paper examines a tradeoff-consistency technique for testing and axiomatically founding decision models. The technique improves earlier tradeoff-consistency techniques by considering only indifferences, not strict preferences. The technical axioms used are mostly algebraic and not, as is more common, topological. The resulting foundations are at the same time more general and more accessible than earlier results, regarding both the technical and the intuitive axioms. The technique is applied to three popular theories of individual decision under uncertainty and risk, i.e., expected utility, Choquet expected utility, and prospect theory. The conditions used are better suited for empirical measurements of utility than earlier conditions, and accordingly are easier to test.
This paper formalizes de Finetti's book-making principle as a static individual preference condition. It thus avoids the confounding strategic and dynamic effects of modern formulations that consider games with sequential moves between a bookmaker and a bettor. This paper next shows that the book-making principle, commonly used to justify additive subjective probabilities, can be modified to agree with nonadditive probabilities. The principle is simply restricted to comonotonic subsets which, as usual, leads to an axiomatization of rank-dependent utility theory. Typical features of rank-dependence such as hedging, ambiguity aversion, and pessimism and optimism can be accommodated. The model leads to suggestions for a simplified empirical measurement of nonadditive probabilities.
This paper provides two axiomatic derivations of a case-based decision rule. Each axiomatization shows that, if preference orders over available acts in various contexts satisfy certain consistency requirements, then these orders can be numerically represented by maximization of a similarity-weighted utility function. In each axiomatization, both the similarity function and the utility function are simultaneously derived from preferences, and the axiomatic derivation also suggests a way to elicit these theoretical concepts from in-principle observable preferences. The two axiomatizations differ in the type of decisions that they assume as data.
Most empirical studies of rank-dependent utility and cumulative prospect theory have assumed power utility functions, both for gains and for losses. As it turns out, a remarkably simple preference foundation is possible for such models: Tail independence (a weakening of comonotonic independence that underlies all rank-dependent models) together with constant proportional risk aversion suffice, in the presence of common assumptions (weak ordering, continuity, and first stochastic dominance), to imply these models. Thus, sign dependence, the different treatment of gains and losses, and the separation of decision weights and utility are obtained free of charge.
This paper uses decision-theoretic principles to obtain new insights into the assessment and updating of probabilities. First, a new foundation of Bayesianism is given. It does not require infinite atomless uncertainties as did Savage's classical result, and can therefore be applied to any finite Bayesian network. Nor does it require linear utility as did de Finetti's classical result, and it therefore allows for the empirically and normatively desirable risk aversion. Further, by identifying and fixing utility in an elementary manner, our result can readily be applied to identify methods of probability updating. Thus, a decision-theoretic foundation is given to the computationally efficient method of inductive reasoning developed by Rudolf Carnap. Finally, recent empirical findings on probability assessments are discussed. These lead to suggestions for correcting biases in probability assessments, and for an alternative to the Dempster-Shafer belief functions that avoids the reduction to degeneracy after multiple updatings.
This paper proposes a quantitative modification of standard utility elicitation procedures, such as the probability and certainty equivalence methods, to correct for commonly observed violations of expected utility. Traditionally, decision analysis assumes expected utility not only for the prescriptive purpose of calculating optimal decisions but also for the descriptive purpose of eliciting utilities. However, descriptive violations of expected utility bias utility elicitations. That such biases are effective became clear when systematic discrepancies were found between different utility elicitation methods that, under expected utility, should have yielded identical utilities. As it is not clear how to correct for these biases without further knowledge of their size or nature, most utility elicitations still calculate utilities by means of the expected utility formula. This paper speculates on the biases and their sizes by using the quantitative assessments of probability transformation and loss aversion suggested by prospect theory. It presents quantitative corrections for the probability and certainty equivalence methods. If interactive sessions to correct for biases are not possible, then we propose to use the corrected utilities rather than the uncorrected ones in prescriptions of optimal decisions. In an experiment, the discrepancies between the probability and certainty equivalence methods are removed by our proposal.
In expected utility theory, risk attitudes are modeled entirely in terms of utility. In the rank-dependent theories, a new dimension is added: chance attitude, modeled in terms of nonadditive measures or nonlinear probability transformations that are independent of utility. Most empirical studies of chance attitude assume probabilities given and adopt parametric fitting for estimating the probability transformation. Only a few qualitative conditions have been proposed or tested as yet, usually quasi-concavity or quasi-convexity in the case of given probabilities. This paper presents a general method of studying qualitative properties of chance attitude such as optimism, pessimism, and the “inverse-S shape” pattern, both for risk and for uncertainty. These qualitative properties can be characterized by permitting appropriate, relatively simple, violations of the sure-thing principle. In particular, this paper solves a hitherto open problem: the preference axiomatization of convex (“pessimistic” or “uncertainty averse”) nonadditive measures under uncertainty. The axioms of this paper preserve the central feature of rank-dependent theories, i.e. the separation of chance attitude and utility.
Among the most popular models for decision under risk and uncertainty are the rank-dependent models, introduced by Quiggin and Schmeidler. Central concepts in these models are rank-dependence and comonotonicity. It has been suggested in the literature that these concepts are technical tools that have no intuitive or empirical content. This paper describes such contents. As a result, rank-dependence and comonotonicity become natural concepts upon which preference conditions, empirical tests, and improvements for utility measurement can be based. Further, a new derivation of the rank-dependent models is obtained. It is not based on observable preference axioms or on empirical data, but naturally follows from the intuitive perspective assumed. We think that the popularity of the rank-dependent theories is mainly due to the natural concepts adopted in these theories.
This paper shows how the signed Choquet integral, a generalization of the regular Choquet integral, can model violations of separability and monotonicity. Applications to intertemporal preference, asset pricing, and welfare evaluations are discussed.
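As background for readers (standard material, not drawn from the paper itself), the regular Choquet integral that the signed version generalizes can be stated in its familiar discrete, rank-ordered form:

```latex
% Choquet integral of a simple act over a partition A_1, ..., A_n with
% outcomes ranked x_1 >= x_2 >= ... >= x_n, with respect to a capacity \nu:
\int f \, d\nu
  = \sum_{i=1}^{n} x_i
    \Bigl[ \nu\bigl(A_1 \cup \cdots \cup A_i\bigr)
         - \nu\bigl(A_1 \cup \cdots \cup A_{i-1}\bigr) \Bigr],
% where \nu(\emptyset) = 0, so each outcome is weighted by the marginal
% capacity contribution of its event, given the events of all better outcomes.
```

The signed Choquet integral relaxes the requirements on \(\nu\) (in particular monotonicity), which is what allows it to model the violations of separability and monotonicity discussed in the abstract.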
Background and Purpose. To be able to perform decision analyses that include stroke as one of the possible health states, the utility of stroke states has to be determined. We reviewed the literature to obtain reliable estimates of the utility of stroke, and explored the impact of the study population from which the utility was assessed. Furthermore, these utilities were compared with those obtained by the EuroQol classification system. Methods. We searched the Medline database for papers reporting empirical assessment of utilities. Mean utilities of major stroke (Rankin scale 4-5) and minor stroke (Rankin 2-3) were calculated, stratified by study population. Additionally, the modified Rankin scale was mapped onto the EuroQol classification system. Results. Utilities were obtained from 15 papers. Patients at risk for stroke assigned utilities of 0.19 and 0.60 for major and minor stroke, respectively. Healthy participants assigned a higher utility to major stroke (0.35) but not to minor stroke (0.63). Stroke survivors assigned higher utilities to both major (0.51) and minor stroke (0.71). Much heterogeneity was found within the three types of study population. Differences in definitions of the health states seem to explain most of this variation. The EuroQol indicated a similar value for minor stroke but a value below zero for major stroke. Conclusions. For minor stroke, a utility of 0.60 seems to be appropriate, both for decision analyses and cost-effectiveness studies. The utility of major stroke is more problematic and requires further investigation. It may range between 0 and 0.20, and may possibly be even negative.
Objective. Many studies suggest that impaired health states are valued more positively when experienced than when still hypothetical. We investigate to what extent discrepancies occur between hypothetical and actual value judgements and examine four possible causes of such discrepancies. Patients and methods. Seventy breast cancer patients evaluated their actually experienced health state and a radiotherapy scenario before, during, and after post-operative radiotherapy. A chemotherapy scenario was evaluated as a control scenario. Utilities were elicited by means of a Visual Analog Scale (VAS), a Time Tradeoff (TTO), and a Standard Gamble (SG). Results. The utilities of the radiotherapy scenario (0.89), evaluated before radiotherapy, and the actually experienced health state (0.92), evaluated during radiotherapy, were significantly different for the TTO (p <= 0.05). For the VAS and the SG, significant differences (p <= 0.01) were found between the radiotherapy scenario and the actually experienced health state when both were evaluated during radiotherapy. The utilities of the radiotherapy scenario and the chemotherapy scenario remained stable over time. Conclusion. Our results suggest that utilities for hypothetical scenarios remain stable over time but that utilities obtained through hypothetical scenarios may not be valid predictors of the value judgements of actually experienced health states. Discrepancies may be due to differences between the situations in question rather than to a change in evaluation of the same health state over time.
This paper shows that a “principle of complete ignorance” plays a central role in decisions based on Dempster belief functions. Such belief functions occur when, in a first stage, a random message is received and then, in a second stage, a true state of nature obtains. The uncertainty about the random message in the first stage is assumed to be probabilized, in agreement with the Bayesian principles. For the uncertainty in the second stage no probabilities are given. The Bayesian and belief function approaches part ways in the processing of uncertainty in the second stage. The Bayesian approach requires that this uncertainty also be probabilized, which may require a resort to subjective information. Belief functions follow the principle of complete ignorance in the second stage, which permits strict adherence to objective inputs.
Machina & Schmeidler (Econometrica, 60, 1992) gave preference conditions for probabilistic sophistication, i.e. decision making where uncertainty can be expressed in terms of (subjective) probabilities without commitment to expected utility maximization. This note shows that simpler and more general results can be obtained by combining results from qualitative probability theory with a “cumulative dominance” axiom.
This paper provides a state-dependent extension of Savage's expected utility when outcomes are real-valued (money, distance, etc.) and utility is increasing (or, equivalently, the “loss function” is decreasing). The first novelty concerns the very definition of the functional, which is not an integral. The existing results in the literature always invoke restrictive assumptions to reduce the functional to an integral, mostly by adding empirical primitives outside the realm of decision theory to allow for the identification of probability. A characterization in terms of preference conditions identifies the empirical content of our model; it amounts to a characterization of Savage's axiom system when the likelihood ordering axiom P4 is dropped. Bayesian updating of new information is still possible even while no prior probabilities are specified, suggesting that the sure-thing principle is at the heart of Bayesian updating. Prior probabilities simplify Bayesian updating, but are not essential.
Classical foundations of expected utility were provided by Ramsey, de Finetti, von Neumann & Morgenstern, Anscombe & Aumann, and others. These foundations describe preference conditions to capture the empirical content of expected utility. The assumed preference conditions, however, vary among the models and a unifying idea is not readily transparent. Providing such a unifying idea is the purpose of this paper. The mentioned derivations have in common that a cardinal utility index for outcomes, independent of the states and probabilities, can be derived. Characterizing that feature provides the unifying idea of the mentioned models.
Cumulative prospect theory was introduced by Tversky and Kahneman so as to combine the empirical realism of their original prospect theory with the theoretical advantages of Quiggin's rank-dependent utility. Preference axiomatizations were provided in several papers. All those axiomatizations, however, only consider decision under uncertainty. No axiomatization has been provided as yet for decision under risk, i.e., when given probabilities are transformed. Providing the latter is the purpose of this note. The resulting axiomatization is considerably simpler than that for uncertainty.
Objective. Temporary health states cannot be measured in the traditional way by means of techniques such as the time tradeoff (TTO) and the standard gamble (SG), where health states are chronic and are followed by death. Chained methods have been developed to solve this problem. This study assesses the feasibility of a chained TTO and a chained SG, and the consistency and concordance between the two methods. Patients and methods. Seventy female early-stage breast cancer patients were interviewed. In both chained methods, the temporary health state to be evaluated was weighed indirectly with the aid of a temporary anchor health state. The patients were asked to evaluate their actual health states, a hypothetical radiotherapy scenario, and a hypothetical chemotherapy scenario. Results. Sixty-eight patients completed the interview. The use of the anchor health state yielded some problems. A significant difference between the means of the TTO and the SG was found for the anchor health state only. For the other health states, the results were remarkably close, because the design avoided some of the bias effects in traditional measurements. Conclusion. The feasibility and the consistency of the chained procedure were satisfactory for both methods. The problems regarding the anchor health state can be solved by adapting the methods and by the use of a carefully chosen anchor health state. The chained method avoids biases present in the conventional method, and thereby the TTO and the SG may be reconciled. Moreover, there are several psychological advantages to the method, which makes it useful for diseases with uncertain prognoses.
Nonadditive expected utility models were developed for explaining preferences in settings where probabilities cannot be assigned to events. In the absence of probabilities, difficulties arise in the interpretation of likelihoods of events. In this paper we introduce a notion of revealed likelihood that is defined entirely in terms of preferences and that does not require the existence of (subjective) probabilities. Our proposal is that decision weights rather than capacities are more suitable measures of revealed likelihood in rank-dependent expected utility models and prospect theory. Applications of our proposal to the updating of beliefs, to the description of attitudes towards ambiguity, and to game theory are presented.
This paper explores how some widely studied classes of nonexpected utility models could be used in dynamic choice situations. A new “sequential consistency” condition is introduced for single-stage and two-stage decision problems. Sequential consistency requires that if a decision maker has committed to a family of models (e.g., the rank dependent family, or the betweenness family) then he use the same family throughout. The conditions are presented under which dynamic consistency, consequentialism, and sequential consistency can be simultaneously preserved for a nonexpected utility maximizer. Each of the conditions is relevant in prescriptive decision making. We allow for cases where the exact sequence of decisions and events, and thus the dynamic structure of the decision problem, is relevant to the decision maker. In spite of this added flexibility of our analysis, our results show that nonexpected utility models can only be used in a considerably restrictive way in dynamic choice. A puzzling implication is that, for the currently most popular decision models (rank-dependent and betweenness), a departure from expected utility can only be made in either the first stage or the last stage of a decision tree. The results suggest either a development of new nonexpected utility models or a return to expected utility.
This paper studies the implications of the “zero-condition” for multiattribute utility theory. The zero-condition simplifies the measurement and derivation of the Quality Adjusted Life Year (QALY) measure commonly used in medical decision analysis. For general multiattribute utility theory, no simple condition has heretofore been found to characterize multiplicatively decomposable forms. When the zero-condition is satisfied, however, such a simple condition, “standard gamble invariance,” becomes available.
The papers collected in this book, applying nonexpected utility theories to insurance, are reviewed. At the end of the review, the new insights are described that Tversky & Kahneman's (1992) cumulative prospect theory offers into the subjects of the various chapters.
On p. 425, line -7, two lines below Eq. 3.1,
U_{k} should be dropped.
p. 7, FIGURE, lowest box
THM.IV.2:
THM.IV.2.7
P. 9, 5^{th} para (“In Chapter I we …”):
C(D ):
C(D)
p. 29 l. 10: “everything relevant for the future” is misleading. It should be everything relevant explicitly considered in the analysis. In other words, there should not be anything else considered in the analysis that is relevant. The sentence on p. 25/26 states explicitly what I want to exclude here, writing “There may be further, 'implicit', uncertainty in such consequences.” I surely did not, and do not, want to follow Savage's, in my opinion unfortunate, assumption that the states of nature should specify all uncertainties whatsoever.
p. 47 l. 2 above III.4.2:
… proof. Outside …:
… proof. The concepts in Stage 2 can always be defined
under, mainly, restricted solvability.
Stage 3 uses the additivity axioms to show that the
concepts of Stage 2 have the required properties. Outside …
P. 59 last four lines: Assume here that m > 0.
P. 60 last para of the “Comment” on top of the page:
Steps 5.1 to 5.5:
Steps 5.1 to 5.4
P. 66 l. 11: The first strict inequality (>) should be
reversed (<).
P. 87 Statement (ii) in Theorem IV.4.3: the last clause, the one following
the semicolon, can be dropped. It is trivially implied by the preceding clause
with i=1.
P. 89, l. -9:
Remark III.7.7:
Remark III.7.8
P. 93 l. 14: (“union of all sets”): Not true, there can be more simple acts if there are different but equivalent consequences that are not distinguished by the algebra on capital-delta D; this complication does not affect any aspect of the analysis, because the consequences just mentioned are indistinguishable for any purpose. Better call the union in question the set of step-acts instead of simple acts.
P. 114 First para of proof of Lemma VI.4.5, last line:
… we conclude that x_{-k}t_{k} ≽ y_{-k}t_{k}. :
… we conclude that x_{-k}s_{k} ≽ y_{-k}s_{k} implies x_{-k}t_{k} ≽ y_{-k}t_{k}.
P. 114 last para: The proof shows that the case of A having one element implies the condition for general A.
P. 121, Lemma VI.7.5. The last line is not clear to me now (Feb. 2002). I think that the lemma holds, and that the proof is simple and does not need connectedness of Gamma. Take a countable dense subset A of Gamma. (z_{j-1},z_{j}) is open, and, therefore, any open subset relative to this set is open itself. It, therefore, intersects A, and the intersection of A with (z_{j-1},z_{j}) is a countable dense subset of (z_{j-1},z_{j}). So, the latter set is separable. Adding z_{j-1} and z_{j} to the intersection of A with (z_{j-1},z_{j}) gives a countable dense subset of E_{j}^{z}.
P. 123, PROOF of LEMMA VI.7.8, 1^{st} line, two times:
V^{h} :
V^{z}
P. 124, STAGE 3, last line:
more than one element:
more than one equivalence class
P. 124, Eq. (VI.7.4):
V^{t}_{1}:
V^{t}_{j}
P. 124, Stage 3 line 3:
E^{h}_{j}:
E^{z}_{j}
P. 126, COROLLARY VI.7.12, Statement (ii):
(ii) The binary relation does not reveal
comonotonic-contradictory tradeoffs. :
(ii) The binary relation satisfies CI.
P. 127, l. 5/6: This is not true. Noncontradictory tradeoffs is used essentially on the E^{z}'s with only coordinates 1 and n essential. Then Com.CI does not suffice. Hence we must, in Proposition VI.7.7, restrict attention to those z's with E^{z} having at least three essential coordinates, i.e., z_{1} z_{n-1}. This does not complicate the proof further.
P. 130, NOTATION VI.9.3: The 3rd and 4th capital gammas (Γ) should be capital C's. The 1st, 2nd, and 5th are OK.
P. 130, LEMMA VI.9.4: Both capital gammas (Γ) should be capital C's.
P. 133 l. 4: Because U* is continuous and the extension to U has not induced “gaps,” U, representing ≽ on G, must be continuous.
P. 170, rf^{8}, l. 2: consequences: utilities
P. 172, rf^{17}:
Theorem 2.5.2:
Theorem I.2.5.2
P. 72 Example III.6.8, Case (b): In this case, Vind's (1986a) mean-groupoid approach does not work either, so the algebraic approach is also more general than his. This was also pointed out by Jaffray (1974a).
P. 75 Section III.8: A remarkable place where in fact an additive
representation is obtained, using the Reidemeister condition, is
p. 125 in
Edwards, Ward (1962) “Subjective Probabilities Inferred from Decisions,”
Psychological Review 69, 109-135.
Edwards mentions G.J. Minty (referred to in
Wakker, Peter P. (1985) “Extending Monotone and Non-Expansive Mappings
by Optimization,”
Cahiers du C.E.R.O. 27, 141-149)
and L.J. Savage.
P. 124, Stage 5: A better derivation of this stage is on p. 516, Stage 5, of
P. 125, after l. 3: If there were maximal or minimal consequences, continuity would not yet follow. Here, however, it does.
P. 125, LEMMA VI.7.10: A better derivation is in Section 3.5 of
P. 146, Definition VII.5.2: The subscripts i and j can be dropped for the definition, and have been added only to link with figure VII.5.1.
P. 157 and P. 158: In November 2000, Horst Zank pointed out to me that
the idea of Theorem VII.7.5 of my book is essentially
contained in Corollary 1.1, and the idea of
Theorem VII.7.6 is essentially contained
in Theorem 3, of
Blackorby, Charles &
David Donaldson (1982) “Ratio-Scale and Translation-Scale
Full Interpersonal Comparability without Domain Restrictions:
Admissible Social Evaluation Functions,”
International Economic Review 23,
249-268.
P. 162, THEOREM A2.3 (Savage):
- With P convex-ranged, the preference
conditions are necessary and sufficient, not just sufficient, for the
representation.
- In P3, it is correct that event A should
be essential,
not just essential on the set of simple acts.
- P5: Savage actually requires the restriction of ≽ to the set of consequences to be nontrivial, not just ≽ on the set of acts. Savage's condition is more restrictive.
I verified later that with little work (using P6 and maybe P7, I forgot),
the condition here and Savage's are equivalent.
- P7: This formulation is from Fishburn,
but I forgot where he gave it, probably in his 1970 or 1982 book.
P. 171, rf^{12}, l. 1: In addition to Section 6.5.5 of KLST, see also the second open problem in Section 6.13 of KLST.
p. 4 l. -5:
accordancing:
according
p. 40 l. 3:
whit:
with
p. 51 Step 2.1: This and all following step-headings should have been printed bold, as the preceding step-headings.
p. 51 l. -6:
leave out from notation:
suppress
p. 51, l. -5:
… those from w^{0}; …
… those of w^{0}; …
p. 56 FIGURE III.5.3, l. 4 of legend:
that The North-East:
that the North-East
P. 67 l. 4/5:
… stronger … condition:
… stronger than the hexagon condition, i.e. imply it.
P. 69, Lemma III.6.3: All the text should have been italics.
P. 71 Observation III.6.6': “Let us repeat … functions.”: This statement should have been put before, not in, the Observation.
P. 81, Figure IV.2.1. “Suppose” and “and,” on top of the figure, should not be bold.
p. 106: period at end of Example V.6.2.
p. 161, THEOREM A2.1, Statement (ii), the last two words, “on Re^{n},” can be dropped.
P. 168, rf^{3} l. 4:
linear/positive affine:
positive linear/affine
Literally speaking, every sentence in my paper is
factually true. However, the spirit is entirely misleading. I thought that
the spirit of the message in the title would be so absurd that
people would not take it seriously. Unfortunately, I have not succeeded in
conveying the real message and I know now that misunderstandings have arisen.
One real example concerns people presenting Arrow's impossibility theorem for voting as a proof that democracy cannot exist. E.g., a famous economist wrote: “The search of the great minds of recorded history for the perfect democracy, it turns out, is the search for a chimera, for a logical self-contradiction.” Such phrases grossly overstate the meaning of a combinatorial result, trying to impress nonmathematicians.
Another example is as follows. It seems that people have considered it a paradox that on Re_{-} one can't have risk aversion and weak continuity at the same time. This simply follows because risk aversion implies unbounded utility on Re_{-} whereas weak continuity requires bounded utility.
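A minimal sketch of why risk aversion forces unbounded utility on Re_{-}, offered as my own reconstruction under expected utility with a strictly increasing utility u (risk aversion = concavity of u):

```latex
% Let s = u(0) - u(-1) > 0 (strict monotonicity). Concavity of u on
% (-\infty, 0] bounds u from above by the chord through -1 and 0:
u(x) \;\le\; u(-1) + s\,(x+1) \;=\; u(0) + s\,x \qquad \text{for all } x \le -1.
% Hence u(x) \to -\infty as x \to -\infty: u is unbounded below on Re_-.
% Weak continuity of the preference functional, however, requires bounded
% utility, so the two conditions cannot hold together.
```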
Due to a technical oversight of Savage (1954) (by imposing his axioms on all acts and therefore also on the unbounded), he came out with a utility function that has to be bounded, as was later discovered (Fishburn 1970 “Utility Theory for Decision Making”). Several people have used this as evidence to support that utility should be bounded and that the human capacity for experiencing happiness must be limited, etc. Of course, again I disagree with such conclusions. Nowadays, people are more careful and in the appropriate cases restrict preference axioms to bounded acts, exactly so as to avoid Savage's technical oversight (e.g., Schmeidler 1989 Econometrica). Let me repeat, Savage (1954) does not provide any argument whatsoever to support bounded utility.
There are many paradoxes based on nothing but technicalities of finite additivity versus countable additivity. Some papers have misused these. My paper describes one more paradox of this kind. The paradox results because I define strict stochastic dominance in the traditional way. Under finite additivity it is more appropriate to use a different, somewhat more complex, formulation of stochastic dominance. The different formulation is equivalent to the traditional under countable additivity. Under finite additivity, however, it is different and more appropriate. This different and preferable formulation is described at the end of my paper (undoubtedly known before to specialists).
Let me finally cite text from my letter of March 6, 1992, to the editor
Ian Jewitt of the Review of Economic Studies, in which I explained the
motives for writing this paper. (I am happy that Professor Jewitt was
willing to accept this unconventional paper.)
- MOTIVATIONAL COMMENTS. Here I must embark on a long undertaking, i.e., explain to you the ideas behind the paper, and the motivations that brought me to write the paper as I did. It is a tricky paper, different from most research papers. The referee ... points out, correctly, that the results would in a mathematical sense not be too surprising to anyone familiar with finite additivity. ... It is well-known that, under finite additivity, there exist strictly positive functions that have integral 0. Well, call such a function an act, call the 0-function an act, and there you got your violation of strict statewise monotonicity. No big deal! ... Most people ... had some courses on probability theory, where probabilities are assumed sigma-additive, but they do not realize that things they learned there do not always hold for finitely additive probability measures that may result in decision models such as Savage's. This is a continuing source of mistakes and misunderstandings, ... at the end of the paper I point things out, constructively, not trying to continue the confusion, but I try to show the way out.
The monotonicity axiom 3.3 cannot be dropped in Theorem 5.3. There it is needed to
avoid negative probabilities and probabilities exceeding 1 for horse events.
In the proof on p. 144, to use Proposition 8.2 and Theorem 6.3 of
Wakker & Tversky (1993), true mixedness must be verified. This condition
follows from stochastic dominance (which is taken in a strong sense with strict
preference in the present paper) as soon as there exist a gain and a loss.
If not, the gain-part or the loss-part becomes degenerate with a trivial
degenerate representation there. In such a degenerate case the representation
result remains valid. The weighting function, however,
is not uniquely determined, contrary to what the theorem claims, but can be chosen
arbitrarily. In summary, to have the theorem fully correct, the mentioned
nontriviality assumption must be added, but it is only needed to avoid that
there is no uniqueness of the weighting functions in degenerate cases.
Matching.
Although the paper does not explicitly use the term “matching,” all
measurements in this paper were based on matching questions. That is,
subjects directly stated the values to generate indifferences, and such
values were not derived indirectly from choices. Thus, as
written at the end of p. 1506, five CE questions give five CE values. That matching
was used is essential for the discussion of the reference point in the CE and PE
measurements, such as in Appendix B. For CE questions subjects are not given
choices between sure outcomes and gambles, in which case they could easily focus
on the sure outcome and take that as reference outcome. Instead, they are given
the gamble, and have to provide the sure outcome themselves to generate indifference.
So, there is no sure outcome before them that they can focus on and easily use as
reference point. This is contrary to the PE questions where the sure outcome is
part of the stimuli presented and they themselves have to provide the probability
to generate indifference. Then the sure outcome is available to them, and they can
easily use it as a reference point.
QUESTION 1. The discrepancies between PE and CE under the classical
elicitation assumption, indicated by black circles in Figure 2, are
largest for the small utilities. Figure 1, however, suggests that
the biases are
strongest for the large utilities, not the small. How can this be?
ANSWER. The discrepancies are generated by the difference
between the biases in PE and CE. Figure 1 demonstrates that this
difference is largest for small utilities. The difference is
generated by loss aversion, which is effective under PE but not under
CE and which is strongest for small utilities. It follows that the
correction formulas of prospect theory induce more reconciliation for
the small utilities than for the high ones.
QUESTION 2. Consider the classical elicitation assumption. Figure 1
suggests that there are no systematic biases for the TO method, but
that there are systematic upward biases for the CE utilities. The
latter should, therefore, be expected to be higher than the TO
utilities. The data, however, find that the CE utilities are usually
smaller than the TO utilities, not higher. See the black asterisks
in Figure 2, which are all negative and are all below the abscissa. How
can this be?
ANSWER. Figure 1 depicts measurements on a common domain. A
complication in the experiment described in Figure 2 is that the TO
and CE measurements were conducted on different domains. The CE
measurements concerned the domain [0,40], the TO measurements the
domain [0, x_{6}]. The latter interval was nearly always
smaller than the former, that is, x_{6} < 40. We then
compared the utilities on the common domain [0, x_{6}]. To
this effect, the CE utilities U_{CE} were renormalized on
this domain to be 1 at x_{6}, i.e. they were all replaced by
U_{CE}(.)/U_{CE}(x_{6}). The value
x_{6} usually lies in the upper part of the domain [0,40].
Its utility is largely overestimated under the classical elicitation
assumption, according to Figure 1. Therefore, the denominator in
U_{CE}/U_{CE}(x_{6}) is greatly
overestimated, and the fraction is underestimated. For each
x_{j}, especially for j=5, this effect is mitigated by the
overestimation of the numerator
U_{CE}(x_{j}).
U_{CE}(x_{5})/U_{CE}(x_{6}) will not
be far off, in agreement with Figure 2.
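The renormalization argument can be sketched numerically. The sketch below is hypothetical: the utility values and the bias shape are my own assumptions, chosen only so that the upward bias grows with the utility level, in the spirit of Figure 1; they are not the experimental data.

```python
# Hypothetical "true" utilities on [0, 40], with x6 = 36 (assumed values).
true_u = {10: 0.25, 20: 0.50, 30: 0.75, 36: 0.90}
x6 = 36

def biased(u):
    # Assumed bias: upward overestimation of 0.3*u^2, strongest for large
    # utilities (an assumption standing in for the pattern of Figure 1).
    return u * (1 + 0.3 * u)

raw = {x: biased(u) for x, u in true_u.items()}   # raw CE "measurements"
renorm = {x: raw[x] / raw[x6] for x in raw}       # U_CE(.) / U_CE(x6)

# The denominator U_CE(x6) is overestimated more than each numerator, so
# every renormalized utility falls below its true ratio:
for x in (10, 20, 30):
    assert renorm[x] < true_u[x] / true_u[x6]
```

Replacing the superlinear bias by a uniform additive one reverses the effect, which is why the argument needs the overestimation to be strongest near the top of the domain.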
In general, it is safer to consider whether cardinal utilities are
more or less convex/concave, not if they are higher or lower. The
latter only makes sense if a common normalization has been chosen.
Another way of explaining Observation 2 is as follows. We did not
use the CE correction curve of Figure 1 on the whole domain [0,1],
but only on the subdomain [0,x_{6}/40]. This left part of
the CE correction curve is more concave than convex and, therefore,
our correction formulas make the CE curve more concave, not less
concave as would have happened had the CE correction curve on the
whole interval [0,1] been used.
This answer explains why our corrections, based on PT, make the
(renormalized) CE utilities more concave rather than more convex in our
experiment and move them towards the TO utilities.
QUESTION 3 (follow-up on Question 2). How can the discrepancies
between PE and TO in Figure 2 be derived from Figure 1?
ANSWER. The comparison is similar to the reasoning used in the
comparison of U_{CE} and U_{TO}. In this case,
however, the overestimation of the numerator in
U_{PE}(x_{j})/U_{PE}(x_{6})
has a stronger effect than the overestimation of the denominator
except for j=5.
The above reasoning essentially used strict stochastic dominance for the preference functional over risky gambles. This explains why the same requirement is used in the proof of Remark 8 and, for instance, the probability transformation at the end there is required to be strictly increasing. Otherwise, cases could arise with the probability weighting function flat near zero, so that the
rank-dependent utility of x is the same as that of G, but x is still strictly preferred and v(x) must strictly exceed u(x).
This paper received helpful comments from Han Bleichrodt and Peter Klibanoff, and participants of the 11^{th} Conference on the Foundations of Utility and Risk Theory (FUR 2004), Cachan, France, where an earlier version was presented as “An Uncertainty-Oriented Approach to Subjective Expected Utility and its Extensions.”
I apologize to all for this omission.
TYPOS
On March 5, 2014, I discovered that Read (2001 JRU, Eq. 16) had the basic formula of CRDI too.
Last updated: 26 March, 2018