Papers, Books, & Their Abstracts by Peter P. Wakker; Appeared/in Press



Published Papers



2024

Li, Chen & Peter P. Wakker (2024) “A Simple and General Axiomatization of Average Utility Maximization for Infinite Streams,” Journal of Economic Theory 216, 105795.

ABSTRACT. This paper provides, first, the most general preference axiomatization of average utility (AU) maximization over infinite sequences presently available, reaching almost complete generality (the only restriction is that all periodic sequences should be contained in the domain). Here, infinite sequences may designate intertemporal outcome streams, where AU models patience; welfare allocations, where AU models fairness; or decision under ambiguity, where AU models complete ignorance. Second, as a methodological contribution, this paper shows that infinite-dimensional representations can be simpler, rather than more complex, than finite-dimensional ones: infinite dimensions provide a richness that is convenient rather than cumbersome. In particular, (empirically problematic) continuity assumptions are not needed; continuity is optional.
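      A minimal illustration (assuming the standard definition of average utility, stated here only for orientation; the paper's exact domain conditions are not reproduced): for a stream x = (x_1, x_2, …) with utility function u,
      \[ AU(x) \;=\; \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} u(x_i), \]
      so that, for a periodic stream, AU reduces to the average utility over one period.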





2023

Wakker, Peter P. (2023) “The Correct Formula of 1979 Prospect Theory for Multiple Outcomes,” Theory and Decision 94, 183-187.

ABSTRACT. Whereas original prospect theory was introduced over 40 years ago, its formula for multi-outcome prospects has never yet been published, resulting in many misunderstandings. This note provides that formula.
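      For orientation only (the familiar two-outcome formula of Kahneman and Tversky (1979), not the multi-outcome formula derived in this note): a regular two-outcome prospect (x, p; y, q) is evaluated as
      \[ V(x,p;y,q) \;=\; \pi(p)\,v(x) + \pi(q)\,v(y), \]
      with value function v and probability weighting function π; the note spells out how such an evaluation extends correctly to prospects with more than two outcomes.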



Wakker, Peter P. (2023) “A Criticism of Bernheim & Sprenger’s (2020) Tests of Rank Dependence,” Journal of Behavioral and Experimental Economics 107, 101950. Online Appendix

ABSTRACT. Bernheim and Sprenger (2020, Econometrica; SB) claimed to experimentally falsify rank dependence in prospect theory. This paper criticizes SB’s results and novelty claims. Their experiments only captured well-known heuristics and not genuine preferences. Many falsifications of rank dependence have been made before, and SB’s equalizing reductions have also been used before. SB purported to identify probability weighting and utility where they are unidentifiable, invalidating all of SB’s related claims. SB used an incorrect formula of original prospect theory. Their suggested alternative of rank-independent probability weighting with dependence on the number of outcomes (their “complexity aversion,” a misnomer) has long been discarded.





2022

Wakker, Peter P. (2022) “Transforming Ordinal Riskless Utility into Cardinal Risky Utility: A Comment on Chung, Glimcher, & Tymula (2019),” American Economic Journal: Microeconomics 14, 561-565.

ABSTRACT. Chung, Glimcher, & Tymula (2019, this journal) observed both consumers’ choices over commodity bundles and choices under risk. They assumed a cardinal riskless utility function V representing consumer choices, and a cardinal risky utility function U. The two were inconsistent. This note shows that the two functions can be reconciled if we assume that V is ordinal. Then one utility function U can accommodate both risky and riskless choices.





2021

[21.1] Johnson, Cathleen, Aurélien Baillon, Han Bleichrodt, Zhihua Li, Dennie van Dolder, & Peter P. Wakker (2021) “Prince: An Improved Method for Measuring Incentivized Preferences,” Journal of Risk and Uncertainty 62, 1-28. Online Appendix & data set + stimuli

ABSTRACT. This paper introduces the Prince incentive system for measuring preferences. Prince combines the tractability of direct matching, allowing for the precise and direct elicitation of indifference values, with the clarity and validity of choice lists. It makes incentive compatibility completely transparent to subjects, avoiding the opaqueness of the Becker-DeGroot-Marschak mechanism. It can be used for adaptive experiments while avoiding any possibility of strategic behavior by subjects. To illustrate Prince’s wide applicability, we investigate preference reversals, the discrepancy between willingness to pay and willingness to accept, and the major components of decision making under uncertainty: utilities, subjective beliefs, and ambiguity attitudes. Prince allows for measuring utility under risk and ambiguity in a tractable and incentive-compatible manner even if expected utility is violated. Our empirical findings support modern behavioral views, e.g., confirming the endowment effect and showing that utility is closer to linear than classically thought. In a comparative study, Prince gives better results than a classical implementation of the random incentive system.

[21.2] Baillon, Aurélien, Han Bleichrodt, Chen Li, & Peter P. Wakker (2021) “Belief Hedges: Measuring Ambiguity for All Events and All Models,” Journal of Economic Theory 198, 105353.

ABSTRACT. We introduce belief hedges, i.e., sets of events whose uncertain subjective beliefs neutralize each other. Belief hedges allow us to measure ambiguity attitudes without knowing those subjective beliefs. They lead to improved ambiguity indexes that are valid under all popular ambiguity theories. Our indexes can be applied to real-world problems and do not require expected utility for risk or commitments to two-stage optimization, thereby increasing their descriptive power. Belief hedges make ambiguity theories widely applicable.



[21.3] Wakker, Peter P. & Jingni Yang (2021) “Concave/Convex Weighting and Utility Functions for Risk: A New Light on Classical Theorems,” Insurance: Mathematics and Economics 100, 429-435.

ABSTRACT. This paper analyzes concave and convex utility and probability distortion functions for decision under risk (law-invariant functionals). We characterize concave utility for virtually all existing models, and concave/convex probability distortion functions for rank-dependent utility and prospect theory in complete generality, through an appealing and well-known condition (convexity of preference, i.e., quasiconcavity of the functional). Unlike preceding results, we do not need to presuppose any continuity, let alone differentiability.
      An example of a new light shed on classical results: whereas, in general, convexity/concavity with respect to probability mixing is mathematically distinct from convexity/concavity with respect to outcome mixing, in Yaari's dual theory (i.e., Wang's premium principle) these conditions are not only dual, as was well-known, but also logically equivalent, which had not been known before.




2020

[20.1] (dated 2019 but appeared only in 2020)
Bleichrodt, Han, Jason N. Doctor, Yu Gao, Chen Li, Daniella Meeker, & Peter P. Wakker (2019) “Resolving Rabin’s Paradox,” Journal of Risk and Uncertainty 59, 239-260.  Online Appendix;   Data set.

ABSTRACT. We present a theoretical model of Rabin’s long-standing calibration paradox that resolves confusions in the literature and that makes it possible to identify the causes of the paradox. Using suitable experimental stimuli, we show that the paradox truly violates expected utility and, further, that it is caused solely by reference dependence. Rabin already showed that utility curvature alone cannot fully explain his paradox. Going beyond that, we find no contribution of utility curvature in explaining the paradox, and none of probability weighting either. Rabin’s paradox thus underscores the importance of reference dependence.



[20.2] Abdellaoui, Mohammed & Peter P. Wakker (2020) “Savage for Dummies and Experts,” Journal of Economic Theory 186, article nr. 104991.

ABSTRACT. Savage's foundation of expected utility is considered to be the most convincing justification of Bayesian expected utility and the crowning glory of decision theory. It combines exceptionally appealing axioms with deep mathematics. Despite the wide influence and deep respect that Savage received in economics and statistics, virtually no one touched his mathematical tools. We provide an updated analysis that is more general and way more accessible. Our derivations are self-contained. This helps to understand the depth and beauty of Savage's work and the foundations of Bayesianism better, to more easily teach it, and to more easily develop non-Bayesian generalizations incorporating ambiguity.



[20.3] Doctor, Jason N., Peter P. Wakker, & Tong V. Wang (2020) “Economists’ Views on the Ergodicity Problem,” Nature Physics 16 (IF: 19.25), 1168.  Online Appendix; typo in online appendix. Accessible lecture (12 minutes)

ABSTRACT. The physicist Ole Peters claimed that his ergodic theory is superior to, and can replace all of, economics. This paper shows that Peters’ work is entirely based on elementary and well-known misunderstandings, thus defending economics against naïve and haughty views of some physicists.



[20.4] Li, Chen, Uyanga Turmunkh, & Peter P. Wakker (2020) “Social and Strategic Ambiguity versus Betrayal Aversion,” Games and Economic Behavior 123, 272-287.  Online Appendix

ABSTRACT. This paper examines the difference between strategic ambiguity, as in game theory, and “nature” ambiguity, as in individual decisions. We identify a new, non-strategic, component underlying all strategic ambiguities, called social ambiguity. We recommend correcting for it so as to better identify strategic causes. Thus, we shed new light on Bohnet and Zeckhauser’s betrayal aversion in the trust game. Contrary to preceding claims in the literature, ambiguity attitudes do play a role there. Social ambiguity, rather than betrayal aversion, can explain the empirical findings. These results show the importance of controlling for ambiguity attitudes before speculating on strategic factors.



[20.5] Wakker, Peter P. (2020) “A Personal Tribute to David Schmeidler’s Influence,” Revue Economique 71, 387-390.

ABSTRACT. This paper describes how my work was inspired by David Schmeidler.



[20.6] Wakker, Peter P. (2020) “A One-Line Proof for Complementary Symmetry,” Journal of Mathematical Psychology 98, 102406.

ABSTRACT. Complementary symmetry was derived before under particular theories, and used to test those. Progressively general results were published. This paper proves the condition in complete generality, providing a one-line proof, and shedding new light on its empirical implications.




2019

[19.1] Li, Chen, Uyanga Turmunkh, & Peter P. Wakker (2019) “Trust as a Decision under Ambiguity,” Experimental Economics 22, 51-75.  Online Appendix; Comments & typos.   Data set.

ABSTRACT. Decisions to trust involve ambiguity (unknown probabilities) in strategic situations. Despite many theoretical studies on the role of ambiguity in game theory, empirical studies have lagged behind due to a lack of measurement methods for ambiguities in games, where separating ambiguity attitudes from beliefs is crucial for proper measurements. Baillon et al. (2018) introduced a method that allows for such a separation for individual choice. We extend this method to strategic situations and apply it to the trust game, providing new insights. Both people’s ambiguity attitudes and beliefs matter for their trust decisions. More ambiguity averse people decide to trust less, and people with more optimistic beliefs about others’ trustworthiness decide to trust more. However, people who are more a-insensitive (insufficient discrimination between different likelihood levels) are less likely to act upon their beliefs. Our measure of belief, free from contamination by ambiguity attitudes, shows that traditional introspective trust survey measures capture trust in the commonly accepted sense of belief in trustworthiness of others. Further, trustworthy people also decide to trust more due to their beliefs that others are similar to themselves. This paper has shown that applications of ambiguity theories to game theory can bring useful new empirical insights.



[19.2] Wakker, Peter P. (2019) Book review of: Nicolas Jacquemet & Olivier l’Haridon (2019) “Experimental Economics: Method and Applications,” Cambridge University Press, Cambridge. Oeconomia - History | Methodology | Philosophy 9, 193-197.



[19.3] Wakker, Peter P. & Jingni Yang (2019) “A Powerful Tool for Analyzing Concave/Convex Utility and Weighting Functions,” Journal of Economic Theory 181, 143-159.

ABSTRACT. This paper shows that convexity of preference has stronger implications for weighted utility models than had been known hitherto, both for utility and for weighting functions. Our main theorem derives concave utility from convexity of preference on the two-dimensional comonotonic cone, without presupposing continuity. Using this seemingly marginal result, we then obtain the most appealing and general axiomatizations of concave/convex utilities and decision weights for many decision models. Included are: risk aversion in expected utility, optimism/pessimism in rank-dependent utility and prospect theory, uncertainty aversion in Choquet expected utility, ambiguity aversion in the smooth model, and inequality aversion in utilitarianism. We provide some surprising relations between well-known conditions, e.g.: in Yaari’s dual theory, convexity/concavity in (“horizontal”) outcome mixing are not only dual to, but also logically equivalent to, concavity/convexity in (“vertical”) probability mixing.




2018

[18.1] Baillon, Aurélien, Zhenxing Huang, Asli Selim, & Peter P. Wakker (2018) “Measuring Ambiguity Attitudes for All (Natural) Events,” Econometrica 86, 1839-1858.
  Online Appendix (“Supplementary Material”);   Data set.

ABSTRACT. Measurements of ambiguity attitudes have so far focused on artificial events, where (subjective) beliefs can be derived from symmetry of events and can be then controlled for. For natural events as relevant in applications, such a symmetry and corresponding control are usually absent, precluding traditional measurement methods. This paper introduces two indexes of ambiguity attitudes, one for aversion and the other for insensitivity/perception, for which we can control for likelihood beliefs even if these are unknown. Hence, we can now measure ambiguity attitudes for natural events. Our indexes are valid under many ambiguity theories, do not require expected utility for risk, and are easy to elicit in practice. We use our indexes to investigate time pressure under ambiguity. People do not become more ambiguity averse under time pressure but become more insensitive (perceive more ambiguity). These findings are plausible and, hence, support the validity of our indexes.



[18.2] Li, Zhihua, Julia Müller, Peter P. Wakker, & Tong V. Wang (2018) “The Rich Domain of Ambiguity Explored,” Management Science 64, 3227-3240.  Online Appendix;   Data set & some stimuli.

ABSTRACT. Ellsberg and others suggested that decision under ambiguity is a rich empirical domain with many phenomena to be investigated beyond the Ellsberg urns. We provide a systematic empirical investigation of this richness by varying the uncertain events, the outcomes, and combinations of these. Although ambiguity aversion prevails, we also find systematic ambiguity seeking, confirming insensitivity. We find that ambiguity attitudes depend on the source of uncertainty (the kind of uncertain event) but not on the outcomes. Ambiguity attitudes are closer to rationality (ambiguity neutrality) for natural uncertainties than for the Ellsberg urns. This also appears from the reductions of monotonicity violations and of insensitivity. Ambiguity attitudes have predictive power across different sources of uncertainty and outcomes, with individual-specific components. Our rich domain serves well to test families of weighting functions for fitting ambiguity attitudes. We find that two-parameter families, capturing not only aversion but also insensitivity, are desirable for ambiguity even more than for risk. The Goldstein-Einhorn family performs best for ambiguity.



[18.3] Trautmann, Stefan & Peter P. Wakker (2018) “Making the Anscombe-Aumann Approach to Ambiguity Suitable for Descriptive Applications,” Journal of Risk and Uncertainty 56, 83-116.   Online Appendix;   Data set.

ABSTRACT. The Anscombe-Aumann (AA) model, originally introduced to give a normative basis to expected utility, is nowadays mostly used for another purpose: to analyze deviations from expected utility due to ambiguity (unknown probabilities). The AA model makes two ancillary assumptions that do not refer to ambiguity: expected utility for risk and backward induction. These assumptions, even if normatively appropriate, fail descriptively. We relax them while maintaining AA's convenient mixture operation, and thus make it possible to test and apply AA based ambiguity theories descriptively. We find three common assumptions violated: reference independence, universal ambiguity aversion, and weak certainty independence. We introduce and axiomatize a reference dependent generalization of Schmeidler's CEU theory that accommodates the violations found. That is, we extend the AA model to prospect theory.




2017

[17.1] Bleichrodt, Han, Martin Filko, Amit Kothiyal, & Peter P. Wakker (2017) “Making Case-Based Decision Theory Directly Observable,” American Economic Journal: Microeconomics 9, 123-151. Data set.

ABSTRACT. Case-based decision theory (CBDT) provided a new way of revealing preferences, with decisions under uncertainty determined by similarities with cases in memory. This paper introduces a method to measure CBDT that requires no commitment to parametric families and that relates directly to decisions. Thus, CBDT becomes directly observable and can be used in prescriptive applications. Two experiments on real estate investments demonstrate the feasibility of our method. Our implementation of real incentives not only avoids the income effect, but also avoids interactions between different memories. We confirm CBDT’s predictions except for one violation of separability of cases in memory.



[17.2] Li, Zhihua, Kirsten Rohde, & Peter P. Wakker (2017) “Improving One's Choices by Putting Oneself in Others' Shoes – An Experimental Analysis,” Journal of Risk and Uncertainty 54, 1-13.   Data set.

ABSTRACT. This paper investigates the effects of predicting choices made by others on own choices. We follow up on promising first results in the literature that suggested improvements of rationality and, hence, new tools for nudging. We find improvements of strong rationality (risk neutrality) for losses, but no such improvements for gains. There are no improvements of weak rationality (avoiding preference reversals). Overall, risk aversion for choices increases. Conversely, for the effects of own choices on predictions of others’ choices, the risk aversion predicted in others’ choices is reduced if preceded by own choices, both for gains and for losses. We consider two psychological theories of risk: risk-as-feelings and risk-as-value, combined with anchoring or adjustment. Our results support risk-as-value combined with anchoring. Relative to preceding studies, we added real incentives, pure framing effects, and simplicity of stimuli that were maximally targeted towards the research questions of this paper.




2016

[16.1] Attema, Arthur E., Han Bleichrodt, Yu Gao, Zhenxing Huang, & Peter P. Wakker (2016) “Measuring Discounting without Measuring Utility,” American Economic Review 106, 1476-1494.  Online Appendix;  Data set.

ABSTRACT. We introduce a new method to measure the temporal discounting of money. Unlike preceding methods, our method requires neither knowledge nor measurement of utility. It is easier to implement, clearer to subjects, and requires fewer measurements than existing methods.



[16.2] Baillon, Aurélien, Han Bleichrodt, Ning Liu, & Peter P. Wakker (2016) “Group Decision Rules and Group Rationality under Risk,” Journal of Risk and Uncertainty 52, 99-116.  Online Appendix;   Data set.

ABSTRACT. This paper investigates the rationality of group decisions versus individual decisions under risk. We study two group decision rules, majority and unanimity, in stochastic dominance and Allais paradox tasks. We distinguish communication effects (the effects of group discussions and interactions) from aggregation effects (mere impact of the voting procedure), which makes it possible to better understand the complex dynamics of group decision making. In an experiment, both effects occurred for intellective tasks whereas there were only aggregation effects in judgmental tasks. Communication effects always led to more rational choices; aggregation effects did so sometimes but not always. Groups violated stochastic dominance less often than individuals did, which was due to both aggregation and communication effects. In the Allais paradox tasks, there were almost no communication effects, and aggregation effects made groups deviate more from expected utility than individuals.



[16.3] Bleichrodt, Han, Chen Li, Ivan Moscati, & Peter P. Wakker (2016) “Nash Was a First to Axiomatize Expected Utility,” Theory and Decision 81, 309-312.

ABSTRACT. Nash is famous for many inventions, but it is less known that he, simultaneously with Marschak, also was the first to axiomatize expected utility for risk. In particular, these authors were the first to state the independence condition, a condition that should have been but was not stated by von Neumann and Morgenstern. Marschak’s paper resulted from interactions with several people at the Cowles Commission. We document unique letters and personal communications with Nash, Samuelson, Arrow, Dalkey, and others, making plausible that Nash made his discovery independently from the others.



[16.4] Chai, Junyi, Chen Li, Peter P. Wakker, Tong V. Wang, & Jingni Yang (2016) “Reconciling Savage’s and Luce’s Modeling of Uncertainty: The Best of Both Worlds,” Journal of Mathematical Psychology 75, 10-18.

ABSTRACT. This paper recommends using mosaics, rather than (σ-)algebras, as collections of events in decision under uncertainty. We show how mosaics solve the main problem of Savage’s (1954) uncertainty model, a problem pointed out by Duncan Luce. Using mosaics, we can connect Luce’s modeling of uncertainty with Savage’s. Thus, the results and techniques developed by Luce and his co-authors become available to currently popular theories of decision making under uncertainty and ambiguity.



[16.5] Dimmock, Stephen G., Roy Kouwenberg, & Peter P. Wakker (2016) “Ambiguity Attitudes in a Large Representative Sample,” Management Science 62, 1363-1380.
Web appendix;   Data set & stimuli.

ABSTRACT. Using a theorem showing that matching probabilities of ambiguous events can capture ambiguity attitudes, we introduce a tractable method for measuring ambiguity attitudes and apply it in a large representative sample. In addition to ambiguity aversion, we confirm an ambiguity component recently found in laboratory studies: a-insensitivity, the tendency to treat subjective likelihoods as fifty-fifty, thus overweighting extreme events. Our ambiguity measurements are associated with real economic decisions; specifically, a-insensitivity is negatively related to stock market participation. Ambiguity aversion is also negatively related to stock market participation, but only for subjects who perceive stock returns as highly ambiguous.




2015

[15.1] Bleichrodt, Han, Umut Keskin, Kirsten I.M. Rohde, Vitalie Spinu, & Peter P. Wakker (2015) “Discounted Utility and Present Value—A Close Relation,” Operations Research 63, 1420-1430.

ABSTRACT. We introduce a new type of preference conditions for intertemporal choice, requiring independence of present values from various other variables. The new conditions are more concise and more transparent than traditional ones. They are directly related to applications because present values are widely used tools in intertemporal choice. Our conditions give more general behavioral axiomatizations, which facilitates normative debates and empirical tests of time inconsistencies and related phenomena. Like other preference conditions, our conditions can be tested qualitatively. Unlike other preference conditions, however, our conditions can also be directly tested quantitatively, e.g. to verify the required independence of present values from predictors in regressions. We show how similar types of preference conditions, imposing independence conditions between directly observable quantities, can be developed for decision contexts other than intertemporal choice, and can simplify behavioral axiomatizations there. Our preference conditions are especially efficient if several types of aggregation are relevant, because we can handle them in one blow. We thus give an efficient axiomatization of a market pricing system that is (i) arbitrage-free for hedging uncertainties and (ii) time consistent.
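      As an illustration of the central tool (the textbook constant-rate present value, stated here only for orientation; the paper's conditions do not presuppose this specific functional form): the present value of a stream (x_0, x_1, …, x_T) at discount rate r is
      \[ PV \;=\; \sum_{t=0}^{T} \frac{x_t}{(1+r)^{t}}, \]
      and the preference conditions above require such present values to be independent of various other variables.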



[15.2] Bleichrodt, Han & Peter P. Wakker (2015) “Regret Theory: A Bold Alternative to the Alternatives,” Economic Journal 125, 493-532.

ABSTRACT. In their famous 1982 paper in this journal, Loomes and Sugden introduced regret theory. Now, more than 30 years later, the case for the historical importance of this contribution can be made.



[15.3] Sales, Célia M. D., Peter P. Wakker, Paula C. G. Alves, & Luís Faísca (2015) “MF Calculator: A Web-based Application for Analyzing Similarity,” Journal of Statistical Software 65, May 2015, code snippet 2.

ABSTRACT. This paper presents the Metric-Frequency Calculator (MF Calculator), an online application to analyze similarity. The MF Calculator implements an MF similarity algorithm for the quantitative assessment of similarity in ill-structured data sets. It is widely applicable as it can be used with nominal, ordinal, or interval data when there is little prior control over the variables to be observed regarding number or content. The MF Calculator generates a proximity matrix in CSV, XML or DOC format that can be used as input of traditional statistical techniques such as hierarchical clustering, additive trees, or multidimensional scaling. The MF Calculator also displays a graphical representation of outputs using additive similarity trees. A simulated example illustrates the implementation of the MF Calculator. An additional example with real data is presented, in order to illustrate the potential of combining the MF Calculator with cluster analysis. The MF Calculator is a user-friendly tool available free of charge. It can be accessed from http://mfcalculator.celiasales.org/Calculator.aspx, and it can be used by non-experts from a wide range of social sciences.




2014

[14.1] de Palma, André, Mohammed Abdellaoui, Giuseppe Attanasi, Moshe Ben-Akiva, Ido Erev, Helga Fehr-Duda, Dennis Fok, Craig R. Fox, Ralph Hertwig, Nathalie Picard, Peter P. Wakker, Joan L. Walker, & Martin Weber (2014) “Beware of Black Swans,” Marketing Letters 25, 269-280.

ABSTRACT. Uncertainty pervades most aspects of life. From selecting a new technology to choosing a career, decision makers rarely know in advance the exact outcomes of their decisions. Whereas the consequences of decisions in standard decision theory are explicitly described (the decision from description (DFD) paradigm), the consequences of decisions in the recent decision from experience (DFE) paradigm are learned from experience. In DFD, decision makers typically overrespond to rare events. That is, rare events have more impact on decisions than their objective probabilities warrant (overweighting). In DFE, decision makers typically exhibit the opposite pattern, underresponding to rare events. That is, rare events may have less impact on decisions than their objective probabilities warrant (underweighting). In extreme cases, rare events are completely neglected, a pattern known as the “Black Swan effect.” This contrast between DFD and DFE is known as a description–experience gap. In this paper, we discuss several tentative interpretations arising from our interdisciplinary examination of this gap. First, while a source of underweighting of rare events in DFE may be sampling error, we observe that a robust description–experience gap remains when these factors are not at play. Second, the residual description–experience gap is not only about experience per se but also about the way in which information concerning the probability distribution over the outcomes is learned in DFE. Econometric error theories may reveal that different assumed error structures in DFD and DFE also contribute to the gap.



[14.2] Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2014) “Average Utility Maximization: A Preference Foundation,” Operations Research 62, 207-218.

ABSTRACT. This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences. Thus we generalize many preceding results in the literature. For example, continuity in outcomes, a condition needed in other approaches, now is an option rather than a requirement. Applications to expected utility, decisions under ambiguity, welfare evaluations for variable population size, discounted utility, and quasilinear means in functional analysis are presented.



[14.3] Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2014) “An Experimental Test of Prospect Theory for Predicting Choice under Ambiguity,” Journal of Risk and Uncertainty 48, 1-17.
Web Appendix; Data set; Comments & typos.

ABSTRACT. Prospect theory is the most popular theory for predicting decisions under risk. This paper investigates its predictive power for decisions under ambiguity, using its specification through the source method. We find that it outperforms its most popular alternatives, including subjective expected utility, Choquet expected utility, and three multiple priors theories: maxmin expected utility, maxmax expected utility, and α-maxmin expected utility.



[14.4] Li, Chen, Zhihua Li, & Peter P. Wakker (2014) “If Nudge Cannot Be Applied: A Litmus Test of the Readers’ Stance on Paternalism,” Theory and Decision 76, 297-315.

ABSTRACT. A central question in many debates on paternalism is whether a decision analyst can ever go against the stated preference of a client, even if merely intending to improve the decisions for the client. Using four gedanken-experiments, this paper shows that this central question, so cleverly and aptly avoided by libertarian paternalism (nudge), cannot always be avoided. The four thought experiments, while purely hypothetical, serve to raise and specify the critical arguments in a maximally clear and pure manner. The first purpose of the paper is, accordingly, to provide a litmus test on the readers’ stance on paternalism. We thus also survey and organize the various stances in the literature. The secondary purpose of this paper is to argue that paternalism cannot always be avoided and consumer sovereignty cannot always be respected. However, this argument will remain controversial.




[14.5] de Raat, Friederike, Erik Hordijk, & Peter P. Wakker (2014) “Laat het Los, Al Die Verzekeringen” [Let Go of All Those Insurance Policies], NRC Handelsblad 8 February 2014, E18-E19. (NRC Handelsblad is a daily newspaper, with 200,000 copies per day, and is the 4th largest newspaper in the Netherlands.)

[14.6] Wakker, Peter P. (2014) “Verliesangst” [Fear of Loss], NRC Handelsblad (Delta Lloyd Magazine) 27 June, p. 9. (NRC Handelsblad is a daily newspaper, with 200,000 copies per day, and is the 4th largest newspaper in the Netherlands.)

[14.7] Vrieselaar, Nic, Ralph Koijen, & Peter P. Wakker (2014) “Sparen voor de Dood” [Saving for Death], Elsevier 70 (47), p. 73. (Elsevier is a weekly magazine with 100,000 subscriptions.)


2013

[13.1] Bleichrodt, Han, Amit Kothiyal, Drazen Prelec, & Peter P. Wakker (2013) “Compound Invariance Implies Prospect Theory for Simple Prospects,” Journal of Mathematical Psychology 57, 68-77.

ABSTRACT. Behavioral conditions such as compound invariance for risky choice and constant decreasing relative impatience for intertemporal choice have surprising implications for the underlying decision model. They imply a multiplicative separability of outcomes and either probability or time. Hence the underlying model must be prospect theory or discounted utility on the domain of prospects with one nonzero outcome. We indicate implications for richer domains with multiple outcomes, and with both risk and time involved.


[13.2] Bleichrodt, Han, Rogier J.D. Potter van Loon, Kirsten I.M. Rohde, & Peter P. Wakker (2013) “A Criticism of Doyle's Survey of Time Preference: A Correction Regarding the CRDI and CADI Families,” Judgment and Decision Making 8, 630-631.
Web Appendix

ABSTRACT. Doyle's (JDM 2013) theoretical survey of discount functions criticizes two parametric families abbreviated as CRDI and CADI families. We show that Doyle's criticisms are based on a mathematical mistake and are incorrect.


[13.3] Spinu, Vitalie & Peter P. Wakker (2013) “Expected Utility without Continuity: A Comment on Delbaen, Drapeau, and Kupper (2011),” Journal of Mathematical Economics 49, 28-30.

This paper presents preference axiomatizations of expected utility for nonsimple lotteries while avoiding continuity constraints. We use results by Fishburn (1975), Wakker (1993), and Kopylov (2010) to generalize results by Delbaen, Drapeau, and Kupper (2011). We explain the logical relations between these contributions for risk versus uncertainty, and for finite versus countable additivity, indicating what are the most general axiomatizations of expected utility existing today.


[13.4] Stallinga, Rob & Peter P. Wakker (2013) “Wie nooit wil verliezen, mist veel kansen” [Who Never Wants to Lose Misses Many Opportunities], Safe 2013#02, p. 26. (Safe is a journal for clients of Robeco Investment Engineers and the Rabobank.)



[13.5] Wijers, Suzanne, Guus de Jonge, & Peter P. Wakker (2013) “Effectieve Dekking zonder Oververzekering” [Effective Coverage without Over-Insurance], Spits 11 June 2013, Personal Finance p. 6. (Spits is a free daily newspaper, with 500,000 copies per day distributed over the Netherlands, estimated to have 2,000,000 readers per day.)




2012

[12.1] Attema, Arthur E., Han Bleichrodt, & Peter P. Wakker (2012) “A Direct Method for Measuring Discounting and QALYs More Easily and Reliably,” Medical Decision Making 32, 583-593.
Data set.

Time discounting and quality of life are two important factors in evaluations of medical interventions. The measurement of these two factors is complicated because they interact. Existing methods either simply assume one factor given, based on heuristic assumptions, or invoke complicating extraneous factors such as risk that generate extra biases. We introduce a new method for measuring discounting (and then quality of life) that involves no extraneous factors and that avoids all distorting interactions. Further, our method is considerably simpler and more realistic for subjects than existing methods. It is entirely choice-based and, thus, can be founded on the rationality requirements of economics. An experiment demonstrates the feasibility of our method. It can measure discounting not only for health, but for any other (“flow”) commodity that comes per time unit, such as salary.


[12.2] Baillon, Aurélien, Laure Cabantous, & Peter P. Wakker (2012) “Aggregating Imprecise or Conflicting Beliefs: An Experimental Investigation Using Modern Ambiguity Theories,” Journal of Risk and Uncertainty 44, 115-147.
Web Appendix; Data set & analyses.

Two experiments show that violations of expected utility due to ambiguity, found in general decision experiments, also affect belief aggregation. Hence we use modern ambiguity theories to analyze belief aggregation, thus obtaining more refined and empirically more valid results than traditional theories can provide. We can now confirm more reliably that conflicting (heterogeneous) beliefs where some agents express certainty are processed differently than informationally equivalent imprecise homogeneous beliefs. We can also investigate new phenomena related to ambiguity. For instance, agents who express certainty receive extra weight (a cognitive effect related to ambiguity-generated insensitivity) and generate extra preference value (source preference; a motivational effect related to ambiguity aversion). Hence, incentive compatible belief elicitations that prevent manipulation are especially warranted when agents express certainty. For multiple prior theories of ambiguity, our findings imply that the same prior probabilities can be treated differently in different contexts, suggesting an interest in corresponding generalizations.


[12.3] Baillon, Aurélien, Bram Driesen, & Peter P. Wakker (2012) “Relative Concave Utility for Risk and Ambiguity,” Games and Economic Behavior 75, 481-489.

This paper presents a general technique for comparing the concavity of different utility functions when probabilities need not be known. It generalizes: (a) Yaari’s comparisons of risk aversion by not requiring identical beliefs; (b) Kreps and Porteus’ information-timing preference by not requiring known probabilities; (c) Klibanoff, Marinacci, and Mukerji’s smooth ambiguity aversion by not using subjective probabilities (which are not directly observable) and by not committing to (violations of) dynamic decision principles; (d) comparative smooth ambiguity aversion by not requiring identical second-order subjective probabilities. Our technique completely isolates the empirical meaning of utility. It thus sheds new light on the descriptive appropriateness of utility to model risk and ambiguity attitudes.


[12.4] Baltussen, Guido, Thierry Post, Martijn J. van den Assem, & Peter P. Wakker (2012) “Random Incentive Systems in a Dynamic Choice Experiment,” Experimental Economics 15, 418–443.
Data set.

Experiments frequently use a random incentive system (RIS), where only tasks that are randomly selected at the end of the experiment are for real. The most common type pays every subject one out of her multiple tasks (within-subjects randomization). Recently, another type has become popular, where a subset of subjects is randomly selected, and only these subjects receive one real payment (between-subjects randomization). In earlier tests with simple, static tasks, RISs performed well. The present study investigates RISs in a more complex, dynamic choice experiment. We find that between-subjects randomization reduces risk aversion. While within-subjects randomization delivers unbiased measurements of risk aversion, it does not eliminate carry-over effects from previous tasks. Both types generate an increase in subjects’ error rates. These results suggest that caution is warranted when applying RISs to more complex and dynamic tasks.



[12.5] Brinks, Mirjam & Peter P. Wakker (2012) “Risico is geen Nederlands Woord” [Risk Is Not a Dutch Word], Interview in Het Parool 09 Aug. 2012. (National Dutch newspaper)


[12.6] Wester, Jeroen & Peter P. Wakker (2012) “Heffen op Nationale Hobby: Verzekeren” [Taxing the National Hobby: Insuring], Interview in NRC 04 Oct 2012. (National Dutch newspaper)


[12.7] Boere, Raymond & Peter P. Wakker (2012) “Honderd Euro Polisgeld Is Snel Terugverdiend” [A Hundred Euros in Policy Fees Is Quickly Earned Back], Interview in Algemeen Dagblad 04 Oct 2012. (National Dutch newspaper)




2011

[11.1] Abdellaoui, Mohammed, Aurélien Baillon, Laetitia Placido, & Peter P. Wakker (2011) “The Rich Domain of Uncertainty: Source Functions and Their Experimental Implementation,” American Economic Review 101, 695-723.
Web Appendix;   Comments & typos;   Data set & analyses.

In economic decisions we often have to deal with uncertain events for which no probabilities are known. Several normative models have been proposed for such decisions. Empirical studies have usually been qualitative, or they estimated ambiguity aversion through one single number. This paper introduces the source method, a tractable method for quantitatively analyzing uncertainty empirically. The method can capture the richness of ambiguity attitudes. The theoretical key in our method is the distinction between different sources of uncertainty, within which subjective (choice-based) probabilities can still be defined. Source functions convert those subjective probabilities into willingness to bet. We apply our method in an experiment, where we do not commit to a particular model of ambiguity but let the data speak.


[11.2] Bleichrodt, Han, Jason N. Doctor, Martin Filko, & Peter P. Wakker (2011) “Utility Independence of Multiattribute Utility Theory Is Equivalent to Standard Sequence Invariance of Conjoint Measurement,” Journal of Mathematical Psychology 55, 451-456.

Utility independence is a central condition in multiattribute utility theory, where attributes of outcomes are aggregated in the context of risk. The aggregation of attributes in the absence of risk is studied in conjoint measurement. In conjoint measurement, standard sequences have been widely used to empirically measure and test utility functions, and to theoretically analyze them. This paper shows that utility independence and standard sequences are closely related: utility independence is equivalent to a standard sequence invariance condition when applied to risk. This simple relation between two widely used conditions in adjacent fields of research is surprising and useful. It facilitates the testing of utility independence because standard sequences are flexible and can avoid cancelation biases that affect direct tests of utility independence. Extensions of our results to nonexpected utility models can now be provided easily. We discuss applications to the measurement of quality-adjusted life-years (QALY) in the health domain.


[11.3] Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2011) “Comonotonic Proper Scoring Rules to Measure Ambiguity and Subjective Beliefs,” Journal of Multi-Criteria Decision Analysis 17, 101-113.

Proper scoring rules serve to measure subjective degrees of belief. Traditional proper scoring rules are based on the assumption of expected value maximization. There are, however, many deviations from expected value due to risk aversion and other factors. Correcting techniques have been proposed in the literature for deviating (nonlinear) utility that still assumed expected utility maximization. More recently, corrections for deviations from expected utility have been proposed. The latter concerned, however, only the quadratic scoring rule, and could handle only half of the domain of subjective beliefs. Further, beliefs close to 0.5 could not be discriminated. This paper generalizes the correcting techniques to all proper scoring rules, covers the whole domain of beliefs and, in particular, can discriminate between all degrees of belief. Thus we fully extend the properness requirement (in the sense of identifying all degrees of subjective beliefs) to all models that deviate from expected value.
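      For background (the classical quadratic scoring rule under expected value, stated here only for orientation; the comonotonic generalization itself is developed in the paper): reporting probability r for an event E yields
      \[ 1-(1-r)^2 \text{ if } E \text{ obtains}, \qquad 1-r^2 \text{ otherwise}, \]
      so an expected value maximizer optimally reports r equal to her subjective probability of E; the corrections discussed above serve to recover beliefs when behavior deviates from expected value.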


[11.4] Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2011) “Prospect Theory for Continuous Distributions: A Preference Foundation,” Journal of Risk and Uncertainty 42, 195-210.

This paper provides a preference foundation of prospect theory for continuous distributions and unbounded utility. Thus we show, for instance, how applications of this theory to normal and lognormal distributions can be justified or falsified.


[11.5] Trautmann, Stefan T., Ferdinand M. Vieider, & Peter P. Wakker (2011) “Preference Reversals for Ambiguity Aversion,” Management Science 57, 1320-1333.
Data set.

This paper finds preference reversals in measurements of ambiguity aversion, even if psychological and informational circumstances are kept constant. The reversals are of a fundamentally different nature than the reversals found before because they cannot be explained by context-dependent weightings of attributes. We offer an explanation based on Sugden’s random-reference theory, with different elicitation methods generating different random reference points. Then measurements of ambiguity aversion that use willingness to pay are confounded by loss aversion and hence overestimate ambiguity aversion.


[11.6] van de Kuilen, Gijs & Peter P. Wakker (2011) “The Midweight Method to Measure Attitudes toward Risk and Ambiguity,” Management Science 57, 582-598.
Web Appendix;   Data set.

This paper introduces a parameter-free method for measuring the weighting functions of prospect theory and rank-dependent utility. These weighting functions capture risk attitudes, subjective beliefs, and ambiguity attitudes. Our method, called the midweight method, is based on a convenient way to obtain midpoints in the weighting function scale. It can be used both for risk (known probabilities) and for uncertainty (unknown probabilities). The resulting integrated treatment of risk and uncertainty is particularly useful for measuring the differences between them: ambiguity. Compared to existing methods to measure ambiguity attitudes, our method is more efficient and it can accommodate violations of expected utility under risk. An experiment demonstrates the feasibility and tractability of our method, yielding plausible results such as ambiguity aversion for moderate and high likelihoods but ambiguity seeking for low likelihoods, as predicted by Ellsberg.


[11.7] Wakker, Peter P. (2011) “Jaffray's Ideas on Ambiguity,” Theory and Decision 71, 11-22.

This paper discusses Jean-Yves Jaffray's ideas on ambiguity, and the views underlying his ideas. His models, developed 20 years ago, provide the most tractable separation of risk attitudes, ambiguity attitudes, and ambiguity beliefs available in the literature today.




2010

[10.1] Attema, Arthur E., Han Bleichrodt, Kirsten I.M. Rohde, & Peter P. Wakker (2010) “Time-Tradeoff Sequences for Analyzing Discounting and Time Inconsistency,” Management Science 56, 2015-2030.
Data set.    Typo.

This paper introduces time-tradeoff (TTO) sequences as a new tool to analyze time inconsistency and intertemporal choice. TTO sequences simplify the measurement of discount functions, requiring no assumptions about utility. They also simplify the qualitative testing, and allow for quantitative measurements, of time inconsistencies. TTO sequences can easily be administered. They readily show which subjects are most prone to time inconsistencies. We further use them to axiomatically analyze and empirically test (quasi-)hyperbolic discount functions. An experiment demonstrates the feasibility of measuring TTO sequences. Our data falsify (quasi-)hyperbolic discount functions and call for the development of models that can accommodate increasing impatience.


[10.2] Trautmann, Stefan T. & Peter P. Wakker (2010) “Process Fairness and Dynamic Consistency,” Economics Letters 109, 187-189.

When process fairness matters (by deviating from outcome fairness), dynamic inconsistencies can arise in the same way as they do in nonexpected utility under risk. Mark Machina introduced resolute choice so as to restore dynamic consistency under nonexpected utility without using Strotz's commitment devices. Machina's idea can similarly be used to justify dynamically consistent process fairness. Process fairness comprises a particularly convincing application of resolute choice.


[10.3] Wakker, Peter P. (2010) “Prospect Theory for Risk and Ambiguity.” Cambridge University Press, Cambridge, UK.
Further material (corrections, extra exercises, and so on).
How to buy.

This book deals with individual decision making under uncertainty. Expected utility, the classical model, and its most popular generalization, prospect theory (in its modern “rank-dependent” version), will be central. These theories are most widely used today, and they are suited for elementary measurements and tests. This book aims to be accessible to students from various disciplines, including economics, management science, medicine, and psychology. It may be of interest to specialists because it shows that theories such as Schmeidler's (1989) Choquet expected utility and Tversky & Kahneman's (1992) new (cumulative) prospect theory, commonly thought to be complex, can be presented and derived in elementary manners if we use so-called ranks instead of comonotonicity.
      This book need not be read continuously. The reader can pick out topics of interest, and then select preceding sections to be read in preparation as indicated in Appendix K. Thus, different readers can pick out different parts of interest. In particular, readers with little mathematical background can skip all advanced mathematics. Indexed exercises further allow readers to select and skip material within sections.
      Ways are presented to empirically test the validity of theories and ways to test their qualitative properties. For all theories described, methods are provided for obtaining precise quantitative measurements of those theories and their concepts through so-called parameter-free methods. Such methods do not just fit models, but they also give insights into the concepts of the model (e.g., subjective probabilities) and into the underlying psychological processes. They can also be used in interactive prescriptive decision consultancies. The theories are presented in as elementary and transparent a manner as possible. This enhances the accessibility of the book to readers without much theoretical background.
      The presentation of all models in this book follows the same line. First the model is defined, with special attention to the free parameters that characterize it, such as the utility function in expected utility. Next we see how those parameters can, in principle, be measured from decisions, and how well they can describe, predict, and prescribe decisions. The requirement that such measurements do not run into contradictions then gives so-called preference foundations of the models. Finally, we discuss empirical findings of the models, and sometimes give first suggestions for applications in various fields.
      The main point that prospect theory adds to classical expected utility is that risk and ambiguity attitudes are no longer modeled solely through utility curvature, but depend also on nonadditive probability weighting and loss aversion. Loss aversion is one of the strongest empirical phenomena in decision theory, and the various ways people feel about probabilities and uncertainty (chance attitude) are just as important empirically as the various ways people feel about outcomes (utility). These new components of risk attitude and Ellsberg's ambiguity attitudes had been sorely missing in the literature up to the 1980s. This book aims to make these new concepts accessible to a wide audience, and to help initiate applications thereof.
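      As a minimal formal sketch of the rank-dependent evaluation central to the book (the standard form for a prospect with only gains, with notation chosen here for illustration): for outcomes ranked x_1 ≥ … ≥ x_n ≥ 0 with probabilities p_1, …, p_n,
      \[ PT(x_1,p_1;\ldots;x_n,p_n) \;=\; \sum_{i=1}^{n} \bigl( w(p_1+\cdots+p_i) - w(p_1+\cdots+p_{i-1}) \bigr) U(x_i), \]
      where w is the probability weighting function, U the utility function, and the empty sum (for i = 1) is 0. The decision weight of x_i thus depends on its rank p_1 + … + p_{i-1}, illustrating the role of ranks mentioned above.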


2009

[09.1] Bleichrodt, Han, Kirsten I.M. Rohde, & Peter P. Wakker (2009) “Non-Hyperbolic Time Inconsistency,” Games and Economic Behavior 66, 27-38.
Comment.

The commonly used hyperbolic and quasi-hyperbolic discount functions have been developed to accommodate decreasing impatience, which is the prevailing empirical finding in intertemporal choice, in particular for aggregate behavior. These discount functions do not have the flexibility to accommodate increasing impatience or strongly decreasing impatience. This lack of flexibility is particularly disconcerting for fitting data at the individual level, where various patterns of increasing impatience and strongly decreasing impatience will occur for a significant fraction of subjects. This paper presents discount functions with constant absolute (CADI) or constant relative (CRDI) decreasing impatience that can accommodate any degree of decreasing or increasing impatience. In particular, they are sufficiently flexible for analyses at the individual level. The CADI and CRDI discount functions are the analogs of the well known CARA and CRRA utility functions for decision under risk.


[09.2] Offerman, Theo, Joep Sonnemans, Gijs van de Kuilen, & Peter P. Wakker (2009) “A Truth Serum for Non-Bayesians: Correcting Proper Scoring Rules for Risk Attitudes,” Review of Economic Studies 76, 1461-1489.
Background material (Data set & further analyses.)
Comments

Proper scoring rules, convenient and commonly used tools for eliciting subjective beliefs, are valid only under expected value maximization. This paper shows how proper scoring rules can be generalized to modern theories of risk and ambiguity, yielding mutual benefits. For practitioners of proper scoring rules, the validity of their measurement instrument is improved. For the study of risk and ambiguity, measurement tools are provided that are more efficient than the commonly used binary preferences. An experiment demonstrates the feasibility of our generalized measurement instrument, yielding plausible empirical results.


[09.3] Sales, Célia M.D. & Peter P. Wakker (2009) “The Metric-Frequency Measure of Similarity for Ill-Structured Data Sets, with an Application to Family Therapy,” British Journal of Mathematical and Statistical Psychology 62, 663-682.
Data, and software for calculating the similarity index.

Similarity measures have been studied extensively in many domains, but usually with well-structured data sets. In many psychological applications, however, such data sets are not available. It often cannot even be predicted how many items will be observed, or what exactly they will entail. This paper introduces a similarity measure, called the metric-frequency (MF) measure, that can be applied to such data sets. If it is not known beforehand how many items will be observed, then the number of items actually observed in itself carries information. A typical feature of the MF is that it incorporates such information. The primary purpose of our measure is that it should be pragmatic, widely applicable, and tractable, even if data are complex. The MF generalizes Tversky's set-theoretic measure of similarity to cases where items may be present or absent and at the same time can be numerical as with Shepard's metric measure, but need not be so. As an illustration, we apply the MF to family therapy where it cannot be predicted what issues the clients will raise in therapeutic sessions. The MF is flexible enough to be applicable to idiographic data.




2008

[08.1] Bleichrodt, Han, Kirsten I.M. Rohde, & Peter P. Wakker (2008) “Combining Additive Representations on Subsets into an Overall Representation,” Journal of Mathematical Psychology 52, 304-310.
Comments

Many traditional conjoint representations of binary preferences are additively decomposable, or additive for short. An important generalization arises under rank-dependence, when additivity is restricted to cones with a fixed ranking of components from best to worst (comonotonicity), leading to configural weighting, rank-dependent utility, and rank- and sign-dependent utility (prospect theory). This paper provides a general result showing how additive representations on an arbitrary collection of comonotonic cones can be combined into one overall representation that applies to the union of all cones considered. The result is applied to a new paradigm for decision under uncertainty developed by Duncan Luce and others, which allows for violations of basic rationality properties such as the coalescing of events and other framing conditions. Through our result, a complete preference foundation of a number of new models by Luce and others can be obtained. We also show how additive representations on different full product sets can be combined into a representation on the union of these different product sets.


[08.2] Bleichrodt, Han, Kirsten I.M. Rohde, & Peter P. Wakker (2008) “Koopmans' Constant Discounting for Intertemporal Choice: A Simplification and a Generalization,” Journal of Mathematical Psychology 52, 341-347.

Koopmans provided a well-known preference axiomatization for discounted utility, the most widely used model of intertemporal choice. There were, however, some technical problems in his analysis. For example, there was an unforeseen implication of bounded utility. Some partial solutions have been advanced in various fields in the literature. The technical problems in Koopmans' analysis obscure the appeal of his intuitive axioms. This paper completely resolves Koopmans' technical problems. In particular, it obtains complete flexibility concerning the utility functions that can be used. This paper, thus, provides a clean and complete preference axiomatization of discounted utility, clarifying the appeal of Koopmans' intuitive axioms.
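      For orientation (the constant-discounting functional form whose axiomatization is at stake; the axioms themselves are in the paper): a stream (x_0, x_1, x_2, …) is evaluated as
      \[ DU(x_0,x_1,x_2,\ldots) \;=\; \sum_{t=0}^{\infty} \delta^{t}\, u(x_t), \qquad 0 < \delta < 1, \]
      and the paper obtains this representation without Koopmans' unintended restriction to bounded utility.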


[08.3] de Palma, André, Moshe Ben-Akiva, David Brownstone, Charles Holt, Thierry Magnac, Daniel McFadden, Peter Moffatt, Nathalie Picard, Kenneth Train, Peter P. Wakker, & Joan Walker (2008) “Risk, Uncertainty and Discrete Choice Models,” Marketing Letters 19, 269-285.

This paper examines the cross-fertilization of random utility models with the study of decision making under risk and uncertainty. We start with a description of Expected Utility (EU) theory and then consider deviations from the standard EU frameworks, involving the Allais paradox and the Ellsberg paradox, inter alia. We then discuss how the resulting Non-EU framework can be modeled and estimated within the framework of discrete choices in static and dynamic contexts. Our objectives in addressing risk and ambiguity in individual choice contexts are to understand the decision choice process, and to use behavioral information for prediction, prescription, and policy analysis.


[08.4] Trautmann, Stefan T., Ferdinand M. Vieider, & Peter P. Wakker (2008) “Causes of Ambiguity Aversion: Known versus Unknown Preferences,Journal of Risk and Uncertainty 36, 225-243.
Data set.

Ambiguity aversion appears to have subtle psychological causes. Curley, Yates, and Abrams found that the fear of negative evaluation by others (FNE) increases ambiguity aversion. This paper introduces a design where preferences can be private information of individuals, so that FNE can be avoided entirely. Thus, we can completely control for FNE and other social factors, and can determine exactly to what extent ambiguity aversion is driven by such social factors. In our experiment ambiguity aversion, while appearing as commonly found in the presence of FNE, disappears entirely if FNE is eliminated. Implications are discussed.


[08.5] Wakker, Peter P. (2008) “Lessons Learned by (from?) an Economist Working in Medical Decision Making,Medical Decision Making 28, 690-698.

A personal account is given of my experiences as an economist working in medical decision making. I discuss the differences between economic decision theory and medical decision making and give examples of the mutual benefits resulting from interactions. In particular, I discuss pros and cons of different methods for measuring quality of life (or, as economists would call it, utility), including the standard-gamble, the time-tradeoff, and the healthy-years-equivalent method.


[08.6] Wakker, Peter P. (2008) “Explaining the Characteristics of the Power (CRRA) Utility Family,Health Economics 17, 1329-1344.
Comments

The power family, also known as the family of constant relative risk aversion (CRRA), is the most widely used parametric family for fitting utility functions to data. Its characteristics have, however, been little understood, and have led to numerous misunderstandings. This paper explains these characteristics in a manner accessible to a wide audience.


[08.7] Wakker, Peter P. (2008) “Uncertainty.In Lawrence Blume & Steven N. Durlauf (Eds.) The New Palgrave: A Dictionary of Economics 8, 428-439, The MacMillan Press, London.
Comments

This chapter deals with individual decision making under uncertainty (unknown probabilities). Risk (known probabilities) is not treated as a separate case, but as a subcase of uncertainty. Many results from risk naturally extend to uncertainty. The Allais paradox, commonly applied to risk, also reveals empirical deficiencies of expected utility for uncertainty. The Ellsberg paradox does not reveal deviations from expected utility in an absolute sense, but in a relative sense, giving within-person comparisons: for some events (ambiguous or otherwise) subjects deviate more from expected utility than for other events. Besides aversion, many other attitudes towards ambiguity are empirically relevant.




2007

[07.1] Abdellaoui, Mohammed, Carolina Barrios, & Peter P. Wakker (2007) “Reconciling Introspective Utility with Revealed Preference: Experimental Arguments Based on Prospect Theory,Journal of Econometrics 138, 356-378.
Comments. Data set.

In an experiment, choice-based (revealed-preference) utility of money is derived from choices under risk, and choiceless (non-revealed-preference) utility from introspective strength-of-preference judgments. The well-known inconsistencies of risky utility under expected utility are resolved under prospect theory, yielding one consistent cardinal utility index for risky choice. Remarkably, however, this cardinal index also agrees well with the choiceless utilities, suggesting a relation between a choice-based and a choiceless concept. Such a relation implies that introspective judgments can provide useful data for economics, and can reinforce the revealed-preference paradigm. This finding sheds new light on the classical debate on ordinal versus cardinal utility.


[07.2] Diecidue, Enrico, Peter P. Wakker, & Marcel Zeelenberg (2007) “Eliciting Decision Weights by Adapting de Finetti's Betting-Odds Method to Prospect Theory,Journal of Risk and Uncertainty 34, 179-199.
Data set.

This paper extends de Finetti's betting-odds method for assessing subjective beliefs to ambiguous events. de Finetti's method is so transparent that decision makers can evaluate the relevant tradeoffs in complex situations, for prospects with more than two uncertain outcomes. Such prospects are needed to test the novelty of the Quiggin-Schmeidler rank-dependent utility and of new prospect theory. Our extension is implemented in an experiment on predicting next-day's performance of the Dow Jones and Nikkei stock indexes, where we test the existence and violations of rank-dependence.
This paper was previously entitled: “Measuring Decision Weights of Ambiguous Events by Adapting de Finetti's Betting-Odds Method to Prospect Theory.”


[07.3] Köbberling, Veronika, Christiane Schwieren, & Peter P. Wakker (2007) “Prospect-Theory's Diminishing Sensitivity versus Economics' Intrinsic Utility of Money: How the Introduction of the Euro Can Be Used to Disentangle the Two Empirically,Theory and Decision 63, 205-231.
Data set.

The introduction of the Euro gave a unique opportunity to empirically disentangle two components in the utility of money. The first is intrinsic value, a normative component that is central in economics. The second is numerical sensitivity, a descriptive component that is central in prospect theory and that underlies the money illusion. We measured relative risk aversion in Belgium before and after the introduction of the Euro, and could consider effects of changes in intrinsic value while keeping numbers constant, and effects of changes in numbers while keeping intrinsic value constant. Increasing intrinsic value led to a significant increase of relative risk aversion, but changes in numbers did not have significant effects.


[07.4] Wakker, Peter P., Danielle R.M. Timmermans, & Irma A. Machielse (2007) “The Effects of Statistical Information on Risk and Ambiguity Attitudes, and on Rational Insurance Decisions,Management Science 53, 1770-1784.
Data set.

This paper presents a field study into the effects of statistical information concerning risks on willingness to take insurance, with special attention being paid to the usefulness of these effects for the clients (the insured). Unlike many academic studies, we were able to use in-depth individual interviews of a large representative sample from the general public (N=476). The statistical information that had the most interesting effects, “individual own past-cost information,” unfortunately enhanced adverse selection, which we could directly verify because the real health costs of the clients were known. For a prescriptive evaluation, this drawback must be weighed against some advantages: a desirable interaction with risk attitude, increased customer satisfaction, and increased cost awareness. Descriptively, ambiguity seeking was found rather than ambiguity aversion, and no risk aversion was found for loss outcomes. Both findings, obtained in a natural decision context, deviate from traditional views in risk theory but are in line with prospect theory. We confirmed prospect theory's reflection at the level of group averages, but falsified it at the individual level.




2006

[06.1] van de Kuilen, Gijs & Peter P. Wakker (2006) “Learning in the Allais Paradox,Journal of Risk and Uncertainty 33, 155-164.
Data set.

Whereas both the Allais paradox, the first empirical challenge of the classical rationality assumptions, and learning have been the focus of many experimental investigations, no experimental study exists today into learning in the pure context of the Allais paradox. This paper presents such a study. We find that choices converge to expected utility maximization if subjects are given the opportunity to learn by both thought and experience, but less so when they learn by thought only. To the extent that genuine preferences should be measured with proper learning and incentives, our study gives the first pure demonstration that irrationalities such as in the Allais paradox are less pronounced than often thought.




2005

[05.1] Abdellaoui, Mohammed & Peter P. Wakker (2005) “The Likelihood Method for Decision under Uncertainty,Theory and Decision 58, 3-76.
Comments

This paper introduces the likelihood method for decision under uncertainty. The method allows the quantitative determination of subjective beliefs or decision weights without invoking additional separability conditions, and generalizes the Savage-de Finetti betting method. It is applied to a number of popular models for decision under uncertainty. In each case, preference foundations result from the requirement that no inconsistencies are to be revealed by the version of the likelihood method appropriate for the model considered. A unified treatment of subjective decision weights results for most of the decision models popular today. Savage's derivation of subjective expected utility can now be generalized and simplified. In addition to the intuitive and empirical contributions of the likelihood method, we provide a number of technical contributions: We generalize Savage's nonatomicity condition (“P6”) and his assumption of (sigma-)algebras of events, while fully maintaining his flexibility regarding the outcome set. Derivations of Choquet expected utility and probabilistic sophistication are generalized and simplified similarly. The likelihood method also reveals a common intuition underlying many other conditions for uncertainty, such as definitions of ambiguity aversion and pessimism.


[05.2] Köbberling, Veronika & Peter P. Wakker (2005) “An Index of Loss Aversion,Journal of Economic Theory 122, 119-131.
Selected as one of the 50 most influential papers published in Journal of Economic Theory. Reprinted in special issue: Karl Shell, Tilman Borgers, & Alessandro Pavan (Eds., 2020) “Articles Celebrating the 50th Anniversary of the Journal of Economic Theory,” May 2020. link

To a considerable extent, the commonly observed risk aversion is caused by loss aversion. This paper proposes a quantitative index of loss aversion. Under prospect theory, the proposal leads to a decomposition of risk attitude into three independent components: intrinsic utility, probability weighting, and loss aversion. The main theorem shows how the index of loss aversion of different decision makers can be compared through observed choices.
Typo.
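For readers wanting a concrete picture, here is an illustrative sketch in generic notation (not quoted from the paper): with the reference point at 0, U(0) = 0, and U denoting the basic utility over gains and losses, a kink-based index of loss aversion can be taken to be the ratio of the left and right derivatives of U at the reference point,

\[
\lambda \;=\; \frac{U'_{\uparrow}(0)}{U'_{\downarrow}(0)},
\qquad
U'_{\uparrow}(0)=\lim_{x\uparrow 0}\frac{U(x)}{x},
\qquad
U'_{\downarrow}(0)=\lim_{x\downarrow 0}\frac{U(x)}{x},
\]

with \(\lambda > 1\) reflecting loss aversion, \(\lambda = 1\) loss neutrality, and \(\lambda < 1\) gain seeking.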

[05.3] Wakker, Peter P. (2005) “Decision-Foundations for Properties of Nonadditive Measures; General State Spaces or General Outcome Spaces,Games and Economic Behavior 50, 107-125.
Comments

This paper characterizes properties of chance attitudes (nonadditive measures). It does so for decision under uncertainty (unknown probabilities), where it assumes Choquet expected utility, and for decision under risk (known probabilities), where it assumes rank-dependent utility. It analyzes chance attitude independently from utility. All preference conditions concern simple violations of the sure-thing principle. Earlier results along these lines assumed richness of both outcomes and events. This paper generalizes such results to general state spaces as in Schmeidler's model of Choquet expected utility, and to general outcome spaces as in Gilboa's model of Choquet expected utility.




2004

[04.1] Diecidue, Enrico, Ulrich Schmidt, & Peter P. Wakker (2004) “The Utility of Gambling Reconsidered,Journal of Risk and Uncertainty 29, 241-259.
Comments

The utility of gambling, entailing an intrinsic utility or disutility of risk, has been alluded to in the economics literature for over a century. This paper presents a model of the phenomenon and demonstrates that any utility of gambling necessarily implies a violation of fundamental rationality properties, such as transitivity or stochastic dominance, which may explain why this often-debated phenomenon was never formalized in the economics literature. Our model accommodates well-known deviations from expected utility, such as the Allais paradox, the simultaneous existence of gambling and insurance, and the equity-premium puzzle, while minimally deviating from expected utility. Our model also sheds new light on risk aversion and the distinction between von Neumann-Morgenstern utility and neoclassical (riskless) utility.


[04.2] Köbberling, Veronika & Peter P. Wakker (2004) “A Simple Tool for Qualitatively Testing, Quantitatively Measuring, and Normatively Justifying Savage's Subjective Expected Utility,Journal of Risk and Uncertainty 28, 135-145.

This paper introduces a new preference condition that can be used to justify (or criticize) expected utility. The approach taken in this paper is an alternative to Savage's, and is accessible to readers without a mathematical background. It is based on a method for deriving “comparisons of tradeoffs” from ordinal preferences. Our condition simplifies previously-published tradeoff conditions, and at the same time provides more general and more powerful tools to specialists. The condition is more closely related to empirical methods for measuring utility than its predecessors. It provides a unifying tool for qualitatively testing, quantitatively measuring, and normatively justifying expected utility.


[04.3] van Osch, Sylvie M.C., Peter P. Wakker, Wilbert B. van den Hout, & Anne M. Stiggelbout (2004) “Correcting Biases in Standard Gamble and Time Tradeoff Utilities,Medical Decision Making 24, 511-517.

The standard gamble (SG) method and the time tradeoff (TTO) method are commonly used to measure utilities. However, they are distorted by biases due to loss aversion, scale compatibility, utility curvature for life duration, and probability weighting. This article applies corrections for these biases and provides new data on these biases and their corrections. The SG and TTO utilities of 6 rheumatoid arthritis health states were assessed for 45 healthy respondents. Various corrections of utilities were considered. The uncorrected TTO scores and the corrected (for utility curvature) TTO scores provided similar results. This article provides arguments suggesting that the TTO scores are biased upward rather than having balanced biases. The only downward bias in TTO scores was small and probably cannot offset the upward biases. The TTO scores are higher than the theoretically most preferred correction of the SG, the mixed correction. These findings suggest that uncorrected SG scores, which are higher than TTO scores, are too high.


[04.4] Wakker, Peter P. (2004) “On the Composition of Risk Preference and Belief,Psychological Review 111, 236-241. Role of Amos Tversky.

This paper proposes a decomposition of nonadditive decision weights into a component reflecting risk attitude and a component depending on belief. The decomposition is based solely on observable preference and does not invoke other empirical primitives such as statements of judged probabilities. The characterizing preference condition (less sensitivity towards uncertainty than towards risk) deviates somewhat from the often-studied ambiguity aversion but is confirmed in the empirical data. The decomposition only invokes one-nonzero-outcome prospects and is valid under all theories with a nonlinear weighting of uncertainty.


[04.5] Wakker, Peter P. (2004) “Preference Axiomatizations for Decision under Uncertainty.In Itzhak Gilboa (Ed.) Uncertainty in Economic Theory: Essays in Honor of David Schmeidler's 65th Birthday, 20-35, Routledge, London.

Several contributions in this book present axiomatizations of decision models, and of special forms thereof. This chapter explains the general usefulness of such axiomatizations, and reviews the basic axiomatizations for static individual decisions under uncertainty. It will demonstrate that David Schmeidler's contributions to this field were crucial.


[04.6] Wakker, Peter P., Sylvia J.T. Jansen, & Anne M. Stiggelbout (2004) “Anchor Levels as a New Tool for the Theory and Measurement of Multiattribute Utility,Decision Analysis 1, 217-234.

This paper introduces anchor levels as a new tool for multiattribute utility theory. Anchor levels are attribute levels whose value is not affected by other attributes. They allow for new interpretations and generalizations of known representations and utility measurement techniques. Generalizations of earlier techniques are obtained because cases with complex interactions between attributes can now be handled. Anchor levels serve not only to enhance the generality, but also the tractability, of utility measurements, because stimuli can better be targeted towards the perception and real situation of clients. In an application, anchor levels were applied to the measurement of quality of life during radiotherapy treatment, where there are complex interactions with what happens before and after. Using anchor levels, the measurements could be related exactly to the situation of the clients, thus simplifying the clients' cognitive burden.




2003

[03.1] Wakker, Peter P. (2003) “The Data of Levy and Levy (2002) “Prospect Theory: Much Ado about Nothing?” Actually Support Prospect Theory,Management Science 49, 979-981.
Reply to my note by Levy & Levy
Comments (on multi-publication by Levy & Levy; and typos)

Levy and Levy (Management Science, 2002) present data that, according to their claims, violate prospect theory. They suggest that prospect theory's hypothesis of an S-shaped value function, concave for gains and convex for losses, is incorrect. However, all of the data of Levy and Levy are perfectly consistent with the predictions of prospect theory, as can be verified by simply applying prospect theory formulas. The mistake of Levy and Levy is that they, incorrectly, thought that probability weighting could be ignored.
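To see why the weighting cannot be dropped, recall the standard cumulative prospect theory evaluation of a gain-only prospect, stated here in generic notation rather than the specific formulas or parameters of the note: with outcomes ranked x_1 ≥ … ≥ x_n ≥ 0, probabilities p_1, …, p_n, value function v, and probability weighting function w,

\[
V \;=\; \sum_{i=1}^{n}\pi_i\,v(x_i),
\qquad
\pi_i \;=\; w\!\Big(\sum_{j=1}^{i}p_j\Big)-w\!\Big(\sum_{j=1}^{i-1}p_j\Big).
\]

With nonlinear w the decision weights \(\pi_i\) differ from the probabilities \(p_i\), so conclusions about the shape of v cannot be drawn from choices without taking w into account.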


[03.2] Köbberling, Veronika & Peter P. Wakker (2003) “Preference Foundations for Nonexpected Utility: A Generalized and Simplified Technique,Mathematics of Operations Research 28, 395-423. Background paper, used in the proofs.

This paper examines a tradeoff-consistency technique for testing and axiomatically founding decision models. The technique improves earlier tradeoff-consistency techniques by only considering indifferences, not strict preferences. The technical axioms used are mostly algebraic and not, as is more common, topological. The resulting foundations are, at the same time, more general and more accessible than earlier results, regarding both the technical and the intuitive axioms. The technique is applied to three popular theories of individual decision under uncertainty and risk, i.e., expected utility, Choquet expected utility, and prospect theory. The conditions used are better suited for empirical measurements of utility than earlier conditions, and accordingly are easier to test.




2002

[02.1] Diecidue, Enrico & Peter P. Wakker (2002) “Dutch Books: Avoiding Strategic and Dynamic Complications, and a Comonotonic Extension,Mathematical Social Sciences 43, 135-149.

This paper formalizes de Finetti's book-making principle as a static individual preference condition. It thus avoids the confounding strategic and dynamic effects of modern formulations that consider games with sequential moves between a bookmaker and a bettor. This paper next shows that the book-making principle, commonly used to justify additive subjective probabilities, can be modified to agree with nonadditive probabilities. The principle is simply restricted to comonotonic subsets which, as usual, leads to an axiomatization of rank-dependent utility theory. Typical features of rank-dependence such as hedging, ambiguity aversion, and pessimism and optimism can be accommodated. The model leads to suggestions for a simplified empirical measurement of nonadditive probabilities.


[02.2] Gilboa, Itzhak, David Schmeidler, & Peter P. Wakker (2002) “Utility in Case-Based Decision Theory,Journal of Economic Theory 105, 483-502.
Comments

This paper provides two axiomatic derivations of a case-based decision rule. Each axiomatization shows that, if preference orders over available acts in various contexts satisfy certain consistency requirements, then these orders can be numerically represented by maximization of a similarity-weighted utility function. In each axiomatization, both the similarity function and the utility function are simultaneously derived from preferences, and the axiomatic derivation also suggests a way to elicit these theoretical concepts from in-principle observable preferences. The two axiomatizations differ in the type of decisions that they assume as data.


[02.3] Wakker, Peter P. & Horst Zank (2002) “A Simple Preference-Foundation of Cumulative Prospect Theory with Power Utility,European Economic Review 46, 1253-1271.

Most empirical studies of rank-dependent utility and cumulative prospect theory have assumed power utility functions, both for gains and for losses. As it turns out, a remarkably simple preference foundation is possible for such models: Tail independence (a weakening of comonotonic independence that underlies all rank-dependent models) together with constant proportional risk aversion suffice, in the presence of common assumptions (weak ordering, continuity, and first-order stochastic dominance), to imply these models. Thus, sign dependence, the different treatment of gains and losses, and the separation of decision weights and utility are obtained free of charge.


[02.4] Wakker, Peter P. (2002) “Decision-Principles to Justify Carnap's Updating Method and to Suggest Corrections of Probability Judgments.In Adnan Darwiche & Nir Friedman (Eds.) Uncertainty in Artificial Intelligence, Proceedings of the Eighteenth Conference, 544-551, Morgan Kaufmann, San Francisco, CA.
Comments

This paper uses decision-theoretic principles to obtain new insights into the assessment and updating of probabilities. First, a new foundation of Bayesianism is given. It does not require infinite atomless uncertainties, as did Savage's classical result, and can therefore be applied to any finite Bayesian network. Nor does it require linear utility, as did de Finetti's classical result, and it therefore allows for the empirically and normatively desirable risk aversion. Further, by identifying and fixing utility in an elementary manner, our result can readily be applied to identify methods of probability updating. Thus, a decision-theoretic foundation is given to the computationally efficient method of inductive reasoning developed by Rudolf Carnap. Finally, recent empirical findings on probability assessments are discussed. These lead to suggestions for correcting biases in probability assessments, and for an alternative to the Dempster-Shafer belief functions that avoids the reduction to degeneracy after multiple updatings.




2001

[01.1] Bleichrodt, Han, Jose Luis Pinto, & Peter P. Wakker (2001) “Making Descriptive Use of Prospect Theory to Improve the Prescriptive Use of Expected Utility,Management Science 47, 1498-1514.
Comments. Data set.

This paper proposes a quantitative modification of standard utility elicitation procedures, such as the probability and certainty equivalence methods, to correct for commonly observed violations of expected utility. Traditionally, decision analysis assumes expected utility not only for the prescriptive purpose of calculating optimal decisions but also for the descriptive purpose of eliciting utilities. However, descriptive violations of expected utility bias utility elicitations. That such biases indeed affect elicitations became clear when systematic discrepancies were found between different utility elicitation methods that, under expected utility, should have yielded identical utilities. As it is not clear how to correct for these biases without further knowledge of their size or nature, most utility elicitations still calculate utilities by means of the expected utility formula. This paper speculates on the biases and their sizes by using the quantitative assessments of probability transformation and loss aversion suggested by prospect theory. It presents quantitative corrections for the probability and certainty equivalence methods. If interactive sessions to correct for biases are not possible, then we propose to use the corrected utilities rather than the uncorrected ones in prescriptions of optimal decisions. In an experiment, the discrepancies between the probability and certainty equivalence methods are removed by our proposal.
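As an illustrative sketch of the kind of correction meant here (generic notation; the actual corrections rest on specific parametric assessments of probability weighting and loss aversion): for a gain prospect paying x with probability p and 0 otherwise, the certainty equivalence method records a certainty equivalent CE, and

\[
\text{expected utility:}\quad U(CE)=p\,U(x),
\qquad
\text{prospect theory (gains):}\quad U(CE)=w(p)\,U(x).
\]

Inverting responses with p, when w(p) is the better descriptive model, biases the elicited utilities whenever w(p) differs from p; the correction amounts to inverting with an assumed weighting function w (and, for mixed prospects, a loss aversion parameter) instead.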


[01.2] Wakker, Peter P. (2001) “Testing and Characterizing Properties of Nonadditive Measures through Violations of the Sure-Thing Principle,Econometrica 69, 1039-1059.

In expected utility theory, risk attitudes are modeled entirely in terms of utility. In the rank-dependent theories, a new dimension is added: chance attitude, modeled in terms of nonadditive measures or nonlinear probability transformations that are independent of utility. Most empirical studies of chance attitude assume probabilities given and adopt parametric fitting for estimating the probability transformation. Only a few qualitative conditions have been proposed or tested as yet, usually quasi-concavity or quasi-convexity in the case of given probabilities. This paper presents a general method of studying qualitative properties of chance attitude such as optimism, pessimism, and the “inverse-S shape” pattern, both for risk and for uncertainty. These qualitative properties can be characterized by permitting appropriate, relatively simple, violations of the sure-thing principle. In particular, this paper solves a hitherto open problem: the preference axiomatization of convex (“pessimistic” or “uncertainty averse”) nonadditive measures under uncertainty. The axioms of this paper preserve the central feature of rank-dependent theories, i.e. the separation of chance attitude and utility.


[01.3] Diecidue, Enrico & Peter P. Wakker (2001) “On the Intuition of Rank-Dependent Utility,Journal of Risk and Uncertainty 23, 281-298.

Among the most popular models for decision under risk and uncertainty are the rank-dependent models, introduced by Quiggin and Schmeidler. Central concepts in these models are rank-dependence and comonotonicity. It has been suggested in the literature that these concepts are technical tools that have no intuitive or empirical content. This paper describes such contents. As a result, rank-dependence and comonotonicity become natural concepts upon which preference conditions, empirical tests, and improvements for utility measurement can be based. Further, a new derivation of the rank-dependent models is obtained. It is not based on observable preference axioms or on empirical data, but naturally follows from the intuitive perspective assumed. We think that the popularity of the rank-dependent theories is mainly due to the natural concepts adopted in these theories.


[01.4] De Waegenaere, Anja & Peter P. Wakker (2001) “Nonmonotonic Choquet Integrals,Journal of Mathematical Economics 36, 45-60.
Comments

This paper shows how the signed Choquet integral, a generalization of the regular Choquet integral, can model violations of separability and monotonicity. Applications to intertemporal preference, asset pricing, and welfare evaluations are discussed.
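For context, the regular Choquet integral being generalized can be written, for an act f taking values x_1 ≥ … ≥ x_n on events E_1, …, E_n with capacity \(\nu\) (notation here is generic, not the paper's), as

\[
\int f\,d\nu \;=\; \sum_{i=1}^{n} x_i\Big[\nu\big(E_1\cup\dots\cup E_i\big)-\nu\big(E_1\cup\dots\cup E_{i-1}\big)\Big].
\]

Roughly, the signed variant allows set functions that need not be monotone, which is what makes violations of monotonicity, and of separability, representable.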


[01.5] Post, Piet N., Anne M. Stiggelbout, & Peter P. Wakker (2001) “The Utility of Health States Following Stroke; a Systematic Review of the Literature,Stroke 32, 1425-1429.

Background and Purpose. To be able to perform decision analyses that include stroke as one of the possible health states, the utility of stroke states has to be determined. We reviewed the literature to obtain reliable estimates of the utility of stroke, and explored the impact of the study population from which the utility was assessed. Furthermore, these utilities were compared with those obtained by the EuroQol classification system.
Methods. We searched the Medline database on papers reporting empirical assessment of utilities. Mean utilities of major stroke (Rankin scale 4-5) and minor stroke (Rankin 2-3) were calculated, stratified by study population. Additionally, the modified Rankin scale was mapped onto the EuroQol classification system.
Results. Utilities were obtained from 15 papers. Patients at risk for stroke assigned utilities of 0.19 and 0.60 for major and minor stroke, respectively. Healthy participants assigned a higher utility to major stroke (0.35) but not to minor stroke (0.63). Stroke survivors assigned higher utilities to both major (0.51) and minor stroke (0.71). Much heterogeneity was found within the three types of study population. Differences in definitions of the health states seem to explain most of this variation. The EuroQol indicated a similar value for minor stroke but a value below zero for major stroke.
Conclusions. For minor stroke, a utility of 0.60 seems to be appropriate, both for decision analyses and cost-effectiveness studies. The utility of major stroke is more problematic and requires further investigation. It may range between 0 and 0.20, and may possibly even be negative.




2000

[00.1] Jansen, Sylvia J.T., Anne M. Stiggelbout, Peter P. Wakker, Marianne A. Nooij, E.M. Noordijk, & Job Kievit (2000) “Unstable Preferences: A Shift in Valuation or an Effect of the Elicitation Procedure?,Medical Decision Making 20, 62-71.
Awards:
Poster-1999 award of ISOQOL, Dutch MTA publication award of 2001, and INFORMS Decision-Analysis Society Publication Award of 2002.

Objective. Many studies suggest that impaired health states are valued more positively when experienced than when still hypothetical. We investigate to what extent discrepancies occur between hypothetical and actual value judgements and examine four possible causes of such discrepancies.
Patients and methods. Seventy breast cancer patients evaluated their actually experienced health state and a radiotherapy scenario before, during, and after post-operative radiotherapy. A chemotherapy scenario was evaluated as a control scenario. Utilities were elicited by means of a Visual Analog Scale (VAS), a Time Tradeoff (TTO), and a Standard Gamble (SG).
Results. The utilities of the radiotherapy scenario (0.89), evaluated before radiotherapy, and the actually experienced health state (0.92), evaluated during radiotherapy, were significantly different for the TTO (p <= 0.05). For the VAS and the SG, significant differences (p <= 0.01) were found between the radiotherapy scenario and the actually experienced health state, when both were evaluated during radiotherapy. The utilities of the radiotherapy scenario and the chemotherapy scenario remained stable over time.
Conclusion. Our results suggest that utilities for hypothetical scenarios remain stable over time but that utilities obtained through hypothetical scenarios may not be valid predictors of the value judgements of actually experienced health states. Discrepancies may be due to differences between the situations in question rather than to a change in evaluation of the same health state over time.


[00.2] Wakker, Peter P. (2000) “Dempster Belief Functions Are Based on the Principle of Complete Ignorance,International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 8, 271-284.

This paper shows that a “principle of complete ignorance” plays a central role in decisions based on Dempster belief functions. Such belief functions occur when, in a first stage, a random message is received and then, in a second stage, a true state of nature obtains. The uncertainty about the random message in the first stage is assumed to be probabilized, in agreement with the Bayesian principles. For the uncertainty in the second stage no probabilities are given. The Bayesian and belief function approaches part ways in the processing of uncertainty in the second stage. The Bayesian approach requires that this uncertainty also be probabilized, which may require a resort to subjective information. Belief functions follow the principle of complete ignorance in the second stage, which permits strict adherence to objective inputs.


[00.3] Sarin, Rakesh H. & Peter P. Wakker (2000) “Cumulative Dominance and Probabilistic Sophistication,Mathematical Social Sciences 40, 191-196.

Machina & Schmeidler (Econometrica, 60, 1992) gave preference conditions for probabilistic sophistication, i.e. decision making where uncertainty can be expressed in terms of (subjective) probabilities without commitment to expected utility maximization. This note shows that simpler and more general results can be obtained by combining results from qualitative probability theory with a “cumulative dominance” axiom.


[00.4] Wakker, Peter P. (2000) “Uncertainty Aversion: A Discussion of Critical Issues in Health Economics,Health Economics 9, 261-263.


[00.5] Wakker, Peter P. (2000) “Luce's Paradigm for Decision under Uncertainty.” Book Review of: R. Duncan Luce (2000) “Utility of Gains and Losses: Measurement-Theoretical and Experimental Approaches,” Lawrence Erlbaum Publishers, London; Journal of Mathematical Psychology 44, 488-493.


[00.6] Wakker, Peter P. (2000) Book Review of: Barberà, Salvador, Peter J. Hammond, & Christian Seidl (1998) “Handbook of Utility Theory, Vol. 1, Principles,” Kluwer Academic Publishers, Dordrecht; Journal of Economic Literature 38, 638-639.




1999

[99.1] Wakker, Peter P. & Horst Zank (1999) “State Dependent Expected Utility for Savage's State Space,Mathematics of Operations Research 24, 8-34.

This paper provides a state-dependent extension of Savage's expected utility when outcomes are real-valued (money, distance, etc.) and utility is increasing (or, equivalently, the “loss function” is decreasing). The first novelty concerns the very definition of the functional, which is not an integral. The existing results in the literature always invoke restrictive assumptions to reduce the functional to an integral, mostly by adding empirical primitives outside the realm of decision theory to allow for the identification of probability. A characterization in terms of preference conditions identifies the empirical content of our model; it amounts to a characterization of Savage's axiom system when the likelihood ordering axiom P4 is dropped. Bayesian updating of new information is still possible even while no prior probabilities are specified, suggesting that the sure-thing principle is at the heart of Bayesian updating. Prior probabilities simplify Bayesian updating, but are not essential.


[99.2] Wakker, Peter P. & Horst Zank (1999) “A Unified Derivation of Classical Subjective Expected Utility Models through Cardinal Utility,Journal of Mathematical Economics 32, 1-19.

Classical foundations of expected utility were provided by Ramsey, de Finetti, von Neumann & Morgenstern, Anscombe & Aumann, and others. These foundations describe preference conditions to capture the empirical content of expected utility. The assumed preference conditions, however, vary among the models and a unifying idea is not readily transparent. Providing such a unifying idea is the purpose of this paper. The mentioned derivations have in common that a cardinal utility index for outcomes, independent of the states and probabilities, can be derived. Characterizing that feature provides the unifying idea of the mentioned models.


[99.3] Chateauneuf, Alain & Peter P. Wakker (1999) “An Axiomatization of Cumulative Prospect Theory for Decision under Risk,Journal of Risk and Uncertainty 18, 137-145.
Comments

Cumulative prospect theory was introduced by Tversky and Kahneman so as to combine the empirical realism of their original prospect theory with the theoretical advantages of Quiggin's rank-dependent utility. Preference axiomatizations were provided in several papers. All those axiomatizations, however, only consider decision under uncertainty. No axiomatization has been provided as yet for decision under risk, i.e., when given probabilities are transformed. Providing the latter is the purpose of this note. The resulting axiomatization is considerably simpler than that for uncertainty.




1998

[98.1] Jansen, Sylvia J.T., Anne M. Stiggelbout, Peter P. Wakker, Thea P.M. Vliet Vlieland, Jan-Willem H. Leer, Marianne A. Nooy, & Job Kievit (1998) “Patient Utilities for Cancer Treatments: A Study of the Chained Procedure for the Standard Gamble and Time TradeOff,Medical Decision Making 18, 391-399.

Objective. Temporary health states cannot be measured in the traditional way by means of techniques such as the time tradeoff (TTO) and the standard gamble (SG), where health states are chronic and are followed by death. Chained methods have been developed to solve this problem. This study assesses the feasibility of a chained TTO and a chained SG, and the consistency and concordance between the two methods.
Patients and methods. Seventy female early-stage breast cancer patients were interviewed. In using both chained methods, the temporary health state to be evaluated was weighed indirectly with the aid of a temporary anchor health state. The patients were asked to evaluate their actual health states, a hypothetical radiotherapy scenario, and a hypothetical chemotherapy scenario.
Results. Sixty-eight patients completed the interview. The use of the anchor health state yielded some problems. A significant difference between the means of the TTO and the SG was found for the anchor health state only. For the other health states, the results were remarkably close, because the design avoided some of the bias effects in traditional measurements.
Conclusion. The feasibility and the consistency of the chained procedure were satisfactory for both methods. The problems regarding the anchor health state can be solved by adapting the methods and by the use of a carefully chosen anchor health state. The chained method avoids biases present in the conventional method, and thereby the TTO and the SG may be reconciled. Moreover, there are several psychological advantages to the method, which makes it useful for diseases with uncertain prognoses.


[98.2] Sarin, Rakesh K. & Peter P. Wakker (1998) “Revealed Likelihood and Knightian Uncertainty,Journal of Risk and Uncertainty 16, 223-250.

Nonadditive expected utility models were developed for explaining preferences in settings where probabilities cannot be assigned to events. In the absence of probabilities, difficulties arise in the interpretation of likelihoods of events. In this paper we introduce a notion of revealed likelihood that is defined entirely in terms of preferences and that does not require the existence of (subjective) probabilities. Our proposal is that decision weights rather than capacities are more suitable measures of revealed likelihood in rank-dependent expected utility models and prospect theory. Applications of our proposal to the updating of beliefs, to the description of attitudes towards ambiguity, and to game theory are presented.


[98.3] Sarin, Rakesh K. & Peter P. Wakker (1998) “Dynamic Choice and Nonexpected Utility,Journal of Risk and Uncertainty 17, 87-119.

This paper explores how some widely studied classes of nonexpected utility models could be used in dynamic choice situations. A new “sequential consistency” condition is introduced for single-stage and two-stage decision problems. Sequential consistency requires that if a decision maker has committed to a family of models (e.g., the rank dependent family, or the betweenness family) then he use the same family throughout. The conditions are presented under which dynamic consistency, consequentialism, and sequential consistency can be simultaneously preserved for a nonexpected utility maximizer. Each of the conditions is relevant in prescriptive decision making. We allow for cases where the exact sequence of decisions and events, and thus the dynamic structure of the decision problem, is relevant to the decision maker. In spite of this added flexibility of our analysis, our results show that nonexpected utility models can only be used in a considerably restrictive way in dynamic choice. A puzzling implication is that, for the currently most popular decision models (rank-dependent and betweenness), a departure from expected utility can only be made in either the first stage or the last stage of a decision tree. The results suggest either a development of new nonexpected utility models or a return to expected utility.


[98.4] Miyamoto, John M., Peter P. Wakker, Han Bleichrodt, & Hans J.M. Peters (1998) “The Zero-Condition: A Simplifying Assumption in QALY Measurement and Multiattribute Utility,Management Science 44, 839-849.

This paper studies the implications of the “zero-condition” for multiattribute utility theory. The zero-condition simplifies the measurement and derivation of the Quality Adjusted Life Year (QALY) measure commonly used in medical decision analysis. For general multiattribute utility theory, no simple condition has heretofore been found to characterize multiplicatively decomposable forms. When the zero-condition is satisfied, however, such a simple condition, “standard gamble invariance,” becomes available.


[98.5] Wakker, Peter P. (1998) “Non-EU and Insurance.” Book Review of: Christian Gollier & Mark J. Machina (Eds., 1995) “Non-Expected Utility and Risk Management,” Kluwer Academic Publishers; Journal of Behavioral Decision Making 11, 151-160.

The papers collected in this book, applying nonexpected utility theories to insurance, are reviewed. At the end of the review, the new insights are described that Tversky & Kahneman's (1992) cumulative prospect theory offers into the subjects of the various chapters.




1997

[97.1] Kahneman, Daniel, Peter P. Wakker, & Rakesh K. Sarin (1997) “Back to Bentham? Explorations of Experienced Utility,Quarterly Journal of Economics 112, 375-405.

Two core meanings of “utility” are distinguished. “Decision utility” is the weight of an outcome in a decision. “Experienced utility” is hedonic quality, as in Bentham's usage. Experienced utility can be reported in real time (instant utility), or in retrospective evaluations of past episodes (remembered utility). Psychological research has documented systematic errors in retrospective evaluations, which can induce a preference for dominated options. We propose a formal normative theory of the total experienced utility of temporally extended outcomes. Measuring the experienced utility of outcomes permits tests of utility maximization and opens other lines of empirical research.

[97.2] Sarin, Rakesh K. & Peter P. Wakker (1997) “A Single-Stage Approach to Anscombe and Aumann's Expected Utility,Review of Economic Studies 64, 399-409.
Comments

Anscombe and Aumann showed that if one accepts the existence of a physical randomizing device such as a roulette wheel then Savage's derivation of subjective expected utility can be considerably simplified. They, however, invoked compound gambles to define their axioms. We demonstrate that the subjective expected utility derivation can be further simplified and need not invoke compound gambles. Our simplification is obtained by closely following the steps by which probabilities and utilities are elicited.

[97.3] Fennema, Hein & Peter P. Wakker (1997) “Original and Cumulative Prospect Theory: A Discussion of Empirical Differences,Journal of Behavioral Decision Making 10, 53-64.
Data set has been lost.

This note discusses differences between prospect theory and cumulative prospect theory. It shows that cumulative prospect theory is not merely a formal correction of some theoretical problems in prospect theory, but it also gives different predictions. Experiments are described that favor cumulative prospect theory.

[97.4] Bleichrodt, Han, Peter P. Wakker, & Magnus Johannesson (1997) “Characterizing QALYs by Risk Neutrality,Journal of Risk and Uncertainty 15, 107-114.

This paper shows that QALYs can be derived from more elementary conditions than thought hitherto in the literature: it suffices to impose risk neutrality for life years in every health state. This derivation of QALYs is appealing because it does not require knowledge of concepts from utility theory such as utility independence; risk neutrality is a well-known condition. Therefore our axiomatization greatly facilitates the assessment of the normative validity of QALYs in medical decision making. Moreover, risk neutrality can easily be tested in experimental designs, which makes it straightforward to assess the descriptive (non)validity of QALYs.
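In a brief generic sketch (notation not taken from the paper): in the QALY model, a chronic health state q maintained for t years is valued U(q,t) = H(q)\,t. Risk neutrality for life years within a fixed health state q means that a gamble over durations is worth its expected duration,

\[
\big[(q,t_1)\ \text{with probability } p;\ (q,t_2)\ \text{with probability } 1-p\big]
\;\sim\;
\big(q,\;p\,t_1+(1-p)\,t_2\big),
\]

i.e., utility is linear in duration for every health state, which is exactly the structure the QALY form requires.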

[97.5] Stalmeier, Peep F.M., Peter P. Wakker, & Thom G.G. Bezembinder (1997) “Preference Reversals: Violations of Unidimensional Procedure Invariance,Journal of Experimental Psychology, Human Perception and Performance 23, 1196-1205.
Data set.

Preference reversals have usually been explained by weighted additive models, in which different tasks give rise to different importance weights for the stimulus attributes, resulting in contradictory tradeoffs. This article presents a preference reversal of a more extreme nature. Let (10, 5 Migr) denote living 10 years with a migraine for 5 days per week. Many participants preferred (10, 5 Migr) to (20, 5 Migr). However, when asked to equate these two options with a shorter period of good health, they usually demanded more healthy life years for (20, 5 Migr) than for (10, 5 Migr). This preference reversal within a single dimension cannot be explained by different importance weights and suggests irrationalities at a more fundamental level. Most participants did not change their responses after being confronted with their inconsistencies.

[97.6] Wakker, Peter P., Richard H. Thaler, & Amos Tversky (1997) “Probabilistic Insurance,Journal of Risk and Uncertainty 15, 7-28.
Data set has been lost.

Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in premium to compensate for a 1% default risk. These observations cannot be explained by expected utility in which the reduction in premium is approximately equal to the default risk, as we demonstrate under highly plausible assumptions about the utility function. However, the reluctance to buy probabilistic insurance is predicted by the weighting function of prospect theory. It appears that the purchase of insurance is driven primarily by the overweighting of small probabilities rather than by diminishing marginal utility.


1996

[96.1] Wakker, Peter P. & Daniel Deneffe (1996) “Eliciting von Neumann-Morgenstern Utilities when Probabilities Are Distorted or Unknown,Management Science 42, 1131-1150.
Typos. Data set. Jason Doctor's program for TO-utility measurement of life years.

This paper proposes a new method, the (gamble-)tradeoff method, for eliciting utilities in decision under risk or uncertainty. The elicitation of utilities, to be used in the expected utility criterion, turns out to be possible even if probabilities are ambiguous or unknown. A disadvantage of the tradeoff method is that a few more questions usually must be asked of clients. Also, the lotteries that are needed are somewhat more complex than in the certainty-equivalent method or in the probability-equivalent method. The major advantage of the tradeoff method is its robustness against probability distortions and misconceptions, which constitute a major cause of violations of expected utility and generate inconsistencies in utility elicitation. Thus the tradeoff method retains full validity under prospect theory, rank-dependent utility, and the combination of the two, i.e., cumulative prospect theory.
The tradeoff method is tested for monetary outcomes and for outcomes describing life-duration. We find higher risk aversion for life duration, but the tradeoff method elicits similar curvature of utility. Apparently the higher risk aversion for life duration is due to more pronounced deviations from expected utility.
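A minimal sketch of the tradeoff elicitation (generic notation; the exact stimuli and gauge outcomes used in the experiments may differ): fix two gauge outcomes R > r and an event E whose probability may be unknown or distorted, and ask the client for the outcomes x_1, x_2, … that produce the indifferences

\[
(x_1 \text{ if } E,\ r \text{ otherwise}) \;\sim\; (x_0 \text{ if } E,\ R \text{ otherwise}),
\qquad
(x_2 \text{ if } E,\ r \text{ otherwise}) \;\sim\; (x_1 \text{ if } E,\ R \text{ otherwise}),\ \dots
\]

Under expected utility, but also under rank-dependent utility and cumulative prospect theory (provided the rank position of the outcomes is kept fixed across questions), the same decision weight on E appears in every indifference, so those weights cancel and

\[
U(x_1)-U(x_0)=U(x_2)-U(x_1)=\dots,
\]

giving a sequence of outcomes equally spaced in utility units, whatever the probability distortions.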

[96.2] Chew, Soo Hong & Peter P. Wakker (1996) “The Comonotonic Sure-Thing Principle,Journal of Risk and Uncertainty 12, 5-27.

This article identifies the common characterizing condition, the comonotonic sure-thing principle, that underlies the rank-dependent direction in non-expected utility. This condition restricts Savage's sure-thing principle to comonotonic acts, and is characterized in full generality by means of a new functional form, cumulative utility, that generalizes the Choquet integral. Thus, a common generalization of all existing rank-dependent forms is obtained, including rank-dependent expected utility, Choquet expected utility, and cumulative prospect theory.

[96.3] Miyamoto, John & Peter P. Wakker (1996) “Multiattribute Utility Theory without Expected Utility Foundations,Operations Research 44, 313-326.
Comments.

Methods for determining the form of utilities are needed for the implementation of utility theory in specific decisions. An important step forward was achieved when utility theorists characterized useful parametric families of utilities, and simplifying decompositions of multiattribute utilities. The standard development of these results is based on expected utility theory which is now known to be descriptively invalid. The empirical violations of expected utility impair the credibility of utility assessments. This paper shows, however, that parametric and multiattribute utility results are robust against the major violations of expected utility. They retain their validity under non-expected utility theories that have been developed to account for actual choice behavior. To be precise, characterizations of parametric and multiattribute representations are extended to rank dependent utility, state dependent utility, Choquet expected utility, and prospect theory.
Added after publication: Our Theorems 1 and 2 were essentially already given as Theorems 4 and 3, respectively, in Udo Ebert (1988) “Measurement of Inequality: An Attempt at Unification and Generalization,” Social Choice and Welfare 5, 147-169. We discovered this only after publication.

[96.4] Peters, Hans J.M. & Peter P. Wakker (1996) “Cycle-Preserving Extension of Demand Functions to New Commodities,Journal of Mathematical Economics 25, 281-290.

A method is given to extend demand functions to new commodities under preservation of the cycle number, i.e., the minimal length of a preference cycle revealed by the demand function. Thus Gale's (Economica, N.S., 1960, 27, 348-354) demand function that shows that the weak axiom of revealed preference does not imply the strong axiom of revealed preference for three commodities can be extended to more than three commodities. Also Shafer's (Journal of Economic Theory, 1977, 16, 293-309) result, that arbitrarily high cycle numbers exist for three commodities, can now be extended to any number of commodities larger than three. This completely settles a question raised by Samuelson (Economica, N.S., 1953, 20, 1-9).

[96.5] Fennema, Hein & Peter P. Wakker (1996) “A Test of Rank-Dependent Utility in the Context of Ambiguity,Journal of Risk and Uncertainty 13, 19-35.
Data set.

Experimental investigations of nonexpected utility have primarily concentrated on decision under risk (“probability triangles”). The literature suggests, however, that ambiguity is one of the main causes for deviations from expected utility. This paper investigates the descriptive performance of rank-dependent utility in the context of choice under ambiguity. We use the axiomatic difference between rank-dependent utility and expected utility to critically test the two models against each other. Surprisingly, rank-dependent utility does not provide any descriptive improvement over expected utility. Our data suggest other, “framing,” factors that do provide descriptive improvements over expected utility.

[96.6] Wakker, Peter P. (1996) “A Criticism of Healthy-Years Equivalents,Medical Decision Making 16, 207-214.

The following questions describe the scope of this paper. When decision trees are used to analyze optimal decisions, should end nodes be evaluated on the basis of QALYs or on the basis of healthy-years equivalents? Which measures should be used in communication with others, e.g., patients? Which of these measures incorporate risk attitudes, and which do not? It is demonstrated that the healthy-years equivalent measure does not stand scrutiny.

[96.7] Deneffe, Daniel & Peter P. Wakker (1996) “Mergers, Strategic Investments and Antitrust Policy,Managerial and Decision Economics 17, 231-240.

Established firms can diversify into new markets in two distinct modes: through internal development or through conglomerate merger. Building on a dynamic three-stage bargaining model with variable threats, this paper shows that a lenient antitrust position toward horizontal mergers can induce established firms that would otherwise not have entered to enter via conglomerate merger. The vigor of antitrust enforcement toward horizontal mergers also affects the conglomerate acquisition price but it does not influence the choice of entry mode. Finally, the paper brings to light a heretofore neglected avenue through which conglomerate mergers can increase welfare.

[96.8] Wakker, Peter P. (1996) “The Sure-Thing Principle and the Comonotonic Sure-Thing Principle: An Axiomatic Analysis,Journal of Mathematical Economics 25, 213-227.

This paper compares classical expected utility with the more general rank-dependent utility models. It shows that the difference between the sure-thing principle for preferences of expected utility and its comonotonic generalization in rank-dependent utility provides the exact demarcation between expected utility and rank-dependent models.

[96.9] Wakker, Peter P. (1996) “Time Preference.” Book Review of: George F. Loewenstein & John Elster (1992) “Choice over Time,” Russell Sage Foundation, New York; Journal of Behavioral Decision Making 9, 297-303.


[96.10] Wakker, Peter P. (1996) Book Review of: Patrick Rivett (1994) “The Craft of Decision Modelling,” Wiley, New York; Journal of Behavioral Decision Making 9, 150-151.




1995

[95.1] Fishburn, Peter C. & Peter P. Wakker (1995) “The Invention of the Independence Condition for Preferences,Management Science 41, 1130-1144.
Explanation that Samuelson (1940) was the first to write an independence-type (separability) condition.

This paper discusses the history and interrelations of three central ideas in preference theory: the independence condition in decision making under risk, the sure-thing principle in decision making under uncertainty, and independence in consumer theory. Independence and the sure-thing principle are equivalent for decision making under risk, but in a less elementary way than has sometimes been thought. The sure-thing principle can be formulated, more generally, for decision making under uncertainty. In a mathematical sense, the sure-thing principle and independence in consumer theory are identical.
Independence was recognized as an important component of decision making under risk in the late 1940s by Jacob Marschak, John Nash, Herman Rubin and Norman Dalkey, and first appeared in publication in Marschak (1950) and Nash (1950). The sure-thing principle can be credited to Savage (1953, 1954). Independence for consumer theory was introduced by Sono (1943) and Leontief (1947a,b); a form of it can also be recognized in Samuelson (1947, Formula 32).
      The mathematics of the above results had been known before. The independence condition for decision making under risk can be recognized in the characterization of “associative means,” the independence condition of consumer theory in the solutions to the “generalized associativity functional equation.”

[95.2] Maas, Arne, Thom G.G. Bezembinder & Peter P. Wakker (1995) “On Solving Intransitivities in Repeated Pairwise Choices,Mathematical Social Sciences 29, 83-101.
Data set has been lost.

A method is presented to transform intransitive, possibly incomplete, preferences between objects into a transitive ordering. In most cases the method provides a unique solution which is easily computed, even when many objects are involved. The method takes into account information about stability or intensity of preferences. Preferences are represented as arcs in a digraph. The number of arc reversals that form the solution often coincides with the minimal number of arc reversals known as Slater's i. A Monte Carlo study is reported that strongly supports the method. The method is shown to require polynomial computation time.

[95.3] Wakker, Peter P. & Anne M. Stiggelbout (1995) “Explaining Distortions in Utility Elicitation through the Rank-Dependent Model for Risky Choices,Medical Decision Making 15, 180-186.

The standard gamble (SG) method has been accepted as the gold standard for the elicitation of utility when risk or uncertainty is involved in decisions, and thus for the measurement of utility in medical decisions. Unfortunately, the SG method is distorted by a general dislike for gambles, the “gambling effect,” leading to an overestimation of risk aversion and of utility of impaired health. This problem does not occur for visual analogue scales or the time tradeoff method. For risky decisions, however, the latter methods lack validity. This paper shows how “rank-dependent utility” theory, a newly developed theory in the decision science literature, can provide a new explanation for the gambling effect. Thus it provides a means to correct the SG method and to improve the assessments of quality adjusted life years for medical decisions in which there is uncertainty about outcomes.

[95.4] Wakker, Peter P. & Marc P. Klaassen (1995) “Confidence Intervals for Cost/Effectiveness Ratios,” Health Economics 4, 373-381.

The reduction of costs is becoming increasingly important in the medical field. The relevant topic of many clinical trials is not effectiveness per se, but rather cost-effectiveness ratios. Strangely enough, no statistical tools for analyzing cost-effectiveness ratios have been provided in the medical literature yet. This paper explains the gap in the literature, and provides a first technique for obtaining confidence intervals for cost-effectiveness ratios. The technique does not use sophisticated tools to achieve maximal optimality, but seeks tractability and ease of application, while still satisfying all formal statistical requirements.

[95.5] Tversky, Amos & Peter P. Wakker (1995) “Risk Attitudes and Decision Weights,” Econometrica 63, 1255-1280.
comment here

To accommodate the observed pattern of risk-aversion and risk-seeking, as well as common violations of expected utility (e.g., the certainty effect), we introduce and characterize a weighting function according to which an event has greater impact when it turns impossibility into possibility, or possibility into certainty, than when it merely makes a possibility more or less likely. We show how to compare such weighting functions (of different individuals) with respect to the degree of departure from expected utility, and we present a method for comparing an individual's weighting functions for risk and for uncertainty.

[95.6] Timmermans, Danielle R.M., Peter Politser, & Peter P. Wakker (1995) “Aggregation, Rationality, and Risk Communication: Three Current Debates in Medical Decision Making.” In Jean-Paul Caverni, Maya Bar-Hillel, Francis Hutton Barron, & Helmut Jungermann (Eds.) Contributions to Decision Making - I, 111-117, Elsevier Science, Amsterdam.




1994

[94.1] Wakker, Peter P. (1994) “Separating Marginal Utility and Probabilistic Risk Aversion,” Theory and Decision 36, 1-44.
Correction.

This paper is motivated by the search for one cardinal utility for decisions under risk, welfare evaluations, and other contexts. This cardinal utility should have meaning prior to risk, with risk depending on cardinal utility, not the other way around. The rank-dependent utility model can reconcile such a view on utility with the position that risk attitude consists of more than marginal utility, by providing a separate risk component: a “probabilistic risk attitude” towards probability mixtures of lotteries, modeled through a transformation for cumulative probabilities. While this separation of risk attitude into two independent components is the characteristic feature of rank-dependent utility, it had not yet been axiomatized. Doing that is the purpose of this paper. Therefore, in its second part, the paper extends Yaari's axiomatization to nonlinear utility, and provides separate axiomatizations for increasing/decreasing marginal utility and for optimistic/pessimistic probability transformations. This is generalized to interpersonal comparability. It is also shown that two elementary and often-discussed properties, quasi-convexity (“aversion”) of preferences with respect to probability mixtures, and convexity (“pessimism”) of the probability transformation, are equivalent.

[94.2] Wakker, Peter P., Ido Erev, & Elke U. Weber (1994) “Comonotonic Independence: The Critical Test between Classical and Rank-Dependent Utility Theories,” Journal of Risk and Uncertainty 9, 195-230.
Typos. Data set.

This paper compares classical expected utility with the more general rank-dependent utility models. First we show that it is the difference between the independence condition for preferences of expected utility and its comonotonic generalization in rank-dependent utility, that provides the exact demarcation between the two models. Other axiomatic differences are not essential. Next an experimental design is described that tests this difference between independence and comonotonic independence in its most basic form and is robust against violations of other assumptions that may confound the results, in particular the reduction principle, transitivity, and independence of utility from events. It is well-known that in the classical counterexamples to expected utility, comonotonic independence performs better than full-force independence. In general, however, we find that comonotonic independence does not perform better. This is contrary to our prior expectation and suggests that the rank-dependent models, in full generality, do not provide a descriptive improvement over expected utility. For the future of the rank-dependent models, submodels and phenomena must be identified for which rank-dependence does contribute descriptively.

[94.3] Sarin, Rakesh K. & Peter P. Wakker (1994) “Folding Back in Decision Tree Analysis,” Management Science 40, 625-628.

This note demonstrates that two minimal requirements of decision tree analysis, the folding back procedure and the interchangeability of consecutive chance nodes, imply independence.

[94.4] Sarin, Rakesh K. & Peter P. Wakker (1994) “A General Result for Quantifying Beliefs,” Econometrica 62, 683-685. extended version

This paper presents conditions under which a person's beliefs about the occurrence of uncertain events are quantified by a capacity measure, i.e., a nonadditive probability. Additivity of probability is violated in a large number of applications where probabilities are vague or ambiguous due to lack of information.
The key feature of the theory presented in this paper is a separation of the derivation of capacities for events from a specific choice model. This is akin to eliciting a probability distribution for a random variable without committing to a specific decision model. Conditions are given under which Choquet expected utility, the Machina-Schmeidler probabilistically sophisticated model, and subjective expected utility can be derived as special cases of our general model.

[94.5] Peters, Hans J.M. & Peter P. Wakker (1994) “WARP Does not Imply SARP for More Than Two Commodities,” Journal of Economic Theory 62, 152-160.

The only examples available in the literature to show that the Weak Axiom of Revealed Preference does not imply the Strong Axiom of Revealed Preference, the examples of Gale and Shafer, apply only to the case of three commodities. This paper constructs examples for four or more commodities.

[94.6] Maas, Arne & Peter P. Wakker (1994) “Additive Conjoint Measurement for Multiattribute Utility,” Journal of Mathematical Psychology 38, 86-101.

This paper shows that multiattribute utility can be simplified by methods from additive conjoint measurement. Given additive conjoint measurability under certainty, axiomatizations can be simplified, and implementation and reliability of elicitation can be improved. This also contributes to the discussion of (non-)identity of utility under certainty and under uncertainty. Examples are obtained where two such utilities must be different on normative grounds. In establishing this difference, a classical meaningfulness pitfall is to be avoided. Finally, additive conjoint measurement can be used to refine the measurement of QALYs (quality adjusted life years) in medical decision making; this is demonstrated by an experiment.

[94.7] Quiggin, John & Peter P. Wakker (1994) “The Axiomatic Basis of Anticipated Utility; A Clarification,” Journal of Economic Theory 64, 486-499.

Quiggin (1982) introduced anticipated (“rank-dependent”) utility theory into decision making under risk. Questions have been raised about mathematical aspects in Quiggin's analysis. This paper settles these questions and shows that a minor modification of Quiggin's axioms leads to a useful and correct result, with features not found in other recent axiomatizations.

[94.8] Sarin, Rakesh K. & Peter P. Wakker (1994) “Gains and Losses in Nonadditive Expected Utility.” In Mark J. Machina & Bertrand R. Munier (Eds.) Models and Experiments on Risk and Rationality, 157-172, Kluwer Academic Publishers, Dordrecht.

This paper provides a simple approach for deriving cumulative prospect theory. The key axiom is a cumulative dominance axiom which requires that a prospect be judged more attractive if in it greater gains are more likely and greater losses are less likely. In the presence of this cumulative dominance, once a model is satisfied on a “sufficiently rich” domain, it holds everywhere. This leads to highly transparent results.

[94.9] Fennema, Hein & Peter P. Wakker (1994) “An Explanation and Characterization for the Buying of Lotteries.” In Sixto Rios (Ed.) Decision Theory and Decision Analysis: Trends and Challenges, 163-175, Kluwer Academic Publishers, Dordrecht. Correction

Popular lotteries typically give a very small probability to win a large prize and a moderate chance to win a smaller prize. In this paper a rank-dependent model is axiomatized with an S-shaped weighting function, capable of accounting for the popularity of these lotteries. Also, the role of utility, loss aversion, and scale compatibility in explaining the buying of lotteries is discussed.

[94.10] Wakker, Peter P. (1994) “Expected versus Nonexpected Utility: The State of the Art.” Book Review of: Ward Edwards (ed., 1992) “Utility measurements and Applications,” Kluwer Academic Publishers, Dordrecht; Journal of Mathematical Psychology 38, 521-524.


[94.11] Wakker, Peter P. (1994) “Quiggin's Rank-Dependent Model.” Book Review of: John Quiggin (1993) “Generalized Expected Utility Theory: The Rank-Dependent Model,” Kluwer Academic Press, Dordrecht; Journal of Mathematical Psychology 38, 525-526.




1993

[93.1] Wakker, Peter P. (1993) “Additive Representations on Rank-Ordered Sets II. The Topological Approach,” Journal of Mathematical Economics 22, 1-26.

Additive representation theory on subsets of Cartesian products has characteristics different from additive representation theory on full Cartesian products. This paper describes the difficulties that can arise on subsets. These difficulties have been underestimated in the literature. For the special case of rank-ordered subsets of Cartesian products this paper obtains characterizations of additive representations. These results can be applied in the modern rank-dependent approaches to decision making under risk/uncertainty, and to generalizations of the Gini index in the measurement of inequality.

[93.2] Wakker, Peter P. (1993) “Counterexamples to Segal's Measure Representation Theorem,” Journal of Risk and Uncertainty 6, 91-98.

This article discusses relations between several notions of continuity in rank-dependent utility, and in the generalized version of rank-dependent utility as initiated by Segal. Primarily, examples are given to show logical independencies between these notions of continuity. This also leads to counterexamples to Segal's (1989) characterizing theorem 1.

[93.3] Chew, Soo Hong, Larry G. Epstein, & Peter P. Wakker (1993) “A Unifying Approach to Axiomatic Non-Expected Utility Theories: Correction and Comment,” Journal of Economic Theory 59, 183-188.

Chew and Epstein attempted to provide a unifying axiomatic framework for a number of generalizations of expected utility theory. Wakker pointed out that Theorem A, on which the central unifying proposition is based, is false. In this note, we apply Segal's result to prove that Theorem 2 is nevertheless valid with minor modifications.

[93.4] Wakker, Peter P. (1993) “Clarification of some Mathematical Misunderstandings about Savage's Foundations of Statistics, 1954,” Mathematical Social Sciences 25, 199-202.

This note discusses some mathematical misunderstandings about Savage (1954). It is shown that in his model the probability measure cannot be countably additive, that the set of events must be a sigma-algebra and not just an algebra, that Savage did not characterize all atomless finitely additive probability measures, and that the state space in his model, while infinite, does not have to be uncountable.

[93.5] Wakker, Peter P. (1993) “Unbounded Utility for Savage's “Foundations of Statistics,” and other Models,” Mathematics of Operations Research 18, 446-485.

A general procedure for extending finite-dimensional additive-like representations to infinite-dimensional integral-like representations is developed by means of a condition called truncation-continuity. The restriction of boundedness of utility, met throughout the literature, can now be dispensed with, and for instance normal distributions, or any other distribution with finite first moment, can be incorporated. Classical representation results of expected utility, such as Savage (1954), von Neumann & Morgenstern (1944), Anscombe & Aumann (1963), de Finetti (1937), and many others, can now be extended. The results are generalized to Schmeidler's approach with nonadditive measures and Choquet integrals. The different approaches have been brought together in this long paper to bring to the fore the unity in the extension process.

[93.6] Wakker, Peter P. (1993) “Savage's Axioms Usually Imply Violation of Strict Stochastic Dominance,” Review of Economic Studies 60, 487-493.
This paper is a persiflage: elucidation.

Contrary to common belief, Savage's axioms do not guarantee strict stochastic dominance; rather, they usually necessitate its violation. Violation occurs as soon as the range of the utility function is rich enough, e.g. contains an interval. Also an example is given where all of Savage's axioms are satisfied, but still strict monotonicity is violated: The decision maker is willing to exchange an act for another act that with certainty will result in a strictly worse outcome. Thus a book can be made against the decision maker. Weak stochastic dominance and weak monotonicity are always satisfied, as well as strict stochastic dominance and strict monotonicity when restricted to acts with finitely many outcomes.

[93.7] Wakker, Peter P. & Amos Tversky (1993) “An Axiomatization of Cumulative Prospect Theory,” Journal of Risk and Uncertainty 7, 147-176.
Typos.

This paper presents a method for axiomatizing a variety of models for decision making under uncertainty, including Expected Utility and Cumulative Prospect Theory. This method identifies, for each model, the situations that permit consistent inferences about the ordering of value differences. Examples of rank-dependent and sign-dependent preference patterns are used to motivate the models and the “tradeoff consistency” axioms that characterize them. The major properties of the value function in Cumulative Prospect Theory --diminishing sensitivity and loss aversion-- are contrasted with the principle of diminishing marginal utility that is commonly assumed in Expected Utility.

[93.8] Chateauneuf, Alain & Peter P. Wakker (1993) “From Local to Global Additive Representation,” Journal of Mathematical Economics 22, 523-545.

This paper studies continuous additive representations of transitive preferences on connected subdomains of product sets. Contrary to what has sometimes been thought, local additive representability does not imply global additive representability. It is shown that the result can nevertheless be established under some additional connectedness conditions. This generalizes previous results on additive representations on (subsets of) product sets.

[93.9] Jaffray, Jean-Yves & Peter P. Wakker (1993) “Decision Making with Belief Functions: Compatibility and Incompatibility with the Sure-Thing Principle,” Journal of Risk and Uncertainty 7, 255-271.

This paper studies situations in which information is ambiguous, and only part of it can be probabilized. It is shown that the information can be modeled through belief functions if and only if the nonprobabilizable information is subject to the principles of complete ignorance. Next the representability of decisions by belief functions on outcomes is justified by means of a neutrality axiom. The natural weakening of Savage's sure-thing principle to unambiguous events is examined, and its implications for decision making are identified.


1992

[92.1] Sarin, Rakesh K. & Peter P. Wakker (1992) “A Simple Axiomatization of Nonadditive Expected Utility,” Econometrica 60, 1255-1272.
Typos.

This paper provides an extension of Savage's subjective expected utility theory for decisions under uncertainty. It includes in the set of events both unambiguous events, for which probabilities are additive, and ambiguous events, for which probabilities are permitted to be nonadditive. The main axiom is cumulative dominance, which adapts stochastic dominance to decision making under uncertainty. We derive a Choquet expected utility representation and show that a modification of cumulative dominance leads to the classical expected utility representation. The relationship of our approach with that of Schmeidler, who uses a two-stage formulation to derive Choquet expected utility, is also explored.

[92.2] Wakker, Peter P. (1992) “Characterizing Stochastically Monotone Functions by Multi-Attribute Utility Theory,” Economic Theory 2, 565-566.

This note relates a recent result of Bergin and Brandenburger (1990) on stochastically monotone functions to well-known results from multi-attribute utility theory. In that theory, many generalizations can be found.


1991

[91.1] Wakker, Peter P. (1991) “Additive Representation for Equally Spaced Structures,” Journal of Mathematical Psychology 35, 260-266.

It is shown that additive conjoint measurement theory can be considerably generalized and simplified in the equally spaced case.

[91.2] Wakker, Peter P. (1991) “Continuity of Transformations,” Journal of Mathematical Analysis and Applications 162, 1-6.

Let u be a continuous function from Re to Re. Let f be a transformation (which in our terminology does not have to be bijective) from the range of u to Re. v = f(u(.)) is the composition of f and u. It is well-known that continuity of f implies continuity of v. We consider the reversed question: Does continuity of v imply continuity of f? Elementary as this question may be, we did not find a place in the literature where the answer is given. In fact it is our experience that the probability that a mathematician at first sight will gamble on the wrong answer is an increasing function of his familiarity with elementary analysis, and is always above 1/2. This paper will answer the reversed question above, in a somewhat more general setting, and give applications.

[91.3] Wakker, Peter P. (1991) “Additive Representations on Rank-Ordered Sets. I. The Algebraic Approach,” Journal of Mathematical Psychology 35, 501-531.

This paper considers additive conjoint measurement on subsets of Cartesian products containing “rank-ordered” n-tuples. Contrary to what has often been thought, additive conjoint measurement on subsets of Cartesian products has characteristics different from additive conjoint measurement on full Cartesian products.

[91.4] Peters, Hans J.M. & Peter P. Wakker (1991) “Independence of Irrelevant Alternatives and Revealed Group Preferences,” Econometrica 59, 1787-1801. Reprinted in William Thomson (2010, ed.) “Bargaining and the Theory of Cooperative Games: John Nash and Beyond,” Ch. 4, Edward Elgar Publishing, Northampton, MA.

It is shown that a Pareto optimal and continuous single-valued choice function defined on the compact convex subsets of the positive orthant of the n-dimensional Euclidean space maximizes a real-valued function if and only if it satisfies the independence of irrelevant alternatives condition if n=2, and the strong axiom of revealed preference otherwise. The results can be applied to consumer demand theory to deal with nonlinear budget sets, and to bargaining game theory to generalize the Nash bargaining solution.

[91.5] Wakker, Peter P. (1991) “Additive Representations of Preferences, A New Foundation of Decision Analysis; The Algebraic Approach.” In Jean-Paul Doignon & Jean-Claude Falmagne (Eds.) Mathematical Psychology: Current Developments, 71-87, Springer, Berlin.

In Wakker (1989, “Additive Representations of Preferences, A New Foundation of Decision Analysis”), a new foundation of decision analysis was given. The main tool was a way to derive comparisons of “tradeoffs” from ordinal preferences, with comparisons of tradeoffs revealing orderings of utility differences. These comparisons of tradeoffs underlie the construction of standard sequences in conjoint measurement theory. The restrictive structural assumption (every approach has its restrictive structural assumption) was of a topological nature, requiring continuity. This paper adapts the main results of Wakker (1989) to the algebraic approach, where a solvability condition is required which is less restrictive than continuity.


1990

[90.1] Wakker, Peter P. (1990) “A Behavioral Foundation for Fuzzy Measures,” Fuzzy Sets and Systems 37, 327-350.

In Savage (1954) a behavioral foundation was given for subjective probabilities, to be used in the maximization of expected utility. This paper analogously gives a behavioral foundation for fuzzy measures, to be used in the maximization of the Choquet-integral. This opens the way to empirical verification or falsification of fuzzy measures, and frees them of their ad hoc character.

[90.2] Wakker, Peter P. (1990) “Characterizing Optimism and Pessimism Directly through Comonotonicity,” Journal of Economic Theory 52, 453-463.

Schmeidler (1982) introduced comonotonic independence to characterize nonadditivity of probabilities. This note obtains natural and very simple characterizations of convexity, concavity (and additivity) of nonadditive probabilities, by relating these directly to the pessimism and optimism inherent in comonotonicity.

[90.3] Wakker, Peter P. (1990) “Under Stochastic Dominance Choquet-Expected Utility and Anticipated Utility are Identical,” Theory and Decision 29, 119-132.

The aim of this paper is to convince the reader that Choquet-expected utility, as initiated by Schmeidler (1982) for decision making under uncertainty, when formulated for decision making under risk naturally leads to Yaari (1987)'s anticipated utility. Thus the two generalizations of expected utility in fact are one.

[90.4] Bezembinder, Thom G.G. & Peter P. Wakker (1990) Review of Chapter 2.10 of Richard C. Atkinson, R.J. Herrnstein, Gardner E. Lindzey, & R. Duncan Luce (1988, Eds.), “Stevens' Handbook of Experimental Psychology” (Wiley, New York), Acta Psychologica 75, 193-194.


[90.5] Peters, Hans J.M. & Peter P. Wakker (1990) “Independence of Irrelevant Alternatives and Revealed Group Preferences” (Extended abstract). In Tatsuro Ichiishi, Abraham Neyman, & Yair Tauman (Eds.) Game Theory and Applications, 404-406, Academic Press, New York. (The full paper corresponding to this extended abstract is no. 91.4.)

It is shown that a Pareto optimal and continuous single-valued choice function defined on the compact convex subsets of the positive orthant of the plane maximizes a real-valued function if and only if it satisfies the independence of irrelevant alternatives condition. Further, this real-valued function must be strongly quasiconcave. The result can be applied to consumer demand theory to deal with nonlinear budget sets, and to bargaining game theory to generalize the Nash bargaining solution.

[90.6] Schmeidler, David & Peter P. Wakker (1990) “Expected Utility and Mathematical Expectation.” In John Eatwell, Murray Milgate, & Peter Newman (Eds.) Utility and Probability. The New Palgrave, 70-78, the MacMillan Press, London.   (Reprint of paper no. 87.3)

In a Bayesian vein the history of, and the ideas behind, expected utility and mathematical expectation are presented at an elementary level.




1989

[89.1] Wakker, Peter P. (1989) “Continuous Subjective Expected Utility with Nonadditive Probabilities,” Journal of Mathematical Economics 18, 1-27.

A well-known theorem of Debreu about additive representations of preferences is applied in a nonadditive context, to characterize continuous subjective expected utility maximization for the case where the probability measures may be nonadditive. The approach of this paper does not need the assumption that lotteries with known (objective) probability distributions over consequences are available.

[89.2] Wakker, Peter P. (1989) “A Graph-Theoretic Approach to Revealed Preference,” Methodology and Science 22, 53-66.

One of the issues in the impossibility theorem of Arrow is the difference between choice behaviour, as considered by Arrow in most of the illustrations for the conditions in his theorem, and binary relations as dealt with in Arrow's theorem. The relations between choice behaviour and binary relations are studied in revealed preference theory, a theory which originates from consumer demand theory. This paper presents a graph-theoretic approach to revealed preference theory. This is done by considering alternatives as vertices, and choice situations as arcs. By means of this method alternative proofs are obtained for some known results. In particular it is shown that many results from the literature can be derived from what may be the main result of revealed preference theory, a theorem of Richter (1966). Next a duality approach is sketched, where vertices and arcs are interchanged as done in dual graph theory. Finally some results are given for non-transitive binary relations, in which interest is increasing because of Arrow's theorem.

[89.3] Wakker, Peter P. (1989) “Subjective Expected Utility with Non-Increasing Risk Aversion,” Annals of Operations Research 19, 219-228.

It is shown that assumptions about risk aversion, usually studied under the presupposition of expected utility maximization, have a surprising extra merit at an earlier stage of the measurement work: together with the sure-thing principle, these assumptions imply subjective expected utility maximization, for monotonic continuous weak orders.

[89.4] Wakker, Peter P. (1989) “Transforming Probabilities without Violating Stochastic Dominance.” In Edward E.Ch.I. Roskam (Ed.) Mathematical Psychology in Progress, 29-47, Springer, Berlin.

The idea of expected utility, to transform payments into their utilities before calculating expectation, traces back at least to Bernoulli (1738). It is a very natural idea to transform, analogously, probabilities. This paper gives heuristic visual arguments to show that the way of doing this that seems natural at first sight is, on second thought, questionable. A sound and natural way is the one indicated by Quiggin (1982) and Yaari (1987a).

[89.5] Wakker, Peter P. (1989) “Additive Representations of Preferences, A New Foundation of Decision Analysis.” Kluwer Academic Publishers, Dordrecht.
Comments How to buy
Pdf file(s) of the book (if you have legal access to Springer)

Rigorous preference foundations are given for Savage's (1954) subjective expected utility, Gilboa's (1987) and Schmeidler's (1989) Choquet expected utility (“rank-dependence”), and other models, using a general tradeoff technique for analyzing cardinal (interval-scale) utility, and based on the economically realistic assumption of a continuum of outcomes.


1988

[88.1] Wakker, Peter P. (1988) “Nonexpected Utility as Aversion of Information,” Journal of Behavioral Decision Making 1, 169-175.
(Discussion in Journal of Behavioral Decision Making 2, 1989, 197-202.)

This paper argues that a violation of the independence condition, (to be) used in the characterization of expected utility maximization by von Neumann and Morgenstern, will always lead to the existence of choice situations in which information-aversion is exhibited.

[88.2] Wakker, Peter P. (1988) “Continuity of Preference Relations for Separable Topologies,” International Economic Review 29, 105-110.

A preference relation is shown to be continuous with respect to some separable topology, if and only if the preference relation is embeddable in the product set of Re and {0,1}, endowed with the lexicographic ordering. This result is used as the starting point to obtain alternative proofs for some representation theorems of consumer theory.

[88.3] Wakker, Peter P. (1988) “The Algebraic versus the Topological Approach to Additive Representations,” Journal of Mathematical Psychology 32, 421-435.
Typos.

It is proved that, under a nontriviality assumption, an additive function on a Cartesian product of connected topological spaces is continuous, whenever the preference relation, represented by this function, is continuous. The result is used to generalize a theorem of Debreu (1960) on additive representations, and to argue that the algebraic approach of Krantz, Luce, Suppes, & Tversky (1971, Foundations of Measurement) to additive conjoint measurement is preferable to the more customary topological approach. Applications are given to the representation of strength of preference relations, and to the characterization of subjective expected utility maximization.

[88.4] Wakker, Peter P. (1988) “Derived Strength of Preference Relations on Coordinates,” Economics Letters 28, 301-306.

A way is indicated to derive strengths of preference on coordinate sets from solely an ordinal preference relation on a Cartesian product. These strengths of preference are then used to reformulate several well-known properties of preference relations on Cartesian products, and to make their meaning more transparent. A new result for dynamic contexts is given.

[88.5] Wakker, Peter P. (1988) “Characterizations of Quasilinear Representing Functions, and Specified Forms of These.” In Wolfgang Eichhorn (Ed.) Measurement in Economics (Theory and Applications of Economic Indices), 311-326, Physica-Verlag, Heidelberg.

In this paper two versions of equivalence independence for binary relations on Cartesian products are introduced to characterize special kinds of representing functions. The obtained results are used to characterize quasilinear economic indexes, and several specified forms of these.


1987

[87.1] Peters, Hans J.M. & Peter P. Wakker (1987) “Convex Functions on Non-Convex Domains,” Economics Letters 22, 251-255.

It is shown that a convex function, defined on an arbitrary, possibly finite, subset of a linear space, can be extended to the whole space. An application to decision making under risk is given.

[87.2] Wakker, Peter P. (1987) “Subjective Probabilities for State-Dependent Continuous Utility,” Mathematical Social Sciences 14, 289-298.

For the expected utility model with state dependent utilities, Karni, Schmeidler & Vind (1983, Econometrica) show how to recover uniquely the involved subjective probabilities if the preferences, contingent on a hypothetical probability distribution over the state space, are known. This they do for consequence spaces, consisting of lotteries on sets of prizes. This paper adapts their work to consequence spaces that are connected topological spaces, without using lotteries on them. For example, consequences may now be money or commodity bundles.

[87.3] Schmeidler, David & Peter P. Wakker (1987) “Expected Utility and Mathematical Expectation.” In John Eatwell, Murray Milgate, & Peter Newman (Eds.) The New Palgrave: A Dictionary of Economics, 229-232, the MacMillan Press, London.
  1990 reprint of same text with repagination and nicer layout

In a Bayesian vein the history of, and the ideas behind, expected utility and mathematical expectation are presented at an elementary level.

[87.4] Wakker, Peter P. (1987) “From Decision Making under Uncertainty to Game Theory.” In Hans J.M. Peters & Koos J. Vrieze (Eds.) Surveys of Game Theory and Related Topics, 163-180, CWI Tract 39, Centre for Mathematics and Computer Science, Amsterdam.

This paper presents several new results for game theory. The results have in common that they have been obtained from the literature on (probability theory and) decision making under uncertainty by simple translation algorithms, mainly by replacing “state of nature” by “player.” The aim of this paper is to show the usefulness of such translation algorithms.


1986

[86.1] Wakker, Peter P., Hans J.M. Peters, & Tom B.P.L. van Riel (1986) “Comparisons of Risk Aversion, with an Application to Bargaining,” Methods of Operations Research 54, 307-320.

Key results of Pratt (1964) and Arrow (1971), Yaari (1969), and Kihlstrom & Mirman (1974) on the comparability of the risk aversion of different decision makers are extended to completely general consequence sets, without any restriction on the utility functions, for decision making under risk, and to topologically connected consequence spaces with continuous utility functions, for decision making under uncertainty. An application to bargaining theory is given.

[86.2] Wakker, Peter P. (1986) “The Repetitions Approach to Characterize Cardinal Utility,” Theory and Decision 20, 33-40.

Building on previous work of A. Camacho, we give necessary and sufficient conditions for the existence of a cardinal utility function to represent, through summation, a preference relation on sequences of alternatives. A counterexample shows that the representation used in Camacho's work is incorrect.

[86.3] Wakker, Peter P. (1986) “Concave Additively Decomposable Representing Functions and Risk Aversion.” In Luciano Daboni, Aldo Montesano, & Marji Lines (Eds.) Recent Developments in the Foundations of Utility and Risk Theory, 249-262, Reidel, Dordrecht.

A condition for preference relations on Cartesian products is introduced, the concavity assumption. It gives the behavioural foundation to concave additively decomposable representing functions (prominent in optimization). Its connections with strong separability (=coordinate or preferential independence) combined with convexity are studied. The cardinal coordinate independence condition, by itself necessary and sufficient for subjective expected utility maximization, is combined with the concavity assumption into concave cardinal coordinate independence, a concise condition characterizing the customary expected utility maximization with risk aversion.

[86.4] Wakker, Peter P. (1986) Review of Dennis V. Lindley (1985) “Making Decisions.” Wiley, New York, Kwantitatieve Methoden 20, 144-145.


[86.5] Wakker, Peter P. (1986) “Representations of Choice Situations.” Tilburg University, the Netherlands.




1985

[85.1] Wakker, Peter P. (1985) “Continuous Expected Utility for Arbitrary State Spaces,” Methods of Operations Research 50, 113-129.

Subjective expected utility maximization with continuous utility is characterized, extending the result of Wakker (1984, Journal of Mathematical Psychology) to infinite state spaces. In Savage (1954, The Foundations of Statistics) the main restriction, P6, requires structure for the state space, e.g. this must be infinite. The main restriction of this paper, requiring continuity of the utility function, may be more natural in economic contexts, since it is based on topological structure of the consequence space, structure that usually is present in economic contexts anyhow. Replacing the state interpretation by a time interpretation yields a characterization for dynamic contexts.

[85.2] Wakker, Peter P. (1985) “Extending Monotone and Non-Expansive Mappings by Optimization,” Cahiers du C.E.R.O. 27, 141-149.

A theoretical application of optimization theory is demonstrated. The theory is used to prove theorems on the extendability of the domain of non-expansive, and (strictly) monotone, mappings (under preservation of the characteristic property), by formulating the key problem as an optimization problem.


1984

[84.1] Wakker, Peter P. (1984) “Cardinal Coordinate Independence for Expected Utility,” Journal of Mathematical Psychology 28, 110-117.

A representation theorem for binary relations on (Re)^n is derived. It is interpreted in the context of decision making under uncertainty. There we consider the existence of a subjective expected utility model to represent a preference relation of a person on the set of bets for money on a finite state space. The theorem shows that, for this model to exist, it is not only necessary (as has often been observed), but it also is sufficient, that the appreciation for money of the person has a cardinal character, independent of the state of nature. This condition of cardinal appreciation is simple and thus easily testable in experiments. Also it may be of help in relating the neo-classical economic interpretation of cardinal utility to the von Neumann-Morgenstern interpretation.


1983

[83.1] Koster, Rene de, Hans J.M. Peters, Stef H. Tijs, & Peter P. Wakker (1983) “Risk Sensitivity, Independence of Irrelevant Alternatives and Continuity of Bargaining Solutions,” Mathematical Social Sciences 4, 295-300.

Bargaining solutions are considered which have the following four properties: individual rationality, Pareto optimality, independence of equivalent utility representations, and independence of irrelevant alternatives. A main result of this paper is a simple proof of the fact that all such bargaining solutions are risk sensitive. Further a description is given of all bargaining solutions satisfying the four mentioned properties. Finally, a continuous bargaining solution, satisfying the first three properties, is given which is not risk sensitive.


1982

-


1981

[81.1] Wakker, Peter P. (1981) “Agreeing Probability Measures for Comparative Probability Structures,” Annals of Statistics 9, 658-662.

It is proved that fine and tight comparative probability structures (where the set of events is assumed to be an algebra, not necessarily a sigma-algebra) have agreeing probability measures. Although this was often claimed in the literature, none of the proofs the author encountered is valid for the general case; they apply only to sigma-algebras. Here the proof of Niiniluoto (1972, Annals of Mathematical Statistics) is supplemented. Furthermore an example is presented that reveals many misunderstandings in the literature. At the end a necessary and sufficient condition is given for comparative probability structures to have an almost agreeing probability measure.



3. Elucidations and Comments


Typo: In the paper
[88.3] Wakker, Peter P. (1988) “The Algebraic versus the Topological Approach to Additive Representations,” Journal of Mathematical Psychology 32, 421-435.

On p. 433 l. 1, it should be Theorem 2.8.3 of Narens (1985), not Theorem 1.8.3.

On p. 425, line -7, two lines below Eq. 3.1, Uk should be dropped.


Comments on

[89.5] Wakker, Peter P. (1989) “Additive Representations of Preferences, A New Foundation of Decision Analysis.” Kluwer Academic Publishers, Dordrecht.
ISBN-13: 9780792300502.

89.5.1. Corrections

p. 7, FIGURE, lowest box
THM.IV.2:
THM.IV.2.7

P. 9, 5th para (“In Chapter I we …”):
C(D ):
C(D)

p. 29 l. 10: “everything relevant for the future” is misleading. It should be everything relevant explicitly considered in the analysis. In other words, there should not be anything else considered in the analysis that is relevant. The sentence on p. 25/26 states explicitly what I want to exclude here, writing “There may be further, 'implicit', uncertainty in such consequences.” I surely do not, and did not, want to follow Savage's, in my opinion unfortunate, assumption that the states of nature should specify all uncertainties whatsoever.

p. 47 l. 2 above III.4.2:
… proof. Outside …:
… proof. The concepts in Stage 2 can always be defined under, mainly, restricted solvability. Stage 3 uses the additivity axioms to show that the concepts of Stage 2 have the required properties. Outside …

P. 59 last four lines: Assume here that m > 0.

P. 60 last para of the “Comment” on top of the page:
Steps 5.1 to 5.5:
Steps 5.1 to 5.4

P. 66 l. 11: The first strict inequality (>) should be reversed (<).

P. 87 Statement (ii) in Theorem IV.4.3: the last clause, the one following the semicolon, can be dropped. It is trivially implied by the preceding clause with i=1.

P. 89, l. -9:
Remark III.7.7:
Remark III.7.8

P. 93 l. 14: (“union of all sets”): Not true, there can be more simple acts if there are different but equivalent consequences that are not distinguished by the algebra on capital-delta D; this complication does not affect any aspect of the analysis, because the consequences just mentioned are indistinguishable for any purpose. Better call the union in question the set of step-acts instead of simple acts.

P. 114 First para of proof of Lemma VI.4.5, last line:
… we conclude that x-ktk y-ktk. :
… we conclude that x-ksk y-ksk implies x-ktk y-ktk.

P. 114 last para: The proof shows that the case of A having one element implies the condition for general A.

P. 121, Lemma VI.7.5. The last line is not clear to me now (Feb. 2002). I think that the lemma holds, and that the proof is simple and does not need connectedness of Gamma. Take a countable dense subset A of Gamma. (zj-1,zj) is open, and, therefore, any open subset relative to this set is open itself. It, therefore, intersects A, and the intersection of A with (zj-1,zj) is a countable dense subset of (zj-1,zj). So, the latter set is separable. Adding zj-1 and zj to the intersection of A with (zj-1,zj) gives a countable dense subset of Ejz.

P. 123, PROOF of LEMMA VI.7.8, 1st line, two times:
Vh :
Vz

P. 124, STAGE 3, last line:
more than one element:
more than one equivalence class

P. 124, Eq. (VI.7.4):
Vt1:
Vtj

P. 124, Stage 3 line 3:
Ehj:
Ezj

P. 126, COROLLARY VI.7.12, Statement (ii):
(ii) The binary relation does not reveal comonotonic-contradictory tradeoffs. :
(ii) The binary relation satisfies CI.

P. 127, l. 5/6: This is not true. Noncontradictory tradeoffs is used essentially on the Ez's with only coordinates 1 and n essential. Then Com.CI does not suffice. Hence we must, in Proposition VI.7.7, restrict attention to those z's with Ez having at least three essential coordinates, i.e., z1  zn-1. This does not complicate the proof further.

P. 130, NOTATION VI.9.3: The 3d and 4th capital gamma's (G) should be capital C's.
The 1st, 2nd, and 5th are OK.

P. 130, LEMMA VI.9.4: Both capital gamma's, G, should be capital C's.

P. 133 l. 4: Because U* is continuous and the extension to U has not induced “gaps,” U, representing on G, must be continuous.

P. 170, rf8, l. 2: consequences: utilities

P. 172, rf17:
Theorem 2.5.2:
Theorem I.2.5.2

89.5.2. Further comments

P. 72 Example III.6.8, Case (b): In this case, Vind's (1986a) mean-groupoid approach does not work either, so the algebraic approach is also more general than his. This was also pointed out by Jaffray (1974a).

P. 75 Section III.8: A remarkable place where in fact an additive representation is obtained, using the Reidemeister condition, is p. 125 in
Edwards, Ward (1962) “Subjective Probabilities Inferred from Decisions,” Psychological Review 69, 109-135.
Edwards mentions G.J. Minty (referred to in Wakker, Peter P. (1985) “Extending Monotone and Non-Expansive Mappings by Optimization,” Cahiers du C.E.R.O. 27, 141-149) and L.J. Savage.

P. 124, Stage 5: A better derivation of this stage is on p. 516, Stage 5, of

Wakker, Peter P. (1991) “Additive Representations on Rank-Ordered Sets. I. The Algebraic Approach,” Journal of Mathematical Psychology 35, 501-531.

P. 125, after l. 3: If there were maximal or minimal consequences, continuity would not yet follow. Here, however, it does.

P. 125, LEMMA VI.7.10: A better derivation is in Section 3.5 of

Wakker, Peter P. (1991) “Additive Representations on Rank-Ordered Sets. I. The Algebraic Approach,” Journal of Mathematical Psychology 35, 501-531.

P. 146, Definition VII.5.2: The subscripts i and j can be dropped for the definition, and have been added only to link with figure VII.5.1.

P. 157 and P. 158: In November 2000, Horst Zank pointed out to me that the idea of Theorem VII.7.5 of my book is essentially contained in Corollary 1.1, and the idea of Theorem VII.7.6 is essentially contained in Theorem 3, of
     Blackorby, Charles & David Donaldson (1982) “Ratio-Scale and Translation-Scale Full Interpersonal Comparability without Domain Restrictions: Admissible Social Evaluation Functions,” International Economic Review 23, 249-268.

P. 162, THEOREM A2.3 (Savage):
- With P convex-ranged, the preference conditions are necessary and sufficient, not just sufficient, for the representation.
- In P3, it is correct that event A should be essential, not just essential on the set of simple acts.
- P5: Savage actually requires the restriction of the preference relation to the set of consequences to be nontrivial, not just on the set of acts. Savage's condition is more restrictive. I verified later that with little work (using P6 and maybe P7, I forgot), the condition here and Savage's are equivalent.
- P7: This formulation is from Fishburn, but I forgot where he gave it, probably in his 1970 or 1982 book.

P. 171, rf12, l. 1: In addition to Section 6.5.5 of KLST, see also the second open problem in Section 6.13 of KLST.

89.5.3. Minor Typos and Corrections (for myself; not worth your time)
Spelling of Choquet integral, with or without hyphen, is not consistent throughout the book.

p. 4 l. -5:
accordancing:
according

p. 40 l. 3:
whit:
with

p. 51 Step 2.1: This and all following step-headings should have been printed bold, as the preceding step-headings.

p. 51 l. -6:
leave out from notation:
suppress

p. 51, l. -5:
… those from w0; …
… those of w0; …

p. 56 FIGURE III.5.3, l. 4 of legend:
that The North-East:
that the North-East

P. 67 l. 4/5:
… stronger … condition:
… stronger than the hexagon condition, i.e. imply it.

P. 69, Lemma III.6.3: All the text should have been italics.

P. 71 Observation III.6.6': “Let us repeat … functions.”: This statement should have been put before, not in, the Observation.

P. 81, Figure IV.2.1. “Suppose” and “and,” on top of the figure, should not be bold.

p. 106: period at end of Example V.6.2.

p. 161, THEOREM A2.1, Statement (ii), the last two words, “on Re^n,” can be dropped.

P. 168, rf3 l. 4:
linear/positive affine:
positive linear/affine



Typos/corrections in the paper

[92.1] Sarin, Rakesh K. & Peter P. Wakker (1992) “A Simple Axiomatization of Nonadditive Expected Utility,” Econometrica 60, 1255-1272.

P. 1269: [pointed out by Shiu Liu on Sept. 4, 2024]. In the first line in the proof of Lemma A.1, it should be P3 instead of P2.

P. 1269: [pointed out by Shiu Liu on Sept. 4, 2024]. Three lines below the "Q.E.D." it should be P4 instead of P2.


COMMENT: my paper
[93.6], “Savage's Axioms Usually Imply Violation of Strict Stochastic Dominance,” Review of Economic Studies 60, 487-493,
was meant to be a PERSIFLAGE.

1. INTRODUCTION

This paper has two purposes. (1) Warn decision theorists that finite additivity has intricacies that are often overlooked. (2) Provide a persiflage: several papers have presented technical mathematical aspects of a model without empirical importance in misleading manners, so as to suggest empirical content to non-mathematicians. My paper is intended to be a persiflage of such papers. The last thing I wanted to do when writing the paper was to cast any doubt on the rationality of Savage's (1954) model (see p. 491, third paragraph).

Literally speaking, every sentence in my paper is factually true. However, the spirit is entirely misleading. I thought that the spirit of the message in the title would be so absurd that people would not take it seriously. Unfortunately, I have not succeeded in conveying the real message and I know now that misunderstandings have arisen.

2. EXAMPLES OF PAPERS OF WHICH I WANTED TO WRITE A PERSIFLAGE

As a first, only hypothetical, example, consider Zeno's paradox, about Achilles never catching up with a tortoise (each time Achilles has reached the place where the tortoise was before, the tortoise has moved farther, and so Achilles never catches up). The example says something about mathematical limit taking. If people were to present this as a shocking new discovery about athletics, then they would do the misleading thing that I dislike.
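Spelled out, with speeds and a head start introduced only for this illustration: let Achilles run at speed v_A, the tortoise at speed v_T < v_A, and let the head start be d. The successive catching-up stages then take times

    t_1 = \frac{d}{v_A}, \qquad t_{k+1} = t_k \, \frac{v_T}{v_A}, \qquad \text{so } \sum_{k=1}^{\infty} t_k = \frac{d}{v_A} \cdot \frac{1}{1 - v_T/v_A} = \frac{d}{v_A - v_T}.

The infinitely many stages together take only this finite time, after which Achilles is ahead; nothing about athletics is at stake.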

One real example concerns people presenting Arrow's impossibility theorem for voting as a proof that democracy cannot exist. E.g., a famous economist wrote: “The search of the great minds of recorded history for the perfect democracy, it turns out, is the search for a chimera, for a logical self-contradiction.” Such phrases grossly overstate the meaning of a combinatorial result, trying to impress nonmathematicians.

Another example is as follows. It seems that people have considered it a paradox that on Re- one can't have risk aversion and weak continuity at the same time. This simply follows because risk aversion implies unbounded utility on Re- whereas weak continuity requires bounded utility.
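A sketch of that standard argument, in my own shorthand with u the utility function: under expected utility, risk aversion on Re- means that u is concave there. For a concave and strictly increasing u and x < x_0 < x_1 \le 0,

    \frac{u(x_0) - u(x)}{x_0 - x} \;\ge\; \frac{u(x_1) - u(x_0)}{x_1 - x_0} \;=:\; s \;>\; 0, \qquad \text{hence } u(x) \;\le\; u(x_0) - s\,(x_0 - x) \;\to\; -\infty \text{ as } x \to -\infty,

so u is unbounded below. Weak continuity, i.e. continuity with respect to weak convergence of distributions, however, requires bounded utility: the lotteries P_n giving 0 with probability 1 - 1/n and an outcome x_n with u(x_n) \le -n^2 with probability 1/n converge weakly to the sure outcome 0, while their expected utilities tend to -\infty.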

Due to a technical oversight of Savage (1954) (by imposing his axioms on all acts and therefore also on the unbounded), he came out with a utility function that has to be bounded, as was later discovered (Fishburn 1970 “Utility Theory for Decision Making”). Several people have used this as evidence to support that utility should be bounded and that the human capacity for experiencing happiness must be limited, etc. Of course, again I disagree with such conclusions. Nowadays, people are more careful and in the appropriate cases restrict preference axioms to bounded acts, exactly so as to avoid Savage's technical oversight (e.g., Schmeidler 1989 Econometrica). Let me repeat, Savage (1954) does not provide any argument whatsoever to support bounded utility.

There are many paradoxes based on nothing but technicalities of finite additivity versus countable additivity. Some papers have misused these. My paper describes one more paradox of this kind. The paradox results because I define strict stochastic dominance in the traditional way. Under finite additivity it is more appropriate to use a different, somewhat more complex, formulation of stochastic dominance. The different formulation is equivalent to the traditional under countable additivity. Under finite additivity, however, it is different and more appropriate. This different and preferable formulation is described at the end of my paper (undoubtedly known before to specialists).

Let me finally cite text from my letter of March 6, 1992, to the editor Ian Jewitt of the Review of Economic Studies, in which I explained the motives for writing this paper. (I am happy that Professor Jewitt was willing to accept this unconventional paper.)

- MOTIVATIONAL COMMENTS. Here I must embark on a long undertaking, i.e., explain to you the ideas behind the paper, and the motivations that brought me to write the paper as I did. It is a tricky paper, different from most research papers.

The referee ... points out, correctly, that the results would in a mathematical sense not be too surprising to anyone familiar with finite additivity. ... It is well-known that, under finite additivity, there exist strictly positive functions that have integral 0. Well, call such a function an act, call the 0-function an act, and there you got your violation of strict statewise monotonicity. No big deal! ... Most people ... had some courses on probability theory, where probabilities are assumed sigma-additive, but they do not realize that things they learned there do not always hold for finitely additive probability measures that may result in decision models such as Savage's. This is a continuing source of mistakes and misunderstandings, ... at the end of the paper I point things out, constructively, not trying to continue the confusion, but I try to show the way out.
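To make the finite-additivity technicality mentioned in this letter concrete, here is the standard example in minimal form, with the details chosen only for illustration: let P be the finitely additive probability measure on the natural numbers with P(A) = 1 if A belongs to a fixed non-principal ultrafilter and P(A) = 0 otherwise, so that every finite event is null. The act f(n) = 2^{-n} is strictly positive in every state, yet for every \varepsilon > 0 the event \{n : f(n) \ge \varepsilon\} is finite and hence null, so that

    \int f \, dP \;\le\; \varepsilon + \sup f \cdot P(f \ge \varepsilon) \;=\; \varepsilon \quad \text{for every } \varepsilon > 0, \qquad \text{hence } \int f \, dP = 0 = \int 0 \, dP.

Thus f and the zero act have the same integral although f strictly dominates it statewise; no such thing can happen under countable additivity.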



Typos in the paper
[93.7] Wakker, Peter P. & Amos Tversky (1993) “An Axiomatization of Cumulative Prospect Theory,” Journal of Risk and Uncertainty 7, 147-176.

Mistakes because proof corrections were not carried out correctly:

In the following places, a small f should be the “funny” and curly symbol capital F introduced on p. 151, in line 3 of Section 3, as the set of prospects:
p. 152 first line below the first displayed equation
p. 152 five lines below the first displayed equation (only in the first f - at the beginning of the line)

In the following places, a special “thin” capital F should be the “funny” and curly symbol capital F introduced on p. 151, in line 3 of Section 3, as the set of prospects:
p. 153, line 8, at the end of the line.
p. 155, Theorem 4.3, line 2, twice: in F0.
p. 160, line -3 (legend to Figure 1)
p. 169, legend to Figure 4 (twice)
p. 174, line 2, near middle (in FR)
p. 174, line 3, near end (in FR)

p. 157, 4th line of Section 6: In the definition of maximal sign-comonotonic sets, before o(1) should be added
: f
so that after the funny capital F we read   : f(o(1))
p. 159, first line in Section 7: every symbol j should be a subscript, also in x'jf, which should be x'jf (twice).
p. 160: line 1 of Section 8.1, the last four words should be: and can then be
p. 161, line 8: every symbol j should be a subscript, also in x'jf, which should be x'jf.
p. 167, line -3: xi at the end of the line should be xif
p.172, proof of Proposition 8.2, line 6: Wakker (1993a, Theorem 3.2)


Further mistakes (mine)

p. 158, Eq. 6.4: j ≤ k (instead of j < k).
p. 158, Eq. 6.5: j > k (instead of j ≥ k).
p. 166: In the first two lines of the Proof of Lemma A1 there are eight a2's. The last four should, however, be b2's.
p. 167: In the proof of Theorem 6.3, starting the second paragraph, Statement (ii) of the Theorem is assumed and Statement (i) is derived.


Typos/corrections in the paper
[94.1] Wakker, Peter P. (1994) “Separating Marginal Utility and Probabilistic Risk Aversion,” Theory and Decision 36, 1-44.

Pp. 32-33: [correction of proof of Theorem 14; pointed out by Tim Ickenroth on Sept. 11, 2014]. The interval S around the nonextreme mu is such that the implication of Eq. (4) can be reversed there for U1. It should, however, also satisfy this reversal of implication for U2. This reversal can be obtained by constructing an interval S' which does for U2 what S does for U1, and then taking the intersection of S and S' instead of S. This construction is needed at the end of the proof. By stronger decrease of marginal utility, we only get the negation of >2*. Then we need reversal of the implication of Eq. (4) for U2, to derive the rest of the proof.


Typos in the paper
[94.2] Wakker, Peter P., Ido Erev, & Elke U. Weber (1994) “Comonotonic Independence: The Critical Test between Classical and Rank-Dependent Utility Theories,” Journal of Risk and Uncertainty 9, 195-230.

p. 218, Table 2: In the last row S (for set 6), the two stars should be double stars, i.e., the values .36 and .40 are even significant at the level 0.01 and not just at the level 0.05.


Additions to the paper
[95.1] Fishburn, Peter C. & Peter P. Wakker (1995) “The Invention of the Independence Condition for Preferences,” Management Science 41, 1130-1144.
Explanation that Samuelson (1940) was the first to write an independence-type (separability) condition.


Comments on
[96.1] Wakker, Peter P. & Daniel Deneffe (1996) “Eliciting von Neumann-Morgenstern Utilities when Probabilities Are Distorted or Unknown,” Management Science 42, 1131-1150.

TYPOS
  1. P. 1134, l. 1: (M,p,0) instead of (0,p,M). (To have utility p in the next sentence correct.)
  2. 1st column 3rd para (on the probability equivalent method):
    3rd line: x > y > z;
    l. -2: (M, p, 0).
    (These two changes, again, to have utility p in the last line of that para correct.)

Comments on
[96.3] Miyamoto, John & Peter P. Wakker (1996) “Multiattribute Utility Theory without Expected Utility Foundations,” Operations Research 44, 313-326.

Only after publication, John Miyamoto and I discovered that Theorem 1 had been obtained before as Theorem 4 in Ebert (1988, Social Choice and Welfare 5), and Theorem 2 as Ebert's Theorem 3 (pointed out to me by Alain Chateauneuf in June 1998).

p. 320/321: In Lemma 1, Int(C) should be replaced by C\{best and worst outcomes}, because the interior in Wakker (1993a) is with respect to the preference topology. (Pointed out to me by Han Bleichrodt in August 2001.)


Comments on
[97.2] Sarin, Rakesh K. & Peter P. Wakker (1997) “A Single-Stage Approach to Anscombe and Aumann's Expected Utility,” Review of Economic Studies 64, 399-409.

In an email of January 24, 2007, Pavlo Blavatskyy pointed out to me that the monotonicity axiom 3.3 can be dropped in Theorem 3.8 because it is used nowhere in the proof. In addition, it can easily be derived from the other axioms. For roulette events it is implied mainly by the independence axiom 3.4, and also follows as a corollary of the well-known von Neumann-Morgenstern expected utility theorem there. For horse events it then easily follows from the additivity axiom 3.6.

The monotonicity axiom 3.3 cannot be dropped in Theorem 5.3. There it is needed to avoid negative probabilities and probabilities exceeding 1 for horse events.


Comments on
[99.3] Chateauneuf, Alain & Peter P. Wakker (1999) “An Axiomatization of Cumulative Prospect Theory for Decision under Risk,” Journal of Risk and Uncertainty 18, 137-145.

In Theorem 2.3, a nontriviality condition should be added that there exist both a gain and a loss, i.e. that the status quo is neither the best nor the worst outcome. This was pointed out to me by Han Bleichrodt in an email of December 21, 2005. I give details hereafter.

In the proof on p. 144, to use Proposition 8.2 and Theorem 6.3 of Wakker & Tversky (1993), true mixedness must be verified. This condition follows from stochastic dominance (which is taken in a strong sense, with strict preference, in the present paper) as soon as there exist a gain and a loss. If not, the gain-part or the loss-part becomes degenerate, with a trivial degenerate representation there. In such a degenerate case the representation result remains valid. The weighting function, however, is not uniquely determined, contrary to what the theorem claims, but can be chosen arbitrarily. In summary, to make the theorem fully correct, the mentioned nontriviality assumption must be added, but it is only needed to rule out the degenerate cases in which the weighting functions are not unique.


Comments on
[01.1] Bleichrodt, Han, Jose Luis Pinto, & Peter P. Wakker (2001) “Making Descriptive Use of Prospect Theory to Improve the Prescriptive Use of Expected Utility,” Management Science 47, 1498-1514.

Additional analyses, alluded to but not given in the paper, can be found here.

Matching. Although the paper does not explicitly use the term “matching,” all measurements in this paper were based on matching questions. That is, subjects directly stated the values that generate indifference; those values were not derived indirectly from choices. Thus, as written at the end of p. 1506, five CE questions give five CE values. That matching was used is essential for the discussion of the reference point in the CE and PE measurements, such as in Appendix B. For CE questions, subjects are not given choices between sure outcomes and gambles, in which case they could easily focus on the sure outcome and take it as reference outcome. Instead, they are given the gamble and have to provide the sure outcome themselves to generate indifference. So, there is no sure outcome before them that they can focus on and easily use as reference point. This is contrary to the PE questions, where the sure outcome is part of the stimuli presented and subjects themselves have to provide the probability that generates indifference. Then the sure outcome is available to them, and they can easily use it as a reference point.


Questions About the Relations between Figures 1 and 2 That Were Asked During Lectures, and the Answers to these Questions

QUESTION 1. The discrepancies between PE and CE under the classical elicitation assumption, indicated by black circles in Figure 2, are largest for the small utilities. Figure 1, however, suggests that the biases are strongest for the large utilities, not the small. How can this be?

ANSWER. The discrepancies are generated by the difference between the biases in PE and CE. Figure 1 demonstrates that this difference is largest for small utilities. The difference is generated by loss aversion, which is effective under PE but not under CE and which is strongest for small utilities. It follows that the correction formulas of prospect theory induce more reconciliation for the small utilities than for the high ones.

QUESTION 2. Consider the classical elicitation assumption. Figure 1 suggests that there are no systematic biases for the TO method, but that there are systematic upward biases for the CE utilities. The latter should, therefore, be expected to be higher than the TO utilities. The data, however, find that the CE utilities are usually smaller than the TO utilities, not higher. See the black asterisks in Figure 2, which are all negative and are all below the abscissa. How can this be?

ANSWER. Figure 1 depicts measurements on a common domain. A complication in the experiment described in Figure 2 is that the TO and CE measurements were conducted on different domains. The CE measurements concerned the domain [0,40], the TO measurements the domain [0, x6]. The latter interval was nearly always smaller than the former, that is, x6 < 40. We then compared the utilities on the common domain [0, x6]. To this end, the CE utilities UCE were renormalized on this domain to be 1 at x6, i.e., they were all replaced by UCE(.)/UCE(x6). The value x6 usually lies in the upper part of the domain [0,40]. Its utility is greatly overestimated under the classical elicitation assumption, according to Figure 1. Therefore, the denominator in UCE(.)/UCE(x6) is greatly overestimated, and the fraction is underestimated. For each xj, especially for j=5, this effect is mitigated by the overestimation of the numerator UCE(xj). UCE(x5)/UCE(x6) will thus not be far off, in agreement with Figure 2.
In general, it is safer to consider whether cardinal utilities are more or less convex/concave than whether they are higher or lower. The latter only makes sense if a common normalization has been chosen. Another way of explaining the finding in Question 2 is as follows. We did not use the CE correction curve of Figure 1 on the whole domain [0,1], but only on the subdomain [0, x6/40]. This left part of the CE correction curve is more concave than convex and, therefore, our correction formulas make the CE curve more concave, not less concave as would have happened had the CE correction curve on the whole interval [0,1] been used. This explains why our corrections, based on PT, make the (renormalized) CE utilities more concave rather than more convex in our experiment, and move them towards the TO utilities.
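
To see the mechanics of this renormalization argument, here is a minimal numeric sketch in Python. The utility values are made up for illustration (they are not the paper's data); the point is only that when both UCE(xj) and UCE(x6) are overestimated, but UCE(x6) proportionally more so, the renormalized ratio UCE(xj)/UCE(x6) falls below its true value.

  # Hypothetical true utilities on [0,40] and hypothetical (upward-biased) CE utilities.
  true_u   = {"x3": 0.50, "x5": 0.70, "x6": 0.80}
  biased_u = {"x3": 0.55, "x5": 0.80, "x6": 0.95}   # x6 is, proportionally, overestimated the most

  for xj in ("x3", "x5"):
      true_ratio   = true_u[xj] / true_u["x6"]      # the ratio we would like to recover
      biased_ratio = biased_u[xj] / biased_u["x6"]  # the renormalized biased utility
      print(xj, round(true_ratio, 3), round(biased_ratio, 3))
  # Prints: x3 0.625 0.579 and x5 0.875 0.842. Both renormalized values lie below the
  # true ones, and the gap is smaller for x5, in line with the answer above.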

QUESTION 3 (follow-up on Question 2). How can the discrepancies between PE and TO in Figure 2 be derived from Figure 1?

ANSWER. The comparison is similar to the reasoning used in the comparison of UCE and UTO. In this case, however, the overestimation of the numerator in UPE(xj)/UPE(x6) has a stronger effect than the overestimation of the denominator, except for j=5.



Comments on
[01.4] De Waegenaere, Anja & Peter P. Wakker (2001) “Nonmonotonic Choquet Integrals,” Journal of Mathematical Economics 36, 45-60.

Atsushi Kajii let me know in September 2004 that Theorem 3, on convexity of signed capacities, may also be in:
Lovász, László (1983) “Submodular Functions and Convexity.” In Achim Bachem, Martin Grötschel & Bernhard Korte (Eds.) Mathematical Programming—The State of the Art, 235-257, Springer, Berlin.


Comments on
[02.2] Gilboa, Itzhak, David Schmeidler, & Peter P. Wakker (2002) “Utility in Case-Based Decision Theory,” Journal of Economic Theory 105, 483-502.

P. 490, l. 2: (A4') should be (P4')

P. 496, proof of Lemma 3: Han Bleichrodt pointed out a problem in the proof in August 2004. Further explanation, and a corrected proof, are here.


Comments on
[02.4] Peter P. Wakker (2002) “Decision-Principles to Justify Carnap's Updating Method and to Suggest Corrections of Probability Judgments.” In Adnan Darwiche & Nir Friedman (Eds.) Uncertainty in Artificial Intelligence, Proceedings of the Eighteenth Conference, 544-551, Morgan Kaufmann, San Francisco, CA.

The published text does not contain proofs. I do not remember how this happened. I am strongly against publishing theorems without proofs. Here is a working paper that does contain proofs: “Version with proofs.”

P. 548, following Eq. 4.4, incorrectly states:
“By repeated application, the condition implies that the probability of a sequence of observations depends only on the number of observations of each disease in the sequence, and not on the order in which these observations were made.”
Counterexamples can show that this claim is not correct. The condition called exchangeability in the text is better called future-exchangeability, because it only governs future observations, not past ones. Past-exchangeability, requiring that reversing two observations in evidence E does not affect the certainty-equivalent of any gamble (di:1), should be added. Past- and future-exchangeability together can then be called exchangeability. Counterexamples can also show that past-exchangeability does not imply future-exchangeability, i.e., the two conditions are logically independent. Exchangeability as now defined is indeed equivalent to the exchangeability condition common in statistics. The upper half of Figure 1 illustrates only past-exchangeability, not future-exchangeability. I do not know an easy way to illustrate future-exchangeability in Figure 1. In Theorem 4.1 on p. 549, exchangeability should be taken in the strong sense just defined.


Comments on
[03.1] Wakker, Peter P. (2003) “The Data of Levy and Levy (2002) “Prospect Theory: Much Ado about Nothing?” Support Prospect Theory,” Management Science 49, 979-981.

The paper criticized,

Levy, Moshe & Haim Levy (2002c) “Prospect Theory: Much Ado about Nothing,” Management Science 48, 1334-1349,

has another problem not discussed in my note. This paper and three others by the same authors have too much overlap and lack cross-references to each other. The authors published, as if new each time, not only the same ideas, but even the same experiments and data. Detailed comments are here. Those three other papers are:

2. Levy, Haim & Moshe Levy (2002a) “Arrow-Pratt Risk Aversion, Risk Premium and Decision Weights,” Journal of Risk and Uncertainty 25, 265-290.
3. Levy, Haim & Moshe Levy (2002b) “Experimental Test of Prospect Theory Value Function: A Stochastic Dominance Approach,” Organizational Behavior and Human Decision Processes 89, 1058-1081.
4. Levy, Moshe & Haim Levy (2001) “Testing for Risk Aversion: A Stochastic Dominance Approach,” Economics Letters 71, 233-240.

Management Science did not carry out the proof corrections for the appendix, and apologized for it. Here they are: Proof corrections for the appendix.


Comments on
[04.1] Enrico Diecidue, Ulrich Schmidt, & Peter P. Wakker (2004) “The Utility of Gambling Reconsidered,” Journal of Risk and Uncertainty 29, 241-259.

P. 255, next-to-last line: That x has a strictly higher value than a gamble, denoted G here, that assigns a positive probability to a strictly worse outcome, is seen as follows: If G assigns probability 1 to the strictly worse outcome, then the result is immediate. Otherwise, G is risky. Say the probability assigned to the strictly worse outcome is p>0. Let G' be the gamble resulting from G by moving p/2 probability mass from the strictly worse outcome to x. By stochastic dominance, x is weakly preferred to G'. G' is strictly preferred to G because the preference between G' and G is governed by expected utility, satisfying strict stochastic dominance. By transitivity, x is strictly preferred to G.

The above reasoning essentially used strict stochastic dominance for the preference functional over risky gambles. This explains why the same requirement is used in the proof of Remark 8 and why, for instance, the probability transformation at the end there is required to be strictly increasing. Otherwise, cases could arise with the probability weighting function flat near zero, so that the rank-dependent utility of x is the same as that of G, while x is still strictly preferred and v(x) must strictly exceed u(x).
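
The construction on p. 255 can also be illustrated with a small numeric sketch in Python. The numbers and the utility function are assumptions chosen for illustration only, not taken from the paper; the sketch just shows the step where moving p/2 probability mass from the strictly worse outcome to x strictly raises expected utility.

  import math

  def expected_utility(gamble, u):
      # gamble: list of (probability, outcome) pairs
      return sum(p * u(outcome) for p, outcome in gamble)

  u = math.sqrt              # an assumed strictly increasing utility function
  x = 100                    # the sure outcome compared with the gamble
  worse = 25                 # a strictly worse outcome, receiving probability p = 0.4 in G
  G      = [(0.4, worse), (0.6, x)]
  Gprime = [(0.2, worse), (0.8, x)]   # G': probability p/2 = 0.2 moved from `worse` to x

  print(expected_utility(G, u))       # 8.0
  print(expected_utility(Gprime, u))  # 9.0 > 8.0, so G' is strictly preferred to G under EU
  print(u(x))                         # 10.0 >= 9.0, consistent with x weakly preferred to G'
  # By transitivity, x is then strictly preferred to G, as in the argument above.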


Comments on
[05.3] Wakker, Peter P. (2005) “Decision-Foundations for Properties of Nonadditive Measures; General State Spaces or General Outcome Spaces,” Games and Economic Behavior 50, 107-125.

The claims of Table 2 on p. 117, and some similar results, are proved here.


Comments on
[05.2] Köbberling, Veronika & Peter P. Wakker (2005) “An Index of Loss Aversion,” Journal of Economic Theory 122, 119-131.

A typo: On p. 123, line 12, drop the last two words, “are empty.” The typo was caused by the journal not carrying out the proof corrections correctly.


Comments on
[05.1] Abdellaoui, Mohammed & Peter P. Wakker (2005) “The Likelihood Method for Decision under Uncertainty,” Theory and Decision 58, 3-76.

P.S.: I consider this paper, almost never cited, to be the best paper I ever co-authored. The time investment was infinite. After an unsuccessful submission to Econometrica, we did not want it to be warped in the referee game, and chose a permissive outlet. Unfortunately, the journal used a low-density page layout in those days. This paper would have taken 40 pages in regular journals, but took 74 pages in this one.

TYPOS
  1. The acknowledgment, to which the asterisk at the title is supposed to refer, was dropped by accident. It should have been:

    This paper received helpful comments from Han Bleichrodt and Peter Klibanoff, and participants of the 11th Conference on the Foundations of Utility and Risk Theory (FUR 2004), Cachan, France, where an earlier version was presented as “An Uncertainty-Oriented Approach to Subjective Expected Utility and its Extensions.”

    I apologize to all for this omission.

  2. P. 21, 3rd para, 1st line (definition of nonnull): the last gamma should be beta.

TYPOS DUE TO PROOF CORRECTOR:
  1. P. 72, line 4, the first symbol should be a j and not a k.



Comments on
[07.1] Abdellaoui, Mohammed, Carolina Barrios, & Peter P. Wakker (2007) “Reconciling Introspective Utility with Revealed Preference: Experimental Arguments Based on Prospect Theory,” Journal of Econometrics 138, 356-378.

TYPOS
  1. P. 363: All symbols z should be x. (This happens three times: Twice in the displayed formulas for the expo-power family, and once in the footnote.)



Comments on
[08.1] Bleichrodt, Han, Kirsten I.M. Rohde, & Peter P. Wakker (2008) “Combining Additive Representations on Subsets into an Overall Representation,” Journal of Mathematical Psychology 52, 304-310.

Insignificant typo: In footnote 1 on p. 306, the word “a” should be dropped.


Comments on
[08.6] Wakker, Peter P. (2008) “Explaining the Characteristics of the Power (CRRA) Utility Family,” Health Economics 17, 1329-1344.

Further useful comments are in Section 1.3 of
Doyle, John R. (2013) “Survey of Time Preference, Delay Discounting Models,” Judgment and Decision Making 8, 116-135.
For example, the logpower family is known as the Box-Cox transformation in statistics.
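
For readers unfamiliar with it, here is a minimal sketch in Python of the standard Box-Cox transformation mentioned above (a textbook statistics formula, not code from the paper): for lam != 0 it equals (x**lam - 1)/lam, and its limit as lam -> 0 is log(x).

  import math

  def box_cox(x, lam):
      # Standard Box-Cox transformation; requires x > 0.
      if x <= 0:
          raise ValueError("Box-Cox requires x > 0")
      return math.log(x) if lam == 0 else (x**lam - 1) / lam

  print(box_cox(2.0, 0.001))  # about 0.6934, close to the lam = 0 case
  print(box_cox(2.0, 0.0))    # 0.6931... = ln 2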


Comments on
[08.7] Wakker, Peter P. (2008) “Uncertainty.” In Lawrence Blume & Steven N. Durlauf (Eds.) The New Palgrave: A Dictionary of Economics, 6780-6791, The MacMillan Press, London.

Following the example at the end of the section entitled “Model-free approaches to ambiguity” (last page), it is claimed that similar examples can be devised where (i) E and not-E themselves are unambiguous, (ii) there is “nonuniform” ambiguity conditional on E, (iii) this ambiguity is influenced by outcomes conditional on not-E through nonseparable interactions typical of nonexpected utility, and (iv) Eq. 6 erroneously ascribes the ambiguity that holds within E to E as a whole. Such an example is here.

TYPOS

  1. P. 432, para following Eq. 5, last line: R = B instead of R = A.
TYPOS DUE TO PROOF CORRECTOR:
None of my last-round proof corrections, pointed out in my email of 22Dec07, were carried out, contrary to what was promised in an email of 02Jan08 by the publisher (Ruth Lefevre), which also confirmed receipt of my proof corrections. The main corrections are:
  1. P. 429, 1st column, 2nd of two displayed lines: The last number just before the last closing bracket should be 0 rather than 25K.
  2. P. 429, Figure 1b (so the right half of Figure 1): For the sure prospect, concerning the circle with an s, the lowest outcome (with prob. 0.07) should not be 0 but 25K. That is, the s prospect should yield 25K for sure.
  3. P. 430 first sentence: Replace “The following figures depict” with “Figure 2 depicts”
  4. P. 435 Eq. 6: The second line has a part e(E:c that should be removed.



Comments on
[09.1] Bleichrodt, Han, Kirsten I.M. Rohde, & Peter P. Wakker (2009) “Non-Hyperbolic Time Inconsistency,” Games and Economic Behavior 66, 27-38.

On July 1, 2010, Drazen Prelec pointed out to us that our CRDI function appeared before in
      Prelec, Drazen (1998) “The Probability Weighting Function,” Econometrica 66, 497-527.
It was defined there on p. 511, Eq. 4.2. Prelec also provided an axiomatization by his conditional invariance preference condition (p. 511 top), which is almost identical to our CRDI preference condition. Our CRDI condition is slightly weaker, being the special case of Prelec's conditional invariance with q=r and x'=y. Thus, our theorem is slightly more general, but this difference is minor. Prelec formulated his theorem for the context of decision under risk, with his p from [0,1] or from (0,1), designating probability. We formulated our theorem for intertemporal choice, with our t (playing the same role as Prelec's p) from any subinterval of [0, ∞), and with utility slightly more general. Our details are again slightly more general than Prelec's, but, again, the differences are minor. Thus, the priority of the CRDI family is with Prelec (1998). I regret that we did not know this at the time of writing our paper and, accordingly, could not properly credit Prelec then.

On March 5, 2014, I discovered that Read (2001, Journal of Risk and Uncertainty, Eq. 16) also contained the basic formula of CRDI.


Comments on
[09.2] Offerman, Theo, Joep Sonnemans, Gijs van de Kuilen, & Peter P. Wakker (2009) “A Truth-Serum for Non-Bayesians: Correcting Proper Scoring Rules for Risk Attitudes,” Review of Economic Studies 76, 1461-1489.

TYPOS DUE TO PROOF CORRECTOR:
  1. P. 1489, reference to Wakker's 2009 book: at the time, this book was forthcoming and had not yet appeared. It was correctly referenced as forthcoming (not as appeared) in all our versions of the paper and in the proof corrections. Strangely, the journal nevertheless removed “forthcoming” afterwards.



Comments on
[10.1] Attema, Arthur E., Han Bleichrodt, Kirsten I.M. Rohde, & Peter P. Wakker (2010) “Time-Tradeoff Sequences For Analyzing Discounting and Time Inconsistency,” Management Science 56, 2015-2030.

The normalization of x-tilde in Eq. 20 (p. 2023) is not right; it should be x-tilde = (ti - t0)/(tn - t0).
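
As a minimal sketch in Python of the corrected formula (variable names assumed here, not the paper's notation), the normalization maps each time point ti in [t0, tn] to [0, 1]:

  def x_tilde(t_i, t0, tn):
      # Corrected normalization: maps t0 to 0 and tn to 1.
      return (t_i - t0) / (tn - t0)

  print(x_tilde(5, 0, 20))  # 0.25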


Comments on
[11.1] Abdellaoui, Mohammed, Aurélien Baillon, Laetitia Placido, & Peter P. Wakker (2011) “The Rich Domain of Uncertainty: Source Functions and Their Experimental Implementation,” American Economic Review 101, 695-723.

TYPOS
  1. P. 705 Figure 3: The lowest “Urn K” should be “Urn U.”
  2. P. 713, Fig. 9B: alpha = 0.76 should be alpha = 0.51. (0.76 is the value for the hypothetical group.)



Comments on
[14.3] Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2014) “An Experimental Test of Prospect Theory for Predicting Choice under Ambiguity,” Journal of Risk and Uncertainty 48, 1-17.

TYPOS
Throughout the paper, and the online appendix, we confused the colors blue and yellow relative to Hey, Lotito, & Maffioletti (2010). After simply interchanging these two colors, all results are consistent with Hey, Lotito, & Maffioletti (2010). Of course, reading the paper without this correction leads to no problem other than that the colors are inconsistent with Hey, Lotito, & Maffioletti (2010). In the data set, everything is correct, i.e., consistent with Hey, Lotito, & Maffioletti (2010).


Comments on
[19.1] Li, Chen, Uyanga Turmunkh, & Peter P. Wakker (2019) “Trust as a Decision under Ambiguity,” Experimental Economics 22, 51-75.

TYPO
For Eq. 3.3 on p. 56, the paper incorrectly cites an Online Appendix C that does not exist anymore. Instead, the equation can be found as Eq. A.6 in the following paper, which also gives a proof:
Aurélien Baillon, Zhenxing Huang, Asli Selim, & Peter P. Wakker (May 2016) “Measuring Ambiguity Attitudes for All (Natural) Events,” working paper.
Notation there: a = 1-s.
The above paper was the first version of the paper that later appeared as
[18.1] Aurélien Baillon, Zhenxing Huang, Asli Selim, & Peter P. Wakker (2018) “Measuring Ambiguity Attitudes for All (Natural) Events,” Econometrica 86, 1839-1858.


Comments on
[20.3] Doctor, Jason N., Peter P. Wakker, & Tong V. Wang (2020) “Economists’ Views on the Ergodicity Problem,” Nature Physics 16, 1168.

TYPO
In the Supplementary Information (Online Appendix), p. 1, last line: p. 1218 instead of p. 2018.


Last updated: 5 Sept., 2024