1. OVERVIEW
My research concerns the mathematical foundations of decision theory, especially the modeling of rational choice under uncertainty. The standard paradigm of rational choice is the theory of subjective expected utility, which makes assumptions about the consistency and determinacy of behavior by an ideal economic agent. These assumptions imply that the agent's beliefs are represented by numerical probabilities, her preferences are represented by numerical utilities, and her decision-making objective is to maximize the expected value of utility. This model of rationality underlies Bayesian methods of inference and decision analysis and much of mathematical economics, especially microeconomic models of games and markets.
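In standard notation (my own shorthand, not drawn from any particular paper), the subjective expected utility criterion can be written as follows: an agent with probability distribution p over states and utility function u chooses an act a to maximize

```latex
\mathrm{EU}(a) \;=\; \sum_{s} p(s)\, u\!\left(x_a(s)\right),
```

where x_a(s) denotes the consequence of act a in state s.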
After decades of extensions, refinements, applications, and empirical tests of this paradigm, many interesting and important issues still remain unresolved or controversial. For example, normative and behavioral decision theorists continue to debate the scope and limits of rational choice theory; decision analysts and game theorists continue to dispute the distinction between uncertainty about states of nature and uncertainty about actions of a rational opponent; and a variety of disparate and sometimes contradictory equilibrium models are used to describe collective behavior in games and markets. My work focuses on some of these unresolved issues and seeks to develop a simpler, more cohesive, and more realistic theory of rational choice.
Several main themes run through my research. One is a search for unity among models of rational choice in different domains of application. This has led me to focus on no-arbitrage (avoidance of sure loss) as the fundamental principle of economic rationality. No-arbitrage is a "primal" definition of rationality, formulated in terms of extensive variables such as quantities of money and commodities. By comparison, expected-utility maximization and various notions of equilibrium are all "dual" concepts, formulated in terms of intensive variables such as probabilities, utilities, and prices. One thrust of my research has been to show that the primal, no-arbitrage characterization of rationality provides a more seamless transition from the modeling of single-agent decisions to the modeling of n-agent games and many-agent markets. The common thread is that agents, whether acting alone or in large or small groups, should not create arbitrage opportunities for an outside observer. In all of these settings, the corresponding dual requirement is that agents should act "as if" maximizing an appropriate utility function and also "as if" they had implemented an appropriate equilibrium. The primal approach has two advantages: it refers to first-order observable quantities, and hence is more robust against problems of measurement (see the second theme below); and it is inherently a system-level property, and so is more robust against the realities of boundedly rational behavior at the level of the individual agent (see the third theme below).
A second main theme in my research is a concern with questions of measurement and communication. How can an agent articulate her beliefs and preferences in numerical terms which have a direct material significance? How can the agent credibly reveal those beliefs and preferences to a disinterested observer or (more importantly) to another agent with whom she may be in competition? How do agents arrive at a state of "common knowledge" of each other's beliefs and preferences, and how precise can we expect this knowledge to be? To what extent can beliefs be separated from values--and does it matter? What are the practical implications of "reciprocal expectations" of rationality in games and markets? If agents were behaving irrationally, how would we know? What dynamic or institutional forces drive agents to "equilibrium" positions?
My approach to these questions is to assume that communication among agents, or between agents and an observer, is mediated by material transactions, usually involving money. In this respect I follow in the tradition of de Finetti (1937, 1974), who defined subjective probability in terms of the odds at which an agent would bet money on or against the occurrence of an event at the discretion of an opponent. Such an approach is advantageous for several reasons. First, the fundamental measurements explicitly involve two agents: one who offers to bet or trade and another who may act upon the offer. Thus, transactions among agents enter the theory at the most primitive level, setting the stage for models of inter-agent behavior. Second, money plays a distinguished role as a yardstick for measuring beliefs and values, which parallels the role it plays in real economic systems. Third, and most importantly, the core principle of rationality in this setting is simply the avoidance of sure loss. De Finetti referred to this as coherence, but it is known elsewhere as the Dutch book argument or the no-arbitrage principle. De Finetti's approach traditionally has been used to model beliefs (i.e., probabilities) alone, under an assumption of constant marginal utility for money. My work shows that money-based measurements can be applied to preferences as well as beliefs, that this measurement process can be carried out in the presence of nonconstant marginal utility for money, and that it leads to a new synthesis of decision analysis, game theory, and market theory.
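As a concrete illustration of the coherence standard, here is a minimal computational sketch (my own formulation and numbers, not de Finetti's notation): announced betting rates are coherent precisely when no combination of the offered bets, with stakes chosen by the opponent, yields the opponent a strictly positive payoff in every state. The check can be posed as a small linear program.

```python
import numpy as np
from scipy.optimize import linprog

def dutch_book_gain(indicators, rates):
    """indicators[w, i] = 1 if event i occurs in state w; rates[i] = announced betting rate.
    Returns the largest payoff the opponent can guarantee in every state using
    stakes in [-1, 1]; a strictly positive value means the rates admit a Dutch book."""
    n_states, n_events = indicators.shape
    # Decision variables: stakes lambda_1..lambda_n and the guaranteed gain eps.
    c = np.zeros(n_events + 1)
    c[-1] = -1.0  # linprog minimizes, so minimize -eps in order to maximize eps
    # For each state w:  eps - sum_i lambda_i * (indicators[w, i] - rates[i]) <= 0
    A_ub = np.hstack([-(indicators - rates), np.ones((n_states, 1))])
    b_ub = np.zeros(n_states)
    bounds = [(-1.0, 1.0)] * n_events + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[-1]

# Example: betting rate 0.7 on an event A and 0.5 on its complement sums to more
# than one, so the opponent can lock in a sure gain (here 0.2 per unit stake).
indicators = np.array([[1.0, 0.0],   # state in which A occurs
                       [0.0, 1.0]])  # state in which A does not occur
print(dutch_book_gain(indicators, np.array([0.7, 0.5])))
```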
A third theme in my research is an effort to reconcile normative and behavioral views of rationality. Normative theory models the decision processes of idealized economic agents, whereas behavioral theory describes and interprets decision making as it actually happens in the laboratory and the field. There has long been a behavioral countermovement in rational choice research, dating back to work of Herbert Simon and others in the 1950's, but in the last 15 years or so there has been an explosive growth of work in this area. Systematic violations of the expected-utility hypothesis have been demonstrated in a variety of laboratory situations and corroborated by studies of how decision making typically occurs in organizations and markets. Real economic agents often appear to be rule-followers, role-players, imitators, and heuristic problem-solvers rather than paragons of consistency and numerical optimization. These results suggest that normative theories should be formulated so as to allow for the bounded rationality of real economic agents and the role of institutions in shaping decision-making behavior. A number of generalized normative theories of "non-expected utility" have been proposed in recent years, but none has succeeded particularly well in giving a parsimonious fit to a wide range of experimental data and at the same time providing a tractable basis for economic modeling. My own approach has been to focus on relaxing those assumptions of the standard theory which are most implausible from a behavioral viewpoint (especially the completeness, or perfect precision, of beliefs and preferences) and to focus on methods of measurement and standards of rationality which are closely related to those found in real institutions (e.g., the use of money as a medium of communication and exchange, and the avoidance of arbitrage).
The following sections discuss the contributions of specific papers in various topic areas.
2. RESEARCH ON GAME THEORY
Game theory generally starts from the assumption that the "rules of the game" (i.e., the players' utilities for outcomes and probabilities for states of nature) are common knowledge, and it then grapples with the problem of simultaneous expected-utility maximization.
"Thus each participant attempts to maximize a function... of which he does not control all variables. This is certainly no maximum problem but a disconcerting mixture of several conflicting maximum problems. Every participant is guided by another principle and neither determines all variables which affect his interest. This kind of problem is nowhere dealt with in classical mathematics.'' (von Neumann and Morgenstern, Theory of Games and Economic Behavior)
Numerous solution concepts have been proposed for this problem, of which the most widely used is still Nash's (1951) equilibrium concept: the players should select pure or independently randomized strategies which are best replies to each other. Harsanyi (1967) generalized Nash's concept to the case of incomplete-information games (where there is uncertainty about states of nature) by assuming that the players hold a common prior distribution over states, and his "common prior assumption" has been widely used in other economic models. Over the last 15 years there has been considerable interest in refinements of Nash equilibrium (e.g., "perfect" or "sequential" equilibrium) and also in coarsenings (e.g., "rationalizability," "correlated equilibrium," "communication equilibrium"). Aumann (1987) has argued that correlated equilibrium (his coarsening of the Nash concept which allows correlated strategies) is "the" expression of Bayesian rationality in games, but this claim---which involves an appeal to the common prior assumption---has been controversial. Meanwhile, decision analysts generally have been skeptical about any game-theoretic solution concept which circumscribes the beliefs that one agent is permitted to hold about the actions of another (Kadane and Larkey 1982).
The paper "Coherent Behavior in Noncooperative Games" (with Kevin McCardle) shows that, if the rules of the game are revealed through material (i.e. money-based) measurements in the spirit of de Finetti, then common knowledge of rationality takes on the following simple and precise definition: the players should not expose themselves collectively to a sure loss---i.e., they should be jointly coherent. This simple requirement captures the intuitive idea of an infinite regress of reciprocal expectations of rationality, namely that the players should not behave irrationally as individuals, nor bet on each other to behave irrationally, nor bet on each other to bet on each other to behave irrationally, and so on. (Significantly, the same requirement---i.e., collective avoidance of sure loss, or arbitrage---also characterizes competitive equilibria in markets. More about this below.) It is then proved that this requirement is satisfied if and only if the outcome of the game is one that occurs with positive probability in a correlated equilibrium of the game. In other words, rationality requires the players to behave as if they had implemented Aumann's concept, of which Nash's concept is a special case. This paper also gives an elementary proof of the existence of correlated equilibria, which had been a significant unsolved problem in game theory since Aumann's original (1974) definition of the concept. (Another elementary existence proof was developed independently and roughly contemporaneously by Hart and Schmeidler (1989), although our proof is mathematically simpler and somewhat more intuitive. It uses a Markov chain argument that has subsequently been adapted by Myerson (1995) to define the solution concept of "dual reduction.")
The preceding results are generalized to incomplete-information games in "Joint Coherence in Games of Incomplete Information", where they are shown to lead to a correlated generalization of Harsanyi's Bayesian equilibrium concept (in games without mechanical communication devices) or to the communication equilibrium concept (in games with such devices). Harsanyi's assumption of the common prior is seemingly vindicated--but these results are obtained under a simplifying assumption of constant utility for money. If nonconstant utility for money is (realistically) allowed, a very different picture emerges, as described in the more recent paper "Arbitrage-Free Correlated Equilibria." Because of portfolio effects, we may expect agents' utilities for money to depend on the outcomes of events. The odds at which they will bet on such events---i.e., their "revealed" probabilities under de Finetti's elicitation method---will then depend on their utilities for money as well as their probabilities. More precisely, an agent's revealed probability distribution will be proportional to the product of her true probabilities and her state-dependent marginal utilities for money. This amalgam of probabilities and utilities is known as a risk neutral probability distribution in the literature of asset pricing. Similar distortions will affect the agents' revealed utilities for outcomes. Therefore, the apparent common prior, and the apparent equilibrium distribution supporting a jointly coherent outcome of the game, must be interpreted as risk neutral distributions, not the true distributions of any of the players: their true probability distributions will generally be heterogeneous. When the revelation of the rules of the game is thus endogenized through side-gambles among risk averse players, the possibility also arises that they may rewrite the rules in the process. Through an accumulation of gambles with each other or an observer, the agents may even strategically decouple their actions. For example, if risk-averse players are placed in a game which is zero-sum in terms of monetary payoffs, the elicitation process may lead to a Pareto superior allocation in which both players' payoffs are constant. This stylized example illustrates that the natural decoupling effects of monetary transactions in a public market often reduce the need for strategic behavior between agents.
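In my notation (a schematic summary, not a formula reproduced from the paper): if p(s) is the agent's true probability of state s and u'_s is her marginal utility for money in that state at her current wealth, then the betting rates she reveals are the risk neutral probabilities

```latex
\pi(s) \;=\; \frac{p(s)\, u_s'}{\sum_{t} p(t)\, u_t'} ,
```

which coincide with p(s) only when marginal utility for money is constant across states.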
The preceding results constitute a "dual" theory of rational strategic choice: rational agents are viewed not first and foremost as optimizers but as avoiders of sure loss. The two formulations of the agent's problem---optimization versus avoiding sure loss---are dual to each other in the sense of the duality theory of linear programming, but the latter formulation extends much more readily from the single-agent to the multi-agent case. The very existence of a dual formulation does not become apparent until the question is asked: how do the rules of the game become known? How might the players credibly measure each other's probabilities and utilities? What is the "acid test" for a mutually rational outcome? The dual formulation of rationality in the multi-agent case is simply a collective, or market-level, no-arbitrage condition---essentially the same condition that underlies models of rational asset pricing in finance and (as will be seen) competitive allocations in welfare economics. (Actually, the no-arbitrage version ought to be viewed as "primal" and the equilibrium concept as "dual," for reasons noted earlier, but in any case each is the dual of the other.)
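Schematically, and in my own notation rather than the paper's: let G be a matrix whose rows are the state-contingent monetary payoffs of the gambles the agents have declared acceptable, so that an observer may take any nonnegative combination of them with stakes lambda >= 0. The LP duality underlying these results is a theorem of the alternative:

```latex
\nexists\, \lambda \ge 0 \ \text{with}\ \lambda^{\top} G < 0
\quad\Longleftrightarrow\quad
\exists\, \pi \ge 0,\ \textstyle\sum_{s}\pi_{s} = 1,\ \text{with}\ G\pi \ge 0 ,
```

i.e., the agents cannot be forced into a sure loss if and only if there is a ("risk neutral") probability distribution under which every acceptable gamble has nonnegative expected value.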
And, as in asset pricing models, risk neutral probabilities emerge as important variables: the "Harsanyi doctrine" of the common prior is reinterpreted to apply to risk neutral probabilities, not true probabilities. In this way the objectionable assumption of homogeneous true beliefs is relaxed, while the main lines of Harsanyi's and Aumann's equilibrium concepts are retained. These results also show that there is no essential difference between uncertainty about actions of nature and uncertainty about actions of intelligent opponents: rational agents should avoid "making book" against themselves in either case.
The preceding results suggest that common-knowledge and common-prior assumptions elsewhere in economics might be subject to similar reinterpretations, which led me to reexamine Aumann's (1976) seminal paper on "Agreeing to Disagree." Here, Aumann gives a formal definition of common knowledge (since used by many others) and shows that when agents hold common prior beliefs and subsequently receive heterogeneous information leading them to revise those beliefs, their posterior probabilities cannot be common knowledge unless they, too, are identical. In other words, agents cannot pass from a state of common knowledge of homogeneous beliefs to common knowledge of heterogeneous beliefs. This result is widely perceived to imply that the receipt of heterogeneous information cannot provide incentives for trade among perfectly rational agents, because the disclosure of willingness to trade would render posterior beliefs common knowledge, at which point they would become identical. Variations on this no-expected-gain-from-trade result have been proved by Milgrom and Stokey (1982) and many others, and it is still viewed as problematic for models of securities markets with asymmetric information.
In "The Incoherence of Agreeing to Disagree,'' I show that no-expected-gain-from-trade results are illusory. If Aumann's and Milgrom-Stokey's results are re-cast in terms of material measurements, then the common prior must be interpreted as a risk neutral distribution and its construction must be viewed as the outcome of a process of trade, namely the agents' measurements of each others' beliefs through betting. Thus, to assume a common prior is to assume the existence of prior trade, erasing pre-existing differences in apparent beliefs.
As for posterior trade upon receipt of new information, the no-expected-gain-from-trade results are based on a misapplication of Bayes' theorem as a model of learning over time, and a consequent confusion between "conditional" and "posterior" probabilities. Aumann's and Milgrom-Stokey's results refer to the agents' conditional probabilities held today, given (hypothetically) some information which is expected to arrive tomorrow. These probabilities must agree, as a condition of joint coherence. However, the agents' posterior probabilities held tomorrow upon actual receipt of that information may still differ: beliefs may drift over time, like prices in a securities market, and this process of drift may create incentives for renewed trade. In fact, there will even be incentives to trade contingent claims in order to quantify the expected volatility of beliefs. (These results are discussed in a working paper in progress on "The Volatility of Beliefs.")
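In symbols (my own shorthand): writing P_i^0 for agent i's betting rates today and P_i^1 for her betting rates tomorrow, joint coherence of today's called-off bets requires the conditional rates to agree,

```latex
P_1^{0}(E \mid F) \;=\; P_2^{0}(E \mid F),
```

but it places no constraint forcing P_i^1(E), the rate actually quoted tomorrow after F is observed, to equal P_i^0(E | F); equating the two is an additional assumption about learning over time, not a consequence of coherence.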
"Arbitrage, Rationality, and Equilibrium" (with Kevin McCardle) develops the theme, suggested by earlier work noted above, that coherence or no-arbitrage is the core principle of rationality which unifies decision analysis, market theory, and game theory. This is the natural law of the market---if only because of the vigilance of arbitrageurs---and its effects trickle down to its constituents, giving them the appearance of expected-utility maximizers. Thus, rationality in economic systems can be viewed from the top down, rather than from the bottom up, and only bounded rationality need be assumed at the agent level. This paper traces the history of the arbitrage principle through statistics, economics, and finance, and discusses the role of risk neutral probabilities and the question of market completeness. New results are presented concerning the relation between decision analysis and options pricing methods and concerning the nature of equilibria in securities markets and exchange economies.
In section 5 of this paper, the role of arbitrage arguments in decision analysis is discussed. Many important business decisions (e.g., capital budgeting decisions) take place against the backdrop of a market where projects can be financed and risks can be hedged by trading securities. A variety of analytic methods are used in practice to solve such problems. A popular "naive" form of decision analysis is the method of discounted cash flows, in which all cash flows are discounted at a fixed rate which is chosen to reflect either the firm's or the market's attitude toward risks of a similar nature. Thus, risk preferences are confounded with time preferences. A more modern approach, which is claimed to be superior, is to use option pricing methods (a.k.a. "contingent claim analysis") in which, for each project under consideration, a portfolio of securities and options yielding the same cash flows is constructed (where possible), and the project is judged worthwhile if its cost is less than the price of the portfolio. In such a case, the firm could conceivably sell the portfolio short while undertaking the project, thus reaping an arbitrage profit. A mathematically equivalent "dual" method is to compute risk neutral probabilities from market prices for contingent claims and then discount all cash flows at the market risk-free interest rate and compute expectations according to the risk neutral probabilities. The paper shows that standard methods of decision analysis, in which the firm's own subjective probabilities and utilities are used to select among projects, yield precisely the same results as option pricing methods when the decision problem is properly extended by including the real possibilities for borrowing at market rates and investing in securities. In this case, the market acts like a "heat sink," pulling the risk neutral probabilities of the decision maker in line with those of the market in any optimal decision. The case of complete markets is emphasized here; the more general case of incomplete markets is analyzed in a later paper (with Jim Smith) on "Valuing Risky Projects: Option Pricing Theory and Decision Analysis" (see below).
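The logic of the option-pricing approach and its "dual" can be illustrated with a minimal one-period sketch (my own numbers, not an example from the paper): a project's payoff is replicated by a portfolio of a stock and risk-free borrowing, and its value can equivalently be computed as a discounted expectation under risk neutral probabilities.

```python
s0, su, sd = 100.0, 120.0, 90.0     # current stock price and its two possible future prices
rf = 0.05                           # risk-free rate
payoff_up, payoff_down = 30.0, 0.0  # project cash flows in the two states

# Replicating portfolio: delta shares of stock plus b dollars of risk-free lending.
delta = (payoff_up - payoff_down) / (su - sd)
b = (payoff_down - delta * sd) / (1 + rf)
replication_value = delta * s0 + b

# Equivalent "dual" computation: discounted risk-neutral expectation.
q = ((1 + rf) * s0 - sd) / (su - sd)  # risk-neutral probability of the up state
risk_neutral_value = (q * payoff_up + (1 - q) * payoff_down) / (1 + rf)

assert abs(replication_value - risk_neutral_value) < 1e-9
print(replication_value)  # the project is worthwhile if its cost is below this value
```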
In section 6 of "Arbitrage, Rationality, and Equilibrium," a conjugate relationship is shown to hold between the multivariate normal distribution, the exponential utility function, and the quadratic wealth function with respect to the calculation of risk neutral probabilities. Thus, if the agent's subjective probability distribution over security returns is multivariate normal, and her true utility function is exponential, and her wealth is a quadratic function of the returns, then her risk neutral distribution remains in the multivariate normal family. Multivariate normal distributions and exponential utility functions are widely used in financial economics. The novelty here lies in the explicit introduction of a quadratic wealth term representing holdings of nonlinear contingent claims ("quadratic options"), whose effect is to shift the covariance matrix of the agent's risk neutral distribution. Using this conjugate relationship, the conditions for a common risk neutral distribution (i.e., no arbitrage) are derived, and it is shown that these conditions take the form of a CAPM-type relation (i.e., the expected excess return on a security equals its "beta" times the expected excess return on the market) in terms of aggregated subjective beliefs and risk preferences. The aggregation formulas are generalizations of Lintner's (1969) heterogeneous-expectations CAPM to the case of complete markets for contingent claims. The significance of this result is that it shows that the CAPM can be given a purely subjective interpretation, with no reference either to hypothetical "true" means and covariances or to historical sample statistics. It also suggests a role for quadratic options in asset pricing models. (The concept of a quadratic option has subsequently been discussed by Brennan (1995).)
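A sketch of the conjugate relationship in my own notation (the paper's parameterization may differ): suppose the agent's distribution over returns x is N(mu, Sigma), her utility is exponential with risk tolerance tau, and her wealth is w(x) = w_0 + a'x + (1/2)x'Bx. Since the risk neutral density is proportional to the marginal utility u'(w(x)) times the true density, completing the square gives another multivariate normal:

```latex
\pi(x) \;\propto\; e^{-w(x)/\tau}\,\phi(x;\mu,\Sigma) \;\propto\; \phi\!\left(x;\tilde{\mu},\tilde{\Sigma}\right),
\qquad
\tilde{\Sigma} = \left(\Sigma^{-1} + B/\tau\right)^{-1},
\quad
\tilde{\mu} = \tilde{\Sigma}\left(\Sigma^{-1}\mu - a/\tau\right).
```

The linear holdings a shift the mean, while the quadratic holdings B shift the covariance matrix, which is the role played by "quadratic options" in the text.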
Section 7 of the paper re-examines the classical welfare theorem relating a Pareto optimal wealth allocation to the existence of a competitive price system. It is shown that if agent preferences are revealed through willingness to trade commodities (one of which is money), then common knowledge of Pareto optimality is simply the no-arbitrage principle under another name, and by the standard duality argument this requires the existence of prices and marginal utilities with respect to which the existing allocation of wealth is competitive. In other words, when utilities are revealed via material measurements, the elimination of arbitrage opportunities is necessary and sufficient to drive the economy to a competitive equilibrium. (As in other settings, the fact that the measurement process involves incremental transfers of wealth is significant here: trade typically occurs at non-equilibrium prices.) The rationale for the existence of a competitive equilibrium has been debated since the time of Walras, and "tatonnement" mechanisms and other hypothetical coordination schemes for reaching equilibrium have been discussed. This result shows that, under conditions of common knowledge, a competitive equilibrium is merely what will remain after all the free lunches have been eaten.
4. RESEARCH ON DECISION ANALYSIS
"Coherent Decision Analysis with Inseparable Probabilities and Utilities" explores in detail the dual relationship between avoiding sure loss and maximizing expected utility in the case of a decision-analysis problem faced by a single agent. (This paper also digs a deeper foundation under the game-theoretic results discussed earlier.) It shows that, through the myopic acceptance of small monetary gambles, the agent can gradually reveal everything about her probabilities and utilities which is needed to determine her expected-utility-maximizing decision, and furthermore these gambles will expose her to a sure loss if she fails to choose that decision. Thus, expected-utility-maximizing behavior in the "grand world" of her original decision problem corresponds to coherent behavior in the "small world" of money-based measurements of her beliefs and preferences.
This approach to decision analysis addresses the problem of separating probabilities from utilities, which has recently received attention in papers by Kadane and Winkler (1988), Schervish, Seidenfeld, and Kadane (1990), and Karni and Schmeidler (1993). If we observe only an agent's material preferences---i.e., if we cannot credibly obtain purely "intuitive" judgments of belief---her probabilities and utilities will generally be confounded. If we try to elicit her probabilities via money bets, what we observe are risk neutral probabilities---products of her true probabilities and her marginal utilities for money. However, my paper shows that information about utilities can be elicited in a complementary way, such that the distorting effect of state-dependent marginal utility for money cancels out when differences in expected utility between decisions are calculated. Hence, the inability to separate probability from utility presents no difficulty, in principle, for either statistical decision theory or economics.
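A schematic version of the cancellation argument, in my notation rather than the paper's: let p(s) be true probabilities, u'_s the marginal utility for money in state s, and suppose the utility difference between acts a and b in state s is elicited as a monetary equivalent delta_s(a,b) = [U_s(a) - U_s(b)]/u'_s. Then the risk neutral expectation of the elicited quantities,

```latex
\sum_{s} \pi(s)\,\delta_s(a,b)
\;=\; \frac{\sum_{s} p(s)\,u_s'\,\big[U_s(a)-U_s(b)\big]/u_s'}{\sum_{t} p(t)\,u_t'}
\;=\; \frac{\sum_{s} p(s)\,\big[U_s(a)-U_s(b)\big]}{\sum_{t} p(t)\,u_t'} ,
```

has the same sign as the true expected-utility difference: the unknown marginal utilities enter both the revealed probabilities and the revealed (money-denominated) utilities and cancel.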
The role of risk neutral probabilities in decision analysis is also discussed in section 5 of "Arbitrage, Rationality, and Equilibrium" (as noted earlier) and in more detail in "Valuing Risky Projects: Option Pricing Theory and Decision Analysis." The latter paper (with Jim Smith) shows that options pricing methods and standard methods of decision analysis are fully consistent (when properly carried out) and can be profitably integrated. In complete markets, the options pricing framework provides a convenient separation of the grand decision problem into an "investment" problem (i.e., whether or not to undertake a project at a specified cost) and a "financing" problem (how to optimally borrow to pay for the project and/or hedge its risks by investing in securities). The investment problem can be solved using only market data, whereas the financing problem normally requires firm-specific data. However, in cases where markets are incomplete, option pricing methods do not yield a precise estimate of the value of a project or a complete separation of the grand problem into subproblems. We show that a partial separation result and a simple procedure for rolling back the decision tree can be obtained in incomplete markets under restrictions on preferences (essentially time-additive exponential utility functions).
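A minimal sketch of the kind of rollback procedure described above, under the stated restriction to exponential utility (my own toy numbers and function names, not the paper's algorithm): private, unhedgeable uncertainties are rolled back with certainty equivalents, while market uncertainties are valued with risk neutral probabilities and discounted at the risk-free rate.

```python
import math

# "Private" (unhedgeable) uncertainty: certainty equivalent under exponential
# utility with risk tolerance tau.
def private_ce(payoffs, probs, tau):
    return -tau * math.log(sum(p * math.exp(-x / tau) for p, x in zip(probs, payoffs)))

# "Market" (hedgeable) uncertainty: discounted expectation under risk neutral probabilities.
def market_value(payoffs, risk_neutral_probs, risk_free_rate):
    return sum(q * x for q, x in zip(risk_neutral_probs, payoffs)) / (1.0 + risk_free_rate)

# Toy two-stage rollback: in each market state the project yields a private gamble.
# First replace each private gamble by its certainty equivalent, then value the
# resulting market payoffs with risk neutral probabilities.
ce_up = private_ce(payoffs=[120.0, 80.0], probs=[0.5, 0.5], tau=200.0)
ce_down = private_ce(payoffs=[60.0, 20.0], probs=[0.5, 0.5], tau=200.0)
print(market_value([ce_up, ce_down], risk_neutral_probs=[0.4, 0.6], risk_free_rate=0.05))
```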
In "Indeterminate Probabilities on Finite Sets" and related work, I construct a theory of confidence-weighted subjective probabilities. This is a model of "second-order uncertainty" which addresses some questions about the determinacy of beliefs which have perenially arisen in Bayesian inference and decision theory. Most persons instinctively feel that some of their beliefs can be quantified more precisely than others and that such differences in precision should somehow be taken into account in any decision analysis or inference based on those beliefs. Skepticism about the universal precision of subjective probabilities is widely felt to have hindered the acceptance of Bayesian methods, and ad hoc methods of sensitivity analysis are often applied to prior probabilities in practice.
Many decision theorists have tried to generalize the foundations of subjective probability theory to embrace the intuitive notion of precision or confidence associated with probabilities. One approach is to represent beliefs by intervals of probabilities rather than point-valued probabilities, and a consistent theory of interval-valued probabilities can be obtained by merely dropping the axiom of completeness from the standard theory of de Finetti or Savage. (The completeness axiom requires that, given any event and any set of odds, an agent must be willing to bet either on it or against it---or both.) However, this approach still leaves some questions unanswered. For example, why should the endpoints of a probability interval themselves be precisely determined? How should sensitivity analysis be carried out beyond these endpoints? How should an inconsistent assessment of interval probabilities be reconciled? A number of researchers have previously tried, without great success, to develop a nontrivial and axiomatically sound model of second-order uncertainty to address these issues.
The theory of confidence-weighted probabilities fulfills this goal. I show that if the axiom of completeness is dropped and the axiom of transitivity is also weakened---while still holding on to coherence---then beliefs are described by lower and upper probabilities qualified by numerical confidence weights. Thus, for example, an agent might assert with 100% confidence that the probability of an event is at least 0.5, and with 50% confidence that it is at least 0.6. In terms of material measurements, this means that the stake for which she would bet on it at a rate of 0.6 is only half as large as the stake for which she would bet on it at a rate of 0.5. (A modified version of de Finetti's elicitation method is applicable here, in which the betting opponent may take only convex combinations of offered bets rather than arbitrary non-negative linear combinations.) The confidence-weighted probabilities describing an agent's belief in an event are summarized by a concave function on the unit interval, which can be loosely interpreted as the indicator function of a "fuzzy" probability interval or as an "epistemic reliability function" in the terminology of Gardenfors and Sahlin (1982, 1983). In fact, the laws of confidence-weighted probabilities are quite similar to the laws of fuzzy probabilities used by some authors, but the model is not based in fuzzy set theory. Rather, it provides independent support for the idea that fuzzy sets might be useful for representing imprecise personal probabilities and expectations (which are subjectively determined subsets of the real numbers in the first place), but not necessarily for representing other forms of cognitive imprecision.
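To restate the example in my own (hypothetical) notation: if sigma(r) denotes the maximum stake at which the agent will bet on the event at rate r, then the confidence weight attached to the assertion "the probability is at least r" can be read as the normalized stake,

```latex
\rho(r) \;=\; \frac{\sigma(r)}{\max_{q}\sigma(q)},
\qquad\text{e.g.}\qquad
\sigma(0.5) = \$100,\ \ \sigma(0.6) = \$50
\;\;\Longrightarrow\;\;
\rho(0.5) = 1,\ \ \rho(0.6) = 0.5 .
```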
The companion paper on "Decision Analysis with Indeterminate or Incoherent Probabilities" applies confidence-weighted probabilities to the analysis of a finite-state decision problem and shows in detail how they can be used to perform sensitivity analysis and to reconcile incoherence. This line of work shows that the standard Bayesian model of determinate probabilities and the more general model of interval-valued probabilities can both be embedded in a more general framework of confidence-weighted probabilities which is axiomatically sound, which satisfies the intuitive desiderata for a theory of second-order uncertainty, and which justifies a pragmatic approach to sensitivity analysis and the reconciliation of inconsistency in decision-analysis models.
Another potentially interesting application of confidence-weighted probabilities is to the problem of combining expert judgments. A well-known impossibility theorem (Genest and Zidek 1986) states that there can be no formula for combining ordinary probability judgments from different individuals that preserves consensus (where it exists) and simultaneously satisfies Bayesian principles of updating probabilities upon receipt of new information. However, confidence-weighted probabilities from different individuals can be combined by a simple linear pooling formula which does preserve consensus and respect Bayesian updating. In other words, the confidence-weighted probability model is "closed" under the operation of combining judgments. The intuitive reason for this is that the problem of combining judgments inherently involves tradeoffs among the possibly-conflicting beliefs of several individuals who may have differing degrees of reliability or expertise. Such tradeoffs implicitly require some kind of relative weighting of judgments both within and between individuals. The confidence-weighted probability model has such a set of weights built-in, and so a combination of confidence-weighted probability judgments from different individuals has the same qualitative properties as a set of judgments from a single individual. (Or to put it another way, the confidence-weighted probability model represents a somewhat schizophrenic individual whose judgments may have differing degrees of confidence and may even be inconsistent. Such an individual can be compared to a roomful of experts with different opinions.)
A more recent paper on "The Shape of Incomplete Preferences" presents a joint axiomatization of subjective probability and utility without the completeness axiom. This result combines the features of Smith's (1961) theory of interval-valued probabilities with those of Aumann's (1962) theory of interval-valued utilities, and provides a foundation for methods of robust Bayesian statistics. It has close connections to recent work on partially ordered preferences by Seidenfeld, Schervish, and Kadane (1995) but emphasizes duality arguments more strongly.
The paper "Should Scoring Rules be 'Effective'?" considers the elicitation of probabilities via scoring rules (reward or penalty functions) rather than bets, and examines a number of desiderata which have been proposed for such rules. Characterizations of important classes of scoring rules are given for both continuous and discrete probability forecasts, and theorems are proved concerning the relationships between scoring rules and metrics for measuring distances between probability distributions. It is argued that the only essential property of a scoring rule is that of properness (i.e., rewarding honest reporting of probabilities) and that the choice of a scoring rule should be tailored, insofar as it is possible, to the decision problem for which the probability is relevant.
The note "Blau's Dilemma Revisited" examines the controversy between Bayesian utility maximization and chance-constrained programming as paradigms of choice under uncertainty, pointing out some of the shortcomings of the latter.
Last updated March 27, 1998