A Veritist's Guide to Strict Propriety
Epistemic utility functions must be strictly proper. That is, on any reasonable way of measuring epistemic utility, every probabilistically coherent credence function must expect itself to be strictly better than any other credence function. This critical assumption underpins most of the results in epistemic utility theory. Unfortunately, the most popular arguments for strict propriety are bad ones. As a result, the whole edifice of epistemic utility theory is on shaky ground. The purpose of this paper is to shore up that ground. To that end, I first show that strict propriety is entailed by another, more fundamental constraint on epistemic utility functions: coherent admissibility. Coherent admissibility says that reasonable measures of epistemic utility never render probabilistically coherent credence functions dominated. Then I provide a new argument for coherent admissibility. Unlike extant arguments for coherent admissibility, my argument is axiological rather than deontological. It proceeds not from assumptions about which credence functions are permissible or impermissible, but rather from assumptions about what it takes for a utility function to reflect the right type of value; what it takes for a utility function to evaluate credence functions qua assignments of truth-value estimates. To reflect the right type of value, I argue, an epistemic utility function must be coherently admissible.
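Strict propriety can be illustrated numerically with the Brier score, a standard strictly proper measure of inaccuracy. The sketch below, for a single proposition, checks that an agent with credence p expects her own credence to be less inaccurate than any alternative report q; the helper name `expected_brier` is invented for illustration.

```python
def expected_brier(p: float, q: float) -> float:
    """Expected Brier inaccuracy, by p's lights, of reporting credence q.

    With credence p that X is true, the inaccuracy of report q is
    (1 - q)^2 if X is true and q^2 if X is false; take the p-weighted
    average of the two.
    """
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

# Strict propriety: p expects itself to do strictly better than any q != p.
p = 0.7
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda q: expected_brier(p, q))
print(best)  # 0.7 -- the expected-inaccuracy minimiser is p itself
```

The same check fails for improper rules (e.g. the linear score |truth-value - q|), which is why strict propriety is a substantive constraint rather than a triviality.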
The Simplest Possible Accuracy Argument for the Principal Principle
Pettigrew (2012, 2013, 2016) provides a series of increasingly sophisticated accuracy arguments for Lewis' Principal Principle. These arguments, however, presuppose a substantive constraint of rationality governing conditional credences. In particular, they presuppose what Hajek (2003) calls the ratio constraint: for any propositions X and Y, a rational agent's credence for X conditional on Y, i.e., c(X|Y), equals the ratio of her unconditional credences for X&Y and Y, respectively, i.e., c(X&Y)/c(Y). The aim of this paper is to provide a simpler accuracy argument for the Principal Principle. Moreover, rather than presupposing the ratio constraint, the argumentative strategy employed here could plausibly be used to justify it.
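The ratio constraint is easy to state computationally. The sketch below uses an invented four-world credence function to show how c(X|Y) is recovered as the ratio c(X&Y)/c(Y); the numbers are purely illustrative.

```python
# A toy credence function over four mutually exclusive, jointly
# exhaustive worlds (values invented for illustration).
worlds = {
    "X&Y": 0.2,
    "X&~Y": 0.3,
    "~X&Y": 0.1,
    "~X&~Y": 0.4,
}

c_XY = worlds["X&Y"]                   # c(X & Y)
c_Y = worlds["X&Y"] + worlds["~X&Y"]   # c(Y), summing the Y-worlds
c_X_given_Y = c_XY / c_Y               # the ratio constraint: c(X|Y) = c(X&Y)/c(Y)
print(c_X_given_Y)  # roughly 0.667
```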
Moss (2013) argues that partial beliefs, or credences, can amount to knowledge in much the way that full beliefs can. This paper explores a new kind of objective Bayesianism designed to take us some way toward securing such ‘probabilistic knowledge’. Whatever else it takes for an agent’s credences to amount to knowledge, their success, or accuracy, must be the product of cognitive ability or skill. The brand of Bayesianism developed here helps ensure this ability condition is satisfied. Cognitive ability, in turn, helps make credences valuable in other ways: it helps mitigate their dependence on epistemic luck, for example. As a result, this new set of Bayesian tools delivers credences that are particularly good candidates for probabilistic knowledge. In addition, examining the character of these credences teaches us an important lesson about what, at bottom, cognitive ability and probabilistic knowledge demand from us: they demand that we give theoretical hypotheses equal consideration, in a certain sense, rather than equal treatment.
Unspecific evidence calls for imprecise credence. My aim is to vindicate this thought. First, I will pin down what it is that makes one's imprecise credences more or less epistemically valuable. Then I will use this account of epistemic value to delineate a class of reasonable epistemic scoring rules for imprecise credences. Finally, I will show that if we plump for one of these scoring rules as our measure of epistemic value or utility, then a popular family of decision rules recommends imprecise credences. In particular, a range of Hurwicz criteria, which generalise the Maximin decision rule, recommend imprecise credences. If correct, the moral is this: an agent who adopts precise credences, rather than imprecise ones, in the face of unspecific and incomplete evidence, goes wrong by gambling with the epistemic utility of her doxastic state in too risky a fashion. Precise credences represent an overly risky epistemic bet, according to the Hurwicz criteria.
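The Hurwicz criteria mentioned above can be sketched as follows: an option is scored by a weighted average of its best-case and worst-case utility, with an optimism parameter alpha, and alpha = 0 recovers Maximin. The utilities and alpha value below are invented for illustration, not drawn from the paper.

```python
def hurwicz(utilities, alpha):
    """Hurwicz score: alpha * best case + (1 - alpha) * worst case."""
    return alpha * max(utilities) + (1 - alpha) * min(utilities)

# Possible epistemic utilities across states for two doxastic options
# (numbers invented for illustration):
precise = [1.0, 0.0]     # high upside, but risks the worst outcome
imprecise = [0.7, 0.4]   # more modest upside, better worst case
alpha = 0.3              # a fairly risk-averse Hurwicz weighting

print(hurwicz(precise, alpha), hurwicz(imprecise, alpha))
```

With this weighting the imprecise option scores higher, reflecting the abstract's moral: under sufficiently risk-averse Hurwicz criteria, precise credences are an overly risky epistemic bet.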
According to accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms — Probabilism, Conditionalization, the Principal Principle, etc. — have their binding force in virtue of helping to secure this good. To make this idea precise, accuracy-firsters invoke Epistemic Decision Theory (EpDT) to determine which epistemic policies are the best means toward the end of accuracy. Hilary Greaves and others have recently challenged the tenability of this programme. Their arguments purport to show that EpDT encourages obviously epistemically irrational behavior. We develop firmer conceptual foundations for EpDT. First, we detail a theory of praxic and epistemic good. Then we show that, in light of their very different good-making features, EpDT will evaluate epistemic states and epistemic acts according to different criteria. So, in general, rational preference over states and acts won’t agree. Finally, we argue that based on direction-of-fit considerations, it’s preferences over the former that matter for normative epistemology, and that EpDT, properly spelt out, arrives at the correct verdicts in a range of putative problem cases.
The twin pillars of Levi’s epistemology are his infallibilism and his corrigibilism. According to infallibilism, any agent is committed to being absolutely certain about anything she fully believes. From her own perspective, there is no serious possibility that any proposition she believes is false. She takes her own beliefs to be infallible, in this sense. But this need not make her dogmatic, on Levi’s view. According to his corrigibilism, an agent might come to have good reason to change her beliefs and respond accordingly. She might also recognise this possibility ex ante, despite being absolutely certain that her current beliefs are true. This brief review explores whether Levi’s infallibilism can be made to sit comfortably both with his account of rational belief change and with his account of epistemic value (or with any reasonable account of epistemic value, for that matter). I argue that it cannot.
If chances are propensities, is there any good reason to expect them to be probabilities? I will offer a new answer to this question. It comes in two parts. First, I will defend an accuracy-centred account of what it is for a causal system to have precise propensities in the first place. Second, I will prove that, given some pretty weak assumptions about the nature of comparative causal dispositions, and some fairly standard assumptions about reasonable measures of inaccuracy, propensities must be probabilities.