- Ur-Priors, Conditionalization, and Ur-Prior Conditionalization, forthcoming in Ergo.
- Conditionalization is a widely endorsed rule for updating one's beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule, one that appeals to what I'll call "ur-priors". But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I'll call "loaded evidential standards", is especially promising.
- Can All-Accuracy Accounts Justify Evidential Norms?, forthcoming in Epistemic Consequentialism, eds. Kristoffer Ahlstrom-Vij and Jeff Dunn.
- Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such "all-accuracy" or "purely alethic" approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
- The Meta-Reversibility Objection, forthcoming in Time's Arrow and the Probability Structure of the World, eds. Loewer, Weslake, and Winsberg.
- One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this "indifference approach" face reversibility worries: while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another: the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
- Understanding Conditionalization, The Canadian Journal of Philosophy, 45 (2016): 767-797.
- At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject's evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
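The rule at issue can be sketched numerically. The following toy example (hypothetical numbers, not drawn from the paper) applies Conditionalization in its standard Bayesian form: upon receiving evidence E, one's new credence in each hypothesis H is one's old conditional credence P(H|E), computed here via Bayes' theorem.

```python
from fractions import Fraction

def conditionalize(priors, likelihoods):
    """Return posterior credences P(H|E) given priors P(H) and likelihoods P(E|H)."""
    # P(E) = sum over hypotheses of P(E|H) * P(H)
    p_evidence = sum(likelihoods[h] * priors[h] for h in priors)
    # Conditionalization: the new credence in H is P(H|E) = P(E|H) * P(H) / P(E)
    return {h: likelihoods[h] * priors[h] / p_evidence for h in priors}

# Toy credences: two hypotheses; the evidence E is more likely under H than not-H.
priors = {"H": Fraction(1, 2), "not-H": Fraction(1, 2)}
likelihoods = {"H": Fraction(4, 5), "not-H": Fraction(3, 10)}
posteriors = conditionalize(priors, likelihoods)
# posteriors["H"] == Fraction(8, 11): credence in H rises from 1/2 to 8/11
```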
- No Work For a Theory of Universals, with Maya Eddon, in Jonathan Schaffer and Barry Loewer (eds.) A Companion to David Lewis, (2015): 116-137.
- Several variants of Lewis's Best System Account of Lawhood have been proposed that avoid its commitment to perfectly natural properties. There has been little discussion of the relative merits of these proposals, and little discussion of how one might extend this strategy to provide natural property-free variants of Lewis's other accounts, such as his accounts of duplication, intrinsicality, causation, counterfactuals, and reference. We undertake these projects in this paper. We begin by providing a framework for classifying and assessing the variants of the Best System Account. We then evaluate these proposals, and identify the most promising candidates. We go on to develop a proposal for systematically modifying Lewis's other accounts so that they, too, avoid commitment to perfectly natural properties. We conclude by briefly considering a different route one might take to developing natural property-free versions of Lewis's other accounts, drawing on recent work by Williams.
- Autonomous Chances and the Conflicts Problem, in Alastair Wilson (ed.), Asymmetries in Time and Chance, (2014): 45-67.
- In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis's is that they allow the special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
- Impermissive Bayesianism, Erkenntnis, S6 (2013): 1-33.
- This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White's (2010) defense of Indifference Principles is unsuccessful. Second, it contends that White's (2005) arguments against permissive views do not succeed.
- Review of Toby Handfield's A Philosophical Guide to Chance, in Notre Dame Philosophical Reviews, 1/12/2013.
- This is a review of Toby Handfield's book, A Philosophical Guide to Chance, that discusses Handfield's Debunking Argument against realist accounts of chance.
- Person-Affecting Views and Saturating Counterpart Relations, Philosophical Studies, 158 (2012): 257-287.
- In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
- Representation Theorems and the Foundations of Decision Theory, with Jonathan Weisberg, Australasian Journal of Philosophy, 89 (2011): 641-663.
- Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
- Contemporary Approaches to Statistical Mechanical Probabilities: A Critical Commentary; Part I: The Indifference Approach, Philosophy Compass, 5 (2010): 1116-1126.
- Contemporary Approaches to Statistical Mechanical Probabilities: A Critical Commentary; Part II: The Regularity Approach, Philosophy Compass, 5 (2010): 1127-1136.
- This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
- Binding and Its Consequences, Philosophical Studies, 149 (2010): 49-71.
- In "Bayesianism, Infinite Decisions, and Binding", Arntzenius, Elga and Hawthorne present cases in which agents who cannot bind themselves are driven to choose sequences of actions with disastrous consequences, given standard decision theory. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
- Unravelling the Tangled Web: Continuity, Internalism, Non-Uniqueness and Self-Locating Beliefs, in Tamar Szabo Gendler and John Hawthorne (eds.), Oxford Studies in Epistemology, Volume 3, (2010): 86-125.
- A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
- Unravelling the Tangled Web: Continuity, Internalism, Non-Uniqueness and Self-Locating Beliefs, (extended version).
- This is an extended version of the Tangled Web paper, which includes a discussion of how these issues bear on the Many Worlds interpretation of quantum mechanics.
- Two Mistakes Regarding The Principal Principle, British Journal for the Philosophy of Science, 61 (2010): 407-431.
- This paper examines two mistakes regarding David Lewis' Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: the thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance-credence relation has a distinct advantage over its rivals.
- Sleeping Beauty and the Dynamics of De Se Beliefs, Philosophical Studies, 138 (2008): 245-269.
- This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga's and Lewis' treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they're not as bad as those of Elga's account, and no worse than those of Lewis' account.
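The disagreement among these accounts can be made concrete with a simple weighted count over the standard protocol (fair coin; one awakening on heads, two on tails). The computation below just exhibits the two familiar candidate credences, the per-awakening value associated with Elga-style answers and the unconditional chance associated with Lewis-style answers; it is not an argument for either:

```python
from fractions import Fraction

# Standard sleeping beauty protocol: a fair coin is tossed;
# heads -> 1 awakening, tails -> 2 awakenings.
p_heads, p_tails = Fraction(1, 2), Fraction(1, 2)
awakenings = {"heads": 1, "tails": 2}

# Heads' share of awakening-weight: (1/2 * 1) / (1/2 * 1 + 1/2 * 2) = 1/3
heads_weight = p_heads * awakenings["heads"]
total_weight = heads_weight + p_tails * awakenings["tails"]
thirder_credence = heads_weight / total_weight   # 1/3, the Elga-style answer
halfer_credence = p_heads                        # 1/2, the Lewis-style answer
```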
- Three Proposals Regarding A Theory of Chance, Philosophical Perspectives 19 (2005): 281-307.
- I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
- Clark and Shackel on the Two-Envelope Paradox, with Jonathan Weisberg, Mind 112 (2003): 685-689.
- Clark and Shackel (2000) have argued that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value computations for a solution. Clark and Shackel also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. We argue that Clark and Shackel's proposal requires a revision of standard decision theory. Understood as such, we argue that their proposal is both implausible and unnecessary.
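The expected-value computations at issue can be sketched with exact arithmetic. In the standard setup, one envelope holds x and the other 2x, and yours is equally likely to be either; symmetry then makes keeping and swapping equal in expectation, while the fallacious computation that treats your own amount a as fixed yields the paradoxical 5a/4. The amounts below are hypothetical:

```python
from fractions import Fraction

x = Fraction(10)  # hypothetical smaller amount; the other envelope holds 2x

# Symmetric computation: you hold x or 2x with probability 1/2 each,
# so both keeping and swapping are worth 3x/2 in expectation.
e_keep = Fraction(1, 2) * x + Fraction(1, 2) * (2 * x)
e_swap = Fraction(1, 2) * (2 * x) + Fraction(1, 2) * x

# Fallacious computation: treat your amount a as fixed and the other
# envelope as holding a/2 or 2a with probability 1/2 each, giving 5a/4.
a = x
e_swap_naive = Fraction(1, 2) * (a / 2) + Fraction(1, 2) * (2 * a)
```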