After an opening section surveying some possible alternative ways of employing semantic plasticity to handle the puzzles, this chapter discusses two challenges to the view developed in chapters 11 and 12. One involves the threat of rampant error in counterfactual speech reports. The second involves certain uncomfortable consequences of applying our favoured treatment of words like ‘that’ and ‘table’ to words like ‘I’, ‘you’, ‘person’, ‘thinker’, and ‘conscious’. We show how considerations of semantic plasticity militate in the direction of a kind of “metaphysical misanthropy”, and explore its ethical ramifications.
This is the second of two chapters devoted to a special subclass of Tolerance Puzzles based on ‘indiscernible modality’, on which qualitative truths are automatically necessary. This chapter develops our favoured solution to these puzzles, which involves denying the qualitativeness of properties like being a table. We introduce a metaphysical notion of “aboutness” which can be used to probe the sources of non-qualitativeness, and consider some special challenges that arise on the assumption that there could be new objects that aren’t among the objects there actually are.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally, we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
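As a minimal numerical sketch of why geometric averaging sits well with a conditionalization constraint (a standard illustration, not necessarily the paper's own argument; all names and numbers below are illustrative): with equal weights, pooling credences geometrically and then conditioning on evidence agrees with conditioning each credence function first and then pooling, whereas arithmetic (linear) averaging in general does not commute with conditionalization.

    # Minimal sketch: geometric vs. arithmetic pooling and conditionalization.
    # Two agents, three possible worlds; names and numbers are illustrative.

    def normalize(c):
        total = sum(c.values())
        return {w: p / total for w, p in c.items()}

    def arithmetic_pool(c1, c2):
        # equal-weight linear average
        return normalize({w: 0.5 * c1[w] + 0.5 * c2[w] for w in c1})

    def geometric_pool(c1, c2):
        # equal-weight geometric average, renormalized
        return normalize({w: (c1[w] * c2[w]) ** 0.5 for w in c1})

    def condition(c, event):
        # Bayesian conditionalization on an event (a set of worlds)
        return normalize({w: p for w, p in c.items() if w in event})

    c1 = {"w1": 0.6, "w2": 0.3, "w3": 0.1}
    c2 = {"w1": 0.1, "w2": 0.3, "w3": 0.6}
    E = {"w1", "w2"}  # evidence ruling out w3

    for name, pool in [("arithmetic", arithmetic_pool), ("geometric", geometric_pool)]:
        pool_then_update = condition(pool(c1, c2), E)
        update_then_pool = pool(condition(c1, E), condition(c2, E))
        print(name, pool_then_update, update_then_pool)
    # The two geometric results agree (up to floating-point rounding);
    # the two arithmetic results differ.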
This book didn’t have to consist of exactly the sentences that it in fact contains: any one of its sentences could have been very different. But it could not have consisted of an entirely different collection of sentences, such as to make it a gothic novel or a treatise on wine-tasting. Other familiar objects are similarly capable of being moderately different, but not radically different, in various respects. But there are puzzling arguments which threaten these apparently obvious judgments, exploiting the fact that an appropriate sequence of small differences can add up to a radical difference. This book presents the first full-length treatment of these puzzles, using them as an entry point to a broad range of metaphysical questions about possibility, necessity, and identity. It introduces tools of higher-order modal logic which enable a rigorous treatment of the puzzles, and develops a strategy for resolving them based on a plenitudinous ontology of material objects, which induces fine-grained variability in the reference of words like ‘book’.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse (at least in a suitably idealized setting). This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
This chapter explores Tolerance Puzzles in which the operative modality is that of objective chance. We show that a principle of ‘Chance Fixity’, according to which facts about the chances at a given time are not themselves matters of chance at that time, is deeply embedded in ordinary and scientific reasoning about chance and rules out Iteration-denial for the relevant chance operators. We also develop a new ‘Robustness Puzzle’ in which the analogue of Hypertolerance is completely untenable. This puzzle turns on strengthenings of Tolerance claims to claims about high (conditional) chance, as opposed to mere positive chance.
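Stated schematically (our gloss, not necessarily the chapter's official formulation), writing Ch_t for the chance function at time t, Chance Fixity says that the time-t chances leave no chance open about what they themselves are:

\[
\mathrm{Ch}_t(p) = x \;\rightarrow\; \mathrm{Ch}_t\bigl(\mathrm{Ch}_t(p) = x\bigr) = 1.
\]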
This is the second of two chapters exploring the option of resolving various Tolerance Puzzles by accepting Hypertolerance, the conclusion that the objects in question could have been arbitrarily different in the respects relevant to the puzzle. This chapter considers what seems to us to be the most promising strategy for arguing against Hypertolerance, based on a physicalist supervenience principle. We show how this principle rules out Hypertolerance in certain “fine-grained” Tolerance Puzzles, and consider to what extent this generalises to other Hypertolerance claims.
This chapter presents the system of classical higher-order modal logic which will be employed throughout this book. Nothing more than a passing familiarity with classical first-order logic and standard systems of modal logic is presupposed. We offer some general remarks about the kind of commitment involved in endorsing this logic, and motivate some of its more non-standard features. We also discuss how talk about possible worlds can be represented within the system.
This chapter provides a general schema for regimenting a broad family of puzzles of modal variation. These puzzles begin with a ‘Tolerance’ premise according to which an object (or a certain kind of object) can differ in any small way along a certain parameter. This is supplemented with a ‘Non-contingency’ premise according to which the Tolerance premise is necessarily true if true at all, an ‘Iteration’ premise according to which anything possibly possible is possible, and a ‘Persistent Closeness’ premise according to which what counts as a ‘small difference’ is modally constant. These premises jointly imply the conclusion, ‘Hypertolerance’, that the object or objects in question can differ arbitrarily along the relevant parameter. We show how this schema is general enough to subsume puzzles involving time or objective chance, and discuss some difficulties that arise in trying to formulate compelling instantiations of the schema involving variation in originating matter.
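As an informal illustration of how the premises chain together (our gloss, not the book's official regimentation), let F_n say that the object a differs by n small steps along the parameter. Tolerance gives ◇F_1 a, and together with Non-contingency it guarantees that wherever F_n a holds, ◇F_{n+1} a holds too; so we get ◇◇F_2 a, hence ◇F_2 a by Iteration, and by repeating the step, ◇F_n a for arbitrarily large n, which is Hypertolerance. Persistent Closeness ensures that ‘one small step’ means the same thing at every stage. Schematically:

\[
\Diamond F_1 a,\quad \Box\,(F_n a \rightarrow \Diamond F_{n+1} a),\quad \Diamond\Diamond p \rightarrow \Diamond p \;\;\vdash\;\; \Diamond F_n a \ \text{for every } n.
\]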
This chapter presents and discusses a general schema that subsumes a variety of puzzles having to do with the modal behaviour of material objects, some new and some familiar. These puzzles involve ‘Robustness’ premises according to which certain objects of a given kind are counterfactually robust in certain respects; ‘Non-coincidence’ premises according to which distinct objects of that kind are incapable of coinciding; and ‘Non-distinctness’ premises that rule out the scenarios in which actually distinct objects could have been identical. These premises jointly entail an absurd conclusion.
This chapter takes up the question of how to motivate the crucial ‘Non-Contingency’ premise in the Tolerance Puzzles introduced in Chapter 2, a question that has received surprisingly little attention in the literature on these puzzles. We articulate and set aside some dubious motivations for the premise, including motivations which assimilate Tolerance Puzzles to the well-known Sorites Paradox. In place of these, we develop a ‘Security Argument’ for Non-contingency, based on the thought that it is not just a matter of chance or luck that we avoid error in believing the Tolerance premise.
This is the first of two chapters exploring the option of resolving various Tolerance Puzzles by denying Iteration, the claim that whatever is possibly possible is possible. In this chapter we grant for the sake of argument that Iteration fails for metaphysical necessity, and consider whether there are other Tolerance Puzzles which remain problematic even on that assumption. Our main focus is on puzzles involving ancestral metaphysical possibility—the status of being either possible, or possibly possible, or possibly possibly possible, or…—for which Iteration is guaranteed by our basic modal logic. We argue that plausible higher-order identities suggest that ancestral metaphysical possibility is not a trivial status even for those who deny Iteration for metaphysical possibility.
This is the first of two chapters exploring the option of resolving various Tolerance Puzzles by accepting Hypertolerance, the conclusion that the objects in question could have been arbitrarily different in the respects relevant to the puzzle. This chapter discusses two influential objections to certain Hypertolerance claims, one based on the doctrine of ‘Anti-haecceitism’ (according to which an object’s qualitative profile suffices for its identity), and another based on the doctrine of ‘Overlap Essentialism’ (according to which a table originally made of certain matter could not have been originally made of entirely non-overlapping matter). We consider some arguments for Overlap Essentialism from certain ‘sufficiency of origin’ principles, and discuss some difficult cases which put pressure on Overlap Essentialism.
This chapter further develops the framework introduced in the previous chapter. We suggest that the best approach to many Tolerance Puzzles involves some contextual flexibility, allowing not only for contexts in which Non-contingency is false but also for contexts in which Hypertolerance is true. We discuss how plasticity and plenitude can also be used to solve the Coincidence Puzzles introduced in chapter 4, and conclude by considering a range of open questions and case studies.
This chapter develops a strategy for resolving Tolerance Puzzles based on two central ideas. The first idea is a principle of ‘plenitude’, according to which any given material object coincides with innumerably many others differing from it in a wide variety of modal respects. The second idea is that because of this plenitude of candidate referents, the singular terms (like ‘this table’) and common nouns (like ‘table’) that feature in Tolerance Puzzles are subject to a high degree of semantic plasticity: small changes in the world, e.g. in the selection of parts to be made into tables, suffice to make a difference to what we refer to with these words. Such plasticity undermines the Security Argument for Non-contingency developed in chapter 2, by suggesting that even though Tolerance could easily have been false, Tolerance speeches robustly express truths.
This is the second of two chapters exploring the option of resolving various Tolerance Puzzles by denying Iteration, the claim that whatever is possibly possible is possible. This chapter argues for Iteration for metaphysical possibility, based on the premise that metaphysical possibility is the broadest form of possibility. Some reject this on the grounds that, for example, it is logically possible (although metaphysically impossible) that Hesperus is distinct from Phosphorus. We show that those who accept this premise should reject the form of existential generalization required to derive the conclusion that there is a form of possibility that attaches to the proposition that Hesperus is distinct from Phosphorus. We show how under certain attractive assumptions about the grain of higher-order reality one can show that there is a broadest form of possibility, and indeed define it in purely logical terms.
This is the first of two chapters devoted to a special subclass of Tolerance Puzzles based on ‘indiscernible modality’, on which qualitative truths are automatically necessary. The interest in these puzzles lies in the fact that there is a distinctive argument for Non-contingency based on the premise that properties like being a table are qualitative. This chapter explores the options for resolving the puzzles compatible with accepting that premise, and hence Non-contingency for indiscernible modality.
Many philosophers have thought that Tolerance Puzzles can be easily dissolved by adopting some form of counterpart theory, which is roughly the view that being possibly a certain way is having a counterpart that is that way. This chapter shows how standard versions of counterpart theory involve radical departures from standard modal logic (going far beyond Iteration-denial) which we claim are unacceptable, and argues that once counterpart theory is developed in such a way as to avoid such logical revisionism, it has no special capacity to resolve the puzzles.
We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural way for them to receive imprecise probabilities. The mathematics of non-measurable sets is arcane, but its epistemological import is far-reaching; even apparently mundane propositions are liable to be affected by non-measurability. The phenomenon of non-measurability dramatically reshapes the dialectic between critics and proponents of imprecise credence. Non-measurability offers natural rejoinders to prominent critics of imprecise credence. Non-measurability even reverses some of the critics’ arguments: by the very lights that have been used to argue against imprecise credences, imprecise credences are better than precise credences.
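One standard construction behind the ‘natural way’ mentioned here (a textbook fact about measure theory; we do not claim it is exactly the paper's construction): a non-measurable set A still has an inner and an outer measure,

\[
P_*(A) = \sup\{P(B) : B \subseteq A,\ B \text{ measurable}\}, \qquad
P^*(A) = \inf\{P(B) : B \supseteq A,\ B \text{ measurable}\},
\]

with P_*(A) ≤ P^*(A), and equality exactly when A is measurable; the interval [P_*(A), P^*(A)] then serves as a candidate imprecise probability for the corresponding proposition.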
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
Most meanings we express belong to large families of variant meanings, among which it would be implausible to suppose that some are much more apt for being expressed than others. This abundance of candidate meanings creates pressure to think that the proposition attributing any particular meaning to an expression is modally plastic: its truth depends very sensitively on the exact microphysical state of the world. However, such plasticity seems to threaten ordinary counterfactuals whose consequents contain speech reports, since it is hard to see how we could reasonably be confident in a counterfactual whose consequent can be true only if a certain very finely tuned microphysical configuration obtains. This essay develops the foregoing puzzle and explores several possible solutions.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Here is a compelling principle concerning our knowledge of coin flips: FAIR COINS: If you know that a coin is fair, and for all you know it is going to be flipped, then for all you know it will land tails. The idea is that the only way to be in a position to know that a fair coin won't land a certain way is to be in a position to know that it won't be flipped at all. One class of putative counterexamples to FAIR COINS which we want to set aside involves knowledge delivered by oracles, clairvoyance, and so forth. A second, and more interesting, class of counterexamples involves knowledge under unusual modes of presentation. For example, if you introduce the name 'Headsy' for the first
Seth Yalcin has pointed out some puzzling facts about the behaviour of epistemic modals in certain embedded contexts. For example, conditionals that begin 'If it is raining and it might not be raining, …' sound unacceptable, unlike conditionals that begin 'If it is raining and I don't know it, …'. These facts pose a prima facie problem for an orthodox treatment of epistemic modals as expressing propositions about the knowledge of some contextually specified individual or group. This paper develops an explanation of the puzzling facts about embedding within an orthodox framework.
Not the final version
Penultimate version
The concept of being in a position to know is an increasingly popular member of the epistemologist’s toolkit. Some have used it as a basis for an account of propositional justification. Others, following Timothy Williamson, have used it as a vehicle for articulating interesting luminosity and anti-luminosity theses. It is tempting to think that while knowledge itself does not obey any closure principles other than those that follow from the factivity of knowledge, being in a position to know does. For example, if one knows both p and p -> q, but one dies or gets distracted before being able to perform modus ponens on these items of knowledge and for that reason one does not know q, one is still plausibly in a position to know q. It is also tempting to suppose that, while one does not know all logical truths, one is nevertheless in a position to know every logical truth. Putting these temptations together, we get the view that being in a position to know has a normal modal logic. A recent literature has begun to investigate whether it is a good idea to give in to these twin temptations, in particular the first one. That literature assumes very naturally that one is in a position to know everything one knows and that one is not in a position to know things that one cannot know. It has succeeded in showing that, given the modest closure condition that knowledge distributes over conjunction, being in a position to know cannot satisfy the so-called K axiom (closure of being in a position to know under modus ponens) of normal modal logics. In this paper, we explore the question of the normality of the logic of being in a position to know in a more far-reaching and systematic way. Assuming that being in a position to know entails the possibility of knowing and that knowing entails being in a position to know, we can demonstrate radical failures of normality without assuming any closure principles at all for knowledge other than those that follow from the factivity of knowledge. Moreover, the failure of normality cannot be laid at the door of the K axiom for knowledge, since the standard principle GEN of modal generalization (or ‘necessitation’) also fails for being in a position to know. After laying out and explaining our results, we briefly survey the coherent options that remain.
In his recent monograph, Objects, Daniel Korman contrasts ontological conservatives with permissivists and eliminativists about ontology. Roughly speaking, conservatives admit the existence of "ordinary objects" like trees, dogs, and snowballs, but deny the existence of "extraordinary objects", like composites of trees and dogs ("trogs") or snowball-like objects capable of surviving some but not all kinds of substantial squashings ("snowdiscalls"). Eliminativists, on the other hand, deny many or all ordinary objects, while permissivists accept both ordinary and extraordinary objects. Korman is a conservative. We are permissivists. In what follows, we will say something about why we are drawn to permissivism and not very drawn at all to conservatism. In the first section, we discuss a tempting epistemic line of argument against conservatism. This isn't a line of argument we find especially promising, and in this we agree substantially with Korman. However, we do disagree about the tricky issue of why this argument isn't promising, and what should be said about the other epistemic issues in the vicinity. We offer what we hope will be helpful remarks about the relevant epistemic issues, often drawing on epistemic ideology that guides our own thinking but is less salient in Korman's presentation. Along the way, we outline some of our misgivings about Korman's discussion. Our most basic complaint against conservatism is not the thought that conservatism has poor epistemic standing even if true, but instead the thought that conservatism is weird, and altogether foreign to our own metaphysical sensibilities. We develop this thought through our discussion of arbitrariness and parity in Section 5. Along the way, we voice some misgivings about Korman's own presentation and appraisal of arbitrariness arguments. In a final section we discuss some larger methodological issues about the project of ontology.
Various standard epistemological notions, such as apriority and being in a position to know, are commonly thought to entail the metaphysical possibility of knowledge. In this paper we argue that the logics of such notions are inevitably extremely weak. In particular, we argue that they do not have normal modal logics. The consequences of this are briefly explored.
Some have argued for a division of epistemic labor in which mathematicians supply truths and philosophers supply their necessity. We argue that this is wrong: mathematics is committed to its own necessity. Counterfactuals play a starring role.
The principle of universal instantiation plays a pivotal role both in the derivation of intensional paradoxes such as Prior’s paradox and Kaplan’s paradox and in the debate between necessitism and contingentism. We outline a distinctively free logical approach to the intensional paradoxes and note how the free logical outlook allows one to distinguish two different, though allied, themes in higher-order necessitism. We examine the costs of this solution and compare it with the more familiar ramificationist approaches to higher-order logic. Our assessment of both approaches is largely pessimistic, and we remain reluctantly inclined to take Prior’s and Kaplan’s derivations at face value.

(A lightly edited transcript of a pre-read talk I gave at Oxford some years ago for a series of talks organized by Ofra Magidor. I plan to publish some version of it some time in a planned collection of epistemological essays.)
Many philosophers think that given the choice between saving the life of an innocent person and averting many minor ailments or inconveniences, it would be better to save the life. These intuitions concern cases where stakes are certain: X many headaches vs. a life. It is less clear how to accommodate cases with uncertain stakes: X many headaches vs. some nonzero probability of risk to a life. This paper explores one of the more promising strategies for developing an absolutist approach for decisions under uncertainty.
We begin with a puzzle that is adequately solved by appeal to certain facts about our assessment procedures for conditionals combined with certain facts about the structure of knowledge. That solution is unavailable to proponents of KK. This, we argue, is a significant cost for KK. We go on to defuse a battery of arguments due to Kevin Dorst that try to motivate KK on the basis of the infelicity of a different class of conditionals. Along the way we expose the defects of some prima facie promising ideas about how to assess conditionals for their assertability.
A version of this will appear in Oxford Studies in Metaphysics.
Erratum: On page 26 line 16, in the displayed indented formula, the existential quantifier should have scope over the diamond.
Normality is commonplace. We routinely distinguish normal weather from hurricanes and droughts; normal prices from bargains and rip-offs; and normal behavior from eccentricity and oddity. Not only do we form judgments about normality with ease, but facts about what is normal serve as a reliable guide to navigating the world. Knowing the normal presentation of a disease can help a doctor to successfully diagnose a patient. Knowing the normal level of rush hour traffic can help a commuter to arrive on time. And knowing the normal migration patterns of birds can help ornithologists to identify species. Our subject in this paper is the modality of normality. Items of a variety of types can be evaluated for normality. For example, we can readily compare the normality of individuals, kinds and properties. Gerald Ford was more normal, for a president, than Richard Nixon. Weasels are more normal, for mammals, than wombats. And being tall is more normal, for basketball players, than being short. The modality of normality, in contrast, has to do with properties of states of affairs. There may be interesting connections between the properties of individuals, kinds and properties and the modality of normality. However, in what follows, our attention will be focused exclusively on facts about the latter.
Recent research has identified a tension between the Safety principle that knowledge is belief without risk of error, and the Closure principle that knowledge is preserved by competent deduction. Timothy Williamson reconciles Safety and Closure by proposing that when an agent deduces a conclusion from some premises, the agent’s method for believing the conclusion includes their method for believing each premise. We argue that this theory is untenable because it implies Method Luminosity, the thesis that whenever an agent believes p using a method, the agent is in a position to know they believed p using that method. Several possible solutions are explored and rejected.
In preface cases, people believe that some of their beliefs are false. Many have considered what such people are justified in believing. We turn our attention to what they can know. We introduce a novel 'archipelago puzzle', showing that if deduction extends knowledge, then ordinary knowledge of error can lead in surprising ways to the absurdly pessimistic knowledge that most of one's beliefs are false.
In 1978, Eric Kraemer observed that 'Brown intentionally threw a six' and 'Brown intentionally won the game' can differ in truth-value, even when it is known that Brown won the game just in case he threw a six. More generally, there are cases where 'S intentionally V1' and 'S intentionally V2' differ in truth-value even though V1 and V2 are known to be co-extensive. We call this Kraemer's puzzle in the theory of intentional action. We bring out some of the puzzle's central features, and gesture towards a solution.
The so-called safety conception of knowledge enjoys considerable popularity, but there are important choice points when it comes to its articulation and deployment. In this essay we explore a number of them and also make vivid some important challenges to various versions of the safety approach. In section one we present some key facets of Williamson’s presentation of safety. In section two, we present five important choice points for the safety theorist. In section three, we discuss an important issue concerning methodological orientation that turns on the difference between analysis and model-building.
Inheritance is the principle that deontic 'ought' is closed under entailment. This paper is about a tension that arises in connection with Inheritance. More specifically, it is about two observations that pull in opposite directions. One of them raises questions about the validity of Inheritance, while the other appears to provide strong support for it. We argue that existing approaches to deontic modals fail to provide us with an adequate resolution of this tension. In response, we develop a positive analysis, and show that this proposal provides a satisfying account of our intuitions.
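In schematic form (our gloss), with O for the deontic 'ought' operator, Inheritance says:

\[
\text{if } p \vDash q, \text{ then } \mathsf{O}p \vDash \mathsf{O}q.
\]

A classic illustration of the kind of pressure such a principle faces (not necessarily the paper's own example): 'You ought to mail the letter' seems not to entail 'You ought to mail the letter or burn it', even though mailing the letter entails mailing it or burning it.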
Here is a second sample chapter from The Bounds of Possibility, co-authored with Cian Dorr and Juhani Yli-Vakkuri, forthcoming with OUP.
Introductory Chapter to The Bounds of Possibility: Puzzles of Modal Variation

Cian Dorr, John Hawthorne and Juhani Yli-Vakkuri, forthcoming with OUP.
Epistemicism is one of the main approaches to the phenomenon of vagueness. But how does it fare in its treatment of moral vagueness? This paper has two goals. First, I shall explain why various recent arguments against an epistemicist approach to moral vagueness are unsuccessful. Second, I shall explain how, in my view, reflection on the Sorites can inform normative ethics in powerful and interesting ways. In this connection, I shall be putting the epistemicist treatment to work.
The analysis of desire ascriptions has been a central topic of research for philosophers of language and mind. This work has mostly focused on providing a theory of want reports, i.e. sentences of the form S wants p. In this paper, we turn attention from want reports to a closely related, but relatively understudied construction, namely hope reports, i.e. sentences of the form S hopes p. We present two contrasts involving hope reports, and show that existing approaches to desire fail to explain these contrasts. We then develop a novel account that combines some of the central insights in the literature. We argue that our theory provides us with an elegant account of our contrasts, and yields a promising analysis of hoping.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Hacking (1987) and White (2000) influentially argue that it is not. Subsequent debate on this question has centered on competing analogies; but it is not clear which of these analogies are apt. We instead approach the question through a systematic framework for self-locating epistemology. As it turns out, all leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse (at least in a suitably idealized setting). This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
Many defend the thesis that knowledge requires safety from error, so that when someone knows p, they couldn't easily have been wrong about p. This paper investigates the principle of Counterfactual Closure (CC), which connects knowledge and counterfactuals: if it easily could have happened that p, and if p were the case, then q would be the case, it follows that it easily could have happened that q. We use CC to probe the viability of various models of knowledge. We first show that an unrestricted version of CC is false. This falsifies a model where the easy possibilities are counterfactually similar to actuality. We next show that normality models of knowledge predict that CC fails. Here, the easy possibilities are the sufficiently normal worlds. We then offer a true restriction of CC. This principle says that p is an easy possibility when p counterfactually depends on a coin flip. We show that restricted CC is invalidated by extant normality theories. Finally, we enrich normality theories with the principle of Counterfactual Contamination, which says that any world is fairly abnormal if at that world very abnormal events counterfactually depend on a coin flip.
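In schematic form (our gloss), writing ◇e for 'it easily could have happened that' and □→ for the counterfactual conditional, unrestricted Counterfactual Closure is the inference:

\[
\Diamond_{e}\, p,\;\; p \mathbin{\Box\!\!\rightarrow} q \;\;\vDash\;\; \Diamond_{e}\, q.
\]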
This paper models perceptual knowledge in cases where an agent has multiple perceptual experiences over time. Using this model, we introduce a series of observations that undermine the pretheoretic idea that the evidential significance of appearance depends on the extent to which the appearances match the world. On the basis of these observations, we model perceptual knowledge in terms of what is likely given the appearances. An agent knows p when p is implied by her epistemic possibilities. A world is epistemically possible when its probability given the appearances is not significantly lower than the probability of the actual world on the appearances.
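One illustrative way to make the final condition precise (the threshold t is our own device, not necessarily the paper's): given total appearance evidence A and actual world w@,

\[
w \text{ is epistemically possible} \iff \Pr(w \mid A) \ge t \cdot \Pr(w_{@} \mid A), \quad \text{for some fixed } 0 < t \le 1,
\]

and the agent knows p just in case p holds at every epistemically possible world.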
A Critical Discussion of Phenomenal Conservatism.
" There are no gaps in logical space " , writes Lewis (1986), giving voice to sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a... more
" There are no gaps in logical space " , writes Lewis (1986), giving voice to sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a pattern idea: " Any pattern of instantiation is metaphysically possible ". Another is a cut and paste idea: " For any objects in any worlds, there exists a world that contains any number of duplicates of all of those objects. " Jumping off from discussions from Forrest and Armstrong (1984) and Nolan (1996), we use resources from model theory to show the inconsistency of certain packages of combinatorial principles and the consistency of others.
Another paper on fine-tuning!
The laws of physics are unexpectedly inhospitable to life. Scientists did not expect to discover that life depends on seemingly improbable values in the fundamental constants of physics. Scientists expected to discover that life would be possible given a wide variety of values in the fundamental constants. But so it goes. One learns all sorts of weird things from contemporary physics.

If this unexpected inhospitability were equally unexpected with or without the existence of God, then the fine-tuning of the fundamental constants would be irrelevant to the philosophy of religion. But the fine-tuning of the fundamental constants is substantially more likely given the existence of God than it is given the non-existence of God. Thus the fine-tuning of the fundamental constants is strong evidence that there is a God.

There are some real complexities to the fine-tuning argument, complexities regarding which controversy is appropriate. But the fine-tuning argument is more controversial than it ought to be. The basic idea of the fine-tuning argument is simple. It's as legitimate an argument as one comes across in philosophy.

We will formulate the fine-tuning argument using the machinery of Bayesian probability theory. We think that a good deal of structural insight can be obtained by doing so. (In particular, we find Bayesian analyses to be more illuminating than analyses which rely on explanation-theoretic vocabulary, such as “cries out for explanation”.) We hope that our theoretical preferences will be vindicated by our output. After some scene setting, we will sketch what we take to be a promising way of developing the fine-tuning argument, which we dub the “core argument”. Additional detail and explanation will be supplied as we engage with a series of potential concerns about the argument so sketched. Along the way, we will rebut a recent critique of the fine-tuning argument from a philosopher, Jonathan Weisberg, and will also rebut a range of critiques that are common in the popular and scientific literature. We will finally turn to atheistic replies that concede the lessons of the core argument, but which attempt to find a rational home for atheism within its scope. We believe this to be the most promising approach for the atheist.
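For orientation, the Bayesian skeleton of such an argument in odds form is just Bayes' theorem (the “core argument” presumably adds substance beyond this bare schema). Writing G for the hypothesis that there is a God and F for the fine-tuning evidence:

\[
\frac{\Pr(G \mid F)}{\Pr(\neg G \mid F)} \;=\; \frac{\Pr(F \mid G)}{\Pr(F \mid \neg G)} \times \frac{\Pr(G)}{\Pr(\neg G)},
\]

so if F is substantially more likely given G than given its negation, as claimed above, learning F raises the odds on G by the corresponding Bayes factor.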