On the first page of “What is Cantor's Continuum Problem?”, Gödel argues that Cantor's theory of cardinality, where a bijection implies equal number, is in some sense uniquely determined. The argument, involving a thought experiment with sets of physical objects, is initially persuasive, but recent authors have developed alternative theories of cardinality that are consistent with the standard set theory ZFC and have appealing algebraic features that Cantor's powers lack, as well as some promise for applications. Here we diagnose Gödel's argument, showing that it fails in two important ways: (i) Its premises are not sufficiently compelling to discredit countervailing intuitions and pragmatic considerations, nor pluralism, and (ii) its final inference, from the superiority of Cantor's theory as applied to sets of changeable physical objects to the unique acceptability of that theory for all sets, is irredeemably invalid.
A probability distribution is regular if it does not assign probability zero to any possible event. Williamson (2007) argued that we should not require probabilities to be regular, for if we do, certain “isomorphic” physical events (infinite sequences of coin flip outcomes) must have different probabilities, which is implausible. His remarks suggest an assumption that chances are determined by intrinsic, qualitative circumstances. Weintraub (2008) responds that Williamson’s coin flip events differ in their inclusion relations to each other, or the inclusion relations between their times, and this can account for their differences in probability. Haverkamp and Schulz (2011) rebut Weintraub, but their rebuttal fails because the events in their example are even less symmetric than Williamson’s. However, Weintraub’s argument also fails, for it ignores the distinction between intrinsic, qualitative differences and relations of time and bare identity. Weintraub could rescue her argument by claiming that the events differ in duration, under a non-standard and problematic conception of duration. However, we can modify Williamson’s example with Special Relativity so that there is no absolute inclusion relation between the times, and neither event has longer duration except relative to certain reference frames. Hence, Weintraub’s responses do not apply unless chance is observer-relative, which is also problematic. Finally, another symmetry argument defeats even the appeal to frame-dependent durations, for there the events have the same finite duration and are entirely disjoint, as are their respective times and places.
In a fair, infinite lottery, it is possible to conclude that drawing a number divisible by four is strictly less likely than drawing an even number; and, with apparently equal cogency, that drawing a number divisible by four is equally as likely as drawing an even number.
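The tension between the two conclusions can be seen in a small computation — a sketch of my own, not taken from the paper — on initial segments {1, …, N}: the multiples of four occur half as often as the evens at every cutoff, yet the map n ↦ 2n pairs each even number with a distinct multiple of four.

```python
# A minimal sketch (mine, not the paper's) of the two competing intuitions,
# checked on the initial segment {1, ..., N}.
N = 100_000
evens = [n for n in range(1, N + 1) if n % 2 == 0]
div4 = [n for n in range(1, N + 1) if n % 4 == 0]

# Frequency intuition: multiples of 4 are half as common at every cutoff,
# suggesting they are strictly less likely to be drawn.
print(len(div4) / len(evens))  # 0.5

# Bijection intuition: n -> 2n matches each even number with a distinct
# multiple of 4, suggesting the two events are equally likely.
assert all((2 * n) % 4 == 0 for n in evens)
assert len({2 * n for n in evens}) == len(evens)
```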
As an application of his Material Theory of Induction, Norton (2018; manuscript) argues that the correct inductive logic for a fair infinite lottery, and also for evaluating eternal inflation multiverse models, is radically different from standard probability theory. This is due to a requirement of label independence. It follows, Norton argues, that finite additivity fails, and any two sets of outcomes with the same cardinality and co-cardinality have the same chance. This makes the logic useless for evaluating multiverse models based on self-locating chances, so Norton claims that we should despair of such attempts. However, his negative results depend on a certain reification of chance, consisting in the treatment of inductive support as the value of a function, a value not itself affected by relabeling. Here we define a purely comparative infinite lottery logic, where there are no primitive chances but only a relation of ‘at most as likely’ and its derivatives. This logic satisfies both label independence and a comparative version of additivity as well as several other desirable properties, and it draws finer distinctions between events than Norton's. Consequently, it yields better advice about choosing between sets of lottery tickets than Norton's, but it does not appear to be any more helpful for evaluating multiverse models. Hence, the limitations of Norton's logic are not entirely due to the failure of additivity, nor to the fact that all infinite, co-infinite sets of outcomes have the same chance, but to a more fundamental problem: We have no well-motivated way of comparing disjoint countably infinite sets.
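One way to make the "finer distinctions" claim concrete — using an encoding of my own, not the paper's formalism — is to take events to be arithmetic progressions. Every progression with modulus at least 2 is infinite with infinite complement, so a purely cardinality-based rule assigns them all the same chance, while an inclusion-respecting comparative relation can still rank them:

```python
# Events are arithmetic progressions {n : n ≡ r (mod m)}, encoded as (r, m).
# For m >= 2, every such event is infinite with infinite complement, so a
# cardinality-based chance rule cannot distinguish any of them.

def subset(e1, e2):
    """Exact test: {n ≡ r1 (mod m1)} ⊆ {n ≡ r2 (mod m2)}."""
    (r1, m1), (r2, m2) = e1, e2
    return m1 % m2 == 0 and r1 % m2 == r2 % m2

evens = (0, 2)
div4 = (0, 4)

# Comparative rule: div4 is a proper subset of evens, hence strictly less
# likely -- a finer distinction, and better ticket-buying advice, than a
# verdict of "same chance" for all infinite, co-infinite ticket sets.
assert subset(div4, evens) and not subset(evens, div4)
```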
Three arguments against universally regular probabilities have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but their objections fail. Howson says that Williamson's (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson's physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
Gödel argued that Cantor’s notion of cardinal number was uniquely correct. More recent work has defended alternative “Euclidean” theories of set size, in which Cantor’s Principle (two sets have the same size if and only if there is a one-to-one correspondence between them) is abandoned in favor of the Part–Whole Principle (if A is a proper subset of B then A is smaller than B). Here we see from simple examples, not that Euclidean theories of set size are wrong, nor merely that they are counterintuitive, but that they must be either very weak or in large part arbitrary and misleading. This limits their epistemic usefulness.
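The contrast between the two principles can be illustrated with a rough sketch of my own (relative frequency on initial segments is only a crude proxy for a Euclidean numerosity, not the theories the paper discusses):

```python
# A rough sketch (mine) using relative frequency on {1, ..., N} as a crude
# stand-in for a Euclidean size comparison.

def density(pred, N=100_000):
    """Fraction of 1..N satisfying pred."""
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

evens = lambda n: n % 2 == 0

# Cantor's Principle: n -> 2n is a bijection, so the evens and the naturals
# have the same cardinality.
# Part-Whole Principle: the evens are a proper subset of the naturals, and
# indeed their density is strictly smaller.
print(density(evens))            # 0.5
print(density(lambda n: True))   # 1.0
```

Note that such densities depend on how the elements are ordered and labeled, which is one face of the arbitrariness the abstract points to.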
Some have suggested that certain classical physical systems have undecidable long-term behavior, without specifying an appropriate notion of decidability over the reals.  We introduce such a notion, decidability in a measure μ (or d-μ), which is particularly appropriate for physics and in some ways more intuitive than Ko’s (1991) recursive approximability (r.a.). For Lebesgue measure λ, d-λ implies r.a.  Sets with positive λ-measure that are sufficiently “riddled” with holes are never d-λ but are often r.a.  This explicates Sommerer and Ott’s (1996) claim of uncomputable behavior in a system with riddled basins of attraction.  Furthermore, it clarifies speculations that the stability of the solar system (and similar systems) may be undecidable, for the invariant tori established by KAM theory form sets that are not d-λ.
There is no uniquely standard concept of an effectively decidable set of real numbers or real n-tuples. Here we consider three notions: decidability up to measure zero [M.W. Parker, Undecidability in R^n: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382], which we abbreviate d.m.z.; recursive approximability [or r.a.; K.-I. Ko, Complexity Theory of Real Functions, Birkhäuser, Boston, 1991]; and decidability ignoring boundaries [d.i.b.; W.C. Myrvold, The decision problem for entanglement, in: R.S. Cohen et al. (Eds.), Potentiality, Entanglement, and Passion-at-a-Distance: Quantum Mechanical Studies for Abner Shimony, Vol. 2, Kluwer Academic Publishers, Great Britain, 1997, pp. 177–190]. Unlike some others in the literature, these notions apply not only to certain nice sets, but to general sets in R^n and other appropriate spaces. We consider some motivations for these concepts and the logical relations between them. It has been argued that d.m.z. is especially appropriate for physical applications, and on R^n with the standard measure, it is strictly stronger than r.a. [M.W. Parker, Undecidability in R^n: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382]. Here we show that this is the only implication that holds among our three decidabilities in that setting. Under arbitrary measures, even this implication fails. Yet for intervals of non-zero length, and more generally, convex sets of non-zero measure, the three concepts are equivalent.
We examine a case in which non-computable behavior in a model is revealed by computer simulation. This is possible due to differing notions of computability for sets in a continuous space. The argument originally given for the validity of the simulation involves a simpler simulation of the simulation, still further simulations thereof, and a universality conjecture. There are difficulties with that argument, but there are other, heuristic arguments supporting the qualitative results. It is urged, using this example, that absolute validation, while highly desirable, is overvalued. Simulations also provide valuable insights that we cannot yet (if ever) prove.
Essay review of Florin Diacu and Philip Holmes, Celestial Encounters: The Origins of Chaos and Stability.
In standard probability theory, probability zero is not the same as impossibility.  If an experiment has infinitely many possible outcomes, all equally likely, then all the outcomes must have probability zero, but one of them must occur nonetheless.  Many have suggested that this should not be so—that probabilities (ontic or epistemic, depending on the author) should be regular:  Only impossible events should have probability zero and only necessary or certain events should have probability one.  This can be arranged if we allow infinitesimal probabilities, but it turns out that infinitesimals do not solve all of the problems.  I will show that regular probabilities cannot be translation-invariant, even for bounded and disjoint events.  Hence, for various events confined to finite space and time (e.g., dart throws and vacuum fluctuations), regular chances cannot be determined by space-time invariant physical laws, and regular credences cannot satisfy seemingly reasonable symmetry principles.  Moreover, these examples are immune to the main objections against Timothy Williamson’s infinite coin flip examples.
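The opening claim can be made precise with a standard argument (a reconstruction of mine, not quoted from the paper): if infinitely many outcomes ω₁, ω₂, … are equally likely with real-valued probability p, then finite additivity forces

```latex
P(\{\omega_1\}) = \dots = P(\{\omega_n\}) = p
\quad\Longrightarrow\quad
np = P(\{\omega_1, \dots, \omega_n\}) \le 1 \quad \text{for all } n,
```

so p ≤ 1/n for every n, and hence p = 0 if p is a real number. Only an infinitesimal p could remain positive, which is exactly the opening that regularity via infinitesimals exploits.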
Ontological arguments like those of Gödel (1995) and Pruss (2009; 2012) rely on premises that initially seem plausible, but on closer scrutiny are not. The premises have modal import that is required for the arguments but is not immediately grasped on inspection, and which ultimately undermines the simpler logical intuitions that make the premises seem plausible. Furthermore, the notion of necessity that they involve goes unspecified, and yet must go beyond standard varieties of logical necessity. This leaves us little reason to believe the premises, while their implausible existential import gives us good reason not to. Gödel (1995) introduced a new class of formal arguments for the existence of God, appealing to a notion of “positive” property and applying modal logic. Gödel's premises were later shown to imply modal collapse, i.e., if they are true, then everything true is necessary (Sobel 1987), and then to be inconsistent (Benzmüller and Paleo 2016). However, Anderson (1990) and Pruss (2009; 2012) give simpler versions that avoid these problems. We will focus here on one of Pruss's formulations, but our observations apply generally. We will see that Pruss's premises are not as innocent as they first appear. Once their modal import is unpacked, and their unclear foundations exposed, they are not very plausible at all.
The behavior of some systems is noncomputable in a precise new sense.  One infamous problem is that of the stability of the solar system:  Given the initial positions and velocities of several mutually gravitating bodies, will any eventually collide or be thrown off to infinity?  Many have made vague suggestions that this and similar problems are undecidable:  No finite procedure can reliably determine whether a given configuration will eventually prove unstable.  But taken in the most natural way, this is trivial.  The state of a system corresponds to a point in a continuous space, and virtually no set of points in space is strictly decidable.  A new, more pragmatic concept is therefore introduced:  A set is decidable up to measure zero (d.m.z.) if there is a procedure to decide whether a point is in that set and it only fails on some points that form a set of zero volume.  This is motivated by the intuitive correspondence between volume and probability:  We can ignore a zero-volume set of states because the state of an arbitrary system almost certainly will not fall in that set.  D.m.z. is also closer to the intuition of decidability than other notions in the literature, which are either less strict or apply only to special sets, like closed sets.  Certain complicated sets are not d.m.z., most remarkably including the set of known stable orbits for planetary systems (the KAM tori).  This suggests that the stability problem is indeed undecidable in the precise sense of d.m.z.  Carefully extending decidability concepts from idealized models to actual systems, we see that even deterministic aspects of physical behaviour can be undecidable in a clear and significant sense.
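To make the d.m.z. idea concrete, here is a toy procedure of my own devising (not from the paper): deciding whether a real x is positive, given only rational approximations accurate to 2⁻ⁿ. It answers correctly for every x it manages to separate from 0, and the inputs on which it stays undetermined shrink toward the single point 0 — a set of measure zero.

```python
from fractions import Fraction

def decide_positive(approx, max_n=64):
    """Decide whether x > 0 from an oracle approx(n) returning a rational
    within 2**-n of x. Halts with a verdict once x is safely separated
    from 0; returns None for inputs still within 2**-max_n of 0."""
    for n in range(max_n):
        q = Fraction(approx(n))
        # If |q| > 2**-n and |x - q| < 2**-n, then x is nonzero with the
        # same sign as q, so the verdict is certain.
        if abs(q) > Fraction(1, 2 ** n):
            return q > 0
    return None  # undetermined: only x very close to 0 reach this point

# Oracles for exactly known rationals (an exact value is always a valid
# 2**-n approximation of itself):
assert decide_positive(lambda n: Fraction(1, 3)) is True
assert decide_positive(lambda n: Fraction(-1, 3)) is False
assert decide_positive(lambda n: 0) is None  # fails only at/near x = 0
```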
Postulates a rapidly expanding background space sprinkled with more stable "pocket universes." Can observations of conditions in our pocket confirm or disconfirm eternal inflation theories? Principle of Mediocrity: "that we think of ourselves as a civilization randomly picked in the metauniverse" (Vilenkin 1995).
A tool for mechanics courses.  This one picture is enough to solve any problem involving constant acceleration.
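For reference, the constant-acceleration relations such a picture standardly encodes (these are the textbook kinematic equations, not a claim about the particular diagram) are:

```latex
v = v_0 + at, \qquad
x = x_0 + v_0 t + \tfrac{1}{2}at^2, \qquad
v^2 = v_0^2 + 2a(x - x_0).
```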