Bayesian inference is limited in scope because it cannot be applied in idealized contexts where none of the hypotheses under consideration is true and because it is committed to always using the likelihood as a measure of evidential favoring, even when that is inappropriate. The purpose of this paper is to study inductive inference in a very general setting where finding the truth is not necessarily the goal and where the measure of evidential favoring is not necessarily the likelihood. I use an accuracy argument to argue for probabilism and I develop a new kind of argument to argue for two general updating rules, both of which are reasonable in different contexts. One of the updating rules has standard Bayesian updating, Bissiri's (2016) general Bayesian updating, Douven's (2016) IBE-based updating, and Vassend's (2019a) quasi-Bayesian updating as special cases. The other updating rule is novel.
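The family of updating rules described above can be illustrated with a minimal sketch of Bissiri-style general Bayesian updating over a discrete hypothesis space, where the posterior is proportional to the prior times exp(−λ · loss). The coin-bias hypotheses, the uniform prior, and the learning rate λ = 1 below are hypothetical choices for illustration; the point is only that choosing negative log-likelihood as the loss recovers standard Bayesian conditioning as a special case.

```python
import numpy as np

def general_bayes_update(prior, losses, lam=1.0):
    """One step of general Bayesian updating over a discrete
    hypothesis space: posterior proportional to prior * exp(-lam * loss).
    With loss = negative log-likelihood and lam = 1, this reduces
    to standard Bayesian conditioning."""
    unnorm = prior * np.exp(-lam * np.asarray(losses))
    return unnorm / unnorm.sum()

# Three hypothetical hypotheses about a coin's bias; one observed head.
thetas = np.array([0.2, 0.5, 0.8])
prior = np.array([1/3, 1/3, 1/3])

# Negative log-likelihood loss recovers ordinary Bayes: the update is
# then proportional to prior * theta, i.e. standard conditioning.
nll = -np.log(thetas)  # loss of each hypothesis on "heads"
posterior = general_bayes_update(prior, nll, lam=1.0)
```

Other choices of loss (for instance, a measure of predictive distance rather than truth-tracking likelihood) yield the non-standard special cases mentioned above.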
Scientists and Bayesian statisticians often study hypotheses that they know to be false. This creates an interpretive problem because the Bayesian probability assigned to a hypothesis is typically interpreted as the probability that the hypothesis is true. I argue that solving the interpretive problem requires coming up with a new semantics for Bayesian inference. I present and contrast two solutions to the interpretive problem, both of which involve giving a new interpretation of probability. I argue that both of these new interpretations of Bayesian inference have the same advantages that the standard interpretation has, but that they have the added benefit of being applicable in a wider set of circumstances. I furthermore show that the two new interpretations are inter-translatable and I explore the conditions under which they are co-extensive with the standard Bayesian interpretation. Finally, I argue that the solutions to the interpretive problem support the claim that there is pervasive pragmatic encroachment on whether a given Bayesian probability assignment is rational.
Bayesianism and likelihoodism are two of the most important frameworks philosophers of science use to analyse scientific methodology. However, both frameworks face a serious objection: much scientific inquiry takes place in highly idealized frameworks where all the hypotheses are known to be false. Yet, both Bayesianism and likelihoodism seem to be based on the assumption that the goal of scientific inquiry is always truth rather than closeness to the truth. Here, I argue in favor of a verisimilitude framework for inductive inference. In the verisimilitude framework, scientific inquiry is conceived of, in part, as a process where inference methods ought to be calibrated to appropriate measures of closeness to the truth. To illustrate the verisimilitude framework, I offer a reconstruction of parsimony evaluations of scientific theories, and I give a reconstruction and extended analysis of the use of parsimony inference in phylogenetics. By recasting phylogenetic inference in the verisimilitude framework, it becomes possible to both raise and address objections to phylogenetic methods that rely on parsimony.
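Parsimony inference of the kind analyzed above can be made concrete with a small sketch of Fitch's small-parsimony algorithm, which counts the minimum number of character changes a tree requires to explain the observed states at its leaves. The four-taxon tree, node names, and binary trait values below are hypothetical, chosen only to show the scoring step; real phylogenetic parsimony sums such scores over many characters.

```python
def fitch_parsimony(tree, leaf_states, root="root"):
    """Minimum number of character changes (Fitch small parsimony)
    for one character on a rooted binary tree. `tree` maps each
    internal node to its pair of children; leaves carry observed
    states in `leaf_states`."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        s1, s2 = state_set(left), state_set(right)
        inter = s1 & s2
        if inter:
            return inter
        changes += 1  # children share no state: one change is forced
        return s1 | s2

    state_set(root)
    return changes

# Hypothetical four-taxon tree ((A,B),(C,D)) with a binary trait:
tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
states = {"A": 0, "B": 0, "C": 1, "D": 1}
score = fitch_parsimony(tree, states)  # a single change suffices here
```

On this framework's view, what needs justifying is why a low change-count like this should track closeness to the truth rather than truth itself.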
I argue that information is a goal-relative concept for Bayesians. More precisely, I argue that how much information (or confirmation) is provided by a piece of evidence depends on whether the goal is to learn the truth or to rank actions by their expected utility, and that different confirmation measures should therefore be used in different contexts. I then show how information measures may reasonably be derived from confirmation measures, and I show how to derive goal-relative non-informative and informative priors given background information. Finally, I argue that my arguments have important
Charles Stein discovered a paradox in 1955 that many statisticians think is of fundamental importance. Here we explore its philosophical implications. We outline the nature of Stein's result and of subsequent work on shrinkage estimators; then we describe how these results are related to Bayesianism and to model selection criteria like the Akaike Information Criterion. We also discuss their bearing on scientific realism and instrumentalism. We argue that results concerning shrinkage estimators underwrite a surprising form of holistic pragmatism.
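The shrinkage phenomenon at issue can be checked with a short simulation, sketched below under assumed settings (a hypothetical 10-dimensional mean vector, unit-variance Gaussian noise). Estimating all ten means jointly with the James-Stein estimator yields a lower average squared error than the maximum-likelihood estimator, even though the MLE is admissible for each coordinate taken alone.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_trials = 10, 20000
theta = np.linspace(-1.0, 1.0, p)  # hypothetical true mean vector

mle_err = js_err = 0.0
for _ in range(n_trials):
    x = theta + rng.standard_normal(p)     # observe X ~ N(theta, I)
    shrink = 1.0 - (p - 2) / np.dot(x, x)  # James-Stein shrinkage factor
    js = shrink * x                        # shrink the MLE toward zero
    mle_err += np.sum((x - theta) ** 2)
    js_err += np.sum((js - theta) ** 2)

mle_risk = mle_err / n_trials  # close to p = 10 for the MLE
js_risk = js_err / n_trials    # strictly smaller, as Stein showed
```

That the joint estimator borrows strength across unrelated coordinates is what gives the result its bearing on holistic pragmatism.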
According to a widespread but implicit thesis in Bayesian confirmation theory, two confirmation measures are considered equivalent if they are ordinally equivalent — call this the "ordinal equivalence thesis" (OET). I argue that adopting OET has significant costs. First, adopting OET renders one incapable of determining whether a piece of evidence substantially favors one hypothesis over another. Second, OET must be rejected if merely ordinal conclusions are to be drawn from the expected value of a confirmation measure. Furthermore, several arguments and applications of confirmation measures given in the literature already rely on a rejection of OET. I also contrast OET with stronger equivalence theses and show that they do not have the same costs as OET. On the other hand, adopting a thesis stronger than OET has costs of its own, since a rejection of OET ostensibly implies that people's epistemic states have a very fine-grained quantitative structure. However, I suggest that the normative upshot of the paper in fact has a conditional form, and that other Bayesian norms can also fruitfully be construed as having a similar conditional form.
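The second cost noted above can be illustrated with a small sketch. The ratio measure and its logarithm are ordinally equivalent (log is monotone), yet their expected values can rank two evidential situations oppositely. The prior of 0.05 and the two hypothetical experiments below are invented numbers chosen only to exhibit the reversal.

```python
import math

# Two ordinally equivalent confirmation measures: the ratio measure
# and its logarithm always rank (H, E) pairs the same way.
def ratio(post, prior):
    return post / prior

def log_ratio(post, prior):
    return math.log(post / prior)

# Two hypothetical experiments, each a distribution over how the
# posterior for H (prior 0.05) might come out: (probability, posterior)
exp_A = [(0.5, 0.50), (0.5, 0.005)]  # either strong boost or strong drop
exp_B = [(1.0, 0.10)]                # a certain modest boost

def expected(measure, experiment, prior=0.05):
    return sum(p * measure(post, prior) for p, post in experiment)

# Expected confirmation ranks the experiments oppositely under the
# two measures, so no merely ordinal conclusion can be drawn.
ea_r, eb_r = expected(ratio, exp_A), expected(ratio, exp_B)      # A > B
ea_l, eb_l = expected(log_ratio, exp_A), expected(log_ratio, exp_B)  # B > A
```

Which ranking is correct therefore depends on a choice among ordinally equivalent measures — exactly what OET says should not matter.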
Before we can decide what degree of belief to apportion to a prediction derived from a hypothesis or a claim made by a purported expert, we must figure out how much trust we should place in the hypothesis or expert. I argue that degree of epistemic trust is an epistemic attitude that is fundamentally distinct from degree of belief because the two types of attitude obey different updating norms. I give two distinct arguments for the latter claim. One of the arguments takes as its starting point the characterizations of inductive inference provided in Bissiri et al. (2016) and Vassend (2019), while the other argument is grounded in the "minimum divergence" framework (Bernardo (1979), van Fraassen (1981), Diaconis and Zabell (1982), Berger et al. (2009), Bissiri et al. (2016), Eva and Hartmann (2018)), which holds that epistemic attitudes should be changed in a conservative way in light of evidence. I also show that the alternative updating methods I suggest maximize expected accuracy, provided the expectation is calculated relative to the degree of trust function rather than the degree of belief function. Finally, I use a simple simulation experiment to provide proof of concept that one of the alternative updating norms I suggest works in practice.
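The "minimum divergence" framework cited above can be illustrated with one standard instance: when evidence constrains only the probability of an event E, the update that minimizes Kullback-Leibler divergence from the prior is Jeffrey conditioning (a result associated with Diaconis and Zabell). The four-world space, the prior, and the constraint value below are hypothetical numbers for illustration.

```python
import numpy as np

# Minimum-divergence updating: move the prior to the closest (in KL
# divergence) distribution satisfying the constraint. When the
# constraint fixes P(E), this coincides with Jeffrey conditioning:
# rescale inside E and outside E, preserving relative odds within each.

prior = np.array([0.1, 0.2, 0.3, 0.4])  # hypothetical 4-world space
in_E = np.array([True, True, False, False])
q = 0.6                                  # constraint: new P(E) = 0.6

posterior = prior.copy()
posterior[in_E] *= q / prior[in_E].sum()
posterior[~in_E] *= (1 - q) / prior[~in_E].sum()
```

The arguments above turn on the observation that degrees of trust, unlike degrees of belief, need not be revised by conservative rules of this conditioning form.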
Scientists will often trust a hypothesis for predictive purposes even if they believe that the hypothesis is false. Moreover, it is clear that trust -- like belief -- comes in degrees and ought to be updated in response to evidence. I argue that degrees of trust and degrees of belief are governed by different updating norms and that they are therefore two fundamentally distinct epistemic attitudes, neither one of which may be reduced to the other. Interestingly, there is less reason to think that binary (all-or-nothing) trust differs from binary (all-or-nothing) belief, so the discussion highlights a potentially fundamental difference between formal and traditional epistemology. Finally, I show that the distinction between degrees of trust and degrees of belief has important consequences for decision theory because it is trust -- not belief -- that in general ought to guide action.