Jack Woods
  • Jack Woods, School of PRHS, Michael Sadler building, University of Leeds, Woodhouse Lane, Leeds, LS2 9JT.
It is plausible that there are epistemic reasons bearing on a distinctively epistemic standard of correctness for belief. It is also plausible that there are a range of practical reasons bearing on what to believe. These theses are often thought to be in tension with each other. Most significantly for our purposes, it is obscure how epistemic reasons and practical reasons might interact in the explanation of what one ought to believe. We draw an analogy with a similar distinction between types of reasons for actions in the context of activities. The analogy motivates a two-level account of the structure of normativity that explains the interaction of correctness-based and other reasons.

This account relies upon a distinction between normative reasons and authoritatively normative reasons. Only the latter play the reasons role in explaining what state one ought to be in. All and only practical reasons are authoritative reasons. Hence, in one important sense, all reasons for belief are practical reasons. But this account also preserves the autonomy and importance of epistemic reasons. Given the importance of having true beliefs about the world, our epistemic standard typically plays a key role in explaining what we ought to believe. In addition to reconciling (versions of) evidentialism and pragmatism, this two-level account has implications for a range of important debates in normative theory, including the interaction of right and wrong reasons for actions and other attitudes, the significance of reasons in understanding normativity and authoritative normativity, the distinction between ‘formal’ and ‘substantive’ normativity, and whether there is a unified source of authoritative normativity.
Philosophical arguments usually are and nearly always should be abductive. Across many areas, philosophers are starting to recognize that often the best we can do in theorizing about some phenomenon is to put forward our best overall account of it, warts and all. This is especially true in areas like logic, aesthetics, mathematics, and morality, where the data to be explained is often based in our stubborn intuitions.

While this methodological shift is welcome, it's not without problems. Abductive arguments involve significant theoretical resources which themselves can be part of what's being disputed. This means that we will sometimes find otherwise good arguments which suggest their own grounds are problematic. In particular, sometimes revising our beliefs on the basis of such an argument can undermine the very justification we used in that argument.

This feature, which I'll call self-effacingness, occurs most dramatically in arguments against our standing views on the subject matters mentioned above: logic, mathematics, aesthetics, and morality. This is because these subject matters all play a role in how we reason abductively. This isn't an idle fact; we can resist some challenges to our standing beliefs about these subject matters exactly because the challenges are self-effacing. The self-effacing character of certain arguments is thus both a benefit and a limitation of the abductive turn and deserves serious attention. I aim to give it the attention it deserves.
I extend my earlier argument that abductive comparisons of logical theories are problematic to cover piecemeal approaches like reflective equilibrium. I suggest that reflective equilibrium faces a version of the challenge Wright and Shapiro initially raised for it. Solving Wright's problem opens up a vulnerability to cycles. I close by sketching what the best piecemeal approach would look like.
Neofregeanism and structuralism are among the most promising recent approaches to the philosophy of mathematics. Yet both have serious costs. We develop a view, structuralist neologicism, which retains the central advantages of each while avoiding their more serious costs. The key to our approach is using arbitrary reference to explicate how mathematical terms, introduced by abstraction principles like Hume’s, refer. Focusing on numerical terms, we argue that this allows us to treat abstraction principles as implicit definitions serving to determine all (known) properties of the numbers, achieving a key neofregean advantage, while preserving the key structuralist advantage that which objects play the number role doesn’t matter.
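For reference, the abstraction principle mentioned here, Hume's Principle, is standardly stated as follows (this is the textbook formulation, not a quotation from the paper):

\[ \#F = \#G \leftrightarrow F \approx G \]

where \(\#F\) reads 'the number of Fs' and \(F \approx G\) abbreviates the second-order claim that there is a one-to-one correspondence between the Fs and the Gs.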
I defend normative subjectivism against the charge that believing in it undermines the functional role of normative judgment. In particular, I defend it against the claim that believing that our reasons change from context to context is problematic for our use of normative judgment. To do so, I distinguish two senses of normative universality and normative reasons: evaluative universality and reasons, and ontic universality and reasons. The former captures how even subjectivists can evaluate the actions of those subscribing to other conventions; the latter explicates how their reasons differ from ours. I then show that four central aspects of the functional role of normativity---evaluation of our own and others' actions and reasons, normative communication, hypothetical planning, and evaluating counternormative conditionals---require far less than full ontic universality. The upshot is that there's no serious problem for subjectivism along these lines.
A natural suggestion and increasingly popular account of how to revise our logical beliefs treats revision of logic analogously to the revision of any scientific theory (Hjortland, Priest, Russell, Williamson, etc.). I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naive logics of truth. I use this discussion to motivate a constraint—LOGICAL PARTISANHOOD—on the uses of such methodology: roughly, both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
Etiquette and other merely formal normative standards like legality, honor, and rules of games are taken less seriously than they should be. While these standards aren’t intrinsically reason-providing (or “substantive”) in the way morality is often taken to be, they also play an important role in our practical lives: we collectively treat them as important for assessing the behavior of ourselves and others and as licensing particular forms of sanction for violations. I here develop a novel account of the normativity of formal standards where the role they play in our practical lives explains a distinctive kind of reason to obey them. We have this kind of reason to be polite because etiquette is important to us. We also have this kind of reason to be moral because morality is important to us. This parallel suggests the importance we assign to morality is insufficient to justify its being substantive.
Moore’s paradox—the infamous felt bizarreness of sincerely uttering something of the form “I believe grass is green, but it ain’t”—has attracted a lot of attention since its original discovery (Moore 1942). It is often taken to be a paradox of belief—in the sense that the locus of the inconsistency is the beliefs of someone who so sincerely utters. This claim has been labeled the priority thesis: if you have an explanation of why a putative content could not be coherently believed, you thereby have an explanation of why it cannot be coherently asserted (Shoemaker 1995). The priority thesis, however, is insufficient to give a general explanation of Moore-paradoxical phenomena and, moreover, it’s false. I demonstrate this, then show how to give a commitment-theoretic account of Moore-paradoxicality, drawing on work by Bach and Harnish. The resulting account has the virtue of explaining not only cases of pragmatic incoherence involving assertions, but also cases of cognate incoherence arising for other speech acts, such as promising, guaranteeing, ordering, and the like.
Sometimes a fact plays a role in a grounding explanation even though the particular content of that fact makes no difference to the explanation---any fact would do in its place. I call these facts vacuous grounds. I show that applying the distinction between vacuous and non-vacuous grounds allows us to give a principled solution to Kit Fine and Stephen Kramer's paradox of (reflexive) ground. This paradox shows that, on minimal assumptions about grounding and about logic, grounding is reflexive, contra the intuitive character of grounds. I argue that we should never have accepted that grounding is irreflexive in the first place; the intuitions that support irreflexivity plausibly only require that grounding be non-vacuously irreflexive. Fine and Kramer's paradox relies, essentially, on a case of vacuous grounding and is thus no problem for this account.
I investigate syntactic notions of theoretical equivalence between logical theories and a recent objection thereto. I show that this recent criticism of syntactic accounts as extensionally inadequate is unwarranted by developing an account which is plausibly extensionally adequate and more philosophically motivated. This is important for recent anti-exceptionalist treatments of logic since syntactic accounts require less theoretical baggage than semantic accounts.
I argue that certain species of belief, such as mathematical, logical, and normative beliefs, are insulated from a form of Harman-style debunking argument whereas moral beliefs, the primary target of such arguments, are not. Harman-style arguments have been misunderstood as attempts to directly undermine our moral beliefs. They are rather best given as burden-shifting arguments, concluding that we need additional reasons to maintain our moral beliefs. If we understand them this way, then we can see why moral beliefs are vulnerable to such arguments while mathematical, logical, and normative beliefs are not—the very construction of Harman-style skeptical arguments requires the truth of significant fragments of our mathematical, logical, and normative beliefs, but requires no such thing of our moral beliefs. Given this property, Harman-style skeptical arguments against logical, mathematical, and normative beliefs are self-effacing; doubting these beliefs on the basis of such arguments results in the loss of our reasons for doubt. But we can cleanly doubt the truth of morality.
We point out a serious problem for a pair of interesting proposals, due to Daniel Singer and Gillian Russell/Greg Restall, about how to understand the Humean claim that we can't get an ‘ought’ from an ‘is’. Our complaint about these recent attempts is that they interfere with substantive debates about the nature of the ethical. This problem, here developed in detail for Singer’s and Russell and Restall’s accounts of Hume’s dictum, is of a general type arising for the use of model-theoretic structures in cashing out substantive philosophical claims: the question of whether an abstract model-theoretic structure successfully interprets something often involves taking a stand on non-trivial issues surrounding that thing. In this particular case, the problem is that in the presence of reasonable conceptual or metaphysical claims about the ethical, the taxonomies Singer’s and Russell and Restall’s accounts are built on treat obviously ethical claims as descriptive and conversely. Consequently, it's far from clear that the distinction between the ethical and the descriptive admits of an illuminating model-theoretic characterization, and it’s clear that the model-theoretic characterizations given by Singer and Russell and Restall are not metaethically neutral.
I address a response from Sagi (2015) to William Hanson and Timothy McCarthy's arguments against the invariance criterion for logicality. I argue that we can distinguish "invariance of content" from "invariance of character" and, by doing so, resuscitate a version of Hanson and McCarthy's argument as targeted against content-invariance being sufficient for logicality. I go on to show that invariance of content and character entail limited forms of general and real world necessity under plausible assumptions.
Why do promises give rise to reasons? I consider a quadruple of possibilities which I think will not work, then sketch the explanation of the normativity of promising I find more plausible—that it is constitutive of the practice of promising that promise-breaking implies liability for blame and that we take liability for blame to be a bad thing. This effects a reduction of the normativity of promising to conventionalism about liability together with instrumental normativity and desire-based reasons. This is important for a number of reasons, but most importantly because this style of account can be extended to account for nearly all normativity—one notable exception being instrumental normativity itself. Success in the case of promises suggests a general reduction of normativity to conventions and instrumental normativity. But success in the case of promises is already quite interesting and does not depend essentially on the general claim about normativity.
I discuss Greg Restall’s attempt to generate an account of logical consequence from the incoherence of certain packages of assertions and denials. I take up his justification of the cut rule and argue that in order to avoid counterexamples to cut, he needs, at least, to introduce a notion of logical form. I then suggest a few problems that will arise for his account if a notion of logical form is assumed. I close by sketching what I take the most natural minimal way of distinguishing content and form to be and suggest further problems arising for this route.
I respond to an interesting objection to my 2014 argument against hermeneutic expressivism. I argue that even though Toppinen has identified an intriguing route for the expressivist to tread, the plausible developments of it would not fall to my argument anyway---as they do not make direct use of the parity thesis, which claims that expression works the same way in the case of conative and cognitive attitudes. I close by sketching a few other problems plaguing such views.
Mark Schroeder has argued that all reasonable forms of inconsistency of attitude consist of having the same attitude type towards a pair of inconsistent contents (A-type inconsistency). We suggest that he is mistaken in this, offering a number of intuitive examples of pairs of distinct attitude types with consistent contents which are intuitively inconsistent (B-type inconsistency). We further argue that, despite the virtues of Schroeder's elegant A-type expressivist semantics, B-type inconsistency is in many ways the more natural choice in developing an expressivist account of moral discourse. We close by showing how to adapt ordinary formality-based accounts of logicality to define a B-type account of logical inconsistency and distinguish it from both semantic and pragmatic inconsistency. In sum, we provide a roadmap of how to develop a successful B-type expressivism.
I suggest that we can extend Tarski's model-theoretic criterion of logicality to cover indefinite expressions like Hilbert's ɛ-operator, Russell's indefinite description operator η, and abstraction operators like 'the number of'. I draw on this extension to discuss the logical status of both abstraction operators and abstraction principles.
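For readers unfamiliar with the criterion being extended, Tarski's test can be put roughly as follows (a generic textbook formulation, not the paper's own):

\[ O \text{ is logical} \iff \pi(O) = O \text{ for every permutation } \pi \text{ of the domain } D \]

that is, an operation on a domain counts as logical just in case it is invariant under all permutations of that domain.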
In this short paper I argue against expressivism as a descriptive account of moral language. I do this by leveraging features of the connection between ordinary assertion and belief to formulate predictions about the putative connection between moral assertion and various non-cognitive states. These predictions are dramatically disconfirmed.
This article explores the connection between natural deduction rules and model-theoretic accounts of connective meaning. James Garson has shown that when we generalize the notion of a model, the natural deduction rules for negation, the conditional, and conjunction express their familiar intuitionistic meanings. He also shows that these meanings are compositional in the sense of uniquely extending an assignment of semantic values to the atomic sentences of a language. I demonstrate that the meaning expressed by disjunction fails to be categorical and, moreover, that it lacks a seemingly required compositionality property. This fact is independently interesting as well as problematic for certain forms of inferentialism about the logical constants.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. Covers Higher-order Attitude approaches, Tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
Review of Richard Joyce's "Essays in Moral Skepticism"
It's increasingly common to recognize that philosophical arguments are typically abductive in character. From Lewis's point that philosophy is a game of costing views, to the rise of anti-exceptionalism in logic, to the now near-universal use of methods like reflective equilibrium in ethics and metaethics, philosophers have started to recognize that often the best we can do is advocate this or that as the best explanation of some phenomena. This is especially true in esoteric areas like logic and morality where the target explananda are often our trenchant intuitions.

While this methodological shift is largely a welcome change, abductive arguments involve relatively heavy theoretical resources that themselves can be in serious contention. This means that we will occasionally have an argument whose conclusion conflicts with some of the supporting materials we use to justify that conclusion. These arguments are interesting for a number of reasons, but perhaps most importantly because this feature---the self-effacing character of these arguments---occurs most often in the context of arguments for modifying our standing views on subject matters like logic, aesthetics, mathematics, and morality. This gives us the resources to resist such arguments, at least in some cases. Which cases and which arguments is the subject of this paper.

The upshot is that self-effacement furnishes a robust anti-skeptical strategy for a privileged fragment of morality and mathematics, but this doesn’t easily extend to aesthetics or to morality more generally. This puts the line between where we should be dogmatic and where we shouldn’t exactly where it belongs.
Kaplan claimed that both expressions and sentences had characters, or context-invariant meanings. Relatively little attention has been paid to the question of how to build these complex, sentential characters from simpler ones, however, and this has engendered some confusion regarding what complex characters might be like and what sorts of explanatory roles they might play. We aim to allay any confusions by introducing a formal model of character that allows for composition of various degrees of strength. Then we discuss how these composed sentential characters can help us to model an interesting notion of partial understanding, in addition to some properties of invariants.
Epistemic Teleology is the view that epistemic normativity is explained by facts about value, broadly construed. It comes in two familiar versions which differ about whether the normative status of any given epistemic state is explained directly in terms of the epistemic value of that state, or, alternatively, directly in terms of its practical value. Both versions face compelling counterexamples. We here develop an indirect alternative. We distinguish two normative properties: fittingness and criticism-worthiness. Fittingness is a property of epistemic attitudes such as beliefs or credences, or sets thereof. The fittingness conditions for an epistemic attitude are provided by your favourite first-order epistemic theory: whether by conditionalisation, by coherence with your other beliefs, by following from some fundamental inductive rule, or by some medley of higher-order rules. Fittingness explains our evidentialist intuitions, but we argue that fittingness is not genuinely normative. The mark of a genuine normative standard is criticism-worthiness for failure to meet the standard. Many instances of being in a token epistemic state are criticism-worthy, but the fittingness of these states is neither necessary nor sufficient for their criticism-worthiness. Instead the criticism-worthiness of an epistemic state is fully explained by its practical value. We argue that this view enjoys many of the advantages of familiar versions of epistemic teleology, without suffering from their shortcomings. This view promises many further advantages: it explains pragmatic encroachment; it removes any ‘dualism’ of practical and epistemic normativity; it offers desire-based and value-based normative theories a reply to the ‘unity of reason’ objection; it eases the way to normative naturalism; and it fits nicely with a variety of attractive general normative theories.
I introduce and motivate the notion of vacuous grounding. The occurrence of a fact q in a set of grounds for some fact p is vacuous when any fact could have done the work q does in grounding p. This notion turns out to be useful in cashing out metaphysical intuitions about what grounds what. I start my investigation with a lengthy case study: how to formulate the autonomy of the ethical. This is the view that we cannot 'get' an ethical fact from a natural fact. Following recent work by Barry Maguire, we can make sense of this by saying that no ethical fact is fully grounded in natural facts and that any fact partially grounded by ethical facts is ethical. I show that this is false---natural facts can be partially grounded in ethical facts---but that natural facts cannot be non-vacuously grounded in ethical facts. We can better characterize the autonomy of the ethical from the natural in terms of vacuous grounding and resolve several other difficulties, such as characterizing Ronald Dworkin's view of the law, distinguishing between basic and non-basic values, and resolving a paradox of grounding due to Kit Fine and Stephen Kramer.
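A rough formal gloss of the central notion, lifted from the abstract's own wording (the paper's official definition may differ in detail):

\[ q \text{ occurs vacuously in grounds } \Gamma \text{ for } p \iff \text{for every fact } r,\ (\Gamma \setminus \{q\}) \cup \{r\} \text{ also grounds } p \]

so a ground is vacuous when its particular content does no work: any substitute fact would ground p equally well.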
This is a historical paper offering an account of the rules of proof (or themata) of the Stoic logicians. I argue that the correct understanding of the Stoics' rejection of redundant arguments suggests that their logic is antimonotonic. I then show how to reconstruct an antimonotonic fragment of their proof system.
Modest Inferentialism, the view that inferential role fixes the meaning of (at least logical) expressions against a background grasp of meaning, has had a bit of a checkered history. In this piece, I discuss problems with this view and criteria for a successful version of it. I show how the best extant version of modest inferentialism, due to James Garson, has trouble with these criteria and discuss what is needed to overcome these problems. To solve them, I develop an interpretation of the formal model-theoretic conditions that Garson-style modest inferentialism generates for the classical rules for the connectives, which in turn motivates a principled restriction on admissible models. This involves a discussion of how to represent contingency in a modest inferentialist setting, the role of an intuitive interpretation in justifying side-conditions on models, and what atomic sentences represent in this setting. This interpretation satisfies the intuitive criteria for a successful modest-inferentialist account of the meaning of the logical connectives; it is a strong contender, I reckon, for an internally satisfying account of the meaning of the logical connectives---and one which does not extend to intuitionistic logic. This last point furnishes a not entirely disreputable argument against intuitionistic logic in favor of something approximating classical logic.
This paper addresses the putative connection between logical consequence and rational belief revision. After criticizing both optimistic accounts of this connection (such as that of John MacFarlane) and pessimistic accounts (such as that of Gilbert Harman), I suggest a modest account of the connection which draws on the structural features of the consequence relation. The result is a nice lower bound on the strength of the connection between logical consequence and rational belief revision, but one which can be strengthened if compelling reasons are found to do so.
We argue that a number of difficulties facing expressivist solutions to the Frege-Geach problem are paralleled by almost exactly analogous problems facing realist semantic theories. We argue that a prominent realist solution to the problem of explaining logical inconsistency can be adopted by expressivists. By doing so, the expressivist brings her account of logical consequence more in line with philosophical orthodoxy, while simultaneously purchasing herself the right to appeal to a wider class of attitudinal conflicts in her semantic theorizing than is allowed, for instance, by Mark Schroeder in his recent work. Finally, it emerges that a standard objection to expressivist theories is based on a misinterpretation of the Frege-Geach problem. We explain this misinterpretation and show how expressivists can easily skirt the objection it motivates.