The Rationality of Epistemic Akrasia
Introduction
To be epistemically akratic is either
(1) to believe p and also believe that believing that p is rationally
forbidden
or
(2) to not believe p and also believe that not believing that p is rationally
forbidden.
The epistemic akratic either possesses a belief which she believes it is rationally forbidden to
possess, or lacks a belief which she believes it is rationally forbidden to lack.1
If rational akrasia is possible then rational false belief must be possible, since rational
akrasia must involve a false belief. If the rationality of a belief entails its truth then the
impossibility of rational akrasia would follow automatically. But a general factivity constraint on
rational belief is not in the spirit of the anti-akrasia literature, as it maintains that akrasia cannot
be rational for a different reason. The driving idea is that the epistemic akratic somehow fails by
her own lights, and that there is some distinctive sort of incoherence involved in failing by one’s
own lights. Indeed, requirements positing the irrationality of akrasia are often seen as one
instance of requirements prohibiting a specific kind of structural irrationality.2
First, an initial bit of ground clearing. Insofar as a proposition can be believed under
multiple guises, then epistemic akrasia is clearly unproblematic in a variety of cases where
multiple guises are in play. Consider:
1 Standard definitions of epistemic akrasia all have the same basic structure. There are two elements:
(1) a belief / lack of belief in some proposition
(2) a belief that such a belief / lack of belief in that proposition is bad in some specific
way
The sort of badness we’ve employed is being rationally forbidden. Other sorts have also been employed, and
most of what we have to say can be adapted to these other notions.
2 See, for instance, Worsnip (2018).
Through a Glass Darkly: You are touring a castle. You look out through a clear
window and you see a ship. You are struck by the ship’s beauty, and so you believe
that it is beautiful. Moving elsewhere, you look out through an occluded window. You
see an indistinct shape. You can tell that it’s a ship, but you can’t tell much else about
it. You thus form the belief of the ship that you ought not to believe that it is beautiful.
But unbeknownst to you (and surprisingly) the two windows look out onto the same
ship. So there is a ship x such that you both believe that x is beautiful and believe that
you ought not believe that x is beautiful.
There is clearly nothing shameful about the akratic combination in this scenario. Anti-akrasia
epistemology must either deploy a sufficiently fine-grained conception of propositions so that
cases of this sort cannot arise, or else refine the anti-akrasia principle in a way that makes
explicit some constant guise assumption. We shall charitably assume that guise worries of the
sort raised by the above example can be adequately controlled for.
Even assuming that the guise issue can be controlled for, the thesis that all epistemically
akratic states are irrational is subject to counterexamples. We shall, in part one below, present
various counterexamples.3 Some proponents of anti-akrasia principles concede the existence of
isolated counterexamples that they hope to circumscribe, thereby preserving the irrationality of
epistemic akrasia in all but a certain special class of cases.4 But counterexamples are pervasive,
and have various distinct sources. In part two, we examine and critique some positive lines of
argument for anti-akrasia principles, some of which are extant in the literature and some of
which are novel (but suggested by related literatures). In part three, we look at a strategy for
keeping the anti-akratic sensibility alive, a strategy that appeals to idealization. All told, the case
against anti-akratic principles is surprisingly strong and the case for anti-akratic principles is
surprisingly weak.
Part I: The counterexamples
I.1 Lack of Access to Beliefs
One might have a belief and yet fail to realize that one has it. One might lack a belief and yet fail
to realize that one lacks it. These access failures may have various sources. They may simply be
due to a failure of introspective acuity, analogous to a failure of perceptual acuity. Or they may
be more deeply rooted in a false theory about the nature of belief and its relation to other states
like knowledge. Either way, imperfect access to one’s belief states can yield cases where
epistemic akrasia seems rationally unproblematic.
Now to the counterexamples. Consider the following case:
3 These counterexamples can easily be adapted to apply to practical akrasia, though we will leave the case of
practical akrasia aside.
4 See, for instance, Horowitz (2014) and Titelbaum (2015).
Good News: The epistemic oracle reveals some information about the normative status
of your belief, or lack thereof, in a proposition p. If you believe that p then belief in p is
rationally required, and if you don’t believe that p then belief in p is rationally
forbidden. You breathe a sigh of relief––either way it looks like you’re in the clear.
You believe that you believe that p, and thus you believe that belief in p is rationally
required. But you’re not perfectly reliable about what your beliefs are, and in this case
you’ve made a mistake. In fact, you don’t believe that p. So you do not believe that p
and also believe that not believing p is forbidden. Thus you are epistemically akratic.
Analysis: This doesn’t at all feel like a case in which you’re failing by your own lights. You
believe, after all, that whether or not you believe p, you are rational. Given your imperfect
introspection about your doxastic state you seem to have behaved quite reasonably. So although
you are epistemically akratic, it doesn’t seem that you are irrational.
Going Ancient: You’re convinced by various texts in ancient philosophy (for example,
Plato’s Republic) that knowledge is inconsistent with belief. This is a mistake; the
contemporary consensus is correct, and in fact knowledge entails belief. You know that
your spouse is a good person, and you know that it’s important that you know this.
Thus you conclude that you ought not to believe that your spouse is a good person, as
you believe that such belief would be inconsistent with knowledge. But since
knowledge entails belief, you do in fact believe that your spouse is a good person. Thus
you both believe that your spouse is a good person and believe that you ought not to
believe that your spouse is a good person, and are epistemically akratic.
Analysis: The innocent mistake about the relationship between belief and knowledge satisfyingly
explains your mistake about whether or not you ought to believe that your spouse is a good
person. You were convinced that your spouse is a good person and believed that you should be
convinced that your spouse is a good person––you only made a mistake about what being
convinced amounted to vis-a-vis belief. There’s nothing remotely incoherent about your
epistemic state, and no call to consider you irrational.
Missed It By That Much: You’re 79% confident that the Yankees will make it to the
World Series next year. You know that you’re 79% confident that the Yankees will
make it to the World Series next year, and you know that you ought to be exactly that
confident. You’ve thought about the relationship between belief and credence a lot.
You believe that a credence of .75 or greater suffices for belief. But while there is a
Lockean threshold for belief, its value is actually .8 rather than .75. Thus you believe
that you ought to believe that the Yankees will make it to the World Series next year
(you know that you ought to have credence .79, and you falsely believe this is above
the threshold for belief), but you don’t in fact believe that they will.
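The structure of the case can be put in a toy sketch (the numbers are those of the vignette; the Lockean rule itself, and the variable names, are our simplifying gloss):

```python
# A toy Lockean model of "Missed It By That Much": believe p iff your
# credence in p meets the threshold. Numbers are from the vignette.
credence = 0.79
believed_threshold = 0.75   # your rationally held but false view
actual_threshold = 0.80     # the true Lockean threshold

you_take_yourself_to_believe = credence >= believed_threshold   # True
you_actually_believe = credence >= actual_threshold             # False

# The akratic combination: you believe that you ought to believe p
# (your required credence, .79, clears the threshold as you see it),
# yet you do not in fact believe p.
print(you_take_yourself_to_believe, you_actually_believe)
```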
Analysis: The innocent mistake about where the Lockean threshold5 is produces an access
failure: you fail to believe p, but owing to your mistaken views about the threshold for belief,
you do not realize that you fail to believe p. (Similarly, a case where the real threshold is lower
than one reasonably thinks it is could produce a failure to believe that you believe p when you
do.) This mistake satisfyingly explains your erroneous belief about what you ought to believe.
You knew how confident you ought to be and knew you were exactly that confident. The fact
that, surprisingly, your level of confidence that the Yankees will make it to the World Series next
year does not amount to a belief does not make your epistemic state incoherent and does not
make you irrational.6
I.2 False Theories of Rationality
We have seen that false (but rationally formed) theories of belief can yield intuitively
unproblematic cases of epistemic akrasia. The same can happen in cases of false but rationally
formed theories about the nature of rationality. Consider the following:
Unger Games: Following Unger, you believe that a belief in p is rational only if you
have reasons for believing p and that something can be a reason for belief only if it is
known. Moreover you reasonably trust an epistemologist––Peter Unger, in fact––who
tells you that knowledge is unachievable. You thus believe that none of your beliefs are
rational, thinking that the best that can be hoped for is some lesser status. But Unger
has got all this wrong, and in fact many of your beliefs are rational.
Analysis: Here again epistemological errors need not involve a lapse in rationality. Once one has
convinced oneself––rationally––that rationality is too much to be hoped for, it does not seem that
the akratic state that results is irrational.
The cases we articulated provide substantial reason for doubting that all epistemically
akratic agents are irrational. But we do not wish to declare the rationality of epistemic akrasia
prematurely. Having presented a battery of cases in which akrasia doesn’t look that bad, let us
turn to positive arguments for anti-akratic principles.
5 See Foley (2009).
6 Clayton Littlejohn (2018) appeals to a thought common among those who hold akratic states to be irrational.
Concerning a rational belief that a certain belief is rationally required, he says “With this belief in place and its
blessing from rationality, it’s hard to see how rationality could then require you to refrain from believing [the
proposition in question]”. But here, so long as the belief about thresholds is rational, it is obvious enough how one
could rationally believe a certain belief to be rationally required and yet it be forbidden. Indeed, to enter into a belief
state in this scenario would run afoul of one’s knowledge that a credence of .8 or higher was forbidden.

Part Two: Defenses of Anti-Akratic Principles
It is not atypical for authors to take anti-akratic principles for granted, and then set out to derive
further conclusions from them.7 When positive arguments are provided, they often take the form
of a vignette describing some particular akratic agent followed by remarks to the effect that there
is clearly something amiss with that agent. But, of course, rejecting general anti-akratic
principles is not tantamount to declaring all akratic agents to be rational. So while the proponents
of anti-akrasia epistemology cannot accept our judgments about the cases we present, we can
accept their judgments about the cases they present.
Hence, particular cases in which an akratic agent appears to be irrational will not
establish that all akratic agents are irrational (especially in light of other cases in which an akratic
agent appears to be rational). Let us therefore explore some more systematic ways in which one
might argue for anti-akrasia epistemology. If systematic arguments for the irrationality of akrasia
called the status of our counterexamples into doubt then our case for the rationality of akrasia
would be insecure.
II.1 Moore-paradoxicality
Perhaps the most popular argument for an anti-akrasia constraint appeals to the seeming
Moore-paradoxicality of assertions like ‘p, but it is irrational for me to believe p’.8 It is then
taken that the best explanation of the seeming paradoxicality of such assertions is that the
relevant contents can never be rationally believed.9
It is notable that sometimes the inference ‘p is unassertable, therefore p is not rationally
believable’ goes badly wrong. I can rationally believe that he’s disingenuous but I would never
assert that; yet asserting ‘he’s disingenuous, but I would never assert that’ would certainly sound
odd––like saying that one would never eat chocolate while eating chocolate. It is important,
then, to distinguish cases in which one believes both p and that believing p is irrational from
cases in which one asserts both that p and that it is rationally forbidden to believe p.
First, while it is easy enough to construct cases where one believes that p but does not
know that one believes that p, it is much harder to think of cases in which a subject genuinely
asserts a proposition, but has no epistemic access to the fact that she asserted it.10 A subject who
lacks access to her own belief in p typically doesn’t go about asserting p.
7 For a recent example, one that we shall later discuss in detail, see Titelbaum (2015).
8 See, for instance, Feldman (2005: 108), Bergmann (2005: 424), Hazlett (2012: 211), and Smithies (2012).
9 The Moorean thought experiments are often constructed as ones in which one believes certain conjunctions. Even
if the irrationality of believing the conjunction is conceded, the conclusion that the pair of propositions cannot
rationally be simultaneously believed only follows straightforwardly on the assumption that rational belief is closed
under logical consequence. But that assumption is not unassailable. Indeed, on some views it is a straightforwardly
bad thing to always believe the logical consequences of one’s rational beliefs (for example, it is arguably rational to
believe of each lottery-ticket-holder that they will lose but irrational to believe that all will).
10 That is not to say that there are no such cases. Think, for example, about edge cases on a continuum between
flat-out asserting that p and a hedged claim that p, or a case where one is deaf and is unsure whether one actually
produced the words one intended to produce.
Second, the argument from Moore-paradoxicality only works assuming that the norms
governing assertion are the same as those governing belief. But there are reasons to dispute this.
Perhaps, for instance, there is a knowledge or sureness norm on assertion, but no knowledge or
sureness norm on belief. Compare (1) and (2) below:
(1) The train leaves at noon, though I’m not certain.
(2) Sally believes that the train leaves at noon, though she is not certain.
To many, (1) sounds like a worse indictment of oneself than (2) is of Sally, and one explanation
of this is that the norms governing assertion are more stringent than those governing belief.
In sum, we are dubious about using data elicited by considering assertions to draw
conclusions about norms governing belief.
But further, the kinds of considerations salient in the counterexamples above mitigate the
paradoxicality of assertions that fit standard templates for Moore-paradoxicality. Assume that
you become persuaded by a very stringent theory of belief according to which belief requires
absolute certainty, and that it is in practice impossible for ordinary mortals to be certain enough
of contingent truths to count as genuinely believing them. If you then assert ‘It is raining, but [of
course] I don’t believe that’, your strange views about belief mitigate much of the strangeness of
this assertion. Similar points can be made in connection with the examples given above. For instance,
in Unger Games a subject falsely but rationally believes that knowledge is unachievable. If she
then asserts ‘It is raining, but I don’t know that’, the paradoxicality of the assertion will again be
mitigated if we bear in mind her stringent view of knowledge.
There are thus several reasons to be skeptical of attempts to draw out general norms on
belief based on considerations having to do with the seeming paradoxicality of certain assertions.
II.2 Dubious luck
Horowitz (2013) sharpens the case against rational akrasia, going beyond standard Moorean
arguments. She addresses akratic states that involve believing a proposition p, while also
believing that one’s evidence doesn’t support p. We will, at least while discussing her paper,
follow Horowitz in thinking that an agent is rationally permitted to believe a proposition if and
only if that agent’s (total) evidence supports that proposition, and follow her in thinking that
evidence supports a proposition just in case it makes it sufficiently likely. In this case, rational
akrasia would require that an agent’s evidence be radically misleading regarding itself in the
following way: it makes p likely, while making it likely that it doesn’t make p likely.
In fact, the kind of case Horowitz considers is one in which the evidence makes p likely,
while making it likely that it makes p unlikely.11 Consider, then, a subject who believes p, while
believing that her evidence supports ¬p. If such a subject could be rational, Horowitz reasons
that this subject could come to rationally believe that her evidence regarding p is misleading (she
assumes that, at least in the case she is imagining, a subject will be in a position to rationally
believe something entailed by a pair of things each of which she rationally believes12). But,
Horowitz argues, this is not a legitimate way to come to believe that one’s evidence regarding p
is misleading. How could it be rational to believe p on the basis of evidence one takes to be
misleading regarding p? Supposing that such a subject could be rational, Horowitz similarly
reasons that this subject could rationally conclude that she got lucky in coming to truly believe p,
despite having evidence that fails to support p.
The kinds of considerations discussed above can be brought to bear on Horowitz’s
arguments. Consider cases involving lack of access to one’s own beliefs. In order to rationally
conclude, by using the kind of reasoning Horowitz envisages, that one got lucky in arriving at the
truth regarding whether p despite having evidence that fails to support p, one must believe that
one believes p. But the subjects in all of the counterexamples in our first class lack precisely this
kind of access to their own beliefs. Indeed, when Horowitz (2013) describes the oddness of
believing that one’s evidence regarding p is misleading, she claims that the person who believes
this can point to “a particular belief state of his that is, he thinks, unsupported by his total
evidence”. But in cases where access to the relevant belief is lacking, this is not something that
the agent can do.
II.3 Sleepy detectives and Williamsonian akrasia
Horowitz does not, in fact, deny the possibility of rational epistemic akrasia. She argues only that
paradigmatic instances of epistemic akrasia are irrational, but allows that eccentric instances of
epistemic akrasia can be rational. The paradigmatic instance she considers comes from a case she
terms Sleepy Detective, while the eccentric instance she considers comes from a case she terms
Dartboard, which is inspired by Timothy Williamson’s famous clock cases in Williamson
(2010) and Williamson (2014). Horowitz argues that these cases are different in three important
ways, and thus that different verdicts about them are warranted. But as we will show, the
disanalogies Horowitz imputes are illusory, and thus the acceptance of rational epistemic akrasia
in a putatively eccentric instance makes a strong case that there is no special rational prohibition
of akrasia. (Of course, akrasia may be irrational in particular cases, just as pretty much any pair
of beliefs can be irrational in particular cases.)
Here are the cases Horowitz presents:
11 Such evidence thus generates a more radical kind of akrasia than the one we have just been considering. The
agent does not merely believe that the evidence does not support p, but believes that the evidence supports ¬p.
12 Note that this assumption cannot be vindicated by multi-premise closure, since the status of likelihood on the
evidence does not respect such a general form of closure.
Sleepy Detective: Sam is a police detective, working to identify a jewel thief. He knows
he has good evidence—out of the many suspects, it will strongly support one of them.
Late one night, after hours of cracking codes and scrutinizing photographs and letters, he
finally comes to the conclusion that the thief was Lucy. Sam is quite confident that his
evidence points to Lucy’s guilt, and he is quite confident that Lucy committed the crime.
In fact, he has accommodated his evidence correctly, and his beliefs are justified. He calls
his partner, Alex. “I’ve gone through all the evidence,” Sam says, “and it all points to one
person! I’ve found the thief!” But Alex is unimpressed. She replies: “I can tell you’ve
been up all night working on this. Nine times out of the last ten, your late-night reasoning
has been quite sloppy. You’re always very confident that you’ve found the culprit, but
you’re almost always wrong about what the evidence supports. So your evidence
probably doesn’t support Lucy in this case.” Though Sam hadn’t attended to his track
record before, he rationally trusts Alex and believes that she is right—that he is usually
wrong about what the evidence supports on occasions similar to this one.13
Dartboard: You have a large, blank dartboard. When you throw a dart at the board, it
can only land at grid points, which are spaced one inch apart along the horizontal and
vertical axes. (It can only land at grid points because the dartboard is magnetic, and it’s
only magnetized at those points.) Although you are pretty good at picking out where the
dart has landed, you are rationally highly confident that your discrimination is not
perfect: in particular, you are confident that when you judge where the dart has landed,
you might mistake its position for one of the points an inch away (i.e. directly above,
below, to the left, or to the right). You are also confident that, wherever the dart lands,
you will know that it has not landed at any point farther away than one of those four. You
throw a dart, and it lands on a point somewhere close to the middle of the board.
Supposing that the dart lands at point <3,3>, Horowitz says that you should be certain that the
dart landed on either <3,3>, <3,2>, <2,3>, <4,3>, or <3,4>, but that you should be highly
uncertain as to which of those points in particular was hit.14
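The credences Horowitz cites can be recovered by brute force. Here is a minimal Python sketch of the Dartboard model, under the assumption (from the vignette) that the epistemic possibilities at any point are that point and its four neighbors, with a uniform prior; the function names are ours, not Horowitz’s:

```python
from fractions import Fraction

RING = {(3, 2), (2, 3), (4, 3), (3, 4)}

def possibilities(p):
    """The five grid points compatible with your evidence at p."""
    x, y = p
    return [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def credence(prop, at):
    """Rational credence in prop, given a uniform prior over the possibilities."""
    poss = possibilities(at)
    return Fraction(sum(1 for q in poss if q in prop), len(poss))

# Dart lands at <3,3>: your rational credence in Ring is 4/5.
print(credence(RING, (3, 3)))

# At each of the four neighboring points, the rational credence in Ring
# would be only 1/5; so at <3,3> you should be 4/5 confident that you
# should have 1/5 confidence in Ring.
low = [q for q in possibilities((3, 3))
       if credence(RING, q) == Fraction(1, 5)]
print(Fraction(len(low), len(possibilities((3, 3)))))
```

This reproduces Horowitz’s figures: credence .8 in Ring alongside credence .8 that you should have credence .2 in Ring.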
The first disanalogy Horowitz imputes is that the akratic state in Sleepy Detective is
unstable whereas the akratic state in Dartboard is stable. In Sleepy Detective, Sam believes that
13 Horowitz does not explicitly say within the vignette that the Sleepy Detective believes that his belief that Lucy is
a thief is not supported by the evidence. But it is clear from the surrounding discussion that we are to impute this
belief to the detective.
14 In adapting Williamson’s case to the credence framework, Horowitz assumes that knowing p makes credence 1 in
p rational, and more generally that rational credence matches what Williamson calls ‘evidential probability’.
Williamson himself is a little cagey about the relation between evidential probability and rational credence.
Lucy is the thief but also believes that his evidence does not support Lucy being the thief.
Horowitz writes, “[I]t seems that Sam can avoid being misled. He can point to a particular belief
of his that is, he thinks, unsupported by his total evidence… [T]here is something in particular
that Sam thinks he should believe—but he does not believe it.” In Dartboard, Horowitz considers
the proposition Ring––that the dart landed on <3,2>, <2,3>, <4,3>, or <3,4>. She writes,
“[A]lthough you should be .8 confident that you should have .2 confidence in Ring, there is no
particular credence distribution that you think you should adopt. While you should think your
evidence is misleading, this belief nevertheless seems stable.”
There is, however, no such disanalogy. The epistemological statuses of Sleepy Detective
and Dartboard do not vary in this way; the appearance of variation is a product only of
Horowitz’s varying mode of analysis. She analyzes Sleepy Detective in the coarse-grained, local
framework of particular outright beliefs––yielding the result that Sam believes that he should not
hold his belief that Lucy is the thief, whereas she analyzes Dartboard in the fine-grained, global
framework of total credal states––yielding the result that there is no alternate credence
distribution that you would prefer. But it would be equally viable to flip which framework is
used to evaluate each case. One could easily get the result that in Sleepy Detective there is no
alternate particular credence distribution that Sam would prefer, and that in Dartboard you
believe that you should not hold your belief in Ring. In terms of stability, Sleepy Detective and
Dartboard are on a par.
The second disanalogy Horowitz imputes is that in Sleepy Detective Sam knows what his
evidence is whereas in Dartboard you do not know what your evidence is. She writes that in a
case like Sleepy Detective “you are akratic because you are uncertain (or have a false belief)
about what E, your evidence, supports. In Dartboard, however, you are akratic because (in part)
you are uncertain about what your evidence is. This is compatible with your being highly
confident, or even certain, of the truth about the evidential support relations. So while the two
akratic states look similar, they come about through different kinds of uncertainty.”
Again, there is no such disanalogy. As before, the appearance of variation is a product
only of Horowitz’s varying mode of analysis. She analyzes Sleepy Detective with a casual notion
of evidence, one according to which knowing that you have clues without knowing what you
know on that basis counts as knowing what your evidence is. But she analyzes Dartboard with a
stricter notion of evidence (deployed by Williamson), one according to which knowing that you
have a visual experience without knowing what you know on its basis counts as not knowing
what your evidence is. But it would be equally viable to flip which framework is used to evaluate
each case. In that setting it would be easy to claim that Sleepy Detective Sam doesn’t know what
his evidence is (either because there is something he knows about the case but does not know he
knows, or because there is something he doesn't know about the case but doesn’t know that he
doesn’t know), and that in Dartboard you do know what your evidence is (because you do know
that your evidence is your visual experience).
Note that it is very hard for informally described cases to fix whether an agent’s
uncertainty concerns what her evidence is or what her evidence supports; in general, both
precisifications can be made. For example, the following two precisifications can be made
functionally equivalent by varying the interpretation of evidence as needed: (1) An agent knows
that E1 supports p and that E2 supports ¬p, but does not know whether her evidence is E1 or E2.
(2) An agent knows that her evidence is E1, but does not know whether E1 supports p and E2
supports ¬p, or instead E1 supports ¬p and E2 supports p. Moreover, even given a situation
sufficiently formalized to distinguish between uncertainty about what an agent’s evidence is and
uncertainty about what an agent’s evidence supports, Horowitz does not provide any reason to
think that such a distinction has bearing on whether or not an instance of epistemic akrasia is
rational. To our eyes, both sorts of uncertainty seem on a par.
The third disanalogy Horowitz imputes is that in Sleepy Detective Sam’s evidence is
truth-guiding, whereas in Dartboard your evidence is falsity-guiding. She writes,
In cases like Sleepy Detective, our evidence is usually “truth-guiding” with respect to
propositions about the identity of the guilty suspect (and most other propositions, too).
By this I mean simply that the evidence usually points to the truth: when it justifies high
confidence in a proposition, that proposition is usually true, and when it justifies low
confidence in a proposition, that proposition is usually false. If a detective’s first-order
evidence points to a particular suspect, that suspect is usually guilty. If it points away
from a particular suspect, that suspect is usually innocent…
In Dartboard, however, the evidence is not truth-guiding, at least with respect to
propositions like Ring. Instead, it is falsity-guiding. It supports high confidence in Ring
when Ring is false—that is, when the dart landed at <3,3>. And it supports low
confidence in Ring when Ring is true—that is, when the dart landed at <3,2>, <2,3>,
<4,3>, or <3,4>. This is an unusual feature of Dartboard. And it is only because of this
unusual feature that epistemic akrasia seems rational in Dartboard. You should think that
you should have low confidence in Ring precisely because you should think Ring is
probably true—and because your evidence is falsity-guiding with respect to Ring.
Epistemic akrasia is rational precisely because we should take into account background
expectations about whether the evidence is likely to be truth-guiding or falsity-guiding.
Yet again, there is no such disanalogy. We note that Horowitz’s claim that the evidence supports
high confidence in Ring when it is false is erroneous. The evidence supports low confidence in
Ring in nearly all situations––whenever the dart does not land on <3,3>. Of the many situations
in which the evidence supports low confidence in Ring, Ring is true in only four. If the dart lands
nowhere near the ring the evidence will support low confidence in Ring and Ring will duly be
false. So the most that can be said is that in Dartboard the proposition Ring is false whenever it is
supported by the evidence.
Let us say that a proposition is falsity-guided relative to a range of situations when it is
supported by the evidence in one of those situations only if it is false in that situation. Horowitz’s
assertion, then, is that rational high confidence in a proposition of the form ‘p and p is unlikely
on my evidence’ only ever occurs in Williamson-style cases when p is a falsity-guided
proposition relative to the situations associated with those cases, and in that sense the
counterexamples are eccentric. But Horowitz is wrong about this. Although the instance of
epistemic akrasia she considers does involve a falsity-guided proposition, that is an inessential
aspect of the case. It’s easy to construct Williamsonian models in which one’s evidence justifies
high confidence in “p and my evidence supports ¬p ” but where p is true.
Here is a simple case. Suppose a pointer is pointing to one of 100 points arranged in a
circle (which for convenience we shall number 1-100) with uniform prior over the points and
where the margin-for-error is plus/minus 3. The pointer is pointing to 10, and thus the strongest
thing the observer knows is that it is in the range 7-13. Consider the proposition––call it
Gappy––that the pointer is pointing to either 7 or 8 or 10 or 12 or 13. At each of 7, 8, 12 and 13,
the proposition expressed by ‘Gappy is true and Gappy is unlikely on the evidence’ is true.
Consider the epistemic possibility that the pointer points to 8, for example: In that situation
Gappy is true and the epistemic possibilities are 5-11. Among those epistemic possibilities
Gappy is true at 7, 8 and 10. So at that epistemic possibility Gappy is only 3/7 likely to be true.
Similarly for 7, 12, and 13. In sum, Gappy is true and it is rational for the observer to be over .5
confident that Gappy is true and unlikely on the evidence.
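The arithmetic of the Gappy case can be checked directly. Below is a small Python sketch of the model just described (100 points on a circle, uniform prior, margin-for-error of ±3); the helper names are ours:

```python
from fractions import Fraction

N, MARGIN = 100, 3
GAPPY = {7, 8, 10, 12, 13}

def possibilities(x):
    """Points compatible with the evidence at x: x plus/minus MARGIN, on a circle."""
    return [((x + d - 1) % N) + 1 for d in range(-MARGIN, MARGIN + 1)]

def prob(prop, at):
    """Evidential probability of prop at a point, with a uniform prior."""
    poss = possibilities(at)
    return Fraction(sum(1 for q in poss if q in prop), len(poss))

# At 7, 8, 12, and 13, Gappy is true but only 3/7 likely on the evidence:
for x in (7, 8, 12, 13):
    print(x, prob(GAPPY, x))

# At the actual point, 10, the evidential probability of 'Gappy is true
# and Gappy is unlikely (< 1/2) on the evidence' is 4/7, i.e. over .5:
akratic = [q for q in possibilities(10)
           if q in GAPPY and prob(GAPPY, q) < Fraction(1, 2)]
print(Fraction(len(akratic), len(possibilities(10))))
```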
In this situation Gappy is true but isn’t known. Let us say a proposition p is
ignorance-dependent across a range of situations when there are situations in which “p and p is
unlikely on my evidence” holds, but they are all ones in which p is not known. Is it true that in
Williamson-style margin-for-error cases, the only propositions that make for rational high
confidence in the conjunction “p and it is unlikely that p” are ignorance-dependent propositions?
If we include models where discriminatory powers vary from point to point, the conjecture is
obviously false. Consider, for example, a model with a uniform prior over a large number of
points arranged in a circle, and in which one’s discriminatory abilities vary from point to point.
From point 17 one can tell that one is either at point 16, 17, or 18 and nothing stronger, and from
any other point n, one can tell that one is either at n - 100, n - 99, … , n + 99, or n + 100 and
nothing stronger. Let p be the proposition that the dart hits 16, 17, or 18. If point 17 is hit, then
one knows p. At either point 16 or point 18, p would be very unlikely on the evidence. So even
though at 17 one knows p, one can easily see that, at 17, the probability that one’s evidence
supports ¬p is ⅔.
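The numbers here can also be checked. The sketch below is our own illustration; it assumes, for concreteness, a circle of 1000 points (the text says only ‘a large number’), with sharp discrimination at 17 and a margin of 100 everywhere else:

```python
from fractions import Fraction

N = 1000   # assumed size of the circle; the text says only 'a large number'
P = {16, 17, 18}   # p: the dart hits 16, 17 or 18

def possibilities(n):
    """At 17 discrimination is sharp; elsewhere the margin is 100."""
    if n == 17:
        return {16, 17, 18}
    return {((n - 1 + d) % N) + 1 for d in range(-100, 101)}

def prob(prop, n):
    poss = possibilities(n)
    return Fraction(len(poss & prop), len(poss))

# At 17 one knows p outright: every epistemic possibility is a p-point.
assert possibilities(17) <= P

# At 16 and at 18, p is very unlikely: 3 of 201 equally likely points.
print(prob(P, 16), prob(P, 18))

# At 17 the possibilities are 16, 17, 18, and at two of the three (16 and
# 18) the evidence supports ¬p. So at 17 the probability that one's
# evidence supports ¬p is 2/3, even though one knows p there.
low = [n for n in possibilities(17) if prob(P, n) < Fraction(1, 2)]
print(Fraction(len(low), len(possibilities(17))))
```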
What if we confine ourselves to settings in which there is no variation in discriminatory
capacity and so where the margin-for-error is constant from point to point? On this constancy
assumption, will Williamsonian models allow high evidential probability for the proposition ‘p
and it is unlikely that p’ only when p is unknown? Still the conjecture fails.
Suppose the epistemic possibilities are suitably modelled as a finite 3-dimensional space.
When a person is at a point in the space, the margin-for-error is given by a radius that generates a
sphere around the point. Suppose the margin-for-error, relative to some unit of measure is 1. The
strongest thing the person knows, then, is given by a sphere of radius 1 around the point that the
person occupies. Call the proposition that the person occupies one of the points in that sphere
Sphere. The person knows Sphere. Consider an inner sphere generated by a radius of 0.7 from
the point that the person occupies. Subtract the inner sphere from the initial sphere. That gives us
a thick crust of the sphere. Call the proposition that the person is somewhere in the crust Crust.
Most of the volume of the sphere is taken up by the crust. Thus the evidential probability
(assuming uniform priors across regions of equal volume) of Crust is high; the probability of
Crust is high conditional on Sphere, and Sphere is the strongest thing known. But here is a
feature of every point c in the crust: the sphere of radius 1 centred on c intersects the
original sphere in such a way that most of the sphere around c lies outside the original
sphere. Thus if the agent is at any point in the crust, Sphere is unlikely on the evidence. We get
the result that the conjunction ‘Sphere is true and Sphere is unlikely to be true’ is a proposition
that is likely to be true in the case described. But notice that Sphere is known. We have an akratic
margin-for-error case built up around Sphere using standard Williamsonian techniques, and in
which Sphere is known. The proposition Sphere is not ignorance-dependent, but Sphere is
nevertheless a proposition for which rational high confidence that it is true and unlikely on the
evidence to be true is called for.15 There is no special sense in which the propositions that
generate high probability for akratic conjunctions in Williamsonian models are falsity-guided or
even ignorance-dependent; depending on the construction of the model, the relevant proposition
may be neither.
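Both volume claims in the Sphere/Crust case can be checked. The sketch below is our own; it uses the standard formula for the volume of the intersection of two equal spheres (a ‘lens’), which is not in the text but is elementary geometry. It confirms that the crust holds about 65.7% of the sphere’s volume, and that the inner radius of 0.7 is calibrated so that, from every point in the crust, less than half of the surrounding unit sphere lies inside the original sphere:

```python
# Unit margin-for-error, inner radius 0.7, uniform prior by volume.

# (1) Volume fraction of the crust: 1 - 0.7^3 of the unit sphere.
crust_fraction = 1 - 0.7 ** 3
print(crust_fraction)   # ~0.657: most of the volume

# (2) For two unit spheres whose centres are distance d apart, the lens of
# intersection has volume (pi/12) * (4 + d) * (2 - d)^2, so as a fraction
# of a unit sphere's volume (4*pi/3) the overlap is:
def overlap_fraction(d):
    return (4 + d) * (2 - d) ** 2 / 16

# At the inner boundary of the crust (d = 0.7) the overlap is already just
# below 1/2, and it shrinks as d grows towards 1:
for d in (0.7, 0.8, 0.9, 1.0):
    print(d, overlap_fraction(d))
```

So at every crust point, Sphere has evidential probability below one half, while the crust itself takes up well over half the volume, exactly as the case requires.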
In sum, since the disanalogies Horowitz imputes are questionable, the plausibility of
rational epistemic akrasia in Dartboard cannot easily be confined to a highly restricted class of
eccentric cases.
II.4 Justification-Knowledge links
The proponent of anti-akratic epistemology might rely on some systematic structural
connection between rationality and knowledge. One might adopt the radical idea that a collection
of beliefs is rational if and only if all the beliefs constitute knowledge, but such a strategy is
uninteresting. If this view is right then one can’t rationally both believe p and believe that
believing p is not rational, but such a view is completely alien to the spirit of much of the
anti-akratic literature. As we noted earlier, anti-akratic principles lose their distinctive interest in
a setting where there is a general factivity requirement on rationality. There are, however,
alternatives.

15 Even stronger results follow in higher dimensions. In such cases, uniform discriminatory abilities will lead to the strongest proposition one knows being that one is somewhere in an n-dimensional hypersphere. In hyperspheres, the probability that the strongest proposition one knows is supported by one’s evidence can approach 0. The greater the number of dimensions, the greater the proportion of the original hypersphere the hypercrust can be (where the hypercrust is wholly within the hypersphere and such that at every point in it, the hypothesis that one is in the original hypersphere is unlikely on the evidence).
4.1 Duplication Theories
A more interesting strategy is suggested by a range of views that frame rational (or
justified) belief in terms of knowledge without invoking factivity. A prominent instance of such
views relies on a duplication test on rationality: according to the simplest account, a belief in p is
rational just in case it could be that a duplicate agent knows p.16 These views have the
consequence that unknowable propositions cannot be rationally believed. Assuming knowledge
to entail rational belief (as these views commonly do), and that knowledge distributes across
conjunction, the conjunction p and it is not rational to believe p cannot be rationally believed, as
it is unknowable.17
Of course, being epistemically akratic does not require believing the above conjunction,
but merely believing both of the conjuncts. However, given that duplication is an equivalence
relation, facts of possible knowledge by duplicates put interesting constraints on what
propositions can be jointly rationally believed, yielding an interesting argument against the
possibility of rationally both believing p and believing that it is not rational to believe p.
Suppose that the agent is rationally epistemically akratic: she rationally believes p, and
rationally believes that it is not rational to believe p. Then, by the duplication account, there is
some duplicate agent A1 who knows p, and some duplicate agent A2 who knows that it is not
rational to believe p. But since duplication is an equivalence relation, A1 and A2 must be
duplicates. We have reached a contradiction: since A2 has a duplicate who knows p, by the
duplication account it is rational for A2 to believe p. Hence, A2 cannot know that it is not rational
to believe p, for knowledge is factive.18 It follows that the agent cannot both rationally believe p
and rationally believe that it is not rational to believe p.
It is not clear how to extend even the simple duplication view to the second anti-akratic
idea––that it is never rational to both not believe p and also believe that not believing p is
rationally forbidden––for the view does not immediately deliver an account of when not
believing p is rationally forbidden. (It obviously won’t do to say that not believing p is rationally
forbidden when and only when a duplicate knows p. No duplication-theoretic analysis suggests
itself.) But even the suggested argumentative strategy for defending the first akratic principle is
rather suspect.
16 For related views, see Bird (2007), Ichikawa (2014), and (to a lesser extent) Hirvelä (manuscript).
17 Lasonen-Aarnio (2010, forthcoming A) has argued that given the most promising ways of construing much talk of rational, justified, or reasonable belief, knowledge is orthogonal to rational belief: there are cases of “unreasonable knowledge”. If that is right, then the duplication strategy as a way of arguing for the irrationality of epistemic akrasia is particularly unpromising, for it assumes knowledge to entail rational belief.
18 We are indebted to a discussion with [CITATION DELETED].
The simple version of the duplication theory appeals to full metaphysical duplication,
checking for rational belief in a proposition by checking whether some perfect duplicate knows
that very proposition. But such a view is extremely dubious. One would have thought, for
example, that it is possible to rationally believe metaphysical impossibilities––for instance, that
Hesperus is not Phosphorus, or that thinkers have no proper parts (perhaps Descartes was
rational in believing this!). A related worry is that, on this view, rational false beliefs about any
facts about a subject that are shared by her duplicates are impossible. A person’s duplicates share
all of their intrinsic properties.19 Intrinsic properties presumably specify an agent’s neuronal
patterns, but it should surely be possible for an agent to hold a rational false belief about the
microphysical properties of her neurons.
Possible fixes will either run into similar problems or complicate the argument for
anti-akrasia or both. Perhaps, for example, the relevant kind of duplication just involves
duplicating one’s mental states (as opposed to perfect metaphysical duplication). But even
leaving aside such issues as whether knowledge counts as a mental state, we think it can be
perfectly rational for subjects to hold false beliefs about some of their mental states, such as their
beliefs.20 Another kind of refinement––one that obviously helps with the Hesperus-Phosphorus
example––appeals to counterpart beliefs, the rough idea being that to rationally believe p is to be
such that a duplicate has a counterpart belief that is knowledge, where the counterpart belief
might be in a different proposition.21 While such a move makes the resulting view of rationality
rather more plausible, it significantly complicates the possibility of mounting an argument for the
irrationality of epistemic akrasia. In order to allow for rationally believing metaphysical
impossibilities, counterpart beliefs may be in different propositions. But then it is not at all clear
whether the resulting accounts can deliver the result that it can never be rational to both believe p
and believe that believing p is rationally forbidden, for that result relied on the impossibility of jointly
believing specific propositions that cannot be jointly known for reasons having to do with the
factivity of knowledge.22
In summary we doubt whether any version of the duplication test will be simultaneously
plausible and anti-akrasia entailing.
4.2 Justification, Closure and the Possibility of Knowledge.
19 Lewis (1983).
20 See Srinivasan (2015).
21 Bird (2007: 87) and Ichikawa (2014: 194) appeal to counterpart beliefs in order to avoid the counterintuitive result concerning necessarily false propositions.
22 As well as the fact that p is not held fixed, the content of ‘rational’ may vary as well across duplicates. Suppose, for example, that ‘rational’ is vague and picks out different relations according to slight variations in use by one’s community. On that setup, various counterpart beliefs are not about rationality but about a slightly different relation, rationality*.
The simple duplication theory is but one instance of a class of theories according to
which rationally believing p entails the possibility of knowing p. Call this the ‘Possible
Knowledge Principle’. That barebones claim, no matter how it is embedded in further theory,
generates interesting results about akratic issues. In particular, given (i) the factivity principle
that it is necessary that if one knows p then p, (ii) the distribution principle that necessarily if one
knows p and q one knows p and one knows q and (iii) the principle that necessarily, if one knows
p, one rationally believes p, we get the impossibility of rationally believing a conjunction of the
form
C: p and it is not rational for me to believe p
For C to be knowable, by distribution both of its conjuncts would have to be knowable together.
But if one knew the first conjunct, p, then, since knowing entails rationally believing, the second
conjunct would be false; and by factivity a false conjunct cannot be known. So C cannot be known.
Note that this line of thought does not require a metaphysical modality. So long as
‘Possibly’ and ‘Necessarily’ are interpreted as duals and the principles of factivity, distribution
and knowledge-to-rationality are plausible for the relevant notion of necessity, the argument will
go through. Thus, for example, it will go through for a notion of epistemic possibility and
epistemic necessity that respected those principles but which made room for certain cases where
something can be epistemically possible even if it is not metaphysically possible. Thus, unlike
the duplication theory––which turned on how things are with metaphysically possible
duplicates––the operative notion of possibility need not be that of metaphysical possibility.23
We can immediately see a further result: Suppose we accept the closure principle that
necessarily, if it is rational to believe p and it is rational to believe q, then it is rational to believe
p and q. Then rational akrasia of the form we have been discussing is also impossible. For in
that case rationally believing p and rationally believing it is not rational to believe p will entail it
is rational to believe the conjunction C above. Since it is implausible for the pro-akrasia person
to lay blame at the door of the principle that knowledge distributes over conjunction, or of factivity, the
main options for making room for akrasia in a setting where the Possible Knowledge principle is
accepted will be either (i) to deny that knowledge entails rational belief or else (ii) to deny
closure.
[CITATION DELETED] has explored (i) at length elsewhere and, for reasons of space,
this is not the place to rehearse the considerations adduced in that paper.24 But we shall dwell on
(ii) a little further. Even if multi-premise closure holds for knowledge, it is not hard to construct
interesting toy models of justification on which justified belief entails the possibility of knowing
but for which closure fails dismally. Suppose, for example, that one is justified in believing p just
23 Note that even on an epistemic possibility gloss, the Unger Games scenario is now blocked, as it is not epistemically possible that one knows that one knows nothing.
24 [CITATION DELETED]
in case for all one knows one knows p (i.e. one doesn’t know that one doesn’t know p). Let’s
deploy a standard framework according to which one knows p iff there is no epistemically
accessible world where ¬p. (Again it doesn’t matter whether the worlds are metaphysically
possible worlds or not.) Then it is easy enough to construct scenarios where there is an
epistemically accessible world where one knows p (and so one is in fact justified in believing p)
and an epistemically accessible world where one knows that one knows that it is not the case that
one knows p, and so one is also in fact justified in believing that one is not justified in believing
p.
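A toy Kripke model makes the construction concrete. The model below is our own illustration (the world labels and valuation are not from the text): accessibility is reflexive, Kq holds at a world iff q holds at every world it sees, and Jq is defined, as above, as ¬K¬Kq:

```python
# Toy model: w0 is the actual world; w1 and w2 are dead ends that see
# only themselves. Reflexivity gives factivity for K.
worlds = ['w0', 'w1', 'w2']
sees = {'w0': {'w0', 'w1', 'w2'},
        'w1': {'w1'},
        'w2': {'w2'}}
p = {'w0': True, 'w1': True, 'w2': False}

def K(prop, w):
    """One knows prop at w iff prop holds at every world w sees."""
    return all(prop[v] for v in sees[w])

def J(prop, w):
    """J = not-K-not-K: some accessible world where one knows prop."""
    return any(K(prop, v) for v in sees[w])

Jp = {w: J(p, w) for w in worlds}
not_Jp = {w: not Jp[w] for w in worlds}

print(J(p, 'w0'))        # True: w1 is an accessible world where one knows p
print(J(not_Jp, 'w0'))   # True: at accessible w2, K(K not-Kp) holds
```

Note that this same model also validates Jp together with J¬p at w0 (w2 is an accessible world where one knows ¬p), which is exactly the defect taken up next.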
Of course, the model of justification just stated does not even preclude being
simultaneously justified in believing p and justified in believing ¬p, and the theorist may
wish to posit constraints on accessibility that preclude this. The natural way to do this is to
impose a convergence constraint on the accessibility structure, so that when two worlds w1 and
w2 are each accessible, there is a world w3 that each sees. This corresponds to what is commonly
known as the .2 axiom in modal logic, which says that anything that is possibly necessary is
necessarily possible.25 Then there can never be a pair of accessible worlds one of which is such
that one knows p in it and the other of which is such that one knows ¬p in it. For the convergence
constraint would require some world that each of them sees. But since the know-p world would
only see p-worlds and the know-¬p world would only see ¬p-worlds, this constraint could not be
satisfied. Still, even with that constraint in place, there is no block on being justified in believing
p while also being justified in believing that one is not justified in believing p. To be justified in
believing p we need an accessible world where one knows p, and to be justified in believing that
one is not justified in believing p we need an accessible world where one knows that one knows
that one does not know p. But it is perfectly consistent with this to suppose that there is a world
that each of these two worlds sees––it can be a world where p is true but unknown. So the natural
accessibility constraint that blocks justifiably believing each of a contradictory pair imposes no
block on akratic combinations.
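The situation just described can be exhibited concretely. The five-world model below (again our own illustration) is reflexive and satisfies the convergence constraint; it blocks Jp together with J¬p, yet still validates the akratic combination Jp and J¬Jp at w0. The common successor of the two justification-conferring worlds is a world where p is true but unknown, just as suggested above:

```python
# Reflexive, convergent model. J is again not-K-not-K.
worlds = ['w0', 'w1', 'w2', 'w3', 'w4']
sees = {'w0': {'w0', 'w1', 'w2'},
        'w1': {'w1', 'w3'},   # a world where one knows p
        'w2': {'w2', 'w3'},   # a world where KK not-Kp holds
        'w3': {'w3', 'w4'},   # p true here, but unknown
        'w4': {'w4'}}
p = {'w0': True, 'w1': True, 'w2': False, 'w3': True, 'w4': False}

def K(prop, w):
    return all(prop[v] for v in sees[w])

def J(prop, w):
    return any(K(prop, v) for v in sees[w])

# Convergence (.2): any two worlds a world sees share a successor.
convergent = all(sees[u] & sees[v]
                 for w in worlds for u in sees[w] for v in sees[w])
print(convergent)

Jp = {w: J(p, w) for w in worlds}
not_Jp = {w: not Jp[w] for w in worlds}
not_p = {w: not p[w] for w in worlds}

print(J(p, 'w0'), J(not_Jp, 'w0'))   # the akratic combination still holds
print(J(not_p, 'w0'))                # but Jp together with J-not-p is blocked
# The common successor of w1 and w2 is w3: p true there but unknown.
print(sees['w1'] & sees['w2'], p['w3'], K(p, 'w3'))
```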
Such blocks require additional accessibility constraints. Let’s focus on the combination of
Jp and J¬Jp. One way of blocking that combination is to impose the additional KK principle (i.e.
a transitivity constraint on accessibility). This principle says that knowledge is luminous. Another
principle that will preclude rational akrasia is a luminosity constraint on justification itself, i.e.
the principle that if Jp then KJp.26 But such luminosity constraints are widely thought to be
implausible.27 Another route is to directly impose the constraint that if Jp then ¬J¬Jp, but that
would be dialectically uninteresting in the context of this paper.

25 When the epistemic logic is normal, imposes factivity for K, and adds the .2 axiom, we get KT.2 as the epistemic logic. The .2 axiom is called ‘G1’ in Hughes and Cresswell (1996).
26 As an accessibility constraint this amounts to the principle that what is possibly necessary is necessarily possibly necessary. For relevant discussion see Rosenkranz (2018), who also has his own distinctive way of dealing with the ungraspable proposition cases mentioned below.
We note that K is often interpreted in this literature as ‘being in a position to know’,
especially by those who adopt luminosity principles.28 After all, such luminosity principles look
hugely implausible under a flatfooted ‘Know’ reading. Justification then becomes not being in a
position to know that one is not in a position to know. But we don’t think the appeal of
anti-akrasia is enhanced by glossing it as the principle that one never has both Jp and J¬Jp in that sense.
We offer a case inspired by the false theory of rationality theme of section one. It
presents a challenge for the anti-akratic theorist, and also illustrates how hard it is to think
intuitively about akrasia in this context––prepare yourself for some confusingly iterated
epistemic operators. Here goes.
An unmarked clock appears to be pointing to 3, and indeed it is. The margin-for-error is
1. So you are in a position to know that the clock’s hand is somewhere between 2 and 4. (Since
K entails J, you J that the clock’s hand is pointing between 2 and 4.) Someone has told you
(incorrectly) that the margin-for-error is roughly 2. For all you are in a position to know you are
in a position to know that you are in a position to know that the margin-for-error is roughly 2 (by
trust) and thereby be in a position to know that you are in a position to know that you are not in a
position to know that the clock is pointing to between 2 and 4. (This would not mean that the
clock’s hand is not pointing between 2 and 4, just that you are not in a position to know that it is
pointing between 2 and 4.) So you are in a position to know (and thus have justification for) the
proposition that the clock’s hand is pointing between 2 and 4, and you are not in a position to
know that you are not in a position to know that you are in a position to know that you are not in
a position to know that the clock’s hand is pointing between 2 and 4. So you are justified in
believing (since you are not in a position to know that you are not in a position to know) that you
are in a position to know that you are not in a position to know that the clock’s hand is pointing
between 2 and 4. So––interpreting ‘J’ as ‘You are not in a position to know that you are not in a
position to know’––we have a case of akrasia. And for what it's worth there seems to be nothing
especially odd about the subject. The case thus provides little intuitive pressure to impose
additional accessibility constraints that block the akratic combination suggested by the case.
Less confusingly, it is also worth realizing that the notion that one is justified in believing
p just in case ¬K¬Kp involves a considerable amount of idealization in another way. Suppose
one is not in a position to grasp the proposition that ¬Kp. Then, plausibly, one is not in a position
to know that ¬Kp, but it is strange to think one automatically has justification for believing p.
27 Williamson (2000) provides the best known recent case against such luminosity principles. Stalnaker notably disagrees, being very friendly to KK and indeed to the logic S4.2 for knowledge that one gets by adding the .2 axiom just discussed to the characteristic S4 axioms. His 2006 ‘On Logics of Knowledge and Belief’ is a locus classicus for the exploration of the logic of ‘for all you know, you know’. In recent years, a group of philosophers, in large part former students of Stalnaker, have been working to revive the KK principle. These include Greco (2014), Goodman and Salow (2018), and Dorst (2019).
28 For example, Rosenkranz (2018) and Dorst (2019).
Moreover, one might think the key anti-akratic thoughts concern pairs of beliefs that one actually
has, and the suggestion that in this case one believes p would require a quite alien conception of
belief. One might instead articulate anti-akrasia as:
New Anti-Akrasia (where ‘K’ means ‘S is in a position to know that’ and ‘B’ means ‘S
believes that’): It can’t be the case that: Bp and ¬K¬Kp and B(K¬Kp) and ¬K(¬K(K¬Kp)).
But issues of belief access now become pertinent. Recall Good News. It is plausible that a
version of the case can be spelled out so that the agent justifiably believes (i.e. believes and
doesn’t know that she doesn’t know) that she doesn’t believe p even though she does. Once that
is conceded it seems plausible that there is a counterexample to the new principle along the lines
discussed in the Good News case.
Even when one has a pair of beliefs with the contents p and K¬Kp and accesses those
beliefs, the case for New Anti-Akrasia is underwhelming. Suppose one believes p and believes
one is in a position to know that one is not in a position to know p. Insofar as one is aware of
each of the beliefs one will be in a position to know that they are not both cases of knowledge.
But unless closure for justification is assumed, it is far from clear why each belief can’t still be
one that for all one knows is a case of knowledge. (The unmarked clock case above, embellished
with a pair of beliefs that the clock’s hand is pointing between 2 and 4 and also that the
margin-for-error is roughly two, is a good test case.) Is it so strange to suppose that one can be
in a position to know that a pair of one’s beliefs are not both cases of knowledge even though
one is not in a position to know of either of the pair that it fails to be knowledge? (Note that
neither belief need be false, as the unmarked clock case illustrates.)
In short, the accessibility frameworks provide intriguing ways of linking the Possible
Knowledge principle to anti-akrasia, but we remain skeptical that a case for anti-akrasia can be
made along these lines.
We have seen that, for the closure-denying theorist, there is all the difference in the
world between the hypothesis of conjunctive akrasia (which involves believing C) and the kind
of akrasia with which we started this paper (which involves believing each of the conjuncts of C
individually). It bears emphasis that the theorist who rejects conjunctive akrasia but not our
non-conjunctive version will not be troubled by Horowitz’s arguments, since those require
reasoning from justified belief in conjunctions like C. It is also worth noting in passing that
Horowitz readily assumes something like closure for justification. In the initial paragraph of her
paper she glosses akratic states as involving “high confidence in something like p, but my
evidence doesn’t support p”, but then immediately switches to glossing it in terms of believing
each of those conjuncts individually. In the setting of her paper this may be harmless enough.
Perhaps insofar as one can devise cases where one has rational high confidence in p and also has
rational high confidence in the proposition that p is not supported by one’s evidence, one can
devise cases where one has rational high confidence in the conjunction ‘p and p is not supported
by my evidence’. (The familiar Bayesian point that high confidence isn’t generally closed under
conjunction need not pose a problem for this existential thought). But for anyone who believes in
the Possible Knowledge principle, the difference between conjunctive akrasia and regular akrasia
cannot be ignored.
The Possible Knowledge principle offers an interesting case against conjunctive akrasia.
And in combination with closure for justification it delivers full blown anti-akrasia. Such views
are rather out of the spirit of much of the anti-akratic literature. As we have just seen, for
example, Horowitz is interested in the limits of rational high confidence, and uses rational high
confidence as the heuristic for determining the limits of rational belief. But it is obvious enough
that one can have rational high confidence in propositions that one cannot know. Suppose, for
example, one has a ticket in a 100-ticket lottery. One has rational high confidence in the
conjunction expressed by ‘I will lose and I don’t know that I will lose’. But assuming that
knowledge distributes over conjunction, it is not possible to know that. Epistemological
orthodoxy has tended to assume that there is a viable notion of rational belief that is not
governed by the Possible Knowledge principle. If that assumption can be resisted then the topic
of akrasia takes on a slightly new look. Nevertheless, the prospects for defending a general
anti-akratic outlook on the basis of that principle do not seem especially promising.
Part Three: Idealization and Fixed Point Theses
Defenders of an anti-akrasia constraint might appeal to idealized agents in order to defend some
version of their anti-akratic position. It is important here to distinguish between two ways that
idealizations may be deployed in epistemology.
First, consider, for example, a Bayesian who deploys an idealization of logical
omniscience (and various Bayesian theorems that rely on such an idealization). Such
idealizations can be helpful without presuming anything to the effect that mistakes about logic
are ipso facto lapses of rationality. For example, they can be helpful as a way of exploring
which kinds of epistemic structures can arise even without any failures of logical knowledge, or
as a way of exploring how best to bet in a certain kind of situation when every logical
consequence is fully in view. It may be particularly messy, for instance, to judge how one is to
bet in a situation where competing considerations are complex and one also has a limited grasp
of logic. Some illumination will be achieved if one factors out noise from one of the parameters,
keeping the competing considerations in place but rendering the agent logically omniscient. (Of
course in special cases this won’t be possible, e.g. when one is betting on one’s level of
competence in logic.) However, appeal to such idealization won’t further the cause of the
anti-akratic: the claim isn’t merely that anti-akrasia principles hold in models in which certain
sources of uncertainty are ignored.
There is a second spirit in which one might make idealizations. One might think that it is
a requirement of rationality that one does not make a logical mistake and think on those grounds
that an idealization to the logically omniscient is mandatory insofar as one is exploring how a
rationally ideal agent that is not subject to any lapse in rationality might proceed. When it comes
to mistakes not about logic but about the rationality of one’s own beliefs, the defender of
anti-akrasia needs to deploy idealizations in this spirit. One might think that it is a requirement
of rationality that––at a first pass––one not make any mistakes about the requirements of
rationality and thus that agents who are epistemically akratic are somehow automatically guilty
of a failure of rationality.29
The idea that mistakes about the requirements of rationality are mistakes of rationality
has been pursued by Michael Titelbaum (2015), and it is instructive to see how that idea plays
out in his hands. Titelbaum recognizes that the idea needs immediate qualification. While he
begins with the sweeping idea that all mistakes about the requirements of rationality are mistakes
of rationality, he quickly retreats to the thesis that an a priori false belief concerning the
requirements of rationality is never permitted. (As this principle is restricted to a priori matters,
call this the Restricted Fixed Point Thesis.) Such a retreat is wise because the sweeping thesis is
indefensible: Suppose I see what I take to be Anya walking on the other side of the street, but
unbeknownst to me it is Anya’s identical twin. Since I know that Anya permissibly believes p, I
believe that the person walking on the other side of the street permissibly believes p. But my
belief is false. It would be highly implausible to deem me irrational on the basis of my mistake
about what that person rationally believes since the mistake is rooted in a reasonable mistake
about the identity of that person.30
The Restricted Fixed Point Thesis avoids objections that easily refute a sweeping fixed
point thesis. But what should we make of the Restricted Fixed Point Thesis, and how does it bear
on the rationality of akratic states? Here we would like to make a number of observations.
First, Titelbaum’s own argumentative path to the Restricted Fixed Point Thesis begins
from what he calls the Akratic Principle:
29 Quite aside from the rationality of epistemic akrasia, it is an excellent question whether we should take the standard idealization to logic itself in the first or the second spirit. We are tempted by a perspective according to which logical relations and operators are part of the world and as such can be amenable to rational error just like any other part. But it is beyond the scope of this paper to pursue that issue properly.
30 And here is an example from Titelbaum himself: S mistakenly believes that what Frank wrote on a napkin is a requirement of rationality, but is wrong because he has reasonable but mistaken beliefs about what was written on the napkin. Littlejohn (2018) also argues that mistakes about rationality are mistakes of rationality. According to the objectivism that he advances as the least painful option, “When our beliefs about rationality miss their targets, they’re irrational”. But notice that (by contrast with Titelbaum’s Restricted Fixed Point Thesis) this claim is not qualified in any way. According to Littlejohn’s objectivism, then, at least as stated, the napkin belief and the Tim belief are both irrational. But this seems to be going too far.
Akratic Principle: No situation rationally permits any overall state containing both an
attitude A and the belief that A is rationally forbidden in one’s current situation.
From there he argues, in part abductively, for the Restricted Fixed Point Thesis. Taken at
face value the Akratic Principle is the very kind of principle we have been arguing against and
so, without an independent argument for it, it would be of little relevance here. But we should
recall that Titelbaum tells us early on that “from now on when I discuss beliefs about rational
requirements I will be considering only beliefs in a priori truths or falsehoods”.31 So what is
really going on is an argument for an Akratic principle restricted to propositions that can be
settled a priori (call this the “Restricted Akratic Principle”. What should we make of this more
modest anti-akratic view and the Restricted Fixed Point Thesis that is argued for on that basis?
Here we would like to make a few observations.
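The contrast between the two principles can be put schematically. In the sketch below (our notation, not Titelbaum’s), A ranges over attitudes, B is belief, and F(A) says that A is rationally forbidden in one’s current situation:

```latex
% Schematic statement of the two principles (our notation, not Titelbaum's).
% Permitted(S) means: one's current situation rationally permits overall state S.

\textbf{Akratic Principle:}\quad
  \neg\,\mathrm{Permitted}\bigl(A \land B\,\mathrm{F}(A)\bigr)
  \quad \text{for every attitude } A.

\textbf{Restricted Akratic Principle:}\quad
  \neg\,\mathrm{Permitted}\bigl(A \land B\,\mathrm{F}(A)\bigr)
  \quad \text{whenever } \mathrm{F}(A) \text{ can be settled a priori.}
```

The restriction matters: the second principle is silent about akratic states whose higher-order component concerns a requirement that cannot be settled a priori.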
First, there is an issue about defining what counts as a “mistake about the a priori truths about
rationality”. The crux of the problem is to say what it is for a claim to be about the requirements
of rationality. Suppose that p is any a priori knowable proposition. Then,
q1: The requirements of rationality are such that p
q2: Rational beliefs are rational iff p
will be a priori, and if one believes ¬p, one will make mistakes about q1 and q2, at least insofar
as one has opinions about q1 and q2 at all. But if either of q1 or q2 count as propositions about the
requirements of rationality then any mistake about a priori matters will induce a mistake about
the requirements of rationality. In sum, unless we are given a very refined notion of what it is for
a proposition to be about the requirements of rationality, the Restricted Fixed Point Thesis will
collapse to the more sweeping:
A Priori Fixed Point Thesis
A mistake about a proposition that is a priori true (i.e. a priori knowable) is a mistake
of rationality.
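The collapse worry can be displayed schematically. Writing Ap for “p is a priori knowable” (our abbreviation, not the paper’s official notation):

```latex
% The collapse argument, schematically (our notation).
% Let p be any a priori knowable proposition, and define:
%   q1 := "the requirements of rationality are such that p"

\mathrm{A}p \;\Rightarrow\; \mathrm{A}q_1
  \qquad \text{(}q_1\text{ inherits } p\text{'s a priori status)}

B\neg p \;\Rightarrow\; \text{a mistake about } q_1
  \qquad \text{(given that one has opinions about } q_1 \text{ at all)}

% Hence, if q1 counts as "about the requirements of rationality", every
% a priori mistake induces a mistake about the requirements of rationality,
% and the Restricted thesis collapses into the A Priori Fixed Point Thesis.
```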
Second, quite apart from the issue of defining the relevant notion of aboutness, one wonders why
the Restricted Fixed Point Thesis should be true unless the A Priori Fixed Point Thesis is true as
well. Suppose, for some mathematical proposition, we convince ourselves that even if a
mathematical genius could come to know a priori that ¬p, it may be rational to believe p through
testimony. Why not take the same attitude to a priori propositions about rationality? Suppose
some sophisticated a priori argument showed that Unger was wrong to believe that a belief is
rational iff one has reasons for it. Just as in the mathematical case, it is nevertheless quite natural
to think that someone wouldn’t automatically be irrational for believing it.

31 And in a footnote the use of ‘situation’ in the Akratic Principle is clarified: “... there will be a priori
truths about which situations rationally permit which overall states. They will take the form ‘if the empirical facts
are such-and-such, then rationality requires so-and-so’”, Titelbaum (2015) ftn 27.
Third, even if we could somehow convince ourselves of the A Priori Fixed Point Thesis, that
may have limited relevance to the more general questions of this paper concerning the rationality
of akratic states. After all, the akratic person described at the outset need not automatically be
making a mistake about a priori matters. Consider Good News––there is no reason to think that
the person in that case is making a mistake about a priori matters. Similarly, if a Lockean thesis
is true but the threshold for belief is not a priori knowable, then the above thesis need not make
trouble for examples like Missed It By That Much.
Fourth, it is far from clear that the Restricted Akratic Principle and the more general Restricted
Fixed Point Thesis have similar motivations. An initially compelling way to motivate anti-akrasia
principles is by appealing to the idea that epistemic akrasia involves a distinctive kind of
incoherence. But the fixed point theses discussed here locate the trouble in an entirely different
place––the problem is holding an a priori false belief (about the requirements of rationality).
Assume that it is a priori knowable that believing p is rationally required in my current situation.
Then rationality forbids me from believing that believing p is rationally forbidden. In fact,
insofar as I believe that believing p is rationally forbidden, I am already irrational, irrespective of
whether I am akratic. If I then become akratic by forming a belief in p, I cannot be faulted on the
basis of a fixed point thesis for a further breach of rationality (in fact, that would seem to make
me more rational, as I come to believe something I am rationally required to believe). The
trouble with violations of the Restricted Akratic Principle is not that they involve a kind of
incoherence, a failure by one’s own standards. It is, rather, believing something that contradicts
what is a priori entailed by what one believes. In sum, Titelbaum’s Restricted Fixed Point Thesis
(and the Restricted Akratic Principle) stand in need of further justification, and even if they can
be justified, they amount to heavily restricted versions of the original anti-akratic idea.
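The structure of this fourth point can be summarized schematically (again in our own notation): suppose it is a priori that believing p is rationally required in one’s current situation. Then:

```latex
% The "already irrational" point, schematically (our notation).
% Req(Bp): believing p is rationally required; F(X): X is rationally forbidden.

\mathrm{A}\,\mathrm{Req}(Bp)
  \;\Rightarrow\;
  B\,\mathrm{F}(Bp) \text{ is already a mistake about an a priori matter,}

% so the agent is irrational before any akrasia arises. Subsequently adding
% Bp (thereby becoming akratic) conforms to Req(Bp), so the fixed point
% thesis locates no further breach in the akratic step itself.
```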
Conclusion
Anti-akratic constraints in epistemology, while popular, are subject to a wealth of
counterexamples. Meanwhile, arguments for those constraints––when given in the first
place––are surprisingly few, and face resistance from these counterexamples. Attempts to shore
up those anti-akratic constraints through idealization are unpromising. The apparent failure of
such constraints suggests more general difficulties for the project of laying down structural
requirements of rationality.
References
Bergmann, Michael
2005 "Defeaters and higher-level requirements", Philosophical Quarterly, 55 (220): 419–436.
doi: 10.1111/j.0031-8094.2005.00408.x
Bird, A.
2007, “Justified Judging”, Philosophy and Phenomenological Research, 74(1), 81-110.
doi:10.1111/j.1933-1592.2007.00004.x
Cresswell, M.J. & Hughes, G.E.
1996. A New Introduction to Modal Logic. Routledge.
Dorst, Kevin
2019, “Abominable KK Failures”, Mind 128 (512):1227-1259. doi:10.1093/mind/fzy067
Feldman, Richard
2005 "Respecting the evidence". Philosophical Perspectives 19 (1):95–119. doi:
10.1111/j.1520-8583.2005.00055.x
Foley, Richard
2009. "Beliefs, Degrees of Belief, and the Lockean Thesis". In Franz Huber & Christoph
Schmidt-Petri (eds.), Degrees of Belief. Springer. pp. 37-47.
Goodman, Jeremy & Salow, Bernhard
2018, “Taking a chance on KK”, Philosophical Studies 175 (1):183-196. doi:
10.1007/s11098-017-0861-1
Greco, Daniel
2014, “Could KK Be OK?”, Journal of Philosophy 111 (4):169-197. doi:
10.5840/jphil2014111411
Hazlett, Allan
2012. "Higher-Order Epistemic Attitudes and Intellectual Humility". Episteme 9 (3):205-223.
doi: 10.1017/epi.2012.11
Hirvelä, Jaakko
Manuscript “Justification and the Knowledge Connection”
Horowitz, Sophie
2014 “Epistemic Akrasia”, Noûs 48 (4): 718-744.
Ichikawa, J.
2014 “Justification Is Potential Knowledge”, Canadian Journal of Philosophy 44 (2): 184-206.
doi:10.1080/00455091.2014.923240
Lasonen-Aarnio, M.
2010 “Unreasonable Knowledge”, Philosophical Perspectives 24(1): 1–21.
Forthcoming A “Dispositional Evaluations and Defeat”, in Brown, Jessica and Simion, Mona
(eds.), Reasons, Justification and Defeat, Oxford University Press.
Forthcoming B “Coherence as Competence”, Episteme
Lewis, David.
1983, “Extrinsic Properties”, Philosophical Studies, 44: 197–200.
Littlejohn, Clayton
2018 “Stop Making Sense? On a Puzzle About Rationality”, Philosophy and Phenomenological
Research 96 (2): 257-272.
McGee, Vann
1985. "A counterexample to modus ponens". Journal of Philosophy 82 (9): 462-471.
Rosenkranz, S.
2017 "The Structure of Justification", Mind, 127(506), 629-629.
Smithies, Declan
2012 "Moore's Paradox and the Accessibility of Justification", Philosophy and
Phenomenological Research 85 (2): 273-300.
Srinivasan, Amia
2015 “Normativity Without Cartesian Privilege”, Philosophical Issues 25 (1): 273-299.
Stalnaker, Robert
2006 “On Logics of Knowledge and Belief”, Philosophical Studies, 128 (1):169-199.
Titelbaum, Michael G.
2015 “Rationality’s Fixed Point (Or: In Defence of Right Reason)”, Oxford Studies in
Epistemology 5: 253-294.
Unger, Peter
1975. Ignorance: A Case for Scepticism. Oxford University Press.
Williamson, Timothy
2000. Knowledge and its Limits. Oxford University Press.
2011 “Improbable knowing”. In T. Dougherty (ed.), Evidentialism and its Discontents. Oxford
University Press.
2013 “Gettier Cases in Epistemic Logic”. Inquiry: An Interdisciplinary Journal of Philosophy 56
(1): 1-14.
2014 “Very Improbable Knowing”. Erkenntnis 79 (5): 971-999.
Worsnip, Alex
2018 “The Conflict of Evidence and Coherence”, Philosophy and Phenomenological Research
96.1: 3-44.