Generative Linguistics Meets Normative Inferentialism *
David Pereplyotchik
Kent State University
Forthcoming in the Croatian Journal of Philosophy
Draft only; please do not quote without permission
Abstract:
Squarely in the Chomskyan tradition, Paul Pietroski’s recent book, Conjoining Meanings, offers
an approach to natural-language semantics that rejects foundational assumptions widely held
amongst philosophers and linguists. In particular, he argues against the view that meanings are,
or at least determine, extensions (truth conditions, satisfaction conditions, and
denotation/reference). Having arrived at the same conclusion by way of Brandom’s deflationist
account of truth and reference, I glimpse the possibility of a fruitful merger of ideas. In the
present essay, I outline a strategy for integrating the generative linguist’s empirical insights
about human psychology with Brandom's pragmatist approach to language. I’ll argue that both
have important contributions to make to our overall understanding of language, and that the
differences between them almost all reduce to a cluster of interrelated verbal differences.
Contrary to first appearances, there are actually very few points of substantive disagreement
between them. The residual differences are, however, stubborn. I end by raising a question
about how to square Pietroski’s commitment to predicativism with Brandom’s argument that a
predicativist language is in principle incapable of expressing ordinary conditionals.
Keywords
generative linguistics, anti-extensionalism, normativity, inferentialism, predicativism, public
language, communication
My deepest thanks to Daniel Harris, Ryan DeChant, Elmar Unnsteinsson, Eliot Michaelson, and Jessica Lynne
Cooperrider for helpful advice on earlier drafts of this paper; thanks also to Nate Charlow, Matt Moss, and other
participants of the 2018 Ontario Meaning Workshop, for stimulating exchanges on related topics. That said, if
some of what I claim here is false, then these people are all partly to blame. Ditto for cases where I say something
awkwarder than usual; they should of caught that. My sincere gratitude, lastly, to all of the participants at the two
symposia at which I was fortunate to present earlier versions of these ideas, including the 2018 Croatian Meaning
Workshop. From the latter group, I owe special thanks to Paul Pietroski (obviously), Michael Devitt, John Collins,
Anna Drożdżowicz, and, most of all, to Dunja Jutronić—not only for inviting me to participate in the workshop, but
for her saintly and near-inexhaustible patience with my obnoxious delays in submitting the present work to the
Journal. Not to mention the outrageous word-count. I beg the reader’s forgiveness for the latter, as well.
Introduction
I take it that the correct approach to natural-language syntax is the one that Noam Chomsky
outlined as early as the 1950s and, along with many others, has continually refined over the past
seven decades. The ongoing research program of generative linguistics that his syntactic
theorizing inspired has, in the fullness of time, yielded a diversity of impressive results. These
include exciting and previously unimaginable empirical discoveries about the human capacity
for language, both in broad scope—e.g., recursive generability and the principles-and-parameters model—and at the level of fine-structure (e.g., traces, parasitic gaps, etc.). However,
as a theorist interested not only in syntax but also in semantics, I find myself in a difficult and
somewhat awkward position.
Not to complain, but, you see, I happen to have learned my semantics from the work of Robert
Brandom, and it’s safe to say that I’ve drunk the Kool-Aid that he served up in his magnum
opus, Making It Explicit. Having thus bought into both Chomsky’s generative grammar and
Brandom’s normative inferentialism, I now find myself facing the daunting challenge of bridging
the apparent chasm between the two. It may be that I’m utterly alone in this quandary, but in
the course of the present discussion, I hope—perhaps somewhat perversely—to draw others into
it as well.
There may never be an academic conference addressing the common themes and shared
commitments of generative linguistics and normative inferentialism. For all that, the
differences between them are, I believe, more boring—i.e., verbal or sociological—than is widely
assumed. In what follows, I’ll argue that, contrary to first appearances, there are actually very
few points of substantive disagreement between them. The residual differences are stubborn, to
be sure, but this can only be appreciated after a suitably wide collection of background
agreements is put into place. I devote the first half of the present discussion to this latter task.
Squarely in the Chomskyan tradition, Paul Pietroski’s recent book, Conjoining Meanings, offers
an approach to natural-language semantics that rejects foundational assumptions that are
widely-held amongst philosophers and linguists. In particular, he argues against the view that
meanings are, or at least determine, extensions. The latter include such familiar semantic
properties as truth conditions, satisfaction conditions, and denotation/reference. Having
arrived at the same conclusion by way of Brandom’s deflationist account of truth (§1.3), I began
to glimpse the possibility of a fruitful merger of ideas. In what follows, I take a first pass at
integrating the generative linguist’s empirical insights about human psychology with the broadly
pragmatist framework about mind and language that Brandom has developed over the course of
his career.1 I’ll argue that both have important contributions to make to our overall
understanding of language. The easy part is spelling out what; the harder part is assessing the
residual disagreements.
Here’s the overall plan:
In §1, I survey a range of core commitments that jointly constitute Brandom’s philosophical
project—most centrally, his normative pragmatics, inferentialist semantics, and substitutional
syntax. Along the way, I note his intellectual debts to David Lewis, including the large-scale
explanatory goals that animate Brandom’s inquiry. These, I later argue, are in many ways
orthogonal to Pietroski’s concerns. The latter claim figures in my broader argument that the two
approaches can be fruitfully combined, potential protests from both sides notwithstanding.

[1] The picture I sketch in what follows is drawn mostly from the material in Brandom (1994, 2000, and 2008). I’ll occasionally abbreviate these to MIE (Making It Explicit), AR (Articulating Reasons), and BSD (Between Saying and Doing), respectively. The final sections of this essay also make heavy use of the material in “Why Philosophy Has Failed Cognitive Science” (in Brandom, 2009).
In §2, I outline Pietroski’s position, focusing on his explanatory aims, his empirical
methodology, and the substance of his proposal for a theory of human semantic competence. I
take up Pietroski’s arguments against Lewis’s approach to natural language, with the aim of
showing that Brandom’s theoretical goals differ sufficiently from Lewis’s to inoculate him
against Pietroski’s criticisms. Turning to Pietroski’s discussions of Frege, I point out that his
work in cognitive science undercuts some of Brandom’s claims in “Why Philosophy Has Failed
Cognitive Science”. As we’ll see, far from ignoring Frege, Pietroski incorporates many of his
insights into an empirical account of psychological processes. Nevertheless, some of the Fregean
lessons that Brandom emphasizes do not seem to have moved Pietroski. In later sections, I
explore some possible reasons for this.
I devote §3 to a survey of the core commitments that Pietroski and Brandom have in common.
As already noted, both reject truth-conditional semantics and seek to develop alternative
frameworks for semantic theorizing. Similarly, I’ll point out, the alternatives they propose can
both be seen as taking referential purport to be the distinctive feature of language. This
contrasts with received views in (meta)semantics that focus, in the first instance, on referential
success, leaving reference failure (e.g., empty names) for special treatment, as a blessedly rare
“defective” case. This is related, I suspect, to the further convergence between Brandom and
Pietroski on the proper treatment of the distinction between de dicto and de re
constructions. Last, but no less important, is their common rejection of the idea that
communication requires an identity between the meanings expressed by a speaker and those
grasped by the hearer. I’ll note that this shared commitment undercuts some of the main
arguments against meaning holism—an inherent feature of Brandom’s inferentialist account.
A discussion of the differences between Pietroski and Brandom occupies the remainder of the
essay. At first glance, the contrast between them seems as vivid as any in the field. Brandom’s
project is explicitly normative, pursued largely from the armchair, and aims to provide an
account of concept users as such—not just humans, but any conceivable concept-mongering
creature (or artificial system). By contrast, Pietroski’s project is avowedly descriptive,
constrained by empirical data, and aims to provide an account of actual humans—particularly,
the “Slangs” that they naturally produce and consume. (Pietroski uses ‘Slang’ as a catch-all term
for the natural languages that children acquire.) These differences ramify quickly. For instance,
Brandom’s focus on social practices of communication seems to be at odds with Pietroski’s
“individualistic” methodology, which is characteristic of generativist views more broadly
(Chomsky, 2000; Collins, 2012). This contrast seems particularly sharp in light of Brandom’s
commitment to the existence of (something like) public languages—at least in the sense of
productive and flexible norms governing the communal practice of “giving and asking for
reasons” (GOGAR). All of these issues are discussed in §4, with the aim of gradually blunting
the force of what initially appear to be quite sharp differences.
If one insists on seeing Brandom’s and Pietroski’s inquiries as targeting a common subject
matter, then one is sure to view the differences between them as substantive theoretical
disagreements. They do, after all, use the same term, ‘language’, which at least suggests that
they’re talking about one and the same phenomenon. But this way of viewing the situation is
optional, at best. We’ve learned by now to stop assuming that theorists are targeting the same
phenomenon simply on account of their using homophonous terms. Two theorists can press the
same bit of folk vocabulary, e.g., ‘meaning’ or ‘concept’, into more weighty theoretical labor in
quite different ways. That being so, one can just as well see the inferentialist and the
generativist as addressing different (though undoubtedly related) topics—each providing
insights about his chosen domain of inquiry, and leaving the rest of us to wonder how those
insights might be integrated, or at least brought to bear on one another. This is the strategy I’ll
recommend throughout the present discussion.
In arguing that the two are, at the very least, logically compatible, one owes an account—or, at a
minimum, a broad sketch of an account—of the theoretical relation between them. The proposal
that I’ll develop is that Brandom’s explanatory ambitions differ from Pietroski’s in precisely the
ways that are paradigmatic of “inter-level theoretical relations”. My suggestion is that Brandom
is attempting to furnish a high-level description of a quite general phenomenon—language use,
as such. This presupposes that lower-level implementations of the relevant generalizations can
vary widely.2 Pietroski’s theoretical aims are, though certainly more exciting from an empirical
standpoint, a good bit narrower than Brandom’s, in that they deal exclusively with the human
case. As with any lower-level account of a more general phenomenon, Pietroski’s view is
compatible, in principle, with any number of higher-level descriptions of language. Thus, while
it’s incorrect—or, at any rate, misleading—to say that the two accounts are strictly orthogonal to
one another, the fact is that each places very few constraints on the other. It should be
surprising, then, when we find substantive points of contact between them, whether these be
points of contention or convergence.
Still, even if my suggestion is right that the two frameworks are more compatible than might
initially appear, we must face up to the residual differences that credibly threaten my reconciling
project. The most stubborn of these, which I accordingly leave to the very end, has to do with
Pietroski’s arguments for a specific version of predicativism—roughly, the view that all
subsentential entities are best treated as predicates, not (say) singular terms (§2.4). To be sure,
Pietroski’s commitment to this view is not a central aspect of the overall generativist enterprise.
Rather, it’s a tendentious empirical hypothesis, for which he offers correspondingly forceful
arguments. That being so, if it were to turn out that the hypothesis is false, generative linguistics
would go on without a hitch. Still, I focus on this issue because it raises much larger questions
about how to treat subsentential entities, not just at the level of semantics, but also at the level of
syntax.
The notion of syntax that Brandom employs is substitutional—not in the sense of “substitutional
quantification” (though he endorses that too, on independent grounds), but, rather, in the sense
that he takes sentences, i.e., expressions of full thoughts, to be the primary vehicles of meaning.
On his view, subsentential items (words, morphemes, etc.) are the products of taking a
“substitutional scalpel” to the antecedently-interpreted sentential unities. We’ll take a first stab
at unpacking the scalpel metaphor in §1.5, and then come back to it in more detail in §4.3.
While this substitutional approach may have a home in a Fregean semantics—Pietroski’s
powerful arguments notwithstanding—there is no obvious sense in which it can be legitimately
applied to the syntax of human languages.
The trick that I pull repeatedly throughout the second half of this essay—namely, that of
relegating the two inquiries to distinct theoretical “levels”—doesn’t get much of a grip here. For,
it’s a requirement of any such picture that an account pitched at the (relatively) higher level of
analysis should be compatible, at least in core respects, with any lower-level account of the
“realization base”. But Brandom’s substitutional syntax seems, even upon close scrutiny, to be
not only different from generative grammar, but decidedly at odds with it. I strongly suspect
that this difference comes down to a methodological conflict between the generativist’s “bottom-up” treatment of subsentential items and the inferentialist’s “top-down” alternative.3 In §4, I
discuss and evaluate some ways of viewing this disagreement, arriving ultimately at the
following bittersweet conclusions.

[2] Indeed, as Fodor (1975) points out, without substantive constraints from the higher-level account, such “realization bases” might differ indefinitely.
Despite strong grounds for optimism about the possibility of integrating normative
inferentialism with the up-and-running research program of generative linguistics, it must be
admitted that rendering Brandom’s substitutional approach to syntax compatible with the going
theories in generative grammar (e.g., Chomsky’s minimalist program) presents an obstinate
challenge. Though I can think of no reason that the challenge is insuperable in principle, it’s
nevertheless the case that I am not, at present, equipped to meet it myself. Perhaps others can
do better in this regard—a task I invite, encourage, and exhort philosophers of language to
undertake, on the strength of the positive arguments adduced here.
§1. Brandom: From Normative Pragmatics to Inferentialist Semantics (and Back)
Introduction
Robert Brandom’s philosophical project is grand in both scope and ambition. The resulting
theoretical framework has a number of moving parts, to put it mildly. In this section, I’ll lay out
what I take to be the central commitments of his “normative inferentialism”, with particular
focus on those that pertain to the broader goals of this discussion—i.e., the proposal to integrate
the generativist and inferentialist research programs.
Although this section is intended to be largely exegetical, postponing critical evaluation to §4, I
should emphasize that it wouldn’t matter to me very much if Brandom didn’t put things quite
the way that I do below. While his account is by far the most thoroughly worked-out version of
normative inferentialism, and hence immensely useful as a guide in this area of inquiry, my
intent is not so much to capture every nook and cranny of one particular theorist’s gargantuan
philosophical system. Rather, the goal of this section is to present and motivate an account of
language that is attractive enough to warrant comparison with Pietroski’s independently
attractive proposals about natural-language semantics (§2). Ultimately, it’s only by comparing
them that we can put ourselves in a position to clearly assess the merits of either, let alone to
contemplate their integration.
§1.1 Methodology and Explanatory Aims
The central questions, for Brandom, are about what constitutes language use—not by humans,
necessarily, but by any natural creature or artificial system to which attributions of a linguistic
competence are warranted. Given that normal adult humans have mastery of at least one
natural language, Brandom’s account will, in some sense, apply to us as well—though perhaps
only as a particular instance of a much more general phenomenon. Still, the inquiry he
undertakes is not a straightforwardly psychological one; nor is it pitched as a historical or
anthropological hypothesis. Rather, the idea is that, armed with a philosophically sophisticated
and conceptually articulated account of “what the trick is”—where “the trick” is language use,
broadly construed—an empirical scientist (a linguist, psychologist, or artificial intelligence
researcher) can ask more detailed questions about “how the trick happens to be done” by one or
another creature or artifact.

[3] See Collins (2012) for a detailed and rewarding discussion of this issue. The contrast that Collins draws between sentence-first and word-first approaches (my terms, not his) serves, if anything, to sharpen the contrast that I’m worried about here. My hunch is that coming to grips with Collins’ conclusions in that work will be crucial to resolving the issues I raise in §4.3.
This latter kind of research is bound to yield an increasingly refined picture of some particular
type of linguistic competence—paradigmatically, the human type (though one often hears of
impressive progress in interpreting the languages of other social creatures, e.g., prairie dogs and
dolphins). Thus, although the account is intended to apply far beyond the human case—to
aliens, robots, and other terrestrial animals—there is no commitment on Brandom’s part to the
effect that empirical findings can have no bearing whatsoever on his philosophical claims. Nor
does he hold that legitimate empirical inquiry can take place only after a credible philosophical
account has been supplied. This is just one sense in which normative inferentialism and
empirical science, including generative linguistics, are not in competition.
One might wonder, at this point, how far removed Brandom’s project is from empirical
considerations. Perhaps too far? True, many of the thinkers with whom his work most directly
engages did not conceive of their inquiries on the model of empirical theorizing. In fact, the
three philosophers who figure most centrally in MIE—namely, Kant, Frege, and Wittgenstein—
all famously made a point of distancing themselves from natural science. But we have to keep in
mind, here as elsewhere, that Brandom’s theoretical framework is “a house with many
mansions”. Thus, we need not take a one-size-fits-all approach to this question; indeed, there
are good reasons not to.
In providing detailed analyses of linguistic constructions in distinctively human languages—de
dicto/re reports, truth-talk, (in)definite descriptions, indexicals, deixis, anaphora, modals,
quantification, predicates, and singular terms—Brandom relies on exactly the same stock of
empirical considerations that one finds in standard semantics texts (e.g., Heim and Kratzer,
1998). Such data are often cast in terms of intuitive judgments concerning the truth-values or
truth-conditions of a target sentence, often in the context of auxiliary assumptions that are
supplied by an accompanying vignette. But these very same data can equally well be recast as
competent speakers’ judgements concerning (not truth but) inferential proprieties—e.g., what
one is required or permitted to infer on the basis of the assumptions supplied (in combination
with all prior background information that is assumed to be common knowledge).4
A similar treatment can be given of metalinguistic intuitions/judgments concerning minimal
pairs and sentential ambiguity. Two sentences differ in meaning, on the inferentialist account,
just in case they differ in respect of the inferential proprieties that govern their use. Likewise, a
sentence is ambiguous just in case it is capable of playing two distinct/incompatible inferential
roles. And, though he doesn’t, to my knowledge, ever discuss the phenomenon of polysemy, I
presume that Brandom would treat it as a case of overlapping inferential proprieties—perhaps
ones that meet some further normative or inferential conditions.
The fact that Brandom often appeals to precisely such data in developing his inferentialist
semantics suggests that at least this aspect of his view is firmly grounded in empirical fact. But,
as I’ll emphasize later, Brandom’s use of these data is not empirical but, rather, illustrative. He
is, in other words, using examples from English—indeed, almost exclusively English, in contrast
with the generativist’s cross-linguistic methodology—as case studies that, according to him,
exemplify more general phenomena. Thus, the English-language examples that he occasionally
provides—a bit too rarely, one might lament—serve not as empirical data for a scientific or
naturalistic hypothesis. Rather, the most charitable reading of his appeals to such examples
casts them as attempts to help us to get a conceptual grip on features of language(s) as such.

[4] My thanks to Eliot Michaelson and Daniel Harris for helping me to see the points in this paragraph more clearly.
Still, it must be admitted that other aspects of Brandom’s overall picture are far less tethered to
the facts on the ground. Presently, we’ll see that his normative pragmatics—to which his
downstream proposals, including inferentialism, are conceptually subordinate—makes only very
minimal empirical assumptions. For instance, it takes for granted that complex social creatures
came into being somehow or other—e.g., via evolution by natural selection, or by deliberate
engineering, as with AI. Brandom likewise presumes that the behavioral control systems of such
creatures/artifacts—brains, motherboards, or what have you—work somehow or other; else they
couldn’t perform the behaviors that institute social norms, given reasonable naturalistic
constraints. Still, putting aside such near-vacuous assumptions, this aspect of Brandom’s
project can fairly be described as an armchair enterprise.5 Whether that’s something to hold
against it—e.g., on the basis of one or another naturalist scruple—is something we can
ascertain only after we’ve surveyed the details of his overall proposal. To these we now turn.
§1.2 Normative Pragmatics
Brandom begins by situating all linguistic practices in the wider realm of activities that are, in
some sense, rule-governed. The notion of a rule plainly stands in need of careful articulation.
Given our philosophical history, two options immediately suggest themselves—what Brandom
calls “regulism” and “regularism”. According to the regulist, rule-following is a matter of
obeying rules that one can explicitly formulate or comprehend. By contrast, the regularist holds
that rule-following is a matter of being disposed to behave in a way that accords with one or
another empirical regularity. Brandom rejects both of these alternatives, though, as we’ll see,
his own account is an attempt to split the difference between them.
Regulism is vitiated, he argues, by the fact that obeying an explicit rule—e.g., a dictionary
definition or an academic/prescriptive convention of grammar or style—requires first
interpreting the rule. This, in turn, requires deploying concepts—a version of precisely the
phenomenon that we’re seeking to explain. The regularist option, he contends, faces a distinct
challenge. A pattern of behavior—whether finite or infinite, actual or potential—can be either a
successful or a defective case of following a given rule. So, with regard to any pattern of
performance, the question always stands: has the rule in question been followed correctly?
Assessments of correctness are, on Brandom’s view, inherently normative. Yet, what the
regularist offers is a purely descriptive account, couched in the language of cognitive or
behavioral dispositions (and perhaps other, related alethic modal notions; see §3.1).
Brandom’s positive proposal insists on the normativity of assessment, characteristic of the
regulist view, but jettisons the requirement that the rules in question be manifested explicitly
from the very outset. Though rules of practice can eventually come to be articulated—i.e.,
“made explicit”—by a community of concept-using creatures, such rules are, in the first instance,
implicit in the social practices of the creatures in question.
[5] Notably, a former student of Brandom’s, John MacFarlane, has programmed a version of “the game of giving and asking for reasons” (GOGAR) for the popular game, The Sims. https://www.johnmacfarlane.net/gogar.html
Why social? Why not, instead, the practices of an individual? Put crudely, the reason is that
isolated individuals cannot, in principle, serve as a normative check on their own judgment and
behavior. In order for a creature to be so much as subject to normative assessment, its behavior
must take place in a context where other creatures respond to it in ways that signal social
(dis)approval. How numerous and long-lasting must the social relations be in order to institute
a social practice, properly so called? Brandom’s answer, which will become relevant later in the
discussion (§4.1), may be somewhat surprising. Terms like ‘social’ and ‘communal’ bring to
mind a relatively large group of creatures. But, when he presses these folk terms into theoretical
service, Brandom’s official view is far less committal than all that. All it really requires is a
dyadic “I-thou” relation—i.e., a case of mutual recognition, in respect of authority and
responsibility, on the part of at least two creatures/systems. Such relations of mutual
recognition can be merely implicit in the 2+ creatures’ overt practices toward one another—e.g.,
one or another type of social sanction.
In the most basic case, social sanctions come down to either naked violence or the provision of
necessities—beatings and feedings. But once a social practice becomes sufficiently complex, it
comes to include not only such “external” sanctions, but also “internal” ones—e.g., initially, the
granting of privileges and later the exchange of tokens of privilege. (We are invited here to
imagine a special fruit that permits its bearer to enter a particular territory without being
attacked by the creatures who guard it.) With each additional layer of interwoven internal
sanctions, the community becomes increasingly ripe for instituting not merely norms of
practical action, broadly construed, but the more specifically linguistic norms of assertion and
demand. I include the latter here, not because Brandom ever treats it in detail, but on account
of his frequent invocation of the trope “the game of giving and asking for reasons” (my
emphasis, to be sure). While the “asking” part seems to deserve equal attention, Brandom takes
the norms governing assertion to be the “downtown” of language—a backhanded adaptation of
Wittgenstein’s metaphor of language as a city with no downtown.
Having argued that assertion is the fundamental pragmatic notion, Brandom goes on to give an
account of it in terms of the normative social statuses of commitment and entitlement. To
illustrate these notions, let’s work through a hypothetical example of how the game of giving and
asking for reasons might be played amongst a group of primitive hominids—or, for that matter,
current prairie dogs.6
Suppose a creature produces a public token in a social context, and that this act has—if only in
that context—the pragmatic significance of explicitly committing the creature, in the eyes of its
community, to its being the case that the enemy is approaching. Each of the other creatures in
the group evaluates this commitment, assessing it as correct or incorrect on the basis of their
own commitments, explicit or otherwise. In doing so, they take a normative stance toward the
“speaker”, whom the group might then treat as being entitled to that commitment. The speaker
can be entitled to a claim either by default—e.g., in cases of joint perception or contextually-relevant common knowledge—or by having undertaken prior commitments that jointly warrant
the one in question. Initially, such normative attitudes are implicit in the assessors’ overt
treatment of the speaker. The group might, for instance, shift its attention in the direction of the
speaker’s gaze upon hearing “Enemy!”. If the enemy is indeed approaching in a way that is
perceptually evident to the group, the entitlement is thereby secured.
[6] I am acutely aware that, despite my efforts at rendering the scenery in a plausibly naturalistic light, the example is not only fictional but transparently artificial in countless respects. I trust these won’t matter for the sake of making the key points accessible.
Suppose now that the same speaker, now quite anxious, produces another token—e.g., “Run!”
This counts not only as an explicit commitment to a (potentially joint) plan of action, but also as
an implicit commitment to the goodness of the inference from “Enemy!” to “Run!” And while the
members of the group assessed the speaker as being entitled to the first claim, they may well go
on to treat this new explicit commitment, i.e., “Run!”, as patently unwarranted in the
circumstances. Again, in the most basic cases, this “treatment” or “stance” toward the speaker
can take the form of overt actions on the part of other group members—e.g., grabbing hold of the
speaker and keeping them in place. Another form of response might be the production of overt
tokens that commit the group, including the original now-frightened speaker, to an
incompatible plan—e.g., “Stay!” and “Fight!”. This normative incompatibility is itself implicit in
the overall communal practices of the group.
From these primitive beginnings, Brandom suggests, a practice can evolve in such a way as to
allow for speakers to make explicit their commitments regarding the goodness—or, in his terms,
“material propriety”—of the inferences that were previously only implicit in their practices.
Paradigmatically, this is achieved by introducing an expression that has the significance of a
conditional. For instance, we can imagine a newly-evolved creature—call it v2.0—that has
achieved what Brandom calls “semantic self-consciousness”. This involves discursively
representing not just enemies and escape strategies, but also inferential relations between
claims. Creature v2.0 can make explicit its commitment to the goodness of a particular
inference by producing a token that has the significance of the conditional, e.g., “If Enemy then
Run!” (Note that the force operator, ‘!’, is stripped off from the atomic propositions, “Enemy!”
and “Run!”, when the latter are embedded in a conditional. This point comes to the foreground
in §4.2.) Further elaborations of v2.0’s language are manifested with the introduction of new
bits of logical vocabulary, all of which serve to express commitments regarding various
inferential relations. For instance, negation expresses the relation of inferential incompatibility
between claims (see below). This is the core thesis of what Brandom calls “logical expressivism”.
Turning to more complex logico-semantic devices, consider the phenomenon of indirect
discourse. Brandom proposes that de dicto reports of “what was said” are used to make explicit
the attribution of commitments to oneself or to others. For instance, my asserting “Dan said
that p” makes explicit my commitment to Dan’s having explicitly undertaken his own
commitment to the effect that p. In a similar fashion, epistemic vocabulary can play the role of
making explicit a speaker’s assessment of someone’s entitlement to the commitments that
they’ve undertaken. For instance, were I to upgrade my assertion to “Dan knows that p,” I
would not only be attributing the commitment to Dan, but also committing myself to it, and
explicitly representing him as being entitled to it. Brandom (2001: ch. 4) points out that these
three pragmatic aspects of knowledge attributions—entitlement-attribution, commitment-undertaking, and commitment-attribution—correspond, in that order, to the three elements of the traditional Justified-True-Belief account of knowledge.
What about deontic modal terms? Brandom argues that these serve to make explicit one’s
commitment to a plan of action. In saying “I ought to φ”, I make explicit my commitment to a plan to φ. Similarly, terms like ‘must’ can be used to make explicit an inferential propriety that
is insensitive to changes in auxiliary assumptions, up to some boundary condition—e.g., natural
or legislated laws. Thus, assertions like “In order to light the wick safely, one must first clean
one’s hands” serve to make explicit a speaker’s commitment to the material propriety of the
inference from “The wick-lighting is safe” to “Your hands are clean,” irrespective of what
commitments they have concerning a wide range of possible auxiliary assumptions, such as “It’s
raining elsewhere,” “I never met my grandfather,” and “Child-trafficking is a serious problem.”
Whether or not these latter are included in one’s set of commitments, the inference from “The
wick-lighting is safe” to “Your hands are clean” is ostensibly good. Of course, auxiliary
commitments that conflict with one’s views on natural laws—e.g., “Gasoline burns when lit”—
would render the inference materially invalid. That’s what talk of “boundary conditions” is
intended to capture. In the case of the purely nomic reading of ‘must’, as in “Oxygen must be
present for combustion,” the inference from “Combustion is occurring” to “Oxygen was present”
is good under substitution of virtually any commitment, other than those regarding physical or
chemical law.
In this vein, Brandom provides rich pragmatic analyses of a variety of other “vocabularies”,
including the (meta)semantic devices ‘represents’ and ‘is about’, as well as indexicals, alethic
modals, anaphoric pronouns, and de re attitude reports. Some of the details of these analyses
will emerge throughout the present discussion, and I’ll devote special attention to his account of
de re constructions in §3.4. For the moment, the case that’s most important to examine is that
of ‘true’, as this bears directly on Brandom’s rejection of truth-conditional semantics—arguably
the main negative contention that he shares with Pietroski. Brandom offers a clear and well-motivated alternative to the standard treatment of truth in terms of “correspondence”—a notoriously vexed notion that lies at the heart of truth-conditional semantics. Instead, he develops a refined version of the “deflationary” approach to truth and reference—arguably the
best on the market, as we’ll see presently.
§1.3. Deflationism about truth and reference
In asserting that things are thus-and-so, a speaker takes on the pragmatic normative status of a
discursive commitment. What is it, then, for another person to say that the first speaker’s
assertion is true? The normative pragmatic answer is this: in asserting that some claim is true,
one not only ascribes a commitment to the speaker who made it, but one also undertakes that
commitment oneself.7 This allows for the possibility—enormously useful in social practice—for a
speaker to take on commitments that they cannot at present articulate. The inability may be due either to memory loss (“I don’t remember exactly what she said, but it was definitely true”) or to
time constraints (“She gave a long speech; I’m not gonna repeat the whole thing now, but
everything she said was obviously true”). In the most interesting cases, complete articulation is
impossible within physical limits, the set of commitments being literally infinite, as with “The
theorems of Peano arithmetic are all true”.
Thus, on Brandom’s view, the term ‘true’ and its cognates (‘truth’, ‘correctness’, etc.) all serve a
distinctive expressive function, without which branches of discourse such as mathematics would
be impossible. Specifically, these terms all serve to express a commitment to something already
asserted (or, at any rate, assertible). Brandom thus labels this an “expressivist” account of truth,
of a piece with his more general expressivist approach to logical vocabulary. The semantics for
truth ascriptions is elaborated still further in light of his discussion of anaphora (§1.7). The
notion of inter-sentential (and inter-speaker) anaphora will allow us to appreciate how truth
ascriptions can have the pragmatic function of allowing the inheritance of commitments and
entitlements across inter-personal exchanges.
If this pragmatic expressivist account of our use of ‘true’ (and related terms) is correct, then
there is no obvious reason why anything further needs to be said about truth. The latter is often
7. In the special case where the speaker and the assessor are identical—as in, “What I’m saying is true!”—the commitment is both redundant and guaranteed, though the term is still useful, in such cases, if only for emphasis and the like.
conceived of as a metaphysical language-world relation—the very one denoted/satisfied by the
term ‘truth’ and its cognates. But there seems to be no explanatory work for which such a
relation is obviously indispensable—neither in semantics nor, Brandom argues, in any other
area of theorizing. This puts his view in the same camp as other versions of “deflationism” about
truth—particularly, the well-known disquotational (Quine, 1970), minimalist (Horwich, 1990),
and prosentential accounts (Grover, Camp, and Belnap, 1975). I take all of these to share the
following core commitments:
(i) a rejection of any account/analysis of truth in terms of correspondence, coherence, or warranted assertibility, on the grounds that truth is not a relation (of any kind); and
(ii) the aim of casting the notion of meaning as explanatorily prior to that of truth, both in semantics and elsewhere (including metaphysics, epistemology, and ethics).
The differences between disquotationalism, minimalism, and prosententialism have mostly to
do with matters of detail, such as whether to ascribe truth to sentences or to propositions, or
how exactly to interpret Tarski biconditionals, liar sentences, and quantified truth ascriptions.
These disputes are all strictly irrelevant for our purposes. What’s important here is that
Brandom’s version of deflationism is designed to claim the virtues of each of these prior
accounts, without succumbing to the technical objections that have been lodged against them.
The three main improvements he suggests are (i) subordinating the semantics of truth
ascriptions to his brand of normative pragmatics, (ii) paying closer attention to the syntax of
truth ascriptions, especially their inter-sentential anaphoric structure (§1.7), and (iii) extending
the deflationist account to other semantic notions, including reference, satisfaction, and de re
representation. While (i) is a straightforward application of Brandom’s broader strategy, and
(ii) serves largely to immunize his version of deflationism from extant objections, (iii) strikes me
as a genuine extension of Brandom’s normative pragmatics, allowing it to handle both sentential
and subsentential expressions.
The notions of truth and reference are plainly central to the project of truth-conditional
semantics. Thus, many have noted that a deflationist account of these notions requires a radical
re-thinking of what shape a formal semantic theory should take. In this regard, we now have an
embarrassment of riches. In addition to old-school proposals about warranted assertibility, and
the pragmatists’ short-lived “success semantics” (see Brandom 2009 for critique), we now have
the benefit of more modern proposals, including both Pietroski’s cognitivist account (§2) and
Brandom’s inferentialism. Let’s examine the latter.
§1.4 Inferentialist Semantics
Having situated assertional practices within the broader sphere of rule-governed social activity,
Brandom has introduced his key pragmatic notions of commitment and entitlement. He goes
on to show how these normative statuses, taken together, can be used to construct a semantic
theory, whose business it is to explain (in some sense) how linguistic expressions can come to
play the roles that they do in a community’s assertional practices.
In familiar fashion, the explanation goes by way of assigning “meanings” or “semantic values” to
expression types. But, in keeping with his other commitments, Brandom does not equate
meanings with truth conditions, sets of possible worlds, pragmatic success conditions, or
assertibility criteria. Rather, he subordinates his semantic theory to the normative pragmatics
just outlined, by treating meanings as the inferential proprieties that govern the use of linguistic
expressions. Slurring over a considerable mass of detail, we can summarize the proposal as
follows:
Inferentialism: For a given propositional expression, ‘P’, the meaning of ‘P’ can be modeled as the set of sets of other propositional expressions that
(i) entitle one to ‘P’ in the presence of (various sets of) auxiliary commitments, or
(ii) commit one to ‘P’ in the presence of (various sets of) auxiliary commitments,
as well as those to which
(iii) ‘P’ commits one in the presence of (various sets of) auxiliary commitments, and
(iv) ‘P’ entitles one in the presence of (various sets of) auxiliary commitments.
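To fix ideas, the four-clause definition just given can be modeled as a simple data structure. The following Python sketch is purely illustrative—Brandom offers no such implementation, and the field names and claim-sentences are my own inventions:

```python
from dataclasses import dataclass, field

# Toy model: a propositional meaning as four collections of premise-sets.
# Everything here is an illustrative stipulation, not Brandom's formalism.

@dataclass
class Meaning:
    entitling: set = field(default_factory=set)    # (i) sets that entitle one to P
    committing: set = field(default_factory=set)   # (ii) sets that commit one to P
    commits_to: set = field(default_factory=set)   # (iii) sets P commits one to
    entitles_to: set = field(default_factory=set)  # (iv) sets P entitles one to

herbie_is_a_dog = Meaning(
    entitling={frozenset({"Herbie barks", "Herbie fetches sticks"})},
    committing={frozenset({"Herbie is a beagle"})},
    commits_to={frozenset({"Herbie is an animal"})},
    entitles_to={frozenset({"Herbie can be licensed as a pet"})},
)

# A commitment to the beagle-claim commits one to 'Herbie is a dog':
assert frozenset({"Herbie is a beagle"}) in herbie_is_a_dog.committing
```

Note that each field holds a set of *sets*: the auxiliary commitments matter jointly, not one by one, which is why the definition speaks of sets of sets.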
A particularly useful compound inferential relation turns out to be that of incompatibility,
wherein taking on one commitment precludes a speaker from becoming entitled to another. In
this sense, a commitment to “Herbie is a dog” is incompatible with entitlement to “Herbie is a
bird”. Brandom (2008, ch. 5) shows how to build a modal propositional semantics on the basis
of just this incompatibility relation, treating the negation of a claim, for instance, as the minimal
set of commitments that are incompatible with it. Here again, the details are illuminating, but
only one significant upshot bears highlighting for present purposes.
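For readers who find a concrete model helpful, the incompatibility construction can be sketched computationally. The incoherence relation and the Herbie examples below are my own stipulations, not Brandom’s; the entailment clause follows the incompatibility-semantics idea that X entails p just in case everything incompatible with p is incompatible with X:

```python
from itertools import chain, combinations

# Illustrative toy of an incompatibility semantics in the spirit of
# Brandom (2008, ch. 5). The incoherence relation is hand-stipulated.

INCOHERENT = {
    frozenset({"Herbie is a dog", "Herbie is a bird"}),
    frozenset({"Herbie is a bird", "Herbie is a mammal"}),
}

def incoherent(claims):
    """A set of claims is incoherent if it includes a stipulated incoherent set."""
    claims = frozenset(claims)
    return any(bad <= claims for bad in INCOHERENT)

def incompatibles(claims, universe):
    """All nonempty claim-sets drawn from `universe` that are jointly
    incoherent with `claims`."""
    subsets = chain.from_iterable(
        combinations(universe, n) for n in range(1, len(universe) + 1)
    )
    return {frozenset(s) for s in subsets if incoherent(set(claims) | set(s))}

def entails(premises, conclusion, universe):
    """X entails p iff everything incompatible with p is incompatible with X."""
    return incompatibles({conclusion}, universe) <= incompatibles(premises, universe)

universe = ["Herbie is a dog", "Herbie is a bird", "Herbie is a mammal"]

# Commitment to the dog-claim precludes entitlement to the bird-claim:
assert incoherent({"Herbie is a dog", "Herbie is a bird"})
# In this toy universe, the only thing ruling out mammal-hood is bird-hood,
# which the dog-claim also rules out; so the dog-claim entails the mammal-claim:
assert entails({"Herbie is a dog"}, "Herbie is a mammal", universe)
```

On this construction, the negation of a claim can then be identified with the minimal claim-set incompatible with it, as in the text above.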
Casting the meaning of an expression in terms of its inferential proprieties vis-à-vis other
expressions plainly commits one to meaning holism. A common charge against theories of a
holist stripe is that they founder on the rock of compositionality. For instance, Fodor and
Lepore (1992) famously argue that inferential roles don’t compose, whereas meanings do; a
fortiori, meanings can’t be inferential roles. But the formal incompatibility semantics developed
by Brandom (2008) provides a direct counterexample to the main premise of this argument, by demonstrating how an inferentialist semantics can in fact provably meet reasonable
compositionality constraints, at least in the modal propositional case.8 In any event, we will see
that there are other reasons to reject Fodor and Lepore’s arguments.
§1.5 Substitutional “Syntax”
We’ve now put on the table both a normative pragmatics and an inferentialist semantics.
However, it’s relatively uncontroversial that only proposition-sized expressions can enter
directly into inferential relations as premises and conclusions.9 That being so, we still need to
say how subsentential bits of language can have meanings of their own. Identifying
subsentential expressions will allow us to explain how such expressions can go on to contribute
to the indefinitely many assertions that a creature like us can interpret and produce. While
there is no conceptual barrier, on Brandom’s picture, to a community of creatures/robots using
a language with only finitely many complex expressions, our own case plainly illustrates that
languages can and do come in varieties that admit of productive generation. So, while a first-pass presentation of the inferentialist approach is best conducted in terms of a community of
creatures that uses a finite language—such as might easily be found in (extra)terrestrial nature
or constructed in a robotics laboratory (e.g., AIBO dogs)—it does not follow, and is not true, that
8. Although, as of this writing, an analogous proof for the quantificational case remains elusive, I am aware of no principled reasons for thinking that such a proof won’t emerge—if not tomorrow, then someday. As will become clear throughout, I adopt a resolutely optimistic attitude toward such matters.
9. For a dissenting view, see Stainton (2006).
the inferentialist program abdicates the responsibility of explaining the productive nature of
some languages. Quite the contrary; Brandom takes his account of subsentential meanings to
constitute one of the core achievements of the inferentialist program.
The primary notion of an inferentialist semantics for subsentential expressions is that of
substitution, which Brandom inherits from (a reconstructed time-slice of) Frege. Starting with a
finite stock of sentence types, Σ, each of whose free-standing (i.e., unembedded) uses has the default pragmatic significance of performing an assertion, we can ask whether any members of Σ can be treated as substitutional variants of any others. Keeping to the level of naïve intuition,
the sentence ‘David admires Herbie’ is a substitutional variant of ‘Jessica admires Herbie’. We’ll
see more about how this works in a moment, but the key take-away point is this: if a sentence
has a set of substitutional variants, then we can, to that extent, discern its subsentential
structure. That is, by relating one sentence to another inferentially via substitution, we can
notice and distinguish re-combinable subsentential expressions within the sentences of the
language. Let’s work through an example.
Take the sentence ‘David admires Herbie’ and chop it up any way you like, in respect of
phonology, orthography, or whatever surface-level features happen to be relevant to the
language at hand.10 One way of doing so will yield ‘Herbie’ as a proper part; another yields …‘erbi’…. Now do the same with every other member of Σ, where the latter is assumed to be finite.11 This yields a set of subsentential bits, B, consisting largely of nonsense like …‘dmire’… and …‘vid admi’…. With this in hand, go back to ‘David admires Herbie’ and substitute any other member of B (or, for that matter, Σ) in place of ‘Herbie’. You’ll find that most such substitutions yield uninterpretable gibberish—i.e., expressions that can enter into no inferential relations with the antecedently interpreted members of Σ. For instance, substituting ‘jump’ for
‘Herbie’ yields ‘David admires jump’, which has no inferential consequences. Same for ‘jumps
rapidly’, ‘red’, ‘we’, and …‘rential cons’…. By contrast, a commitment to ‘Colorless green ideas
sleep furiously’ would presumably preclude entitlement to ‘Nothing ever sleeps furiously’, ‘There
are no colorless green things’, ‘Ideas can only be red’, and many other propositions. There is a
clear sense, then, in which this famous sentence is perfectly well interpretable. (It’s even false!)
Setting aside gibberish, there will be a subclass of expressions that, when substituted for ‘Herbie’
in ‘David admires Herbie’, yield interpretable sentences, such as ‘David admires Jessica’,
‘Jessica admires Herbie’, ‘David feeds Herbie’, and ‘David feeds Jessica’. (Again, a sentence is
interpretable just in case it can play the role of premise or conclusion in an inference.) This
subclass of B, call it R, contains all and only the recombinable elements—i.e., the subsentential
units of the language—including words, phrases, clauses, morphemes, subjects, predicates, or
10. We’ll do things in terms of orthography here (given the medium), but phonology is plainly the more primitive of the two in the human case, both phylo- and ontogenetically, as textbooks in empirical linguistics have long emphasized. For future robots, the medium will likely be something else—perhaps some descendent of TCP/IP. This would require adapting the substitutional techniques to that particular case.
11. Any actual creature’s primary linguistic data (PLD) will, of necessity, be finite in the course of language acquisition. The obvious analogy to the case of language acquisition in human children should not tempt us into assuming that Brandom is pitching an empirical account of the stages of acquisition. Still, the analogy is worth noting, even if we strongly suspect—as generativists do, pace Tomasello (2005)—that children’s linguistic capacities are productive/generative right from the get-go. From the latter hypothesis, it follows that there is no such thing, really, as a finite set of PLD for the child; the child’s acquisition device is always doing something analogous to hypothesis testing, even in the absence of input data. On this picture, the set of PLD is a constantly-moving target—in effect, a massively complex mental representation, or representational structure/system, within the child. The latter is plainly not identical with the set of utterances that happened to be produced in a child’s presence.
whatever other syntactic categories the language in question contains. We can now call one
sentence, S, a substitutional variant of another, S*, just in case S is the result of substituting one
element of R for another in any member of Σ. Thus, ‘David admires Herbie’ is a substitutional
variant of ‘Jessica admires Herbie’, on account of its being the result of the substitution of
‘David’ for ‘Jessica’.
The foregoing puts us in a position to entertain a new inferential relation between sentences.
Let’s call an inference substitutional just in case the conclusion is a substitutional variant of one
of the premises. The two inferences, from ‘David admires Herbie’ to either ‘David feeds Herbie’
or to ‘David admires Jessica’, are both fine examples. This notion of a substitutional inference is
what allows for an application of the inferentialist strategy to subsentential expressions.
Subsentential Meaning: The meaning of a subsentential expression, α, is the set of materially good substitution inferences involving α.
Thus, the meaning of ‘Herbie’ is the set of inferential proprieties that includes {‘David feeds Herbie’ → ‘David feeds Herb’}, {‘David feeds Herbie’ → ‘David feeds a dog’}, {‘David feeds Herbie’ → ‘David feeds his dog’}, and many others.12 In all such cases, the substitutional inferences are materially good in virtue of the fact that ‘Herbie’ is replaced by another of his actual names, or by some other way of correctly describing him, uniquely or otherwise.
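The substitutional machinery of §1.5 can likewise be given a toy computational gloss. Here I crudely approximate substitutional units by whitespace-delimited words, and the finite sentence stock is invented purely for illustration:

```python
# Toy sketch of substitutional variance over a finite sentence stock.
# The sentence list is my own invention, for illustration only; real
# substitutional units need not align with whitespace-delimited words.

SENTENCES = [
    "David admires Herbie",
    "Jessica admires Herbie",
    "David admires Jessica",
    "David feeds Herbie",
]

def substitutional_variant(s1, s2):
    """True iff s2 results from s1 by substituting exactly one unit
    (crudely approximated here by one word position)."""
    w1, w2 = s1.split(), s2.split()
    if len(w1) != len(w2):
        return False
    return sum(a != b for a, b in zip(w1, w2)) == 1

def variants(sentence, stock):
    """All members of the stock that are substitutional variants of `sentence`."""
    return {s for s in stock if substitutional_variant(sentence, s)}

assert variants("David admires Herbie", SENTENCES) == {
    "Jessica admires Herbie",
    "David admires Jessica",
    "David feeds Herbie",
}
```

An inference whose conclusion is among a premise’s variants is then a candidate substitutional inference; whether it is *materially good* remains a normative question that no such mechanical test settles.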
Needless to say, no one—not I, and certainly not Herbie—will ever have a full grasp of the set of
inferential proprieties that governs the use of the expression ‘Herbie’, as this would involve
knowing everything there is to know about him. Nor is there any guarantee that any two
speakers will converge from the outset on what is correct to infer from “David feeds Herbie”—
e.g., whether inferring “David feeds a dog” is (materially) good. Rather, the point is this: given
that there are, in point of fact, plenty of ways for me to entitle myself to “Herbie is a dog”, and no
plausible ways (please grant) to undercut that entitlement, it would be incorrect, pragmatically
improper, and epistemically unwarranted for someone to assert the opposite. This holds even
if my interlocutor is strongly disposed to maintain a contrary position on the matter (foolishly,
no doubt). It’s important to always keep in mind that normative inferentialism is not about
inferential propensities; it’s about inferential proprieties.
§1.6 Predicates and Singular Terms
One consequence of the view presented thus far is that some linguistic expressions can be
inferentially stronger or weaker than others. Consider the verbs ‘runs’ and ‘moves’. The former is logically stronger than the latter, because all substitution inferences from ‘x runs’ to ‘x moves’
are good, but the reverse inferences generally aren’t. In such cases, the substitution inferences
are said to be asymmetric. We also find terms that invariably enter into symmetric
substitution inferences—e.g., from ‘Mark Twain was an American’ to ‘Samuel Clemens was an
American’ and back again. To make the latter type of inference explicit, subsentential
expressions of identity and nonidentity can be introduced, yielding propositions of the form a = b and a ≠ b (e.g., ‘Sam Clemens is identical with Mark Twain’ and ‘David is not Herbie’).13
12. For simplicity of presentation, I suppress issues to do with possessives like ‘my’ and ‘his’, and indexical expressions more generally. Brandom (1994, 2008) supplies an account of these, but the details are irrelevant here.
13. The notion of “introduction” that I intend here is the one developed in Brandom (2008). Roughly, a community is capable of introducing a novel expression, in this sense, just in case its members already have the practical abilities that are necessary and sufficient for being able to express—i.e., to make explicit—normative attitudes that
As we will see in §4, Brandom holds that the distinction between predicates and singular terms
comes down to the distinction between those expressions that must license only symmetric
inferences (e.g., ‘Herbie’ and ‘the dog’), and those that merely can license symmetric inferences,
but need not do so (e.g., ‘deer-like’, ‘jumps’, and ‘rapidly’). On the basis of this claim, Brandom
goes on to develop a complex line of reasoning whose ultimate conclusion I’ll call the
“asymmetry constraint”.
Asymmetry Constraint: Any language that draws no distinction between predicates and singular terms (conceived in the above manner) is in principle precluded from introducing conditionals—i.e., expressions that make explicit one’s commitment to the goodness of an inference—and other basic operators of propositional logic.
This claim will come to the foreground when we contrast it with Pietroski’s predicativism,
according to which there are in fact no singular terms at all in natural languages. If Brandom’s
argument succeeds, then Pietroski’s predicativist semantic theory faces a serious challenge.
Contrapositively, if Pietroski’s predicativism is correct, then there must be a flaw in Brandom’s
reasoning. This is, in fact, the final puzzle—not to say mystery!—of the overall reconciliation
project that I’ll be urging here.
§1.7 Types, Tokens, and Anaphoric Chains
The expressions discussed thus far have all been linguistic types, tokens of which may well
diverge in meaning from their primary significance in the language. Indeed, terms like ‘Herbie’
have so many different uses—one for my dog, another for the pianist, Herbie Hancock, and
countless others—that Brandom needs an account of what makes any use of ‘Herbie’
semantically co-typical with any other. The question applies even to intra-sentential
occurrences: What makes it the case, for instance, that both tokens of ‘Herbie’ in ‘Herbie admires Herbie’ are of the same type in a given communicative context?
In providing his answer, Brandom introduces the last of the major technical notions that he
needs in order to carry off his overall project—viz., the notion of anaphora. Linguists and
philosophers have paid a great deal of attention to intra-sentential anaphora, as in ‘If a man is a
police officer, then he was born out of wedlock’, where the pronoun ‘he’ is anaphoric on ‘a man’.
Syntacticians, in particular, have devised principles of generative grammar that aim to explain
the natural distribution of anaphoric expressions within sentences of natural language.
Somewhat less effort has thus far been expended on analyzing inter-sentential anaphora, as in
the following exchange between speakers Mihir and Rushal.
Mihir: That man seems to have fallen ill right after he approached the police line.
Rushal: He must have gotten hit by their fancy new sonic weapon.
Mihir: Oh, hey, I didn’t see you there! Do you happen to know the guy?
Rushal: No, I just heard you talking about him and I figured I’d chime in.
13 (cont.). …were previously only implicit in their practice. Thus, the practical ability to implicitly treat someone as having entitled themselves to q by committing themselves to p is both necessary and sufficient to introduce conditional expressions that make explicit the material goodness of that inference—e.g., ‘p → q’ and ‘If p then q’. We will see in §2.5 that Pietroski’s notion of concept introduction is different from Brandom’s, and arguably orthogonal.
Here, an anaphoric chain is initiated by Mihir’s use of ‘That man’, which is then picked up by ‘he’
later in the same sentence. But the chain doesn’t end there. Rushal’s use of ‘He’ is anaphoric on
Mihir’s use of ‘That man’ and ‘he’. Mihir’s response picks up the anaphoric chain with an
occurrence of ‘the guy’, which then continues onward to Rushal’s use of ‘him’, and to
occurrences of other expressions in subsequent discourse. Setting aside syntactic issues, what
can we say about this phenomenon at the level of meaning?
In keeping with his inferentialist semantics, Brandom argues that an anaphoric chain is one in
which the inferential proprieties governing the anaphoric initiator (e.g., Mihir’s use of ‘That
man’) are inherited by subsequent expressions in the chain. Thus, if Mihir’s use of ‘That man’ is
partly governed by his commitment to ‘That man is falling ill on live television’, then Rushal
inherits this commitment (among others) in picking up the anaphoric chain with the use of ‘He’,
along with whatever entitlements for this claim Mihir had already secured prior to Rushal’s
appearance on the scene.
With this account in hand, Brandom treats as a special case those occurrences that count as semantically co-typical because they are phonologically or orthographically co-typical—e.g., the
two occurrences of ‘Herbie’ in ‘Herbie admires Herbie’. From this perspective, all expression
types consist of long-stretching anaphoric chains of individual use—an idea familiar from causal
theories of reference-borrowing, though shorn of various optional commitments. This account
also makes it clear what’s happening at the level of pragmatics. In picking up anaphoric chains,
speakers are able to take on normative statuses—paradigmatically, commitments and
entitlements—without themselves having explicitly avowed those statuses, and often without
having much (if any) idea what exactly it is that they’ve inherited. To illustrate, we can extend
the above example.
Suppose Rushal had no prior commitments regarding the victim’s appearance on television, or
indeed anything at all about the victim, but was strongly committed to the claim that police
don’t use sonic weapons on camera. In that case, upon being subsequently apprised of Mihir’s
entitlement to ‘That man is falling ill on live television’, Rushal will be under normative pressure
to either revise his prior commitments about on-camera police violence, or to withdraw the
claim that the victim must have been affected by a specifically sonic weapon. In this second
case, the revision can target either the predicate ‘sonic’—perhaps the police used an invisible
gas—or the alethic modal expression, ‘must’. The latter, on Brandom’s view, functions to make
explicit the modal robustness of an inference—i.e., its insensitivity to substitutions of
background auxiliary commitments, up to some boundary conditions (e.g., physical law). In the
present case, the boundary conditions are set by Rushal’s commitments regarding the general
institutional practices of local police. In order to regain epistemic equilibrium, Rushal can
revise various commitments concerning these practices; for instance, he might conclude that the
local Sheriff has deemed this to be a special occasion, on which on-camera use of sonic weapons
is warranted.
§1.8 Summary
We’ve now surveyed the main contours of Brandom’s overall philosophical project. The
explanatory strategy he pursues can be characterized as “top-down”, in the sense that he begins
by offering an account of communal normative practices, in the broadest sense, and identifies
within these an important subclass—namely, practices that serve to institute distinctively
linguistic norms governing assertion and other communicative acts. (One last plea for
demands!) Such norms pertain to the inferential proprieties that expression types have in their
semantically primary occurrences. Thus, the account moves “down” a step—from a normative
pragmatics that posits statuses of commitment and entitlement, to an inferentialist semantics
that aims to analyze meaning in terms of these statuses. The meaning of a propositional
expression type is, on this picture, identified with its normative inferential role—i.e., what other
claims it commits or entitles one to, and what commitments one must undertake in order to secure
an entitlement to it.
Drilling down still further, Brandom develops the substitutional approach, which allows one to
“dissect” proposition-sized expression types, revealing subsentential bits of vocabulary. These
carry their own “ingredient content”, despite lacking the free-standing significance of
propositional expressions that enter directly into inferences as premises or conclusions. The
details of this proposal put in place the theoretical commitments that Brandom needs in order to
distinguish predicates from singular terms—a distinction that he goes on to argue will be
discernable in any linguistic practice that allows for the introduction of conditionals and other
logical operators (§4.3).
Having offered a treatment of propositional and subsentential expression types, Brandom steps
down another rung on the explanatory ladder, developing a conception of anaphora that applies
far more broadly than standard discussions in the literature might lead one to suspect. The
anaphoric relationship is, on this view, one of inferential inheritance, wherein the proprieties
governing the use of one expression—the initiator of an anaphoric chain—are taken to then also
govern the expressions occurring later in the chain, irrespective of the speaker’s
acknowledgement (or even awareness) of the statuses they’ve thereby undertaken. The latter
condition serves to explain how speakers can felicitously use expressions whose total set of
inferential proprieties is unknown to them, and perhaps even to anyone in the community.
One might think that all of this is utterly wrongheaded right from the get-go—the normativity,
the substitutions, and even the top-level goal of delineating language-use as such. Indeed, from
the perspective of a mainstream contemporary linguist or philosopher of language, Brandom’s
whole “top-down” explanatory strategy will seem downright perverse. The more common
bottom-up alternative goes as follows.
Taking for granted the notions of denotation/reference and satisfaction, as applied to
subsentential expressions, the bottom-up theorist seeks to formalize a compositional apparatus
for building propositions out of them. Free-standing propositional complexes are thereby
recursively assigned their own special kind of semantic value: e.g., possible-worlds truth
conditions (Heim and Kratzer, 1998) or sets of possible worlds (Stalnaker, 1984). This, in turn,
opens the door to a theory of linguistic communication, according to which speakers append
illocutionary forces to the range of recursively-specified meanings, yielding a variety of speech-act types (questions, commands, etc.). The inferences in which a (now-interpreted) speech-act
type figures can then be classified as good or bad in virtue of the semantic structures that the
combinatorial apparatus assigns to their premises and conclusions, as well as the illocutionary
forces that (somehow) “attach” to those structures.
Having thus analyzed the semantic properties of speech acts and inferences, one might note that
some—perhaps, in the end, all—of these have features that reliably trigger unencapsulated
pragmatic reasoning. This motivates the familiar project of supplementing a pragmatic theory
with “maxims” of rational cooperative communication/action (Grice 1989; Sperber and Wilson,
1986). Theorists who have carried out this latter project have developed impressive accounts of
implicature, metaphor, and other complex communicative phenomena (Levinson, 1983; Harris,
2020).
Proponents of the bottom-up strategy have pressed a catalogue of objections to Brandom’s
project. These include, but are not limited to, the following: (i) insistence on a compositionality
constraint that the inferentialist allegedly can’t accommodate; (ii) rejection of the idea that
language is fundamentally a communicative system; (iii) requirement that any legitimate
inquiry forswear trafficking in normative assessments; and (iv) an allegation to the effect that
normative inferentialism is incompatible with what is known empirically about the human
mind/brain, particularly in respect of its language-processing abilities.
Before any of these challenges can be met, each stands in need of careful articulation. As
previously noted, I believe that such a task is best undertaken by pitting Brandom’s project
against what appears, at first blush, to be a rival alternative. (As advertised, I’ll argue afterwards
that the appearances are often deceiving in this regard.) With that in mind, I now turn to the
work of Paul Pietroski, whose semantic theory is a recent and powerful contribution to the larger
enterprise of generative linguistics.
§2. Pietroski: Meanings as Pronounceable Instructions for Concept Assembly
The theoretical commitments that comprise Paul Pietroski’s approach to natural-language
semantics are advanced and defended in his recent book, Conjoining Meanings (henceforth
CM).14 In this section, I summarize several of Pietroski’s main contributions, highlighting
aspects of his view that bear on my ecumenical strategy in §§3-4. To be clear from the outset,
the ideas laid out in CM strike me as constituting genuine progress in our understanding of the
psychological mechanisms of human language use. Moreover, I find wholly compelling his
arguments against the central pillars of received views in semantics—particularly, the
commitment to an extensional/truth-conditional approach. The book, overall, is replete with
rich and instructive discussions of topics that go well beyond the scope of the present discussion.
But while we won’t be able to look at the details of some of Pietroski’s original proposals here,
it’s worth noting that they are all, to my mind, persuasively motivated by historical, formal, and
empirical considerations. That having been said, let’s dive in.
§2.1 A different methodology and new explanatory aims
While Brandom’s inferentialist approach is virtually unknown in cognitive science, the
methodology of generative linguistics will be familiar to many in the field, at least in broad
outline. Rooted in a foundational commitment to naturalistic inquiry, the idea is to treat
language as a biological phenomenon—not necessarily in the sense that it has an adaptive
function (Chomsky [2016] disputes this), but in the sense that a neurophysiologically realized
cognitive structure is the explicit target of inquiry. The linguist thus works on the assumption
that human minds contain a language-specific device—a “faculty”, “module”, or “mental
organ”—with a distinctive computational architecture, a proprietary representational format,
and dedicated/domain-specific information-processing routines. The goal is to provide a
detailed specification of each of these, yielding a neurocognitive account of the acquisition and
use of language.
14 This section elaborates the material in Pereplyotchik (2019). The operative notion of a subpersonal level of description is spelled out in Pereplyotchik (2017: ch. 7).
On analogy with bodily organs, the faculty of language (henceforth FL) is assumed to “grow”
within the child during the early years of development. This happens in accordance with a
genetic program, phenotypically realized in the child’s innate ability to acquire linguistic
competence under a diverse range of social and environmental circumstances. Thus, a central
aim of generative linguistics is to specify not only the grammar of an adult language, but also the
principles that underlie language acquisition—particularly those that allow the child to home in
on a specific grammar in a relatively short time, with little or no (overt) negative evidence
(Chomsky, 1986; Yang, 2006). This problem is made exceedingly challenging by the fact that
natural languages are invariably productive/generative, meaning that they allow for boundless
applications of combinatorial recursive operations, yielding a discrete infinity of nonredundant15
interpretable structures.
The generativist’s strategy for dealing with this central feature of natural language is to posit
grammatical principles that are inherently compositional at all levels of analysis—phonology,
morphology, syntax, and semantics. The syntactic module of FL is taken to merge the elements
of the lexicon—atomic units of a language that contribute their distinctive meanings to more
complex structures. On the basis of these, the semantic module recursively generates complex
meanings, which can enter into downstream personal-level cognition—judgment, reasoning,
planning, and the like.16
Pietroski’s main goal in CM is to characterize the semantic module by offering a detailed
proposal about its proprietary representational format—specifically, the nature of the lexical
items—and the computational operations that assemble larger interpretable structures. At the
level of format, the hypothesis he develops is that virtually all lexical items are predicates, the
latter being restricted to only two types—monadic and (semi)dyadic. Regarding computational
operations, Pietroski aims to make do with a bare minimum of compositional semantic
principles, with the lion’s share of work being done by nothing more than two flavors of
predicate conjunction (one for each type of predicate). We’ll look at some of the details shortly,
maintaining our present focus on matters of methodology.
Following Chomsky (1986, 1995, 2000), Pietroski adopts an individualist position, taking the
object of study to be an “I-language”—an intensionally-specified procedure internal to an
individual language user. He supports this with forceful arguments against the alternative
conception of language(s) that we find in the work of David Lewis (1969, 1970, 1973). A
language, on this rival picture, is a kind of abstract object—namely, an extensionally-specified
set of well-formed sentences—which is “selected” by a population of creatures, via the adoption
of social/communicative “conventions”. The latter Lewis sees as jointly constituting a public
language, such as English or Norwegian—what generativists refer to as “E-languages”. Pietroski
rejects virtually every aspect of this picture. We’ll look at his reasons for doing so in §4. For
now, it’s sufficient to distinguish three key points of contention.
15 Pietroski points out that this goes well beyond mere recursion, which is trivially satisfied by any language with a rule for applying sentential operators. The infinitude of English thus differs qualitatively from the infinitude of a language that permits the formation only of P, P&P, P&(P&P),… or P, ~P, ~~P, ~~~P,….
16 It’s important to note that what has been said thus far is not (yet) intended as a theory of real-time/on-line language processing. Rather, it is to be seen as an abstract characterization of the architecture and internal operations of a specific cognitive structure, acquired at birth and persisting in a stable state thereafter (Chomsky, 1995).
First, there’s the metaphysics. Lewis (1973) says languages are abstracta, whereas Pietroski sees
them as biologically-instantiated computational procedures. Then there’s the issue of
extensionality. Pietroski rejects Lewis’s theoretical goals, which consist merely of extensionally
specifying meaning-pronunciation pairs, and adopts instead a more weighty explanatory aim—
namely, that of specifying human linguistic competence as a function-in-intension. Only in this
way, he argues, can the resulting theory capture the psychologically real operations that yield
interpretable structures. Finally, there’s the issue of publicity, and related troubles with Lewis’s
notion of “selection”. Pietroski’s individualist stance leads him to eschew the folk-ontological
commitment to public languages, at least for the purposes of mature empirical inquiry. This
manifests in his methodological practice of focusing on matters of individual psychology—e.g.,
internal mechanisms of semantic composition—rather than the social practices of linguistic
communication. Accordingly, Pietroski sees Lewis’s appeal to public conventions as generally
unhelpful for—indeed, an outright distraction from—the empirical study of linguistic meaning.
Pietroski’s disagreements with Lewis go well beyond such methodological issues, extending to
matters of technical detail. For, in addition to the large-scale commitments mentioned thus far,
Lewis (1970) also developed a powerful formal apparatus for conducting semantic theorizing.
Expressions, in this scheme, are assigned “semantic types”, which are either basic or recursively
derived. The interpretation of complex structures is then accomplished by functions that map
one semantic type onto another. In its most familiar version, such a semantic theory will assign
sentences the basic type <t> and singular terms the basic type <e>. Thereafter, monadic
predicates can be treated as having the derived type <<e>, <t>>, which is a function from things
of type <e> to things of type <t>.
Although this formal typology presupposes no particular metaphysics or metasemantics, it’s
common in practice to think of singular terms as denoting entities (e.g., Jessica), and sentences
as denoting truth-values (T and F). With this in place, a monadic predicate like ‘swims’ can be
assigned the semantic function of mapping the entities in its domain to the truth-values in its
range. For instance, ‘Jessica swims’ is mapped to T just in case Jessica (the actual person)
satisfies the predicate ‘swims’; otherwise, F. Likewise, adverbs such as ‘often’ and ‘expertly’ have
the derived type <<<e>, <t>>, <t>>, which is a function that maps the semantic value of
predicates (i.e., functions from <e> to <t>) to the semantic values of sentences (i.e., T or F). Put
somewhat imprecisely, the intuitive idea is that ‘Jessica swims expertly’ is mapped to T just in
case ‘expertly’ is satisfied by the predicate ‘swims’ when applied to ‘Jessica’.
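The type-driven composition just described can be rendered as a toy program (an illustrative sketch only; the entities, denotations, and function names are invented for the example, and no claim is made about Lewis’s own formalism):

```python
# Toy model of the basic semantic types discussed above.
# Type <e>: entities, modeled here as strings.
# Type <t>: truth-values, modeled as bools.
# Type <<e>, <t>>: functions from entities to truth-values.
from typing import Callable

E = str    # type <e>
T = bool   # type <t>

# A hypothetical domain: who swims in this toy model.
SWIMMERS = {"Jessica", "Michael"}

def swims(x: E) -> T:
    """A monadic predicate of derived type <<e>, <t>>."""
    return x in SWIMMERS

def apply_fn(fn: Callable[[E], T], arg: E) -> T:
    """Functional application: combine a predicate with a singular term."""
    return fn(arg)

# 'Jessica swims' is mapped to T just in case Jessica satisfies 'swims'.
print(apply_fn(swims, "Jessica"))  # True
print(apply_fn(swims, "Sadie"))    # False
```

The sketch makes vivid what the recursion buys: once predicates are functions, any expression of type <e> can combine with any expression of type <<e>, <t>>, and higher types (such as the adverb type) are built by the same recipe.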
It’s no exaggeration to say that this general framework is seen as a foundational contribution to
formal semantics, even by generative linguists who have no truck with—or, indeed, no
awareness of—Lewis’s broader projects. Part of what makes Pietroski’s negative contentions so
radical, then, is that he rejects wholesale this now-mainstream approach to semantic theorizing.
In particular, he argues that taking an infinite hierarchy of types as explanatorily primitive is not
only unparsimonious, but leaves wholly unexplained crucial aspects of the natural languages
that children invariably acquire. As a matter of empirical fact, human languages permit the
construction of only a limited class of semantic types, not the infinite range of logically possible
ones. This empirical generalization plainly stands in need of explanation, which a semantic
theory can’t provide if it takes all possible types as available to a speaker right from the start.
One can say that thinkers must have the requisite abstractive powers, given the capacities
required to form thoughts like ABOVE(FIDO, VENUS) & BETWEEN (SADIE, BESSIE, VENUS). But
one needs an account of these alleged powers—which permit abstraction of a tetradic
concept from ABOVE(_, _) and BETWEEN(_, _, _)—to explain how thinkers can form the
concepts that Begriffsschrift expressions reflect. This is not to doubt the utility of Frege’s
logical syntax. On the contrary, his proposals about the architecture of thoughts were major
contributions. But Frege insightfully invented a logical syntax whose intended
interpretation raised important questions that he did not answer.
One can insist that given any polyadic concept with n unsaturated “slots,” a human thinker
can use n-1 saturaters to create a monadic concept, leaving any one of the slots unfilled. But
that leaves the question of how we came to have this impressive capacity. And in chapter
six, I offer evidence that a simple form of conjunction lies at the core of unbounded
cognitive productivity. Our natural capacities to combine concepts are impressive, but
constrained in ways that suggest less than an ideal Fregean mind.
Pietroski recommends a more parsimonious alternative—one that eschews the infinite hierarchy
of semantic types and posits only a very small handful, including, most importantly, monadic
and quasi-dyadic predicates. “The idea [is] that with help from Slang syntax, we can generate an
analog of GIVE(VENUS, _, BESSIE) without saturating GIVE(_, _, _)—much less saturating it twice,
or thrice, and then desaturating once” (103).
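To see the contrast in miniature, here is a rough gloss of the conjunctivist idea in code (my illustration, not Pietroski’s notation): instead of saturating a polyadic function with arguments, two monadic predicates are simply conjoined to yield a new monadic predicate.

```python
# Sketch of conjunctivist composition (invented domain; illustrative only).
# Meanings are monadic predicates; the sole combinatorial operation is
# predicate conjunction, not saturation of a polyadic function.

def conjoin(p, q):
    """Conjoin two monadic predicates into a new monadic predicate."""
    return lambda x: p(x) and q(x)

# Two hypothetical monadic predicates over a toy domain.
brown = lambda x: x in {"Fido", "Bessie"}
cow   = lambda x: x in {"Bessie", "Sadie"}

# 'brown cow' is built by conjunction, with no function-argument saturation.
brown_cow = conjoin(brown, cow)
print(brown_cow("Bessie"))  # True
print(brown_cow("Fido"))    # False
```

Note that nothing in the sketch requires an infinite hierarchy of types: every input and every output is of the same monadic type, which is precisely the parsimony Pietroski is after.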
Nor does his iconoclasm end there. As noted earlier, Lewis’s general framework for semantic
theorizing leaves open a variety of issues in metaphysics and metasemantics. An equally
mainstream approach to natural-language semantics is decidedly more committal on these
points. Donald Davidson’s truth-theoretic semantics (Davidson, 1983), as well as the many
variants of it that have now been developed, identifies the meanings of linguistic expressions
with their extensions. Thus, truth conditions (perhaps relativized to possible worlds) are seen as
the semantic values of sentences; entities are the values of singular terms; sets are the values of
predicates; events in the case of verbs, and so on. Pietroski marshals a battery of arguments
against this familiar approach. We’ll examine these shortly. For now, we note only that this
anti-extensionalism is a core commitment that he shares with Brandom. It is, therefore, a major
plank in the bridge that I aim to build between the two in §§3-4.
§2.2 Meanings are definitely not extensions
Pietroski sees semantics as a naturalistic inquiry into “how Slang expressions are related to
human concepts” (115). Some theorists wish to simply identify meanings with concepts, but
Pietroski points out that this leaves wholly unexplained the psychological processes that
constitute our semantic competence. I’ll argue in §4 that this point applies to Brandom, who
sometimes speaks indiscriminately of meanings, concepts, conceptual contents, intentional
contents, discursive contents, propositional contents, and so on. However, as I’ll emphasize
there, the difference can only be viewed as a substantive theoretical dispute if we let their use of
the folk term ‘meaning’ bewitch us into assuming that they have a common explanatory target,
contrary to fact.
Better, I think, to appreciate the highly theoretical nature of this piece of jargon and the
different—but not thereby incompatible—explanatory goals of the two frameworks in which it
shows up. Thus, we can distinguish meaningB from meaningP and proceed to contemplate how
the two are related, this now being a jointly philosophical and empirical question, not a boring
verbal one. Indeed, this point is made explicitly by both Pietroski and Brandom, in connection
with both ‘meaning’ and another vexed notion—that of ‘concepts’—which notoriously plays a
wide variety of roles in diverse research contexts. Here again, we can speak of conceptsB and
conceptsP, aiming to articulate the relations between them. Likewise for ‘thought’, ‘judgment’,
and other terms, when explicit disambiguation is required. (See also the discussion of the
notorious ‘-ing/-ed’ ambiguity in §4.2.)
As noted above, another popular idea is to identify meanings with extensions (Davidson, 1983).
The central negative contention of CM is that the notions of extension, truth, and denotation
should play no explanatory role in a psychologically-oriented semantics for natural languages
(“Slangs”). Pietroski argues persuasively that the best empirical theory of the relation between
Slang expressions and concepts will not identify meanings with extensions. Indeed, he rejects
even the weaker claim that meanings determine extensions. He proposes, instead, to identify
meanings with something entirely different—in particular, something that can play the
psychological role of relating language to cognition. The candidate he recommends is this:
pronounceable instructions for accessing and assembling concepts. We’ll look at this in some
detail, but let’s first get clear on why Pietroski rejects the truth-conditional orthodoxy that
dominates formal semantics. As we’ll see, there are a great many reasons. To my mind, no one
of these is necessarily decisive, but, taken together, they strongly suggest turning away from the
extensionalist project and starting anew, however much revision this might require. As we go
along, I’ll land a few jabs of my own, but Pietroski’s are by far the weightier blows.
§2.3 Objections to truth-conditional semantics
Pietroski views truth-conditional semantics (henceforth ‘TCS’) as an empirical hypothesis about
Slang expressions, according to which there is a relation—call it “true of”, “refers to”, “denotes”,
or whatever you like—that holds between words and items in the world. TCS views this relation
as being of central importance to our theoretical characterization of natural-language meanings.
In rejecting this hypothesis, one need not deny, of course, that there are words or that there are
objects (e.g., babies and ‘bathwater’). One can, instead, deny that there is a unique relation
between them, let alone one that’s suited to playing the theoretical role of linguistic meaning.
Here is how Pietroski puts the point:
I don’t think ‘sky’ is true of skies (or of sky), much less blue skies or night skies. I don’t
deny that there are chases, and that in this sense, chases exist even if skies don’t. But the
existence of chases doesn’t show that ‘chase’ is true of them… [Likewise], there is no entity
that ‘Venice’ denotes. In this respect, ‘Venice’ is like ‘Vulcan’, even though one can visit
Venice but not Vulcan… I also agree that there is a sense in which there are blue skies, but
no blue unicorns. But it doesn’t follow that ‘sky’ is true of some things, at least not in the
sense of ‘true’ that matters for a theory of truth… [T]here is no call to quantify over skies, in
physics or linguistics. (68)
As the example of ‘Vulcan’ illustrates, words can perfectly well be meaningful without having
extensions. Pietroski’s view is that this holds of all Slang expressions. What’s interesting about
words like ‘Vulcan’ is that they “illustrate the general point that words don’t have extensions”.
The idea isn’t merely that such terms have empty extensions; it’s that they have none at all.
Even if words did have extensions, the latter couldn’t be identified with meanings, if only
because “expressions with different meanings can have the same ‘extension’” (15). Fans of TCS
will typically appeal to “non-actual possibilities” in dealing with this issue. For instance,
‘unicorn’ and ‘ghost’ are said to have the same extension in the actual world, but they differ in
meaning—the reply goes—because they have different extensions in other possible worlds.
Pietroski correctly points out that this “is an odd way to maintain that meanings are extensions.”
If the meaning of a word is not whatever set of things that the word happens to be true of, why
think the meaning is a mapping from each possible world w to whatever set of things that the word
happens to be true of at w? [If] Slang expressions need not connect pronunciations to actual things,
it seems contrived to insist that these expressions connect pronunciations to possible things…
[I]nvoking possible unicorns is contrivance on stilts. (12)
Doubtless, fans of TCS will see this as little more than an ad hominem. We’ll look at stronger
arguments shortly. For now, I want to emphasize that this point—or, in any case, a version of
it—carries more weight than is commonly appreciated. Let me take a brief aside to develop it in
my own terms.
The intuitive considerations that motivate TCS (e.g., for introductory semantics students) almost
always have to do with objects that are available for perceptual inspection. (‘David’ refers to this
guy, ‘my desk’ refers to that thing, and so on.) This serves to illustrate, at the level of pre-theoretical intuition, how linguistic expressions “hook onto the world”—namely by way of
perceptual contact (indeed, literal contact, in the case of haptic perception). Shortly thereafter,
the details of one or another formal theory are introduced, giving the student little time to reflect
on how far the initial illustration can plausibly generalize. (Spoiler alert: not very far!) If
philosophical questions happen to arise about the status of these “reference” and
“correspondence” “relations”—e.g., with regard to empty names and predicates (‘Vulcan’,
‘unicorn’, etc.)—the instructor can use the opportunity to explore various technical proposals for
dealing with such “special cases”—e.g., Russell’s theory of names as disguised descriptions, or
the formal apparatus of possible-worlds semantics. Attention is thus deflected away from how
massive the intuitive problem really is. Here’s a much-needed corrective.
Consider for a moment the vast range of expressions that we can readily produce and
comprehend, and reflect on how vanishingly few of these have anything much to do with what’s
going on in physical reality, let alone with things that we can perceptually inspect in any
intuitive sense. We speak of Santa and his elves, gods and demons, goals and fears,
opportunities and temptations, aliens and chem-trails, reptiles and unicorns, futures and
fictions, numbers and functions, nouns and verbs, fonts and meanings, haircuts and field-goals,
stocks and derivatives, mergers and monopolies, economies and governments, boson fields and
spin-foams, black holes and electrons, Blacks and whites, Jews and Frenchmen, London and
Moscow, classes and genders, protests and stereotypes, jocks and nerds, bits and bytes, poems
and operas, humor and beauty, and even the possibility (albeit dim) of true liberatory justice.
Appreciating the sheer scope of the phenomenon to be explained renders, to my mind, utterly
implausible the strategy of taking direct perceptual contact with the world as our model of how
language relates to reality in general. Moreover, the total lack of convergence that we find
amongst metasemanticists when we go looking for a metaphysical account of truth and
reference—conceived of, again, as a Very Special sort of natural relation—strikes me as further
grounds for abandoning the project of extensional semantics immediately and forthwith. It
helps, of course, that Pietroski supplies a powerful alternative framework for doing semantics.
And it certainly doesn’t hurt, I reckon, that Brandom complements this with an independently
attractive (“deflationist”) account of truth and reference.
All that aside, Pietroski has a further, more powerful argument against invoking non-actual
possibilities for the purpose of individuating meanings. He makes use of Kripke’s contention
that the non-existence of unicorns in the actual world implies their non-existence in all other
possible worlds (Kripke, 1980). Of course, there may well be creatures in other possible worlds
that look a lot like what we imagine unicorns would look like. But they would not thereby be
unicorns, and our word ‘unicorn’ would not thereby be true of them. If that’s correct, then
‘ghost’ and ‘unicorn’ aren’t just co-extensive in our world; they’re co-extensive in every possible
world. Thus, no identification of meanings with extensions, actual or possible, will distinguish
the meanings of those two words. Likewise for all of the related cases—empty names, defective
predicates, necessary falsehoods, and so on.
One might reply by rejecting Kripke’s semantic and metaphysical assumptions, and adopting
instead a Lewisian counterpart theory, but Pietroski points out several problems for this strategy
as well. Adopting the terms ‘LUNICORN’ for Lewisian unicorn-lookalikes and ‘KUNICORN’ for
whatever it is that Kripke has in mind, he makes the following powerful retort.
We can grant that some theorists sometimes use ‘unicorn’ to express the technical concept
LUNICORN. But if ‘unicorn’ can also be used to express the concept KUNICORN, then it
seems like contrivance to insist that the Slang expression has a meaning that maps some
contexts onto the extension of LUNICORN and other contexts onto the extension of
KUNICORN. If we assume that words like ‘possibly’ have extensions, then perhaps we
should specify the meanings of such words in terms of a suitably generic notion of world
that allows for special cases corresponding to metaphysical and epistemic modalities; cp.
Kratzer. But in my view, theorists should not posit (things that include) unicorns in order
to accommodate correct uses of ‘Possibly/Perhaps/Maybe unicorns exist’ or ‘There may be
unicorns’; and likewise for squarable circles.
Thereafter, the dialectic turns to matters that we need not enter into here. Suffice it to say that,
even if this worry about fine-grained meanings can ultimately be defused, TCS would still face
Pietroski’s more technical (and potentially more damaging) objections. These include matters
pertaining to liar-sentences, as well as the more widespread and natural phenomenon of event
descriptions. Sadly, these too go beyond the scope of our discussion. One argument that I do
want to say a bit more about, though, concerns polysemy, where Pietroski’s view of the matter
finds wide acceptance among generative linguists—though, notably, not philosophers of
language (see, e.g., Michael Devitt’s paper in this issue).
Following Chomsky (2000), Pietroski points out that ‘water’ is polysemously used to talk about
many substances—those found in wells, rivers, taps, etc.—nearly all of which have lower H2O
contents than substances that, at least prima facie, are not water, including coffee, tea, and cola
(CM, 21). This presents a challenge to theories that view ‘water’ as bearing a reference relation
to (all instances of?) the natural kind water, whose metaphysically essential property is being
composed of H2O molecules (Kripke, 1980). If coffee, tea, and cola all have more H2O in them
than most ordinary instances of water, then it’s not clear why ‘water’ doesn’t bear the reference
relation to them, rather than to the stuff in the local rivers and wells.
A related consideration has to do with predicate conjunction. The word ‘France’ can be used in
expressing either of two concepts: FRANCE:BORDER and FRANCE:POLIS. The border is hexagonal
and the polis is a republic. But, Pietroski points out, the polysemy of ‘France’ “does not imply
that something is both hexagonal and a republic, much less that ‘France’ denotes such a thing”
(74). Similarly, while ‘London’ can be used to talk about “a particular location or a polis that
could be relocated elsewhere,” it is plain that “no location can be moved, and no political
institution is a location.” Pietroski concludes that “no entity is the denotation (or ‘semantic
value’) of ‘London’; the ordinary word has no denotation” (73, emphasis mine).
§2.4 Meanings as pronounceable instructions
Let’s turn now to Pietroski’s positive views. As noted earlier, the main goal of CM is to defend
the hypothesis that linguistic meanings are “pronounceable instructions for how to access and
assemble concepts” (1). More specifically,
each lexical meaning is an instruction for how to access a concept from a certain address,
which may be shared by a family of concepts. … A Slang expression Σ can be used to
access/build/express a concept C that is less flexible than Σ—in terms of what Σ can be used
to talk about, and how it can combine with other expressions, compared with what C can be
used to think about and how it can combine with other concepts— since Σ might be used to
access/build/express a related but distinct concept C* .
Unpacking Pietroski’s hypothesis requires getting clear on the three key notions of
pronounceable instructions, compositional assembly, and conceptual types. Each is more
challenging than the last, so we’ll start with instructions and work our way up.
§2.4.1 Pronounceable instructions
An utterance of a sentence is a spatiotemporally located event, in which a speaker produces a
physical signal. The latter serves, on Pietroski’s view, as an instruction for the hearer’s FL to
perform a computational procedure.17 The instruction can be carried out by any hearer whose I-language is sufficiently similar to the speaker’s. The acoustic properties of an utterance, upon
being transduced, trigger an early perceptual constancy effect, whereby a dedicated module
imposes phonological categories on the neural encoding of the acoustic blast. These cognitive
operations serve, in turn, as instructions for the further segmentation of the phonological units
into syllables and eventually into morphemes and other lexical items. The latter, on Pietroski’s
view, are best seen as instructions for accessing (“fetching”) individual concepts, which he
conceives of as atomic units of one or another language of thought. I say “one or another”
because his view leaves open the possibility, which he goes on to explore and even endorse, that
there are many languages in which the mind conducts its information-processing. We’ll return
to this point in connection with Pietroski’s discussions of Frege (§2.5).
Importantly, Pietroski maintains that concepts reside in semantic “families”, which have their
own “addresses” in a broader cognitive architecture. This is a large part of his explanation of the
aforementioned phenomenon of polysemy. The idea is that one and the same lexical item can be
an instruction for fetching “a concept from a certain lexical address … shared by a family of
concepts” (8). Because a lexical instruction points only at an address, rather than a specific
concept, it’s left open for downstream processing routines to determine which particular
concept from the indicated address/family is “relevant” in the present context.
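The fetch-and-select picture can be given a toy rendering (entirely illustrative; the lexical “addresses”, the concept labels, and the selection rule are my inventions, not Pietroski’s):

```python
# Toy illustration of lexical items as fetch-instructions.
# A lexical address houses a family of concepts; executing the instruction
# retrieves the family, and downstream processing selects a member.

# Hypothetical lexicon: each word's address points to a concept family.
LEXICON = {
    "water":  ["WATER:LIQUID", "WATER:KIND"],
    "France": ["FRANCE:BORDER", "FRANCE:POLIS"],
}

def fetch(word):
    """Execute the lexical instruction: return the family at the word's address."""
    return LEXICON[word]

def select(family, context_cue):
    """Crude stand-in for context-sensitive selection among family members."""
    return next(c for c in family if context_cue in c)

# 'France' fetches a family; context then picks out one concept.
family = fetch("France")
print(select(family, "POLIS"))   # FRANCE:POLIS
print(select(family, "BORDER"))  # FRANCE:BORDER
```

The `select` function here is a deliberate placeholder: as the surrounding discussion notes, the real work of context-sensitive selection is done by nondemonstrative pragmatic reasoning, which no simple lookup rule can capture.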
This, of course, raises deep and difficult questions about how hearers manage this latter step—
i.e., reliably accessing the relevant concept(s) in a given context, rather than the irrelevant ones
from the same conceptual family. What psychological mechanisms select just one of a family of
concepts residing at a common lexical address? In large part, Pietroski leaves this issue open—
justifiably so, given everything else he’s juggling. But it’s worth remarking in the present context
that the mechanisms of this kind of selection are widely agreed to involve—indeed, to require—
precisely the kind of nondemonstrative pragmatic reasoning that Brandom has argued to be
constitutive of conceptual contents.
17 “I hope the analogy to elementary computer programs, which can be compiled and implemented, makes the
operative notion of instruction tolerably clear and unobjectionable in the present context. … Instructions can take
many forms, including strings of ‘0’s and ‘1’s that get used—as in a von Neumann machine—to access other such
strings and perform certain operations on them. … And instead of arithmetic operations that are performed on
numbers, one can imagine combinatorial operations that are performed on concepts” (108).
§2.4.2 Assembling concepts
Turn now to the second key notion in Pietroski’s main hypothesis—viz., the compositional
assembly of concepts. In general, instructions for assembling something can vary along any
number of dimensions. Some are clear; some aren’t. Some are detailed; others are vague. Some
are simple; others are complex—i.e., composed of simpler instructions. Moreover, not
everything to which an instruction is presented is capable of carrying it out. Some computers
can’t run the software that others can. Some chefs can’t bake the cakes that others have no
trouble baking. And some proteins (or cells) can follow genetic instructions that others simply
can’t. Lastly, the products of successfully carrying out instructions can vary widely. The same
student, with the same instructions, can succeed or fail on an exam, depending on whether
they’ve slept the night before. Likewise, a novice barista will generally make worse coffee
with low-quality ingredients than with high-quality ones, successfully following the same
instructions both times.
Given that the semantic module of FL is assumed to have a stable set of processing routines, carried
out in a proprietary representational format, it follows that it won’t be able to process just any
old instruction, but only a restricted kind. Likewise, it will be capable of assembling only a
limited class of outputs. The question, then, is what kinds of instructions the semantic module
is capable of implementing and what sort of structures it’s capable of building.
Many theorists aim at capturing something called “compositionality”—a piece of theoretical
jargon that, perhaps more than most, has been worn smooth by a thousand tongues (to use
Wilfrid Sellars’s clever phrase). Of the many ways of cashing it out, Pietroski maintains that
what’s required for an avowedly cognitivist project is that the meanings of lexical items compose
in ways that suitably mirror the structure of complex concepts. Thus, having identified the
meanings of lexical items with instructions to fetch individual concepts, he argues that these
instructions compose, forming complex instructions, with some functioning as (detachable)
components of others. These semantic instructions—what Pietroski calls Begriffsplans—are
responsible for the assembly of concepts that meet two constraints. First, they must be suited to that
specific type of instruction. While other kinds of human concepts might be assembled by non-linguistic means, Begriffsplans can only assemble concepts of a very specific nature (to be
spelled out shortly). Second, in keeping with the “mirroring” constraint (my word, not his), the
complex concepts that Begriffsplans assemble must bear the same part-whole relationships to
one another as do the Begriffsplans themselves.
Laying out some of the specifics of the Begriffsplans that Pietroski posits will put us in a
position to better appreciate his views on concepts. The clearest case of this pertains to
instructions for predicate conjunction. Pietroski takes this to be an absolutely central aspect of
linguistic concept assembly, in part because he holds that the kinds of concepts that the human
FL is capable of assembling are uniformly predicative. In saying this, he means to deny outright
that natural languages (“Slangs”) allow us to access singular concepts. Such concepts do exist,
he thinks, but they can’t be fetched by Begriffsplans. Indeed, he holds that the only predicative
concepts FL can fetch, and hence assemble, are limited to just the monadic and the quasi-dyadic, with higher adicities receiving a different analysis. These two types of concept
correspond to two flavors of predicate conjunction: M-junction and D-junction. Here’s how
Pietroski characterizes the overall process.
If biology somehow implements M-junction and D-junction, one can envision a mind with
further capacities to (i) use lexical items as devices for accessing simple concepts that can
be inputs to these operations, and (ii) combine lexical items in ways that invoke these
operations. … Suppose that combining two Slang expressions, atomic or complex, is an
instruction to send a pair of corresponding concepts to a “joiner” whose outputs can be
inputs to further operations of joining. Imagine a mind—call it Joyce—that has some
lexical items, each with a long-term address that may be shared by two or more
polysemously related concepts. Joyce also has a workspace in which (copies of) two
concepts can be either M-joined or D-joined to form a single monadic concept, thereby
making room for another concept in the workspace, up to some limit. Joyce can produce
and execute instructions like fetch@‘cow’; where for each lexical item L, the instruction
fetch@L is executed by copying a concept that resides at the long-term address of L into the
workspace. Joyce can also produce and execute instructions of the forms M-join[I, I′] and
D-join[I, I′]; where I and I′ are also generable instructions. An instance of M-join[I, I′] is
executed by M-joining results of executing I and I′, leaving the result in the workspace, and
likewise for an instance of D-join[I, I′].
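Pietroski likens these instructions to elementary computer programs (see fn. 17). To make the fetch-and-join machinery concrete, here is a toy Python rendering of Joyce’s routine. It is my own illustration, not Pietroski’s formalism: monadic concepts are modeled as one-place predicates, lexical addresses as lists of polysemously related predicates, and M-junction as the operation that yields a concept applying to an entity just in case both constituents do.

```python
# Toy model of Joyce's instruction execution (illustrative only; the
# lexicon entries and dict-based "entities" are hypothetical).

LEXICON = {
    # A lexical item's long-term address: a family of polysemously
    # related concepts, here just singleton families for simplicity.
    "cow":   [lambda e: e.get("kind") == "cow"],
    "brown": [lambda e: e.get("color") == "brown"],
}

def fetch(lexical_item, select=0):
    """fetch@L: copy a concept residing at L's long-term address into
    the workspace; 'select' stands in for whatever downstream pragmatic
    process picks the contextually relevant concept from the family."""
    return LEXICON[lexical_item][select]

def m_join(c1, c2):
    """M-junction: the resulting monadic concept applies to e iff
    both constituent concepts apply to e."""
    return lambda e: c1(e) and c2(e)

# Executing the complex instruction M-join[fetch@'brown', fetch@'cow']:
brown_cow = m_join(fetch("brown"), fetch("cow"))
print(brown_cow({"kind": "cow", "color": "brown"}))  # True
print(brown_cow({"kind": "cow", "color": "black"}))  # False
```

Note that the output of `m_join` is itself a one-place predicate, so it can feed further joinings, matching the requirement that the “joiner’s” outputs can be inputs to further operations of joining.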
Having introduced two basic types of composable Begriffsplans—one for fetching concepts like
DOG(_) and one for assembling these into complex structures—Pietroski adds four other types of
basic semantic operation:
(i) a limited operation of existential closure
(ii) a mental analog of relative clause formation (weaker than λ-abstraction)
(iii) the introduction of concepts like GIVE(_) on the basis of GIVE(x, y, z)
(iv) the introduction of thematic concepts—e.g., AGENT(_), PATIENT(_), RECIPIENT(_)
[G]iven two monadic concepts, the operation of M-junction yields a third such concept that
applies to an entity e if and only if each of the two constituent concepts applies to e. (32) …
In short, Slangs let us access and assemble monadic [and some limited dyadic] concepts
that can be conjoined, indexed, polarized, and used as bases for a limited kind of
abstraction.
We’ll look at several of these operations in more detail below, but the following passage contains
an initial illustration of the kinds of structures that this system can assemble.
My claim is not that ‘gave a dog a bone’ is an instruction to build [just any] concept with
which one can think about things that gave a dog a bone. That instruction might be
executed by building the concept ∃y∃z[GAVE(x, y, z) & BONE(y) & DOG(z)], which has a
triadic constituent. My claim is that ‘gave a dog a bone’ is an instruction for how to build an
M-junction like [[GIVE(_)^PAST(_)]^∃[PATIENT(_, _)^BONE(_)]]^∃[RECIPIENT(_, _)^DOG(_)], which has only an occasional dyadic concept that has been “sealed in.”
This passage usefully contrasts the conceptual structures assembled by FL with those that
are often assumed by linguists—wrongly, by Pietroski’s lights—to be available to humans
antecedent to the development of language.
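The part-whole mirroring between instructions and the concepts they assemble can be made vivid with a toy encoding. The nested structure below is my own illustrative rendering of the instruction for ‘gave a dog a bone’; the labels, tuple encoding, and treatment of the “sealed in” dyadic concepts as D-joins are all simplifying assumptions, not Pietroski’s notation.

```python
# Illustrative only: the complex instruction for 'gave a dog a bone',
# rendered as a nested tuple whose part-whole shape mirrors the
# M-junction [[GIVE^PAST]^E[PATIENT^BONE]]^E[RECIPIENT^DOG].

gave_a_dog_a_bone = (
    "M-join",
    ("M-join",
        ("M-join", ("fetch@", "give"), ("fetch@", "past")),
        ("D-join", ("fetch@", "patient"), ("fetch@", "bone"))),
    ("D-join", ("fetch@", "recipient"), ("fetch@", "dog")),
)

def constituents(instr):
    """Enumerate the detachable sub-instructions of a complex instruction."""
    yield instr
    if instr[0] in ("M-join", "D-join"):
        for part in instr[1:]:
            yield from constituents(part)

# fetch@'bone' occurs as a detachable component of the whole
# instruction, mirroring BONE(_)'s place in the assembled concept:
assert ("fetch@", "bone") in list(constituents(gave_a_dog_a_bone))
```

The point of the encoding is just that the instruction’s sub-instructions stand in the same part-whole relations as the sub-concepts of the concept it assembles, which is the “mirroring” constraint at work.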
§2.4.3 Concepts, predicative and sentential
We are now in a position to ask more specific questions about Pietroski’s third key notion—viz.,
that of a concept. As we’ve already seen, he takes these to be expressions in a compositional
language of thought, some of which can be assembled by the semantic module of FL. But,
however they might be assembled, they are the representations that allow us to think about the
world.
[C]oncepts have contents that can be described as ways of thinking about things; cf. Evans.
A concept that can be used to think about something as a rabbit, whatever that amounts to,
has a content that we can gesture at by talking about the concept type RABBIT. An instance
of this type is a mental symbol that can be used to think about a rabbit as such, or to
classify something—perhaps wrongly—as a rabbit; see Fodor. A concept of the type RABBIT-THAT-RAN, which can be used to think about something as a rabbit that ran, is presumably a
complex mental symbol whose constituents include an instance of RABBIT. A thought can be
described as a sentential concept that lets us think about (some portion of) the universe as
being a certain way. Thoughts of the type A-RABBIT-RAN can be used to think about the
world as being such that a rabbit ran. (4)
As the remarks at the end of this passage indicate, Pietroski takes thoughts to be a special kind
of concept—namely, a sentential concept. This is important to highlight, in view of its relation to
a broader point about sentential meanings.
Pietroski is skeptical that “Slangs generate sentences as such.” The traditional notion of a
sentence, as a unity of a subject and a predicate, has been roundly abandoned in contemporary
linguistics. While the notions of “subject” and “sentence” have a place in subject-predicate
conceptions of thought, Pietroski points out that they “may have no stable place” in
contemporary scientific grammars (114).
Linguists have replaced “S” with many phrase-like projections of functional items that
include tense and agreement morphemes, along with various complementizers. This raises
questions about what sentences are, and whether any grammatical notion corresponds to
the notion of a truth-evaluable thought. But theories of grammatical structure—and to that
extent, theories of the expressions that Slangs generate—have been improved by not
positing a special category of sentence. So while such a category often plays a special role in
the stipulations regarding invented formal languages, grammatical structure may be
independent of any notion of sentence. (61)
Accordingly, Pietroski suspects that talk of “grammatical subjects” is just a roundabout way of
“saying that tensed clauses have a ‘left edge constituent’ that somehow makes them complete
sentences—whatever that amounts to—as opposed to mere phrases like ‘telephoned Bingley’”
(87). Rather than clarifying the notion of a “complete sentence,” Pietroski points out that talk of
grammatical subjects presupposes it.
How, then, to characterize sentences? Naturally, Pietroski does not appeal to a distinction
between sentential truth conditions and subsentential satisfaction conditions. Instead, he
develops a novel version of predicativism, according to which all of the concepts assembled by
Begriffsplans are predicative, in the sense that they all have a classificatory function. This
includes concepts that are fetched by linguistic expressions like ‘Jessica’, ‘David Pereplyotchik’,
and ‘Reykjavík’. (Yes, the Reykjavík.)
So far, the view on the table is a version of the familiar predicativist position that was introduced
by Quine (1970), defended by Burge (1973), and reanimated in contemporary discussions by the
work of Delia Graff Fara (2005). Pietroski goes on, however, to make a quite novel claim—
namely, that the concepts assembled by sentence-sized Slang expressions are also predicative.
The idea is that familiar subsentential predicates are assembled, largely via predicate
conjunction, and then a new mental operation (⇑ or ⇓) converts the results into a sentential
predicate—what Pietroski calls a “polarized concept”. Here is how he defines these: “Given any
concept M, applying the operation ⇑ yields a polarized concept, ⇑M, that applies to each thing if
M applies to something” (30). For instance, if RABBIT applies to something, then ⇑RABBIT applies
to each thing and ⇓RABBIT applies to no-thing. We will return to this topic in §3, when we
compare this proposal with the inferentialist account of sentence meaning.
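Pietroski’s definition of polarization has an algorithmic flavor that a toy rendering can bring out. The Python sketch below is my own gloss, not his formalism; in particular, relativizing the operations to an explicit domain of entities is my simplifying assumption, introduced only to make the all-or-nothing application pattern computable.

```python
# Toy rendering of the polarization operations (my gloss, not
# Pietroski's notation). A monadic concept is a one-place predicate;
# polarizing it yields a concept whose application is all-or-nothing.

def up(concept, domain):
    """Up-arrow: the polarized concept applies to each thing if the
    input concept applies to something (and to nothing otherwise)."""
    exists = any(concept(x) for x in domain)
    return lambda e: exists

def down(concept, domain):
    """Down-arrow: the dual; applies to each thing if the input
    concept applies to nothing."""
    exists = any(concept(x) for x in domain)
    return lambda e: not exists

rabbit = lambda x: x == "rabbit"
domain = ["rabbit", "fox"]

# Since RABBIT applies to something, up(RABBIT) applies to each thing
# and down(RABBIT) applies to no thing:
print(up(rabbit, domain)("anything at all"))    # True
print(down(rabbit, domain)("anything at all"))  # False
```

The upshot, on this gloss, is that a “sentential” concept is still predicative: it classifies, but trivially, applying either to everything or to nothing.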
Recall that semantic instructions (Begriffsplans) have “mechanical execution conditions”.
Because Pietroski takes Begriffsplans to be linguistic meanings, it follows for him that
“meanings satisfy demanding compositionality constraints.” Such constraints, he argues, permit
the assembly of concepts that are better suited for their role in language use than for the
epistemic role of “fitting the world”. This important upshot of Pietroski’s view bears on his
rejection of both Davidson’s extensional semantics and Lewis’s unrestricted type-theoretic
approach to natural language (§2.1). For, although he leaves it open that we might build truth-evaluable thoughts as a side-effect of language processing, he denies that “meanings are
instructions for how to build concepts that exhibit classical semantic properties” (115). Likewise,
he suspects that “most natural concepts [do not] have extensions; cp. Travis… if only because of
vagueness; cp. Sainsbury” (9). Hence, the Begriffsplans that Pietroski identifies with meanings
“make no reference to the things we usually think and talk about” (115). If correct, this
conclusion is just one more nail in the coffin of the extensionalist project.
§2.5 Pietroski on Fregean thoughts and concepts
Common to both Pietroski and Brandom is a deep engagement with the work of Frege.
However, as we’ll see presently, the lessons that Pietroski draws from Frege are not those that
one might expect. In particular, the formal device that he takes over from Frege’s semantics is
not that of function application, as is common; rather, he emphasizes Frege’s immensely useful
notion of concept invention—something you don’t hear much about in discussions of Frege, at
least amongst linguists.
As noted earlier, Pietroski holds that there are multiple languages of thought—i.e., distinct formats of
concept application. In his discussions of Frege, he advances the hypothesis that there are, in
fact, at least two such languages. The first one, in order of evolutionary history, may well have a
Fregean semantics and include expressions of type <t>. The second one, which only came in
with the evolution of natural language, consists of concepts that were invented, or introduced, in
a Fregean sense, on the basis of the older ones.
[N]atural sentences of type <t> may belong to languages of thought that are
phylogenetically older than Slangs. Expressions of these newer languages may be used to
build complex monadic concepts, perhaps including some special cases that are closely
related to natural thoughts of type <t>. In which case, the very idea of a truth-conditional
semantics for a human language may be fundamentally misguided. (114)
Because Pietroski treats the new type of concept as being invariably predicative—i.e.,
functioning semantically to classify things into categories, not to denote them individually—he
calls such concepts “categorical”. The older type of concept, which participates in thoughts of
type <t>, includes singular denoting concepts and predicates of any adicity. On account of their
semantic function of relating items to each other, Pietroski calls such concepts (and the thoughts
they participate in) “relational”.
Though I see its significance, I’m not, myself, a huge fan of the ‘categorical’/‘relational’
terminology. Adverting to their historical roles, rather than their internal logic, I’ll call these
languages Olde Mentalese and New Mentalese for the remainder of the discussion. Here’s how
Pietroski casts the theoretical relations between them.
Frege assumed that we naturally think and talk in a subject-predicate format, and that we
need help—[e.g.] his invented Begriffsschrift—in order to use our rudimentary capacities
for relational thought in systematic ways… The idea was that a thought content can be
“dimly grasped,” in some natural way, and then re-presented in a more logically
perspicuous format that highlights inferential relations to other contents… I think this is
basically right: our categorical thoughts are governed by a natural logic that lets us
appreciate certain implication relations among predicates; but our relational concepts are
related in less systematic ways. We use relational concepts in natural modes of thought.
(95-6)
The distinction between Olde Mentalese and New Mentalese allows Pietroski to clarify his
perspective, contrasting it with Frege’s. Here, too, it’s instructive to quote at length.
Frege introduced higher-order polyadic analogs of monadic concepts. In this respect, my
project is the converse of his. Frege invented logically interesting concepts, and he viewed
monadicity as a kind of relation to truth, as part of a project in logic that prescinds from
many details of human psychology. I think humans naturally use concepts of various
adicities to introduce logically boring predicative analogs. But I adopt Frege’s idea that
available concepts can be used to introduce formally new ones, and that this can be useful
for certain derivational purposes. Frege “unpacked” monadic concepts like NUMBER(_), in
ways that let him exploit the power of his sophisticated polyadic logic to derive arithmetic
axioms from (HP). I am suggesting that Slangs let us use antecedently available concepts—
many of which are polyadic—to introduce concepts like CHASE(_) and GIVE(_), which can be
combined in simple ways that allow for simple inferences like conjunction reduction. But
the big idea, which I am applying to the study of Slangs, is Fregean: languages are not mere
tools for expressing available concepts; they can be used to introduce formally new
concepts that are useful given certain computational capacities and limitations. This is why
I have dwelt so long on Frege’s project. For while the idea of concept introduction was
important for Frege, it is not the aspect of his work that semanticists typically draw on.
The gory details of Frege’s technical devices for concept introduction are, mercifully, beyond our
present needs; only a few key points are relevant. One is that introducing concepts need not be
seen on the model of explicit definition. Rather, Pietroski highlights Frege’s proposal for a
second way of introducing concepts—viz., by inventing them. Similarly, although analyzing a
concept has often been seen as breaking it down into its more basic definitional constituents,
Pietroski joins Fodor (1970) in rejecting the idea that lexicalized concepts will generally admit of
such analytic definitions. Nevertheless, there is an alternative way of analyzing concepts, which
Pietroski characterizes as “a creative activity” (emphasis mine).
Given a very fine-grained notion of content, or thought-equivalence, analysis may not be
possible. But Frege employed at least two notions of content: one based on his notion of
sense (Sinn), and another according to which thoughts are equivalent if each follows from
the other. Given the latter notion, or Lewis’s characterization of contents as sets of logically
possible worlds, one can say that our current representations are not yet perspicuous. We
can use our concepts to ask questions that lead us to reformulate the questions in ways that
allow for interesting answers. From this perspective, analysis can be a creative activity
whose aim is not to depict our current representations…
It’s in virtue of our ability to invent new concepts that we, qua humans endowed with a specific
FL, have invented the monadic and quasi-dyadic concepts that arise only for language use. This
includes not only monadic event-predicates like GIVE(_), invented on the basis of the older
triadic concept GIVE(x, y, z), but also—importantly for Pietroski’s purposes, though not ours—
thematic concepts such as AGENT(_), PATIENT(_), and RECIPIENT(_).
§2.6 Summary
The generativist methodology that animates Pietroski’s inquiry leads him to a number of
strikingly original claims about concepts and a detailed theory of meanings. Treating the latter
in a resolutely naturalist fashion, he maintains that their theoretical role is to mediate between
pronunciations and concepts—i.e., to effect the psychological operations that constitute the
interface between language (FL) and the “conceptual-intentional system” (to use Chomsky’s
coinage). Although meanings facilitate the assembly of concepts, which have intentional
contents, Pietroski holds that meanings are neither concepts nor their contents.
On this view, the relation between truth and conceptual/intentional content is “quite
complicated and orthogonal to the central issues concerning how meanings compose” (115).
This, among the many other reasons surveyed above, leads Pietroski to abandon Davidson’s
project of extensional truth-conditional semantics. Moreover, the goal of explaining our access
to a productive hierarchy of concepts, rather than merely stipulating it, underlies his rejection of
the type-theoretic approach championed by Lewis (1970)—one of the many disagreements that
we’ll look at in the next section.
The semantic theory that satisfies Pietroski’s methodological commitments—as well as the
compositionality constraints that he argues follow from it—treats meanings as composable
instructions for concept assembly. The instructions are “composable” in the sense that their
basic constituents—namely, fetch@ and join[I, I’]—can enter into part-whole relations to one
another. Moreover, as noted earlier, the larger structures they compose will, in a definite sense,
mirror those of the concepts that the instructions assemble.
Having furnished empirical evidence for the idea that these “Begriffsplans” reduce largely to two
flavors of predicate conjunction, Pietroski adopts a strong version of predicativism, according to
which all of the concepts that natural language allows us to access and assemble are predicative.
This includes not only the concepts fetched by linguistic expressions that have traditionally
been classed as predicates, but also those that have generally been seen as differing in some
important respect—including singular terms and, more strikingly, even sentences. The
conceptual predicates that meanings allow us to access and assemble are thus all either monadic,
dyadic (in a restricted sense), or “polarized”, where the latter kind is assembled by sentence-like
linguistic expressions, using specialized mental operations, ⇑ and ⇓, to “polarize” concepts.
Importantly, the resulting conceptual structures are not necessarily ones that best “fit the
world”, and they’re not even the only ones we can deploy in thought. But, if Pietroski is correct,
they are the only ones that FL can assemble.
Denying that the concepts involved in language use have denotational properties and relational
structures (of arbitrary adicity) leaves open whether other concepts might have these features.
As we saw, Pietroski hypothesizes that there are in fact such concepts, and that they belong to a
phylogenetically older language of thought than the one FL allows us to access—what I’ve
dubbed ‘Olde Mentalese’. Olde thoughts might have a subject-predicate form, a Fregean
semantics, and belong to the semantic type <t>.
Pietroski goes on to make novel use of Frege’s notion of concept invention in explaining the
(non-definitional) mental introduction of new concepts on the basis of the Olde ones—
specifically, the ones that FL allows us to access/assemble (New Mentalese). This psychological
process, he argues, serves to introduce GIVE(_) on the basis of GIVE(x, y, z), as well as novel
thematic concepts such as AGENT(_), PATIENT(_). These, in turn, participate in building
polarized sentential concepts, such as ⇑RABBIT, which “applies to each thing if RABBIT applies to
something”. In the course of assembling such concepts, it may happen—but only as a side-effect
(fortuitous or otherwise)—that we also token thoughts of Olde Mentalese. But the details of how
Olde Mentalese thoughts function are, Pietroski rightly holds, beyond the scope of a naturalistic
semantic inquiry into human language.
§3. Prospects for Ecumenicism
Introduction
We’ve now surveyed the core commitments of two large-scale theoretical frameworks in the
philosophy of language and seen some of the ways in which they play out in the realm of
semantics, including in detailed analyses of various linguistic constructions. It may appear that
the two views are so different in substance and overall methodology that a conversation between
the two is unlikely to bear much fruit. In fact, I suspect this is a large part of why so few
conversations of this kind ever take place. In the present section, I argue for the contrary
perspective, outlining an ecumenical approach that seeks to integrate the two in a variety of
ways. In surveying what I take to be significant points of convergence—which then serve as
background for constraining residual disputes—I rebuff various superficial objections to the
possibility of integration. In each case, I show how the theoretical differences that they point to
can be reconciled without doing much (if any) violence to either view.
§3.1 Truth, reference, and other non-explanatory notions
One obvious shared commitment between Brandom and Pietroski—indeed, the one that most
clearly motivates the present enterprise—is their common rejection of truth-conditional semantics.
We’ve seen a lot about this already, but let’s review the key points and add some new ones.
Pietroski surveys a battery of arguments against Davidson’s proposal, including its more recent
incarnations in possible-worlds semantics. These include troubles with (i) empty names, (ii) co-extensive but non-synonymous expressions, (iii) polysemy, (iv) compositionality, (v) liar
sentences, and (vi) event descriptions (inter alia). Brandom’s skepticism is more foundational.
On his view, an explanatory strategy that takes truth and reference—conceived of as
“naturalizable” word-world relations—as fundamental semantic relations will require a
metasemantics that is, at best, optional, at worst, incoherent, and, at present, non-existent.
Although he doesn’t pretend to have supplied a knock-down argument against it, the flaws he
identifies in the various attempts to work out this strategy strike me as fatal. Coupled with his
development of a powerful alternative—a large-scale framework constructed from the top down,
with pragmatics taking an unconventional leading role—as well as his well-motivated treatment
of the notions of truth and reference, Brandom deals a serious blow to the mainstream approach
of subordinating pragmatics to semantics.
Brandom’s main reason for pursuing a normative metasemantics is the one we saw at the
outset—i.e., the inability of a purely descriptive (“naturalistic”) account to capture the normative
notion of (in)correct rule-following. But this is not his only argument, and it’s worth taking a
moment to spell out what strikes me as a potentially more powerful consideration—one that, in
making fewer assumptions, can appeal to theorists of a broader stripe.
All extant attempts at “naturalizing” meaning, content, representation, and the like, have in
common their insistence on employing only alethic modal notions in their analyses. These
include dispositions (Quine), causation (Stampe), causal covariance (Locke), natural law
(Dretske), nomic possibility (e.g., Fodor’s “asymmetric dependence”), and even appeals to non-normative teleology (Cummins and, independently, Millikan). Brandom points out that, even if
these could account for ordinary empirical concepts, it’s not at all clear how they might be
extended to the very concepts that appear in the analyses—i.e., the alethic modal notions just
listed (among others).
While it’s possible to envision, if only dimly, how something in a person (or a brain) can causally
co-vary with—or bear nomological relations to—water, mountains, and even crumpled shirts,
there’s simply no naturalistic model for envisioning the relation between, on the one hand, the
words ‘possible’, ‘disposition’, or ‘asymmetric dependence’, and, on the other hand, any
particular set of things, events, or phenomena out in the natural world. The same is arguably
true for logical, mathematical, semantic, and deontic vocabulary, as previously noted. (Recall
the fonts, functions, and fears from §2.3.) Indeed, the metasemantics doesn’t seem to get us
much farther than ‘rock’ and ‘stick’. And the experts seem to have given up, since the late 1990s,
on the hard work of bringing ‘fail’ and ‘decisively’ into the fold. One is well-advised not to
bank on any striking new developments in this area, unless, of course, something dramatic
happens in the surrounding domains of inquiry. (My money, for what it’s worth, is on the AI
people.)
By contrast, a metasemantics that makes explicit and essential use of normative terms—
paradigmatically, deontic modals—is ultimately able to “eat its own tail”, to use Brandom’s
imagery, by shoring up a principled account of those very notions. We’ve already seen a bit
about how Brandom treats normative expressions. According to him, they all serve the function,
characteristic of logical vocabulary more generally, of expressing (making explicit) one’s
commitments to the propriety of an inference or a plan of practical action. Brandom (2008)
offers, in addition, an account of alethic modal vocabulary (recall the safety measures for
gasoline wicks in §1.2), as well as a detailed formal analysis of its many important relations, both
semantic and pragmatic, to the deontic variety. In this way, his account can claim a major
advantage over virtually any conceivable attempt at a naturalistic alternative. And, again, the
force of this argument does not depend on a prior assumption about the normativity of meaning.
This functions here not as a premise, but as a conclusion.
§3.2 Naturalism
Residual worries about adopting a normative metasemantics will doubtless trouble self-avowed
naturalists (including former versions of the present author), who tend to have a constitutional
aversion to trafficking in normative considerations. But this, too, should be tempered—or so I’ll
now argue. The concern is, to my mind, largely dampened by the fact that Brandom’s norms are
in no way “spooky” (despite drawing heavily on Kant), but, rather, grounded directly in social
practices. Such practices consist of activities that are themselves rooted in each creature’s
practical stance of reciprocal authority and responsibility to all others in its community.18 Such
stances are overtly enacted and then, over time, socially instituted in a wholly non-miraculous
fashion, by natural creatures.
Moreover, the resulting discursive/conceptual activities are open to assessment in respect of
truth, accuracy, correctness, and overall fidelity to “the facts on the ground” (as the assessor sees
them). Indeed, the project of normative pragmatics is so obviously not supernatural that it’s not
clear why the self-avowed “naturalist” should be at all worried. Even less clear is why anyone
18 There’s been confusion on this point, caused largely, I think, by Brandom’s uncharacteristically ill-chosen
terminology. He gives the label “practical attitudes” to what I’ve here (re-)described as “embodied practical
stances” on the part of a creature toward its community members. The use of the term ‘attitude’ has predictably
conjured in the minds of some critics the notion of a propositional attitude—something that already bears a
distinctively conceptual/intentional content. Plainly, this would render Brandom’s account viciously circular, as he
is aiming to explicate the notion of propositional attitude content in terms of (what he calls) practical attitudes. If
the latter already have intentional contents, then there’s obviously no difficulty in spelling out other
semantic/intentional notions downstream. Equally obviously, there would be no theoretical interest in doing so.
should get to dictate the terms of legitimate inquiry a priori. Why, after all, should our
metasemantic theorizing not make any use of the perfectly familiar and logically well-behaved
deontic modal notions. Indeed, why even a lesser use of them than their alethic counterparts?
What’s so special about alethic modality, anyway? Nothing much, so far as I can see.
Let me dwell on this point, for it seems to me that the knee-jerk resistance to normative
theorizing is deeply ingrained in the naturalist’s mind. (I should know!) Pressing back against
what now strikes me as an irrational prejudice, I exhort philosophers to actively discourage it,
whatever the fate of Brandom’s philosophical project—or, for that matter, mine—turns out to
be. Given our daily immersion in social norms and institutions, it’s frankly puzzling that so
many theorists have allergic reactions to a deontic treatment of language. Norms are not
puzzling. They are all around us, every moment of our lives. They permeate every social
interaction we have and they are the subject of most of our thoughts, all of our plans, and our
very conceptions of our own identities as free, responsible agents.
Moreover, with respect to linguistic norms in particular, there are (so far) no obvious examples
in the natural world of linguistic abilities arising in creatures outside of a relatively elaborate
social context. Indeed, even intelligent artifacts wouldn't count as counterexamples, if we ever
made any, for they'd be related to the human community of happy roboticists in an obviously
relevant way. So it's
not at all clear—not to me, at least—why this aspect of naturalism should constrain our inquiries
into language and mentality. Obviously, naturalism has many other appealing features, but this
doesn’t seem to be one of them.
The deafening silence from classical naturalists on this point has led some, e.g., Price (2010,
2013), to endorse Brandom’s normative inferentialist project and to embed it into a larger
philosophical framework that eschews notions of correspondence, reference, “the representation
relation”, etc. altogether. (No “mirrors”, he enjoins, using Rorty’s metaphor.) Price applies
Brandom’s expressivist account of logical vocabulary to all of human language. The resulting
“global expressivism” is a key commitment of the novel brand of naturalism that he
recommends to our attention—one that I find deeply compelling.
What I’ve been describing as the “traditional” or “classical” naturalist view—i.e., the received
view among soi-disant naturalists in the literature from Fodor onwards—maintains that we
should draw on the tools, models, and concepts of natural science in characterizing atomistic
word-object or sentence-fact relations—paradigmatically, reference and correspondence. On
this picture, “the world” is seen through the lens of natural science—a metaphysical framework
that has plenty of room for protons, genes, and brains, but stubbornly refuses to accommodate
responsibilities, entitlements, and the like—including, remarkably, persons (at least not in the
fullest sense of that word; cf. Sellars, 1963). The “objects” to which language relates us are thus
limited by naturalist maxims to only the “natural” ones, whatever those are. For this reason,
Price calls this view object naturalism.
The alternative that Price puts on offer is subject naturalism. This view retains a healthy and
well-deserved respect for the deliverances of natural science, but refuses to go along with the
philosophical fiction of “naturalizable” reference and correspondence relations. Rather, our
naturalistic urges should be directed, Price argues, toward concept- and language-using
subjects—i.e., the creatures who acquire, produce, and consume languages, as just one tool in a
larger biological-cum-social enterprise of maintaining homeostasis in the species.
Paradigmatically, such creatures are human persons, but any other naturally social creature can
in principle be studied in this fashion.
What’s striking, to my mind, is how similar all of this sounds to the methodological aims of
theorists like Chomsky and Pietroski. Although both call themselves naturalists, each has made
determined efforts to debunk the idea that word-world relations are relevant to an empirical
study of the human language faculty. Nor does either theorist harbor the ambition—
characteristic of the “classical” naturalists mentioned earlier—to reduce intentionality, either by
analysis or metaphysically, to some alethic modal base. Here’s Pietroski on the issue:
One can raise further questions about the sense(s) in which Begriffsplans are intentional,
and which philosophical projects would be forfeited by appealing to unreduced capacities
to generate and execute the instructions posited here; cp. Dummett. But my task is not to
reduce linguistic meaning to some nonintentional basis. It’s hard enough to say what types
of concepts can be fetched via lexical items, and which modes of composition can be
invoked by complex Slang expressions.
This is another point of convergence with Brandom’s pragmatism, which likewise renounces the
reductive aims of the classical naturalist project.
Of course, the mere fact that Pietroski declines to take up the issue in CM doesn’t mean he has
no dog in the fight elsewhere. (I don’t, myself, know.) By the same token, although it’s true that
Brandom doesn’t aim to reduce intentional notions to some construct of natural science, it
doesn’t follow, and isn’t true, that he has no reductive ambitions at all. To the contrary, his
normative inferentialism is designed precisely to reduce intentionality to something
nonintentional, which, in his case, happens to be the normative. This is why normative
pragmatics serves for him as a metasemantics, in the fullest sense of the word. The 'meta'
indicates not only that what's on offer is a "theory of meaning"—rather than a first-order
"meaning theory", to use Dummett's distinction—but also, more importantly, that the
semantics is herein subordinated to (i.e., must "answer to", in an explanatory sense) the social
norms that are the centerpiece of the pragmatics.
§3.3 Referential purport
In keeping with his commitment to the methodological tenets of individualism and internalism,
Pietroski applies many of the points scouted above to conceptual thought.
[S]ome readers may be unfamiliar or uncomfortable with talk of using representations to
think about things in certain ways. So some clarification, regarding aboutness and ways, is
in order. … The relevant notion of thinking about is intentional. We can think about
unicorns, even though there are none. One can posit unicorns, wonder what they like to
eat, diagnose various observations as symptoms of unicorns, etc. Similarly, one can
hypothesize that some planet passes between Mercury and the sun, call this alleged planet
‘Vulcan’, and then think about Vulcan—to estimate its mass, or wonder whether it is
habitable—much as one might posit a planet Neptune that lies beyond Uranus. An episode
of thinking about something can be such that for each thing, the episode isn’t one of
thinking about that thing. … [I]n the course of proving that there is no greatest prime, it
seems an ideal thinker can at least briefly entertain a thought with a singular component
that purports to indicate the greatest prime. … Paradoxes like Russell’s remind us that
even idealized symbols can fail to make the intended contact with reality, at least if one is
not very careful about the stipulations used to specify interpretations for the symbols.
I want to highlight a key point here: Pietroski is presupposing about concepts not that they
succeed in referring—though he allows that some of them do—but that even empty concepts
have intentional contents. These latter plainly cannot be accounted for by positing
straightforward metaphysical relations between words, on the one hand, and bits of the world,
on the other.
This emphasis on intentionality in the sense of referential purport is crucial to Brandom’s
project as well. Rather than setting out to explain successful reference/denotation, as the
paradigms of perceptual and demonstrative reference have led many theorists to do (§2.3),
Brandom sees it as necessary to first explain how a creature can so much as purport to refer to
one thing rather than another, and only later to furnish an account of what counts as success in
this regard. Brandom is not alone in adopting this strategy. In the Gricean tradition, the
homologous project is cast in terms of the intentional design of communicative acts—in
particular, a speaker’s intentions to refer, denote, predicate, or to speak truly of something. But
whether one uses the idioms of purport, design, or intention, the key point is that the
phenomenon under discussion does not involve unique, naturalizable, or semantically-relevant
mind-world relations.
That leaves wide open issues about the interface between language and reality, let alone larger
questions of metaphysics and epistemology. While Pietroski stays largely neutral on such topics
in CM—again, justifiably so—Brandom’s account makes quite definite commitments in these
arenas. Nevertheless, there is one place where the two quite clearly converge, and that is with
respect to their treatments of de dicto and de re attitude ascriptions. Let’s take a look at that.
§3.4 De dicto and de re constructions
In granting reasonable concessions to philosophers who stress the importance of mind-world
relations for our theories of intentionality, meaning, concepts, and the like, Pietroski makes the
following remarks:
I grant that ‘think about’ can also be used to talk about a relation that thinkers can bear to
entities/stuff to which concepts can apply; see, e.g., Burge. In this “de re” sense, one can
think about bosons and dark matter only if the world includes bosons and dark matter, about
which one can think. Any episode of thinking de re about Hesperus is an episode of thinking
de re about Phosphorus. This extensional/externalistic notion has utility, in ordinary
conversations and perhaps in cognitive science, when the task is to describe representations
of a shared environment for an audience who may represent that environment differently.
But however this notion is related to words like ‘think’ and ‘about’, many animals have
concepts that let them think about things in ways that are individuated intentionally. We
animals may also have representations that are more heavily anchored in reality; see, e.g.,
Pylyshyn. But a lot of thought is intentional, however we describe it. (80)
It’s important to see that there are two distinct strands of thought here. One is about how some
representations—again, paradigmatically those involved in perceptual or demonstrative
reference—are "more heavily anchored in reality" than others. This might seem to put
Pietroski's view at odds with Brandom's inferentialism, given how small an explanatory role the
latter gives to such "anchoring" relations. But this worry is spurious.
Brandom’s account of perceptual commitments and default entitlements (MIE, ch. 4), as well as
his (largely independent) account of demonstrative reference and “object-involving” thoughts,
are fully compatible with—indeed, positively require—the existence of reliable nomic or causal
relations between perceptible objects in the world and the perceptual mechanisms of a creature.
There are, to be sure, heated debates about how exactly all of that works—e.g., whether the
percepts should be seen as having the function of bare demonstratives (Fodor and Pylyshyn,
2015), noun phrases (Burge, 2010), or inferentially integrated singular thoughts (Brandom,
1994). But disagreements on these points are far downstream, theoretically, from the broadly
methodological commitments that I want to highlight here. It’s these that are the subject of the
second strand of thought that I think we should distinguish here.
When Pietroski speaks of a “‘de re’ sense” in which one can think about or ascribe thoughts, he is
at once talking about a certain kind of thought—the “object-involving” kind discussed above—
but also, separately, about a certain kind of thought ascription. The latter, he says, “has utility,
in ordinary conversations and perhaps in cognitive science, when the task is to describe
representations of a shared environment for an audience who may represent that environment
differently.” Note that this circumscribes the function of de re ascriptions to what I think of as
“the navigation of perspectival divides”. More prosaically, de re constructions allow language
users to describe the environment in which a creature is deploying its concepts—as viewed by
the describer (and often her audience)—while de dicto constructions function to ascribe the
concepts so deployed. Here, then, we see another major point of convergence between Pietroski
and Brandom. The latter provides an inferentialist analysis of de dicto and de re ascriptions,
according to which they perform precisely the function that Pietroski's remarks indicate.
According to Brandom, de dicto ascriptions make explicit the commitments of the creature
being described—not those of the ascriber, who may be either ignorant on the matter or hold
commitments directly contrary to those ascribed. A speaker might say, “Dan thinks the greatest
philosopher of language was Quine,” without having any commitments one way or the other
about whether philosophers exist, or about whether Quine was one of the greats. Indeed, the
speaker might think that philosophers are bits of tofu and that Quine is a particularly flavorful
brand. None of that would matter with regard to the speaker’s entitlement to a de dicto claim
about what Dan said (assuming, of course, that the speaker is not identical with Dan himself; see
fn. 7).
By contrast, had the speaker employed a construction that functions in the de re sense—e.g., the
awkward ‘of’-constructions that we’ve inherited from Quine—then their own commitments
would have come into play, with questions arising (at least in potentia) about their entitlements
to those commitments. For instance, had the speaker’s ascription been (1), then their own
commitments regarding the existence of philosophers, the list of the greats, and the possibility of
philosophical tofu would have become immediately relevant.
(1) Regarding the greatest philosopher of language, Dan thinks that he was a piece of tofu!
Turning to subsentential cases, Brandom points out that such shifts in perspectival
commitments can be indicated by operators such as “classified as”, “described as”,
“conceptualized as”, and (importantly) “referred to as”. For instance, in saying “Jamal classified
some food as rabbit,” a speaker, Juanita, purports to indicate some food—the de re component
of the ascription—and then says what concept Jamal applied to it (viz., RABBIT). The word ‘as’
marks the onset of the de dicto component, ensuring that Juanita does not commit herself to the
correctness of Jamal's classification. (Perhaps she knows that the stuff on the plate is tofu.)
Some theorists, having noted that RABBIT is the only concept that Juanita ascribes to Jamal, go
on to conjecture that Jamal can deploy this subsentential concept, all by itself, in classifying
something as rabbit. Indeed, bewitched by surface grammar, some fail to notice the plain
distinction between what Juanita is doing—i.e., describing (noncommittally) one aspect of
Jamal’s perspectival commitments—and what Jamal is thinking. A mistaken conflation of these
two phenomena is what gives rise, I suspect, to the widespread illusion that we can take it as a
datum that each of us has the ability to think of, classify, or refer to something, by deploying just
one subsentential concept (or linguistic expression). I’ll argue later, following Brandom, that
this idea is doubtful; prima facie, one can neither classify nor think about something without
tokening complete thoughts. For instance, in the case described, Jamal's classificatory act requires
tokening the complete thought, CCC IS-RABBIT, where CCC is whatever concept he uses in thinking
about the food on the plate.
§3.5 Interpersonal similarity of meaning and content
Many theorists hold, for reasons that are ultimately unpersuasive, that communication is a
matter of passing a message, idea, meaning, or thought from speaker to hearer. As with
throwing a frisbee, a successful case is one in which the item sent by the speaker is the very
same as the one received by the hearer—if only in the ideal. But, after the mid-century work of
Quine, Kuhn, Sellars, and many others who developed broadly holist ideas (e.g., Churchland,
1979), it’s hard to see this picture as anything but optional. Brandom’s account of
communication is one of the many that rejects it outright, modeling communication instead on
activities like tango dancing, where partners have to give and take "in complementary fashion";
book-keeping, where each participant “keeps separate books” regarding her own commitments
and those of other participants; and baseball games, where a common “scoreboard” shows what
complex normative statuses each participant bears to each of the others. (Many have remarked
that this is a distinctively American philosophy.)
Metaphors aside, Brandom’s inferentialism carries an explicit commitment to holism about
meaning and content. Many follow Fodor and Lepore (1992) in seeing this as the root of several
major problems for his view. But the objections that Fodor and Lepore press are virtually all
rooted in implausibly strong assumptions about the necessity of meaning/content identity—
rather than mere similarity—for various philosophical inquiries. These include the projects of
adequately characterizing successful communication, interpersonal disagreement, and rational
belief revision. It seems to me that Brandom’s accounts of these things are perfectly fine as they
stand, but fans of holism seem to have gone scarce in recent years, and Fodorian views about
meaning/content identity have arguably become the received views in the field. In yet another
clear case of both his iconoclasm and his significant convergence with Brandom, Pietroski
likewise rejects the identity-of-meanings picture, though on grounds that are independent of any
holist commitment. Targeting first the extensional account of meaning-identity as coextensiveness, Pietroski points out that
speakers can connect the pronunciation of ‘pen’ to the same meaning without determining
a set of pens that they are talking about. If each speaker uses a polysemous word that lets
her access a concept of writing implements, as opposed to animal enclosures, they resolve
the homophony the same way and thereby avoid blatant miscommunication. In this sense,
they connect a common pronunciation with a common meaning. But it doesn’t follow that
any speaker used a word that has an extension...
Later, Pietroski counsels us—wisely, in my view—to give up the whole idea that “successful
communication requires speakers [to] use the same meanings/concepts” (33), regardless of the
theoretical framework in which this idea is couched (extensional or otherwise). He views it as a
mere idealization that “members of a community have acquired the same language—or that they
use the same words, or words that have the same meanings.” Despite Brandom’s focus on social
norms, shared commitments, and the like, what Pietroski says here is entirely in line with his
view. This will take some spelling out, which I undertake in §4. For now, let me emphasize that,
if correct, Pietroski's take on this issue would severely undermine the (already fairly flimsy)
arguments against meaning holism.
§3.6 The Pragmatics of Assessment
We’ve seen that Brandom places great stress on the notion of assertion, and that he sees this as
something that we should characterize in normative terms. Given Pietroski’s naturalist
commitments, one might think that he disagrees. But this overlooks the key point that I will go
on to make in the remainder of this discussion—namely that the theoretical aims of Pietroski’s
semantic theory are so starkly different from those that animate Brandom’s inquiry that their
common use of folk terms like ‘meaning’ should not bewitch us into thinking that they’re talking
about the same phenomenon.
Pietroski’s target throughout CM is not communication, but the psychological mechanisms that
make it so much as possible. When he does provide hints of his broader views about
communication, what he says is entirely of a piece with Brandom’s normative pragmatism.
My own view is that truth and falsity are properties of certain evaluative actions—e.g.,
episodes of assertion or endorsement—and the corresponding propositional contents that
are often described with the polysemous word ‘thought’, as in ‘the thought that snow is
white’ or ‘the thought constructed by executing that instruction’; cp. Strawson.
As with all theory-laden terms, including (especially in this context) ‘thought’, ‘meaning’, and
‘concept’, we should always remember that the aims and presuppositions of the inquiry are far
more important to keep in view than the pronunciation of the jargon. Pietroski makes this point
in the following passage.
Let’s not argue about nomenclature. One can use ‘concept’ more permissively than I do,
perhaps to include images or other representations that are not composable constituents of
thoughts. One can also use ‘concept’ less permissively, perhaps to exclude representations
that fail to meet certain normative constraints. Or one might reserve the term for certain
contents, or ways of thinking about things, as opposed to symbols that have or represent
contents. But I want to talk about a kind of physically instantiated composition that is
important for cognitive science, along with a correspondingly demanding though nonnormative notion of constituent, without denying that contents and nonconceptual
representations are also important.
We will go on in §4 to compare Brandom’s and Pietroski’s notions of “constituent,” where it will
become important that Pietroski's aim is to provide a non-normative account of constituency.
For now, consider how this plays out with respect to Pietroski’s distinction between Olde and
New Mentalese. You’ll recall that Pietroski’s account has children starting off with Olde
Mentalese concepts that fail to meet the conditions for assembly by FL Begriffsplans, and later
inventing new concepts that are specially tailored for the job. For instance, Pietroski writes,
“Ignoring tense for simplicity, we can form concepts equivalent to: GIVE(VENUS, __ , BESSIE);
GIVE(VENUS, BESSIE, __ ); GIVE( __ , BESSIE, VENUS); GIVE( __, BESSIE, __ ); etc. But children may
start with composable concepts that are less cognitively productive” (101).
In the remainder of the discussion, I’ll lean heavily on the following two key points. First, much
of what Pietroski says about the phylogenetically older “Fregean” modes of thought that permit
internal substitution is of a piece with the lessons that Brandom likewise draws from Frege. The
second key point, which alone serves to resolve many of the apparent conflicts between the two
views, is that Pietroski’s account is best be viewed as a (partial) theory of the subpersonal
39
mechanisms that underlie our norm-tracking abilities, which in turn contribute to making
possible the social practices of communication. For instance, Pietroski’s account of how a child
comes to acquire the concepts that FL can assemble should, if correct, be regarded as laying
down empirical constrains on the variety of norm-tracking social practices that are a human
child will be able to master at various ages.
If the foregoing claims are correct, then the two theoretical enterprises not only share several
key commitments, but they are actually complementary, each providing an important piece of
the overall puzzle about how best to view language(s). While Brandom’s inquiry is neither
descriptive nor psychological, his account clearly presupposes that there must be some
descriptively correct account of how any creature’s sub-creatural cognitive architecture emerges,
whether in early development or in its species’ socio-evolutionary (“memetic”) history. His
claim is simply that such an account can’t be the end of the story. Rather, he argues that it’s the
necessary groundwork for a much larger picture of the role of language in social practice.
Likewise, although Pietroski’s focus is the psychology of the individual speaker-hearer—assumed
to be universal within the species—we’ve seen that there’s room for communal norms in his
overall picture. Indeed, passages from CM contain explicit remarks about the norms that govern
inquiry in the mature sciences—e.g., norms that serve to stabilize referential purport among
expert chemists. Such norms, however, would reside largely at the level of pragmatics, which is
decidedly not Pietroski’s focus in CM, nor a major aspect of Chomsky’s own work since the 50s.
Perhaps, then, we can adapt David Marr’s famous distinction between “levels of analysis”—if
only by crude analogy—by viewing Brandom as articulating a “high-level description” of the
conditions that any language user must satisfy in order to count as such. If this suggestion is
right, then Pietroski is best seen as telling us about the details of how this high-level description
happens to be “implemented” or “realized” in the human case. Whatever the merit of the
analogy to Marr’s picture, my claim is that there is no inherent conflict between accounts
pitched at distinct levels of inquiry. There could only be a substantive dispute if they shared
a common domain. But, as Pietroski’s remarks in the following passage make clear, that’s just
not so in the present case.
Let me digress, briefly, to note a different response to concepts without Bedeutungen. One might
adopt a hyper-externalistic understanding of ‘ideal’, so that no ideal thinker has the concept VULCAN,
and no ideal analog of this concept lacks a Bedeutung; cp. Evans, McDowell. But then an ideal
thinker must not only have her mental house in good order, avoid paradox-inducing claims, and be
largely en rapport with an environs free of Cartesian demons. In this hyper-externalistic sense, ideal
thinkers are immune from certain kinds of errors, perhaps to the point of being unable to think about
the same thing in two ways without knowing it; cp. Kripke. One can characterize corresponding
notions of concept*, language*, and their cognates. These notions, normative in more than one way,
may be of interest for certain purposes. Perhaps inquirers aspire to have languages* whose
expressions* have meanings* that are instructions to build concepts* that can be constituents of
thoughts*. But my aim, more mundane, is to describe the thoughts/concepts/meanings that ordinary
humans enjoy. And I see no reason to believe that the best—or even a good—way to study concepts or
meanings is by viewing them as imperfect analogs of their hyper-externalistic counterparts, even if
we can and should try to acquire concepts*. Notions like truth and denotation may be required to
interpret philosophical talk of thoughts*/concepts*/meanings*. One can also hypothesize that
ordinary words combine to form sentences that express thoughts composed of concepts that denote
or “are true of ” things in the environment. But this hypothesis does not become inevitable as soon as
we talk about thoughts/concepts/meanings. On the contrary, natural concepts seem to be mental
symbols that can be used to think about things, even when the concepts apply to nothing. And if
words like ‘give’ are used to access naturally introduced concepts like GIVE(_), we must be prepared
to discover that words are used to access concepts that fall short of certain philosophical ideals.
§4. Challenges for Ecumenicism
The ground-clearing maneuvers of the previous section put us in a position to explore residual
differences between Pietroski and Brandom that threaten to be more substantive. As advertised,
I hope to show that the initial appearances are misleading even in some of these cases, contrary
to received opinion amongst philosophers of language. Still, I’ll ultimately admit defeat when
we arrive at the very last topic—the banes of predication and singular terms.
§4.1 E-language
One place to look for sources of substantive disagreement is in the vicinity of Chomsky’s
infamous distinction between E-language and I-language (Chomsky 1986). Chomsky initially
drew this distinction with the explicit intention of formulating a key difference between his
approach to language and that of Quine, Lewis, and others—what was then arguably the
dominant view. Given that Brandom was a student of Lewis’s and frequently pays homage to
him in his published work, the question arises whether his view likewise falls prey to the
compelling objections that Chomsky articulated decades ago. We can address this question by
focusing on Pietroski’s more recent formulation of Chomsky’s insights.
Let’s begin with the distinction itself. What exactly is an E-language supposed to be?
Chomsky characterizes I-languages as procedures that connect interpretations with signals.
Languages in any other sense are said to be E-languages. Chomsky … isolates a procedural
notion and uses ‘E-language’ as a cover term for anything that can be called a language but
isn’t an I-language. Thus, E-languages may include certain clusters of behavioral
dispositions, heuristics for construing body language, etc. But Chomsky’s ‘I/E’ contrast
does connote Church’s intensional/extensional contrast, and sets of interpretation-signal
pairs are paradigmatic E-languages. Though to repeat, such sets are often defined
procedurally.
This negative conception of E-languages (“anything that can be called a language but isn’t
an I-language”) casts such a wide net that it can’t but apply to Brandom. Certainly, the
latter is making no substantive psychological hypotheses about a computational system in
the human brain. Nevertheless, as we’ll see, much of what Pietroski says in rejecting the
study of E-language poses no conflict with Brandom’s view. While he argues that David
Lewis “wanted to describe our distinctively human language(s),” this is not the direction
that Brandom takes. His project is to describe language as such, not “distinctively human
languages.”
Although my broader claims by no means hang on a particular interpretation of Lewis, it
does seem to me that Pietroski’s reading of Lewis is not maximally charitable. I say this
not because I think that his deep disagreements with Lewis somehow insidiously creep
into his interpretation. Rather, I’ll argue that there’s a way of seeing Lewis as engaged in
a wholly different project from the one that Pietroski foists on him. Still, whatever the
case about Lewis, the project that I have in mind is the one that Brandom in fact
undertakes—which, in my view, he carries out successfully, even if his teacher did not.
The subsections that follow distinguish three separate points of debate: (i) extensional vs.
procedural conceptions of grammar, (ii) a sentence-first vs. word-first approach to semantics,
and (iii) individualist vs. social conceptions of language.
§4.1.1 Extensional vs. intensional constraints
In getting more precise about Lewis’s particular brand of E-language, Pietroski highlights
two points: (i) the metaphysical claim that language(s) are extensionally-specified
abstract objects—specifically, sets of meaning-pronunciation pairs—and (ii) Lewis’s
conception of the process whereby a population comes to use such an object in social
exchanges.
Lewis’s proposal was that languages like English are sets [functions-in-extension, not
procedures] that are related to certain social phenomena via conventions of “truthfulness
and trust.” He speaks in terms of populations “selecting” certain languages. The suggestion
is that using a particular language is like driving on the right: an arbitrarily chosen way of
coordinating certain actions.
One of the main things that Pietroski finds problematic here is that treating natural
languages as functions-in-extension requires relinquishing the explanatory ambitions of
generative linguistics—specifically, the aim of describing the cognitive structure that
underlies, or even constitutes, human linguistic competence. Lacking an account of the
cognitive architecture of FL, or the representational format in which it conducts its
business, we have no empirically credible story about how adults accomplish real-time
language processing, not to mention how children acquire the creative capacity to
produce and consume an indefinite range of novel expressions.
Lewis says instead that “a grammar, like a language, is a set-theoretic entity which can be
described in complete abstraction from human affairs.” Chomsky offers a way of locating
languages and grammars in nature. Lewis stipulates that each grammar determines the
language (i.e., the set of sentences) that it generates, but that languages do not determine
grammars. So even if there is a Lewisian grammar for a certain set of Slang sentences, this
does not explain how the relevant sentence meanings are related to lexical meanings, or
how speakers of the Slang know that the strings in question fail to have certain meanings.
At best, a Lewisian grammar indicates how a certain kind of mind might abstract lexical
meanings from sentence meanings, given hypotheses about the relevant constituency
structures and composition principles; see chapter three. But this doesn’t yet tell us
anything about how humans connect lexical meanings with pronunciations, or how we
can/cannot combine lexical meanings.
This all strikes me as correct. What Lewis should have been doing is aiming his inquiry at language as such, regardless of which creature is using, acquiring, or “selecting” it. Setting aside Lewis,
Brandom’s target explanandum is “what it is to do the trick,” not “how the trick is done by us”—
or, for that matter, by a dolphin, an AI robot, or a Martian. (I’ll suggest a friendly amendment to
this point in §4.2). Chomsky, Pietroski, and other generativists, on the other hand, want to
know about humans specifically. These are different projects, to be sure, but they’re not thereby
rivals. Though they may well constrain one another, the relations between them can better be
seen in terms of the “levels of analysis” picture that I proposed earlier.
In addressing Lewis’s account of how populations “select” a language, Pietroski assumes that the
target phenomena are sufficiently similar between his inquiry and Lewis’s that the two are not
only commensurable with one another, but are actually in direct conflict on various key points.
Lewis asserts that £ is the language used in a population P “by virtue of the conventions of
language prevailing in P;” where conventions, in his sense, are special cases of mutually
recognized regularities of action/belief in a population of individuals who can act
rationally. He says that a convention of “truthfulness and trust,” sustained by “an interest
in communication,” is what makes a particular set of sentences the language of a given
population. However, Lewis offers no evidence that this proposal is correct. And there is an
obvious alternative: if a population P is small enough to make it plausible to speak of the
language used in P, then the members of P will have acquired generative procedures that
are very similar. So why think there are “facts about P which objectively select” a shared E-language, and not that members of P have similar I-languages?
The answer that I suspect Lewis would give to Pietroski’s last question in this passage is that
there is no obvious way of individuating I-languages independently of the “forms of life”—i.e.,
communal practices—in which any linguistic creature is caught up. That’s because a discursive
creature’s concepts and/or lexical items are constitutively related to the norms that structure
those practices. Thus, if meanings really are instructions to fetch and assemble concepts, then
we won’t be able to individuate meanings without appealing to such norms either. It’s not clear
to me how Pietroski would (or could) respond to this point.
Pietroski’s critique of Lewis—particularly, his troublesome notion of “selection”—is carried
forward in the following passage. What’s new here, though, is that we have a direct quotation
from Lewis making precisely the claim that I’ve been urging on his behalf—namely, that the
generative grammarian’s target explananda are simply not the ones that he seeks to address.
According to Lewis, populations select languages by virtue of using sentences in rational
ways. So on his view, if language use is not a “rational activity” for young children, they are
not “party to conventions of language” or “normal members of a language-using
population.” [Lewis writes:] “Perhaps language is first acquired and afterward becomes
conventional. . . . I am not concerned with the way in which language is acquired, only with
the condition of a normal member of a language-using population when he is done
acquiring language.” I don’t know what it is to acquire language in Lewis’s sense, or how he
would describe whatever creolizers acquire. But even if one wants to focus on “normal”
adults, ignoring acquisition may not be a viable option for those who want to find out what
Slangs and meanings are. Inquirers don’t stipulate how phenomena are related; they
investigate.
In the end, I agree with Pietroski that Lewis’s talk of populations “selecting” languages is hard to
take seriously, especially in light of the Chomskyan alternative. Casting things in Lewis’s way
plainly does leave the crucial process of “selection” unexplained—at least at the level of
psychology, which is arguably where all the exciting action is. Nor can I bring myself to credit
the idea that the central aim of formal linguistics should forever be what Lewis said it should
be—i.e., the extensional characterization of some abstract set (a moving target, as Pietroski
points out, given the constant introduction of novel lexical items). But one can share Pietroski’s
doubt that there’s any meaningful sense in which people “select languages”, as well as his
broader anti-extensionalism, while nevertheless maintaining that there is a level of theoretical
abstraction at which language is best viewed as a social phenomenon.
The conventions that govern the phenomenon in question might be “truthfulness and trust”, as
Lewis thought, or they might be more explicitly normative, as in Brandom’s conception of
socially-instituted assertional and inferential commitments and entitlements. Likewise, the
languages that such conventions institute might be static abstracta, as Lewis seems to have
believed, or they might be flexible, dynamic, and highly context-dependent social relations, as on
Brandom’s account. The latter, moreover, has no theoretical use for the notion of “stable
populations” that go around “selecting” various abstract objects—each of which somehow
manages to answer to the label ‘Spanish’ (or whatever), despite their extensional differences.
There is, therefore, no call to reify such strange entities on the normative inferentialist picture.
Neither linguistic phyla (e.g., Romance or Germanic) nor local dialects (2019 Boston English)
need be real—not in Lewis’s sense, anyway—in order for Brandom’s project to get off the ground.
Indeed, as previously mentioned, the “communities” that Brandom appeals to in his account of
norm-governed social practices can be as small as two creatures. (Actually, a footnote in MIE
reveals that Brandom might even be willing to countenance something like an “I-thou relation”
coming to be instituted by temporally asymmetric recognition of authority and responsibility
relations between distinct time-slices of one and the same creature.) Thus, in principle, every
new dyadic social interaction can serve to institute novel local norms, alongside any that were
previously shared. Whatever else Brandom might be accused of doing, then, attempting to
individuate and reify stable (let alone timeless) public languages is no part of his brief.
§4.1.2 The primacy of sentences
Having argued against Lewis’s metaphysical assumptions, Pietroski goes on to take issue with
his methodological claim that semantic inquiry should begin by assigning meanings to
sentences, rather than to subsentential expressions.
Lewis goes on to say “if σ is in the domain of a language £, let us call σ a sentence of £.”
But he wasn’t using ‘sentence’ as a technical term for any string to which a language
assigns a meaning. Rather, Lewis initially restricted his notion of a meaningful expression
to sentences. He later introduces talk of word meanings via talk of grammars.
As we saw in our earlier discussion of Pietroski’s views on sentences, he regards this sort of
approach as out of step with contemporary theorizing in generative linguistics.
Linguists have since replaced “S” with many phrase-like projections of functional items
that include tense and agreement morphemes, along with various complementizers. This
raises questions about what sentences are, and whether any grammatical notion
corresponds to the notion of a truth-evaluable thought. But theories of grammatical
structure—and to that extent, theories of the expressions that Slangs generate—have been
improved by not positing a special category of sentence. So while such a category often
plays a special role in the stipulations regarding invented formal languages, grammatical
structure may be independent of any notion of sentence.
Here again, we must distinguish Lewis from Brandom. For the latter, the primacy of sentences
is not a mere stipulation, nor an irrational fetish for a specific syntactic type. Rather, his
principal—and I think quite principled—grounds for isolating sentences at the outset of a
normative pragmatic inquiry is that this is the only type of expression with which a speaker can
make an assertion—i.e., an explicit move in a norm-governed inferential practice. Brandom’s concern
with the normative structure of this “game of giving and asking for reasons” is what motivates
his focus on the role that assertions play, both in reasoning—as premises and conclusions—and
also in communication. In the latter context, they serve the function of allowing speakers to
undertake normative statuses—paradigmatically, commitments and entitlements.
Pietroski’s methodological counsel is to postpone discussion of communication to a later day.
As he remarks, specifying the structure of FL is work enough for one lifetime—surely more. But
it doesn’t follow from this that a theoretical inquiry with different aims must be, in some sense,
second-rate, let alone illegitimate. Nor is it clear that the two inquiries are even in competition
with one another over how best to describe a common subject matter. As noted above, their
common talk of “meanings”, “languages”, and the like might tempt one into thinking that the
topic under discussion is the same for both theorists. But this would be a mistake. The two
theoretical frameworks in which these terms are couched—and relative to which they have their
theoretical meanings—are so dramatically different in respect of their explanatory aims that
viewing them as even intending to refer to the same phenomena is a bit of a stretch.
Unless I’m mistaken, Pietroski himself seems to have fallen into this trap, despite his earlier
counsel to avoid such temptations. One clear place where he does so is in the following passage,
which is notable for including—now for a second time—Lewis’s direct protests to being saddled
with the views that Pietroski nevertheless goes on to attribute to him.
By contrast, Lewis held that sentences are prior to grammars. While granting that we
should not “discard” notions like phrasal meaning (relative to a population P), or the “fine
structure of meaning in P of a sentence,” he says that these notions “depend on our
methods of evaluating grammars.” … For Lewis, a grammar Γ is used by P if and only if Γ is
a best grammar for a language £ that is used by P in virtue of a convention in P of
“truthfulness and trust” in £. One might have thought that a “best” grammar for the
alleged set £ would be one that best depicts the procedures acquired by members of P. But
according to Lewis, “it makes sense to say that languages might be used by populations
even if there were no internally represented grammars.” He then makes an even more
remarkable claim. “I can tentatively agree that £ is used by P if and only if everyone in P
possesses an internal representation of a grammar for £, if that is offered as a scientific
hypothesis. But I cannot accept it as any sort of analysis of “£ is used by P”, since the
analysandum clearly could be true although the analysans was false.” Note the shift from
Lewis’s opening question—what is a language?—to a search for an analysis of what it is for
a language to be used by a population. … This shift is interwoven with the insistence that
languages are sets of sentences, and a willingness to accept the consequences for how
grammars are related to non-sentential expressions.
What Pietroski describes here as a “shift” in Lewis’s explanatory aims strikes me as a correct
description of what Lewis was up to all along; or, at any rate, of what he should have been up to.
Setting aside Lewis exegesis, we can turn again to Brandom, whose methodology is a good deal
more perspicuous than his teacher’s. As already noted, Brandom rejects the “set of sentences”
conception of languages, tying social norms to practices that are fluid, socially distributed, and
highly context-dependent. Nevertheless, he can accept Lewis’s views (quoted in the passage
immediately above) about grammars and about the status of subsentential expressions. Again,
that’s because he motivates the primacy of sentences (or “full thoughts”) by reference to their
roles in assertions, and hence in the norms governing the social practices in which assertion is a
basic move. Such a practice need not be as articulated as ours, in respect of either content or
syntax. And while Pietroski is prosecuting an empirical inquiry into human psychology,
Brandom is assaying the pragmatics of assertion, inference, and assessment.
Here is one final bit of textual evidence for my suggestion that Chomsky and Pietroski, on the
one hand, and Lewis and Brandom, on the other, are simply talking at cross purposes. Pietroski
writes:
I think [Lewis’s] ordering of priorities is misguided. Slangs are child-acquirable generative
procedures that connect meanings with pronunciations in ways that allow for constrained
homophony. So whatever meanings are, there are natural procedures that connect them
with pronunciations in specific ways. Instead of adopting this promising starting point for
an empirically informed discussion of languages and meanings, Lewis offered a series of
stipulations. Many others followed suit. But that doesn’t make it plausible that Slangs are
sets. And if Slangs are I-languages in Chomsky’s sense, then we shouldn’t ignore this fact
when asking what meanings are. … [Lewis] adds that “the point is not to refrain from ever
saying anything that depends on the evaluation of grammars. The point is to do so only
when we must, and that is why I have concentrated on languages rather than grammars”.
But in reply to a worry that he is needlessly hypostatizing meanings, he says “There is no
point in being a part-time nominalist. I am persuaded on independent grounds that I ought
to believe in possible worlds and possible beings therein, and that I ought to believe in sets
of things I believe in.” So why be a part-time grammarist, given Chomsky’s reasons for
thinking that children acquire generative procedures? Quine’s worries about the
“indeterminacy” of meaning are not far away. But while Lewis speaks of evaluating
grammars, he does not engage with Chomsky’s notion of an evaluation metric, or the
correlative notion of “explanatory adequacy”.
I suspect that the reason Lewis would give for being what Pietroski disparagingly calls a
“part-time grammarist” is that he is—as he says repeatedly—not concerned with specifically human
languages, but with language as a general phenomenon. This, in any event, is the line that
Brandom takes. And while Pietroski’s arguments in favor of his semantic proposal are
compelling in the context of his particular brand of inquiry, it’s difficult to make sense of his idea
that a different “ordering of priorities” can be “misguided”. At worst, an ordering of priorities—
i.e., the adoption of some concrete set of methodological maxims, descriptive aims, and
explanatory ambitions—can fail to be illuminating about the domain that it carves out for itself.
But it’s hard to see how this charge can be credibly leveled against Brandom’s inquiry, the
results of which many philosophers—the present author included—find deeply revelatory of
communicative linguistic practices.
§4.1.3 Public languages, dialects, and I-languages
Another context in which the notion of E-language has been invoked is to account for how
folk terms like ‘Italian’ and ‘Swahili’ manage to pick out something in the world. Pietroski
takes a dim view of this theoretical aim.
we can describe many I-languages as English dialects (or idiolects), without English being
any particular language. Prima facie, there are many ways to be a speaker of English:
American, British, and Canadian ways; young child ways, adult scientist ways; etc. Being a
speaker of English seems to be a multiply realizable property whose instances are similar
in ways that matter for certain practical purposes. We can use ‘English’ to group certain
I-languages together, perhaps in terms of paradigmatic examples, an intransitive notion of
mutual intelligibility—think of Brooklyn and Glasgow—and historically rooted dimensions
of similarity. There need not be an English language that each speaker of English has
imperfectly acquired; cp. Dummett. We can use ‘Norwegian’ similarly, and classify both
Norwegian and English I-languages as Germanic, without supposing that Germanic is a
language shared by speakers of Norwegian and English. Analogies between linguistic and
biological taxonomy can be preserved, whatever their worth, by thinking of specific
I-languages as the analogs of the individual animals that get taxonomized—with ‘Human’ as
the most inclusive category, and ‘Indo-European’ indicating something like a phylum.
Turning to Brandom, we note once more that one thing he is definitely not trying to do is
to taxonomize the natural languages of the human species. He’s simply not in the
business of reifying or hypostasizing the norms that govern specific discursive
interactions, nor individuating the entities that allegedly answer to the names ‘English’ or
‘Norwegian’. There really seems to be no principled reason why Brandom couldn’t
countenance Pietroski’s eminently reasonable view on these matters.
Thus, we can confidently assert that, when Pietroski describes a hypothetical theorist who
“grant[s] that children acquire I-languages, yet maintain[s] that Slangs are E-languages
that connect pronunciations with extensions of idealized concepts,” the theorist he is
describing is not Brandom. If the latter were in the business of theorizing about Slangs,
in particular, then he might indeed take up the hypothetical proposal that “each Slang is a
social object that certain speakers acquire by internalizing a generative procedure and
meeting some further conditions.” But no part of his actual view hangs on whether
“English”, “British English”, or “Germanic” name metaphysically real entities, let alone
ones that are individuated extensionally. Nor, again, is his theory a descriptive
psychological one. And it most certainly has no truck with extensions.
We can illustrate all this more clearly by considering a contrast that Pietroski draws
between what he calls E-NGLISH (an E-language) and NGLISH (an I-language).
One might describe E-NGLISH in terms of the strings that certain people could understand
as sentences, and the meanings they could assign to those strings, given a certain
dictionary and a suitably idealized sense of ‘could’. But this presupposes that a competent
speaker of NGLISH has an expression-generating procedure whose lexicon can be
expanded. So we don’t need to invoke E-NGLISH to say what NGLISH is.
One can stipulate that if Amy, Brit, and Candice speak English, there is something that
Amy, Brit, and Candice speak. But then the “thing spoken” may be a class of I-languages.
One can stipulate that people share a language if and only if they can communicate
linguistically. But then “shared languages” may not play any role in explaining linguistic
communication. Sharing a language, in the stipulated sense, may be a matter of using
similar I-languages in combination with other human capacities and shared assumptions
about potential topics of conversation.
These, I maintain, are all claims that Brandom should be happy to accept. True, he holds that
there are social norms governing the communicative exchanges of creatures who can understand
and produce indefinitely novel constructions. But he should have no qualms with the claim that
humans happen to do this via some subpersonal psychological apparatus. Plainly, his account
must presuppose that there is at least one way of “doing the trick”—our own—though it leaves
room for others.
Ultimately, as I’ve emphasized throughout this discussion, the appearance of friction is, here as
elsewhere, rooted in a cluster of verbal disputes. Underlying these is the fact that Brandom’s
normative inferentialism, while sharing many homophonous pieces of jargon with Pietroski, is
pitched at a higher theoretical plane, so to speak, than a cognitive theory of NGLISH. Thus,
Brandom should grant that NGLISH—the psychological apparatus that a given speaker
possesses—can be specified without any recourse to E-NGLISH, if such a thing can even exist in
a sense that he would countenance.
Note that, if Brandom’s inferentialism is on the right track, then the “shared assumptions about
potential topics of conversation” that Pietroski mentions are partly constitutive of the discursive
norms that enter into an illuminating pragmatic account of communication. Needless to say,
these need not be the only things that enter into such an account. At the level of phonological and
morpho-syntactic processes, as well as the ability to access and assemble concepts, the
computational—and, ultimately, neurocognitive—explanation will surely be couched in the
terms that generativists recommend. These are arguably subpersonal processes, which, again, is
not the theoretical “level” at which Brandom’s account is pitched.19 A substantive disagreement
can only be maintained if both inquiries have a shared target; this is, once again, not such a case.
Still, I think it must be admitted that a large part of what makes a cognitive theory of NGLISH
theoretically interesting is that it enters into a larger account of how a specific creature—the
human animal—is able to track, follow, articulate, challenge, reject, and revise the norms of
discourse that have contingently arisen in its social milieu. This is not the only source of
theoretical interest, of course. The biologically-realized combinatorial principles that constitute
19 I spell out what I mean by “subpersonal” in Pereplyotchik (2017: §7.3).
NGLISH are a marvel of neurocognitive information-processing, and should be studied by
natural science as such, with the fascination that grips empirical linguists and neuroscientists
alike. Nevertheless, the internal operations of this apparatus would be of no adaptive use to a
creature if they made no contribution to the shaping of a broader class of norm-governed social
activities. And, biology aside, the beauty of the mechanism is only enriched, not diminished,
when we take into account the social interactions that it makes possible.
Famously, Chomsky (2016) rejects the idea that language is an adaptation specifically designed
for social/communicative purposes. He entertains an alternative hypothesis to the effect that
the narrow faculty of language—what Pietroski is here calling ‘NGLISH’—was initially an aid to
individual thought, making it possible for recursive structures in the mind to be composed, and
thus “entertained”. This hypothesis strikes me as pretty implausible, but even if it turns out to
be true, Brandom’s claims would still hold. That is, it would remain the case that the creature’s
newly-structured individual-level thoughts played a role in its profound re-shaping of the
prevailing social norms. Had its thoughts or judgments—i.e., its normatively evaluable
commitments to things being thus-and-so—not somehow become entwined with broader social
practices, they would not thereby have been commitments, whatever else they might have been.
For, lacking a social existence, there would have been no source of a normative check on what
the creature had committed itself to. The conceptual contents of the internal states of a solitary
creature would thus be underdetermined to such an extent that it may be more accurate, not to
mention fruitful, to view them as subpersonal states—or, at any rate, as something other than
judgments. But this reeks of the kind of terminological legislation that both Pietroski and
Brandom repeatedly warn against, so I’ll leave the matter there.
Putting aside terminology, what Brandom is concerned to articulate is the general structure of
pragmatic social statuses—e.g., commitment and entitlements—and how the norms that govern
those statuses can be used in theorizing about the semantics of linguistic expressions in any
language. The project is “general” in the sense that it designed to be applicable to any case of
linguistic communication, not just the human case. How the norms of communication that we
find in humans today might have evolved in the distant past of our particular species is, once
again, no part of Brandom’s explanatory target. Clearly, they emerged somehow, and the
empirical story is bound to be fascinating. But all Brandom really needs to get his project going
is the very broad claim—empirical, though perhaps only by a courtesy—that some natural
psychological mechanisms underlie—and thus explain, in the descriptive naturalistic sense—any
given creature’s facility with social norms.
One might suspect that there is nevertheless a substantive metaphysical dispute in the vicinity.
Brandom posits public linguistic norms, whereas Pietroski has no truck with public entities of
any kind. But matters aren’t so clear. Consider how Pietroski characterizes two hypothetical
ontological views about what NGLISH really is: “We can identify NGLISH with an I-language,
or perhaps a class of I-languages that differ only in small ways that are irrelevant for the
purposes at hand.” But what difference could there possibly be between positing an E-language and positing “a class of I-languages that differ only in small ways that are irrelevant for
the purposes at hand”? What ontological payoff, that is, can there be for a theorist who insists
on I-languages and resemblance classes thereof, as opposed to public languages—or, at any rate,
theoretically useful public linguistic entities (TUPLEs)?
Pietroski hints that the issue may have to do with other metaphysical features of E-NGLISH and
NGLISH, including their modal properties: Whereas “a Slang seems to have its composition
principles essentially, … E-NGLISH includes no composition principles; the set contains only
string-meaning pairs, atomic and complex.” The difference is that sets are individuated by their
members, but “any initial list of atomic expressions can be updated.” Pietroski’s point, if I
understand it correctly, is that the ongoing process of language change creates a moving target
for semanticists like Lewis. As new lexical items emerge (or “go extinct”) in the actual world, the
set that such a theorist intends to pick out changes. Indeed, Pietroski quips that “[i]dentifying
Slangs with sets of expressions is like identifying animals with sets of molecules, and insisting
that growth be described as replacing one animal with another. Even if this metaphysics is
coherent, it may not cohere with plausible biology and linguistics” (57).
It isn’t clear, though, why this sort of consideration couldn’t be pressed just as hard with regard
to I-languages. Although Pietroski says that “a Slang seems to have its composition principles
essentially,” he plainly acknowledges that I-languages, conceived of as a psychological
procedures, can also be updated. For instance, “communicative failures can lead children to
modify their (still modifiable) procedures for connecting meanings with pronunciations, subject
to constraints.” Indeed, it seems to be an empirical hypothesis whether all of the elements of
Pietroski’s own cognitive/semantic proposal in CM come online in the child’s I-language at the
same time. Perhaps I-languages initially allow only instructions for combining monadic
concepts, and only add (limited) dyadicity later in the maturational process. If so, then
we might ask, “Is this a new I-language?” More importantly, how could we tell?
On the assumption that I-languages have their lexical entries and composition rules essentially,
we can adapt Pietroski’s animals/molecules analogy, seeing a child’s Slang as a succession of
distinct I-languages—one for each day, week, month, or year. This entails that a child’s Slang
over the course of a year can be a set of significantly different I-languages. Though it is
doubtless amenable to empirical inquiry, the question, “How many I-languages per day?” seems
metaphysically awkward, at best. Similarly, Pietroski holds that “dialectical variation … makes
appeal to a single set of English expressions seem silly.” Rightly so. But, again, the same
considerations apply to the I-languages of speakers who are competent in many “dialects” or
“languages” (as the benighted folk call them).
What are the individuation conditions on I-languages, in light of these considerations? In
Pereplyotchik (2017: ch. 3), I argued that the answers to such questions are not much clearer in
the case of I-languages than for the case of E-languages. I suspect that it will remain so for the
duration of sustained inquiry in the decades to come.
§4.2 Philosophy has not failed cognitive science
In a recent paper, provocatively entitled “Why Philosophy Has Failed Cognitive Science,”
Brandom argues that analytic philosophy, exemplified in the work of Frege, has devoted a great
deal of energy to clarifying the nature of logical and semantic notions, but that we’ve thus far
failed to properly hand off the fruits of our heritage to researchers in cognitive science.20 The
present section is devoted to a survey of the claims that Brandom makes about this alleged
failure. I’ll argue that Pietroski’s work provides a direct counterexample to several of these
20 Recognizing that cognitive science is comprised of many fields, Brandom aims his criticism more directly at
philosophers who work on topics in cognitive psychology, developmental psychology, animal psychology (esp.
primatology), and artificial intelligence, rather than at those who study topics in neurophysiology, linguistics,
perceptual psychology, learning theory, and memory. Admittedly, this is a strange way to cut up the terrain. In
particular, for our purposes, it’s not at all clear why philosophers of linguistics are not on Brandom’s list of targets.
But let’s not dwell on this. If only for the sake of furthering our present inquiry, I’ll include philosophers of
language and linguistics in the list, making no exception for myself.
claims, but that Brandom is right to point out that many theorists, including Pietroski, hold
commitments that Frege’s insights should lead us to reject.
§4.2.1 Modes of inquiry, philosophical and scientific
Brandom recounts the way in which modern approaches to logic and semantics began with
Frege’s Begriffsschrift, which furnished us with a new logic and new ways of thinking about
meaning. Russell then showed us how to apply these ideas more generally in philosophy. But,
while the ideas that Frege and Russell developed about logic and semantics were quite general
in their import, later theorists attempted, with variable success, to apply those general ideas
to the specific case of natural language—i.e., the system of representation that normal human
children acquire. (I suspect this has to do with our sheer familiarity with the only clear case of
language use available—i.e., our own—coupled with the anthropocentrism that motivates any
inquiry into language.) This had the effect of blurring the lines between a general philosophical
theory of language, on the one hand, and an empirical linguistic inquiry into the special case of
human linguistic competence, on the other. That, Brandom maintains, is a mistake. On his view, the kind of
inquiry that Pietroski is engaged in deals with the contingencies and the specifics of how
humans acquired conceptual and linguistic abilities. Philosophy, by contrast, deals with the
“normative” question of what counts as “doing the trick”.
Although I’ve followed Brandom in putting the point this way through the discussion so far, I
must now register that this is not, in my view, the best way of saying what I think Brandom
intends to say here. At any rate, it’s not, by my lights, the point that he should be making at this
juncture in the dialectic. For, one might legitimately wonder how “What counts as doing the
trick?” gets to be a normative question—whether in the case of language or of anything else—
rather than a straightforward question of fact. Presumably, “What counts as being a horse?” is
not a normative question, for the simple reason that horses are a natural type of object, studied
as such by zoologists.
A theorist sympathetic to Brandom might reply that the notion “counts as” is normative because
it’s a matter of what competences and abilities it is appropriate to ascribe to a creature. But,
here again, a parallel move can be made in the case of horses, vis-à-vis the properties that are
correctly ascribed to them (notably, the property of being a horse). As a friendly amendment to
Brandom, then, I will address this worry on his behalf by reiterating and fleshing out the
proposal that I floated earlier in the discussion, regarding “levels of analysis”.
As I’ve noted, it seems to me that what Brandom (and probably Lewis) has on offer is a high-level description of a theoretically interesting kind of social practice—specifically, (“what counts
as a”) language game—with all of the impressive social, practical, and cognitive benefits that
make this kind of practice worthy of careful study. Correspondingly, I suspect Pietroski’s
proposal has its home in an inquiry pitched at a lower level of analysis. To flesh the picture out
further, it will be useful to quickly rehearse a central tenet of the mainstream approach to “levels
of analysis” in contemporary philosophy of science.
A view that posits multiple levels of theoretical analysis, whether in biology, computer science,
or in the sciences overall, is not thereby committed to any particular story about how theoretical
progress at any one level can, should, or must constrain theorizing at any other. True, the early
proponents of a “levels of science” picture also attempted—unsuccessfully, as it turns out—to
secure a “unity of science” thesis. But later thinkers, notably Fodor (1975), generously disabused
us of these lofty ambitions. What we know now is that theoretical pressure can and often does
“go both ways”, with higher and lower levels informing one another in equal measure, and with
equal authority. Lower levels, as such, are no longer seen as having an inherent epistemic
privilege.
Similarly, we can now appreciate the fact—poorly understood until fairly recently—that theories
at different levels of inquiry are often to a large extent independent variables. Here’s what I
mean: a theorist who formulates a high-level analysis of some phenomenon typically assumes—
often with good grounds—that the generalizations they discover at that level might be
implemented in any number of ways by lower-level mechanisms. That’s one half of the
independence claim. The other half is best appreciated from the perspective of a theorist
working at the (relatively) lower level of analysis. From this vantage point, the mechanisms,
laws, generalizations, and/or principles that are discovered, however “abstract” they might
seem, are assumed to be just one instance of an even more general phenomenon—a token of a
potentially much larger type.
What I want to recommend is that we apply these general considerations from the philosophy of
science to the concrete case of generative linguistics and normative inferentialism. Although it
would be misleading to say that the subject matters that these two research programs seek to
address are literally orthogonal to one another, the grain of truth in that bit of imagery is this:
Brandom’s high-level account is, as such, indifferent to how lower-level mechanisms might
operate in various token instances. Pietroski, on the other hand, is assaying the fine-structure of
the lower-level mechanisms, but only in the special case of human languages. As such, while the
results of his inquiry are relevant to Brandom’s overall picture—indeed, they might pose
devastating problems for Brandom (see below)—they function in practice not as substantive
theoretical constraints, but as an account of a very special case (particularly to us!) of the kind of
story that Brandom presupposes can be told for any creature to which his normative pragmatic
account is applicable.
If this is on the right track, then how do we make sense of the fact that Brandom, like Lewis,
explicitly appeals to data from natural language in motivating his analyses of phenomena that
are, at least in principle, specific to our way of doing things? If the theory is not intended to be a
contribution to a “merely parochial” inquiry about us, then why use examples from our
language—indeed, almost exclusively from English, in particular—in constructing and
developing it? There are two complementary ways of answering this question. The first, already
mentioned, is to point out that Brandom draws our attention to features of human languages (in
practice, just English) not for the purpose of displaying empirical data that his account can
explain, but, rather, to illustrate aspects of language that he believes have pragmatic or semantic
analogues in languages beyond the human case. (By analogy, think of the Chomsky hierarchy.)
The second prong of the reply consists of highlighting the fact that Brandom has devoted much
time and effort to arguing—ultimately persuasively, in my view—that many of the linguistic
devices he treats in his work are actually universal and essential features of language as such.
Moreover, as many generative linguists have pointed out in discussions of Universal Grammar,
language universals need not be categorical; they can, instead, take a conditional form, e.g.,
“any language that has feature F will also have property P,” or “if a language can express content
C, then it can also express content C*.” Brandom (2008) works out a detailed typology of such
relationships between logically possible languages, including those that differ either in respect of
their general expressive power or in respect of more specific semantic devices, e.g., deixis. He
takes this to be a pragmatic-cum-semantic version of Chomsky’s famous analysis of the
syntactic hierarchies of expressive power.
The view that Brandom promotes throughout his discussions of this topic is that traditional
philosophers of language, starting with Quine, directed their efforts at analyzing linguistic
constructions that, by and large, shed light on quite general semantic phenomena—i.e., ones that
we can hope to one day discover in other species (terrestrial or otherwise), or to build into our
intelligent robots. Although such linguistic devices might seem, from the perspective of a
modern-day linguist, to comprise a rather motley collection—why propositional attitude reports
but not, say, ergative verbs?—the tie that binds them, according to Brandom, is one that we can
best appreciate from the vantage point of a (high-level) normative inquiry into general
pragmatics. The linguistic phenomena that Quine and others identified early on as being
particularly germane to philosophy all have this in common: for each of them, there are good
reasons to think that it’s not just something we happen to find in distinctively human languages,
but something that tells us about what a language is, irrespective of which creatures happen to
use it or what subpersonal mechanisms they deploy in doing so.
§4.2.2 Frege’s insights
The lessons that Brandom believes philosophers have failed to pass on to their colleagues in the
sciences pertain to four key distinctions, all due to Frege, between (i) labeling and describing, (ii)
freestanding and embedded content, (iii) content and force, and finally (iv) simple vs. complex
predicates.
The last of these, Brandom argues, opens up a semantic hierarchy that is no less important for
cognitive scientists to be familiar with than the syntactic hierarchy that bears Chomsky’s name.
Taking this hierarchy into account in the context of empirical theorizing would help, he claims,
to characterize the phylogenetic and ontogenetic development of linguistic and conceptual
capacities, upward through what Brandom thinks of as “grades of conceptual content”, including
the propositional variety, the quantificational refinement, and ultimately the relational contents
that Frege taught us to recognize.
We’ve seen that Pietroski has a great deal to say about this. Indeed, the Fregean considerations
that he surveys in the service of an avowedly naturalistic theory in cognitive science are precisely
those that Brandom recommends to our attention (and then some). For Brandom, Frege’s
insight is that there are patterns in sentences that cannot be modeled as mere part-whole
relations. For instance, although there is no expression that appears in “Herbie admires Jessica”
and “Jessica admires Herbie” that doesn’t also appear in “Herbie admires Herbie”, the latter
sentence exhibits an inferential pattern different from the other two—the pattern that we gesture
at by employing notational distinctions between, e.g., admire(x,y) and admire(x,x), or by making
explicit their inferential proprieties by embedding them inside of conditionals, as so:
(2) If someone admires anyone, then someone admires someone. (true)
(3) If someone admires anyone, then someone admires themselves. (false)
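The contrast can be made explicit in a first-order regimentation (a gloss I am supplying here, not notation drawn from CM): reading the antecedent existentially, (2) is a logical truth, while (3) imposes the further, non-trivial condition that both argument places of admire(x,y) be saturated by the same individual.

```latex
% (2): valid -- the consequent merely restates the antecedent
\exists x\, \exists y\; \mathrm{admire}(x,y) \;\rightarrow\;
\exists x\, \exists y\; \mathrm{admire}(x,y)

% (3): invalid -- admire(x,x) requires a single individual to occupy
% both argument places, which the antecedent does not guarantee
\exists x\, \exists y\; \mathrm{admire}(x,y) \;\rightarrow\;
\exists x\; \mathrm{admire}(x,x)
```

It is precisely because admire(x,x) is a pattern across argument places, rather than a separable constituent of the sentence, that no mere part-whole analysis captures the difference.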
Thus, admire(x,x) expresses a kind of predicate that is not a part of a sentence, but an aspect of
it—which we can recognize as an “inferential pattern” and model as an equivalence class of
sentences. Frege’s device of function-application is a way of capturing this idea. Functions are
not, in general, parts of their outputs. (The function capital-of(x) yields Kiev when applied to
Ukraine, but neither capital-of(x) nor Ukraine is a part of Kiev.) This is why sentential
connectives can be modeled with Venn diagrams, but complex predicates cannot. Even the
simplest mathematics uses complex predicates—e.g., natural number or successor(x, y)—and
Frege showed that, once you can build complex predicates, you can keep building endlessly
more, in the manner we ran across in our discussion of Lewis’s type-theoretic semantics.
As we’ve seen, Pietroski warns against taking for granted a creature’s ability to construct
concepts of unbounded adicities. But the warning is intended to apply only when doing natural-language semantics. For other purposes, Pietroski agrees that Frege’s insights are of
foundational and lasting importance. Moreover, the hypothesis that he develops posits thoughts
that admit of a Fregean semantic treatment (perhaps even a truth-conditional one), but it
requires these to first be converted, via Frege’s process of concept invention, into the kinds of
thoughts that are “legible”, so to speak, to the human FL. While it’s not clear what independent
empirical evidence Pietroski might offer for positing psychological mechanisms that facilitate
such a translation—I am aware of no obvious analogue in the case of other perceptual modules—
what is clear is this: Brandom’s contentions regarding Frege’s distinction (iv), between simple
and complex predicates, are rendered moot by the very existence of Pietroski’s work, which
presents an up-and-running empirical inquiry that is deeply informed by Frege’s core
contributions.
Matters are much less clear with regard to the other three distinctions that Frege was at pains to
draw. Let’s turn now to his distinction between labeling and describing.
§4.2.3 Sentences, predicates, and classification
Brandom points out that old-school scholastic accounts of thought were rooted in a
classificatory account of concepts—a relic of Aristotelian “forms”. The medievals noticed that,
once you have singular terms and classifications, you can build up to an account of truth, and
then analyze good inference in terms of truth-preservation. Pietroski unabashedly endorses this
strategy—in particular, the Aristotelian focus on classificatory concepts, which are central to his
predicativism about New Mentalese.
This raises the question: What exactly is classification? How does a predicate get to perform its
semantic function? Here is Pietroski’s answer:
…intuitively, a predicate classifies things, into those that meet a certain condition (e.g.,
being a rabbit) and those that do not. Anything that meets the condition satisfies the
predicate, which applies to anything that meets the condition. We can invent perceptible
predicates. Though for now, let’s focus on predicative concepts, like instances of RABBIT. I
assume that many animals have such mental predicates. … [A] predicate may apply to each
of [several] things, or to nothing. But these are just special cases of classifying. …even if
logically ideal predication is relational as opposed to classificatory, there seems to be a
psychological distinction between relational and classificatory concepts, even if we speak of
monadic/dyadic/n-adic predicates.
What I see, both here and throughout CM, are inter-definitions of semantic notions like “applies
to,” “classifies,” “satisfies,” and “meets conditions.” Although Pietroski has made it clear that he
is not trying to “break out of the intentional circle,” to use Quine’s phrase, the account he
provides does not, to my mind, do much to illuminate the phenomenon in question. Having
more labels for it does allow us to conjure different clusters of theoretical intuitions. But none of
these seems definite enough to make progress with.
Turn, then, to Brandom’s answer, which has the virtue of laying out substantive proposals and
refining them, arriving ultimately at one that meets various important desiderata. Like
Pietroski, Brandom maintains that “classifying” is not the obtaining of some (super)natural
relation between a concept and (a portion of) the actual world—let alone non-actual possible
worlds. On his view, there are, instead, acts of classification—e.g., asserting “That’s a rabbit,” or
tokening the corresponding perceptual thought (“LO, IT RABBITETH!”). We’ve already seen the
details of Brandom’s account of assertion, as well as his (subordinate) account of classification.
Let’s now approach the latter from a different direction, this time contrasting Brandom’s view
with extant rivals.
If asked, straight-out, “What is classification?,” the knee-jerk response that most philosophers
would offer is that classification is a matter of differential responsiveness. This is a start, but it
leaves wide open the question of what vocabulary we’re permitted to use in describing the
objects, properties, and events to which a physical system might be differentially responsive. If
we give ourselves free rein, then the notion becomes too cheap to do serious work; differentially
responding to Italian and French operas would count as classifying them, regardless of how the
trick was done. But, of course, one wants to know how that sort of thing happens, not just that it
does. Unfortunately, pursuing the answer to this explanatory question by restricting our
vocabulary to only naturalistically respectable terms quickly lands us with panpsychism—a
bridge much too far. For, as Brandom points out, even a chunk of iron differentially responds to
varying amounts of oxygen in its surroundings, e.g., by rusting.
Equally vacuous is the (unqualified) suggestion that we acquire predicative concepts, and hence
classificatory powers, by performing a process of “abstraction” from either the intrinsic qualities
of states of sentient awareness—as Hume, Russell, and Carnap all held at various points in their
otherwise distinguished careers—or from the raw information supplied by sensory mechanisms,
as naturalists like Neurath might have it. Without a detailed and well-motivated account of the
operation of “abstraction”, the acquisition of classificatory concepts has been labeled once more,
but remains stubbornly unexplained.
To their credit, naturalists like Fodor and Dretske attempted to meet the problem head-on.
Information-carrying states count as classificatory concepts, they argued, when they’re
embedded in suitably complex systems—ones that reliably keep track of their environment,
learn, and behave flexibly, perhaps on account of their history of natural selection and innate
resources. Burge (2010) adds to this list the requirement that the reliable tracking abilities must
have the shape of perceptual constancies, not mere sensory registrations.
Brandom maintains that no such project can work, even in principle, precisely because it ignores
Frege’s conceptual distinction between mere labeling and full-blown describing. A case of
labeling is one in which items are differentially sorted, but only extensionally, such that no
specific inferential consequences can be drawn from the presence of the label. A magic wand
might tell us that doorknobs, pet fish, and crumpled shirts are all and only the items that share
the magic feature, F. But without knowing what F is, in intensional terms, we have no idea what,
if anything, follows from the application of the label ‘F’—what, in the fictional scenario, is
semantically achieved by the activation of the wand. In order for this (or any other) physical
signal to become more than a mere label, it must be inferentially articulated, in the sense that
there have to be things that follow from something’s being F, as well as things that can have an
instance of ‘F’ in their inferential consequences.21
21 Following Dummett’s counsel, Brandom urges that we take into account both the circumstances and the
consequences of applying a concept. For some nonsynonymous propositions, the antecedent circumstances
coincide, but the inferential consequences serve to distinguish their contents. For instance, consider the contrast
between “I will one day write a book about anarchism” and “I foresee that I will one day write a book about
anarchism.” The inferential antecedents (“circumstances”) of these two claims might be the same, but the
One of Frege’s key lessons, then, is that inferential significance is central to conceptual content.
Some concepts have only inferential conditions of application, not perceptual ones—either
contingently, as with GENE, or necessarily, as with POLYNOMIAL or FRICTIONLESS PLANE. One can,
of course, call things “concepts” even when they meet less stringent conditions. But, in that
case, one should be sure to note the difference between differential responsiveness and
inferential articulation. Moreover, these points hold irrespective of whether a differential-response capacity is innate or learned, and they apply just as much to Boolean compounds of more
basic units of differential responsiveness—i.e., compound labels.
While it’s impossible to credit Brandom’s claim that philosophers like Pietroski have taken
insufficient notice of Frege’s foundational insights, there is something to be said, I think, for his
criticism on this particular point. As we’ll see below, Pietroski’s views on classification don’t
seem to respect the distinction—whether it be Frege’s or Brandom’s—between labeling and
describing. This has downstream consequences for Pietroski’s view that really do seem to be out
of step with Brandom’s theoretical commitments.
The disagreement about classification is joined when Brandom asserts that thinking about
something, as in “We’re still thinking about his tax returns,” is a matter of tokening complete
thoughts—i.e., intentional states that can be expressed by linguistically competent creatures only
in complete speech acts, which requires producing complete sentences (if only in the
paradigmatic case). Pietroski, by contrast, follows the peculiar philosophical convention of
using the phrase “thinking about” to denote a punctate event of conceptual classification. While
he agrees with Brandom that having a thought requires tokening a “sentential concept”, he also
maintains that all concepts are “ways of thinking about things.”
This is where Brandom would disagree, on account of his commitment to the effect that
subsentential concepts are not complete thoughts. According to him, tokening such a concept
cannot by itself constitute “thinking about something”. To do that, subsentential concepts must
(in some way) participate in a sentential one. So while sentential concepts are correctly
described as “ways of thinking about things,” Brandom follows Frege in viewing subsentential
concepts as aspects of such ways. Thus, whereas Pietroski claims that “hearing ‘Bessie’ can…
activate the denoter BESSIE, thereby leading [one] to think about Bessie in a singular way” (108),
Brandom would deny that activating the denoting concept BESSIE can alone constitute thinking
about Bessie—in any way—even once. This point about “denoters” applies also, mutatis
mutandis, to predicative concepts.
Pietroski can, of course, stipulate that thinking about things doesn’t require tokening complete
thoughts. But it’s difficult to see what could motivate such a move. Relying on brute
introspection, one might fancy that singular reference has taken place with only one
subsentential concept in play—e.g., “I’m quite certain that I was just thinking of tofu; not
anything about it, specifically; just… tofu.” However, such introspective judgments are known to
21 (cont.) inferential consequences are different. This point applies even to observational concepts—e.g., MOVING or MOTION.
A motion detector or a well-trained parrot that reliably emits the sound /Moving/ when there is, in fact, movement
afoot (and not otherwise) does not thereby have the concepts in question. For although the circumstances of
application are right, there are no inferential consequences to speak of in these cases. Brandom also makes the
helpful observation that operators can serve to distinguish concepts that share both circumstances and
consequences of application. For instance, the concepts HERE and WHERE-I-AM are shown to be distinct when
interacting with the temporal operator ‘always’: “It’s nice here/where I am” vs. “It is always nice here/where I am.”
be an extremely unreliable source of data, whether performed by naïve speakers or by
theoreticians.22 One might, more plausibly, appeal to the theory of perception developed by
Burge (2010), according to which perceptual awareness involves the application of only
subsentential concepts, modeled on noun phrases. But this won’t do, either. For, if judgments
and classifications are all “sentence-sized”, as Brandom argues, then even the perceptual mental
attitude of noticing can’t properly be treated as a case of applying just one classificatory concept.
Noticing rabbits involves judging that there are rabbits in the relevant spatiotemporal vicinity,
and making such judgments requires deploying concepts other than the classificatory predicate,
RABBIT(_)—e.g., the concept HERE(_).
With all this in mind, I think we should side with Brandom in saying that subsentential concepts
play a role in acts of classification, where the latter are construed either as public assertions
or as inner endorsements of judgable contents. I see no reason to assume that tokening a
subsentential concept is sufficient to carry off an act of classification. Nor is it obvious that
classifying is a function of all concept application, as Pietroski believes. Does wondering
whether Bessie exists really require classifying her? The latter question brings us face-to-face
with the Fregean distinction between force and content, to which we now turn.
§4.2.4 Force and content
Brandom draws our attention to an ambiguity that was long ago pointed out by Wilfrid Sellars—
the so-called “-ing/-ed” ambiguity—which allows us to use words like “claim” and “thought”
polysemously to describe speech acts and propositional attitudes in respect of their intentional
contents, on the one hand, and in respect of their illocutionary force or “mental attitude type”,
on the other. With regard to the latter, Stephen Schiffer has popularized the imagery of different
“boxes” in the mind—one that corresponds to the functional role of beliefs, another to that of
desires, a third one for intentions, and so on. Pietroski likewise notes the distinction in the
following passage from CM.
One needs to be careful with the terminology, since words like ‘thought’ and ‘concept’ are
polysemous with regard to symbols and contents; ‘thought’ and ‘judgment’ are also like
‘assertion’, which can be used to describe certain events that can be characterized in terms
of contents. In speaking of a thought that Sadie is a horse, one might be talking about a
mental episode, a mental sentence, or a content shared by various sentences.
Brandom goes on to argue that this distinction is not only useful for theorists, but that it also
marks a distinct level of conceptual sophistication. Creatures who can tell the difference
between the act of asserting and the content of what’s asserted can be said to be aware, at least
implicitly, of the force/content distinction. To make this awareness explicit, a creature can
embed a sentence inside of a conditional, thereby stripping it of its force.23
Now, on the assumption that classification is, in fact, a kind of illocutionary force, Brandom
concludes that ‘If Fa then Ga’ cannot, in point of fact, be used to classify a as F, despite invoking
22 Distinct methodological troubles plague both of these options, but they strike me as insuperable and not
worth discussing here.
23 Brandom illustrates how conditionals can be used to distinguish those inferential consequences that derive from
the content of what’s said from those that derive from its force. Witness, for instance, the strikingly different
inferential consequences of the sentences “p” and “I believe that p” when embedded as antecedents in conditionals:
“If p then p” is obviously true for all values of ‘p’, but “If I believe that p, then p” is not only foolishly arrogant for a
mere mortal to assert, but also disastrously false in all known cases.
both ‘a’ and ‘F’.24 This is another place where his views on the nature of classification come into
conflict with Pietroski’s. And, here again, I can think of no plausible way around it.
One might suggest, on Pietroski’s behalf, that we seem to be able to simply entertain a notion—
e.g., to contemplate “justice” or “the possibility of pigs one day flying”—without thereby
committing ourselves to anything at all. This, Brandom points out, goes back to Descartes’s view
that one can first “entertain” an idea/proposition and then, by an act of mental will, either
endorse or deny it, yielding either a committal judgment or a positive doubt. Pietroski’s picture
of concept-assembly likewise points in this direction. On that model, the process of assembly
eventuates in the construction of a “polarized sentential concept”, which is then shipped off to
central cognition for endorsement, rejection, or further contemplation.
But this idea is at odds with Kant’s equally compelling observation that concepts have contents
only in virtue of their role in judgment. Pushing still further, Frege argued that entertaining
propositions is a late-coming ability that involves a thinker embedding a proposition into the
antecedent slot of a conditional—as in the following soliloquy: “What if p? Well, if p were the
case, then q would also; but that would mean that neither r nor s…”. If Frege’s proposal is
correct, then the ability to “entertain an idea” piggy-backs on two prior abilities—viz., to assert
conditionals, and then to perform inferences that take them as premises or conclusions (e.g.,
hypothetical syllogisms).
Now, Pietroski agrees that the mental act of endorsement results in a committal judgment,
which both he and Brandom take to be subject to normative evaluation—i.e., assessments of
correctness, warrant, rational propriety and the like. But it’s not clear how Pietroski’s ⇑/⇓
operators for assembling polarized sentential concepts facilitate this act of endorsement. More
generally, Pietroski’s proposal seems to have little to offer in the way of a subpersonal account of how
any kind of force/attitude is superadded, so to speak, to polarized concepts, after the
Begriffsplans get done assembling them.
§4.3. Predicates and singular terms
We turn now to our very final topic, which concerns a foundational disagreement between
Brandom and Pietroski on the nature of singular and predicative concepts. Recall that
Pietroski’s semantics for natural language is resolutely predicativist, in the sense that it
recognizes no analogue of type-<e> expressions—intuitively, singular terms—i.e., no instruction
for fetching singular concepts. Recall as well that he does countenance the presence of such
concepts in the human mind and that he recognizes the useful cognitive roles that such concepts
play in thinking/reasoning. But this kind of cognition is couched in Olde Mentalese—the
phylogenetically ancient representational format in which pre-linguistic thought was conducted,
and which Pietroski thinks we still employ today, outside of our language use.
Brandom develops a powerful argument to the effect that any language that fails to draw a
distinction between predicates and singular terms is in principle barred from introducing basic
logical operators—including both negation and the conditional. If this argument is successful, it
would have no effect at all on Pietroski’s claims about Olde Mentalese, which happily draws that
distinction. But it would seem to present a rather major problem for Pietroski’s main proposal
about natural-language semantics, which has predicativism as one of its core commitments. So
24 Likewise, he warns against conflating denial and supposition—two kinds of force—with negation and
conditionalization, which are semantic functions that directly participate in the content.
it behooves us, in surveying the points of discord between them, to focus on this foundational
case, using it to draw out related points of contention about syntax.
§4.3.1 Brandom’s argument
“What are subsentential expressions?” and “Why are there any?” These are the two questions
that Brandom raises in an essay of the same title (2001: ch. 4). In §1, we glimpsed the overall
shape of his answer. Here, we’ll reiterate the main points and look at some of the details. The
reason for doing so is that this is the last—and arguably most challenging—of the issues that
divide Brandom’s normative inferentialism from the overall generative enterprise.
In the reconciliatory spirit of my overall project, I’ll propose a possible strategy for ameliorating
the dispute. But I should concede from the outset that this appears to be a particularly stubborn
issue. This is frustrating, as the issue obviously cuts pretty deep. Having laid out the details of
Brandom’s difficult argument, I’ll settle, in the end, for merely having raised the question—one
that hasn’t been discussed, to my knowledge, anywhere else in the literature—of how generative
grammar might be (in)compatible with Brandom’s substitutional approach to syntax.
§4.3.2 Details and a proof
As noted earlier, Brandom agrees with Pietroski that discerning subsentential expressions is
what makes it possible for us, both as theorists and as language users, to “project” proprieties
governing the use of novel sentences. Once we’ve done this, we can then recombine
subsentential items into new expressions, with meanings/contents that were previously
inexpressible. Brandom recommends using the notion of substitution for this purpose, adapting
Frege’s insight that discerning meaningful subsentential expressions is a matter of treating
sentences as substitutional variants of one another.
In spelling out the syntactic side of this technical notion, Brandom begins by identifying three
“substitution-structural roles”. These include the role of being an expression that is (i)
substituted in, (ii) substituted for, and (iii) a substitutional frame. For instance, “David admires
Herbie” is substituted in to yield a substitutional variant, such as “Herbie admires Herbie,”
where the expression ‘David’ has been substituted for. The residual substitutional frame is
what is common to the two substitutional variants—schematically, “x admires Herbie.”
On the semantic side, a substitutional variant of a sentence will be defined in terms of the
inferences that it enters into, as a premise or a conclusion. In keeping with his inferentialist
project, Brandom develops the idea that the meaning of a subsentential expression consists in
the materially correct substitution inferences involving that expression—i.e., inferences in which
the conclusion is a substitutional variant of one of the premises. Thus, ‘Herbie’ has the meaning
that it does partly in virtue of its role in a vast range of materially good inferences, including the
single-premise inference from “Herbie barked” to “My dog barked”.
With this in mind, Brandom notes that substitution inferences come in two flavors: symmetric
and asymmetric. The above inference, from “Herbie barked” to “My dog barked”, is symmetric,
in the sense that it’s materially good in either direction. Plainly, this trades on the identity
between Herbie and my dog. This is more grist for Brandom’s logical expressivist mill. He
captures this observation by pointing out that identity is the logical notion that we use to
express—i.e., make explicit—the substitutional commitments that are central to our notion of
singular terms (and, relatedly, of the items they purport to denote). Contrast this with the
inference from “Herbie runs” to “Herbie moves”, which is materially good in only one direction,
not in the other. That’s because ‘runs’ is materially stronger, in respect of inferential
consequences, than ‘moves’; the former licenses all of the inferences that the latter does, and then
some. The distinction between symmetric and asymmetric inferential proprieties governing
substitution inferences is, as we’ll now see, the central aspect of Brandom’s distinction between
predicates and singular terms. Let’s turn finally to his definitions of these two notions.
Each of the two definitions has a syntactic component and a semantic component. On the
syntactic side, Brandom says that singular terms invariably play the substitution-structural roles
of being substituted for (as well as in), whereas predicates invariably play the role of
substitutional frames. On the semantic side, he points out that the substitution of singular
terms is always governed by symmetric inferential proprieties, whereas predicates are
necessarily governed by at least some asymmetric ones. For instance, ‘Herbie’ is a singular
term partly in virtue of the fact that, if the substitution inference from “Herbie barked” to “My
dog barked” is materially good, then so is its converse. Crucially, the same does not hold for the
substitution inference from “Herbie runs” to “Herbie moves”, where the substitution of
predicates is in play. That’s, again, because ‘runs’ is inferentially stronger than ‘moves’. This is
an instance of something that Brandom goes on to argue is constitutive of predicates as a class—
viz., that they necessarily enter into at least some asymmetric substitution-inferential relations
with other predicates in the language.
Thus far, Brandom has supplied an answer only to his first question: what are singular terms
(and, by extension, predicates)? To summarize, the answer is that singular terms play the
syntactic roles of substituted fors and substituted ins, and the semantic role of entering solely
into symmetric substitutional inferences. Predicates, by contrast, play the syntactic role of
substitutional frames that necessarily enter into at least some asymmetric substitutional
relations.
To ask why there are singular terms, then, is to ask the following question: Why do the syntactic
and semantic substitutional roles line up as they do? This way of setting up the question allows
us to generate a taxonomy of the logical possibilities, in terms of two binary parameters—syntax
and semantics. We can thus imagine languages that instantiate the following four permutations.
i) Substituted for is symmetric and substitutional frame is symmetric.
ii) Substituted for is asymmetric and substitutional frame is symmetric.
iii) Substituted for is asymmetric and substitutional frame is asymmetric.
iv) Substituted for is symmetric and substitutional frame is asymmetric.
The option that’s actually instantiated by singular terms and predicates is (iv). The question
then becomes: What's “wrong” with the other options?
What rules out option (i), according to Brandom, is that many of the substitution inferences
that are to be codified and projected at the level of sentences by discerning subsentential
expressions are asymmetric. No weakening inferences could be generated if all subsentential
components were restricted solely to symmetric inferences. What the remaining options have
in common is that they assign asymmetric inferential proprieties to expression-kinds that play
the syntactic role of being substituted for. We can thus ask: what’s wrong with that
combination? The answer to this question is where things become technically challenging.
Readers who prefer to skip ahead to the next section can take away just the upshot of the
proof: if a language fits the model of options (ii) or (iii), then it does not permit the
introduction of conditional contents (contrary to fact, in our own case).
Brandom invites us to consider the generalizations that permit expressions with subsentential
contents to determine the proprieties of a productive and indefinitely flexible class of novel
combinations. Asserting that Herbie is (identical with) my dog commits me to the propriety of
all inferences of the form P(Herbie) → P(my dog). Similarly for predicates: asserting that
anything that runs thereby moves commits one to the propriety of all inferences of the form
Runs(x) → Moves(x). This is why, when such content-constitutive and potentially asymmetric
substitutional commitments are made explicit, they take the form of quantified conditionals—
another feather in the logical expressivist’s cap.
Here, then, is Brandom’s proof that options (ii) and (iii) in effect rob a language of its most basic
logical notions—even ones as simple as negation and the conditional.
The pattern corresponding to the hypothetical asymmetric significance of “substituted fors”
would replace identity claims with inequalities. Let “t > t*” mean that P(t) → P(t*) is in general
a good inference, though not every frame P will make the converse inference, P(t*) → P(t),
materially good. Now, call a predicate Q an inferential inverse of a predicate P if, for all t and t*,
the following condition is satisfied.
Inferential Inverse =df
if P(t) → P(t*) holds, but P(t*) → P(t) doesn’t,
then Q(t*) → Q(t) holds and Q(t) → Q(t*) doesn’t
Thus, to answer the question of what’s “wrong” with options (ii) and (iii), it suffices to show that
if every sentential substitutional frame has an inverse, then there can be no asymmetrically
significant substituted fors. The demonstration now proceeds by way of the following key
observation.
Key observation:
In any language containing the expressive resources of elementary
sentential logic, every predicate has an inferential inverse. Conditional and
negating locutions are inferentially inverting; e.g., inferentially weakening
the antecedent of a conditional inferentially strengthens the conditional.
Thus, if the antecedent of Inferential Inverse holds, then the
consequent can be shown to hold as well.
Proof: Let Q be defined as P → S, for some fixed sentence S. It follows immediately that
P(t*) → S entails P(t) → S (weakening the antecedent), but P(t) → S does not entail
P(t*) → S. That is, Q(t*) entails Q(t) but not conversely, so Q is an inferential inverse of P.
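The inversion step can be laid out as a short derivation. This is my reconstruction, not Brandom’s own formulation: ‘⇒’ for a materially good inference and ‘→’ for the object-language conditional are notational choices, and S is an arbitrary fixed sentence.

```latex
% Reconstruction of the inversion step (illustrative notation, not Brandom's own):
% $\Rightarrow$ marks a materially good inference; $\rightarrow$ is the
% object-language conditional; $S$ is an arbitrary fixed sentence.
\begin{align*}
&\text{Assume } t > t^{*}:\ P(t) \Rightarrow P(t^{*}) \text{ for every frame } P,\\
&\quad\text{with } P(t^{*}) \Rightarrow P(t) \text{ failing for at least one frame } P.\\
&\text{Define the frame } Q(x) := P(x) \rightarrow S.\\
&\text{Then } Q(t^{*}) \Rightarrow Q(t):\ \text{from } P(t^{*}) \rightarrow S
  \text{ and } P(t) \Rightarrow P(t^{*}),\\
&\quad\text{we may infer } P(t) \rightarrow S \text{ (weakening the antecedent).}\\
&\text{But } Q(t) \Rightarrow Q(t^{*}) \text{ fails wherever } P(t^{*}) \Rightarrow P(t) \text{ does.}\\
&\text{Since } Q \text{ is itself a frame, } t > t^{*} \text{ would require } Q(t) \Rightarrow Q(t^{*}).\\
&\text{Contradiction: no substituted-for can be asymmetrically significant.}
\end{align*}
```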
What this argument shows, if it shows anything at all (see below), is that conditional locutions
are inferentially inverting precisely because they play the indispensable expressive role of
making inferential relations explicit. (Mutatis mutandis for negation and other logical
operators.) If this is right, then we can conclude, as Brandom does, that any language able to
muster the expressive resources required for introducing basic sentential connectives will also
draw a distinction between singular terms and predicates (as defined), assuming it has any
subsentential structure at all. Conversely, any language that forgoes the term/predicate
distinction is thereby severely curtailed in its expressive power—incapable in principle of
introducing so much as a material conditional.
§4.3.3 Potential replies
The foregoing argument was presented in its canonical form in Brandom’s MIE (see also the
later, more condensed treatment in Brandom, 2001). In the decades
since then, many theorists have marshalled a variety of technical objections against his line of
reasoning. Some of these are based on straightforward confusions and can thus be defused
without much concern (see MIE, ch. 6). Others might be more troublesome. Whatever the case
about that, I want to ask what bearing this argument would have on Pietroski’s position if it
were successful.
As noted above, the argument appears to present a serious problem for Pietroski’s commitment
to predicativism, at least in the case of natural-language expressions and New Mentalese. (We
can breathe easy about expressions of Olde Mentalese; these are, mercifully, in the clear.) How
might Pietroski reply to this challenge? Closer to home, if this dispute can’t be resolved, does
that spell doom for my larger reconciliation project in this essay? Sadly, the answers I can
muster at present will be satisfying to few.
One possibility is to cut things off at the root by rejecting Brandom’s substitutional approach to
both syntax and semantics. Indeed, this is the most obvious route for Pietroski to pursue, given his
claim that we can’t simply take for granted a creature’s ability to “slice out” terms from
sentences, so as to use them in combinatorically constructing an infinite hierarchy of semantic
types, a la Frege or Lewis. Such a project, Pietroski argues, stipulates from the outset far more
than it explains in the end. Suppose that he’s right about this. Does that mean that his
empirical results—assuming for present purposes that that’s what they are—have literally
contravened Brandom’s strategy? Put another way, if generative linguistics is the correct
approach to natural language, then are we barred from using Brandom’s “substitutional scalpel”
to identify subsentential structure, distinguish between singular terms and predicates, and carry
off the inferentialist project at the subsentential level? I do not think so. Or, at any rate, I’m not
convinced.
One excuse for my wavering on this point is that the considerations Brandom uses are so
general—i.e., so thoroughly independent of the other details of the languages to which they
apply—that it’s hard to see which of them Pietroski is really in a position to deny. True, substitutional syntax
smells a little too much like the old-school “discovery procedures” and “immediate constituent
grammars” of benighted pre-Chomskyans (see Fodor, Bever, and Garrett, 1974 for a blistering
refutation). But the methodological similarities, in my view, cut no ice. Nominalist discovery
procedures were, for all their shortcomings, empirical hypotheses about human languages.
Otherwise, they wouldn’t even get to be rendered false by straightforwardly empirical
arguments. By contrast, we’ve seen that Brandom’s project, despite drawing on examples from
English—again, for illustrative purposes only—is explicitly not pitched as an empirical inquiry
into human language. So he can’t be accused of attempting to resurrect that old idea.
Nor is it clear that Brandom’s approach to the general project of delineating syntactic categories
is incompatible with further elaborations by the kind of syntax that Chomsky supplies. In the
only passage I’ve found where he mentions generative grammar and its transformation rules
(Brandom, 1987), he makes precisely that suggestion:
Recall … that Chomsky showed that one should not expect to generate the well-formed
sentences of natural languages by concatenation, combination, or tree-structuring of any
set of categories of this sort. To any such “phrase-structure grammar” will have to be
added transformations of such combinatory structures. Categorial classifications are just
the raw materials for grammar in this sense, and don't have anything to say about how one
might proceed to the rest of the task of syntax once one has the categories. (165: fn. 2)
With this in mind, I’ll end with the following throw-away speculation. If, in a remarkably
distant possible world, Brandom were to go into the business of empirical theorizing, it’s not
clear to me that he would (or should) adopt anything other than generative grammar as the
optimal theoretical framework within which to prosecute his inquiry. Certainly, he is well
aware—how could he not be?!—of the theoretical need for Chomskyan grammars. On the rare
occasion that he does mention these, he doesn’t say anything that even hints at a disagreement.
(Recall, “Chomsky showed…”, my emphasis.) Moreover, his frequent invocations of the
Chomsky hierarchy in discussions of computational procedures and the expressive power of
various languages (e.g., Brandom, 2008: ch. 1) suggest no particular aversion to core
generativist principles. To be sure, this isn’t very much to go on. It doesn’t even begin to show
that Brandom’s “substitutional syntax” is compatible with (any) generative grammar. But it’s
what I can offer at present, besides relentless optimism.
Conclusion
Attempting to integrate the theories developed by Brandom and Pietroski may strike some as an
utterly futile project, analogous to grafting, say, a squirrel onto a cow. One thinks to oneself,
“Perhaps it can be done, but… why?!” In the foregoing pages, I’ve argued that this view of the
matter constitutes a failure to appreciate the live opportunities for a fruitful merger. Such a
merger is, like any large one, a daunting gamble. But, it seems undeniable, from where I sit, that
both Brandom and Pietroski have furnished significant insights into the nature of something
called “language”—a phenomenon that we should firmly resist regarding as unitary.
That having been said, it seems only natural to suppose that combining the two theories will
yield a richer overall picture than either theory can provide on its own. This sort of thing
doesn’t always work out, of course; not all teams of All-Stars are All-Star teams, after all. But
even if the resulting view is not to one’s liking, I find it frankly inconceivable that some such
reconciliation project won’t have to be effected eventually. Perhaps we aren’t there yet; perhaps
both generative linguistics and normative inferentialism must await more penetrating
developments before their future descendants can be merged. (Or, again, maybe the AI people
will blow our minds with some new-fangled contraption next Tuesday. Who knows?) Whatever the
case about that, I hope to have convinced the reader that there are, in fact, very few substantive
disagreements between the two approaches. What initially appear to be sharp contrasts turn
out, on inspection, to be mostly benign differences of theoretical focus and explanatory
ambition.
I’ll close on a broadly sociological note. A mistaken commitment to the incompatibility of
generative linguistics and normative inferentialism has had, I believe, negative consequences for
both philosophy and linguistics. Specifically, there is, at present, little or no cross-talk between
researchers working in these two traditions. Indeed, they seem to be as siloed off from one
another as any two major research programs in “analytic” philosophy of language can be. If
nothing else, by partially undermining the mistaken assumption of incompatibility, I hope to
have gone at least some way toward rectifying the situation. My hope is that others will follow
suit, attempting—surely with more skill than I—to forge still further connections between the
two enterprises. Even if Pietroski and Brandom make for strange bedfellows, there is no
question that they make for excellent guides. And, for better or worse, the terrain is largely
uncharted. Let us press forward, then—as always, with optimism.
References
1. Brandom, Robert B. (1987). “Singular Terms and Sentential Sign Designs,” Philosophical
Topics, Vol. XV, (1), pp. 145–167.
2. Brandom, Robert B. (1994). Making It Explicit: Reasoning, Representing, and Discursive
Commitment. Cambridge, MA: Harvard University Press.
3. Brandom, Robert B. (2001). Articulating Reasons: An Introduction to Inferentialism.
Cambridge, MA: Harvard University Press.
4. Brandom, Robert B. (2008). Between Saying and Doing: Towards an Analytic
Pragmatism. Cambridge, MA: Harvard University Press.
5. Brandom, Robert B. (2009). “Why Philosophy Has Failed Cognitive Science,” in Reason and
Philosophy: Animating Ideas. Cambridge, MA: Harvard University Press.
6. Burge, Tyler (1973). “Reference and proper names,” Journal of Philosophy, 70, pp. 425–439.
7. Burge, Tyler (2010). Origins of Objectivity. New York, NY: Oxford University Press.
8. Churchland, Paul M. (1979). Scientific Realism and the Plasticity of Mind. Cambridge, UK:
Cambridge University Press.
9. Chomsky, Noam (1986). Knowledge of Language. New York, NY: Praeger.
10. Chomsky, Noam (1995). The Minimalist Program. Cambridge, MA: MIT Press.
11. Chomsky, Noam (2000). New Horizons in the Study of Language and Mind. Cambridge,
UK: Cambridge University Press.
12. Berwick, Robert C. and Chomsky, Noam (2016). Why Only Us: Language and Evolution. Cambridge, MA: MIT
Press.
13. Collins, John (2012). The Unity of Linguistic Meaning. Oxford, UK: Oxford University
Press.
14. Davidson, Donald (1984). Inquiries into Truth and Interpretation. Oxford, UK: Oxford
University Press.
15. Dummett, Michael (1978). Truth and other Enigmas. London, UK: Duckworth.
16. Fodor, Jerry A. (1975). The Language of Thought. Oxford, UK: Oxford University Press.
17. Fodor, Jerry A. (1970). “Three Reasons for not Deriving ‘Kill’ from ‘Cause to Die’,” Linguistic
Inquiry 1: 429–438.
18. Fodor, Jerry A., Bever, Thomas, and Garrett, Merrill (1974). The Psychology of Language,
Cambridge, MA: MIT Press.
19. Fodor, Jerry A. and Lepore, Ernest (1992). Holism: A Shopper’s Guide. Oxford, UK:
Blackwell.
20. Fodor, Jerry A. and Pylyshyn, Zenon W. (2015). Minds Without Meanings: An Essay on the
Content of Concepts. Cambridge, MA: MIT Press.
21. Grover, Dorothy L., Camp, Joseph L., and Belnap, Nuel D. (1975). “A Prosentential Theory of
Truth,” Philosophical Studies 27 (2): 73–125.
22. Graff Fara, Delia (2015). “Names are Predicates,” Philosophical Review, 124 (1): 59-117.
23. Grice, H. Paul (1989). Studies in the Way of Words. Cambridge, MA: Harvard University
Press.
24. Harris, Daniel W. (2020). “We Talk to People, not Contexts,” Philosophical Studies 177 (9):
2713–2733.
25. Heim, Irene and Kratzer, Angelika (1998). Semantics in Generative Grammar. Oxford, UK:
Blackwell.
26. Horwich, Paul (1990). Truth. Oxford, UK: Oxford University Press.
27. Kripke, Saul (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.
28. Levinson, Stephen C. (1983). Pragmatics. Cambridge, UK: Cambridge University Press.
29. Lewis, David K. (1969). Convention. Cambridge, MA: Harvard University Press.
30. Lewis, David K. (1970). “General Semantics,” Synthese 22: 18–67.
31. Lewis, David K. (1975). “Languages and Language,” in K. Gunderson (ed.), Minnesota Studies
in the Philosophy of Science, Vol. 7. Minneapolis, MN: University of Minnesota Press.
32. Pereplyotchik, David (2017). Psychosyntax: The Nature of Grammar and Its Place in the
Mind. Springer International.
33. Pereplyotchik, David (2019). “Whither Extensions,” Mind and Language (Special Issue), pp.
1–14.
34. Pietroski, Paul M. (2018). Conjoining Meanings: Semantics Without Truth Values. Oxford,
UK: Oxford University Press.
35. Price, Huw (2011). Naturalism Without Mirrors. Oxford, UK: Oxford University Press.
36. Price, Huw (2013). Expressivism, Pragmatism and Representationalism. Cambridge, UK:
Cambridge University Press.
37. Quine, W. V. (1953). “On What There Is,” in From a Logical Point of View. New York:
Harper and Row.
38. Quine, W. V. (1960). Word and Object. Cambridge, MA: MIT Press.
39. Quine, W. V. (1970/1986). Philosophy of Logic, 2nd Edition. Cambridge, MA: Harvard
University Press.
40. Rorty, Richard (1979). Philosophy and the Mirror of Nature. Princeton, NJ: Princeton
University Press.
41. Sellars, Wilfrid (1962). “Philosophy and the Scientific Image of Man,” in Robert Colodny (ed.),
Frontiers of Science and Philosophy. Pittsburgh, PA: University of Pittsburgh Press, pp. 35–78.
42. Stainton, Robert J. (2006). Words and Thoughts: Subsentences, Ellipsis, and the
Philosophy of Language. Oxford, UK: Oxford University Press.
43. Stalnaker, Robert (1984). Inquiry. Cambridge, MA: MIT Press.
44. Sperber, Dan and Wilson, Deirdre (1986/1995). Relevance: Communication and Cognition,
2nd Edition. Oxford, UK: Blackwell.
45. Tomasello, Michael. (2005). Constructing a Language: A Usage-Based Theory of Language
Acquisition. Cambridge, MA: Harvard University Press.
46. Yang, Charles (2006). The Infinite Gift: How Children Learn and Unlearn the Languages of
the World. New York: Scribner.