
Lexical Semantics
Article · January 2022
DOI: 10.1093/oxfordhb/9780198845003.013.3
Ray Jackendoff, Tufts University
Available at: https://www.researchgate.net/publication/365439451
All content following this page was uploaded by Ray Jackendoff on 06 February 2023.


Lexical Semantics
Ray Jackendoff
(chapter for Gleitman, Papafragou, and Trueswell, Oxford Handbook of the Mental Lexicon)

As the topic of lexical semantics is vast and has been approached from many different
points of view, this chapter necessarily presents a rather personal view of the field, addressing
issues with which I have been engaged and which I find telling. Others would no doubt write
quite a different chapter. Section 5.1 discusses general issues: what a theory of lexical
semantics has to account for; section 5.2 sketches several prominent approaches in the literature.
Sections 5.3-5.6 then present some of the basic elements involved in the content of lexical
meanings. Section 5.7 raises a forbidding difficulty for all theories of lexical semantics, and
section 5.8 wraps things up.

5.1. Introduction: The problem of lexical semantics

5.1.1. What’s the lexicon? and What’s semantics?

In order to develop a theory of lexical semantics, we must first ask what counts as a
lexical item – what sort of things are in the lexicon – and what is intended by semantics.

The lexicon is typically regarded as the “place” where words are stored; the term lexical
item is understood as coextensive with word. However, as stressed by DiSciullo and Williams
1987, it is important to distinguish the notion of grammatical word from the notion of lexical
item – a piece of language stored in memory. For instance, purpleness can be recognized as a
grammatical word, but it is probably not stored in most speakers’ memories. Rather, when being
heard or uttered, its morphological structure and meaning are constructed on the fly from smaller
stored parts. Hence it is not a lexical item in DiSciullo and Williams’s sense. On the other hand,
the idiom chew the fat is a VP, not a grammatical word. Because its meaning cannot be built
from the meanings of its parts, it has to be learned and stored. Thus it is to be regarded as a
lexical item.

Of these two notions, that of stored lexical material is most relevant for the purposes of
studying the mental lexicon. Under this construal, a lexical entry consists of an association
between a piece of semantics, a piece of phonology, and a piece of syntactic structure, stored in
long-term memory.1
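The notion of a lexical entry as a stored association can be given a minimal sketch in code. This is my own schematic formalization, not the chapter's notation; the field names and the placeholder meaning labels are invented for illustration.

```python
# Schematic sketch of a lexical entry as an association of phonology,
# syntax, and semantics stored in long-term memory. Note that an entry
# need not be a single word: the idiom 'chew the fat' is a stored VP
# whose meaning cannot be built from its parts.
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    phonology: str   # the stored sound form
    syntax: str      # syntactic category of the stored piece
    semantics: str   # the associated concept (placeholder label here)

lexicon = [
    LexicalEntry("purple", "Adj", "PURPLE"),
    # 'purpleness' is deliberately absent: it is a grammatical word
    # constructed on the fly from stored parts, not a lexical item.
    LexicalEntry("chew the fat", "VP", "CONVERSE-CASUALLY"),
]
```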

According to this extended conception of the mental lexicon, it contains not only words,
but thousands of phrasal idioms. It also contains collocations that have literal interpretations but
are known to be the “right” way to say things, e.g. phrases like black and white rather than
#white and black, and clichés such as light as a feather and red as a beet. In addition to such
items and a vast number of other stored multi-word expressions (Christiansen and Arnon 2017;
Culicover, Jackendoff, and Audring 2017), the lexicon must also contain morphological affixes,
some of which are meaningful (e.g. -ful and un-), and some of which are not (e.g. accusative case).2

1. Alternatively, idioms are sometimes thought to be stored in a different “place” from words, such as an “idiom list.” This does not change the issue of their semantics, which resembles that of words in (nearly) every respect.

Perhaps more surprisingly, the lexicon also includes syntactic constructions that are
linked to idiosyncratic meanings. An example is the N of an N pattern illustrated in (1).

(1) a travesty of an experiment (≈ ‘an experiment that was a travesty’)
that gem of a result (≈ ‘that result, which was a gem’)

The syntactic heads in (1) (travesty, gem) are understood as modifiers that offer a negative or
positive evaluation of the syntactic dependent (experiment, result).3 This contrasts with the
canonical interpretation of this syntactic configuration, in which the syntactic head is also the
semantic head. For instance, a picture of a cat denotes a picture, not a cat. Idiomatic patterns
like (1) must be learned and listed in the lexicon; they are the stock in trade of Construction
Grammar (Fillmore et al. 1988; Jackendoff 1990; Goldberg 1995; Croft 2001; Hoffmann and
Trousdale 2013).4

Turning now to the term semantics: Many researchers have sought to limit semantics to
those aspects of meaning that are specific to grammar or specific to language. To pick a random
example, Bierwisch and Lang 1989, Levinson 1997, and Lang and Maienborn 2019 make a
distinction between “Semantic Form” (SF) and “Conceptual Structure” (CS). SF includes only
the aspects of meaning that are contributed directly by the words in a sentence. CS specifies the
full intended message, including implicatures, coercions, fixing of deictic reference,
nonlinguistic context, and so on; these extra factors are taken to be the responsibility of
pragmatics. Pietroski 2018 proposes an even narrower use of the term: word meanings do not
themselves have semantic content but are rather “instructions” for “fetching” and “assembling”
concepts.

Here, however, we will understand the term “semantics” in a broader sense, roughly
equivalent to CS in the sense of Bierwisch et al.: a word meaning is the concept that the word
expresses, and the study of word meanings overlaps substantially with that of concepts.
Learning a word meaning involves acquiring a concept and associating it with phonological and
syntactic representations. “Pragmatics” will be regarded as the collection of processes that
supply the parts of utterance meaning that do not come directly from the words, the idioms, or
the meaningful syntactic constructions (see Schwarz and Zehr, this volume). For present
purposes, I don’t wish to make a fuss about the terminology; I just want to be clear what I intend
here. (Section 5.7 returns to the issue of limiting the scope of semantics.)

2. For a lexical treatment of non-affixal morphology such as ablaut, see Jackendoff and Audring 2020.
3. That the first noun has to be evaluative explains, for instance, why *that sailor of a violinist is no good but that butcher of a violinist is all right: butcher can be understood as an evaluation but sailor cannot.
4. If it should turn out that there is something special about the meanings of words in particular, so be it. But such a distinction can only be discovered by addressing the entire menagerie of phenomena. Given limitations of space, I will nevertheless concentrate here on the meanings of words.

5.1.2. What does a theory of semantics in the mental lexicon have to account for?

From the perspective of the mental lexicon, the central question of lexical semantics is:
What sorts of mental representations serve as the meanings of lexical items in the extended
lexicon?

A theory of semantics in the mental lexicon should satisfy at least eight partially
overlapping desiderata. The first is that the theory must be explicitly mental: it must relate to
the way that the mind conceives of the world. In fact, if approached with this focus in mind,
lexical semantics can provide one of the best sources of evidence about how humans understand
the world and act in it.

A second desideratum is descriptive adequacy or expressiveness: the theory must supply
each lexical item5 with a meaning, such that nonsynonymous items have distinct semantic
structures, and such that items that intuitively are related have related semantic structures. The
account must for instance include the meaning relations involved in polysemy, such as those
illustrated by the underlined words in (2).

(2) The mirror broke. ~ Bill broke the mirror.
the end of the rope ~ the end of the speech
the butter on the bread ~ Bill buttered the bread

A third criterion is compositionality: it must be possible for the meanings of lexical items
to be combined into phrase, sentence, and discourse meanings, with the help of pragmatics when
appropriate (see Stojnic and Lepore, this volume; Piñango, this volume). A fourth is
translatability: to the extent that accurate translation is possible, translation equivalents –
whether words or phrases – should have the same or at least very similar semantic structures.

A fifth desideratum is that lexical semantics must provide a formal account of inference,
such as the inferences in (3).

(3) Beth owns a dog → Beth owns an animal
Bill entered the room → Bill ended up in the room
Jill despises chess → Jill doesn’t like chess
Pam sold a book for $5 → Pam received $5
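The first inference in (3) is taxonomic, and one way such inferences might be modeled computationally can be sketched as follows. The tiny "is-a" hierarchy and the function name are my own illustration, not part of the chapter's proposal.

```python
# Hypothetical sketch: taxonomic inference ('Beth owns a dog' entails
# 'Beth owns an animal') via an explicit hypernym ("is-a") hierarchy.
# The toy taxonomy below is invented for illustration.

HYPERNYMS = {
    "dog": "animal",
    "cat": "animal",
    "animal": "organism",
}

def is_a(word, category):
    """Follow 'is-a' links upward; True if category is reachable."""
    while word is not None:
        if word == category:
            return True
        word = HYPERNYMS.get(word)
    return False

print(is_a("dog", "animal"))   # the entailment goes through
print(is_a("animal", "dog"))   # but only in one direction
```

Note that the inference is one-directional: owning an animal does not entail owning a dog, which the upward-only traversal captures.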

All five criteria so far concern the internal workings of the semantic system. A sixth
desideratum is an account of reference – how language expresses thoughts about the world. In a
theory of the mental lexicon, this issue has to be treated with special care. Meanings are encoded
deep in the brain, and their access to the real physical world out there is mediated by complex
perceptual computations that construct the world of our experience. Moreover, many words
refer to nonperceivable entities such as gods, mortgages, and theorems, which exist only by
virtue of human cognitive capacities. Therefore, in a properly mentalistic theory of reference,
linguistic expressions refer to the world as construed by the language user, rather than “the
world” simpliciter.6 Similarly, a mentalistic theory of truth has to concern not absolute objective
truth, but a language user’s judgments or convictions of truth.

5. Caveat: each meaningful lexical item. A few words, such as the do of English do-support, have no meaning and are used only as “grammatical glue” to fulfill syntactic requirements.

A seventh desideratum for lexical semantics is learnability: a language learner must be
able to construct and store the meanings of tens of thousands of lexical items on the basis of
linguistic and nonlinguistic input. The consequences of this condition require a little more
exegesis and lead to an eighth desideratum.

5.1.3. The open-endedness of lexical meanings

A founding tenet of modern linguistic theory is the productivity of language – its ability to
combine words into an unlimited number of sentences (Chomsky 1957, 1965, citing Humboldt
1836). Given the finiteness of the brain, one can neither learn nor store this unlimited repertoire,
so language users must possess a productive combinatorial system for creating linguistic
expressions – and their meanings – from smaller parts (Stojnic and Lepore, this volume).
Moreover, a language learner has to come equipped with the ability to induce this system on the
basis of linguistic and nonlinguistic input.

Less emphasized has been a parallel argument for the meanings of individual lexical items.
The languages of the world express an apparently unlimited number of lexical meanings, enough
to encompass all the things we and other cultures can name: kinds of objects, kinds of actions,
kinds of properties, kinds of relationships, and so on, with all their intricate shadings and
undertones. There is never a sense of running out of new concepts.

Of course, any single person stores only some finite number of lexical items. How are they
acquired? Fodor 1975 advocates that language learners come equipped with the full unlimited
range of potential lexical meanings, and that learners simply activate whichever of the innate
concepts correspond to words of their local language.7 But this cannot be. It is hardly plausible
that the concepts expressed by telephone and carburetor are innate – and were already innate in
the ancient Greeks and even in prehistoric hunter-gatherers. Rather, the space of possible lexical
meanings available to the learner has to be characterized in terms of a productive combinatorial
system, with a finite set of primitives and principles of combination. This finite base equips the
language learner to construct lexical meanings, on the basis of linguistic and nonlinguistic input.8

A final desideratum for a theory of lexical semantics, then, is to discover this system of
primitives and principles of combination – what might be called the “grammar of meaning.”

This does not imply that every language lexicalizes the same concepts (Landau, this volume) or
even that two speakers of the same language lexicalize the very same concepts in the very same
way. That is clearly not the case. Rather, as with syntax acquisition, a child comes to lexical
acquisition with the tools to construct any of the humanly possible lexical meanings, given
appropriate input.

6. An important component of the world “as construed” is a conviction of its reality. But this is a property of the cognitive system in general, not just semantics. Presumably monkeys and dogs likewise take the world to be real; they differ from us, however, in not being able to question the reality of what they experience.
7. Fodor’s main argument for the innateness of all concepts expressed by monomorphemic words is that it is impossible to specify most word meanings in terms of phrasal definitions (Fodor, Garrett, Walker, and Parkes 1980). However, definitions fail not because meanings are innate, but because many crucial features of word-internal semantic structure (to be discussed in sections 5.3-5.6) are simply not available in phrasal composition. Since definitions are by their nature phrasal, they cannot completely mirror the internal semantic structure of lexical items.
8. The primitives need not be binary features, though this is one option. There are also dimensions of continuous variation, for instance the 3-dimensional color space. And there must also be the possibility of function/argument structure. See sections 5.3-5.6.

5.2. Some semantic theories and their bearing on mentalist lexical semantics

Needless to say, there is no theory at present that satisfies all these desiderata. Approaches
differ as to which of the desiderata they engage with and emphasize. A few important
approaches bear mention here, at the risk of gross oversimplification of rich traditions of inquiry.

By far the most influential is formal or truth-conditional semantics (Heim and Kratzer
1998), growing out of philosophy of language and formal logic. Based on arguments of e.g.
Frege 1892, Lewis 1972, Montague 1974, and Putnam 1975, the foundations of this approach are
explicitly non-psychological, grounding inference and reference either in the real world or in a
set-theoretic model (which may include possible worlds). Hence the character of the mental
lexicon is rarely addressed, and neither is the problem of learnability. A principal concern of this
approach is how word meanings are composed into phrase and sentence meanings; and lexical
semantics tends to be characterized predominantly in terms of these compositional properties
(their type structure). But aside from quantifiers, intensional predicates like believe, and
functional categories such as determiners and tenses, little attention is paid to the internal
semantic structure of lexical items.9

Mainstream generative grammar (Chomsky 1965, 1981, 1995) has had virtually nothing
systematic to say about lexical semantics. Berwick and Chomsky 2016 (90) characterize lexical
items as “atomic elements,” “the minimal meaning-bearing elements of human languages –
wordlike, but not words”; these elements “pose deep mysteries.... Their origin is entirely
obscure, posing a very serious problem for the evolution of human cognitive capacities.” In this
approach, compositionality is yoked to syntax, lexical items have no (discernible) internal
structure, and there is no approach to inference, reference, or acquisition.

Quite a different approach is based on Wittgenstein’s (1953) dictum that identifies
meaning with use, and on Harris’s (1957) proposal that meaning is determined by co-occurrence.
Latent Semantic Analysis (Landauer and Dumais 1997) uses a large corpus to classify words in
terms of the collection of other words in their context. A more recent version is vector-space
semantics or Distributed Semantic Models (Lenci 2008, Baroni 2013), in which a word meaning
is encoded as a vector in a very high-dimensional space, again based on the distribution of its
contexts of use in a very large corpus. Items that are similar in meaning are located near each
other in the space. While such approaches to measuring semantic similarity may be able to
mimic certain human experimental results, and may be useful for search engines (Clark 2013),
analysis based solely on statistics of co-occurrence does not lend itself to an account of semantic
compositionality or an account of extralinguistic reference. Moreover, it is questionable whether
the elaborate statistical techniques for deriving semantic vectors bear any resemblance to human
lexical learning.

9. In principle, one could construct a mentalist version of model-theoretic semantics, by taking the model to be not sets of possible worlds but rather the model of the world that is constructed by the speaker’s cognitive systems. The character of such a model would be an empirical issue. Truth in such a model would amount to conformance to the speaker’s construal of the world, as advocated in section 5.1.3. Bach 1986 begins to go in this direction, and there are some subareas of formal semantics that adopt a mentalist approach, for instance work on scalar implicature (e.g. Chierchia, Fox, and Spector 2012; de Carvalho et al. 2016; Schwarz and Zehr, this volume).
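The core of the vector-space idea can be illustrated with a toy example. The four-dimensional vectors and the dimension labels below are invented for exposition; real Distributed Semantic Models derive thousands of dimensions from very large corpora.

```python
# Illustrative sketch of vector-space semantics: each word is a point in
# a space built from co-occurrence counts, and semantic similarity is
# measured geometrically, here by cosine similarity. The toy vectors are
# invented; dimensions might correspond to context words such as
# ("bark", "pet", "engine", "road").
import math

vectors = {
    "dog": [9.0, 8.0, 0.5, 0.2],
    "cat": [1.0, 9.0, 0.3, 0.1],
    "car": [0.2, 0.5, 9.0, 8.0],
}

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Words that occur in similar contexts end up near each other:
print(cosine(vectors["dog"], vectors["cat"]) >
      cosine(vectors["dog"], vectors["car"]))
```

The sketch also makes the limitation in the text concrete: the geometry delivers a similarity score, but nothing in the vectors themselves supports composition into sentence meanings or reference to things in the world.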

More cognitively based frameworks include Conceptual Semantics (Jackendoff 1983,
1990, 2002; Pinker 1989, 2007); Cognitive Grammar (Fauconnier 1985; Langacker 1987; Lakoff
1987; Talmy 2000; Geeraerts 2010); Generative Lexicon (Pustejovsky 1995, Pustejovsky and
Batiukova 2017); Geometry of Meaning (Gärdenfors 2014); Natural Semantic Metalanguage
(NSM: Wierzbicka 1985), and the extensive lexical decompositions of Miller and Johnson-Laird
1976. These approaches take meaning to be situated in the mind. They tend to focus on
expressiveness, analyzing large families of lexical meanings, and seeking analyses in terms of
psychologically plausible primitives. With the exception of NSM, they are all concerned with
compositionality – and with breakdowns of strict compositionality in phenomena such as
coercion, metaphor, and meaningful idiomatic constructions (in the sense of section 5.1). They
differ in formalism and in the extent to which they deal explicitly with inference, reference, and
learnability.10 Oddly enough, the phenomena they concentrate on are to some degree orthogonal
to those addressed in formal semantics: they are concerned more with words like chair and cup
than with words like every and only.

5.3. Major components of lexical semantics

5.3.1. The contribution of spatial structure

We now turn to some of the issues of semantic description that bear on the character of
lexical meanings. The first of these issues is posed by Miller and Johnson-Laird 1976,
Macnamara 1978, and Jackendoff 1987a: How can we talk about what we see? For a simple
case, consider the utterance That is a chair, accompanied by a pointing gesture. In order to
construct or comprehend this utterance, a speaker has to integrate information from the visual
system with information from the language system, in particular identifying a visually perceived
individual as the intended referent of the linguistic expression that. Furthermore, for the
language user to judge the utterance true or false, this visual information has to be compared
with an internal representation of what chairs look like.

What is the form of this information? Putative features like [+has-legs], [+has-back-support],
or [+for-sitting] are not very useful: they do not generalize to any significant class of
examples. In particular, chair legs and chair backs are quite different in character from human
and animal legs and backs. A better hypothesis is that these characteristics of chairs are encoded
in a quasi-geometric or topological format, the highest level of the visual system, in which
objects are represented in a perspective-independent fashion, in spirit following Marr’s (1982)
“3D model” and Kosslyn’s (1980) “skeletal image.”

10. An important outlier among the mentalist theories is Fodor’s (1975, 1987) Language of Thought Hypothesis. Fodor definitely wishes to situate lexical semantics in the mind. But he rejects at least two of the desiderata for lexical semantics. First, as observed in section 5.1.3, he insists that word meanings are noncompositional and innate. Second, he takes a partly nonmentalistic view of reference, insisting that word meanings are intentional, i.e. that they refer to entities in “the world,” rather than to the world as construed by the language user (for discussion, see Jackendoff 2002, section 9.4). Fodor’s arguments are almost entirely programmatic; there is little engagement with details of linguistic analysis.

This format, which might be called spatial structure, is not simply a visual image of
some instance or of a prototype. It has to be able to represent objects schematically, abstracting
away from details, and independent of viewpoint. It has to represent the full forms of objects,
not just their visible surfaces, and it has to represent the configuration and motion of objects in a
scene. It is not exclusively visual: object shape and spatial configuration can also be derived
through hapsis (the sense of touch). Moreover, the shape and spatial configuration of one’s own
body can be derived through proprioception (the body senses) and altered through the production
of actions. Thus spatial structure can be conceived of as a central level of cognition, coding the
physical world in relatively modality-independent fashion. It is moreover a necessary
component of the mind, even in the absence of language: presumably our primate relatives
encode the physical world in much the same way as we do.

Turning back to lexical semantics, the hypothesis is that part of the lexical entry for the
word chair is a schema in spatial structure that delimits the variation in shape and size of chairs.
Similarly, the lexical entry for the verb sit includes a link to a spatial structure that encodes the
action of a schematic individual performing this action, including as context a horizontal
supporting surface. The fact that a chair is used for sitting involves composing these schematic
representations.

In principle, many aspects of lexical meaning can be offloaded onto spatial structure –
not just object shape, but for instance color, surface pattern (striped, polka-dotted), texture
(smooth, lumpy), trajectory of motion (encircle, zigzag), and manner of motion (sprint, waddle).
In addition, of course, known faces have to be coded spatially and associated (in the lexicon?
where else?) with names.

To the best of my knowledge, there are no formal theories of spatial representations that
even begin to approach the task of supplying the range of distinctions demanded by the semantic
richness of language. (Landau and Jackendoff 1993 is an attempt in the highly restricted domain
of spatial prepositions; Epstein and Baker 2019 review brain localization of various features of
perceived scenes.) However, it must be stressed that such a representation is necessary in any
event in order to explain vision, hapsis, proprioception, and their interactions. It overlaps (and is
perhaps coextensive with) the “core knowledge” of objects explored by Spelke 2000 and Carey
2009. At the same time, for the purposes of lexical semantics, it enables language to make
contact with perception of the world, and it enables the theory to eliminate a plethora of
unmotivated semantic features such as [+has-legs].

It should go without saying that there must be further domains of perceptual
representations, namely sounds, smells, and tastes, and that these too have to be included in
lexical representations where appropriate. For instance, the lexical entry for laugh must include
a schematic encoding of what laughing sounds like – what Lila Gleitman (p.c.) has called the
“ha-ha part” – perhaps linked to a spatial schema of what laughing looks like and/or a
proprioceptive representation of what it feels like to laugh.

5.3.2. The contribution of conceptual structure

However, spatial structure alone cannot make all the distinctions necessary for lexical
meaning. Many other elements lend themselves to a more conventional algebraic or feature-
based representation that I’ll call conceptual structure. These include:

• The type-token distinction. All perceived entities are tokens. But a stored spatial
representation, say of a dog, could be either a token (‘my dog Rover’) or a type (‘dogs
that look a certain way’). A binary feature is necessary to distinguish them.
• Taxonomic relations: ‘X is an instance/subtype of Y.’ A classic case is furniture, whose
instances don’t look at all alike, hence cannot fall under a common schema in spatial
structure. Rather, this relation has to be encoded in conceptual structure.
• Unobservable temporal relations: ‘Event/Situation X is such-and-such a distance in the
past/future.’ The time at which something unobservable takes place is not a spatial
property.
• Aspectual predicates such as ‘begin’, ‘continue’, and ‘end’ draw attention to the shapes
of events (Piñango, this volume). An image of an event alone cannot pick out the onset
of the event as opposed to the event itself.
• Causal relations: ‘Event X causes/enables/impedes Event Y.’ As argued by Hume 1748
and demonstrated experimentally by Michotte 1954, causation is a notion cognitively
imposed on visual perception, above and beyond the motion and contact of objects.11
• Modality: ‘X is possible’ is not part of the appearance of X. Similarly, Sherlock
Holmes’s looks are not what makes him fictional. And the ambiguity of John wants to
buy a car – in which car can be specific or nonspecific – has nothing to do with how the
car looks. (See Hacquard, this volume.)
• Social notions: ‘X is the name of Y’, ‘X is dominant to Y’, ‘X is kin to/friend of/enemy
of Y’, ‘X is a member of group Z’, ‘X is obligated to perform action W’. Many of these
have counterparts in primate societies (Cheney and Seyfarth 1990; Jackendoff 2007) and
they have nothing to do with how the individuals in question look.
• Theory of mind notions: ‘X believes Y’, ‘X imagines Y’, ‘X intends action Y’ are
unobservable and therefore must be encoded in conceptual structure (see Landau, this
volume).

Thus the work of lexical semantics has to be divided between spatial and conceptual structure,
each contributing its own characteristic forms of information. In short, lexical semantics
involves multiple domains of mental representation.12
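The division of labor between the two domains can be sketched schematically. This is my own toy formalization, not the chapter's notation: the schema identifiers are placeholders standing in for genuine spatial-structure representations.

```python
# Toy sketch of the division of labor in a word meaning: a spatial-structure
# component (here just a placeholder schema ID) paired with conceptual-
# structure features that are not perceptual, such as the type/token
# distinction and taxonomic links.
chair = {
    "spatial_structure": "CHAIR-SCHEMA",   # pointer to a shape schema
    "conceptual_structure": {
        "token": False,                    # a type, not an individual
        "subtype_of": "furniture",         # taxonomy lives here, since
                                           # furniture has no common shape
    },
}
rover = {
    "spatial_structure": "DOG-SCHEMA",     # the same schema could serve a type
    "conceptual_structure": {
        "token": True,                     # 'my dog Rover', an individual
        "instance_of": "dog",
    },
}
```

The point of the sketch is that the same stored spatial schema can serve either a type or a token; only the conceptual-structure feature distinguishes them.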

11. However, if one of the objects involved is oneself, either in exerting force on another entity or in being the recipient of force, causation might be perceived more directly through proprioception of exertion.
12. See Jackendoff 1996 for discussion of how to decide whether particular phenomena belong to spatial structure, conceptual structure, or some combination.

5.4. Semantic decomposition, but not into necessary and sufficient conditions

The tradition in philosophy of language (e.g. Tarski 1956, Katz 1966) is that sentences are to
be evaluated for truth value in terms of a set of necessary and sufficient conditions. The meaning
of a word, say cat, can be thought of as the necessary and sufficient conditions for the sentence X
is a cat to be true, that is, X meets the conditions associated with cat.

However, this criterion must be rejected. First, consider the sentences in (4).

(4) a. This object is red.
b. This object is orange.

As we move along the spectrum from focal red to focal orange, exactly where does (4a) stop
being true and (4b) start being true? There is no fact of the matter. If the question is posed
experimentally, people’s reaction times to colors in the border region between the two are
slower; their judgments may differ depending on the color of previously presented examples; and
they don’t all agree (Murphy 2002). It is not that there is a “true” meaning of red to be
established by science but people don’t know it (as with Putnam’s (1975) tigers, gold, and
water). Nor can the problem be solved by calling the shades in this range red-orange: we then
face the same border problem, this time between red and red-orange. Rather, the meaning of red
simply has vague boundaries surrounding focal red, in part hemmed in by nearby colors in color
space.13
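The idea of vague boundaries around a focal value can be sketched as graded membership. The focal colors, the distance-based membership function, and the spread parameter below are my own illustration of the general point, not a claim about how color categories are actually represented.

```python
# Minimal sketch of graded category membership: 'red' has a focal point
# in a 3-dimensional color space, and membership falls off continuously
# with distance from it, so there is no sharp red/orange boundary.
# Focal RGB values and the spread parameter are illustrative assumptions.
import math

FOCAL = {"red": (255, 0, 0), "orange": (255, 128, 0)}

def membership(color, category, spread=120.0):
    """Graded membership in [0, 1], highest at the focal color."""
    d = math.dist(color, FOCAL[category])
    return math.exp(-(d / spread) ** 2)

borderline = (255, 64, 0)   # between focal red and focal orange
print(membership(borderline, "red") > 0.5)
print(membership(borderline, "orange") > 0.5)
# Both memberships are substantial: the border region is genuinely vague,
# matching the slowed, variable judgments reported for such colors.
```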

Another well-known case is the sorites paradox: How many hairs can one have on one’s
head and still be bald? Here again there is no exact fact of the matter – and there certainly is not
going to be a science of baldness that can resolve the issue. Rather, there is a focal notion of
baldness – no hair at all – plus a vague upward boundary.

A more consequential example is at what point during gestation a human fetus becomes a
person – as though there is a sharp boundary. It is the very indeterminacy of the answer that
allows it to be politically manipulated.

A different class of examples does have defining conditions, but they are neither
individually necessary nor collectively sufficient. A typical example is the verb climb (Fillmore
1982a, Jackendoff 1985). Intuition suggests that climb denotes an action that involves (a)
moving upward and (b) roughly, moving along a vertical-ish surface with effort – what might be
termed ‘clambering.’ (Notice that both conditions involve spatial structure.) A sentence like
(5a) conforms to both these conditions.

(5) a. The bear climbed the tree. [upward motion + clambering]
b. The bear climbed down the tree/across the ledge. [only clambering]

13
If one wishes to treat a word meaning as a mentally represented prototype (Rosch 1978), it is still necessary to
establish a range of variation: scarlet is far narrower than red.
Note also that particular idiosyncratic combinations can be learned and stored. For instance, a redhead
does not have a head that is focal red, but rather has hair of a particular range of shades approximating red. The
same goes for Fodor’s example pet fish (see Lepore and Stojnic, this volume), which is neither a prototypical pet nor
a prototypical fish: one learns what sorts of fish are prototypically kept as pets and thereby overrides compositional
typicality. A truly novel combination, say pet marsupial, evokes the prototype, in this case a kangaroo or possibly
an opossum.

c. The plane climbed to 20,000 feet. [only upward motion]
The temperature climbed to 35°C. [only upward motion]
d. *The plane climbed down to 20,000 feet. [neither]
*The temperature climbed down to minus 10°C. [neither]

(5b) violates the condition of upward movement; it involves only clambering. So upward
movement is not necessary, and clambering is sufficient. On the other hand, (5c) violates the
condition of clambering and involves only motion upward. So clambering is not necessary, and
motion upward is sufficient. Finally, the sentences in (5d) conform to neither condition, and they
are not judged to count as climbing. Hence at least one or the other of the two conditions is
necessary, and either is sufficient.

One might be tempted to claim, along the lines of Katz 1966, that climb is polysemous: one
sense denotes clambering and appears in (5b), while the other denotes upward motion and
appears in (5c). But that implies that (5a) is ambiguous, and thus that one can ask which sense of
climb the speaker of (5a) intends. This misrepresents what is going on: (5a) partakes of both
senses at once, as suggested by the intuitive analysis of climb. Hence the two conditions together
denote stereotypical climbing, and in neutral contexts both are invoked by default. At the same
time, either condition can be dropped to yield a more “extended” or less stereotypical denotation.

We therefore arrive at a configuration within a lexical meaning in which conditions are not
logically conjoined. A new principle of combination is necessary in the repertoire of semantic
composition. Jackendoff 1983 uses the term preference rules for conditions combined in this
fashion: they are both preferable, but either is sufficient on its own. The relation is more
specific than logical inclusive or, in that when a referent meets both conditions, it is judged not
just acceptable, but more stereotypical (Rosch and Mervis 1975, Murphy 2002).14
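The logic of a preference rule can be sketched computationally. The following Python fragment is purely illustrative (the function name and boolean encoding are my own, not part of Conceptual Semantics): each of the two conditions on climb is sufficient on its own, satisfying both yields a stereotypical instance, and satisfying neither excludes membership.

```python
# Illustrative sketch of preference-rule combination for "climb":
# conditions combined neither by AND (either alone suffices) nor by
# plain OR (meeting both is judged more stereotypical, not just acceptable).

def classify_climb(upward_motion: bool, clambering: bool) -> str:
    """Combine the two conditions by preference rule."""
    met = sum([upward_motion, clambering])
    if met == 2:
        return "stereotypical"   # (5a) The bear climbed the tree.
    if met == 1:
        return "extended"        # (5b) climbed down; (5c) the plane climbed
    return "excluded"            # (5d) *The plane climbed down ...

assert classify_climb(True, True) == "stereotypical"
assert classify_climb(False, True) == "extended"
assert classify_climb(True, False) == "extended"
assert classify_climb(False, False) == "excluded"
```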

A well-known case of such a configuration, involving more conditions, is Wittgenstein’s (1953) famous example of the word game: a variety of conditions in a preference rule
configuration, no single one of which is necessary. With multiple conditions in play, the result is
Wittgenstein’s “family resemblance” relation. A more telling example is mother (Lakoff 1987).
A stereotypical mother is (a) related genetically to the child, (b) gives birth to the child, and (c)
raises the child. However, these three functions can be distributed among two or even three
individuals, in which case it is not clear which one(s) to call the mother, and the judgment
depends heavily on the interests of the judger and/or legal fiat.

From the perspective of traditional philosophy of language, preference rule relationships look rather exotic and perhaps even out of control. But in human cognition they are nothing
special. They are ubiquitous in visual perception, as observed as early as Wertheimer 1923 (see
also Labov 1973 and Jackendoff 1983). And they appear even in seemingly elementary
distinctions such as phonetic perception, where the difference between a perceived ‘d’ and a
perceived ‘t’ may be signaled by any combination of voice onset time, interval of vocal tract
closure, and length of the preceding vowel (Liberman and Studdert-Kennedy 1977). Thus

14 Relation to a focal instance and characterization in terms of preference rules are not the only source of
prototypicality judgments. For instance, frequency evidently plays a role: this presumably accounts for Armstrong,
Gleitman, and Gleitman’s (1983) finding that people judge 4 to be a more prototypical even number than 206.

preference rule phenomena are characteristic of mental computation, nonlinguistic as well as
linguistic. Hence there should be no objection to admitting preference rule composition as a
fundamental principle of combination in lexical semantics.

5.5. Ontology: The kinds of entities we can refer to

5.5.1. Demonstratives as evidence for basic ontological categories

Section 5.3.1 observed that a demonstrative (or deictic) such as that can be coupled with a
spatial representation, thereby referring to an object in the (construed) environment. However,
demonstratives also have numerous other uses, apparently referring to other sorts of entities,
such as the underlined terms in (6).

(6) a. Please put your hat here and your coat there. [pointing] [location]
b. He went thataway. [pointing] [trajectory]
c. Please don’t do that around here. [pointing, gesturing] [action]
d. The fish I caught was this big. [demonstrating] [distance]
e. This many people were at the party. [holding up 4 fingers] [amount/number]
f. Can you walk like this? [demonstrating a waddle] [manner]

In each case, the demonstrative invites the hearer to pick out some referent in the perceptual
field. The kind of referent is determined by the linguistic expression. For instance, the
locative demonstratives here and there in (6a) invite the hearer to pick out a location; the
collocation do that in (6c) invites the hearer to pick out an action, and the degree use of that in
(6d), preceding an adjective denoting size, invites the hearer to pick out a size or distance.

These examples show that we can refer to all these sorts of entities as if they exist in the
perceivable world – a much richer repertoire than is usually considered in theories of either
semantics or visual perception.15 These various types of entities might be considered, so to
speak, as “semantic parts of speech” or ontological categories – the basic types of entities that
inhabit the conceptualized world. The list in (6) is hardly exhaustive: there are ontological
categories for other modalities of perception, such as sounds (Did you hear that?), linguistic
expressions (Did he really say that?), tastes, smells, and pains, as well as nonperceivable
ontological types such as information and values. However, I conjecture that there is a finite set
of them that serve as primitive features in the conceptual repertoire.

At the base of a lexical item’s conceptual structure, then, is its ontological category. The
meanings of the demonstratives in (6) consist of little more than that; all the details of the
intended referent are filled in from the visual field. But most other words do carry further
structure in their lexical entries, which distinguishes them from all other words in their category.

15 Traditional philosophy of language and formal semantics typically restrict the ontology to individuals (usually persons), properties, truth values, and, since Davidson 1967, events and/or actions.
How all of these types of entities are picked out of the visual field is largely unknown, though there has been considerable progress on event perception (e.g. Ünal, Ji, and Papafragou 2019, Loschky et al. 2020, Zacks 2020).

5.5.2. Polysemy that straddles ontological categories

Continuing with the theme of ontological categories, one of the earliest observations in
Conceptual Semantics originates with Gruber 1965: many lexical items and grammatical
patterns that are used to describe objects in space also appear in expressions that describe
nonspatial domains. Consider (7)-(10), especially the parallels among them in the use of the
underlined words.

(7) Spatial location and motion
a. The cat is on the mat. [Location]
b. The cat went from the mat to the window. [Change of location]
c. Fred kept the cat on the roof. [Caused stasis]
(8) Possession
a. The money is Fred’s. [Possession]
b. The inheritance finally went to Fred. [Change of possession]
c. Fred kept the money. [Caused stasis]
(9) Ascription of properties
a. The light is red. [Simple property]
b. The light went/changed from green to red. [Change of property]
c. The cop kept the light red. [Caused stasis]
(10) Scheduling activities
a. The meeting is on Monday. [Simple schedule]
b. The meeting was changed from Monday to Tuesday. [Change of schedule]
c. The chairman kept the meeting on Monday. [Caused stasis]

The lexical and grammatical parallelisms in these examples (and many more, cross-linguistically) suggest that various semantic domains (or “semantic fields”) have partially
parallel structure. They differ in the basic semantic relation on which each field is built, as
expressed in the (a) sentences. In (7a), the basic relation is between an object and a spatial
location; in (8a), it is between an object and who it belongs to – a sophisticated social notion
involving rights and obligations (Snare 1972, Miller and Johnson-Laird 1976, Jackendoff 2007).
In (9a), the relation is between an object and its “location” in a “property space” such as size,
color, value, or even emotional affect (such as angry and placid). In (10a), the relation is
between an action and, as it were, its assigned “location” on the time-line. These disparate
relations are elaborated with parallel machinery: the (b) sentences express change over time in
the relation in question; the (c) sentences express this relation remaining intact over time, by
virtue of the agency of someone or something.

Important to this analysis is that many verbs and prepositions occur over and over again
in different semantic fields – not entirely consistently, but frequently enough not to be just a
coincidence. For a realist (or “externalist”) semantics, these parallelisms make little sense. In
the real world, the spatial location and motion of an object has nothing to do with who it belongs
to (e.g. houses change owners without changing location) or what color or size it is; and there is
no real-world reason why future actions can be shuttled around in time the same way objects are

12
moved in space. In contrast, for a mentalist (or “internalist”) theory, these parallelisms help
reveal the grain of human conceptualization: these semantic fields share part of their structure,
and the parallelisms encourage corresponding parallelisms in linguistic expression.

Going back to the issue of lexical decomposition, this analysis suggests that the meanings of
verbs and prepositions are not semantic primitives: they must include a “semantic field feature”
that says which domain(s) they belong to. The possible values of the field feature overlap in part
with the ontological categories of the previous section, in particular including spatial
configurations of objects and temporal configurations of actions.
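A semantic field feature can be sketched as a parameter of schematic functions shared across the domains in (7)-(10). The sketch below is purely illustrative: the function names loosely follow the framework's BE/STAY/CAUSE vocabulary, but the tuple encoding is my own assumption, not the chapter's formalism.

```python
# Illustrative sketch: the same schematic skeleton parameterized by a
# "semantic field feature", as in the parallels among (7)-(10).

def BE(theme, location, field):    return ("State", "BE", field, theme, location)
def STAY(theme, location, field):  return ("Event", "STAY", field, theme, location)
def CAUSE(agent, event):           return ("Event", "CAUSE", agent, event)

# (7a) The cat is on the mat.           (8a) The money is Fred's.
be_spatial = BE("CAT", "ON_MAT", "Spatial")
be_poss    = BE("MONEY", "AT_FRED", "Possession")
assert be_spatial[:2] == be_poss[:2] == ("State", "BE")

# (7c) Fred kept the cat on the roof.   (8c) Fred kept the money.
keep_spatial = CAUSE("FRED", STAY("CAT", "ON_ROOF", "Spatial"))
keep_poss    = CAUSE("FRED", STAY("MONEY", "AT_FRED", "Possession"))

# "keep" contributes the same CAUSE-STAY skeleton; only the field differs:
assert keep_spatial[1] == keep_poss[1] == "CAUSE"
assert keep_spatial[3][2] != keep_poss[3][2]   # the field feature
```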

This parallelism among semantic domains is central to Conceptual Semantics (Jackendoff 1983, 1990), and to Cognitive Grammar (Langacker 1987, Lakoff and Johnson 1980, Talmy
2000). In the latter framework, the parallelism is often attributed to metaphor and/or embodied
cognition (Varela, Thompson, and Rosch 1991), with spatial location and motion as the “source
domain” from which the other domains are derived by analogy. Conceptual Semantics, however,
takes the position that the expressions in (8)-(10) are not metaphors, in the sense of colorful
extensions of spatial language and conceptualization. Rather, they are the ordinary way that
English gives us to speak about these domains, and they share part of their structure with each
other. Nevertheless, as in the Cognitive Grammar view, the spatial domain has extra salience,
because of its richness (e.g. more than a single dimension), its perceivability, its support from
spatial structure, and especially because of its role in guiding physical action.

The upshot is that the choice of semantic field has to be a feature of word meanings, and at
least some values of this feature are likely semantic primitives.

5.5.3. Dot-objects

Multiple ontological categories not only can distinguish different senses of a polysemous
word, as shown in the previous section: they can also coexist within a single sense of a word. A
well-known case is the word book (Pustejovsky 1995, Pustejovsky and Batiukova 2017). A
stereotypical book is a physical object, consisting of bound pages with writing on them.
However, the book also has informational content, and in that capacity it belongs in the
conceptual domain of information. Pustejovsky notates this dual allegiance as ‘object •
information,’ hence the term dot-object.

As with climb in the previous section, one might contend that book is ambiguous between
the two senses. But both senses can be attributed to the same token of the word, as in That book
is over 400 pages long, but it has a great ending, with no sense of ambiguity. Instead, these two
conditions stand in a preference rule relationship: an e-book is not a physical object but carries
an appropriate amount of informational content; a blank notebook is a physical object but has no
informational content (yet); but a stereotypical book is both physical and informational.
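The dual allegiance of a dot-object can be sketched as a bundle of facets in different conceptual domains, either of which is defeasible. The encoding below (class and field names included) is my own illustration, not Pustejovsky's formalism.

```python
# Illustrative sketch: book as 'object • information'. An e-book lacks
# the physical facet; a blank notebook lacks the informational facet;
# a stereotypical book has both - a preference-rule relationship.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicalFacet:
    pages: int

@dataclass
class InfoFacet:
    topic: str

@dataclass
class Book:
    physical: Optional[PhysicalFacet]
    info: Optional[InfoFacet]

    def is_stereotypical(self) -> bool:
        return self.physical is not None and self.info is not None

ebook = Book(physical=None, info=InfoFacet("whaling"))
blank_notebook = Book(physical=PhysicalFacet(200), info=None)
bound_novel = Book(physical=PhysicalFacet(635), info=InfoFacet("whaling"))

assert bound_novel.is_stereotypical()
assert not ebook.is_stereotypical()
assert not blank_notebook.is_stereotypical()
```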

Dot-objects, like preference rules, are ubiquitous. Reading is a dot-activity – physically scanning a page and receiving informational content from it. A university is a dot-object,
consisting of a collection of physical objects but also performing a complex social function:
Tufts University, located in beautiful Medford, Massachusetts, offers degrees in 43 subjects. In

this example the physical existence has recently become defeasible, given the ascendance of
online universities, but the social function is necessary. A more complex dot-action is stealing:
moving an object (physical) that does not belong to one (social/legal) from one place to another
(physical) so as to conceal it (physical) from the rightful owner (social/legal); and this action has
an associated negative moral value (moral).

An extremely important dot-object in human conceptualization is the concept of a person, crucial to the understanding of all social and moral predicates. All cultures (that I have ever
heard of) conceptualize a person as a linkage of an animate body in the physical domain with
what is variously called a mind, a self, a spirit, or a soul (Bloom 2004, Jackendoff 2007), in the
personal or social domain. Faces, hands, livers, and so on belong to the physical, while theory of
mind, moral responsibility, rights, and obligations belong to the personal/social domain.

To elaborate a little: Whatever cognitive neuroscience may tell us (e.g. Dennett 1991,
Damasio 1994, Crick 1994), these two aspects of a person are conceptualized as separable. A
zombie is a body bereft of its soul. Ghosts, angels, gods, ancestors, and souls ascending to
heaven are often conceived of as humanlike but without human bodies. Souls can come to
inhabit different bodies through reincarnation, metamorphosis (as in Kafka and the frog prince),
body-switching (as in Freaky Friday), and spirit possession. Multiple personality disorder is
reportedly experienced as multiple persons competing for control of the same body (Humphrey
and Dennett 1989). An individual suffering from Capgras Syndrome experiences a loved one as
an impostor, i.e. physically indistinguishable from the person in question but with the wrong
personal identity (McKay, Langdon, and Coltheart 2005). In each case, personal identity (and
hence moral responsibility) goes with the mind/soul. Kafka’s Gregor Samsa is still himself,
trapped in the body of a giant cockroach; the mother and daughter in Freaky Friday trade bodies,
not minds.

We have no trouble understanding such situations, bizarre as they are. They are
commonplace in fiction, legend, and religion. This suggests that this very complex dot-object
concept of a person is either innate or a remarkably persistent cultural invention; I would vote for
the former. In any event, the notion of personhood is central not only to lexical semantics but to
social and cultural cognition in general.

5.6. Combinatoriality

5.6.1. Argument structure

After ontological category, perhaps the most fundamental feature of word meanings, and the
one addressed by virtually every theory of lexical semantics, is argument structure. This is a
basic component of semantic compositionality and a crucial part of the interface between
semantics and syntax. For instance, the meaning of a verb is a conceptualized situation, event, or
action involving a certain number of entities, each of which is specified by an open typed
variable – the verb’s semantic arguments. These variables are instantiated in language by the
meanings of the verb’s syntactic arguments. Consider the sentence The lion chased the bear.
The action of chasing requires two semantic arguments – the individual doing the chasing and
the one being chased – and these are expressed as the syntactic subject and object of the sentence

respectively. The “chaser” argument must furthermore be typed as animate and as having the
intention of catching the “chasee.” The latter is also animate, but defeasibly so – one can chase a
runaway vehicle, for instance. Thus the meaning of chase can be thought of as a schematic event
in which these two individuals are characters.

Some approaches classify semantic arguments in terms of thematic roles (also called theta-roles), following Gruber 1965 and Jackendoff 1972. An entity being located or in motion is
termed a Theme; an entity performing an action is an Agent; an entity being acted upon is a
Patient; an endpoint of motion is a Goal; and so on. Depending on the theory, thematic roles are
treated as features that a predicate assigns to its semantic (or even syntactic) arguments
(Chomsky 1981, Dowty 1991, Tenny 1994, Levin and Rappaport Hovav 2005), or as organized
in a separate argument structure “tier” that regulates the correspondence between syntax and
semantics (Grimshaw 1990). Alternatively, as argued in Jackendoff 1987b, 1990, thematic roles
can be regarded as informal terms for structural configurations in semantic structure. For
instance, Theme is the informal term for the first argument of the schematic event GO, Agent is
the term for the first argument of the schematic event CAUSE, and so on.
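The structural-configuration view of thematic roles can be sketched as follows. This is purely illustrative: the nested-tuple encoding is my own assumption, and the role labels are simply read off structural positions rather than stored as features.

```python
# Illustrative sketch: Theme as the first argument of GO, Agent as the
# first argument of CAUSE - labels for positions, not primitive features.

def GO(theme, path):     return ("GO", theme, path)
def CAUSE(agent, event): return ("CAUSE", agent, event)

def agent_of(e):
    return e[1] if e[0] == "CAUSE" else None

def theme_of(e):
    if e[0] == "GO":
        return e[1]
    if e[0] == "CAUSE":          # look inside the caused event
        return theme_of(e[2])
    return None

# (13c) Tom put the book on the table, glossed roughly as in (21a):
put_event = CAUSE("TOM", GO("BOOK", "TO_ON_TABLE"))
assert agent_of(put_event) == "TOM"
assert theme_of(put_event) == "BOOK"
```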

The relation between the semantic and the syntactic argument structure of a verb is often
assumed to be one-to-one, so one can speak simply of a verb’s argument structure. This
assumption surfaces in syntactic theory as Chomsky’s (1981) theta-criterion and Baker’s (1988)
Uniform Theta Assignment Hypothesis (UTAH). However, exploration of the terrain reveals
that this relation is actually quite complex; I will only sketch it here.

First, a verb can stipulate anywhere from zero to four semantic and syntactic arguments:

(11) a. Zero arguments
It’s drizzling. [where it is a meaningless syntactic argument, necessary (in English but not in, e.g. Spanish) to fill the subject position]
b. One argument
Dan is sleeping. The door opened. Sue sneezed.
c. Two arguments
The lion chased the bear. Bill fears snakes.
d. Three arguments
Amy gave Tom a present. Beth named the baby Bayla.
e. Four arguments
Henry traded Judy his candy bar for a balloon.

However, some semantic arguments of some two- to four-argument verbs need not be expressed
syntactically (12). For instance, one cannot eat without eating something, so eat has two
semantic arguments. However, the entity being eaten can be left implicit, so that Peter is eating
has only a single syntactic argument.

(12) a. Two semantic arguments; one or two syntactic arguments
Peter is eating (a pizza).
b. Three semantic arguments; one, two, or three syntactic arguments
Diane served (us) (lunch).

c. Four semantic arguments; two, three or four syntactic arguments
Henry sold (Judy) a bike (for $50).
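The mismatch between semantic and syntactic arguments in (12) can be sketched as a lexical entry that flags which semantic arguments may be left syntactically implicit. The encoding and role labels below are my own illustration, not a formalism from the chapter.

```python
# Illustrative sketch: semantic arguments with an "optional in syntax"
# flag, deriving the range of syntactic frames in (12a) and (12c).

from dataclasses import dataclass

@dataclass
class SemArg:
    role: str          # informal thematic-role label
    optional: bool     # may be omitted from syntax

EAT  = [SemArg("Agent", False), SemArg("Patient", True)]
SELL = [SemArg("Agent", False), SemArg("Recipient", True),
        SemArg("Theme", False), SemArg("Exchange", True)]

def syntactic_frames(args):
    """Possible numbers of syntactic arguments (required .. all)."""
    required = sum(not a.optional for a in args)
    return list(range(required, len(args) + 1))

assert syntactic_frames(EAT) == [1, 2]       # Peter is eating (a pizza).
assert syntactic_frames(SELL) == [2, 3, 4]   # Henry sold (Judy) a bike (for $50).
```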

Some semantic arguments must be expressed as adjectives, prepositional phrases, or clauses.

(13) a. Beth considers Bayla awesome.
b. The bird flew out the window.
c. Tom put the book on the table.
d. Sam believes that the sky is falling.
e. Ezra managed to confuse everyone.

A few verbs have more syntactic arguments than semantic arguments. For instance, the
actions of perjuring oneself and behaving oneself involve only one character. The reflexive
direct object is a syntactic argument, but adds nothing to the semantics. You can’t perjure or
behave someone else.

Many verbs appear in multiple syntactic frames, sometimes with different semantic
argument structure (14a,b), sometimes the same (14c,d) (see Levin 1993 for hundreds of
examples).

(14) a. The water boiled. Levi boiled the water. [≈ ‘Levi made the water boil’]
b. The tank filled. Judy filled the tank. [≈ ‘Judy made the tank get full’]
c. Amy gave Tom a present. Amy gave a present to Tom.
d. Henry sold Judy his bike. Henry sold his bike to Judy.

Verbs are not the only words with argument structure. The meaning of a noun can also
involve semantic arguments. Semantically, one cannot be a friend without being a friend of
someone, mentioned or not; something cannot be a part or an end without being a part or an end
of something. Nouns that are morphologically related to verbs typically inherit the verb’s
semantic argument structure, whether expressed syntactically or not. For instance, the
construction of a building semantically implies both an entity doing the constructing and an
entity being constructed; a donation is something that one entity is donating to another.

Adjectives too have semantic argument structure. One semantic argument is the entity of
which the adjective is predicated, for instance the house in The house is big. But some adjectives
also have further semantic arguments, whether syntactically expressed or not. For instance one
cannot be polite without being polite to someone or to people in general; a hypothesis cannot be
interesting without someone or people in general being (assumed to be) interested in it.

An important complication is the light verb construction, illustrated in (15).

(15) a. Joe took a walk. [cf. Joe walked]
b. Kay gave Harry a hug. [cf. Kay hugged Harry]
c. Don put the blame on Nancy. [cf. Don blamed Nancy]

In these cases, the verb has its ordinary syntactic argument structure, but the type of action being
denoted and the semantic argument structure of the sentence are determined by the nominal, as
seen in the paraphrases.

Finally, meaningful constructions too have argument structure. For instance, the N of an
N construction mentioned above (a gem of a result) has two noun positions to be filled in in
syntax. Each corresponds to a semantic argument: the first to an evaluative term, and the second
to the entity being evaluated.

5.6.2. Word decomposition into more primitive components


A prominent theme in theories of the mental lexicon, going back at least to Gruber 1965,
is the analysis of word meanings in terms of what are purported to be more primitive
components. For example, (16a) can be paraphrased fairly closely as (16b), and the same for
(16c,d).

(16) a. John entered the room.
b. John went into the room.
c. John climbed the mountain.
d. John went to the top of the mountain (in a clambering manner).

That is, enter X means approximately ‘go into X’, and climb X means approximately ‘go upward,
clambering along the surface of X, to the top of X.’ These paraphrase relations can be captured
more formally by lexical decomposition, such that the word and its paraphrase come to have the
same analysis.

(17) illustrates the case of enter. GOspatial is a schematic event in which an individual X
traverses a Path (17a). TO is a schematic Path that terminates at an object or location Y (17b);
INTERIOR is a schematic location that consists of the space subtended by the interior of an
object Z (17c). These three pieces of structure are combined in enter: it is a schematic event in
which an individual X traverses a path that terminates in the interior of an object Z (17d).

(17) a. ‘go’ = [Event GOSpatial (X, PATH)]
b. ‘to’ = [Path TO ([Object/Location Y])]
c. ‘in’ = [Location INTERIOR (Z)]
d. ‘enter’ = [Event GOSpatial (X, [Path TO ([Location INTERIOR (Z)])])]

This is perhaps clearer in a tree notation, in which double lines mark schematic entities and
single lines mark their arguments.

(18) Event
        GO   X   Path
                 TO   Location
                        INTERIOR   Z

One can therefore think of enter as “incorporating” the meaning of the prepositions to and in.
The upshot in syntax is that enter ends up as a syntactically transitive verb.16
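The decomposition in (17) can be sketched with conceptual structures as nested terms; the tuple encoding (though not the analysis) is my own assumption. The paraphrase relation in (16a,b) then falls out as structural identity.

```python
# Illustrative sketch: "enter the room" and "go into the room" receive
# the same conceptual structure, since enter "incorporates" to and in.

def GO(theme, path):  return ("Event", "GO_Spatial", theme, path)
def TO(place):        return ("Path", "TO", place)
def INTERIOR(obj):    return ("Location", "INTERIOR", obj)

def go_into(x, z):    # compositional: go + into, as in (16b)
    return GO(x, TO(INTERIOR(z)))

def enter(x, z):      # lexicalized: enter, as in (16a)
    return GO(x, TO(INTERIOR(z)))

assert enter("JOHN", "ROOM") == go_into("JOHN", "ROOM")
```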

A more complicated case is illustrated in (19), for which approximate paraphrases appear
in (20):

(19) a. John buttered the toast.
b. John saddled the horse.
(20) a. butter X ≈ ‘put butter on X (in the fashion that butter is meant to be used)’
b. saddle X ≈ ‘put a saddle on X (in the fashion that saddles are meant to be used)’

Such relationships have been claimed to be captured by a syntactic operation of “noun incorporation” (e.g. McCawley 1968, Baker 1987, Hale and Keyser 1993), which moves the
noun butter from direct object position up to the verb node. But the full semantics does not
follow from ‘put Z on X’. One cannot butter bread by simply putting a stick of butter on it, and
one cannot butter a table at all. Rather, buttering involves using butter in the fashion it is meant
to be used, i.e. spreading it on bread or a pan – what Millikan 1984 calls the proper function of
butter and Pustejovsky 1995 calls the telic quale of the word butter. In other words, the meaning
of the verb is constructed in part from the noun’s internal semantic structure, which is nowhere
reflected in its syntax.17

The intuition behind noun incorporation can be captured instead through the structure of
lexical semantics. Put decomposes as ‘X causes Y to go to location Z’ (21a). The verb butter
specifies the variable Y as ‘butter’ and location Z as ‘onto object W’. In addition it spells out a
Manner in which the butter moves, not formalized here, but likely made explicit in spatial
structure. The result is (21b), or, in tree notation, (22).

(21) a. ‘put’ = [Event CAUSE (X, [Event GO (Y, [Path TO ([Location Z])])])]
b. ‘butterV’ = [Event CAUSE (X, [Event GO (BUTTER, [Path TO ([Location ON (W)])]); Manner …])]

16 Note that GO, TO, and INTERIOR all have counterparts in spatial structure. Note also that even if they are not themselves primitives, they nonetheless allow us to eliminate a putative primitive ENTER.
17 Fodor 1981 argues that the verb paint cannot be defined in similar terms as ‘put paint on,’ because when Michelangelo dipped his brush in the paint, he was putting paint on the brush but he was not painting the brush. Adding proper function to the conceptual structure takes care of this problem: the proper function of paint, roughly, is to apply it intentionally as a liquid to cover a surface in a certain manner, with the intention of having the paint dry and thereby cover the surface. In turn, this splits into two senses: painting a wall or painting a picture.

(22) Event
        CAUSE   X   Event
                    GO   BUTTER   Path   Manner
                                  TO   Location
                                         ON   W

The result is that the verb butter ends up as a syntactically transitive verb whose direct object
denotes the object on which the butter is placed – the endpoint of motion. The entity in motion,
the butter, is incorporated into the meaning of the verb and plays no role in the syntax.

5.6.3. Other

I have discussed here only one combinatorial property of lexical items, namely argument
structure. Many others can be mentioned, without any intended prejudice against still others I
have not mentioned:

• Modification adds information to a semantic head that is not dictated by argument structure, such as prenominal adjectives (brown cow, terrible policy), measure phrases
(five-foot fence), relative clauses (the man who came to dinner), manner and time
adjuncts to verbs (run quickly, eat often), degree markers on adjectives (extremely funny),
conditional and purpose modifiers (if he comes, in order to leave), among many
heterogeneous possibilities.
• Anaphoric items require an antecedent in order to specify their full interpretation in
context. These include not just definite pronouns, but also the identity-of-sense pronoun
one, non-identity pronouns such as someone else, the reciprocal each other, pro-VPs such
as do so, and elliptical constructions such as vice versa.
• Quantifiers such as each, some, and many create scope relations over the syntactic/
semantic configurations in which they are embedded.
• Negative polarity items such as any, yet, and the collocation lift a finger can appear only
in a particular set of modal contexts.
• Restrictors such as only and even are sensitive to information structure (focus) and create
presuppositions against which the focus is judged.
• Meaningful constructions such as cleft and pseudocleft – which are lexical items in the
extended lexicon of section 5.1.1 – designate their free argument as focus in information
structure.
• Discourse connectors such as however, specifically, and the collocation for example
specify a semantic relation between the clause they are in and the surrounding discourse.
• Intonation contours convey meaning, usually of information structure but also attitudes
such as sarcasm. These pairings belong in the extended lexicon (where else?).
• Words and meaningful constructions can convey speech register, which may or may not
be considered part of lexical meaning.

5.7. Word knowledge vs. world knowledge?

Returning to a point hinted at in section 5.1.1: All theories of lexical decomposition (including my own) encounter a serious problem. Their analyses typically leave a residue of
unanalyzed material, such as BUTTER and the manner of using it in (21b). Many researchers
have sidestepped this problem by proposing that such bits of meaning are not relevant to
linguistic semantics per se; rather, they belong in “world knowledge” or “encyclopedic
knowledge.” For instance, Katz and Fodor 1963 make a distinction between “semantic markers”
(the linguistically relevant material) and “distinguishers” (the residue); Katz 1972 distinguishes
“dictionary” from “encyclopedic” information; Bierwisch and Lang 1989 and Levinson 1997
separate “semantic form” from “conceptual structure”; and Lieber 2019 separates a structured
“semantic skeleton” from an unstructured “semantic body.”

Other approaches (e.g. Bolinger 1965; Jackendoff 1983, 2002; Lakoff 1987; and Langacker
1987) argue that these residues cannot be disregarded. They are part of lexical meaning; they are
intimately involved in inference and reference. These approaches therefore take the position that
there can be no principled boundary between word meanings and world knowledge.

In some cases, the residue can be attributed to spatial structure. That might be the case with
butter, where both the substance butter and the action of buttering have a spatial component.
However, this is not always possible. Here are two representative cases.

First consider that staple of linguistic semantics, bachelor = ‘unmarried adult male human’.
Lakoff 1987 points out that this analysis does not account for presupposed sociocultural context,
for instance that the pope would not be characterized as a bachelor. Either this context has to be
part of the word’s lexical entry, or, if it is “world knowledge,” it must somehow be connected
directly or indirectly to the lexical entry. Moreover, the “core” of the meaning is itself
problematic: the feature ‘unmarried’ has to tie into understanding of the complex social
institution of marriage. Is this information part of the lexical entry of bachelor, or is it world
knowledge? How would we decide, and what difference would it make? An important
consideration is that the very same information must also be part of the lexical entries for the
words marry and marriage. Hence lexical semantics cannot forgo responsibility for analyzing it.

For another case, return to the verb sell and its four arguments, as in (23) (= (12c)).

(23) Henry sold Judy a bike for 50 dollars.

At the very least, the lexical entry has to include two subevents: Henry giving a bike to Judy (the
“transfer”), and Judy giving Henry 50 dollars (the “counter-transfer”). It wouldn’t be selling
otherwise. The lexical entry further has to stipulate that the counter-transferred entity is money
(by contrast with trade, whose counter-transferred entity is likely not money). Should the lexical
entry include all the properties of money? These properties certainly have to be specified
anyway in the lexical entries for money and dollar. Should the lexical entry include all the logic
of possession? In any event, this logic has to be in the lexical entry of possession verbs such as
own and give away.

One’s understanding of sell further has to say that the two subevents are not independent – it
isn’t as though Henry and Judy, unpremeditated, happen to give each other nice presents.
Rather, the two subevents constitute a joint action to which they have both agreed and from
which each of them expects to benefit. In turn, the logic of joint actions (Gilbert 1989, Searle
1995, Bratman 1999, Jackendoff 2007) stipulates that each participant is obligated to the other to
carry through his or her side of the deal. Defecting on an obligation is furthermore morally bad,
and if one participant defects, the other has the right to punish him or her in some way. In turn,
punishing someone is justifiably doing something bad to them in return for something bad they
have done.

All this material has to be listed somewhere in the entries of lexical items like obligation,
collaborate, in return for, and punish. Should it also be in the lexical entry of sell, or should it
be “world knowledge” somehow associated with sell? And if the latter, how is this association
encoded?

Finally, other verbs can be used to convey the same event, with different syntactic argument
structures, and from different perspectives.

(24) a. Judy bought a bike from Henry for $50.
b. Judy paid Henry $50 for a bike.
c. The bike cost Judy $50.
d. Judy spent $50 on the bike.
e. Henry got $50 for the bike.

This suggests that there is an abstract “transaction frame” in the sense of Fillmore 1982b (or a
“strip of behavior” in the sense of Goffman 1974). It encodes all the details of money, joint
action, obligation, and so on. Such a frame is not confined to the lexical entry of any one of these verbs; rather, all of them refer to it. Again, it is not clear whether this frame is "lexical" or "world" knowledge.
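The shared-frame idea can likewise be sketched in miniature. In the toy Python encoding below (the role names and role-to-argument linkings are illustrative assumptions, not a worked-out analysis), the transaction frame is stored once, and each verb contributes only a different mapping from frame roles to grammatical positions:

```python
from dataclasses import dataclass

# One shared "transaction frame" (in roughly Fillmore's sense),
# with its four roles stored in a single place.
@dataclass(frozen=True)
class TransactionFrame:
    seller: str
    buyer: str
    goods: str
    money: str

# Each verb names the same frame but links its roles to grammatical
# positions differently. The mappings are illustrative only.
PERSPECTIVES = {
    # Henry sold Judy a bike for $50
    "sell": lambda f: {"subject": f.seller, "object": f.goods},
    # Judy bought a bike from Henry for $50
    "buy":  lambda f: {"subject": f.buyer, "object": f.goods},
    # Judy paid Henry $50 for a bike
    "pay":  lambda f: {"subject": f.buyer, "object": f.money},
    # The bike cost Judy $50
    "cost": lambda f: {"subject": f.goods, "object": f.money},
}

f = TransactionFrame(seller="Henry", buyer="Judy", goods="bike", money="$50")
linkings = {verb: link(f) for verb, link in PERSPECTIVES.items()}
```

On this sketch, the details of money, joint action, and obligation would hang off the frame itself, not off any one verb — which is precisely what makes it unclear whether they count as "lexical" or "world" knowledge.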

The upshot of these cases is that what might be considered one word’s “world knowledge” is
often part of another word’s “lexical entry.” Hence knowledge of words and knowledge of the
world are apparently built out of a shared repertoire of basic units. Not only is there no
principled line between them, it is not clear that one should want one, other than as a convenient
limit on how deep one wants one’s analysis to go.

Even if one manages to achieve plausible decompositions of word and world knowledge,
there remains an intuitive distinction between “dictionary” and “encyclopedic” knowledge.
What is listed in the lexical entry of cat? Presumably no one would doubt that it includes what
cats look like, encoded in spatial structure, and that cats are kept as pets, encoded in conceptual
structure. But what about the (defeasible) fact that cats hunt mice and the (nondefeasible) fact
that cats have livers? These seem to lie somewhere in between. And if the fact that cats hunt
mice is listed, does the lexical entry for mouse include the fact that cats hunt them? I have no
clear intuitions about this problem (though Dor 2015 may be on the right track).

5.8. Closing remarks

The goal of this chapter has been to motivate a theory of lexical semantics that is expressive
enough to account for a significant range of phenomena. We have sketched a considerable
collection of machinery: geometric spatial structure, featural and algebraic conceptual structure,
preference rules, ontological categories, cross-field parallels, dot-objects, argument structure, and
decomposition into more primitive functions. Each of these has been motivated on the basis of
rather simple examples, and each has been shown to be broadly applicable.

Along the way, we have touched on the desiderata of compositionality, polysemy, reference,
and the problem of primitives. Alas, we have had nothing to say here about inference,
translatability, or learnability. Most important, though, has been our emphasis on lexical
semantics as a mental phenomenon, deeply connected to and supported by the human
conceptualization of the world. Indeed, we have argued that many of the components discussed
here are necessary to mental phenomena outside of language.

One might feel that the repertoire of components discussed here is an embarrassment of
riches, that it is too expressive. I would respond that the main challenge semantic theory faces at
present is to be expressive enough. The full range of human conceptualization is far richer and
more complex than syntax and phonology. It is amazing that anything as complex as human
thought can be squeezed through the relatively limited interface of syntax, phonology, and
phonetics and still be understood. Evidently much depends on pragmatics (Schwarz and Zehr,
this volume). On the other hand, vision is much the same: it’s equally amazing that the limited
degrees of representational freedom in the retina can give rise to such rich spatial understanding.

Nevertheless, a full decomposition of lexical meaning and associated world knowledge into
cognitively plausible primitives is for the present elusive. One might therefore be inclined to
reject the position that the mind encodes concepts in terms of a finite underlying system. I would
regard this as a mistake. Even the most empiricist theory has to claim that concepts are mentally
encoded in some format or another. This pertains even to immediate experience: one cannot
experience a sunset (or an experimental stimulus) unless one’s brain represents it somehow. The
task for semantics and for cognitive science as a whole is to discover the properties of these
formats of mental representation, whatever they are. And all we can do is continue to pick away
at the phenomena, in the hope that we are getting closer to the bottom. I have tried here to
illustrate a small portion of the process.

References

Armstrong, S., Gleitman, L., and Gleitman, H. (1983). What some concepts might not be. Cognition
13, 263-308.
Bach, Emmon. 1986. The algebra of events. Linguistics and Philosophy 9, 5-16.
Baker, Mark. 1988. Incorporation: A Theory of Grammatical Function Changing. Chicago:
University of Chicago Press.
Baroni, M. (2013). Composition in Distributional Semantics. Language and Linguistics
Compass 7/10: 511–522, 10.1111/lnc3.12050
Berwick, R., and Chomsky, N. (2016). Why Only Us? Cambridge, MA: MIT Press.
Bierwisch, Manfred, and Ewald Lang (eds.). 1989. Dimensional Adjectives. Berlin: Springer-
Verlag.
Bloom, Paul. 2004. Descartes’s Baby: How the Science of Child Development Explains What
Makes us Human. New York: Basic Books.
Bolinger, Dwight. 1965. The atomization of meaning. Language 41, 555-73.
Bratman, Michael E. 1999. Faces of Intention. Cambridge: Cambridge University Press.
[contains essays previously published between 1985 and 1998]
Carey, Susan. 2009. The Origin of Concepts. Oxford: Oxford University Press.
Chierchia, G., Fox, D., and Spector, B. (2012). Scalar implicature as a grammatical phenomenon.
In C. Maienborn, K. von Heusinger, and P. Portner, eds.: Semantics: An International
Handbook of Natural Language Meaning, Vol. 3, 2297-2331. Berlin: Mouton de Gruyter.
Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton.
Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, N. (1981). Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, N. (1995) The Minimalist Program. Cambridge, MA: MIT Press.
Christiansen, Morten, and Inbal Arnon. 2017. More than words: The role of multiword
sequences in language learning and use. Topics in Cognitive Science 9: 542-51.
Clark, S. (2013). Vector space models of lexical meaning. In S. Lappin and C. Fox (eds.), The
Handbook of Contemporary Semantic Theory. Wiley.
doi.org/10.1002/9781118882139.ch16
Crick, Francis. 1994. The Astonishing Hypothesis: The Scientific Search for the Soul. New
York: Charles Scribner's Sons.
Croft, William. 2001. Radical Construction Grammar. Oxford: Oxford University Press.
Culicover, Peter W., Ray Jackendoff, and Jenny Audring. 2017. Multiword constructions in the
grammar. Topics in Cognitive Science 1-17. Doi: 10.1111/tops.12255.
Damasio, Antonio R. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New
York: G. P. Putnam’s Sons.
Davidson, D. (1967). The logical form of action sentences. In N. Rescher (ed.), The Logic of
Decision and Action. Pittsburgh: University of Pittsburgh Press.
de Carvalho, A., Reboul, A., Van der Henst, J.-B., Cheylus, A., and Nazir, T. (2016). Scalar
Implicatures: The psychological reality of scales. Frontiers in Psychology, 25 October 2016
| https://doi.org/10.3389/fpsyg.2016.01500.
Dennett, Daniel C. 1991. Consciousness Explained. New York: Little, Brown.
DiSciullo, Anna Maria, and Edwin Williams. 1987. On the Definition of Word. Cambridge,
MA: MIT Press.
Dor, D. (2015) The Instruction of Imagination. New York: Oxford University Press.

Dowty, David. 1991. Thematic proto-roles and argument selection. Language 67: 547-619.
Epstein, R. and Baker, C. (2019). Scene perception in the human brain. Annual Review of
Vision Science 5, 373-397.
Fauconnier, Gilles. 1985. Mental Spaces: Aspects of Meaning Construction in Natural
Language. Cambridge, MA: Bradford/MIT Press.
Fillmore, C. 1982a, Towards a descriptive framework for deixis. In R. Jarvella and W. Klein
(eds.), Speech, Place, and Action, 31-52. Wiley.
Fillmore, Charles. 1982b. Frame semantics. Linguistics in the Morning Calm, 111-37. Seoul:
Hanshin Publishing Company.
Fillmore, Charles, Paul Kay, and Mary Catherine O’Connor. 1988. Regularity and idiomaticity
in grammatical constructions: The case of let alone. Language 64: 501-38.
Fodor, Jerry A. 1975. The Language of Thought. Cambridge, MA: Harvard University Press.
Fodor, J. A. (1981). Representations. Cambridge, MA: MIT Press.
Fodor, Jerry A. 1987. Psychosemantics: The problem of meaning in the philosophy of mind.
Cambridge, MA: MIT Press.
Fodor, Jerry A., Merrill F. Garrett, Edward Walker, and C. Parkes. 1980. Against definitions.
Cognition 8, 263-367.
Frege, Gottlob 1892. Über Sinn und Bedeutung. Zeitschrift für Philosophie und philosophische
Kritik 100, 25-50. English translation in Peter Geach and Max Black (eds.), Translations
from the philosophical writings of Gottlob Frege, 56-78. Oxford: Blackwell, 1952.
Gärdenfors, P. (2014). The Geometry of Meaning. Cambridge, MA: MIT Press.
Geeraerts, D. (2010). Theories of Lexical Semantics. Oxford: Oxford University Press.
Gilbert, Margaret. 1989. On Social Facts. Princeton, NJ: Princeton University Press.
Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience.
Cambridge, MA: Harvard University Press.
Goldberg, Adele. 1995. Constructions: A Construction Grammar Approach to Argument
Structure. Chicago: University of Chicago Press.
Grimshaw, Jane. 1990. Argument Structure. Cambridge, MA: MIT Press.
Gruber, Jeffrey S. 1965. Studies in Lexical Relations. Doctoral dissertation. MIT, Cambridge,
MA. Reprinted in Lexical Structures in Syntax and Semantics. Amsterdam: North-Holland,
1976.
Hacquard, Valentine (this volume). Logic and the lexicon: Insights from modality.
Hale, Kenneth, and Samuel Jay Keyser. 1993. On argument structure and the lexical expression
of syntactic relations. In K. Hale and S. J. Keyser (eds.), The View from Building 20, 53-
109. Cambridge, MA: MIT Press.
Harris, Z. S. (1957). Co-occurrence and transformation in linguistic structure. Language 33:
283-340.
Heim, Irene, and Angelika Kratzer. 1998. Semantics in Generative Grammar. Malden, MA:
Blackwell.
Hoffmann, Thomas, and Graeme Trousdale (eds.). 2013. The Oxford Handbook of Construction
Grammar. Oxford: Oxford University Press.
Humboldt, Wilhelm von. (1999 [1836]) On the Diversity of Human Language Construction and
its Influence on the Mental Development of the Human Species (orig. Über die
Verschiedenheit des menschlichen Sprachbaus und seinen Einfluss auf die geistige
Entwicklung des Menschengeschlechts). Michael Losonsky (ed.), Cambridge: Cambridge
University Press.

Hume, D. (1748/1974). An Enquiry Concerning Human Understanding. Reprinted in The
Empiricists, 307-430. New York: Anchor Books.
Jackendoff, R. (1972). Semantic Interpretation in Generative Grammar. Cambridge, MA:
MIT Press.
Jackendoff, R. (1983). Semantics and Cognition. Cambridge, MA: MIT Press.
Jackendoff, R. (1985). Multiple subcategorization and the θ-Criterion: The case of Climb,
Natural Language and Linguistic Theory 3.3, 271-295
Jackendoff, R. (1987a). On Beyond Zebra: The relation of linguistic and visual information.
Cognition 26, 89-114.
Jackendoff, R. (1987b). The status of thematic relations in linguistic theory. Linguistic
Inquiry 18.3, 369-411.
Jackendoff, R. (1990). Semantic Structures. Cambridge, MA: MIT Press.
Jackendoff, Ray. 1996. The architecture of the linguistic-spatial interface. In Paul Bloom, Mary
Peterson, Lynn Nadel, and Merrill F. Garrett (eds.), Language and Space, 1-30. Cambridge,
MA: MIT Press.
Jackendoff, R. (2002). Foundations of Language. Oxford: Oxford University Press.
Jackendoff, R. (2007). Language, Consciousness, Culture. Cambridge, MA: MIT Press.
Jackendoff, R., and Audring, J. (2020). The Texture of the Lexicon: Relational Morphology and
the Parallel Architecture. Oxford: Oxford University Press.
Katz, Jerrold. 1966. The Philosophy of Language. New York: Harper & Row.
Katz, J. (1972). Semantic Theory. New York: Harper & Row.
Katz, Jerrold J., and Jerry A. Fodor. 1963. The structure of a semantic theory. Language 39,
170-210.
Kosslyn, S. (1980). Image and Mind. Cambridge, MA: Harvard University Press.
Labov, W. (1973). The boundaries of words and their meanings. In C.-J. Bailey and R. Shuy
(eds.), New Ways of Analyzing Variation in English, vol. 1. Washington: Georgetown
University Press.
Lakoff, George. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the
Mind. Chicago: University of Chicago Press.
Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By. Chicago: University of
Chicago Press.
Landau, Barbara (this volume). Language and thought: The lexicon and beyond.
Landau, Barbara, and Ray Jackendoff. 1993. ‘What’ and ‘where’ in spatial language and spatial
cognition. Behavioral and Brain Sciences 16: 217-38.
Landauer, Thomas, and Susan Dumais. 1997. A solution to Plato's problem: The latent semantic
analysis theory of acquisition, induction, and representation of knowledge. Psychological
Review 104: 211-40.
Lang, E., and Maienborn, C. (2019). Two-level semantics: Semantic form and conceptual
structure. In C. Maienborn, K. von Heusinger, and P. Portner (eds.), Semantics: Theories,
114-153.
Langacker, Ronald. 1987. Foundations of Cognitive Grammar, vol. 1. Stanford: Stanford
University Press.
Lenci, A. (2008). Distributional semantics in linguistic and cognitive research. Rivista di
Linguistica 20.1, 1-31.
Levin, B. (1993). English Verb Classes and Alternations. Chicago: University of Chicago
Press.
Levin, Beth, and Malka Rappaport Hovav. 1995. Unaccusativity: At the Syntax-Lexical
Semantics Interface. Cambridge, MA: MIT Press.

Levin, Beth, and Malka Rappaport Hovav. 2005. Argument Realization. Cambridge:
Cambridge University Press.
Levinson, S. (1997). From outer to inner space: Linguistic categories and non-linguistic
thinking. In J. Nuyts and E. Pederson (eds.), Language and Conceptualization, 13-45.
Cambridge: Cambridge University Press.
Lewis, David. 1972. General semantics. In D. Davidson and G. Harman (eds.), Semantics of
Natural Language, 169-218. Dordrecht: Reidel.
Liberman, Alvin, and Michael Studdert-Kennedy. 1977. Phonetic perception. In R. Held, H.
Leibowitz, and H.-L. Teuber (eds.), Handbook of Sensory Physiology, vol. viii: Perception.
Heidelberg: Springer.
Lieber, Rochelle. 2019. Theoretical issues in word formation. In J. Audring and F. Masini
(eds.), The Oxford Handbook of Morphological Theory, 34-55. Oxford: Oxford University
Press.
Loschky, Lester C., Adam M. Larson, Tim J. Smith, and Joseph P. Magliano (2020). The Scene
Perception & Event Comprehension Theory (SPECT) applied to visual narratives. Topics in
Cognitive Science 12 (1), 311-351.
Macnamara, John. 1978. How can we talk about what we see? Unpublished mimeo. Department
of Psychology, McGill University, Montreal.
Marr, D. (1982). Vision. San Francisco: Freeman.
McCawley, James D. 1968. Lexical insertion in a transformational grammar without deep
structure. Papers from the 4th Regional Meeting, Chicago Linguistic Society, 71-80.
McKay, Ryan, Robyn Langdon, and Max Coltheart. 2005. “Sleights of mind”: Delusions,
defences, and self-deception. Cognitive Neuropsychiatry 10, 305-326.
Michotte, A. (1954). La perception de la causalité, 2d ed. Louvain: Publications Universitaires
de Louvain.
Miller, G., & Johnson-Laird, P. (1976). Language and perception. Cambridge, MA: Harvard
University Press.
Millikan, Ruth. 1984. Language, Thought, and other Biological Categories. Cambridge, MA:
MIT Press.
Montague, R. (1974). Formal Philosophy. Selected Papers of Richard Montague, ed. R. H.
Thomason. New Haven: Yale University Press.
Murphy, Gregory. 2002. The Big Book of Concepts. Cambridge, MA: MIT Press.
Pietroski, P. (2018). Conjoining Meanings. New York: Oxford University Press.
Piñango, M. M. (this volume). A lexically-driven system of linguistic meaning composition,
possible mechanisms for meaning "growth" and the role of context integration.
Pinker, Steven. 1989. Learnability and Cognition: The Acquisition of Argument Structure.
Cambridge, MA: MIT Press.
Pinker, S. (2007). The Stuff of Thought. New York: Viking.
Pustejovsky, James. 1995. The Generative Lexicon. Cambridge, MA: MIT Press.
Pustejovsky, J., and Batiukova, O. (2017). The Lexicon. Cambridge: Cambridge University
Press.
Putnam, Hilary. 1975. The meaning of "Meaning." In K. Gunderson (ed.), Language, Mind, and
Knowledge, 131-193. Minneapolis: University of Minnesota Press.
Rosch, Eleanor. 1978. Principles of categorization. In E. Rosch and B. Lloyd (eds.), Cognition
and Categorization, 27-48. Hillsdale, NJ: Erlbaum.

Rosch, Eleanor, and Carolyn Mervis. 1975. Family resemblances: studies in the internal
structure of categories. Cognitive Psychology 7, 573-605.
Schwarz, Florian, and Jérémy Zehr (this volume). Pragmatics and the lexicon.
Searle, John. 1995. The Construction of Social Reality. New York: Free Press.
Snare, F. 1972. The concept of property. American Philosophical Quarterly 9, 200-206.
Spelke, Elizabeth. 2000. Core Knowledge. American Psychologist. 55 (11): 1233-43.
doi:10.1037/0003-066X.55.11.1233
Stojnic, Gala, and Ernie Lepore (this volume). Compositionality of concepts.
Talmy, Leonard. 2000. Toward a Cognitive Semantics. Cambridge, MA: MIT Press.
Tarski, A. (1956). The concept of truth in formalized languages. In Tarski (ed.), Logic,
Semantics, and Metamathematics, 152-197. London: Oxford University Press.
Tenny, Carol. 1994. Aspectual Roles and the Syntax-Semantics Interface. Dordrecht: Kluwer.
Ünal, E., Ji, Y., and Papafragou, A. (2019). From event representation to linguistic meaning.
Topics in Cognitive Science.
Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT
Press.
Wertheimer, M. (1923). Laws of organization in perceptual forms. Reprinted in W. D. Ellis
(ed.), Source Book of Gestalt Psychology, 71-88. London: Kegan Paul.
Wierzbicka, A. (1996). Semantics: Primes and Universals. Oxford: Oxford University Press.
Wiese, H. (2016). Modelling semantics as a linguistic interface system. Ms., Humboldt
University, Berlin.
Wittgenstein, Ludwig. 1953. Philosophical Investigations. Oxford: Blackwell.
