Lecture 8 Language Production and Comprehension.


Mohamed Bouguara University of Boumerdes.
Faculty of Letters and Languages.
English Department.
Linguistics: 3rd year.
Lecturer: Dr. Boughelamallah.
Lesson 6: Language Production and Comprehension

INTRODUCTION
Language is a system that associates sounds (or gestures) with meanings in a way that uses words and sentences. The complex process of linguistic communication involves a number of interconnected, yet functionally and anatomically separable, cognitive processes.
Linguistics is the scientific study of human language. It has several sub-fields:
1. Phonetics & Phonology: Phonetics is the study of the production and perception of speech sounds as physical entities. Phonology is the study of the sound system of a particular language and of sounds as abstract entities. Phonemes are the smallest units of sound. A phoneme roughly corresponds to a letter of the alphabet, and different languages have different numbers of phonemes (English has approximately 44 phonemes, and inventories across languages range from around a dozen to well over a hundred).
2. Morphology: The study of word structure and the systematic relations between words. Morphemes are the building blocks of words: the smallest linguistic units that have a meaning or grammatical function. For example, the word "talked" has two morphemes, "talk" and "-ed". The first morpheme describes a conversation event, and the second places this event in the past.
3. Syntax: Phrase and sentence structure. The set of rules of a particular language that
determine the ways words are combined to make sentences. Syntax refers to word
order, for example the exact place of negation in a sentence. It also refers to type of
sentences (question, conditional) and grammatical forms (passive, active).
4. Semantics: The meaning of morphemes, words, phrases, and sentences. This term
overlaps with semantic memory.
5. Pragmatics: The way language is used, how context influences the interpretation of utterances, and how sentences fit into a conversation (Gill and Damann, 2015). The same phrase (e.g., "he is really smart") could be said seriously or ironically, and the interpretation is related to pragmatics.
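One of the sub-fields above, morphology, lends itself to a small illustration. The following Python sketch is not part of the lecture; it is a hedged toy example showing how an analysis like "talked" = "talk" + "-ed" might be represented. The suffix inventory is an invented, tiny subset of English inflection.

```python
# Toy morphological segmentation: split a word into a stem plus a
# known inflectional suffix. The suffix list is illustrative only.
SUFFIXES = {
    "-ed": "past tense",
    "-s": "plural / 3rd person singular",
    "-ing": "progressive",
}

def segment(word):
    """Return (stem, suffix, grammatical function), or the whole
    word with no suffix if no known ending matches."""
    for suffix, function in SUFFIXES.items():
        ending = suffix.lstrip("-")
        if word.endswith(ending) and len(word) > len(ending):
            return word[:-len(ending)], suffix, function
    return word, None, None

print(segment("talked"))   # ('talk', '-ed', 'past tense')
```

Real morphological analysis is far harder than suffix stripping (consider irregular forms like "went"), but the sketch captures the idea that a word decomposes into meaningful units.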

In the process of language production, we move from semantics to phonology, and in the process of language comprehension, we move from phonology to semantics.

THE FUNCTIONAL ORGANIZATION OF LANGUAGE PRODUCTION AND COMPREHENSION
The enterprise of relating the functional components of word production, such as lexical selection, phonological code retrieval, syllabification, and perception, to regions in a cerebral network requires a detailed, explicit theory of the underlying processes. One classic example is the theory presented by Levelt, Roelofs, and Meyer (1999), henceforth LRM, which explicates the successive computational stages of spoken word production, the representations involved in these computations, and their time course (Figure 1).

The core processes include:

Conceptual preparation - The production of a content word normally starts by activating some lexical concept and selecting it for expression. Conceptual semantics refers to the knowledge one has about the various attributes of a concept, independent of its linguistic realization. When you are asked to name a picture, you must first recognize the depicted object and select an appropriate concept. For example, <DOGS> are typically four-legged and bark. Lexical semantics, on the other hand, refers to formal linguistic properties of single words that have precise processing consequences: for example, 'BITE' is an 'eventive verb' (it describes an action) and differs from 'ADMIRE', a 'stative verb' (it describes a state of being), and verb types differ in their processing requirements.

Figure 1: Components and time course of word production. Left column: core processes of word production and their characteristic output. Right column: example fragments of the WEAVER spreading activation network and its output (Indefrey & Levelt (2004). Cognition 92:101-144).

2
Faculty of
There is normally multiple activation of lexical concepts in response to visual input. The picture of a sheep not only activates the concept SHEEP, but probably also concepts such as ANIMAL or GOAT. The communicative situation (or the experimental task) determines which concept is going to be selected for expression. In a categorization task, for instance, selection will often involve the superordinate concept (ANIMAL), whereas a normal naming task usually involves the basic-level concept (SHEEP).
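The task-dependent selection described above can be caricatured in a few lines of Python. This is a hedged sketch, not the WEAVER model: the activation values and the basic/superordinate labels are invented for illustration.

```python
# Toy sketch of task-dependent concept selection. Seeing a sheep
# activates several candidate concepts; the task determines which
# level of concept is selected. Activation values are invented.
activations = {"SHEEP": 0.9, "GOAT": 0.4, "ANIMAL": 0.6}
concept_level = {"SHEEP": "basic", "GOAT": "basic", "ANIMAL": "superordinate"}

def select_concept(activations, task):
    """Pick the most active concept at the level the task calls for."""
    level = "superordinate" if task == "categorization" else "basic"
    candidates = {c: a for c, a in activations.items()
                  if concept_level[c] == level}
    return max(candidates, key=candidates.get)

print(select_concept(activations, "naming"))          # SHEEP
print(select_concept(activations, "categorization"))  # ANIMAL
```

The point of the sketch is only that the same activated set yields different selections (SHEEP vs. ANIMAL) depending on the communicative situation.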

Impaired semantics (conceptual semantics and lexical semantics) - Patients with impaired conceptual knowledge often use objects, particularly less familiar objects, inappropriately. Impaired lexical semantics, and impaired access to lexical semantics from vision, is an impairment in accessing the subset of semantic features that allow a person to know what makes a horse a horse and what distinguishes it from related items such as a deer or a cow. Patients with impaired lexical semantics may incorrectly label items or incorrectly match pictures to their names. A person with an impaired ability to access lexical semantics from vision (a problem known as associative visual agnosia or optic aphasia) might point to a cow when asked to point to a horse. In addition, the person might make semantic paraphasias: selecting incorrect words that are semantically related to the target.

Impaired access to modality-independent lexical representations (lemmas) - The meaning of the item, or lexical semantic representation, is used to select a lexical representation, or lemma, that is independent of output modality (oral versus written). Impairments at this level of processing manifest as anomia, or impaired word retrieval. This deficit is well known to all of us (increasingly with age). When we have a word on the tip of the tongue, we can neither write the word nor say it, although we may retrieve some partial information, such as the first letter or sound, or the word's approximate length. This partial information often activates phonologically similar words for output, such that the person makes a phonemic paraphasia (e.g., calling a horse a horn), or activates semantically related words for output, such that the patient makes a semantic paraphasia (e.g., calling a horse a cow). Sometimes the partial phonological information and partial semantic information combine to produce mixed errors, such as calling a shirt a skirt (Hillis, 2010).

Lemma retrieval (lexical selection) - The next stage involves accessing the target word's syntax. In normal utterance production, the most urgent operation after conceptual preparation is the incremental construction of a syntactic frame, i.e., grammatical encoding. Word order, constituent formation, and inflection all depend on the syntactic properties of the lexical items that are accessed. Lemma nodes in the syntactic stratum of the lexical network represent these syntactic properties (such as part of speech, the gender of nouns, and the argument structure of verbs). How is a lemma node selected? Each node at the conceptual stratum is linked to its unique lemma node at the syntactic stratum. Compositional semantics, closely connected to syntactic structure,
concerns how meaning is constructed in sentential contexts, for example, allowing one to
distinguish ‘dog bites man’ from ‘man bites dog’.
Form encoding - The range of operations involved in form encoding begins with accessing the target word's phonological code and ends while the word is being articulated. Form encoding, however, is itself a staged process. According to LRM, the first operation upon lemma selection is morpho-phonological encoding (morpho-phonological code retrieval): the speaker accesses the phonological codes for all of the target word's morphemes. The second operation is phonological encoding proper. For spoken word production, this reduces to syllabification and metrical encoding. Syllabification is an incremental process: the 'spelled-out' segments of the phonological code are incrementally clustered into syllabic patterns. The third operation is phonetic encoding. As syllables are incrementally created, they are rapidly turned into motor action instructions. These instructions ('syllable scores') are stored for the few hundred high-frequency syllables that do most of the work in normal speech production. The repository of articulatory syllable scores is called the 'mental syllabary'.
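The staged character of form encoding can be summarized as a pipeline. The Python sketch below is a caricature, not an implementation of LRM or WEAVER: the segment inventory, the pairwise syllabification rule, and the 'syllable scores' are all invented placeholders that only mirror the ordering of the stages.

```python
# Caricature of staged form encoding for one word ("tiger"):
# lemma -> phonological code -> syllabification -> phonetic encoding
# via a "mental syllabary" of stored articulatory scores.
PHONOLOGICAL_CODES = {"tiger": ["t", "ai", "g", "er"]}   # invented segments
SYLLABARY = {("t", "ai"): "score<tai>", ("g", "er"): "score<ger>"}

def morphophonological_encoding(lemma):
    # Stage 1: retrieve the word's segmental phonological code.
    return PHONOLOGICAL_CODES[lemma]

def syllabify(segments):
    # Stage 2: incrementally cluster segments into syllables
    # (here, naively, into consecutive pairs).
    return [tuple(segments[i:i + 2]) for i in range(0, len(segments), 2)]

def phonetic_encoding(syllables):
    # Stage 3: look up a stored motor "syllable score" per syllable.
    return [SYLLABARY[syl] for syl in syllables]

segments = morphophonological_encoding("tiger")
syllables = syllabify(segments)
print(phonetic_encoding(syllables))   # ['score<tai>', 'score<ger>']
```

The key property the sketch preserves is incrementality: each stage consumes the previous stage's output, so articulation of early syllables can begin while later material is still being encoded.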

Impaired access to modality-specific lexical representations - The lemma is used to select a modality-specific lexical representation: the phonological representation (spoken word form, or learned pronunciation) or the orthographic representation (written word form, or learned spelling). Some patients can write words even when they cannot retrieve their pronunciation (despite intact motor speech). Other patients show the opposite pattern: an ability to say a word but an inability to retrieve the spelling of the same word (Hillis, 2010).

Once a spoken word form, or phonological representation, has been accessed, it still needs to be spoken aloud. There are two aspects to this process. One requires maintaining the phonological representation (the correct sequence of speech sounds that comprise the pronunciation) while the sounds are produced, and the second is motor output: articulation. Failure to activate or maintain activation of the complete phonological representation will result in phonemic paraphasias, such as substitutions, insertions, and transpositions of phonemes (speech sounds), yielding a different word (e.g., horn for horse) or a non-word (e.g., porse for horse). Articulation of a word requires motor planning, or programming, of the complex movements of the lips, tongue, palate, vocal folds, and respiratory muscles, followed by implementation of these movements.

Apraxia of speech - An impairment of the motor planning or programming of speech articulation. This problem can lead to errors of insertion, deletion, transposition, or substitution of speech sounds, or to distortions of speech sounds, in the absence of impaired strength, range, or rate of any of the speech muscles. Patients with apraxia of speech are very aware of their errors and try to correct them, whereas those who make phonemic paraphasias are generally unaware of their errors. Apraxia of speech is often characterized by various off-target productions of the word when attempting to say the same word multiple times, and is more apparent in the production of polysyllabic words, which require more complex motor planning.

4
Faculty of

Even when motor planning is intact, the word might be articulated incorrectly because of dysarthria, a motor speech impairment caused by impaired strength, range, rate, or timing of movements of the lips, tongue, palate, or vocal folds. Dysarthria can be distinguished from apraxia of speech by its consistency across words (e.g., the same speech sound will typically be distorted in both short and long words, consistently across trials, in dysarthria, but is much more likely to be misarticulated inconsistently, and in long words more than short words, in apraxia of speech). Dysarthria is also associated with weakness or reduced range/rate of movement of the muscles involved in speech.

Although a number of sophisticated cognitive models of language production that specify the different stages and the relationships among them have been proposed (Dell, 1986; Levelt et al., 1999; Indefrey, 2011), understanding the precise neural mechanisms by which humans encode and time-causally enact different aspects of a linguistic message, including the division of labor spatially (within and across brain regions) and temporally (across time), has proven to be a major challenge. Functional brain-imaging methods generally do not possess the temporal resolution needed to evaluate the individual components involved in word generation. They also do not afford causal inferences, and although lesions from natural trauma or strokes have critically informed models of language production (Goldrick and Rapp, 2007), they commonly affect extensive cortical areas as well as their underlying white matter tracts, complicating interpretation. Further, to the extent that the same brain region supports different stages of language production, a permanent lesion to a region would not allow for temporal differentiation of those stages.

RELATIONS BETWEEN WORD PRODUCTION AND WORD COMPREHENSION
An important asymmetry exists between production and comprehension. In production, the goal is to express a particular meaning, about which we generally have little or no uncertainty. To do so, we have to utter a precise sequence of words in which each word takes a particular form and the words appear in a particular order. In contrast, the goal of comprehension is to infer the intended meaning from the linguistic signal. Abundant evidence suggests that comprehension is affected by both bottom-up, stimulus-related information and top-down expectations, and that the representations we extract and maintain during comprehension are probabilistic and noisy (Kuperberg and Jaeger, 2016; Karimi and Ferreira, 2016; Levy, 2008; Nelken et al., 1999; Kidd et al., 2011; Coady and Aslin, 2004). In production, these pressures for precision and for linearization of sounds, morphemes, and words might lead to a clearer temporal and/or spatial segregation among the different stages of the production process and, correspondingly, to functional dissociations among the many brain regions that have been implicated in production (Indefrey and Levelt, 2004; Indefrey, 2011), compared to comprehension, where the very same brain regions appear to support different aspects of the interpretation (such as understanding individual word meanings and inferring the syntactic/semantic dependency structure) (Fedorenko et al., 2012; Blank et al., 2016; Bautista and Wilson, 2016).


NEUROANATOMY OF LANGUAGE
Since the late 19th century, it has generally been accepted that core language processes underlying spoken word production, comprehension, and repetition are enabled by left perisylvian regions of the human brain, including Broca's area and Wernicke's area (Broca, 1861; Wernicke, 1874). Production concerns saying words to express meaning, comprehension concerns understanding the meaning of heard words, and repetition concerns saying heard words or pseudo-words. According to the seminal Wernicke-Lichtheim model (Lichtheim, 1885; Wernicke, 1874), the perisylvian language areas contain memory representations of the input "auditory images" (in Wernicke's area) and the output "motor images" (in Broca's area) of words.
The presumed deficits and common lesion locations of the aphasias led to the connectionist models developed by Broca, Wernicke, Lichtheim, and Heilman (Gill and Damann, 2015) (Figure 3). In this model, the frontal lobe may play a role in activating the semantic-conceptual areas, which then activate the phonological lexicon, allowing the person to produce spontaneous speech. When the phonological lexicon activates the semantic-conceptual field, the person is able to comprehend speech. The phonological lexicon is thought to contain memories of word sounds. Therefore, to understand speech, speech information enters the system through the auditory cortex and is then sent to Wernicke area, as well as to more widely distributed semantic-conceptual areas, to allow comprehension of the speech in the semantic-conceptual areas.

To produce spontaneous speech, the frontal lobe (intentional/motivational systems) would activate the semantic-conceptual areas, which in turn activate the corresponding areas in Wernicke area; these would then project to Broca area and then to the motor cortex to activate the appropriate motor programs to produce the desired speech (Heilman, 2015).

Figure 3: Classic connectionist model of language function.
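The routing just described can be summarized as a tiny directed structure, with each language task a path through the model's regions. The sketch below is only an illustrative data-structure rendering of the text's description (the repetition route via the arcuate fasciculus is described in the discussion of Figure 4); it is not a claim about any software model.

```python
# Pathways of the classic connectionist (Wernicke-Lichtheim-style)
# model, as described in the text: each task is a route through regions.
ROUTES = {
    "comprehension": ["auditory cortex", "Wernicke area",
                      "semantic-conceptual areas"],
    "spontaneous speech": ["frontal lobe", "semantic-conceptual areas",
                           "Wernicke area", "Broca area", "motor cortex"],
    "repetition": ["auditory cortex", "Wernicke area",
                   "arcuate fasciculus", "Broca area", "motor cortex"],
}

def trace(task):
    """Render a task's route through the model as an arrow chain."""
    return " -> ".join(ROUTES[task])

print(trace("comprehension"))
# auditory cortex -> Wernicke area -> semantic-conceptual areas
```

Writing the model down this way makes the classic aphasia logic visible: a lesion at a shared node (e.g., Wernicke area) disrupts every route passing through it, whereas a lesion on a single edge (e.g., the arcuate fasciculus) selectively impairs repetition.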
Figure 4 shows the areas involved in language function in the classic connectionist model (Kirshner & Saunders, 2004). The conceptual framework behind this model is that Wernicke area (Brodmann area 22) and the surrounding area mediate comprehension. Auditory stimuli are projected to Wernicke area from the nearby Heschl gyrus (Brodmann areas 41 and 42), whereas visual forms of communication (e.g., reading and sign language) are processed by the primary and secondary visual cortices, which then project to Wernicke area through the ventral visual stream. The arcuate fasciculus then projects from Wernicke area to Broca area
(Brodmann areas 44 and 45) and the surrounding area to permit repetition. Broca area is the center for expressive language planning.

Lateralization of language is associated with handedness, with approximately 90% of right-handed individuals and 70% of left-handed individuals being left-hemisphere dominant for language, although some debate exists about the exact percentages. In left-handed individuals, about a third are either right-hemisphere dominant or have bilateral language dominance. The right hemisphere is thought to play a role in the prosody of language. The right hemisphere, in an organization analogous to the language representation in the left hemisphere, mediates both prosody and the interpretation of gesture.

Figure 4: The areas involved in language function in the classic connectionist model (Kirshner & Saunders, 2004).

The connectionist model of language function, however, does not fully explain how words are organized into sentences. Functional brain imaging (Poeppel et al., 2012) suggests that language function is mediated by larger-scale, distributed global networks in the brain, explaining why many patients with aphasia do not fit well into any of the classic connectionist aphasia syndromes. New models of the neuroanatomic basis of language are still under development. Some of the emerging concepts include sub-regions of Broca area that serve different language functions (Amunts et al., 2010), a dual-stream cortical organization of speech processing similar to that found for visual processing (Hickok and Poeppel, 2007), and a role for the cerebellum and subcortical structures in the temporal processing of speech (Kotz and Schwartze, 2010). The fact that there are different types of meaning makes it unsurprising that the 'neural basis of meaning' has been associated with many different activation profiles (Poeppel, 2006). For instance, imaging data suggest that the left inferior frontal gyrus anterior to Broca's area plays a critical role in verbal meaning (Thompson-Schill et al., 1997), and the potential role of parietal cortex has been highlighted as well (Price, 2000). To complicate things further, electrophysiological studies show that right superior and middle temporal lobe structures are robustly implicated (Federmeier and Kutas, 1999). The data across methods and studies have not yet converged on a single model of the calculation of meaning in the brain.

THE APHASIAS
Normal language function requires proper neural function over a wide geography of brain
regions. A person with dysfunction in this neural network has aphasia.
Table 1 summarizes the classic types of aphasias and their characteristics (Gill and Damann,
2015).

7
Faculty of
ASSESSMENT OF LANGUAGE FUNCTION - BEDSIDE TESTING
The bedside assessment of language function is often more qualitative than quantitative; many examiners use one of their own creation. The following is a suggested approach to the bedside language examination (Gill and Damann, 2015):
1. Observation: Listen to the patient's spontaneous speech to assess articulation of words, fluency, and prosody. If the patient produces little spontaneous verbal output, ask him or her to describe a picture such as the Cookie Theft picture from the Boston Diagnostic Aphasia Examination, although any picture showing action may be used. Paraphasic errors, the production of unintended phonemes, morphemes, words, or phrases, are often identified during observation of speech. These are generally placed into the two categories of phonemic and semantic paraphasic errors, although other classification schemes exist. A phonemic paraphasic error occurs when a person mispronounces a word's sounds or says a non-word that retains a significant proportion (often over one-half) of the intended word (the non-word sounds similar to the intended word).
2. Comprehension (verbal and written): Start with one-step midline commands (“close
your eyes”); progress to distal one-step commands (“hold up your left hand”); then
progress to complex commands (“point to the door after you point to the window”).
3. Repetition: Start with short complete sentences and progress to an open-ended phrase of at least five words in length, such as the one used in the Boston Diagnostic Aphasia Examination ("near the table in the dining room").
4. Naming: Start with whole items. Ask, "What is this?" and point to the object (e.g., pen, watch), then progress to parts (e.g., watchband, cuff of shirt). In patients who either cannot perform these tasks or are non-fluent, test receptive naming by stating, "Point to the pen," and holding out a pen and a watch.
5. Writing: Have the patient write a sentence spontaneously. If the patient cannot produce a sentence spontaneously, have him or her try to write by dictation. Because written expression can be affected separately from verbal expression, writing should be tested in addition to testing verbal output.
6. Testing for apraxia of speech should be part of the evaluation process for patients with a progressive aphasia. An apraxia of speech is a motor speech disorder characterized by a slow rate of speech and distorted speech sounds. To test for apraxia of speech, have the patient attempt to alternate between labial (produced by the lips), lingual (produced by the tongue), and guttural (produced by the soft palate and other throat structures) sounds by saying words such as "patty-cake" or "irresponsibility."
