Metarecognitional Model
This article outlines the Recognition / Metacognition (R/M) model, which explains how decision makers handle uncertainty and novelty while at the same time
exploiting their experience in real-world domains. The model describes a set of critical thinking
strategies that supplement recognitional processes by verifying their results and correcting
problems. Structured situation models causally organize information about a situation and contain the arguments that link evidence to conclusions. Metarecognitional processes build and verify the situation model; critique situation models for incompleteness, conflict, and unreliability; and prompt collection or retrieval of new information and revision of assumptions. We illustrate the model with examples from ship-based tactical defense.
Requests for reprints should be sent to Marvin S. Cohen, Cognitive Technologies, Inc.
A U.S. AEGIS cruiser was in the Gulf of Sidra below the “line of death,” in waters
claimed as Libyan by Ghaddafi, when it detected a gunboat emerging from a Libyan port. The
gunboat turned directly toward the cruiser and increased its speed. As the gunboat continued to
approach, the Tactical Action Officer (TAO) and the captain had to decide whether or not to
engage it.
One possible account of what happened is that the captain and TAO matched the cues
and other available information to stored patterns. They might have recognized a pattern of cues
as hostile and retrieved an associated response, e.g., to engage the gunboat. At the same time,
there might have been partial matches to other templates, such as routine patrol. A growing body
of research has supported a view of this kind in decision making and problem solving. For
example, beginning with Chase and Simon’s (1973) work on chess, expertise has been equated
with mastery of a large repertoire of familiar patterns and their associated responses. Klein
(1993) has proposed a model of decision making based largely (but not entirely) on recognitional
processes.
Pattern recognition is not the whole story, however. In particular, pattern matching does
not explain how the captain and TAO handled the conflict between competing templates, neither
of which perfectly matched the situation, or the relatively controlled manner in which they
created a picture of the situation, evaluated it, and then created an alternative. And it does not
say how they addressed the issue of thinking more about the problem versus acting immediately.
The captain and TAO dealt effectively and explicitly with uncertainty in a way that
straightforward pattern matching does not capture. Yet they were hardly Bayesians. They did not
attempt to assign a fixed set of possible meanings (with or without associated probabilities) to
each individual cue, but considered different interpretations of each cue in the context of
alternative situation pictures. And the end result was not an assignment of probabilities to the
different hypotheses about intent, but a set of situation models together with an understanding of
their strengths and weaknesses. A better description of the captain and TAO’s processing is that
they adopted a two-tiered strategy: (1) recognitional activation of expectations and associated
responses, accompanied by (2) an optional process of critiquing and correcting. Together, these
processes build, verify, and modify “stories” to account for a relatively novel set of events.
This article presents the outlines of an empirically based theory of the processes of
critiquing and correcting. In describing these critical thinking skills, we emphasize the concept
of metacognition, i.e., processes that monitor and regulate other thought processes such as
memory, attention, and comprehension (Forrest-Pressley, MacKinnon, and Waller, 1985). Our
findings suggest that metacognitive skills also include verifying and improving the results of
pattern recognition, in support of decision making in novel and uncertain situations (Cohen,
Adelman, Tolcott, Bresnick, and Marvin, 1993). Because of this interaction between recognition and metacognition, we refer to the framework as the Recognition / Metacognition (R/M) model. Our research points to several critical thinking skills that characterize proficient decision making. They include: going beyond pattern matching in order to create
plausible stories for novel situations, noticing conflicts between observations and a conclusion,
elaborating a story to explain a conflicting cue rather than simply disregarding or discounting it,
sensitivity to problems in explaining away too much conflicting data, attempting to generate
alternative coherent stories to account for data, and a refined ability to estimate the time available for critical thinking before a decision must be made.
In terms of training, the R/M model suggests that some crucial skills may not be as domain-specific as the patterns emphasized in recognitional models, nor as domain-general as the formal tools stressed in analytical models. In training based on these concepts, performance
is improved by acquiring (a) effectively structured domain knowledge and (b) skills in
questioning and revising that knowledge (Cohen, Freeman, Wolf, and Militello, 1995).
In the following sections we describe structured situation models, the arguments (or
evidence-conclusion relationships) that are contained within the models, and a set of
metarecognitional processes that modify and elaborate the stories by evaluating the arguments.
Throughout the discussion, we draw on illustrations from the domain of ship-based tactical
defense. These data are more fully described in Kaempf, Klein, Thordsen, and Wolf (this issue).
The R/M framework, however, has also been applied in Army tactical battlefield planning
(Cohen et al., 1993) and commercial airline pilot decision making (Cohen, 1993a). In the
conclusion we briefly touch on how R/M concepts need to be expanded, refined, and tested in
the future.
STRUCTURED SITUATION MODELS
When pre-stored patterns prove inadequate, decision makers draw on more abstract knowledge to construct situation models. Pennington and Hastie (1993) found that the comprehension of trial evidence by jurors is a constructive process, in which the jurors create
explanatory causal models of the available facts in the form of stories. Stories also enable jurors
to identify gaps where important pieces of the explanatory structure are missing and where
inferences might be necessary. The R/M model posits a similar process. As decision makers
become familiar with a domain, they acquire abstract knowledge about the types of events and
relationships among events that are relevant in particular situations. In new situations of the
same kind, decision makers use this generic knowledge to integrate the new information, and
subject the results to repeated evaluation and modification. In particular, structural knowledge
consisting of causal and intentional relations between events is used to construct narrative story
structures. The main components of a story episode, according to Pennington and Hastie, are
initiating events, which elicit goals, which motivate actions, which result in consequences.
Pennington and Hastie suggest that story construction is a general comprehension strategy for
understanding human action. In tactical naval situations, officers construct stories to explain data about the intent of unknown contacts.
Figure 1 shows the components and relations contained in a story structure for enemy
intent to attack. The central element in this structure is the current intent of the enemy: to attack
with a particular asset against a particular target. The left side of the structure represents prior
causes of the intent, and the right side represents the effects of the intent in the current situation.
Telling a story is not a rote process of filling in slots. The point is to try to make sense of the
hostile intent assessment from the vantage point of each one of these possible causes and effects.
For example, a plausible hostile intent story shows how high-level goals of the relevant country
could have motivated an attack, how the contact was a logical choice as an attack platform given
the overall capabilities of the attacking country, how own ship was a logical target for attack
given the country’s high-level goals and other opportunities, how the contact would have been
able to detect and localize own ship, and how its observed actions make sense as ways of getting
to an attack position quickly and safely. Figure 2 is an example of a hostile intent story at an early point in the Libyan gunboat incident.
----------------------------------------------------
Insert Figure 1 about here
----------------------------------------------------
----------------------------------------------------
Insert Figure 2 about here
----------------------------------------------------
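The slot-and-relation character of such a story can be sketched as a simple data structure. The sketch below is purely illustrative: the class and field names (IntentStory, higher_level_goals, unfilled_slots) and the example strings are our own shorthand for the slots shown in Figure 1, not part of the R/M model itself.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IntentStory:
    """Illustrative stand-in for the story structure of Figure 1."""
    conclusion: str                                   # e.g., "gunboat intends to attack own ship"
    higher_level_goals: Optional[str] = None          # prior cause: why the country might attack
    capabilities: Optional[str] = None                # prior cause: is this asset a logical platform?
    opportunity: Optional[str] = None                 # prior cause: is own ship a logical target?
    actions: List[str] = field(default_factory=list)  # effects: observed or expected actions
    consequences: List[str] = field(default_factory=list)  # effects: expected results of the actions

    def unfilled_slots(self) -> List[str]:
        """Slots with no supporting argument: candidates for critiquing (incompleteness)."""
        return [name for name in ("higher_level_goals", "capabilities", "opportunity")
                if getattr(self, name) is None]

# Early version of the hostile intent story in the Libyan gunboat incident:
story = IntentStory(
    conclusion="Gunboat intends to attack the cruiser",
    higher_level_goals="Ghaddafi threatens attacks on US ships below the 'line of death'",
    actions=["turned toward the cruiser", "increased speed"],
    consequences=["expected to fire missiles once within range"],
)
print(story.unfilled_slots())  # ['capabilities', 'opportunity'] -- gaps still to be filled
```

The unfilled_slots helper anticipates the critiquing processes discussed below: slots without supporting arguments are candidates for queries or assumptions.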
In simple pattern-matching accounts, templates are unstructured lists of features; when some of the features are activated by matching cues, the
pattern generates expectations for the other features. In the R/M model, however, causal relations
among the elements in a knowledge structure are crucial for critical thinking. For example, in
critiquing the hypothesis of enemy intent to attack, the decision maker will look for alternative
causes of the gunboat’s actions and alternative effects of enemy goals, capabilities, or
opportunities. If the hypothesis regarding enemy hostile intent is to be accepted, the slots must
not only be filled in; the links to other slots must be plausible—i.e., the decision maker must be satisfied that the observed events are better explained by hostile intent than by alternative causes.
METARECOGNITION
Metacognition has been defined as “individuals' knowledge of the states and processes of
their own mind and/or their ability to control or modify these states and processes” (Gavelek and
Raphael, 1985, p. 105). It has primarily been studied in the context of cognitive development
(e.g., Forrest-Pressley, MacKinnon, and Waller, 1985), on such topics as how children learn to
monitor and control the cognitive activities involved in reading, comprehending, memorizing,
and paying attention. We will extend some of the concepts in that research to the process of critiquing and correcting recognitional conclusions. Much of the research on metacomprehension has to do with the way readers learn to extract meaning from a text. Baker (1985, p. 155) attempts to organize these processes into two broad categories of metacomprehension skill: evaluation, i.e., “...evaluating the current state of one’s ongoing comprehension,” and regulation, in which the
reader who “has evaluated his or her understanding and found it inadequate...selects and deploys
some sort of remedial strategy.” Evaluation includes such skills as being sensitive to problems in
understanding, and accurately characterizing the problem (Gavelek and Raphael, 1985).
Regulation includes skills like determining the correct source of information (e.g., parts of the
text that will fill the gap in comprehension), searching these sources of information, constructing
an answer, and comparing the answer to a criterion to determine its adequacy (Gavelek and
Raphael, 1985). Similarly, Kuhn, Amsel, and O’Loughlin (1988) discuss the importance of
metacognition in the emergence of children’s scientific thinking skills: for example, the ability to
distinguish what one has inferred (conclusions) from what one has actually observed (evidence).
We argue that analogous metacognitive skills are crucial in proficient problem solving
and decision making. The R/M model refers to the metarecognitional skills involved in critiquing and correcting the products of recognition.
There is evidence in problem-solving research that experts are more skilled than novices
in critiquing and correcting. For example, Patel and Groen (1991) found that expert physicians
spent more time verifying their diagnoses than did less experienced physicians. Physics experts,
according to Larkin, McDermott, Simon, & Simon (1980), are more likely than novices to utilize
abstract physical representations of the problem to verify the correctness of their method and
result, e.g., by checking whether all forces are balanced, whether all entities in the diagram are
related to givens in the problem, and so on. In correcting, physics experts change their
representation of the problem until the solution becomes clear or, if this process fails, resort to
more general-purpose strategies, such as means-ends analysis (Larkin, 1981). Chi, Glaser, and
Rees (1982) found that experts returned to and refined their initial representation throughout the problem-solving process. A further metarecognitional judgment is deciding when critiquing and correcting are worth performing, and when the current solution must suffice. In
the R/M model, we call this process the quick test. Something like the quick test also governs
reading comprehension: Collins, Brown, and Larkin (1980) show that proficient readers vary the
effort devoted to comprehension according to the purpose with which they are reading. In
decision making, Beach and Mitchell (1978) and Payne, Bettman, and Johnson (1993) have
explored how time, payoffs, and task complexity affect strategy selection.
The R/M framework (Figure 3) distinguishes four kinds of metarecognitional processes:
1. Identification of the arguments, or evidence-conclusion relationships, contained in the current situation model and plan. This is simply an implicit or explicit awareness, for example, that cue A was observed on this occasion, while intent to attack, along with background knowledge, would lead one to expect cue A.
2. Processes of critiquing identify problems in the arguments that support the situation
model or plan. Critiquing can result in the discovery of three kinds of problems:
A situation model or plan is incomplete if arguments are missing; that is, information has not been considered that might bear on the assessment or on the choice of action. Understanding or planning may be complete but conflicting if there are arguments with alternative,
conflicting conclusions that better account for some of the data, or alternative actions
that better achieve some of the goals. Finally, even if understanding and planning are
complete and free of conflict, they may be unreliable if arguments that link evidence to conclusions depend on questionable assumptions.
3. Processes of correcting respond to these problems by prompting the collection or retrieval of new information and the adoption or revision of assumptions. These processes fill gaps in models or plans, resolve conflict among arguments, and expose or replace unreliable assumptions.
4. A higher-level process, called the quick test, controls critiquing and correcting. The
quick test considers the available time, the costs of an error, and the degree of
uncertainty or novelty in the situation. If conditions are appropriate, the quick test inhibits immediate action and initiates critiquing and correcting; otherwise, the decision maker simply executes the current recognitional response.
----------------------------------------------------
Insert Figure 3 about here
----------------------------------------------------
In sum, the meta-level (the shaded portion of Figure 3) monitors the object-level
(situation models and plans), maintains a model or description of it (i.e., identifies arguments and
problems of incompleteness, conflict, and unreliability), and modifies object-level activity (by
inhibiting overt action, adopting assumptions, and producing queries to recognitional processes).
Two possible misunderstandings of this framework should be dealt with, however briefly.
First, the R/M model does not imply that metacognitive processes are localized in a different part of the brain or carried out by a separate processing system; it implies only that recognitional and metacognitive processes may be distinguished functionally (rather than physically). One version of this functional distinction is stated by Nelson & Narens (1994): Object-level processes provide
information about themselves to meta-level processes, while meta-level processes exert control
over object level processes. Second, the framework does not assert that all decision processes
require a meta-level process, and thus does not imply an infinite regress of metacognitive levels. Metacognitive skills, like recognitional skills, are acquired only to the extent that they support adaptive action in the real world. There are persuasive arguments that an efficient solution to this problem involves a single meta-level that supplements recognition with a process of critiquing and modifying. Additional levels provide rapidly diminishing returns and increased costs; we would not expect such “skills” to be acquired. In sum, the R/M framework does not imply a “homunculus.” It makes testable predictions about the conditions under which metacognitive processing will and will not occur.
The R/M model provides a highly dynamic view of decision making. Problems exposed
by critiquing lead to correction steps, which involve the modification and elaboration of situation
models and plans. Critiquing and correcting for one problem can lead to the creation and
detection of other problems, triggering new cycles of correction and evaluation. The process
stops when the quick test concludes that the model or plan is satisfactory, or that the costs of
further thinking outweigh the potential benefits. In this section, we will explore both the
component processes in the R/M model and the manner in which they interact.
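The following sketch illustrates this cycle in simplified Python. Everything in it is a hypothetical simplification: the function names (quick_test, correct, decide), the toy representation of problems as a set of labels, and the stopping rule are our own illustration of the control structure described above, not an implementation of the R/M model.

```python
def quick_test(time_available: int, stakes: str, novelty: str) -> bool:
    """More critical thinking is warranted only if time remains, the cost of an
    error is high, and the situation is uncertain or unfamiliar."""
    return time_available > 0 and stakes == "high" and novelty == "high"

def correct(problems: set) -> set:
    """Toy correcting step: fixing one problem can expose another, e.g., filling
    a gap by assumption creates an unreliability to be checked later."""
    remaining = set(problems)
    if "incomplete" in remaining:
        remaining.discard("incomplete")
        remaining.add("unreliable")       # gap filled by adopting an assumption
    elif "conflict" in remaining:
        remaining.discard("conflict")
        remaining.add("unreliable")       # conflict explained away by a rebuttal (an assumption)
    elif "unreliable" in remaining:
        remaining.discard("unreliable")   # assumption evaluated, confirmed, or replaced
    return remaining

def decide(problems: set, time_available: int, stakes: str, novelty: str) -> str:
    """Cycle of critiquing and correcting, controlled by the quick test."""
    while problems and quick_test(time_available, stakes, novelty):
        problems = correct(problems)      # each correcting step may create or remove problems
        time_available -= 1               # critical thinking consumes time
    return "act on the current situation model"

print(decide({"incomplete", "conflict"}, time_available=5, stakes="high", novelty="high"))
```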
Arguments
The first step of metarecognition is awareness of which elements of the story (e.g., the
gunboat’s turning toward own ship) support the key conclusion (e.g., that the gunboat is hostile).
An argument, in Toulmin's sense (Toulmin, Rieke, & Janik, 1984), is a structure with slots for grounds, conclusion, backing (the basis for the linkage
between grounds and conclusion), and rebuttals (conditions under which the linkage might not
hold). A way of summarizing this structure is: Grounds, so Conclusion, on account of Backing,
unless Rebuttal.
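As a rough illustration, an argument in this sense can be represented as a record with the four slots just listed. The field names and example strings below are hypothetical; they paraphrase the gunboat example rather than reproduce any formal notation from Toulmin or from the R/M model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    """Illustrative Toulmin-style argument with the four slots named in the text."""
    grounds: str                 # the evidence
    conclusion: str              # what the evidence is taken to support
    backing: str                 # basis for the linkage between grounds and conclusion
    rebuttals: List[str] = field(default_factory=list)  # conditions that would break the linkage

arg = Argument(
    grounds="Gunboat turned toward own ship and increased speed",
    conclusion="Gunboat intends to attack own ship",
    backing="Experience: attacking craft close on their targets quickly",
    rebuttals=["The heading toward own ship is coincidental (many US ships in the area)"],
)

# The text's summary form: Grounds, so Conclusion, on account of Backing, unless Rebuttal.
print(f"{arg.grounds}, so {arg.conclusion}, on account of {arg.backing}, "
      f"unless {'; '.join(arg.rebuttals)}")
```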
Arguments may proceed from effects to causes (e.g., the arguments based on actions of
the gunboat in Figure 2), from causes to effects (e.g., the argument in Figure 2 based on
Ghaddafi’s goals and opportunities), or from effects to other effects within a causal story structure. The backing of an argument helps determine which arguments to critique and what kinds of problems to look for. For example, the backing might be
experience in analogous situations (Were they similar enough to the present situation? Is my
experience representative?); the backing may involve an analytical technique (Is each step of
reasoning valid? Does this method agree with other methods?); or it may involve the authority of a source (How credible is the source in this instance?).
Rebuttals are specific ways that the link between grounds and conclusions in an argument
could be broken. If the direction of an argument is from cause to effect, a rebuttal might involve
an alternative possible effect; if the direction of an argument is from effect to cause, a rebuttal
might involve an alternative possible cause. In critiquing arguments, metarecognition looks for rebuttals of these kinds.
Cues in a situation can recognitionally activate story structures, and these structures in
turn may recognitionally activate additional information that fills their slots. Critiquing goes
beyond this relatively automatic retrieval. In critiquing the completeness of a story, the decision
maker focuses attention on specific components of the structure (i.e., the key conclusion, such as
hostile intent, and a possible cause or effect) and queries for information regarding the cause or
effect that would provide the grounds for an additional argument regarding the conclusion.
In Figure 2, for example, the story is incomplete because the TAO has not fully
considered the factor of capability. Does the choice of a gunboat as an attack platform really
make sense? The gunboat has weapons capable of damaging the cruiser, but this is at best a weak
argument for hostile intent, ignoring other assets that might have been used instead of the
gunboat. In addition, the TAO has not fully considered the factor of opportunity. The cruiser is
below the line of death, but again this is a weak argument. Does the choice of the cruiser as a
target make sense given other possible opportunities? Finally, the TAO has not considered how
the gunboat might have detected the presence of the cruiser. As we shall see, filling these gaps led to the discovery of conflict.
A key factor in judging the completeness of a situation model is whether it provides adequate
arguments for action. In anti-air warfare there is typically a relatively small set of available
options (e.g., sending other aircraft to intercept the contact, warning it, illuminating it, setting
internal alerts, and engaging). The situation model needs to be elaborated in sufficient detail to
justify one of these possible actions. For example, when hostile intent is a precondition for
engagement, situation understanding is incomplete until a convincing story describing intent has
been constructed. Similarly, jurors decide between guilty and not-guilty by applying verdict categories to the story they have constructed (Pennington & Hastie, 1993).
In other domains the relevant courses of action are not so simple. In Army battlefield
planning, for example, plans are multidimensional and unique. In that case, the evolving plan
may be subjected to metarecognitional critiquing for completeness. If the plan is incomplete, the
decision maker returns to situation assessment, and tries to construct arguments for elements of
the situation model that further constrain the plan. Situation assessment and planning are
intertwined. Examples of this process are given in Cohen et al. (1993) and Voss et al. (1991).
In fleshing out a story, the decision maker may find new arguments whose conclusions
contradict the conclusions of existing arguments. This is conflict. Both conflict and unreliability
involve discovery of rebuttals to arguments, often in the form of alternative causes or effects.
Unreliability requires only that an alternative cause or effect be possible, thus neutralizing the
effect of the argument. Conflict goes two steps beyond this: First, there must be an argument for
the alternative cause or effect, i.e., positive grounds for believing that it is the case. Second, the
alternative cause or effect must be incompatible with the original conclusion. For both of these reasons, conflict casts more serious doubt on a conclusion than unreliability does.
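The distinction can be summarized in a small decision rule. The helper below is a hypothetical illustration only; the function name and arguments are ours, and the two printed examples paraphrase cases from the gunboat incident discussed in the text.

```python
def classify_alternative(grounds_for_alternative: bool, incompatible: bool) -> str:
    """An alternative cause or effect that is merely possible makes the original
    argument unreliable; one that is positively supported AND incompatible with
    the original conclusion creates conflict."""
    if grounds_for_alternative and incompatible:
        return "conflict"
    return "unreliable"

# The gunboat's turn toward the cruiser might be a coincidence (possible, but no
# positive evidence for it): the hostile intent argument is merely unreliable.
print(classify_alternative(grounds_for_alternative=False, incompatible=True))

# The gunboat apparently could not have localized the cruiser at that range
# (positive grounds, incompatible with intent to attack the cruiser): conflict.
print(classify_alternative(grounds_for_alternative=True, incompatible=True))
```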
In the Libyan gunboat incident, filling the gaps in the hostile intent story led to the
discovery of conflict (Figure 4). First, the Libyans had an air capability that would have been
much more effective than the gunboat. The inferiority of the gunboat provides grounds for an
argument that its intent is not to attack. Fleshing out the opportunity component of the story
produced a second conflicting argument: Another U.S. ship was both closer to the gunboat and
farther below the Line of Death. Finally, the TAO and captain believed that the gunboat could
not have detected the cruiser at the range at which it turned. This provides an even stronger argument against the hostile intent story.
----------------------------------------------------
Insert Figure 4 about here
----------------------------------------------------
One way to correct for conflict is to retrieve or collect new information that tips the
balance in favor of one of the conflicting conclusions. Such information is not always available,
however, and decision makers are not always satisfied to resolve conflict by sheer weight or
number of cues. Correcting steps may also include a reassessment of the original conflicting
arguments. If two arguments conflict, then one or both of them must be wrong. The assumptions
underlying those arguments can be identified and evaluated. If one or more of the assumptions
can be judged implausible, the conflict is resolved. Conflict resolution thus leads naturally to an examination of the assumptions underlying arguments. Some assumptions may be so familiar that they are retrieved automatically, virtually at the same time the argument is constructed. For example, failure to respond to IFF challenges is a cue suggesting hostility. Yet officers had frequently experienced friendlies’ failing to respond to
IFF challenges, and as a result readily identified this as an unreliable argument. In other cases,
however, an argument may appear plausible at first blush, but has weaknesses that are only
revealed by more deliberate critiquing. It is as if the decision maker adopts a devil’s advocate
strategy: He temporarily assumes that the argument is false, i.e., that the grounds are true but the conclusion is false, and then tries to explain how this could be so.
For example, in the Libyan gunboat incident, the captain and TAO critiqued the argument
against hostile intent based on the inability of the gunboat to localize the cruiser. They
temporarily assumed this argument was false, and generated several possible explanations: The
gunboat might have new training, new equipment, or Russian advisors. Similarly, they looked
for rebuttals to the argument against hostile intent based on capability. The gunboat might have
been chosen instead of superior air assets if the Libyans were desperate to inflict harm and
unconcerned about their own survival. Finally, the choice of the cruiser as a target might have
involved either lack of awareness of the other U.S. ship or unknown Libyan objectives. Figure 5
shows the rebuttals that were generated for each of the conflicting arguments.
The captain and TAO may explore new assumptions such as these to patch up the attack
story. If they wish to retain the assessment of hostile intent, rebuttals like those shown in Figure
5 must be accepted. There is a traditional but oversimplified point of view according to which
attempting to explain conflicting data is a case of the confirmation bias (e.g., Lord, Ross, &
Lepper, 1979). In many of the situations we have looked at, however, no familiar pattern fits the
data; there are data that appear to conflict with every reasonable hypothesis. In such situations
the traditional view either paralyzes decision makers or forces them to construct an unrealizable
statistical average of the possibilities. Expert decision makers, by contrast, try to make sense of
the data by constructing coherent stories. They do not stop there, however. After explaining
conflict, they initiate a new cycle of critiquing, in which they assess the explanation. If it
contains too many unreliable assumptions, a variety of corrective methods are available. They
may collect or retrieve new data to confirm or disconfirm the assumptions; they may drop the
unreliable assumptions and look for other explanations; or they may explore an alternative
conclusion, generating a new story that requires fewer or more plausible assumptions (Cohen,
1986, 1993b). Cohen, Freeman, Wolf, and Militello (1995) found that training in story construction and critiquing skills improved performance in this kind of task.
----------------------------------------------------
Insert Figure 5 about here
----------------------------------------------------
After critiquing and correcting the hostile intent story, the captain and the TAO turned to
the task of constructing a patrol story. This required critiquing the reliability of cues that
originally suggested hostile intent. They temporarily assumed that the gunboat was on patrol
despite its presence in dangerous circumstances, its heading toward own ship, and its high speed,
and queried for an explanation. First, the gunboat might be out in harm’s way if its crew were
not aware of the presence of U.S. ships. Second, there were so many U.S. ships in the area that
almost any heading would have meant turning toward one of them by coincidence. Third,
perhaps what the captain and TAO regarded as high speed was actually the gunboat’s standard
patrol speed. The new story—that the gunboat was on patrol—depends on the assumptions that the gunboat was unaware of the U.S. ships, that its heading toward the cruiser was coincidental, and that its speed was normal for a patrol.
In this incident, both of the candidate stories, hostile intent and patrol, involved
questionable assumptions. This is typical of situations in which no standard pattern perfectly fits
the data, and in which, as a result, metacognitive processes are required. Through critiquing and
correcting, however, the officers had become aware of the assumptions underlying both stories.
This awareness enabled them to design a new, more robust story about the gunboat’s intent.
According to this new story, the intent of the gunboat was opportunistic attack. The
opportunistic-attack story does not require assumptions about localizing the cruiser or selecting
it as a target, as required by the attack story, since the gunboat was coming out to engage any
U.S. ships it could find. And it is no longer necessary to assume a surprisingly high normal speed
or extremely poor situation awareness, as in the patrol story, since the intent was hostile. As a
result, the opportunistic attack story became the officers’ working theory of the situation.
Critiquing and correcting for one problem can lead to the creation and detection of other
problems. Figure 6 summarizes how steps of critiquing and correcting can be linked in the R/M
framework. The three types of problems explored by critiquing are shown as three points on a triangle: incomplete arguments for the key conclusion; conflicting arguments, whose conclusions contradict the key conclusion (e.g., hostile intent); and unreliable assumptions in arguments for
the key conclusion or in rebuttals of arguments against the key conclusion. The arrows showing
transitions from one corner of the triangle to another represent correcting steps. It is these
correcting steps that may sometimes (but not always) produce new problems. For example, filling gaps, by collecting new information or adopting assumptions, can lead either to unreliable arguments or to conflict with other arguments; explaining conflict requires the adoption of assumptions in the form of rebuttals; dropping or replacing unreliable assumptions can restore the original problems of
incompleteness or conflict. These new problems may then be detected and addressed in a new cycle of critiquing and correcting.
----------------------------------------------------
Insert Figure 6 about here
----------------------------------------------------
It is important to note that correcting does not always lead to new problems: Additional
data may sometimes confirm rather than disconfirm an initial conclusion, and assumptions may
turn out to be plausible, coherent, few in number, and consistent with the data. We can represent
these outcomes by shrinking the size of the triangle in Figure 6. The smaller the triangle, the less remaining incompleteness, unreliability, and conflict. Also, the smaller the triangle, the less leeway or need for corrective measures.
Proficient decision makers try to construct complete and coherent situation pictures,
within the constraints of the quick test. Thus they often appear to advance from the upper right or
left corners of the triangle down to the bottom. They try to fill gaps and explain conflict, if necessary by adopting and then critiquing and revising assumptions. As we saw, this process does not force decision makers to accept the
resulting story. But it tells them what they must believe if they do accept it. Evaluation of
situation models is reduced to a single common currency: the reliability of the assumptions that
they require.
CONCLUSIONS
Proficient decision makers are recognitionally skilled: that is, they are able to recognize a
large number of situations as familiar and to retrieve an appropriate response. Recent research in
tactical decision making suggests that proficient decision makers are also metarecognitionally
skilled. In novel situations where no familiar pattern fits, proficient decision makers supplement
recognition with processes that verify its results and correct problems. The Recognition /
Metacognition framework suggests a variety of metarecognitional skills that may develop with
experience or serve as the objectives of training and as guidelines in the design of decision aids:
(1) More experienced decision makers are more sophisticated in their use of the Quick
Test. They buy themselves more time for resolving uncertainty by (a) explicitly asking how
much time they have before they must commit to a decision, and (b) estimating the available
time more precisely. For example, in the air defense domain, less experienced officers were
ready to shoot as soon as an approaching contact was within its nominal weapons range of the
cruiser. More experienced officers considered the actual ranges at which such contacts had fired
in the past, specific conditions that might affect firing range in the present situation (e.g.,
weather, visibility), and potential warning cues (e.g., changing altitude, using radar).
(2) More experienced decision makers adopt more sophisticated critiquing strategies.
They start by focusing on what is wrong with the current model, especially incompleteness.
Attempting to fill in missing arguments typically leads to discovery of other problems (i.e.,
unreliable arguments or conflicts among arguments). These problems then motivate the
elaboration of the current model or its replacement by an alternative. Less experienced decision
makers are more likely to consider an alternative hypothesis at the start of their thinking and then to choose between the competing hypotheses without fully developing either one.
(3) Experienced decision makers adopt more sophisticated correcting strategies. They try
to modify a story in order to explain conflicting evidence, rather than ignoring or discounting it.
They evaluate the assumptions required by alternative stories, rather than comparing the data to
fixed patterns or checklists. They try to construct a more plausible story by revising the most questionable assumptions.
The R/M model explains how experienced decision makers are able to exploit their
experience in a domain and at the same time handle uncertainty and novelty. They construct and
manipulate concrete, visualizable models of the situation, not abstract aggregations (such as 55%
hostile, 45% not hostile). Uncertainty is represented explicitly at the metacognitive level, in terms of the gaps, conflicts, and unreliable assumptions associated with a situation model.
Further development of the R/M model calls for a program of research that spans several domains and that involves a variety of converging methods. For example, critical
incident interviews and think-aloud problem-solving sessions can be coded and analyzed to
identify types of situation model structures and metacognitive processes, and to test their
correlation with experience (e.g., Cohen, Thompson, Adelman, Bresnick, Tolcott, & Freeman,
1995). Specific predictions about decision making behavior can be tested experimentally (for
example, the interaction of the “confirmation bias” with the number and plausibility of the assumptions needed to explain conflicting data). Implementing the R/M model in a hybrid neural-symbolic architecture might permit more extensive investigation
of its implications and empirical validity (e.g., Thompson, Cohen, & Freeman, 1995). The R/M
model might also be extended and tested in the context of team and distributed decision making
(for example, where different team members or geographically separated subteams must monitor
one another’s performance and respond by adjusting their own performance). Finally, the R/M
model might be tested further as a basis for improving performance, in the training of critical
thinking skills and in the design of decision support systems (e.g., Cohen, Freeman, Wolf, &
Militello, 1995).
ACKNOWLEDGMENTS
This work has been supported by the TADMUS program.
REFERENCES
Baker, L. (1985). How do we know when we don’t understand? Standards for evaluating text comprehension. In D.L. Forrest-Pressley, G.E. MacKinnon, & T.G. Waller (Eds.), Metacognition, cognition, and human performance. Academic Press.
Beach, L.R., & Mitchell, T.R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439-449.
Chase, W.G., & Simon, H.A. (1973). The mind's eye in chess. In W.G. Chase (Ed.), Visual information processing. New York: Academic Press.
Chi, M., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R.J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7-75). Hillsdale, NJ: Erlbaum.
Cohen, M.S. (1986). An expert system framework for non-monotonic reasoning about probabilistic assumptions. In J.F. Lemmer & L.N. Kanal (Eds.), Uncertainty in artificial intelligence. Amsterdam: North-Holland.
Cohen, M.S. (1993a). Taking risk and taking advice: The role of experience in airline pilot diversion decisions. In R.S. Jensen & D. Neumeister (Eds.), Proceedings of the Seventh International Symposium on Aviation Psychology. Columbus, OH: The Ohio State University.
Cohen, M.S. (1993b). The naturalistic basis of decision biases. In G.A. Klein, J. Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in action: Models and methods. Norwood, NJ: Ablex Publishing Corporation.
Cohen, M.S., Adelman, L., Tolcott, M.A., Bresnick, T.A., & Marvin, F.F. (1993). A cognitive framework for battlefield commanders' situation assessment. Arlington, VA: Cognitive Technologies, Inc.
Cohen, M.S., Freeman, J.F., Wolf, S., & Militello, L. (1995). Training metacognitive skills in
naval combat decision making (Technical Report 95-4). Arlington, VA: Cognitive
Technologies, Inc.
Cohen, M.S., Thompson, B.B., Adelman, L., Bresnick, T.A., Tolcott, M.A., & Freeman, J.T. (1995).
Collins, A., Brown, J.S., & Larkin, K.M. (1980). Inference in text understanding. In R.J. Spiro, B.C. Bruce, & W.F. Brewer (Eds.), Theoretical issues in reading comprehension. Hillsdale, NJ: Erlbaum.
Forrest-Pressley, D.L., MacKinnon, G.E., & Waller, T.G. (Eds.) (1985). Metacognition, cognition, and human performance. Academic Press.
Gavelek, J., & Raphael, T.E. (1985). Metacognition, instruction, and the role of questioning activities. In D.L. Forrest-Pressley, G.E. MacKinnon, & T.G. Waller (Eds.), Metacognition, cognition, and human performance. Academic Press.
Kaempf, G.L., Klein, G., Thordsen, M.L., & Wolf, S. (this issue). Decision making in complex command-and-control environments. Human Factors.
Klein, G.A. (1993). A Recognition-Primed Decision (RPD) model of rapid decision making. In
G.A. Klein, J. Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in
action: Models and methods (pp. 138-147). Norwood, NJ: Ablex Publishing Corporation.
Kuhn, D., Amsel, E., & O’Loughlin, M. (1988). The development of scientific thinking skills. Academic Press.
Larkin, J.H. (1981). Enriching formal knowledge: A model for learning to solve textbook physics problems. In J.R. Anderson (Ed.), Cognitive skills and their acquisition. Hillsdale, NJ: Erlbaum.
Larkin, J., McDermott, J., Simon, D.P., & Simon, H.A. (1980). Expert and novice performance in solving physics problems. Science, 208, 1335-1342.
Lord, C.G., Ross, L., & Lepper, M.R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098-2109.
Nelson, T.O., & Narens, L., (1994). Why investigate metacognition? In J. Metcalfe & A.P.
Shimamura (Eds.), Metacognition (pp. 1-25). Cambridge, MA: The MIT Press.
Patel, V.L., & Groen, G.J. (1991). The general and specific nature of medical expertise: A critical look. In K.A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.
Payne, J.W., Bettman, J.R., & Johnson, E.J. (1993). The adaptive decision maker. New York: Cambridge University Press.
Pennington, N., & Hastie, R. (1993). A theory of explanation-based decision making. In G.A.
Klein, J. Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in action:
Models and methods (pp. 188-201). Norwood, NJ: Ablex Publishing Corporation.
Thompson, B.B., Cohen, M.S., & Freeman, J.T. (1995). Metacognitive behavior in adaptive agents. Proceedings of the 1995 World Congress on Neural Networks (Vol. 2).
Toulmin, S.E., Rieke, R., & Janik, A. (1984). An introduction to reasoning. New York: Macmillan.
Voss, J.F., Wolfe, C.R., Lawrence, J.A., & Engle, R.A. (1991). From representation to decision: An analysis of problem solving in international relations. In R.J. Sternberg & P.A. Frensch (Eds.), Complex problem solving: Principles and mechanisms (pp. 119-158). Hillsdale, NJ: Erlbaum.
LIST OF FIGURES
Figure 1. Components and relations in a story structure for hostile intent.
Figure 2. Hostile intent story early in Libyan gunboat incident. Italicized items are inferred or predicted. Large arrows represent arguments for hostile intent. Gaps in the story are indicated as “incomplete.”
Figure 3. Recognitional (object-level) and metarecognitional (meta-level) processes in the R/M framework.
Figure 4. Conflicting arguments in the hostile intent story. Large arrows represent new arguments against hostile intent.
Figure 5. Assumptions required to patch up the hostile intent story. Boxes superimposed on the conflicting arguments show the assumptions that must be accepted to retain hostile intent.
Figure 6. How steps of critiquing and correcting can be linked: correcting transitions among incomplete, conflicting, and unreliable arguments.
Figure 1. Generic story structure for hostile intent. Slots include the prior situation, higher-level goals, capabilities, opportunity, current intent, actions (localize target, reach position, engage), and consequences, each with associated questions (e.g., Is the target, such as own ship, a logical choice in terms of its vulnerability, accessibility, and lucrativeness? How will the assets perform the attack once they are in position?).
Figure 2. Hostile intent story early in the Libyan gunboat incident. Prior situation: US exercises to demonstrate freedom of navigation; Ghaddafi threatens and carries out Libyan attacks on US ships. Opportunity: cruiser is 20 miles below the "Line of Death"--but is it the best target? Consequences: gunboat expected to fire missiles at cruiser as soon as within range. Gaps in the story are marked INCOMPLETE.
Figure 3. Recognitional (object-level) processes and metarecognitional (meta-level) processes of critiquing and correcting in the R/M framework.
Figure 4. Conflicting arguments in the hostile intent story. Capabilities--CONFLICT: the gunboat is far less capable than Libyan air assets; the gunboat does not intend to attack the cruiser. Opportunity--CONFLICT: there is another ship further below the Line of Death and closer to the gunboat. Actions: the gunboat is in harm's way, turned toward the cruiser, and increased speed; it did not respond to warnings.
Figure 5. Assumptions required to patch up the hostile intent story, e.g., UNRELIABLE: unknown Libyan goals or tactics, superimposed on the opportunity conflict (another ship further below the Line of Death and closer to the gunboat).
Figure 6. Triangle of problems detected by critiquing (Incomplete, Conflicting, Unreliable), with correcting steps as transitions: fill gaps by means of assumptions, collect or retrieve new information, and drop conflicting assumptions.
MARVIN S. COHEN is with Cognitive Technologies, Inc., Arlington, Virginia. His work includes research on human reasoning and decision making in real-world settings, training, interface design, and methods for representing and manipulating uncertainty. Current projects
include modeling and training tactical decision making in Army and Navy domains, real-time
capturing of battlefield mental models, hybrid neural and symbolic models of metacognitive
processes, computer and human interactions in target recognition, and commercial airline pilot
decisions. Dr. Cohen obtained his Ph.D. in experimental psychology from Harvard University.
JARED T. FREEMAN is with Cognitive Technologies, Inc. He specializes in knowledge elicitation, cognitive task analysis, and training design for problem solving. His projects include work to identify and train situation assessment skills and critical thinking among commercial airline pilots and officers of the U.S. Army and Navy. Dr. Freeman received his Ph.D. in cognitive psychology.
STEVE WOLF is a research associate at Klein Associates, Inc. Mr. Wolf has had a key
technical role on projects concerned with expert knowledge and decision support, including
training and decision support systems for the Navy, decision support requirements of airport
baggage inspectors, helicopter pilot safety, Fire Direction Officers' decisions, and human-computer interface designs for a surveillance platform. Mr. Wolf holds a B.S. in psychology.