Design and Development Assessment
Steven L. Cornford, Martin S. Feather,
John C. Kelly,
Timothy W. Larson, Burton Sigal
Jet Propulsion Laboratory,
California Institute of Technology
4800 Oak Grove Drive
Pasadena, CA 91109, USA
Steven.L.Cornford@Jpl.Nasa.Gov
Martin.S.Feather@Jpl.Nasa.Gov
John.C.Kelly@Jpl.Nasa.Gov
Timothy.W.Larson@Jpl.Nasa.Gov
Burton.Sigal@Jpl.Nasa.Gov
James D. Kiper
Department of Computer Science
and Systems Analysis
Miami University
Oxford, OH 45056
kiperjd@muohio.edu
Abstract

An assessment methodology is described and illustrated. This methodology separates assessment into the following phases: (1) elicitation of requirements; (2) elicitation of failure modes and their impact (risk of loss of requirements); (3) elicitation of failure mode mitigations and their effectiveness (degree of reduction of failure modes); (4) calculation of outstanding risk taking the mitigations into account.

This methodology, with accompanying tool support, has been applied to assist in planning the engineering development of advanced technologies. Design assessment featured prominently in these applications. The overall approach is also applicable to development assessment (of the development process to be followed to implement the design).

Both design and development assessments are demonstrated on hypothetical scenarios based on the workshop's TRMCS case study. TRMCS information has been entered into the assessment support tool, and serves as illustration throughout.
Keywords

Assessment, Requirements Elicitation, Risk Management, Software Processes, Quality Assurance, Capability Maturity Model, Tradeoffs
1: Introduction
In complex and critical systems, assessments are a means to determine the adequacy of designs to meet their requirements, and the adequacy of development plans to satisfactorily implement designs.

This paper outlines a methodology for performing detailed and quantitative assessments of system designs and of software development plans. The key components of this methodology are the notions of Requirements (what it is that the system is supposed to achieve), Failure Modes (things that, should they occur, will lead to loss of requirements) and Mitigations (design components, activities, etc., that reduce the risk of requirements loss incurred by Failure Modes). The methodology advocates a disciplined approach to elicitation of each of these, culminating in the calculation of outstanding risk taking the mitigations into account.
This approach to assessment is based upon a broader methodology for spacecraft mission assurance and planning, called Defect Detection and Prevention (DDP) [1]. A computerized tool supports the real-time application of DDP. The DDP tool represents the elicited information, computes derived information (e.g., aggregate risk), and graphically displays information. The DDP tool is designed to offer modest capabilities in all these areas. It emphasizes tight coordination between its various capabilities, which accounts for its capacity to enable users to work effectively within a large space of information, discussed further in [2].
The rest of this paper is organized as follows: The major phases of design assessment are covered first: requirements elicitation (Section 2), failure modes elicitation (Section 3), mitigations elicitation (Section 4), and assessment calculation (Section 5). For each, the methodology and tool support is described and illustrated on hypothetical scenarios within the TRMCS domain. Since the authors are by no means experts in this domain, it should be understood that the purpose of these scenarios is to illustrate the potential of the assessment methodology. Development assessment is considered next (Section 6). Conclusions follow (Section 7), and finally some further illustrations of development assessment in the TRMCS application are in an appendix.
2: Requirements Elicitation

Requirements elicitation is the first step to performing an assessment. The system's design will be measured against those requirements.
2.1: Requirements Elicitation Methodology

The assessment process must establish the system's requirements, and their relative importance. All the key stakeholders must contribute to this activity, in order that no critical requirement is accidentally overlooked. Since not all requirements will be equally important, they must be weighted relative to one another. This will likely need the simultaneous involvement of experts from multiple disciplines. It is important that this establishing of the relative importance of the requirements not be biased by knowledge of the ease or difficulty of their achievement within a given design or approach.

Requirements elicitation is performed in a session at which all the stakeholders attend. A moderator directs the flow of conversation, encourages input from all stakeholders, etc. The DDP tool is used to capture the elicited requirements and display them for all attendees to see.

The stage at which the assessment takes place bounds the level of detail to which requirements can be elicited. For example, only after a detailed design has been formulated can requirements of the design's subcomponents be determined. Furthermore, it is only necessary to elicit enough detail to be able to conduct the assessment. As a result, modest capabilities for representing requirements suffice. These are discussed next.
The requirements were taken from the TRMCS case study documentation:

1: Allow issuing of help requests
2: Guarantee continuous service of the system
3: Timely delivery of help service
4: Guarantee secrecy
5: Handle parallel help requests
6: Minimize user damage
7: Be open to new service installation
8: Uniform cost for varied users
9: Handle dynamic users
  9.1: Changing number of users
  9.2: Changing location of users
10: Data/history persistence
11: Regulations and Standards
2.2: Requirements Elicitation Tool Support
The DDP tool offers the following capabilities for representing and manipulating requirements:

- A pre-determined set of useful attributes for requirements - e.g., title, reference (the author/source of the requirement), description (unbounded text field for lengthy comments), and relative weight. The process (and tool) make many of the attributes optional, so that the users can make the choice of when and how much detail to provide.
- Ability to add/edit/remove requirements on the fly. It is also possible to turn "on" and "off" individual requirements.
- Tree-structured organization of requirements, permitting on-the-fly reorganizations during the elicitation process. This form of hierarchical grouping is particularly useful as the number of requirements grows.
Figure 1 - Requirements and a chart of their weights
For the purposes of illustration, the case study's "Handle dynamic changes to the number and location of users" has been turned into a small tree whose parent node is "Handle dynamic users", and whose children are "Changing number of users" and "Changing location of users".
Requirement weights are shown pictorially in the bar chart, and the stakeholders' assigned weights are shown in the boxes to the left of the tree. The effect of bottom-up computation of requirements is discernable in the weight of requirement 9. Its weight, 8, is the sum of the weights assigned to its two children, and its background is automatically shaded to indicate that it is calculated, and therefore not directly editable.
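The bottom-up weight computation just described can be sketched as follows. This is an illustrative reconstruction, not the DDP tool's actual code; the children's leaf weights are hypothetical, chosen only so that they sum to requirement 9's weight of 8.

```python
# Illustrative sketch of bottom-up weight computation in a requirements
# tree (not the DDP tool's actual code). A parent's weight is the sum of
# its children's weights and is therefore calculated, not directly editable.

def weight(req):
    """Return a leaf's stakeholder-assigned weight, or the children's sum."""
    children = req.get("children", [])
    if not children:
        return req["weight"]
    return sum(weight(child) for child in children)

# Hypothetical fragment of the TRMCS requirements tree; the children's
# weights are invented, but sum to requirement 9's weight of 8.
req9 = {
    "title": "9: Handle dynamic users",
    "children": [
        {"title": "9.1: Changing number of users", "weight": 5},
        {"title": "9.2: Changing location of users", "weight": 3},
    ],
}

print(weight(req9))  # prints 8
```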
1: Power outage at center
2: Insecure communications
3: Unauthorized requests for data
4: False alarms
5: Overwhelming burst of help requests
6: Loss of data records
7: New devices incompatible with existing service
8: Disaster scenario
9: Communications system down
10: Rudimentary connectivity from/to user
11: $ losses from high expense vs low income
3: Failure Modes Elicitation

The second major step of the assessment process is the elicitation of failure modes - all the things that, should they occur, will lead to loss of requirements. This step also includes the determination of how much each failure mode impacts each requirement. For example, a power outage at the TRMCS center would adversely impact the "Guarantee continuous service" requirement (and others) if nothing were done to compensate for it.
3.1: Failure Modes Elicitation Methodology

As was the case for requirements, all the stakeholders should contribute to the activity of eliciting failure modes, in order that no critical failure mode is overlooked. However, determination of how much each failure mode impacts each requirement need not necessarily involve all the stakeholders simultaneously. Instead, it is typical that failure modes can be subdivided into major disciplines, and, for a given discipline, only the experts in that discipline need be involved in determining the impacts of its failure modes.
Failure modes include both external events (e.g., lightning strikes, power failures) and internal events (e.g., failure caused by a bug in the system's software). This phase of the assessment process determines the likelihood and impact of failure modes as if nothing were done to inhibit their occurrence or reduce their impact. Mitigation of failure modes, by good design choices and by following good design methodologies, will be taken into account in subsequent stages of the assessment process.
We have postulated 11 major failure modes - see Figure 2. Some are consequences of external events, for example number 1, "Power outage at center." Some may be caused by events internal to the system; for example, if the design includes its own communication system over which the TRMCS system will operate, then its own failure would cause number 9, "Communications system down". Some may be combinations of both; for example, if the TRMCS deploys its own monitors that communicate using an existing paging network, then in concert these may lead to number 10, "Rudimentary connectivity from/to user."

Figure 2 - hypothesized Failure Modes
3.2: Failure Modes Elicitation - Tool Support

The DDP tool's support for representation and elicitation of failure modes is similar to that for requirements. Failure Modes have many of the same attributes; they can be organized into trees, etc. A Failure Mode does not have a weight (an attribute specific to requirements), but does have an a-priori likelihood (an attribute specific to Failure Modes).
A Failure Mode may have a different impact on different requirements. Thus impact is not an attribute of a Failure Mode alone, but is an attribute of a Failure Mode x Requirement pair. The DDP tool uses a matrix as the primary means to allow the entering/editing/inspecting of impacts. The rows of this matrix are Requirements, and the columns Failure Modes. Each inner cell holds the impact value of the cell's column's Failure Mode on the cell's row's Requirement. An impact value is a number in the range 0 to 1, where 0 corresponds to no impact whatsoever, and 1 corresponds to complete loss of the Requirement should the Failure Mode occur. An empty cell is equivalent to an entry of 0.
Figure 3 shows some hypothesized impact values for the previously listed Failure Modes on the TRMCS requirements. For example, the first row and column (shown highlighted) correspond to the Requirement "Allow issuing of help requests" and Failure Mode "Power outage at center". The inner cell holds the value 1, indicating that a power failure will lead to complete loss of ability to issue help requests. This is plausible, since the system at the center would presumably be rendered inoperable by the power failure if nothing were done to mitigate this.
The tool automatically calculates some aggregate values for impacts. These are shown in the second row from the top, and third column from the left:

- The row of aggregate values displays, for each Failure Mode, the total expected risk of that Failure Mode. For Failure Mode FM, this is computed as:

  A-priori-impact(FM) = Likelihood(FM) * (Σ (R ∈ Requirements): Weight(R) * Impact(FM,R))

  This gives a measure of the total requirements loss that each Failure Mode would cause if not mitigated against.

Figure 3 - Requirements x Failure Modes matrix
- The column of aggregate values displays, for each Requirement, the total expected loss of that Requirement due to the impact of Failure Modes. For Requirement R, this is computed as:

  A-priori-loss(R) = Weight(R) * (Σ (FM ∈ Failure Modes): Impact(FM,R) * Likelihood(FM))

  This gives a measure of the loss of each requirement due to all the (unmitigated) Failure Modes.
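As a sketch, the two a-priori aggregates can be computed as follows. The weights, likelihoods, and impact values here are hypothetical (they are not the figures in the paper's matrices), and this is not the DDP tool's actual implementation; missing matrix cells count as 0.

```python
# Illustrative sketch of the a-priori aggregates (not the DDP tool's code).
# weights: Requirement weights; likelihood: a-priori Failure Mode likelihoods;
# impact[(fm, r)]: Impact(FM, R) in [0, 1]; an absent cell means 0.

weights = {"R1": 10.0, "R2": 6.0}          # hypothetical requirement weights
likelihood = {"FM1": 0.5, "FM2": 0.2}      # hypothetical likelihoods
impact = {("FM1", "R1"): 1.0, ("FM2", "R1"): 0.5, ("FM2", "R2"): 1.0}

def a_priori_impact(fm):
    """Total expected requirements loss caused by Failure Mode fm."""
    return likelihood[fm] * sum(
        weights[r] * impact.get((fm, r), 0.0) for r in weights)

def a_priori_loss(r):
    """Total expected loss of Requirement r over all Failure Modes."""
    return weights[r] * sum(
        impact.get((fm, r), 0.0) * likelihood[fm] for fm in likelihood)

print(a_priori_impact("FM1"))  # 0.5 * (10 * 1.0) = 5.0
print(a_priori_loss("R1"))     # 10 * (1.0*0.5 + 0.5*0.2) = 6.0
```

Note that, as the text observes, a requirement's a-priori loss can exceed its weight only because multiple Failure Modes each contribute expected loss against the same weight.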
The tool provides bar-chart displays of these. Figure 4 shows the Failure Modes bar chart.

Note that it is possible for the aggregate loss computed for a requirement to exceed the original value of the requirement! For example, requirement number 1, "Allow issuing of help requests", was originally weighted at 10, and yet has an aggregate unmitigated loss computed to be 41. This is because there are multiple ways in which the requirement may be impacted. Indeed, two of them each lead to complete loss of that requirement should they occur. Nevertheless, we have found this to be a useful computed measure - it indicates just how much reduction of failure mode impacts remains to be accomplished by mitigations. In application to spacecraft mission assurance and planning, we have found that in practice people often employ sufficient mitigations to achieve some, often most, of a requirement, or recognize that a requirement is too expensive to achieve, and remove it entirely (i.e., decrease their ambitions). Removing requirements is more appropriate when using this approach for planning than for assessment.
4: Mitigations Elicitation

The third step is the elicitation of Mitigations - the actions taken to reduce the likelihood and/or impact of Failure Modes. For example, a design that included a backup power source at the TRMCS center would mitigate the "Power outage at center" Failure Mode. This step also includes the determination of how much each Mitigation reduces each Failure Mode.

Figure 4 - Failure Modes' a-priori loss
4.1: Mitigations Elicitation Methodology

For assessment purposes, mitigations will be found within the design, the implementation plan, etc. Personnel knowledgeable of the design details, implementation plan details, etc., will need to be involved in this step.

We have postulated 14 mitigations that our hypothetical TRMCS system employs - see Figure 5. For example, number 1, "Backup power source at center", suggests a fairly obvious approach to providing continuity of power. Like the Failure Modes, these Mitigations are very high-level. As the design progresses, an assessment at that stage would determine more detailed and design-specific Failure Modes and Mitigations.
1: Backup power source at center
2: Encrypted data transmission
3: Passwords for access to data
4: Cross-check multiple monitors' readings
5: Triage plan
6: On-call services for peak demand
7: Physical replication of data
8: Standard communication protocols
9: Pre-planned responses
10: Alternative communication mechanisms
11: Distributed assistance centers
12: High connectivity to healthcare providers
13: Service tiers and options
14: Backup financial insurance

Figure 5 - Mitigations
4.2: Mitigations Elicitation Tool Support

The DDP tool's support for representation and elicitation of Mitigations is similar to that for Requirements and Failure Modes. Mitigations do not have a weight or likelihood.

In a similar manner to the relationship between Failure Modes and Requirements, Mitigations can have different effects on different Failure Modes. The DDP tool maintains a Mitigation x Failure Mode matrix whose rows are Mitigations, and columns are Failure Modes. Each cell holds the effectiveness value of the cell's row's Mitigation on the cell's column's Failure Mode. An effectiveness value is a number in the range 0 to 1, where 0 corresponds to no effect whatsoever, and 1 corresponds to completely effective at mitigating the Failure Mode. An empty cell is equivalent to an entry of 0.
Figure 6 shows the effectiveness matrix for these Mitigations on the TRMCS Failure Modes. For example, the first row and column (shown highlighted) correspond to the Mitigation "Backup power source at center" and Failure Mode "Power outage at center". The inner cell holds the value 0.99, indicating that a backup power source will almost completely mitigate this Failure Mode. This is plausible, since there is a small chance that the backup power source itself might be inoperative when needed, but generally speaking it will be sufficient. Of course, the determination of its sufficiency will require the judgment of appropriately skilled personnel, who understand the needs for, and capabilities of, backup power sources.
The tool automatically calculates some aggregate values for impacts taking the current set of mitigations into account. These are shown in the second row from the top, and third column from the left:

- The row of aggregate values displays, for each Failure Mode, the total expected risk of that Failure Mode taking the current set of Mitigations into account. For Failure Mode FM, this is computed as:

  Mitigated-impact(FM) = A-priori-impact(FM) * (Π (M ∈ Mitigations): (1 - Effect(M,FM)))

  This gives a measure of the total requirements loss that each Failure Mode would cause, taking mitigations into account.

- The column of aggregate values displays, for each Mitigation, the maximum expected risk savings that application of that Mitigation would achieve. For Mitigation M, this is computed as:

  Mitigation(M) = (Σ (FM ∈ Failure Modes): A-priori-impact(FM) * Effect(M,FM))

  This gives a measure of the total benefit that each mitigation provides.

Figure 6 - Mitigations x Failure Modes matrix
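These two mitigated aggregates can be sketched as follows. The a-priori impacts and effectiveness values below are hypothetical, and this is not the DDP tool's implementation; the point is that each Mitigation removes a fraction of a Failure Mode's remaining risk, so the residual fraction is a product of (1 - Effect) terms.

```python
# Illustrative sketch of residual risk after mitigations (not DDP tool code).
# a_priori: hypothetical A-priori-impact values per Failure Mode.
# effect[(m, fm)]: Effect(M, FM) in [0, 1]; an absent cell means 0.

a_priori = {"FM1": 5.0, "FM2": 2.2}
effect = {("M1", "FM1"): 0.99, ("M2", "FM2"): 0.5}
mitigations = ["M1", "M2"]

def mitigated_impact(fm):
    """Residual risk of Failure Mode fm with every mitigation applied."""
    residual = 1.0
    for m in mitigations:
        residual *= 1.0 - effect.get((m, fm), 0.0)
    return a_priori[fm] * residual

def mitigation_benefit(m):
    """Maximum expected risk savings from applying Mitigation m alone."""
    return sum(a_priori[fm] * effect.get((m, fm), 0.0) for fm in a_priori)

print(round(mitigated_impact("FM1"), 3))   # 5.0 * (1 - 0.99) -> 0.05
print(round(mitigation_benefit("M2"), 3))  # 2.2 * 0.5 -> 1.1
```

The multiplicative form assumes mitigations act independently; overlapping mitigations would need a different combination rule.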
Turning "on" all of our hypothesized mitigations produces the chart shown in Figure 8. Here, the green portions show the "savings" due to the Mitigations, and the red portions show the residual loss of Requirements despite the beneficial effect of the Mitigations. From this chart it is clear that there is still some significant loss of, especially, Requirements 1, 3, 4 and 6.
5: Assessment Calculation

Design assessment hinges on estimating how well the design mitigates the failure modes, and thereby meets the requirements.
5.1: Assessment Calculation Tool Support

The DDP tool calculates the status of the impacts on Requirements by Failure Modes, taking into account the elicited information of Requirements, Failure Modes, Mitigations and their attributes and relationships. The tool makes available several visualizations of this information. For assessment purposes, the key such visualizations are the Requirements-centric view and the Failure-Modes-centric view.
5.2: Requirements-centric View of Outstanding Risk

Figure 7 shows the chart of the Requirements as impacted by all of the (completely unmitigated) Failure Modes. The red portion of the bars indicates loss of Requirements caused by Failure Modes, while the blue portion indicates Requirements that are unaffected by Failure Modes. It is normal for the bars to be mostly or totally red at this point, so the completely blue bar for Requirement 9.1 suggests that either it is a trivially satisfied requirement, or, more likely, that there are as-yet unidentified Failure Modes that would impact it. (Be aware that these are log scales. This is a heritage of our critical-systems setting, where we generally seek to push risk down to very low levels, for which a log scale is better suited.)

For a design that omitted the two security-related mitigations ("Encrypted data transmission" and "Passwords for access to data"), the Requirements chart would be that shown in Figure 9. Not surprisingly, Requirement 4, "Guarantee secrecy", is now the dominant problem area. Also, Requirement 11, "Regulations and Standards", has become more of a concern.

Figure 9 - chart of Requirements, partially mitigated
5.3: Failure-Modes-centric View of Outstanding Risk

Figure 10 shows the chart of the Failure Modes and the loss of requirements that they are causing, with all the Mitigations turned "on" (i.e., equivalent to Figure 8, but from the perspective of the Failure Modes).

Figure 7 - chart of Requirements, unmitigated

Figure 8 - chart of Requirements, fully mitigated

Figure 10 - chart of Failure Modes, mitigated

From this chart it is clear that Failure Mode number 10, "Rudimentary connectivity from/to user", is the most problematic one for this design.
5.4: Assessment Calculation - Methodology

Accuracy of the calculations hinges upon the accuracy of the numerical quantities entered in the earlier stages. For this reason, the inclusion of experts whose combined knowledge spans the entire domain is strongly encouraged.

Even given such involvement, the methodology does not attempt to yield a single measure of adequacy (e.g., tempting though it would be to sum up the un-lost requirements, the tool does not do this). Rather, the methodology is aimed at identifying the relative strengths and weaknesses of a given design. This is a necessary step in assessing a design, and of considerable assistance to the assessment team.
6: Development Assessment

The discussion and examples so far have illustrated the assessment of design. We believe a similar approach is applicable to the assessment of development, i.e., the process by which the design will be implemented.

We do not yet have realistic project experience to confirm this belief, so this is a working hypothesis. Within this section we describe the overall approach and status of our activities. Detailed examples are deferred to the appendix.
6.1: Development Failure Modes

Assessment of software development starts from a standard list of software development risks. The Software Engineering Institute (SEI) is one well-respected source of such information. In particular, the report Software Risk Evaluation Method [3] presents a taxonomy of software risks. These have been encoded as development Failure Modes within the DDP tool.
6.2: Development Mitigations

SEI development practices serve as development Mitigations. For these, the SEI's Capability Maturity Model (CMM) for software [4] is used. Each of the five maturity levels (initial, repeatable, defined, managed, and optimizing) consists of several key process areas (KPAs). For example, the KPAs of level 2 are requirements management, software project planning, software project tracking and oversight, software subcontract management, software quality assurance, and software configuration management. Each KPA is, in turn, supported by a few goals and is implemented by a group of activities. These activities have been encoded as the available set of Mitigations within the DDP tool.

Interestingly, we did not find any information as to which KPA activities address which risks, so we made our own estimate of this. Within the tool, we assigned a non-zero effectiveness value to every pair of KPA activity and software risk that we thought were related. At that time, we used the same non-zero effectiveness value throughout. [5] describes this encoding.
6.3: Tailoring through Inclusion of Quantitative Information

The aforementioned work established a qualitative framework for development assessment. For tailoring this to a specific assessment (e.g., of a development plan for a TRMCS design), quantitative information must be elicited and incorporated, in the following areas:

- Assigning assessment-specific effectiveness numbers to the Failure Mode x Mitigation pairs in their matrix. For example, consider the effect of Mitigation "Project commitments reviewed by senior management" on Failure Mode "Insufficient or unstable budget". If the development organization plans for recurring senior management budget reviews, then this will be very effective, and warrant an effectiveness measure of 0.9, say.
- Assigning impact values to the Failure Modes (SEI risks). In our experiments to date, we have simplified the DDP-based design assessment process. A single requirement serves as a placeholder for all concerns, and a loss-of-Requirements impact is assigned directly to each Failure Mode. For example, knowing that the TRMCS system will involve development of a critical communication component, development staff inexperience in this area might warrant a high impact measure.
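Under this simplification, the development assessment reduces to a residual-risk score per SEI risk. A hypothetical sketch follows; the risk names, impact values, and effectiveness values are invented for illustration and are not taken from the SEI taxonomy encoding.

```python
# Hypothetical sketch of the simplified development assessment: a single
# placeholder requirement, with a loss-of-Requirements impact assigned
# directly to each SEI-style development risk, and CMM KPA activities
# acting as mitigations. All names and numbers here are illustrative.

impacts = {
    "Insufficient or unstable budget": 0.9,
    "Staff inexperience in communication software": 0.8,
}
effect = {  # assumed effectiveness of a KPA activity against a risk
    ("Project commitments reviewed by senior management",
     "Insufficient or unstable budget"): 0.9,
}

def residual_risk(risk, activities):
    """Impact of a development risk remaining after the planned activities."""
    residual = impacts[risk]
    for a in activities:
        residual *= 1.0 - effect.get((a, risk), 0.0)
    return residual

plan = ["Project commitments reviewed by senior management"]
ranked = sorted(impacts, key=lambda r: residual_risk(r, plan), reverse=True)
for r in ranked:  # most problematic development risks first
    print(f"{r}: {residual_risk(r, plan):.2f}")
```

Sorting by residual risk mirrors the Failure-Modes-centric view of Figures 13 and 14, where the most problematic development risks appear leftmost.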
7: Conclusions

Other work on assessment falls into two broad categories:

- High-level cost/schedule/risk assessment and management, e.g., the COCOMO work [6]. Risk management tools are in use to gather and maintain risk status and tracking, but generally these tools employ comparatively simple means to assess the level of risk (e.g., ask an expert to qualitatively characterize a risk's likelihood and severity).
- Very detailed risk assessment. High assurance system engineering applies intensive assessment techniques, e.g., probabilistic risk assessment, to specific designs. E.g., the nuclear power industry uses these extensively [7].

Our approach fills the area in-between. We tailor assessments to modestly detailed levels of design and development information. The novelty of our approach hinges upon a quantitative approach that takes into account requirements, failure modes, and mitigations. This enables us to conduct assessments of both design and development plans. Our assessment calculations yield relative indications of which requirements are at risk, which Failure Modes are the most problematic, and which Mitigations are most critical.
Acknowledgements

The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology.
References

[1] S. Cornford, "Managing Risk as a Resource using the Defect Detection and Prevention process", International Conference on Probabilistic Safety Assessment and Management, September 13-18, 1998.

[2] M.S. Feather, S.L. Cornford & M. Gibbel, "Scalable Mechanisms for Requirements Interaction Management", to appear in Proceedings, 4th IEEE International Conference on Requirements Engineering, June 2000.

[3] F. Sisti and J. Sujoe, "Software Risk Evaluation Method Version 1.0", Technical Report CMU/SEI-94-TR-019, Software Engineering Institute, Carnegie Mellon University, 1994.

[4] M.C. Paulk, B. Curtis, M.B. Chrissis & C.V. Weber, "Capability Maturity Model for Software, Version 1.1", Technical Report CMU/SEI-93-TR-024, Software Engineering Institute, Carnegie Mellon University, February 1993.

[5] M.S. Feather, J.C. Kelly & J.D. Kiper, "Prototype Tool Support for SEI Process and Risk Knowledge", NASA 2nd Annual Workshop on Risk Management, Morgantown, West Virginia, October 1999.

[6] "Calibrating the COCOMO II Post-Architecture model", Proceedings of the 1998 International Conference on Software Engineering, 1998, pages 477-480.

[7] International Nuclear Societies Council, "Role of Risk Methods in the Regulation of Nuclear Power", web site <http://133.205.9.136/~INSC/INSCAP/Risk.html>.
Appendix

Figure 11 - Portion of Mitigations x Failure Modes matrix

Figure 11 shows a portion of the Mitigations x Failure Modes matrix, with the Mitigations' assessment-specific effectiveness values against the Failure Modes. For example, the effect of the highlighted Mitigation "Project commitments reviewed by senior management" on highlighted Failure Mode "Insufficient or unstable budget" is set at 0.9. (If the development organization plans for recurring senior management budget reviews, then this will be very effective, and warrant such a high effectiveness measure.)
In a similar manner, quantitative measures of impact are assigned to each of the Failure Modes. For example, the highlighted cell in Figure 12 relates the Failure Mode "Staff inexperience, lacking knowledge or skills" to the placeholder requirement "TRMCS software reliable". With this information entered into the DDP tool, the same capabilities to calculate and display requirements loss can be employed for assessment purposes. Figure 13 shows the Failure-Modes-centric view of Requirements loss, given that all the Mitigations are "on". Failure Modes are shown in sorted (decreasing) order of Requirements loss, so there are many more, of lower impact, off the right of the image. The same kind of comparative assessment as was shown on design information can be performed on development information.

Figure 12 - quantitative measures of impact
Figure 13 - sorted Failure Modes with all Mitigations active
Figure 14 - sorted Failure Modes with all but Software Quality Assurance Mitigations active