Notes On Software Construction-1
Contents
o 1.4 Reuse
2 Managing Construction
3 Practical Considerations
o 3.3 Coding
o 3.8 Integration
4 Construction Technologies
o 4.1 API Design and Use
o 4.11 Middleware
Acronyms
API Application Programming Interface
COTS Commercial Off-the-Shelf
GUI Graphical User Interface
IDE Integrated Development Environment
OMG Object Management Group
POSIX Portable Operating System Interface
TDD Test-Driven Development
UML Unified Modeling Language
Introduction
The term software construction refers to the detailed creation of working software through a
combination of coding, verification, unit testing, integration testing, and debugging. The
Software Construction knowledge area (KA) is linked to all the other KAs, but it is most strongly
linked to Software Design and Software Testing because the software construction process
involves significant software design and testing. The process uses the design output and provides
an input to testing (“design” and “testing” in this case referring to the activities, not the KAs).
Boundaries between design, construction, and testing (if any) will vary depending on the
software life cycle processes that are used in a project. Although some detailed design may be
performed prior to construction, much design work is performed during the construction activity.
Thus, the Software Construction KA is closely linked to the Software Design KA. Throughout
construction, software engineers both unit test and integration test their work. Thus, the Software Construction KA is also closely linked to the Software Testing KA. Software construction typically produces the highest number of configuration items that need to be managed in a
software project (source files, documentation, test cases, and so on). Thus, the Software
Construction KA is also closely linked to the Software Configuration Management KA. While
software quality is important in all the KAs, code is the ultimate deliverable of a software
project, and thus the Software Quality KA is closely linked to the Software Construction KA. Since construction relies on knowledge of algorithms and of coding practices, it is closely related to the Computing Foundations KA, which is concerned with the computer science foundations that support the design and construction of software products. It is also related to project management, insofar as the management of construction can present considerable challenges.
Figure 3.1 gives a graphical representation of the top-level decomposition of the breakdown for this KA.
1 Software Construction Fundamentals
Software construction fundamentals include
minimizing complexity
anticipating change
constructing for verification
reuse
standards in construction.
The first four concepts apply to design as well as to construction. The following sections define these concepts and describe how they apply to construction.
1.1 Minimizing Complexity [1]
Most people are limited in their ability to hold complex structures and information in their
working memories, especially over long periods of time. This proves to be a major factor
influencing how people convey intent to computers and leads to one of the strongest drives in software construction: minimizing complexity. The need to reduce complexity applies to essentially every aspect of software construction and is particularly critical to testing of software constructions. In software construction, reduced complexity is achieved through emphasizing code creation that is simple and readable rather than clever. It is accomplished through making use of standards (see section 1.5, Standards in Construction), modular design (see section 3.1, Construction Design), and numerous other specific techniques (see section 3.3, Coding). It is also supported by construction-focused quality techniques (see section 3.7, Construction Quality).
1.2 Anticipating Change
Most software will change over time, and the anticipation of change drives many aspects of
software construction; changes in the environments in which software operates also affect
software in diverse ways. Anticipating change helps software engineers build extensible
software, which means they can enhance a software product without disrupting the underlying
structure. Anticipating change is supported by many specific techniques (see section 3.3,
Coding).
1.3 Constructing for Verification
Constructing for verification means building software in such a way that faults can be readily
found by the software engineers writing the software as well as by the testers and users during
independent testing and operational activities. Specific techniques that support constructing for
verification include following coding standards to support code reviews and unit testing,
organizing code to support automated testing, and restricting the use of complex or hard-to-understand language structures, among others.
1.4 Reuse
Reuse refers to using existing assets in solving different problems. In software construction,
typical assets that are reused include libraries, modules, components, source code, and commercial off-the-shelf (COTS) assets. Reuse is best practiced systematically, according to a well-defined, repeatable process. Systematic reuse can enable significant software productivity,
quality, and cost improvements. Reuse has two closely related facets: “construction for reuse” and
"construction with reuse." The former means to create reusable software assets, while the latter
means to reuse software assets in the construction of a new solution. Reuse often transcends the
boundary of projects, which means reused assets can be constructed in other projects or
organizations.
1.5 Standards in Construction
Applying external or internal development standards during construction helps achieve a project’s objectives for efficiency, quality, and cost. Specifically, the choices of allowable programming language subsets and usage standards are important aids in achieving higher security. Standards that directly affect construction issues include
communication methods (for example, standards for document formats and contents)
programming languages (for example, language standards for languages like Java and C++)
coding standards (for example, standards for naming conventions, layout, and indentation)
tools (for example, diagrammatic standards for notations like UML (Unified Modeling Language)).
Use of external standards. Construction depends on the use of external standards for construction languages, construction tools, technical interfaces, and interactions between the Software Construction KA and other KAs. Standards come from numerous sources, including hardware and software interface specifications (such as those from the Object Management Group (OMG)) and international organizations (such as the IEEE or ISO).
Use of internal standards. Standards may also be created on an organizational basis at the corporate level or for use on specific projects.
2 Managing Construction
2.1 Construction in Life Cycle Models
Numerous models have been created to develop software; some emphasize construction more
than others. Some models are more linear from the construction point of view—such as the
waterfall and staged-delivery life cycle models. These models treat construction as an activity
that occurs only after significant prerequisite work has been completed—including detailed
requirements work, extensive design work, and detailed planning. The more linear approaches
tend to emphasize the activities that precede construction (requirements and design) and to create
more distinct separations between activities. In these models, the main emphasis of construction
may be coding. Other models are more iterative—such as evolutionary prototyping and agile
development. These approaches tend to treat construction as an activity that occurs concurrently
with other software development activities (including requirements, design, and planning) or that
overlaps them. These approaches tend to mix design, coding, and testing activities, and they
often treat the combination of activities as construction (see the Software Engineering Management and Software Engineering Process KAs). What is considered “construction” thus depends to some degree on the life cycle model used. In general, software construction is mostly coding and debugging, but it also involves construction planning, detailed design, unit testing, integration testing, and other activities.
2.2 Construction Planning
The choice of construction method is a key aspect of the construction-planning activity. The
choice of construction method affects the extent to which construction prerequisites are
performed, the order in which they are performed, and the degree to which they should be
completed before construction work begins. The approach to construction affects the project
team’s ability to reduce complexity, anticipate change, and construct for verification. Each of
these objectives may also be addressed at the process, requirements, and design levels—but they
will be influenced by the choice of construction method. Construction planning also defines the
order in which components are created and integrated, the integration strategy (for example,
phased or incremental integration), the software quality management processes, the allocation of
task assignments to specific software engineers, and other tasks, according to the chosen method.
2.3 Construction Measurement
Numerous construction activities and artifacts can be measured—including code developed, code
modified, code reused, code destroyed, code complexity, code inspection statistics, fault-fix and
fault-find rates, effort, and scheduling. These measurements can be useful for purposes of
managing construction, ensuring quality during construction, and improving the construction
process, among other uses (see the Software Engineering Process KA for more on measurement).
3 Practical Considerations
Construction is an activity in which the software engineer has to deal with sometimes chaotic
and changing real-world constraints, and he or she must do so precisely. Due to the influence of
real-world constraints, construction is more driven by practical considerations than some other
KAs, and software engineering is perhaps most craft-like in the construction activities.
3.1 Construction Design
Some projects allocate considerable design activity to construction, while others allocate design
to a phase explicitly focused on design. Regardless of the exact allocation, some detailed design
work will occur at the construction level, and that design work tends to be dictated by constraints
imposed by the real-world problem that is being addressed by the software. Just as construction
workers building a physical structure must make small-scale modifications to account for
unanticipated gaps in the builder’s plans, software construction workers must make
modifications on a smaller or larger scale to flesh out details of the software design during
construction. The details of the design activity at the construction level are essentially the same
as described in the Software Design KA, but they are applied on a smaller scale of algorithms, data structures, and interfaces.
3.2 Construction Languages
Construction languages include all forms of communication by which a human can specify an executable solution to a problem. Construction languages and their implementations (for example, compilers) can affect software quality attributes of performance, reliability, portability, and so on. Construction languages include configuration languages, toolkit languages, and programming languages. Configuration languages are languages in which software
engineers choose from a limited set of predefined options to create new or custom software
installations. The text-based configuration files used in both the Windows and Unix operating
systems are examples of this, and the menu-style selection lists of some program generators
constitute another example of a configuration language. Toolkit languages are used to build applications out of toolkits (integrated sets of application-specific reusable parts) and are more complex than configuration languages.
Scripting languages are commonly used kinds of application programming languages. In some
scripting languages, scripts are called batch files or macros. Programming languages are the
most flexible type of construction languages. They also contain the least amount of information about specific application areas and development processes; therefore, they require the most
training and skill to use effectively. The choice of programming language can have a large effect on software quality and security; for instance, unrestrained usage of C and C++ is a questionable choice from a security viewpoint. There are three general kinds of notation used for programming languages: linguistic, formal, and visual.
Linguistic notations are distinguished in particular by the use of textual strings to represent
complex software constructions. The combination of textual strings into patterns may have a
sentence-like syntax. Properly used, each such string should have a strong semantic connotation
providing an immediate intuitive understanding of what will happen when the software
construction is executed.
Formal notations rely less on intuitive, everyday meanings of words and text strings and more on
definitions backed up by precise, unambiguous, and formal (or mathematical) definitions. Formal
construction notations and formal methods are at the semantic base of most forms of system
programming notations, where accuracy, time behavior, and testability are more important than
ease of mapping into natural language. Formal constructions also use precisely defined ways of
combining symbols that avoid the ambiguity of many natural language constructions.
Visual notations rely much less on the textual notations of linguistic and formal construction and
instead rely on direct visual interpretation and placement of visual entities that represent the
underlying software. Visual construction tends to be somewhat limited by the difficulty of making
“complex” statements using only the arrangement of icons on a display. However, these icons
can be powerful tools in cases where the primary programming task is simply to build and adjust a visual interface for a program whose detailed behavior has an underlying definition.
3.3 Coding
The following considerations apply to the software construction coding activity:
Techniques for creating understandable source code, including naming conventions and source code layout;
Use of classes, enumerated types, variables, named constants, and other similar entities;
Use of control structures;
Handling of error conditions, both planned errors and exceptions (input of bad data, for example);
Prevention of code-level security breaches (buffer overflows or array index bounds, for example);
Resource usage via use of exclusion mechanisms and discipline in accessing serially reusable resources (including threads and database locks);
Source code organization (into statements, routines, classes, packages, or other structures);
Code documentation;
Code tuning.
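Several items in this list can be illustrated with a minimal Python sketch; the names `MAX_LOGIN_ATTEMPTS`, `ConnectionState`, and `describe` are invented for the example. Named constants replace "magic numbers," and enumerated types give a fixed set of states readable names:

```python
from enum import Enum

# Named constant instead of a magic number scattered through the code.
MAX_LOGIN_ATTEMPTS = 3

# Enumerated type instead of bare integers for a fixed set of states.
class ConnectionState(Enum):
    DISCONNECTED = 0
    CONNECTING = 1
    CONNECTED = 2

def describe(state: ConnectionState) -> str:
    """Readable state names make the control flow self-documenting."""
    if state is ConnectionState.CONNECTED:
        return "link is up"
    return "link is down"

print(describe(ConnectionState.CONNECTED))  # link is up
```

The same intent could be expressed with raw integers, but the enumerated form documents itself and lets tools catch invalid states.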
3.4 Construction Testing
Construction involves two forms of testing, which are often performed by the software engineer who wrote the code:
Unit testing
Integration testing.
The purpose of construction testing is to reduce the gap between the time when faults are
inserted into the code and the time when those faults are detected, thereby reducing the cost
incurred to fix them. In some instances, test cases are written after code has been written. In
other instances, test cases may be created before code is written. Construction testing typically
involves a subset of the various types of testing, which are described in the Software Testing KA.
For instance, construction testing does not typically include system testing, alpha testing, beta
testing, stress testing, configuration testing, usability testing, or other more specialized kinds of
testing. Two standards have been published on the topic of construction testing: IEEE Standard
829-1998, IEEE Standard for Software Test Documentation, and IEEE Standard 1008-1987, IEEE Standard for Software Unit Testing. (See sections 2.1.1, Unit Testing, and 2.1.2, Integration Testing, in the Software Testing KA for more information.)
3.5 Construction for Reuse
Construction for reuse creates software that has the potential to be reused in the future for the
present project or other projects taking a broadbased, multisystem perspective. Construction for
reuse is usually based on variability analysis and design. To avoid the problem of code clones, it is desirable to encapsulate reusable code fragments into well-structured libraries or components.
The tasks related to software construction for reuse during coding and testing are as follows:
Variability encapsulation to make the software assets easy to configure and customize.
3.6 Construction with Reuse
Construction with reuse means to create new software with the reuse of existing software assets. The most popular method of reuse is to reuse code from the libraries provided by the language, platform, tools being used, or an organizational repository. Aside from these, the applications developed today widely make use of many open-source libraries. Reused and off-the-shelf software must often meet the same (or even more stringent) quality requirements as newly developed software (for example, security level). The tasks related to software construction with reuse during coding and
The selection of the reusable units, databases, test procedures, or test data.
The reporting of reuse information on new code, test procedures, or test data.
3.7 Construction Quality
In addition to faults resulting from requirements and design, faults introduced during
construction can result in serious quality problems—for example, security vulnerabilities. This
includes not only faults in security functionality but also faults elsewhere that allow bypassing of
this functionality and other security weaknesses or violations. Numerous techniques exist to
ensure the quality of code as it is constructed. The primary techniques used for construction
quality include
unit testing and integration testing (see section 3.4, Construction Testing)
debugging
inspections
technical reviews, including security-oriented reviews (see section 2.3.2 in the Software
Quality KA)
The specific technique or techniques selected depend on the nature of the software being
constructed as well as on the skillset of the software engineers performing the construction
activities. Programmers should know good practices and common vulnerabilities—for example,
from widely recognized lists of common vulnerabilities. Automated static analysis of code for security weaknesses is available for several common programming languages and can be used during construction.
Construction quality activities are differentiated from other quality activities by their focus.
Construction quality activities focus on code and artifacts that are closely related to code—such
as detailed design—as opposed to other artifacts that are less directly connected to the code, such as requirements and architectural designs.
3.8 Integration [1]
A key activity during construction is the integration of individually constructed routines, classes,
components, and subsystems into a single system. In addition, a particular software system may
need to be integrated with other software or hardware systems. Concerns related to construction
integration include planning the sequence in which components will be integrated, identifying
what hardware is needed, creating scaffolding to support interim versions of the software,
determining the degree of testing and quality work performed on components before they are
integrated, and determining points in the project at which interim versions of the software are
tested. Programs can be integrated by means of either the phased or the incremental approach.
Phased integration, also called “big bang” integration, entails delaying the integration of
component software parts until all parts intended for release in a version are complete.
Incremental integration is thought to offer many advantages over the traditional phased
integration—for example, easier error location, improved progress monitoring, earlier product
delivery, and improved customer relations. In incremental integration, the developers write and
test a program in small pieces and then combine the pieces one at a time. Additional test
infrastructure, such as stubs, drivers, and mock objects, are usually needed to enable incremental
integration. By building and integrating one unit at a time (for example, a class or component),
the construction process can provide early feedback to developers and customers.
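As the paragraph notes, stubs, drivers, and mock objects enable incremental integration. A minimal Python sketch, with hypothetical names (`PaymentGateway`, `checkout`), of a stub standing in for a component that has not yet been integrated:

```python
class PaymentGateway:
    """Interface the real (not yet integrated) component will implement."""
    def charge(self, amount_cents: int) -> bool:
        raise NotImplementedError

class PaymentGatewayStub(PaymentGateway):
    """Stub: canned behavior so dependent units can be integrated and tested early."""
    def __init__(self):
        self.calls = []          # record calls so tests can inspect them
    def charge(self, amount_cents: int) -> bool:
        self.calls.append(amount_cents)
        return True              # always succeeds in the interim build

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    """Unit under integration; it only depends on the gateway interface."""
    return "paid" if gateway.charge(amount_cents) else "declined"

stub = PaymentGatewayStub()
print(checkout(stub, 499))  # paid
```

When the real gateway is ready, it replaces the stub without changing `checkout`, which is exactly what makes one-unit-at-a-time integration workable.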
4 Construction Technologies
4.1 API Design and Use
An application programming interface (API) is the set of signatures that are exported and
available to the users of a library or a framework to write their applications. Besides signatures,
an API should always include statements about the program’s effects and/or behaviors (i.e., its
semantics). API design should try to make the API easy to learn and memorize, lead to readable
code, be hard to misuse, be easy to extend, be complete, and maintain backward compatibility.
As the APIs usually outlast their implementations for a widely used library or framework, it is
desired that the API be straightforward and kept stable to facilitate the development and
maintenance of the client applications. API use involves the processes of selecting, learning, testing, integrating, and possibly extending APIs provided by a library or framework (see section 3.6, Construction with Reuse).
4.2 Object-Oriented Runtime Issues
Object-oriented languages support a series of runtime mechanisms, such as polymorphism and reflection. These runtime mechanisms increase the flexibility and adaptability of object-oriented programs. Polymorphism is the ability of a language to support general operations without knowing until runtime what kind of concrete objects the software will include. Because the
program does not know the exact types of the objects in advance, the exact behavior is
determined at runtime (called dynamic binding). Reflection is the ability of a program to observe
and modify its own structure and behavior at runtime. Reflection allows inspection of classes,
interfaces, fields, and methods at runtime without knowing their names at compile time. It also
allows instantiation at runtime of new objects and invocation of methods whose names are determined at runtime.
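Reflection as described above can be sketched in Python, where `getattr` and `dir` inspect and invoke members by name at runtime; the `Greeter` class is a made-up example:

```python
# Reflection: inspect and invoke members by name at runtime.
class Greeter:
    def hello(self, name):
        return f"hello, {name}"

obj = Greeter()
method_name = "hello"                # could come from a config file or user input
method = getattr(obj, method_name)   # look the method up at runtime by name
print(method("world"))               # hello, world

# Inspect the object's public members without knowing them at compile time.
print([m for m in dir(obj) if not m.startswith("_")])  # ['hello']
```

The method name here is a literal, but in practice it often arrives from configuration, which is what makes reflection useful for plug-in and framework code.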
4.3 Parameterization and Generics
Parameterized types, also known as generics (Ada, Eiffel) and templates (C++), enable the
definition of a type or class without specifying all the other types it uses. The unspecified types
are supplied as parameters at the point of use. Parameterized types provide a third way (in addition to class inheritance and object composition) to compose behaviors in object-oriented software.
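A minimal sketch of a parameterized type in Python's `typing` notation; the `Stack` class is hypothetical, and the element type `T` is supplied at the point of use:

```python
from typing import Generic, TypeVar

T = TypeVar("T")  # the "unspecified type" supplied at the point of use

class Stack(Generic[T]):
    """A stack defined without committing to the element type."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()   # the type parameter is bound here
ints.push(1)
ints.push(2)
print(ints.pop())  # 2
```

Unlike C++ templates, Python's generics are erased at runtime and enforced only by type checkers, but the compositional idea is the same.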
4.4 Assertions, Design by Contract, and Defensive Programming
An assertion is an executable predicate placed in a program that allows runtime checks of the program. Assertions are especially useful in high-reliability
programs. They enable programmers to more quickly flush out mismatched interface
assumptions, errors that creep in when code is modified, and so on. Assertions are normally
compiled into the code at development time and are later compiled out of the code so that they do not degrade performance. Design by contract is a development approach in which preconditions and postconditions are included for each routine. When preconditions and
postconditions are used, each routine or class is said to form a contract with the rest of the system. The contract makes the routine's assumptions and guarantees explicit and thus helps the understanding of its behavior. Design by contract is thought to improve the
quality of software construction. Defensive programming means to protect a routine from being
broken by invalid inputs. Common ways to handle invalid inputs include checking the values of
all the input parameters and deciding how to handle bad inputs. Assertions are often used in support of defensive programming.
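A small, illustrative sketch of assertions used as a precondition and a postcondition in Python; the `average` function is invented for the example:

```python
def average(values):
    # Precondition: the caller must supply a non-empty sequence of numbers.
    assert len(values) > 0, "precondition violated: values must be non-empty"

    result = sum(values) / len(values)

    # Postcondition: the mean lies between the smallest and largest input.
    assert min(values) <= result <= max(values), "postcondition violated"
    return result

print(average([2, 4, 6]))  # 4.0
```

Running Python with the `-O` flag strips `assert` statements, mirroring the "compiled out of the code" behavior described above.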
4.5 Error Handling, Exception Handling, and Fault Tolerance [1]
The way that errors are handled affects software’s ability to meet requirements related to
correctness, robustness, and other nonfunctional attributes. Assertions are sometimes used to
check for errors. Other error handling techniques—such as returning a neutral value, substituting
the next piece of valid data, logging a warning message, returning an error code, or shutting
down the software—are also used. Exceptions are used to detect and process errors or
exceptional events. The basic structure of an exception is that a routine uses throw to throw a
detected exception and an exception handling block will catch the exception in a try-catch block.
The try-catch block may process the erroneous condition in the routine or it may return control to
the calling routine. Exception handling policies should be carefully designed following common
principles such as including in the exception message all information that led to the exception,
avoiding empty catch blocks, knowing the exceptions the library code throws, perhaps building a
centralized exception reporter, and standardizing the program’s use of exceptions. Fault
tolerance is a collection of techniques that increase software reliability by detecting errors and
then recovering from them if possible or containing their effects if recovery is not possible. The
most common fault tolerance strategies include backing up and retrying, using auxiliary code,
using voting algorithms, and replacing an erroneous value with a phony value that will have a
benign effect.
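The throw/try-catch structure and the back-up-and-retry strategy described above can be sketched in Python (whose exception syntax is `raise`/`try`/`except`); all names here are hypothetical:

```python
class TransientError(Exception):
    """An error that may not recur if the operation is retried."""

def make_flaky(failures):
    """Return an operation that fails `failures` times, then succeeds."""
    state = {"calls": 0}
    def fetch():
        state["calls"] += 1
        if state["calls"] <= failures:
            raise TransientError("temporary failure")
        return "data"
    return fetch

def with_retry(operation, max_attempts=3):
    """Back up and retry: re-run the operation, containing transient errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # recovery failed; propagate to the caller

fetch = make_flaky(failures=2)
print(with_retry(fetch))  # data
```

The catch block here either contains the error (by retrying) or returns control to the caller by re-raising, matching the two outcomes described in the text.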
4.6 Executable Models
Executable models abstract away the details of specific programming languages and decisions
about the organization of the software. Different from traditional software models, a
specification built in an executable modeling language like xUML (executable UML) can be run, tested, and debugged before the implementation is written. A model compiler (transformer) can turn an executable model into an implementation using a set of decisions about
the target hardware and software environment. Thus, constructing executable models can be
regarded as a way of constructing executable software. Executable models are one foundation
supporting the Model-Driven Architecture (MDA) initiative of the Object Management Group
(OMG). An executable model is a way to completely specify a Platform Independent Model
(PIM); a PIM is a model of a solution to a problem that does not rely on any implementation
technologies. Then a Platform Specific Model (PSM), which is a model that contains the details
of the implementation, can be produced by weaving together the PIM and the platform on which
it relies.
4.7 State-Based and Table-Driven Construction Techniques
State-based programming uses finite state machines to describe program behaviors. The transition graphs of a state machine
are used in all stages of software development (specification, implementation, debugging, and
documentation). The main idea is to construct computer programs the same way finite state machines are automated; this approach is often called automata-based programming. A table-driven method is a schema that uses tables to look up information rather
than using logic statements (such as if and case). Used in appropriate circumstances, table-driven
code is simpler than complicated logic and easier to modify. When using table-driven methods,
the programmer addresses two issues: what information to store in the table or tables, and how to access entries in the table.
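A minimal table-driven sketch in Python; the shipping-rate table and function names are invented, but the structure shows the two issues named above, what to store in the table and how to access entries:

```python
# Table-driven replacement for a chain of if/elif statements:
# the table stores the information; a lookup replaces the logic.
SHIPPING_RATES = {          # zone -> cost per kilogram
    "domestic": 1.50,
    "neighboring": 2.75,
    "overseas": 6.00,
}

def shipping_cost(zone, weight_kg):
    # Issue 1: what to store (the rates). Issue 2: how to access it (direct keying).
    return SHIPPING_RATES[zone] * weight_kg

print(shipping_cost("domestic", 4))  # 6.0
```

Adding a new zone becomes a one-line table edit rather than another branch in the logic, which is the maintainability benefit the text describes.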
4.8 Runtime Configuration and Internationalization
To achieve more flexibility, a program is often constructed to support late binding time of its
variables. Runtime configuration is a technique that binds variable values and program settings
when the program is running, usually by updating and reading configuration files in a just-in-
time mode. Internationalization is the technical activity of preparing a program, usually
interactive software, to support multiple locales. The corresponding activity, localization, is the
activity of modifying a program to support a specific local language. Interactive software may
contain dozens or hundreds of prompts, status displays, help messages, error messages, and so
on. The design and construction processes should accommodate string and character-set issues
including which character set is to be used, what kinds of strings are used, how to maintain the strings without changing the code, and translating the strings into different languages with minimal impact on the code.
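Runtime configuration as described can be sketched in Python using a JSON settings file read just in time; the file name and keys are hypothetical:

```python
import json
import os
import tempfile

# A hypothetical settings file; in a real program this path would be fixed.
path = os.path.join(tempfile.mkdtemp(), "app_config.json")
with open(path, "w") as f:
    json.dump({"log_level": "INFO", "retries": 3}, f)

def load_settings(config_path):
    """Bind settings at runtime by reading the file just in time."""
    with open(config_path) as f:
        return json.load(f)

settings = load_settings(path)
print(settings["log_level"])  # INFO
```

Because the values are bound when the file is read rather than when the program is compiled, operators can change behavior without rebuilding the software.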
4.9 Grammar-Based Input Processing
Grammar-based input processing involves syntax analysis, or parsing, of the input token stream.
It involves the creation of a data structure (called a parse tree or syntax tree) representing the
input data. The inorder traversal of the parse tree usually gives the expression just parsed. The
parser checks the symbol table for the presence of programmer-defined variables that populate
the tree. After building the parse tree, the program uses it as input to the computational
processes.
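As an illustrative sketch, Python's standard `ast` module can parse an arithmetic expression into a syntax tree, which a traversal then evaluates; the `evaluate` helper is invented for the example and handles only addition and multiplication:

```python
import ast

# Parse an arithmetic expression into a syntax tree.
tree = ast.parse("1 + 2 * 3", mode="eval")

def evaluate(node):
    """Evaluate the parse tree by recursive (post-order) traversal."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        left, right = evaluate(node.left), evaluate(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
    raise ValueError("unsupported construct")

print(evaluate(tree))  # 7
```

The parser has already encoded operator precedence in the tree shape (the multiplication sits below the addition), so the traversal needs no precedence logic of its own.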
4.10 Concurrency Primitives
A semaphore is a protected variable or abstract data type that provides a simple but useful abstraction for controlling access to a common resource by multiple processes or threads. A monitor is an object intended to be used safely by more than one thread; its defining characteristic is that its operations are executed with mutual exclusion. A monitor contains the declaration of shared
variables and procedures or functions that operate on those variables. The monitor construct
ensures that only one process at a time is active within the monitor. A mutex (mutual exclusion)
is a synchronization primitive that grants exclusive access to a shared resource to only one thread at a time.
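A minimal Python sketch of a mutex protecting a shared counter, using `threading.Lock`; without the lock, the read-modify-write updates could interleave and be lost:

```python
import threading

counter = 0
lock = threading.Lock()  # mutex: only one thread may hold it at a time

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # acquire, update, release
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

The `with lock:` block is the critical section; the context manager guarantees the mutex is released even if the update raises an exception.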
4.11 Middleware
Middleware is a broad classification for software that provides services above the operating
system layer yet below the application program layer. Middleware can provide runtime
containers for software components to provide message passing, persistence, and a transparent
location across a network. Middleware can be viewed as a connector between the components
that use the middleware. Modern message-oriented middleware usually provides an Enterprise Service Bus (ESB), which supports service-oriented interaction and communication between applications.
4.12 Construction Methods for Distributed Software
A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide the users with access to the various resources that the system maintains. Distributed programming typically falls into one of several basic architectural categories: client-server, 3-tier architecture, n-tier architecture, distributed objects, loose coupling, or tight coupling (see section 14.3 of the Computing Foundations KA and section 3.2 of the Software Design KA).
4.13 Constructing Heterogeneous Systems
Heterogeneous systems consist of a variety of specialized computational units, such as Digital Signal Processors (DSPs), microcontrollers, and peripheral processors. These
computational units are independently controlled and communicate with one another. Embedded
systems are typically heterogeneous systems. The design of heterogeneous systems may require
the combination of several specification languages in order to design different parts of the system
—in other words, hardware/software codesign. The key issues include multilanguage validation, cosimulation, and interfacing; software development and virtual hardware development proceed concurrently through stepwise decomposition. The
hardware part is usually simulated in field programmable gate arrays (FPGAs) or application-
specific integrated circuits (ASICs). The software part is translated into a low-level programming
language.
4.14 Performance Analysis and Tuning
Code efficiency, determined by factors such as data-structure and algorithm selection, influences execution speed and size. Performance analysis is the
investigation of a program’s behavior using information gathered as the program executes, with
the goal of identifying possible hot spots in the program to be improved. Code tuning, which
improves performance at the code level, is the practice of modifying correct code in ways that
make it run more efficiently. Code tuning usually involves only small-scale changes that affect a
single class, a single routine, or, more commonly, a few lines of code. A rich set of code tuning
techniques is available, including those for tuning logic expressions, loops, data transformations,
expressions, and routines. Using a low-level language is another common technique for improving the performance of hot spots in a program.
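A hedged sketch of code tuning in Python: both functions below are invented for the example and produce identical results, and `timeit` measures whether the tuned version is faster on the machine at hand:

```python
import timeit

def join_chars_naive(chars):
    # Straightforward version: repeated string concatenation builds
    # a new intermediate string on every iteration.
    out = ""
    for c in chars:
        out += c
    return out

def join_chars_tuned(chars):
    # Tuned version: a single str.join pass avoids the intermediates.
    return "".join(chars)

data = list("abcdefgh") * 1000

# Tuning must not change behavior: verify correctness before measuring.
assert join_chars_naive(data) == join_chars_tuned(data)

naive = timeit.timeit(lambda: join_chars_naive(data), number=50)
tuned = timeit.timeit(lambda: join_chars_tuned(data), number=50)
print(f"naive: {naive:.4f}s  tuned: {tuned:.4f}s")
```

The assertion reflects the rule stated above: code tuning modifies *correct* code, so the tuned version must be checked against the original before any speedup claim is made.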
4.15 Platform Standards
Platform standards enable programmers to develop portable applications that can be executed in
compatible environments without changes. Platform standards usually involve a set of standard
services and APIs that compatible platform implementations must implement. Typical examples
of platform standards are Java 2 Platform Enterprise Edition (J2EE) and the POSIX standard for operating systems (Portable Operating System Interface), which represents a set of standards for operating system services.
4.16 Test-First Programming
Test-first programming (also known as test-driven development, TDD) is a popular development style in which test cases are written prior to writing any code. Test-first
programming can usually detect defects earlier and correct them more easily than traditional
programming styles. Furthermore, writing test cases first forces programmers to think about
requirements and design before coding, thus exposing requirements and design problems sooner.
5 Software Construction Tools
5.1 Development Environments
A development environment, or integrated development environment (IDE), provides comprehensive facilities to programmers for software construction and integrates a set of development tools. The choices of development environments can affect the efficiency and quality of software construction. In addition to basic code editing functions, modern IDEs often offer other features like compilation and error detection from within the editor, and integration with version control, build, test, and debugging tools.
5.2 GUI Builders
A GUI (Graphical User Interface) builder is a software development tool that enables the
developer to create and maintain GUIs in a WYSIWYG (what you see is what you get) mode. A
GUI builder usually includes a visual editor for the developer to design forms and windows and
manage the layout of the widgets by dragging, dropping, and parameter setting. Some GUI
builders can automatically generate the source code corresponding to the visual GUI design.
Because current GUI applications usually follow the event-driven style (in which the flow of the
program is determined by events and event handling), GUI builder tools usually provide code
generation assistants, which automate the most repetitive tasks required for event handling. The
supporting code connects widgets with the outgoing and incoming events that trigger the
functions providing the application logic. Some modern IDEs provide integrated GUI builders or
GUI builder plug-ins. There are also many standalone GUI builders.
5.3 Unit Testing Tools
Unit testing verifies the functioning, in isolation, of software elements that are separately testable (for example, classes, routines, components). Unit testing is
often automated. Developers can use unit testing tools and frameworks to extend and create an automated testing environment. With unit testing tools and frameworks, the developer can code criteria into the test to verify the unit’s correctness under various data sets. Each individual test is implemented as an object, and a test runner runs all of the tests. During the test execution, test cases that fail are identified and reported.
5.4 Profiling, Performance Analysis, and Slicing Tools
Performance analysis tools are usually used to support code tuning. The most common
performance analysis tools are profiling tools. An execution profiling tool monitors the code
while it runs and records how many times each statement is executed or how much time the
program spends on each statement or execution path. Profiling the code while it is running gives
insight into how the program works, where the hot spots are, and where the developers should
focus the code tuning efforts. Program slicing involves computation of the set of program
statements (i.e., the program slice) that may affect the values of specified variables at some point
of interest, which is referred to as a slicing criterion. Program slicing can be used for locating the
source of errors, program understanding, and optimization analysis. Program slicing tools
compute program slices for various programming languages using static or dynamic analysis
methods.
Software Development Methodology
A software development methodology splits development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology
may include the pre-definition of specific deliverables and artifacts that are created and
Most modern development processes can be loosely described as agile. Other methodologies
include waterfall, prototyping, iterative and incremental development, spiral development, rapid
application development, and extreme programming.
Some people consider a life-cycle "model" a more general term for a category of
methodologies and a software development "process" a more specific term to refer to a specific
process chosen by a specific organization. For example, there are many specific software
development processes that fit the spiral life-cycle model. The field is often considered a subset
of the systems development life cycle.
History
The software development methodology (also known as SDM) framework did not emerge until
the 1960s. According to Elliott (2004), the systems development life cycle (SDLC) can be
considered the oldest formalized methodology framework for building information
systems. The main idea of the SDLC has been "to pursue the development of information
systems in a very deliberate, structured and methodical way, requiring each stage of the life
cycle––from inception of the idea to delivery of the final system––to be carried out rigidly and
sequentially"[2] within the context of the framework being applied. The main target of this
methodology framework in the 1960s was "to develop large scale functional business systems in
an age of large scale business conglomerates. Information systems activities revolved around
heavy data processing and number crunching routines."
Practices
Several software development approaches have been used since the origin of information
technology. "Traditional" methodologies such as waterfall that have distinct phases are sometimes known as
software development life cycle (SDLC) methodologies, though this term could also be
used more generally to refer to any methodology. A "life cycle" approach with distinct phases is
in contrast to Agile approaches, which define a process of iteration in which design,
construction, and deployment of different pieces can proceed simultaneously.
Continuous integration
Continuous integration is the practice of merging all developer working copies to a shared
mainline several times a day.[4] Grady Booch first named and proposed CI in his 1991 method,[5]
although he did not advocate integrating several times a day. Extreme programming (XP)
adopted the concept of CI and did advocate integrating more than once per day – perhaps as
many as tens of times per day.
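The essence of the practice is that every integration into the shared mainline is verified by an automated build and test run, and a failing step rejects the change. The sketch below simulates such a gate in Python; the step names and commands are stand-ins, not any real CI product's configuration.

```python
import subprocess
import sys

def ci_gate(steps):
    """Run each named build/test step; stop at the first failure.

    Each step is a (name, command) pair. A nonzero exit code fails the
    gate, mirroring how a CI server rejects a broken integration.
    """
    for name, command in steps:
        if subprocess.run(command).returncode != 0:
            print(f"integration failed at step: {name}")
            return False
    print("integration passed")
    return True

# Simulated pipeline: a build step and a test step (commands are stand-ins
# for a real compiler invocation and test-suite run).
green = ci_gate([
    ("build", [sys.executable, "-c", "print('compiling...')"]),
    ("tests", [sys.executable, "-c", "assert 2 + 2 == 4"]),
])
print(green)  # True
```

Running such a gate on every merge is what makes integrating many times a day practical: a broken mainline is detected within minutes rather than at a late "integration phase".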
Prototyping
Software prototyping is about creating prototypes, i.e. incomplete versions of the software
program being developed. Prototyping is not a standalone, complete development methodology, but rather an
approach to try out particular features in the context of a full methodology (such as
incremental, spiral, or rapid application development). Basic principles include the following:
Attempts to reduce inherent project risk by breaking a project into smaller segments and
providing more ease-of-change during the development process.
The client is involved throughout the development process, which increases the
likelihood of client acceptance of the final implementation.
While some prototypes are developed with the expectation that they will be discarded, it
is possible in some cases to evolve from prototype to working system.
A basic understanding of the fundamental business problem is necessary to avoid solving the
wrong problems, but this is true for all software methodologies.
Incremental development
Various methods are acceptable for combining linear and iterative systems development
methodologies, with the primary objective of each being to reduce inherent project risk by
breaking a project into smaller segments and providing more ease-of-change during the
development process.
There are three main variants of incremental development:[1]
1. A series of mini-Waterfalls are performed, where all phases of the Waterfall are
completed for a small part of a system, before proceeding to the next increment, or
2. Overall requirements are defined before proceeding to evolutionary, mini-Waterfall
development of individual increments of a system, or
3. The initial software concept, requirements analysis, and design of architecture and system
core are defined via Waterfall, followed by incremental implementation, which
culminates in installing the final version (a working system).
Rapid application development
Rapid application development (RAD) is a software development methodology that favors
iterative development and the rapid construction of prototypes instead of large amounts of up-
front planning. The "planning" of software developed using RAD is interleaved with writing the
software itself. The lack of extensive pre-planning generally allows software to be written much
faster and makes it easier to change requirements.
The rapid development process starts with the development of preliminary data models and
business process models using structured techniques. In the next stage, requirements are verified
using prototyping, eventually to refine the data and process models. These stages are repeated
iteratively; further development results in "a combined business requirements and technical
design statement to be used for constructing new systems".
The term was first used to describe a software development process introduced by James Martin
in 1991. Basic principles include the following:
Key objective is fast development and delivery of a high-quality system at a relatively
low investment cost.
Attempts to reduce inherent project risk by breaking a project into smaller segments and
providing more ease-of-change during the development process.
Aims to produce high-quality systems quickly, primarily via iterative prototyping (at any
stage of development), active user involvement, and computerized development tools.
These tools may include Graphical User Interface (GUI) builders, Computer Aided
Software Engineering (CASE) tools, and code generators.
Project control involves prioritizing development and defining delivery deadlines or
“timeboxes”. If the project starts to slip, emphasis is on reducing requirements to fit the
timebox, not on increasing the deadline.
Generally includes joint application design (JAD), where users are intensely involved in
system design, via consensus building in either structured workshops or electronically
facilitated interaction.
Standard systems analysis and design methods can be fitted into this framework.
Methodologies
Agile development
Agile software development is based
on iterative development, where requirements and solutions evolve via collaboration between
self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile
Manifesto was formulated.
Agile software development uses iterative development as a basis but advocates a lighter and
more people-centric viewpoint than traditional approaches. Agile processes fundamentally
incorporate iteration and the continuous feedback that it provides to successively refine and
deliver a software system.
Waterfall development
The waterfall model is a sequential development approach, in which development is seen as
flowing steadily downwards (like a waterfall) through several phases, typically:
Requirements analysis resulting in a software requirements specification
Software design
Implementation
Testing
Maintenance
The first formal description of the method is often cited as an article published by Winston W.
Royce[7] in 1970, although Royce did not use the term "waterfall" in this article. Royce presented
this model as an example of a flawed, non-working model. Basic principles include the following:
The project is divided into sequential phases, with some overlap and splashback acceptable
between phases.
Tight control is maintained over the life of the project via extensive written
documentation, formal reviews, and approval/signoff by the user and information
technology management occurring at the end of most phases before beginning the next
phase.
A strict waterfall approach discourages revisiting and revising any prior phase once it is complete.
This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of
other more "flexible" models. It has been widely blamed for several large-scale government
projects running over budget, over time and sometimes failing to deliver on requirements due to
the Big Design Up Front approach. Except when contractually required, the waterfall model has
been largely superseded by more flexible and versatile methodologies developed specifically for
software development.
Spiral development
Spiral model (Boehm, 1988)
In 1988, Barry Boehm published a formal software system development "spiral model," which
combines some key aspects of the waterfall model and rapid prototyping methodologies, in an
effort to combine advantages of top-down and bottom-up concepts. It placed emphasis on a key
area many felt had been neglected by other methodologies: deliberate iterative risk analysis,
particularly suited to large-scale complex systems. Basic principles include the following:
Focus is on risk assessment and on minimizing project risk by breaking a project into
smaller segments and providing more ease-of-change during the development process, as
well as providing the opportunity to evaluate risks and weigh consideration of project
continuation throughout the life cycle.
Each cycle involves a progression through the same sequence of steps, for each part of
the product and for each of its levels of elaboration, from an overall concept-of-operation
document down to the coding of each individual program.
Each trip around the spiral traverses four basic quadrants: (1) determine objectives,
alternatives, and constraints of the iteration; (2) evaluate alternatives; identify and resolve
risks; (3) develop and verify deliverables from the iteration; and (4) plan the next
iteration.[10]
Begin each cycle with an identification of stakeholders and their "win conditions", and
end each cycle with review and commitment.
Offshore development
Offshore custom software development aims at dispatching the software development process
over various geographical areas to optimize project spending by capitalizing on countries with
lower salaries and operating costs. Geographically distributed teams can be integrated at any
stage of the software development process.
Other
Chaos model - the main rule is always to resolve the most important issue first.
Slow programming, as part of the larger Slow Movement, emphasizes careful and gradual
work without (or with minimal) time pressures. Slow programming aims to avoid bugs and
overly quick release schedules.
Unified Process (UP) - an iterative development framework that organizes software development
into four phases, each consisting of one or more executable iterations of the software at
that stage of development: inception, elaboration, construction, and transition. Many
tools and products exist to facilitate UP implementation. One of the more popular
refinements of the UP is the Rational Unified Process (RUP).
Process meta-models
Some "process models" are abstract descriptions for evaluating, comparing, and improving the
specific process adopted by an organization.
ISO/IEC 12207 is the international standard describing the method to select, implement,
and monitor the life cycle for software.
The Capability Maturity Model Integration (CMMI) is one of the leading models and is
based on best practice. Independent assessments grade organizations on how well they
follow their defined processes, not on the quality of those processes or the software
produced.
ISO 9000 describes standards for a formally organized process to manufacture a product
and the methods of managing and monitoring progress. Although the standard was
originally created for the manufacturing sector, ISO 9000 standards have been applied to
software development as well. Like CMMI, certification with ISO 9000 does not
guarantee the quality of the end result, only that formalized business processes have been
followed.
ISO/IEC 15504, also known as Software Process Improvement Capability Determination
(SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for
process comparison. SPICE is used much like CMMI. It models processes to manage,
control, guide, and monitor software development. This model is then used to measure
what a development organization or project team actually does during software
development, in order to identify weaknesses and drive
improvement. It also identifies strengths that can be continued or integrated into common
practice for that organization or team.
In practice
Three basic approaches are applied to software development methodology frameworks.
A variety of such frameworks have evolved over the years, each with its own recognized
strengths and weaknesses. One software development methodology framework is not necessarily
suitable for use by all projects. Each of the available methodology frameworks is best suited to
specific kinds of projects, based on various technical, organizational, project and team
considerations.[1]
A process-model rating is particularly relevant in the
U.S. defense industry, which requires a rating based on process models to obtain contracts. The
international standard for describing the method of selecting, implementing, and monitoring the
life cycle for software is ISO/IEC 12207.
For decades, the goal of finding repeatable, predictable processes that improve productivity
and quality has proved elusive. Some try to systematize or formalize the seemingly unruly task of designing
software. Others apply project management techniques to designing software. Large numbers of
software projects do not meet their expectations in terms of functionality, cost, or delivery
schedule - see List of failed and overbudget custom software projects for some notable examples.
Organizations may create a Software Engineering Process Group (SEPG), which is the focal
point for process improvement. Composed of line practitioners who have varied skills, the group
is at the center of the collaborative effort of everyone in the organization who is involved with
software engineering process improvement.
A particular development team may also agree to programming environment details, such as
which integrated development environment is used, and one or more dominant programming
languages or frameworks. These details are generally not dictated by the choice of model or general
methodology.
Software development life cycle (SDLC)