UNIT-1
SOFTWARE & SOFTWARE ENGINEERING
Software Engineering is the process of designing, developing, testing, and maintaining
software in a systematic, efficient, and reliable way.
Introduction:
Software Engineering applies engineering principles to software development. It
focuses on creating high-quality software that meets user needs, is cost-effective, and can be
maintained and updated easily. It involves various stages like planning, coding, testing, and
deployment to ensure the software works well and is error-free.
1.1 The Nature of Software
Software plays a dual role in today’s world:
1. As a Product:
Software is the final product that performs tasks such as producing, managing,
displaying or transferring information. It can run on devices like mobile phones,
tablets, desktops, or mainframe computers.
2. As a Vehicle to Deliver a Product:
It controls computer hardware (via operating systems), enables communication (via
networks), and helps create other software (via tools and environments).
Importance of Software:
It transforms and manages personal and business data.
It connects people to global networks (like the Internet).
It can both protect and threaten personal privacy.
It’s critical in both everyday life and global industries.
Evolution of Software:
In the past 50 years, software has grown in complexity due to advances in:
o Hardware performance
o Memory and storage
o Input/output technologies
o Network architectures
Challenges in Software Development:
Even with advanced tools and larger teams, we still face common problems:
Why does software take so long to build?
Why is it so expensive?
Why do bugs still appear after testing?
Why is software maintenance time-consuming?
Why is progress hard to measure during development?
Need for Software Engineering:
These concerns gave rise to software engineering—a structured approach to design,
develop, and maintain reliable and efficient software.
1.1.1 Defining Software
Software consists of:
1. Programs (Instructions): Instructions that, when executed, provide the desired features and functions.
2. Data Structures: Structures that enable the programs to manipulate and manage data.
3. Descriptive Information: Information that describes the operation and use of the programs (e.g., manuals or documentation).
Unique Characteristics of Software
Logical Entity: Software is not a physical product.
Doesn't Wear Out: Unlike hardware, software doesn’t degrade physically over time.
Changes Cause Errors: Over time, software deteriorates due to frequent updates and
changes that may introduce new bugs.
Failure Rate Behaviour: Software vs. Hardware
Figure 1: Hardware Failure – "Bathtub Curve"
Infant Mortality: High initial failure due to manufacturing/design defects.
Stable Operation: Low failure rate after initial fixes.
Wear Out: Failure rate increases over time due to physical degradation.
Figure 2: Software Failure
Idealized Curve:
o High failure rate at the beginning (due to bugs).
o Low and steady failure rate once bugs are fixed.
o No physical wear-out.
Actual Curve:
o Each change or update causes a spike in failure due to new errors.
o Over time, these spikes increase the minimum failure rate.
o This reflects software deterioration even though it doesn’t physically wear
out.
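The curve shapes described above can be reproduced qualitatively. The sketch below is a minimal Python illustration (all constants are invented purely to shape the curves; this is not real failure data) that plots the hardware bathtub curve next to the idealized and actual software failure curves:

# Minimal sketch: qualitative failure-rate curves (illustrative shapes only).
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)

# Hardware "bathtub": infant mortality + stable operation + wear-out.
hardware = 2.0 * np.exp(-2.0 * t) + 0.2 + 0.002 * t**3

# Idealized software curve: early defects removed, then low and steady; no wear-out.
ideal = 2.0 * np.exp(-2.0 * t) + 0.2

# Actual software curve: each change adds a failure spike and raises the baseline.
actual = ideal.copy()
for change_time in (3, 5, 7, 9):
    spike = 0.8 * np.exp(-4.0 * (t - change_time)) * (t >= change_time)
    actual = actual + spike + 0.05 * (t >= change_time)

plt.plot(t, hardware, label="Hardware (bathtub)")
plt.plot(t, ideal, label="Software (idealized)")
plt.plot(t, actual, label="Software (actual)")
plt.xlabel("Time")
plt.ylabel("Failure rate")
plt.legend()
plt.show()

Note how the rising baseline of the actual curve captures the sense in which software "deteriorates" without physically wearing out.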
Software Maintenance vs. Hardware
Hardware: Worn parts are replaced with spares.
Software: No spare parts. Every fix involves design/code modification.
Software maintenance is complex and often leads to further changes and potential
errors.
1.1.2 Software Application Domains
1. System Software
Supports other software and system operations.
Examples: Operating systems, compilers, editors, drivers, networking tools.
Handles both structured (determinate) and unstructured (indeterminate) data.
2. Application Software
Stand-alone programs used to solve specific business or technical problems.
Helps in business operations and decision-making.
Example: Accounting software, inventory management, CRM tools.
3. Engineering/Scientific Software
Used for complex calculations in science and engineering.
Fields: Astronomy, CAD, genetics, molecular biology, meteorology.
Focus: “Number-crunching” and simulations.
4. Embedded Software
Embedded within hardware devices or systems.
Controls device functions for both user interaction and internal system operations.
Examples: Microwave keypad control, car systems (fuel, braking, displays).
5. Product-Line Software
Developed for reuse by many customers.
May serve niche markets (e.g., inventory tools) or mass markets (e.g., antivirus
software).
Focus: Flexibility and configurable features.
6. Web and Mobile Applications
Runs on internet or mobile platforms.
Includes: Browser-based apps, mobile apps, cloud tools.
Used for: Social media, shopping, education, communication, etc.
7. Artificial Intelligence (AI) Software
Solves non-numerical, complex problems.
Uses: Neural networks, machine learning, pattern recognition, robotics, expert
systems.
Focus: Mimicking human intelligence and decision-making.
1.2 The Changing Nature of Software:
1.2.1 Web Applications (WebApps)
Early Web (1990–1995): Static HTML pages with text and limited graphics.
Now: Dynamic, interactive apps integrated with databases and business tools.
Tools: HTML, XML, Java, APIs.
Advanced stage: Semantic Web (Web 3.0) enables smart data access through
semantic databases and APIs.
Domains: Business, e-commerce, education, government, etc.
1.2.2 Mobile Applications
Apps built specifically for mobile platforms (iOS, Android, Windows).
Features:
o Use of mobile UI interactions.
o Access to device features like GPS, camera, accelerometer.
o Local data storage + cloud/web integration.
Two types:
o Mobile App: Installed on device, has deep access to hardware.
o Mobile WebApp: Runs via browser optimized for mobile.
Trend: Distinction between the two is narrowing as mobile browsers improve.
1.2.3 Cloud Computing
Provides on-demand access to computing resources over the internet.
Three layers of service:
1. Infrastructure as a Service (IaaS) – Compute, storage, networking.
2. Platform as a Service (PaaS) – Runtime, databases, queues, identity, object
storage.
3. Software as a Service (SaaS) – Applications like content, communication,
monitoring, finance, etc.
Cloud Architecture:
o Front-End: Client devices (phones, tablets, laptops) + apps/browsers.
o Back-End: Servers, databases, applications, middleware.
Types: Public, Private, or Hybrid clouds.
1.2.4 Product Line Software
A software product line is a set of related systems with common features built from
shared components.
Benefits:
o Reuse of core assets.
o Saves time and effort across products.
o Ensures consistency and quality.
Example: Microsoft Office suite (Word, Excel, PowerPoint) shares core
functionalities.
Software Process
Software is a set of instructions, in the form of programs, that governs the computer system and drives its hardware components. Producing a software product requires a set of activities; this set is called a software process.
A software process framework details the steps and chronological order of a process. Because it serves as a foundation for complete processes, it is used in most software projects. Task sets, umbrella activities, and process framework activities together define the characteristics of the software development process. A software process includes:
1. Tasks: A task focuses on a small, specific objective.
2. Actions: An action is a set of tasks that produces a major work product.
3. Activities: Activities are groups of related tasks and actions that accomplish a major objective (the sketch after this list models the hierarchy).
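The hierarchy just listed can be pictured as nested containers: an activity groups actions, and an action groups tasks. Below is a minimal Python sketch of that structure (all names here are hypothetical examples, not part of any standard):

# Minimal sketch (hypothetical names): the task -> action -> activity hierarchy.
from dataclasses import dataclass, field

@dataclass
class Task:
    objective: str                # a small, specific objective

@dataclass
class Action:
    work_product: str             # the major work product this action yields
    tasks: list = field(default_factory=list)

@dataclass
class Activity:
    goal: str                     # the major objective, e.g. "communication"
    actions: list = field(default_factory=list)

communication = Activity(
    goal="communication",
    actions=[Action(
        work_product="requirements list",
        tasks=[Task("conduct stakeholder interviews"),
               Task("record elicited requirements")],
    )],
)
print(communication.actions[0].tasks[0].objective)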
Software Process Framework Activities
The Software process framework is required for representing common process
activities. Five framework activities are described in a process framework for software
engineering. Communication, planning, modeling, construction, and deployment are all
examples of framework activities. Each engineering action defined by a framework activity
comprises a list of needed work outputs, project milestones, and software quality assurance
(SQA) points. Let's explain each:
1. Communication
Definition: Communication involves gathering requirements from customers and
stakeholders to determine the system's objectives and the software's requirements.
Activities:
Requirement Gathering: Engaging with customers and stakeholders through
meetings, interviews, and surveys to understand their needs and expectations.
Objective Setting: Clearly defining what the system should achieve based on the
gathered requirements.
Explanation: Effective communication is essential to understand what the users need from
the software. This phase ensures that all stakeholders are on the same page regarding the
goals and requirements of the system.
2. Planning
Definition: Planning involves establishing an engineering work plan, describing technical
risks, listing resource requirements, and defining a work schedule.
Activities:
Work Plan: Creating a detailed plan that outlines the tasks and activities needed to
develop the software.
Risk Assessment: Identifying potential technical risks and planning how to mitigate
them.
Resource Allocation: Determining the resources (time, personnel, tools) required for
the project.
Schedule Definition: Setting a timeline for completing different phases of the project.
Explanation: Planning helps in organizing the project and setting clear expectations. It
ensures that the development team has a roadmap to follow and that potential challenges
are anticipated and managed.
3. Modeling
Definition: Modeling involves creating architectural models and designs to better
understand the problem and work towards the best solution.
Activities:
Analysis of Requirements: Breaking down the gathered requirements to understand
what the system needs to do.
Design: Creating architectural and detailed designs that outline how the software will
be structured and how it will function.
Explanation: Modeling translates requirements into a visual and structured representation
of the system. It helps in identifying the best design approach and serves as a blueprint for
development.
4. Construction
Definition: Construction involves creating code, testing the system, fixing bugs, and
confirming that all criteria are met.
Activities:
Code Generation: Writing the actual code based on the design models.
Testing: Running tests to ensure the software works as intended, identifying and fixing
bugs.
Explanation: This phase is where the actual software is built. Testing is crucial to ensure
that the code is error-free and that the software meets all specified requirements.
5. Deployment
Definition: Deployment involves presenting the completed or partially completed product
to customers for evaluation and feedback, then making necessary modifications based on
their input.
Activities:
Product Release: Delivering the software to users, either as a full release or in stages.
Feedback Collection: Gathering feedback from users about their experience with the
software.
Product Improvement: Making changes and improvements based on user feedback to
enhance the product.
Umbrella Activities
Umbrella activities take place throughout the software development process to improve project management and tracking. They include:
1. Software project tracking and control: The team assesses progress against the project plan and takes corrective action to keep the project on schedule.
2. Risk management: Risks that may affect the project's outcome or the product's quality are identified and analyzed.
3. Software quality assurance: The activities required to define and maintain software quality are performed.
4. Formal technical reviews: Engineering work products are assessed to uncover and remove errors before they propagate to the next activity.
5. Software configuration management: The effects of change are managed whenever the software is changed.
6. Work product preparation and production: The activities needed to create models, documents, logs, forms, and lists are carried out.
7. Reusability management: Criteria for work product reuse are defined, and mechanisms to achieve reusable software components are established.
8. Measurement: Process, project, and product measures are defined and collected to assist the software team in delivering the required software (a worked example of one product measure follows this list).
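As a concrete illustration of the measurement activity, the short sketch below computes defect density (defects per thousand lines of code, KLOC), one commonly used product measure; the input numbers are made up:

# Minimal sketch: defect density, a common product measure (sample numbers).
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

print(defect_density(defects_found=18, lines_of_code=12_000))  # 1.5 defects/KLOC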
Software Engineering Practice
Generic framework activities—communication, planning, modeling, construction, and deployment—and umbrella activities establish a skeleton architecture for software engineering work. The essence of software engineering practice is:
1. Understand the problem (communication and analysis).
2. Plan a solution (modeling and software design).
3. Carry out the plan (code generation).
4. Examine the result for accuracy (testing and quality assurance).
Understand the problem. It's sometimes difficult to admit, but most of us suffer from
hubris when we’re presented with a problem. We listen for a few seconds and then think, Oh
yeah, I understand, let’s get on with solving this thing. Unfortunately, understanding isn’t
always that easy. It’s worth spending a little time answering a few simple questions:
• Who has a stake in the solution to the problem? That is, who are the stakeholders?
• What are the unknowns? What data, functions, and features are required to properly solve
the problem?
• Can the problem be compartmentalized? Is it possible to represent smaller problems that
may be easier to understand?
• Can the problem be represented graphically? Can an analysis model be created?
Plan the solution. Now you understand the problem (or so you think), and you can’t wait to
begin coding. Before you do, slow down just a bit and do a little design:
• Have you seen similar problems before? Are there patterns that are recognizable in a
potential solution? Is there existing software that implements the data, functions, and features
that are required?
• Has a similar problem been solved? If so, are elements of the solution reusable?
• Can subproblems be defined? If so, are solutions readily apparent for the subproblems?
• Can you represent a solution in a manner that leads to effective implementation?
Can a design model be created?
Carry out the plan. The design you’ve created serves as a road map for the system you want
to build. There may be unexpected detours, and it’s possible that you’ll discover an even
better route as you go, but the “plan” will allow you to proceed without getting lost.
• Does the solution conform to the plan? Is source code traceable to the design model?
• Is each component of the solution provably correct? Have the design and code been
reviewed, or better, have correctness proofs been applied to the algorithm?
Examine the result. You can't be sure that your solution is perfect, but you can be sure that
you've designed a sufficient number of tests to uncover as many errors as possible.
• Is it possible to test each component part of the solution? Has a reasonable testing strategy
been implemented?
• Does the solution produce results that conform to the data, functions, and features that are
required?
• Has the software been validated against all stakeholder requirements?
It shouldn't surprise you that much of this approach is common sense. In fact, it's reasonable to state that a commonsense approach to software engineering will never lead you astray.
Software Engineering Myths:
Many experienced experts have seen myths or superstitions (false beliefs or interpretations) and misleading attitudes that create major problems for managers and technical people. The types of software-related myths are listed below.
Types of Software Myths
(i) Management Myths:
Myth 1:
We have all the standards and procedures available for software development.
Fact:
Software practitioners rarely know, or use, all the existing standards, and those standards are inevitably incomplete, because each new piece of software is based on a new and different problem.
Myth 2:
Adding the latest hardware will improve software development.
Fact:
The latest hardware plays only a minor role in producing good software; computer-aided software engineering (CASE) tools are far more important than hardware for achieving quality and productivity. Bought without them, hardware resources are simply misused.
Myth 3:
If the project is lagging behind schedule, adding more people and programmers will help meet the deadline.
Fact:
If software is late, adding more people will merely make the problem worse, because the people already working on the project must spend time educating the newcomers and are thus taken away from their own work. The newcomers are also far less productive than the existing software engineers, so the effort put into training them does not immediately translate into progress.
(ii) Customer Myths:
The customer may be the direct user of the software, the technical team, the marketing/sales department, or another company. Customers hold myths that lead to false expectations, which in turn create dissatisfaction with the developer.
Myth 1:
A general statement of intent is enough to start development; details of the objectives can be filled in over time.
Fact:
A formal and detailed description of functions, behavior, performance, interfaces, design constraints, and validation criteria is essential. Unambiguous requirements (usually derived iteratively) are developed only through effective and continuous communication between customer and developer.
Myth 2:
Software requirements continually change, but change can be easily accommodated because software is flexible.
Fact:
It is true that software requirements change, but the impact of change varies with the time
at which it is introduced. When requirements changes are requested early (before design or
code has been started), the cost impact is relatively small. However, as time passes, the cost
impact grows rapidly—resources have been committed, a design framework has been
established, and change can cause upheaval that requires additional resources and major
design modification.
(iii) Practitioner's Myths:
Myth 1:
Once we write the program and get it to work, our job is done.
Fact:
Industry data indicate that 60 to 80 percent of all effort on software is expended after it is first delivered to the customer, during the maintenance phase. The work is far from over when the code is written.
Myth 2:
Until the program is "running", there is no way of assessing its quality.
Fact:
A systematic technical review is one of the most effective software quality mechanisms. Reviews act as quality filters and can uncover certain classes of errors more efficiently than testing.
Myth 3:
A working program is the only work product of a successful project.
Fact:
A working system is not enough; the right documents, manuals, and guides are also required to provide guidance for software use and support.
Myth 4:
Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down.
Fact:
Software engineering is not about creating documents. It is about creating a quality
product. Better quality leads to reduced rework. And reduced rework results in faster
delivery times.
Generic Process Model
A generic process framework for software engineering defines five framework activities—communication, planning, modeling, construction, and deployment. In addition, a set of umbrella activities—project tracking and control, risk management, quality assurance, configuration management, technical reviews, and others—are applied throughout the process.
You should note that one important aspect of the software process has not yet been
discussed. This aspect—called process flow —describes how the framework activities and the
actions and tasks that occur within each framework activity are organized with respect to
sequence and time and is illustrated in Figure 2.
A linear process flow executes each of the five framework activities in sequence, beginning with communication and culminating with deployment (Figure 2a). An iterative process flow repeats one or more of the activities before proceeding to the next (Figure 2b). An evolutionary process flow executes the activities in a "circular" manner; each circuit through the five activities leads to a more complete version of the software (Figure 2c). A parallel process flow (Figure 2d) executes one or more activities in parallel with other activities (e.g., modeling for one aspect of the software might be executed in parallel with construction of another aspect of the software). The sketch below makes the contrast between these flows concrete.
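The following minimal Python sketch (purely illustrative) walks the five framework activities once for a linear flow and circles through them repeatedly for an evolutionary flow:

# Minimal sketch: linear vs. evolutionary process flow (illustrative only).
ACTIVITIES = ["communication", "planning", "modeling", "construction", "deployment"]

def linear_flow():
    # Each activity executes exactly once, in sequence.
    for activity in ACTIVITIES:
        yield activity

def evolutionary_flow(circuits):
    # Each circuit through all five activities yields a more complete version.
    for version in range(1, circuits + 1):
        for activity in ACTIVITIES:
            yield f"v{version}: {activity}"

print(list(linear_flow()))
print(list(evolutionary_flow(circuits=2)))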
Software Process Assessment & Improvement
Software process assessment is a disciplined and organized examination of the software process being used by an organization, based on a process model. A software process assessment covers many areas, including the identification and characterization of current practices; the ability of current practices to control or avoid significant causes of poor (software) quality, cost, and schedule; and the identification of the strengths and weaknesses of the software process.
Types of Software Assessment:
Self assessment: This is conducted internally by the people of the organisation itself.
Second-party assessment: This is conducted by an external team, or by people of the organisation under the supervision of an external team.
Third-party assessment: This is conducted by an independent external party, for example to certify that the organisation's process meets a standard.
In an ideal case Software Process Assessment should be performed in a transparent,
open and collaborative environment. This is very important for the improvement of the
software and the development of the product. The results of the Software Process
Assessment are confidential and are only accessible to the company. The assessment team
must contain at least one person from the organization that is being assessed.
Software Process Cycle:
Generally there are six different steps in the complete cycle:
Selecting a team: The first step is to select all the team members. Every member must be a software professional with sound knowledge of software engineering.
The standard process maturity questionnaire is filled out by the representatives of the
site that will be evaluated.
In accordance with the CMM core process areas, the assessment team analyses the
questionnaire results to determine the areas that call for additional investigation.
The evaluation team visits the location to learn more about the software procedures
used there.
The evaluation team compiles a set of results outlining the organization's software
process's advantages and disadvantages.
In order to deliver the findings to the right audience, the assessment team creates a Key
Process Area (KPA) profile analysis.
SCAMPI
SCAMPI stands for Standard CMMI Assessment Method for Process Improvement. It was created to fulfil the demands of the CMMI paradigm (Software Engineering Institute, 2000) and is based on the CBA IPI. The CBA IPI and SCAMPI both have three steps:
1. Plan and become ready
2. Carry out the evaluation on-site
3. Report findings
The planning and preparation phase includes the following activities:
Describe the scope of the evaluation.
Create the assessment strategy.
Get the evaluation crew ready and trained.
Make a quick evaluation of the participants.
Distribute the CMMI appraisal questionnaire.
Look at the survey results.
Perform a preliminary document evaluation.
The onsite evaluation phase includes the following activities:
Present the results.
Consolidate the findings.
Complete / end the assessment.
CMM-Based Appraisal for Internal Process Improvement (CBA IPI) — provides a
diagnostic technique for assessing the relative maturity of a software organization; uses the
SEI CMM as the basis for the assessment [Dun01].
SPICE (ISO/IEC 15504) —a standard that defines a set of requirements for software process
assessment. The intent of the standard is to assist organizations in developing an objective
evaluation of the efficacy of any defined software process [ISO08].
ISO 9001:2000 for Software— a generic standard that applies to any organization that wants
to improve the overall quality of the products, systems, or services that it provides. Therefore,
the standard is directly applicable to software organizations and companies.
Prescriptive Process Model
A prescriptive process model strives for structure and order in software
development. Activities and tasks occur sequentially with defined guidelines for progress.
But are prescriptive models appropriate for a software world that thrives on change? If we
reject traditional process models (and the order they imply) and replace them with something
less structured, do we make it impossible to achieve coordination and coherence in software
work? We call them “prescriptive” because they prescribe a set of process elements—
framework activities, software engineering actions, tasks, work products, quality assurance,
and change control mechanisms for each project. Each process model also prescribes a
process flow (also called a work flow )—that is, the manner in which the process elements are
interrelated to one another.
1. The Waterfall Model
The Waterfall model is also known as the 'linear sequential model' or 'classic life cycle model'. It is used in small projects where requirements are well defined and known before starting the project. Activities are carried out in a linear and systematic fashion.
The process starts with communication, where requirements are gathered from the customer and recorded.
It then moves to the planning stage, where cost and time constraints are estimated, a schedule is outlined, and project tracking variables are defined.
In modeling, a design is created based on the requirements while keeping the project constraints in mind. After this, code is generated and the actual building of the product starts in the construction phase. Testing (unit testing, integration testing) is done after code completion in this phase.
Deployment is the last stage, where the product is delivered, customer feedback is received, and support and maintenance for the product are provided.
Advantages of the waterfall model
A simple model to use and implement.
Easily understandable workflow.
Easy to manage since requirements are known prior to the start of the project.
Can be applied to projects where quality is preferred over cost.
Disadvantages of the waterfall model
It may be difficult for the customer to provide all the specific requirements beforehand.
Cannot be used for complex and object-oriented projects.
Testing and customer evaluation are done at the last stages and hence the risk is high.
Iteration of activities is not supported, even though it is unavoidable in certain projects.
May lead to “blocking states” in which some project team members must wait for other
members of the team to complete dependent tasks.
a) V-Model
The V-Model is an SDLC model, also called the Verification and Validation model. It is widely used in the software development process and is considered a disciplined model. In the V-Model, the execution of processes is sequential: a new phase starts only after the previous phase ends.
It is based on associating a testing phase with each development phase: in the V-Model, every development phase has a corresponding testing phase, arranged in a V shape. In other words, software development and testing activities are planned together.
In this model, the verification phases are on one side and the validation phases are on the other; the two sides run in parallel and are connected to each other through the coding phase, forming a V shape. Hence it is called the V-Model.
V-Design: In the V-design, the left side represents the development activities and the right side represents the testing activities.
2. Incremental Process Model
The Incremental process model is also known as ‘Successive version model‘.
In the Incremental process model, a series of releases, called increments, are built and
delivered to the customer. First, a simple working system (core product), that addresses basic
requirements, is delivered. Customer feedback is recorded after each incremental delivery.
Many increments are delivered, each adding more functions, until the required system is released. This model is used when the customer demands a product with limited functionality quickly.
Advantages of incremental process model
Flexible to change requirements.
Changes can be done throughout the development stages.
Errors are reduced since the product is tested by the customer in each phase.
Working software available at the early stage of the process.
Easy to test because of small iterations.
The initial cost is lower.
Disadvantages of incremental process model
Requires good planning and design.
Modules and interfaces should be well defined.
The total cost is high.
Demands a complete planning strategy before commencement.
Refining requirements in each iteration may affect system architecture.
Breaking the problem into increments needs skilful management supervising.
3. Evolutionary Process Models
Evolutionary process models are chosen when the requirements are likely to change, and when the complete, sophisticated product cannot be delivered before a given deadline but a limited version of it can.
In the incremental model, complete requirements are specified beforehand and these
requirements are refined over time for each increment. The evolutionary model permits
requirements, plans and estimates to evolve over time. Here we discuss prototyping and
the spiral model.
a) Prototyping
When the requirements are unclear and likely to change, or when the developer is doubtful about the workings of an algorithm, a solution is to build a prototype and find out what is actually needed. Hence, in this model, one or more prototypes are made from the unrefined, currently known requirements before the actual product is built.
A quick design is what occurs in a prototype model. The client evaluates the prototype and
gives feedback and other requirements which are incorporated in the next prototype. This is
repeated until the prototype becomes a complete product that is acceptable to the client. Some
prototypes are built as “throwaways”, others are “evolutionary” in nature as they evolve into
the actual system.
Advantages of prototyping
Active involvement of the user.
Errors are detected earlier.
Feedback after each prototype helps in understanding the system better.
Does not need to know detailed processes, input and output from the beginning.
Disadvantages of prototyping
Multiple prototypes can slow down the process.
Frequent changes can increase complexity.
Unsatisfied client leads to multiple throwaways.
The customer may not be interested or satisfied after evaluating the initial prototype.
b) Spiral Model
In the spiral model, the software is developed through a series of increments. The diagram looks like a spiral with loops, where each loop is a phase. Each phase is split into four sectors/quadrants.
The first circuit around the spiral might result in the development of a product specification.
The subsequent passes around the spiral might be used to develop a prototype and then
progressively more mature versions of the software.
Planning is where the objectives, alternatives, and other constraints are determined. In the risk analysis sector, the alternatives are considered, the risks in each alternative are analysed, and prototypes are refined. In the development quadrant, the risks being known, the product is developed and tested. In the assessment sector, the customer's evaluation of the developed product is reviewed and the next phase is planned. This loop continues until acceptable software is built and deployed.
Hence, the spiral model follows an incremental process methodology and unlike other
process models, it deals with the uncertainty by applying a series of risk analysis strategies
throughout the process.
Advantages of spiral model
Reduces risk.
Recommended for complex projects.
Changes can be incorporated at a later stage.
Strong documentation helps in better management.
Disadvantages of spiral model
Costly and not recommended for small projects.
Demands risk assessment expertise.
Looping is a complex process.
Heavy documentation.
4. RAD Model
RAD stands for rapid application development. The methodology of the RAD model is similar to that of the incremental and waterfall models. It is used for small projects.
The main objective of RAD model is to reuse code, components, tools, processes in project
development.
If the project is large then it is divided into many small projects and these small projects
are planned one by one and completed. In this way, by completing small projects, the
large project gets ready quickly.
In the RAD model, the project is completed within the given time, and all the requirements are collected before starting the project. It is very fast and produces very few errors.
Specialized Process Models
Specialized process models take on many of the characteristics of one or more of the traditional models. However, these models tend to be applied when a specialized or narrowly defined software engineering approach is chosen. There are three types of specialized process models:
1. Component-Based Development
2. Formal Methods Model
3. Aspect-Oriented Software Development
1. Component-Based Development: Commercial off-the-shelf (COTS) software components, developed by vendors who offer them as products, provide targeted functionality with well-defined interfaces that enable the components to be integrated into the software that is to be built. The component-based development model incorporates many of the characteristics of the spiral model. It is evolutionary in nature, demanding an iterative approach to the creation of software. However, the component-based development model constructs applications from prepackaged software components. Modeling and construction activities begin with the identification of candidate components. These components can be designed as either conventional software modules or object-oriented classes or packages of classes. Regardless of the technology used to create the components, the model incorporates the following steps:
1. Available component-based products are researched and evaluated for the application domain in question.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
The component-based development model leads to software reuse, and reusability provides software engineers with a number of measurable benefits. A software engineering team can achieve a reduction in development cycle time as well as a reduction in project cost if component reuse becomes part of its culture.
2. Formal Methods Model: The formal methods model encompasses a set of activities that leads to a formal mathematical specification of computer software. Formal methods enable you to specify, develop, and verify a computer-based system by applying a rigorous mathematical notation. A variation on this approach, called cleanroom software engineering, is currently applied by some software development organizations. When formal methods are used during development, they provide a mechanism for eliminating many of the problems that are difficult to overcome with other software engineering paradigms: ambiguity, incompleteness, and inconsistency can be discovered and corrected more easily through mathematical analysis. When formal methods are used during design, they serve as a basis for program verification and therefore enable you to discover and correct errors that might otherwise go undetected. The formal methods model offers the promise of defect-free software. There are some disadvantages too:
• The development of formal models is currently quite time consuming and expensive.
• Because few software developers have the necessary background to apply formal methods, extensive training is required.
• It is difficult to use the models as a communication mechanism for technically unsophisticated customers.
3. Aspect Oriented Software Development: Regardless of the software process that is
chosen, the builders of complex software invariably implement a set of localized features,
functions, and information content. These localized software characteristics are modeled as
components and then constructed within the context of a system architecture. As modern
computer-based systems become more sophisticated, certain concerns span the entire
architecture. Some concerns are high-level properties of a system; other concerns affect
functions, while others are systemic. When concerns cut across multiple system functions,
features, and information, they are often referred to as crosscutting concerns. Aspectual
requirements define those crosscutting concerns that have an impact across the software
architecture. Aspect-oriented software development (AOSD), often referred to as aspect-
oriented programming (AOP), is a relatively new software engineering paradigm that
provides a process and methodological approach for defining, specifying, designing, and
constructing aspects. A distinct aspect-oriented process has not yet matured. However, it is
likely that such a process will adopt characteristics of both evolutionary and concurrent
process models. The evolutionary model is appropriate as aspects are identified and then
constructed. The parallel nature of concurrent development is essential because aspects are
engineered independently of localized software components and yet, aspects have a direct
impact on these components. It is essential to instantiate asynchronous communication
between the software process activities applied to the engineering and construction of aspects
and components.
Unified Process
The Unified Process is an attempt to draw on the best features and characteristics of
traditional software process models, but characterize them in a way that implements many of
the best principles of agile software development. The Unified Process recognizes the
importance of customer communication and streamlined methods for describing the
customer's view of a system. It emphasizes the important role of software architecture and "helps the architect focus on the right goals, such as understandability, resilience to future changes, and reuse." It suggests a process flow that is iterative and incremental, providing the evolutionary feel that is essential in modern software development.
Inception: The inception phase of the UP encompasses both customer communication and
planning activities. By collaborating with stakeholders, business requirements for the
software are identified; a rough architecture for the system is proposed; and a plan for the
iterative, incremental nature of the ensuing project is developed.
Planning : Planning identifies resources, assesses major risks, defines a schedule, and
establishes a basis for the phases that are to be applied as the software increment is
developed.
Elaboration: The elaboration phase encompasses the communication and modeling activities of the generic process model. Elaboration refines and expands the preliminary use cases that were developed as part of the inception phase and expands the architectural representation to include five different views of the software—the use case model, the requirements model, the design model, the implementation model, and the deployment model.
Construction: The construction phase of the UP is identical to the construction activity
defined for the generic software process. Using the architectural model as input, the
construction phase develops or acquires the software components that will make each use
case operational for end users. To accomplish this, requirements and design models that were
started during the elaboration phase are completed to reflect the final version of the software
increment. All necessary and required features and functions for the software increment are
then implemented in source code. As components are being implemented, unit tests are
designed and executed for each. In addition, integration activities are conducted. Use cases
are used to derive a suite of acceptance tests that are executed prior to the initiation of the
next unified process phase.
Transition: The transition phase of the UP encompasses the latter stages of the generic construction activity and the first part of the generic deployment activity. Software is given to end users for beta testing, and user feedback reports both defects and necessary changes. In
addition, the software team creates the necessary support information that is required for the
release. At the conclusion of the transition phase, the software increment becomes a usable
software release.
Production: The production phase of the UP coincides with the deployment activity of the
generic process. During this phase, the ongoing use of the software is monitored, support for
the operating environment (infrastructure) is provided, and defect reports and requests for
changes are submitted and evaluated.
Personal Software Process (PSP)
The Personal Software Process (PSP) focuses on individuals improving their own performance. The PSP is an individual process and a bottom-up approach to software process improvement. It is a prescriptive process: a mature methodology with a well-defined set of tools and techniques.
Key Features of Personal Software Process (PSP)
Following are the key features of the Personal Software Process (PSP):
Process-focused: PSP is a process-focused methodology that emphasizes the
importance of following a disciplined approach to software development.
Personalized: PSP is personalized to an individual's skill level, experience, and work
habits. It recognizes that individuals have different strengths and weaknesses, and
tailors the process to meet their specific needs.
Metrics-driven: PSP is metrics-driven, meaning that it emphasizes the collection and
analysis of data to measure progress and identify areas for improvement (a minimal log sketch follows this list).
Incremental: PSP is incremental, meaning that it breaks down the development process
into smaller, more manageable pieces that can be completed in a step-by-step fashion.
Quality-focused: PSP is quality-focused, meaning that it emphasizes the importance of
producing high-quality software that meets user requirements and is free of defects.
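Because PSP is metrics-driven, a practitioner typically keeps personal time and defect logs. The sketch below shows the kind of per-phase data an individual might record (a hypothetical structure for illustration, not the official PSP forms):

# Minimal sketch: a personal time/defect log in the spirit of PSP
# (hypothetical structure, not the official PSP forms).
from collections import defaultdict

time_log = defaultdict(float)    # minutes spent per phase
defect_log = defaultdict(int)    # defects found per phase

def log_time(phase, minutes):
    time_log[phase] += minutes

def log_defect(phase):
    defect_log[phase] += 1

log_time("design", 90)
log_time("code", 150)
log_time("test", 60)
log_defect("code")
log_defect("test")

total = sum(time_log.values())
for phase, minutes in time_log.items():
    share = minutes / total
    print(f"{phase}: {minutes:.0f} min ({share:.0%}), {defect_log[phase]} defect(s)")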
Advantages of Personal Software Process (PSP)
Advantages of Personal Software Process (PSP) are:
Improved productivity: PSP provides a structured approach to software development
that can help individuals improve their productivity by breaking down the development
process into smaller, more manageable steps.
Improved quality: PSP emphasizes the importance of producing high-quality software
that meets user requirements and is free of defects. By collecting and analyzing data
throughout the development process, individuals can identify and eliminate sources of
errors and improve the quality of their work.
Personalized approach: PSP is tailored to an individual's skill level, experience, and
work habits, which can help individuals work more efficiently and effectively.
Improved estimation: PSP emphasizes the importance of accurate estimation, which
can help individuals plan and execute projects more effectively.
Continuous improvement: PSP promotes a culture of continuous improvement, which
can help individuals learn from past experiences and apply that knowledge to future
projects.
Disadvantages of Personal Software Process (PSP)
Following are the Disadvantages of Personal Software Process (PSP):
Time-consuming: PSP can be time-consuming, particularly when individuals are first
learning the methodology and need to collect and analyze data throughout the
development process.
Complex: PSP can be complex, particularly for individuals who are not familiar with
software engineering concepts or who have limited experience in software
development.
Heavy documentation: PSP requires a significant amount of documentation throughout
the development process, which can be burdensome for some individuals.
Limited to individual use: PSP is designed for individual use, which means that it may
not be suitable for team-based software development projects.
Team Software Process (TSP)
The Team Software Process (TSP) is a team-based process that focuses on team productivity. It is basically a top-down approach. The TSP is an adaptive process and process-management methodology.
Key Features of Team Software Process (TSP):
The key features of the Team Software Process (TSP) are:
Team-focused: TSP is team-focused, meaning that it emphasizes the importance of
collaboration and communication among team members throughout the software
development process.
Process-driven: TSP is process-driven, meaning that it provides a structured approach
to software development that emphasizes the importance of following a disciplined
process.
Metrics-driven: TSP is metrics-driven, meaning that it emphasizes the collection and
analysis of data to measure progress, identify areas for improvement, and make data-
driven decisions.
Incremental: TSP is incremental, meaning that it breaks down the development process
into smaller, more manageable pieces that can be completed in a step-by-step fashion.
Quality-focused: TSP is quality-focused, meaning that it emphasizes the importance of
producing high-quality software that meets user requirements and is free of defects.
Feedback-oriented: TSP is feedback-oriented, meaning that it emphasizes the
importance of receiving feedback from peers, mentors, and other stakeholders to
identify areas for improvement.
Advantages of Team Software Process (TSP):
Following are the key advantage of the Team Software Process (TSP):
Improved productivity: TSP provides a structured approach to software development
that can help teams improve their productivity by breaking down the development
process into smaller, more manageable steps.
Improved quality: TSP emphasizes the importance of producing high-quality software
that meets user requirements and is free of defects. By collecting and analyzing data
throughout the development process, teams can identify and eliminate sources of errors
and improve the quality of their work.
Team collaboration: TSP promotes team collaboration, which can help teams work
more efficiently and effectively by leveraging the skills and expertise of all team
members.
Improved estimation: TSP emphasizes the importance of accurate estimation, which
can help teams plan and execute projects more effectively.
Continuous improvement: TSP promotes a culture of continuous improvement, which
can help teams learn from past experiences and apply that knowledge to future projects.
Disadvantages of Team Software Process (TSP):
Following are the disadvantage of the Team Software Process (TSP):
Time-consuming: TSP can be time-consuming, particularly when teams are first
learning the methodology and need to collect and analyze data throughout the
development process.
Complex: TSP can be complex, particularly for teams that are not familiar with
software engineering concepts or who have limited experience in software
development.
Heavy documentation: TSP requires a significant amount of documentation
throughout the development process, which can be burdensome for some teams.
Requires discipline: TSP requires teams to follow a disciplined approach to software
development, which can be challenging for some teams who prefer a more flexible
approach.
Cost: TSP can be costly to implement, particularly if teams need to invest in training or
software tools to support the methodology.
PROCESS TECHNOLOGY
One or more of the process models discussed in the preceding sections must be
adapted for use by a software team. To accomplish this, process technology tools have been
developed to help software organizations analyze their current process, organize work tasks,
control and monitor progress, and manage technical quality. Process technology tools allow a
software organization to build an automated model of the process framework, task sets, and
umbrella activities discussed earlier in this unit.
The model, normally represented as a network, can then be analysed to determine
typical workflow and examine alternative process structures that might lead to reduced
development time or cost. Once an acceptable process has been created, other process
technology tools
can be used to allocate, monitor, and even control all software engineering activities, actions,
and tasks defined as part of the process model. Each member of a software team can use such
tools to develop a checklist of work tasks to be performed, work products to be produced, and
quality assurance activities to be conducted. The process technology tool can also be used to
coordinate the use of other software engineering tools that are appropriate for a particular
work task.
PRODUCT AND PROCESS
Product:
In the context of software engineering, a product is any software manufactured based on the customer's request. This can be problem-solving software or a computer-based system. It can also be said that the product is the result of a project.
Process:
A process is a sequence of steps that must be followed to create a product. The main purpose of a process is to improve the quality of the project. The process serves as a template that can be instantiated, and its instances are used to direct the project.
The main difference between a process and a product is that the process is a set of steps that guides the project toward a suitable product, while the product is the result of a project and is manufactured by a wide variety of people.
UNIT-II
REQUIREMENTS ANALYSIS AND SPECIFICATION
INTRODUCTION:
In software engineering, Requirements Analysis and Specification is the crucial
process of identifying, documenting, and managing the needs of a software system. It
involves understanding what the software should do (functional requirements) and how it
should perform (non-functional requirements). This phase is essential for ensuring the final
product meets user expectations and is delivered successfully.
Here's a breakdown of the key aspects:
1. Requirements Analysis:
Purpose:
To gather, analyze, and understand the needs of stakeholders (users, clients, etc.).
Activities:
a. Requirements elicitation: Gathering requirements through various methods
(interviews, surveys, workshops).
b. Requirements analysis: Examining the collected information to identify
inconsistencies, ambiguities, and conflicts.
c. Requirements validation: Ensuring the requirements are accurate, complete,
and feasible.
2. Requirements Specification:
Purpose:
To document the identified requirements in a clear, concise, and unambiguous
manner.
Activities:
a. Creating a Software Requirements Specification (SRS) document: This
document serves as a blueprint for development, outlining both functional and
non-functional requirements.
b. Using various notations: SRS documents may include use cases, user stories,
data models, and other visual representations to enhance understanding.
c. Maintaining a change management process: Any changes to requirements
during development are documented and managed through a controlled process.
3. Key Concepts:
Functional Requirements: Describe what the software should do (e.g., "The system
shall allow users to log in").
Non-Functional Requirements: Describe how the software should perform (e.g., "The system shall be available 99.9% of the time"). Both kinds are most useful when written so they can be tested, as the sketch after this list illustrates.
Stakeholders: Individuals or groups who have an interest in the software system.
SRS: A formal document that details the software's requirements.
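Below is a minimal Python sketch (hypothetical function names; the login service is a stand-in) expressing the two sample requirements above as automated, pytest-style acceptance checks:

# Minimal sketch (hypothetical names): requirements expressed as checks.
import time

def login(username, password):
    # Hypothetical stand-in for the real authentication service.
    return username == "alice" and password == "secret"

def test_functional_login():
    # Functional requirement: "The system shall allow users to log in."
    assert login("alice", "secret")

def test_nonfunctional_response_time():
    # Non-functional requirement (illustrative threshold): login answers
    # within 2 seconds. True availability (99.9%) is measured in operation,
    # not in a unit test, so a latency check stands in here.
    start = time.perf_counter()
    login("alice", "secret")
    assert time.perf_counter() - start < 2.0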
4. Importance:
Reduces risks: Proper analysis and specification minimize the chances of developing
a product that doesn't meet user needs.
Improves communication: Clear documentation ensures all stakeholders are on the
same page.
Aids development: The SRS serves as a guide for developers, testers, and other team
members.
Ensures quality: Well-defined requirements lead to a more robust and reliable
software product.
REQUIREMENTS GATHERING
Requirements gathering is a crucial phase in the software development life cycle
(SDLC) and project management. It involves collecting, documenting, and managing the
requirements that define the features and functionalities of a system or application. The
success of a project often depends on the accuracy and completeness of the gathered
requirements.
Processes of Requirements Gathering in Software Development:
There are six steps crucial to the requirements gathering process:
Step 1- Identify Stakeholders:
The first step is to identify and engage with all relevant stakeholders. Stakeholders can
include end-users, clients, project managers, subject matter experts, and anyone else
who has a vested interest in the software project. Understanding their perspectives is
essential for capturing diverse requirements.
Step 2- Define Project Scope:
Clearly define the scope of the project by outlining its objectives, boundaries, and
limitations. This step helps in establishing a common understanding of what the
software is expected to achieve and what functionalities it should include.
Step 3- Conduct Stakeholder Interviews:
Schedule interviews with key stakeholders to gather information about their needs,
preferences, and expectations. Through open-ended questions and discussions, aim to
uncover both explicit and implicit requirements. These interviews provide valuable
insights that contribute to a more holistic understanding of the project.
Step 4- Document Requirements:
Systematically document the gathered requirements. This documentation can take
various forms, such as user stories, use cases, or formal specifications. Clearly articulate
functional requirements (what the system should do) and non-functional requirements
(qualities the system should have, such as performance or security).
Step 5- Verify and Validate Requirements:
Once the requirements are documented, it's crucial to verify and validate them.
Verification ensures that the requirements align with the stakeholders' intentions, while
validation ensures that the documented requirements will meet the project's goals. This
step often involves feedback loops and discussions with stakeholders to refine and
clarify requirements.
Step 6- Prioritize Requirements:
Prioritize the requirements based on their importance to the project goals and
constraints. This step helps in creating a roadmap for development, guiding
the team on which features to prioritize. Prioritization is essential, especially
when resources and time are limited.
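One widely used prioritization scheme (added here as an illustration; it is not named in these notes) is MoSCoW, which sorts requirements into Must, Should, Could, and Won't. A minimal Python sketch with invented sample requirements:

# Minimal sketch: MoSCoW prioritization of gathered requirements
# (illustrative scheme and invented sample requirements).
requirements = {
    "user login": "Must",
    "password reset by email": "Should",
    "dark-mode theme": "Could",
    "offline mode": "Won't",    # deferred beyond this release
}

ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}
roadmap = sorted(requirements, key=lambda r: ORDER[requirements[r]])
print(roadmap)   # requirements in the order the team should tackle them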
Requirement Gathering Techniques:
Effective requirement gathering is essential for the success of a software
development project. Various techniques are employed to collect, analyze, and document
requirements.
Here are some commonly used requirement gathering techniques:
1. Interviews:
Conducting one-on-one or group interviews with stakeholders, including end-users,
clients, and subject matter experts. This allows for direct interaction to gather
detailed information about their needs, expectations, and concerns.
2. Surveys and Questionnaires:
Distributing surveys and questionnaires to a broad audience to collect information
on a larger scale. This technique is useful for gathering feedback from a diverse set
of stakeholders and can be particularly effective in large projects.
3. Workshops:
Organizing facilitated group sessions or workshops where stakeholders come
together to discuss and define requirements. Workshops encourage collaboration,
idea generation, and the resolution of conflicting viewpoints in a structured
environment.
4. Observation:
Directly observing end-users in their work environment to understand their
workflows, pain points, and preferences. Observational techniques help in
uncovering implicit requirements that users might not explicitly state.
5. Prototyping:
Creating mockups or prototypes of the software to provide stakeholders with a
tangible representation of the proposed system. Prototyping allows for early
visualization and feedback, helping to refine requirements based on stakeholders'
reactions.
6. Use Cases and Scenarios:
Developing use cases and scenarios to describe how the system will be used in
different situations. This technique helps in understanding the interactions between
users and the system, making it easier to identify and document functional
requirements.
7. Document Analysis:
Reviewing existing documentation, such as business process manuals, reports, and
forms, to extract relevant information. This technique provides insights into the
current processes and helps identify areas for improvement.
Why is Requirement Gathering Important?
Requirement gathering holds immense importance in software development for several
critical reasons:
1. Clarity of Project Objectives:
Requirement gathering sets the stage by defining and clarifying the objectives of the
software project. It ensures that all stakeholders, including clients, users, and
development teams, have a shared understanding of what needs to be achieved.
2. Customer Satisfaction:
Understanding and meeting customer needs is paramount for customer satisfaction.
Requirement gathering allows developers to comprehend the expectations of end-
users and clients, leading to the creation of a product that aligns with their desires
and requirements.
3. Scope Definition:
Clearly defined requirements help in establishing the scope of the project. This
delineation is crucial for managing expectations, avoiding scope creep (uncontrolled
changes to project scope), and ensuring that the project stays on track.
4. Reduced Misunderstandings:
Ambiguities and misunderstandings are common sources of project failures.
Requirement gathering facilitates clear communication between stakeholders,
reducing the risk of misinterpretations and ensuring that everyone involved is on the
same page.
5. Risk Mitigation:
Identifying and addressing potential issues at the requirements stage helps mitigate
risks early in the development process. This proactive approach minimizes the
chances of costly errors, rework, and delays later in the project life cycle.
Benefits of Requirements Gathering:
The benefits of effective requirements gathering in software development include:
Cost Reduction: One of the primary benefits of effective requirements gathering is cost
reduction. When requirements are well-defined and thoroughly understood at the
beginning of a project, it minimizes the likelihood of costly changes and rework later in
the development process.
Customer Satisfaction: Clear and accurate requirements gathering directly contributes
to customer satisfaction. When the end product aligns closely with the expectations and
needs of the stakeholders, it enhances user experience and meets customer demands.
This satisfaction is not only vital for the success of the current project but also
contributes to positive relationships between the development team and clients,
fostering trust and potential future collaborations.
Improved Communication: Requirements gathering serves as a communication bridge
between various stakeholders involved in a project, including developers, clients, users,
and project managers. Miscommunication is a common source of project failures and
delays. By clearly documenting and understanding requirements, the development team
ensures that everyone involved has a shared vision of the project objectives,
functionalities, and constraints.
Efficient Resource Utilization: Thorough requirements gathering enables the efficient
allocation and utilization of resources. Resources, including time, manpower, and
technology, are finite and valuable. When requirements are well-defined, project teams
can allocate resources more accurately, avoiding unnecessary expenditures or
overcommitting resources to certain aspects of the project.
Enhanced Quality: Well-documented requirements serve as the foundation for quality
assurance throughout the development process. When the project team has a clear
understanding of what needs to be achieved, they can establish quality standards and
criteria from the outset. This clarity enables the implementation of effective testing
strategies, ensuring that each aspect of the system is thoroughly evaluated against the
specified requirements.
Risk Management: Requirements gathering is a crucial component of effective risk
management. By identifying potential risks early in the project, stakeholders can
proactively address ambiguities, conflicting requirements, and other challenges that
could pose a threat to the project's success.
Accurate Planning: Accurate project planning is dependent on a clear understanding of
project requirements. When requirements are well-documented, project managers can
create realistic schedules, milestones, and deliverables. This accurate planning is crucial
for setting expectations, managing stakeholder commitments, and ensuring that the project
progresses according to the established timeline.
Tools for Requirements Gathering in Software Development:
Requirements gathering tools play a crucial role in streamlining the process of collecting,
documenting, and managing project requirements. These tools are designed to enhance
collaboration, improve communication, and facilitate the organization of complex
information. Here are several types of requirements gathering tools and their roles:
Collaboration Tools: Collaboration tools, such as project management platforms (e.g.,
Jira, Trello, Asana), facilitate teamwork and communication among project
stakeholders. These platforms often include features like task assignment, progress
tracking, and discussion forums, enabling teams to collaboratively gather, discuss, and
manage requirements in real-time.
Document Management Tools: Document management tools (e.g., Confluence,
SharePoint) help organize and store project documentation. These tools provide a
centralized repository for requirements, ensuring easy access, version control, and
collaboration. Document management tools are particularly valuable for maintaining a
structured record of evolving project requirements.
Survey and Form Builders: Tools like Google Forms, Typeform, or SurveyMonkey
enable the creation of online surveys and forms. These are useful for gathering
structured data from a large audience, such as feedback, preferences, or specific
information required for project requirements. The collected data can be easily analyzed
and integrated into the requirements gathering process.
Prototyping Tools: Prototyping tools (e.g., Sketch, Balsamiq, Figma) allow the
creation of visual or interactive prototypes. These tools are valuable for translating
requirements into tangible representations that stakeholders can interact with, providing
a clearer understanding of the proposed features and functionalities.
Mind Mapping Tools: Mind mapping tools (e.g., MindMeister, XMind) help visualize
and organize complex ideas and relationships. During requirements gathering, these
tools can be used to create visual representations of interconnected requirements,
helping stakeholders and the project team understand the relationships between
different features and functionalities.
Version Control Systems: Version control systems (e.g., Git, SVN) are essential for
managing changes to project documentation. These tools track revisions, allowing
teams to review, revert, or merge changes seamlessly. This is particularly valuable in
dynamic projects where requirements may undergo frequent updates or refinements.
Requirements Management Software: Specialized requirements management tools
(e.g., IBM Engineering Requirements Management DOORS, Jama Connect) are
designed specifically for capturing, tracking, and managing requirements throughout the
project lifecycle. These tools often offer features such as traceability, impact analysis,
and integration with other project management tools.
Visual Collaboration Tools: Visual collaboration tools (e.g., Miro, Lucidchart)
facilitate collaborative diagramming and visual representation of ideas. These tools can
be used for creating flowcharts, diagrams, or visual models that help communicate
complex requirements in a more intuitive and accessible way.
SOFTWARE REQUIREMENT ANALYSIS
Software requirement analysis means the complete study, analysis, and description of
software requirements, so that the requirements that are genuine and needed can be
fulfilled to solve the problem. Several activities are involved in analyzing software
requirements. Some of them are given below:
1. Problem Recognition:
The main aim of requirement analysis is to fully understand the main objective of each
requirement: why it is needed, whether it adds value to the product, whether it will be
beneficial, whether it increases the quality of the project, and whether it will have any
other effects. All these points are fully recognized during problem recognition so that the
requirements that are essential can be fulfilled to solve the business problems.
2. Evaluation and Synthesis:
Evaluation means judging whether something is worthwhile or not, and synthesis means
to create or form something. Some tasks that are important in the evaluation and
synthesis of software requirements are:
To define all the functions of the software that are necessary.
To define all data objects that are present externally and are easily observable.
To evaluate whether the flow of data is worthwhile.
To fully understand the overall behavior of the system, that is, the overall working of
the system.
To identify and discover the design constraints.
To define and establish the character of each system interface, to fully understand how
the system interacts with two or more components or how they interact with one another.
3. Modeling:
After the complete gathering of information from the above tasks, functional and
behavioral models are established after checking the function and behavior of the system
using a domain model, also known as the conceptual model.
4. Specification:
The software requirement specification (SRS), which specifies each requirement as
either functional or non-functional, should be developed.
5. Review:
After developing the SRS, it must be reviewed to check whether it can be improved, and
it must be refined to make it better and increase its quality.
SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
After the analyst has gathered all the required information regarding the software to
be developed, and has removed all incompleteness, inconsistencies, and anomalies from the
specification, he starts to systematically organise the requirements in the form of an SRS
document. The SRS document usually contains all the user requirements in a structured
though informal form.
Among all the documents produced during a software development life cycle, SRS
document is probably the most important document and is the toughest to write. One reason
for this difficulty is that the SRS document is expected to cater to the needs of a wide variety
of audiences. In the following subsection, we discuss the different categories of users of an
SRS document and their needs from it.
Users of SRS Document
Usually a large number of different people need the SRS document for very different
purposes. Some of the important categories of users of the SRS document and their needs for
use are as follows:
Users, customers, and marketing personnel: These stakeholders need to refer to the SRS
document to ensure that the system as described in the document will meet their needs.
Remember that the customer may not be the user of the software, but may be someone
employed or designated by the user. For generic products, the marketing personnel need to
understand the requirements that they can explain to the customers.
Software developers: The software developers refer to the SRS document to make sure that
they are developing exactly what is required by the customer.
Test engineers: The test engineers use the SRS document to understand the functionalities,
and based on this write the test cases to validate its working. They need the required
functionality to be clearly described, and the input and output data to be identified
precisely.
User documentation writers: The user documentation writers need to read the SRS
document to ensure that they understand the features of the product well enough to be able to
write the users’ manuals.
Project managers: The project managers refer to the SRS document to ensure that they can
estimate the cost of the project easily and that it contains all the information required to
plan the project.
Maintenance engineers: The SRS document helps the maintenance engineers to understand
the functionalities supported by the system. A clear knowledge of the functionalities can help
them to understand the design and code. Also, a proper understanding of the functionalities
supported enables them to determine the specific modifications to the system’s functionalities
that would be needed for a specific purpose.
Many software engineers in a project consider the SRS document to be a reference
document. However, it is often more appropriate to think of the SRS document as the
documentation of a contract between the development team and the customer. In fact, the
SRS document can be used to resolve any disagreements between the developers and the
customers that may arise in the future. The SRS document can even be used as a legal
document to settle disputes between the customers and the developers in a court of law. Once
the customer agrees to the SRS document, the development team proceeds to develop the
software and ensure that it conforms to all the requirements mentioned in the SRS document.
Why Spend Time and Resource to Develop an SRS Document?
A well-formulated SRS document finds a variety of uses other than its primary
intended use as a basis for starting the software development work. In the following
subsection, we identify the important uses of a well-formulated SRS document:
Forms an agreement between the customers and the developers: A good SRS document
sets the stage for the customers to form their expectation about the software and the
developers about what is expected from the software.
Reduces future reworks: The process of preparation of the SRS document forces the
stakeholders to rigorously think about all of the requirements before design and development
get underway. This reduces later redesign, recoding, and retesting. Careful review of the SRS
document can reveal omissions, misunderstandings, and inconsistencies early in the
development cycle.
Provides a basis for estimating costs and schedules: Project managers usually estimate the
size of the software from an analysis of the SRS document. Based on this estimate they make
other estimations such as the effort required to develop the software and the total cost of
development. The SRS document also serves as a basis for price negotiations with the
customer. The project manager also uses the SRS document for work scheduling.
Provides a baseline for validation and verification: The SRS document provides a baseline
against which compliance of the developed software can be checked. It is also used by the
test engineers to create the test plan.
Facilitates future extensions: The SRS document usually serves as a basis for planning
future enhancements. Before we discuss about how to write an SRS document, we first
discuss the characteristics of a good SRS document and the pitfalls that one must consciously
avoid while writing an SRS document.
Characteristics of a Good SRS Document
The skill of writing a good SRS document usually comes from the experience gained
from writing SRS documents for many projects. However, the analyst should be aware of the
desirable qualities that every good SRS document should possess. IEEE Recommended
Practice for Software Requirements Specifications [IEEE, 1998] describes the content and
qualities of a good software requirements specification (SRS). Some of the identified
desirable qualities of an SRS document are the following:
Concise: The SRS document should be concise and at the same time unambiguous,
consistent, and complete. Verbose and irrelevant descriptions reduce readability and also
increase the possibilities of errors in the document.
Implementation-independent: The SRS should be free of design and implementation
decisions unless those decisions reflect actual requirements. It should only specify what the
system should do and refrain from stating how to do these. This means that the SRS
document should specify the externally visible behaviour of the system and not discuss the
implementation issues. The view with which a requirements specification is written has been
shown in Figure 1. Observe that in Figure 1, the SRS document describes the output
produced for the different types of input, and a description of the processing required to
produce the output from the input (shown in ellipses); the internal working of the software
is not discussed at all.
FIGURE 1 The black-box view of a system as performing a set of functions.
Traceable: It should be possible to trace a specific requirement to the design elements that
implement it and vice versa. Similarly, it should be possible to trace a requirement to the code
segments that implement it and the test cases that test this requirement and vice versa.
Traceability is also important to verify the results of a phase with respect to the previous
phase and to analyse the impact of changing a requirement on the design elements and the
code.
Modifiable: Customers frequently change the requirements during the software development
due to a variety of reasons. Therefore, in practice the SRS document undergoes several
revisions during software development. Also, an SRS document is often modified after the
project completes to accommodate future enhancements and evolution. To cope with the
requirements changes, the SRS document should be easily modifiable. For this, an SRS
document should be well-structured. A well-structured document is easy to understand and
modify. Having the description of a requirement scattered across many places in the SRS
document may not be wrong—but it tends to make the requirement difficult to understand
and also any modification to the requirement would become difficult as it would require
changes to be made at large number of places in the document.
Identification of response to undesired events: The SRS document should discuss the
system responses to various undesired events and exceptional conditions that may arise.
Verifiable: All requirements of the system as documented in the SRS document should be
verifiable. This means that it should be possible to design test cases based on the description
of the functionality to check whether or not the requirements have been met in an implementation. A
requirement such as “the system should be user friendly” is not verifiable. On the other hand,
the requirement—“When the name of a book is entered, the software should display whether
the book is available for issue or it has been loaned out” is verifiable. Any feature of the
required system that is not verifiable should be listed separately in the goals of the
implementation section of the SRS document.
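To see what verifiability buys us in practice, the sketch below derives a concrete test from the book-availability requirement quoted above. The function check_availability and the catalogue layout are hypothetical stand-ins introduced only for this illustration.

# Hypothetical stand-in for the library function under test.
def check_availability(catalogue, book_name):
    # Return the issue status of the named book.
    return "available" if catalogue.get(book_name, 0) > 0 else "loaned out"

# A test case that follows directly from the verifiable requirement:
# entering a book name must yield its issue status.
def test_book_availability():
    catalogue = {"Software Engineering": 2, "Compilers": 0}
    assert check_availability(catalogue, "Software Engineering") == "available"
    assert check_availability(catalogue, "Compilers") == "loaned out"

test_book_availability()

No comparable test can be written for “the system should be user friendly”, which is precisely why such a requirement is not verifiable.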
Attributes of Bad SRS Documents
SRS documents written by novices frequently suffer from a variety of problems. As
discussed earlier, the most damaging problems are incompleteness, ambiguity, and
contradictions. There are many other types of problems that a specification document might
suffer from. By knowing these problems, one can try to avoid them while writing an SRS
document. Some of the important categories of problems that many SRS documents suffer
from are as follows:
Over-specification: It occurs when the analyst tries to address the “how to” aspects in the
SRS document. For example, in the library automation problem, one should not specify
whether the library membership records need to be stored indexed on the member’s first
name or on the library member’s identification (ID) number. Over-specification restricts the
freedom of the designers in arriving at a good design solution.
Forward references: One should not refer to aspects that are discussed much later in the
SRS document. Forward referencing seriously reduces readability of the specification.
Wishful thinking: This type of problem concerns descriptions of aspects that would be
difficult to implement.
Noise: The term noise refers to presence of material not directly relevant to the software
development process. For example, in the register customer function, suppose the analyst
writes that customer registration department is manned by clerks who report for work
between 8 am and 5 pm, 7 days a week. This information can be called noise as it would
hardly be of any use to the software developers and would unnecessarily clutter the SRS
document, diverting the attention from the crucial points. Several other “sins” of SRS
documents can be listed; such a list can be used to guard against writing a bad SRS
document and also as a checklist while reviewing an SRS document.
Important Categories of Customer Requirements
A good SRS document should properly categorize and organise the requirements into
different sections.
Functional requirements
The functional requirements capture the functionalities required by the users from the
system. We have already pointed out in Chapter 2 that it is useful to consider a software as
offering a set of functions {f_i} to the user. These functions can be considered similar to a
mathematical function f : I → O, meaning that a function transforms an element (i_i) in the
input domain (I) to a value (o_i) in the output domain (O). This functional view of a system is
shown schematically in Figure 1. Each function f_i of the system can be considered as reading
certain data i_i, and then transforming a set of input data (i_i) to the corresponding set of
output data (o_i). The functional requirements of the system should clearly describe each
functionality that the system would support along with the corresponding input and output data set.
Considering that the functional requirements are a crucial part of the SRS document, we
discuss functional requirements in more detail in Section 4.2.6. Section 4.2.7 discusses how
the functional requirements can be identified from a problem description. Finally, Section
4.2.8 discusses how the functional requirements can be documented effectively.
Non-functional requirements
The non-functional requirements are non-negotiable obligations that must be
supported by the software. The non-functional requirements capture those requirements of the
customer that cannot be expressed as functions (i.e., accepting input data and producing
output data). Non-functional requirements usually address aspects concerning external
interfaces, user interfaces, maintainability, portability, usability, maximum number of
concurrent users, timing, and throughput (transactions per second, etc.). The non-functional
requirements can be critical in the sense that any failure by the developed software to achieve
some minimum defined level in these requirements can be considered as a failure and make
the software unacceptable by the customer. In the following subsections, we discuss the
different categories of non-functional requirements that are described under three different
sections:
Design and implementation constraints: Design and implementation constraints are an
important category of non-functional requirements that describe any items or issues that will
limit the options available to the developers. Some example constraints can be—corporate or
regulatory policies that need to be honoured; hardware limitations; interfaces with other
applications; specific technologies, tools, and databases to be used; specific communications
protocols to be used; security considerations; design conventions or programming standards
to be followed, etc. Consider an example of a constraint that can be included in this section—
Oracle DBMS needs to be used as this would facilitate easy interfacing with other
applications that are already operational in the organisation.
External interfaces required: Examples of external interfaces are—hardware, software and
communication interfaces, user interfaces, report formats, etc. To specify the user interfaces,
each interface between the software and the users must be described. The description may
include sample screen images, any GUI standards or style guides that are to be followed,
screen layout constraints, standard buttons and functions (e.g., help) that will appear on every
screen, keyboard shortcuts, error message display standards, and so on. One example of a
user interface requirement of a software can be that it should be usable by factory shop floor
workers who may not even have a high school degree. The details of the user interface design
such as screen designs, menu structure, navigation diagram, etc. should be documented in a
separate user interface specification document.
Other non-functional requirements: This section contains a description of non-functional
requirements that are neither design constraints nor external interface requirements.
An important example is a performance requirement such as the number of transactions
completed per unit time. Besides performance requirements, the other non-functional
requirements to be described in this section may include reliability issues, accuracy of results,
and security issues.
Goals of implementation
The ‘goals of implementation’ part of the SRS document offers some general suggestions
regarding the software to be developed. These are not binding on the developers, and they
may take these suggestions into account if possible. For example, the developers may use
these suggestions while choosing among different design solutions.
The goals of implementation section might document issues such as easier revisions
to the system functionalities that may be required in the future, easier support for new devices
to be supported in the future, reusability issues, etc. These are the items which the developers
might keep in their mind during development so that the developed system may meet some
aspects that are not required immediately. It is useful to remember that anything that would
be tested by the user, and on whose outcome the acceptance of the system would depend, is
usually considered a requirement to be fulfilled by the system and not a goal, and vice versa.
Functional Requirements
In order to document the functional requirements of a system, it is necessary to first
learn to identify the high-level functions of the systems by reading the informal
documentation of the gathered requirements. The high-level functions would be split into
smaller subrequirements. Each high-level function is an instance of use of the system (use
case) by the user in some way.
However, the above is not a very accurate definition of a high-level function. For
example, how useful must a piece of work performed by the system be for it to be called ‘a
useful piece of work’? The printing of the transaction statement during an ATM transaction
should not be considered a high-level requirement, because the user does not specifically
request this activity. The receipt gets printed automatically as part of the withdraw money
function. Usually, the user invokes (requests) the services of each high-level requirement. It
may therefore be possible to treat print receipt as part of the withdraw money function rather
than treating it as a high-level function. For some of the high-level functions, we might
therefore have to debate whether to consider them high-level functions or not. However, it
would become possible to identify most of the high-level functions without much difficulty
after practising the solutions to a few exercise problems.
Each high-level requirement typically involves accepting some data from the user through a
user interface, transforming it to the required response, and then displaying the system
response in proper format. For example, in a library automation software, a high-level
functional requirement might be search-book. This function involves accepting a book name
or a set of key words from the user, running a matching algorithm on the book list, and finally
outputting the matched books. The generated system response can be in several forms, e.g.,
display on the terminal, a print out, some data transferred to the other systems, etc. However,
in degenerate cases, a high-level requirement may not involve any data input to the system or
production of displayable results. For example, it may involve switching on a light or starting
a motor in an embedded application.
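As a minimal sketch of the search-book function just described, the fragment below accepts a set of keywords, runs a (deliberately simple) matching rule over the book list, and outputs the matched books. The keyword-containment rule and the data layout are assumptions made for illustration, not the only possible design.

# Sketch of the search-book high-level function: accept keywords,
# run a matching algorithm over the book list, output matched books.
def search_book(book_list, keywords):
    keywords = [k.lower() for k in keywords]
    return [title for title in book_list
            if any(k in title.lower() for k in keywords)]

books = ["Software Engineering", "Operating Systems", "Software Design"]
print(search_book(books, ["software"]))
# ['Software Engineering', 'Software Design']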
Are high-level functions of a system similar to mathematical functions?
We all know that a mathematical function transforms input data to output data. A high-level
function also transforms certain input data to output data. However, except for very simple
high-level functions, a function rarely reads all its required data in one go and rarely outputs
all the results in one shot. In fact, a high-level function usually involves a series of
interactions between the system and one or more users. An example of the interactions that
may occur in a single high-level requirement has been shown in Figure 2. In Figure 2, the
user inputs have been represented by rectangles and the responses produced by the system by
circles. Observe that the rectangles and circles alternate in the execution of a single high-level
function of the system, indicating a series of requests from the user and the corresponding
responses from the system.
Typically, there is some initial data input by the user. After accepting this, the system may
display some response (called system action). Based on this, the user may input further data,
and so on.
In Figure 2, the different scenarios occur depending on the amount entered for withdrawal.
The different scenarios are essentially different behaviour exhibited by the system for the
same high-level function. Typically, each user input and the corresponding system action
may be considered as a sub-requirement of a high-level requirement. Thus, each high-level
requirement can consist of several sub-requirements.
How to Identify the Functional Requirements?
The high-level functional requirements often need to be identified either from an
informal problem description document or from a conceptual understanding of the problem.
Remember that there can be many types of users of a system and their requirements from the
system may be very different. So, it is often useful to first identify the different types of users
who might use the system and then try to identify the different services expected from the
software by different types of users. The decision regarding which functionality of the system
can be taken to be a high-level functional requirement and the one that can be considered as
part of another function (that is, a subfunction) leaves scope for some subjectivity. For
example, consider the issue-book function in a Library Automation System. Suppose, when a
user invokes the issue-book function, the system would require the user to enter the details of
each book to be issued. Should the entry of the book details be considered as a high-level
function, or as only a part of the issue-book function? Many times, the choice is obvious. But,
sometimes it requires making non-trivial decisions.
How to Document the Functional Requirements?
Once all the high-level functional requirements have been identified and the requirements
problems have been eliminated, these are documented. A function can be documented by
identifying the state at which the data is to be input to the system, its input data domain, the
output data domain, and the type of processing to be carried on the input data to obtain the
output data. We now illustrate the specification of the functional requirements through two
examples. Let us first try to document the withdraw-cash function of an automated teller
machine (ATM) system in the following. The withdraw-cash function is a high-level
requirement. It has several sub-requirements corresponding to the different user interactions.
These user interaction sequences may vary from one invocation to another depending on
some conditions. These different interaction sequences capture the different scenarios. To
accurately describe a functional requirement, we must document all the different scenarios
that may occur.
Example: Withdraw cash from ATM
R.1: Withdraw cash
Description: The withdraw-cash function first determines the type of account that the user
has and the account number from which the user wishes to withdraw cash. It checks the
balance to determine whether the requested amount is available in the account. If enough
balance is available, it outputs the required cash, otherwise it generates an error message.
R.1.1: Select withdraw amount option
Input: “Withdraw amount” option selected
Output: User prompted to enter the account type
R.1.2: Select account type
Input: User selects option from any one of the following—savings/checking/deposit.
Output: Prompt to enter amount
R.1.3: Get required amount
Input: Amount to be withdrawn in integer values greater than 100 and less than 10,000 in
multiples of 100.
Output: The requested cash and printed transaction statement.
Processing: The amount is debited from the user’s account if sufficient balance is available;
otherwise an error message is displayed.
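The sub-requirements above can be mirrored in a sketch, with comments tracing each check back to its sub-requirement number. The account representation and the exact error messages are assumptions for illustration only.

# Illustrative sketch of withdraw-cash; comments trace each step
# back to the sub-requirement it realises.
def withdraw_cash(accounts, account_type, amount):
    # R.1.2: account type is one of savings/checking/deposit.
    if account_type not in accounts:
        return "Error: unknown account type"
    # R.1.3 input domain: amount between 100 and 10,000, multiple of 100.
    if not (100 < amount < 10000 and amount % 100 == 0):
        return "Error: invalid amount"
    # R.1.3 processing: debit only if sufficient balance is available.
    if accounts[account_type] < amount:
        return "Error: insufficient balance"
    accounts[account_type] -= amount
    return f"Dispensed {amount}; transaction statement printed"

accounts = {"savings": 5000, "checking": 800}
print(withdraw_cash(accounts, "savings", 1200))   # success scenario
print(withdraw_cash(accounts, "checking", 900))   # insufficient balance

Each call exercises one of the different scenarios that the functional requirement must document.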
Traceability
Traceability means that it would be possible to identify (trace) the specific design
component which implements a given requirement, the code part that corresponds to a given
design component, and test cases that test a given requirement. Thus, any given code
component can be traced to the corresponding design component, and a design component
can be traced to a specific requirement that it implements and vice versa. Traceability
analysis is an important concept and is frequently used during software development. For
example, by doing a traceability analysis, we can tell whether all the requirements have been
satisfactorily addressed in all phases. It can also be used to assess the impact of a
requirements change. That is, traceability makes it easy to identify which parts of the design
and code would be affected, when certain requirement change occurs. It can also be used to
study the impact of a bug that is known to exist in a code part on various requirements, etc.
To achieve traceability, it is necessary that each functional requirement should be
numbered uniquely and consistently. Proper numbering of the requirements makes it possible
for different documents to uniquely refer to specific requirements. An example scheme of
numbering the functional requirements is shown in the withdraw-cash example above, where
the functional requirements have been numbered R.1, R.2, etc., and the sub-requirements for
the requirement R.1 have been numbered R.1.1, R.1.2, etc.
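One lightweight way to operationalise this numbering is a traceability table that links each requirement to the design components, code units, and test cases realising it. All the entries below are hypothetical.

# Hypothetical forward-traceability table: requirement -> artefacts.
trace = {
    "R.1":   {"design": ["withdraw-cash module"],
              "code":   ["withdraw_cash()"],
              "tests":  ["TC-01", "TC-02"]},
    "R.1.3": {"design": ["amount-validation routine"],
              "code":   ["validate_amount()"],
              "tests":  ["TC-02"]},
}

# Impact analysis: which test cases are affected if R.1.3 changes?
print(trace["R.1.3"]["tests"])                      # ['TC-02']

# Backward traceability: which requirements does TC-02 validate?
print([r for r, a in trace.items() if "TC-02" in a["tests"]])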
FORMAL SYSTEM SPECIFICATION
In recent years, formal techniques have emerged as a central issue in software
engineering. This is not accidental; precise specification, modelling, and
verification are recognised to be important in most engineering disciplines. Formal methods
provide us with tools to precisely describe a system and show that a system is correctly
implemented. We say a system is correctly implemented when it satisfies its given
specification. The specification of a system can be given either as a list of its desirable
properties (property oriented approach) or as an abstract model of the system (model-oriented
approach). These two approaches are discussed here. Before discussing representative
examples of these two types of formal specification techniques, we first discuss a few basic
concepts in formal specification. We will first highlight some important concepts in formal
methods, and examine the merits and demerits of using formal techniques.
What is a Formal Technique?
A formal technique is a mathematical method to specify a hardware and/or software
system, verify whether a specification is realisable, verify that an implementation satisfies its
specification, prove properties of a system without necessarily running the system, etc. The
mathematical basis of a formal method is provided by its specification language. More
precisely, a formal specification language consists of two sets—syn and sem, and a relation
sat between them. The set syn is called the syntactic domain, the set sem is called the
semantic domain, and the relation sat is called the satisfaction relation. For a given
specification syn, and model of the system sem, if sat (syn, sem), then syn is said to be the
specification of sem, and sem is said to be the specificand of syn.
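As a toy analogy for the syn/sem/sat terminology (much weaker than a real formal method, which proves rather than tests), take formulas to be Python predicates and models to be Python functions; sat(syn, sem) then holds when the model’s behaviour satisfies the predicate on the inputs examined.

# Toy illustration of a satisfaction relation sat(syn, sem).
# syn: a formula from the syntactic domain (here, a predicate).
# sem: a model from the semantic domain (here, a function).
def sat(syn, sem, sample_inputs):
    # sem is a specificand of syn if the property holds on every input checked.
    return all(syn(xs, sem(xs)) for xs in sample_inputs)

# Specification: the output equals the input arranged in ascending order.
syn = lambda inp, out: out == sorted(inp)

# A candidate model of the system.
sem = lambda xs: sorted(xs)

print(sat(syn, sem, [[3, 1, 2], [], [5, 5, 1]]))    # True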
The generally accepted paradigm for system development is through a hierarchy of
abstractions. Each stage in this hierarchy is an implementation of its preceding stage and a
specification of the succeeding stage. The different stages in this system development activity
are requirements specification, functional design, architectural design, detailed design,
coding, implementation, etc. In general, formal techniques can be used at every stage of the
system development activity to verify that the output of one stage conforms
to the output of the previous stage.
Syntactic domains
The syntactic domain of a formal specification language consists of an alphabet of
symbols and a set of formation rules to construct well-formed formulas from the alphabet.
The well-formed formulas are used to specify a system.
Semantic domains
Formal techniques can have considerably different semantic domains. Abstract data
type specification languages are used to specify algebras, theories, and programs.
Programming languages are used to specify functions from input to output values. Concurrent
and distributed system specification languages are used to specify state sequences, event
sequences, state-transition sequences, synchronisation trees, partial orders, state machines,
etc.
Satisfaction relation
Given the model of a system, it is important to determine whether an element of the
semantic domain satisfies the specifications. This satisfaction is determined by using a
homomorphism known as semantic abstraction function. The semantic abstraction function
maps the elements of the semantic domain into equivalence classes. There can be different
specifications describing different aspects of a system model, possibly using different
specification languages. Some of these specifications describe the system’s behaviour and the
others describe the system’s structure. Consequently, two broad classes of semantic
abstraction functions are defined—those that preserve a system’s behaviour and those that
preserve a system’s structure.
Operational Semantics
Informally, the operational semantics of a formal method is the way computations are
represented. There are different types of operational semantics according to what is meant by
a single run of the system and how the runs are grouped together to describe the behaviour of
the system. In the following subsection we discuss some of the commonly used operational
semantics.
Linear semantics: In this approach, a run of a system is described by a sequence (possibly
infinite) of events or states. The concurrent activities of the system are represented by non-
deterministic interleavings of the atomic actions. For example, a concurrent activity a || b is
represented by the set of sequential activities a; b and b; a. This is a simple but rather
unnatural representation of concurrency. The behaviour of a system in this model consists of
the set of all its runs. To make this model more realistic, justice and fairness restrictions are
usually imposed on computations to exclude the unwanted interleavings. (A small sketch
enumerating such interleavings appears at the end of this discussion of operational semantics.)
Branching semantics: In this approach, the behaviour of a system is represented by a
directed graph. The nodes of the graph represent the possible states in the evolution of a
system. The descendants of each node of the graph represent the states which can be
generated by any of the atomic actions enabled at that state. Although this semantic model
distinguishes the branching points in a computation, still it represents concurrency by
interleaving.
Maximally parallel semantics: In this approach, all the concurrent actions enabled at any
state are assumed to be taken together. This is again not a natural model of concurrency since
it implicitly assumes the availability of all the required computational resources.
Partial order semantics: Under this view, the semantics ascribed to a system is a structure
of states satisfying a partial order relation among the states (events). The partial order
represents a precedence ordering among events, and constrains some events to occur only
after some other events have occurred; while the occurrence of other events (called
concurrent events) is considered to be incomparable. This fact identifies concurrency as a
phenomenon not translatable to any interleaved representation.
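Returning to linear semantics, the sketch below enumerates the non-deterministic interleavings of two sequential activities, showing how the concurrent activity a || b reduces to the two runs a; b and b; a, and how longer activities interleave combinatorially.

# Enumerate all interleavings of two sequences of atomic actions,
# as in the linear-semantics view of concurrency (a || b).
def interleavings(a, b):
    if not a:
        return [b]
    if not b:
        return [a]
    return ([[a[0]] + rest for rest in interleavings(a[1:], b)] +
            [[b[0]] + rest for rest in interleavings(a, b[1:])])

print(interleavings(["a"], ["b"]))
# [['a', 'b'], ['b', 'a']] -- exactly the runs a; b and b; a
print(len(interleavings(["a1", "a2"], ["b1", "b2"])))
# 6 runs for two two-action activities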
SOFTWARE DESIGN:
During the software design phase, the design document is produced, based on the
customer requirements as documented in the SRS document. This view of a design process
has been shown schematically in Fig. 1. As shown in Fig. 1, the design process starts
using the SRS document and completes with the production of the design document. The
design document produced at the end of the design phase should be implementable using a
programming language in the subsequent (coding) phase.
Fig. 1 The design process.
OVERVIEW OF THE DESIGN PROCESS
The design process essentially transforms the SRS document into a design document.
In the following sections and subsections, we will discuss a few important issues associated
with the design process.
Different modules required: The different modules in the solution should be identified.
Each module is a collection of functions and the data shared by these functions. Each module
should accomplish some well-defined task out of the overall responsibility of the software.
Each module should be named according to the task it performs. For example, in an academic
automation software, the module consisting of the functions and data necessary to accomplish
the task of registration of the students should be named handle student registration.
Control relationships among modules: A control relationship between two modules
essentially arises due to function calls across the two modules. The control relationships
existing among various modules should be identified in the design document.
Interfaces among different modules: The interface between two modules identifies the
exact data items that are exchanged between the two modules when one module invokes a
function of the other module. (A small sketch of a module and its interface follows this
discussion.)
Data structures of the individual modules: Each module normally stores some data that the
functions of the module need to share to accomplish the overall responsibility of the module.
Suitable data structures for storing and managing the data of a module need to be properly
designed and documented.
Algorithms required to implement the individual modules: Each function in a module
usually performs some processing activity. The algorithms required to accomplish the
processing activities of various modules need to be carefully designed and documented with
due considerations given to the accuracy of the results, space and time complexities. Starting
with the SRS document (as shown in Fig. 1), the design documents are produced through
iterations over a series of steps that we are going to discuss in this chapter and the subsequent
three chapters. The design documents are reviewed by the members of the development team
to ensure that the design solution conforms to the requirements specification.
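As promised above, here is a minimal sketch of one such module from the academic automation example: a small collection of functions together with the data they share, exposed through a narrow interface. All names and data layouts are illustrative assumptions.

# Sketch of a handle-student-registration module: functions plus the
# data structure they share, behind a small explicit interface.
class HandleStudentRegistration:
    def __init__(self):
        # Data structure internal to the module, shared by its functions.
        self._registrations = {}        # roll number -> list of course codes

    # Interface: the exact data items exchanged with calling modules.
    def register(self, roll_no, course_code):
        self._registrations.setdefault(roll_no, []).append(course_code)

    def courses_of(self, roll_no):
        return list(self._registrations.get(roll_no, []))

reg = HandleStudentRegistration()
reg.register("21CS101", "SE305")
print(reg.courses_of("21CS101"))        # ['SE305']

Other modules see only the register and courses_of operations; the registration data structure and any algorithm behind it remain decisions internal to this module.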
Classification of Design Activities
A good software design is seldom realised by using a single-step procedure; rather, it
requires iterating over a series of steps called the design activities. Let us first classify the
design activities before discussing them in detail. Depending on the order in which various
design activities are performed, we can broadly classify them into two important stages.
1. Preliminary (or high-level) design, and
2. Detailed design.
The meaning and scope of these two stages can vary considerably from one design
methodology to another. However, for the traditional function-oriented design approach, it is
possible to define the objectives of the high-level design as follows:
The outcome of high-level design is called the program structure or the software
architecture. High-level design is a crucial step in the overall design of a software. When the
high-level design is complete, the problem should have been decomposed into many small
functionally independent modules that are cohesive, have low coupling among themselves,
and are arranged in a hierarchy. Many different types of notations have been used to represent
a high-level design. A notation that is widely used for procedural development is a tree-like
diagram called the structure chart. Another popular design representation technique, called
UML, which is used to document object-oriented designs, involves developing several types
of diagrams to document the object-oriented design of a system. Though other
notations such as Jackson diagram [1975] or Warnier-Orr [1977, 1981] diagram are available
to document a software design, we confine our attention in this text to structure charts and
UML diagrams only.
Once the high-level design is complete, detailed design is undertaken.
The outcome of the detailed design stage is usually documented in the form of a
module specification (MSPEC) document. After the high-level design is complete, the
problem would have been decomposed into small modules, and the data structures and
algorithms to be used would be described using MSPECs that can be easily grasped by
programmers for initiating coding. In this text, we do not discuss MSPECs and confine our
attention to high-level design only.
Classification of Design Methodologies
The design activities vary considerably based on the specific design methodology
being used. A large number of software design methodologies are available. We can roughly
classify these methodologies into procedural and object-oriented approaches. These two
approaches are two fundamentally different design paradigms. In this chapter, we shall
discuss the important characteristics of these two fundamental design approaches. Over the
next three chapters, we shall study these two approaches in detail.
Do design techniques result in unique solutions?
Even while using the same design methodology, different designers usually arrive at
very different design solutions. The reason is that a design technique often requires the
designer to make many subjective decisions and work out compromises to contradictory
objectives. As a result, it is possible that even the same designer can work out many different
solutions to the same problem. Therefore, obtaining a good design would involve trying out
several alternatives (or candidate solutions) and picking out the best one. However, a
fundamental question that arises at this point is—how to distinguish a superior design solution
from an inferior one? Unless we know what a good software design is and how to distinguish
a superior design solution from an inferior one, we cannot possibly design one. We
investigate this issue in the next section.
Analysis versus design
Analysis and design activities differ in goal and scope. The analysis results are
generic and do not consider implementation or the issues associated with specific
platforms. The analysis model is usually documented using some graphical formalism. In
case of the function-oriented approach that we are going to discuss, the analysis model would
be documented using data flow diagrams (DFDs), whereas the design would be documented
using a structure chart. On the other hand, for the object-oriented approach, both the design model
and the analysis model will be documented using unified modelling language (UML). The
analysis model would normally be very difficult to implement using a programming
language.
The design model is obtained from the analysis model through transformations over a
series of steps. In contrast to the analysis model, the design model reflects several decisions
taken regarding the exact way the system is to be implemented. The design model should be
detailed enough to be easily implementable using a programming language.
HOW TO CHARACTERISE A GOOD SOFTWARE DESIGN?
Coming up with an accurate characterisation of a good software design that would
hold across diverse problem domains is certainly not easy. In fact, the definition of a “good”
software design can vary depending on the exact application being designed. For example,
“memory size used up by a program” may be an important way to characterize a good
solution for embedded software development—since embedded applications are often
required to work under severely limited memory sizes due to cost, space, or power
consumption considerations. For embedded applications, factors such as design
comprehensibility may take a back seat while judging the goodness of design. Thus for
embedded applications, one may sacrifice design comprehensibility to achieve code
compactness. Similarly, a criterion that is crucial for some application may need to be
almost completely ignored for another application. It is therefore
clear that the criteria used to judge a design solution can vary widely across different types of
applications. Not only do the criteria used to judge a design solution depend on the exact
application being designed, but to make the matter worse, there is no general agreement
among software engineers and researchers on the exact criteria to use for judging a design
even for a specific category of application. However, most researchers and software
engineers agree on a few desirable characteristics that every good software design for general
applications must possess. These characteristics are listed below:
Correctness: A good design should first of all be correct. That is, it should correctly
implement all the functionalities of the system.
Understandability: A good design should be easily understandable. Unless a design solution
is easily understandable, it would be difficult to implement and maintain it.
Efficiency: A good design solution should adequately address resource, time, and cost
optimisation issues.
Maintainability: A good design should be easy to change. This is an important requirement,
since change requests usually keep coming from the customer even after product release.
Understandability of a Design: A Major Concern
While performing the design of a certain problem, assume that we have arrived at a
large number of design solutions and need to choose the best one. Obviously all incorrect
designs have to be discarded first. Out of the correct design solutions, how can we identify
the best one? Recollect from our discussions in Chapter 1 that a good design should help
overcome the human cognitive limitations that arise due to limited short-term memory. A
large problem overwhelms the human mind, and a poor design would make the matter worse.
Unless a design solution is easily understandable, it could lead to an implementation having a
large number of defects and at the same time tremendously pushing up the development
costs. Therefore, a good design solution should be simple and easily understandable. A
design that is easy to understand is also easy to develop and maintain. A complex design
would lead to severely increased life cycle costs. Unless a design is easily understandable, it
would require tremendous effort to implement, test, debug, and maintain it. We had already
pointed out in Chapter 2 that about 60 percent of the total effort in the life cycle of a typical
product is spent on maintenance. If the software is not easy to understand, not only would it
lead to increased development costs, the effort required to maintain the product would also
increase manifold. Besides, a design solution that is difficult to understand would lead to a
program that is full of bugs and is unreliable. Recollect that we had already discussed in
Chapter 1 that understandability of a design solution can be enhanced through clever
applications of the principles of abstraction and decomposition.
An understandable design is modular and layered
How can the understandability of two different designs be compared, so that we can pick
the better one? To be able to compare the understandability of two design solutions, we
should at least have an understanding of the general features that an easily understandable
design should possess. A design solution should have the following characteristics to be
easily understandable:
It should assign consistent and meaningful names to various design components.
It should make use of the principles of decomposition and abstraction in good measures
to simplify the design.
We had discussed the essential concepts behind the principles of abstraction and
decomposition in Chapter 1. But how can the abstraction and decomposition
principles be used in arriving at a design solution? These two principles are exploited by
design methodologies to make a design modular and layered. (Though there are also a few
other forms in which the abstraction and decomposition principles can be used in the design
solution, we discuss those later). We can now define the characteristics of an easily
understandable design as follows: A design solution is understandable, if it is modular and
the modules are arranged in distinct layers.
We now elaborate the concepts of modularity and layering of modules:
Modularity
A modular design is an effective decomposition of a problem. It is a basic
characteristic of any good design solution. A modular design, in simple words, implies that
the problem has been decomposed into a set of modules that have only limited interactions
with each other. Decomposition of a problem into modules facilitates taking advantage of the
divide and conquer principle. If different modules have either no interaction or only little
interaction with each other, then each module can be understood separately. This reduces the
perceived complexity of the design solution greatly. To understand why this is so, remember
that it may be very difficult to break a bunch of sticks which have been tied together, but very
easy to break the sticks individually.
It is not difficult to argue that modularity is an important characteristic of a good
design solution. But, even with this, how can we compare the modularity of two alternate
design solutions? From an inspection of the module structure, it is at least possible to
intuitively form an idea as to which design is more modular. For example, consider two
alternate design solutions to a problem that are represented in Fig. 2, in which the modules
M1, M2, etc. have been drawn as rectangles. The invocation of a module by another module
has been shown as an arrow. It can easily be seen that the design solution of Fig. 2(a)
would be easier to understand, since the interaction among the different modules is low. But,
can we quantitatively measure the modularity of a design solution? Unless we are able to
quantitatively measure the modularity of a design solution, it will be hard to say which design
solution is more modular than another. Unfortunately, there are no quantitative metrics
available yet to directly measure the modularity of a design. However, we can quantitatively
characterise the modularity of a design solution based on the cohesion and coupling existing
in the design.
A software design with high cohesion and low coupling among modules is the
effective problem decomposition we discussed in Chapter 1. Such a design would lead to
increased productivity during program development by bringing down the perceived problem
complexity.
Fig. 2 Two design solutions to the same problem.
Based on the classification of cohesion and coupling presented in Section 5.3, we would be
able to easily judge the cohesion and coupling existing in a design solution. From a
knowledge of the cohesion and coupling in a design, we
can form our own opinion about the modularity of the design solution. We shall define the
concepts of cohesion and coupling and the various classes of cohesion and coupling in
Section 5.3. Let us now discuss the other important characteristic of a good design solution—
layered design.
Layered design
A layered design is one in which when the call relations among different modules are
represented graphically, it would result in a tree-like diagram with clear layering. In a layered
design solution, the modules are arranged in a hierarchy of layers. A module can only invoke
functions of the modules in the layer immediately below it. The higher layer modules can be
considered to be similar to managers that invoke (order) the lower layer modules to get
certain tasks done. A layered design can be considered to implement control
abstraction, since a module at a lower layer is unaware of (and has no means to call) the
higher layer modules.
When a failure is detected while executing a module, it is obvious that the modules
below it can possibly be the source of the error. This greatly simplifies debugging since one
would need to concentrate only on a few modules to detect the error. We shall elaborate these
concepts governing the layered design of modules later in this unit.
COHESION AND COUPLING
We have so far discussed that effective problem decomposition is an important characteristic
of a good design. Good module decomposition is indicated through high cohesion of the
individual modules and low coupling of the modules with each other. Let us now define what
is meant by cohesion and coupling.
In this section, we first elaborate the concepts of cohesion and coupling. Subsequently, we
discuss the classification of cohesion and coupling.
Coupling: Intuitively, we can think of coupling as follows. Two modules are said to be
highly coupled, if either of the following two situations arises:
• If the function calls between two modules involve passing large chunks of shared data, the
modules are tightly coupled.
• If the interactions occur through some shared data, then also we say that they are highly
coupled.
If two modules either do not interact with each other at all or at best interact by passing no
data or only a few primitive data items, they are said to have low coupling.
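To make these two situations concrete, consider the following small C++ sketch. The names (computeInterest, AccountStore, accrueInterest) are purely illustrative assumptions and are not taken from any particular system:

#include <iostream>

/* Low coupling: the modules interact by passing a single primitive value. */
double computeInterest(double balance) {
    return balance * 0.05;            /* needs only one primitive data item */
}

/* High coupling: both "modules" read and write the same shared data. */
struct AccountStore {                 /* composite data shared by many functions */
    double balance;
    double interestAccrued;
};
AccountStore sharedStore;             /* global, visible to every function */

void accrueInterest() {               /* tightly coupled to sharedStore */
    sharedStore.interestAccrued += sharedStore.balance * 0.05;
}

int main() {
    std::cout << computeInterest(1000.0) << "\n";  /* low coupling */
    sharedStore.balance = 1000.0;
    accrueInterest();                              /* high coupling */
    std::cout << sharedStore.interestAccrued << "\n";
    return 0;
}

In the first case the caller and callee share only one primitive value; in the second, both depend on the layout and contents of sharedStore, so a change to either function can ripple into the other.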
Cohesion: To understand cohesion, let us first consider an analogy. Suppose you listened
to a talk by some speaker. You would call the speech cohesive, if all the sentences of
the speech played some role in giving the talk a single and focused theme. Now, we can
extend this to a module in a design solution. When the functions of the module co-operate
with each other for performing a single objective, then the module has good cohesion. If the
functions of the module do very different things and do not co-operate with each other to
perform a single piece of work, then the module has very poor cohesion.
Functional independence
By the term functional independence, we mean that a module performs a single task and
needs very little interaction with other modules. Functional independence is a key to any
good design primarily due to the following advantages it offers:
Error isolation: Whenever an error exists in a module, functional independence reduces the
chances of the error propagating to the other modules. The reason behind this is that if a
module is functionally independent, its interaction with other modules is low. Therefore, an
error existing in the module is very unlikely to affect the functioning of other modules.
Further, once a failure is detected, error isolation makes it very easy to locate the error. On
the other hand, when a module is not functionally independent, once a failure is detected in a
functionality provided by the module, the error could potentially lie in any of the large number
of modules it interacts with, and this makes locating the error difficult.
Scope of reuse: Reuse of a module for the development of other applications becomes easier.
The reason for this is as follows. A functionally independent module performs some well-
defined and precise task and the interfaces of the module with other modules are very few
and simple. A functionally independent module can therefore be easily taken out and reused
in a different program. On the other hand, if a module interacts with several other modules or
the functions of a module perform very different tasks, then it would be difficult to reuse it.
This is especially so, if the module accesses the data (or code) internal to other modules.
Understandability: When modules are functionally independent, complexity of the design is
greatly reduced. This is because of the fact that different modules can be understood in
isolation, since the modules are independent of each other. We have already pointed out
earlier in this unit that understandability is a major advantage of a modular design. Besides
the three advantages we have listed here, a modular design offers several others. We shall
not list those here, and leave it as an exercise for the reader to identify them.
Classification of Cohesiveness
Cohesiveness of a module is the degree to which the different functions of the module co-
operate to work towards a single objective. The different modules of a design can possess
different degrees of cohesion. The different classes of cohesion that modules can
possess are depicted in Figure 3. The cohesiveness increases from coincidental to functional
cohesion. That is, coincidental is the worst type of cohesion and functional is the best
cohesion possible. These different classes of cohesion are elaborated below.
Fig.3 Classification of cohesion.
Coincidental cohesion: A module is said to have coincidental cohesion, if it performs a set
of tasks that relate to each other very loosely, if at all. In this case, we can say that the module
contains a random collection of functions. It is likely that the functions have been placed in
the module out of pure coincidence rather than through some thought or design. The designs
made by novice programmers often possess this category of cohesion, since they often bundle
functions into modules rather arbitrarily. An example of a module with coincidental cohesion
has been shown in Figure 4(a). Observe that the different functions of the module carry out
very different and unrelated activities, ranging from issuing of library books and creating
library member records on one hand, to handling a librarian's leave request on the other.
Fig.4 Examples of cohesion.
Logical cohesion: A module is said to be logically cohesive, if all elements of the module
perform similar operations, such as error handling, data input, data output, etc. As an example
of logical cohesion, consider a module that contains a set of print functions to generate
various types of output reports such as grade sheets, salary slips, annual reports, etc.
Temporal cohesion: When a module contains functions that are related by the fact that these
functions are executed in the same time span, then the module is said to possess temporal
cohesion. As an example, consider the following situation. When a computer is booted,
several functions need to be performed. These include initialisation of memory and devices,
loading the operating system, etc. When a single module performs all these tasks, then the
module can be said to exhibit temporal cohesion. Similarly, a module would exhibit temporal
cohesion, if it comprises functions for performing the initialisation, start-up, or shut-down of
some process.
Procedural cohesion: A module is said to possess procedural cohesion, if the set of
functions of the module are executed one after the other, though these functions may work
towards entirely different purposes and operate on very different data. Consider the activities
associated with order processing in a trading house. The functions login(), place-order(),
check-order(), print-bill(), place-order-on-vendor(), update-inventory(), and logout() all do
different things and operate on different data. However, they are normally executed one after
the other during typical order processing by a sales clerk.
Communicational cohesion: A module is said to have communicational cohesion, if all
functions of the module refer to or update the same data structure. As an example of
communicational cohesion, consider a module named student in which the different functions in the
module such as admitStudent, enterMarks, printGradeSheet, etc. access and manipulate data
stored in an array named studentRecords defined within the module.
Sequential cohesion: A module is said to possess sequential cohesion, if the different
functions of the module execute in a sequence, and the output from one function is input to
the next in the sequence. As an example, consider the following situation in an on-line store:
after a customer requests some item, it is first determined whether the item is in stock; if it is
not, an order for it is placed with a vendor. If the functions create-order(), check-item-
availability(), and place-order-on-vendor() are placed in a single module, then the module
would exhibit sequential cohesion. Observe that the output of create-order() is input to
check-item-availability() (which checks whether the items are available in the required
quantities in the inventory), whose output, in turn, is input to place-order-on-vendor().
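A minimal C++ rendering of this pipeline may help. The data type Order and all function bodies are hypothetical stand-ins for real order-processing logic:

#include <iostream>
#include <string>

struct Order { std::string item; int quantity; };

Order createOrder(const std::string& item, int quantity) {
    return Order{item, quantity};       /* output feeds the next function */
}

bool checkItemAvailability(const Order& order) {
    return order.quantity <= 10;        /* stand-in for a real inventory check */
}

void placeOrderOnVendor(const Order& order) {
    std::cout << "Ordering " << order.quantity
              << " x " << order.item << " from vendor\n";
}

int main() {
    Order order = createOrder("stapler", 30);  /* step 1 */
    if (!checkItemAvailability(order))         /* step 2 consumes step 1's output */
        placeOrderOnVendor(order);             /* step 3 runs when out of stock */
    return 0;
}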
Functional cohesion: A module is said to possess functional cohesion, if different functions
of the module co-operate to complete a single task. For example, a module containing all the
functions required to manage employees’ pay-roll displays functional cohesion. In this case,
all the functions of the module (e.g., computeOvertime(), computeWorkHours(),
computeDeductions(), etc.) work together to generate the payslips of the employees. Another
example of a module possessing functional cohesion has been shown in Figure 4(b). In this
example, the functions issue-book(), return-book(), query-book(), and find-borrower()
together manage all activities concerned with book lending. When a module possesses
functional cohesion, we should be able to describe what the module does using only one
simple sentence. For example, for the module of Figure 4(b), we can describe the overall
responsibility of the module by saying, “It manages the book lending procedure of the
library.”
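As a rough illustration, the two library modules of Figure 4 may be sketched in C++ as namespaces. A namespace merely approximates a module here, and the function bodies are hypothetical stubs:

#include <iostream>

/* Coincidental cohesion (cf. Figure 4(a)): unrelated activities bundled together. */
namespace misc_module {
    void issueBook()            { std::cout << "issuing a book\n"; }
    void createMemberRecord()   { std::cout << "creating a member record\n"; }
    void handleLibrarianLeave() { std::cout << "handling a leave request\n"; }
}

/* Functional cohesion (cf. Figure 4(b)): every function serves book lending. */
namespace book_lending {
    void issueBook()    { std::cout << "issuing a book\n"; }
    void returnBook()   { std::cout << "returning a book\n"; }
    void queryBook()    { std::cout << "querying a book\n"; }
    void findBorrower() { std::cout << "finding the borrower\n"; }
}

int main() {
    book_lending::issueBook();  /* one simple sentence describes this module */
    return 0;
}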
A simple way to determine the cohesiveness of any given module is as follows. First examine
what the functions of the module perform. Then, try to write down a sentence to describe
the overall work performed by the module. If you need a compound sentence to describe the
functionality of the module, then it has sequential or communicational cohesion. If you need
words such as “first”, “next”, “after”, “then”, etc., then it possesses sequential or temporal
cohesion. If it needs words such as “initialise”, “setup”, “shutdown”, etc., to define its
functionality, then it has temporal cohesion.
We can now make the following observation. A cohesive module is one in which the
functions interact among themselves heavily to achieve a single goal. As a result, if any of
these functions is moved to a different module, the coupling would increase, as the functions
would now interact across two different modules.
Classification of Coupling
The coupling between two modules indicates the degree of interdependence between them.
Intuitively, if two modules interchange large amounts of data, then they are highly
interdependent or coupled. We can alternately state this concept as follows: the degree of
coupling between two modules depends on their interface complexity. The interface
complexity is determined based on the number of parameters and the complexity of the
parameters that are interchanged while one module invokes the functions of the other module.
Let us now classify the different types of coupling that can exist between two modules.
Between any two interacting modules, any of the following five different types of coupling
can exist. These different types of coupling, in increasing order of their severity, have been
shown in Figure 5.
Fig.5 Classification of coupling.
Data coupling: Two modules are data coupled, if they communicate using an elementary
data item that is passed as a parameter between the two, e.g. an integer, a float, a character,
etc. This data item should be problem related and not used for control purposes.
Stamp coupling: Two modules are stamp coupled, if they communicate using a composite
data item such as a record in PASCAL or a structure in C.
Control coupling: Control coupling exists between two modules, if data from one module is
used to direct the order of instruction execution in another. An example of control coupling is
a flag set in one module and tested in another module.
Common coupling: Two modules are common coupled, if they share some global data
items.
Content coupling: Content coupling exists between two modules, if they share code. That is,
a jump from one module into the code of another module can occur. Modern high-level
programming languages such as C do not support such jumps across modules.
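The following C++ fragments, with hypothetical names throughout, sketch four of these coupling classes. Content coupling is omitted since, as noted above, modern languages do not support jumps across modules:

#include <iostream>
#include <string>

/* Data coupling: only an elementary data item crosses the interface. */
double squareArea(double side) { return side * side; }

/* Stamp coupling: a composite data item (a structure) crosses the interface. */
struct Employee { std::string name; double basicPay; };
double computePay(const Employee& e) { return e.basicPay * 1.1; }

/* Control coupling: a flag set by the caller directs what the callee does. */
void printReport(bool summaryOnly) {
    if (summaryOnly) std::cout << "summary\n";
    else             std::cout << "full report\n";
}

/* Common coupling: the two functions communicate through a global data item. */
int errorCode = 0;                    /* shared global */
void producer() { errorCode = 42; }
void consumer() { std::cout << "last error: " << errorCode << "\n"; }

int main() {
    std::cout << squareArea(3.0) << "\n";                        /* data   */
    std::cout << computePay(Employee{"A. Rao", 1000.0}) << "\n"; /* stamp  */
    printReport(true);                                           /* control*/
    producer();
    consumer();                                                  /* common */
    return 0;
}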
The degree of coupling increases from data coupling to content coupling. High coupling
among modules not only makes a design solution difficult to understand and maintain, but
also increases the development effort and makes it very difficult to get the modules developed
independently by different team members.
LAYERED ARRANGEMENT OF MODULES
The control hierarchy represents the organisation of program components in terms of
their call relationships. Thus we can say that the control hierarchy of a design is determined
by the order in which different modules call each other. Many different types of notations
have been used to represent the control hierarchy. The most common notation is a tree-like
diagram known as a structure chart.
In a layered design solution, the modules are arranged into several layers based on
their call relationships. A module is allowed to call only the modules that are at a lower layer.
That is, a module should not call a module that is either at a higher layer or even in the same
layer. Figure 6(a) shows a layered design, whereas Figure 6(b) shows a design that is not
layered. Observe that the design solution shown in Figure 6(b) is actually not layered, since
all the modules can be considered to be in the same layer. In the following, we state the
significance of a layered design and subsequently explain it.
In a layered design, the top-most module in the hierarchy can be considered as a
manager that only invokes the services of the lower layer modules to discharge its
responsibility. The modules at the intermediate layers offer services to their higher layer by
invoking the services of the lower layer modules and also by doing some work themselves to
a limited extent. The modules at the lowest layer are the worker modules. These do not
invoke services of any module and entirely carry out their responsibilities by themselves.
Understanding a layered design is easier since to understand one module, one would have to
at best consider the modules at the lower layers (that is, the modules whose services it
invokes). Besides, in a layered design errors are isolated, since an error in one module can
affect only the higher layer modules. As a result, in case of any failure of a module, only the
modules at the lower levels need to be investigated for the possible error. Thus, debugging
time reduces significantly in a layered design. On the other hand, if the different modules call
each other arbitrarily, then this situation would correspond to modules arranged in a single
layer. Locating an error would be both difficult and time consuming. This is because, once a
failure is observed, the cause of failure (i.e. error) can potentially be in any module, and all
modules would have to be investigated for the error.
In the following, we discuss some important concepts and terminologies associated
with a layered design:
Superordinate and subordinate modules: In a control hierarchy, a module that controls
another module is said to be superordinate to it. Conversely, a module controlled by another
module is said to be subordinate to the controller.
Visibility: A module B is said to be visible to another module A, if A directly calls B. Thus,
only the immediately lower layer modules are said to be visible to a module.
Control abstraction: In a layered design, a module should only invoke the functions of the
modules that are in the layer immediately below it. In other words, the modules at the higher
layers should not be visible (that is, they should be abstracted out) to the modules at the lower
layers. This is referred to as control abstraction.
Depth and width: Depth and width of a control hierarchy provide an indication of the
number of layers and the overall span of control respectively. For the design of Figure 6(a),
the depth is 3 and the width is also 3.
Fan-out: Fan-out is a measure of the number of modules that are directly controlled by a
given module. In Figure 6(a), the fan-out of the module M1 is 3. A design in which the
modules have very high fan-out numbers is not a good design. The reason for this is that a
very high fan-out is an indication that the module lacks cohesion. A module having a large
fan-out (greater than 7) is likely to implement several different functions and not just a single
cohesive function.
Fan-in: Fan-in indicates the number of modules that directly invoke a given module. High
fan-in represents code reuse and is, in general, desirable in a good design. In Figure 6(a), the
fan-in of the module M1 is 0, that of M2 is 1, and that of M5 is 2.
Fig.6 Examples of good and poor control abstraction.
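Before turning to design approaches, a small C++ sketch may help fix the idea of layering. All function names and bodies are hypothetical; each module invokes only modules in the layer immediately below it:

#include <iostream>

/* Layer 2 (workers): carry out their responsibilities entirely by themselves. */
void readSensor() { std::cout << "reading sensor\n"; }
void writeLog()   { std::cout << "writing log\n"; }

/* Layer 1 (intermediate): offers a service by invoking the layer-2 workers. */
void sampleInput() { readSensor(); writeLog(); }   /* fan-out of 2 */

/* Layer 0 (manager): only invokes the services of the layer below it. */
int main() {
    sampleInput();    /* main never calls readSensor() or writeLog() directly */
    return 0;
}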
APPROACHES TO SOFTWARE DESIGN
There are two fundamentally different approaches to software design that are in use today—
function-oriented design, and object-oriented design. Though these two design approaches
are radically different, they are complementary rather than competing techniques. The object-
oriented approach is a relatively newer technology and is still evolving. For development of
large programs, the object-oriented approach is becoming increasingly popular due to certain
advantages that it offers. On the other hand, function-oriented design is a mature
technology and has a large following. The salient features of these two approaches are
discussed in the following subsections.
Function-oriented Design
The following are the salient features of the function oriented design approach:
Top-down decomposition: A system, to start with, is viewed as a black box that provides
certain services (also known as high-level functions) to the users of the system.
For example, consider a function create-new-library-member, which essentially creates
the record for a new member, assigns a unique membership number to him, and prints a bill
towards his membership charge. This high-level function may be refined into the
following sub-functions:
• assign-membership-number
• create-member-record
• print-bill
Each of these sub-functions may be split into more detailed sub-functions, and so on.
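A C++ sketch of this refinement is given below. The function bodies are hypothetical stubs that only indicate where the real processing would go:

#include <iostream>

int assignMembershipNumber() {
    static int next = 1;                 /* stand-in for a real number generator */
    return next++;
}
void createMemberRecord(int memberNo) {
    std::cout << "record created for member " << memberNo << "\n";
}
void printBill(int memberNo) {
    std::cout << "membership bill printed for member " << memberNo << "\n";
}

/* The high-level function is simply the composition of its sub-functions. */
void createNewLibraryMember() {
    int memberNo = assignMembershipNumber();
    createMemberRecord(memberNo);
    printBill(memberNo);
}

int main() { createNewLibraryMember(); return 0; }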
Centralised system state: The system state can be defined as the values of certain data items
that determine the response of the system to a user action or external event. For example, the
set of books (i.e. whether borrowed by different users or available for issue) determines the
state of a library automation system. Such data in procedural programs usually have global
scope and are shared by many modules.
For example, in the library management system, several functions such as the following share
data such as member-records for reference and update:
• create-new-member
• delete-member
• update-member-record
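The following C++ sketch, with assumed types and names, shows such a centralised system state: the member records are globally shared, and all three functions read or modify them directly:

#include <string>
#include <vector>

struct MemberRecord { int memberNo; std::string name; };
std::vector<MemberRecord> memberRecords;   /* globally shared system state */

void createNewMember(int no, const std::string& name) {
    memberRecords.push_back(MemberRecord{no, name});
}
void deleteMember(int no) {
    for (auto it = memberRecords.begin(); it != memberRecords.end(); ++it)
        if (it->memberNo == no) { memberRecords.erase(it); break; }
}
void updateMemberRecord(int no, const std::string& name) {
    for (auto& r : memberRecords)
        if (r.memberNo == no) r.name = name;   /* direct update of shared data */
}

int main() {
    createNewMember(1, "Asha");
    updateMemberRecord(1, "Asha Rao");
    deleteMember(1);
    return 0;
}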
Object-oriented Design
In the object-oriented design (OOD) approach, a system is viewed as being made up of a
collection of objects (i.e., entities). Each object is associated with a set of functions that are
called its methods. Each object contains its own data and is responsible for managing it. The
data internal to an object cannot be accessed directly by other objects; it can be accessed only
through invocation of the methods of the object. The system state is decentralised, since there is no
globally shared data in the system and data is stored in each object. For example, in a library
automation software, each library member may be a separate object with its own data and
functions to operate on the stored data. The methods defined for one object cannot directly
refer to or change the data of other objects.
The object-oriented design paradigm makes extensive use of the principles of abstraction and
decomposition as explained below. Objects decompose a system into functionally
independent modules. Objects can also be considered as instances of abstract data types
(ADTs). The ADT concept did not originate from the object-oriented approach. In fact, the
ADT concept was extensively used in the Ada programming language, developed in the late
1970s. ADT is an important concept and forms one of the pillars of object-orientation. Let us
now discuss the important concepts behind an ADT. There are, in fact, three important
concepts associated with an ADT: data abstraction, data structure, and data type. We discuss
these in the following subsections:
Data abstraction: The principle of data abstraction implies that how data is exactly stored is
abstracted away. This means that any entity external to the object (that is, an instance of an
ADT) would have no knowledge about how data is exactly stored, organised, and
manipulated inside the object. The entities external to the object can access the data internal
to an object only by calling certain well-defined methods supported by the object. Consider
an ADT such as a stack. The data of a stack object may internally be stored in an array, a
singly linked list, or a doubly linked list. The external entities have no knowledge of
this and can access data of a stack object only through the supported operations such as push
and pop.
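For instance, a stack ADT may be sketched in C++ as follows. The choice of a vector as the internal representation is an assumption that callers never see, and it could be swapped for a linked list without changing the interface:

#include <iostream>
#include <vector>

class Stack {
    std::vector<int> items;          /* private: invisible to external entities */
public:
    void push(int value) { items.push_back(value); }
    int  pop() {                     /* callers never learn how data is stored */
        int top = items.back();
        items.pop_back();
        return top;
    }
    bool empty() const { return items.empty(); }
};

int main() {
    Stack s;
    s.push(10);
    s.push(20);
    std::cout << s.pop() << "\n";    /* prints 20; representation stays hidden */
    return 0;
}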
Data structure: A data structure is constructed from a collection of primitive data items. Just
as a civil engineer builds a large civil engineering structure using primitive building materials
such as bricks, iron rods, and cement; a programmer can construct a data structure as an
organised collection of primitive data items such as integer, floating point numbers,
characters, etc.
Data type: A type is a programming language term that refers to anything that can be
instantiated. For example, int, float, char, etc., are the basic data types supported by the C
programming language. Thus, we can say that ADTs are user-defined data types. In object-
orientation, classes are ADTs. But, what is the advantage of developing an application using
ADTs? Let us examine the three main advantages of using ADTs in programs:
• The data of an object is encapsulated within its methods. The encapsulation principle
is also known as data hiding. The encapsulation principle requires that data can be
accessed and manipulated only through the methods supported by the object and not
directly. This localises errors. The reason for this is as follows: no program element is
allowed to change a data item, except through invocation of one of the methods. So,
any error can easily be traced to the code segment changing the value. That is, the
method that changes a data item, making it erroneous, can be easily identified.
• An ADT-based design displays high cohesion and low coupling. Therefore, object-
oriented designs are highly modular.
• Since the principle of abstraction is used, it makes the design solution easily
understandable and helps to manage complexity.
Similar objects constitute a class. In other words, each object is a member of some
class. Classes may inherit features from a super class. Conceptually, objects communicate by
message passing. Objects have their own internal data. Thus, an object may exist in different
states depending on the values of its internal data, and in different states an object may
behave differently.
Object-oriented versus function-oriented design approaches
The following are some of the important differences between the function-oriented and
object-oriented design:
Unlike function-oriented design methods, in OOD the basic abstraction is not the
services available to the users of the system such as issue-book, display-book-details,
find-issued-books, etc., but real-world entities such as member, book, book-register,
etc. For example, in OOD an employee pay-roll software is not developed by
designing functions such as update-employee-record, get-employee-address, etc., but
by designing objects such as employees, departments, etc.
In OOD, state information exists in the form of data distributed among several objects
of the system. In contrast, in a procedural design, the state information is available in
a centralised shared data store. For example, while developing an employee pay-roll
system, the employee data such as the names of the employees, their code numbers,
basic salaries, etc., are usually implemented as global data in a traditional
programming system; whereas in an object-oriented design, these data are distributed
among different employee objects of the system. Objects communicate by message
passing. Therefore, one object may discover the state information of another object by
sending a message to it. Of course, somewhere or other the real-world functions must
be implemented.
Function-oriented techniques group functions together if, as a group, they constitute a
higher level function. On the other hand, object-oriented techniques group functions
together on the basis of the data they operate on.
To illustrate the differences between the object-oriented and the function-oriented design
approaches, let us consider an example—that of an automated fire-alarm system for a large
building.
Automated fire-alarm system—customer requirements
The owner of a large multi-storied building wants to have a computerised fire alarm system
designed, developed, and installed in his building. Smoke detectors and fire alarms would be
placed in each room of the building. The fire alarm system would monitor the status of these
smoke detectors. Whenever a fire condition is reported by any of the smoke detectors, the fire
alarm system should determine the location at which the fire has been sensed and then sound
the alarms only in the neighbouring locations. The fire alarm system should also flash an
alarm message on the computer console. Fire fighting personnel would man the console
round the clock. After a fire condition has been successfully handled, the fire alarm system
should support resetting the alarms by the fire fighting personnel.
Function-oriented approach: In this approach, the different high-level functions are first
identified, and then the data structures are designed.
/* Global data (system state) accessible by various functions */
BOOL detector_status[MAX_ROOMS];
int detector_locs[MAX_ROOMS];
BOOL alarm_status[MAX_ROOMS]; /* alarm activated when status is set */
int alarm_locs[MAX_ROOMS]; /* room number where alarm is located */
int neighbour_alarms[MAX_ROOMS][10]; /* each detector has at most */
/* 10 neighbouring alarm locations */
int sprinkler[MAX_ROOMS];
The functions which operate on the system state are:
interrogate_detectors();
get_detector_location();
determine_neighbour_alarm();
determine_neighbour_sprinkler();
ring_alarm();
activate_sprinkler();
reset_alarm();
reset_sprinkler();
report_fire_location();
Object-oriented approach: In the object-oriented approach, the different classes of objects
are identified. Subsequently, the methods and data for each object are identified. Finally, an
appropriate number of instances of each class is created.
class detector
attributes: status, location, neighbours
operations: create, sense-status, get-location, find-neighbours
class alarm
attributes: location, status
operations: create, ring-alarm, get-location, reset-alarm
class sprinkler
attributes: location, status
operations: create, activate-sprinkler, get-location, reset-sprinkler
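By way of illustration, the detector and alarm classes above might be rendered in C++ roughly as follows. The attribute types, method bodies, and the addNeighbour helper are assumptions made only for this sketch:

#include <vector>

class Alarm {
    int  location;
    bool status;
public:
    Alarm(int loc) : location(loc), status(false) {}
    void ringAlarm()  { status = true; }
    void resetAlarm() { status = false; }
    int  getLocation() const { return location; }
};

class Detector {
    bool status;
    int  location;
    std::vector<Alarm*> neighbours;   /* neighbouring alarms of this detector */
public:
    Detector(int loc) : status(false), location(loc) {}
    bool senseStatus() const { return status; }
    int  getLocation() const { return location; }
    void addNeighbour(Alarm* a) { neighbours.push_back(a); }
    void fireDetected() {             /* ring only the neighbouring alarms */
        for (Alarm* a : neighbours) a->ringAlarm();
    }
};

int main() {
    Alarm a1(101), a2(102);
    Detector d(101);
    d.addNeighbour(&a1);
    d.addNeighbour(&a2);
    d.fireDetected();                 /* both neighbouring alarms now ring */
    return 0;
}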
We can now compare the function-oriented and the object-oriented approaches based
on the two examples discussed above, and easily observe the following main differences:
„In a function-oriented program, the system state (data) is centralised and several
functions access and modify this central data. In case of an object-oriented program,
the state information (data) is distributed among various objects.
„In the object-oriented design, data is private in different objects and these are not
available to the other objects for direct access and modification.
„The basic unit of designing an object-oriented program is objects, whereas it is
functions and modules in procedural designing. Objects appear as nouns in the
problem description; whereas functions appear as verbs.
At this point, we must emphasise that it is not necessary that an object-oriented design
be implemented using an object-oriented language only. However, object-oriented
languages such as C++ and Java support the definition of all the basic mechanisms of class,
inheritance, objects, methods, etc., and also support all the key object-oriented concepts that we
have just discussed. Thus, an object-oriented language facilitates the implementation of an
OOD. However, an OOD can as well be implemented using a conventional procedural
language, though it may require more effort to implement an OOD using a procedural
language as compared to the effort required for implementing the same design using an
object-oriented language. In fact, the older C++ compilers were essentially pre-processors
that translated C++ code into C code.
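To hint at what such a translation involves, the detector class could be emulated in a purely procedural style roughly as shown below (written in the C subset of C++). This is only a sketch; real translators generated far more machinery for construction, dynamic dispatch, and so on:

#include <stdio.h>

/* Emulating a class procedurally: the object becomes a struct, and each
   method becomes a free function taking the struct as its first argument. */
struct detector {
    int status;
    int location;
};

void detector_create(struct detector* d, int loc) {  /* plays the constructor */
    d->status = 0;
    d->location = loc;
}
int detector_sense_status(const struct detector* d) { return d->status; }
int detector_get_location(const struct detector* d) { return d->location; }

int main(void) {
    struct detector d;
    detector_create(&d, 101);
    printf("location: %d\n", detector_get_location(&d));
    return 0;
}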
Even though object-oriented and function-oriented techniques are remarkably
different approaches to software design, one does not replace the other; rather, they
complement each other. For example, one usually applies the top-down function-oriented
techniques to design the internal methods of a class, once the classes have been identified. In
this case, though outwardly the system appears to have been developed in an object-oriented
fashion, inside each class there may be a small hierarchy of functions designed in a top-down
manner.