Unit 5

Software Evolution as Entity


• Software Evolution means that software (like an app or program) isn’t finished once it’s built. It
keeps changing and improving over time. These changes happen for many reasons—maybe
users want new features, or there are bugs to fix, or some parts are no longer useful.
• The evolution process includes fundamental activities of change analysis, release planning,
system implementation and releasing a system to customers.
• The cost and impact of these changes are assessed to see how much of the system is affected by the
change and how much it might cost to implement the change.
• If the proposed changes are accepted, a new release of the software system is planned. During
release planning, all the proposed changes (fault repair, adaptation, and new functionality) are
considered.
• A design is then made for changes to implement in the next version of the system.
• The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented and tested.
Triggers for change: bug reports from users, requests for new features, outdated functions, and changes in technology or business needs.
Impact analysis determines: what needs to be done, how much it will cost, how long it will take, and whether it is worth doing.
Implementation: design the change, write the new code, test everything to make sure it works, and fix any issues.
After changes are made and tested, the updated software is released to users.
Laws used for Software Evolution

1. Law of Continuing Change

If software is being used in the real world, it must keep changing to stay useful. If it doesn’t
change, it becomes outdated or no longer fits user needs.
2. Law of Increasing Complexity
Every time you make changes to software, it tends to become more complicated. Unless you
actively manage and clean up the code, the system becomes harder to understand and
maintain.
3. Law of Conservation of Organization Stability
Even if you add more developers or resources to a software project, the rate at which the
software changes stays roughly the same. Adding more people doesn’t automatically mean
faster progress.
4. Law of Conservation of Familiarity
Software changes slowly and most updates are small, because people working on it need to
stay familiar with how it works. Big changes are rare and risky.
Software Maintenance
• Software Maintenance is a very broad activity that includes error corrections,
enhancements of capabilities, deletion of obsolete capabilities, and
optimization.
• This includes things like:
1.Fixing bugs (errors) – If something in the software isn’t working right, developers
go in and fix it.
2.Adding new features – Sometimes users need new functions, so developers add
them.
3.Removing old features – Some parts of the software might no longer be needed,
so they get removed.
4.Making it faster or better – Developers can optimize the software to work more
smoothly or quickly.
Need for Maintenance
Correct faults.
Improve the design.
Implement enhancements.
Accommodate programs so that different hardware,
software, system features, and telecommunications
facilities can be used.
Migrate legacy software: Move old software to newer technology
or platforms.
Retire software: Shut down or remove software that’s no longer
needed or supported
Categories of Software Maintenance
 Corrective Maintenance: Fixing bugs and errors after the software is
released.
 Adaptive Maintenance: Updating the software so it works with new
environments or technologies.
 Preventive Maintenance: Making the software better internally to prevent
future issues.
 Perfective Maintenance: Adding new features or improving existing ones
based on user feedback or needs.
Types of Maintenance
Corrective Maintenance: Fixing bugs or problems that are found in the software after it's
been released.

Adaptive Maintenance: Updating the software to keep it working in a changing environment (new hardware, software, or operating system).
This includes modifications and updates when customers need the product to run on new platforms, on new operating systems, or when they need the product to interface with new hardware and software.
Adaptive maintenance is generally not requested by the client; rather, it is imposed by the external environment.
Types of Maintenance (Contd.)
Preventive Maintenance
• Making changes to prevent future problems. This helps improve stability and avoid
crashes.
• Preventing issues before they happen.
• Example: Developers clean up old code and update libraries to avoid future security
issues.
Perfective Maintenance
• Improving or enhancing the software to make it better for users (even if there’s nothing
wrong with it).
• Making it better or adding new features.
• Example: Adding a new search filter in an app to help users find things faster.
Issues in Software Maintenance
• Lack of Traceability: The code doesn’t clearly show how it relates to
the original requirements or design plans.
• You can’t tell why a part of the code was written or which requirement it
supports.
Lack of Code Comments: The code has little or no explanation
written in comments.
Lack of Domain Knowledge: Developers don’t fully understand
the industry or subject the software is for.
Staff Problem due to limited understanding: Team members
don’t have enough training, experience, or understanding of the system.
Maintenance Process
Maintenance Process Contd.
Program Understanding –
• Before developers fix or change anything in a software system, they
need to understand how it works. This is like reading the instructions
before trying to fix a machine.
• What Makes Program Understanding Easier or Harder?
• Complexity: If the code is full of confusing logic, like lots of loops,
conditions, or deeply nested structures, it’s harder to follow.
• Documentation: Good notes and explanations (in the form of
documents or comments) make things easier.
• Self-descriptiveness: If the code is written clearly, with meaningful
variable and function names and comments, it’s easier to follow.
Generating Maintenance Proposal
• Once the developers understand how the software works, the next step is to
make a plan for what needs to be done. This includes:
• What changes are needed
• How those changes will be made
• How much time and effort it might take
• What affects this phase:
• Extensibility: This means how easy it is to add or change things in the
software without breaking other parts.
• If the software was built in a smart, modular way, it’s much easier to plan and
make changes.
Maintenance Process Contd.
Ripple Effect
• When you make a change in one part of the software, it might affect other parts—even ones you
didn’t expect. This is called the ripple effect
• In this step we analyse whether making a change in one particular part will affect any other part.
• Why it matters:
• A change in one place might cause unexpected problems in other areas.
• The effect can be logical/functional (changes how things work) or can impact performance (makes
the program slower or more resource-heavy).
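To make the idea concrete, here is a minimal sketch (not from the original slides; the module names are hypothetical) of ripple-effect analysis treated as a reachability question over a dependency graph:

```python
from collections import deque

# dependents[x] = modules that use x, so a change to x may ripple to them
dependents = {
    "billing": ["invoice_ui", "reports"],
    "reports": ["dashboard"],
    "invoice_ui": [],
    "dashboard": [],
}

def impact_set(changed):
    """Return every module transitively affected by changing `changed`."""
    affected, queue = set(), deque([changed])
    while queue:
        module = queue.popleft()
        for dep in dependents.get(module, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

print(impact_set("billing"))  # {'invoice_ui', 'reports', 'dashboard'}
```

The impact set computed this way is exactly the set of modules that regression testing (next slide) should revisit.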
Modified Program Testing
• Once the changes are made, the updated program needs to be tested to make sure it still
works correctly and is at least as reliable as before.
• What this phase includes:
• Testing the new feature or fix
• Re-testing other parts of the program that might be affected (this is called regression testing)
• Making sure the software is still stable and reliable
Cost of Maintenance
• Models for maintenance cost estimation
• Before maintaining or updating software, we often need to estimate how much
time, effort, and money it will take. This helps with planning, budgeting, and
decision-making.
• To do that, experts use models—just like how engineers use blueprints.
• Belady and Lehman Model
• Boehm Model.
Belady and Lehman Model
• Lehman and Belady have studied the characteristics of evolution of several software
products [1980]. They have expressed their observations in the form of laws.
• Lehman’s first law: A software product must change continually or become
progressively less useful. Every software product continues to evolve after its
development through maintenance efforts.
• Lehman’s second law: “As software is changed and maintained, its structure becomes
more complicated unless efforts are made to control or redesign it.” The reason for the
degraded structure is that when you add a function during maintenance, you build on top
of an existing program, often in a way that the existing program was not intended to
support. If you do not redesign the system, the additions will be more complex than
they should be.
• Lehman’s third law: “The rate of software development (or change) remains steady
over the product’s life.” Even after the software is released, developers keep writing or
modifying code at a steady pace, during both development and maintenance.
• Example: a game app gets regular updates (new levels, bug fixes, better graphics) even after
its launch. Developers are constantly working, just as much as they were before the first
release.
Belady and Lehman Model
This model indicates that the effort and cost can increase exponentially if a poor
software development approach is used and the person or group that used the
approach is no longer available to perform maintenance.
The basic equation is
M = P + K·e^(c−d)
where
M – total effort expended (how much time/effort is needed overall)
P – productive effort (the effort directly spent on useful updates or fixes)
K – an empirically determined constant (a number based on real-world observations,
used to calibrate the formula)
c – complexity measure due to lack of good design and documentation
d – degree to which the maintenance team is familiar with the software.
Belady and Lehman Model
Example – The development effort for a software project is 500 person-
months. The empirically determined constant K is 0.3. The complexity
of the code is quite high and is equal to 8. Calculate the total effort
expended (M) if
i) Maintenance team has good level of understanding of the project
(d = 0.9)
ii) Maintenance team has poor understanding of project (d = 0.1)

M = P + K·e^(c−d)
Belady and Lehman Model
Solution –
Development effort (P) = 500 PM
K = 0.3
c = 8
i) d = 0.9
M = P + K·e^(c−d) = 500 + 0.3·e^(8−0.9) = 500 + 0.3·e^7.1 = 863.59 PM
ii) d = 0.1
M = P + K·e^(c−d) = 500 + 0.3·e^(8−0.1) = 500 + 0.3·e^7.9 = 1309.18 PM
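The worked example can be checked with a few lines of Python (a sketch, using the same values as the slide):

```python
import math

def maintenance_effort(P, K, c, d):
    """Belady-Lehman model: M = P + K * e^(c - d)."""
    return P + K * math.exp(c - d)

print(round(maintenance_effort(500, 0.3, 8, 0.9), 2))  # 863.59
print(round(maintenance_effort(500, 0.3, 8, 0.1), 2))  # 1309.18
```

Note how a drop in team familiarity from d = 0.9 to d = 0.1 raises the estimate by roughly 450 PM, because the complexity term grows exponentially.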
Belady and Lehman Model
• The development effort for a project is 600 PMs. The empirically
determined constant (K) of Belady and Lehman model is 0.5. The
complexity of code is quite high and is equal to 7. Calculate the
total effort expended (M) if maintenance team has reasonable
level of understanding of the project (d=0.7).
M = P + K·e^(c−d)
Boehm Model
Boehm used a quantity called Annual Change Traffic(ACT) which is defined
as
“The fraction of a software product’s source instructions which undergo
change during a year either through addition, deletion or modification”.
ACT tells us how much of the software changes in a year, as a percentage
of the total code.
ACT = (KLOC added + KLOC deleted) / KLOC total
The Annual Maintenance Effort (AME) in person-month can be calculated as:
AME = ACT * SDE
Where SDE – Software development effort in person-month
ACT – Annual Change Traffic
Boehm Model
The formula can be modified further by using some Effort Adjustment
Factors(EAF).
The modified equation is given as :
AME = ACT * SDE * EAF
Boehm Model
Example – Annual change traffic (ACT) for a software system is 15 % per
year. The development effort is 600 pms. Compute an estimate for
annual maintenance effort(AME). If life time of the project is 10 years,
what is the total effort of the project?
Soln – AME = ACT × SDE = 0.15 × 600 = 90 PM
Maintenance effort for 10 years = 10 × 90 = 900 PM
Total effort = 600 + 900 = 1500 PM.
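A quick Python sketch of the Boehm-model arithmetic above (the 10-year lifetime is treated as 10 equal maintenance years, as in the slide):

```python
def annual_maintenance_effort(act, sde, eaf=1.0):
    """Boehm model: AME = ACT * SDE * EAF (EAF defaults to 1 when unused)."""
    return act * sde * eaf

ame = annual_maintenance_effort(0.15, 600)  # 90 PM per year
total = 600 + 10 * ame                      # development + 10 years of maintenance
print(ame, total)                           # 90.0 1500.0
```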
Reverse Engineering
• It’s the process of taking apart a software system to figure out how it works—
especially when:
• The original developers are gone
• There’s little or no documentation
• The code is complex or old
Why Do We Use Reverse Engineering?
Here’s when it’s useful:
• Old software (legacy systems): helps understand and maintain old code
• Missing documentation: helps recreate lost technical information
• Bug hunting: helps find errors or security issues
• System upgrades: helps plan changes when no one knows how the system works
• How Reverse Engineering Works:
1.Code Analysis:
Reverse engineering starts by examining the code of the software. This could involve
looking at the source code (if available) or even analyzing the compiled machine code.
2.Creating Visuals:
Once the code is understood, the next step is often to create diagrams or charts that
show how the system functions. These could include things like:
1. Data flow diagrams (to see how data moves through the system)
2. UML diagrams (to show relationships between different parts of the software)
3.Generating Documentation:
If no documentation exists, reverse engineering helps generate it. This includes
technical specs, design documentation, user manuals, and more.
• Example in Action:
• Imagine a company has an old inventory management system with no
documentation, and they need to add a new feature. The system is
too complex for the new developers to understand, and the original
team is no longer available.
• Through reverse engineering, they analyze the code, understand its
flow, and create a set of diagrams and documents that explain how
the system works. This allows them to confidently make the changes
and ensure the system continues to function correctly.
Reverse Engineering
• Scope and Tasks
• The areas where reverse engineering is applicable include (but are not limited to):
1. Program comprehension
2. Redocumentation and/ or document generation
3. Recovery of design approach and design details at any level of abstraction
4. Identifying reusable components
5. Identifying components that need restructuring
6. Recovering business rules, and
7. Understanding high level system description
Reverse Engineering
• 1. Mapping Between Application and Program Domains
• When doing reverse engineering, you try to connect two
worlds:
• 🧭 1. Application Domain (WHAT it does)
• This is the real-world problem the software is solving.
• It’s about user needs, business rules, goals, etc.
• Think: “What is this software supposed to do?”
• 💻 2. Program Domain (HOW it does it)
• This is the actual code written to solve that problem.
• It's the technical side—functions, classes, variables, loops, etc.
• Think: “How did the developers implement that feature in
code?”
• Example:
In a banking application, you might have a feature for checking an account balance. Mapping would involve looking at the code that performs calculations and retrieves data from the database, then linking it to the user interface that displays the balance.
• 2. Mapping Between Concrete (Actual Code) and Abstract Levels(Design)
• Looking at your code and figuring out which design idea it was written for.
• Goal:
Look at the existing code, Understand what each part of the code is
supposed to do in real life, Match the code (concrete) to the idea behind it
(abstract)
• Example:
Let’s say you designed a feature where users can sort their shopping cart
by price. The abstract design describes what should happen (sort items by
price), but the concrete code is the specific code written to actually do the
sorting. Reverse engineering connects the two.
• 3. Rediscovering High-Level Structures
• What it means:
Sometimes, when software is built over time by different teams, its original
design gets lost, and it becomes hard to understand the big picture. Reverse
engineering helps you rediscover that original design or high-level structure,
which tells you how the software was intended to work overall.
• You look at the full system, and try to figure out how everything was meant to
work together, like the big picture.
• Example:
If you have an old inventory system and don't remember how things like the
database and user interface were originally designed to work together, reverse
engineering helps you figure that out again.
• 4. Finding Missing Links Between Program Syntax and Semantics
• What it means:
Code is written in a specific syntax (structure), but just looking at the syntax
doesn’t always explain what the code does. Reverse engineering helps figure out
the meaning (or semantics) of the code—what it’s actually trying to achieve.
• Looking at the code structure (syntax) and figuring out the actual meaning or
purpose behind it (semantics).
• Example:
Imagine you see a complicated piece of code that sorts customer names in a list.
The syntax of the code might be hard to understand at first, but reverse
engineering will help you figure out that the purpose of the code is to display a
sorted list of customers for the user.
Reverse Engineering
When we reverse engineer software (i.e., try to understand how it works), we go
through different levels of detail, from small technical constructs up to the
overall design.
Levels of Reverse Engineering
Reverse Engineers detect low level implementation constructs and replace
them with their high level counterparts.
You start by reading low-level code (like variables, loops, functions). Then
you figure out what higher-level idea it represents (like a payment system,
or login feature)
Reverse Engineering: Levels of Abstraction
• Highest level: requirements and functional descriptions
• Derive specifications (what the system is supposed to do) from the design
• Extract the design from the implementation
• Lowest level: the actual code
Reverse Engineering
Redocumentation
Redocumentation is the recreation of a semantically equivalent
representation within the same relative abstraction level.
Design recovery
Design recovery entails identifying and extracting meaningful higher-
level abstractions beyond those obtained directly from examination of
the source code. This may be achieved from a combination of code,
existing design documentation, personal experience, and knowledge
of the problem and application domains.
Re-Engineering
• Software re-engineering is the process of analyzing, modifying, and
reconstructing existing (often legacy) software systems to improve their
quality, performance, maintainability, or to adapt them to new
environments — without discarding the entire system.
Forward engineering starts from scratch and builds new features or systems; re-engineering begins with an existing system and improves, restructures, and adapts existing code.
Re-Engineering
What counts as legacy code?
• Old, well-tested, documented code: not legacy
• New, rushed code with no docs or tests: legacy
The following suggestions may be useful for the modification of legacy code:
• Study the code well before attempting changes
• Concentrate on overall control flow, not coding details
• Heavily comment internal code
• Use your own variables, constants and declarations to localize the effect of changes
• Keep a detailed maintenance document
• Use modern design techniques
Re-Engineering types

Source Code Translation: changing a program written in one programming language into another. Reasons for translation include:
Hardware platform update: The company might switch to new computers
or systems. The old programming language might not work on the new
machines because there's no software (like compilers) available to run it.
Staff Skill Shortages: It might be hard to find people who still know how to
work with the old language, especially if it's rare or outdated.
Organizational policy changes: Sometimes, a company decides to use only
one programming language across all their projects. This helps save money
and effort, since they don't need to support and maintain many old tools
and compilers for different languages.
Re-Engineering
Program Restructuring: Program Restructuring means changing the
way a program is written without changing what it does. The goal is to
make the code clearer or faster.
Control flow driven restructuring: This involves the imposition of
a clear control structure within the source code and can be either
inter-modular or intra-modular in nature.
• Efficiency driven restructuring: This focuses on improving how fast or
how well the program runs.
• For example, replacing a long list of IF...THEN...ELSE statements with a
simpler CASE or SWITCH statement can make the program faster and
easier to read.
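As a hedged illustration of that IF...THEN...ELSE-to-CASE restructuring, here is a Python sketch (assuming Python 3.10+ for the match statement; the command-dispatch example is hypothetical):

```python
# Before restructuring: a long if...elif chain, the Python analogue of
# nested IF...THEN...ELSE.
def handle_command_v1(cmd):
    if cmd == "add":
        return "adding item"
    elif cmd == "delete":
        return "deleting item"
    elif cmd == "list":
        return "listing items"
    else:
        return "unknown command"

# After restructuring: the same behavior as a single match statement
# (Python 3.10+), the analogue of a CASE/SWITCH construct.
def handle_command_v2(cmd):
    match cmd:
        case "add":
            return "adding item"
        case "delete":
            return "deleting item"
        case "list":
            return "listing items"
        case _:
            return "unknown command"

# Restructuring changes the form, not the externally visible behavior:
assert handle_command_v1("add") == handle_command_v2("add")
```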
Re-Engineering
❑Adaptation driven restructuring:
❑This involves changing the coding style in order to adapt the program
to a new programming language or new operating environment, for
instance changing an imperative program in PASCAL into a functional
program in LISP.
Software Configuration Management
• Software Configuration Management (SCM) is like the organizer or manager
for all the parts of a software project. It helps keep everything under control
when changes happen — which they always do!
• Because change can occur at any time, SCM activities are developed to
(1) identify change: Find out what is changing (e.g., a file, function, or
requirement).
(2) control change: Decide if the change should happen and how it should be
done.
(3) ensure that change is being properly implemented: make sure the
change is made correctly and doesn’t break other things.
(4) report changes: Keep track of what changed, who changed it, and why.
SOFTWARE CONFIGURATION
MANAGEMENT
• However, there are four fundamental sources of change:
• New business or market conditions dictate changes in product
requirements or business rules.
• New customer needs demand modification of data produced by
information systems, functionality delivered by products, or services
delivered by a computer-based system.
• Reorganization or business growth/downsizing causes changes in
project priorities or software engineering team structure.
• Budgetary or scheduling constraints cause a redefinition of the system
or product.
SCM Concepts: Baselines
• A baseline is a software configuration management concept that helps us
to control change without seriously impeding justifiable change. The IEEE
(IEEE Std. No. 610.12-1990) defines a baseline as:
A baseline is a version of a product or document that’s been officially
approved and will be used as a foundation for future work. It can only be
changed through a controlled process.
A baseline is like a "checkpoint" in a software project.
• It’s a version of your software (or document) that has been:
• Reviewed
• Approved
• And is now frozen — meaning it won’t be changed casually.
Baselines
• In the context of software engineering, a baseline is a milestone in the
development of software that is marked by the delivery of one or more
software configuration items (SCIs) and the approval of these SCIs that is
obtained through a formal technical review.
• For example, the elements of a Design Specification have been
documented and reviewed.
• Errors are found and corrected. Once all parts of the specification have
been reviewed, corrected and then approved, the Design Specification
becomes a baseline.
• Further changes to the program architecture (documented in the Design
Specification) can be made only after each has been evaluated and
approved.
• Although baselines can be defined at any level of detail, the most common
software baselines are shown in Figure.
(Figure: the most common software baselines. Approved software configuration items enter the baseline in the project database; a copy is extracted into the private workspace of the developer for modification.)
Baselines
• Software engineering tasks produce one or more SCIs.
• After SCIs are reviewed and approved, they are placed in a project database
(also called a project library or software repository).
• When a member of a software engineering team wants to make a
modification to a baselined SCI, it is copied from the project database into the
engineer's private work space.
• However, this extracted SCI can be modified only if SCM controls are followed.
• The arrows in the figure illustrate the modification path for a baselined SCI.
THE SCM PROCESS
• What does SCM actually do?
1.Controls change – Makes sure changes are reviewed, approved, and tracked.
2.Identifies items – Keeps track of all parts (SCIs) of the software, like code files, documents, designs.
3.Handles versions – Tracks every version of the software as it evolves.
4.Audits the configuration – Checks if everything is correct and follows the process.
5.Reports changes – Lets everyone know what changed, why, and by whom.
•How do we manage all the different versions of a program without confusion?
•How do we control changes both before and after releasing the software?
•Who decides which changes are important and should be approved?
•How do we know the change was done right?
•How do we inform the team or customer about the changes?
SCM Process: IDENTIFICATION OF OBJECTS
IN THE SOFTWARE CONFIGURATION
•To control and manage software configuration items, each must be separately named
and then organized using an object-oriented approach.
•Two types of objects can be identified : basic objects and aggregate objects.
•Basic Object
•A single piece of work created during the project.
•Examples:
•A part of a requirements document
•A source code file
•A test case or test suite
•Aggregate Object
•A collection of basic objects (or even other aggregate objects).
•Example:
•A Design Specification document is an aggregate object. It might include a data model and descriptions of multiple components.
Evolution Graph
(Figure: an evolution graph. Each node “Obj” is an object, such as a test case, code file, or requirement.)
Version Control
• Version control combines procedures and tools to manage different
versions of configuration objects that are created during the software
process.
• This is supported by associating attributes with each software version,
and then allowing a configuration to be specified [and constructed] by
describing the set of desired attributes.
• Each node on the graph is an aggregate object, that is, a complete
version of the software.
• Each version of the software is a collection of SCIs (source code,
documents, data), and each version may be composed of different
variants.
• One or more attributes is assigned to each variant.
• For example, separate user-interface variants for Android and iOS.
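As a toy illustration (hypothetical attributes, not a real SCM tool), attribute-based selection of a configuration might look like this in Python:

```python
# Each version of an object carries attributes; a configuration is
# specified by naming the attributes we want.
versions = [
    {"id": "2.1-android", "release": "2.1", "platform": "android"},
    {"id": "2.1-ios",     "release": "2.1", "platform": "ios"},
    {"id": "2.0-android", "release": "2.0", "platform": "android"},
]

def select(desired):
    """Return every version whose attributes match the desired set."""
    return [v for v in versions
            if all(v.get(k) == val for k, val in desired.items())]

print(select({"release": "2.1", "platform": "ios"}))
# -> [{'id': '2.1-ios', 'release': '2.1', 'platform': 'ios'}]
```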
CHANGE CONTROL
• In large software projects, if changes are made without
proper control, things can quickly spiral into confusion and
errors — this is called uncontrolled change, and it leads to
chaos.
• For such projects, change control combines human
procedures and automated tools to provide a mechanism for
the control of change.
• The request is evaluated carefully, based on:
• Technical merit – Is it a good solution?
• Side effects – Could it break something else?
• Impact – What other parts of the system would be affected?
• Cost – How much time and effort will the change take?
(Figure: the change control process, leading to an Engineering Change Order (ECO).)
CHANGE CONTROL
1. Change Report
•A change report is created to summarize the evaluation.
•It includes details like:
•What the change is
•Why it's needed
•What the impact would be

2. Change Control Authority (CCA)
•The CCA is a person or a group responsible for making the final decision.
•They decide:
•Should the change be approved?
•How important is it?
•When should it be implemented?

3. Engineering Change Order (ECO)
•If the change is approved, an ECO is created.
•It explains:
• What needs to be changed
• Any limits or constraints (e.g., must not affect security)
• How it will be reviewed or tested
CHANGE CONTROL
• Check-out: A developer takes a file (like a piece of code) out of the
shared database to work on it.
• Check-in: After making changes, they return it to the shared
database, so others can use the updated version
• It helps enforce two key parts of change control:
• 1. Access Control: Who is allowed to make changes?
• 2. Synchronization Control: How do we avoid people overwriting
each other's work?
CHANGE CONTROL
1. Approved Change Request
•A change request is reviewed and approved, and an ECO (Engineering Change Order) is created.
2. Check-Out Begins
•A software engineer checks out the configuration object (code, document, etc.).
•The system checks:
• Access Control: Does this person have permission to edit this file?
• Synchronization Control: It locks the object in the shared database to prevent conflicting updates.
3. Work on the Extracted Version
•The engineer works on a copy of the original (called the extracted version).
•The original version is protected — others can view or copy it, but not modify it during this time.
4. Review, Test, and Verify
•The modified version goes through Software Quality Assurance (SQA) steps.
•It’s tested and reviewed to ensure it works correctly and doesn’t break anything.
5. Check-In and Unlock
•Once approved, the engineer checks in the updated version.
•The new version becomes the baseline.
•The lock is released, allowing others to now modify or check out the latest version.
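The access-control and synchronization-control flow above can be sketched as a toy Python class (purely illustrative; real SCM tools differ in detail):

```python
class Repository:
    def __init__(self):
        self.versions = {}               # object name -> list of versions
        self.locks = {}                  # object name -> user holding the lock
        self.writers = {"alice", "bob"}  # access control list

    def check_out(self, name, user):
        if user not in self.writers:                 # access control
            raise PermissionError(f"{user} may not modify {name}")
        if name in self.locks:                       # synchronization control
            raise RuntimeError(f"{name} is locked by {self.locks[name]}")
        self.locks[name] = user
        return self.versions[name][-1]               # extracted copy of the baseline

    def check_in(self, name, user, new_content):
        if self.locks.get(name) != user:
            raise RuntimeError(f"{user} does not hold the lock on {name}")
        self.versions[name].append(new_content)      # new version becomes the baseline
        del self.locks[name]                         # release the lock

repo = Repository()
repo.versions["module.c"] = ["v1"]
work = repo.check_out("module.c", "alice")           # lock acquired
repo.check_in("module.c", "alice", work + " + fix")  # lock released
print(repo.versions["module.c"])                     # ['v1', 'v1 + fix']
```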
Configuration Audit
• To track the proper implementation of changes, the audit is done in two parts:
1. Formal Technical Review
2. Software Configuration Audit
• 1. ✅ Formal Technical Review (FTR)
• What it checks:
The technical accuracy of the updated item (SCI – Software Configuration Item).
• Who does it:
One or more technical reviewers (usually senior engineers or domain experts).
• What they look for:
• Are the changes logically correct?
• Are there any mistakes in the updated code or document?
• Does the updated object remain consistent with related SCIs? (e.g., Does the new
code still match the design or requirements?)
Configuration Audit
• The configuration audit complements the Formal Technical Review (FTR) by focusing on aspects not
directly related to technical correctness but are still crucial for ensuring the change is properly managed.
Here's what gets checked:
• 1. ✅ Verification of Changes
• Did the change specified in the ECO (Engineering Change Order) happen?
• Were any additional, unapproved changes made?
• This ensures that only the approved modifications are applied.
• 2. ✔️Formal Technical Review (FTR) Completion
• Has the Formal Technical Review been conducted?
• If not, the audit ensures that the review process was properly followed before proceeding.
• 3. Software Process and Standards Compliance
• Was the software process followed correctly?
• This ensures that the changes meet software engineering standards and are not rushed or bypassing
best practices.
• 4. 📝 Change Documentation
• Does the configuration object specify:
• The date of the change?
• The author of the change?
• This helps track when and who made the change.
• 5. SCM Procedures Followed
• Are the proper SCM procedures followed for:
• Noting the change
• Recording the change
• Reporting the change
• This ensures the change is properly tracked and documented in the version control system.
• 6. 🔄 Updates to Related SCIs
• Have all related SCIs been properly updated?
• For example, if the code was changed, were the related documentation and test cases also
updated?
Risk Management
• Risk management is all about identifying, assessing, and minimizing the
negative effects of potential risks in a project. The goal is not to avoid
projects with risks (because almost all projects have some risk) but to
reduce the impact of those risks when they occur.
• What is a Risk?
• A risk is an event that may or may not happen.
• It's a probabilistic event — meaning, there's a chance it could happen, but
there's also a chance it could not happen.
Risk Management Concepts
• Risk management aims to ensure that risks have minimal impact on:
• Cost: Staying within budget
• Quality: Maintaining the standard of the product
• Schedule: Meeting deadlines
• Risk management is mainly concerned with unexpected, non-routine events — these
are probabilistic events that may or may not happen. Examples could include:
• Unexpected technical failures.
• New regulatory requirements.
• Unexpected market changes.
Risk Management
• Risk Management Focuses on Two Key Areas:
1.Risk Assessment:
1. This is about evaluating the risks you’ve identified:
1.How likely is the risk to occur?
2.What impact would it have on the project if it did happen?
2. Once assessed, you can decide which risks need attention and which
ones can be ignored.
2.Risk Control:
1. This is about taking action to:
1.Prevent risks from occurring (if possible).
2.Mitigate the impact of a risk if it does occur (for example, having a
backup plan).
2. You’ll implement strategies to monitor risks and take corrective
actions if necessary.
Risk Management Activities
Risk Assessment
• Risk assessment is an essential activity that happens during
project planning. It involves:
1.Identifying risks: Finding all the potential problems that could
happen.
• Think about what could fail or cause delays, errors, or extra
costs.
• To help find risks, you can use different methods, such as:
• Checklists: Lists of known risks that have happened in similar
projects before.
• Surveys: Asking team members or stakeholders about possible
risks.
• Meetings and Brainstorming: Discussing the project with the
team to come up with potential risks.
• Reviews: Looking over plans, processes, and work products to
identify weak spots.
Risk Assessment
• Checklists of frequently occurring risks are probably the
most common tool for risk identification—most
organizations prepare a list of commonly occurring risks
for projects, prepared from a survey of previous projects.
• Such a list can form the starting point for identifying risks
for the current project.
• Boehm has produced a list of the top 10 risk items likely
to compromise the success of a software project.
• Though risks in a project are specific to the project, this
list forms a good starting point for identifying such risks.
Boehm’s top 10 risk items:
1. Not enough skilled people on the team (personnel shortfalls).
2. Timelines and costs that are too optimistic (unrealistic schedules and budgets).
3. Building features that nobody needs (developing the wrong functions).
4. A confusing or poor design that users dislike (developing the wrong user interface).
5. Adding unnecessary features that weren’t asked for (gold plating).
6. The project keeps changing what it needs (a continuing stream of requirements changes).
7. Issues with parts bought or used from outside vendors (shortfalls in externally furnished components).
8. Problems with outside teams not delivering (shortfalls in externally performed tasks).
9. The system runs too slowly under real conditions (real-time performance shortfalls).
10. Trying to do things that are too technically advanced (straining computer-science capabilities).
Risk Assessment
• The top-ranked risk item is personnel shortfalls. This
involves just having fewer people than necessary or not
having people with specific skills that a project might
require.
• Some of the ways to manage this risk is to get the top
talent possible and to match the needs of the project
with the skills of the available personnel.
• Adequate training, along with having some key personnel
for critical areas of the project, will also reduce this risk.
Risk Assessment
• The second item, unrealistic schedules and budgets,
happens very frequently due to business and other
reasons.
• It is very common that high-level management imposes a
schedule for a software project that is not based on the
characteristics of the project and is unrealistic.
• Underestimation may also happen due to inexperience or
optimism.
Risk Assessment
• The next few items are related to requirements. Projects
run the risk of developing the wrong software if the
requirements analysis is not done properly and if
development begins too early.
• Similarly, often improper user interface may be
developed.
• This requires extensive rework of the user interface later
or the software benefits are not obtained because users
are reluctant to use it.
Risk Assessment
• Some requirement changes are to be expected in any
project, but sometimes frequent changes are requested,
which is often a reflection of the fact that the client has
not yet understood or settled on its own requirements.
• The effect of requirement changes is substantial in terms
of cost, especially if the changes occur when the project
has progressed to later phases.
• Performance shortfalls are critical in real-time systems
and poor performance can mean the failure of the
project.
Risk Assessment
• If a project depends on externally available components—
either to be provided by the client or to be procured as an
off-the-shelf component— the project runs some risks.
• The project might be delayed if the external component is
not available on time.
• The project would also suffer if the quality of the external
component is poor or if the component turns out to be
incompatible with the other project components or with
the environment in which the software is developed or is to
operate.
• If a project relies on technology that is not well developed,
it may fail.
• This is a risk due to straining the computer science
capabilities.
Risk Analysis
• Risk analysis is a key step in managing successful software projects. It involves identifying
potential issues that could negatively impact the project and estimating:
1.The probability that each risk might occur.
2.The impact (or loss) if the risk actually happens.
• Using Cost Models for Risk Analysis:
• If you’re already using cost estimation models to calculate project costs and schedules, you can
use those same models to estimate cost and schedule risks as well.
• This makes the analysis more consistent and quantifiable.
• Sources of Cost Risk:
• There are two common reasons why cost estimates might be wrong:
• Underestimating cost drivers
(For example: effort needed, technology complexity, or developer skill levels)
• Underestimating project size
(For example: number of features, lines of code, or system complexity)
Risk Analysis
• Worst-Case Analysis
• When we estimate using worst-case values (for size, cost
drivers, etc.), we get the worst-case effort.
• From this worst-case effort, we can then easily calculate the
worst-case schedule (i.e., how long the project might take in
the worst scenario).
• Instead of relying on just one worst-case scenario, you can do
a more detailed analysis by:
• Considering multiple possible cases (best-case, most likely,
worst-case).
• Using a probability distribution for different drivers (e.g., how
likely is it that team experience is low, medium, or high?).
• This gives you a range of outcomes and helps you better
understand risk variability.
Risk Analysis
• Once the probabilities of risks materializing and losses due to
materialization of different risks have been analyzed, they can
be prioritized.
• One approach for prioritization is through the concept of risk
exposure (RE), which is sometimes called risk impact. RE is
defined by the relationship
RE = Prob(UO) × Loss(UO)
•where Prob(UO) is the probability of the risk materializing (i.e., the
undesirable outcome) and Loss(UO) is the total loss incurred due to the
unsatisfactory outcome.
•The loss is not only the direct financial loss that might be incurred but
also any loss in terms of credibility, future business, and loss of property
or life.
•The RE is the expected value of the loss due to a particular risk.
•For risk prioritization using RE, the higher the RE, the higher the
priority of the risk item.
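A small Python sketch of prioritizing risks by risk exposure (the risks and numbers are hypothetical):

```python
# (name, probability of the undesirable outcome, loss if it occurs in PM)
risks = [
    ("personnel shortfall",     0.30, 200),
    ("unrealistic schedule",    0.50, 120),
    ("requirements volatility", 0.25, 300),
]

# Higher RE = Prob(UO) * Loss(UO) means higher priority.
for name, prob, loss in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: RE = {prob * loss:.1f}")
```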
Risk Analysis
• A subjective assessment can be done by the estimate of
one person or by using a group consensus technique like
the Delphi approach.
• In the Delphi method, a group of people discusses the
problem of estimation and finally converges on a
consensus estimate.
Risk Control
• The main objective of risk management is to identify the
top few risk items and then focus on them.
• Once a project manager has identified and prioritized the
risks, the top risks can be easily identified.
• Knowing the risks is of value only if you can prepare a
plan so that their consequences are minimal—that is the
basic goal of risk management.
• One obvious strategy is risk avoidance, which entails
taking actions that will avoid the risk altogether
Risk Control
• For most risks, the strategy is to perform the actions that
will either reduce the probability of the risk materializing
or reduce the loss due to the risk materializing.
• These are called risk mitigation steps.
• To decide what mitigation steps to take, a list of
commonly used risk mitigation steps for various risks is
very useful here.
Risk Control
• Selecting a risk mitigation step is not just an intellectual exercise.
• The risk mitigation step must be executed (and monitored).
• To ensure that the needed actions are executed properly, they
must be incorporated into the detailed project schedule.
• Risk prioritization and consequent planning are based on the
risk perception at the time the risk analysis is performed.
• Because risks are probabilistic events that frequently depend on
external factors, the threat due to risks may change with time as
factors change.
• Clearly, then, the risk perception may also change with time.
Furthermore, the risk mitigation steps undertaken may affect
the risk perception.
Risk Control
• This dynamism implies that risks in a project should not
be treated as static and must be monitored and re-
evaluated periodically.
• Risk monitoring is the activity of monitoring the status of
various risks and their control activities.
• One simple approach for risk monitoring is to analyze the
risks afresh at each major milestone, and change the
plans as needed.
What is CASE?
CASE – Stands for Computer Aided Software Engineering.
Definition –
It is the scientific application of a set of tools and methods to a
software system which is meant to result in high-quality, defect-free,
and maintainable software products
It also refers to methods for the development of information systems
together with automated tools that can be used in the software
development process.
What is CASE?
Two key ideas of Computer-aided Software System Engineering (CASE)
are
• To increase productivity
• To help produce better-quality software at lower cost
CASE & Its Scope
CASE technology provides software-process support by automating some process activities and by providing information about the software being developed.
Examples of activities which can be automated by using CASE tools are
• The development of graphical models as part of the requirement specification or the
software design
• Understanding a design using a data dictionary, which holds information about the
entities and relations in a design.
• The generation of user interfaces
• Program debugging
• The automated translation of programs from an older version of a programming
language to a recent version.
Levels Of CASE
• There are three different levels of CASE Technology
• Production support Technology – This includes support for process
activities such as specification, design, implementation , testing and
so on.
• Process Management Technology – This includes tools to support
process modeling and process management.
• Meta-CASE Tools – generators, which are used to create production and
process-management tools.
Architecture of CASE Environment
Important Components of a modern
CASE environment
1. User Interface – it provides a consistent framework for
accessing different tools, thus making it easier for the user to interact
with different tools and reduces learning time of how the different
tools are used.
Important Components of a modern
CASE environment
2. Tools Management System (Tool set) – The tools set section holds
different types of improved quality tools. The tools layer incorporates a
set of tools-management services with the CASE tool themselves.
• The Tools Management Services (TMS) controls the behavior of tools
within the environment.
• If multitasking is used during the execution of the one or more tools, TMS
performs multitask synchronization and communication, co-ordinates the
flow of information from the repository and object-management system
into the tools, accomplishes security and auditing functions and collects
metrics on tool use.
Important Components of a modern
CASE environment
3. Object Management System (OMS) – maps logical entities (specifications,
designs, text data, project plans, etc.) into the underlying
storage-management system, i.e., the repository.
4. Repository – the CASE database and the access-control
functions that enable the OMS to interact with the database. The
CASE repository is generally referred to as the Project Database, Data
Dictionary, or CASE Database.
Characteristics of CASE Tools
1. A graphical interface to draw diagrams, chart models, (upper
case, middle case, lower case)
2. An information repository, a data dictionary for efficient
information management selection, usage, application and storage.
3. Common user interface for integration of multiple tools used in
various phases.
4. Automatic code generators
5. Automated testing tools
Types of CASE Tools
• Upper CASE Tools
• Lower CASE Tools
• Integrated Case Tools (I –CASE) Tools
CASE Tool Types
Upper CASE Tools –
Designed to support the Analysis and Design phase of SDLC.
These are also known as front-end tools.
They support traditional diagrammatic languages such as ER diagrams,
Data flow diagram, Structure charts, Decision Trees, Decision tables,
etc.
Examples of CASE tools in this group are ERwin and Visual UML.
CASE Tool Types
Lower Case Tools –
• Designed to support the implementation, testing and maintenance
phase of SDLC
• Example – Code Generators.
• These are called back-end tools.
• Examples of CASE tools in this group are Ecore Diagram Editor and dzine.
CASE Tool Types
Integrated Tools –
• These tools support the entire life cycle, such as project management
and estimation of costs
• These are also known as I- CASE Tools.
• The example of CASE tool in this group is Rational Rose from IBM.
Advantages of CASE Tools

1. Improved Productivity
2. Better Documentation
3. Improved Accuracy
4. Intangible benefits
5. Improved Quality
6. Reduced Lifetime Maintenance
7. Reduced cost of Software
8. Produce high quality and consistent document
9. Easy to program software
10. Easy project management
11. An increase in project control through better planning,
monitoring and communication.
12. Help standardization of notations and diagrams
13. Reduction of time and effort
Disadvantages of CASE
• Purchasing is not an easy task – Cost is very high.
• Learning curve – Initial productivity may fall. Users may require extra
training to use CASE tools.
• Tool Mix – Proper selection of CASE tools should there to get
maximum benefit.
• May lead to restriction to the tool’s capabilities
Resource Allocation Model
• Putnam first studied the problem of what should
be a proper staffing pattern for software projects.
• He extended the work of Norden who had earlier
investigated the staffing pattern of research and
development (R&D) type of projects.
• In order to appreciate the staffing pattern of
software projects, Norden’s and Putnam’s results
must be understood.
Resource Allocation Model: Norden’s
Work
• Staffing pattern can be approximated by the Rayleigh distribution
curve.
• Norden represented the Rayleigh curve by the following equation:
E(t) = (K / td²) · t · e^(−t² / (2·td²))
where E is the effort required at time t.
• E is an indication of the number of engineers (or the staffing level) at any particular
time during the duration of the project,
• K is the area under the curve, and td is the time at which the curve attains its
maximum value.
• It must be remembered that the results of Norden are applicable to general R & D
projects and were not meant to model the staffing pattern of software development
projects.
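A short Python sketch of the Rayleigh staffing curve above (the K and td values are hypothetical); the curve rises, peaks exactly at t = td, then tails off:

```python
import math

def staffing_level(t, K, td):
    """Rayleigh curve: E(t) = (K / td**2) * t * exp(-t**2 / (2 * td**2))."""
    return (K / td**2) * t * math.exp(-t**2 / (2 * td**2))

K, td = 100.0, 12.0  # hypothetical: 100 PM total effort, peak at month 12
for month in (3, 6, 12, 24):
    print(month, round(staffing_level(month, K, td), 2))  # staffing peaks at t = td
```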
Resource Allocation Model: Norden’s Work
(Figure: the Rayleigh staffing curve.)
Resource Allocation Model :
Putnam’s Work
• Putnam studied the problem of staffing of software projects and found that the
software development has characteristics very similar to other R & D projects
studied by Norden and that the Rayleigh-Norden curve can be used to relate the
number of delivered lines of code to the effort and the time required to develop the
project.
• By analyzing a large number of army projects, Putnam derived the following
expression:
L = Ck · K^(1/3) · td^(4/3)
• The various terms of this expression are as follows:
• K is the total effort expended (in PM) in the product development and L is the
product size in KLOC.
• td corresponds to the time of system and integration testing. Therefore, td can be
approximately considered as the time required to develop the software.
Resource Allocation Model :
Putnam’s Work
• Ck is the state of technology constant and reflects constraints that impede the
progress of the programmer.
• Typical values of Ck = 2 for poor development environment (no methodology, poor
documentation, and review, etc.), Ck = 8 for good software development
environment (software engineering principles are adhered to), Ck = 11 for an
excellent environment (in addition to following software engineering principles,
automated tools and techniques are used).
• The exact value of Ck for a specific project can be computed from the historical
data of the organization developing it.
Resource Allocation Model:
Putnam’s Work
• Putnam suggested that optimal staff build-up on a project should follow the
Rayleigh curve.
• Only a small number of engineers are needed at the beginning of a project to
carry out planning and specification tasks.
• As the project progresses and more detailed work is required, the number of
engineers reaches a peak.
• After implementation and unit testing, the number of project staff falls.
• However, the staff build-up should not be carried out in large installments.
• The team size should either be increased or decreased slowly whenever
required to match the Rayleigh-Norden curve.
• Experience shows that a very rapid build up of project staff any time during
the project development correlates with schedule slippage.
Resource Allocation Model :
Putnam’s Work
• It should be clear that a constant level of manpower through out the project
duration would lead to wastage of effort and increase the time and effort
required to develop the product.
• If a constant number of engineers are used over all the phases of a project,
some phases would be overstaffed and the other phases would be
understaffed causing inefficient use of manpower, leading to schedule
slippage and increase in cost.
Resource Allocation Model :
Putnam’s Work
• Effect of schedule change on cost:
• By analyzing a large number of army projects, Putnam derived the
following expression:
K = L³ / (Ck³ · td⁴)
K is the total effort expended (in PM) in the product development,
L is the product size in KLOC,
td corresponds to the time of system and integration testing and
Ck is the state of technology constant and reflects constraints that impede
the progress of the programmer
Resource Allocation Model :
Putnam’s Work
• It can be easily observed that when the schedule of a project is compressed,
the required development effort as well as project development cost increases
in proportion to the fourth power of the degree of compression.
• It means that a relatively small compression in delivery schedule can result in
substantial penalty of human effort as well as development cost.
• For example, if the estimated development time is 1 year, then in order to
develop the product in 6 months, the total effort required to develop the
product (and hence the project cost) increases 16 times.
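This fourth-power penalty follows directly from rearranging Putnam’s expression to K = L³/(Ck³·td⁴). A minimal Python sketch (hypothetical L and Ck values) confirms the 16× figure:

```python
def putnam_effort(L, Ck, td):
    """Rearranged Putnam equation: K = L**3 / (Ck**3 * td**4)."""
    return L**3 / (Ck**3 * td**4)

L, Ck = 50.0, 8.0                   # hypothetical size (KLOC) and technology constant
full = putnam_effort(L, Ck, 1.0)    # planned schedule: 1 year
rushed = putnam_effort(L, Ck, 0.5)  # compressed schedule: 6 months
print(rushed / full)                # 16.0 -- the fourth-power penalty
```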
COCOMO Model
• COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e. number of Lines
of Code.
• It is a procedural cost estimate model for software projects and often used as a process of
reliably predicting the various parameters associated with making a project such as size, effort,
cost, time and quality.
• It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes
it one of the best-documented models.
• The key parameters which define the quality of any software products, which are also an
outcome of the COCOMO are primarily Effort & Schedule.
• Effort: Amount of labor that will be required to complete a task. It is measured in person-
months units.
• Schedule: Simply means the amount of time required for the completion of the job, which is,
of course, proportional to the effort put. It is measured in the units of time such as weeks,
months.
COCOMO Model
• Different models of COCOMO have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required.
• All of these models can be applied to a variety of projects, whose characteristics
determine the value of constant to be used in subsequent calculations. These
characteristics are pertaining to different system types.
• Basic COCOMO can be used for quick and slightly
rough calculations of Software Costs.
• Its accuracy is somewhat restricted due to the
absence of sufficient factor considerations.
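A minimal sketch of Basic COCOMO in Python, using the standard published basic-model coefficients for the three project modes (the 32 KLOC input is hypothetical):

```python
# (a, b, c, d) per project mode, from Boehm's basic model
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Effort E = a * KLOC**b (person-months); time D = c * E**d (months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b
    time = c * effort ** d
    return effort, time

effort, time = basic_cocomo(32, "organic")  # hypothetical 32 KLOC project
print(round(effort, 1), round(time, 1))
```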
Intermediate Model – However, in reality, no system’s effort and
schedule can be calculated solely on the basis of lines of code. Various
other factors, such as reliability, experience, and capability, must also be
considered. These factors are known as Cost Drivers, and the Intermediate
Model utilizes 15 such drivers for cost estimation.
Classification of Cost Drivers and their attributes:
The product of all effort multipliers results in an effort
adjustment factor (EAF). Typical values for EAF range
from 0.9 to 1.4.
Detailed COCOMO Model
• Detailed COCOMO incorporates all qualities of the standard version with an assessment
of the cost drivers effect on each method of the software engineering process.
• The detailed model uses various effort multipliers for each cost driver property.
• In detailed cocomo, the whole software is differentiated into multiple modules, and then
we apply COCOMO in various modules to estimate effort and then sum the effort.
• The five phases of detailed COCOMO are:
1. Planning and requirements
2. System structure
3. Complete structure
4. Module code and test
5. Integration and test
Detailed COCOMO = Intermediate COCOMO + assessment of Cost Drivers impact on each phase.
Detailed COCOMO Model
• Multiply all 15 Cost Drivers to get the Effort Adjustment Factor (EAF)
• E (Effort) = a_b · (KLOC)^(b_b) · EAF (in person-months)
• D (Development Time) = c_b · (E)^(d_b) (in months)
• Ep (phase effort) = µp · E (in person-months)
• Dp (phase development time) = τp · D (in months)
Detailed COCOMO : Example
Consider a project to develop a full screen editor. The major components
identified and their sizes are (i) Screen Edit – 4K (ii) Command Lang
Interpreter – 2K (iii) File Input and Output – 1K (iv) Cursor movement – 2K (v)
Screen Movement – 3K. Assume the Required software reliability is high,
product complexity is high, analyst capability is high & programming language
experience is low. Use COCOMO model to estimate cost and time for different
phases.
Size of modules: 4 + 2 + 1 + 2 + 3 = 12 KLOC [Organic]

Cost Driver | Very Low | Low | Nominal | High | Very High | Extra High
RELY | 0.75 | 0.88 | 1.00 | 1.15 | 1.40 | --
CPLX | 0.70 | 0.85 | 1.00 | 1.15 | 1.30 | 1.65
ACAP | 1.46 | 1.19 | 1.00 | 0.86 | 0.71 | --
LEXP | 1.14 | 1.07 | 1.00 | 0.95 | -- | --

EAF = RELY(high) × CPLX(high) × ACAP(high) × LEXP(low) = 1.15 × 1.15 × 0.86 × 1.07 = 1.2169
Example (Contd.)
Initial Effort (E) = a_b · (KLOC)^(b_b) · EAF = 3.2 × (12)^1.05 × 1.2169 = 52.9 person-months
Initial Development Time (D) = c_b · (E)^(d_b) = 2.5 × (52.9)^0.38 = 11.29 months

Phase values of µp and τp:

Mode | Plan & Reqr | System Design | Detail Design | Module Code & Test | Integration & Test
Organic Small, µp | 0.06 | 0.16 | 0.26 | 0.42 | 0.16
Organic Small, τp | 0.10 | 0.19 | 0.24 | 0.39 | 0.18

Phase-wise effort and development-time distribution:

Phase | E | D | Ep (person-months) | Dp (months)
Plan & Requirement | 52.9 | 11.29 | 0.06 × 52.9 = 3.17 | 0.10 × 11.29 = 1.12
System Design | 52.9 | 11.29 | 0.16 × 52.9 = 8.46 | 0.19 × 11.29 = 2.14
Detail Design | 52.9 | 11.29 | 0.26 × 52.9 = 13.74 | 0.24 × 11.29 = 2.70
Module Code & Test | 52.9 | 11.29 | 0.42 × 52.9 = 22.21 | 0.39 × 11.29 = 4.40
Integration & Test | 52.9 | 11.29 | 0.16 × 52.9 = 8.46 | 0.18 × 11.29 = 2.03
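The phase-wise computation above can be reproduced with a short Python sketch (same coefficients and fractions as the slides):

```python
kloc = 12
eaf = 1.15 * 1.15 * 0.86 * 1.07          # RELY * CPLX * ACAP * LEXP = 1.2169
E = 3.2 * kloc ** 1.05 * eaf             # about 52.9 person-months
D = 2.5 * E ** 0.38                      # about 11.29 months

mu  = {"Plan & Reqr": 0.06, "System Design": 0.16, "Detail Design": 0.26,
       "Module Code & Test": 0.42, "Integration & Test": 0.16}  # effort fractions
tau = {"Plan & Reqr": 0.10, "System Design": 0.19, "Detail Design": 0.24,
       "Module Code & Test": 0.39, "Integration & Test": 0.18}  # schedule fractions

for phase in mu:
    print(phase, round(mu[phase] * E, 2), round(tau[phase] * D, 2))
```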
COCOMO II: Categories of Software Development
1. End user programming: This category is applicable to small systems,
developed by the end user using application generators. End users may
write small programs using application generators.
2. Infrastructure sector: Software that provide infrastructure like OS ,
DBMS, networking systems etc. These developers generally have
good knowledge of software development.
3. Intermediate sectors: Divided into three sub categories.
1. Application generators and composition aids: Create largely
pre packaged capabilities for user programming.
2. Application composition sector: GUI’s , databases , domain
specific components such as financial , medical etc.
3. System integration: Large scale highly embedded systems
