
UNIT 01

1. Introduction to Software Engineering

Software Engineering is a branch of computer science that deals with the systematic development, operation, maintenance, and retirement of
software. It applies engineering principles to software development to create reliable and efficient software.

Key Goals:

High-quality software

Delivered on time

Within budget

Meets user requirements


Importance:

Reduces complexity through abstraction and modeling

Facilitates team collaboration

Enhances maintainability and scalability

2. Software Components

These are the building blocks of a software system. Major components include:

1. User Interface (UI) – How users interact with the software.
2. Business Logic – Core functionality and rules of the system.
3. Data Access Layer – Mechanism for accessing and managing data.
4. Database – Stores persistent data.
5. Middleware – Allows communication between components.

Optional:

Web Services/APIs

Security Modules

Logging and Error Handling Systems


3. Software Characteristics

Software has unique characteristics compared to physical products:

Intangibility: It cannot be touched or seen.

Complexity: Highly complex due to logic and conditions.

Invisibility: Hard to visualize the full structure.

Flexibility: Can be changed easily.

Maintenance-intensive: Needs regular updates and fixes.

No wear and tear: Software doesn't degrade physically.


4. Software Crisis

This refers to the set of problems faced during the early days of software development (1960s–70s), many of which remain relevant today.

Major Issues:

Projects running over time and budget

Poor software quality

Unmanageable complexity

Difficulty in maintenance

Lack of user satisfaction

Causes:
Unclear requirements

Lack of structured development processes

Poor project management

Inadequate testing

5. Software Engineering Processes

A software process is a structured set of activities required to develop a software system.

Main Phases:
1. Requirements Gathering
2. System Design
3. Implementation (Coding)
4. Testing
5. Deployment
6. Maintenance

Common Models:

Waterfall Model

Agile Development
Iterative Model

Spiral Model

V-Model

Each has advantages depending on the project type and requirements.

6. Similarities and Differences from Conventional Engineering


Aspect | Software Engineering | Conventional Engineering
Tangibility | Intangible (code) | Tangible (physical objects)
Reusability | High (code/modules) | Low (specific parts)
Maintenance | Frequent, critical | Periodic, predictable
Testing | Simulated/virtual | Real-world environment
Failure | Often logical (bugs) | Often physical (wear, break)

Similarities:
Use of structured processes

Emphasis on quality

Involves planning, design, implementation, and maintenance

Requires teamwork and management

7. Software Quality Attributes

These are non-functional requirements that determine how well the software performs.

Key Attributes:
1. Correctness – Meets the specified requirements.
2. Reliability – Performs consistently over time.
3. Efficiency – Uses resources optimally (CPU, memory).
4. Usability – Easy to understand and operate.
5. Maintainability – Easy to fix and upgrade.
6. Portability – Works across different environments.
7. Scalability – Performs well under increased load.
8. Security – Resists unauthorized access and attacks.
9. Reusability – Components can be reused in other projects.

Software Development Life Cycle (SDLC)
The SDLC is a structured framework that describes the phases involved in the development of software from initial feasibility study to
maintenance. Each model within SDLC provides a different approach to software development.

1. Waterfall Model

Definition:
The Waterfall Model is a linear and sequential approach where each phase must be completed before the next one begins.

Phases:
1. Requirement Analysis
2. System Design
3. Implementation
4. Integration and Testing
5. Deployment
6. Maintenance

Advantages:

Simple and easy to understand.

Well-structured documentation.

Ideal for smaller projects with clearly defined requirements.


Disadvantages:

Difficult to accommodate changes once development starts.

Late detection of errors (during testing).

Not suitable for complex or object-oriented projects.

2. Prototype Model

Definition:
This model involves building a working prototype of the system early in the process to help understand and refine requirements.

Steps:
1. Requirements gathering
2. Quick design
3. Build prototype
4. User evaluation
5. Refinement
6. Final system development

Advantages:

Better requirement understanding.

Reduces risk of system failure.


User feedback is integrated early.

Disadvantages:

Can be time-consuming.

May lead to false expectations.

Poor prototype may lead to poor final system.

3. Spiral Model

Definition:
The Spiral Model, proposed by Barry Boehm, combines the features of the Waterfall and Prototype models with an emphasis on risk analysis.

Phases in Each Spiral Loop:
1. Determine objectives and constraints
2. Identify and resolve risks
3. Develop and verify deliverables
4. Plan the next iteration

Advantages:

Strong risk management.

Suitable for large, complex, and high-risk projects.

Continuous refinement through iterations.


Disadvantages:

Expensive due to risk analysis.

Requires skilled risk assessors.

Not suitable for small projects.

4. Evolutionary Development Models

Definition:
This model focuses on developing the system in small pieces (increments) with frequent user interaction. Each release is a working version.

Types:

Incremental Model: Software is built and delivered in increments.


Concurrent Development Model: Simultaneous development of components.

Advantages:

Quick delivery of initial version.

Easier to manage changing requirements.

Continuous user involvement.

Disadvantages:

Integration may become complex.

Requires good planning and design.


5. Iterative Enhancement Model

Definition:
In this model, the system is developed in repeated cycles (iterations). Each iteration includes design, development, and testing.

Process:
1. Initial planning
2. Design and develop a small module
3. Test and evaluate
4. Improve in next iteration

Advantages:
Continuous improvement.

Early delivery of partial systems.

Easy to incorporate user feedback.

Disadvantages:

Requires more time and resources.

Incomplete or poorly planned iterations can create confusion.

Summary Table
Model | Best For | Key Feature | Main Drawback
Waterfall | Simple, well-defined projects | Linear, sequential phases | Inflexible to changes
Prototype | Projects with unclear requirements | Early feedback from users | May lead to poor design
Spiral | Large, high-risk projects | Risk analysis + iterative design | Costly and complex
Evolutionary Development | Projects needing early versions | Frequent updates, user input | Integration challenges
Iterative Enhancement | Complex systems with evolving features | Repeated improvement | Scope creep risk

UNIT 02

1. Software Requirement Specifications (SRS)


Definition:
The SRS is a formal document that describes the expected behavior of a software system. It outlines functional and non-functional
requirements, interfaces, and constraints.

Contents of an SRS:
1. Introduction
   - Purpose
   - Scope
   - Definitions, acronyms
2. Overall Description
   - Product perspective
   - Product functions
   - User characteristics
3. Functional Requirements
   - What the system should do (e.g., login, register, generate reports)
4. Non-functional Requirements
   - Usability, performance, security, reliability, etc.
5. External Interfaces
   - Hardware, software, user, communication interfaces
6. Constraints
   - Regulatory policies, hardware limitations


Importance of SRS:

Acts as a contract between stakeholders and developers

Serves as a reference for validation and verification

Helps in design, development, and testing

2. Requirement Engineering Process


Definition:
Requirement Engineering (RE) is a process of defining, documenting, and maintaining requirements in the software development process.

Phases of RE:
a. Elicitation (Gathering Requirements)

Goal: Understand what the users want.

Techniques:

Interviews: One-on-one with stakeholders

Questionnaires: For distributed users

Workshops: Brainstorming sessions

Observation: Watching how users work

Prototyping: Creating sample interfaces

Challenges:
Users may not know what they want

Communication barriers

Unclear goals

b. Analysis

Goal: Organize and model the gathered information, remove conflicts and ambiguities.

Activities:

Detecting conflicts in requirements

Classifying and prioritizing requirements


Modeling system behavior using DFDs, ERDs, etc.

Tools:

Data Flow Diagrams (DFDs)

Entity Relationship Diagrams (ERDs)

Use case diagrams

c. Documentation

Goal: Formally write down all functional and non-functional requirements in the SRS document.

Tips:
Use clear, unambiguous language

Include diagrams, tables

Structure requirements logically

d. Review

Goal: Ensure the documented requirements are complete, correct, and understandable.

Activities:

Peer reviews

Walkthroughs with stakeholders


Checklist-based validation

e. Management of User Needs (Requirement Management)

Goal: Handle changes to requirements during the project life cycle.

Tasks:

Version control of requirements

Tracking requirement changes

Impact analysis of changes

Re-prioritization
Why it’s needed:

Requirements evolve due to changing business needs or technology

3. Feasibility Study
Definition:
A feasibility study evaluates a project’s potential for success before it is developed.

Types:
1. Technical Feasibility – Is the technology available and practical?
2. Economic Feasibility – Is the project cost-effective?
3. Operational Feasibility – Will the system work in the intended environment?
4. Legal Feasibility – Are there any legal issues (licensing, data privacy)?
5. Schedule Feasibility – Can it be done in the given time?

Output: A feasibility report guiding the decision to proceed.

4. Information Modelling
Definition:
Information modeling is about representing system data, structure, and flow visually to help understand the system requirements.

Used for:

Clarifying requirements
Identifying entities, attributes, relationships

Supporting system design

5. Data Flow Diagrams (DFDs)


Definition:
A DFD is a graphical representation of the flow of data in a system.

Components:
1. Process – A function or activity (circle)
2. Data Flow – Movement of data (arrow)
3. Data Store – Where data is stored (open-ended rectangle)
4. External Entity – Source or destination of data (rectangle)

Levels:

Level 0 (Context Diagram) – Overview of the system

Level 1 and beyond – More detailed breakdowns

Purpose:

Understand and analyze how information moves through a system

6. Entity Relationship Diagrams (ERDs)


Definition:
ERDs show entities, their attributes, and relationships between them.

Components:
1. Entity – Object or concept (rectangle)
2. Attribute – Property of an entity (oval)
3. Relationship – Association between entities (diamond)
4. Primary Key – Unique identifier for an entity

Used For:

Database design

Understanding data requirements


7. Decision Tables
Definition:
A decision table is a tabular method for representing complex decision logic based on different conditions.

Structure:

Conditions – Inputs (yes/no or values)

Actions – Outputs or operations

Rules – Combination of condition values triggering actions

Example:

Condition A | Condition B | Action X | Action Y
Yes | No | Yes | No
No | Yes | No | Yes

Benefits:

Clear decision-making logic

Reduces ambiguity

Easy to automate rules

1. RS Document (Requirements Specification Document)


The RS Document (commonly referred to as the SRS – Software Requirements Specification) is a comprehensive description of the intended
purpose and environment for software under development.
Key Sections in an RS/SRS Document:
1. Introduction
   - Purpose
   - Scope
   - Definitions
2. Overall Description
   - Product perspective and functions
   - User needs
   - Assumptions and dependencies
3. Functional Requirements
   - Describes system features and behavior
4. Non-functional Requirements
   - Performance, security, usability, etc.
5. External Interfaces
   - Communication with hardware, users, and software
6. Design Constraints
   - Regulatory, hardware, or compliance constraints

2. IEEE Standards for SRS (IEEE 830 Standard)


The IEEE 830 is the most widely used standard for writing SRS documents. It ensures consistency, clarity, and completeness.

Key Elements Suggested by IEEE 830:

Correctness – All requirements accurately represent user needs.

Unambiguity – Only one interpretation is possible.


Completeness – No missing functionalities or requirements.

Consistency – No conflicting requirements.

Verifiability – Each requirement must be testable.

Modifiability – Easy to update and maintain.

Traceability – Can trace each requirement to its source and through development stages.

IEEE 830 was superseded by IEEE 29148, which is part of ISO/IEC/IEEE 29148:2018 (modern integrated standard for requirements).

3. Software Quality Assurance (SQA)


Definition:
SQA is a set of activities for ensuring the quality of software products and processes throughout the software development lifecycle.
a. Verification and Validation (V&V)

Verification:

Ensures the product is built correctly (i.e., follows the design and specification).

Activities: Reviews, inspections, walkthroughs

Question answered: “Are we building the product right?”

Validation:

Ensures the product meets user needs and expectations.

Activities: Testing (unit, integration, system)


Question answered: “Are we building the right product?”

b. SQA Plans

An SQA Plan outlines the process, tools, and responsibilities for achieving quality goals.

Key Elements:
1. Purpose and scope
2. Standards and procedures
3. SQA tasks and responsibilities
4. Review and audit procedures
5. Test plans and strategies
6. Problem reporting and corrective action
7. Tools, methods, and techniques
8. SQA reporting and records

c. Software Quality Frameworks

These frameworks provide structured methods and standards to ensure software quality:

Popular Frameworks:

IEEE SQA Standards


ISO 9000

CMM/CMMI

Six Sigma

Total Quality Management (TQM)

They guide how organizations plan, monitor, and improve their quality processes.

d. ISO 9000 Models

ISO 9000 is a family of quality management standards that apply across industries, including software.

Key Concepts:

Focus on customer satisfaction


Emphasis on process standardization and documentation

Continuous improvement through the Plan-Do-Check-Act (PDCA) cycle

In Software:

ISO 9001 is used for Quality Management Systems

Encourages well-documented, repeatable development processes

Benefits:

International recognition

Better process control


Reduced risk and rework

e. SEI-CMM Model (Capability Maturity Model)

Developed by the Software Engineering Institute (SEI) at Carnegie Mellon University.

Purpose:

To evaluate and improve the maturity of software development processes in organizations.

CMM Levels:

Level | Maturity Stage | Description
1 | Initial | Ad hoc, chaotic processes
2 | Repeatable | Basic project management
3 | Defined | Standardized software processes
4 | Managed | Metrics-driven process control
5 | Optimizing | Continuous process improvement
Comparison: ISO 9000 vs SEI-CMM

Feature | ISO 9000 | SEI-CMM
Scope | All industries | Software industry
Focus | Quality assurance | Process maturity
Certification Level | Organization | Process capability level
Nature | Compliance to standards | Capability and improvement

UNIT 03

1. Basic Concept of Software Design


Definition:

Software design is the process of transforming user requirements (from the SRS) into a suitable form, which helps in implementation and
coding. It defines how the system will be built.

Key Objectives:
Satisfy requirements

Improve maintainability

Enhance performance and reliability

Minimize complexity

Types of Design:

Architectural Design (High-Level Design)

Detailed Design (Low-Level Design)


2. Architectural Design (High-Level Design)
Definition:

Architectural design defines the overall structure of the system, the main components (modules), and how they interact.

Focus:

System decomposition into major modules

Interaction among modules

Use of design patterns (e.g., MVC, layered architecture)

Artifacts Produced:

Block diagrams
Subsystems

Component interfaces

Control/data flow diagrams

Examples:

Client-server architecture

Microservices architecture

Layered (e.g., Presentation, Business, Data)


3. Low-Level Design (Detailed Design)
Definition:

Low-Level Design (LLD) describes the internal logic of each module or component defined in the high-level design.

Focus:

Internal workflows

Data structures

Algorithms

Function definitions

4. Modularization
Definition:

Modularization is the process of dividing a system into independent modules to reduce complexity and enhance reusability.

Benefits:

Easier to test and maintain

Promotes separation of concerns

Encourages reuse

Principles:

Single Responsibility Principle (SRP)


Modules should have high cohesion and low coupling

5. Design Structure Charts (DSCs)


Definition:

DSCs are hierarchical diagrams that show the modules in a system and their calling relationships.

Components:

Rectangles: Modules

Arrows/lines: Control or data flow

Conditions/Loops: Indicated by annotations


Use:

Visualize how modules interact

Understand module hierarchy

Guide development and testing

6. Pseudo Code
Definition:

Pseudo code is an informal high-level description of an algorithm that uses the structural conventions of programming without adhering to
syntax rules.

Purpose:
Helps plan logic before coding

Easier for non-programmers to understand

Used for algorithm explanation

Example:

Function Factorial(n)
    if n == 0 then
        return 1
    else
        return n * Factorial(n-1)
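
The same algorithm as runnable Python, for comparison:

def factorial(n):
    """Recursive factorial, mirroring the pseudo code above."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120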

7. Flow Charts
Definition:

A flowchart is a graphical representation of a step-by-step process, showing decisions, loops, and actions.
Symbols Used:

Symbol | Meaning
Oval | Start/End
Rectangle | Process Step
Diamond | Decision
Arrow | Flow direction

Benefits:

Easy to understand logic flow

Useful for debugging

Visual communication tool

8. Coupling and Cohesion


These are key software quality metrics that measure modular design quality.

Cohesion (Intra-module):

Definition:

How closely related and focused the responsibilities of a single module are.

Types of Cohesion (Best to Worst):
1. Functional Cohesion – One well-defined task (Best)
2. Sequential – Output of one part is input to another
3. Communicational – Operate on same data
4. Procedural – Perform steps in a specific order
5. Temporal – Tasks done at the same time
6. Logical – Based on a control flag
7. Coincidental Cohesion – Random grouping (Worst)

Coupling (Inter-module):

Definition:

The degree of interdependence between modules.

Types of Coupling (Worst to Best):
1. Content Coupling – One module modifies internal data of another (Worst)
2. Common Coupling – Modules share global data
3. Control Coupling – One module controls flow of another
4. Stamp Coupling – Passes a data structure
5. Data Coupling – Only necessary data is passed (Best)

Goal in Design:

High cohesion

Low coupling
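
A small illustrative Python sketch of the difference (the billing functions and the tax-rate value are hypothetical):

# Common coupling (avoid): modules communicate through shared global state.
tax_rate = 0.18  # global shared by many modules

def total_with_global(amount):
    return amount * (1 + tax_rate)  # hidden dependency on the global

# Data coupling (prefer): only the data that is needed is passed in.
def total_with_param(amount, rate):
    return amount * (1 + rate)  # explicit, testable, reusable

print(total_with_param(100, 0.18))  # 118.0
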
1. Function-Oriented Design
Definition:

Function-Oriented Design is a strategy where software is decomposed based on functions or procedures. It focuses on what the system does
(i.e., behavior or transformations of data).

Key Features:

Decomposes the system into functional modules

Each function represents a major processing step

Emphasizes data flow between functions


Uses tools like Data Flow Diagrams (DFDs) and Structure Charts

Process:
1. Start with high-level functions (e.g., “process transaction”)
2. Decompose into sub-functions (e.g., “validate input”, “calculate bill”)
3. Continue until low-level, implementable functions are reached

Advantages:

Simple and easy to understand for small systems


Good for data transformation applications

Disadvantages:

Difficult to modify or extend

Poor encapsulation

Not suitable for complex, event-driven systems

2. Object-Oriented Design (OOD)


Definition:

Object-Oriented Design focuses on modeling software based on real-world entities (objects), which have attributes (data) and methods
(behavior).
Key Concepts:

Class: A blueprint for objects

Object: An instance of a class

Encapsulation: Bundling of data and methods

Inheritance: Reuse of common features

Polymorphism: Unified interface for different types

Design Process:
1. Identify objects from requirements
2. Group similar objects into classes
3. Define relationships (inheritance, association)
4. Design interactions using UML (Unified Modeling Language)

Advantages:

Promotes modularity and reuse

Easier to maintain and extend

Closer to real-world modeling


Disadvantages:

More complex initially

Requires careful design to avoid overengineering

3. Top-Down Design
Definition:

Top-Down Design is a stepwise refinement approach where the system is broken down from the highest level into smaller components.

Process:
1. Start with the main system goal or module
2. Decompose it into subsystems or sub-functions
3. Repeat until each module can be implemented directly

Advantages:

Clear structure and planning

Easy to assign modules to teams

Better for understanding overall logic

Disadvantages:
Lower-level issues may be overlooked initially

Implementation delays if lower modules are unclear

4. Bottom-Up Design
Definition:

Bottom-Up Design starts from basic, reusable components and builds up to create larger systems.

Process:
1. Identify reusable modules (e.g., authentication, database access)
2. Integrate them to form higher-level functionalities
3. Assemble the complete system from these blocks

Advantages:

Promotes reuse

Good for using existing libraries

Faster for prototype development

Disadvantages:

Lacks initial system overview


Integration may be difficult if components don’t fit well

1. Software Measurement and Metrics: Introduction


Software Measurement:

It is the process of quantifying properties of software or its specifications to assess productivity, quality, cost, and performance.

Software Metrics:

Quantitative measures used to estimate various aspects of software development and maintenance such as:

Size
Complexity

Effort

Reliability

Maintainability

2. Size-Oriented Metrics
These metrics are based on the size of the software — usually measured in terms of:

Lines of Code (LOC)

Number of operators and operands


Function Points (FP)

2.1 Halstead Software Science (Maurice Halstead)


It is based on lexical properties of the software (like operators and operands).

Basic Terms:

n1: Number of distinct operators

n2: Number of distinct operands

N1: Total number of operators

N2: Total number of operands


Formulas:

Program Vocabulary:
n = n1 + n2

Program Length:
N = N1 + N2

Volume:
V = N * log2(n)

Difficulty:
D = (n1 / 2) * (N2 / n2)

Effort:
E = D * V

Time Required to Program:
T = E / 18 (seconds)

Estimated Bugs:
B = E^(2/3) / 3000
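
These formulas are easy to apply mechanically. A small Python sketch (the operator/operand counts below are made up for illustration):

import math

def halstead(n1, n2, N1, N2):
    """Compute Halstead metrics from operator/operand counts."""
    n = n1 + n2               # program vocabulary
    N = N1 + N2               # program length
    V = N * math.log2(n)      # volume
    D = (n1 / 2) * (N2 / n2)  # difficulty
    E = D * V                 # effort
    T = E / 18                # time to program, in seconds
    B = E ** (2 / 3) / 3000   # estimated delivered bugs
    return {"n": n, "N": N, "V": V, "D": D, "E": E, "T": T, "B": B}

# Hypothetical counts for a small program:
print(halstead(n1=10, n2=7, N1=25, N2=18))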

Usefulness:

Useful for estimating effort and complexity

Language-independent (to some extent)

2.2 Function Point (FP) Metrics


Developed by: Allan Albrecht (IBM)
Used to measure the functionality provided by the software from the user's point of view, not the developer's.

Components Counted:

Component | Weight (Low/Avg/High)
External Inputs (EI) | 3 / 4 / 6
External Outputs (EO) | 4 / 5 / 7
External Inquiries (EQ) | 3 / 4 / 6
Internal Logical Files (ILF) | 7 / 10 / 15
External Interface Files (EIF) | 5 / 7 / 10

Steps to Calculate FP:
1. Count functional components (EI, EO, EQ, ILF, EIF) and assign weights
2. Compute Unadjusted Function Point (UFP) = sum of all (count × weight)
3. Calculate Technical Complexity Factor (TCF) from 14 general system characteristics (like performance, usability, etc.)
4. Final FP = UFP × TCF
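
A minimal Python sketch of the calculation, assuming average weights from the table above and the commonly used TCF formula 0.65 + 0.01 × (sum of the 14 ratings, each 0–5); the component counts are hypothetical:

# Average weights from the component table above.
AVG_WEIGHT = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_ratings):
    """counts: component name -> count; gsc_ratings: 14 ratings, each 0-5."""
    ufp = sum(counts[c] * AVG_WEIGHT[c] for c in counts)
    tcf = 0.65 + 0.01 * sum(gsc_ratings)  # common TCF formula
    return ufp * tcf

# Hypothetical project, with all 14 characteristics rated a neutral 3:
counts = {"EI": 10, "EO": 8, "EQ": 6, "ILF": 4, "EIF": 2}
print(function_points(counts, [3] * 14))  # UFP = 158, TCF = 1.07 -> about 169.1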

Advantages:
Independent of programming language

Based on user requirements

Useful for estimating cost early

3. Cyclomatic Complexity
Introduced by: Thomas McCabe
It measures the logical complexity of a program based on the control flow graph (CFG).

Control Flow Graph (CFG):

A graphical representation of all paths that might be traversed through a program during its execution.
Nodes: Represent program statements or blocks

Edges: Represent control flow

Cyclomatic Complexity (V(G)) Formula:

1. V(G) = E - N + 2P, where:
   E = number of edges
   N = number of nodes
   P = number of connected components (usually 1 for a single program)

2. Or directly:
   V(G) = number of decision points + 1
   (e.g., if, while, for, case)

Interpretation:

Cyclomatic Complexity | Meaning
1–10 | Simple, easy to test
11–20 | Moderate complexity
21–50 | Complex, needs review
>50 | Very complex, high risk

Example:

For a program with 3 if conditions:
Cyclomatic Complexity = 3 (decisions) + 1 = 4
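
A concrete illustration of the decision-points rule (the function is hypothetical):

def classify(score):
    # Three decision points (the three ifs) -> V(G) = 3 + 1 = 4,
    # so at least four independent paths need test cases.
    if score < 0:
        return "invalid"
    if score >= 90:
        return "A"
    if score >= 50:
        return "pass"
    return "fail"

# Four tests, one per independent path:
for s in (-1, 95, 60, 30):
    print(s, classify(s))
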
UNIT 04

1. Testing Objectives
Software testing aims to:

Verify the software works according to the requirements

Detect defects early to prevent failures

Ensure the software is reliable, secure, and efficient

Assess if the software is ready for release

Provide confidence in software quality


Testing is not just about finding bugs — it’s also about ensuring the software performs its intended function correctly and efficiently.

2. Unit Testing
Definition:

Unit testing involves testing individual units or components (e.g., functions, classes, or methods) in isolation.

Goal:

Ensure each unit performs as expected and handles all edge cases.

Who Performs It?

Typically done by developers during the development phase.

Tools Used:

JUnit (Java)
NUnit (.NET)

PyTest (Python)
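
For example, a minimal PyTest-style test (the function and file name are hypothetical):

# test_calculator.py -- run with: pytest test_calculator.py
def add(a, b):
    """Unit under test (hypothetical)."""
    return a + b

def test_add_positive():
    assert add(2, 3) == 5

def test_add_edge_cases():
    assert add(0, 0) == 0
    assert add(-1, 1) == 0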

Advantages:

Early detection of bugs

Easier debugging (since scope is small)

Helps with code refactoring

3. Integration Testing
Definition:

Combines and tests interacting modules to ensure they work together properly.

Goal:

Detect interface defects and communication errors between modules.

Types:

Top-down integration: Test from the main module downward using stubs.

Bottom-up integration: Start from lower modules upward using drivers.

Big Bang: Combine all at once and test (least structured).

Sandwich/Hybrid: Mix of top-down and bottom-up.


Tools:

JUnit with mock objects

Selenium (for web integration)

4. Acceptance Testing
Definition:

Conducted to validate the software against business requirements and ensure it’s ready for delivery to the client.

Who Performs It?

Usually by end-users or clients, sometimes QA teams.

Types:
Alpha testing: Done in-house by potential users

Beta testing: Done by end-users in a real environment

Goal:

Confirm the system satisfies the business need

Make release decisions

5. Regression Testing
Definition:

Done after changes (bug fixes, enhancements) to ensure new changes don’t affect existing functionality.
Why Important?

Code changes may unintentionally break working features

Performed When?

After bug fixes

After enhancements or refactoring

Before releases

Tools:

Selenium
QTP/UFT

TestNG

JUnit (automated tests)

6. Testing for Functionality


Definition:

Checks if the software performs its intended operations correctly.

Includes:

Input validation
Output correctness

UI behavior

Business logic correctness

Examples:

Logging in with valid/invalid credentials

Calculating correct bill totals

Tools:

Manual functional test cases


Automated: Selenium, Cucumber, QTP

7. Testing for Performance


Definition:

Checks how the system performs in terms of speed, responsiveness, stability, and scalability under expected or peak loads.

Types:

Load Testing: Normal load conditions

Stress Testing: Beyond normal limits

Volume Testing: Large data volumes

Spike Testing: Sudden traffic increases


Metrics Measured:

Response time

Throughput

CPU/memory usage

Tools:

Apache JMeter

LoadRunner

NeoLoad

Summary Table

Type of Testing | Performed By | Purpose | Level
Unit Testing | Developer | Test individual components | Code-level
Integration Testing | Developer/Tester | Test interactions between modules | Module-level
Acceptance Testing | Client/User | Validate software meets requirements | System-level
Regression Testing | QA/Test Automation | Verify new changes don’t break code | All levels
Functionality Testing | QA/Testers | Test functional correctness | System/module
Performance Testing | Performance Testers | Test non-functional performance | System-level

1. Top-Down and Bottom-Up Testing Strategies


These are integration testing strategies, used to test how individual modules of software interact with each other.
1.1 Top-Down Testing

Definition:
Integration starts from the top-level modules (main control modules) and proceeds downward.

How It Works:

Use test stubs to simulate lower modules that are not yet integrated.

Each level is tested one at a time, replacing missing modules with stubs.

Example:
If you have modules A → B → C (A calls B, and B calls C):

Test A first.

Use stubs for B and C if not available.


Gradually replace stubs with actual code.

Advantages:

Errors in high-level logic found early.

No need to wait for all modules.

Disadvantages:

Lower-level functions may not be tested thoroughly early on.

1.2 Bottom-Up Testing

Definition:
Integration begins with low-level modules, progressing upward to higher modules.

How It Works:

Use test drivers to simulate upper-level modules.

Once lower-level modules are tested, they are integrated upward.

Example:
Test C first, using a test driver that simulates B or A.

Advantages:

Lower-level utility functions are tested early.

No need to write stubs.

Disadvantages:
High-level logic and flow remain untested until later.

2. Test Drivers and Test Stubs


These are temporary code components used during integration testing.

Component | Used In | Purpose
Test Driver | Bottom-Up | Simulates a calling module
Test Stub | Top-Down | Simulates a called module

Example:

If function A calls B, and B is not ready:

Use a stub to mimic B's response (in top-down testing).


If B is ready but A is not, use a driver to call B (bottom-up).
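
A minimal Python sketch of both ideas (the module names are hypothetical):

# Stub (top-down): a placeholder for module B while testing A.
def b_stub(data):
    return "fixed response"  # canned result instead of real B logic

def module_a(data, b=b_stub):
    return "A saw: " + b(data)

print(module_a("input"))  # exercises A with B stubbed out

# Driver (bottom-up): throwaway code that calls the real B directly.
def module_b(data):
    return data.upper()

def b_driver():
    assert module_b("ok") == "OK"  # drives B without waiting for A

b_driver()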

3. Structural Testing (White Box Testing)


Definition:
Tests the internal logic and structure of the code.

Key Techniques:

Statement Coverage – Each line of code is executed.

Branch Coverage – Each decision (e.g., if/else) is tested both ways.

Path Coverage – All possible paths through the program are tested.

Loop Testing – Checks boundary conditions of loops.


Characteristics:

Requires code knowledge

Done by developers

Test cases derived from code logic

Tools:

JUnit (Java)

Code coverage tools: JaCoCo, Cobertura
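
A small sketch of the difference between statement and branch coverage (the discount function is hypothetical):

def apply_discount(price, is_member):
    if is_member:
        price = price - 10  # flat member discount (illustrative)
    return price

# One test with is_member=True executes every statement (100% statement
# coverage) but only one branch; branch coverage also needs the False case.
assert apply_discount(100, True) == 90
assert apply_discount(100, False) == 100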


4. Functional Testing (Black Box Testing)
Definition:
Tests the software without knowledge of internal code. Focuses on input/output behavior.

Techniques:

Equivalence Partitioning

Boundary Value Analysis

Decision Table Testing

State Transition Testing

Characteristics:
Tester only knows what the software should do, not how it works.

Done by QA/testers

Test cases based on requirements

Example:

For a login system:

Test valid and invalid inputs

Don't care about actual code — only the output
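
As an illustration, boundary value analysis on a hypothetical input field that accepts ages 18–60 produces test data at and around each boundary:

# Valid range (hypothetical requirement): 18 <= age <= 60.
def is_valid_age(age):
    return 18 <= age <= 60

# Boundary value analysis: just below, at, and just above each boundary.
boundary_cases = [17, 18, 19, 59, 60, 61]
for age in boundary_cases:
    print(age, is_valid_age(age))
# Expected: 17 False, 18 True, 19 True, 59 True, 60 True, 61 False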

5. Test Data Suite Preparation


Definition:
Creating a set of inputs and expected outputs to test specific functionality.

Steps:
1. Analyze Requirements
2. Identify Inputs (valid & invalid)
3. Define Expected Outputs
4. Organize Tests (positive, negative, boundary cases)
5. Use tools or write scripts to generate test data (for large-scale testing)

Types of Test Data:

Normal data – expected typical inputs


Boundary data – edge values

Invalid data – to test robustness

Stress data – large volume

6. Alpha and Beta Testing of Products


These are types of acceptance testing done before final release.

6.1 Alpha Testing

Where: At developer’s site, in a controlled environment


By Whom: Internal testers or select users
Characteristics:

Done near end of development

Simulates real users

Bugs found are fixed immediately

Example: A gaming company internally tests a game with employees before external release.

6.2 Beta Testing

Where: At customer’s site


By Whom: Real end-users

Characteristics:
Real-world environment

Collects user feedback, bug reports, performance data

Results used to make final adjustments

Example: A mobile app released in beta on Play Store to gather user feedback.

Summary Table
Testing Type | What It Does | Who Performs It
Top-Down Testing | Integrates top modules first | Developers/Testers
Bottom-Up Testing | Integrates bottom modules first | Developers/Testers
Test Drivers | Simulate calling modules | Developers
Test Stubs | Simulate called modules | Developers
White Box Testing | Internal logic testing | Developers
Black Box Testing | Requirement-based testing | Testers/QA
Test Data Suite | Inputs to test software | Testers
Alpha Testing | In-house pre-release testing | Internal Users
Beta Testing | Real-world external testing | End Users/Clients

1. Static Testing Strategies


Definition:
Static testing involves reviewing the software artifacts (e.g., requirements, design, code, documentation) without executing the code. It helps
detect errors early in the development process and improves software quality by focusing on correctness, consistency, and compliance.

Why Static Testing?

Early defect detection (before code execution)

Cost-effective: Catching issues early is cheaper than fixing them later

Improves quality by identifying issues related to logic, design, documentation, etc.


2. Formal Technical Reviews (Peer Reviews)
Definition:

A formal peer review where a team of reviewers assesses the software artifacts (code, design, requirements) against predefined criteria. The
focus is on identifying defects, issues with design or functionality, and improving the overall quality of the project.

Steps:
1. Preparation: The author (developer/creator) provides the artifact (e.g., code) to reviewers beforehand.
2. Review Meeting: Reviewers meet to discuss issues and concerns.
3. Defect Identification: Reviewers check for errors, missing requirements, or inconsistencies.
4. Defect Resolution: After the review, the author addresses the issues identified by the team.
5. Documentation: A report is prepared listing issues and decisions made during the review.

Key Characteristics:

Formal: Structured process with documentation

Collaborative: Involves peers from various domains (designers, developers, testers)

Quality Improvement: Focuses on improving software, not blaming individuals

Advantages:

Detects defects early

Prevents defect propagation to later stages


Improves communication between team members

3. Walkthrough
Definition:

A walkthrough is an informal review process where the author of the artifact presents it to a group, explaining the logic, design, or code. The
goal is to get feedback and identify potential issues.

Steps:
1. Preparation: The author prepares the document or code to be walked through.
2. Presentation: The author walks through the material with the team, explaining the approach, logic, and design.
3. Feedback: Reviewers ask questions and provide suggestions for improvements or point out issues.
4. Resolution: The author incorporates feedback and resolves any issues raised.

Key Characteristics:

Informal: Less structured than a formal review, but still valuable

Collaborative: Involves discussion and feedback

Focus on Understanding: Helps the author and team understand the material better

Advantages:

Helps in building a shared understanding of the software among the team

Low-cost, early feedback


Identifies inconsistencies or misunderstandings in requirements or design

4. Code Inspection
Definition:

A code inspection is a formal and systematic process where a team of reviewers examines the source code to identify defects, violations of
coding standards, and opportunities for improvement. It is a detailed and structured activity.

Steps:
1. Planning: The team identifies the code to be inspected and prepares an inspection checklist.
2. Pre-Inspection: The author distributes the code and related documentation (e.g., design, specifications) to the review team.
3. Inspection Meeting: Reviewers check the code against predefined criteria (e.g., correctness, adherence to standards, potential bugs).
4. Defect Identification: The reviewers identify any issues in the code and suggest improvements.
5. Post-Inspection: The issues are addressed by the author, and a report is created summarizing the findings.

Key Characteristics:

Formal: A structured process with predefined roles and procedures

Detailed: Focuses on individual lines of code and internal logic

Defect Prevention: The goal is to identify potential defects early, before execution

Advantages:

Catches subtle issues, such as logic flaws or violations of coding standards


Enhances code quality and maintainability

Increases understanding of the code among team members

5. Compliance with Design and Coding Standards


Definition:

Ensuring the software adheres to established design principles and coding standards to maintain consistency, readability, maintainability, and
quality.

Design Standards:

Design standards dictate how software should be structured, including:

Modularization: Ensuring components are well-defined and reusable.


Separation of Concerns: Each component should address a single responsibility.

Consistency: Consistent naming conventions, data structures, and methods across the design.

Coding Standards:

Coding standards define how code should be written to ensure readability and maintainability. These include:

Naming Conventions: Standardizing variable, function, and class names.

Indentation and Formatting: Following consistent indentation and line breaks for readability.

Commenting: Providing clear comments to explain code, especially complex logic.

Error Handling: Ensuring errors are handled appropriately and consistently.


Code Complexity: Keeping code complexity manageable (e.g., avoiding large functions or excessive nested loops).

How It Works:

Regular code reviews and inspections help ensure compliance with the standards.

Automated tools can be used to check coding standards (e.g., linters, static code analysis tools).

Advantages:

Ensures consistency and uniformity in the codebase

Improves readability and maintainability

Reduces the risk of defects caused by poor coding practices


Comparison Table: Static Testing Strategies
Strategy | Formal | Reviewers Involved | Goal | Process
Formal Technical Review | Yes | Developers, Testers, Designers | Identify defects, improve quality | Structured, Documented
Walkthrough | No | Author, Peers | Discuss code/design, get feedback | Informal, Collaborative
Code Inspection | Yes | Developers, Testers | Identify defects, ensure standards | Formal, Detailed
Compliance with Standards | Yes | Developers, QA | Ensure consistency and quality | Ongoing, Tool-supported

Conclusion
Benefits of Static Testing:

Early Defect Detection: Identifying errors before execution

Cost-Effective: Less expensive than fixing bugs in later stages

Improved Quality: Focuses on correctness, maintainability, and adherence to standards


Team Collaboration: Involves the whole team in the review process, enhancing communication and knowledge sharing

UNIT 05

1. Software as an Evolutionary Entity


Definition:
Software is often treated as an evolutionary entity, meaning it is subject to continuous change and improvement throughout its lifecycle. It is
rare for software to remain static after it is deployed; instead, it evolves due to new requirements, bug fixes, and improvements.

Key Characteristics:

Dynamic Nature: Software is rarely perfect at the time of release, and it must adapt to changing needs, environments, and technologies.

Continuous Modifications: Software often undergoes regular updates, additions, and modifications to meet user needs or correct errors.
Lifecycle: Software lifecycle includes initial development, deployment, and continuous maintenance, making it an ongoing process.

Implication for Development:

Software development teams must plan for future modifications and support.

Proper architecture and design are essential for scalability, maintainability, and ease of evolution.

2. Need for Maintenance


Software maintenance is essential for ensuring that a system continues to function properly and meets evolving user needs after deployment.

Reasons for Maintenance:
1. Bug Fixes: Errors or bugs discovered after deployment must be corrected.
2. Performance Improvements: As user needs grow, software may need optimizations for speed, scalability, or other performance factors.
3. Adaptation to Changes: Software may need to adapt to changes in hardware, operating systems, or other external environments.
4. User Requests: New features or enhancements requested by users or clients.
5. Security: Continuous updates are necessary to patch security vulnerabilities.

3. Categories of Maintenance
Software maintenance can be classified into several categories based on the nature of the changes being made. The main categories are:

3.1 Preventive Maintenance

Definition:
Maintenance activities performed to prevent potential problems or failures before they occur, ensuring the system continues to operate
smoothly.
Key Characteristics:

Involves activities like code refactoring, updating dependencies, and enhancing performance.

Focuses on reducing the risk of future defects or malfunctions.

Aimed at improving system reliability and maintainability.

Examples:

Regularly updating software libraries and components to keep them compatible with newer technologies.

Code refactoring to improve readability and simplify the codebase for easier future modifications.

3.2 Corrective Maintenance


Definition:
Maintenance activities aimed at correcting defects, errors, or bugs that were identified after the system’s deployment.

Key Characteristics:

It addresses functional issues or bugs that affect the performance or operation of the software.

Typically triggered by user complaints or detected during testing after deployment.

Corrective maintenance does not usually introduce new features but fixes existing problems.

Examples:

Fixing a bug in a feature that causes incorrect output.

Patching a security vulnerability that could be exploited by attackers.


3.3 Perfective Maintenance

Definition:
This type of maintenance focuses on enhancing the software's performance, improving features, or refining existing functions according to new
user requirements.

Key Characteristics:

Focuses on improving usability, performance, or adding new features to keep the software competitive.

Changes may be user-driven or based on evolving market conditions or technologies.

Often driven by customer feedback or evolving business needs.

Examples:

Adding a new feature that improves user interaction with the software.
Enhancing software performance by optimizing algorithms or database queries.

4. Cost of Maintenance
Definition:
The cost of maintaining software includes all expenses related to corrective, preventive, and perfective maintenance activities over the lifetime
of the software.

Factors Affecting Cost:
1. Complexity of the Software: More complex systems require more effort and resources to maintain.
2. Size of the System: Larger systems typically have higher maintenance costs.
3. Frequency of Changes: The more frequently the software is updated, the higher the ongoing cost.
4. Quality of the Original Code: Well-structured and documented code is easier to maintain and update.
5. Environment Changes: If the software has to be adapted to new operating systems, hardware, or frameworks, it adds to the cost.

Statistics:

Maintenance can take up a significant portion of the software lifecycle costs. For example, research suggests that maintenance costs can
account for 60-80% of total lifecycle costs.

5. Software Re-Engineering
Definition:
Software re-engineering involves restructuring or rewriting parts of existing software to improve its performance, quality, or maintainability
while retaining its functionality.

Key Activities:
1. Reverse Engineering: Analyzing the current system to understand its components and design.
2. Restructuring: Improving the structure or design of the software, without changing its functionality.
3. Re-documentation: Updating or creating documentation for the system to aid future development or maintenance.
4. Migration: Moving software from one platform or environment to another (e.g., legacy system to modern infrastructure).

Benefits:

Extends the life of legacy systems.

Improves maintainability and performance.

Helps migrate legacy systems to newer, more sustainable platforms.

Example:
Rewriting a legacy system from COBOL to Java or modern web technologies.

6. Reverse Engineering
Definition:
Reverse engineering involves analyzing the software to extract knowledge or design information from the existing code, typically for the
purpose of reengineering or understanding legacy systems.

Key Activities:
1. Code Analysis: Examining the source code to understand its structure, logic, and dependencies.
2. Extracting Design Information: Creating models or documentation that were not originally available.
3. Understanding Legacy Systems: Reverse engineering is often used when no original design documentation is available, helping to rebuild lost knowledge.

Benefits:

Helps to understand and maintain legacy systems when documentation is outdated or unavailable.

Useful for software migration or reengineering.

Example:

Analyzing a legacy banking system built decades ago to extract business rules and system architecture, so it can be migrated to a modern
platform.

1. Software Configuration Management (SCM) Activities


Definition:
Software Configuration Management (SCM) involves managing changes to software systems to ensure that the system remains consistent,
controlled, and traceable throughout the development lifecycle.

Key Activities in SCM:
1. Configuration Identification: Identifying and labeling the components of the software (e.g., source code, documentation) that need to be controlled.
2. Version Control: Managing different versions of software components, ensuring that changes can be tracked and previous versions can be restored if needed.
3. Change Control: Managing changes to the software and ensuring that only authorized changes are made.
4. Configuration Status Accounting: Keeping records of all configurations and changes made to the software system, along with their statuses (approved, pending, etc.).
5. Configuration Audits: Ensuring that the software configuration is consistent with the specifications and requirements.

2. Change Control Process
Definition:
The Change Control Process is a systematic approach to managing changes in the software system, ensuring that no unauthorized changes
are made, and that changes are tracked and documented.

Steps in Change Control Process:
1. Change Request: A formal request to modify the system, usually initiated by users, stakeholders, or team members.
2. Impact Analysis: Analyzing the impact of the proposed change on the project, including its effect on schedule, cost, and other components.
3. Approval/Rejection: Based on the impact analysis, the change request is either approved or rejected by the change control board or project management team.
4. Implementation: Once approved, the change is implemented into the system.
5. Testing: The modified software is tested to ensure that the change has been successfully integrated and that no new issues have been introduced.
6. Documentation and Tracking: All changes are documented, and their status is tracked to ensure proper communication and accountability.

3. Software Version Control


Definition:
Version Control is the management of changes to software code over time. It helps developers track changes, collaborate on projects, and
revert to earlier versions when necessary.

Key Features:
1. Tracking Changes: Keeps track of all modifications made to the code, including additions, deletions, and updates.
2. Branching: Allows developers to work on different features or bug fixes independently by creating branches, which can later be merged.
3. Merging: Combines changes from different branches or versions into a single version of the code.
4. Reverting: Enables reverting to previous versions if a new version causes issues or defects.
5. Collaboration: Facilitates teamwork by allowing multiple developers to work on the same codebase without conflict.

Popular Version Control Systems:

Git (distributed version control system)

SVN (Subversion) (centralized version control system)

Mercurial (distributed version control system)


4. CASE (Computer-Aided Software Engineering) Tools Overview
Definition:
CASE Tools are software applications that assist in software development and maintenance tasks, including modeling, design, testing, and
project management.

Types of CASE Tools:
1. Upper CASE: Focuses on the early stages of software development (e.g., requirements gathering, design, and modeling).
   Examples: Rational Rose, Enterprise Architect.
2. Lower CASE: Focuses on later stages like coding, testing, and maintenance.
   Examples: JUnit, Selenium, Bugzilla.
3. Integrated CASE: Combines both upper and lower CASE tools, offering a comprehensive suite for the entire software development lifecycle.
   Examples: Microsoft Visual Studio, Eclipse IDE.

Benefits:

Increased productivity: Automates repetitive tasks.

Improved quality: Facilitates better design, testing, and documentation.

Enhanced collaboration: Supports team-based development and version control.

5. Estimation of Various Parameters (Cost, Effort, Schedule)


Estimation is an important process in software engineering to predict the resources needed for a project and to plan for its completion.

Cost Estimation:

Involves predicting the overall cost of the software project, including development, testing, and maintenance. It can be calculated using:

Expert Judgment

Analogous Estimating (comparing with similar past projects)

Parametric Estimating (using statistical models)

Effort Estimation:

Determining the amount of developer work required to complete the project. It is often measured in person-months or person-hours.

Schedule/Duration Estimation:
Predicting the timeline for project completion. The schedule takes into account the complexity of tasks, the number of resources, and the work
dependencies.

6. Constructive Cost Model (COCOMO)


Definition:
COCOMO (Constructive Cost Model) is a software cost estimation model developed by Barry Boehm. It uses historical project data and system
characteristics to estimate the effort, cost, and schedule required for a software project.

COCOMO Levels:
1. Basic COCOMO: Provides a rough estimate based on the size of the software (e.g., LOC – Lines of Code).
2. Intermediate COCOMO: Incorporates additional factors such as software reliability, personnel experience, and hardware constraints.
3. Detailed COCOMO: Involves more detailed factors and provides a more accurate estimate based on various project attributes.

COCOMO Formula:

For basic COCOMO:

Effort = a × (KLOC)^b

Where:
a and b are constants determined by the project type (organic, semi-detached, embedded).
KLOC is the number of thousands of lines of code.
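
A minimal sketch of Basic COCOMO in Python, using Boehm's published constants for the three project types:

# Basic COCOMO (a, b) constants by project type.
COCOMO = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc, project_type):
    """Estimated effort in person-months for a project of `kloc` KLOC."""
    a, b = COCOMO[project_type]
    return a * kloc ** b

# e.g. a hypothetical 32-KLOC organic project:
print(round(basic_cocomo_effort(32, "organic"), 1))  # ~91.3 person-months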

7. Resource Allocation Models


Definition:
Resource Allocation Models help in assigning the right resources (e.g., people, hardware, software) to various tasks within the software project
to maximize efficiency and meet deadlines.

Key Resource Allocation Models:
1. Linear Programming: A mathematical method for determining the optimal allocation of resources.
2. Heuristic Methods: Used when it is difficult to find an exact solution. These methods rely on practical experience or rules of thumb.
3. Critical Path Method (CPM): A project management technique used to determine the longest sequence of dependent tasks and allocate resources accordingly.
4. PERT (Program Evaluation and Review Technique): Used to schedule and allocate resources by considering the uncertainty in task durations.

8. Software Risk Analysis and Management


Definition:
Software Risk Analysis and Management involves identifying, assessing, and managing risks that could affect the software project’s success.

Risk Management Process:
1. Risk Identification: Identifying potential risks that could affect the project (e.g., technical challenges, staffing issues).
2. Risk Assessment: Analyzing the likelihood and impact of each identified risk. Risks are categorized based on their severity and probability.
3. Risk Mitigation: Developing strategies to avoid or reduce the impact of risks (e.g., using reliable technologies, training staff).
4. Risk Monitoring: Continuously monitoring risks throughout the project lifecycle and updating mitigation strategies as necessary.

Common Risks in Software Projects:

Schedule risks: Delays in project timelines.

Technical risks: Challenges in implementing certain features or technologies.


Resource risks: Shortage of qualified personnel or hardware issues.
